PRIVACY AT THE COST OF SPEED
The most sophisticated AI tools today process your queries on powerful cloud servers that require an Internet connection. Apple’s iPhone has a fraction of the power of those servers, but to make its AI service private and quick, it will run some AI queries via Siri “on device”, using a small language model Apple built to run on the iPhone itself. No internet connection needed.
Apple Intelligence will also decide, on the fly, if a query like “Will I get to my daughter’s play performance on time?” requires extra computing power. If it does, it’ll access a bigger AI model that Apple made, via something called “Private Cloud Compute”, which is essentially Apple’s own servers.
Anything more complex still will be routed as a query to ChatGPT, via a partnership with OpenAI. Apple, admirably, has gone to great lengths to keep this process private, with query requests being end-to-end encrypted and inaccessible to others.
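The three-tier handoff described above can be pictured as a simple routing decision. The sketch below is purely illustrative: the tier names match the article, but the `complexity` scoring, the thresholds, and the function names are invented assumptions, not Apple's actual implementation.

```python
# Hypothetical sketch of the three-tier routing described in the article.
# The complexity() heuristic and thresholds are illustrative assumptions only.

def complexity(query: str) -> int:
    """Toy stand-in for whatever scoring Apple actually uses; here, word count."""
    return len(query.split())

def route(query: str) -> str:
    """Pick a tier: on-device first, then Apple's servers, then ChatGPT."""
    score = complexity(query)
    if score <= 8:
        # Simple queries stay on the iPhone, no internet needed
        return "on-device model"
    elif score <= 20:
        # Heavier queries go to Apple's own servers
        return "Private Cloud Compute"
    else:
        # The most complex requests are offered to ChatGPT
        return "ChatGPT (via OpenAI partnership)"

print(route("Set a timer for ten minutes"))
print(route("Will I get to my daughter's play performance on time given current traffic"))
```

The key design point the article highlights is that this decision happens "on the fly", per query, so the user never chooses a tier explicitly.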
The price for all of this could be speed.
When Apple answers a query using its smaller on-device AI, it’ll do so with a latency of about 0.6 milliseconds per prompt token, according to Apple’s blog post announcing the features, meaning even a lengthy prompt is processed faster than the blink of an eye.
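To put that per-token figure in context, here is a back-of-envelope calculation. It assumes, as a simplification, that processing time scales linearly with prompt length; only the 0.6 ms/token number comes from Apple's blog post.

```python
# Rough arithmetic on Apple's published on-device figure of
# ~0.6 ms of latency per prompt token (linear scaling assumed).

MS_PER_PROMPT_TOKEN = 0.6

def prompt_latency_ms(num_tokens: int) -> float:
    """Estimated on-device processing time for a prompt, in milliseconds."""
    return num_tokens * MS_PER_PROMPT_TOKEN

# A fairly long 200-token prompt comes out to about 120 ms,
# comfortably under the ~100-400 ms a human eye blink takes.
print(prompt_latency_ms(200))  # 120.0
```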
But Apple didn’t offer corresponding latency times for when the phone has to access its Private Cloud Compute for more complex tasks, and that’s a noteworthy omission. It’ll likely be slower, but by how much? Apple doesn’t say.
As shallow as this sounds, consumers hate having to wait a few extra seconds for things they can do themselves, and if it’s simply quicker to look something up in their calendar or mapping apps, they might decide to avoid using Apple Intelligence altogether.