Visual Intelligence could be Apple’s killer AI wearable feature


Mark Gurman’s latest Power On newsletter contains several interesting tidbits about upcoming Apple products, but perhaps the most fascinating concerns Apple’s plans for future AI-powered wearable devices.
We’ve heard about these before: Apple is working on smart glasses (similar to Meta’s Ray-Bans), AirPods Pro with cameras, and some sort of pin or pendant. All are at different stages of development, and all will apparently rely heavily on visual intelligence.
Visual intelligence is Apple’s brand name for applying AI to what your device’s camera sees. It launched with the iPhone 16 Pro and later came to other devices with expanded capabilities. You can take a photo of something around you to get contextual information about it, or do the same with a screenshot.
You can also ask ChatGPT about what you’re looking at, and the system adapts the options it offers to the context. Point it at an event poster with dates and times, and you can add the event straight to your calendar. If it’s a restaurant, you can check reviews, hours, or the menu. You can identify plants and animals, or run a Google image search to find similar items online.
Apparently, Tim Cook sees this area of AI as central to Apple’s upcoming devices. Apple builds its own visual models and intends to make this technology – contextual knowledge based on what the AI “sees” – a central pillar of future products.
For example, you could simply look at a plate of food to get ingredients, portion sizes, or nutritional details. Turn-by-turn directions could use visual cues instead of just street names and distances. Reminders could be triggered by walking up to something and seeing it, not just by times and locations.
Cook has singled out this feature in his recent appearances. He praised it during the company’s latest earnings call and during an all-hands meeting where he discussed the company’s AI ambitions. It’s a little strange that he talks about it so consistently when it’s not entirely new and hasn’t changed much in the past year or so. Clearly, the technology is on his mind, probably because he’s focused on the company’s upcoming products.
Obviously, privacy is central to AI that processes what it sees around you. And in this area, Apple has an advantage: powerful neural processors in hundreds of millions of devices enable more on-device processing than most competitors can offer, and the company’s Private Cloud Compute architecture ensures that anything processed in the cloud also protects your privacy by design.



