Siri expected to gain these 7 new features when Google Gemini takes over


A few days ago, Apple and Google announced that the next Siri redesign will be based on Google’s Gemini AI platform. Apple has struggled to develop its own AI technology, so the move looks like a smart shortcut to “innovative new experiences” for its users.
At the time of the announcement, the companies spoke only in general terms about the nature of the partnership, but a new report from The Information, a generally reliable outlet, reveals seven new features expected to come to Siri as a result of Google’s involvement, plus a few more details that might reassure Apple fans worried about their products being Googled.
Basing its predictions on accounts from a “person who has been involved in the project” and a (presumably separate) “person familiar with the partnership talks,” The Information this week published a detailed breakdown (subscription required) of how the deal will work. Broadly, it emphasizes continuity: Siri and Apple’s products in general will not simply look and behave like Google Gemini. Apple will be able to fine-tune Gemini to work the way it wants, or ask Google to make changes. Current prototypes don’t even feature any Google branding, although it’s unclear whether Google will be happy for that to remain the case once the project goes public.
Likewise, sources are optimistic on the privacy front. “To meet Apple’s commitment to privacy,” they explain, “Gemini-based AI will run directly on Apple devices or on its private cloud system…rather than on Google servers.”
So far, everything is promising. But the key is what Gemini can offer Apple that Siri can’t already achieve. The following new features and improvements are all on the way, according to The Information’s sources:
- Answer “factual questions” more accurately and conversationally, citing sources
- Tell stories
- Provide in-depth, conversational emotional support, “for example when a customer tells the voice assistant that they are feeling alone or discouraged”
- Carry out agent tasks such as booking travel
- Handle other types of tasks, “like creating a Notes document with a cooking recipe or information on the main causes of drug addiction”
- Remember past conversations and use them as context to interpret new commands more accurately
- Offer proactive suggestions, like leaving the house early to avoid traffic
Not all of these features will arrive at once, The Information says. Some are expected to launch in the spring, likely with iOS 26.4, while others (especially the last two items on the list above) won’t be announced until WWDC in June.
Considering how long we’ve already waited for Siri 2.0 to launch, a further wait of two to five months for a batch of new features is probably better than most Apple fans expected. Watch this space for more updates as the release approaches.