Ford’s AI voice assistant is coming later this year, L3 driving in 2028

Ford’s new AI-powered voice assistant will roll out to customers later this year, the company’s top software executive said today at CES. And in 2028, the automaker will introduce Level 3 hands-free, eyes-free autonomous driving as part of its more affordable (and hopefully more cost-effective) Universal Electric Vehicle (UEV) platform, scheduled to launch in 2027.
More importantly, Ford said it would develop much of the core technology behind these products in-house to reduce costs and maintain greater control. To be clear, the company is not building its own large language models or designing its own silicon, as Tesla and Rivian do. Instead, it will build its own electronic and computing modules, smaller and more efficient than the systems currently in its vehicles.
“By designing our own software and hardware in-house, we found a way to make this technology more affordable,” Doug Field, Ford’s director of electric vehicles and software, wrote in a blog post. “This means we can bring advanced hands-free driving into the vehicles people actually buy, not just unattainably priced ones.”
The news comes as Ford faces growing pressure to roll out more affordable electric vehicles after its big bet on electric versions of the Mustang and the F-150 pickup failed to excite customers or turn a profit. The company recently canceled the F-150 Lightning amid slowing EV sales and said it would build more hybrid vehicles, as well as battery storage systems, to meet growing demand from AI data center construction. Ford also recalibrated its AI strategy after shutting down its Argo AI autonomous vehicle program in 2022, shifting from Level 4 fully autonomous vehicles to Level 2 and Level 3 conditional driver assistance features.
Amid all this, the company is trying to find a happy medium when it comes to AI: not going all-in on a robot army like Tesla and Hyundai, while still committing to some AI-based products, like voice assistants and automated driving features.
Ford said its AI assistant will launch in the Ford and Lincoln mobile apps in 2026 before expanding to the in-vehicle experience in 2027. Picture a Ford owner at a store, unsure how many bags of mulch will fit in the bed of their truck. The owner could take a photo of the mulch and ask the assistant, which could respond with a more precise answer than, say, ChatGPT or Google’s Gemini, because it has access to all the information about the owner’s vehicle, including the truck’s bed size and trim level.
At a recent technology conference, Sherry House, Ford’s chief financial officer, said Ford would integrate Google’s Gemini into its vehicles. That said, the automaker is designing its assistant to be chatbot-agnostic, meaning it will work with a variety of different LLMs.
“The bottom line is we take that LLM, and then we give it access to all the relevant Ford systems so that LLM knows what specific vehicle you’re using,” Sammy Omari, Ford’s head of ADAS and infotainment, told me.
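What Omari describes is essentially a context-injection pattern: the assistant assembles the owner's vehicle data and hands it to whichever LLM is plugged in, which is what makes the design "chatbot-agnostic." Ford has not published its API, so the following is only an illustrative sketch with hypothetical names:

```python
# Illustrative sketch of a chatbot-agnostic assistant. All names here are
# hypothetical; Ford has not published its actual implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VehicleProfile:
    model: str
    trim: str
    bed_length_in: float  # truck bed length, in inches

def build_context(profile: VehicleProfile, question: str) -> str:
    # Prepend the owner's vehicle data to the question so any LLM
    # backend can produce a vehicle-specific answer.
    return (
        f"Vehicle: {profile.model} {profile.trim}, "
        f"bed length {profile.bed_length_in} in.\n"
        f"Owner question: {question}"
    )

def ask(llm: Callable[[str], str], profile: VehicleProfile, question: str) -> str:
    # The LLM is just a function parameter: swap in Gemini, ChatGPT,
    # or any other model behind the same text-in, text-out interface.
    return llm(build_context(profile, question))

# A stand-in "LLM" so the sketch runs without any external service.
def echo_llm(prompt: str) -> str:
    return f"[model saw {len(prompt)} chars of context] OK"

truck = VehicleProfile("F-150", "XLT", 78.9)
print(ask(echo_llm, truck, "How many bags of mulch fit in my bed?"))
```

The point is that the vehicle-specific knowledge lives in the context layer, not in the model, so the underlying LLM remains interchangeable.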
Autonomous driving features will come later, with the launch of Ford’s universal EV platform. Ford’s flagship product at the moment is BlueCruise, its Level 2 hands-free driver assistance feature that works only on certain highways. Ford plans to roll out a point-to-point hands-free system that can recognize traffic lights and navigate intersections. And then, eventually, it will launch a Level 3 system, in which the driver must always be able to take control of the vehicle on request but can take their eyes off the road in certain situations. (Some experts have argued that L3 systems can be dangerous, given the need for drivers to remain attentive even as the vehicle performs most driving tasks.)
Omari explained that by rigorously examining every sensor, software component, and computing unit, the team arrived at a system that costs roughly 30 percent less than the current hands-free system while offering significantly greater capability.
All of this will depend on a “radical overhaul” of Ford’s IT architecture, Field said in the blog post. This means a more unified “brain” capable of processing infotainment, ADAS, voice commands and much more.
For nearly a decade, Ford has been building a team with the expertise to lead these projects. The former Argo AI team, initially focused on Level 4 robotaxi development, was brought back into the mothership for its expertise in machine learning, robotics, and software. And a team of BlackBerry engineers, first hired in 2017, is now working to build the next-generation electronics modules that will enable some of these innovations, Paul Costa, Ford’s executive director of electronics platforms, told me.
But Ford doesn’t want to enter “a TOPS arms race,” Costa added, referring to the metric that measures AI processor throughput in trillions of operations per second. Other companies, like Tesla and Rivian, have touted the processing speed of their AI chips to prove the power of their automated driving systems. Ford is not interested in playing this game.
Rather than optimizing for performance alone, the team sought a balance between performance, cost, and size. The result is a computing module that is significantly more powerful, cheaper, and 44 percent smaller than the system it replaces.
“We’re not just picking one area to optimize at the expense of everything else here,” Costa said. “We were actually able to optimize every aspect, and that’s why we’re so excited about it.”



