Waabi CEO Raquel Urtasun on Level 4 Autonomous Trucks


Raquel Urtasun has spent 16 years in autonomous driving, long enough to navigate every glorious metaphorical peak and plunging valley. She has made the trip from early hype and layoffs to “we are this close” certainty, and back again.
The industry is now riding a new wave of optimism and investment, notably at Waabi Innovation Inc., the autonomous trucking company Urtasun founded in 2021. The Spanish-Canadian University of Toronto professor and former chief scientist of Uber’s Advanced Technologies Group has helped establish Waabi as a key player. Since fall 2023, the Toronto-based startup has been running geofenced freight routes between Dallas and Houston with a fleet of retrofitted Peterbilt tractor-trailers, 36,000-kilogram (80,000-pound) behemoths that navigate even residential streets under load with no humans on board.
In October, the company reached an important milestone by integrating its Waabi Driver physical-AI system into Volvo’s new autonomous VNL truck, which the Swedish automaker is building in Virginia. The driving system runs on Nvidia Drive AGX Thor, an AI platform for autonomous and software-defined vehicles.
In January, Waabi raised $750 million in its latest funding round to expand its autonomous driving system into the fiercely competitive robotaxi space. Investors include Khosla Ventures, Nvidia, and Volvo.
Urtasun says the Waabi Driver can handle a full range of vehicles, geographies, and environments, although snowstorms remain a no-go zone for now. It’s powered by what Urtasun calls the industry’s most advanced neural simulator, enabling a “shared brain” that partners can transplant into cars, trucks, and just about anything that rolls. The goal is to grab a share of a global autonomous trucking market that McKinsey projects could be worth more than $600 billion per year by 2035, with autonomous carriers handling 15 percent of total truck miles traveled in the United States by 2030.
Backed by an additional $250 million from Uber, Waabi plans to deploy at least 25,000 robotaxis on Uber’s ride-hailing service, whose global reach spans 70 countries, approximately 15,000 cities, and more than 200 million monthly users.
Urtasun spoke with IEEE Spectrum about how Waabi relies on sensors and simulation to prove safety in the real world, and about why she considers the move to autonomy a moral imperative that outweighs the disruption for human drivers, whether they drive trucks or family sedans. Our conversation has been edited for length and clarity.
IEEE Spectrum: Until recently, autonomous technology seemed to have hit a wall, at least in the public mind. Today, investors are once again flooding the field and companies are going all-in. What happened?
Raquel Urtasun: There were a lot of empty promises, or [people] not realizing the complexity of the problem. The problem turned out to be harder than expected. It’s also due to the type of technology developed at the time, what we call “AV 1.0”: hand-engineered systems that have to be brute-forced by humans. You need a lot of capital and a tremendous number of miles on the road just to get to a first deployment.
What you see with the next generation (AV 2.0, systems capable of reasoning) is that you finally have a scalable solution. When we started the company, it was a very contrarian vision. But today, advances in AI make it clear that this is the next big thing. It’s not just about more compute; it’s about building a brain capable of generalizing. This is the “aha moment” the industry is experiencing right now.
Even for someone who believes in the technology, seeing a driverless semi in your rearview mirror can be unsettling. You have now integrated your technology into Volvo’s aerodynamic, diesel-powered autonomous VNL truck. How do you convince regulators and the public that these trucks belong on the road?
Urtasun: Safety, when you’re talking about hauling 80,000 pounds on a massive rig, is absolutely the priority. We believe the only way to do this safely is with a redundant platform fully developed and validated by the OEM, not through an aftermarket upgrade. The original equipment manufacturer builds a special type of truck with fully redundant steering, power, and braking, so that no matter what happens, there is always a way to safely interface with and control that truck. We are then responsible for the sensors, the compute, and obviously the brain that drives these trucks.
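The redundancy principle Urtasun describes can be sketched in a few lines of code. This is a toy illustration, not Waabi’s or Volvo’s actual architecture; the channel names and fallback logic are invented:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    """One independent actuation channel (steering, power, or braking)."""
    name: str
    healthy: bool

def select_actuator(primary: Channel, backup: Channel) -> Channel:
    """Prefer the primary channel; fail over to the backup on a fault."""
    if primary.healthy:
        return primary
    if backup.healthy:
        return backup
    # With no healthy channel left, the vehicle must come to a safe stop.
    raise RuntimeError("both channels faulted: trigger minimum-risk stop")

# Example: the primary brake controller faults mid-drive.
chosen = select_actuator(Channel("brake_primary", False), Channel("brake_backup", True))
print(chosen.name)  # brake_backup
```

The point of the sketch is that the fallback path exists regardless of which single component fails, which is what “fully redundant” buys you.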
The impact of AI on trucking jobs
One of the main points of contention is the displacement of human drivers. As AI disrupts a whole host of workplaces, how do you respond to people who say it will eliminate good-paying manual jobs?
Urtasun: We believe that everyone who is a truck driver today and wants to retire as a truck driver will be able to do so. This is physical AI; it’s not like the digital world, where all of a sudden you can switch to the technology immediately. Adoption and scaling will take time. Many jobs will also be created by this technology: remote operations, terminal operations, and other things. You have time to change from a form of work that keeps you on the road for weeks at a time (and it’s really hard and dehumanizing work, let’s be honest) into something you can do locally. An interesting [U.S.] Department of Transportation study showed that with this gradual adoption, more jobs will be created than lost.
You have talked about a personal motivation behind this. Why do you believe the benefits of autonomy outweigh the growing pains, including the risk of unexpected accidents or even deaths?
Urtasun: There are 2 million road deaths every year around the world, and no one questions it. It’s the status quo. If you think machines have to be perfect before they’re deployed, you’re sacrificing many humans along the way that you could have saved. Human error is a factor in 90 to 96 percent of accidents. Those accidents are preventable. Some accidents will always be inevitable; a tire can blow out on a machine just as it can on a human-driven truck. But the important comparison is how much safer we are. This technology is the answer to many, many things.
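The back-of-envelope arithmetic behind this argument, using only the figures Urtasun cites, looks like this:

```python
# If human error is a factor in 90 to 96 percent of crashes, the
# potentially addressable share of the ~2 million annual road deaths
# she cites is enormous.
deaths_per_year = 2_000_000
for share in (0.90, 0.96):
    print(f"{share:.0%}: ~{round(deaths_per_year * share):,} deaths involve human error")
# 90%: ~1,800,000 deaths involve human error
# 96%: ~1,920,000 deaths involve human error
```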
Most of the industry focuses on “hub to hub” highway driving. But you have argued that Waabi’s AI can handle the complexity of local streets.
Urtasun: The rest of the industry has opted for an economic model where you need hubs next to the highway. That adds a lot of friction and cost. With our end-to-end verifiable AI system, we can drive on surface [local] streets. We can handle unprotected left turns, traffic lights, and tight turns. These essential capabilities let us reach the end customer. We already haul commercial loads for customers like Samsung through our Uber Freight partnership.
You have mentioned that Waabi doesn’t like to talk about “miles” traveled as a metric. For an engineering audience, that seems counterintuitive. How does your simulation-driven approach replace the need for real-world drive time?
Urtasun: In this industry, miles have been used as the measure of progress. How many miles does Tesla have to drive to see one of these rare situations? But we are a simulation-driven company. Waabi World can simulate all the sensors, human behaviors, everything. It’s the only simulator where you can mathematically prove that testing and driving in simulation is equivalent to driving in the real world. You can expose the system to billions of scenarios in the cloud. That is what allows us to be so capital-efficient and fast.
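The idea of substituting massive simulated exposure for road miles can be pictured with a toy Monte Carlo harness. Everything here (the scenario generator, the failure probability, the function names) is invented for illustration and is not Waabi World’s actual interface:

```python
import random

def simulate_scenario(rng: random.Random) -> bool:
    """Stand-in simulator: True means the (toy) run ended collision-free."""
    # Pretend 1 in 10,000 randomized scenarios surfaces a failure.
    return rng.random() > 1e-4

def estimate_failure_rate(n_runs: int, seed: int = 0) -> float:
    """Run many randomized scenarios and report the observed failure rate."""
    rng = random.Random(seed)
    failures = sum(not simulate_scenario(rng) for _ in range(n_runs))
    return failures / n_runs

# Cloud-scale parallelism is what makes billions of such runs practical;
# here we settle for 100,000 on one machine.
rate = estimate_failure_rate(100_000)
print(f"estimated failure rate: {rate:.2e}")
```

Rare events that a road fleet might encounter once in millions of miles can be generated on demand in such a harness, which is the efficiency argument Urtasun is making.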
Verifiable AI versus black-box systems
What is the difference between your “interpretable” AI and the “black box” systems we see elsewhere?
Urtasun: We have seen passenger cars evolve toward Level 2+ systems with end-to-end black-box architectures. But these are not verifiable. You cannot validate and verify these systems, which is a major problem given that regulators and OEMs need to trust this technology.
What Waabi has built is end-to-end, but fully verifiable. The system is forced to interpret what it perceives and to use those interpretations to reason, so it can understand the consequences of each action. It’s much closer to how our brains actually work: your “type 2” thinking, where you reason about causes, effects, and consequences, and then usually make a much better choice in your maneuver.
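One way to picture a system that must expose its interpretations and justify each maneuver is a decision function that returns a reason alongside every action. The structures below are invented for illustration and are not Waabi’s actual representation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A named intermediate interpretation the system must commit to."""
    kind: str          # e.g. "pedestrian", "vehicle"
    distance_m: float  # range along the planned path

def choose_action(detections: list[Detection]) -> tuple[str, str]:
    """Return (action, reason) so every decision carries its justification."""
    for d in sorted(detections, key=lambda d: d.distance_m):
        if d.kind == "pedestrian" and d.distance_m < 30:
            return "brake", f"pedestrian at {d.distance_m} m ahead"
    return "proceed", "no safety-critical interpretation triggered"

action, reason = choose_action([Detection("vehicle", 80), Detection("pedestrian", 12)])
print(action, "-", reason)
```

Because the intermediate detections and the returned reason are explicit, a validator can audit why the system braked, which is the contrast with a black box that maps pixels straight to controls.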
Tesla relies almost exclusively on camera data to operate and improve its autonomous driving systems. You’re not a fan of that approach?
Urtasun: We use multiple sensors: lidar, camera, and radar. This is very important because the failure modes of these sensors are very different and very complementary. We do not compromise safety to reduce bill-of-materials cost today.
Those [passenger-car] Level 2+ systems are not architected for Level 4, where there is no human on board. People don’t necessarily realize how much higher the bar is when there is no human to rely on. It’s not, “Well, if I don’t have many interventions on the system, I’m almost there.” That is not a metric. We are Level 4 native. We decide in which domains the system can operate and under what conditions. We are building technology capable of driving different form factors (trucks or robotaxis) with the same brain.
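Urtasun’s earlier point about complementary sensor failure modes can be sketched as a simple majority vote across modalities. The two-of-three rule and the scenarios are illustrative assumptions, not Waabi’s fusion logic:

```python
def fuse_obstacle(lidar: bool, camera: bool, radar: bool) -> bool:
    """Declare an obstacle only if at least two independent modalities agree."""
    return lidar + camera + radar >= 2

# Camera blinded by low sun, but lidar and radar still see the object.
print(fuse_obstacle(lidar=True, camera=False, radar=True))   # True
# A single spurious radar return is not enough to trigger hard braking.
print(fuse_obstacle(lidar=False, camera=False, radar=True))  # False
```

Because glare, fog, and radar clutter rarely defeat all three modalities at once, a vote like this degrades gracefully where a camera-only stack fails outright.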


