Car Microphone Enhances Autonomous Vehicle Safety

Autonomous vehicles have eyes—cameras, lidar, radar. But ears? That’s what researchers at Fraunhofer Institute for Digital Media Technology’s Oldenburg Branch for Hearing, Speech and Audio Technology in Germany are building with the Hearing Car. The idea is to outfit vehicles with external microphones and AI to detect, localize, and classify environmental sounds, with the goal of helping cars react to hazards they can’t see. For now, that means approaching emergency vehicles—and eventually pedestrians, a punctured tire, or failing brakes.

“It’s about giving the car another sense, so it can understand the acoustic world around it,” says Moritz Brandes, a project manager for the Hearing Car.

In March 2025, Fraunhofer IDMT researchers drove a prototype Hearing Car 1,500 kilometers from Oldenburg to a proving ground in northern Sweden. Brandes says the trip tested the system in dirt, snow, slush, road salt, and freezing temperatures.

How to Build a Car That Listens

The team had a few key questions to answer: What if the microphone housings get dirty or frosted over? How does that affect localization and classification? Testing showed that performance degraded less than expected, and that the modules worked normally once cleaned and dried. The team also confirmed the microphones can survive a car wash.

Each external microphone module (EMM) contains three microphones in a 15-centimeter-wide package. Mounted at the rear of the car—where wind noise is lowest—they capture sound, digitize it, convert it into spectrograms, and pass it to a region-based convolutional neural network (RCNN) trained for audio event detection.
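The front end Brandes describes (capture, digitize, spectrogram, neural network) can be sketched generically. The Python snippet below is an illustration only: the sample rate, STFT parameters, and function names are assumptions, and the detector is a placeholder for the trained RCNN, which Fraunhofer has not published.

```python
import numpy as np
from scipy import signal

SAMPLE_RATE = 16_000  # Hz; assumed capture rate for this illustration

def to_spectrogram(pcm: np.ndarray) -> np.ndarray:
    """Convert a mono PCM buffer into a log-magnitude spectrogram."""
    _freqs, _times, stft = signal.stft(pcm, fs=SAMPLE_RATE,
                                       nperseg=512, noverlap=384)
    return np.log1p(np.abs(stft))  # shape: (freq_bins, time_frames)

def detect_events(spectrogram: np.ndarray):
    """Placeholder for the trained detector (an RCNN, per Fraunhofer).
    A real system would return labels with scores, e.g. [("siren", 0.93)]."""
    raise NotImplementedError("stand-in for the trained network")
```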

If the RCNN classifies an audio signal as a siren, the result is cross-checked with the vehicle’s cameras: Is there a blue flashing light in view? Combining “senses” like this boosts the vehicle’s reliability by lowering the odds of false positives. Audio signals are localized through beamforming, though Fraunhofer declined to provide specifics on the technique.
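Fraunhofer hasn't published its fusion logic, but the cross-check it describes (promote an acoustic siren detection only if the camera also reports a blue flashing light) might look something like the following sketch. All field names and thresholds here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AudioDetection:
    label: str          # e.g., "siren"
    confidence: float   # classifier score in [0, 1]
    bearing_deg: float  # direction estimated by beamforming

@dataclass
class CameraObservation:
    blue_flash_visible: bool
    bearing_deg: float  # direction of the detected light

def confirm_siren(audio: AudioDetection, cam: CameraObservation,
                  min_conf: float = 0.8,
                  max_bearing_err_deg: float = 20.0) -> bool:
    """Fuse the two 'senses': both must agree before raising an alert."""
    if audio.label != "siren" or audio.confidence < min_conf:
        return False
    bearings_agree = abs(audio.bearing_deg - cam.bearing_deg) <= max_bearing_err_deg
    return cam.blue_flash_visible and bearings_agree
```

Requiring agreement between independent sensors is what drives down the false-positive rate: a siren-like sound without a corresponding light, or vice versa, never triggers an alert on its own.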

All processing happens onboard to minimize latency. That also “eliminates concerns about what would happen in an area with poor Internet connectivity or a lot of interference from [radio-frequency] noise,” Brandes says. The workload, he adds, can be handled by a modern Raspberry Pi.

According to Brandes, early benchmarks for the Hearing Car system include detecting sirens up to 400 meters away in quiet, low-speed conditions. That figure, he says, shrinks to under 100 meters at highway speeds due to wind and road noise. Alerts are triggered in about 2 seconds—enough time for drivers or autonomous systems to react.

A tablet-size display indicates weather conditions (sunny, rain, or snow), road wetness (dry, slightly moist, or wet), and whether a siren has been detected. The display doubles as a control panel and dashboard, letting the driver activate the vehicle’s “hearing.” Fraunhofer IDMT

The History of Listening Cars

The Hearing Car’s roots stretch back more than a decade. “We’ve been working on making cars hear since 2014,” says Brandes. Early experiments were modest: detecting a nail in a tire by its rhythmic tapping on the pavement or opening the trunk via voice command.

A few years later, support from a tier 1 supplier (a company that provides complete systems or major components, such as transmissions, braking systems, batteries, or advanced driver-assistance systems, directly to automobile manufacturers) pushed the work into automotive-grade development, and a major automaker soon joined the effort. With EV adoption rising, automakers began to see why ears mattered as much as eyes.

Brandes recalls one telling moment: sitting on a test track inside an electric vehicle that was well insulated against road noise, he failed to hear an emergency siren until the vehicle was nearly upon him. “That was a big ‘ah-ha!’ moment that showed how important the Hearing Car would become as EV adoption increased,” he says.

Eoin King, a mechanical engineering professor at the University of Galway in Ireland, sees the leap from physics to AI as transformative.

“My team took a very physics-based approach,” he says, recalling his 2020 work in this research area at the University of Hartford in Connecticut. “We looked at direction of arrival—measuring delays between microphones to triangulate where a sound is. That demonstrated feasibility. But today, AI can take this much further. Machine listening is really the game changer.”
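The delay-based approach King describes is a textbook technique. As an illustration (not King’s or Fraunhofer’s actual code), the sketch below estimates the time difference of arrival between two microphones using generalized cross-correlation with phase transform (GCC-PHAT), then converts the delay into a far-field arrival angle; microphone spacing and sample rate are supplied by the caller.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def gcc_phat_delay(sig: np.ndarray, ref: np.ndarray, fs: int) -> float:
    """Return the delay (seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12            # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # center zero lag
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def arrival_angle(delay_s: float, mic_spacing_m: float) -> float:
    """Far-field angle of arrival (degrees) for a two-microphone pair."""
    ratio = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))
```

With an array as compact as the EMM’s 15 centimeters, the maximum inter-microphone delay is only about 440 microseconds, so fine angular resolution demands a high sample rate or interpolation around the correlation peak.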

Physics still matters, King adds: “It’s almost like physics-informed AI. The traditional approaches show what’s possible. Now, machine learning systems can generalize far better across environments.”

The Future of Audio in Autonomous Vehicles

Despite progress, King, who directs the Galway Sound Lab’s research in acoustics, noise, and vibration, is cautious.

“In five years, I see it being niche,” he says. “It takes time for technologies to become standard. Lane-departure warnings were niche once too—but now they’re everywhere. Hearing technology will get there, but step by step.” Near-term deployment will likely appear in premium vehicles or autonomous fleets, with mass adoption further off.

King doesn’t mince words about why audio perception matters: Autonomous vehicles must coexist with humans. “A human hears a siren and reacts—even before seeing where the sound is coming from. An autonomous vehicle has to do the same if it’s going to coexist with us safely,” he says.

King’s vision is vehicles with multisensory awareness—cameras and lidar for sight, microphones for hearing, perhaps even vibration sensors for road-surface monitoring. “Smell,” he jokes, “might be a step too far.”

Fraunhofer’s Swedish road test showed that durability is not a big hurdle. King points to another area of concern: false alarms.

“If you train a car to stop when it hears someone yelling ‘help,’ what happens when kids do it as a prank?” he asks. “We have to test these systems thoroughly before putting them on the road. This isn’t consumer electronics, where, if ChatGPT gives you the wrong answer, you can just rephrase the question—people’s lives are at stake.”

Cost is less of an issue: microphones are cheap and rugged. The real challenge is ensuring algorithms can make sense of noisy city soundscapes filled with horns, garbage trucks, and construction.

Fraunhofer is now refining algorithms with broader datasets, including sirens from the United States, Germany, and Denmark. Meanwhile, King’s lab is improving sound detection in indoor contexts, which could be repurposed for cars.

Some scenarios—like a Hearing Car detecting a red-light-runner’s engine revving before it’s visible—may be many years away, but King insists the principle holds: “With the right data, in theory it’s possible. The challenge is getting that data and training for it.”

Both Brandes and King agree no single sense is enough. Cameras, radar, lidar—and now microphones—must work together. “Autonomous vehicles that rely only on vision are limited to line of sight,” King says. “Adding acoustics adds another degree of safety.”
