Exclusive: Adobe’s Corrective AI Can Change the Emotions of a Voice-Over
Adobe’s Oriol Nieto uploaded a short video with a handful of scenes and a voiceover, but no sound effects. The AI model analyzed the video, breaking it down into scenes and applying emotional tags and a description to each one. Then came the sound effects. The model recognized a scene with an alarm clock, for example, and automatically generated a matching sound effect. It identified another scene in which the main character (an octopus, in this case) was driving a car, and added a door-closing sound effect.
It wasn’t perfect. The alarm sound was unrealistic, and in a scene where two characters were hugging, the model added an unnatural rustling of clothing that didn’t work. Instead of editing manually, Nieto used a conversational interface (similar to ChatGPT) to describe the changes he wanted. In the car scene, for instance, there was no ambient sound coming from the car. Rather than selecting the scene by hand, he simply asked the model to add a car sound effect; it found the scene, generated the effect, and placed it perfectly.
These experimental features are not yet available, but Sneaks projects generally do make their way into the Adobe suite. Harmonize, a Photoshop feature that automatically composites elements into a scene with accurate color and lighting, was introduced at Sneaks last year; now it’s in Photoshop. Expect these new features to appear sometime in 2026.
Adobe’s announcement comes just months after video game performers ended a nearly year-long strike to secure protections around AI: companies are now required to obtain consent and provide disclosure when game developers want to recreate an actor’s voice or likeness via AI. Voice actors have been bracing for AI’s impact on the business for some time, and Adobe’s new features, while not generating voiceovers from scratch, are another marker of the change AI is forcing on the creative industry.