AI music is getting messy


A version of this article was originally published in Quartz’s AI & Tech newsletter.

Xania Monet just became the first AI-generated artist to appear on a Billboard streaming chart and land a multi-million-dollar recording contract. But most listeners can’t tell she isn’t human: she’s a creation of generative AI. That disconnect is a problem the music industry is scrambling to solve.

Monet’s breakthrough comes as the recording industry, already transformed by two decades of digital disruption, enters its next phase of reinvention. Major labels that once fought against streaming are now rushing to stake their claim to AI turf, negotiating deals that will determine how music is created, who gets paid and what consumers actually know – or care – about what they’re listening to.

The sound of uncertainty

The deal that landed Monet her recording contract came after what Billboard described as “a bidding war,” suggesting that several labels saw commercial potential in an artist who doesn’t exist beyond the code. Her Apple Music profile describes her as “an AI figure presented as a contemporary R&B singer in the highly expressive, church-bred, down-to-earth vein” of established soul and R&B artists.

Behind Monet is Telisha Nikki Jones, a Mississippi poet who writes the lyrics that Monet performs using Suno’s generative AI platform. Monet has released at least 31 songs since the summer, including a 24-track album, “Unfolded,” in August. Her songs “Let Go, Let God” and “How Was I Supposed to Know” charted on Billboard’s Hot Gospel Songs and Hot R&B Songs, respectively, a first for an AI-generated artist.

“AI does not replace the artist,” Romel Murphy, Monet’s manager, told CNN. “It doesn’t diminish creativity or take away from the human experience. It’s a new frontier.”

But that frontier looks different depending on where you stand. Working musicians see their already precarious livelihoods threatened by a flood of AI-generated alternatives. Industry executives see both an opportunity and an existential threat. And the listeners? For the most part, they don’t know what they’re hearing.

A recent study found that listeners correctly identified AI-generated music only 53% of the time, barely better than chance. When presented with stylistically similar human and AI songs, accuracy improved to 66%, but that still means one in three listeners couldn’t tell the difference.

From the courtroom to the boardroom

The speed with which the industry is changing its stance on AI is dizzying. Last year, Universal Music Group, Sony Music and Warner Music Group sued AI music startups Suno and Udio, accusing them of training their models on copyrighted music without permission. Now Universal has settled with Udio, agreeing to launch a subscription service next year that will let fans create remixes and custom tracks using licensed songs.

The terms of the settlement remain confidential, but its structure hints at the industry’s strategy. Artists must opt in to include their music, and all AI-generated content must remain on the Udio platform. Similar deals could follow within weeks. According to the Financial Times, Universal and Warner are in talks with Google, Spotify and various AI startups, including Klay Vision, ElevenLabs and Stability AI. Labels are pushing for a streaming-style payment model in which every use of their music in AI training or generation triggers a micropayment.

The urgency is understandable. Besides Monet, Billboard says at least one new AI artist has appeared on the charts in the past five weeks, raising the potential for chart-topping confusion. Spotify revealed that it deleted 75 million tracks last year to maintain quality, although the company won’t specify how many were AI-generated. Deezer, another streaming platform, reports that up to 70% of streams of AI-generated music on its platform are fraudulent, suggesting the technology is already being weaponized for large-scale streaming fraud.

The human cost

For independent artists and small bands, the implications are stark. Unlike Taylor Swift or Billie Eilish, who wield influence through their labels and enormous fan bases, emerging musicians face an ecosystem in which they compete against infinite variations of themselves.

The lack of transparency around what music AI models are trained on means independent artists could miss out on compensation without ever knowing their work has been used. Industry groups are calling for mandatory labeling of AI-generated content, warning that without safeguards, artificial intelligence risks repeating the streaming era’s pattern, in which tech platforms profit while creators struggle.

Currently, streaming platforms have no legal obligation to identify AI-generated music. Deezer uses detection software to tag AI tracks, but Spotify doesn’t tag them at all, leaving listeners in the dark about what they’re hearing.

The industry’s challenge goes beyond detection or regulation. Music has always been much more than sound waves arranged in pleasing patterns. It’s about human connections, shared experiences, and the stories we tell each other about the songs we love.

As AI-generated artists climb the charts and land recording deals, the question isn’t whether machines can create music that sounds real. They already can.

The question is whether listeners will still care about the difference once they know the truth.
