The music industry has a fraud problem measured in billions of dollars. And for the first time, a streaming platform is handing its competitors the weapon to fight back.

On Thursday, January 29, 2026, French streaming service Deezer announced it would begin licensing its proprietary AI music detection technology to other platforms, record labels, and distributors. The tool, which the company claims can identify AI-generated tracks with 99.8% accuracy, represents the most aggressive move yet by any streaming service to combat the synthetic music flood that's threatening artist livelihoods across the industry.

The timing isn't coincidental. Deezer now receives approximately 60,000 fully AI-generated tracks every single day, roughly 39% of all daily music deliveries to the platform. That's up from 10% just 12 months ago. The company has identified and tagged more than 13.4 million AI-generated songs on its service, and here's the number that should alarm everyone in the music business: 85% of streams from those fully AI-generated tracks are deemed fraudulent.

"Music generated entirely by AI has become nearly indistinguishable from human creation," said Deezer CEO Alexis Lanternier. "Our approach remains clear: transparency for fans and protection of artists and songwriters."

What makes this announcement significant isn't just the technology itself. It's that Deezer is explicitly positioning its detection tool as shared industry infrastructure rather than competitive advantage. In an era where streaming platforms typically guard their algorithms like state secrets, Deezer is offering rivals the chance to adopt identical protections — and in doing so, challenging them to demonstrate whether they actually prioritize artist welfare over engagement metrics.


The Technology That Can Hear What Humans Cannot

Deezer's AI detection system analyzes audio signals for patterns created by AI music generators like Suno and Udio. The technology examines micro-variations in vocal timbre, breathing patterns, and performance inconsistencies that human singers naturally exhibit but AI systems struggle to replicate convincingly. These artifacts are invisible – or rather, inaudible – to the average listener, but detectable through sophisticated pattern recognition.

The company has trained its system on 94 million songs and filed two patents for the detection methodology in 2024. When a track is flagged as fully AI-generated, Deezer takes three immediate actions: it labels the content for listener transparency, removes it from algorithmic and editorial recommendations, and excludes it from royalty pools. The music isn't banned outright – users can still find AI-generated tracks through direct searches – but the content cannot game discovery systems or siphon money from human creators.
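Those three actions can be sketched in code. This is an illustrative model only – the class and function names below are invented for the example, and Deezer has not published its actual implementation:

```python
# Hypothetical sketch of the three-step handling described above for
# tracks flagged as fully AI-generated. All names are invented for
# illustration; this is not Deezer's real code or API.
from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    ai_generated: bool
    labels: set = field(default_factory=set)
    recommendable: bool = True
    royalty_eligible: bool = True
    searchable: bool = True

def handle_flagged(track: Track) -> Track:
    if track.ai_generated:
        track.labels.add("AI-generated")  # 1. label for listener transparency
        track.recommendable = False       # 2. drop from recommendations
        track.royalty_eligible = False    # 3. exclude from the royalty pool
        # Note what is NOT changed: the track stays searchable,
        # because the music isn't banned outright.
    return track
```

The key design point is that `searchable` is untouched: exclusion happens at the discovery and payout layers, not at the catalog layer.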

Streaming platforms distribute royalties from a finite pool based on market share. Every stream that goes to a fraudulent AI track is money directly taken from a legitimate artist. When bad actors can generate thousands of unique songs at minimal cost and stream each one just enough times to avoid detection, they can extract millions from the system while flying under the radar.

The detection technology represents two years of internal development at Deezer, building on the company's long-running research in audio AI and content identification. Sacem, the French music rights management organization representing over 300,000 creators and publishers, including David Guetta and DJ Snake, has already completed successful tests with the tool and signed on as a licensing partner.

Deezer declined to provide specific pricing information, stating that costs vary based on the type of partnership and technical requirements. The company indicated it's in discussions with other European collective societies and plans to engage with organizations in Los Angeles during Grammy Week.


The $2 Billion Problem Nobody Wants to Solve

To understand why Deezer's move matters, you need to understand the scale of the crisis facing the music industry. Streaming fraud – the practice of artificially inflating play counts to game royalty systems – costs the global music industry an estimated $2 billion annually. That's money extracted from a finite royalty pool, meaning every fraudulent dollar represents real compensation stolen from real artists.

Traditional streaming fraud was relatively straightforward to detect. Bad actors would upload a few songs and have bot farms play them millions of times, creating obvious statistical anomalies. But generative AI has fundamentally transformed the economics of fraud.

The new playbook works like this: generate hundreds of thousands of unique songs using AI tools, upload them under various fake artist profiles, then stream each track just a few thousand times – enough to collect royalties but not enough to trigger fraud detection algorithms. Spread the activity across millions of tracks, and you can extract massive sums while appearing statistically invisible.
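The arithmetic behind this playbook is simple enough to sketch. The catalog size and per-track stream counts below are illustrative, and the per-stream payout rate is an assumed ballpark figure, not a quoted one:

```python
# Back-of-the-envelope economics of "low and slow" AI streaming fraud.
# All figures are illustrative assumptions, not reported numbers.
PER_STREAM_PAYOUT = 0.004   # dollars per stream; rough industry-style assumption
tracks = 100_000            # size of the AI-generated catalog
streams_per_track = 3_000   # low enough to avoid per-track anomaly flags

total_streams = tracks * streams_per_track      # 300,000,000 streams
revenue = total_streams * PER_STREAM_PAYOUT     # roughly $1.2 million

print(f"{total_streams:,} streams -> ${revenue:,.0f}")
```

No single track looks unusual, yet the catalog as a whole extracts over a million dollars from the royalty pool.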

The case of Michael Smith illustrates how lucrative this scheme can be. In September 2024, the North Carolina musician became the subject of what federal prosecutors called "the first criminal case involving artificially inflated music streaming." According to the Department of Justice, Smith allegedly created hundreds of thousands of AI-generated songs and used automated bots to stream them billions of times, extracting more than $10 million in fraudulent royalties between 2017 and 2024.

Smith's operation allegedly peaked at over 10,000 active bot accounts generating approximately 661,440 streams per day. To avoid detection, he distributed streams across a vast catalog of tracks. "We need a TON of songs fast to make this work around the anti-fraud policies these guys are all using now," Smith allegedly wrote in an email to co-conspirators in 2018.
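Taking the reported figures at face value, a quick sanity check (with an assumed average per-stream payout, since no rate appears in the filings quoted here) shows how the numbers hang together:

```python
# Sanity-checking the reported figures: ~10,000 bot accounts and
# ~661,440 streams per day. The payout rate is an assumption for
# illustration only.
accounts = 10_000
streams_per_day = 661_440
per_stream = 0.005  # assumed average payout in dollars

per_account = streams_per_day / accounts          # ~66 streams/account/day
annual = streams_per_day * 365 * per_stream       # ~ $1.2M per year

print(f"streams per account per day: ~{per_account:.0f}")
print(f"implied annual royalties: ~${annual:,.0f}")
```

At roughly 66 streams per account per day, no individual account looks abnormal, yet the implied take of about $1.2 million a year is consistent with the $10 million-plus alleged over the 2017 to 2024 period.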

The AI-generated songs had file names that were randomized strings of letters and numbers. The CEO of the AI music company supplying Smith with tracks acknowledged the nature of what they were creating.

"Keep in mind what we're doing musically here," he wrote in one email included in the indictment. "This is not 'music,' it's 'instant music.'"

Smith has pleaded not guilty to charges of wire fraud conspiracy, wire fraud, and money laundering conspiracy, each carrying a maximum sentence of 20 years in prison.


Why Most Platforms Won't Touch This Problem

If detection technology exists and the fraud problem is costing billions, why haven't other major streaming platforms deployed similar solutions?

The uncomfortable answer is that streaming services have conflicting incentives when it comes to content volume. More songs on a platform means more content for recommendation algorithms to surface, more reasons for users to stay engaged, and more data about listening preferences. Whether those songs were created by humans or machines matters less to engagement metrics than whether users keep listening.

Spotify, the world's largest streaming service, has taken a different approach than Deezer. In September 2025, the company announced policy updates including a new impersonation rule prohibiting unauthorized AI voice clones, a spam filter to catch bad actors gaming recommendations, and a commitment to industry-standard AI disclosures in music credits. The company also revealed it had removed over 75 million "spammy" tracks from its platform in the previous twelve months.

But Spotify's policies don't remove AI-generated music that isn't explicitly impersonating human artists or engaging in obvious spam tactics. The Velvet Sundown – an AI-generated psych-rock "band" that accumulated over 1.4 million monthly listeners in summer 2025 before being exposed – illustrates the gap in this approach. The AI act had a verified artist profile, appeared in users' Discover Weekly playlists, and released multiple albums before admitting in their bio that they were "a synthetic music project guided by human creative direction."

Deezer's detection system flagged The Velvet Sundown's music as 100% AI-generated. Spotify's systems did not automatically identify or label the content.

"We don't prioritize or benefit financially from music created using AI tools," a Spotify spokesperson told press at the time. "All tracks are created, owned, and uploaded by licensed third parties."

The distinction between detecting AI music and actually acting on it reveals the core tension in the industry. Detection is a technical problem. Action is a policy choice that affects engagement metrics, content volume, and relationships with distributors who profit from uploading massive catalogs regardless of how that music was created.


The Divide Between Labels and Platforms

The music industry's response to AI has fractured along fault lines that would have been unimaginable just two years ago.

On one side, you have platforms like Deezer and Bandcamp taking aggressive stances against AI-generated content. Bandcamp announced in January 2026 that it was banning AI-generated music entirely: any audio "generated wholly or in substantial part by AI" is no longer permitted on the platform. The company framed the decision as protecting its community of independent artists and maintaining authenticity for fans.

"We believe that the human connection found through music is a vital part of our society and culture," Bandcamp wrote in its announcement. "Today we are fortifying our mission by articulating our policy on generative AI, so that musicians can keep making music, and so that fans have confidence that the music they find on Bandcamp was created by humans."

iHeartRadio took a similar position, announcing in late 2025 that it would no longer play AI music featuring synthetic vocalists pretending to be human. The company has since removed AI artists like Xania Monet — who made headlines as the first significantly AI-assisted talent to sign a multi-million dollar record contract — from its radio stations nationwide.

On the other side, major record labels appear to be embracing AI rather than fighting it. Universal Music Group settled its copyright lawsuit against AI music platform Udio in October 2025, signing a deal to develop a new AI music platform trained on licensed content. Warner Music Group followed with settlements against both Udio and Suno in November 2025, striking what the company called "landmark" partnerships for "next-generation licensed AI music."

"This landmark pact with Suno is a victory for the creative community that benefits everyone," said WMG CEO Robert Kyncl. "With Suno rapidly scaling, both in users and monetization, we've seized this opportunity to shape models that expand revenue and deliver new fan experiences."

The settlements mark a dramatic reversal from the lawsuits filed in 2024, when all three major labels sued Suno and Udio for "mass infringement" of copyright, alleging the AI platforms had trained on millions of unlicensed recordings. The pivot to licensing deals suggests labels see more profit potential in embracing AI than fighting it — a calculation that may not align with the interests of the artists signed to their rosters.

"We've seen this before — everyone talks about 'partnership,' but artists end up on the sidelines with scraps," said Irving Azoff of the Music Artists Coalition following the Universal-Udio settlement. "Artists must have creative control, fair compensation, and clarity about deals being done based on their catalogs."


What Detection Actually Means for Artists

For the hundreds of thousands of musicians trying to build sustainable careers on streaming platforms, AI detection technology represents more than a technical achievement. It represents a potential reversal of a trend that has steadily eroded their earning power.

Consider the math. Streaming services distribute royalties based on market share: if your music generates 1% of all streams on a platform, you receive roughly 1% of the royalty pool. Every AI-generated track that accumulates streams reduces everyone else's slice of that pie. When fraud accounts for an estimated 10% of all streaming activity, the impact on legitimate artists is devastating.
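That pro-rata dilution is easy to demonstrate with a toy model. All figures below are hypothetical, chosen only to show the mechanism:

```python
# Pro-rata royalty model: an artist's payout is their share of total
# streams times a fixed royalty pool. All numbers are hypothetical.
POOL = 1_000_000.0            # monthly royalty pool in dollars
TOTAL_STREAMS = 500_000_000   # all streams on the platform

def payout(artist_streams: int, total_streams: int, pool: float) -> float:
    """Artist's pro-rata share of the royalty pool."""
    return pool * artist_streams / total_streams

honest = payout(5_000_000, TOTAL_STREAMS, POOL)
# If fraudulent AI streams inflate the denominator by 10%, the same
# artist earns less from the same pool:
diluted = payout(5_000_000, int(TOTAL_STREAMS * 1.10), POOL)

print(f"without fraud: ${honest:,.2f}")    # $10,000.00
print(f"with 10% fraud: ${diluted:,.2f}")  # $9,090.91
```

The pool doesn't grow when fraudulent streams are added; the fraud simply reroutes a slice of it away from every legitimate artist at once.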

A study by CISAC and PMP Strategy projects that nearly 25% of creators' revenues could be at risk by 2028 due to AI's impact on the music ecosystem — approximately €4 billion annually. That projection assumes current trends continue without significant intervention.

Deezer's decision to make its detection technology available to the broader industry could shift those projections if enough platforms adopt it. By removing AI-generated tracks from royalty pools entirely, platforms using the technology would effectively redirect fraudulent revenue back to human creators. The company estimates it has successfully identified and removed up to 85% of fraudulent AI-generated music streams from its royalty pool in 2025.

The technology also addresses a transparency problem that has plagued the streaming era. Most listeners have no idea whether the music surfacing in their algorithmic playlists was created by humans or machines. Deezer's labeling system makes that information visible, allowing users to make informed choices about what they listen to. Given that 97% of people cannot distinguish AI-generated music from human-made tracks, according to a study by Deezer and Ipsos, that transparency may be the only way listeners can know what they're actually consuming.


The Bigger Picture

Beyond the immediate questions of fraud and royalties, Deezer's technology forces a philosophical reckoning that the music industry has been avoiding: what role should AI-generated content play in the streaming ecosystem?

Some argue that the answer should be none. Ed Newton-Rex, founder of Fairly Trained and the organizer behind the Statement on AI Training signed by over 30,000 creators, sees AI music platforms as fundamentally exploitative. "This is exactly what artists have been worried about," he said of AI bands accumulating millions of streams. "It's theft dressed up as competition."

Deezer became the first music streaming platform to sign that statement in October 2024, joining actors Kevin Bacon, Kate McKinnon, Kit Harington, and Rosie O'Donnell, as well as authors Kazuo Ishiguro and James Patterson, and musicians including ABBA's Björn Ulvaeus and Radiohead's Thom Yorke. The statement's core message is unambiguous: "The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted."

Others argue that AI tools are simply the latest in a long line of technologies that have transformed music creation, and that attempting to ban or restrict their use is both futile and counterproductive. Mikey Shulman, CEO of Suno, has pushed back against the idea that AI music threatens human artists. "People don't realize just how depersonalized music has become and how little connection the average person has with the artist behind the music," he told NBC News. "It's a failure of imagination to think that in the future, it can't be a lot better."

The tension between these positions may ultimately be resolved not by philosophical argument but by market forces. If platforms that detect and restrict AI-generated content attract more artists and listeners than platforms that don't, the industry will follow the money. If listeners genuinely don't care whether their music was made by humans or machines, detection technology may prove irrelevant.

What Deezer has done is force that experiment to happen. By making its detection tools available to competitors, the company is essentially daring other platforms to demonstrate where they stand.


What Happens Next

The next few months will reveal whether Deezer's gambit succeeds in catalyzing industry-wide adoption of AI detection, or whether the technology remains an outlier approach that most platforms ignore.

Several factors will influence that outcome.

  • Artist pressure. Musicians and songwriters increasingly understand that AI-generated content threatens their livelihoods, and they have platforms – social media, industry organizations, and direct relationships with fans – to demand action from streaming services. If enough artists publicly call for detection technology adoption, platforms may feel compelled to respond.
  • Regulatory attention. The music industry's AI challenges haven't escaped the notice of lawmakers. The European Union's AI Act includes provisions that could affect how platforms handle AI-generated content, and several U.S. states are considering legislation around synthetic media disclosure. Platforms that proactively adopt detection technology may find themselves better positioned for regulatory compliance than competitors who resist.
  • Competitive dynamics. If artists begin preferentially releasing music to platforms that protect them from AI competition, or if listeners migrate toward services that guarantee human-created content, the business case for detection technology becomes compelling regardless of philosophical positions.

For now, Deezer has taken the most aggressive public stance of any major streaming platform, backing its rhetoric with technology and making that technology available to the broader industry. The company isn't just fighting AI-generated music on its own service – it's attempting to establish an industry-wide standard for how streaming platforms should handle synthetic content.

Whether that standard takes hold will depend on choices made by competitors, regulators, artists, and listeners over the coming months. But the tools now exist for any platform that wants to use them. The question is which ones will.


Frequently Asked Questions

What is Deezer's AI music detection tool and how does it work?

Deezer's AI detection tool is a proprietary technology that analyzes audio signals to identify music created entirely by AI generators like Suno and Udio. The system examines micro-variations in vocal patterns, breathing, and performance characteristics that human singers naturally exhibit but AI systems struggle to replicate. Once a track is flagged as AI-generated, Deezer labels it for listener transparency, removes it from algorithmic recommendations, and excludes it from royalty pools. The company claims 99.8% accuracy and has filed two patents for the detection methodology.

Why is Deezer making its AI detection tool available to other platforms?

Deezer is positioning the technology as shared industry infrastructure rather than competitive advantage. The company aims to promote transparency across the streaming ecosystem and reduce incentives for AI-music fraud. By allowing rivals to adopt identical protections, Deezer is essentially challenging other platforms to demonstrate whether they prioritize artist welfare over content volume and engagement metrics. CEO Alexis Lanternier stated the goal is "broad adoption rather than competitive advantage."

How much does streaming fraud cost the music industry?

According to data tracking firm Beatdapp Software, streaming fraud costs the global music industry approximately $2 billion annually. This represents roughly 10% of all streaming activity being fraudulent. The money is siphoned from a finite royalty pool, meaning every dollar going to fraudulent streams is directly taken from legitimate artists who earned those streams authentically.

What percentage of music uploads are AI-generated?

As of January 2026, Deezer reports receiving approximately 60,000 fully AI-generated tracks per day, representing about 39% of all daily music deliveries to the platform. This is a dramatic increase from 10% just twelve months earlier. The company has identified and tagged more than 13.4 million AI-generated songs total, and estimates that 85% of streams from these tracks are fraudulent.

How does AI enable new types of streaming fraud?

Traditional streaming fraud involved uploading a few songs and using bot farms to play them millions of times, creating obvious statistical anomalies. AI has transformed this by allowing fraudsters to generate hundreds of thousands of unique songs cheaply, then stream each track just a few thousand times — enough to collect royalties but not enough to trigger detection algorithms. This "low and slow" approach distributes fraudulent activity across millions of tracks, making it appear statistically invisible.

What happened in the Michael Smith AI streaming fraud case?

In September 2024, North Carolina musician Michael Smith was charged by the Department of Justice in what prosecutors called "the first criminal case involving artificially inflated music streaming." Smith allegedly created hundreds of thousands of AI-generated songs and used automated bots to stream them billions of times, extracting over $10 million in fraudulent royalties between 2017 and 2024. At peak operation, he allegedly used over 10,000 bot accounts generating approximately 661,440 streams per day. Smith has pleaded not guilty to charges of wire fraud conspiracy, wire fraud, and money laundering conspiracy.

How is Deezer's approach different from Spotify's AI policies?

Deezer actively detects, labels, and excludes AI-generated music from recommendations and royalty pools regardless of whether the content is engaging in obvious fraud. Spotify's September 2025 policy updates focus on prohibiting unauthorized voice clones, filtering spam, and implementing disclosure standards — but don't remove AI-generated music that isn't explicitly impersonating artists or gaming systems. Spotify removed 75 million "spammy" tracks in the past year but doesn't systematically identify or restrict all AI-generated content the way Deezer does.

What is Bandcamp's position on AI-generated music?

Bandcamp announced in January 2026 that it was banning AI-generated music entirely. Any audio "generated wholly or in substantial part by AI" is no longer permitted on the platform. The company also prohibits using AI tools to impersonate other artists or styles. Bandcamp framed the decision as protecting its community of independent artists and ensuring fans have confidence that music on the platform was created by humans.

Why are major record labels signing deals with AI music companies?

Universal Music Group settled with AI platform Udio in October 2025, and Warner Music Group settled with both Udio and Suno in November 2025, striking licensing partnerships for AI music platforms trained on their catalogs. The labels appear to see more profit potential in licensing their content for AI training and receiving compensation than in continuing expensive litigation. Critics argue these deals may benefit labels financially while leaving individual artists without meaningful control or compensation.

Can listeners tell the difference between AI and human-made music?

According to a study by Deezer and Ipsos, 97% of people cannot reliably distinguish AI-generated music from human-made tracks. This makes detection technology and platform labeling crucial for transparency, as listeners have no natural ability to identify synthetic content without being told.

What is the Statement on AI Training that Deezer signed?

The Statement on AI Training is a declaration organized by Ed Newton-Rex of Fairly Trained, stating that "the unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted." Over 30,000 creators have signed, including actors Kevin Bacon and Kate McKinnon, musicians from ABBA and Radiohead, and authors like Kazuo Ishiguro and James Patterson. Deezer became the first music streaming platform to sign in October 2024.

