The volume hit 11 in April, says ‘Variety’, when ‘Heart On My Sleeve’, a song with AI-generated vocals by a fake Drake and a fake Weeknd, racked up millions of streams before being removed by the streaming services.
And then electronic artiste Grimes not only promised a 50-50 royalty split to anyone who wants to use her AI voice on a song, but also launched software called Elf.Tech to help them do it.
Artificial intelligence by way of machine learning, notes ‘Variety’, is the latest existential threat to the music business. But unlike in the frequently cited precedent of Napster-era piracy, which opened the door to illegal downloads, this time the industry has mobilised quickly, responding with takedown orders, petitions and op-eds.
There’s also the Human Artistry Campaign, an initiative established to set fair practices in AI, not just in music but in other arts and even sports; Human Artistry’s dozens of members range from the Recording Academy to the Graphic Artists Guild, adds ‘Variety’.
The questions around AI and creators’ rights are so head-spinning, says ‘Variety’, that it’s hard to know where to begin: If David Guetta uses ChatGPT to create a fake Eminem verse for a song, who gets paid? Should it be Eminem, or could it fall under fair use or even parody, which is protected by the First Amendment to the U.S. Constitution?
Should it be the engineers of ChatGPT? Or, since the machine did not create the verse entirely by itself, should payment go to the owners of the music the technology was trained on, the material that enabled it to create the fake Eminem’s rhymes? That’s just one example.
An industry that saw its value cut in half by the rise of illegal downloads two decades ago is determined not to let the same thing happen again, ‘Variety’ reports. Instead, it wants to harness the upside that AI can deliver while protecting the business from costly consequences.
Music professionals and trade group executives who monitor AI’s progress believe that the industry is far better prepared to deal with the technology’s potential challenges than it was to combat the wave of peer-to-peer file sharing that followed Napster’s 1999 launch.
‘Obviously, ChatGPT made a lot of people realise how close the next stage of AI is,’ says Tatiana Cirisano, an analyst for UK-based Midia Research, quoted by ‘Variety’. ‘But it’s not as if we haven’t been living with AI in our daily lives for years, and even in music-making. It’s been a steady progression.’
Jacqueline Sabec, a partner at King, Holmes, Paterno & Soriano, adds, according to ‘Variety’: ‘My general belief is that artistes are going to do what they’ve always done and ultimately embrace the technology and create things that we’ve never seen before or thought of to entertain us and drive human development.
‘The biggest threat is the economic threat,’ she concludes, ‘but we’ll probably figure out the economic solutions, as we’ve done before with photocopy machines, recorded music, Napster and YouTube.’
In fact, ‘Variety’ notes, many feel that AI can actually be used to police copyright infringement, whether committed by humans or machines.
Matthew Stepka, an engineer and attorney who was previously VP of business operations and strategy for special projects at Google, now lectures at the business and law schools of the University of California, Berkeley, and invests in AI ventures. He notes that AI has the potential to be an effective plagiarism detective.
‘With YouTube, they did fingerprinting on music so if it’s played in the background, the artiste can get paid, but it has to be an exact copy of a commercially published version,’ Stepka says. ‘AI can actually get over that hurdle: It can actually see things, even if it’s an interpolation or someone just performing the music.’
Sabec adds: ‘If AI listens to music and any derivative content with an algorithm to identify where the music originated, and creates a mechanism to collect revenue generated by that content with the ability to then pay the content creators, that could be a huge benefit to artistes.’