Each autumn, I begin my course on the intersection of music and artificial intelligence (AI) by asking my students if they are concerned about AI’s role in composing or producing music.
So far, the question has always elicited a resounding “yes”.
Their fears can be summed up in a sentence: AI will create a world where music is plentiful, but musicians get cast aside.
In the upcoming semester, I am anticipating a discussion about Paul McCartney, who in June 2023 announced that he and a team of audio engineers had used machine learning to uncover a “lost” vocal track of John Lennon by separating the instruments from a demo recording.
But resurrecting the voices of long-dead artists is just the tip of the iceberg in terms of what is possible – and what has already been done.
In an interview, McCartney admitted that AI represents a “scary” but “exciting” future for music. To me, his mix of consternation and exhilaration is spot on.
Here are three ways AI is changing how music gets made – each of which could threaten human musicians:
1. Song composition
Many programmes can already generate music with a simple prompt from the user, such as “Electronic Dance with a Warehouse Groove”.
Fully generative apps train AI models on extensive databases of existing music. This enables them to learn musical structures, harmonies, melodies, rhythms, dynamics, timbres and form, and generate new content that stylistically matches the material in the database.
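The statistical idea behind this kind of generation can be illustrated with a deliberately tiny sketch – not the algorithm used by any real product, which relies on far larger models and data sets. Here, a simple Markov chain "learns" note-to-note transition probabilities from a small corpus of melodies and then generates a new melody that statistically resembles the training material:

```python
import random
from collections import defaultdict

# Toy training "corpus" of melodies (hypothetical, for illustration only).
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["E", "G", "A", "G", "E"],
]

# Learn which notes tend to follow which: count every adjacent pair.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8, seed=0):
    """Generate a melody by repeatedly sampling the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate())
```

Because every generated step is sampled directly from patterns in the training data, the output inevitably echoes that data – which is exactly why, at scale, such systems can reproduce recognisable fragments of copyrighted music.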
There are many examples of these kinds of apps. But the most successful ones, like Boomy, allow non-musicians to generate music and then post the AI-generated results on Spotify to earn money. Spotify recently removed many of these Boomy-generated tracks, claiming that this would protect human artists’ rights and royalties.
The two companies quickly came to an agreement that allowed Boomy to re-upload the tracks. But the algorithms powering these apps still have a troubling ability to infringe on existing copyright in ways that might go unnoticed by most users. After all, basing new music on a data set of existing music is bound to produce noticeable similarities between the music in the data set and the generated content.
Furthermore, streaming services like Spotify and Amazon Music are naturally incentivised to develop their own AI music-generation technology. Spotify, for instance, pays 70 per cent of each stream's revenue to the artist who created the music. If the company could generate that music with its own algorithms, it could cut human artists out of the equation altogether.
Over time, this could mean more money for giant streaming services, less money for musicians – and a less human approach to making music.