If you’ve been on TikTok over the past month, it’s likely that Taylor Swift’s The Eras Tour has been all over your feed. You’ll have seen clips from the three-hour set, fan theories about rumoured new relationships and thousands upon thousands of videos of followers’ tour outfits.
One audio clip in particular plays over many of those videos. It’s of Swift saying: “Oh my God, your Eras Tour outfit looks so f***ing good” – and it has been used in 22,000 different videos. The thing is, Swift never actually said it.
That AI-generated clip is one of the more anodyne examples of a “deepfake”, a user-created piece of synthetic media that imitates the likeness of an individual. On TikTok, any sound that you can think of – from Swift and Kanye West singing High School Musical songs together to Freddie Mercury covering Michael Jackson’s “Billie Jean” – isn’t just readily available to use, but easy to make, using third-party AI platforms, and upload online.
The users who create and share the Swift deepfake audios probably don’t have any malicious intent – they just want to make content for their fandom. But in the hands of those preying on the loose legalities of synthetic media, deepfakes can do real harm to the person being imitated.
“This recent trend of creating artificial versions of celebrities’ voices, while fascinating from a technological standpoint, raises profound ethical questions and challenges our understanding of identity, consent, and privacy,” says Rijul Gupta, founder and chief executive of Deep Media, an AI platform that incorporates synthetic voices into a translating system to bridge communication across language barriers.
“While it is tempting to experiment with new vocal combinations and reimagine classic songs, we must remember that at the core of this technology lies the power to manipulate and distort reality. Misusing this power is not only harmful to the individuals involved but can also undermine the very fabric of trust and truth in our society.”
Social media’s role in parasocial relationships – “relationships” where one person has an illusion of intimacy with another person, usually a celebrity, who has no idea they exist – has blurred the line between true and false. By having access to celebrities’ social platforms and being updated on their personal life, fans gain the feeling of being “involved” in their lives.
The ethical debate around parasocial relationships and fan-made media isn’t anything new; for years, people have been writing fan fiction based on real-life celebrities. Some fellow writers and fans have criticised these fanworks because they strip away the celebrity’s agency, turning a real person into a character to be written about at will – a practice depicted on shows like Euphoria.
It might seem harmless to create audio deepfakes of celebrities, but it points to a larger problem that could hurt them both professionally and personally. As fans make increasing use of synthetic media and AI-generated content, the kinds of fan-made work they produce have shifted in a way earlier fandoms never saw.
And regardless of intent, deepfakes are non-consensual. They are the creation and dissemination of a fabricated work that mimics the likeness of a real, living person. In extreme cases, pornographic deepfakes have been made of celebrity women, something that can inflict psychological harm on the victim.
If a celebrity is in the public eye, is their voice fair game? Gupta doesn’t think so. “Voice, much like one’s likeness, is an intrinsic part of an individual’s identity,” he says. “To use someone’s voice without their permission is, undeniably, an ethical violation.
The question of whether a person can own their voice is complex, and while current legal frameworks may not provide adequate protection, it is our responsibility as a society to collectively address this challenge and ensure that advancements in AI technology respect the dignity and rights of all individuals.”
From a public relations perspective, deepfakes can damage both the career and the personal life of an artist like Taylor Swift, who has had her own experience of misinformation being spread about her. One popular TikTok audio deepfake featured her voice making a snarky, wealth-shaming comment about not performing to “poor bitches”.
Avid fans know that Swift would never say something like this. But for the casual fan, it’s difficult to discern between fiction and reality. “With synthetic media, celebrities may lose control over their public image and personal brand,” says Gupta.
“Unauthorised deepfakes and AI-generated content can misrepresent the artists, potentially leading to reputational damage or the dissemination of misinformation.”
For musicians, there is a huge financial issue with AI-generated music. Last month, what was believed to be a clip of a collaboration between Drake and The Weeknd went viral on TikTok.
Shortly after, a full-length version appeared on all music streaming platforms, garnering 630,000 streams on Spotify and 230,000 views on YouTube within 24 hours. And just like the Swift audio, it was fake. Universal Music Group, the label for both, noted that AI platforms have an ethical responsibility to do what they can to ensure artists are protected financially and creatively.
Scott Keniley, an entertainment lawyer, notes that “creating the deepfake to commit a fraud is of major concern in the same manner as stealing another’s identity”.
When discussing the ethics and legalities around deepfakes, Keniley cites Midler v Ford Motor Co, a 1988 case in which the singer Bette Midler successfully sued the carmaker for using a sound-alike imitation of her voice in its commercials. “It is my opinion that deepfaking another’s identity for commercial gain is wrong,” says Keniley.
This leaves everyone – fans, celebrities, record labels – in a legal grey space. TikTok recently announced guidelines that require users to “clearly disclose synthetic media and manipulated content that depict realistic scenes with fake people, places, or events”. That’s a start, but it’s not enough.
It seems the best way for someone to protect themselves would be to trademark their own voice. But that’s not possible – yet.
While digital service providers – the streaming platforms – and record labels wait for stricter laws around synthetic media, they must do what they can to either embrace aspects of AI or fight back against its use.
Artists themselves are at odds about what to do: Timbaland recently debuted a song with an AI-generated verse by the late Notorious B.I.G., while Nick Cave has said that AI may “save the world, but it can’t save our souls”.
Keniley recommends coming up with unique ideas to merge music and AI so artists can be properly compensated. “Thinking outside the box should be the first reaction on how to embrace the AI tech.”
Take the musician Grimes: she created the platform elf.tech, which allows users to use her AI-generated voice to make new music in exchange for 50 per cent of the royalties. “It’s a start,” says Keniley.
In America, election season looms. And as generative AI becomes more widely used, it will be easier than ever to create an AI-generated clip of a candidate saying just about anything – as seen in one AI audio of President Joe Biden.
To recognise and identify manipulated content, Gupta recommends using tools, like Deep Media’s DeepID platform, that can detect deepfakes in various forms, such as images, videos, or audio, to “foster a more trustworthy digital environment”.
As fans continue to lip-sync to fake duets or create fan edits of their favourite celebrities “shouting” out to them on TikTok, it’s important to advocate for the ethical use of AI. Anything that is not consensual is stripping a celebrity of their agency – and surely no fan wants that.