Deepfakes are out of control: Victims speak out
Deepfakes, hyper-realistic but entirely fake content created with artificial intelligence, have gained a reputation for causing distress, harm, and havoc. In partnership with Eiris, TMF! has investigated the devastating state of image-based abuse, through the voices of the victims.
1. Sexual Abuse
Many women, including celebrities and everyday individuals, have found their faces superimposed onto explicit content against their will. This non-consensual pornographic use can lead to deep emotional scars.
Celebrity deepfakes initially garnered significant media attention, with the likes of Gal Gadot becoming involuntary subjects. Now the abuse has trickled down not only to influencers but to everyday people like Lauren, who said that watching deepfake porn of herself, sent by a stalker, felt akin to digital rape.
“Even though it was fake, it still made me feel really ashamed and gross.”
At least 244,625 videos have been uploaded to the top 35 websites set up either exclusively or partially to host deepfake porn videos in the past seven years. Over 50% of those were uploaded in 2023.
2. Financial Scams and Fraud
Deepfakes can also be used for fraud, with scammers posing as loved ones to extort money. We have reached a point where we can no longer trust the faces and voices of our loved ones.
When 73-year-old Ruth Card heard what she thought was the voice of her grandson Brandon on the other end of the line saying he needed money for bail, she and her husband rushed to the bank.
"It was definitely this feeling of... fear. That we've got to help him right now."
Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Now the Carnegie Endowment has issued an official warning that deepfakes will increasingly be used in scams.
3. Bullying
Cyberbullying has a hot new tool: AI. Children are using this technology to mock, belittle, or intimidate others. Miriam’s 14-year-old daughter was sent deepfaked nude photos of herself from anonymous accounts that demanded money to stop the images from circulating in her local community. Thirty more girls were targeted in the following month.
“If I didn’t know my daughter’s body, this photo looks real.”
The Struggle for Recourse
Despite this accelerating harm, victims of deepfakes face an uphill battle when seeking justice or even a semblance of reprieve. Platforms, in their attempt to maintain neutrality, often take a hands-off approach, putting the onus on the victim to prove the content is fake. This can be a painstaking process requiring technological expertise and financial resources. Governments and law enforcement agencies, too, are grappling with the legal implications of deepfakes and are often ill-equipped to handle them.
The lack of explicit laws against deepfakes, combined with the challenges of jurisdiction in the online world, leaves victims feeling helpless, further magnifying their trauma.
So… what’s the solution?
We need both 1) proactive and 2) reactive measures against image-based abuse.
1. Proactive Measures:
Education and Awareness: The first line of defense against deepfakes is to be aware of their existence and potential harm. Educational campaigns can be initiated in schools, workplaces, and communities to help individuals identify and report deepfakes.
Technological Solutions: Firms are developing tools that can detect deepfakes, including invisible watermarking and AI-powered facial recognition and identification software. The more we train AI models to detect manipulated content, the better our chances of keeping such content at bay.
Platform Policies: Social media platforms and other content-sharing websites must have stringent policies and AI tools in place to detect and remove deepfakes before they go viral. Think tanks such as tethix can be a good start.
2. Reactive Measures:
Support for Victims: Provide counseling and psychological support for victims. Dealing with the trauma of being a deepfake victim requires professional help. NGOs and governmental bodies can collaborate to offer these services.
Community Solidarity: As a society, we should stand with victims, not stigmatise them. Deepfake victims, much like other cyberbullying victims, want to be part of a community that fully understands the impacts of these experiences.
Cognitive Behavioural Therapy: Techniques including journaling, mood trackers, and mindfulness meditation can be helpful for deepfake victims, as they focus on changing thought patterns and behaviours to cope with emotional and psychological trauma.
Psychological Self-Defence: Encourage victims to use tech tools to manage their social media environment and protect their wellbeing, by disabling or limiting notifications, muting or blocking users, and tightening their privacy settings.
In conclusion, while deepfakes pose a considerable threat, combining technological solutions, legal changes, and societal support can pave the way to a safer digital future. It’s a collective fight, one where every stakeholder – from tech firms to governments to everyday internet users – has a vital role to play.