What is a deepfake?
If you have seen Mark Zuckerberg brag about having ‘total control of billions of people’s stolen data’, or the uncanny Tom Cruise videos on TikTok, you have seen a deepfake. The technique, which originated in 1997 with the Video Rewrite program, uses a form of artificial intelligence called deep learning to digitally alter a person’s face or body in a video so that they appear to be someone else. While creating deepfakes once required a high-end desktop with a powerful graphics card and considerable technical expertise, they are increasingly accessible to the wider public: several companies will now create them for you, and mobile phone apps let you make them yourself.
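To give a rough sense of how this works under the hood: the early face-swap tools were built around a deep-learning model called an autoencoder, with one shared encoder that learns the general structure of a face and a separate decoder for each person. The sketch below, written in PyTorch with toy network sizes and random stand-in images rather than real training data, is a minimal illustration of that idea, not any specific tool’s implementation.

```python
# Minimal sketch of the classic deepfake architecture: a shared encoder
# learns identity-independent face structure, while one decoder per person
# learns to reconstruct that person's appearance. The swap comes from
# encoding person A's face and decoding it with person B's decoder.
# All layer sizes here are illustrative placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): each person's faces are reconstructed through the
# SAME encoder but their OWN decoder.
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of B

for _ in range(10):  # a real model trains for many thousands of steps
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": person A's expression and pose, rendered as person B.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Real systems add face detection, alignment and far larger networks trained for days, but the core trick is the same: encode one person’s expression and pose, then decode it in another person’s likeness.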
What are the main problems with deepfakes?
“Deepfake technology is being weaponised against women,” warns Danielle Citron, a professor of law at Boston University.
Despite the many spoof and satirical deepfakes, the vast majority are pornographic: according to the cybersecurity company Deeptrace, 96% of all deepfakes are non-consensual porn. Furthermore, as new techniques allow unskilled people to make deepfakes from a handful of photos, the technology is increasingly being used beyond the celebrity world to fuel revenge porn. According to government figures, around 1 in 14 adults in England and Wales report having been threatened with their intimate images being shared against their will, and one website which creates nude images from clothed ones received 38 million visits last year. The issue was brought to light in August by a BBC Panorama investigation, which exposed a network of men on Reddit who traded women’s nude images online, some of them faked, and who harassed and threatened the women.
What does the law say?
Deepfake pornography is a form of image-based sexual abuse. In Scotland, it is already an offence to share images or videos that show another person in an intimate situation without their consent. In England and Wales, however, it is currently only an offence if it can be proved that the sharing was intended to cause the victim distress. This loophole has meant that, in some cases, men have admitted sharing women’s intimate images without consent yet escaped prosecution because they said they did not intend to cause harm.
Fortunately, a planned new law, the Online Safety Bill, would make sharing pornographic deepfakes without consent a crime in England and Wales, as well as making it easier to charge people with sharing intimate photos without consent. Prosecutors would no longer need to prove an intent to cause distress, meaning the number of successful prosecutions for this type of offence would be likely to increase.
Issues with trust
“The problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable,” observes Professor Lilian Edwards of Newcastle University.
Finally, an obvious problem with deepfakes is trust: how do we distinguish truth from falsehood as they become increasingly prevalent? We have already seen the impact of this erosion of trust, as it becomes easier to cast doubt on real events. In his BBC interview with Emily Maitlis, for example, Prince Andrew challenged the authenticity of a photo taken with Virginia Giuffre. And last year, Cameroon’s minister of communication dismissed as fake news a video that Amnesty International believes shows the country’s soldiers executing civilians.
Big Tech companies are coming under increasing pressure to deal with such disinformation. New EU regulation will demand that tech firms take action against deepfakes and fake accounts, or risk fines of up to 6% of their global turnover. The strengthened code, which has already been signed by Clubhouse, Google, Meta, TikTok and Twitter, aims to stop platforms profiting from disinformation and fake news.