
I (DON’T) KNOW IT WHEN I SEE IT: THE DANGERS OF DEEPFAKES

By: Ryken Kreps, Vol. 107 Staff Member[1]

Deepfakes are images, videos, or audio clips created by artificial intelligence that depict people doing whatever the deepfake creator wants, with eerie realism.[2] Part I of this Post discusses the background of deepfakes and the recent controversy surrounding them. Part II calls attention to the limited legal options available to victims of deepfaked sexual content. Part III then discusses the bill currently moving through the Minnesota legislature and argues for two points about the legislative response to deepfakes: first, that there should be a response, and second, that states should enact statutory language that makes the law easy to find.

I. BACKGROUND

Deepfakes have been around for several years, but at the end of January 2023, Twitch[3] streamer Brandon “Atrioc” Ewing created a storm of controversy that brought the dark side of deepfakes back into cultural focus.[4] He accidentally showed an open tab of a deepfake porn website for a second on his livestream, and the moment went viral.[5] The clip exposed many people to the startling realism of deepfakes and drove a surge of traffic to the website.[6] Many women who have amassed large online followings had been deepfaked into sexual content on that website and were subsequently subjected to harassment by people spreading deepfake images from it.[7]

One of Ewing’s friends, who goes by QTCinderella (QT), has spoken at length about the harassment and harm she suffered as a result of her image appearing on the site.[8] People sent her the deepfakes along with harassing messages, and seeing those images caused her severe emotional harm and body dysmorphia.[9] She said during a podcast that she would not be able to convince her dad that those videos were not really of her, which illustrates both the disturbingly realistic nature of the images and the social and emotional harms that deepfake victims fear.[10]

QT promised that she would sue the creator of the deepfake site for these damages, but doing so is difficult.[11] Unfortunately, the legal system is currently not well equipped to provide a remedy for people who are targeted by deepfaking.[12] While the recent flurry of deepfake activity has mostly involved celebrities, the technology is widespread enough that anyone can be a potential victim.[13] Deepfakes can also go hand-in-hand with “revenge porn,” in which someone spreads sexual content of another person in order to harass them.[14] States need to enact measures to protect their citizens.[15]

The problem with deepfakes is also not limited to images or videos. Recently, a new video format has gone viral: using AI to make it sound as if celebrities are saying things they never said.[16] The proliferation of these videos shows how easy it is for anyone to find and use this technology. These viral memes are made for laughs, but it is not hard to imagine the personal or professional damage that targeted use of this technology could do, on both large and small scales.

II. REMEDIES

There are limited remedies for victims of deepfakes. Some states have laws that specifically address deepfake pornography, but in most states, victims will have to seek recovery under mechanisms less specific to deepfakes, such as anti-cyberbullying laws and general torts.

A. Legal Options for Victims that Directly Target Creators of Deepfakes

Currently, only California, Georgia, Hawaii, New York, Virginia, Florida, and South Dakota have statutes that specifically address deepfakes or “falsely created images” that show a person engaged in sexual activity.[17] In most other states, victims need to rely on more general legal principles to seek relief.[18] California’s statute is a leading example of an anti-deepfake law that allows for a private right of action.[19]

Under the California law, which is broadly representative of the statutes in other states, an individual depicted in deepfake pornography can sue the creator of the deepfake or anyone who intentionally distributes it.[20] The depicted individual can recover money damages, including:

  • Whatever the creator gained by selling the content.
  • Damages for harm caused by distributing the content, including emotional distress.
  • Punitive damages against the creator.
  • Attorney’s fees and costs.

This law explicitly covers sexual content—harms done by other sorts of deepfakes are currently not covered, other than some election-specific provisions that protect political candidates.[21]

B. The Limited Legal Tools Available in States Where Deepfakes Are Not Illegal

If deepfake pornography is created and distributed to specifically harm someone (like the direct harassment QT experienced), that conduct can be classified as cyberbullying. Every state has anti-cyberbullying statutes that apply to people who use deepfakes to harass a specific person.[22] There could also be a claim for intentional infliction of emotional distress if someone uses deepfakes to intentionally cause severe emotional harm (such as through targeted harassment).[23] Emotional distress can be hard to recover for when it is not paired with a physical injury, but it could be a starting point for a lawsuit or an additional claim to bring.

The most difficult situation for a victim seeking recovery is when a deepfake is created without specific intent to harass that person and the state does not ban deepfakes. This is a particular problem for female celebrities. In QT’s case, it is hard to prove that the creator of the site made deepfakes in order to cyberbully any specific person; he was simply monetizing artificial intelligence (in a highly unethical way).[24] Depending on the state, the tort of “false light” may be a potential route to recovery.[25] This tort broadly covers situations in which someone (the deepfake creator) spreads falsehoods (the deepfakes) about someone else (the person targeted) that purport to show them doing something they did not actually do (the sexual content). To succeed on the claim, the false portrayal must be something the average person would find objectionable (as would be true for deepfaked sexual content).[26]

C. DMCA Takedowns to Remove Unwanted Content from Google Search Results

For many people, especially public figures targeted by massive amounts of deepfaking, the legal remedy is much less important than making the content harder to find.[27] Because the content uses the celebrity’s image, people targeted by deepfaking can submit Digital Millennium Copyright Act (DMCA) takedown requests to have that content removed.[28] The most important effect of this is that it can remove deepfake content from Google search results, which makes it much harder to find.[29] In keeping with a promise to right his wrongs, Atrioc himself is now working with a company that uses AI to submit takedown requests much more quickly than traditional legal services can.[30] Early results of this work with women who have been targeted by deepfakes suggest that it could be a very effective way to prevent unwanted deepfake content from spreading.[31]

III. LEGISLATIVE RESPONSES

On February 9, 2023, the Minnesota House introduced a bill to create both a civil cause of action and a criminal penalty for the creation and dissemination of deepfake pornography.[32] The bill has several important provisions: damages, injunctive relief, confidentiality, and tolling of the statute of limitations until the victim learns of the harm.[33] Like the California law, it is a useful model for other states. The California law contains many definitions that help clarify the type of harm covered, while the Minnesota bill has a confidentiality provision that other states should adopt to protect victims during the court process itself.[34] Because deepfakes are a heavily internet-based harm, the more states that enact similar statutes, the easier it becomes to reach interstate harms done online.

States should also use terminology consistent with the statutes of other states, and it would be wise to use the term “deepfake” in the statute itself, to make the legal remedies in each state easier to find. For example, several sources fail to list Hawaii among the states with laws protecting against deepfakes, even though those sources were created after the law’s passage; this is likely because the Hawaii law refers to “falsely created images” rather than to the keyword “deepfake,” which makes the statute harder to find.[35] The South Dakota law similarly refers to content “that has been intentionally manipulated to create a realistic but false image or recording that would cause a reasonable person to mistakenly believe that the image or recording is authentic.”[36] An article from March 13, 2023, well after Hawaii’s and South Dakota’s laws were enacted, claims that only four states have anti-deepfake laws.[37] States should use language that makes the law easy to find, so that victims can more readily locate their potential legal remedies and so that would-be deepfake creators are deterred from making that content.

It must be said that creating new legal remedies is not a cure-all. The legal process itself can inflict new harms on people seeking relief.[38] That effect is worsened by the specific nature of this harm: even with confidentiality provisions that insulate the victim from the public, the very images that caused the damage will have to be submitted to the court in order to resolve the claims. That process inevitably requires victims to relive their experiences and could set back their emotional recovery. Still, it is better to have an on-point statute that can work for victims quickly than to make their lawyers try to win cases based on less specific torts or other laws, or to leave discouraged victims to give up on a legal remedy because of insufficient protections.[39] There is also the problem that deepfakes spread over the internet, which makes it inherently difficult to track down and prosecute the creator, especially when the creator is an anonymous person online rather than someone the victim knows in real life. More states need to enact relevant laws in order to increase the chance that a given deepfake creator is violating those laws and can be found.

The focus of this Post has been legal remedies, but the real work that needs to be done is cultural. Atrioc caused the harms done to QT and other women because he failed to internalize how deepfake porn sexualizes and dehumanizes a person without their consent.[40] People sometimes need to see how real people are affected before they have a reason to change their behavior.[41]

 

[1] The title, I (Don’t) Know It When I See It, is a reference to Jacobellis v. Ohio. 378 U.S. 184, 197 (1964) (Stewart, J., concurring) (saying that he could not define porn, but he knew it when he saw it).

[2] Ian Sample, What Are Deepfakes – and How Can You Spot Them?, Guardian (Jan. 13, 2020), https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them [https://perma.cc/9B2R-UMZ4].

[3] Twitch is a website where people livestream themselves playing video games and interacting with viewers. Twitch, https://www.twitch.tv/p/en/about [https://perma.cc/3FLV-ARV9].

[4] Samantha Cole, Deepfake Porn Creator Deletes Internet Presence After Tearful ‘Atrioc’ Apology, Vice (Jan. 31, 2023), https://www.vice.com/en/article/jgp7ky/atrioc-deepfake-porn-apology [https://perma.cc/WRT3-JNZV].

[5] Id.

[6] Id.

[7] Id.

[8] Id. See also Fear&, Mike Majlak, QTCinderella, Hasan & Will Have A Serious Conversation.. (Deepfakes, CryptoZoo & More), YouTube (Feb. 6, 2023), https://www.youtube.com/watch?v=SWyoP1ztgzk [https://perma.cc/4YK5-9EWP] (discussing QT’s experiences from 1:12:00 to 1:35:00).

[9] Fear&, supra note 8, at 1:20:00.

[10] Fear&, supra note 8.

[11] Cole, supra note 4; Fear&, supra note 8, at 1:29:29 (“If my sixty year old dad were to see those videos, I would never be able to convince him I didn’t do it.”).

[12] Eric Kocsis, Comment, Deepfakes, Shallowfakes, and the Need for A Private Right of Action, 126 Dick. L. Rev. 621, 628 (2022).

[13] See, e.g., Coleman Spilde, The College Student Whose Face Was Deepfaked onto Porn, Daily Beast (Mar. 11, 2023), https://www.thedailybeast.com/another-body-at-sxsw-the-college-girl-whose-face-was-deepfaked-onto-porn [https://perma.cc/BS8L-TF2S].

[14] Rebecca Ruiz, Deepfakes Are About to Make Revenge Porn So Much Worse, Mashable (June 24, 2018), https://mashable.com/article/deepfakes-revenge-porn-domestic-violence#7ZvLTMiseOq0 [https://perma.cc/A8QD-SU3Y].

[15] Sensity (@sensityai), Twitter (Oct. 20, 2020, 9:51 AM), https://twitter.com/sensityai/status/1318565385449373697?s=20&t=n0UCLDvPx0HqODbc2xi98Q [https://perma.cc/A7NZ-CTJL] (“Today we go public with the findings of a new investigation: early this year we uncovered a new deepfake bot on Telegram, an evolution of the infamous DeepNude from 2019, which ‘undressed’ at least 100.000 women without their knowledge.”); Tiffany Hsu, As Deepfakes Flourish, Countries Struggle with Response, N.Y. Times (Jan. 22, 2023), https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html [https://perma.cc/4ZMD-4SSA?].

[16] See, e.g., Dalton Bantz, Biden & The Gang Play Uno (AI Voice Meme), YouTube, https://www.youtube.com/watch?v=28trJ24MGF8 [https://perma.cc/5YGP-LKLP].

[17] Cal. Civ. Code § 1708.86 (West 2022); N.Y. Civ. Rts. Law § 52-c (McKinney 2022); Va. Code Ann. § 18.2-386.2 (2022); Haw. Rev. Stat. § 711-1110.9 (2022); Ga. Code Ann. § 16-11-90 (2022); Fla. Stat. § 836.13 (2022); S.D. Codified Laws § 22-21-4 (2022).

[18] Cal. Civ. Code § 1708.86 (West 2022); N.Y. Civ. Rts. Law § 52-c (McKinney 2022).

[19] Kocsis, supra note 12, at 624; Cal. Civ. Code § 1708.86 (West 2022). Virginia and Texas have laws that allow for criminal prosecutions in certain circumstances. Va. Code Ann. § 18.2-386.2 (2022) (criminalizing altered videographic content that shows people doing things they did not do); Tex. Elec. Code Ann. § 255.004 (West 2021) (criminalizing deepfakes in the context of elections).

[20] Cal. Civ. Code § 1708.86 (West 2022).

[21] Cal. Civ. Code § 1708.86 (West 2022); Kocsis, supra note 12, at 635.

[22] Denis Binder, A Tort Perspective on Cyber Bullying, 19 Chap. L. Rev. 359, 362 (2016).

[23] Intentional Infliction of Emotional Distress, Legal Info. Inst., https://www.law.cornell.edu/wex/intentional_infliction_of_emotional_distress [https://perma.cc/Z7ZN-PNGK].

[24] The creator of the deepfake site posted an apology claiming that he did not understand how badly the things he did could affect people. Cole, supra note 4.

[25] False Light, Legal Info. Inst., https://www.law.cornell.edu/wex/false_light [https://perma.cc/ERZ2-JQ9X].

[26] Id.

[27] Atrioc VODs, An Update, YouTube (Mar. 18, 2023), https://www.youtube.com/watch?v=1iwCEGgxJE0 [https://perma.cc/55KS-TARV].

[28] What Is a DMCA Takedown?, DMCA.com (Feb. 28, 2023), https://www.dmca.com/FAQ/What-is-a-DMCA-Takedown [https://perma.cc/D6U3-L629].

[29] Atrioc VODs, supra note 27.

[30] Id.

[31] Id.

[32] H.F. 1370, 93d Leg. (Minn. 2023). The bill also contains a provision addressing election tampering.

[33] Id.

[34] Cal. Civ. Code § 1708.86 (West 2022); H.F. 1370, 93d Leg. (Minn. 2023).

[35] See, e.g., Abigail Loomis, Deepfakes and American Law, Davis Pol. Rev. (Apr. 20, 2022), https://www.davispoliticalreview.com/article/deepfakes-and-american-law [https://perma.cc/T9XA-QLAQ] (listing only Virginia, Texas, and California); Deep Fake Laws, Cyber Civ. Rts. Initiative (Sept. 22, 2021), https://cybercivilrights.org/deep-fake-laws [https://perma.cc/TKF9-J9AP] (listing only California, Georgia, Virginia, and New York); Frederick Dauer, Law Enforcement in the Era of Deepfakes, Police Chief Online (June 29, 2022), https://www.policechiefmagazine.org/law-enforcement-era-deepfakes [https://perma.cc/6JJJ-EY6E] (listing only Virginia, Texas, and California); @MatthewFFerraro, Twitter (Mar. 16, 2022, 9:20 AM), https://twitter.com/MatthewFFerraro/status/1504102569907470336 [https://perma.cc/UYB6-JUDX] (“#BREAKING! #SouthDakota becomes the 8th state to enact a law banning some #deepfakes! SD’s law bars nonconsensual #deepfake pornography.”). Ferraro’s Twitter thread claims that Wyoming has a law against deepfakes; no relevant Wyoming law could be found.

[36] S.D. Codified Laws § 22-21-4 (2022).

[37] Moira Donegan, Demand for Deepfake Pornography Is Exploding. We Aren’t Ready for the Assault on Consent, Guardian (Mar. 13, 2023), https://www.theguardian.com/commentisfree/2023/mar/13/deepfake-pornography-explosion [https://perma.cc/BB7F-KFP2].

[38] Nina I. Brown, Deepfakes and the Weaponization of Disinformation, 23 Va. J.L. & Tech. 1, 41 (2020).

[39] Kocsis, supra note 12, at 628.

[40] Fear&, supra note 8, at 1:21:59 (“He did not have a visceral reaction enough to exit out.”).

[41] Cole, supra note 4; Atrioc VODs, supra note 27.