
Deep Fake Legislation in the USA: A Criticism of Proposed Bill H.R. 3230

Updated: Mar 9, 2020


“A lie can travel halfway around the world while the truth is still putting on its shoes.”


Not only does the content of this quote emphasise why preventing disinformation is important, but its common attribution to Mark Twain and Winston Churchill demonstrates how easily disinformation spreads on the internet. In fact, the quote traces back to something satirist Jonathan Swift wrote long before either Twain or Churchill was alive. While the misappropriation of a quote may not be a critical problem, the growing potential for computer-generated video and audio that look and sound real is far more concerning in our age of “fake news.” Historically, audio and (especially) video media have been considered reliable sources of information, but the introduction of deep fakes has thrown this assumption into question.


In the USA, legislative attempts to stop deep fakes in their tracks are being introduced as part of the country's response to past manipulations of its democracy and elections. Specifically, the DEEP FAKES Accountability Act (Bill H.R. 3230) ("the Deep Fake Bill") attempts "to combat the spread of disinformation through restrictions on deep-fake video alteration technology" by requiring that deep fake creators place watermarks and audio disclosures (“safeguards”) on their created or altered content. If passed, the Deep Fake Bill would impose both civil and criminal punishments on deep fake creators who fail to implement the legislated safeguards.


However, the very scope of the Deep Fake Bill and the definitional framework outlined within it exemplify how difficult it is to truly identify "deep fakes"—even conceptually—and offer an example of how important legislative definitions are when crafting law and policy around developing technology. The Bill also illustrates how difficult it is to legislate the internet and to identify appropriate informational chokepoints within US jurisdiction.


Overall, the draft bill fails in its attempt to prevent disinformation through visual and audio content in three ways:


  • First, its definitional scope fails to include doctored “real” videos (e.g. cheap fakes, maliciously edited content) that could be as damaging as deep fakes;

  • Second, it does not burden social media and online platform providers with a duty to adhere to a definitional standard for “deep fakes” and other maliciously edited media; and

  • Third, it fails to properly burden deep fake software developers with a duty to ensure that the safeguards the Deep Fake Bill attempts to legislate are, indeed, implemented.


All is not lost, however. Putting aside the practical difficulties of enforcing the law against foreign actors, the Deep Fake Bill has good bones and, with three substantive changes, could provide a very useful sword for both law enforcement and private individuals (yes, the bill allows for a private right of action!) to use against, at least, domestic perpetrators.


Below, I suggest three substantive improvements that could put H.R. 3230 in a better position to serve as a global standard for other nations to follow and a potential template for much-needed international agreements aimed at stopping the spread of disinformation without trampling on free-speech rights.

 

1) Expand The Bill's Scope & Definitions to Include "Cheap Fakes" and Other Maliciously Edited Media


The Deep Fake Bill defines what it calls an “Advanced Technological False Personation Record” [1] as any “deep fake” that a reasonable person would believe accurately exhibits any “material activity” of a living or deceased person which the person did not actually undertake, and which was produced without the consent of the person or the person’s estate [2]. It goes on to define “material activity” as “any falsified speech, conduct, or depiction” which a reasonable person would consider to cause harm to the individual or society by altering a public policy debate or election, causing the subject reputational damage, etc. [3]. Finally, the Deep Fake Bill’s definition of “deep fake” itself includes all kinds of visual and audio representations created by “technical means” rather than by impersonation (e.g. satire) [4].


While this definition seems quite exhaustive and well considered, it fails to capture other dangerous forms of audiovisual disinformation that may not focus on identifiable individuals doing or saying things they never really did. Some “cheap fakes”, which can be created by maliciously re-contextualizing or editing “real” footage (adding elements, or speeding up or slowing down existing material), are just as capable of misleading the public and damaging trust as true deep fakes. The danger of simple cheap fakes is aptly demonstrated by the recent child kidnap video that inspired multiple mob beatings and murders in India [5]. The video, which depicted a man on a moped grabbing a child from the street and driving away, was actually a segment of a child safety awareness film; the unedited version ends with the child being returned and one of the “kidnappers” holding up a sign that explains the incident [6]. The video, which was distributed primarily via WhatsApp, was accompanied by messages warning people of kidnappers arriving in cities and snatching children [7].


Such a simple edit does not appear to be accounted for in the proposed Deep Fake Bill, since the “material activity” of riding on a moped and snatching a child is not “falsified conduct” [8] but rather an intentional misrepresentation of a legitimate video. Additionally, under the Deep Fake Bill’s definition of a deep fake, the edited video does not actually depict conduct that the subject did not “in fact engage in” [9]. Finally, the definition of an “Advanced Technological False Personation Record” clearly implies that a video must use the likeness of an identifiable person without their consent, yet this edited version of the awareness video does not rely on misrepresenting a person’s identity to wreak havoc; indeed, the moped riders’ likenesses are completely obscured by helmets [10], and those murdered were not targeted because they bore some likeness to the perpetrators in the video. Rather, they were visiting men and women who were “unfamiliar” to locals and distrusted because they could not speak the regional language [11].


Similarly, the current draft of the Deep Fake Bill would not encompass, for instance, someone taking innocent footage of an airshow over a major city and digitally adding bombs to incite public panic. Yet such a video may be provably malicious and worthy of the criminal punishments outlined under H.R. 3230. Thus, the current definitional regime outlined in the Deep Fake Bill is limited by its technicalities and may undermine the legislature's broader goal of preventing the spread of disinformation.

 

2) Require Online Platform Providers to Use Legislative Definitions in Their Own User Agreements and Internal Policies


While the recent pressure from legislators and national governments on social media companies to "control" the content posted to their platforms is fraught with rule-of-law and free-speech issues, there is an argument for social media companies to retain some civil liability under the Deep Fake Bill. Currently, companies like Facebook and Google actively conduct "voluntary" deep fake detection efforts and have contributed greatly to research on deep fake detection [12]. Furthermore, Facebook has recently decided to officially ban deep fakes under its user policies and has promised to make efforts to remove them from its platform [13].


However, Facebook’s new policy on deep fakes would not extend to equally damaging cheap fakes—most notably the “drunken” Nancy Pelosi video that surfaced on the platform in May 2019 [14], which called into question the Speaker's credibility and ability to perform her very public role. Further, other platform companies may choose to use different definitional frameworks as they implement their own internal and user policies relating to disinformation. Such inconsistency and half-hearted protection efforts do not inspire confidence in the public—many of whom (rightly or wrongly) rely on such platforms for their news and other important information.


While technical challenges around identifying deep fakes may prevent burdening platform companies with full legislative responsibility (at least until the detection efforts contemplated in Section 7 of the Bill are fully developed), imposing the Bill’s definitional framework—once it is amended to include cheap fakes and other malicious deceptions—on these companies is a prudent interim step to ensure some consistency and an acceptable level of protection across the internet's most prevalent content distributors.

 

3) Impose Responsibility on Software Developers to Automatically Apply Legislated Safeguards on Content Made Using Their Products

Section 3 of the Bill requires software developers who reasonably believe that their product may be used to produce deep fakes to ensure that their software has the technical capability to insert the watermarks and disclosures required by the Bill, and to ensure that their terms of use require users to affirmatively acknowledge their legal obligation to heed the Bill’s safeguard requirements.

This section could be made far more aggressive and effective by requiring that software developers automatically apply the watermark and disclosures to any final product that utilizes a deep fake creation feature of the software. Developers could offer an option for users to remove such safeguards (should such automation affect the software's marketability or desirability abroad), provided that a pop-up warning about the legislative requirements in the USA is shown to users who choose that option. Ultimately, this would lessen accidental publishing and could make proving intent in criminal and civil proceedings much easier, since creators would have had to intentionally remove the safeguards and agree to assume responsibility under the legislation. A minimal sketch of this default-on export flow appears below.
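To illustrate—purely as a hypothetical, not anything specified in the Bill itself—the proposed default-on behaviour might look something like the following Python sketch. Every name here is invented for illustration; a real implementation would embed an actual visual watermark and audio disclosure into the media file rather than setting metadata flags.

```python
# A minimal sketch (assumed, not drawn from the Bill) of "safeguards on by
# default, with an explicit, acknowledged opt-out" at export time.
from dataclasses import dataclass, field

USA_WARNING = (
    "Warning: under the proposed DEEP FAKES Accountability Act (H.R. 3230), "
    "publishing altered media without the required watermark and disclosures "
    "may carry civil and criminal liability in the USA."
)

@dataclass
class ExportJob:
    filename: str
    used_deepfake_features: bool
    metadata: dict = field(default_factory=dict)

def export(job: ExportJob, remove_safeguards: bool = False,
           user_acknowledged: bool = False) -> ExportJob:
    """Apply safeguards by default; allow opt-out only after acknowledgment."""
    if not job.used_deepfake_features:
        return job  # unaltered media needs no safeguards

    if remove_safeguards:
        if not user_acknowledged:
            # Surface the pop-up warning before permitting the opt-out.
            raise PermissionError(USA_WARNING)
        # The affirmative acknowledgment leaves an evidentiary record of
        # intent, which is the point of the proposed amendment.
        job.metadata["safeguards_removed_by_user"] = True
        return job

    # Default path: watermark and audio disclosure are applied automatically.
    job.metadata["watermark"] = "ALTERED MEDIA"
    job.metadata["audio_disclosure"] = "This recording has been altered."
    return job

# Example: exporting without opting out applies the safeguards automatically.
clip = export(ExportJob("speech.mp4", used_deepfake_features=True))
assert "watermark" in clip.metadata
```

The design choice mirrors the argument above: because removal requires an affirmative step, accidental unmarked publication becomes unlikely, and a later prosecution can point to the acknowledgment as evidence of intent.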

 

In conclusion, while the Deep Fake Bill may cite deterrence as its main goal, it is unlikely that international deep fake creators will take much heed; the damage that even a handful of well-placed deep fakes or cheap fakes can cause warrants legislation with more aggressive preventative measures. The problems raised above demonstrate current weaknesses in the proposed legislation, and the amendments suggested alongside them offer practical solutions that balance constitutionally protected freedom of speech with the need to protect the integrity of American democracy and freedom of information.

 

Disclaimer: Nothing in the text above constitutes legal advice or gives rise to a solicitor/client relationship. Specialist legal advice should be taken in relation to specific circumstances. The contents of this article are for general information and academic purposes only. Whilst we endeavour to ensure that the information on this site is correct, no warranty, express or implied, is given as to its accuracy and we do not accept any liability for error or omission.


Citations


  1. DEEP FAKES Accountability Act of 2019, H.R. 3230, 116th Cong. § 1041(n)(1) (2019).

  2. Ibid. (n)(1)(A)-(B) (2019).

  3. Ibid. (n)(2) (2019).

  4. Ibid. (n)(3)(A)-(B) (2019).

  5. India WhatsApp ‘child kidnap’ rumours claim two more victims, BBC News (Jun 11, 2018), https://www.bbc.com/news/world-asia-india-44435127 (last accessed Jan 20, 2020).

  6. Ibid.

  7. Ibid.

  8. DEEP FAKES Accountability Act (n. 1) (n)(1)(A)-(B) (2019).

  9. DEEP FAKES Accountability Act (n. 1) (n)(3)(A) (2019).

  10. The News Minute, False Whatsapp messages on child abduction trigger violence in TN, killing two, YouTube (May 10, 2018), https://www.youtube.com/watch?v=1Qv8wR4B_bI (last accessed Jan 20, 2020).

  11. BBC News (n. 5)

  12. See Facebook, Creating a data set and a challenge for deepfakes, Facebook AI (Sep 5, 2019), https://ai.facebook.com/blog/deepfake-detection-challenge/; Lisa Vaas, Google made thousands of deepfakes to aid detection efforts, Naked Security (Sep 27, 2019), https://nakedsecurity.sophos.com/2019/09/27/google-made-thousands-of-deepfakes-to-aid-detection-efforts/.

  13. David McCabe and Davey Alba, Facebook Says It Will Ban ‘Deepfakes’, New York Times (Jan 7, 2020), https://www.nytimes.com/2020/01/07/technology/facebook-says-it-will-ban-deepfakes.html?auth=linked-google.

  14. Ibid.
