By: Ellison Snider, Volume 106 Staff Member

Last month, Frances Haugen, a former product manager at Facebook, testified before a subcommittee of the Senate Committee on Commerce, Science, and Transportation about the company’s one-way mirror on its users.[1] After leaking internal Facebook documents to the Wall Street Journal, Haugen showed the public that Facebook knows a lot about its users[2] and that its users do not know nearly enough about Facebook. Haugen explained how Facebook’s products, including Instagram and WhatsApp, impact the safety and well-being of individuals[3] and described the company’s powerful influence on society.[4] Importantly, Haugen testified that Facebook knew its products caused harm and, with that knowledge, chose to implement a new algorithm that exacerbated that harm.[5] Haugen’s testimony signals a much-needed departure from Internet Exceptionalism, the idea that because the Internet is so unique and fragile, it should be shielded from standard legal mechanisms aimed at accountability.[6] One way to begin regulating the Internet is to clarify and amend the scope of Section 230 of the Communications Decency Act of 1996[7] and hold sites like Facebook liable for the impact and injury of their algorithms.


Websites use algorithms to collect user data and curate user experience. An algorithm is “any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.”[8] Facebook’s algorithm is a machine learning algorithm, meaning it draws inferences from user data and creates predictive outputs that influence and motivate engagement with user content.[9] Algorithms are sophisticated tools for solving problems and efficiently completing tasks, and Internet users often take them for granted.[10]
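To illustrate the input-and-output definition above, consider a toy ranking procedure: it takes posts with engagement signals as its input values and produces a reordered feed as its output. The signal names and weights below are entirely hypothetical and illustrative; they are not Facebook’s actual system.

```python
# Toy illustration of an engagement-ranking algorithm. Input: a list of
# posts with engagement signals. Output: the same posts reordered by a
# predicted-engagement score. All weights are hypothetical.

def engagement_score(post):
    # Weight signals that (hypothetically) predict future engagement
    # more heavily than simple likes.
    return 1.0 * post["likes"] + 3.0 * post["comments"] + 5.0 * post["shares"]

def rank_feed(posts):
    # Takes a set of values (posts) as input, produces a set of values
    # (a ranked feed) as output -- the textbook definition of an algorithm.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "a", "likes": 120, "comments": 4, "shares": 1},   # score 137
    {"id": "b", "likes": 10, "comments": 40, "shares": 12},  # score 190
]
print([p["id"] for p in rank_feed(posts)])  # prints ['b', 'a']
```

Even this crude sketch shows the dynamic critics describe: whatever content maximizes the chosen score rises to the top, regardless of whether that content is healthy for users.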

Algorithms can be used as nefarious tools, however. For example, as Facebook’s own research found, its algorithm negatively impacts the mental health of young users, particularly adolescent girls experiencing body image issues.[11] Despite finding that its products hurt young people, internal documents indicate that the company wanted to increase its reach among young users.[12] Moreover, in 2018, Facebook adapted its algorithm in a way that sowed division among users, amplifying content that provoked strong emotions and kept users constantly engaged with Facebook products.[13] Beyond the partisan chasm this algorithmic change perpetuated, Facebook’s own reports credit the algorithm with spreading misinformation.[14]

Put simply, Facebook’s algorithm analyzes user-developed content, predicts which content will increase site engagement, and amplifies that content to increase Facebook’s profits, no matter the cost. As the law stands today, until Congress takes action, Facebook is seemingly allowed to do this.


Section 230 (the Safe Harbor Provision) of the Communications Decency Act of 1996[15] has been credited with the creation of the Internet.[16] Section 230 immunizes “interactive computer services,” such as Facebook, from liability for the information users post on their sites.[17] Heated debate surrounds § 230’s role in incentivizing public discourse and innovation, on the one hand, and disincentivizing regulation for safety and truth, on the other. Surprisingly, however, there is bipartisan agreement that § 230 requires congressional attention and action.[18]


Section 230 was enacted well before “interactive computer services” began implementing algorithms to increase engagement and improve user experience. However, when the question of whether algorithms are subject to § 230 was raised before the Second Circuit, the court ruled that algorithms fall within the Internet’s Safe Harbor.[19] In Force v. Facebook, Inc., family members of Americans killed in Israel by Hamas, an international terrorist organization, alleged that Facebook’s friend suggestion algorithm materially assisted the terrorist organization.[20] In a 2–1 decision, the court held that Facebook’s algorithm was immune from liability under § 230 because Facebook was arranging and distributing third-party information, a typical editorial function immune from liability under § 230, and because the content was exclusively developed by Hamas.[21] The dissent, however, articulated two compelling reasons why Facebook acted beyond the scope of § 230 immunity. First, the dissent argued that Facebook was creating and communicating its own message to users—“that it thinks you, the reader—you, specifically—will like this content.”[22] Second, the dissent noted the role Facebook’s algorithm plays in the creation of offline social networks, a utility beyond the traditional editorial functions protected under § 230.[23] While Force captures key arguments on both sides of the debate over algorithmic liability under § 230, it is just one circuit’s analysis of the question, and on May 18, 2020, the U.S. Supreme Court denied certiorari on the plaintiffs’ appeal.[24]

In February of this year, however, two years after Force, the Texas Supreme Court addressed the question of algorithmic immunity under § 230 in a mandamus proceeding.[25] The ruling collectively addressed three separate lawsuits brought by survivors of sex trafficking who alleged that Facebook knowingly facilitates human trafficking on its platforms.[26] Three years earlier, Congress had passed legislation making an overdue amendment to the scope of § 230 to ensure it did not prevent federal and state enforcement of sex trafficking statutes.[27] Reading § 230 together with state sex trafficking law, the court held that the plaintiffs could proceed on their claims that Facebook had “affirmatively acted” beyond the immunization bounds of § 230.[28] While the ruling was subject-area specific, the court wrote broadly about what it believed to be Facebook’s potential liability under § 230: “We find it highly unlikely that Congress, by prohibiting treatment of internet companies ‘as . . . publisher[s],’ sought to immunize those companies from all liability for the way they run their platforms, even liability for their own knowing or intentional acts as opposed to those of their users.”[29] Taken together, the disparate stances of the Second Circuit and the Texas Supreme Court indicate a clear need for congressional clarification of § 230, particularly concerning algorithmic amplification of user content and influence on user conduct.


In her testimony, Haugen called on Congress to reform § 230 to exclude algorithms from immunity, making companies like Facebook liable for amplifying harmful content, particularly content known to be harmful to users.[30] Some members of Congress agree with Haugen, and there has been a recent proliferation of legislative proposals aimed at amending § 230 to account for the impact of algorithms. Earlier this year, before Haugen’s testimony, a bipartisan congressional effort was underway with the introduction of the Don’t Push My Buttons Act, directed at narrowing the scope of § 230 to exclude websites that collect user data and use that data to show users content intended to agitate them.[31] Additionally, the Protecting Americans from Dangerous Algorithms Act was introduced to amend § 230 to reference existing civil rights law to mitigate algorithmic amplification.[32] The Disincentivizing Internet Service Censorship of Online Users and Restrictions on Speech and Expression (DISCOURSE) Act would remove § 230 immunity for platforms whose algorithms censor user content.[33] Finally, since Haugen’s testimony, leaders of the House Committee on Energy & Commerce have introduced the Justice Against Malicious Algorithms Act to remove absolute immunity for online platforms that recommend content that materially contributes to physical or severe emotional injury.[34] Though there is disagreement about how to amend § 230, there is bipartisan agreement that § 230 must be amended.

The Internet has transformed the way we experience and perceive the world. In its infancy, the Internet needed a safe harbor for comfortable innovation. The Internet, however, is no longer any kind of exception. The Internet is the rule, and it needs to be carefully regulated as such. As more is learned about algorithms, congressional clarification and amendment of § 230 is needed to account for algorithmic harm.


[1] Protecting Kids Online: Testimony from a Facebook Whistleblower Before the S. Subcomm. on Consumer Prot., Prod. Safety, and Data Sec., 117th Cong., 1st Sess. (2021).

[2] See generally The Facebook Files, Wall St. J., [] (analyzing Facebook’s internal documents leaked by Haugen to the Wall Street Journal).

[3] The Journal, The Facebook Files, Part 2: ‘We Make Body Image Issues Worse’, Wall St. J. (Sept. 14, 2021), [] (describing internal research at Facebook on how one of its products, Instagram, negatively impacts the mental health of young girls); The Journal, The Facebook Files, Part 3: ‘This Shouldn’t Happen on Facebook’, Wall St. J. (Sept. 18, 2021), [] (describing Facebook documents about human trafficking happening on the site and its inadequate response).

[4] The Journal, The Facebook Files, Part 4: The Outrage Algorithm, Wall St. J., (Sept. 18, 2021), [] (describing Facebook documents laying out the company’s decision to implement a new algorithm in 2018 that increased user engagement by increasing sensational, divisive content and misinformation).

[5] Id.

[6] Neil Fried, The Myth of Internet Exceptionalism: Bringing Section 230 into the Real World, Am. Aff. J. (2021), [] (last visited Oct. 21, 2021); see also Ashley Deeks, Facebook Unbound Foreword, 105 Va. L. Rev. Online 1, 6–7 (2019) (“First, members of Congress lack sophisticated understandings of how these companies—and the technologies that undergird their products—work . . . . Second, knowing what to regulate, in what level of detail, and at what stage in the overall development of technologies such as machine learning is simply hard . . . . Third, Congress fears undercutting U.S. innovation by regulating too soon.”).

[7] 47 U.S.C. § 230.

[8] Thomas H. Cormen et al., Introduction to Algorithms 5 (3d ed. 2009).

[9] Gabriel Nicholas, Explaining Algorithmic Decisions, 4 Geo. L. Tech. Rev. 711, 714 (2020).

[10] Lee Rainie & Janna Anderson, Code-Dependent: Pros and Cons of the Algorithm Age, Pew Rsch. Ctr. (Feb. 8, 2017), [].

[11] The Journal, The Facebook Files, Part 2: ‘We Make Body Image Issues Worse’, Wall St. J., (Sept. 14, 2021), [].

[12] The Journal, The Facebook Files, Part 5: The Push To Attract Younger Users, Wall St. J., (Sept. 29, 2021), [].

[13] The Journal, The Facebook Files, Part 4: The Outrage Algorithm, Wall St. J., (Sept. 18, 2021), [].

[14] Id.

[15] 47 U.S.C. § 230.

[16] See generally Jeff Kosseff, The Twenty-Six Words That Created the Internet (2019) (detailing the history of Section 230).

[17] 47 U.S.C. § 230(c)(1).

[18] See Michael D. Smith & Marshall Van Alstyne, It’s Time to Update Section 230, Harv. Bus. Rev. (Aug. 12, 2021), [] (quoting then-presidential candidate Joe Biden, saying that Section 230 should be “revoked, immediately,” and Senator Lindsey Graham, saying that “Section 230 as it exists today has got to give.”).

[19] Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

[20] Id. at 66.

[21] Id.

[22] Id. at 82.

[23] Id.

[24] Force v. Facebook Inc., SCOTUSblog, [] (last visited Nov. 10, 2021).

[25] In re Facebook, Inc., No. 20-0434 (Tex. Feb. 24, 2021) (order denying in part and granting in part writ of mandamus).

[26] Id.

[27] Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, 132 Stat. 1253 (2018) (codified as amended in scattered sections of 18 and 47 U.S.C.).

[28] In re Facebook, Inc., No. 20-0434, slip op. at 32 (Tex. Feb. 24, 2021).

[29] Id. at 27 (omission in original) (emphasis in original).

[30] Shirin Ghaffary, Facebook’s Whistleblower Tells Congress How to Regulate Tech, Vox (Oct. 5, 2021), [].

[31] H.R. 5815, 116th Cong., 2d Sess. (2020); Press Release, John Kennedy, Kennedy Introduces Don’t Push My Buttons Act to Deny Immunity to Manipulative Social Media Companies (July 13, 2021), [].

[32] H.R. 2154, 117th Cong., 1st Sess. (2021); Press Release, Tom Malinowski, Reps. Malinowski and Eshoo Reintroduce Bill to Hold Tech Platforms Accountable for Algorithmic Promotion of Extremism (Mar. 24, 2021), [].

[33] S. 2228, 117th Cong., 1st Sess. (2021); Press Release, Marco Rubio, Rubio Introduces Sec 230 Legislation to Crack Down on Big Tech Algorithms and Protect Free Speech (June 24, 2021), [].

[34] H.R. 5596, 117th Cong., 1st Sess. (2021); Press Release, House Committee on Energy & Commerce, E&C Leaders Announce Legislation to Reform Section 230 (Oct. 14, 2021), [].