
PRESERVING THE E-MARKET OF IDEAS: HOW A NARROW “RIGHT TO BE FORGOTTEN” EXCEPTION TO SECTION 230 CAN COMBAT DIGITAL HARASSMENT WITHOUT DECIMATING DIGITAL DISCOURSE

By: Jordan Francis, Volume 105 Staff Member

Depending on whom you ask, we have either handed the levers of public discourse over to the maleficent interests of “Big Tech,” thereby making the Mark Zuckerbergs of the world the arbiters of truth and justice, or we have abandoned all control and given misinformation and hate speech the algorithmic power to violently crowd out truth and reason. Puzzlingly, proponents of each proposition place the blame on the same source: Section 230.

Platform moderation on social media has been a hot topic in recent years. That conversation is louder than ever following Twitter’s permanent suspension of Donald Trump’s personal account in response to the attempted insurrection at the United States Capitol in January.[1] Much of this conversation focuses on the often-maligned “Section 230.”[2] In many circles, Section 230 is praised. Noted cybersecurity law scholar Jeff Kosseff called the law “the twenty-six words that created the internet.”[3] However, a quick social media search for “Section 230” reveals a litany of vicious attacks decrying the law as a miscarriage of justice. Conservative commentators claim the law enables “liberal” social media giants to censor their opinions by insidiously insulating these companies from civil liability for taking down posts.[4] Progressive commentators lambast the law as enabling hate speech and argue that its repeal would force these companies to intervene.[5] With both sides of the political spectrum clamoring to repeal and replace the law, now is the time to examine how Section 230 actually functions and how it may benefit from narrow reform.

I. ORIGINS OF SECTION 230 AND THE LAW’S SUBSEQUENT JUDICIAL EXPANSION

Section 230 is the last remaining vestige of the Communications Decency Act of 1996. In the mid-1990s, Congress was increasingly concerned about the threat the internet posed to decency (i.e., the spread of pornography).[6] Competing bills were introduced in each chamber of Congress, and, after the reconciliation process, the Communications Decency Act (CDA) was signed into law.[7] This law was primarily designed to limit minors’ access to obscene materials online. However, there was one large problem that needed to be addressed: incentivizing online service providers to cooperate and willingly moderate the content on their sites.

Online services were concerned that engaging in content moderation would expose them to publisher liability. There is an important distinction in defamation law between a publisher and a distributor.[8] A publisher, because it selectively chooses what content it wants to put out into the world, can be held liable for defamation, whereas a distributor can only be held liable if it “knew or had reason to know of the defamatory statement at issue.”[9] When the internet was new, this raised an interesting question about where online platforms fell within that dichotomy. A trial court in New York made waves by holding that Prodigy, an online service that provided email, news, and message boards, was subject to publisher liability because it monitored and edited its computer bulletin boards, thereby exercising editorial control over the forum in which a user allegedly made defamatory comments.[10] Section 230 was a direct response to that case, which explains its somewhat incongruous presence in the CDA. Ultimately, the portions of the CDA barring the transmission of indecent material and the display of patently offensive content were found unconstitutional in 1997, largely gutting the law.[11] Section 230 therefore occupies a strange place as a sort of “living ruin,” the last functioning element of the otherwise defunct CDA.

In stark contrast to its complicated history, the text of Section 230 is rather concise. The relevant portion of the statute is subsection (c), “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” which breaks down into two components. First is the elimination of publisher liability: “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[12] This was Congress’s response to the Prodigy case. The second component, confusingly labeled “Civil Liability,”[13] states that providers and users of “interactive computer service[s] shall not be held liable” for “any action voluntarily taken in good faith to restrict access to . . . material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable . . . .”[14] Subsection (c)(2) is much narrower in application than (c)(1) and appears to be invoked less frequently.[15] It is perhaps most notable for its “good faith” requirement, which implies that good faith is not required to claim the broader publisher-liability shield of (c)(1).

Despite the relative simplicity of Section 230’s text, the statute’s meaning has been complicated by judicial expansion. In Zeran v. America Online, the Fourth Circuit held that (c)(1) eliminated not just publisher liability but distributor liability as well.[16] This decision rested on two premises. First, Congress intended to remove the disincentives to self-moderation, so the purpose of the statute weighed against retaining distributor liability.[17] Second, the court considered distributor liability to be a subset of publisher liability, thereby harmonizing its conclusion with the text of the statute.[18] This judicial gloss has held firm since 1997, but there are recent signs that the Supreme Court is willing to address the issue if the right case presents itself.[19]

II. A NARROW “RIGHT TO BE FORGOTTEN” EXCEPTION TO SECTION 230 COULD MITIGATE ONLINE ABUSE WITHOUT DESTROYING THE FLOURISHING DIGITAL ECONOMY

The above analysis shows that Section 230 is a mess—it is the poorly written, judicially expanded remnant of an otherwise unconstitutional act, drafted to regulate a fledgling internet that bears little resemblance to the one we use today. As Section 230’s vices have become apparent in recent years, a movement to repeal or reform the law has materialized. Proponents of repeal or reform tend to fall into two categories: those who oppose “censorship,”[20] and those who want social media companies to regulate what they deem to be harmful speech.[21] Both positions misinterpret what Section 230 does and what a post-Section 230 world would look like.

The censorship refrain is common on social media, but it is entirely misplaced. Social media platforms cannot censor people in the constitutional sense because these companies are private entities and therefore not bound by the First Amendment.[22] There is, however, a credible version of this argument if one believes that Section 230 enables something akin to censorship in a practical sense. By removing publisher liability for content moderation, Congress not only removed a barrier but provided an incentive for tech companies to engage in moderation. Absent that legal protection, these platforms would likely either engage in significantly more moderation (at considerable administrative cost) or in none at all. Repealing Section 230, and with it the incentive to moderate, could thus be an enticing gamble for those who feel that social media companies are silencing their opinions. Those same people should recognize, however, that the gamble could backfire and increase the number of user posts taken down.

The second major group of Section 230 skeptics, which includes President Biden,[23] argues that the law allows tech companies to propagate harmful, hateful, and outright false information with impunity.[24] The logic behind this position is that removing the legal shield and holding platforms liable for their users’ defamatory statements will force those platforms to step in and combat misinformation in order to protect their bottom line.[25] This position has factual and conceptual merit, but it is not as straightforward as proponents might believe. First, as noted above, if a Section 230 repeal is not done carefully, the result could be less moderation, not more.[26] Second, eliminating Section 230 may harm other social justice initiatives. For example, marginalized groups who rely on social media to share their experiences may suddenly find themselves under the thumb of aggressive content moderators who err on the side of suppressing content that is important for public discourse but exposes the platform to potential legal liability.[27]

Despite its many flaws, Section 230 clearly still plays a paramount role in preserving the social aspects of the internet and fostering public discourse. That does not mean, however, that reform is out of the question entirely. A carefully reasoned, narrowly targeted reform could mitigate some of the unintended harms of Section 230’s insulation from civil liability, particularly for victims of abuse. For advocates who want to combat online harassment, one potential avenue is a narrow “right to be forgotten.”[28] The right to be forgotten is a feature of EU law that requires[29] an organization in possession of personal data to delete that data at the individual’s request.[30] In the context of internet search engines, such as Google or Bing, this means that citizens can request that links containing sensitive information be suppressed from search results.[31] One distinctive feature of online harassment is that it sticks with victims—in decades past, a victim of malicious personal attacks could move and start fresh, but today a Google search of a victim’s name surfaces all of that abusive content.[32] For that reason, one potential tweak would be amending Section 230 to exempt search engines from the publisher-liability protection under (c)(1), thereby holding the likes of Google accountable for promoting content that defames people.[33]

Proponents of Section 230 who are motivated by free speech ideals might oppose this idea. The right to be forgotten, as applied in the EU, does make exceptions for the right of freedom of expression and information.[34] However, the United States is exceptionally sensitive to free speech concerns, so a narrower right to be forgotten might be more politically viable. One possible limitation, already mentioned, would address the relevant actors who could be held liable—search engines. Another would address the content that triggers liability—search results that a concerned citizen can show either defame them or make out a prima facie case of tortious invasion of privacy.[35]

Opponents of Section 230 who are motivated by combatting misinformation and online harassment might oppose these limitations. Limiting the exception to search engines does nothing to reduce harassment on social media platforms, where the harm can be greater because the intended audience is one’s peers. Likewise, limiting an exception to defamatory content or invasions of privacy does nothing to address problems of misinformation on issues like QAnon, COVID-19, or the U.S. election. Those are serious concerns, but they exceed the scope of this article. As explained above, the practical implications of a full Section 230 repeal are difficult to predict. Whether a world without Section 230 is one in which truth prevails and tech behemoths stamp out harassment is unclear. What is clear is that we should be cautious in continuously adding exceptions to Section 230.[36]

CONCLUSION

The temptation to do away with Section 230 is understandable, but misplaced. To some, it is the conduit of their frustrations with the culture wars and perceived bias on social media. To others, it enables the proliferation of hate and violence online. To the vast majority of online users, however, Section 230 is a background factor that quietly facilitates a pleasant user experience, carefully striking a balance by enabling some moderation but not requiring an overly cautious, burdensome amount. Reforming Section 230 is a legitimate endeavor, and the narrow proposal above is one option that policy makers could entertain. However, in deciding whether Section 230 should be reformed, and what that reform would look like in practice, policy makers need to be aware of these competing perspectives and be honest about how Section 230 actually operates. The internet is still relatively nascent, and a misstep in amending a law as seminal as Section 230 could have serious unintended consequences. With that in mind, any reform must be justified by a truthful, impartial cost-benefit analysis that is not marred by a faulty understanding of what this complicated, concise law actually achieves.[37]

 

[1] Tiffany C. Li, Trump’s Twitter Reign of Terror is Over. But His Impact on Social Media Isn’t., MSNBC, https://www.msnbc.com/opinion/trump-hated-section-230-after-capitol-riots-he-may-get-n1253524 [https://perma.cc/QH92-7S4S] (Jan. 8, 2021, 5:30 PM) (“In response to the Trump-supporting extremists storming the Capitol, platforms like Twitter and Facebook have finally taken the previously unprecedented leap to ban or block Trump’s account. Twitter blocked Trump’s ability to tweet — first for less than a day, before permanently suspending his account.”). For an excellent analysis of how Trump used his Twitter account during his presidency and the concomitant First Amendment implications, see Emilie Erickson, Access Denied: @RealDonaldTrump and the First Amendment, Minnesota Law Review De Novo (Jan. 6, 2021), https://minnesotalawreview.org/2021/01/06/access-denied-realdonaldtrump-and-the-first-amendment/ [https://perma.cc/E3M8-LJ2A].

[2] 47 U.S.C. § 230.

[3] See generally Jeff Kosseff, The Twenty-Six Words That Created the Internet (2019).

[4] See, e.g., Philip Hamburger, The Constitution Can Crack Section 230, Wall Street Journal (Jan. 29, 2021, 2:00 PM), https://www.wsj.com/articles/the-constitution-can-crack-section-230-11611946851 [https://perma.cc/LPN8-D2TN].

[5] See Scott Pelley, Why Victims of Internet Lies Want Section 230 Repealed, CBS News (Jan. 3, 2021), https://www.cbsnews.com/news/section-230-internet-60-minutes-2021-01-03/ [https://perma.cc/KB8V-ACVA].

[6] William A. Sodeman, Communications Decency Act, Britannica, https://www.britannica.com/topic/Communications-Decency-Act (last visited Jan. 14, 2021) (noting that the Communications Decency Act was “enacted by the U.S. Congress in 1996 primarily in response to concerns about minors’ access to pornography via the Internet.”).

[7] Id.

[8] Stratton Oakmont v. Prodigy Servs. Co., 1995 N.Y. Misc. LEXIS 229, at *6 (N.Y. Sup. Ct. May 24, 1995) (“A finding that PRODIGY is a publisher is the first hurdle for Plaintiffs to overcome in pursuit of their defamation claims, because one who repeats or otherwise republishes a libel is subject to liability as if he had originally published it. . . . In contrast, distributors such as bookstores and libraries may be liable for defamatory statements of others only if they knew or had reason to know of the defamatory statement at issue.”).

[9] Id.

[10] Id. at *17.

[11] Reno v. ACLU, 521 U.S. 844, 882 (1997) (finding that “the CDA places an unacceptably heavy burden on protected speech, and that the defenses do not constitute the sort of ‘narrow tailoring’ that will save an otherwise patently invalid unconstitutional provision.”).

[12] 47 U.S.C. § 230(c)(1).

[13] The preceding subsection about publisher liability also addresses civil liability.

[14] 47 U.S.C. § 230(c)(2).

[15] As of February 2, 2021, a simple case search on Lexis for “47 U.S.C. § 230(c)(1)” produces 490 results, compared to 68 for “47 U.S.C. § 230(c)(2),” indicating that (1) is invoked with much more frequency.

[16] Zeran v. America Online, Inc., 129 F.3d 327, 331–32 (4th Cir. 1997).

[17] Id. at 331 (“Congress enacted § 230 to remove the disincentives to self-regulation created by the Stratton Oakmont decision.”).

[18] Id. at 332 (“Assuming arguendo that Zeran has satisfied the requirements for imposition of distributor liability, this theory of liability is merely a subset, or a species, of publisher liability, and is therefore also foreclosed by § 230.”).

[19] See Malwarebytes, Inc. v. Enigma Software Grp. USA, 208 L. Ed. 2d 197, 202 (2020) (Thomas, J., concurring in denial of certiorari) (“Without the benefit of briefing on the merits, we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so.”).

[20] See Philip Hamburger, The Constitution Can Crack Section 230, Wall Street Journal (Jan. 29, 2021, 2:00 PM), https://www.wsj.com/articles/the-constitution-can-crack-section-230-11611946851 [https://perma.cc/LPN8-D2TN].

[21] E.g., Scott Pelley, Why Victims of Internet Lies Want Section 230 Repealed, CBS News (Jan. 3, 2021), https://www.cbsnews.com/news/section-230-internet-60-minutes-2021-01-03/ [https://perma.cc/KB8V-ACVA].

[22] U.S. Const. amend. I.

[23] Makena Kelly, Joe Biden Wants to Revoke Section 230, The Verge (Jan. 17, 2020, 10:29 AM), https://www.theverge.com/2020/1/17/21070403/joe-biden-president-election-section-230-communications-decency-act-revoke [https://perma.cc/4Q4S-GTM3].

[24] See Sara Morrison, How the Capitol Riot Revived Calls to Reform Section 230, Vox, https://www.vox.com/recode/22221135/capitol-riot-section-230-twitter-hawley-democrats [https://perma.cc/5NTB-5R3H] (Jan. 11, 2021, 4:55 PM).

[25] Id.

[26] See Stratton Oakmont v. Prodigy Servs. Co., 1995 N.Y. Misc. LEXIS 229, at *6 (N.Y. Sup. Ct. May 24, 1995). Section 230 was originally meant to encourage moderation.

[27] Adi Robertson, Social Justice Groups Warn Biden Against Throwing Out Section 230, The Verge (Jan. 27, 2021, 6:00 AM), https://www.theverge.com/2021/1/27/22251093/section-230-civil-rights-groups-letter-biden-harris-congress-defense [https://perma.cc/LTA9-XJPX].

[28] For an overview of the right to be forgotten (sometimes known as the right to erasure), see generally Everything You Need to Know About the “Right to be Forgotten”, GDPR, https://gdpr.eu/right-to-be-forgotten/#:~:text=An%20individual%20has%20the%20right,that%20individual%20withdraws%20their%20consent [https://perma.cc/A84Z-2J28] (last visited Feb. 16, 2021).

[29] Subject to select limitations. See id.

[30] Id.

[31] Leo Kelion, Google Wins Landmark Right to be Forgotten Case, BBC (Sept. 24, 2019), https://www.bbc.com/news/technology-49808208 [https://perma.cc/BG6D-D5Y9] (“In the case of search engines, Europeans have had the right to request links to pages containing sensitive personal information about them be removed since 2014. But the General Data Protection Regulation (GDPR) which came into force in 2018, added further obligations.”).

[32] E.g., Kashmir Hill, A Vast Web of Vengeance, N.Y. Times (Feb. 2, 2021), https://www.nytimes.com/2021/01/30/technology/change-my-google-results.html [https://perma.cc/562F-8HFZ] (explaining how harmful online harassment can be when Google searches continue to perpetuate defamatory content intended to smear someone).

[33] Commentators have advocated for a general right to be forgotten in the U.S. and identified that Section 230 is currently a bar to any such law. Kevin L. Vick, The Right to Be Forgotten, Media Law, https://www.medialaw.org/component/k2/item/3994-the-right-to-be-forgotten#:~:text=By%20contrast%2C%20there%20is%20no,Communications%20Decency%20Act%2C%2047%20U.S.C.&text=U.S.%20law%20recognizes%20the%20paramount,public’s%20right%20to%20access%20information [https://perma.cc/46AB-MSFQ] (last visited Feb. 16, 2021).

[34] Everything You Need to Know About the “Right to be Forgotten”, GDPR, https://gdpr.eu/right-to-be-forgotten/#:~:text=An%20individual%20has%20the%20right,that%20individual%20withdraws%20their%20consent [https://perma.cc/A84Z-2J28] (last visited Feb. 16, 2021).

[35] States recognize a number of different torts for invasions of privacy, such as intrusion on seclusion and public disclosure of embarrassing facts. For an overview of the torts related to privacy and their application to digital media, see Scott Jon Shangin, The Prosser Privacy Torts in a Digital Age, 2008 N.J. Law. 9 (2008).

[36] Section 230 expert Jeff Kosseff stressed this point recently in response to different congressional proposals amending the law: “Once you add enough exceptions to 230, it loses its primary benefits, and it makes little sense to have a piece of Swiss cheese on the books.” Jeff Kosseff (@jkosseff), Twitter (Feb. 5, 2021, 3:13 PM), https://twitter.com/jkosseff/status/1357799629996371968 [https://perma.cc/A26C-NFC4].

[37] For further reading on which institution is proper to reform Section 230, see Alan Z. Rozenshtein, Section 230 and the Supreme Court: Is Too Late Worse Than Never?, Lawfare Blog (Oct. 20, 2020, 1:01 PM), https://www.lawfareblog.com/section-230-and-supreme-court-is-too-late-worse-than-never [https://perma.cc/P8KP-WPD8] (arguing that, in light of the “profound trade-offs regarding speech, safety and economic competitiveness,” Congress should decide whether the Zeran interpretation of Section 230 needs reform).