Evolving Online Terrain in an Inert Legal Landscape: How Algorithms and AI Necessitate an Amendment of Section 230 of the Communications Decency Act
By Ellison Snider.
The consequences of online speech are undeniable, and yet, as the internet rapidly evolves, Section 230 of the Communications Decency Act (CDA 230), the federal law most concerned with internet regulation, remains unchanged. The pervasive presence of algorithms and artificial intelligence (AI), sophisticated technologies that platforms use to autonomously organize and facilitate the spread of online speech, warrants a reassessment of CDA 230’s scope and applicability in the context of the modern internet.
The internet’s impact on modern society is profound. In recent years, it has supported powerful grassroots organizing for social movements and allowed connection to distant loved ones during a global pandemic. The internet has also enabled new forms of abuse and harassment and catalyzed the spread of misinformation about American democracy, culminating in a historic insurrection. Both the movement-building and the misinformation were enabled by platforms, such as Facebook, YouTube, and Twitter, that rely on algorithms and AI. These tools have dramatically changed the internet since 1996, the year of CDA 230’s enactment, when the statute famously “created the internet.” Employing algorithmic technology and AI, platforms now autonomously shape massive amounts of online speech, for example, by recommending user-developed content and by enforcing community guidelines to moderate it.
The legal analysis of liability should change in step with the internet’s changes, and platforms should be liable for the way their own algorithmic tools shape and control online speech. CDA 230’s robust immunity, however, as currently interpreted by the courts, makes it all but impossible to hold platforms accountable for the harmful offline consequences of their conduct. Indeed, courts have interpreted CDA 230 immunity so expansively that, as platforms have become more autonomous and impactful, the immunity has grown to swallow nearly all platform liability.
The question whether CDA 230’s immunity applies when a platform has employed algorithms and AI to shape the reach of user-developed content has been litigated in several federal courts of appeals and is now under consideration by the U.S. Supreme Court. However the Court answers, several notable scholars have already offered meaningful and reasonable proposals to amend CDA 230 to account for platforms’ algorithmic impact on online speech. After surveying these proposals, this Note concludes that a new or reformed internet law should hold “Bad Samaritans” responsible for explicitly or knowingly perpetuating harmful content on their platforms; be narrow enough in scope to regulate online speech, rather than all online activity; and adopt a flexible, modern legal standard so that courts may more proficiently and consistently assess algorithmic harm on platforms.
The internet is not hopeless. It is a remarkable innovation, and CDA 230 is crucial to its flourishing. But as the internet evolves and grows in sophistication, so should the law. This issue is too important to go unaddressed any longer.