By Jenna Hensel.
Social media companies enjoy a broad scope of protection from liability under Section 230 of the Communications Decency Act. Section 230 offers social media companies two prominent protections: (1) protection from liability for user content posted on their websites, because social media companies “cannot be treated as the publisher or speaker” of user content, and (2) protection for taking good faith actions to restrict objectionable user content on their websites, thereby serving as “Good Samaritans.” Congress intended for social media companies to serve as Good Samaritans by implementing responsible content moderation practices to restrict objectionable user content. In addition, social media companies moderate content out of a sense of corporate responsibility to uphold First Amendment values and to preserve advertising revenue by restricting material that may impede it.
Yet, despite social media companies’ current Good Samaritan content moderation practices, users on social media websites are still being hurt. Social media users are experiencing physical, psychological, and emotional harm from being victimized by other users on social media websites. Additionally, harmful subcultures are developing and expanding on social media websites. Finally, social media content moderators are making an impermissible number of errors when moderating content. In short, social media companies are not sufficiently serving as Good Samaritans because an undue amount of user harm is occurring on social media websites. Social media companies are therefore falling short of the appropriate balance between preserving freedom of speech and preventing user harm.
This Note argues that for social media companies to meet Congress’s expectation that they serve as Good Samaritans and adequately protect their users, they must be held to more rigorous criteria for obtaining Good Samaritan immunity. The courts should read these new criteria into Section 230 and thus overrule Zeran v. America Online, Inc. The new criteria have two components: (1) social media companies must adopt and implement new, more objective definitions of prohibited content categories into their content moderation practices, and (2) social media companies must train their AI to proficiently apply these definitions when screening content. This Note concludes that these new criteria will allow social media companies to serve as more effective Good Samaritans by requiring them to take a more proactive approach to preventing user harm. Social media companies will then be able to appropriately balance maintaining freedom of speech and upholding user safety on their websites.