Making Sense of the Future for Section 230
Social media platforms have evolved far beyond their traditional roles as passive conduits. In response, courts are grappling with the tension between free speech and content moderation, tucking the latter beneath the former’s protective umbrella only to gradually peel away its statutory shield. In July, the Supreme Court issued its much-anticipated opinion in Moody v. NetChoice, defining the First Amendment’s application to social media companies. And, just days ago, the Third Circuit in Anderson v. TikTok harnessed Moody to reconsider Section 230’s scope in light of that First Amendment holding. Together, these cases paint an incongruous picture of the protections social media platforms currently enjoy.
Section 230: Then and Now
Section 230 was enacted to shield platforms from publisher liability for user-generated content while allowing them to engage in good-faith content moderation. Initially aimed at protecting platforms from defamation claims arising from third-party posts, the statute now faces unprecedented challenges as evolving editorial policies and technologies enable platforms to exert socio-political influence, almost as if they possessed a voice of their own. The way platforms both remove and promote content attuned to user preferences reflects a particularized knowledge, and exploitation, of users’ online data trails. These moderation mechanisms and algorithms, by leveraging user preferences, can now relay content in ways that, deliberately or not, shape national discourse and influence user behavior. Although platforms justify their moderation decisions through terms-of-service provisions, such as the authority to remove hate speech, courts are increasingly focused on the tangible consequences of these actions, clarifying the boundaries of First Amendment and Section 230 protections in response to their real-world effects. In recent years, platforms have drawn criticism from all sides: complaints from the right of excessive censorship, and from the left of insufficient protection against harmful speech. Of the two, conservatives have been keener to seek redress through legislation curtailing the operational freedom of social media companies.
Moody Discussion
Although Section 230 surfaced in the Moody discourse because the statute presumptively preempts state law, the Supreme Court primarily focused on calling out the Texas and Florida legislatures for flunking rudimentary First Amendment jurisprudence. Both states had enacted laws targeting major social media companies, seeking to promote moderation transparency, limit interference with political candidates’ online presence, and enable enforcement through private rights of action. These laws were promptly enjoined. The Eleventh Circuit largely upheld the injunction against the Florida law, while the Fifth Circuit went completely sideways, vacating the injunction against the Texas law in a highly politicized, outcome-driven opinion.
Responding to the resultant circuit split, Justice Kagan’s majority opinion affirmed that platforms’ content moderation activities, such as selecting, filtering, ordering, and excluding content, constitute expressive speech protected by the First Amendment. The Court firmly rejected the Fifth Circuit’s view that content moderation is not speech, drawing parallels to traditional media precedents like Miami Herald Publishing Co. v. Tornillo and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston to cement the idea that editorial discretion is a form of protected expression, whether exercised in an old or new media context. The majority also acknowledged the states’ professed interest in balancing the marketplace of ideas, but maintained that stripping platforms of their editorial discretion would compel them to showcase even the most reprehensible content. Justice Barrett’s concurrence, meanwhile, grappled with the expanding influence of algorithms, raising concerns about whether speech generated through algorithmic processes deserves the same First Amendment protections as other content moderation mechanisms. While Barrett may not have explicitly anticipated Anderson, her focus on the complexities of litigating algorithmic curation foreshadows the issues at the heart of that subsequent decision.
Anderson Discussion
Anderson v. TikTok, building on the principles established in Moody, centers on the tragic death of a child who attempted the “Blackout Challenge” after TikTok’s algorithm recommended it on her For You Page (“FYP”). In what feels like a response to Barrett’s probing concurrence in Moody, the Third Circuit examined whether algorithms, presumed to be modes of protected expression, could strip platforms of their Section 230 shield when the curated content leads to fatal outcomes. Boiled down, Anderson affirms the Supreme Court’s position that content moderation is expressive speech protected by the First Amendment, but flips this protection into a liability for TikTok under Section 230. The court held that while Section 230 immunizes TikTok for third-party content (such as user-generated videos promoting the “Blackout Challenge”), the platform may be held responsible for its own speech (its algorithm’s suggestion of the “Blackout Challenge” video in the first instance). Under the Third Circuit’s interpretation, the mere act of transforming user data into personalized recommendations converts third-party speech immunized under Section 230 into first-party speech that is not, effectively collapsing the distinction between the two degrees of expression.
Judge Matey’s dissent takes an even more restrictive approach to Section 230, asserting that the statute protects only platforms that passively host content, not those that curate or recommend it. His stance echoes Section 230’s early use in shielding online message board providers from liability for their users’ posts, but diverges from decades of precedent placing editorial discretion within the statute’s protective bounds. In today’s age, where nearly all platforms depend on algorithmic crutches to spur user interest (and, by extension, retain user participation), Matey’s view would restrict Section 230’s protections to a limited subset of passive platforms akin to Google Drive. By pigeonholing Section 230’s applicability to such receptacle-like hosts, the dissent constrains the statute beyond its original intent and implicitly reopens the question of whether content curation constitutes expressive conduct, a matter Moody decisively addressed. Even under a charitable reading, Matey’s dissent construes Section 230 more as a limited defense than a broad protection.
While social media platforms may have breathed a sigh of relief after Moody, the recent decision in Anderson has cut that respite short. Whether tinkering with political candidates’ social media visibility or recommending harmful trends on users’ feeds, platforms’ expressive actions have consequences that reverberate across our digital society, molding national discourse and influencing user behavior in increasingly palpable ways. But when such communication is filtered through layers of technology and shaped by user input, can it still be considered the platform’s own speech? Moody says yes. And if Moody is correct, should platforms be held responsible for the consequences of their algorithms, even when those algorithms do not directly command users to think or act in certain ways? Anderson answers in the affirmative. If meeting the First Amendment’s protective threshold in this manner strips a platform of Section 230 immunity, the statute’s scope shrinks perilously close to irrelevance. As Moody returns to the circuit courts on remand, Anderson places decades of cross-circuit Section 230 jurisprudence in question, presumably setting the stage for Supreme Court review. With legislative initiatives already pressing to curtail Section 230, these cases amplify the growing urgency for courts to define the future of free speech and platform accountability in the digital age.
—
Composed on September 6, 2024.
Special thanks to Daniel Lifschitz, Esq. for his guidance in preparing this post.
Disclaimer: The views expressed herein are solely those of the author and do not necessarily reflect or represent the views, policies, or positions of any employer or affiliated organization.