Social Media Websites’ Liability for Algorithms – After All, It Is An Editorial Choice

Abstract

Congress now faces new issues surrounding internet safety due to the increased sophistication of social media algorithms and the growing number of children using social media. The First Amendment and Section 230 of the Communications Decency Act of 1996 are the key sources of immunity for a website’s conduct on the internet. However, Section 230 has been interpreted to provide broad immunity, giving social media platforms a “get-out-of-jail-free card.” Congress has an interest in legislating on issues surrounding Section 230 and in ensuring that the internet is safe for its users. While numerous legislative efforts attempt to limit Section 230 immunity, they are misguided approaches to improving internet safety.

This Article looks at the recent holding in Moody v. NetChoice, which recognized that social media platforms’ algorithmic functions are editorial judgments protected by the First Amendment. It also looks at Anderson v. TikTok, where the court applied the Moody rationale and reasoned that because TikTok’s algorithm reflected editorial judgment, it was TikTok’s own expressive activity and therefore not protected by Section 230. Together, these cases highlight the central problem: the algorithmic curation of content is expressive conduct that implicates the First Amendment but also disqualifies platforms from Section 230 immunity. Instead of leaving this issue to the courts, Congress should clearly distinguish between first-party content and third-party content, ensuring that each is governed by the appropriate legal safeguard. An algorithm that provides a personalized viewing experience should be treated as first-party content not protected under Section 230, and the platform may therefore be subject to liability for the furtherance of unlawful conduct on its website.


INTRODUCTION

I. BACKGROUND

A. Explanation of Algorithms

B. Background and History of § 230 of the CDA

C. First Amendment Rights on the Internet 

II. ANALYSIS

A. The Current Landscape of the First Amendment and § 230 of the CDA

1. Moody v. NetChoice case:

2. Anderson v. TikTok case:

B. Can Social Media Websites Be Liable for Algorithms?

1. Case Law of § 230 and Algorithms

2. First-Party Content versus Third-Party Content

III. RECOMMENDATION: WHEN CONTENT CURATION BECOMES CREATION

CONCLUSION


Introduction

Social media platforms once simply hosted content. Now, as algorithms grow increasingly sophisticated, they dictate what we see, when we see it, and even how we feel about it. In 2024, two court decisions addressed the First Amendment and Section 230 of the Communications Decency Act of 1996 (CDA) (hereinafter § 230).[1] Both cases examined the relationship between § 230, algorithms, and the First Amendment.[2] Some social media platforms, such as TikTok, will display content regardless of whether a user follows that content creator.[3] TikTok has one billion active users, and 27.45% of its monthly active users are under 18.[4] TikTok learns what a user prefers and shows more videos tailored to those preferences.[5] The TikTok algorithm has four main goals: “user value, long-term user value, creator value, and platform value,” achieved by obtaining user information.[6] Typically, applications with end-to-end encryption do not give content moderators information about what their users send to other users or what users upload but do not share.[7] TikTok, however, does not have end-to-end encryption, which gives its content moderators significantly more information about its users than moderators of encrypted applications have.[8]

This Article argues that the growing conflict between the First Amendment and § 230 turns on how we understand the content that platforms deliver to their users.[9] Sometimes, platforms act as neutral hosts of user-generated content; other times, algorithms push them closer to being speakers of that content.[10] Section 230 protects platforms only when they are hosting third-party content[11] because the website does not, in whole or in part, create or develop the content.[12] But when the conduct crosses the line into the creation or development of content, the immunity weakens.[13] Still, social media platforms “have a First Amendment right to curate the third-party speech they select for and recommend to its users.”[14]

Part I of this Article provides background information on algorithms, § 230, and First Amendment protections on the Internet. Part II explains how courts have historically ruled on algorithms and what is necessary to bar § 230 immunity. It also argues that the increasing sophistication of algorithms transforms curated content into first-party content, which is ineligible for § 230 immunity. Part III assesses recommendations on how to modify § 230 and how the government can clarify the statute to keep pace with technological advancement. As algorithms become more sophisticated, social media websites will lose or weaken their § 230 protections and must rely on the First Amendment. When social media platforms curate content using increasingly sophisticated algorithms, they engage in expressive editorial choices. That conduct should be treated as transforming neutral third-party content into first-party content that § 230 does not protect. This moment, when content curation becomes creation, marks the legal shift that Congress and courts must recognize.


I. Background

This Part provides the legal and technological foundation for understanding how algorithms intersect with § 230 and the First Amendment. It explains how different types of algorithms shape the user experience; outlines § 230’s history, scope, and exceptions; and situates these questions within the First Amendment’s protections for editorial judgment.


A. Explanation of Algorithms

Algorithms are automated problem-solving mechanisms that produce expressive outputs and perform mechanical tasks.[15] Commentators commonly distinguish two types of algorithms: content moderation algorithms and content navigation algorithms.[16] Content moderation algorithms determine what type of content is allowed to exist on the website.[17] Content navigation algorithms are responsible for a user’s interaction with the website.[18] Content navigation algorithms have two subcategories: algorithms that determine what is trending, and personalization algorithms.[19] Algorithms that determine trending content look at content that the public, at large, engages with.[20] Personalization algorithms use information about the content an individual engaged with to create an individually curated digital experience.[21] They can generate social benefits, such as connecting marginalized groups or reducing misinformation.[22] That said, public opinion on using algorithms to reduce the spread of misinformation is divided, with thirty-eight percent of Americans viewing it as beneficial and thirty-one percent viewing it negatively.[23] A third category, known as automated editorial acts, covers tools that “slightly alter or add to the complained-of-third-party content” and act as “neutral conduits for third-party information.”[24]
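
The difference between a trending algorithm and a personalization algorithm can be illustrated with a brief sketch. The example below is a hypothetical simplification written for this Article; the function names, data fields, and scoring logic are illustrative assumptions, not any platform’s actual code.

```python
from collections import Counter

def trending_feed(posts, limit=10):
    """Neutral navigation: rank posts solely by aggregate engagement
    across all users, with no individual tailoring."""
    return sorted(posts, key=lambda p: p["total_engagements"], reverse=True)[:limit]

def personalized_feed(posts, user_history, limit=10):
    """Personalization: rank posts by how closely their topics match
    the topics this particular user has engaged with before."""
    topic_weights = Counter(item["topic"] for item in user_history)
    return sorted(posts, key=lambda p: topic_weights.get(p["topic"], 0), reverse=True)[:limit]
```

The first function treats every user identically and simply reflects what the public at large engages with; the second builds an individually curated experience out of a single user’s own data, which is the feature this Article later argues pushes a platform from hosting toward creating content.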

The TikTok algorithm relies on how long an individual spends watching a video and then recommends similar videos to keep individuals on its app, even when the videos are harmful.[25] For example, when an individual watches a “sad” video, the algorithm will continue to push that user toward sad content,[26] at times recommending content that promotes suicide or self-harm.[27] Its goal is to maximize daily active users by analyzing both user retention and time spent.[28] Its algorithm has been described as trying “to get people addicted rather than giving them what they want.”[29] However, TikTok’s algorithm also helps smaller content creators find a larger audience, and changing how the algorithm functions would eliminate part of what makes TikTok unique.[30] That said, TikTok has faced legal challenges,[31] such as allegations in Utah that TikTok contributes to the sexual exploitation of children.[32] Algorithms like TikTok’s do not just host content; they push, prioritize, and sometimes distort it, raising the question of whether platforms are truly neutral spaces.
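
A minimal sketch can make this retention-driven design concrete. The code below is a hypothetical illustration assuming a simple topic-affinity model; the function names, weights, and data structures are this Article’s assumptions rather than TikTok’s actual system.

```python
def update_topic_affinity(affinity, topic, watch_seconds, video_length):
    """Increase a user's affinity for a topic in proportion to how much of
    the video was watched; nothing here asks whether the topic is harmful."""
    completion = min(watch_seconds / max(video_length, 1), 1.0)
    affinity[topic] = affinity.get(topic, 0.0) + completion
    return affinity

def next_videos(candidates, affinity, k=5):
    """Recommend the k candidate videos whose topics the user has watched
    the longest, maximizing expected time spent on the app."""
    return sorted(candidates, key=lambda v: affinity.get(v["topic"], 0.0), reverse=True)[:k]
```

Under a scoring rule like this, a user who lingers on “sad” videos will keep receiving them, because the only signal the system optimizes is continued watching.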


B. Background and History of § 230 of the CDA

In 1996, the Communications Act of 1934 was expanded to include § 230 as part of the CDA.[33] The provision granted immunity to interactive computer services,[34] was intended to regulate online obscenity and indecency,[35] and aimed to protect children on the internet.[36] Two provisions of § 230 provide immunity to interactive computer services: § 230(c)(1) and § 230(c)(2).[37] Section 230(c)(1) “specifies that interactive computer services, providers and users may not ‘be treated as the publisher or speaker of any information provided by another information content provider.’”[38] It provides immunity to interactive computer services on the Internet[39] “acting as a ‘publisher or speaker’ of another’s content.”[40] Section 230(c)(2) explains that interactive computer services “may not ‘be held liable’ for voluntary, ‘good faith’ actions ‘to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.’”[41] It is intended to restrict specific types of objectionable content[42] and came in response to the “increasing availability of pornographic materials.”[43] Beyond that objectionable content, “it was not the legislative intent of § 230 to give companies a tool to silence speech.”[44] Section 230 has also been interpreted not to provide immunity to information content providers.[45]

Section 230 distinguishes between interactive computer services,[46] which act as neutral hosts, and information content providers,[47] which are responsible, in whole or in part, for creating or developing content.[48] A third category, access software provider, covers tools that filter, screen, organize, and translate content.[49] This last definition is one of the biggest hurdles to holding social media algorithms liable for their content.[50] These statutory definitions form the basis for whether a platform can invoke § 230 immunity. However, § 230 does not extend to federal criminal law, intellectual property law, certain sex-trafficking claims, or other narrow categories of cases.[51]

Two cases from the 1990s led to the creation of § 230: Cubby v. CompuServe and Stratton Oakmont v. Prodigy Services Co.[52] Cubby v. CompuServe is an early case in which a website was sued for defamation based on statements made by a third party on its platform.[53] CompuServe was a “computer service company that provided its subscribers with access to [an] electronic library of news publications put together by independent third part[ies].”[54] One of the forums on CompuServe’s website was Rumorville USA (hereinafter Rumorville), “a daily newspaper that provide[d] reports about broadcast journalism and journalists.”[55] The plaintiffs, Cubby, Inc. and Robert Blanchard, developed a “computer database designed to publish and distribute electronically news and gossip.”[56] Rumorville published statements about the plaintiffs that the plaintiffs contended were defamatory.[57] CompuServe, however, argued that it was a distributor of Rumorville, not a publisher, and that as a distributor it could not be held liable for Rumorville’s statements because it had no reasonable way of knowing about them.[58] The court agreed with CompuServe, noting that “(1) CompuServe's contract gave it no editorial control over content; and (2) CompuServe was mostly a ‘passive conduit.’”[59]

This decision clashed with the 1995 case of Stratton Oakmont v. Prodigy Services Co. The plaintiffs contended that the defendant was a publisher of the content posted on the website.[60] The court grappled with whether Prodigy Services Co. (hereinafter “Prodigy”) had “editorial control over its computer bulletin boards to render it a publisher with the same responsibilities as a newspaper.”[61] This decision differed from the Cubby case because Prodigy claimed to control the content of the computer bulletin boards.[62] Because Prodigy exercised greater editorial control, the court held it liable for the content on its website.[63]

These inconsistent rulings pushed Congress to enact § 230 in 1996, granting broad immunity that “immunize[s] third-party publishers from liability for user content.”[64] The goals of § 230 were to “promote unfettered speech on the internet” and to block offensive and obscene materials.[65] The First Amendment reflects “a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open.”[66] Section 230 was not part of the CDA initially but was later added through a proposed House bill.[67] At the time of its enactment, the legislative intent of § 230 was to promote the growth of the internet and encourage a competitive free market without government interference.[68] The statute has been amended twice: once in 1998, to require interactive computer services to inform customers about parental controls, and again in 2018, when Congress passed the “Fight Online Sex Trafficking Act” (hereinafter FOSTA).[69] FOSTA “create[s] liability for [interactive computer services] if any third-party content on their websites ‘unlawfully promote[s] or facilitate[s] prostitution’ and also imposes liability on ‘websites that facilitate traffickers in advertising the sale of unlawful sex acts with sex trafficking victims.’”[70] A Harvard Law School professor has noted that “Congress believed that it needed to alter the common law, even more than it had been modified by the First Amendment, to give Internet intermediaries the chance to make their business models work.”[71] Congress provided safe harbors so that the internet could freely grow.[72] The core congressional intent of § 230 was to promote free speech on the internet.[73]

However, although § 230 has been in force since 1996, the Supreme Court has never interpreted the statute.[74] That said, lower court decisions, such as Zeran v. America Online, Inc.[75] and Fair Housing Council v. Roommates.com, LLC,[76] have helped interpret the application of § 230. The Zeran decision has been described as one in which “the court effectively disincentivized the self-censorship that Congress intended when it passed § 230 and overturned Stratton-Oakmont.”[77] The court noted that because there was a large number of posts, internet service providers could not reasonably screen every single post made on their websites.[78] If internet websites were expected to screen and restrict speech, it would hinder free speech on the internet.[79] The Zeran court looked at the congressional intent of § 230[80] and noted that § 230(c)(1) bars “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.”[81] This clarified what qualifies as an editorial function. In Fair Housing Council v. Roommates.com, LLC, the Ninth Circuit extended § 230 immunity only to the content Roommates.com did not help create or develop, in line with the congressional intent “to preserve the free-flowing nature of Internet speech and commerce without unduly prejudicing the enforcement of other important state and federal laws.”[82]

Several circuit courts have attempted to clarify when § 230 immunity applies. The Ninth Circuit asks whether each claim: “(1) is against an interactive computer service provider, (2) treats that provider like a publisher, and (3) seeks to hold the provider liable for content developed by a third party such as a user.”[83] If the answer to each of these questions is in the affirmative, then § 230 prohibits the claim.[84] The Second Circuit takes a more general approach to determining when § 230 immunity is granted, asking “if the lawsuit is in any way related to harmful third-party content.”[85] This reading of § 230 provides broad immunity because nearly everything on the internet is related to third-party content.[86] Section 230 thus created “broad immunity that allows the early dismissal of many legal claims against interactive computer service providers, preempting lawsuits and statutes that would impose liability based on third-party content.”[87]
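
The conjunctive structure of the Ninth Circuit’s inquiry can be rendered schematically. The sketch below is only an illustrative simplification prepared for this Article, not a statement of the law; the parameter names are assumptions.

```python
def section_230_bars_claim(is_interactive_computer_service: bool,
                           treated_as_publisher: bool,
                           liability_rests_on_third_party_content: bool) -> bool:
    """Schematic rendering of the Ninth Circuit's three-prong inquiry:
    the claim is barred only if every prong is satisfied."""
    return (is_interactive_computer_service
            and treated_as_publisher
            and liability_rests_on_third_party_content)

# Example: a claim aimed at the platform's own conduct rather than content
# developed by a third party fails the third prong, so immunity does not attach.
print(section_230_bars_claim(True, True, False))  # False
```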


C. First Amendment Rights on the Internet 

The First Amendment of the United States Constitution protects free speech, stating that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”[88] The First Amendment does not allow “state actors [to] restrict the speech of private properties, including social media platforms.”[89] The First Amendment is committed to providing an uninhibited, robust, and open environment to share opinions on public issues[90] by ensuring private parties’ right to exercise editorial judgments and editorial functions.[91] The Supreme Court has repeatedly reiterated this right.[92] Editorial judgments are a type of commercial speech that allows publishers to exercise discretion in the editorial process.[93] Editorial judgments, as identified in Zeran, receive First Amendment protections[94] because editorial functions are considered commercial speech protected by the First Amendment.[95] Privately owned social media companies are not state actors, so they are not required to provide full First Amendment protections to the users of their websites.[96] Therefore, social media companies can restrict speech on their platforms.[97] Content on social media is generally altered or removed when it constitutes hate speech, and because social media companies are privately owned, a private online platform that alters or removes hate speech does not violate the First Amendment.[98] In the past, commercial speech did not receive as much protection as political speech; now, however, commercial speech receives greater First Amendment protection.[99]

Strict scrutiny applies in determining whether a statute that regulates the content of speech on the internet violates the Constitution.[100] Content-based laws that alter or regulate a speaker’s speech based on its topic, idea, or message are likewise subject to strict scrutiny.[101] While the government generally cannot restrict the speech of privately owned companies, the time, place, and manner doctrine helps determine when the government may place reasonable limits on when, where, and how speech occurs.[102] A time, place, or manner restriction survives intermediate scrutiny when it is “narrowly tailored, serves a significant government interest unrelated to the content of the speech and leaves open adequate channels for communication.”[103] Criminals use the internet to display and host unlawful content; nonetheless, courts interpret § 230 to provide broad immunity to internet platforms.[104] This interpretation has given platforms little incentive to “address illicit activity on their services” and has, “at the same time, left them free to moderate lawful content without transparency or accountability.”[105] Platforms want the best of both worlds: to claim the First Amendment’s editorial rights while invoking § 230’s immunity when those editorial choices cause harm. A platform cannot be a neutral host and an expressive editor at the same time.


II. Analysis

Having set out the technological background, statutory framework, and constitutional principles, this Part turns to how courts are applying them to algorithmic curation in practice. It examines how courts address the intersection of algorithms, § 230, and the First Amendment, first by considering the current legal landscape through the lenses of recent cases. These cases help redefine how algorithmic feeds are understood as expressive or potentially harmful conduct. This Part then turns to the question of liability, looking closely at when an online platform’s algorithmic functions can lose § 230 immunity. These discussions highlight the growing tension between protecting online speech and holding algorithms accountable for the harms their systems cause.


A. The Current Landscape of the First Amendment and § 230 of the CDA

This Section explores how courts are reshaping the relationship between the First Amendment and § 230 in the age of sophisticated algorithmic feeds. Moody v. NetChoice marked a turning point by recognizing that algorithmic curation is expressive conduct entitled to constitutional protections. Anderson v. TikTok then built on that reasoning, asking whether algorithmic recommendations that cause harm can still fall within § 230 immunity. Taken together, these cases show how courts are shifting from treating algorithms as neutral tools to viewing them as active expressions of editorial judgment.


1. Moody v. NetChoice case:

In Moody v. NetChoice, the Supreme Court considered whether two state social media content moderation laws, one from Texas and the other from Florida, violated social media websites’ First Amendment rights.[106] The Court held that the state laws triggered the First Amendment because they interfered with private entities’ editorial discretion.[107] The Court made three general points in reaching this conclusion. First, a desire to offer multiple viewpoints is not sufficient to force a private entity to carry speech it disagrees with.[108] Second, social media platforms can “exclude the handful of messages it most disfavor[s].”[109] Third, the government cannot force a private entity to present views contrary to its own message.[110] Platforms exercise their editorial judgment by combining different posts to curate their feeds.[111] The majority opinion compared social media feeds to newspapers because a newspaper company makes editorial decisions about the type of content that appears in its paper.[112] The government cannot force a newspaper to publish certain types of content because doing so would violate the newspaper’s First Amendment rights by “interfer[ing] with the newspaper’s exercise of editorial control or judgment.”[113] The Court’s holding recognized algorithmic feeds as expressive editorial judgments entitled to the same protection.[114] While the Court did not directly decide whether curation crosses the line into creation, this Article argues that highly personalized algorithmic feeds push platforms outside the scope of § 230 immunity.

The concurrences highlight different perspectives that bear on algorithmic functions. Justice Alito’s concurrence raises alternative theories for handling the case. He notes that the legal system “subject[s] certain businesses, known as common carriers, to special regulations, including a general requirement to serve all comers”[115] and suggests that the common carrier doctrine could also be applied in cases such as these.[116] Treating social media platforms as common carriers would “impose non-discrimination policies, preventing them from censoring the speech of their users.”[117] Another concurrence, from Justice Thomas, joined by Justices Alito and Gorsuch, points out that social media platforms have become a resource for news[118] and that social media could be seen as the “modern public square.”[119] Despite the benefits of social media, the Justices also note that social media has led to increases in depression, isolation, bullying, and pressure to endorse what is trending online.[120] They also observe that the vast majority of social media feeds and content curation are produced by algorithms, not manually by people.[121] Artificial intelligence algorithms and the Community Standards and Community Guidelines decide what “speech, including what viewpoints, are not worthy of promotion.”[122] The concurrences raise a number of concerns about algorithms and social media platforms that are addressed throughout this Article.


2. Anderson v. TikTok case:

While Moody established that algorithmic feeds reflect a platform’s editorial judgment, Anderson v. TikTok takes it a step further and asks what happens when those same editorial decisions cause harm.[123] TikTok’s algorithm recommends videos posted by third parties[124] and creates a custom feed for each of its users through a personalization algorithm.[125] The algorithm in this case recommended the “Blackout Challenge”[126] to ten-year-old Nylah Anderson, who later died attempting it.[127] Her parents sued TikTok for products liability and negligence.[128] The Third Circuit applied Moody’s holding, reasoning that the act of ordering third-party speech into a personalized feed is the platform’s editorial judgment and thus expressive activity protected by the First Amendment. Because TikTok made those editorial choices, the court treated its algorithmically curated feed as TikTok’s own expressive conduct, which is not protected by § 230.[129] Section 230 gives a platform immunity only when liability arises from third-party content that the platform passively hosts.[130] Accordingly, the court held that TikTok’s recommendation of the “Blackout Challenge” was not protected under § 230.[131]

Judge Matey’s concurrence points to some of the current drawbacks of § 230.[132] Social media platforms are minimally liable for third-party content “no matter the cause of action and whatever the provider’s actions.”[133] Judge Matey points out how far this strays from the congressional intent behind § 230.[134] Many of the CDA’s provisions were meant to protect minors from offensive material on the internet.[135] Instead, the statute has been read to create a “lawless no-man’s-land of legal liability” on the internet.[136] A statute must be read in its entirety and against the legal backdrop in place when it was enacted.[137] Judge Matey stated that “when § 230(c)(1) prohibits treating TikTok as the publisher of videos posted by third parties, that means TikTok cannot be liable for the mere act of hosting these videos.”[138] That said, “§ 230(c)(1) . . . allows suits to proceed if the allegedly wrongful conduct is not based on the mere hosting of third-party content, but on the acts or omissions of the provider of the interactive computer service.”[139] In other words, § 230(c)(1) does not reach a website’s own conduct beyond hosting third-party content.[140] Judge Matey encouraged looking to the congressional intent at the time the statute was enacted: § 230 was designed to protect minors, not to put them at risk.

The Third Circuit could have reached the same holding by applying a different theory.[141] The Ninth Circuit uses a three-prong test to determine when § 230 applies to a lawsuit: (1) is the claim “against an interactive computer service provider,” (2) does the claim “treat[] the provider like a publisher, and” (3) does the claim “seek[] to hold the provider liable for content developed by a third party.”[142] Only if each question is answered in the affirmative does § 230 immunity bar the lawsuit.[143] The third prong is significant here because it was TikTok’s own conduct, through its algorithm, that continually pushed the Blackout Challenge to Anderson, not merely content developed by a third party.[144] This does not mean that TikTok will automatically be held liable, but rather that it will not receive § 230 immunity.[145]


B. Can Social Media Websites Be Liable for Algorithms?

This Section looks at how online platforms can actually be held responsible for the consequences of their algorithms. It begins with the case law interpreting § 230, showing how courts distinguish between neutral algorithms and conduct that materially contributes to unlawful content. It then turns to the key divide between first-party and third-party content, which often determines whether an online platform receives § 230 immunity. By tracing these categories, this Section highlights how personalization and targeted curation can push algorithms beyond neutrality.


1. Case Law of § 230 and Algorithms

Section 230 of the CDA has never been interpreted by the Supreme Court,[146] so it is important to look to lower courts’ interpretations to gain a better understanding of how the statute applies to algorithms. Because technology is developing rapidly, it is also critical to look at recent holdings to see how courts treat current algorithmic technology.

Platforms that arrange and display third-party content in a neutral manner cannot be said to be developers of that content and are therefore eligible for § 230 immunity.[147] In the 2019 case of Force v. Facebook, Inc., defendant Facebook’s “newsfeed,” which is controlled by an algorithm, looked to its users’ prior behavior to determine what content the users would likely engage with.[148] The plaintiffs contended that, despite the fact that it violated Facebook’s terms and policies, foreign terrorist organizations were able to continually maintain and update pages associated with their organizations.[149] The defendants sought immunity under § 230.[150] Circuits have applied § 230(c)(1) broadly in favor of immunity,[151] resulting in a broad understanding of when a website or platform is treated as the publisher of third-party content.[152] The Second Circuit noted that arranging and distributing third-party content, whether online or in print media, is an “essential result of publishing”[153] and reflects the editorial decisions of the platform.[154] Courts have recognized that a defendant would be considered a developer of third-party content if the defendant were to “materially contribute[] to what made the content itself unlawful.”[155] If Facebook were, even in part, a creator or developer of the content on its website, it would be an “information content provider” not protected under § 230.[156] However, Facebook does not edit the content its users post, nor does it “acquire certain information from users [that] render it a developer” under § 230.[157] Instead, “Facebook acts as a neutral intermediary” that arranges and displays content through algorithms, which is not sufficient to make it a developer or creator of the content.[158] Because the Facebook algorithm curated content in a general rather than individualized way, Facebook remained protected under § 230.[159]

Section 230 immunity does not apply if an interactive computer service is wholly or partially responsible for the creation or development of unlawful content.[160] In Doe v. MindGeek USA Inc., the plaintiff alleged that the defendants knowingly posted, enabled the posting of, and profited from pornographic videos of individuals under the age of 18.[161] The defendants instructed third parties uploading content to use titles such as “less than 18, the best collection of young boys, and under-age” to entice users to watch the videos.[162] Furthermore, the defendants’ website moderators were aware that videos depicting underage individuals were approved for viewing on the platform.[163] Additionally, the defendants did not try to verify the age of the individuals in their videos.[164] The sixteen-year-old plaintiff alleged that the defendants profited from child pornography depicting her after her ex-boyfriend filmed and uploaded a video of them engaging in sexual intercourse.[165] These videos were approved for posting without the plaintiff’s knowledge or consent and without verification of her age.[166] Advertisements were placed next to the videos, and the defendants financially benefited from those advertisements.[167] If an interactive computer service aids wholly or partially in the creation or development of unlawful content on the platform, it cannot escape liability by invoking § 230 immunity.[168] Courts have noted the difficulty of determining what constitutes creation or development of content on the internet,[169] so they also employ a test[170] that looks to whether the platform is responsible for what makes the displayed content unlawful.[171] Because the defendants supported the display of child pornography through the titles they encouraged[172] and profited from it, they could not escape liability for the unlawful child pornography on their website.[173]

“[A]n interactive computer service does not create or develop content by merely providing the public with access to its platform.”[174] In Gonzalez v. Google LLC, the plaintiffs alleged that YouTube was aware of terrorism content on its platform and became “an essential and integral part of ISIS’s program of terrorism and that ISIS use[d] YouTube to recruit members, plan terrorist attacks, issue terrorist threats, instil fear, and intimidate civilian populations.”[175] The plaintiffs claimed that Google knew of ISIS’s presence on its platform and did not make sufficient efforts to keep ISIS from using it.[176] The plaintiffs argued that § 230 did not apply because its immunities do not reach the Anti-Terrorism Act (ATA) under § 230(e)(1), which states that “[n]othing in this section shall be construed to impair the enforcement of . . . any . . . Federal criminal statute.”[177] The Supreme Court “vacated and remanded [this case] and its application of Section 230” because the facts were equivalent to those in Twitter, Inc. v. Taamneh, where the Court “declined to address the application of Section 230.”[178] The lower court had held that Google did not create or develop the content from ISIS[179] because it used content-neutral algorithms,[180] and as such, Google could not be held liable for content posted on its platform.[181] The plaintiffs also made revenue-sharing allegations that did not arise from publishing third-party content.[182] The court noted that the defendant could be liable for revenue sharing because giving ISIS money supported the organization in a way unrelated to YouTube’s algorithm.[183] If a platform does not show preferential or different treatment of a specific type of content, then that platform cannot be held liable for the third-party content.[184]

While part of an internet platform’s conduct may be protected under § 230, another part may not receive the same immunity.[185] In Liapes v. Facebook, the court held that an interactive computer service provider has § 230 immunity only if it is not also an information content provider.[186] If the platform passively displays content in a neutral manner, it is eligible for § 230 immunity.[187] When determining whether an internet platform has § 230 immunity, courts look at whether the website, in whole or in part, aids in the creation or development of the content.[188] In this case, because Facebook’s ad-delivery algorithm used its users’ personal information to determine which advertisements they could view, Facebook was responsible for the development of that content.[189] As such, Facebook could not claim § 230 immunity for its ad-delivery algorithm.[190]

When a platform uses a neutral algorithm to display third-party content, it is not involved in the development of the content and receives immunity under § 230.[191] In Wozniak v. YouTube, LLC, a cryptocurrency scam circulated on YouTube with text stating that anyone who sent bitcoin to a specific account would receive double that amount.[192] In reality, users who transferred the cryptocurrency received nothing, and the transaction was irreversible.[193] On some of the scam videos, YouTube had issued a verification badge.[194] The court noted that § 230 is designed to allow internet platforms to avoid liability when they publish third-party content and to let platforms exercise editorial functions “such as deciding whether to publish, withdraw, postpone or alter content.”[195] An interactive computer service is not liable for displaying content created by third parties unless it is also an information content provider, meaning it is responsible, in whole or in part, for the creation or development of the content.[196] The court explained that the legislative history and case law of § 230 show “that the [§ 230] immunity does not apply to claims that are either (1) content neutral, or (2) do not derive from the defendant's status or conduct as a publisher or speaker.”[197] Granting § 230 immunity thus turns on whether a website is a content provider that creates or develops the content on the website.[198] The court held that YouTube is generally entitled to § 230 immunity because it is a publisher of third-party content.[199] However, when YouTube placed its verification badge, it altered content, and that was enough to lose § 230 immunity.[200] Similarly, in Doe K.B. v. Backpage.Com, LLC, the court held that Meta did not create the private messages that led to unlawful conduct, so it could not be held liable for the unlawful activities tied to that content.[201]

The case law surrounding § 230 immunity demonstrates that platforms cannot be granted immunity when they act as providers of the content.[202] An information content provider is one that, in whole or in part, participates in the creation or development of content.[203] Courts apply the material contribution test to distinguish passive hosting from active content development: a platform that merely displays content neutrally is protected under § 230, whereas a platform that materially contributes to the unlawful conduct is not.[204] Taken together, these cases suggest that when platforms engage in a personalized display of content, courts could view them as stepping into the realm of content development, and at that moment platforms should lose § 230 immunity.[205] Courts nevertheless struggle to separate neutral hosting from development. The distinction is best understood by drawing the line between third-party content, which a platform merely hosts, and first-party content, which it creates or materially shapes.


2. First-Party Content versus Third-Party Content

Social media platforms argue that their content is protected by the First Amendment because content organization and curation are exercises of editorial judgment.[206] When a platform engages in this kind of expressive conduct, the result is first-party content.[207] By contrast, when a platform neutrally and passively displays content without material alteration or development, the content remains third-party content.[208] Both first-party and third-party content receive First Amendment protection.[209] The difference is that § 230 provides additional immunity only when the platform acts as a neutral host of third-party content.[210] Once a platform curates or develops content, it does not lose its First Amendment protection; it loses its ability to pair that protection with § 230 immunity. Section 230 provides social media platforms with “quicker and more certain dismissal of lawsuits”[211] by granting complete immunity for third-party content,[212] regardless of whether the speech is lawful.[213] The key distinction between first-party content and third-party content lies in the development of the content. If the interactive computer service, in whole or in part, creates or develops the content, it should be treated as first-party content; if the service merely hosts the content neutrally, it is third-party content.[214] Case law highlights that when websites use personal information to create an individualized curation of content, they engage in expressive conduct, which courts should treat as first-party content outside § 230’s protections.[215]

Website platforms claim First Amendment protections when they exercise editorial functions, such as the organization and curation of content.[216] Private parties, such as social media platforms, exercise constitutionally protected editorial discretion when they decide what speech may remain on their websites.[217] However, when website platforms are asked to answer for promoting unlawful content through their algorithms, they argue that they are not speakers of third-party content and therefore should receive § 230 immunity.[218] Courts tend to grant § 230 immunity for algorithms that contribute to unlawful activity if the unlawful activity flows from a neutral algorithm.[219] Courts look at two factors to determine when an algorithm is neutral: (1) “the quality of the data input” and (2) “the quality of the decisional outcomes produced from the raw data filtered by the algorithm.”[220] “[L]iability is extinguished even when an algorithm created by the company distributes content it knows or reasonably should have known is illegal.”[221] However, Congress did not intend § 230 to give website platforms a “get-out-of-jail-free card.”[222] When a website’s algorithms develop third-party content, § 230 immunity does not apply because the third-party content is transformed into first-party content.[223] Online platforms cannot be held liable for the passive or neutral display of their users’ content,[224] but if a lawsuit arises from a platform’s own conduct, that conduct is not protected under § 230.[225] When a website uses non-neutral tools, machine-learning algorithms, or its users’ personal information to create or develop its algorithms, the website loses § 230 immunity.[226]

When a website displays only what its users follow, or displays content neutrally through its algorithm, it is dealing in third-party content.[227] This differs from TikTok, where the algorithm’s curation of videos on the “For You Page” is arguably first-party content because it is TikTok’s expressive conduct and it materially alters the viewing experience of its users.[228] The algorithm does not post videos in a neutral way;[229] instead, it takes the personal viewing habits of its users to determine what content will be displayed on the “For You Page,” which effectively makes that content first-party content.[230] TikTok’s algorithm is not neutral, is designed to influence its users to stay on the application longer,[231] and is known for “being inside a user’s head” because it “generate[s] content for [its users], as opposed to other platforms where a user must follow other users to see content.”[232] This effectively eliminates the neutrality of TikTok’s “For You Page.”[233]

Furthermore, TikTok users do not have control over what they view because TikTok’s “Not Interested” button is ineffective.[234] The “Not Interested” button is supposed to result in fewer videos of that nature.[235] However, the TikTok algorithm is designed to increase retention time on the application,[236] and because TikTok’s content consists of videos, users need to view a video for a few seconds before deciding whether to swipe away from it.[237] Those few seconds increase the engagement time for that type of video, leading similar videos to keep showing up on the “For You Page.”[238] This differs from other platforms that depend on users actually following the content creator.[239] Therefore, TikTok, unlike other internet websites, may face a greater chance of liability for not moderating unlawful content on its platform. TikTok’s “For You Page” arguably crosses into the development of content and could be treated as first-party speech, stripping away its § 230 immunity. The difference between first-party and third-party content is key in determining whether a website or platform can invoke § 230 immunity. As other social media algorithms become more sophisticated and adopt practices similar to TikTok’s, those platforms may also lose the broad application of § 230 immunity.
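
The imbalance between implicit watch-time signals and the explicit “Not Interested” signal can be sketched in a few lines. The example below is a hypothetical illustration constructed for this Article; the weights and function names are assumptions, not TikTok’s actual parameters.

```python
def engagement_signal(watch_seconds, pressed_not_interested=False,
                      per_second_weight=1.0, not_interested_penalty=2.0):
    """Accumulate positive signal for every second watched; an explicit
    'Not Interested' press subtracts only a fixed, modest penalty."""
    signal = watch_seconds * per_second_weight
    if pressed_not_interested:
        signal -= not_interested_penalty
    return signal

# A user watches four seconds before pressing "Not Interested": the net
# signal (4.0 - 2.0 = 2.0) stays positive, so similar videos keep surfacing.
print(engagement_signal(4, pressed_not_interested=True))  # 2.0
```

Under assumptions like these, even a user’s explicit rejection is outweighed by the few seconds of viewing required to make that choice, which is why the button does little to change what the feed shows.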

Social media platforms may argue that, under a statutory analysis of the language of § 230, algorithmic functions receive immunity.[240] Statutory analysis begins with the text of the statute,[241] which should be read in its full context.[242] Section 230 immunity comes from the “Good Samaritan” provision, which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[243] The phrases “interactive computer service” and “information content provider” are statutorily defined,[244] so those definitions should be used in analyzing § 230.[245] An interactive computer service is defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.”[246] The statute also defines access software provider, and it is through that definition that social media platforms attempt to make a textualist argument that the algorithms used to recommend content are protected under § 230.[247] An access software provider is defined by § 230 as “a provider of software (including client or server software), or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”[248]

On its face, the definition of access software provider suggests that all algorithmic functions receive § 230 immunity; however, an information content provider does not receive § 230 immunity[249] because it is responsible for the “creation or development of information provided through the Internet or any other interactive computer service.”[250] Algorithms that make a platform an information content provider are not neutral, and their output is therefore first-party content. While the statutory definition of access software provider implies that all algorithms receive § 230 immunity, the immunity hinges on whether there is any creation or development of content, in whole or in part.[251] Taken in context, neutral algorithms that passively display content are protected under the statute. The personalized curation of social media feeds, however, moves beyond neutral hosting and transforms third-party content into first-party speech. When a platform merely organizes, reorganizes, picks, and chooses content, the content remains third-party content; when it materially alters or develops the content, the content becomes first-party content. When a platform’s algorithm builds a feed, it stops merely hosting content and starts speaking. TikTok’s “For You Page” exemplifies this because its content is not passively displayed; it is personalized, curated, and trained on each user’s data.[252] That makes it first-party content, and the statute should reflect that distinction.

Courts cannot resolve this doctrinal gap on their own. Lower courts remain split on how far § 230 immunity reaches, and judges are left stretching the language of a statute written for the internet of 1996 to cover technologies it never envisioned. Congress, not the courts, should define the point at which third-party content becomes first-party content.


III. Recommendation: When Content Curation Becomes Creation

Courts should recognize that a platform crosses the line from neutral hosting into expressive creation when its algorithm individually personalizes a feed by materially shaping the order, emphasis, or visibility of content based on user-specific data. This rule reflects what Moody and Anderson already imply: neutral, passive tools remain protected by § 230, but individualized curation transforms third-party speech into the platform’s own expressive product.[253] Once curation becomes creation, platforms cannot claim both § 230 immunity and First Amendment editorial discretion.

Section 230 was written in 1996, long before platforms could predict what users want to see before the users know it themselves. “The Department of Justice has concluded that the time is ripe to realign the scope of Section 230 with the realities of the modern internet. . . . Every year, more citizens—including young children—are relying on the internet for everyday activities, while online criminal activity continues to grow.”[254] The Department of Justice recognizes the importance of ensuring the internet is “both an open and safe space for our society” and points out that § 230’s immunity covers third-party content, while the First Amendment permits the removal of content.[255] Section 230 of the CDA has not been analyzed by the Supreme Court, so lower courts turn to one another to gain an understanding of the statute.[256] The realities of the modern internet mean that “[s]ome form of regulation of social media companies is imminent, whether it comes from Congress or from a Supreme Court decision that changes the current scope of Section 230.”[257]

A Facebook whistleblower explained that Facebook’s internal research covers topics such as human trafficking and teen mental health,[258] showing that Facebook is aware of some of the illegal activity that occurs on its platforms. Congress has shown that it is not opposed to amending § 230, as seen in the 2018 FOSTA amendment.[259] One suggestion, made by scholars, a state attorney general, and a Supreme Court Justice, is to regulate social media as a public utility.[260] While social media can sometimes be viewed as a modern-day public square,[261] regulating the internet as a public utility poses complications that would hinder internet growth and free speech online. It would also run against the original intent of § 230, which was to promote online speech.[262] Social media platforms are privately owned and operate for profit, whereas a physical public square is not intended to be used for profit.[263]

As the 2018 FOSTA amendment shows, Congress has not forgotten that one of § 230’s original goals was to protect minors from unlawful activity.[264] Other proposed amendments to § 230 have not passed, such as the Protecting Americans from Dangerous Algorithms Act, the Safe Tech Act, and the Federal Big Tech Tort Act.[265] The Protecting Americans from Dangerous Algorithms Act attempts “to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.”[266] The Safe Tech Act attempts to update § 230 to hold platforms accountable when they “enabl[e] cyber-stalking, targeted harassment, and discrimination on their platforms” by narrowing a platform’s shield from “any information” posted by a third party to “any speech” and by barring immunity for ads and other paid content.[267] However, the Safe Tech Act would not guarantee social media platform liability.[268] Congress also showed an interest in promoting the well-being of children with the Federal Big Tech Tort Act, under which “[a] social media company [is] . . . liable . . . to any individual who suffers bodily injury or harm to mental health that is attributable, in whole or in part, to the individual's use of a covered interactive computer service provided by the social media company when the individual was less than 16 years of age.”[269]

These proposals, while flawed in execution, show that Congress is willing to legislate around § 230. Congress does not need to narrow § 230; it needs to codify that when platforms curate personalized feeds, they are acting as speakers protected by the First Amendment, and that the same conduct cannot also claim § 230 immunity. To draw this line, Congress could amend § 230(c)(1) to make explicit that a platform loses immunity when it individually personalizes content based on user-specific data.[270] This would allow courts to preserve immunity for the neutral hosting of third-party content while treating individualized curation as first-party editorial production. When a platform’s algorithm shapes what a user sees, it is making an editorial choice and should therefore be responsible for the legal consequences.

That said, codification carries risks. Defining first-party content to include sophisticated algorithms would cause a major shift in how platforms display content. Currently, several social media platforms follow a model similar to TikTok’s “For You Page.”[271] If personalized feeds are treated as first-party content, companies would have to fall back on showing only content that users follow or seek out, along with generally trending content. For example, X (formerly known as Twitter) would remove its “For You” tab and rely on its “Following” and “Explore” tabs,[272] since generally trending content is not curated on an individual basis and remains third-party content. Limiting algorithmic personalization in this way could unintentionally narrow the information and content available to users.[273] At the same time, an overreliance on algorithmic processes reveals statutory gaps that § 230 was never designed to address, such as the ways a platform uses a user’s data to shape and steer content viewing.[274] Some scholars have argued that these gaps warrant treating platforms as “information fiduciaries” owing their users duties of care and loyalty.[275] Although the fiduciary model helps capture platforms’ unique access to user data, it remains too broad and imprecise to guide courts in concrete § 230 cases. It nonetheless underscores why Congress should clarify § 230 and differentiate between hosting and creating content.

When a platform exercises editorial choice, it is protected by the First Amendment, but that same conduct disqualifies it from § 230 immunity. A platform that builds a feed does not merely passively display content; it exercises editorial control, and that control can carry liability. These risks show what is at stake, but they do not lessen the need for Congress to clearly draw the line between hosting content and creating content.


Conclusion

Historically, case law has interpreted § 230 to provide broad immunity for content posted on the internet. However, after Moody, where the Supreme Court held that the algorithmic display of content is a platform’s editorial choice and therefore protected by the First Amendment,[276] courts can hold platforms liable when an algorithm furthers unlawful conduct. Anderson was the first case to apply the Moody rationale, holding that § 230 immunity does not apply to TikTok’s algorithm.[277] Together, these cases suggest that curated feeds should not be treated as neutral hosting; they are first-party speech. Platforms cannot claim both the editorial freedom of the First Amendment and the liability protection of § 230.

Instead of amending § 230 to reduce the protection it grants, Congress should draw a clear line between first-party content and third-party content. Algorithms that curate feeds on a user-specific basis produce first-party content that § 230 does not protect. Without a clear statutory framework that accounts for algorithmic development, courts will continue to struggle with assigning liability to social media platforms and algorithmically curated content, leaving § 230 dangerously overapplied. At the end of the day, content curation becomes creation. And that, after all, is an editorial choice.





[1] See Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024); Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024).


[2] Id.


[3] Eric N. Holmes, Cong. Rsch. Serv., R47753, Liability for Algorithmic Recommendations 6 (2023).


[4] Fabio Duarte, TikTok User Age, Gender, & Demographics, Exploding Topics (Nov. 27, 2024), https://explodingtopics.com/blog/tiktok-demographics; Backlinko Team, TikTok Statistics You Need to Know, Backlinko (July 1, 2024), https://backlinko.com/tiktok-users.


[5] Holmes, supra note 3, at 4.


[6] Ben Smith, How TikTok Reads Your Mind, N.Y. Times (Dec. 5, 2021), https://www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html.


[7] Id.


[8] Id.


[9] Holmes, supra note 3, at 5.


[10] Id.


[11] Id.


[12] Daniloff v. Google, LLC, No. 3:22-cv-01271-IM, 2023 U.S. Dist. LEXIS 15039, at *7 (D. Or. Jan. 30, 2023).


[13] Anderson v. TikTok, Inc., 116 F.4th 180, 183 (3d Cir. 2024).


[14] David Greene, Platforms Have First Amendment Right to Curate Speech, As We’ve Long Argued, Supreme Court Said, But Sends Laws Back to Lower Court to Decide If That Applies to Other Functions Like Messaging, Elec. Frontier Found. (July 13, 2024), https://www.eff.org/deeplinks/2024/07/platforms-have-first-amendment-right-curate-speech-weve-long-argued-supreme-1.


[15] Holmes, supra note 3, at 5.


[16] Haley Griffin, Laws in Conversation: What the First Amendment Can Teach Us About Section 230, 32 Fordham Intell. Prop. Media & Ent. L.J. 473, 499 (2022).


[17] Id.


[18] Id.


[19] Id.


[20] Id.


[21] Id.


[22] Danielle Draper & Sabine Neschke, The Pros and Cons of Social Media Algorithms, Bipartisan Policy Center 2–3 (Oct. 2023), https://bipartisanpolicy.org/download/?file=%2Fwp-content%2Fuploads%2F2023%2F10%2FBPC_Tech-Algorithm-Tradeoffs_R01.pdf.


[23] Lee Rainie, Cary Funk, Monica Anderson & Alec Tyson, Mixed Views About Social Media Companies Using Algorithms to Find False Information, Pew Rsch. Ctr. (Mar. 17, 2022), http://www.pewresearch.org/internet/2022/03/17/mixed-views-about-social-media-companies-using-algorithms-to-find-false-information/.


[24] Griffin, supra note 16, at 502–03.


[25] Id. at 503.


[26] Id.


[27] Id.


[28] Id.


[29] Id.


[30] Alexander Paykin, A Tik-Tok Ban? The First Amendment Implications Should Not Be Underestimated, N.Y. St. Bar Ass’n J. 38, 42 (Fall 2024).


[31] Id. at 43; see also NEWS RELEASE: Utah Division of Consumer Protections Announces Release of Previously Redacted Information in TikTok Inc. Complaint Filing, Utah Commerce (Jan. 3, 2025), https://blog.commerce.utah.gov/2025/01/03/news-release-utah-division-of-consumer-protections-announces-release-of-previously-redacted-information-in-tiktok-inc-complaint-filing/ (explaining that the Utah Division of Consumer Protection alleges that because the TikTok algorithm favors live feeds with high currency exchange, it inadvertently recommends content that encourages money laundering or sexual content).


[32] Paykin, supra note 30, at 43.


[33] Holmes, supra note 3, at 6.


[34] Id.


[35] Kira M. Geary, Section 230 of the Communications Decency Act, Product Liability, and A Proposal for Preventing Dating-App Harassment, 125 Penn St. L. Rev. 501, 505 (2021).


[36] Valerie C. Brannon & Eric N. Holmes, Cong. Rsch. Serv., R46751, Section 230: An Overview (2024).


[37] Holmes, supra note 3, at 6–7.


[38] Id. at 6 (citing 47 U.S.C. § 230(c)(1)).


[39] Brandon Salter & Dhillon Ramkhelawan, Section 230 Immunity: How the Trump Era Has Exposed the Current Conflict Between the First Amendment and the Good Samaritan Clause in the Modern Public Square, 43 U. Ark. Little Rock L. Rev. 239, 240 (2020).


[40] Holmes, supra note 3, at 7.


[41] Id. (citing 47 U.S.C. § 230(c)(2)).


[42] Id.


[43] Geary, supra note 35, at 505.


[44] Salter & Ramkhelawan, supra note 39, at 252.


[45] See Force v. Facebook, Inc., 934 F.3d 53, 66 (2d Cir. 2019).


[46] An interactive computer service is defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.” See 47 U.S.C. § 230(f)(2).


[47] An information content provider is defined as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” See 47 U.S.C. § 230(f)(3).


[48] Brannon & Holmes, supra note 36, at 6.


[49] An access software provider is defined as “a provider of software . . . or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.” See 47 U.S.C. § 230(f)(4).


[50] Ben Sperry, Between a TikTok and a Hard Place: Products Liability, Section 230, and the First Amendment, Truth on the Market (Sept. 12, 2024), https://truthonthemarket.com/2024/09/12/between-a-tiktok-and-a-hard-place-products-liability-section-230-and-the-first-amendment/.


[51] Cong. Rsch. Serv., Section 230: An Overview (2024), https://crsreports.congress.gov/product/pdf/IF/IF12584 [hereinafter CRS Report Section 230].


[52] Salter & Ramkhelawan, supra note 39, at 242-43.


[53] Id. at 243 (citing Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 139 (S.D.N.Y. 1991)).


[54] Cubby, Inc., 776 F. Supp. at 135.


[55] Id. at 137.


[56] Id. at 138.


[57] Id.


[58] Id. at 139.


[59] Salter & Ramkhelawan, supra note 39, at 244.


[60] Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *2 (N.Y. Sup. Ct. May 24, 1995).


[61] Id. at *3.


[62] Id. at *4.


[63] Id. at *5.


[64] Salter & Ramkhelawan, supra note 39, at 245.


[65] Geary, supra note 35, at 508.


[66] N.Y. Times Co. v. Sullivan, 376 U.S. 254, 270 (1964).


[67] Salter & Ramkhelawan, supra note 39, at 246.


[68] Id. (citing 47 U.S.C. § 230(b)(1)–(2) (1996)).


[69] Brannon & Holmes, supra note 36, at 4–5.


[70] Geary, supra note 35, at 511.


[71] Salter & Ramkhelawan, supra note 39, at 249 (citing Rebecca Tushnet, Power Without Responsibility: Intermediaries and the First Amendment, 76 Geo. Wash. L. Rev. 986, 1008 (2008) [hereinafter Tushnet]).


[72] Id. (citing Tushnet at 1007–09).


[73] Id. (citing Tushnet at 1010–11).


[74] Holmes, supra note 3, at 16.


[75] Salter & Ramkhelawan, supra note 39, at 247.


[76] Id. at 249.


[77] Id. at 247 (quoting Joseph G. Marano, Note, Caught in the Web: Enjoining Defamatory Speech that Appears on the Internet, 96 Hastings L.J. 1311, 1318 (2018)).


[78] Id. at 247–248.


[79] Id. at 248 (citing Zeran v. America Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997)).


[80] Id. at 249.


[81] Holmes, supra note 3, at 8.


[82] Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1175 (9th Cir. 2008).


[83] Tom McBrien, In Anderson v. TikTok, the Third Circuit Applies Questionable First Amendment Reasoning to Arrive at the Correct Section 230 Outcome, Elec. Priv. Info. Ctr. (Oct. 10, 2024), https://epic.org/in-anderson-v-tiktok-the-third-circuit-applies-questionable-first-amendment-reasoning-to-arrive-at-the-correct-section-230-outcome/.


[84] Id.


[85] Id.


[86] Id.


[87] Brannon & Holmes, supra note 36, at 11.


[88] U.S. Const. amend. I.


[89] Alex Chemerinsky & Erwin Chemerinsky, Misguided Federalism: State Regulation of the Internet and Social Media, 102 N.C. L. Rev. 1, 19 (2023).


[90] Salter & Ramkhelawan, supra note 39, at 252.


[91] Chemerinsky & Chemerinsky, supra note 89, at 19.  


[92] Id.


[93] Griffin, supra note 16, at 484.


[94] Id. (citations omitted).


[95] Id. at 483.


[96] Chemerinsky & Chemerinsky, supra note 89, at 21.


[97] Id.


[98] Id.


[99] Griffin, supra note 16, at 483–84.


[100] Chemerinsky & Chemerinsky, supra note 89, at 23.


[101] Id.


[102] Paykin, supra note 30, at 41.


[103] Id.


[104] Id. at 43.


[105] Id.


[106] Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2388 (2024).


[107] Id.


[108] Id. at 2401.


[109] Id.


[110] Id. at 2402.


[111] Id.


[112] Id.


[113] Id. (citations omitted).


[114] Id.


[115] Id. at 2411.


[116] Id.


[117] Edward W. McLaughlin, How to Regulate Online Platforms: Why Common Carrier Doctrine Is Inappropriate to Regulate Social Networks and Alternate Approaches to Protect Rights, 90 Fordham L. Rev. 185 (2021).


[118] Moody, 144 S. Ct. at 2422.


[119] Id.


[120] Id. at 2422–23.


[121] Id. at 2438.


[122] Id. at 2404 n.5.


[123] See Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024).


[124] Id. at 181.


[125] See Griffin, supra note 16, at 499.


[126] The “Blackout Challenge” is a trend, spread through TikTok videos, that encourages individuals to choke themselves until they pass out. See Anderson, 116 F.4th at 182.


[127] Id.


[128] Id.


[129] Id. at 184 (citations omitted).


[130] Id. at 183.


[131] Id. at 184.


[132] Id. at 185.


[133] Id. at 191.


[134] Id.


[135] Id. at 190.


[136] Id. at 191 (citation omitted) (quotation omitted).


[137] Id.


[138] Id. at 192.


[139] Id.


[140] Id.


[141] McBrien, supra note 83.


[142] Id.


[143] Id.


[144] Id.


[145] Id.


[146] Holmes, supra note 3, at 16.


[147] See Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019).


[148] Id. at 57–58.


[149] Id. at 59.


[150] Id. at 57.


[151] Id. at 64.


[152] Id. at 65.


[153] Id. at 66.


[154] Id.


[155] Id. (citations omitted) (quotations omitted).


[156] Id.


[157] Id. at 70.


[158] Id.


[159] Id. at 68.


[160] See Doe v. MindGeek USA Inc., 574 F. Supp. 3d 760, 767 (C.D. Cal. 2021).


[161] Id. at 763.


[162] Id. at 774 (quotations omitted).


[163] Id.


[164] Id. at 765.


[165] Id.


[166] Id.


[167] Id.


[168] Id. at 767.


[169] Id.


[170] Id. at 768.


[171] Id.


[172] Id. at 770.


[173] Id. at 771–72.


[174] Gonzalez v. Google LLC, 2 F.4th 871, 893 (9th Cir. 2021).


[175] Id. at 881 (quotations omitted).


[176] Id. at 882.


[177] Id. at 890 (citations omitted).


[178] Julianne Gabor, The TikTok Algorithm Is Good, But Is It Too Good? Exploring the Responsibility of Artificial Intelligence Systems Reinforcing Harmful Ideas on Users, 32 Cath. U. J. L. & Tech 109, 122–123 (2023).


[179] Gonzalez, 2 F.4th at 893.


[180] Id. at 896.


[181] Id.


[182] Id. at 898.


[183] Id.


[184] Id. at 898.


[185] See Liapes v. Facebook, Inc., 95 Cal. App. 5th 910, 926 (2023).


[186] Id. at 928.


[187] Id.


[188] Id.


[189] Id. at 928–29.


[190] Id. at 931.


[191] See Wozniak v. YouTube, LLC, 100 Cal. App. 5th 893, 917 (2024).


[192] Id. at 901.


[193] Id.


[194] Id.


[195] Id. at 908 (citations omitted) (quotations omitted).


[196] Id.


[197] Id. at 911.


[198] Id. at 923.


[199] Id. at 899.


[200] Id.


[201] See Doe K.B. v. Backpage.Com, LLC, No. 23-cv-02387-RFL, 2024 U.S. Dist. LEXIS 101530, at *7 (N.D. Cal. Mar. 20, 2024).


[202] See Force v. Facebook, Inc., 934 F.3d 53, 66 (2d Cir. 2019); Liapes v. Facebook, Inc., 95 Cal. App. 5th 910, 928 (2023); Wozniak v. YouTube, LLC, 100 Cal. App. 5th 893, 908 (2024).


[203] See 47 U.S.C. § 230(f)(3); Force, 934 F.3d at 66; Liapes, 95 Cal. App. 5th at 928; Wozniak, 100 Cal. App. 5th at 908.


[204] Force, 934 F.3d at 66; Liapes, 95 Cal. App. 5th at 928; Wozniak, 100 Cal. App. 5th at 908.


[205] See id.


[206] See Doe Through Roe v. Snap, Inc., 144 S. Ct. 2493, 2494 (2024).


[207] Greene, supra note 14.


[208] Liapes, 95 Cal. App. 5th at 928.


[209] Greene, supra note 14.


[210] CRS Report Section 230, supra note 51.


[211] Id.


[212] Daniloff v. Google, LLC, 2023 U.S. Dist. LEXIS 15039 (D. Or. Jan. 30, 2023).


[213] CRS Report Section 230, supra note 51.


[214] See Force, 934 F.3d at 66; Liapes, 95 Cal. App. 5th at 928; Wozniak, 100 Cal. App. 5th at 908.


[215] See Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024); Liapes 95 Cal. App. 5th at 926 (2023); Force, 934 F.3d at 70.


[216] Doe Through Roe v. Snap, Inc., 144 S. Ct. 2493, 2494 (2024).


[217] CRS Report Section 230, supra note 51.


[218] Doe Through Roe, 144 S. Ct. at 2494.


[219] CRS Report Section 230, supra note 51.


[220] Kevin Ofchus, Cracking the Shield: CDA Section 230, Algorithms, and Product Liability, 46 U. Ark. L. Rev. 27, 40-41 (2023).


[221] Id. at 38.


[222] See McBrien, supra note 83.


[223] Brannon & Holmes, supra note 36, at 22.


[224] Salter & Ramkhelawan, supra note 39, at 241.


[225] Geary, supra note 35, at 518.


[226] Ofchus, supra note 220, at 28; Liapes v. Facebook, Inc., 95 Cal. App. 5th 910, 926 (2023); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019).


[227] Liapes, 95 Cal. App. 5th at 926; Force, 934 F.3d at 70.


[228] See Anderson v. TikTok, Inc., 116 F.4th 180, 184 (3d Cir. 2024).


[229] Force, 934 F.3d at 70.


[230] Holmes, supra note 3, at 4.


[231] Stefan Milne, Q&A: How TikTok’s ‘Black Box’ Algorithm and Design Shape User Behavior, UW News (April 24, 2024), https://www.washington.edu/news/2024/04/24/tiktok-black-box-algorithm-and-design-user-behavior-recommendation/.


[232] Gabor, supra note 178, at 110.


[233] Id.


[234] Id.


[235] TikTok, Liking, https://support.tiktok.com/en/using-tiktok/exploring-videos/liking.


[236] Gabor, supra note 178, at 115.


[237] Id.


[238] Id.


[239] Liapes v. Facebook, Inc., 95 Cal. App. 5th 910, 926 (2023); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019).


[240] Ben Sperry, Section 230 & Gonzalez: Algorithmic Recommendations Are Immune, Truth on the Market (Feb. 1, 2023), https://truthonthemarket.com/2023/02/01/section-230-gonzalez-algorithmic-recommendations-are-immune/.


[241] Merit Mgmt. Grp., LP v. FTI Consulting, Inc., 583 U.S. 366, 378 (2018).


[242] Robinson v. Shell Oil Co., 519 U.S. 337, 341 (1997).


[243] 47 U.S.C. § 230.


[244] Id.


[245] See Perrin v. United States, 444 U.S. 37, 42 (1979).


[246] 47 U.S.C. § 230.


[247] Sperry, supra note 240.


[248] 47 U.S.C. § 230 (emphasis added).


[249] See Salter & Ramkhelawan, supra note 39, at 249 (citing Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1174 (9th Cir. 2008)).


[250] 47 U.S.C. § 230.


[251] Id.


[252] Gabor, supra note 178, at 110.


[253] See Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024); Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024).


[254] U.S. Dep’t of Just., Section 230 – Nurturing Innovation or Fostering Unaccountability? (June 2020), https://www.justice.gov/ag/file/1072971/dl?inline=.


[255] Id.


[256] Holmes, supra note 3, at 16.


[257] Amy B. Cyphert & Jena T. Martin, “A Change is Gonna Come:” Developing a Liability Framework for Social Media Algorithmic Amplification, 13 UC Irvine L. Rev. 155, 169 (2022). 


[258] Id.


[259] Id.


[260] Id. at 171.


[261] Mary A. Franks, Beyond the Public Square: Imagining Digital Democracy, 131 Yale L.J. Forum 427 (2021).


[262] Geary, supra note 35, at 508.


[263] Franks, supra note 261.


[264] Cyphert & Martin, supra note 257, at 172.


[265] Id. at 177.


[266] Id.


[267] Joan Stewart & Kyle Gutierrez, Three Ways the SAFE TECH Act Would Amend Section 230, Wiley (March 2021), https://www.wiley.law/newsletter-Mar-2021-PIF-Three-Ways-the-SAFE-TECH-Act-Would-Amend-Section-230.


[268] Cyphert & Martin, supra note 257, at 177–78.


[269] Id.


[270] Section 230(c)(1) presently reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1).


[271] See Andra, Understanding How the X (Twitter) Algorithm Works in 2024, Social Bee Blog (Aug. 22, 2024), https://socialbee.com/blog/twitter-algorithm.


[272] Id.


[273] See Draper & Neschke, supra note 22, at 2–3.


[274] See Jack M. Balkin, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev 1183, 1207–08 (2016).


[275] Id.


[276] See Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024).


[277] See Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024).

