SECTION 230 FOR DUMMIES
The Truth About Section 230 – How the Courts Broke the Internet – And How to Fix It
By: Jason Fyk
The Truth About Section 230
Section 230 was supposed to protect free speech online, but courts twisted it into something completely different. Originally, it was meant to let websites host user-generated content—like posts, comments, and videos—without being legally responsible for what others said. It was designed to promote a free and open internet where diverse voices could be heard. But over time, Big Tech turned it into a weapon, using it as a legal shield to silence dissent, eliminate competition, and even collaborate with the government to control what information people could see. Instead of protecting free speech, Section 230 became the tool that helped Big Tech take it away.
Rather than upholding free expression, courts misinterpreted Section 230 in a way that handed Big Tech unchecked immunity. These corporations were no longer just neutral platforms; they became gatekeepers of the digital public square. They could decide what speech was allowed, remove anyone who challenged their interests, and shut down competitors before they had a chance to grow. With no accountability, their influence expanded to the point where they were not just moderating content—they were shaping the flow of information itself.
This distortion of Section 230—whether unintentional or deliberate—allowed Big Tech to consolidate power on an unprecedented scale. They did not just censor speech; they crushed competition, became massive, untouchable corporations, and evolved into a direct threat to the free world. By controlling what people could say and see, they influenced elections, manipulated public discourse, and eroded the very foundation of our constitutional republic. Section 230 was meant to protect free speech—but in the wrong hands, it became the greatest tool for silencing it.
Textual Analysis:
How One Tiny Change Broke the Law
A small grammatical mistake can completely change the meaning of a law.
Section 230(c)(1) reads:
- Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Through sloppy legal citations, courts have misquoted Section 230(c)(1) in their decisions. For example, in Fyk v. Facebook, Judge White wrote in his dismissal:
Because the CDA bars all claims that seek to hold an interactive computer service liable as a publisher of third-party content, the Court finds that the CDA precludes Plaintiff’s claims.
Did you catch the mistake?
The phrase “the publisher or speaker” in Section 230(c)(1) was changed to “a publisher.” Judge White changed the definite article “the” to the indefinite article “a” — in other words, he rewrote the law.
The best way to understand the courts’ mistake here is to rewrite Judge White’s wording using what Section 230(c)(1) really says:
Judge White’s version:
“Because the CDA bars all claims that seek to hold an interactive computer service liable as a publisher of third-party content, the Court finds that the CDA precludes Plaintiff’s claims.”
How it should read:
“Because the CDA bars all claims that seek to [treat] an interactive computer service as [the] publisher of third-party content, the Court finds that the CDA precludes Plaintiff’s claims.”
The difference between these two interpretations might seem minor, but it completely changes how the law is applied. Judge White’s version suggests that platforms can never be treated as publishers at all, no matter what they do. Under this flawed interpretation, even if a platform actively removes third-party content or restricts users—both clear publishing functions—it remains immune because it supposedly cannot be held “liable as a publisher.”
However, the correct reading of Section 230(c)(1), based on its actual wording, makes a crucial distinction: “the publisher” refers specifically to “another information content provider”—not the provider or user of an interactive computer service. This means that a platform provider or user cannot be treated as another entity simply for providing or using the platform. They cannot be held liable for what someone else publishes or speaks, but they also do not gain absolute immunity for their own editorial decisions.
The mistake is right there in the words. Section 230(c)(1) states that a platform cannot be treated as “the publisher” – the original publisher of another’s content, but there is nothing preventing them from being treated as “a publisher” – an additional publisher of content if they engage in any type of secondary publishing. Courts, however, have ignored this grammatical distinction, distorting the law in a way that platforms can never be treated as a publisher at all—no matter how actively they curate, alter, or remove content.
This misinterpretation transformed a narrow protection into sweeping legal immunity, allowing Big Tech to claim protection for all their publishing decisions that go far beyond simply hosting third-party content. It is a huge mistake. Instead of just preventing platforms from being treated as the original author, courts have wrongly expanded Section 230(c)(1) to shield all their own editorial and censorship choices, granting Big Tech far more power than Congress ever intended.
The “Surplusage” Problem
Some might argue Congress intended for 230(c)(1) to protect a platform’s own publishing decisions, but they would be wrong.
The Supreme Court has repeatedly emphasized the principle that statutes should be interpreted in a way that gives meaning to every word and avoids rendering any part superfluous. One of the most well-known quotes on this principle is from Duncan v. Walker, 533 U.S. 167, 174 (2001):
“It is our duty to give effect, if possible, to every clause and word of a statute.”
This rule, sometimes referred to as the surplusage canon, is a principle of statutory interpretation dictating that courts should avoid interpretations that render any statutory language redundant or meaningless. The Supreme Court has articulated this principle in various cases. A key statement from TRW Inc. v. Andrews, 534 U.S. 19, 31 (2001) explains it as follows:
“It is a cardinal principle of statutory construction that a statute ought, upon the whole, to be so construed that, if it can be prevented, no clause, sentence, or word shall be superfluous, void, or insignificant.”
If courts are correct that a platform cannot be treated as “a publisher” for their own publishing actions—including restricting content—then Section 230(c)(2)(A) would be meaningless – “superfluous, void, or insignificant.”
Section 230(c)(2) reads:
- Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Section 230(c)(2)(A) explicitly grants liability protection for specific, “good-faith” content restrictions, meaning Congress set clear limits on moderation protection. Similarly, 230(c)(2)(B) protects platforms from liability when enabling others to restrict “otherwise objectionable” content.
But here is the problem: if 230(c)(1) already covers all publishing decisions—including content restrictions—then 230(c)(2)(A) would be redundant, violating the surplusage canon. In other words, not only does the text contradict the court’s interpretation, but their mistake also voids the “good faith” requirement of 230(c)(2)(A)—an outcome that cannot be correct.
To interpret the statute “upon the whole” (i.e., reading 230(c)(1) and 230(c)(2) together), we must distinguish their separate purposes. There is a key difference between 230(c)(1) and 230(c)(2) that is hiding in plain sight. 230(c)(1) defines the “Treatment” of publisher or speaker, whereas 230(c)(2) specifically talks about “Civil liability.” There is a difference:
- 230(c)(1) is a definitional rule—it prevents platforms or users from being treated as another publisher for content they did not create or provide.
- 230(c)(2) is the active liability shield—it protects platforms when they take any actions in good faith to restrict content or enable others to restrict content.
Crucially, 230(c)(2) says nothing about content prioritization or manipulation because active content manipulation is a form of content development, which platforms are not immune for. We will address content development in more detail later.
When considered together – “upon the whole,” Section 230 makes sense:
- 230(c)(1) prevents platforms or users from being treated as someone else.
- 230(c)(2)(A) protects only good faith content restrictions.
- 230(c)(2)(B) protects platforms when they provide tools for others to restrict content.
By misreading 230(c)(1), the courts have expanded its meaning to cover all publishing decisions including bad faith content restrictions.
Under the courts’ misinterpreted version of the law:
- 230(c)(1) protects all content moderation decisions, no matter what.
- 230(c)(1) shields platforms even when they act as publishers.
- 230(c)(1) overrides 230(c)(2), making “good faith” irrelevant.
- 230(c)(1) grants absolute immunity, even for anticompetitive behavior.
The courts’ interpretation simply does not make any sense.
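To make the structural difference concrete, the two readings can be modeled as simple decision rules. The following Python sketch is only an illustration of the argument above, treating 230(c)(1) as definitional and 230(c)(2)(A) as the good-faith liability shield; the function names and inputs are illustrative shorthand, not language from the statute or any court.

```python
# Illustrative sketch only: a simplified model of the two competing readings of
# Section 230(c). Function names and inputs are hypothetical, not statutory text.

def protected_courts_reading(is_publishing_decision: bool) -> bool:
    """The courts' misreading: any publishing decision is immune, full stop."""
    return is_publishing_decision  # "good faith" is never examined

def protected_correct_reading(developed_content_in_part: bool,
                              restricted_content: bool,
                              acted_in_good_faith: bool) -> bool:
    """Reading 230(c)(1) and 230(c)(2) together, "upon the whole"."""
    if developed_content_in_part:
        # The platform is itself an information content provider (in part) for
        # that content, so 230(c)(1)'s definitional rule offers no cover.
        return False
    if restricted_content:
        # Affirmative restriction falls under 230(c)(2)(A): protected only if
        # the "Good Samaritan" good-faith requirement is satisfied.
        return acted_in_good_faith
    # Pure passivity: the platform cannot be treated as "the publisher or
    # speaker" of someone else's content under 230(c)(1).
    return True
```

On the courts’ reading, the good-faith branch is never reached at all—which is exactly the surplusage problem described above.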
Understanding the Holistic Relationship Between 230(c)(1) and 230(c)(2)
To fully grasp the relationship between 230(c)(1) and 230(c)(2), we must first understand the distinction between an “interactive computer service” and an “information content provider” as defined in Section 230(f). This definitions section outlines what qualifies as an interactive computer service and an information content provider.
230(f) Definitions
As used in this section:
(1) Internet
The term “Internet” means the international computer network of both Federal and non-Federal interoperable packet-switched data networks.
(2) Interactive computer service
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
(3) Information content provider
The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.
We can skip defining “Internet”—everyone understands what that is. Next, 230(f)(2) defines an interactive computer service as a service or system that provides or enables computer access by multiple users to a computer server. This definition does not inherently include content moderation; rather, it refers to platforms that provide the means for users to connect, interact, and engage with content provided by others.
Meanwhile, 230(f)(3) defines an information content provider as any entity that is responsible, in whole or in part, for the creation or development of information. This means that any entity, whether a platform or a user, can become responsible in part as an information content provider if they actively alter, manipulate, or influence content in any way.
In simpler terms, platforms like Facebook, Google, and Twitter provide interactive computer services—they offer access to the digital infrastructure that allows users to interact online. However, users provide the content—whether through pages, profiles, or accounts. This distinction is crucial because it determines whether a platform remains a neutral service provider or has crossed into the realm of content creation or development, which carries legal responsibilities—or legal duties.
The Difference Between “Creation” and “Development”
Courts have long struggled to define the distinction between “creation” and “development” as described in Section 230(f)(3). To sustain the overly broad interpretation of 230(c)(1), courts effectively rendered the term “development” meaningless. This led to inconsistent rulings on when a platform crosses the content “development” threshold.
Courts attempted to establish a standard, ruling that a platform must make a material contribution, such as adding something new, to be considered responsible as a content developer. However, this overlooked a crucial distinction:
- “Creation” refers to changing or adding to the substance of content.
- “Development” refers to any modification of its availability, organization, or accessibility—not just substantive changes.
The text of 230(f)(3) does not say creation “and” development; it specifically states creation “or” development. Each term must have distinct meaning under statutory interpretation principles. That means development is independent of creation—content does not need to change in form, only in functionality or availability.
Think of it like developing an idea—removing, organizing, or prioritizing information are all functions of development, even though nothing new is created.
Thus, development—even in part—includes any affirmative content manipulation decision, not just the addition of new material. The “in part” language in 230(f)(3) indicates that even minimal involvement can cross the content development line. Affirmative decisions are key here – intent.
For example:
- If content is organized and provided based on a user’s request, that is not an affirmative decision by the platform to manipulate content.
- But if a platform circumvents the user’s choice and deliberately presents content based on its own initiatives and intent—that is development.
- If any entity actively considers content’s availability, priority, or positioning, that crosses the line into content development.
In short, allowing, disallowing, or prioritizing content are all forms of active content development—as it requires an affirmative decision about what remains visible and what does not.
The Development Hardline: Platform vs. Publisher
The failure to establish a consistent legal standard regarding development has led to contradictory rulings—such as Dangaard v. Instagram and Fyk v. Facebook. Both cases were heard in the same district court, involved nearly identical facts and allegations, and applied the same 230(c)(1) framework—yet different judges reached opposite conclusions.
The key difference? In Dangaard, Judge Alsup recognized the surplusage issue that Fyk argued in his case. Unlike the judge in Fyk—who conveniently held millions in Big Tech stocks—Judge Alsup explicitly rejected Facebook’s §230(c)(1) immunity argument, calling it a:
“backdoor to CDA immunity contrary to the statute’s history and purpose.”
Judge Alsup ruled that platforms cannot use §230(c)(1) to circumvent §230(c)(2)(A) and shield themselves from liability for unlawful business practices—precisely what happened in Fyk’s case.
Such legal unpredictability is unacceptable—therefore, the standard for what constitutes “development” must be applied uniformly.
The surplusage canon requires that each statutory term have distinct meaning. “Creation” refers to generating new content, while “development” refers to modifying, shaping, or influencing content in any way.
Content development occurs the moment a platform affirmatively considers or manipulates content in any way—whether through:
- Prioritization
- Shadow-banning
- Demonetization
- Filtering
- Removal
- Fact-checking
The moment a platform engages in any of these actions, it stops being a neutral intermediary and crosses into content development—becoming an information content provider (a publisher) responsible under traditional liability principles.
In other words, the moment a platform considers the content it hosts—i.e., takes any affirmative action regarding third-party content—it immediately crosses the development threshold and becomes “a” publisher in part rather than a neutral intermediary platform. At that point, its affirmative publishing conduct is subject to 230(c)(2)’s good faith requirement and the statute’s “Good Samaritan” general intent.
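As a rough illustration of this threshold, the examples above can be expressed as a simple check. This is a conceptual sketch only; the set of actions mirrors the list given above and is not meant to be exhaustive or statutory.

```python
# Illustrative sketch of the "development" threshold described above. The action
# names mirror the examples in the text; they are not a statutory list.

AFFIRMATIVE_ACTIONS = {
    "prioritization",
    "shadow-banning",
    "demonetization",
    "filtering",
    "removal",
    "fact-checking",
}

def crosses_development_threshold(actions_taken: set[str]) -> bool:
    """On this reading, any affirmative manipulation of third-party content makes
    the platform responsible 'in part' for its development under 230(f)(3)."""
    return bool(actions_taken & AFFIRMATIVE_ACTIONS)

# A platform that merely hosts content takes no affirmative action:
assert crosses_development_threshold(set()) is False
# A platform that shadow-bans or removes a page has crossed the line:
assert crosses_development_threshold({"shadow-banning", "removal"}) is True
```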
To be clear, nothing in 230(c)(1) prevents a platform from being treated as “a” publisher and held responsible for third-party content—it specifically prevents them from being treated as “the publisher or speaker” of that content – as “another information content provider.”
If 230(c)(1) truly protected all content moderation decisions, then 230(c)(2) would, in practice, be rendered meaningless. Congress would not have needed to include “good faith” content restrictions if 230(c)(1) already covered content development.
The courts must be wrong—230(c)(1) cannot logically protect publishing decisions without nullifying 230(c)(2).
Now that we have established the text does not support the courts’ position, and statutory construction contradicts it, the question remains:
Did Congress intend for 230(c)(1) to protect all publishing decisions?
Let us explore that next.
Congressional Intent and Constitutional Analysis
The “Good Samaritan” Principle Proves the Courts Wrong
The courts’ interpretation of Section 230(c)(1) does not just fail grammatically and structurally—it also contradicts the explicit “Good Samaritan” intelligible principle.
An intelligible principle is a clear guiding rule often found in a law’s general provision. It is used in constitutional law to determine whether Congress has lawfully delegated its legislative powers to an executive agency or another entity—including private corporations. However, under the nondelegation doctrine, Congress cannot transfer its lawmaking authority without clear guidelines on how that authority must be exercised—such as requiring actions to be taken in “good faith” and as a “Good Samaritan.”
The Supreme Court established the intelligible principle test in J.W. Hampton, Jr. & Co. v. United States, 276 U.S. 394 (1928), stating:
If Congress shall lay down by legislative act an intelligible principle to which the person or body authorized to [act] is directed to conform, such legislative action is not a forbidden delegation of legislative power.
In other words, Congress must set clear limits, objectives, or standards to guide discretion, rather than granting unfettered power. If a law fails to provide an intelligible principle, it risks being struck down as an unconstitutional delegation of power.
Congress’ intelligible principle is explicitly stated in the heading of Section 230(c)’s general provision:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
You may be wondering why “Good Samaritan” appears in quotation marks. Quotation marks indicate direct speech—so who said it? Congress. Congress laid down an intelligible principle directing Big Tech to conform to the role of a “Good Samaritan” if they want “protection” for “blocking and screening of offensive material.”
230(c)(1) cannot possibly protect all publishing decisions—doing so would include self-interested actions like restricting competitors or removing content on behalf of third parties, such as the government. That would nullify Congress’ general intent and strip the “Good Samaritan” principle of any meaning.
What Exactly Is a “Good Samaritan?”
The term “Good Samaritan” is not explicitly defined in the law, so we must consider its relevant meaning. A Good Samaritan is someone who takes affirmative action to prevent harm or help others—not someone acting for their own benefit. This means that while platforms can restrict content, they may only do so in good faith, to prevent future harm and help the public, not arbitrarily or for their own advantage.
When one entity restrains another to prevent future harm, an affirmative defense can provide legal protection. Section 230 functions as an affirmative defense, meaning platforms can justify their actions under it—but only if they meet the law’s requirements. This “Good Samaritan” principle represents the basic congressional intent behind the law’s protections.
Understanding Affirmative Defenses: The Self-Defense Analogy
To better understand an affirmative defense, consider self-defense in criminal law. If someone uses force against another person, that act is typically illegal. However, if they acted to protect themselves or others from imminent harm, they can claim self-defense. But merely claiming self-defense is not enough—they must prove their actions were necessary and reasonable.
This concept ties directly into civil liberties—the rights individuals have to be free from unjust restrictions or interference by others, including both governments and private entities.
Civil Liberties
Civil liberties primarily protect individuals from government overreach, but private entities, including corporations, can also infringe upon these rights. When they do, affected individuals may seek legal remedies, such as damages or court orders to stop the behavior. This applies to cases like:
- Wrongful deplatforming
- Censorship that deliberately harms livelihoods
- Unlawful suppression of speech
Note: If a private entity acts on behalf of the government, it may also be held accountable under constitutional principles.
Civil liberties are often challenged through prior restraint.
Prior Restraint
Prior restraint refers to restricting another’s civil liberties to prevent harm before it occurs. In the context of self-defense, preemptive force is justified only if the threat is imminent and the response is reasonable. Similarly, platforms restricting content must demonstrate that their actions are necessary to prevent future harm—not to manipulate speech, silence competition, or act in bad faith.
This is where Section 230’s constitutional issues arise. The law was intended to prevent one entity from being treated as another and to specifically protect good faith efforts to mitigate future harm—not to grant platforms unchecked immunity to control third-party speech for self-serving purposes. If an entity has unfettered immunity to restrain another’s civil liberties without justification, the law violates due process principles.
Due Process
In practice, the misapplication of Section 230(c)(1) renders the statute unconstitutional “as applied”—meaning it could be fixed, but in its current application, it eliminates due process protections for individuals unlawfully restrained by platform decisions. Somehow, courts have extended 230(c)(1)’s definitional protection to fully immunize platforms when they actively restrict civil liberties, leaving individuals with no legal recourse to challenge wrongful censorship. This effectively denies due process by allowing private entities to restrain the civil liberties of others without oversight or justification.
Additionally, this misapplication raises First Amendment concerns, as it enables platforms to suppress speech with immunity, often under government pressure, effectively allowing government and private actors to bypass constitutional restrictions.
This unconstitutional application of Section 230 was directly challenged in Fyk v. Facebook, where Jason Fyk twice sought to contest its validity. First, he filed a separate action against the United States in Washington, D.C. to challenge the law’s constitutionality. Then, in his ongoing litigation against Facebook, he independently invoked a Rule 5.1 Constitutional Challenge, requiring notification to the Department of Justice that a law was being challenged. Despite these efforts, the judiciary evaded all substantive review of Section 230’s constitutional deficiencies, either erroneously dismissing the challenge outright or terminating it under false procedural technicalities. This ongoing judicial avoidance exemplifies how courts have shielded Section 230 from any real scrutiny, further denying due process to those harmed by its misapplication.
For example, the courts employed judicial evasion to sidestep proper civil procedure, ensuring Fyk’s constitutional challenges were never heard on the merits. In Fyk’s first attempt, the court arbitrarily determined that he lacked standing, concluding that he was improperly suing the United States over Facebook’s actions. This was an absurd rationale, given that Fyk had already sued Facebook for its actions and was now suing the United States for the court’s own role in denying him due process. Rather than addressing the substance of the claim—that the government’s misapplication of Section 230 deprived him of all legal remedy—the DC court mischaracterized the lawsuit to justify its procedurally improper dismissal.
In Fyk’s second attempt, his Rule 5.1 Constitutional Challenge, the court once again dodged the issue with a demonstrably circular argument. The California courts determined that Fyk’s challenge was “freestanding,” meaning independent of the ongoing proceedings, while simultaneously arguing that he needed to bring it as a separate, independent action. Specifically, the court stated that his Rule 5.1 notice was “not tethered to any pending request for relief,” and therefore could not be considered.
“If relief is not available under either rule, ‘the only other procedural remedy is by a new or independent action to set aside a judgment.’” —Fed. R. Civ. P. 60(b)
Obviously, the challenge was independent, as the court’s own characterization of it as “freestanding” confirmed as much. Yet, despite Rule 5.1 requiring only that a constitutional challenge be identified—not tied to a separate motion—the court refused to adjudicate it. This contradiction ensured that the challenge was never properly reviewed, as the court simultaneously deemed it too independent to be considered while also requiring it to be independent. It made no sense.
These procedurally evasive tactics highlight how courts have intentionally shielded Section 230 from any scrutiny—constitutional or otherwise—ensuring that its broad, unchecked immunity remains in place. Without an intelligible principle guiding the scope of civil liability protection, courts have, as applied, allowed Section 230(c)(1) to function as limitless immunity for platforms, violating the nondelegation doctrine and Americans’ due process and free speech rights. Laws must have clear standards, yet under the courts’ current interpretation, platforms wield absolute power over online discourse without any accountability. This unchecked authority to restrain the liberties of third parties, combined with the elimination of due process, renders Section 230(c)(1) unconstitutional as applied, depriving individuals of fundamental legal protections and allowing private entities to arbitrarily control speech with no legal remedy.
How The Courts Broke the Internet
Section 230 is an Affirmative Defense, Not Immunity from Suit
Contrary to popular belief, Section 230 only provides an affirmative defense, not absolute “immunity from suit.” This means platforms can invoke Section 230 as a limited civil liability protection, but only if they can justify their prior restraint (i.e., content restriction decisions), and only if they comply, in good faith, with Congress’s specific legal requirements under the “Good Samaritan” general intent. It does not grant blanket immunity.
Again, Section 230(c)(1) is not a liability shield. Instead, it clarifies that a platform or user cannot be treated as someone else—but only when they take no affirmative action at all to restrict third-party content. In other words, if they do nothing—meaning they never considered the content at all—they cannot be held responsible for failing to prevent future harm.
Meanwhile, 230(c)(2) allows platforms to take “any action” to restrict third-party content. 230(c)(2)(A) protects platforms when they directly restrict content, but only in good faith. 230(c)(2)(B) protects platforms when they enable others to restrict content, such as by providing tools that let users filter content for themselves or on behalf of their children.
If a platform takes any direct action to restrict content, it cannot self-determine that it acted in good faith, nor can a court simply assume it did. Good faith is a factual question that must be decided by a jury. The platform must demonstrate in court that its actions were taken in good faith and for the public good as a “Good Samaritan” to prevent future harm. Otherwise, Section 230 does not apply—period.
Despite these legal requirements, Big Tech has rarely, if ever, been forced to justify its content moderation decisions. This raises the obvious question: why?
The answer lies in the courts’ procedural mistakes.
Rules of Civil Procedure Analysis
The difference between civil liability protection and immunity from suit is substantial. Civil liability protection means a party can still be sued but has a legal defense that can protect them from being held liable if they meet certain requirements. They must still go to court and prove their defense.
Immunity from suit, on the other hand, means a party cannot be sued at all, regardless of their actions. The case is dismissed before any evidence is considered.
Section 230 provides civil liability protection, not immunity from suit—because if platforms do not need to present “any evidence,” that would nullify 230(c)’s “Good Samaritan” general provisions and 230(c)(2)’s good faith requirements.
Plainly, based on the text and intent of the statute, platforms must justify their actions in court, not have the courts assume they are automatically protected.
How Section 230’s Affirmative Defense Morphed into Immunity from Suit Over Time
Section 230(c)(1) was originally intended as protection from civil liability—an affirmative defense—meaning platforms could invoke it to avoid liability only after proving it applied. However, over time, courts have transformed it into immunity from suit, allowing platforms to escape litigation before any facts are even examined. This misapplication strips plaintiffs of their right to challenge whether Section 230 even applies, violating due process. That is legally untenable—it should not be possible.
Energy Automation Systems v. Xcentric Ventures – Section 230’s Long-Lost Legal Conversion Standard
Section 230 is not and cannot (at least constitutionally) be sovereign immunity. As far back as 2007, Judge Aleta Trauger set a legal precedent stating that when a Section 230 defense is invoked and it depends on disputed facts, the court must convert a 12(b)(6) motion to dismiss into a Rule 56 motion for summary judgment. This allows for discovery, ensuring that plaintiffs can challenge whether Section 230 even applies to their circumstances.
In Energy Automation Systems v. Xcentric Ventures, the court explicitly ruled that if a Section 230 defense depends on any disputed facts, the case must proceed to discovery before dismissal.
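Sketched as a procedural flow, that standard looks something like the following. This is an illustrative model only; the condition names are shorthand for the concepts above, not language from the Federal Rules.

```python
# Illustrative sketch of the conversion standard discussed above (Energy
# Automation Systems and Barnes). Parameter names are shorthand, not Rules text.

def handle_section_230_defense(defense_clear_on_face_of_complaint: bool,
                               defense_depends_on_disputed_facts: bool) -> str:
    """How a Section 230 affirmative defense should be handled procedurally."""
    if defense_depends_on_disputed_facts:
        # The 12(b)(6) motion must be converted to Rule 56 summary judgment,
        # with discovery, so the plaintiff can contest whether 230 applies.
        return "convert to Rule 56 and allow discovery"
    if defense_clear_on_face_of_complaint:
        # Only when it is indisputable from the complaint itself that Section
        # 230 applies can the case be dismissed at the pleading stage.
        return "dismissal at the pleading stage"
    # Otherwise the case proceeds and the platform must prove its defense.
    return "case proceeds; platform must prove good faith"
```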
Core Procedural Errors in Fyk v. Facebook
In Fyk v. Facebook, the misapplication of Section 230(c)(1) was clear. Fyk’s verified complaint—sworn under oath—meant his allegations were legally treated as evidence at the motion-to-dismiss stage. Under Rule 12(b)(6), dismissal is not an immunity from suit but a procedural mechanism allowing a court to dismiss a case only if the complaint fails to state a claim upon which relief can be granted. Courts must assume all factual allegations are true and draw inferences in the plaintiff’s favor. If factual disputes exist, dismissal is improper, and the case must proceed to discovery or summary judgment.
However, courts have misused Rule 12(b)(6) as a tool to grant platforms early immunity from suit, dismissing cases before any facts can be challenged or developed through discovery. Section 230 was never meant to be a jurisdictional bar, and affirmative defenses like 230 should not justify dismissal unless it is clear from the face of the complaint—for example, if Fyk had been treating Facebook as “the publisher or speaker” of his content.
But he wasn’t—because that’s ridiculous! The issue was not that Facebook was publishing or speaking his content—it was that Facebook was actively manipulating and interfering with it.
If factual disputes exist—such as whether a platform was acting as a neutral host or engaging in content development—the case should proceed to discovery or be converted to summary judgment under Rule 56. At the very least, a plaintiff should be afforded the opportunity to amend the complaint or be granted a hearing—both of which Fyk was unilaterally denied.
Essentially, the courts treated Section 230 as a jurisdictional bar to suit, depriving Fyk of all legal remedy and, in doing so, denying him due process as procedurally applied.
Judge White failed to convert Facebook’s motion to dismiss into a Rule 56 motion for summary judgment, and he failed to consider the disputed facts in Fyk’s favor at the dismissal stage (or at all, for that matter), as required under Rule 12(b)(6).
Among the restricted pages listed in Fyk’s verified complaint was one that he did not own: www.facebook.com/takeapissfunny. Its inclusion was an error, as Fyk never had any control over the page. Facebook was fully aware of this, as well as the fact that the page had nothing to do with urination. The phrase “Take a Piss” is a common British slang term meaning to make fun of—a cultural reference consistent with the kind of humor typically found on Fyk’s pages. The actual page owner was from the UK, and the addition of the word “funny” in the name further confirms that it was simply a British humor page, not one related to urination in any way.
Despite knowing the truth, Facebook outright lied to the court, falsely presenting this fabricated claim to seemingly justify its actions against Fyk. However, at the Rule 12(b)(6) motion-to-dismiss stage, the court is required to accept the plaintiff’s well-pleaded facts as true—not the defendant’s assertions. And, if any factual disputes exist, the court must either convert Facebook’s 12(b)(6) motion into a Rule 56 summary judgment motion to consider the disputed facts and permit discovery or, at the very least, allow Fyk to amend his complaint. Instead, Judge White blindly accepted Facebook’s misrepresentation, violating basic procedural safeguards.
Facebook’s fabricated claim stated:
“Plaintiff Jason Fyk used Facebook’s free platform to create a series of Facebook pages such as one dedicated to photos and videos of people urinating.”
Judge White’s inherent bias was so blatant that he immediately echoed Facebook’s false narrative in the very first paragraph of his dismissal order:
“Plaintiff had used Facebook’s free online platform to create a series of, among other amusing things, pages dedicated to videos and pictures of people urinating.”
Nothing about that statement is even remotely true!
Not only did Judge White fail to consider Fyk’s facts as true and view them in the light most favorable to him, but he appears to have disregarded them entirely. The court’s regurgitation of Facebook’s lies was, of course, immediately seized upon by internet trolls and uninformed legal commentators who have no real understanding of how Section 230 actually works, branding Fyk as “the piss guy” who just keeps losing. In other words, not only did Facebook harm Fyk—Judge White compounded that harm.
If the courts refuse to follow the plain text of the statute, congressional intent, the Constitution, and even the Rules of Civil Procedure, the inevitable outcome is a denial of due process—a guaranteed loss. Fyk was doomed from the start—not because his claims lacked merit, or the law actually protects Facebook, but because the courts had already decided that Facebook was immune from suit.
Facebook’s blatant and deliberate misrepresentation, reinforced by Judge White’s ruling, allowed Facebook to defraud the court into dismissing the case before any of Fyk’s evidence was considered. Judge White ignored Fyk’s verified allegations altogether—despite their status as sworn evidence at the dismissal stage—and resolved all contested facts in Facebook’s favor without properly converting the dismissal into a Rule 56 summary judgment motion and allowing discovery, something explicitly barred under Rule 12(b)(6).
The key factual dispute that was never adjudicated was whether Facebook was simply a neutral platform that did nothing (i.e., whether Fyk was improperly attempting to treat Facebook as “the publisher or speaker” of his own content—an utterly absurd argument) or whether Facebook acted deliberately in bad faith to restrict Fyk’s pages anticompetitively for its own financial gain—not as a “Good Samaritan” or to prevent any future harm.
Fyk never even got the chance to argue this, because he was improperly barred from bringing his claims at the 12(b)(6) motion-to-dismiss stage—without any factual consideration—a procedurally untenable outcome.
Legal Precedent Lost to Time (This Section is Not for Dummies)
The Ninth Circuit in Barnes v. Yahoo! (2009) made it clear: Section 230(c)(1) is a liability shield, not immunity from suit (you would think they would remember their own precedent – wishful thinking). Barnes v. Yahoo! confirms courts cannot dismiss cases at the pleading stage unless it is indisputable from the face of the complaint that Section 230 applies – it clearly does not in Fyk v. Facebook.
The very first words of the Nature of the Action in Fyk’s verified complaint state:
This case asks whether Facebook can, without consequence, engage in brazen tortious, unfair and anti-competitive, extortionate, and/or fraudulent practices that caused the build-up (through years of hard work and entrepreneurship) and subsequent destruction of Fyk’s multi-million dollar business with over 25,000,000 followers merely because Facebook “owns” its “free” social media platform.
From the outset, Fyk raised a disputable factual question – whether Facebook engaged in brazen tortious, unfair, anti-competitive, extortionate, and/or fraudulent practices. The Good Samaritan general provision and the good faith requirements of 230(c)(2) would suggest the answer is no. But under the courts’ misapplication of 230(c)(1) as immunity from suit, the answer becomes yes. Right there, it becomes obvious how the improper application of 230(c)(1) annihilated the entire purpose of the statute.
How Barnes Was Used to Contradict Barnes
To expose just how sloppy and careless Judge White’s flawed ruling was, we will insert corrections in bold and brackets [ ] to highlight exactly where White went wrong. The irony here is that Judge White literally cited Barnes to contradict Barnes—improperly dismissing Fyk’s verified factual claims while directly contradicting Barnes’ holding that Section 230(c)(1) is a liability shield, not immunity from suit:
Lastly, Plaintiff’s claims here seek to hold Facebook liable as the “publisher or speaker” of that [i.e., Fyk’s] third party content. The three causes of action [there were actually four] alleged in the complaint arise out of Facebook’s decision to refuse to publish or to moderate the publication of Plaintiff’s content [notably a protected function of 230(c)(2)(A)]. To determine whether a plaintiff’s theory of liability treats the defendant as a publisher [did you catch that word change], “what matters is whether the cause of action inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.” [i.e., whether the cause of action inherently requires the court to treat Facebook as Fyk – given a proper reading of 230(c)(1)] Id. (citing Barnes, 570 F.3d at 1101). Consequently, if the duty that the plaintiff alleges was violated by defendant “derives from the defendant’s status or conduct [status – yes; conduct – no] as a ‘published or speaker,’ [not only did Judge White use “a” instead of “the,” he misspelled the word “publisher” as well] . . . section 230(c)(1) precludes liability.” [No, 230(c)(1) defines the “status” of the provider or user and how they can be treated. In this case Facebook cannot be treated as Fyk. 230(c)(1) does not preclude liability for any conduct, at all] Id. (citing Barnes 570 F.3d at 1102). Publication “involves the reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.” Id. Thus, “any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.” Id. (citing Roommates, 521 F.3d at 1170-71).
Wait a Minute—That’s Not What Section 230(c)(1) Says!
Section 230(c)(1) does not protect “any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online.” That is Section 230(c)(2)—and even then, only under specific conditions.
Section 230(c)(2) protects some moderation decisions, but only if:
- The provider or user restricts content themselves in good faith, or
- The platform enables someone else to restrict content.
Nowhere does Section 230(c)(1) grant providers or users absolute, blanket immunity for all editorial decisions – there is no mention of immunity anywhere in Section 230.
What Did Judge White Do Wrong Here?
Do you see what Judge White did wrong here? He sloppily blended the Ninth Circuit’s already contextually flawed Barnes precedent into an even more flawed one-size-fits-all Section 230(c)(1) “super-immunity”—proudly proclaiming: “If Big Tech engages in any publishing, they are immune from suit! Hallowed be thy law.”
He Created a 230(c)(1) “Super-Immunity” That Doesn’t Exist! It is simply wrong—plain and obvious. And if, at this point, you still can’t see that, then you’re beyond the help of even this simple guide for Dummies.
Misapplication of Levitt and Nemet Chevrolet
Irony stacked on irony—Facebook relied on Levitt v. Yelp and Nemet Chevrolet v. Consumer Affairs to justify immunity from suit, but it misapplied these cases. While both cases involved early dismissals under Section 230, neither held that 230(c)(1) is an automatic bar to litigation—immunity from suit. Instead, both cases cited Barnes, reaffirming that Section 230 is an affirmative defense—not absolute immunity.
Had the court properly applied either Nemet Chevrolet or Levitt, it would have allowed Fyk’s case to proceed to discovery, because Fyk’s verified complaint included allegations that Facebook was not a passive host but actively manipulated content for financial gain—in bad faith. This factual dispute should have precluded 12(b)(6) dismissal; the motion should instead have been converted to a Rule 56 summary judgment motion.
But Judge White got that wrong as well. Frankly speaking, in Fyk v. Facebook the courts got literally everything wrong.
How the Courts Are Wholly Responsible for Breaking the Internet—The Totality of Judge White’s Textual, Intent-Based, Constitutional, and Procedural Mistakes
Judge White’s ruling in Fyk v. Facebook was not just legally flawed—it was a textbook example of how courts have misinterpreted Section 230 to break the internet. His order violated the text of the statute, ignored congressional intent, contradicted constitutional protections, and disregarded basic procedural rules.
1. Judge White’s Order Violated the Text of Section 230
Section 230(c)(1) states that a platform “shall not be treated as the publisher or speaker of any information provided by another information content provider.” It does not state that platforms are immune from suit. By misapplying 230(c)(1) as a blanket immunity shield, Judge White stripped the statute of its intended function and created an overbroad protection that does not exist in the statutory text.
2. Judge White’s Order Violated Congressional Intent
When Congress passed the Communications Decency Act (CDA) in 1996, its purpose was twofold:
- To promote the free exchange of ideas and commerce on the internet, and
- To encourage platforms to remove harmful content in good faith under 230(c)(2).
Congress never intended Section 230(c)(1) to provide absolute immunity from suit or to allow platforms to engage in anticompetitive, fraudulent, or extortionate conduct without consequence. Judge White’s ruling undermined this intent, allowing Facebook to manipulate and destroy businesses under the false pretense of immunity.
3. Judge White’s Order Violated the Constitution
By effectively barring Fyk from any legal remedy, Judge White’s ruling violated multiple constitutional protections, including:
- The Fifth Amendment’s Due Process Clause – Fyk was denied a meaningful opportunity to challenge Facebook’s conduct in court.
- The First Amendment – Judge White’s interpretation of Section 230 allowed Facebook to engage in content manipulation under the guise of immunity, essentially enabling corporate censorship without legal accountability.
- The Article III Requirement for Judicial Review – By treating Section 230 as an automatic jurisdictional bar, the court refused to properly adjudicate the case on its merits.
4. Judge White’s Order Violated Multiple Procedural Rules
Beyond its misinterpretation of the law, Judge White’s ruling was riddled with procedural failures that denied Fyk a fair hearing:
- Improperly treating Section 230 as immunity from suit rather than an affirmative defense, contradicting Barnes v. Yahoo!
  - Section 230 was never meant to block lawsuits outright but to provide a defense that platforms must prove. Judge White wrongly applied it as absolute immunity, shutting down Fyk’s case before the facts could be examined.
- Failing to convert the motion to summary judgment, despite Facebook relying on extrinsic evidence—contrary to Energy Automation Systems v. Xcentric Ventures.
  - Under established case law, when a defendant introduces extrinsic evidence at the motion-to-dismiss stage, the court must convert the motion to Rule 56 summary judgment and allow discovery.
  - Judge White ignored this requirement, letting Facebook assert unverified claims while denying Fyk the ability to challenge them.
- Ignoring Rule 12(b)(6) standards, resolving factual disputes in Facebook’s favor instead of assuming Fyk’s verified allegations as true.
  - Under Rule 12(b)(6), the court must assume all well-pleaded factual allegations in the complaint are true and view them in the light most favorable to the plaintiff.
  - Instead, Judge White did the opposite, adopting Facebook’s false claims while ignoring Fyk’s sworn allegations.
- Denying due process by preventing Fyk from challenging whether Facebook was acting as a neutral platform or engaging in content development.
  - The core dispute in this case was whether Facebook was merely hosting content (protected under 230(c)(1)) or actively manipulating content for financial gain (which would not be protected).
  - Fyk was never given the chance to argue this, as Judge White dismissed the case before any evidence could be presented.
How to Fix Section 230 and the Internet
Courts Have Corrupted Section 230 Beyond Recognition
Judge White’s ruling is not an anomaly—it reflects a widespread judicial failure that has allowed Big Tech to weaponize Section 230 against competition, accountability, and justice. Courts have twisted a liability shield into blanket immunity, giving tech giants unchecked power while stripping individuals like Fyk of their right to challenge corporate misconduct in court.
The Supreme Court Can Easily Restore Section 230’s Proper Application
If the Supreme Court grants Fyk’s petition for certiorari, it has the opportunity to restore Section 230 to its rightful legal framework by reaffirming that 230(c)(1) is a liability defense—not immunity from suit. The Court can correct decades of judicial misinterpretation by holding that 230(c)(1) does not protect any affirmative publishing conduct at all; it only prevents treating platforms as “the” publisher or speaker of third-party content.
A ruling in Fyk’s favor would:
- End improper early dismissals by requiring platforms to prove their Section 230 defense, rather than merely assert it.
- Protect due process by preventing courts from resolving factual disputes in favor of tech companies at the motion-to-dismiss stage.
- Reinforce procedural safeguards by requiring courts to convert motions to summary judgment when defendants rely on extrinsic evidence, restoring plaintiffs’ right to discovery.
- Ensure legal accountability so platforms cannot engage in anticompetitive, fraudulent, or bad-faith actions under the false pretense of sovereign immunity.
No New Laws Are Needed—Only Judicial Course Correction
If Section 230 is ever to serve its intended purpose, Congress does not need to amend or repeal it—the courts must simply apply it correctly. The law itself is not broken; the judiciary’s failure to follow its text, intent, and procedural safeguards is. Until the courts return to the rule of law, the internet will remain broken—not by Congress, but by the courts.
Jason Fyk
Quick Reference Guide: Section 230
How the Courts Broke the Internet & How to Fix It
1. The Original Purpose of Section 230
- Intended to protect free speech by allowing platforms to host user-generated content without liability for third-party speech.
- Designed to ensure platforms remain neutral hosts, not editorial gatekeepers.
- Courts misinterpreted it, granting Big Tech unchecked power to censor, remove competitors, and shape public discourse.
2. The Key Misinterpretation: “The” vs. “A” Publisher
- Section 230(c)(1) states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
- Courts changed it to “a” publisher, expanding immunity beyond its intended scope.
- Correct interpretation: Platforms should not be treated as the original author but can still be held accountable for their own publishing conduct.
- Judges, like in Fyk v. Facebook, misquoted the law, distorting its meaning to create broad immunity.
3. The “Surplusage” Argument: Courts Made 230(c)(2) Useless
- Legal principle: Every part of a law must have meaning (Duncan v. Walker, TRW Inc. v. Andrews).
- Courts’ interpretation of 230(c)(1) renders 230(c)(2) meaningless.
  - 230(c)(2)(A): Protects good faith content moderation.
  - If 230(c)(1) protects all moderation, there’s no reason for a separate good faith requirement.
- Proper interpretation:
  - 230(c)(1) only protects platforms from being treated as “the” publisher of another’s content.
  - 230(c)(2) governs active moderation and requires “good faith.”
4. The “Content Development” Hardline
- Section 230(f)(3) distinguishes:
  - Creation = Changing or adding new content.
  - Development = Any modification, organization, or prioritization.
- If a platform manipulates content (shadow bans, filters, prioritizes), it is developing content and becomes “a” publisher.
- Dangaard v. Instagram recognized this; Fyk v. Facebook ignored it.
5. Good Samaritan Rule & Intelligible Principle
- Section 230 is titled: “Protection for ‘Good Samaritan’ blocking and screening of offensive material.”
- Courts ignored “Good Samaritan”: Platforms were supposed to act in good faith, not for self-interest.
- Judicial misinterpretation violates the nondelegation doctrine:
  - Congress cannot delegate unchecked power to private corporations without clear limits.
  - By ignoring “Good Samaritan” requirements, courts granted limitless power.
6. Section 230 Is an Affirmative Defense, Not Immunity
- Affirmative defenses require platforms to prove their actions were lawful (like self-defense in criminal law).
- Courts wrongly granted Section 230 absolute immunity from suit, dismissing cases before evidence was reviewed.
- Energy Automation Systems v. Xcentric Ventures (2007):
  - If a Section 230 defense depends on disputed facts, courts must convert dismissal into summary judgment and allow discovery.
  - Judge White ignored this in Fyk’s case.
7. Procedural Violations in Fyk v. Facebook
- Verified complaints = sworn evidence at motion-to-dismiss stage.
- Rule 12(b)(6) requires courts to assume plaintiff’s facts are true.
- Judge White did the opposite:
  - Accepted Facebook’s false narrative about Fyk’s pages.
  - Ignored Fyk’s verified allegations.
  - Failed to convert the case to summary judgment (Rule 56).
8. Barnes v. Yahoo! (2009) Contradicts the Courts’ Own Rulings
- Ninth Circuit ruled that 230(c)(1) is a liability shield, not immunity from suit.
- Judge White ignored this precedent, wrongly dismissing Fyk’s case before any discovery.
- Levitt v. Yelp and Nemet Chevrolet v. Consumer Affairs reaffirmed Barnes, but courts still misapplied them.
9. The Constitutional Violations of Courts’ Interpretation
- Fifth Amendment: Due Process – Courts’ misuse of Section 230 denies plaintiffs any legal remedy.
- First Amendment: Free Speech – Corporate censorship enabled by Section 230’s misinterpretation violates fundamental rights.
- Article III: Judicial Review – Courts treat Section 230 as a jurisdictional bar, preventing cases from even being heard.
10. The Supreme Court Must Fix This
- Fyk’s case presents the best opportunity for SCOTUS to restore Section 230’s proper meaning.
- No new laws are needed—only correct judicial interpretation.
- Key fixes SCOTUS should enforce:
  - 230(c)(1) does not protect platforms’ affirmative publishing conduct.
  - “Good faith” must be enforced under 230(c)(2).
  - Platforms must prove their Section 230 defense in court, not just assert it.
  - Courts must stop dismissing cases at the motion-to-dismiss stage if factual disputes exist.