UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF COLUMBIA


JASON FYK,
                              Plaintiff,
vs.                                                        CASE NO.:
UNITED STATES OF AMERICA,
                              Defendant.


COMPLAINT FOR DECLARATORY JUDGMENT REGARDING TITLE 47, UNITED
STATES CODE, SECTION 230 (THE COMMUNICATIONS DECENCY ACT)


Callagy Law, P.C.
Jeffrey L. Greyber, Esq.
D.C. Bar No. 1031923
jgreyber@callagylaw.com
1900 N.W. Corporate Blvd.
Suite 310W
Boca Raton, FL 33431
(561) 406-7966 (o)
(201) 549-8753 (f)
Attorney for Plaintiff
Putterman / Yu / Wang, LLP
Constance J. Yu, Esq.
Pending Pro Hac Vice Admission
cyu@plylaw.com
345 California St.
Suite 1160
San Francisco, CA 94104-2626
(415) 839-8779 (o)
(415) 737-1363 (f)
Attorney for Plaintiff
TABLE OF CONTENTS
PAGE
I. Nature of the Action, Parties, Jurisdiction, and Venue …………………. 1-5
II. Common Allegations ……………………………………………………. 5-131
  A. Brief Introduction …………………………………………………… 5-7
  B. Preliminary Statement ………………………………………………. 7-27
  C. Constitutional Doctrines Violated by the CDA ……………………… 27-102
    1. Non-Delegation Doctrine / Major Questions Doctrine ………….. 27-40
    2. Void-for-Vagueness Doctrine …………………………………… 40-45
    3. Substantial Overbreadth Doctrine ……………………………….. 45-102
  D. Canons of Statutory Construction Violated by the CDA ……………. 102-127
    4. Absurdity Canon / Harmonious-Reading Canon / Whole-Text Canon / Surplusage Canon …………………………. 102-120
    5. Irreconcilability Canon …………………………………………… 120-127
  E. Conclusion …………………………………………………………… 127-131
III. Count I – Declaratory Judgment as to CDA Unconstitutionality ……….. 132-135

EXHIBITS
Exhibit A – Title 47, United States Code, Section 230.
Exhibit B – Summary of the Fyk v. Facebook, Inc. Lawsuit (Request for Judicial Notice).
Exhibit C – Malwarebytes, Inc. v. Enigma Software Group USA, LLC, No. 19-1284, 141 S.Ct. 13 (2020) and Doe v. Facebook, Inc., 595 U.S. ___, 2022 WL 660628 (Mar. 7, 2022).
Exhibit D – Composite exhibit consisting of Cornell Law School treatises / publications.
Exhibit E – Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996 (Sept. 23, 2020) publication.
Exhibit F – President Trump’s Executive Order 13925 (May 28, 2020).
Exhibit G – Canons of Construction publication.
Exhibit H – Composite exhibit of Wikipedia publications.
Exhibit I – Composite exhibit of definitions from the Merriam-Webster Dictionary.
Exhibit J – Intelligible Principle Law and Legal Definition publication.
Exhibit K – The Nature and Scope of Permissible Delegations publication.
Exhibit L – Overbreadth publication.
Exhibit M – Foreign sovereign immunity and comparative institutional competence publication.
Exhibit N – The Intelligible Principle: How It Briefly Lived, Why It Died, and Why It Desperately Needs Revival In Today’s Administrative State publication.
Exhibit O – I wrote this law to protect free speech. Now Trump wants to revoke it publication.
Exhibit P – The First Amendment: Categories of Speech publication.
Exhibit Q – CDA 230 – The Most Important Law Protecting Internet Speech publication.
Exhibit R – Patently Offensive publication.
Exhibit S – Facebook has a right to block hate speech but here’s why it shouldn’t publication.
Exhibit T – Facebook censored a post for ‘hate speech.’ It was the Declaration of Independence publication.
Exhibit U – Compilation of several publications.
Exhibit V – Chad Prather, GOP Candidate for Governor, Sues Facebook Over Suspension publication.
Exhibit W – Compilation of several publications.
Exhibit X – The Hunter Biden laptop is confirmed?! Color us shocked! publication.
Exhibit Y – Amazon pulled Justice Clarence Thomas documentary as censorship of conservative content continues publication.
Exhibit Z – Tech giants banned Trump. But did they censor him publication.
Exhibit AA – I Have Been Permanently Banned By YouTube publication.
Exhibit BB – Viewer account banned for no reason post.
Exhibit CC – The Tuskegee Timeline publication.
Exhibit DD – YouTube, Facebook split on removal of doctors’ viral coronavirus videos publication.
Exhibit EE – Compilation of several publications.
Exhibit FF – FB newsroom Steps to manage problematic content publication.
Exhibit GG – Three-part recipe for cleaning up newsfeed publication.
Exhibit HH – Ryan Hartwig April 6, 2022, Affidavit.
Exhibit II – When Twitter Blocked Mother Teresa publication.
Exhibit JJ – Google maps location data of freedom convoy donors posted online publication.

TABLE OF AUTHORITIES
PAGE
Case Law
A.L.A. Schechter Poultry Corp. v. United States, 295 U.S. 495 (1935) …………… 14, 26, 28, 34, 37, 121
Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009) …………………………….. 25, 49, 103, 114, n. 107
Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003) ………………………………………. 114, 116
Baxter v. Bracey, 140 S.Ct. 1862 (2020) …………………………………………….. 111
Board of Trustees of State Univ. of N.Y. v. Fox, 492 U.S. 469, 483 (1989) ………… n. 53
Brown v. Entm’t Merchs. Ass’n, 564 U.S. 786 (2011) ……………………………….. 67
Carter v. Carter Coal Co., 298 U.S. 238 (1936) …………………………………… 25, 31, 34, n. 44, 37, 121
Doe v. America Online, Inc., 783 So.2d 1010, 1025 (Fla. 2001) …………………….. 113
Doe v. Facebook, Inc., 595 U.S. ___, 2022 WL 660628 (Mar. 7, 2022) …………… 9, n. 9, 57, 104, 111, n. 102, Ex. C
Doe v. Twitter, Inc., No. 21-cv-00485-JCS, 2021 WL 3675207, at *1, *3-4 (N.D. Cal. Aug. 18, 2021) ……………… 59
Enigma Software Group USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040 (9th Cir. 2019) ……………… n. 21, n. 30, 25, 105, n. 102, 129
e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029, *3 (M.D. Fla. Feb. 8, 2017) ……………… 49, 103
Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1168 (9th Cir. 2008) ……………… 9-10, 25, 106, 112, 117
    F.C.C. v. Fox Television Stations, Inc., et al., 567 U.S. 239 (2012) ……………….. 41
    Field v. Clark, 143 U.S. 649, 692 (1892) …………………………………………… 29
    Force v. Facebook, Inc., 934 F.3d 53, 65 (2d Cir. 2019) …………………………… 58, 60
    FTC v. Accusearch, Inc., 570 F.3d 1187, 1204 (10th Cir. 2009) …………………….. 60
    FTC v. Mandel Brothers, Inc., 359 U.S. 385, 389 (1959) ………………………… 10
    Fyk v. Facebook, Inc., 808 Fed.Appx. 597, 598 (9th Cir. 2020) …………………….. 115, 117
    Grayned v. City of Rockford, 408 U.S. 104, 108 (1972) …………………………… 41
    Herrick v. Grindr LLC, 765 Fed.Appx. 586, 591 (2d Cir. 2019) …………………… 58
    Jane Doe No. 1 v. Backpage.com, LLC, 817 F. 3d 12, 16–21 (1st Cir. 2016) ……… 55
    Jones v. Dirty World Entertainment Recordings LLC, 755 F. 3d 398, 403, 410,
    416 (6th Cir. 2014) …………………………………………………………………… 116
J.W. Hampton v. United States, 276 U.S. 394 (1928) …………………………….. 14, 28, 30, n. 40
    Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269 (9th Cir. 2016) ………………………….. 117
    Kolender v. Lawson, 461 U.S. 352, 357 (1983) ……………………………………. 41
    Lemmon v. Snap, Inc., 440 F. Supp. 3d 1103 (C.D. Cal. 2020) …………………….. 25, 58
    Lindh v. Murphy, 521 U.S. 320, 336 (1997) ………………………………………. 10
    M.A. v. Village Voice Media Holdings, LLC, 809 F. Supp. 2d 1041, 1048
    (ED Mo. 2011) ………………………………………………………………………. 55
Malwarebytes, Inc. v. Enigma Software Group USA, LLC, No. 19-1284, 141 S.Ct. 13 (2020) ……………………………………………………………… passim, Ex. C
Marbury v. Madison, 5 U.S. 137, 177 (1803) ……………………………………. 2, 127
    Members of City Council of Los Angeles v. Taxpayers for Vincent, 466 U.S.
    789, 801 (1984) ……………………………………………………………………….. 67
    Miller v. California, 93 S.Ct. 2607 (1973) …………………………………………… n. 59
Mistretta v. United States, 109 S.Ct. 647 (1989) …………………………………… 15, 24, n. 40, 35, 45
    National Federation of Independent Business, et al. v. Department of Labor,
    Occupational Safety and Health Administration, et al., No. 21A244 and Ohio,
    et al. v. Department of Labor, Occupational Safety and Health Administration,
et al., No. 21A247, 595 U.S. ___ (Jan. 13, 2022) ………………………….. passim
    Park ‘N Fly, Inc. v. Dollar Park & Fly, Inc., 469 U.S. 189, 197 (1985) ………….. 10, 106
    R.A.V. v. St. Paul, 505 U.S. 377, 382-86 (1992) …………………………………… n. 52-53
    Roberts v. Sea-Land Servs., Inc., 566 U.S. 93, 100 (2012) ……………………….. 10
    Russello v. United States, 104 S.Ct. 296 (1983) ……………………………………… 113
Sikhs for Justice, Inc. v. Facebook, Inc., 697 Fed.Appx. 526 (9th Cir. 2017) ……… 25, 49, 103
    Skilling v. U.S., 130 S.Ct. 2896 (2010) …………………………………………….. n. 46
    State v. Conyers, 719 N.E.2d 535, 538 (Ohio 1999) ………………………………. 10
    Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710, *3
    (Sup. Ct. NY May 24, 1995) …………………………………………………………. 107-108
    Sunshine Anthracite Coal Co. v. Adkins, 310 U.S. 381, 398 (1940) ………………. n. 40
    U.S. v. Shreveport Grain & Elevator Co., 287 U.S. 77, 85 (1932) …………………. n. 40
    U.S. v. Stevens, 559 U.S. 460 (2010) ………………………………………………. 67
    U.S. v. Williams, 553 U.S. 285, 301–302 (2008) …………………………………….. 67-68
    Virginia v. Hicks, 539 U.S. 113, 122 (2003) …………………………………………. 68
Wayman v. Southard, 23 U.S. 1, 43 (1825) ………………………………………. 7, 18, 32, 36, n. 40, n. 41
    W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943) ……………………. 72
    Yakus v. U.S., 321 U.S. 414, 424–425 (1944) ……………………………………… n. 44
    Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169 (9th Cir. 2009) ……………….. 44
Zeran v. America Online Inc., 129 F.3d 327 (4th Cir. 1997) ……………………… 11, 13, 46, 103-104, 106, 111, 113-114, 119, 127
    Codes
    Title 47, United States Code, Section 230 ……………………………………….. passim
    Title 28, United States Code, Section 2201 ……………………………………… 1
    Title 28, United States Code, Section 1331 ……………………………………… 5
    Title 28, United States Code, Section 1391 ……………………………………… 5
    Title 5, United States Code, Section 706 ………………………………………… 23, 38
    Title 47, United States Code, Section 154 ……………………………………….. 24, 38
    Title 5, United States Code, Sections 551-559 …………………………………… 39
Title 28, United States Code, Section 2412 ………………………………………. 134, 135
    Federal Rules
    Federal Rule of Civil Procedure 5.1 ………………………………………………. 1
    Federal Rule of Civil Procedure 57 ……………………………………………….. 1
    Constitution
    Article I ……………………………………………………………………………. 28-29
Fifth Amendment ………………………………………………………………….. passim, n. 3
First Amendment ………………………………………………………………….. passim, n. 4
    UNITED STATES DISTRICT COURT
    FOR THE DISTRICT OF COLUMBIA

JASON FYK,
                              Plaintiff,
vs.                                                        CASE NO.:
UNITED STATES OF AMERICA,
                              Defendant.
COMPLAINT FOR DECLARATORY JUDGMENT REGARDING TITLE 47, UNITED
STATES CODE, SECTION 230 (THE COMMUNICATIONS DECENCY ACT)
In this constitutional challenge to Title 47, United States Code, Section 230 (the
Communications Decency Act, “CDA” or “Section 230”), Plaintiff, Jason Fyk (“Fyk”), by and
through undersigned counsel, sues Defendant, the United States of America (“USA”), as follows:
NATURE OF THE ACTION, PARTIES, JURISDICTION, AND VENUE
1. Pursuant to Federal Rule of Civil Procedure 5.1 and Title 28, United States Code,
Section 2201 (Federal Rule of Civil Procedure 57), this is a constitutional challenge of the CDA
seeking this Court’s declaratory judgment that the CDA is unconstitutional and accordingly
inoperative. 1, 2
2. Fyk seeks a declaration that the CDA (primarily, Section 230(c)) is unconstitutional
because it deprives American citizens (through private commercial entities acting with federal
government delegated authority) of their (a) liberties and property without due process, in violation of the Fifth Amendment; 3 and (b) free speech rights, in violation of the First Amendment. 4

1 The full text of the CDA, entitled Protection for private blocking and screening of offensive material, is attached hereto as Exhibit A for the Court’s ease of reference, and Exhibit A is incorporated fully herein by reference.

2 Alternative relief to a finding from this Court as to the CDA’s unconstitutionality is discussed mainly in ¶¶ 4 and 329 and n. 107, infra, but intermittently throughout this filing.
3. Section 230, on its face and / or as applied, 5 violates the Non-Delegation / Major
Questions, Void-for-Vagueness, and Substantial Overbreadth Doctrines. Section 230 also violates
the Harmonious-Reading, Irreconcilability, Whole-Text, Surplusage, and Absurdity Canons of
statutory construction.
4. This Court has the ability to strike down laws on the grounds that they are
unconstitutional, a power reserved to the courts through judicial review. See Marbury v. Madison,
5 U.S. 137, 177 (1803) (“[i]t is emphatically the province and duty of the judicial department to
say what the law is”). And this is precisely the declaratory judgment Fyk respectfully requests
from this Court here – striking aspects of the CDA as unconstitutional. Alternatively, the Court
has the ability to rein in Section 230 by narrowly conforming the application of Section 230
consistent with the legislative intent, constitutional tenets / mandates, and / or the CDA’s actual
language. In this case, Fyk challenges the inconsistent judicial construction of the limits of online
providers’ Section 230(c) immunity. 6 Fyk seeks a declaration that the CDA’s immunity should be
struck consistent with the Constitution or, alternatively, if judicial interpretation by the Court can cure the deficiencies of CDA immunity, the alternative declaration that Fyk seeks is this Court’s clarification of the proper scope of Section 230(c) immunity. 7 See, e.g., ¶ 329(a)-(d) and n. 107, infra.

3 The Fifth Amendment reads as follows:

   No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.

Id.

4 The First Amendment reads, in pertinent part, as follows: “Congress shall make no law … abridging freedom of speech, or of the press … and to petition the government for a redress of grievances.” Id.

5 Constitutional challenges are typically classified as “as applied” challenges or “facial” challenges. This constitutional challenge is both.

6 The breadth of Section 230 immunity has been unchecked and expanded by courts (mainly courts within the Ninth Circuit, including the Ninth Circuit Court, where many social media companies have their principal place of business) over the last twenty-six years.

7 At present, there is no limit to Big Tech’s CDA immunization; and, worse, the judicial construction of immunity limits varies tremendously from one jurisdiction to another, making the CDA’s application and effect extremely inconsistent and arbitrary despite the Internet not recognizing geographic bounds.
5. The CDA enables a private actor (Interactive Computer Service, “ICS,” as defined by Section 230(f); e.g., Facebook, Google, Twitter, PayPal, Snapchat, et cetera, which can
also be rightly categorized as “online providers,” “Big Tech,” “Social Media Giants,” or the like)
to “police,” “regulate,” “enforce,” and / or “penalize” offensive speech under supposed CDA
authority and protection so long as the ICS acts as a “Good Samaritan.” In a separate, independent
action (discussed further below), the CDA was applied (at least so far; this separate, independent
action is presently up on a second appeal to the Ninth Circuit Court) to immunize the commercial
activities of a private actor (Facebook, Inc., “Facebook”) against a commercial competitor (Fyk)
without a showing that the private actor (Facebook) was acting as a “Good Samaritan” or in “good
faith” or legally. See Fyk v. Facebook, Inc., No. 4:18-cv-05159-JSW (N.D. Cal.) / Fyk v. Facebook,
Inc., No. 19-16232 (9th Cir.) / Fyk v. Facebook, Inc., No. 21-16997 (9th Cir.) (the “Facebook
Lawsuit”). 8 More specifically, the overly “broad construction” of the unconstitutional CDA that
has “confer[red] sweeping immunity on some of the largest companies in the world” 9 (i.e., Big
Tech) substantially harmed Fyk by stripping him of his constitutionally protected Fifth Amendment due process rights in relation to the Facebook Lawsuit, and, also, his constitutionally protected First Amendment free speech rights.

8 In the Facebook Lawsuit, Fyk was denied due process rights after Facebook stripped him of his liberties and property (property rights demonstrably valued in excess of $100,000,000.00) by way of powers delegated by the federal government to Facebook (a self-interested commercial private entity acting under the aegis of government authority) by the authority of the CDA vested in private actors and sanctioned by various courts’ implementation of a sweeping application of Section 230 immunity protection to the anti-competitive actions of Facebook, which would otherwise be unlawful. See Fyk v. Facebook, Inc., No. 20-632 (2020), Fyk Nov. 2, 2020, Petition for a Writ of Certiorari; see also Request for Judicial Notice (“RJN”) Facebook Lawsuit Background, attached hereto as Exhibit B and incorporated fully herein by reference.

9 See Malwarebytes, Inc. v. Enigma Software Group USA, LLC, No. 19-1284, 141 S.Ct. 13 (2020) (wherein Justice Thomas issued a detailed Statement, which has since been cited authoritatively in several cases, concerning the CDA and several things wrong with same, namely the judicial interpretation / application abuse of Section 230(c) that has transpired over the CDA’s twenty-six-year existence); see also Doe v. Facebook, Inc., 595 U.S. ___, 2022 WL 660628 (Mar. 7, 2022). We submit that Malwarebytes is a must read (not to mention that this constitutional challenge cites to same myriad times); thus, Malwarebytes is attached hereto as composite Exhibit C and incorporated fully herein by reference. Also attached as part of composite Exhibit C, because it aligns with his Malwarebytes Statement, is Justice Thomas’ recent Doe Statement.
6. When challenging a law as unconstitutional, the Non-Delegation / Major Questions
Doctrine, Void-for-Vagueness Doctrine, Substantial Overbreadth Doctrine, Harmonious-Reading
Canon, Irreconcilability Canon, Whole-Text Canon, Surplusage Canon, and the Absurdity Canon
all apply to Fyk as well as all citizens. Although one does not need “standing” per se to challenge
the (un)constitutionality of Section 230, Fyk has “standing” (predicated on direct harm suffered as
a result of Section 230) to constitutionally challenge the CDA based on the violation of his specific
liberties and the taking of his specific property without due process guaranteed by the Fifth
Amendment and / or for violation of his free speech guaranteed by the First Amendment – the
Facebook Lawsuit and related Section 230 immunity misapplication.
7. The Plaintiff is Jason Fyk. At all material times, Fyk was a citizen and resident of Cochranville, Pennsylvania and sui juris in all respects. Fyk established the 501(c)(3) organization named “Social Media Freedom Foundation” (https://socialmediafreedom.org/) aimed at restoring freedom online, a freedom to which Fyk has devoted his life for years.
8. The Defendant is the United States of America. At all material times, the highest
legislative bodies of the federal government and the highest court of the federal judiciary were /
are headquartered in the District of Columbia and responsible for the laws of the land (here, Title
47, United States Code, Section 230).
9. This Court possesses original jurisdiction pursuant to Title 28, United States Code, Section 1331, as the action “aris[es] under the Constitution, laws, or treaties of the United States.” Id.

10. Venue is proper in the District of Columbia pursuant to Title 28, United States Code, Sections 1391(b)(1), 1391(b)(2), 1391(e)(1)(A), and 1391(e)(1)(B) since, for examples, (a) a substantial part of the events or omissions giving rise to this action against the USA (e.g., the passage and enactment of and / or maintenance of Section 230) occurred in this judicial district, and (b) the situs of the highest legislative governing bodies and the highest judicial court of the USA is the District of Columbia.

11. All conditions precedent to the institution of this action have occurred, been performed, were futile, and / or were not mandatory.

COMMON ALLEGATIONS

A. Brief Introduction

12. In 1996, in enacting the CDA, a well-intentioned Congress sought to protect an ICS / online provider from liability arising out of the ICS’ voluntary choice to engage in the government’s CDA directive – restriction of offensive online materials (as a “Good Samaritan” and in “good faith”) in an effort to help protect children from harmful web content and / or otherwise rid the Internet of filth. Congress attempted to resolve this Internet indecency issue twenty-six years ago (before many ICSs, e.g., Facebook, Twitter, Instagram, existed) by delegating regulatory “agency” authority (under the CDA’s civil liability protection) directly to private entities (ICS).

13. Among the several challenges to the constitutionality of the CDA advanced in this action is Fyk’s challenge of the CDA’s delegation of authority that permits the discretionary actions of a commercial ICS / online provider to “enforce” the CDA regulatory authority. In the Facebook Lawsuit, it was Facebook “enforcing” CDA regulatory authority against one user (Fyk) while, at the same time, electing not to carry out CDA regulation against another user (Fyk’s competitor) with the exact same content, but with whom Facebook had a pecuniary interest. This discretionary enforcement resulted in the advancement of anti-competitive animus, an animus that cannot, by definition, meet the qualification of “Good Samaritanism” to enjoy entitlement to complete immunity for any and all liability for any malfeasance or illegal conduct.

14. Regulation, penalization, or deprivation in any form, carried out by an authorized government agent (i.e., whether private or public) “to fill up the details” (i.e., fill in the quasi-legislative rules) at the directive of Congress, must afford due process and free speech of the entity or person being policed / regulated. Fyk challenges the constitutionality of Section 230, with the law (currently being wielded by large technology companies, cloaked as delegated state actors, i.e., proxy agents, to deprive constitutionally protected rights, such as due process and free speech, via illegal conduct) being glaringly violative of the multitude of constitutional doctrines and / or canons of statutory construction discussed throughout this filing.

15. The time has come for this Court to scrutinize whether Section 230 is constitutional / lawful and, if it is not, realign the law / United States Code (the scope of Section 230) consistent with the realities of the modern Internet. Section 230 is a congressional delegation of authority, granted to private entities, to regulate / monitor some area of human activity while immunized from civil liability even when commercial motives for the ICS’ censuring of citizens are alleged. Section 230 operates to deprive citizens, including Fyk, of the freedoms ensured by the First and Fifth Amendments of the United States Constitution. This Court is obliged to assess whether Section 230 may coexist with the Constitution. Fyk contends that Section 230’s improper delegation of legislative authority to private commercial enterprises has resulted in a pernicious degradation and unconstitutional abridgment of citizens’ First and Fifth Amendment rights, which cannot withstand judicial scrutiny.

B. Preliminary Statement 10

16. The CDA is an administrative law that provides civil liability protection, in part, when a private entity (ICS) voluntarily undertakes “any action … in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” 47 U.S.C. § 230(c)(2)(A) (emphasis added).

17. An administrative agency (here, an ICS like Facebook, Twitter, or Google, for examples) is “[a] government body authorized to implement legislative directives [e.g., block or screen offensive materials] by developing more precise and technical rules [i.e., ‘fill up the details,’ see, e.g., Wayman v. Southard, 23 U.S. 1, 43 (1825)] than possible in a legislative setting. Many administrative agencies also have [ ] enforcement responsibilities.” 11 Agencies are created through their own organic statutes (e.g., Section 230), and they establish new “laws” (e.g., Facebook’s Community Standards). In so doing, the agencies interpret, administer, and enforce those new “laws.” Generally, administrative agencies are created to protect a public interest (e.g., protect children from harm, such as Internet pornography pursuant to the legislative intent behind the CDA), not to vindicate private rights.

10 The idea of this “Preliminary Statement” is to give the Court a good enough feel for this constitutional challenge before having to deep dive into this matter by way of reading the vast detail that follows this “Preliminary Statement.” Although what follows the “Preliminary Statement” section of this filing is admittedly lengthy, it was / is necessary to not short-shrift since the proper scope of CDA immunity is extraordinarily important and generally misunderstood.

11 Cornell Law School, Administrative Agency, https://www.law.cornell.edu/wex/administrative_agency This publication (along with all other Cornell publications cited in this filing) is attached hereto as composite Exhibit D for the Court’s ease of reference and is incorporated fully herein by reference.

18. In Malwarebytes, Justice Thomas aptly stated (all of Justice Thomas’ Malwarebytes statements were / are apt), in part:

   courts have extended the immunity in §230 far beyond anything that plausibly could have been intended by Congress… Courts have also departed from the most natural reading of the text by giving Internet companies immunity for their own content [i.e., creation and development in part by proxy]… . Courts have long emphasized nontextual arguments when interpreting §230 [i.e., proof-texting], leaving questionable precedent in their wake.

Ex. C, Malwarebytes, 141 S.Ct. at 13-15 (emphasis added, internal citations omitted). It needs to be considered “whether the text of this increasingly important statute [the CDA] aligns with the current state of immunity enjoyed by Internet platforms.” Id. at 14.

19. Justice Thomas is not alone in his Section 230 views advanced in Malwarebytes. The Department of Justice (“DOJ”) came to a very similar conclusion:

   At the same time, courts have interpreted the scope of Section 230 immunity very broadly, diverging from its original purpose. This expansive statutory interpretation, combined with technological developments, has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability. The time has therefore come to realign the scope of Section 230 with the realities of the modern Internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services.

DOJ, Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996, Sept. 23, 2020. 12, 13

20. Most, if not all, cases seeking to expand / surpass Section 230 immunity have relied on (un)twisting the “non-textual,” “questionable” interpretation of the statute. Most Section 230 cases wind up in the same California court system, since nearly all major technology companies reside in Silicon Valley and almost always have forum selection provisions included within their user terms of service (“TOS”).

12 A copy of this DOJ publication is attached hereto as Exhibit E and incorporated fully herein by reference.

13 On May 28, 2020, President Trump entered Executive Order 13925 (“EO”) challenging social media companies’ ability to shield their misconduct behind 230 immunity, which such EO gave way to DOJ’s subsequent review of (and publications concerning) the CDA. A copy of this EO is attached hereto as Exhibit F and incorporated fully herein by reference.

21. California courts (including in the Facebook Lawsuit; again, so far, at least) have consistently failed to address and / or embrace the most natural reading of the CDA’s text by giving Internet companies immunity for their own content and / or conduct, which would otherwise be unlawful. Although Justice Thomas welcomed an “appropriate case,” see Ex. C, Malwarebytes, 141 S.Ct. at 14 (“in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms”) and Ex. C, Doe, 2022 WL 660628 at *2 (“Assuming Congress does not step in to clarify § 230’s scope, we should do so in an appropriate case”), the Supreme Court of the United States (“SCOTUS”) denied Fyk’s Petition for Writ of Certiorari in the Facebook Lawsuit, which addressed (to some degree or another) some of the constitutional doctrines and / or canons of statutory construction at play here. 14

22. A statute must be read as a whole. 15 “[W]e are advised by the Supreme Court that we must give meaning to all statutory terms, avoiding redundancy or duplication wherever possible.” Fair Housing Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1168 (9th Cir. 2008) (citing Park ‘N Fly, Inc. v. Dollar Park & Fly, Inc., 469 U.S. 189, 197 (1985)). 16

14 Congress has not reformed Section 230 in the twenty-six-year existence of the law because, we submit, there is no realistic and comprehensive way to fix Section 230 shy of a complete overhaul; hence, this constitutional challenge is appropriate and necessary.

15 Whole-Text Canon – “The text must be considered as a whole.” Antonin Scalia & Bryan A. Garner, Canons of Construction, at 2, https://www.law.uh.edu/faculty/adjunct/dstevenson/2018Spring/CANONS%20OF%20CONSTRUCTION.pdf This Canons of Construction publication is attached hereto as Exhibit G for this Court’s ease of reference and is incorporated fully herein by reference.

16 Surplusage Canon – “If possible, every word and every provision is to be given effect … . None should be ignored. None should needlessly be given an interpretation that causes it to duplicate another provision or to have no consequence.” Ex. G at 2.

23. The Harmonious-Reading Canon provides that the provisions of a law should be interpreted in a way that renders them compatible, not contradictory: 17 “our task is to fit, if possible, all parts into a harmonious whole.” Roberts v. Sea-Land Servs., Inc., 566 U.S. 93, 100 (2012) (citing FTC v. Mandel Brothers, Inc., 359 U.S. 385, 389 (1959)); see also, e.g., Lindh v. Murphy, 521 U.S. 320, 336 (1997) (courts should “accord more coherence” to disparate statutory provisions where possible). The Irreconcilability Canon provides that “[i]f a [statute] contains truly irreconcilable provisions at the same level of generality, and they have been simultaneously adopted, neither provision should be given effect.” Antonin Scalia & Bryan A. Garner, Reading Law: The Interpretation of Legal Texts, at 189 (2012). 18 If the text of a statute contains “truly irreconcilable provisions,” an irreconcilable conflict is determined to exist and “the next inquiry is whether the provisions at issue are general or specific.” See, e.g., State v. Conyers, 719 N.E.2d 535, 538 (Ohio 1999) (internal citation omitted).

17 Harmonious-Reading Canon – “The provisions of a text should be interpreted in a way that renders them compatible, not contradictory.” Ex. G at 2.

18 See also Ex. G at 2 for this description of the Irreconcilability Canon: “[i]f a text contains truly irreconcilable provisions at the same level of generality, and they have been simultaneously adopted, neither provision should be given effect.”

24. Courts are often asked to consider immunity under isolated statutory subsections (e.g., Section 230(c)(1) or Section 230(c)(2)), without considering Section 230 as a whole. Defendants typically cite questionable out-of-context precedent to introduce the defendants’ understandable bias (because they are defending themselves) into the determination. This is known as “proof-texting:”

   the practice of using [isolated, out-of-context] quotations from a document, either for the purpose of exegesis, or to establish a proposition in eisegesis … [i.e., interpretation of a text by reading into it, one’s own ideas]. Such quotes may not accurately reflect the original intent of the author [e.g., Congress], and a document quoted in such a manner, when read as a whole, may not support the proposition for which it was cited. 19

When read as a whole, many cases are not harmonious or reconcilable with the “Good Samaritan” intelligible principle / general directive / general provision of Section 230.

19 Wikipedia, Prooftext, https://en.wikipedia.org/wiki/Prooftext This Wikipedia article, along with all other Wikipedia articles cited herein, is attached hereto as composite Exhibit H and incorporated fully herein by reference. Wikipedia is, of course, not an authoritative citation source; but, Wikipedia often does a nice job of distillation and / or simplification. And, so, we utilize Wikipedia a bit throughout this filing, mainly as to subjects / concepts that are not too complicated and / or that only require generalized understanding in relation to the reason(s) for citation in this filing.

25. Section 230’s “harmonious-reading” went astray as early as 1997. In Zeran v. America Online Inc., 129 F.3d 327 (4th Cir. 1997), the first appellate court to consider the statute erroneously held that, although the text of Section 230(c)(1) grants immunity only from “publisher” or “speaker” liability, it eliminates distributor liability too; that is, Section 230 confers immunity even when a company distributes content that it knows is illegal. This determination (without considering Section 230 as a whole) eliminated all liability (i.e., both active publishing and passive distribution), thus swallowing the purpose of the “very next subsection, which governs removal of content, §230(c)(2).” Ex. C, Malwarebytes, 141 S.Ct. at 16. The Zeran decision rendered 230(c)(2) mere “surplusage” (i.e., redundant / superfluous) 20 as early as 1997, and courts have spent more than two decades trying to reconcile this mistaken application of Section 230(c)(1) and / or otherwise trying to put forth a clear meaning and / or application of Section 230; largely to no avail. 21 Under the most harmonious, reconcilable reading of the statute, Section 230(c)(1) applies to passive distributor liability protection (i.e., a platform / host – omission of action) and Section 230(c)(2) applies to active distributor liability protection (i.e., publisher liability protection when blocking and screening offensive material, so long as such blocking and screening is done in “good faith” and as a “Good Samaritan”). If the interpretation / application were to be kept as narrow / simple as the preceding sentence, the CDA could possibly work as is (although, we submit that, even still, Section 230 would be constitutionally infected); hence, the alternative relief Fyk seeks in this CDA challenge (see, e.g., ¶ 4, supra, ¶ 329, infra, and n. 107, infra).

20 See n. 16, supra.

21 “Largely” because, as discussed below, the Ninth Circuit Court has started to come around at least with respect to the “Good Samaritan” threshold CDA immunity analysis within an anti-competitive animus setting. See, e.g., Enigma Software Group USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040 (9th Cir. 2019). For a discussion as to the appropriate interpretation / application of the Ninth Circuit Court’s Enigma decision, this Court is invited to review Fyk’s March 3, 2022, filing in Fyk v. Facebook, Inc., No. 21-16997 (9th Cir.) of the Facebook Lawsuit.

26. Section 230 enables (under civil liability protection / immunity) an ICS to “voluntarily” act at the prerogative of Congress to block and screen information that it “considers” “objectionable,” but it must follow (in “good faith” and as a “Good Samaritan”) the obligation (i.e., the intelligible principle / general directive / general provision laid down by Congress) articulated in the statute if it is to be afforded liability protection. When an ICS “considers” information, it is acting in a traditional editorial role. Section 230(c)(2) limits (i.e., applicable narrowed provision) that editorial role to the exclusion of material. 22 The presently broken CDA, however, allows an ICS the editorial ability to decide what content is made available (i.e., advanced – developed in part / provided). Development, in whole or in part, is the role of an Information Content Provider (“ICP”) by definition under Section 230(f)(3); thus, the ICS’ role as an information content restrictor also allows the ICS to act as an ICP who can “knowingly distribute” unlawful information under civil liability protection. This is at odds with the “Good Samaritan” general provision (i.e., intelligible principle / general directive / general provision) of the statute and creates an irreconcilable conflict between Sections 230(c)(2) and 230(c)(1) and the Section 230(f)(3) definition of an ICP. Information “consideration” (i.e., restriction and development in part) gave rise to the mistaken Zeran decision. Any information that is “considered” (i.e., active editorializing) and “allowed” (i.e., not restricted – knowingly chosen, advanced, or developed) by an ICS (even in part) must be subject to civil liability (if not done as a “Good Samaritan”) or, as a result, all distribution / publishing liability is eliminated, including unlawful distribution / publishing (i.e., knowingly causing harm). The statute cannot be reconciled in a way that distinguishes between “development by proxy” (as a result of content restriction consideration) and “development in part” (as a result of information content provision).

22 Section 230(c)(2)(A) provides an ICS with immunity if the ICS acts upon another’s (an Information Content Provider’s) impermissible content in “good faith.” Section 230(c)(2)(B) provides an ICS with immunity if the ICS does not directly take action upon another’s (an Information Content Provider’s) content but instead provides another Information Content Provider with the tools / services needed to appropriately act upon yet another Information Content Provider’s materials; e.g., where an ICS provides Information Content Provider #1 / user #1 (e.g., a parent) with the tools / services needed by that parent to act upon (restrict) offensive materials posted by Information Content Provider #2 / user #2 so as to protect Information Content Provider #1’s child from harm, for example, the ICS (a Facebook, for example) enjoys a “no action” immunity akin to that of Section 230(c)(1), which is why the express language of Section 230(c)(2)(B) relates back to Section 230(c)(1).

27. Contrary to popular belief, restricting users’ materials online, under the supposed protection of Section 230, is not a voluntary choice to act privately (i.e., without obligation or consideration); instead, it is the voluntary choice to act under the directive of Congress (i.e., state directed action) to restrict statutorily specified (i.e., 230(c)(2)(A)) offensive materials. Webster’s dictionary defines the word “voluntary” as follows: “done by design or intention; acting or done of one’s own free will without valuable consideration or legal obligation.” 23 Put differently, a provider or user cannot take any voluntary action whatsoever in a private capacity and still somehow enjoy CDA immunity; rather, a provider or user is authorized by (i.e., delegated by) the state to engage in Internet content policing as a state actor via “Good Samaritan” intelligible principle / general directive / general provision and in a “good faith” fashion. Put yet another way, a private actor cannot seek Section 230 civil liability protection for any and all private / commercialized activity because, if a provider or user seeks “protection” (i.e., the consideration), it must have taken its action under the legal obligation (i.e., as a state actor at the prerogative of Congress) to block and screen offensive material. The term “voluntarily” (a private function) is irreconcilable with Section 230’s own obligatory / induced governmental function – Section 230 is an irreconcilable “voluntary mandate” (i.e., governmentally induced private function), as the phrase “voluntary mandate” is a prima facie oxymoron.

23 Merriam-Webster Dictionary, Voluntary, https://www.merriam-webster.com/dictionary/voluntarily For this Court’s ease of reference, a copy of all Webster’s Dictionary definitions utilized throughout this filing is attached hereto as composite Exhibit I and incorporated fully herein by reference. Exhibit I provides definitions in the order in which this filing utilizes such definitions.

28. The Non-Delegation Doctrine:

   … is a principle in administrative law that Congress cannot delegate its legislative powers to other entities [e.g., Section 230’s ‘voluntary’ option to engage in a government mandate]. This prohibition typically involves Congress delegating its powers to administrative agencies or to private organizations [ICSs]. In J.W. Hampton v. United States, 276 U.S. 394 (1928), the Supreme Court clarified that when Congress does give an agency the ability to regulate, Congress must give the agencies an ‘intelligible principle’ on which to base their regulations. In A.L.A. Schechter Poultry Corp. v. United States, 295 U.S. 495 (1935), the Supreme Court held that ‘Congress is not permitted to abdicate or to transfer to others the essential legislative functions with which it is thus vested.’ 24 The Supreme Court has recognized that Congress could not delegate powers that were ‘strictly and exclusively legislative.’ Chief Justice John Marshall laid the groundwork for the ‘intelligible principle’ standard that governs non-delegation cases today. Marshall stated that if Congress delegates quasi-legislative powers to another body, it must provide a ‘general provision’ by which ‘those who act’ can ‘fill up the details.’ Therefore, Congress cannot give an outside agency free reign to make law, but it can authorize the agency to flesh out the details of a law Congress has already put in place. This became known as providing an ‘intelligible principle’ to which the agency is instructed to conform. The ‘intelligible principle’ could be anything in the ‘public interest, convenience, or necessity’ or considered ‘just and reasonable.’ Being put in such subjective terms gives agencies vast discretion when enacting new rules. 25 The Court has contrasted the delegation of authority to a public agency, which typically is required to follow established procedures in building a public record to explain its decisions and to enable a reviewing court to determine whether the agency has stayed within its ambit and complied with the legislative mandate, with delegations to private entities, which typically are not required to adhere to such procedural safeguards. 26

24 Cornell Law School, Nondelegation Doctrine, https://www.law.cornell.edu/wex/nondelegation_doctrine For this Court’s ease of reference, a copy of this publication is found in composite Exhibit D.

25 US Legal, Intelligible Principle Law and Legal Definition, https://definitions.uslegal.com/i/intellligible-principle/ For this Court’s ease of reference, a copy of this publication (along with other US Legal definitional publications) is attached hereto as composite Exhibit J and is incorporated fully herein by reference.

26 Constitution Annotated, The Nature and Scope of Permissible Delegations, https://constitution.congress.gov/browse/essay/artI-S1-1%202/ALDE_00000010/%5b’declaration’,%20’of’,%20’independence’%5d A copy of this publication is attached hereto as Exhibit K and incorporated fully herein by reference.

29. In Mistretta v. United States, 109 S.Ct. 647 (1989), Justice Scalia warned that “the scope of delegation is largely uncontrollable by the courts, we must be particularly rigorous in preserving the Constitution’s structural restrictions that deter excessive delegation [i.e., Section 230]. The major one, it seems to me, is that the power to make law cannot be exercised by anyone other than Congress, except in conjunction with the lawful exercise of executive or judicial power.” Id. at 678 (emphasis added).

30. Section 230 grants administrative agencies (here, private entities / ICSs), under the “Good Samaritan” intelligible principle / general directive / general provision, the authority to create any rule / “law” the ICS deems to be “in the public interest,” solely relying on the agency’s (here a private entity’s) own views and policy agenda rather than requiring Congress to set forth objective guidelines. This kind of unchecked power vested in private entities (with ulterior motives) cloaked with the imprimatur of “Good Samaritan” immunity is exploitable, reckless, and dangerous.

31. Dovetailing with the Non-Delegation Doctrine is the Major Questions Doctrine, which was recently addressed by the SCOTUS in National Federation of Independent Business, et al. v. Department of Labor, Occupational Safety and Health Administration, et al., No. 21A244 and Ohio, et al. v. Department of Labor, Occupational Safety and Health Administration, et al., No. 21A247, 595 U.S. ___ (Jan. 13, 2022).
32. Justice Gorsuch’s concurring opinion (joined in concurrence by Justice Thomas and
Justice Alito) in the aforementioned January 13, 2022, COVID-19 mandatory vaccination
SCOTUS cases did a nice job in fundamentally recognizing what needs to be fundamentally
recognized here – we need to bring independent agencies (like the Occupational Safety and Health
Administration, “OSHA”) back under the control of Congress so that they do not become a fourth
branch of government. Precisely our point as it relates to the private entity government agencies
(which “private entity government agency” should be an oxymoron in and of itself) that are large
technological companies in relation to the “enforcement” of the CDA.
33. In the aforementioned cases, it was appropriate for the SCOTUS to rein in the likes
of the OSHA with respect to its attempt to carte blanche mandate COVID-19 vaccination in certain
settings. Similarly, here, private social media commercial enterprises function as quasi-
governmental agencies (like OSHA) that have to be controlled / reined in (or stripped of carte
blanche Section 230 immunization / civil liability protection).
34. Justice Gorsuch’s concurring opinion in the aforementioned recent SCOTUS cases
included discussion of the Major Questions Doctrine tied to the aforementioned (and also below
discussed) Non-Delegation Doctrine.
35. The Major Questions Doctrine is conceptually as follows: “We expect Congress to
speak clearly if it wishes to assign to an executive agency decisions of vast economic and political
significance.” Id. at 2 (internal citation omitted). 27
36. Justice Gorsuch’s discussion of the “Major Questions Doctrine” specifically relates same to the “Non-Delegation Doctrine:”
In this respect, the major questions doctrine is closely related to what is sometimes
called the nondelegation doctrine. Indeed, for decades courts have cited the
nondelegation doctrine as a reason to apply the major questions doctrine. … Both
are designed to protect the separation of powers and ensure that any new laws
governing the lives of Americans are subject to the robust democratic processes the
Constitution demands.
Id. at 4 (internal citation omitted). The new “laws” created by large technology companies
“govern[ ] the lives of [millions of] Americans [and must be] subject to the robust democratic
processes the Constitution demand,” like due process and free speech. Anybody sane recognizes
that the “laws” created by large tech companies do anything but ensure constitutional freedoms.
37. Applied here, and put more simply, CDA immunity implicates major questions
concerning due process, freedom of speech, et cetera. 28
38. Justice Gorsuch aptly continued:
The major questions doctrine serves a similar function [to the non-delegation
doctrine] by guarding against unintentional, oblique, or otherwise unlikely
delegations of the legislative power. Sometimes, Congress passes broadly worded
statutes seeking to resolve important policy questions in a field while leaving an
agency to work out the details of implementation. … Later, the agency may seek to
exploit some gap, ambiguity, or doubtful expression in Congress’s statutes to
assume responsibilities far beyond its initial assignment. The major questions
doctrine guards against this possibility by recognizing that Congress does not
usually ‘hide elephants in mouseholes.’
Id. at 5 (internal citations omitted) (emphasis added).
27 The concurring opinion cited herein has its own set of page numbers starting at page one.

28 Any law (i.e., Section 230) that results in the deprivation of life, liberty, and / or property sans due process (e.g., the deprivation experienced thus far by Fyk within the Facebook Lawsuit) is legally untenable straightaway.
39. First, the CDA is broadly worded and we have a related doctrine called
“Substantial Overbreadth” directly at play here, discussed below. Second, the well-being of the
worldwide web and protecting (i.e., immunizing) those who legitimately engage in trying to
preserve a healthy Internet (in “good faith” and as a “Good Samaritan”) is very “important policy,”
especially in the ever-burgeoning dot.com era. Third, private actors (like Facebook, Google,
Twitter, et cetera) indeed have tried to exploit (and have largely succeeded in exploiting, thus far,
as illustrated by cases like the Facebook Lawsuit) gaps and / or ambiguities in the vague CDA. So,
just as the below Substantial Overbreadth Doctrine section ties in, so too does the Void-for-
Vagueness Doctrine (also discussed below). Fourth, in the same vein of exploitation, large
technology companies have taken the CDA “far beyond” what Congress originally could have
plausibly intended.
40. The SCOTUS concurring opinion in the aforementioned COVID-19 vaccination decision(s) continued:
Whichever the doctrine, the point is the same. Both serve to prevent ‘government
by bureaucracy supplanting government by the people.’ … And both hold their
lessons for today’s case. On the one hand, OSHA claims the power to issue a
nationwide mandate on a major question but cannot trace its authority to do so to
any clear congressional mandate. On the other hand, if the statutory subsection the
agency cites really did endow OSHA with the power it asserts, that law would likely
constitute an unconstitutional delegation of legislative authority. Under OSHA’s
reading, the law would afford it almost unlimited discretion – and certainly impose
no ‘specific restrictions’ that ‘meaningfully constrai[n]’ the agency. … OSHA
would become little more than a ‘roving commission to inquire into evils and upon
discovery correct them.’ A. L. A. Schechter Poultry Corp. v. United States, 295 U.
S. 495, 551 (1935) (Cardozo, J., concurring). Either way, the point is the same one
Chief Justice Marshall made in 1825: There are some ‘important subjects, which
must be entirely regulated by the legislature itself,’ and others ‘of less interest, in
which a general provision may be made, and power given to [others] to fill up the
details.’ Wayman v. Southard, 10 Wheat. 1, 43 (1825). And on no one’s account
does this mandate qualify as some ‘detail.’ The question before us is not how to
respond to the pandemic, but who holds the power to do so. The answer is clear:
Under the law as it stands today, that power rests with the States and Congress, not
OSHA. In saying this much, we do not impugn the intentions behind the agency’s
mandate. Instead, we only discharge our duty to enforce the law’s demands when
it comes to the question who may govern the lives of 84 million Americans.
Respecting those demands may be trying in times of stress. But if this Court were
to abide them only in more tranquil conditions, declarations of emergencies would
never end and the liberties our Constitution’s separation of powers seeks to preserve
would amount to little.
Id. at 6-7 (some internal citations omitted) (emphasis added). Spot on, we could just swap out
“OSHA” with “ICS,” “Facebook,” “Twitter,” or “Google,” for examples, and come to an identical
SCOTUS holding in relation to the CDA.
41. The Internet is an indispensable aspect of life for most people in this day and age
and is much more than just some “detail.” This Court must make clear in this constitutional
challenge that the power to control / govern the daily lives (because, again, for most, the Internet
is an indispensable part of everyday life; i.e., inextricably woven into the fabric of everyday life)
of hundreds of millions of people in America and billions of people worldwide is not limitless. 29
42. The design of the CDA is Internet regulation by way of “blocking and screening
of offensive material.” The CDA contemplates protecting the “Good Samaritan” (whether that be
the user / ICP or the online provider / ICS) who engages in the regulation that is “blocking and
screening of offensive materials.”
43. Despite the CDA’s “Good Samaritan” requirement, however, courts are deferring
to Big Tech without requiring a threshold showing of the private actor’s entitlement to “Good
Samaritan” status even where (e.g., the Facebook Lawsuit) the allegation against the private actor
is that it acted with anti-competitive motives. 30
29 This “Major Questions Doctrine” dovetails into different forms of deference; e.g., Chevron deference, Skidmore deference, Mead deference, and Auer deference.

30 Fyk’s pending second Ninth Circuit Court appeal relates to, in large part, such an anti-competitive animus setting, questioning the Ninth Circuit Court as to how, under identical circumstances (at least with respect to the anti-competitive animus facet), did the Ninth Circuit Court provide justice to Enigma, but not Fyk? Put differently, how did the Ninth Circuit Court deem Malwarebytes’ anti-competitive animus laden conduct to be not eligible for CDA immunity under the Section 230(c) threshold “Good Samaritan” intelligible principle / general directive / general provision analysis, but determined that Facebook’s anti-competitive animus laden conduct as to Fyk was immune? Cf. Enigma Software Group USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040 (9th Cir. 2019) with the Facebook Lawsuit. As invited in footnote 21, supra, for a discussion as to the appropriate interpretation / application of the Ninth Circuit Court’s Enigma decision, this Court is invited to review Fyk’s March 3, 2022, filing in Fyk v. Facebook, Inc., No. 21-16997 (9th Cir.) of the Facebook Lawsuit.
44. Under the Major Questions Doctrine recently highlighted by the SCOTUS, one
must be a congressionally appointed agency tasked with overseeing a regulatory act / law before a
federal court even begins to consider yielding to one’s interpretation of that statute or regulation.
45.
Big Tech is not an explicitly congressionally appointed “agency” in relation to the
CDA. In enacting the CDA, Congress did not explicitly appoint an overseeing agency (like, for
examples, the Federal Communications Commission, “FCC,” is to the Communications Act of
1934, or like OSHA is to the Occupational Safety and Health Act), and Congress has not
maintained oversight or regulation of the CDA on its own. But in function / in reality / in practice,
somehow Big Tech has absolutely morphed into Congress’ CDA policing agency.
46.
In the absence of congressional oversight as to the application of the CDA, courts
are almost uniformly giving judicial deference to the private parties (e.g., Facebook, Google,
Twitter) to enforce the CDA.
47.
The “Good Samaritan” blocking and screening decision-making, which is Section
230(c) (i.e., all of 230(c), including 230(c)(1) and 230(c)(2)(A) and 230(c)(2)(B)), cannot rightly
be classified as anything less than decision-making of “vast economic and political significance.”
48.
Under the Major Questions Doctrine, Congress had to “speak clearly if it wishe[d]
to assign [ ] executive agency decision[-making] of vast economic and political significance” to
Big Tech. Congress did not; Big Tech “cannot trace its [purported unlimited and unchecked
Internet policing] authority … to any clear congressional mandate.”
49.
So, the Major Questions Doctrine and / or the Non-Delegation Doctrine should be
applied (just like the Substantial Overbreadth Doctrine and the Void-for-Vagueness Doctrine
discussed below, for examples) to ensure preservation of constitutionally protected liberties.
50.
The Void-for-Vagueness Doctrine is:
1) A constitutional rule that requires laws to state explicitly and definitely what
conduct is (in)actionable. Laws that violate this requirement are said to be void for
vagueness. The Void for Vagueness doctrine rests on the due process clauses of
the Fifth and Fourteenth Amendments of the U.S. Constitution. By requiring fair
notice of what is actionable and what is not, the Void for Vagueness Doctrine also
helps prevent arbitrary enforcement of the laws.
2) Under the Void for Vagueness Doctrine, a statute is also void for vagueness if a
legislature’s delegation of authority to judges and/or administrators is so extensive
that it would lead to arbitrary enforcement of the law. 31
51.
The Substantial Overbreadth Doctrine (“Overbreadth” being the shorthand for this
Doctrine):
… provides that a regulation / law can sweep too broadly and prohibit protected
rights. A regulation of speech, for example, is unconstitutionally overbroad if it
regulates a substantial amount of constitutionally protected expression.
Overbreadth is closely related to its constitutional cousin, vagueness. For example,
a regulation of speech is unconstitutionally vague if a reasonable person cannot
distinguish between permissible and impermissible speech because of the difficulty
encountered in assigning meaning to language. 32
Overbreadth doctrine is a principle of judicial review that a law is invalid if it
punishes constitutionally protected speech or conduct along with speech or conduct
that the government [i.e., delegated authority to a private entity] may limit to further
a compelling government interest [e.g., block and screen offensive material]. A
statute that is broadly written [e.g., Section 230(c)(2)(A): “any action voluntarily
taken… to restrict access to or availability of material that the provider or user
considers… whether or not such material is constitutionally protected”] which
31 Cornell Law School, Vagueness doctrine, https://www.law.cornell.edu/wex/vagueness_doctrine For this Court’s ease of reference, a copy of this publication is found in composite Exhibit D.
32 Middle Tennessee State University Law School, Overbreadth, https://www.mtsu.edu/first-amendment/article/1005/overbreadth (internal citations omitted). For this Court’s ease of reference, a copy of this publication is attached hereto as Exhibit L and incorporated fully herein by reference. See also Cornell Law School, Overbreadth, https://www.law.cornell.edu/wex/overbreadth For this Court’s ease of reference, a copy of this publication is found in composite Exhibit D.
deters free expression can be struck down on its face because of its chilling effect
even if it also prohibits acts that may legitimately be forbidden [i.e., actually
offensive]. If a statute is overbroad, the court may be able to save the statute by
striking only the section that is overbroad. If the court cannot sever the statute and
save the constitutional provisions, it may invalidate the entire statute. 33
52.
Section 230’s broad delegation of authority, combined with the courts’ broad
interpretation, enables an ICS to restrict any speech it “considers” “objectionable” (i.e., allowing
development, in part, by proxy), even when the information is “constitutionally protected” /
“permissible” speech. Section 230 is so overbroad that companies like Google, Facebook, and
Twitter, for examples, have effectuated a “chilling effect” (i.e., deterrence) on almost all online
free expression. Being in Google, Facebook, or Twitter “prison” (i.e., denied of one’s liberty or
property, like Fyk in relation to that which is at issue in the Facebook Lawsuit) for purportedly
violating some “vague” Community Standard (i.e., being arbitrarily penalized for some quasi-
legislative “law”) at the sole discretion of a self-interested ICS (without congressional oversight,
uniform enforcement, or judicial review) under the “sovereignly immune” protection of
government (i.e., Congress’ civil liability protection that is Section 230(c)) is repugnant to the
Non-Delegation Doctrine, Major Questions Doctrine, Void-for-Vagueness Doctrine, and
Overbreadth Doctrine (as well as myriad other doctrines and / or canons discussed through this
filing). Regardless of the doctrinal problems of the immunities conferred upon the ICS, the net end
result is the same: deprivation of one’s constitutional rights (e.g., free speech and / or due process).
53.
Section 230’s overly broad misinterpretation / misapplication is an abomination
that has afforded private corporations the unlimited authority to, for examples, eliminate their
competition, dispose of critical thinking, and grant self-interested individuals / companies the
ability to conduct (i.e., under “color” of law) the largest modern-day book burning in the history
33 US Legal, Overbreadth Doctrine Law and Legal Definition, https://definitions.uslegal.com/o/overbreadth-doctrine/ (emphasis added). For this Court’s ease of reference, a copy of this publication is found in composite Exhibit J.
of mankind. Section 230’s vague, overly broad “sovereign” immunization of Big Tech’s unlawful
conduct results in a deep chilling effect on all lawful / permissible speech online and is an all-out
assault on citizens’ due process rights. 34
54.
The Fifth Amendment says to the federal government that no one shall be “deprived
of life, liberty or property without due process of law” 35 … the Internet should not continue to be
the exception. Fyk was personally denied due process by the California courts (again, at least so
far), and the SCOTUS to a lesser degree, when a government authorized and purportedly fully
immunized “proxy agent” (Facebook), voluntarily taking action under the aegis of government
(Section 230), deprived Fyk (which amounts to a government taking) of his liberty and property
without so much as a single hearing on the matter. Again, this is at issue in the Facebook
Lawsuit.
55.
Pursuant to Title 5, United States Code, Section 706, when an agency takes an
agency action (here, the “agency” being a private person / entity),
[t]he reviewing court shall … (2) hold unlawful and set aside agency action,
findings, and conclusions found to be (A) arbitrary, capricious, an abuse of
discretion, or otherwise not in accordance with law; (B) contrary to constitutional
right, power, privilege, or immunity; (C) in excess of statutory jurisdiction,
authority, or limitations, or short of statutory right; (D) without observance of
procedure required by law; … .
5 U.S.C. § 706 – Scope of review.
56.
“Immunity” from suit means there is no reviewing court when an agency (i.e., a
private entity) takes an “agency action.” Simply put, there is no review of any ICS rule, action, or
34 More real-world examples of the havoc Section 230 is wreaking with its carte blanche “sovereign” immunity are provided throughout this challenge, in particular in the below “Overbreadth” section, where real-world harms caused by the broken application of the CDA are shown.
35 Cornell Law School, Due Process, https://www.law.cornell.edu/wex/due_process For this Court’s ease of reference, a copy of this publication is found in composite Exhibit D.
enforced violations (tantamount to “laws” created via government delegation). Section 230 also
lacks any “official agency” qualifications. Cf., e.g., 47 U.S.C. § 154 – FCC. This lack of review
and qualifications prevents virtually all judicial review when a commercial private actor
takes “any action voluntarily,” actions that arbitrarily restrict U.S. citizens’ liberty or property
under government authority.
57.
In other words, just as Justice Scalia warned in Mistretta, Section 230 grants a
private entity (i.e., self-motivated agent) the authority to create any rule / “law” (at least “Internet
law,” as if such a thing even exists, which it should not and really does not under the true law but
does in reality, as illustrated by the livelihood-crushing treatment applied by Facebook to Fyk at issue in the
Facebook Lawsuit) it deems to be “in the public interest.” And Section 230 “sovereignly”
immunizes (i.e., denies due process to folks like Fyk, for example, who doubtless total
in the millions this far into Big Tech’s two-plus decades of abuses of the CDA) any / all actions
“voluntarily” taken when arbitrarily restricting the liberty and / or property of others that it
considers “objectionable,” “whether or not such material is constitutionally protected” (i.e.,
contrary to constitutional doctrines and rights), solely relying on the agency’s own views and
policy agenda rather than requiring Congress to set forth objective guidelines. This may partly (if
not fully) explain why even Mark Zuckerberg has advocated (or at least suggested) at
congressional hearings for Section 230 regulatory oversight via a congressionally appointed
regulatory body / agency.
58.
Section 230, in its current unchecked state, confers carte blanche immunity on all
online providers, even from unlawful or tortious conduct. 36 According to the restrictive theory,
36 Absurdity Doctrine / Canon – “A provision may be either disregarded or judicially corrected as an error (when the correction is textually simple) if failing to do so would result in a disposition that no reasonable person could approve.” For the Court’s ease of reference, see Ex. G at 3.
“the immunity of the sovereign is recognized with regard to sovereign or public acts of a state, but
not with respect to private acts.” 37 In other words, a state should enjoy immunity from suits arising
out of the exercise of its governmental functions (i.e., to block and screen offensive material),
but not from suits arising out of the types of activities in which private parties engage (i.e., entirely
“voluntary” private acts, devoid of obligation or consideration). In contradiction to the restrictive
theory (i.e., “which excludes immunity for private acts such as commercial activities”), Section
230 allows both private function and governmental function, simultaneously. In Barnes v. Yahoo!,
Inc., 570 F.3d 1096 (9th Cir. 2009), the Ninth Circuit Court determined that, “any activity that can
be boiled down to deciding whether to exclude material that third parties seek to post online is
perforce immune under section 230.” Id. at 1102 (internal citation omitted). If the Ninth Circuit
Court were correct (it was not correct in Barnes), that would also include unlawful behavior such
as antitrust and / or anti-competitive action, which has been the aberrant conclusion (thus far) in
the Facebook Lawsuit. All agency actions (especially private acts) cannot logically or legally be
immune from suit. While the Ninth Circuit Court has been right on occasion (e.g., Fair Housing,
Enigma, and Lemmon v. Snap, Inc., 440 F. Supp. 3d 1103 (C.D. Cal. 2020)), the Ninth Circuit
Court has also missed the mark on other occasions (e.g., Barnes, Sikhs for Justice, Inc. v.
Facebook, Inc., 697 Fed.Appx. 526 (9th Cir. 2017), the Facebook Lawsuit), leaving the CDA in a
case law gray zone / no man’s land in addition to the CDA’s constitutionally broken condition.
59.
In Carter v. Carter Coal Co., 298 U.S. 238 (1936), Justice Sutherland aptly wrote:
The power conferred upon the majority [ICS] is, in effect, the power to regulate the
affairs of an unwilling [User]. This is legislative delegation in its most obnoxious
form; for it is not even delegation [Section 230 does not confer power] to an official
or an official body, presumptively disinterested, but to private persons whose
37 The Free Library, Foreign sovereign immunity and comparative institutional competence, https://www.thefreelibrary.com/Foreign+sovereign+immunity+and+comparative+institutional+competence-a0401777155 (internal citations omitted). For the Court’s ease of reference, a copy of this publication is attached hereto as Exhibit M and is incorporated fully herein by reference.
interests may be and often are adverse to the interests of others in the same business
[the Facebook Lawsuit]. … The difference between producing coal [operating an
interactive computer and advertising service] and regulating [restricting] its
production [materials] is, of course, fundamental. The former is a private activity;
the latter is necessarily a governmental function, since, in the very nature of things,
one person may not be [e]ntrusted with the power to regulate the business of
another, and especially of a competitor. And a statute which attempts to confer such
power undertakes an intolerable and unconstitutional interference with personal
liberty and private property. The delegation is so clearly arbitrary, and so clearly a
denial of rights safeguarded by the due process clause of the Fifth Amendment, that
it is unnecessary to do more than refer to decisions of this court which foreclose the
question.
Id. at 311 (citing, inter alia, A.L.A. Schechter Poultry Corp. v. U.S., 295 U.S. 495, 537 (1935)).
60.
Fyk challenges the constitutionality of the CDA’s delegation of regulatory authority
that permits the discretionary restrictive actions of a commercial private entity. This discretionary
enforcement resulted in the advancement of anti-competitive animus against Fyk (and many other
users improperly discriminated against), an animus that cannot, by definition, meet the qualification
(intelligible principle / general directive / general provision) of “Good Samaritanism” to enjoy the
entitlement of complete immunity for any and all liability for any malfeasance or tortious conduct.
Regulation, penalization, or deprivation in any form, carried out by an authorized government
agent (i.e., whether private or public) “to fill up the details” (i.e., fill in the quasi-legislative rules)
at the directive of Congress must afford / not deprive (not even approach infringing upon) the due
process and free speech rights of the entity or person being regulated. Fyk lodges this facial and /
or as applied constitutional challenge of Section 230, with the law being glaringly violative of the
constitutional doctrines and / or canons of statutory construction discussed herein (above and
below), resulting in deprivation of freedoms ensured by the First and Fifth Amendments.
61.
We risk losing the freedoms of this nation, or continuing the heavy abridgement of
freedoms already experienced by way of the CDA over the last twenty-six years, if this Court does
not act on this constitutional challenge to enjoin and put an end to Section 230’s
unconstitutional delegation of regulatory authority and to put a stop to unchecked large commercial
tech entities’ control over online free speech and the free market. 38
C. Constitutional Doctrines Violated By The CDA
62.
The CDA’s constitutional / statutory flaws, as discussed in greater detail below
(doctrines in Section C and canons in Section D), result in the deprivation of constitutionally
guaranteed rights (due process under the Fifth Amendment almost always, and free speech under
the First Amendment quite often). As discussed above, the CDA’s numerous constitutional /
statutory flaws deprived Fyk of his Fifth Amendment and First Amendment rights, resulting in the
economic / livelihood destruction of Fyk, all as illustrated by the Facebook Lawsuit. Moreover,
because CDA-oriented actions are taking place without any transparency, and are being performed
by commercial actors, the CDA has a pernicious effect of allowing private factions to “police” and
censor public participation, expression, and speech without any check on online providers’ plenary
power. For these reasons, this Court should scrutinize the constitutionality of the CDA.

1. Non-Delegation Doctrine / Major Questions Doctrine
63.
America’s growth (technological or otherwise) was inconceivable when the
    Constitution was written. The growth of the Internet was also inconceivable when Section 230 was
    made law in 1996. All this considered, America’s vastness calls for regulation that far exceeds the
    capabilities of Congress.
    64.
    Section 230(c) is an (in)direct congressional grant of authority to private
    commercial enterprises (e.g., ICS, such as Facebook in relation to the Facebook Lawsuit, wherein,
    again, Facebook was a commercial actor in direct competition with Fyk) to self-regulate content
38 See n. 14, supra.
under the aegis of a “communications decency” statute, a role typically left to an
administrative agency, such as the FCC or OSHA in other contexts.
    65.
    When Congress “lay[s] down by legislative act an intelligible principle to which
    the person or body authorized to [exercise the regulatory authority] is directed to conform, such
    legislative action is not a forbidden delegation of legislative power [presumably granted to an
    official body].” J.W. Hampton, 276 U.S. at 409. If a statute contains an articulated “intelligible
principle” / general directive / general provision, we know the statute delegates agency authority, under
    which the body (here, a private entity) is directed (i.e., obligated) to conform in order to receive
    protection (i.e., consideration, which is civil liability protection / immunity in the CDA context).
    66.
    Here, Section 230 contains the “Good Samaritan” intelligible principle / general
    directive / general provision. The intelligible principle is located in Section 230(c) and is
    emphasized by the quotes surrounding the provision. Since the “Good Samaritan” intelligible
    principle exists within the statute, we must conclude that Section 230 is, in fact, an authority
    delegated by Congress for an ICS to voluntarily act on behalf of Congress (i.e., a state directive).
    67.
    Where, as here, Congress abdicates its regulation of law (whether that be the
    enforcement of such law and / or the development of such law by way of things like rule creation;
    e.g., Facebook Community Standards) to private actors who are not bound by administrative
    agency oversight and who are nevertheless somehow enjoying carte blanche “sovereign”
immunity in regard to their regulation of law, such congressional abdication runs afoul of the
Non-Delegation Doctrine. U.S. Const. Art. I, § 1; Art. I, § 8, cl. 18. See, e.g., A.L.A.
Schechter Poultry Corp. v. U.S., 295 U.S. 495, 537 (1935) (Congress cannot delegate legislative
    power to the President to exercise an unfettered discretion to make whatever laws he thinks may
    be needed or advisable for the rehabilitation and expansion of trade and industry); National
    Federation of Independent Business, et al. v. Department of Labor, Occupational Safety and
    Health Administration, et al., No. 21A244 and Ohio, et al. v. Department of Labor, Occupational
    Safety and Health Administration, et al., No. 21A247, 595 U.S. _ (Jan. 13, 2022),
    concurring opinion at 4 (“the major questions doctrine is closely related to what is sometimes
    called the nondelegation doctrine. … Both are designed to … ensure that any new laws governing
    the lives of Americans are subject to the robust democratic processes the Constitution demands,”
    internal citation omitted).
    68.
    The new “laws” (e.g., Facebook Community Standards) created by Big Tech
    “govern[ ] the lives of Americans [and must be] subject to the robust democratic processes the
Constitution demands;” again, like due process and free speech. Again, anybody sane recognizes
    that the “laws” created by Big Tech to “fill up the details” do anything but ensure constitutional
    freedoms.
    69.
    The Non-Delegation Doctrine is a principle in administrative law that Congress
    cannot delegate its legislative powers to other entities in unbridled fashion. 39
    70.
    The Constitution makes clear that legislative function should generally remain
within Congress. See U.S. Const. Art. I, § 1. Our system of government has long held
    that “the integrity and maintenance of the system of government ordained by the Constitution”
    mandates that Congress generally cannot delegate its legislative power to another branch (and
    especially not to a private entity, in creation of a fourth branch). See, e.g., Field v. Clark, 143 U.S.
    649, 692 (1892). It was also recognized, however, that the separation-of-powers principle and the
39 See, e.g., Cornell Law School, Nondelegation Doctrine, https://www.law.cornell.edu/wex/nondelegation_doctrine and Cornell Law School, Administrative Law, https://www.law.cornell.edu/wex/administrative_law . Both of these publications are found in composite Exhibit D.
    Non-Delegation Doctrine do not entirely preclude Congress from obtaining assistance in
    regulating law.
    71.
    Chief Justice Taft explained the approach to such a cooperative venture: “In
    determining what [Congress] may do in seeking assistance from another branch [here, private
    corporations], the extent and character of that assistance must be fixed according to common sense
[not absurdity] and the inherent necessities of the governmental co-ordination.” J.W. Hampton, 276
    U.S. at 406.
    72.
    Succinctly put, the history of the Non-Delegation Doctrine goes like this:
    The SCOTUS has sometimes declared categorically that ‘the legislative power of
    Congress cannot be delegated,’ and on other occasions has recognized more
    forthrightly, as Chief Justice Marshall did in 1825, that, although Congress may not
    delegate powers that ‘are strictly and exclusively legislative,’ it may delegate
    ‘powers which [it] may rightfully exercise itself.’ The categorical statement has
    never been literally true, the Court having upheld the delegation at issue in the very
    case in which the statement was made. The Court has long recognized that
    administration of the law requires exercise of discretion, and that, ‘in our
    increasingly complex society, replete with ever changing and more technical
    problems, Congress simply cannot do its job absent an ability to delegate power
    under broad general directives [i.e., under intelligible principle(s)].’ The real issue
    is where to draw the line. Chief Justice Marshall recognized ‘that there is some
    difficulty in discerning the exact limits,’ and that ‘the precise boundary of this
    power is a subject of delicate and difficult inquiry, into which a court will not enter
    unnecessarily.’ Accordingly, the Court’s solution has been to reject delegation
    challenges in all but the most extreme cases, and to accept delegations of vast
    powers to the President or to administrative agencies. 40
    73.
    The CDA is the extreme case in which unconstitutional authority has been
    delegated, not to the President or even to an official administrative agency, but rather to self-
    interested private parties like Mark Zuckerberg, Jack Dorsey, Sundar Pichai, or to anyone else who
    provides an online service.
40 Cornell Law School, The History of the Doctrine of Nondelegability, https://www.law.cornell.edu/constitution-conan/article-1/section-1/the-history-of-the-doctrine-of-nondelegability (emphasis added) (citing, in this order, U.S. v. Shreveport Grain & Elevator Co., 287 U.S. 77, 85 (1932), Field, Wayman, J.W. Hampton, Mistretta, and Sunshine Anthracite Coal Co. v. Adkins, 310 U.S. 381, 398 (1940)). This publication is found in composite Exhibit D.
    74.
    In delivering the opinion of the SCOTUS in Carter, Justice Sutherland stated, in
    part, as follows:
    [t]he power conferred upon the [ICS] is, in effect, the power to regulate the affairs
    of an unwilling [participant]. This is legislative delegation in its most obnoxious
    form [i.e., an extreme case]; [Section 230 does not confer power] to an official or
    an official body, presumptively disinterested, but to private persons whose interests
    may be and often are adverse to the interests of others in the same business.
    Carter, 298 U.S. at 311 (emphasis added). This is precisely what happened in the above-described
    Facebook Lawsuit, and what has happened to millions of others over the last twenty-six years.
    75.
    Applying the principles in Carter to the Facebook Lawsuit, private corporations
    have been delegated (unconstitutionally) overly broad (i.e., unlimited) nonsensical authority to
    regulate the life, liberty, and / or property of other U.S. citizens under the color of law. Anyone
who seeks to challenge Big Tech’s “legislative” actions, even when those actions are prima facie
unlawful (as was the case in the Facebook Lawsuit), is dismissed pre-merits (i.e., Big Tech is
deemed immune from all civil liability). That is simply absurd and unconstitutional.
    76.
    The action of one that affects the life, liberty, and / or property of another, for
    example, is the epitome of a “major question.” Again, as Justice Gorsuch recently emphasized, the
    Non-Delegation Doctrine and Major Questions Doctrine are often (if not always) intertwined.
    Putting this situation into a Major Questions Doctrine perspective is one instance in this filing
    where elaborating beyond that which is said in the above “Preliminary Statement” section of this
    filing is not necessary; i.e., as it pertains to the application of the Major Questions Doctrine, the
    above Paragraphs 31-49 say all that needs to be said and are accordingly incorporated fully into
    this Section C by reference. That said, a re-write of a particular passage from Justice Gorsuch’s
    (and Justice Thomas’ and Justice Alito’s) concurring opinion in the aforementioned OSHA
    COVID-19 vaccination case(s) is worthwhile in this Section C.
    77.
    Regardless of the doctrine (whether it be the Non-Delegation Doctrine or the Major
    Questions Doctrine):
    the point is the same. Both serve to prevent ‘government by bureaucracy
    supplanting government by the people.’ … And both hold their lessons for today’s
    case. On the one hand, [Big Tech] claims the power to issue … nationwide
    mandate[s] on … major question[s] but cannot trace [their] authority to do so to
    any clear congressional mandate. On the other hand, if the statutory subsection
    [Section 230] [that Big Tech] cites really did endow [Big Tech] with the power
    [they] assert[ ], that law would likely constitute an unconstitutional delegation of
    legislative authority. Under [Big Tech’s] reading, [the CDA] would afford [them]
    almost unlimited discretion – and certainly impose no ‘specific restrictions’ that
    ‘meaningfully constrai[n]’ the agency. … [Big Tech] would become little more
    than a ‘roving commission to inquire into evils and upon discovery correct them.’
    A. L. A. Schechter Poultry Corp. v. United States, 295 U. S. 495, 551 (1935)
    (Cardozo, J., concurring). Either way, the point is the same one Chief Justice
    Marshall made in 1825: There are some ‘important subjects, which must be entirely
    regulated by the legislature itself,’ and others ‘of less interest, in which a general
    provision may be made, and power given to [others] to fill up the details.’ Wayman
    v. Southard, 10 Wheat. 1, 43 (1825). And on no one’s account does [regulation of
    the entire Internet] qualify as some ‘detail.’ The question before us is not how to
    respond to [Internet policing], but who holds the power to do so. The answer is
    clear: Under the law as it stands today, that power rests with the States and
    Congress, not [Big Tech]. In saying this much, we do not impugn the intentions
    behind [Big Tech’s Internet] mandate[s]. Instead, we only discharge our duty to
    enforce the law’s demands when it comes to the question who may govern the lives
    of … million[s] [of] Americans. Respecting those demands may be trying in times
    of stress. But if this Court were to abide them only in more tranquil conditions,
    declarations of emergencies would never end and the liberties our Constitution’s
    separation of powers seeks to preserve would amount to little.
    Id. at 6-7 (some internal citations omitted).
    78.
    Here, although the real-world application of the CDA has somehow resulted in
    online providers having become the Internet policing authority without any apparent exposure to
    civil liability, carte blanche immunity finds no Congressional authority. On the other hand, if the
    CDA could somehow be read to provide online providers with carte blanche “sovereign”
    immunity, such would be an unconstitutional delegation of power. Whether viewed through a Non-
    Delegation Doctrine lens or a Major Questions Doctrine lens, such doctrines are in place to prevent
    government by bureaucracy supplanting government by the people. Here, the use of the CDA’s
    enforcement mechanism has been contorted into government by a private bureaucracy as to all
    things Internet. The resulting effect of Section 230 (in application, at the very least) is that the
    CDA is unconstitutional under the Non-Delegation Doctrine and / or Major Questions Doctrine.
    79.
Section 230 requires agency action if a private entity seeks protection. A private
entity must have voluntarily chosen to act as the regulatory agency, under the intelligible principle
/ general directive / general provision of the statute (i.e., chosen to act as an agent of Congress in
“Good Samaritan” and “good faith” fashion). But the power to determine whether a private entity
is entitled to “Good Samaritan” status cannot be abdicated, and Congress cannot delegate the power
to restrict speech (or deprive due process) to any agent (whether official or private) because it
is not a “power[ ] which [it] may rightfully exercise itself.” 41
    80.
Private actors who seek protection / immunization after voluntarily engaging in
blocking and screening pursuant to Section 230’s mandate are acting as congressional agents, at
least in part. Agencies are created through their own organic statutes (e.g., Section 230), which
establish new laws and, in doing so, create the respective agencies to interpret, administer, and
enforce those new laws. Generally, administrative agencies are created to protect a public interest
rather than to vindicate private rights. 42
    81.
    Section 230 “creates the respective agencies” who “establish new laws” (e.g.,
    Community Standards) amidst zero boundaries concerning such new “laws.” And, then, self-
    interested corporations are left free to “interpret, administer, and enforce” those new “laws”
41 Cornell Law School, The History of the Doctrine of Nondelegability, https://www.law.cornell.edu/constitution-conan/article-1/section-1/the-history-of-the-doctrine-of-nondelegability (citing Wayman v. Southard, 23 U.S. (10 Wheat.) 1, 41 (1825)). A copy of this publication is found in composite Exhibit D.
42 Cornell Law School, Administrative Law, https://www.law.cornell.edu/wex/administrative_law (emphasis added). A copy of this publication is found in composite Exhibit D.
however they see fit without any congressional or judicial oversight, which, as noted above, may
well explain why Big Tech heads (like Mark Zuckerberg) have actually advocated (or at least
suggested) at congressional hearings for Section 230 regulatory oversight. Whether or not that
was just for show from Mr. Zuckerberg does not take away from the fact that such was / is a valid
point / suggestion. Section 230 is repugnant to the Non-Delegation Doctrine and / or Major
    Questions Doctrine. The CDA is the extreme case that must be addressed by this Court, just like
    the extreme OSHA mandatory COVID-19 vaccination case(s) recently addressed by the SCOTUS.
    82.
    In Carter, Justice Sutherland went on to note the difference between private activity
    and governmental function:
    The difference between [providing an interactive computer service] and regulating
    [material] is, of course, fundamental. The former is a private activity; the latter is
    necessarily a governmental function, since, in the very nature of things, one person
    may not be [e]ntrusted with the power to regulate the business of another, and
    especially of a competitor. And a statute which attempts to confer such power
    undertakes an intolerable and unconstitutional interference with personal liberty
    and private property. The delegation is so clearly arbitrary, and so clearly a denial
    of rights safeguarded by the due process clause of the Fifth Amendment, that it is
    unnecessary to do more than refer to decisions of this court which foreclose the
    question.
    Carter, 298 U.S. at 311 (emphasis added) (citing, inter alia, A.L.A. Schechter).
    83.
Traditional editorial activity is voluntary private activity; but, when a private entity
acts at the prerogative of Congress under protection, it is not acting privately (i.e., voluntarily);
it is acting as an agent of government engaged in required / obligatory activity.
    84.
Congressional delegation of authority to an ICS (a commercial enterprise) that
operates without transparency or the safeguards of agency oversight undertook an intolerable and
    unconstitutional interference with Fyk’s personal liberty and private property. Fyk lost hundreds
    of millions of dollars without due process when Facebook, Fyk’s competitor, acting under the
    color of “congressional CDA authority,” stripped Fyk of his livelihood. See the Facebook Lawsuit.
    85.
    This kind of delegation by Congress to commercial actors is “clearly arbitrary,”
    resulting in a deprivation of Fyk’s due process rights (and free speech rights, for that matter) from
which he has thus far been unable to obtain relief in California’s federal court system (both the
    Northern District of California Court and the Ninth Circuit Court).
    86.
Section 230 does not confer power on “a government body” required to adhere to
procedural safeguards; but, instead, on private entities who are not required to adhere to the same
procedural safeguards as the government body would be. This is the fundamental reason a private
entity cannot be delegated regulatory agency authority: no safeguards exist, and a company’s
decision to deny one’s life, liberty, and / or property cannot be challenged in court (i.e., lacks
due process) even when the entity, acting under government authority, regulates to its own benefit
or pursuant to its own motivation.
    87.
    In Mistretta, Justice Scalia warned that where (as here with the CDA):
    the scope of delegation is largely uncontrollable by the courts, we must be
    particularly rigorous in preserving the Constitution’s structural restrictions that
    deter excessive delegation. The major one, it seems to me, is that the power to make
    law cannot be exercised by anyone other than Congress, except in conjunction with
    the lawful exercise of executive or judicial power.
    Mistretta, 488 U.S. at 416-417. Section 230 enables Big Tech to create “law” (e.g., Facebook’s
    Community Standards).
    88.
    The CDA “has effectively allowed Congress to grant administrative agencies the
    authority to create any rules they deem to be in the public interest, solely relying on the agency’s
    own views and policy agenda rather than requiring Congress to set forth objective guidelines.” 43
43 Dunigan, M., St. John’s University School of Law, The Intelligible Principle: How It Briefly Lived, Why It Died, and Why It Desperately Needs Revival In Today’s Administrative State, https://scholarship.law.stjohns.edu/lawreview/vol91/iss1/7/ For the Court’s ease of reference, a copy of this publication is attached hereto as Exhibit N and incorporated fully herein by reference.
    89.
    Section 230’s authority is largely (if not entirely) uncontrollable and it grants the
functional equivalency of an administrative agency to online providers (e.g., Facebook, Google,
    Twitter, et cetera) with the authority to create any rules they deem to be in the “public interest,”
    solely relying on the quasi-agency’s own views and policy agenda (which rarely, if ever, comport
    with public interest) rather than requiring Congress to set forth objective guidelines.
    90.
    Where (as here) delegation has gone to private individuals / entities rather than a
    public official, such is acceptable if Congress has sufficiently marked the field within which an
    administrator may act so it may be known whether the private individual / entity has kept within
    the so-marked boundaries in compliance with the legislative will. That is not the case with the
    CDA because there are no checks and / or balances on whether the online providers’ conduct and
    activities (which can be completely hidden and proprietary, such as algorithms only accessible
    from the private provider’s exclusive purview) are operating within the parameters of legislative
    will. Most legal cases challenging the legitimacy of the online providers’ actions are summarily
    dismissed based on CDA immunity before the merits are even heard or subjected to discovery.
    91.
Thus, the “Good Samaritan” intelligible principle / general directive / general
provision was laid down by Congress, but Section 230 lacks any material safeguards that ensure the
“enforcers” act within the legislative standards or general directives of Congress. Without
    safeguards, a self-interested company is more inclined to exploit even the most basic directives
    (e.g., to act as a “Good Samaritan” in “good faith”) for its own self-benefit.
    92.
    “The line has not been exactly drawn which separates those important subjects,
    which must be entirely regulated by the legislature itself, from those of less interest, in which a
    general provision may be made, and power given to those who are to act under such general
provisions to fill up the details.” Wayman, 23 U.S. at 20. While Chief Justice Marshall’s distinction
    may have been lost over time, the theory of the power “to fill up the details” remains current, most
    recently discussed in the Justice Gorsuch (and Justice Thomas and Justice Alito) concurring
    opinion in the above-mentioned OSHA COVID-19 vaccination case(s) through the Major
    Questions Doctrine lens.
    93.
Per a government publication:
    The second principle underlying delegation law is a due process conception that
    undergirds delegations to administrative agencies. The Court has contrasted the
    delegation of authority to a public agency, which typically is required to follow
    established procedures in building a public record to explain its decisions and to
    enable a reviewing court to determine whether the agency has stayed within its
    ambit and complied with the legislative mandate, with delegations to private
    entities, which typically are not required to adhere to such procedural safeguards. 44
    94.
The CDA provides no established procedures to review online providers’
compliance with the safeguards of, and entitlement to, the “Good Samaritan” immunity; rather,
under the CDA, private entities can do whatever they want (contrasted with the FCC, SEC, or IRS,
which are required to follow procedures, explain their actions, and enable a court to review their
actions to assure those actions complied with the limits of the agency’s legislative mandate).
    95.
    This lack of “required safeguards” led to the decisions in A.L.A. Schechter and
    Carter, for examples. Both cases centered around delegated authority (to regulate the affairs of
    others) being granted to private entities who inevitably regulated based on their own interests,
    rather than under the requirements set forth by the legislative mandate (i.e., intelligible principle /
    general directive / general provision). These public agency requirements are specifically in place
    to safeguard every citizen’s constitutionally ensured rights when the authorized agency takes any
44 Constitution Annotated, The Nature and Scope of Permissible Delegations, https://constitution.congress.gov/browse/essay/artI-S1-1-2/ALDE_00000010/%5b’declaration’,%20’of’,%20’independence’%5d (citing Carter v. Carter Coal Co., 298 U.S. 238, 310–312 (1936); Yakus v. U.S., 321 U.S. 414, 424–425 (1944)). This publication is attached hereto as Exhibit K and incorporated fully herein by reference.
    action to the contrary (e.g., an action to deprive someone of their life, liberty, and / or property).
Section 230 (in its present state, or in the present interpretation / application of same) does not
afford anyone any safeguards, as illustrated by the Facebook Lawsuit. A private entity cannot be
delegated this authority to handle major questions with no scope of review, among other things.
    96.
    The FCC, for example, is an “official body” and has strict regulations to which it
must adhere. Title 47, United States Code, Section 154 (Federal Communications Commission
    of the US Telecommunications Act of 1996) pertains to procedural guidelines of the FCC (the
    same Telecommunications Act containing Section 230).
    97.
    When the FCC takes action against another, it is subject to a scope of review. Under
    Title 5, United States Code, Section 706 (Scope of review), when an agency takes an agency action
    (the Section 230 agency being a private entity):
    … the reviewing court shall decide all relevant questions of law, interpret
    constitutional and statutory provisions, and determine the meaning or applicability
    of the terms of an agency action. The reviewing court shall –
    (1) compel agency action unlawfully withheld or unreasonably delayed; and
    (2) hold unlawful and set aside agency action, findings, and conclusions found to
    be –
    (A) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance
    with law;
    (B) contrary to constitutional right, power, privilege, or immunity;
    (C) in excess of statutory jurisdiction, authority, or limitations, or short of statutory
    right;
    (D) without observance of procedure required by law;
    (E) unsupported by substantial evidence in a case subject to sections 556 and 557
    of this title or otherwise reviewed on the record of an agency hearing provided by
    statute; or
    (F) unwarranted by the facts to the extent that the facts are subject to trial de novo
    by the reviewing court.
    In making the foregoing determinations, the court shall review the whole record
    or those parts of it cited by a party and due account shall be taken of the rule of
    prejudicial error.
    Id.
98.
None of these procedural requirements or mechanisms of review exist in Section 230.
99.
Moreover, online providers are not required to possess any qualifications for agents
    who regulate the affairs of U.S. citizens. Conversely, the FCC maintains specific qualifications to
    explicitly safeguard every U.S. citizen’s constitutional rights.
    100.
    The principal qualification of most (if not all) official regulatory commissions (e.g.,
the FCC) is that all of their regulatory agents be U.S. citizens because only a U.S. citizen can make
    and enforce law implicating life, liberty, and / or property of another U.S. citizen. The same goes
    for jury service, as another example – the primary qualification for jury service is that the candidate
    must be a U.S. citizen. A foreign actor cannot be tasked with depriving any U.S. citizens of their
    rights, and, yet, private entities (e.g., Facebook, Twitter, Google, et cetera.) admittedly hire foreign
    agents to regulate U.S. citizens’ information (e.g., foreign content moderators / fact checkers). And
regardless of whether the third parties enlisted by Big Tech to regulate U.S. citizens’ information
exert control over U.S. elections, and / or et cetera, such third parties are rogue actors without any
    qualifications to protect our constitutional rights.
    101.
    Furthermore, a private entity (who has received delegated authority from Congress)
    is not required to adhere to the Administrative Procedure Act (“APA”), a federal act that is codified
    as Title 5, United States Code, Sections 551-559 and governs the procedures of administrative law.
    Section 3 of the APA addresses the procedural formalities that agencies must employ when making
    decisions. There is a distinction made between (a) general regulations made through the process
    of rulemaking, and (b) case-by-case decisions made through the process of adjudication. Section
    10 of the APA deals with judicial review of administrative agency decisions. Reviewing courts
    determine whether agency officials acted in compliance with relevant federal statutes and whether
    the agency’s actions were “arbitrary, capricious, or an abuse of discretion.”
    102.
    Section 230 has no measurable bounds, is not “enforced” uniformly, and is often
    “enforced” to the benefit of the online provider / ICS rather than “in the interests of the public.” In
    the Facebook Lawsuit, Fyk has thus far been denied all measure of redress (i.e., denied due process,
    and denied free speech for that matter) when Facebook took agency action (illegitimately protected
    by government) against Fyk. This unlawful regulatory taking action (undertaken by an agent of
    government – Facebook) has thus far been afforded “who cares?” status by the courts presiding
    over the Facebook Lawsuit, amounting to a deprivation of Fyk’s due process rights even though
    Fyk’s Verified Complaint in the Northern District of California Court specifically alleges anti-
    competitive animus / motives for Facebook’s actions.
    103.
    Section 230 is an inescapably extreme example of why the Non-Delegation
    Doctrine and Major Questions Doctrine exist. Congressional authority, to assist in the legislative
    function, may be delegated to an “official body, presumptively disinterested;” but, regulatory
    authority delegated to private entities motivated by self-interest is “legislative delegation in its
most obnoxious form.” Section 230 is an unconstitutional delegation of regulatory authority that
is “so clearly arbitrary, and so clearly a denial of rights safeguarded by the due process clause of
the Fifth Amendment.” Section 230’s constitutional infirmities must be immediately addressed
and remedied by this Court, lest irreparable, permanent harm to the constitutional rights
of all Americans (like Fyk) and to the Constitution of the United States of America continue.
2. Void-for-Vagueness Doctrine
104.
The Void-for-Vagueness Doctrine is a doctrine requiring that
    a penal statute (Section 230) “define a[n] … offense with sufficient definiteness that ordinary
    people can understand what conduct is prohibited and in a manner that does not encourage arbitrary
    and discriminatory enforcement.” Kolender v. Lawson, 461 U.S. 352, 357 (1983) (internal citations
    omitted). Under the Void-for-Vagueness Doctrine, a vague law is a violation of due process
    because the law does not provide fair warning of a prohibition; i.e., fails to provide “persons of
    ordinary intelligence a reasonable opportunity to know what is prohibited, so that he may act
    accordingly.” Grayned v. City of Rockford, 408 U.S. 104, 108 (1972).
    105.
    Section 230(c) is entitled “Protection for ‘Good Samaritan’ Blocking and Screening
    of Offensive Material.” “[B]locking and screening” is a form of penalization (i.e., a restriction of
    liberty and / or property). See, e.g., F.C.C. v. Fox Television Stations, Inc., et al., 567 U.S. 239
    (2012) (assessing the Void-for-Vagueness Doctrine in a civil setting, rather than criminal).
    106.
    In the Facebook Lawsuit, Facebook deemed Fyk’s materials “offensive” when in
    Fyk’s hands; but, when Fyk’s materials (i.e., identical in content) were in the hands of Fyk’s
    competitor, Fyk’s materials were inexplicably no longer offensive to Facebook. Not-so-
    coincidentally, Fyk’s competitor paid Facebook substantially more advertising money than Fyk.
    Facebook’s discriminatory determination that Fyk’s identical materials were “offensive” was
    motivated by commercial monetary objectives and unfair competition – not Good Samaritan
    motives of policing “decency” – and this type of tortious conduct cannot be immune under the
CDA; hence, this constitutional challenge. To be clear, a commercial actor can make commercial
decisions on its own platform, but it cannot enjoy immunity from liability to an aggrieved party
for the consequences that flow from conduct that is determined by a judge or jury to be tortious.
    107.
    Section 230(c)(2)(A) attempts to better define what constitutes (i.e., what is to be
    considered) “offensive” material. It reads, in pertinent part, as follows: “any action voluntarily
    taken in good faith to restrict access to or availability of material that the provider or user considers
    to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,
whether or not such material is constitutionally protected.” Id.
    108.
    Section 230 leaves the decision-making as to what is and is not offensive entirely
    in the hands of self-interested private corporations. Here, the vaguest measure of “blocking and
screening” under Section 230(c)(2)(A) is “objectionable.” “[O]therwise objectionable” could be
anything unwanted, inconvenient, undesirable, embarrassing, troublesome, awkward,
disadvantageous, conflicting, or even contrary discourse.
    109.
    Section 230(c)(2)(A) analysis and decision-making is inherently (i.e., on its face)
    arbitrary and / or discriminatory, and Section 230 does not sufficiently define (let alone in a way
    that ordinary people can understand) what conduct is prohibited; thus, the CDA is void for
    vagueness, as the CDA specifically encourages arbitrary and / or discriminatory enforcement (i.e.,
any action taken to restrict whatever the provider or user considers objectionable). On its face and
    as applied, Section 230 even allows, for example, a provider or user the ability to discriminate
    arbitrarily against protected classes, so long as the ICS considers them objectionable. Vagueness
    leads to that absurd result, with the related Absurdity Canon discussed further below.
    110.
    The term “material” in the context of Section 230 defies objective determination.
    Webster’s dictionary defines the term “material,” in pertinent part, as: “relating to or made of
    matter; physical rather than spiritual or intellectual; having real importance.” 45 The term “material”
    relates to physical matter, not to intellectual or spiritual things.
    111.
    In the Facebook Lawsuit, Fyk’s content (i.e., Fyk’s physical material) was restricted
    for Fyk, but then Fyk’s identical materials were restored by Facebook for Fyk’s competitor who
45 Merriam-Webster Dictionary, Material, https://www.merriam-webster.com/dictionary/material A copy of this definition is found in composite Exhibit I.
    paid substantially more money to Facebook. Because Facebook made more money from Fyk’s
    competitor than from Fyk, the strong inference is that Facebook’s discriminatory application of
censorship of Fyk’s materials was motivated by anti-competitive animus rather than a benign but
    non-uniform application of the CDA.
    112.
    “Under [the] vagueness doctrine, a statute is also void for vagueness if a
    legislature’s delegation of authority to judges and/or administrators is so extensive that it would
    lead to arbitrary prosecutions.” 46
    113.
    Private corporations have been delegated broad administrative authority by Section
    230 to create rules (i.e., to “fill up the details”) in the public interest. As applied, however, Section
    230 grants these companies the authority to create any rules the company deems to be in the “public
    interest,” solely relying on the agency’s own views and policy agenda rather than requiring
    Congress to set forth objective guidelines.
    114.
    Facebook recently admitted that “facts” are nothing more than (intellectual or
    spiritual) opinion. This is an extraordinary statement and reveals how the CDA is fostering the
    corruption of public discourse and suppression of public participation and speech. Section 230 is
    so vague, on its face and as applied, that private corporations now determine what is fact and what
    is fiction, dovetailing with the Substantial Overbreadth Doctrine (discussed below) and the Major
    Questions Doctrine (discussed above).
    115.
    Not only have these companies become the arbiters of truth, but companies like
    Facebook have become the arbiters of opinion.
    46
    Cornell Law School, Void for vagueness, https://www.law.cornell.edu/wex/void_for_vagueness (citing to Skilling
    v. U.S., 130 S.Ct. 2896 (2010)). A copy of this Cornell publication is found in composite Exhibit D.
    116.
    Even competition has become “objectionable,” such as in the Facebook Lawsuit.
    Companies like Facebook are paid to develop information for their sponsors. Sponsored ads are
shown in the newsfeed alongside (or, rather, in displacement of) other users’ content or advertising.
    It is almost certainly in the company’s best interest to restrict its own competition. 47 Section 230,
    as applied, allows companies like Facebook (or any other private corporation, for that matter) to
    deem the user (i.e., its own competition), the user’s business, and / or the user’s advertising
    objectionable in its own competitive self-interest and restrict them from the site.
    117.
    In his concurring opinion in Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169 (9th
    Cir. 2009), Judge Fisher warned that pernicious consequences could follow if future courts
    permitted online platforms to have unchecked authority to define what content is “otherwise
    objectionable.” See id. at 1178-1180. Continuing with Judge Fisher’s concurring opinion:
    Focusing for the moment on anticompetitive blocking, I am concerned that
    blocking software providers who flout users’ choices by blocking competitors’
    content could hide behind § 230(c)(2)[A] when the competitor seeks to recover
    damages. I doubt Congress intended § 230(c)(2)[A] to be so forgiving. … Unless
    § 230(c)(2)[A] imposes some good faith limitation on what a blocking software
    provider can consider ‘otherwise objectionable,’ or some requirement that
    blocking be consistent with user choice, immunity might stretch to cover conduct
    Congress very likely did not intend to immunize.
    Id. at 1178-1179.
    118.
    Judge Fisher’s warning of pernicious consequences was not only correct, but an
    unimaginable understatement. A legislative statute enacted to protect children from harmful
    47
    This note could have been placed in several different areas throughout this brief, but one thing we could not do is
    not put this note somewhere in this brief. Let us be abundantly clear that in no way, shape, or form are we suggesting
    an ICS does not have the right to compete, not even close. But life is full of choices and full of consequences, and Big
    Tech companies are run by sophisticated adults. If an ICS of ordinary (or, really, heightened) corporate intelligence
    chooses to conduct itself in an anti-competitive fashion (devoid of “Good Samaritanism” and / or “good faith”), then
    the ICS should so choose, knowing full well that it (Facebook, Google, Twitter) no longer has a choice as to “invoking”
    CDA civil liability protection / immunity. The ICS cannot have its proverbial cake and eat it too. If an ICS chooses to
    behave in an anti-competitive way, then it subjects itself to civil liability in the ordinary course based on the merits
    (i.e., just as it would outside the Internet ether; i.e., just as it would in the legal real world) because without a true,
    legitimate “Good Samaritan” cloak at the threshold, the ICS can enjoy no CDA immunity, period.
    content has morphed into vague, arbitrary, and unfettered discretion to crush anyone economically,
politically, ideologically, ethnically, racially, religiously, philosophically, et cetera.
    119.
    In his dissenting opinion in Mistretta, Justice Scalia posed the question: “What
    legislated standard, one must wonder, can possibly be too vague to survive judicial scrutiny, when
we have repeatedly upheld, in various contexts, a ‘public interest’ standard?” Mistretta, 488 U.S.
    at 416 (internal citations omitted). “This standard has effectively allowed Congress to grant
    administrative agencies the authority to create any rules they deem to be in the public interest,
    solely relying on the agency’s own views and policy agenda rather than requiring Congress to set
    forth objective guidelines.” 48
    120.
A law cannot be so vague as to allow a private entity, motivated by self-interest rather than public interest, discretionary enforcement of same, which leads to arbitrary adjudication. Here, the CDA allows biased private entities to freely prosecute anyone for anything at any time in arbitrary fashion, whether physically, intellectually, or spiritually. Section 230 violates the Void-for-Vagueness Doctrine both on its face and as applied, and must be struck.
3. Substantial Overbreadth Doctrine 49
121.
“Chris Cox [‘Cox’] and Ron Wyden [‘Wyden’] wrote Section 230 in 1996 to give
    up-and-coming tech companies a sword and a shield, and to foster free speech and innovation
48
Dunigan, M., St. John’s University School of Law, The Intelligible Principle: How It Briefly Lived, Why It Died, and Why It Desperately Needs Revival In Today’s Administrative State, https://scholarship.law.stjohns.edu/lawreview/vol91/iss1/7/ See Ex. N, incorporated fully herein by reference.
49
The breadth of CDA immunity is a bipartisan issue. For example, when Googling “Biden / Trump communications decency act,” here are some search results: (a) Both Trump and Biden have criticized Big Tech’s favorite law – here’s what Section 230 says and why they want to change it, CNBC (May 28, 2020); (b) Section 230 under attack: Why Trump and Democrats want to rewrite it, USA Today (Oct. 15, 2020). As another example of bipartisan scrutiny, Facebook, Twitter, and Google have testified in front of Congress regarding “serious consequences” flowing from unbridled CDA immunity; e.g., silencing of voices (at fever pitch during an election cycle).
    online.” 50
    122.
    As Wyden wrote in his June 2020 article (Ex. O):
    Essentially, 230 says that users, not the website that hosts their content, are the ones
    responsible for what they post, whether on Facebook or in the comments section of
    a news article. That’s what I call the shield [i.e., Section 230(c)(1)]. But it also gave
    companies a sword [i.e., Section 230(c)(2)(A)] so that they can take down offensive
    content, lies and slime – the stuff that may be protected by the First Amendment
    but that most people do not want to experience online.
    Id.
    123.
    Wyden and Cox are the two authors of Section 230. In the title of his article (Ex.
    O), Wyden points out that Section 230 was written “to protect free speech.” He goes on to say that
    the purpose of Section 230 was to give up-and-coming tech companies a “shield” (defensive
    protection, Section 230(c)(1)), a “sword” (offensive weapon, Section 230(c)(2)(A)), and a “shield”
    and a “sword” vis-à-vis the ability (i.e., the “shield”) to pass the “sword” (i.e., the tools necessary
    to restrict materials) to another (defensive protection when providing the offensive weapon to
    another, Section 230(c)(2)(B)).
    124.
    In 1997, the Fourth Circuit Court in Zeran somehow transformed the Section
    230(c)(1) “shield” into an offensive weapon (i.e., another sword), and, as another example,
    somehow the California court system in the Facebook Lawsuit has thus far endorsed Facebook’s
    offensive weaponization of the defensive Section 230(c)(1) realm in a case having nothing to do
    with Section 230(c)(1). It is logical to provide an ICS with a “shield” from liability for the content
    and conduct of another (defensively) vis-à-vis Section 230(c)(1), but it is not logical to provide a
    “shield” that allows protection for an ICS’ own content (offensively) or conduct as to the content
    50
    Wyden, Ron, I wrote this law to protect free speech. Now Trump wants to revoke it,
    https://edition.cnn.com/2020/06/09/perspectives/ron-wyden-section-230/index.html A copy of this publication is
    attached hereto as Exhibit O and incorporated fully herein by reference.
    of another (offensively) because such was the intended purpose of the Section 230(c)(2) “sword.”
The “sword” can be used by the ICS within the context of Section 230(c)(2)(A) (pursuant, of course, to the “Good Samaritan” intelligible principle / general directive / general provision and otherwise pursuant to Section 230’s “good faith” language); whereas, pursuant to Section 230(c)(2)(B), an ICS is “shielded” when the “sword” is passed by the ICS to another (ICP #1) to use offensively against ICP #2 where appropriate (e.g., the ability of ICP #1 to remove ICP #2’s comments on ICP #1’s post via tools / services made available to ICP #1 by the ICS).
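For clarity of the mechanics just described, the shield / sword structure of Section 230(c) can be sketched schematically. The short Python mapping below is purely illustrative and assumes only our paraphrase of the allegations above; the labels are ours, not statutory text:

# Schematic sketch of the shield / sword reading described above.
# Labels and wording are our paraphrase, not statutory text.
PROVISIONS = {
    "230(c)(1)": {
        "role": "shield",
        "effect": "ICS not liable for another's content (defensive only)",
    },
    "230(c)(2)(A)": {
        "role": "sword",
        "effect": "ICS may itself restrict offensive material",
        "condition": "Good Samaritan / good faith",
    },
    "230(c)(2)(B)": {
        "role": "shield for passing the sword",
        "effect": "ICS protected when it gives restriction tools to ICP #1 "
                  "(e.g., ICP #1 removing ICP #2's comments)",
    },
}

for section, reading in PROVISIONS.items():
    print(section, "->", reading["role"])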
    125.
Wyden attempts to paint Section 230’s authority in a favorable light, insinuating
    that Section 230 only “gave companies a sword so that they can take down offensive content, lies
    and slime;” but, in application at the very least, Section 230 gives companies far more than just a
    “sword” to take down “lies and slime.” Wyden acknowledges that the “sword” is used to slash
    “the stuff [speech] that may be protected by the First Amendment.” Restricting protected speech
    is at the core of an Overbreadth challenge.
    126.
    “The [SCOTUS] has recognized that the First Amendment’s protections extend to
    individual and collective speech ‘in pursuit of a wide variety of political, social, economic,
    educational, religious, and cultural ends.’ Accordingly, speech is generally protected under the
    First Amendment unless it falls within one of the narrow categories of unprotected speech.” 51
    127.
    The CRS continues:
    [T]he [SCOTUS] has recognized the narrow categories that the government may
    regulate because of their content, as long as it does so evenhandedly [i.e.,
    uniformly]. The Court generally identifies these categories as obscenity,
    51
    Congressional Research Service, The First Amendment: Categories of Speech,
    https://crsreports.congress.gov/product/pdf/IF/IF11072 (citing Roberts v. U.S. Jaycees, 468 U.S. 609, 622 (1984)).
    A copy of this publication is attached hereto as Exhibit P and incorporated fully herein by reference.
    defamation, fraud, incitement, fighting words, true threats, speech integral to
    criminal conduct, and child pornography.” 52
    In the CDA context, the absurd practical effect (even if originally unintended by the government)
    is that the government has laundered policing of anything considered “objectionable” to private
    self-interested technology companies (remarkably, even if the “objectionable” material is
    permissible speech that is supposed to be protected under the First Amendment), whereas the
    government should only be “laundering” regulation of impermissible speech (i.e., not
    constitutionally protected speech) under the government’s / SCOTUS’ very narrow view of what
    constitutes impermissible speech.
    128.
    Section 230(c)(2)(A)’s categories of supposedly “unprotected” speech are not
    nearly as narrow as typical government agency standards. With the professed purpose of writing
    Section 230 being to protect speech, it is counterintuitive to provide private entities with a broader
    range of categories over which to restrict permissible speech. Section 230(c)(2)(A) identifies
    “impermissible” speech categories as anything the provider or user considers: “obscene, lewd,
    lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Section 230(c)(2)(A)
    has a glaring flaw. A self-interested private corporation can “consider” anything obscene, lewd,
    lascivious, filthy, excessively violent, harassing, or otherwise objectionable and, unlike the
    government’s determinations of (and SCOTUS prescribed) impermissible speech categories, one
    cannot challenge an online provider’s decisions.
    129.
    Several of Section 230(c)(2)(A)’s categories, at least theoretically, track (in some
    respect) the government’s and SCOTUS’ categorical identifications, except for “otherwise
    objectionable.” “Otherwise objectionable” is so broad that it swallows all of the other categories.
    52
    See Exhibit P (citing See R.A.V. v. St. Paul, 505 U.S. 377, 382-86 (1992)) (regular italics in original and bold italics
    added).
    “Obscene, lewd, lascivious, filthy, excessively violent, harassing” are all “otherwise
    objectionable” terms / phrases; so, the lowest (or broadest) measure of offensive content is
    anything the ICS considers “objectionable.” Section 230(c)(2)(A) could be written, in its broadest
    and most pertinent part, as “… to restrict access to or availability of material that the provider or
    user considers to be otherwise objectionable…” and it would have the same overbroad effect,
    having removed the terms / phrases “obscene, lewd, lascivious, filthy, excessively violent,
    harassing” from the statute. The breadth of the phrase “otherwise objectionable” far exceeds the
    policy and purpose of Section 230’s protections; i.e., “otherwise objectionable” is violative of the
Overbreadth Doctrine, reaching well beyond the very few, limited categories of truly
    impermissible speech.
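The surplusage just described can be illustrated in a few lines of code (a minimal sketch of a simplified moderation filter, assuming invented names and logic; it depicts neither the statute itself nor any actual platform system):

# Illustrative sketch only: a simplified filter modeled on the
# Section 230(c)(2)(A) categories; names and logic are invented.
ENUMERATED = ["obscene", "lewd", "lascivious", "filthy",
              "excessively violent", "harassing"]

def provider_considers_objectionable(post: str) -> bool:
    # Stand-in for the statute's purely subjective test: whatever the
    # provider "considers" objectionable qualifies.
    return True

def restrictable(post: str, flagged: list[str]) -> bool:
    # Each enumerated category is itself a species of "objectionable,"
    # so the catch-all alone decides every outcome.
    return (any(c in flagged for c in ENUMERATED)
            or provider_considers_objectionable(post))

# Every post is restrictable even with no enumerated flag at all:
assert restrictable("any post whatsoever", []) is True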
    130.
    Section 230(c)(2)(A)’s “otherwise objectionable” language / category is already
overly broad in and of itself; but the overbreadth of “otherwise objectionable” is compounded by the judiciary’s mistakenly overbroad application of Section 230(c)(1) (to protect editorial function without a measure of “good faith”), whereby the “limits” of civil liability protection became absolute publishing sovereignty as to all things online, which amounts to the lawless wild west of the
    Internet.
    131.
    Per Justice Thomas:
    [B]y construing § 230(c)(1) to protect any decision to edit or remove content,
    Barnes v. Yahoo!, Inc., 570 F. 3d 1096, 1105 (CA9 2009), courts have curtailed the
    limits Congress placed on decisions to remove content [i.e., curtailed the limits to
    restrict speech – overbreadth], see e-ventures Worldwide, LLC v. Google, Inc., 2017
    WL 2210029, *3 (MD Fla., Feb. 8, 2017) (rejecting the interpretation that §
    230(c)(1) protects removal decisions because it would ‘swallo[w] the more specific
    immunity in (c)(2)’). With no limits on an Internet company’s discretion to take
    down material, § 230 now apparently protects companies who racially discriminate
    in removing content. Sikhs for Justice, Inc. v. Facebook, Inc., 697 Fed. Appx. 526
(9th Cir. 2017), aff’g 144 F. Supp. 3d 1088, 1094 (N.D. Cal. 2015) (concluding
    that ‘any activity that can be boiled down to deciding whether to exclude material
    that third parties seek to post online is perforce immune’ under § 230(c)(1)).
    Malwarebytes, 141 S.Ct. at 17 (emphasis in original).
    132.
The court’s (mis)construing Section 230(c)(1) to protect editorial conduct (which it does not) curtailed the speech restriction limits (which were already overbroad) espoused in
    Section 230(c)(2)(A). Section 230’s protection is (as applied) so broad that a company can, for
    example, racially discriminate (commit unlawful acts). Unlawful conduct (e.g., discrimination,
    anti-competition) is prima facie vastly beyond the breadth of Congress’ CDA intent. Lawful,
    legitimate, and permissible speech became fair game for removal without there being a showing
    of “good faith” vis-à-vis a “Good Samaritan,” while allowing (i.e., knowingly providing)
    otherwise unlawful / impermissible content has become commonplace online. Section 230 went
    from being overly broad on its face (e.g., the authority to restrict anything considered “otherwise
    objectionable, whether or not such material is constitutionally protected” or whether such conduct
is illegal) to absurd in its misapplication as absolute editorial sovereignty, violative of both the Overbreadth Doctrine and the Absurdity Canon.
    133.
Despite misguided proponents of Section 230 believing Section 230 is a protection for First Amendment rights, Section 230 is just the opposite. The ICS’ First Amendment rights are ensured by the Constitution; Section 230 does not change or protect that fact. In the real world, Section 230 authorizes (under civil liability protection) the infringement of a third party’s First Amendment rights. Section 230 authorizes an ICS to create arbitrary rules, deem third-party speech impermissible, restrict that third-party speech, and then punish the third party for its content and conduct, all under the “protection” of government. Section 230 does not “protect” (in any capacity) First Amendment rights; it only serves to protect the ICS’ ability to infringe on a third party’s First Amendment rights.
    134.
    Third-party participants have no process by which to challenge (in a court of law)
    a corporation’s unlawful, anti-competitive decisions because Section 230 has ridiculously
    morphed into absolute immunity from suit. In the Facebook Lawsuit, Facebook unlawfully
    restricted Fyk’s permissible speech by way of the government’s delegation of the major question
    that is free speech. Section 230 did not “protect” either the ICS’ or Fyk’s First Amendment rights;
    rather, by dismissing Fyk’s claims, the courts protected Facebook from civil liability and infringed
upon Fyk’s rights to seek redress and speak freely. There is a distinct difference between the government’s liability “protection” of the ICS and the government’s authorization (i.e., to a private agent) to infringe on a third party’s constitutionally protected rights. Simply put, the government cannot fuel an ICS’ deprivation of an ICP’s free speech rights, which is precisely what the CDA fosters in a far too overbroad way.
    135.
    “A statute is overly broad if, in proscribing unprotected speech, it also
    proscribes protected speech. Because an overly broad law may deter constitutionally protected
    speech, the overbreadth doctrine allows a party to whom the law may constitutionally be applied
    to challenge the statute on the ground that it violates the First Amendment rights of others.” 53 Here,
    Section 230(c)(2)(A), for example, literally contains the following language: “whether or not such
    material is constitutionally protected.” Here, Fyk challenges Section 230 on behalf of millions of
    Americans (i.e., a substantial number) whose lawful, permissible speech has been unlawfully,
    unwillingly censored by private agents with ulterior motives (e.g., monetarily driven competition)
    acting under the aegis of government, as a direct result of the overly broad draftsmanship of
    Section 230 and the overly broad application of Section 230 immunity.
    53
    https://en.wikipedia.org/wiki/Overbreadth_doctrine (citing, e.g., Board of Trustees of State Univ. of N.Y. v. Fox,
    492 U.S. 469, 483 (1989), and R.A.V. v. City of St. Paul, 505 U.S. 377 (1992)). A copy of this Wikipedia article, along
    with all other Wikipedia articles cited throughout this filing, is attached hereto as composite Exhibit H and
    incorporated fully herein by reference.
    136.
    “Overbreadth is closely related to vagueness; if a prohibition is expressed in a way
    that is too unclear for a person to reasonably know whether or not their conduct falls within the
    law, then to avoid the risk of legal consequences they often stay far away from anything that could
    possibly fit the uncertain wording of the law [e.g., Fyk].” 54 The CDA’s effects are much broader
than Congress intended or than the Constitution permits; hence, the CDA is violative of the
    Overbreadth Doctrine.
    137.
    Indeed, the CDA’s prohibitions (or allowances, conversely) are so unclear to Fyk
    (a reasonable person who has no idea what conduct does or does not fall within the CDA or Big
    Tech Community Standards that have spiraled out of the CDA) that Fyk, ever since Facebook
destroyed his livelihood, has been risk averse; i.e., Fyk has stayed far away from anything that could result in such destruction again under the broken CDA. Put differently, the overbroad and vague CDA has had a chilling effect on Fyk’s life (professionally and personally) in a much broader way than Congress could have ever intended or than the Constitution permits. Fyk’s free
    speech has been chilled / deterred to such a degree that Fyk, for fear an ICS would destroy his life
    (professionally and personally) again by crushing his businesses and permissible free speech, has
    not reestablished his businesses on any other social media platforms. Fyk fears that he will once
    again waste his time and energy building his businesses (with permissible free speech being a
    foundational material for same, and the building of businesses being a pillar upon which this
    country was built in the vein of the American Dream) only to have it destroyed once again, by a
    governmentally authorized agent, without recourse. Accordingly, the CDA’s absolute protection
    of online providers’ unilateral ability to restrict permissible speech has had a substantial real-world
    54
    https://en.wikipedia.org/wiki/Overbreadth_doctrine, Ex. H.
chilling effect on protected free speech for Fyk and public discourse as a whole. After all, the Internet is the modern-day public square for anybody with a grip on reality.
    138.
    Section 230 does far more to advance the commercial interests of private
    corporations than it does to protect children or the public from impermissible offensive speech.
    When considering how to resolve Section 230’s overly broad protections, Justice Thomas noted
    (and this constitutional challenge asks this Court to so note and so engage in) the
    [p]aring back [of] the sweeping immunity courts have read into § 230[.] [Such
    paring] would not necessarily render defendants liable for online misconduct, [such
    paring] simply would give plaintiffs a chance to raise their claims in the first place.
    Plaintiffs still must prove the merits of their cases, and some claims will
    undoubtedly fail.
    Ex. C, Malwarebytes, 141 S.Ct. at 18. Fyk was not given the chance to raise his claims because of
    Section 230’s “sweeping” (i.e., overly broad) immunity.
    139.
    This Court should also consider on this constitutional challenge that “laws are
    constitutional only if they directly advance a substantial government interest and are not broader
    than necessary to serve that interest.” 55
    140.
    For a statute (Section 230) to be struck down because it is substantially overbroad,
    on its face and / or as applied, the amount of overbreadth must be substantial and real (i.e., not
    hypothetical), when judged in relation to the statute’s legitimate scope (e.g., to block and screen
    offensive material). For a statute to be substantially overbroad, a substantial number of the
    applications of the statute must be impermissible under the First Amendment, both in terms of
    absolute numbers and in relation to a law’s legitimate applications (the ratio of permissible to
    impermissible applications).
    55
    See Ex. P.
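The substantiality inquiry described above reduces to simple arithmetic. By way of a purely hypothetical illustration (the counts below are invented for exposition and are not record evidence or findings):

# Hypothetical counts, invented solely to illustrate the two-part
# measure described above; they are not findings or record evidence.
impermissible_applications = 600_000  # restrictions of protected speech
legitimate_applications = 400_000     # restrictions of truly offensive material

ratio = impermissible_applications / legitimate_applications

# Overbreadth is "substantial" if large both in absolute terms and
# relative to the statute's legitimate sweep.
print(f"absolute impermissible applications: {impermissible_applications:,}")
print(f"ratio of impermissible to legitimate applications: {ratio:.2f}")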
    141.
    This lawsuit seeks a judicial determination that the phrase “otherwise
    objectionable” cannot stand because it is overly broad on its face, and worse, when “enforced” (as
    applied) by private actors, acting in their own self-interest (with governmental authority), there are
    no checks on the capricious and arbitrary actions of the “enforcer,” much less on actions which
    are targeted and conscious efforts to engage in anti-competitive behavior, political suppression,
ideological suppression, sociological suppression, and religious suppression, to name a few examples.
    142.
    When a statute is challenged under the Substantial Overbreadth Doctrine, one must
    first consider the statute as a whole (i.e., on its face) and one must then consider how the agents
    (private or public) have applied the statute’s authority when restricting a citizen’s speech. The
question is whether the overbreadth of the statute (as a whole) and / or of the agent’s application of the statute (as applied) is substantial. Does Section 230 deter a substantial amount of
    permissible speech, causing a chilling effect on future lawful speech?
    143.
    A substantial overbreadth challenge can be raised if a statute has both legitimate
    and illegitimate applications. This action seeks a judicial determination that a significant number
    of possible applications of the statute are impermissible under the First Amendment and that the
    statute should accordingly be invalidated in its entirety (on its face). Separately, the government
    (the Defendant) may attempt to convince this Court that a small number of possible applications
are impermissible and that those applications can be dealt with one at a time in as-applied challenges. Such an attempt by the government under a First Amendment lens would still be untenable, however, because the statute cannot stand as repugnant to the Fifth Amendment’s Due Process Clause: Section 230 permits a de facto taking from a citizen before the citizen is given his Due Process rights.
    144.
We now turn our examination to the substantiality of the statute’s overbreadth. To provide an adequate backdrop against which to contrast the lawful, legitimate, and permissible speech that is restricted, we must first give examples of the illegitimate, impermissible, “objectionable,” “offensive,” and otherwise unlawful speech and / or conduct that has been authorized (i.e., immunized) by Section 230.
    145.
    In one case, for example:
    several victims of human trafficking alleged that an Internet company that allowed
    users to post classified ads for ‘Escorts’ deliberately structured its website to
    facilitate illegal human trafficking. Among other things, the company ‘tailored its
    posting requirements to make sex trafficking easier,’ accepted anonymous
    payments, failed to verify e-mails, and stripped metadata from photographs to make
    crimes harder to track. Jane Doe No. 1 v. Backpage.com, LLC, 817 F. 3d 12, 16–
    21 (1st Cir. 2016). Bound by precedent creating a ‘capacious conception of what it
    means to treat a website operator as the publisher or speaker,’ the court held that §
    230 protected these website design decisions and thus barred these claims. Id., at
    19; see also M. A. v. Village Voice Media Holdings, LLC, 809 F. Supp. 2d 1041,
    1048 (ED Mo. 2011).
    Ex. C, Malwarebytes, 141 S.Ct. at 17.
    146.
Had the CDA actually been correctly interpreted and applied (i.e., applied so as not to immunize an ICS that knowingly fosters / allows illegal activity, e.g., sex trafficking, to unfold on its platform), FOSTA-SESTA would not have had to become law under President Trump’s April 11, 2018, signature. 56 FOSTA-SESTA was a law enacted to offset / guard against Section 230’s protecting an ICS’ unscrupulous business practices (protection that has somehow become reality amidst the “as applied” CDA Twilight Zone that has evolved over the last twenty-six years), when the CDA was by no means designed to provide immunity to websites that facilitate sex trafficking.
56
“SESTA” is an acronym for “Stop Enabling Sex Traffickers Act,” and “FOSTA” is an acronym for “Allow States and Victims to Fight Online Sex Trafficking Act.”
    147.
    Does not the fact that a separate law has to be created to fix another law render
    that other law untenable? Yes – the CDA is untenable. Is not the fact that Section 230 has morphed
    into a facilitator of sex trafficking confirmation of the CDA’s overly broad immunization? Yes –
    the CDA is unconstitutionally overbroad. Just based on the enactment of FOSTA-SESTA alone,
    which, again, necessarily confirms the legal / constitutional repugnancy of the CDA, the CDA must
    be struck.
    148.
    Pursuant to FOSTA-SESTA, an ICS no longer has an option (i.e., no longer has a
    voluntary decision to make) as to whether or not to remain inactive / sit idly by (in the CDA
    context, see Section 230(c)(1) as to an ICS’ direct inactivity and see Section 230(c)(2)(B) as to an
    ICS’ indirect inactivity) when the ICS knows about illegalities (e.g., sex trafficking) unfolding on
    or being promoted within its backyard / platform. That is not to say that FOSTA-SESTA renders
    an ICS responsible for seeking out such illegalities (i.e., not to say that proactivity is now required
    of an ICS in some sort of pre-harm crystal ball or detective fashion) and / or not to say that an ICS
    somehow needs to perform the arduous (if not impossible, actually) task of somehow acting upon
    unknown illegal content. Not the case.
    149.
    Rather, FOSTA-SESTA can be distilled to this: “hey, Facebook / Twitter / Google
    / YouTube, if you know about bad things going down on your platform, it would behoove you to
    do the right thing … for example, if you know of sex trafficking transpiring on your site, you
    should strongly consider immediately blowing the whistle and blowing the whistle loudly because
    an opposite decision to remain inactive will constitute willful / known / negligent decision-making
    on your part, i.e., a decision to remain inactive is an action, and you will not somehow enjoy
    immunity for your ‘own’ action(s). That is not to say that your decision to remain inactive, i.e. /
    e.g., not blow the whistle on sex trafficking unfolding on your site, will result in civil liability; but,
    it is to say that you do not enjoy a threshold immunity under the aforementioned circumstances
    and your accuser / opponent will get his / her day in court on the merits.”
    150.
    FOSTA-SESTA supports our correct understanding of the CDA – CDA civil
    liability protection sweeps far too broadly in immunizing an ICS from its own conduct (even if
that conduct is to knowingly not act), such as facilitating sex trafficking on its site or such
    as Facebook’s anti-competitive animus conduct against Fyk as alleged in Fyk’s Verified
    Complaint in the Facebook Lawsuit. And FOSTA-SESTA supports our request that the CDA be
    struck – again, a new law had to be passed (FOSTA-SESTA) in order to combat (i.e., do the job of
    the CDA) the overly broad CDA immunization (“overly broad” in that CDA immunity sweeps so
    widely that an ICS is somehow protected from claims arising out of the ICS’ allowing sex
    trafficking to unfold on the platform and / or even going so far as to promote, directly or indirectly,
    the trafficking). Just as lawmakers had to start over again with respect to a piece of the CDA vis-
    à-vis FOSTA-SESTA, so too should this Court with respect to all of Section 230(c). FOSTA-
    SESTA was a relatively easy “CDA partial fix,” all things considered … FOSTA-SESTA was
    passed in the House with a vote of 388-25 and passed in the Senate with a vote of 97-2. 57
    151.
    Of note, had the Doe case (Ex. C) presented itself to the SCOTUS in a procedurally
    “final” way, the SCOTUS would be presently entertaining a CDA case involving the FOSTA-
    SESTA bit in the CDA context: “It is hard to see why the protection § 230(c)(1) grants publishers
    against being held strictly liable for third parties’ content should protect Facebook from liability
    for its own ‘acts and omissions.’” Ex. C, Doe, 142 S.Ct. at 1088.
    57
The bipartisan support for FOSTA-SESTA evidences another thing that we have been saying all along (even citing
    to Section 230 news articles featuring President Biden and Section 230 news articles featuring President Trump in our
    late-2020 Petition for a Writ of Certiorari to the SCOTUS) – irrespective of one’s politics (left, right, center), if one
has functioning dendrites and / or firing synapses, agreement is legion that the CDA is broken and needs fixing
    immediately … yesterday … years ago … decades ago.
    152.
    And then there is the absurdity (likely also overbreadth) of permitting terroristic
    content on a platform:
    Consider also a recent decision granting full immunity to a company for
    recommending content by terrorists. Force v. Facebook, Inc., 934 F. 3d 53, 65 (2d
    Cir. 2019), cert. denied, 590 U. S. —— (2020). The court first pressed the policy
    argument that, to pursue ‘Congress’s objectives, . . . the text of Section 230(c)(1)
should be construed broadly in favor of immunity [i.e., overbreadth].’ 934 F. 3d, at 64. It then granted immunity, reasoning that recommending content (i.e.,
    development in part) ‘is an essential result of publishing.’ Id., at 66. Unconvinced,
    the dissent noted that, even if all publisher conduct is protected by § 230(c)(1), it
    ‘strains the English language to say that in targeting and recommending these
    writings to users . . . Facebook is acting as ‘the publisher of . . . information
    provided by another information content provider.’ Id., at 76– 77 (Katzmann, C. J.,
    concurring in part and dissenting in part) (quoting § 230(c)(1)).” (Emphasis Added)
    Ex. C, Malwarebytes, 141 S.Ct. at 18.
    153.
    Moreover:
    Other examples abound. One court granted immunity on a design-defect claim
    concerning a dating application that allegedly lacked basic safety features to
    prevent harassment and impersonation. Herrick v. Grindr LLC, 765 Fed. Appx.
    586, 591 (2d Cir. 2019), cert. denied, 589 U. S. —— (2019). Another granted
    immunity on a claim that a social media company defectively designed its product
    by creating a feature that encouraged reckless driving. Lemmon v. Snap, Inc., 440
    F. Supp. 3d 1103, 1107, 1113 (C.D. Cal. 2020).
    Ex. C, Id. at 17-18.
    154.
    As yet another heinous example of Section 230’s failures:
    Plaintiffs John Doe #1 and John Doe #2 allege that when they were thirteen years
    old, they were solicited and recruited for sex trafficking and manipulated into
    providing to a third-party sex trafficker pornographic video (‘the Videos’) of
    themselves through the social media platform Snapchat. A few years later, when
    Plaintiffs were still in high school, links to the Videos were posted on Twitter.
    Plaintiffs allege that when they learned of the posts, they informed law enforcement
    and urgently requested that Twitter remove them but Twitter initially refused to do
    so, allowing the posts to remain on Twitter, where they accrued more than 167,000
    views and 2,223 retweets. According to Plaintiffs, it wasn’t until the mother of one
    of the boys contacted an agent of the Department of Homeland Security, who
    initiated contact with Twitter and requested the removal of the material, that Twitter
    finally took down the posts, nine days later. … [I]f a provider remained passive and
    uninvolved in filtering third-party material from its network, the provider could not
    be held liable for any offensive content it carried from third parties. … Twitter was
    immune from claims based on theory that third-party content Twitter allowed to be
    posted on its platform led to plaintiff’s injury because the claim sought to hold
    Twitter liable as a publisher.
    Doe v. Twitter, Inc., No. 21-cv-00485-JCS, 2021 WL 3675207, at *1, *3-4 (N.D. Cal. Aug. 18,
    2021).
    155.
In this circumstance, Twitter was not simply a “passive” host; it knowingly chose to “allow” (i.e., knowingly continued to host unlawful content) the patently offensive, obscene, and illegal child pornography, thereby materially and negligently contributing to the development of the information in part (i.e., the content amassed more than 167,000 views and 2,223 retweets after Twitter chose to “allow” the content to remain – i.e., acted to not act), until such time as the Department of Homeland Security was required to get involved.
    156.
    Twitter did absolutely nothing to achieve the compelling government interest of
    Section 230 and acted contrary to ordinarily recognized contemporary community standards.
    Twitter did not act as a “Good Samaritan;” rather, it exploited the overbroad immunity protections
    it is afforded by the courts. In our opinion, Twitter’s active material responsibility in contributing
    to the development of child pornography should have not only disqualified it from CDA immunity,
    but should have resulted in criminal charges against those directly involved in the decision to allow
    it to remain. It is one thing for the CDA to preclude civil liability (in protection of the behemoth
    ICS), whereas it is quite another thing for the CDA’s immunity swath to sweep so broadly that
    such swath ends up protecting an ICS from child-related illegalities (some of which such
    illegalities would rightly be in the criminal realm).
    157.
    Section 230’s overbroad immunity authorizes (as applied) unlawful conduct such
    as discrimination, human trafficking, recommending terrorist content, building dangerous
applications that lack basic safety features (i.e., negligence), encouraging unlawful reckless driving,
    and knowingly hosting (i.e., allowing) child pornography, as just a few examples of Section 230’s
    (as-applied) failures.
    158.
    Justice Thomas noted a commonality between these circumstances (a commonality
    shared with the Facebook Lawsuit):
    A common thread through all these cases is that the plaintiffs were not necessarily
    trying to hold the defendants liable ‘as the publisher or speaker’ of third-party
    content. § 230(c)(1). Nor did their claims seek to hold defendants liable for
    removing content in good faith. § 230(c)(2). Their claims rested instead on alleged
    product design flaws – that is, the defendant’s own misconduct. Cf. Accusearch,
    570 F. 3d, at 1204 (Tymkovich, J., concurring) (stating that § 230 [immunity]
    should not apply when the plaintiff sues over a defendant’s ‘conduct rather than for
    the content of the information’). Yet courts, filtering their decisions through the
    policy argument that ‘Section 230(c)(1) should be construed broadly,’ [to protect
    all editorial function], Force, 934 F. 3d, at 64, give defendants immunity.
Ex. C, Malwarebytes, 141 S.Ct. at 18 (emphasis added). Judge Tymkovich’s concurrence is correct.
    159.
    “Section 230 had two purposes: the first was to encourage the unfettered and
    unregulated development of free speech on the Internet, as one judge put it; the other was to allow
    online services to implement their own standards for policing content and provide for child
    safety.” 58
    160.
    The legislative intent of Section 230, however, has been turned upside down. Self-
    interested private corporations, given regulatory power and immunity protection, have (as applied)
    discouraged the development of third-party free speech without transparency or accountability
    (i.e., private corporations have penalized a substantial amount of permissible speech, causing an
    alarming chilling effect) and at the same time “reduced the incentives of online platforms to
    address illicit activity,” see Ex. E (e.g., to protect children).
    58
    Electronic Frontier Foundation, CDA 230 – The Most Important Law Protecting Internet Speech,
    https://www.eff.org/issues/cda230/legislative-history (internal citation omitted).
    A copy of this publication is attached hereto as Exhibit Q and incorporated fully herein by reference.
    161.
    By what measure does one determine what material is “offensive” (i.e.,
    impermissible, unlawful speech)? “The Miller Test” enlightens as to that question:
    According to the FCC and the Supreme Court, a broadcast [similar to publishing]
    is considered offensive and obscene if it meets criteria under three different
    statements. The broadcast is offensive if: ‘[a] An average person, applying
    contemporary community standards, must find that the material, as a whole, appeals
    to the prurient (involving sexual desire) interest[;] [b] The material must depict or
    describe, in a patently offensive way, sexual conduct specifically defined by
    applicable law[;] [c] The material, taken as a whole, must lack serious literary,
artistic, political or scientific value.’ 59
    162.
    The government agency’s (FCC’s) general standard for an “offensive” broadcast
    (i.e., content publishing) is material that appeals to the prurient (sexual in nature) mind and lacks
    serious literary, artistic, political, or scientific value. Section 230’s breadth (i.e., authority) to
    restrict any speech considered “objectionable” goes well beyond sexual material that lacks serious
    literary, artistic, political, or scientific value.
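The contrast between the two standards can be made concrete. In the sketch below (our own illustration; the prong names merely paraphrase the Miller criteria quoted above), Miller requires three objective findings to conjoin before speech loses protection, whereas Section 230(c)(2)(A) reduces to a single subjective judgment:

# Illustrative only: the three Miller prongs quoted above rendered as
# conjunctive checks, beside the statute's single subjective test.

def miller_obscene(prurient_appeal: bool,
                   patently_offensive_sexual_conduct: bool,
                   lacks_serious_value: bool) -> bool:
    # All three prongs must be satisfied before material is unprotected.
    return (prurient_appeal
            and patently_offensive_sexual_conduct
            and lacks_serious_value)

def cda_restrictable(provider_considers_objectionable: bool) -> bool:
    # The statute turns on what the provider "considers" objectionable;
    # no objective prong constrains that judgment.
    return provider_considers_objectionable

# A political post fails every Miller prong, yet is restrictable the
# moment a provider deems it "otherwise objectionable":
assert miller_obscene(False, False, False) is False
assert cda_restrictable(True) is True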
    163.
Having provided an adequate backdrop against which to compare the unlawful, illegitimate, impermissible speech and / or conduct that is encouraged (“allowed” / “developed,” as applied, by online providers under Section 230 authority), we now turn our examination towards the substantiality of the lawful, legitimate, and permissible third-party speech that is discouraged (disallowed / restricted / penalized / punished), which does absolutely nothing to achieve Section 230’s compelling government interests of developing permissible Internet free speech and / or protecting children by discouraging impermissible (i.e., offensive, unlawful) speech.
    164.
    Facebook has well over two billion users, making it one of the largest social media
    companies in the world. Facebook often chooses to act “voluntarily” to purportedly achieve
    59
    Laws, Patently Offensive,
    https://patent.laws.com/patently-offensive (citing to Miller v. California, 93 S.Ct. 2607 (1973)).
    A copy of this publication is attached hereto as Exhibit R and incorporated fully herein by reference.
    Section 230’s compelling government interest to restrict offensive / impermissible material.
    Facebook moderates (without transparency or accountability) a “substantial” amount of third-party
    (im)permissible speech. Facebook is accordingly a good model by which to determine the ratio of
    permissible to impermissible applications of Section 230’s breadth (authority).
    165.
    When Congress enacted Section 230, it had ordinary contemporary community
    standards (circa 1996) in mind. Private companies like Facebook, however, inevitably create and
    administer self-interested, self-benefiting Community Standards that are often misaligned with the
    ordinary contemporary community standards that Congress had in mind twenty-six years ago when
    Mark Zuckerberg was eleven years old. To better understand how a company as large as Facebook
    misapplies its Community Standards to restrict a substantial amount of permissible speech, we
    must first understand how it creates and enforces its “rules.” To better understand this, we turn to
    a source that worked directly for Facebook.
    166.
    Brian Amerige, an engineering manager of Facebook products who had firsthand
    knowledge of Facebook’s internal processes, explains Facebook’s content policy, moderation, and
    enforcement processes in his op-ed publication entitled Facebook Has a Right to Block ‘Hate
    Speech’—But Here’s Why It Shouldn’t. 60
    167.
Mr. Amerige writes:
    When I joined the Facebook team in 2012, the company’s mission was to ‘make
    the world more open and connected, and give people the power to share.’

As of 2013, this was essentially Facebook’s content policy: ‘We prohibit content
deemed to be directly harmful, but allow content that is offensive or controversial.
We define harmful content as anything organizing real world violence, theft, or
60
https://quillette.com/2019/02/07/facebook-has-a-right-to-block-hate-speech-but-heres-why-it-shouldnt/
A copy of this publication is attached hereto as Exhibit S and incorporated fully herein by reference.
property destruction, or that directly inflicts emotional distress on a specific private
individual (e.g. bullying).’


Facebook’s content policy evolved to more broadly define ‘hate speech.’


[I]t became clear that they [i.e., Facebook employees] were committed to
sacrificing free expression in the name of ‘protecting’ people.


Let’s fast-forward to present day. This is Facebook’s summary of their current hate
speech policy:
We (i.e., Facebook) define hate speech as a direct attack on people based
on what we call protected characteristics – race, ethnicity, national origin,
religious affiliation, sexual orientation, caste, sex, gender, gender identity,
and serious disease or disability. We also provide some protections for
immigration status. We define attack as violent or dehumanizing speech,
statements of inferiority, or calls for exclusion or segregation.
The policy aims to protect people from seeing content they feel attacked by. It
doesn’t just apply to direct attacks on specific individuals (unlike the 2013 policy),
but also prohibits attacks on ‘groups of people who share one of the above-listed
characteristics.’
If you think this is reasonable, then you probably haven’t looked closely at how
Facebook defines ‘attack.’ Simply saying you dislike someone with reference to a
‘protected characteristic’ (e.g., ‘I dislike Muslims who believe in Sharia law’) or
applying a form of moral judgment (e.g., ‘Islamic fundamentalists who forcibly
perform genital mutilation on women are barbaric’) are both technically considered
‘Tier-2’ hate speech attacks, and are prohibited on the platform.
This kind of social-media policy is dangerous, impractical, and unnecessary.
The trouble with hate speech policies begins with the fact that there are no
principles that can be fairly and consistently applied to distinguish what speech is
hateful from what speech is not. Hatred is a feeling, and trying to form a policy that
hinges on whether a speaker feels hatred is impossible to do.


The truth is, any list of protected characteristics is essentially arbitrary. Absent a
principled basis, these are lists that are only going to expand with time as interest
and identity groups claim to be offended, and institutions cater to the most sensitive
and easily offended among us.
The inevitable result of this policy metastasis is that, eventually, anything that
anyone finds remotely offensive will be prohibited. Mark Zuckerberg not only
recently posted a note that seemed to acknowledge this, but included a handy
graphic describing how they’re now beginning to down-rank content that isn’t
prohibited, but is merely borderline.
Almost everything you can say is offensive to somebody. Offense isn’t a clear
standard like imminent lawless action. It is subjective – left up to the offended to
call it when they see it.


The lesson here is that while ‘offense’ is certainly something to be avoided
interpersonally, it is too subjective and ripe for abuse to be used as a policy
standard.
Perhaps even more importantly, you cannot prohibit controversy and offense
without destroying the foundation needed to advance new ideas. History is full of
important ideas, like heliocentrism and evolution, that despite later being shown to
be true were seen as deeply controversial and offensive because they challenged
strongly held beliefs. Risking being offended is the ante we all pay to advance our
understanding of the world.
But let’s say you’re not concerned about the slippery slope of protected
characteristics, and you’re also unconcerned with the controversy endemic to new
ideas. How about the fact that the truths you’re already confident in—for example,
that racism is abhorrent—are difficult to internalize if they are treated as holy writ
in an environment where people aren’t allowed to be wrong or offend others?
Members of each generation must re-learn important truths for themselves
(“Really, why is racism bad?”). “Unassailable” truths turn brittle with age, leaving
them open to popular suspicion. To maintain the strength of our values, we need to
watch them sustain the weight of evidence, argument and refutation. Such a free
exchange of ideas will not only create the conditions necessary for progress and
individual understanding, but also cultivate the resilience that much of modern
culture so sorely lacks.
But let’s now come down to ground level, and focus on how Facebook’s policies
actually work.


When a post is reported as offensive on Facebook (or is flagged by Facebook’s
automated systems), it goes into a queue of content requiring human moderation.
That queue is processed by a team of about 8,000 (soon to be 15,000) contractors.
These workers have little to no relevant experience or education, and often are
staffed out of call centers around the world [i.e., foreign actors]. Their primary
training about Facebook’s Community Standards exists in the form of 1,400 pages
of rules spread out across dozens of PowerPoint presentations and Excel
spreadsheets [i.e., no ordinary person / ICP could conceivably know what violates
these 1,400 rules or not]. Many of these workers use Google Translate to make
sense of these rules, [begging the question – how is the ordinary person supposed
to know what the delegated state actor [Facebook] has deemed [a] ‘rule[ ]’ when
Facebook’s own foreign contractors have no clue?]. And once trained, they
typically have eight to 10 seconds to make a decision on each post. Clearly, they
are not expected to have a deep understanding of the philosophical rationale behind
Facebook’s policies, [begging the question – how is anyone supposed to know what
Facebook’s philosophical rationale is if Facebook’s own moderators do not even
know?].
As a result, they often make wrong decisions. And that means the experience of
having content moderated on a day-to-day basis will be inconsistent for users. This
is why your own experience with content moderation not only probably feels
chaotic, but is (in fact) barely better than random. It’s not just you. This is true for
everyone (i.e., substantial).


Sometimes, the rules are ignored to insulate Facebook from ‘PR Risk.’ Other times,
the rules are applied more stringently when governments that are more likely to fine
or regulate Facebook might get involved [i.e., compelled by government]. Given
how inconsistent and slapdash the initial moderation decisions are, it’s no surprise
that reversals are frequent. … It’s hard to overstate how sloppy this whole process
is.
There is no path for something like this to improve. …They think they’ll be able to
clarify the policies sufficiently to enforce them consistently, or use artificial
intelligence (AI) to eliminate human variance. Both of these approaches are
hopeless.
Iteration works when you’ve got a solid foundation to build on and optimize. But
the Facebook hate speech policy has no such solid foundation because “hate
speech” is not a valid concept in the first place. It lacks a principled definition—
necessarily, because “hateful” speech isn’t distinguishable from subjectively
offensive speech—and no amount of iteration or willpower will change that.


Case in point: When Facebook began the internal task of deciding whether to follow
Apple’s lead in banning Alex Jones, even that one limited task required a team of
(human) employees scouring months of Jones’ historical Facebook posts to find
borderline content that might be used to justify a ban. In practice, the decision was
made for political reasons [i.e., not for 230(c)(2)(A) offensive reasons], and the
exercise was largely redundant.


No one likes hateful speech, and that certainly includes me. … Such attacks are
morally repugnant. I suspect we all agree on that.
But given all of the above, I think we’re losing the forest for the trees on this issue.
‘Hate speech’ policies may be dangerous and impractical, but that’s not true of anti-
harassment policies [harassment being of the 230(c)(2)(A) ilk], which can be
defined clearly and applied with more clarity. The same is true of laws [government
defined standards] that prohibit threats, intimidation and incitement to imminent
violence [and also definable impermissible speech]. Indeed, most forms of
interpersonal abuse that people expect to be covered by hate speech policies—i.e.,
individual, targeted attacks—are already covered by anti-harassment policies and
existing laws.
So, the real question is: Does it still make sense to pursue hate speech policies at
all? I think the answer is a resounding “no.” Platforms would be better served by
scrapping these policies altogether. But since all signs point to platforms doubling
down on [increasing the substantiality of] existing policies, what’s a user to do?
First, it’s important to recognize that much of the content that violates Facebook’s
content policy never gets taken down. I’d be surprised if moral criticism of religious
groups [i.e., protected speech], for example, resulted in enforcement by moderators
today, despite being (as I noted above) technically prohibited by Facebook’s
policy… . [I]n the meantime, I’d encourage you to not let the policies get in your
way. Say what you think is right and true [and lawful, legitimate, and permissible],
and let the platforms deal with it… .
See Ex. S (emphasis added).
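Back-of-the-envelope arithmetic drives home the quoted figures. The moderator count, decision window, and rulebook size below come from Mr. Amerige’s account above; the reading-pace figure and the derived rates are our own assumptions:

# Figures quoted above from Mr. Amerige's account; the assumed reading
# pace and the derived rates are our own illustration.
moderators = 8_000          # contractors processing the report queue
seconds_per_decision = 9    # "eight to 10 seconds to make a decision"
rule_pages = 1_400          # pages of Community Standards rules

decisions_per_moderator_hour = 3600 // seconds_per_decision
minutes_per_page = 2        # assumed reading pace (our assumption)
hours_to_read_rules_once = rule_pages * minutes_per_page / 60

print(f"~{decisions_per_moderator_hour} decisions per moderator-hour")
print(f"~{moderators * decisions_per_moderator_hour:,} decisions per hour across the workforce")
print(f"~{hours_to_read_rules_once:.0f} hours merely to read the rules once")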
168.
Mr. Amerige’s summary of Facebook’s hate speech policy / procedure nicely
illustrates the problems that are inherent when authorizing a private corporation to make decisions
as to what is categorically objectionable (i.e., (im)permissible) speech. Impermissible speech
restrictions are typically subjected to strict scrutiny by the courts. Courts are often disinclined to
broaden speech restrictions because such erodes First Amendment rights. Facebook, like most (if
not all) other large tech companies, does substantially more to advance self-interest by restricting
permissible speech than it does to restrict patently offensive / impermissible speech. Facebook
does not adhere to contemporary community standards; rather, it relies on its own interests and policy agenda to create its own Community Standards that others must live by, despite the “others” / “ICPs” having no clue as to, among other things, Facebook’s 1,400 pages of “rules” that not even Facebook’s moderators understand or uniformly apply.
169.
Per the CRS:
The contours of [impermissible speech] categories have changed over time, with
many having been significantly narrowed by the Court (to protect speech). In
addition, the Roberts Court has been disinclined to expand upon this list, declining
to recognize, for example, violent entertainment or depictions of animal cruelty as
new categories of unprotected speech. See Brown v. Entm’t Merchs. Ass’n, 564 U.S.
786 (2011); United States v. Stevens, 559 U.S. 460 (2010).
See Ex. P.
170. Contrary to courts' narrow approach to speech restrictions, online providers have
“substantially” expanded their already vague policies / Community Standards to broaden
“impermissible” (i.e., offensive) speech categories to include categories such as “inconvenient,”
“competitive,” or “unwanted” speech.
171. Big Tech's so doing under the CDA creates "a realistic danger that the statute itself
will significantly compromise recognized First Amendment protections of parties not before the
Court … . [The CDA is accordingly eligible for being] facially challenged on overbreadth
grounds.” Members of City Council of Los Angeles v. Taxpayers for Vincent, 466 U.S. 789, 801
(1984).
172. In determining whether a statute's overbreadth is substantial, the courts consider a
statute’s application to real-world conduct, not fanciful hypotheticals. See, e.g., U.S. v. Williams,
553 U.S. 285, 301–302 (2008). Accordingly, the courts have repeatedly emphasized that an
overbreadth claimant bears the burden of demonstrating, “from the text of [the law] and from
actual fact that substantial overbreadth exists.” See, e.g., Virginia v. Hicks, 539 U.S. 113, 122
(2003) (internal citation omitted).
173. "From the text of [Section 230(c)(2)(A)] and from actual fact," it is plain that
230(c)(2)(A) is overly broad, undertaking an intolerable and unconstitutional interference with
personal liberty by authorizing the restriction of speech whether or not such material is constitutionally
protected.
174. Congress cannot, constitutionally, delegate the power to restrict speech to any
agent (whether official or private) because it is not a power that Congress can rightfully exercise
itself. Assuming arguendo that Congress did have the power to restrict speech, then Congress is
itself. Assuming arguendo that Congress did have the power to restrict speech, then Congress is
not permitted to abdicate or to transfer to others the essential legislative functions with which it is
thus vested. And, assuming arguendo that Congress did have the power to transfer its legislative
function, then the power to make law cannot be exercised by anyone other than Congress, except
in conjunction with the lawful exercise of executive or judicial power.
175. Furthermore, Mr. Amerige's explanation of Facebook's broad policies demonstrates
the very real-world overly broad application of Section 230's authority. Hate speech is one
example of the real-world vague / overly broad / arbitrary implementation of "rules" that restrict
permissible / protected speech falling under the overly broad (i.e., as applied) phrase "otherwise
objectionable.” Other examples of permissible speech restrictions abound, not as fanciful
hypotheticals, but as substantial “real-world” applications of Section 230 authority – we now
provide a sampling.
176. Being in Facebook, Twitter, or YouTube "jail" (the word "prison" is more fitting,
since the accused is already convicted and sentenced) is so commonplace that it is now part of the
American vernacular. Section 230 enables an ICS to create arbitrary laws (e.g., Community
Standards), to arbitrarily enforce those laws (e.g., restrict speech), and to then determine the
arbitrary penalty (e.g., ban) for breaking those “laws,” all under the same authority. There is no
separation of powers, no checks and balances, and no congressional or judicial oversight, and there
is nothing “standard” about the enforcement of Community “Standards.”
177. As an example of permissible speech that was arbitrarily penalized, Sue Mosley (a
mom) was restricted for posting a picture of her daughter’s cake. The cake was misidentified by
the algorithm as a female nipple. A similar misidentification happened with an elbow. This
algorithmic "misidentification" happens all too frequently (i.e., substantially): a user / ICP
is convicted of a "crime" it did not commit and must seek an appeal (such appeals being
farcical, futile endeavors … probably carried out by an algorithm rather than a living, breathing
human being or United States citizen) to undo its punishment.
178. Fyk has endured several similar "misidentifications." As an example, Fyk was once
punished (banned) by Facebook for posting a picture of a pink circle that Facebook determined
"may be offensive or upsetting to others." Here, Fyk's permissible speech was restricted because
it "may be" offensive. Facebook penalized Fyk not for something determined to be offensive,
but for something that merely might be offensive and that was, to any rational person, not
offensive at all.
179. "Misidentifications" occur all too often. Most "misidentified" speech restrictions
(penalizations) go unchallenged; but, in the rare instance where public outrage presents a public
relations concern, companies like Facebook will overturn their decisions and apologize. As an
example, Facebook once “misidentified” the Declaration of Independence as “hate speech.” In the
case of the "mistaken" restriction of the Declaration of Independence, Facebook's spokeswoman,
Sarah Pollack stated, “The post was removed by mistake and restored as soon as we looked into
it.” 61 Facebook’s order of operation is to restrict first, “look[ ] into it” later only if there is public
outrage, then apologize for its “mistake.” Big Tech’s apologies have been plentiful during myriad
congressional hearings and, yet, at most Big Tech gets a scolding or hand slap and goes right back
to its CDA-immunized illegal ways. The First Amendment does not read: Congress (here, a proxy
agent acting on behalf of Congress) shall make no law … abridging the freedom of speech, unless
of course, it is a “mistake.”
180. Big Tech's tactic of issuing public mea culpas in response to public backlash
underscores the lack of any analogous "agency" regulatory oversight to rein in "agency" abuse.
Essentially, there is no repercussion. The two sisters known as Diamond and Silk are a good
example of "mistaken" penalization and public pushback. Facebook unpublished the ladies' social
media content as being “unsafe,” a defamatory claim in and of itself. When asked by Congressman
Billy Long, “what is unsafe about two black women supporting President Donald J. Trump,”
Mr. Zuckerberg responded “well, Congressman, nothing is unsafe about that.” 62 When challenged,
companies like Facebook feign ignorance for having defamed someone’s character, restricted their
speech, and destroyed their livelihood without compassion or consequence. “Oops, our bad” does
not cut it when depriving someone of life, liberty, or property under government authority.
61 The Washington Post, Facebook censored a post for 'hate speech.' It was the Declaration of Independence, https://www.washingtonpost.com/news/the-intersect/wp/2018/07/05/facebook-censored-a-post-for-hate-speech-it-was-the-declaration-of-independence/ For the Court's ease of reference, this news article is attached hereto as Exhibit T and incorporated fully herein by reference.
62 This testimony from Mr. Zuckerberg is available via video at the following: https://www.facebook.com/watch/?v=1909262926038134 This video is incorporated fully herein by reference.
181. Other examples of "mistaken" ("oops our bad") penalizations are found within
these publications: (a) “Facebook apologizes for ‘mistake’ in threatening to ban 81-year-old woolen
pig knitter for hate speech.” (b) “Facebook Banned This Perfectly Innocent Photo Of A Puppy,
For Obvious Reasons.” (c) “Twitter apologizes after conservative commentator Candace Owens
was briefly locked out of her account.” (d) “Stephen: Twitter Apologizes for Banning People Who
Tweeted ‘Memphis.’” (e) “Their bad! Twitter apologizes to Dave Rubin for the ‘inconvenience’
of locking him out of his account for having legit COVID19 concerns.” (f) “Google apologizes for
accidentally removing the Podcast Addict app.” 63
182. Brian Amerige pointed out that Facebook, like many other online providers, "often
make[s] wrong decisions. And that means the experience of having content moderated on a day-
to-day basis will be inconsistent for users.” The company’s decision, whether right or wrong,
cannot (as applied) be challenged in a court of law. A third party simply must accept his / her
punishment unless he / she can muster enough public outrage to create a public relations fire.
Without powerful connections and / or notoriety, the average person cannot expect their
“mistaken” punishment to ever be undone because the appeals process is often ignored and is not
consistently available.
183. Regarding the "appeals process," even that is a farce – the company that
"mistakenly" bans a user / ICP is the same company considering the appeal. As applied, the user
can be "mistakenly" accused of, convicted of, and sentenced to "prison" for a speech "crime"
they did not commit. The user / ICP may not even know what prohibition (i.e., Community
Standard) they violated. See, e.g., the article from Mr. Amerige quoted at length above. Companies
63 For the Court's ease of reference, this compilation of articles is attached hereto as composite Exhibit U, which composite exhibit is incorporated fully herein by reference.
are not required to make a showing of what specific prohibition a user / ICP violated. What was
unsafe about Diamond and Silk? Mark Zuckerberg admitted “nothing.” Facebook seems to have
simply disagreed with Diamond and Silk’s political or ideological viewpoint and restricted the two
ladies, claiming they were “unsafe” based on Facebook’s own political viewpoint or ideological
agenda.
184. Per the CRS:
The Supreme Court has long considered political and ideological speech to be at
the core of the First Amendment, including speech concerning ‘politics,
nationalism, religion, or other matters of opinion.' W. Va. State Bd. of Educ. v.
Barnette, 319 U.S. 624, 642 (1943). … A government regulation [through a proxy
agent] that implicates political or ideological speech generally receives strict
scrutiny in the courts, whereby the government must show that the law is narrowly
tailored to achieve a compelling government interest.
Ex. P (emphasis added). Section 230 is not narrowly tailored, nor does it achieve the compelling
government interest (e.g., the need for FOSTA-SESTA). Any restriction, predicated on political
or ideological viewpoint, should be subject to strict scrutiny by the courts. But when considering
immunity, the courts in practice (as applied) have not applied any such scrutiny, because the courts
very rarely (if ever) venture into the merits of why a plaintiff was restricted and whether it was
done in “good faith” by a “Good Samaritan.” Such has been Fyk’s plight thus far in the Facebook
Lawsuit.
185. Consider this – if an elected official is elected by the public interest and, similarly,
popular figures (e.g., Diamond and Silk) are made popular because of the public interest, is
restricting their political or ideological speech in the public interest? No. Another consideration –
if an online provider / ICS restricts (under the aegis of government authority) an elected political
official, popular political figure, political speech, or censors political discourse, is the online
provider acting as a “Good Samaritan” to achieve “a compelling government interest” that aligns
with the public interest (e.g., to protect our children from offensive materials)? No.
186. If online providers are using the CDA to censor speech for anti-competitive
purposes (and they are, all the time), using their authority to quash as "otherwise objectionable"
speech / expression that would otherwise be afforded the highest level of constitutional protection
(and they are, all the time, because they are somehow above the law via the CDA), how then might
an ordinary citizen know what is prohibited and / or challenge the arbitrary and capricious
"enforcement" of the CDA? The ordinary citizen cannot under the broken CDA.
187. Targeting and restricting permissible political or ideological viewpoints as
"otherwise objectionable" or "unsafe" speech is repugnant to the core values and purpose of the
First Amendment; and, yet, such is authorized conduct under Section 230's currently overly broad
application. Companies like Facebook, Twitter, and Google have even restricted political
candidates during an election. Big Tech's "choice" to silence a candidate during an election results
in millions of others not being able to see that candidate’s speech at the most critical time. This
hurts not only the candidate’s chances of being elected, but also the millions of others who are
stripped of their ability to follow that candidate’s speech (i.e., not in the public interest). And vice
versa – Big Tech even protects politicians of choice (e.g., Hunter Biden’s laptop suppression).
188. Chad Prather, running for Texas governor, was banned by Facebook a mere eight
days prior to the election. Facebook’s moderators determined that Mr. Prather had violated some
arbitrary rule by posting the following comment:
Marilyn Hart interesting piece of bias. Now pull the fbi reports saying an
insurrection never happened. Do more research. Please for the love of God. I know
you need some bad guys in your life to make you feel better but please…you’re an
over-spoiled first world brat that has no actual clue how the world works. Good.
God. Travel the world a little bit and realize how well off you are. You’ve
contributed nothing to the freedoms you now enjoy. Troll the internet and create
your sense victimhood but please spare me. I literally toy with your responses on
Facebook because I’m nice and sometimes have time to waste but your self-
conceived sense of intelligence is beyond delusional. Take care. Get help. God
bless.
189. Mr. Prather's chance of winning the election was severely hindered by Facebook
because he called someone a delusional brat, which is, of course, permissible speech. In response,
Mr. Prather sought a temporary restraining order against Facebook, alleging (per comment by Mr.
Prather’s attorney) “[t]hat Facebook appears to be actively interfering in the Texas governor’s
election to benefit the sitting governor, Greg Abbot, in order to protect a private deal that would
grant Facebook subsidies with taxpayer dollars to build a new facility in Texas, is a political
scandal of epic proportions.” Mr. Prather’s attorney’s comments continued: “This corruption and
affront on free and fair elections in Texas is an outrage that must be stopped immediately by the
court.” 64 This Court has the power to stop this affront on free and fair elections, once and for all.
190. Mr. Prather's banning is not an isolated occurrence. Congressional candidates
Darlene Swaffar and Chris Bish were each restricted during their campaigns, as have been many
more candidates currently running for office. Several examples
are illustrated in the following: (a) Facebook Drops ‘Hate Speech’ Suspension for OH GOP
Candidate Josh Mandel. (b) Elected officials suspended or banned from social media platforms.
(c) Far right candidate Laura Loomer, banned from most social media, suspicious of Comcast
glitch. 65
64
Texas Scorecard, Chad Prather, GOP Candidate for Governor, Sues Facebook Over Suspension
https://texasscorecard.com/state/chad-prather-gop-candidate-for-governor-sues-facebook-over-suspension/
For the Court’s ease of reference, this article is attached hereto as Exhibit V and incorporated fully herein by reference.
65
This compilation of articles is attached hereto as composite Exhibit W and incorporated fully herein by reference.
191. Silencing candidates during an election cycle is one of the most obnoxious forms
of First Amendment violations that exist. It not only harms the candidate, but it harms the entire
electoral process and, thus, the public interest. A private company can, under Section 230’s
currently overly broad application, sway the results of an election to the benefit of self-interested
private corporations (e.g., seeking to secure government subsidies). This is very dangerous – as
applied, Section 230 contravenes the core values of the First Amendment.
192. Information can also be suppressed or supported by biased social media companies
to help or hinder a candidate during an election. As touched upon a few paragraphs ago, take, for
example, the Hunter Biden laptop scandal. The Hunter Biden laptop scandal is a “… perfect
example of how politicians and / or oligarchs weaponize “fact checkers” to deflect criticism and
enlist social media to censor articles. Nothing to see here!” 66 “The laptop is ‘unsubstantiated’
because the media [social media included] doesn't want it substantiated." See Ex. W. Here,
factual information was suppressed as "misinformation" to help a political candidate
win an election – the Presidential election. Regardless of one's political leanings, a substantial
number of real-world users / ICPs were penalized for sharing factual information (then deemed
"misinformation"), which had a real-world effect on elections (including the 2020
presidential election). As Mr. Amerige pointed out, they got it wrong.
193. Political and ideological suppression goes beyond candidates. Sitting congressional
members have been restricted by many social media companies; e.g., Louie Gohmert, Jim Banks,
Marjorie Taylor Greene, Ron Johnson, and Rand Paul. The most famous example of a political
66
The New York Post, The Hunter Biden laptop is confirmed?! Color us shocked!
https://nypost.com/2021/09/21/the-hunter-biden-laptop-is-confirmed-color-us-shocked/
A copy of this article is attached hereto as Exhibit X and incorporated fully herein by reference.
figure being banned was President Donald J. Trump for supposed incitement. And political and
ideological suppression often involves political figureheads not presently in office and not
presently running for office; e.g., Tulsi Gabbard, who recently experienced consistent, mysterious
shadow bans.
194. Whether one agrees politically or ideologically, restricting the speech of a candidate
or elected official by way of the overly broad spectrum of “otherwise objectionable,” “unsafe,” or
even just “mistakenly,” is a very slippery and dangerous slope to stand on. Even judges are not
exempt from Section 230’s overly broad reach; e.g., Justice Clarence Thomas’ documentary was
pulled (restricted) from Amazon’s streaming services: “The documentary film about Thomas,
‘Created Equal: Clarence Thomas in His Own Words,’ was removed from Amazon’s streaming
service last month and the filmmaker said he was never given an explanation.” 67
195. If a sitting President and a Supreme Court Justice can be silenced without
repercussion, how long will it be until members of this Court are pulled from public social media
discourse, whether mistakenly or deliberately? The CDA's enabling of private companies to
silence a Supreme Court Justice, the President of the United States, and members of Congress
threatens the very fabric of liberty.
196. Per the Washington Post:
But there’s another, more conceptual debate that transcends partisan politics and
carries implications beyond Trump’s freedom to tweet. It’s the question of whether
the largest social media companies have become so critical to public debate that
being banned or blacklisted – whether you’re an elected official, a dissident or even
just a private citizen who runs afoul of their content policies – amounts to a form
67
Fox News, Amazon pulled Justice Clarence Thomas documentary as censorship of conservative content continues,
https://www.foxnews.com/media/justice-clarence-thomas-amazon-censorship
A copy of this article is attached hereto as Exhibit Y and incorporated fully herein by reference.
of modern-day censorship. And, if so, are there circumstances under which such
censorship is justified? 68
197. Many other notable public figures, besides political and judicial figures, have served
or are serving online "prison" sentences for sharing lawful, legitimate, and permissible content
deemed "otherwise objectionable," "unsafe," "hate speech," et cetera by an ICS.
198. Another example of the "substantiality" of the CDA's "substantial overbreadth"
was / is found in the Alex Jones saga. Regardless of whether or not one likes Mr. Jones, Mr. Jones'
situation illustrated a “concerted effort” by multiple ICSs at once to silence someone for their
opinion. Some would argue Alex Jones is nuts, dangerous, harmful, “otherwise objectionable,”
because he voices dreaded “conspiracy theories” (many of which later prove true, similar to the
Hunter Biden laptop scandal). As Mr. Amerige noted, Mr. Jones did not outright violate any
specific rules (i.e., he did not actually break any Community Standard / “law”); instead, a team of
humans “scouring months of Jones’ historical Facebook posts to find borderline content that might
be used to justify a ban” were not able to find anything justifiable; so, “[i]n practice, the decision
was made [to restrict Jones] for political reasons.” Alex Jones’ protected permissible speech was,
in practice, restricted based on “borderline content” and for “political reasons.” Restricting
borderline speech for political reasons is beyond the intended compelling government interest of
Section 230; i.e., a substantially overbroad application of the CDA.
199. More examples of notable public figures who were banned include, but are certainly
not limited to, the following: Paul Joseph Watson, James Woods, Monica Mathews on Air, Mindy
Robinson, David Harris Jr., Ann Vandersteel, Sydney Powell, Michael Flynn, John Stubbins, Ian
68
The Washington Post, Tech giants banned Trump. But did they censor him
https://www.washingtonpost.com/technology/2022/01/07/trump-facebook-ban-censorship/
A copy of this article is attached hereto as Exhibit Z and incorporated fully herein by reference.
Trottier, Derek Utley, Dan Bongino, Jack Posobiec, Babylon Bee, Paul Gosar, Stew Peters,
Doug Billings, Larry Elder, Col. Rob Maness, Ivory Hecker, April Moss, Disclose.tv, Tim Pool,
Jovan Hutton Pulitzer, Dr. Gina, Hodgetwins, Joe Rogan, Elon Musk, Anna Khait, Ron Coleman,
George Papadopoulos, Ron DeSantis, Steven Crowder, Mark Dice, Chuck Callesto,
Dinesh D’Souza, Charlie Kirk, Ryan Fournier, Tucker Carlson, Laura Ingraham, just to name a
few. “Not limited to” because there are far more notable public figures (not to mention, unknown
citizens) who have been silenced.
200. Section 230's overly broad application goes beyond political restrictions. Examples
of notable figures who have been penalized for other permissible speech include, but are not
limited to, the following: (a) Clint Eastwood was punished for sharing his delight over the 2016
election outcome. (b) Rihanna was punished for posting a (potentially serious artistic) picture of
her buttocks. (c) Courtney Love was punished for posting “derogatory” claims (i.e., not unlawfully
defamatory). (d) Rose McGowan was punished for calling out Ben Affleck’s knowledge of
Harvey Weinstein’s behavior. (e) Isis Thompson was punished for having the “wrong” name. (f)
PewDiePie was punished for making a joke about joining “Isis.” (g) Elly Mortimer was punished
for posting a selfie of herself as part of an art piece. (h) Liberty Memes was punished for poking
fun at various political figures. (i) Kendall Jenner was punished for posting a runway picture.
201. We mention notable figures because their stories of restriction have been told; but,
the total number of people who are not part of pop culture and who have been punished due to the
overbreadth of Section 230 application is staggering. The real-world overly broad application of
Section 230 to authorize restriction of permissible speech goes well beyond “substantial” – such
is unprecedented in human history.
202. Fyk was punished for nothing at all (i.e., there was no reason given; i.e., the ICS
never showed cause). Fyk received a ban of his WTF Magazine (Where's The Fun) page
without any showing of cause (it was blank). In 2016, Fyk's pages were simultaneously
unpublished without any showing of cause. Without a requirement to “show cause,” Big Tech is
free to penalize anyone, for any reason, at any time, without any “good faith” justification /
showing. The punished never know whether or not their penalization was done in “good faith” by
a “Good Samaritan,” as the website is not required to show the cause for penalization.
203. Fyk did not know why his magazine's Facebook business page was penalized or
why his approximately six pages were unpublished simultaneously, while others were not
unpublished, although restricted nonetheless. It was not until Facebook offered to republish Fyk's
content for Fyk's competitor (and not for Fyk) that Facebook's anticompetitive motive for restricting
Fyk's material became prima facie apparent.
204. Another example of an "unjustified" (no cause shown) banning is the restriction of
the website known as the SGT report: “ALERT: On October 15, 2020 YouTube terminated BOTH
SGT Report YouTube channels without warning or cause. On October 22, 2020 Patreon terminated
the SGT Report Patreon page without warning or cause. This is economic warfare friends.” 69
205. Yet another example of an "unjustified" banning was posted on YouTube's help
board by an individual named Andrew Tsurikov. It read in pertinent part, as follows:
A few weeks ago I got a message from youtube that my account was banned for
violation of some rules with a link to general terms and conditions of the service. I
was trying to appeal, but got the answer back that they are keeping the ban. The
thing is that I never ever posted a single video or commented anything or put likes
69
SGT Report, https://www.sgtreport.com/2020/08/i-have-been-permanently-banned-by-youtube/
A copy of this article is attached hereto as Exhibit AA and incorporated fully herein by reference.
or anything like that. I was totally confused by the fact something should have
happened that lead to the ban, but I haven’t got a clue what exactly was wrong. 70
206. Claiming someone violated one or several rules and pointing to a link to general terms
and conditions hardly justifies the claim. Here, on April 18, 2019, Andrew Tsurikov says, in his
own words on YouTube's help board, that he is "totally confused" and he "ha[s]n't got a clue what
exactly was wrong." Rarely does anyone know what rule was broken or why they were penalized;
they are simply directed to a general rules page in order to guess at which "rule" they broke.
Statutes should provide clear guidelines as to what conduct is being proscribed – as described
above, language like "otherwise objectionable" is so broad and unintelligible as to be
unconstitutionally vague.
207. The all-too-common cryptic / glossy nature of ICS notifications associated with
quashing materials advanced by an ICP makes it almost impossible (if not impossible) for the ICP
to comprehend the "good faith" reasons for Big Tech's restrictions. Facebook's Tessa Lyons
shared: “that we [Facebook] removed hundreds of pages and accounts, in that case, it was because
of the behavior that was spammy coordinated inauthentic behavior that we were seeing on our
platform.” 71 Apparently “spammy coordinated inauthentic behavior” (i.e., not materials) is
somehow a justifiable reason to cost hundreds of people their livelihoods and restrict those users’
permissible (at least by government standards) speech, and somehow “behavior” has become
physical materials.
70 Viewer account banned for no reason post, see https://support.google.com/youtube/thread/4433175/viewer-account-banned-for-no-reason?hl=en
A copy of this post is attached hereto as Exhibit BB and incorporated fully herein by reference.
71
The video from which this quote was derived, which such video is incorporated fully herein by reference, can be
found at https://www.youtube.com/watch?v=do1XECYZ8vw
208. Taking down materials based on behavior and / or without a showing of "good
faith" cause is one thing; but, imagine for a moment a social media company making major medical
decisions on behalf of millions of Americans. Imagine the real-world dangers associated with
restricting medical discourse, or blocking "contrary" medical data, especially if those decisions are
wrong. We do not need to imagine such a hypothetical; it is already a real-world scenario.
209. Many of the largest tech companies in the world are acting as medical professionals
giving medical advice (i.e., practicing medicine without a license) or, rather, restricting any
medical advice (even from licensed doctors) that is contrary to CDC or government “guidance.”
History has proven that government “guidance” (e.g., CDC) is not always in the best interest of
the people and should not always be trusted.
210. For example, the Tuskegee experiments:
In 1932, the USPHS, working with the Tuskegee Institute, began a study to record
the natural history of syphilis. It was originally called the “Tuskegee Study of
Untreated Syphilis in the Negro Male” (now referred to as the “USPHS Syphilis
Study at Tuskegee”). The study initially involved 600 Black men – 399 with
syphilis, 201 who did not have the disease. Participants’ informed consent was not
collected. Researchers told the men they were being treated for “bad blood,” (e.g.,
akin to COVID immunization) a local term used to describe several ailments,
including syphilis, anemia, and fatigue. In exchange for taking part in the study, the
men received free medical exams, free meals, and burial insurance. 72
211. There was zero informed consent among the six hundred human beings experimented
upon. They were given an injection to see what happened. Many people have concerns about
modern vaccinations. Under its current misapplication, Section 230 enables Big Tech to make
major medical decisions on behalf of millions of Americans when blocking or developing medical
72
Center for Disease Control and Prevention, The Tuskegee Timeline (emphasis added).
https://www.cdc.gov/tuskegee/timeline.htm
A copy of this article is attached hereto as Exhibit CC and incorporated fully herein by reference.
information. Several major Internet platforms have made a concerted effort to censor doctors who
question the government’s guidance even when presenting facts and professional opinions. One
group that was notably censored for providing a "contrary" opinion to the CDC's guidance is
known as the Front-Line Doctors. The Front-Line Doctors gathered on Capitol Hill to give a press
conference discussing medical research and their personal experiences as doctors on the frontlines
of COVID-19. They exposed explosive details about how relevant information has been censored.
Rather than allowing people to view the press conference and decide for themselves, YouTube,
Facebook, Twitter, and Squarespace censored and silenced these medical professionals because
their opinions did not align with political agendas, Big Pharma views, and / or Big Tech’s views.
212. YouTube spokesperson Ivy Choi stated: "We quickly remove flagged content that
violates our Community Guidelines, including content that explicitly disputes the efficacy of local
health authority recommended guidance on social distancing (or any other medical guidance) that
may lead others to act against that guidance.” 73 In other words, social media companies are
restricting valid professional opinions (from actual doctors) about public health and saying to their
users “the government is right, you are wrong, do what we say to do, do not listen to contrary
information, because we are the authority and we have never had a bad result for the uninformed
participant.” Suppressing medical information is very dangerous and not a decision that should be
made by a website programming company, and, yet, it happens every day and has so far absurdly
found protection under the CDA vis-à-vis the overly broad application of same.
73 NBC News, YouTube, Facebook split on removal of doctors' viral coronavirus videos, https://www.nbcnews.com/tech/tech-news/youtube-facebook-split-removal-doctors-viral-coronavirus-videos-n1195276 (6 paragraphs down)
A copy of this article is attached hereto as Exhibit DD and incorporated fully herein by reference.
213. Facebook also banned two doctors who simply put forward clinical data. 74 These
two doctors operated five hospitals and are experts in their field. Apparently, online providers /
ICS / Big Tech know more than doctors, who are in charge of multiple health facilities, and can
now make major medical decisions and / or practice medicine without a license. The real-world
harm that has resulted from silencing doctors’ opinions (i.e., denying users’ informed consent) is
difficult to comprehend, especially if those medical professionals are correct. An ICS making
major medical decisions on behalf of users is beyond the breadth of Section 230's intended
purpose. This is something that the Court should not take lightly – suppressing health-related
information, regardless of whether it aligns with government guidance, may have potentially
catastrophic (even deadly) real-world consequences. Decisions regarding what medical advice is
allowed and what information is suppressed should not be left up to social media moderators. Mr.
Zuckerberg did not complete undergrad, let alone medical school.
214. Public discourse surrounding health issues like COVID mask mandates and
vaccinations is of the utmost importance, and this discourse should not be suppressed. Some
people argue that COVID is just a government conspiracy, masks are bad, and vaccinations are
dangerous and / or killing people, while others are afraid of being sick and want to be sure that
they and their loved ones are protected from harm at all costs. All
sides are entitled to their opinions and all citizens should have the right to express those opinions
assuming their speech is lawful, legitimate, and permissible.
74
The video discussing same, which such video is incorporated fully herein by reference, can be found at
https://www.facebook.com/watch/?v=553927125266189
215. Commercial ICSs' censorship of lawful, legitimate, and permissible speech is
substantially out of control. Here are just a few more examples of articles relating to vast
censorship of lawful speech:
(a) https://thefederalist.com/2021/08/11/youtube-will-censor-you-if-you-disagree-with-the-government-even-if-youre-in-the-government/
(b) https://reclaimthenet.org/twitter-censors-and-locks-out-mrna-expert/
(c) https://dailycaller.com/2022/03/01/abuse-censorship-twitter-suspends-republican-vicky-hartzler-tweet-transgender/
(d) https://www.breitbart.com/tech/2021/05/10/5-of-big-techs-most-serious-acts-of-censorship/
(e) https://neeva.com/learn/big-tech-censorship
(f) https://disinformationchronicle.substack.com/p/media-will-not-call-big-tech-censorship?utm_source=url
(g) https://nypost.com/2021/04/06/justice-thomas-shows-how-we-can-end-big-tech-censorship-for-good/
(h) https://www.newsweek.com/big-tech-censorship-threatens-americans-constitutional-rights-opinion-1609286
(i) https://elamerican.com/twitter-censors-el-americans-account/
(j) https://www.dailysignal.com/2022/02/18/youtube-censors-mom-fighting-school-mask-mandates/ 75
75 A compilation of these articles is attached hereto as composite Exhibit EE and incorporated fully herein by reference.
216. As previously mentioned, Facebook serves as a good case study in determining the
ratio of permissible to impermissible applications of Section 230's breadth, since Facebook makes
up a substantial portion of worldwide social media usership. Other companies follow in Facebook's
moderation footsteps, imposing the same unconstitutional, overly broad restrictions
on users in relation to permissible third-party speech.
217. For example, Facebook's Newsfeed manager, Tessa Lyons, explains how Facebook
handles “problematic content” (i.e., permissible speech): “we reduce the spread of problematic
content…and when we say problematic content, what we are talking about is, content that violates
the values that we hold but might NOT violate our community standards.” 76
218. This is an admission, by one of the largest social media companies in the world,
that permissible speech is penalized, even though the speech does not actually violate Facebook’s
own “rules” but is simply deemed “problematic” (i.e., otherwise objectionable) and restricted
anyway. Facebook admittedly applies its content restrictions beyond Section 230’s compelling
government interest and even beyond its own Community Standards (i.e., “rules” / “laws”).
219. As another example, third parties (not Facebook) "making money" is apparently
considered problematic / objectionable by Facebook. Tessa Lyons explains how Facebook handles
financially motivated material:
for the financially motivated actors, their goal is to get a lot of clicks [i.e., reach
and distribution] so they can convert people to go to their websites, which are often
covered in low quality ads, and they can monetize and make money from those
people’s views, and if we can reduce the spread of those links, we reduce the
number of people who click through and we reduce the economic incentives that
they have to create that content in the first place. 77
76
This video, which such video is incorporated fully herein by reference, can be found at
https://www.youtube.com/watch?v=X3LxpEej7gQ
77
Id.
Reducing economic incentives in order to reduce the creation of third-party content is another
example of the chilling effect on lawful speech (and its anti-competitive animus) that the overly
broad application of Section 230 has had.
220. Fyk's economic incentives to create content (i.e., future permissible speech) were
reduced by Facebook by, for example, blocking Fyk's entire website www.funnierpics.com.
Facebook prevented anyone from clicking on Fyk's links to his website and prevented anyone
from seeing Fyk's ads, which was / is tortious interference with Fyk's economic advantage.
Congress' compelling interest for Section 230 was absolutely not to allow tortious interference or
to prevent people from creating financially incentivized lawful content (i.e., permissible speech).
221. How does Facebook purportedly handle problematic content "evenhandedly"?
Tessa Lyons explains: “it’s not as if everyone loses 20%, what we intend to have happen is the
spammy low quality content loses a lot of traffic while the high-quality publishers continue to do
well.” 78 Facebook’s penalizations are admittedly unequally predicated upon content “quality”
standards, not predicated on the offensive nature of the content. In other words, content that is of
objectionable “quality,” but does not actually violate Community Standards, is deemed
problematic and still restricted. This is an excellent example of how restrictions are used to develop
the remaining information by proxy – low quality users are restricted, while high-quality
publishers are allowed / developed.
222. Facebook's founder, Mark Zuckerberg, openly admits that Facebook is developing
(at least in part, if not in whole) information based on Facebook’s own opinion: “we’re showing
the content on the basis of us believing it is high quality, trustworthy content rather than just ok
78
This video, which such video is incorporated fully herein by reference, can be found at
https://www.youtube.com/watch?v=DEVZeNESiqw&t=8s
you followed some publication, and now you’re going to get the stream of what they publish.” 79
Facebook is not just passively "allowing" content; it is actively "showing" (i.e., developing in
part) the content based on its own values and interests, while reducing any information of less
interest. This craziness is, at present, enabled by the overly broad CDA.
223. Inconvenient facts and opinions are of less interest to social media companies.
Facts, information, or opinions that the ICS disagrees with (finds otherwise objectionable) are
often restricted (as applied) under Section 230 protection. “Misinformation” and “fake news” are
two more overly broad (as applied) objectionable (yet lawful) categories subjected to Section 230
speech restrictions. As Mr. Amerige pointed out, should truth or accuracy be “treated as holy writ
in an environment where people aren’t allowed to be wrong or offend others? Members of each
generation must re-learn important truths for themselves.” How, as a society, do we maintain what
the “truth” is? Such is not possible under the current overly broad application of the CDA – under
the currently overly broad CDA, Big Tech is the arbiter of “truth,” suppressing facts, information,
and / or opinions that contradict their version of truth. Congress’ compelling interest for Section
230 was not for Big Tech to serve as the judge, jury, and executioner over truth. Refutation, and
even inaccuracy, are important to testing the truth and getting at the truth. Again, Section 230's application
to restrict inaccuracies and falsities is overly broad.
224. Mr. Amerige continues:
[u]nassailable truths turn brittle with age, leaving them open to popular suspicion.
To maintain the strength of our values, we need to watch them sustain the weight
of evidence, argument and refutation. Such a free exchange of ideas will not only
create the conditions necessary for progress and individual understanding, but also
cultivate the resilience that much of modern culture so sorely lacks.
79
This video, which such video is incorporated fully herein by reference, can be found at
https://about.fb.com/news/2019/04/marks-challenge-mathias-dopfner/
Most simply put – without discourse or dissent, truth begins to atrophy.
225. Fact checking in general (i.e., manipulation and refutation of truth) is well beyond
the breadth of Congress’ compelling government interest. Tessa Lyons explains Facebook’s fact
checking process:
we identify, potential hoaxes or misinformation…and once we predict those things,
we send them to Independent third-party fact checkers… Once they mark an
individual piece of content false, and apply it to the newsfeed ranking algorithm, in
order to reduce the relevance score and show that piece of content lower in
newsfeed, reducing the number of people who see it. 80
226. Facebook identifies "misinformation" (i.e., "predicts" problematic content), then
sends it to a third-party fact checker (i.e., to launder the creation of contrary information) to rate
the subject information as “false,” in order to justify restriction, reduction, or refutation. Tessa
Lyons explains that Facebook “tak[es] action against pages and domains repeatedly marked
false.” 81 The same company that identifies the misinformation (repeatedly) is also the same
company who “takes action” against the users they disagree with (i.e., the users who are viewed
as inconvenient). Congress’ compelling interest for Section 230 was not for Big Tech to determine
content accuracy. That the CDA presently allows such renders the CDA overly broad in
application.
227. "Fact checking" (i.e., content refutation) becomes particularly concerning (i.e.,
dangerous) during elections, for example. United States political candidates should not be
restricted (i.e., “fact checked”) during an election. Tessa Lyons explains who handles “fact
checking” for Facebook (e.g., during elections): “We partner with fact checkers now in seven
80
This video, which such video is incorporated fully herein by reference, can be found at
https://www.youtube.com/watch?v=X3LxpEej7gQ
81
Id.
countries including the US and the fact checkers are able to review the content and rate its
accuracy.” 82 In other words, foreign actors (from as many as seven countries, in the case of
Facebook) are restricting United States citizens, candidates, and officials during elections, which
has a real-world impact on the outcome of an election. Section 230’s compelling government
interest was not to allow foreign interference in an election. That the CDA presently allows such
renders the CDA overly broad in application.
228. Justice Thomas pointed out that
[u]nder [the current] interpretation [of Section 230], a company can solicit [a third-
party to create contrary information], select and edit for publication several of those
statements [i.e., identify ‘misinformation’], add commentary [i.e., create refuting
information in whole or in part], and then feature [i.e., provide / develop] the final
product prominently over other submissions [i.e., displacing or restricting user’s
information] – all while enjoying immunity.
Ex. C, Malwarebytes, 141 S.Ct. at 16 (internal citation omitted).
229. Not only is "fact checking" dangerous, it is Information Content Provision (ICP)
by definition, at least in part, through a third-party paid proxy. Big Tech identifies
“misinformation” (i.e., any problematic content), then sends the “misinformation” out to third-
parties (who are contracted by the social media company) to create (in whole or in part)
contradictory material that the online provider solicits (pays for), then features (develops) the
created information prominently over users’ information. Congress’ compelling interest for
Section 230 was not to immunize content solicitation, creation, and development. In application,
therefore, the CDA is overly broad.
230. Content creation / development (subject to the online provider's prerogative) is
being laundered through third-party “fact checkers” so that the information is “provided by
82
Id.
another.” This is very similar to how third-party speech infringement (i.e., the government’s
prerogative) is being laundered through the online provider’s First Amendment rights so that the
information is “restricted by another” (i.e., private entity).
231. It is very important to note that the government is not allowed to restrict permissible
speech, so the government offers consideration (e.g., Section 230 liability protection) to solicit
a third party's conduct (i.e., to restrict material) at the prerogative of the higher authority, while
hiding behind an arm’s length transaction. The government is soliciting actions that the
government does not have the authority to undertake itself. 83 Similarly, corporations are not able
to provide content without liability, so the corporation offers consideration (e.g., pays a “partner”
/ “fact checker”) to solicit a third-party’s conduct to act (i.e., to create material) at the prerogative
of the higher authority, while hiding behind an arm’s length transaction. Neither the government
nor the corporations should be allowed, by this Court, to continue laundering their prerogatives
through solicited third-party actions.
232. Facebook and other ICSs are never (or very rarely) held accountable for third-party
information even when the ICS pays for / solicits same (i.e., offers consideration under obligation)
and is a party to the creation and / or development of that information. Ironically, Facebook group
admins (users) are generally held accountable for unsolicited third-party information. Tessa Lyons
explains: “we will be holding the admins of Facebook Groups more accountable for (i.e., third-
party content) Community Standards violations… When people in a group repeatedly share
content that has been rated false by independent fact-checkers, we will reduce that group’s overall
83 John Stossel is a prime example of "misinformation" being used to silence opposition, as Mr. Stossel explains in his video found at https://www.facebook.com/JohnStossel/videos/511412617168427 . This video is incorporated fully herein by reference.
News Feed distribution [i.e., punish everyone else]." 84 Is that which is good for the goose not good
for the gander? Congress’ compelling interest for Section 230 was not to restrict the permissible
speech of one user for the speech or actions of another user. In this way, the CDA is overly broad
in application.
233. Ms. Sandberg, Facebook's COO, goes far afield by equating "misinformation" with
financial motivation and with spam (amongst other things): “The misinformation and fake news
that we see on Facebook is financially motivated. It’s spammers… people who are trying to
generate clicks to low quality websites covered in ads so they can generate impressions and ad
revenue." 85 Apparently, "financially motivated" equates to "spam," which, in turn, equates to misinformation.
The ordinary person cannot possibly know what is prohibited if financial interest is, in reality,
misinformation.
234. The ordinary user cannot possibly know what is prohibited if the rules are
“expressed in a way that is too unclear for a person to reasonably know whether or not their conduct
falls within the law.” Users tend to avoid the risk of consequences by staying far away from
anything that could possibly fit the uncertain wording of the Community Standards. The ordinary
user cannot possibly know what is prohibited activity if offensive content is anything the provider
or user considers objectionable (which is anything that is problematic), anything problematic is
spam, anything that is spam is financially motivated, and anything that is financially motivated is
misinformation.
84
“April 10 2019 FB newsroom Steps to manage problematic content.” This article can be found at
https://about.fb.com/news/2019/04/remove-reduce-inform-new-steps/
For the Court’s ease of reference, a copy of this article is attached hereto as Exhibit FF and is incorporated fully
herein by reference.
85
This video, which such video is incorporated fully herein by reference, can be found at
https://www.youtube.com/watch?v=do1XECYZ8vw
235. Big Tech "rules" are so unclear that no ordinary user knows where one "rule" ends
and the next begins. Facebook’s rules are so vague and unclear that Facebook’s own COO cannot
even distinguish between them. Financially motivated content is not necessarily spam and it is not
necessarily misinformation. Congress’ compelling interest for Section 230 was not to allow an ICS
the ability to deem anything and everything prohibited and then arbitrarily enforce “rules”
whenever it best suits the ICS, regardless of whether or not the ICS is acting as a “Good
Samaritan.” In this sense, the CDA is overly broad in application.
236. "By removing, reducing and informing we disrupt the incentives that exist for
spreading inauthentic harmful communication.” 86 Here, again, Facebook is “disrupting incentives”
(i.e., tortiously interfering) under vague prohibited categories. There is no measure for the ordinary
person to know what content is authentic. Congress’ compelling interest for Section 230 was not
for Big Tech to determine content authenticity. In this sense, the CDA is overly broad as applied.
237. Facebook and other platforms use terms / phrases like "click bait," "spam,"
"financially motivated," "hate speech," "sensationalism," "bullying," "inauthentic," "harmful,"
"low quality," et cetera interchangeably to broadly define all problematic (i.e., objectionable)
content in order to justify any arbitrary restriction (i.e., punishment) and to, when no real reason
exists, justify a ban under the “rules” (e.g., the Alex Jones saga discussed above). Congress’
compelling interest for Section 230 was not to enable arbitrary justification for removing any and
all information. In this sense, the CDA is overly broad as applied.
86
May 22, 2018 Three-part recipe for cleaning up newsfeed. This article can be found at
https://about.fb.com/news/2018/05/inside-feed-reduce-remove-inform/
For this Court’s ease of reference, a copy of this article is attached hereto as Exhibit GG and incorporated fully
herein by reference. The quote featured in this filing is found in the video associated with this article, and the video
can be found by way of the preceding hyperlink. This video is incorporated fully herein by reference.
238. Even the subject of Kyle Rittenhouse's innocence was deemed "objectionable" by
many online platforms. Expressing support for Kyle Rittenhouse’s innocence resulted in the
banning of many. It does not matter how the reader of this filing felt / feels about Mr. Rittenhouse’s
case. What matters is that these monolithic platforms could potentially sway a jury’s decision
because of their ability to sway public opinion. Congress’ compelling interest for Section 230 was
not to enable Big Tech to determine guilt or innocence or sway public opinion and juries. In this
sense, the CDA is overly broad as applied.
239. Another generic reason why users get penalized or punished due to Section 230's
overbreadth is using the service too much; e.g., joining too many groups, sending too many friend
requests, liking too many posts, commenting too much, poking too many people, or messaging too
often. Congress’ compelling interest for Section 230 was not to enable Big Tech to prevent too
much use of a service.
240. As Facebook's Tessa Lyons has publicly admitted, an ICS' sponsored partner could
cause another user’s content to be displaced (i.e., restricted or reduced availability). Stories that
are higher in the Newsfeed are more likely to be seen. Someone pays Facebook to become
responsible for developing the information, in part (i.e., to "increase its distribution" and handle
the placement), which then displaces other users' content (i.e., restricts its availability). In other words,
Facebook restricts the content of lower-valued users (those who do not pay Facebook, or who do not pay
Facebook enough) in order to displace their content and develop the content of the ICP who pays
more and is accordingly valued more. Simply put, Facebook acts as a direct competitor of its own
users, in partnership with sponsored advertisers, displacing their information in favor of
Facebook's high-paying partners, as was the case in the Facebook Lawsuit. This is the
advertising business model of most major social media platforms – offer reach and distribution for
free initially to increase the user base, then once the user base is large enough, restrict the reach
and distribution (restrict access to or availability of material) in order to displace the user’s content
with content the ICS is paid to develop. Congress’ compelling interest for Section 230 was not to
enable Big Tech to restrict users' speech to allow for the ICS' own financial gain as a direct
competitor. In this sense, the CDA is overly broad as applied.
241. One "simply stat[ing] on an online platform, such as a Facebook, that he does not
like someone of a certain class … often results in restriction by the online provider,” explained
Ryan Hartwig (“Hartwig”), a former moderator and employee of Cognizant. 87 Hartwig further
explained: “If someone says something like ‘I dislike Muslims who believe in Sharia law,’ that
statement is a tier 2 hate speech offense, at least as it pertains to Facebook for example, and your
statement will be deleted off Facebook’s platform.” 88 This kind of speech may be objectionable,
but such is not constitutionally impermissible. Allowing Big Tech to cripple users’ speech that is
permissible under the guise of the First Amendment because it is “objectionable” under the CDA
renders the CDA overly broad as applied because Congress’ compelling interest was certainly not
to create a law that violates constitutional rights in application.
242. Per Hartwig, "an online provider can even choose to exempt [i.e., waive their own
rules] offensive speech whenever it benefits the company or aligns with the company’s own views
and policy agenda.” 89 Hartwig further explained: “generally advocating for the death of babies or
fetuses is patently offensive and is a violation of Facebook’s policies, but Facebook instructed us
[i.e., Cognizant’s moderation employees] in an internal correspondence that, ‘advocating for
killing babies / fetuses in an abortion context should be ignored [i.e., exempted].'" 90 In other words, advocating for killing babies / fetuses is not okay in one context but is okay in another. Where prohibition is based on the context in which something is said, the ordinary person cannot possibly know what is or is not prohibited. The CDA's enabling of Big Tech to engage in context coin-flipping when deciding the (im)permissibility of content / speech renders the CDA overly broad in application.

87 See R. Hartwig April 6, 2022, affidavit attached hereto as Exhibit HH and incorporated fully herein by reference.
88 See id.
89 See id.
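For illustration only, the context-dependent exemption described above can be sketched as follows (a minimal sketch; the rule text, contexts, and return labels are hypothetical stand-ins, not Facebook's or Cognizant's actual tooling):

```python
# Illustrative sketch only. It models the context-dependent moderation
# Hartwig describes: the same statement is deleted in one context and
# "ignored" (exempted) in another, so an ordinary person cannot predict
# the outcome of posting it.

def moderate(text: str, context: str) -> str:
    # Hypothetical rule: advocating for killing babies / fetuses is
    # "patently offensive" and normally removed...
    violates = ("killing babies" in text.lower()
                or "killing fetuses" in text.lower())
    # ...but, per the alleged internal instruction, the same statement
    # "should be ignored" when it appears in an abortion context.
    if violates and context == "abortion":
        return "IGNORE"  # rule waived; content stays up
    if violates:
        return "DELETE"  # same words, different context; content removed
    return "ALLOW"

print(moderate("advocating for killing babies", "general"))   # DELETE
print(moderate("advocating for killing babies", "abortion"))  # IGNORE
```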
243. Even Mother Teresa is not above violating Big Tech content policies, policies enabled by the CDA's overbreadth. A tweet consisting of Mother Teresa's words and her picture read as follows: "Abortion is profoundly anti-women. Three quarters of its victims are women: Half the babies and all the mothers." This content was deemed "hate speech" and restricted by Twitter. 91 Advocating for abortion is fine, but quoting Mother Teresa's stated opinion that abortion is profoundly anti-women is "otherwise objectionable" and warrants censorship. Again, the CDA is absurdly overbroad in application.
244. Per Hartwig, "[c]ontent provision was / is not always in the control of the third-party moderator… ." 92 For example, Hartwig explained:
Cognizant was seeing content trending around an anti-abortion law passed in
Alabama. An image relating to that was brought to the attention of a client,
Facebook, as it met our hate speech policy for political exclusion. Given the
newsworthy nature of the content, however, Facebook directed us to ignore this
image and told us to be advised of further violations in captions and comments. 93
90 See id.
91 Townhall, When Twitter Blocked Mother Teresa
https://townhall.com/columnists/michaelbrown/2019/04/12/when-twitter-blocked-mother-teresa-n2544687
A copy of this article is attached hereto as Exhibit II and incorporated fully herein by reference.
92 See Exhibit HH.
93 See id.
Here, the client (Facebook) is making the editorial determination (knowingly) to provide content that is being reported as offensive because the client (Facebook) thinks it is newsworthy. Congress' compelling interest for Section 230 was not to allow an ICS to act as an ICP knowingly hosting offensive content. In this sense, the CDA is overly broad as applied.
245. There is nothing "standard" about the application of Community "Standards." Users are treated differently based on their notoriety, social status, or economic benefit to the platform, to list just a few of the many treatment incongruences. For example, Facebook maintains an internal "shielding" process for special-status users. The PR Fire shield, for instance, is used to prevent bans on accounts that pose a public relations concern (fire), such as those of major media figures or celebrities. Other examples include the viral content shield, electoral shield, legal shield, media ops notable shield, popular page shield, media ops BOB (i.e., book of business) partner shield, and verified page shield. In other words, a caste system.
246. One shield is particularly notable – the advertising shield, which better aligns with racketeering than it does with special-status protection. An unnamed Facebook whistleblower (who called herself "Foxtrot" to remain anonymous when interacting with Fyk, because she was scared of Facebook) explained to Fyk how the advertising shield works. The advertising shield is a three-tier protection system that Facebook implements for advertisers. The more an advertiser pays, the higher the tier of protection the advertiser receives. An advertising shield prevents the algorithm from "accidentally" / "mistakenly" getting it wrong, and it also prevents lower-level moderators from being able to ban (punish) higher-valued customers (e.g., Fyk's competitor, at issue in the Facebook Lawsuit) since they pay Facebook more. In other words, the overbreadth of the CDA enables Facebook to penalize third-party users; but, if those third-party users pay Facebook for protection, Facebook will not penalize them. The mob once had (well, still does have) a similar racket going, and the compelling government interest behind the CDA was not to promote mob-like conduct. In this sense, the CDA is overbroad as applied.
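For illustration only, a three-tier pay-for-protection scheme of the kind the whistleblower described might be sketched like so (all tier names, dollar thresholds, and function names are hypothetical; nothing here is alleged to be Facebook's actual code):

```python
# Illustrative sketch only; tiers and thresholds follow the whistleblower's
# description as alleged above, not any verified Facebook implementation.

def shield_tier(annual_ad_spend: float) -> int:
    # Hypothetical three-tier protection: the more an advertiser pays,
    # the higher the shield tier it receives.
    if annual_ad_spend >= 1_000_000:
        return 3
    if annual_ad_spend >= 100_000:
        return 2
    if annual_ad_spend >= 10_000:
        return 1
    return 0  # organic users: no shield

def can_moderator_ban(moderator_level: int, target_ad_spend: float) -> bool:
    # A lower-level moderator cannot penalize a higher-shielded (i.e.,
    # higher-paying) account; the algorithm is likewise blocked from
    # "accidentally" auto-banning a shielded advertiser.
    return moderator_level > shield_tier(target_ad_spend)

print(can_moderator_ban(moderator_level=1, target_ad_spend=0))        # True
print(can_moderator_ban(moderator_level=1, target_ad_spend=500_000))  # False
```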
247. Social media companies can even choose to temporarily enact or exempt rules when it benefits the company or aligns with the company's views and policy agenda. For example, calling someone a "retard" will not get a user banned, but calling Greta Thunberg a retard or any other pejorative will get a user restricted. Again, the status of the user apparently changes the rules. Prohibitions change on the fly. Congress' compelling interest for Section 230 was not to allow Big Tech to change rules on a case-by-case basis predicated on status, notoriety, or financial value to the company. In this sense, the CDA is overbroad as applied.
248. Some websites even ban words (e.g., hashtags). How, for example, does restricting the hashtags "Save the Children" and "Stop the Steal" align with Congress' compelling interest for Section 230? It does not. Was it the compelling government interest of Congress to prevent children from being saved, or to prevent an allegedly stolen election from being challenged? No. It is absurd to think that restricting hashtags like "Save the Children" or "Stop the Steal" is acting in the public interest, as a "Good Samaritan," or to protect children from harm. Restricting the hashtag #SavetheChildren is antithetical to the legislative intent of Section 230. In this sense, the CDA is overbroad as applied.
249. Other hashtags that have been inexplicably banned by Instagram, for example, include: #alone, #assday, #beautyblogger (but #beautybloggers works), #bikinibody, #boho, #brain, #costumes, #curvygirls, #date, #dating, #desk, #dm, #elevator, #graffitiigers, #hardworkpaysoff (but #hardworkpaysoff💪 works), #happythanksgiving, #humpday, #iphonegraphy, #italiano, #kansas (but #kansascity works), #killingit, #kissing, #master, #models, #mustfollow, #nasty, #newyearsday, #petite, #pornfood, #pushups, #saltwater, #shit, #shower, #single, #singlelife, #skype, #snap, #snapchat (but #snapchat👻 works), #snowstorm, #sopretty, #stranger, #streetphoto, #sunbathing, #swole, #tag4like, #tanlines, #teens, #thought, #undies, #valentinesday, #workflow. Allowing Big Tech to get away with such absurd speech restriction by way of CDA immunity prima facie demonstrates the overbreadth of the CDA as applied.
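For illustration only, an exact-match hashtag blocklist of the kind that would produce the quirks just listed can be sketched as follows (a minimal sketch; the blocklist contents are drawn from the examples above, and the exact-match logic is an assumption on our part):

```python
# Illustrative sketch only; an exact-match blocklist of the kind that
# would produce the quirks alleged above, where #kansas is blocked but
# #kansascity is not, and #snapchat is blocked while #snapchat👻 is not.

BANNED_HASHTAGS = {"#kansas", "#snapchat", "#beautyblogger", "#valentinesday"}

def filter_hashtags(tags: list[str]) -> list[str]:
    # Exact string comparison: any variation (extra characters, emoji)
    # slips past the ban, which is why the restrictions look arbitrary.
    return [t for t in tags if t.lower() not in BANNED_HASHTAGS]

print(filter_hashtags(["#kansas", "#kansascity", "#snapchat", "#snapchat👻"]))
# -> ['#kansascity', '#snapchat👻']
```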
250. Acting in a concerted / conspiring effort to restrict user speech across multiple platforms at once is yet another example of the overly broad application of Section 230 immunity. Alex Jones, Laura Loomer, and President Trump are a few examples of users who were not-so-coincidentally restricted across multiple platforms at the same time pursuant to concerted Big Tech effort. These individuals' materials were not just restricted as being offensive; these individuals were eradicated from online existence. Whether one agrees or disagrees with their speech, how is conspiring between platforms to silence individuals altogether within the breadth of Section 230 immunity? It is not. The message being sent to all users is: "if you do not play by our rules and agree with what we think, we will act in unison to de-person you from society (i.e., deter permissible speech across multiple platforms) and we are completely immune from liability… haha, sucker." Congress' compelling interest for Section 230 was not to enable Big Tech to eradicate individuals from all online public discourse simply because of their lawful opinions. In this sense, the CDA is overly broad as applied.
251. When a user is penalized for posting purportedly "violative content," Big Tech companies like Facebook also restrict the user's ability to send private messages, disconnecting users from their friends, family, or loved ones as a result of something they posted publicly. This is one of the most nauseating bans that exists – even prisons allow inmates to contact their friends, family, and loved ones, notwithstanding their violations. Congress' compelling interest for Section 230 was not to enable Big Tech to restrict private conversations (permissible speech). In this sense, the CDA is overly broad in application.
252. Fyk has endured many of the same arbitrary restrictions mentioned above, based on absurd and dubious CDA-based "justifications" that have no basis in reality, "good faith," or "Good Samaritanism." For example, in or around the end of 2016, Facebook deleted one of Fyk's businesses / pages (with millions of viewers and thousands of dollars in advertising and / or web traffic earnings at issue) because, for example, it contained a posted screenshot from the Disney movie Pocahontas. Facebook claimed that this screenshot (from a Disney children's movie) was racist and accordingly violative of the CDA; i.e., to use Facebook terminology, the Pocahontas screenshot post constituted a "strike." Meanwhile, for comparison's sake, Facebook allowed other businesses / pages at that same time (in or around the end of 2016, and thereafter for that matter) to maintain, for example, a posted screenshot of a mutilated child, or instant article Facebook advertisements (moneymakers for Facebook) of things like people engaged in overly sexual activities, among other things that really were violative of the CDA. 94
253. Another example of lawful, legitimate, permissible speech restricted vis-à-vis Section 230's overbreadth was Fyk's picture of a child riding a tricycle with Sloth's head (from the movie Goonies) photo-shopped in place of the child's head. There are no identifiable aspects of the child in the photo. The words on the photo were: "When you post a picture of your kids, this is what we see." If this type of humor were considered patently offensive, shows such as Tosh.0, Family Guy, South Park, and The Simpsons would be censored on every social media platform.
94 Fyk reported the disgusting posted screenshot of the mutilated child to Facebook; but, in December 2016, Facebook advised Fyk that such disgusting post was acceptable. Facebook advised Fyk as follows: "Thank you taking the time to report something that you feel may violate our Community Standards. Reports like yours are an important part of making Facebook a safe and welcoming environment. We reviewed the photo you reported for being annoying and uninteresting and found it doesn't violate our Community Standards." Apparently posts of decapitated children are of "interest" to Facebook, whereas a photo of Chief Powhatan in Pocahontas is "annoying" (or who knows what). Fyk had put together a nice video compilation further showing Big Tech CDA abuses and allowance of garbage over the years, which video compilation had been posted on YouTube for quite some time (about a year); but, not-so-surprisingly / not-so-coincidentally, YouTube deleted the video very shortly before this filing for who knows what "reason." Thankfully, Fyk saved the video compilation; so, when the time is right for Fyk to share that video with the Court (or if the Court requests same now), Fyk's work product will be shared.
Fyk’s posted picture may offend someone, somewhere, but it was certainly not unlawful,
impermissible speech and everyone who would see Fyk’s picture initially elected to view his
content by liking his page. Yet Facebook deemed Fyk’s post violative of Community Standards.
Congress’ compelling interest for Section 230 was not to prevent humorous content. In this sense,
the CDA is overbroad as applied.
254. An example of an ICS' double standards is Google's policy on "doxing," which specifically prohibits the act of revealing personal information or contact details of a person without consent. And, yet, Google made the names and addresses of people who donated to the Canadian truckers available to the public on Google Maps. The information was later removed after public outrage. Here, per Google's rules, no one can reveal personal information about someone else, but Google can reveal personal information about anyone it so chooses. 95
255. Another example of double standards is Facebook's articulated position on "spam." Facebook has knowingly hosted (i.e., taken payment to develop information) sponsored ads that are full-blown scams; for example, an electric scooter selling for $99.00 that should sell for over $1,000.00. It was a sponsored ad that Facebook was paid to develop, and it scammed users. Anyone who attempted to buy the item would never see the item or their money again. Why does Facebook have no responsibility when it is paid to promote an online scam? The CDA's allowing Facebook to participate in and get away with this scam is further evidence that the CDA is overly broad in application.
95 An article regarding Google's conduct can be found at:
https://yournews.com/2022/02/23/2303820/google-maps-location-data-of-freedom-convoy-donors-posted-online/
A copy of this article is attached hereto as Exhibit JJ and incorporated fully herein by reference.
256. Most people are completely unaware of the largest permissible speech restrictions that occur online. The phrase "shadow ban" was coined to describe content restriction of a more covert, general sort. Wikipedia describes it as: "Shadow banning, also called stealth banning, ghost banning or comment ghosting, is the practice of blocking or partially blocking a user or their content from some areas of an online community in such a way that it will not be readily apparent to the user that they have been banned." 96 In the shadow ban vein, online platforms will downrank content that is found problematic, showing it lower in Newsfeeds or potentially not at all, without ever notifying the individual being penalized (no showing of cause or "good faith"). The concept of punishment without notice is unfathomable; and, yet, the overbreadth of the CDA enables Big Tech to engage in such absurd misconduct free of any civil liability. Giant swaths of permissible speech are restricted and reduced by tech platforms, without any notification, reasoning, or ability to challenge the punishment. This is a substantial use of Section 230's breadth to restrict nearly all online permissible information without the user's knowledge. Such is violative of the Substantial Overbreadth Doctrine, as is every example above.
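For illustration only, the covert downranking mechanic described above can be sketched as follows (a minimal sketch; the field names and demotion factor are hypothetical, and no platform's actual code is alleged):

```python
# Illustrative sketch only. It models the "shadow ban" mechanic described
# above: flagged content is silently downranked, and the author is never
# notified, given a reason, or offered a way to appeal.

def apply_shadow_ban(posts: list[dict], flagged_authors: set[str],
                     demotion: float = 0.05) -> list[dict]:
    for post in posts:
        if post["author"] in flagged_authors:
            # The post is not removed; its rank score is quietly crushed,
            # so it appears far lower in feeds or not at all.
            post["rank_score"] *= demotion
        # Crucially, no notification is ever generated for the author:
        # no notice, no stated cause, nothing to challenge.
    return sorted(posts, key=lambda p: p["rank_score"], reverse=True)

feed = [{"author": "alice", "rank_score": 80.0},
        {"author": "bob", "rank_score": 75.0}]
print(apply_shadow_ban(feed, flagged_authors={"alice"}))
# bob now outranks alice (80.0 -> 4.0), and alice has no way to know why.
```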
257. Examples substantiating the substantiality of overbreadth are, quite seriously, limitless. Should the Court somehow require more examples, we would be happy to oblige.
258. It is plain to even the casual observer that Big Tech is using / hiding behind the overly broad immunity of Section 230 (which has somehow absurdly evolved over the last twenty-six years) to advance personal agendas that benefit social media companies; i.e., agendas that are the antithesis of "Good Samaritanism" and / or the public interest.
96 This article is attached hereto as part of composite Exhibit H and incorporated fully herein by reference.
259. Section 230 (intended to protect and advance public discourse and to protect children from offensive materials) is overly broad (and, thus, unconstitutional) facially and as applied.
260. Social media companies do not have, and never will have, (im)permissible speech figured out. The reason being, self-interested private companies were never supposed to be the government-authorized judge, jury, and executioner of (im)permissible speech.
D. Canons of Statutory Construction Violated by the CDA

261. The CDA also fails upon a statutory construction examination.

1. Absurdity Canon / Harmonious-Reading Canon / Whole-Text Canon / Surplusage Canon

262. Consider this statement – the CDA grants absolute immunity from, for, and in relation to all things online. Consider this related statement – a private corporation, motivated by self-interest, can voluntarily engage in any editorial conduct it considers beneficial, without fear of civil liability, so long as the misconduct swirls about the magical ether of the Internet. In the words of John McEnroe to a Wimbledon chair umpire in 1981, "You cannot be serious?!" Alas, this is serious.
263. As the DOJ has aptly stated:
    Platforms no longer function as simple forums for posting third-party content, but
    instead use sophisticated algorithms to promote content (i.e., to develop third-party
    information) and connect users… . [C]ourts have interpreted the scope of Section
    230 immunity very broadly, diverging from its original purpose. This expansive
    statutory interpretation, combined with technological developments, has reduced
    the incentives of online platforms to address illicit activity on their services and, at
    the same time, left them free to moderate lawful content without transparency or
    accountability.
    Ex. E.
264. As Justice Thomas aptly stated in Malwarebytes:
The decisions [e.g., Zeran, the Facebook Lawsuit, et cetera] that broadly interpret
    § 230(c)(1) to protect traditional publisher functions also eviscerated the narrower
    liability shield Congress included in the statute [§ 230(c)(2)]. [97] Section
    230(c)(2)(A) encourages companies to create content guidelines [i.e., fill in the
    details] and protects those companies that ‘in good faith . . . restrict access to or
    availability of material that the provider or user considers to be obscene, lewd,
    lascivious, filthy, excessively violent, harassing, or otherwise objectionable.’”
    Taken together [98] both provisions in § 230(c) most naturally [99] read to protect
    companies when they unknowingly decline to exercise editorial functions to edit or
    remove third-party content [i.e., omit action], § 230(c)(1), and when they decide to
    exercise those editorial functions in good faith [i.e., ‘any action voluntarily taken,’
    § 230(c)(2)], § 230(c)(2)(A).
    Ex. C, Malwarebytes, 141 S.Ct. at 16-17 (emphasis in original).
265. Justice Thomas further aptly stated:
    But by construing § 230(c)(1) to protect any decision to edit or remove content [i.e.,
a voluntary action], Barnes v. Yahoo!, Inc., 570 F. 3d 1096, 1105 (CA9 2009),
courts have curtailed the limits Congress placed on decisions to remove content,
see e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029, *3 (MD Fla.,
Feb. 8, 2017) (rejecting the interpretation that §230(c)(1) protects removal
decisions because it would 'swallo[w] the more specific immunity in (c)(2)') [i.e.,
§ 230(c)(1) renders § 230(c)(2) mere surplusage]. With no limits on an Internet
    company’s discretion to take down material, § 230 now apparently protects
    companies who racially discriminate in removing content [which is absurd]. Sikhs
    for Justice, Inc. v. Facebook, Inc., 697 Fed. Appx. 526 (CA9 2017), aff ’g 144
    F. Supp. 3d 1088, 1094 (ND Cal. 2015) (concluding that ‘any activity that can be
    boiled down to deciding whether to exclude material that third parties seek to post
    online is perforce immune’ under §230(c)(1)).
    Ex. C, id. at 17 (emphasis in original).
266. Under this mistaken application whereby Section 230(c)(1) purportedly protects "any decision to edit or remove content," online platforms can (without consequence) promote, advance, sponsor, boost, suggest, and / or develop (even in part or by proxy) a host of unlawful activities; e.g., child sexual exploitation, illicit drug sales, cyberstalking, human-trafficking, terrorism, harassment, pirating, impersonation, discrimination, Internet advertising scams, reckless driving, or even the facilitation of child suicide (such as with www.sanctionsuicide.com).

97 As for the Surplusage Canon (discussed in greater detail below), see n. 16, supra, and Ex. G at 2.
98 As for the Whole-Text Canon (discussed in greater detail below), see n. 15, supra, and Ex. G at 2.
99 As for the Harmonious-Reading Canon (discussed in greater detail below), see n. 17, supra, and Ex. G at 2.
267. Continuing with illustration of the absurdity at play, online platforms are free (under the currently unbridled, facially and as applied, CDA) to restrict anyone or anything for any reason or any motive that they consider "objectionable;" e.g., undesirable users, gold star parents, politicians / elected officials, inconvenient factoids, differing opinions (i.e., "wrong" thoughts), et cetera. Section 230 has even left Internet companies free to restrict their competition, clear of any concern whatsoever of civil liability; see, e.g., the Facebook Lawsuit.
268. As a result of Section 230(c)(1)'s unlimited editorial authority (the wacky result of Zeran and subsequent decisions; e.g., the Facebook Lawsuit), online platforms (like Facebook, Google, Twitter, et cetera) are able to, for example, institute a preferential caste system (e.g., the blue checkmark / "high-quality" / "trustworthy" / "authentic sources"), restrict one's ability to run one's business (i.e., reducing visibility / arbitrary bans) and / or make money online (i.e., demonetization / account cancellation) (e.g., the Facebook Lawsuit), predetermine someone's guilt (e.g., Kyle Rittenhouse), 100 and arbitrarily penalize lesser-valued (i.e., organic) users while allowing higher-valued customers (e.g., advertisers) to be exempted from the rules (i.e., lack of uniform enforcement) (e.g., the Facebook Lawsuit).

100 Neither Fyk nor undersigned counsel take any position whatsoever on the result of Mr. Rittenhouse's trial – proper or improper. The point herein is that online providers quashed public participation based on preferred opinions by banning participation or speech deemed less worthy or palatable by unknown (and likely foreign) decision-makers.
269. The text of Section 230 has rarely (if ever, other than perhaps Justice Thomas' Malwarebytes and Doe Statements, see Ex. C) been considered as a whole, implicating the Whole-Text Canon generally. The CDA's individual provisions have been interpreted in isolated ways (all six ways to Sunday, with hardly any rhyme or reason, over the last twenty-six years) that are
absurd (Absurdity Canon), duplicative (Surplusage Canon), and certainly not compatible with the rest of the statute (Harmonious-Reading Canon / Irreconcilability Canon). 101 This has accordingly led to contradicting (and irreconcilable) court decisions, such as the polar-opposite (even if just viewed through the anti-competitive animus lens) paths / fates of the Facebook Lawsuit and Enigma. It is well past time for a declaration as to the CDA's unviability, which has led to so many conflicting decisions, associated legal chaos, and associated inconsistent case results ((in)justices) over more than two and a half decades; hence, this challenge.
270. When read accurately (still not the case in the Facebook Lawsuit; hence, the current / second appeal lodged by Fyk before the Ninth Circuit), Section 230 does not immunize anti-competitive conduct. In Enigma Software Group USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040 (9th Cir. 2019), cert. denied, Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 141 S.Ct. 13 (2020) (with Justice Thomas providing a spot-on detailed statement, referenced several times throughout this filing), the Ninth Circuit panel did not limit its examination to Subsection 230(c)(2), although the factual background of that case was seemingly of a 230(c)(2) ilk; instead, the panel considered the whole text of the statute, with a focus on the "Good Samaritan" intelligible principle / general directive / general provision articulated in Section 230(c) as a whole, in denying immunization of anti-competitive conduct. Simply put, self-motivated anti-competitive blocking decisions cannot harmoniously (enter, again, the Harmonious-Reading Canon) be the actions of a "Good Samaritan." Courts have rarely ever given effect (i.e., given meaning) to the "Good Samaritan" general provision (i.e., the general motivation) of the statute, which, again, is the intelligible principle underlying or overarching the above-discussed Non-Delegation Doctrine, which links to the above-discussed Major Questions Doctrine.
101 Again, for a nice generalized understanding of various canons at issue here, there is Exhibit G attached hereto and incorporated fully herein by reference.
271. "[W]e are advised by the Supreme Court that we must give meaning to all statutory
    terms, avoiding redundancy or duplication wherever possible.” Fair Housing Council of San
    Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1168 (9th Cir. 2008) (citing Park ‘N
    Fly, Inc. v. Dollar Park & Fly, Inc., 469 U.S. 189, 197 (1985)). Translated – we are to respect
    things like the Whole-Text Canon, the Harmonious-Reading Canon, the Surplusage Canon, the
    Irreconcilability Canon, the Absurdity Canon, et cetera.
272. Again, a proper application of Section 230 started going down the proverbial tubes
    (although the CDA was destined to go down the tubes from the get-go for the myriad reasons
    discussed throughout this filing) as early as 1997 in Zeran (Fourth Circuit Court):
    Courts have discarded the longstanding distinction between ‘publisher’ liability and
    ‘distributor’ liability. Although the text of § 230(c)(1) grants immunity only from
    ‘publisher’ or ‘speaker’ liability, the first appellate court to consider the statute held
    that it eliminates distributor liability too – that is, § 230 confers immunity even
    when a company distributes content that it knows is illegal. Zeran v. America
    Online, Inc., 129 F. 3d 327, 331–334 (CA4 1997).
    Ex. C, Malwarebytes, 141 S.Ct. at 15 (emphasis in original).
273. If Section 230(c)(1) eliminates all liability (as the Zeran court incorrectly
    determined and the California courts have thus far absurdly determined in the Facebook Lawsuit),
    it would swallow the purpose of the very next subsection (Section 230(c)(2), which governs
    removal of content either directly by the ICS as to 230(c)(2)(A) or indirectly by the ICS as to
    230(c)(2)(B)); i.e., Section 230(c)(1) is disharmonious to Section 230(c)(2) and renders Section
    230(c)(2) mere surplusage under the current judicial misinterpretation / misapplication of Section
    230(c)(1), violative of the Harmonious-Reading Canon and / or the Surplusage Canon.
274. As Justice Thomas correctly stated:
    … Congress expressly imposed distributor liability in the very same Act that
    included § 230. Section 502 of the Communications Decency Act makes it a crime
    to ‘knowingly . . . display’ obscene material to children, even if a third party created
    that content. 110 Stat. 133–134 (codified at 47 U.S.C. § 223(d)). This section is
    enforceable by civil remedy [as it should be]. 47 U.S.C. § 207. It is odd to hold, as
    courts have, that Congress implicitly eliminated distributor liability [i.e., §
    230(c)(1) eliminates all publisher and distributor liability] in the very Act in which
    Congress explicitly imposed it.
Ex. C, Malwarebytes, 141 S.Ct. at 15. Justice Thomas is correct: it is "odd" (i.e., disharmonious) that Congress would impose distributor liability while also eliminating distributor liability in the very same statute. Once again, Section 230 falls flat on its face under a canon-of-statutory-construction (the Harmonious-Reading Canon) examination.
275. Continuing with Justice Thomas' Malwarebytes Statement:
    Traditionally, laws governing illegal content distinguished between publishers or
    speakers (like newspapers) and distributors (like newsstands and libraries).
    Publishers or speakers were subjected to a higher standard because they exercised
    editorial control. They could be strictly liable for transmitting illegal content. But
    distributors were different. They acted as a mere conduit without exercising
    editorial control, and they often transmitted far more content than they could be
    expected to review. Distributors were thus liable only when they knew [i.e.,
    exercised editorial control] (or constructively knew) that content was illegal. See,
    e.g., Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710, *3 (Sup.
    Ct. NY, May 24, 1995); Restatement (Second) of Torts § 581 (1976); cf. Smith v.
    California, 361 U. S. 147, 153 (1959) (applying a similar principle outside the
    defamation context).
Ex. C, Malwarebytes, 141 S.Ct. at 14. It is reasonable to conclude that the delineation (in regard to Section 230 immunity) is not whether the online provider (like an ICS, like a Facebook, Google, Twitter, et cetera) is a publisher or distributor; but, rather, whether the online provider exercised editorial control (i.e., took action).
276. Continuing with our canons analysis grounded within Justice Thomas' appropriate framework:
    The year before Congress enacted § 230, one court blurred this distinction [between
    publisher and distributor]. An early Internet company [Stratton Oakmont] was sued
    for failing to take down defamatory content posted by an unidentified commenter
    on a message board. The company contended that it merely distributed the
    defamatory statement. But the company had also held itself out as a family-friendly
    service provider that moderated and took down offensive content. The court
    determined that the company’s decision to exercise editorial control over some
    content ‘render[ed] it a publisher’ even for content it merely distributed. Stratton
    Oakmont, 1995 WL 323710, 3-4.
    Taken at face value, § 230(c) alters the Stratton Oakmont rule in two respects. First,
    § 230(c)(1) indicates that an Internet provider does not become the publisher of a
    piece of third-party content [i.e., when the Internet provider exercises editorial
    control subject to the provisions of §230(c)(2)] – and thus subjected to strict liability
    – simply by hosting or distributing that content. Second, § 230(c)(2)(A) provides
an additional degree of [direct] immunity when companies take down or restrict
access to objectionable content, so long as the company acts in good faith. In short,
the statute suggests that if a company unknowingly leaves up illegal third-party
content, it is protected from publisher liability by § 230(c)(1); and if it takes down
certain third-party content in good faith, it is protected by § 230(c)(2)(A).
    Ex. C, Malwarebytes, 141 S.Ct. at 14-15 (emphasis added). Distilled, Section 230(c)(2) provides
    immunity when the online provider takes any action (directly in regards to 230(c)(2)(A) and
    indirectly as to 230(c)(2)(B), see, e.g., n. 22, supra) and Section 230(c)(1) informs courts not to
    treat the online provider (ICS) as the content provider (ICP) when the online provider does not act
    upon the content in question (i.e., fails to remove offensive materials).
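Purely as an interpretive illustration (not a statement of law), the reading of Section 230(c) distilled above can be expressed as a short decision tree; the input names and labels are ours, drawn from Justice Thomas' Malwarebytes Statement:

```python
# Illustrative sketch only: a decision tree encoding the reading of
# § 230(c) distilled above, not an authoritative statement of the law.

def immunity(acted_on_content: bool, restricted_in_good_faith: bool,
             developed_content_in_part: bool) -> str:
    if developed_content_in_part:
        # An entity responsible, in whole or in part, for creation or
        # development is an ICP under § 230(f)(3) and gets no protection.
        return "no immunity: provider is itself an ICP"
    if not acted_on_content:
        # Provider did not exercise editorial control (mere conduit):
        # § 230(c)(1) says it is not treated as "the publisher."
        return "immune under § 230(c)(1)"
    if restricted_in_good_faith:
        # Provider voluntarily restricted objectionable material in
        # good faith: the narrower § 230(c)(2) shield applies.
        return "immune under § 230(c)(2)"
    # Editorial action taken, but not in good faith: no shield.
    return "no immunity: action outside the 'Good Samaritan' provision"

print(immunity(False, False, False))  # immune under § 230(c)(1)
print(immunity(True, True, False))    # immune under § 230(c)(2)
print(immunity(True, False, False))   # no immunity
```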
277. "To be sure, recognizing some overlap between publishers and distributors is not unheard of. Sources sometimes use language that arguably blurs the distinction between publishers and distributors. One source respectively refers to them as 'primary publishers' and 'secondary publishers or disseminators,' explaining that distributors can be 'charged with publication.'" Ex.
    C, Malwarebytes, 141 S.Ct. at 15 (citing W. Keeton, D. Dobbs, R. Keeton, & D. Owen, Prosser
    and Keeton on Law of Torts 799, 803 (5th ed. 1984)).
278. Overlap between publishers and distributors does exist and it does blur their
    distinction. Many courts have tried to distinguish between publishers and distributors (the
    “publisher / platform debate”). Under Section 230, an online provider’s liability does not simply
    end at “distributor” and begin with being a “publisher.” An online provider’s liability distinction
    relies wholly on who acted (i.e., who exercised editorial control), how they acted, and under what
    motivation they acted. Any other read or application is the epitome of disharmonious (at
    minimum).
279. An online provider (ICS) can act as both a publisher and a distributor
    simultaneously. In Section 230, there is no delineation made between a publisher and a distributor
    (i.e., a platform) because their functions overlap. Determining whether the online provider can (or
    cannot) be “charged with publication” depends entirely upon their exercise of editorial control
    (i.e., their actions). To better understand how this overlap occurs, we now define the interlaced
    roles (publisher / distributor) an online provider can play. 102
102 The discussion found within the next paragraphs is precisely one of the aspects of CDA immunity that Justice
    Thomas quite recently (as of March 7, 2022) welcomed review of (with Justice Thomas having welcomed the
    SCOTUS’ review of Section 230 immunity more generally by way of his October 13, 2020, Malwarebytes Statement)
    but was unable to vis-à-vis the Doe case because the Doe case presented itself to the SCOTUS in a not yet “final”
    state. Again, this constitutional challenge is being presented to the judiciary as doubtless the welcomed, “appropriate
    case” within which to “address the proper scope of immunity under § 230:”
    This decision exemplifies how courts have interpreted § 230 ‘to confer sweeping immunity on some
    of the largest companies in the world,’ Malwarebytes, Inc. v. Enigma Software Group USA, LLC,
    592 U. S. ––––, ––––, 141 S.Ct. 13, 13, 208 L.Ed.2d 197 (2020) (statement of THOMAS, J.
    respecting denial of certiorari), particularly by employing a ‘capacious conception of what it means
    to treat a website operator as [a] publisher or speaker,’ id., at ––––, 141 S.Ct., at 17 (internal
    quotation marks omitted). Here, the Texas Supreme Court afforded publisher immunity even though
    Facebook allegedly ‘knows its system facilitates human traffickers in identifying and cultivating
    victims,’ but has nonetheless ‘failed to take any reasonable steps to mitigate the use of Facebook by
    human traffickers’ because doing so would cost the company users—and the advertising revenue
    those users generate. …
    It is hard to see why the protection § 230(c)(1) grants publishers against being held strictly liable
    for third parties’ content should protect Facebook from liability for its own ‘acts and omissions.’
    At the very least, before we close the door on such serious charges, ‘we should be certain that is
    what the law demands.’ Malwarebytes, 592 U. S., at ––––, 141 S.Ct. at, 18. As I have explained the
    arguments in favor of broad immunity under § 230 rest largely on ‘policy and purpose,’ not on the
    statute’s plain text. Id., at ––––, 141 S.Ct., at 15. Here, the Texas Supreme Court recognized that
    ‘[t]he United States Supreme Court—or better yet, Congress—may soon resolve the burgeoning
    debate about whether the federal courts have thus far correctly interpreted section 230.’ 625 S.W.3d,
    at 84. Assuming Congress does not step in to clarify § 230’s scope, we should do so in an appropriate
    case.
Ex. C, Doe, 2022 WL 660628, at *1-2 (internal case record / docket entry cites omitted). The answer to the question of "whether the federal courts have thus far correctly interpreted section 230" is a resounding "no;" hence, this constitutional challenge. And the judiciary has to take on this monumentally important constitutional challenge because Congress has "not step[ped] in to clarify § 230's scope" in the CDA's twenty-six-year existence and because citizens of this country (including Fyk) desperately need the law to work correctly immediately … yesterday … a week ago … a year ago … twenty-six years ago. All that has occurred over the last twenty-six years is that Section 230 has become more and more messed up by the minute.
280. Under Section 230(c)(1): (a) A passive distributor (i.e., an inactive host) cannot be
    “charged with publication” (i.e., treated as “the publisher”) when, and if, the online provider fails
    to moderate (i.e., omits editorial control / “unknowingly” distributes / acts as a mere conduit of
    information); (b) An active distributor (i.e., an active host – publisher) can be “charged with
    publication” (i.e., treated as “a” publisher – not to be confused with “the publisher”) when, and if,
    the online provider engages in primary and / or secondary publishing conduct (i.e., exercises any
    editorial control / “knowingly” chooses to distribute or provide information in a secondary
    capacity).
281. Under Section 230(c)(2), an active distributor (i.e., an active host / publisher)
    cannot be “charged with publication” when it acts as “a” secondary publisher when restricting
    offensive content entirely provided by third-parties (i.e., not created or developed, even in part, by
    the online provider), subject to the “Good Samaritan” intelligible principle / general directive /
    general provision of Section 230(c) and the “good faith” provisions of Section 230(c)(2).
282. In reality, however, no delineation exists between the publisher and a distributor
    within the text of Section 230. The only delineation that exists is between “the [primary] publisher”
    (who the online provider cannot be treated as) and “a secondary publisher” (who can be “charged
    with publication” for their actions, excluding the good faith moderation editorial control described
    in Section 230(c)(2)).
283. If Section 230 is applied properly (i.e., in a harmonious fashion, in a non-surplusage fashion, in a reconcilable fashion, in a not absurd fashion, call it whatever), an online provider
could be treated as "a [secondary] publisher" (not "the publisher" / i.e., as another) under Section 230(c)(1) when it knowingly chooses to allow (i.e., knowingly hosts / develops / distributes) unlawful information. This would encourage the online provider to err on the side of caution when engaged in secondary editorial moderation. The online provider would then be liable for content it knowingly allowed (i.e., distributed) while still not being liable for information it failed to moderate. This would increase "the incentives of online platforms to address illicit activity on their services…" and not leave "them free to moderate lawful content without transparency or accountability." See DOJ publication, Ex. E.
284. The analytical framework espoused above ("a modest understanding" of CDA
    immunity or lack thereof) “is a far cry from [the disharmony, absurdity that] has prevailed in court.
    Adopting the too-common practice of reading extra immunity into statutes where it does not
    belong, see Baxter v. Bracey, 590 U. S. ––––, 140 S.Ct. 1862, 207 L.Ed.2d 1069 (2020) … courts
    have relied on policy and purpose arguments to grant sweeping protection to Internet platforms.”
    Ex. C, Malwarebytes, 141 S.Ct. at 15 (citing 1 R. Smolla, Law of Defamation § 4:86, p. 4–380 (2d
    ed. 2019) (“[C]ourts have extended the immunity in § 230 far beyond anything that plausibly could
    have been intended by Congress”) and Rustad & Koenig, Rebooting Cybertort Law, 80 Wash. L.
    Rev. 335, 342–343 (2005) (similar)); Ex. C, Doe, 2022 WL 660628 at *1-2 (see n. 102, supra).
285. "[F]rom the beginning, courts have held that § 230(c)(1) protects the 'exercise of a
    publisher’s traditional editorial functions – such as deciding whether to publish, withdraw,
    postpone or alter content.” E.g., Zeran, 129 F. 3d, at 330 (emphasis added); cf. id., at 332 (stating
    also that § 230(c)(1) protects the decision to ‘edit’).” Id. at 16 (emphasis in original).
286. Neither Section 230(c)(1)'s definitional protection, nor Section 230(c)(2)'s direct
    protection, relates to deciding whether to publish, edit, or alter content or to the creation or
    development of any information, even in part / in any capacity. Deciding whether to publish, edit,
    or alter information are all the editorial actions of an ICP. The online provider (like an ICS, like
    Facebook, Google, Twitter, and et cetera) cannot, in any semblance of a reconcilable fashion, be
    an ICP (i.e., even in a secondary capacity) and still receive CDA protection; and, yet, it happens
    (in a statutory canon repugnant fashion) every day and everywhere in the real world because of
    the exploitation of the statute (e.g., Big Tech’s collecting copious amounts of money to develop
    advertising content in a secondary publishing capacity).
287. Continuing with Justice Thomas' Malwarebytes Statement:
    Courts have [ ] departed from the most natural reading of the text by giving Internet
    companies immunity for their own content (i.e., laundering information content
    provision through the development of third-party content). [103] Section 230(c)(1) [,
    if interpreted / applied in a harmonious way,] protects a company from publisher
    liability only when content is provided entirely by another [ICP]. … Nowhere does
    this provision protect a company that is itself the [ICP].
    Ex. C, id. at 16 (emphasis in original, and citing Fair Housing Council of San Fernando Valley v.
Roommates.Com, LLC, 521 F.3d 1157, 1165 (CA9 2008)). Justice Thomas continued: "[a]nd an
    [ICP] is not just the primary author or creator, it is anyone ‘responsible, in whole or in part, for
    the creation or development’ of the content.” Ex. C, id. (emphasis in original, and citing Section
    230(f)). “Depart[ure] from the most natural reading” implicates, to one degree or another, the
    Harmonious-Reading Canon, the Irreconcilability Canon, the Whole-Text Canon, Surplusage
    Canon and the Absurdity Canon.
103 As an example, Mark Zuckerberg openly admits that Facebook knowingly distributes (through development / advancement) its own content: "We're showing the content on the basis of us believing it is high quality, trustworthy content rather than just ok you followed some publication, and now you're going to get the stream of what they publish." https://about.fb.com/news/2019/04/marks-challenge-mathias-dopfner/ (this video is also cited in n. 79, supra).

288. Implicating the Whole-Text Canon and / or the Harmonious-Reading Canon, Justice Thomas' Malwarebytes Statement continued with this sage discussion:
    [H]ad Congress wanted to eliminate both publisher and distributor liability, it
    could have simply created a categorical immunity in § 230(c)(1): No provider
    ‘shall be held liable’ for information provided by a third party. After all, it used
    that exact categorical language in the very next subsection, which governs removal
    of content. § 230(c)(2). Where Congress uses a particular phrase in one subsection
    and a different phrase in another, we ordinarily presume that the difference is
    meaningful. Russello v. United States, 464 U.S. 16, 23, 104 S.Ct. 296, 78 L.Ed.2d
17 (1983); cf. Doe v. America Online, Inc., 783 So.2d 1010, 1025 (Fla. 2001)
(Lewis, J., dissenting) (relying on the rule to reject the interpretation that § 230
eliminated distributor liability).
Ex. C, id. at 16 (emphasis added because Congress created no such "categorical immunity" in the real world). And "[w]here Congress uses a particular phrase in one subsection and a different phrase in another … [and the court is to] ordinarily presume that the difference is meaningful" implicates, to one degree or another, the Harmonious-Reading Canon, the Irreconcilability Canon, the Whole-Text Canon, the Surplusage Canon, and perhaps even the Absurdity Canon.
289. The definition of an ICP in Section 230(f)(3) reads: "The term 'information content provider' means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service." Id. Given the current state of the jurisprudence that Section 230(c)(1) protects any editorial conduct, Section 230(c)(1) could be read as "[n]o provider or user of an ICS shall be held liable for any editorial conduct (i.e., treated as 'a' publisher – themselves) of any information provided by another ICP." To morph the statutory "the publisher" (immunized in some contexts) language into "a publisher" cuts a far too overbroad immunity swath for Big Tech and is, put a bit more harshly (but appropriately), absurd. This runs afoul of the Substantial Overbreadth Doctrine and the Absurdity Canon.
290. Said differently, an ICP is any entity responsible, in whole or in part, for creating
    or “deciding whether to publish, edit or alter information” (i.e., development) even if the
    information is provided by a third-party. If the Zeran decision was correct that Section 230(c)(1)
    protects all “traditional editorial function” and the Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003)
    decision was also correct that Section 230(c)(2) protects development (even in part), then Section
    230(c)(1) and / or Section 230(c)(2) defeat / swallow the purpose of defining an ICP because both
    subsections would protect information content provision. This contravenes the Surplusage Canon.
    Section 230(f)(3)’s definition of an ICP would have no purpose if an online provider cannot be
    treated as “a publisher” (i.e., as an ICP) in the general sense; thus, the online provider (ICS, like
    Facebook, Google, Twitter, et cetera) can also be an ICP free of all civil liability for any editorial
    conduct. This current misinterpretation, as Justice Thomas noted, is a “categorical immunity” that
    no reasonable person would approve of; i.e., does not survive the Absurdity Canon, for example.
291. Section 230(c)(1) cannot plausibly (i.e., reconcilably) protect all "traditional editorial function" because information content provision and content restriction (the purpose of Section 230(c)(2)) are both editorial functions. "The decisions that broadly interpret § 230(c)(1) [e.g., Zeran, the Facebook Lawsuit] to protect traditional publisher functions also eviscerated the narrower liability shield Congress included in the statute [§ 230(c)(2)(A)]." Ex. C, Malwarebytes,
    141 S.Ct. at 16. The misinterpretation that Section 230(c)(1) protects all editorial control, and the
    fact that such “eviscerates” Section 230(c)(2)’s purpose, renders Section 230(c)(2) superfluous;
    i.e., mere surplusage. This cannot survive under the Surplusage Canon.
292. In the Facebook Lawsuit, the Ninth Circuit Court (thus far) wrongly shrugged off
    Fyk’s valid surplusage points, among many other valid points, and, in so doing, put a very bizarre
    spin on the supposed interaction between Section 230(c)(1) and Section 230(c)(2), stating:
    We reject Fyk’s argument that granting § 230(c)(1) immunity to Facebook renders
§ 230(c)(2)(A) mere surplusage. As we have explained, § 230(c)(2)(A) 'provides an
    additional shield from liability.’ Barnes, 570 F.3d at 1105 (emphasis added).
    ‘[T]he persons who can take advantage of this liability shield are not merely those
    whom subsection (c)(1) already protects [i.e., all providers and users], but any
    provider of an interactive computer service. Thus, even those who cannot take
    advantage of subsection (c)(1), perhaps because they developed, even in part, the
    content at issue can take advantage of subsection (c)(2).’ Id.
    Fyk v. Facebook, Inc., 808 Fed.Appx. 597, 598 (9th Cir. 2020) (emphasis in original).
293. In the Facebook Lawsuit thus far, the Ninth Circuit Court has done nothing to resolve the statutory conflict raised in Fyk vs. Facebook; rather, the Ninth Circuit Court has only emphasized non-textual arguments when interpreting Section 230, leaving yet more questionable precedent in the CDA wake. In the Facebook Lawsuit, the Ninth Circuit's "additional shield from liability" (i.e., additional to § 230(c)(1)) is "develop[ment], even in part," which does not exist anywhere within the text of Section 230(c)(2)(A). By including development in part in the protections of Section 230(c)(2)(A), the Ninth Circuit "eviscerated" the definition of an ICP by creating another disharmonious conflict with Section 230(f)(3). Section 230(c)(2)(A) would, therefore, protect information content provision, even though Section 230(c)(2)(A)'s specifically articulated purpose is to exclusively allow an online provider or user the ability to "restrict access to or availability of material." And again, if (as courts, including the courts in the Facebook Lawsuit thus far, wrongly believe) all traditional editorial function is perforce immune under Section 230(c)(1), and "development, even in part" is, of course, an editorial function, then information content provision is already immune under Section 230(c)(1) and Section 230(c)(2)(A) is by no means something "additional," let alone an "additional shield from liability." Not only is this Ninth Circuit view in the Facebook Lawsuit violative of the Surplusage Canon, but this view renders Section 230(c)(1) disharmonious with Section 230(c)(2)(A) and with Section 230(f)(3), which creates an irreconcilable conflict across the various CDA subsections.
294. Further in the Section 230(f)(3) vein:
    Only later did courts wrestle with the language in § 230(f)(3) suggesting providers
    are liable for content they help develop ‘in part.’ To harmonize [§ 230(c)(2)(A),
    protecting ‘development, even in part’] with the interpretation that § 230(c)(1)
    protects ‘traditional editorial functions,’ courts relied on policy arguments to
    narrowly construe § 230(f)(3) to cover only substantial or material edits and
    additions. E.g., Batzel v. Smith, 333 F. 3d 1018, 1031, and n. 18 (CA9 2003) (‘[A]
    central purpose of the Act was to protect from liability service providers and users
    who take some affirmative steps to edit the material posted’).
Ex. C, Malwarebytes, 141 S.Ct. at 16. The central purpose of the CDA was the "blocking and screening of offensive material;" i.e., the purpose of the CDA was to restrict materials, not to "edit" (i.e., modify) or develop them. Justice Thomas himself uses "harmonize" in discussing that the CDA is anything but harmonious; i.e., it is violative of the Harmonious-Reading Canon.
295. In the disharmony and absurdity analysis, Justice Thomas' Malwarebytes Statement continues:
    Under this [mis]interpretation, a company can solicit thousands of potentially
    defamatory statements, [104] ‘selec[t] and edi[t] . . . for publication’ several of those
    statements, add commentary, and then feature the final product prominently over
    other submissions – all while enjoying immunity. Jones v. Dirty World
    Entertainment Recordings LLC, 755 F. 3d 398, 403, 410, 416 (CA6 2014)
    (interpreting ‘development’ narrowly to ‘preserv[e] the broad immunity th[at §
    230] provides for website operators’ exercise of traditional publisher functions’).
    To say that editing a statement and adding commentary in this context does not
    ‘creat[e] or develo[p]’ the final product, even in part, is dubious.
Ex. C, Malwarebytes, 141 S.Ct. at 16 (emphasis added). Picking / choosing / allowing / selecting is not a harmonious, or in any sort of way reconcilable, way to read the CDA. And "dubious" might as well mean "absurd," implicating the Harmonious-Reading Canon, the Irreconcilability Canon, and the Absurdity Canon.

104 Or solicit a new owner of the materials in the case of Fyk vs. Facebook. See, e.g., Ex. B.
296. The Batzel court indicated the development of information that transforms one into
    an ICP is “something more substantial than merely editing portions of an email and selecting
    material for publication.” Batzel v. Smith, 333 F.3d 1018, 1031 (9th Cir. 2003). If an online
    provider can “select and edit materials for publication,” it is responsible, at least in part in a
secondary divisible capacity, for the development of that information. This has left many courts (and everybody else, including Fyk) scratching their heads as to where the arbitrary line exists between insignificant development protected by Section 230(c)(1) and Section 230(c)(2) and significant development not protected by those sections (implicating, at the very least, the Irreconcilability Canon).
297. In the Facebook Lawsuit, the Ninth Circuit Court in its first go-round (again, the
    Facebook Lawsuit is presently pending in the Ninth Circuit Court for a second time) defined this
    arbitrary development line in this way: “a website may lose immunity under the CDA by making
    a material contribution to the creation or development of content.” Fyk, 808 Fed.Appx. at 598
    (citing Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269 (9th Cir. 2016) and Fair Housing, 521 F.3d at
    1166). The Ninth Circuit Court in the first Facebook Lawsuit go-round further stated:
    Fyk, however, does not identify how Facebook materially contributed to the content
    of the pages. He concedes that the pages were the same after Facebook permitted
    their re-publication as when he created and owned them. We have made clear that
    republishing or disseminating third party content ‘in essentially the same format’
    ‘does not equal creation or development of content.’ Kimzey, 836 F.3d at 1270,
    1271.
Id. It is important to note that "re-publishing" is the act of knowingly distributing third-party content, while disseminating may involve no action at all in distributing.
298. A "material contribution" applies to a divisible injury. A material contribution to
    the information provided would accordingly be any divisible alteration (i.e., primary creation or
    secondary development) of the information, even in part. In the Facebook Lawsuit, Facebook’s
    actions (as “a publisher”) were divisible from Fyk’s actions (as “the publisher”) and / or from
    Fyk’s competitor’s actions, but the California courts involved in the Facebook Lawsuit have (so
    far) made the erroneous determination that Facebook’s divisible involvement in the development
    of Fyk’s information did not meet the arbitrary (and imaginary, for that matter) “material”
    development “line.”
    299.
In the Facebook Lawsuit, Facebook made Fyk’s property unavailable while it was under
Fyk’s ownership (i.e., a divisible harmful action), solicited a new high-valued owner
(i.e., a divisible anti-competitive development action), and then made Fyk’s content
available again (i.e., actively altering the availability and value of Fyk’s material) for Fyk’s
competitor (i.e., a divisible anti-competitive development action).
    300.
With no limits on its liability protection and a narrow interpretation of “development,” Section
230(c)(1) is the functional equivalent of “sovereign immunity.” In the present broken CDA
    landscape, an online provider can do anything to anyone for any reason, without exposure to civil
    liability. As noted above, under the Absurdity Canon, “a provision may be either disregarded or
    judicially corrected as an error (when the correction is textually simple) if failing to do so would
    result in a disposition that no reasonable person could approve.” Ex. G. Section 230(c)(1) could
    conceptually be judicially corrected (Paragraph 4, supra) by, for example, giving the word “the”
    effect (thus aligning it with its most harmonious interpretation); but, given the overall disaster that
    is the CDA (and considering Section 230(c)(2)(A) is not realistically fixable), we submit that
    “disregard[ing]” the CDA via eradication is the proper course.
    301.
Under the Surplusage Canon (see Ex. G at 2), every word and every provision is
to be given effect (verba cum effectu sunt accipienda). None should be ignored. None should
    needlessly be given an interpretation that causes it to duplicate another provision or to have no
    consequence. Section 230(c)(2) should not be ignored, should not be duplicative, and every word
    should be given effect. The difference between Section 230(c)(1) and Section 230(c)(2) must be
    meaningful because a statute is to be read as a whole-text (i.e., taken together) and “we must give
    meaning to all statutory terms, avoiding redundancy or duplication wherever possible.” The
current lay of the CDA land does not give every word in Section 230 effect – far from it (in
“evisceration” fashion, to borrow the apt word of Justice Thomas).
    302.
    The Fourth Circuit Court’s Zeran decision and the Ninth Circuit Court’s Fyk
    decision (so far), just as a couple examples, were / are infected (taking such decisions beyond the
    point of viability) by the all-too-common practice of reading extra immunity into a statute – those
    courts failed to read Section 230 as a whole-text and give meaning to all statutory terms. Section
230(c)(1) is a “definitional” protection (i.e., a directive). See Ex. C, Malwarebytes, 141 S.Ct. at
14. Section 230(c)(1) instructs courts on how to treat an online provider that serves as a mere
bulletin board for content; i.e., one that does nothing to the content – that fails to moderate. Section 230(c)(2),
    on the other hand, provides immunity from civil liability for acting as a publisher or speaker, so
    long as such action is that of a “Good Samaritan” and in “good faith.”
    303.
    To unravel the Section 230 Gordian knot, we must give meaning to every term. The
    word “the” may seem like an insignificant statutory term, but it has a dramatic impact on the proper
    interpretation and application of Section 230(c)(1). The word “the” serves to define the
“meaningful” distinction between Section 230(c)(1) and Section 230(c)(2). We submit that this
singular term “the” (which has not been afforded adequate effect, with some courts even misquoting
the “the” of the statute as “a”) is the origin of the courts’ misinterpretation of Section 230 and of
the absurd unlimited liability protection read into Section 230(c)(1).
    304.
    James Madison once argued that the most important word in “The Right To Free
Speech” is the word “the” because it denotes that “the right” preexisted any potential abridgement.
    Section 230(c)(1) specifically reads, “Treatment of Publisher or Speaker: No provider or user of
    an interactive computer service shall be treated as the publisher or speaker of any information
    provided by another information content provider.” Id. (emphasis added). If we give proper effect
    to the word “the,” the distinction between Section 230(c)(1) and Section 230(c)(2) becomes
    meaningful. “The publisher or speaker” denotes the preexisting publisher or speaker. An online
    provider (ICS, as in Facebook, Google, Twitter, et cetera) cannot be treated as “the” (i.e., the
    original) publisher or speaker who entirely provided the information (i.e., which in the Facebook
    Lawsuit, Fyk was “the publisher”). The online provider can, however, be treated as “a publisher”
    (i.e., treated as itself, which, in the Facebook Lawsuit, Facebook was “a publisher” in a secondary
capacity). When the proper effect is given to the word “the,” Section 230(c)(2) would afford a
separate liability protection for certain active “good faith” publisher conduct, and Section
230(c)(1) would maintain its definitional liability protection pursuant to the treatment of publisher
or speaker. The difference between Section 230(c)(1) and Section 230(c)(2) becomes meaningful
(i.e., harmonious).
2. The Irreconcilability Canon
305.
Private companies have the First Amendment right to “voluntarily” allow or
disallow any information on their private platforms and can “create, develop, restrict, edit, alter or
modify” any information they want within their discretion; but a private company’s having the
“right” to do something does not mean the private company would not be subject to liability for
its own decisions (i.e., conduct). The company’s conduct is its prerogative, but the company is
subject to civil (and potentially criminal) liability for its own conduct. See n. 47, supra.
    306.
A lot of the confusion surrounding Section 230 protection resides within the
quandary of whether an Internet company is engaged in a voluntary “private” editorial function or
is engaged in an involuntary (i.e., state-induced / mandated) obligatory
    (i.e., government) function. In Carter, Justice Sutherland stated that there is a difference between
    a private activity and a governmental function:
    The difference between producing coal [operating an interactive computer and
    advertising service] and regulating [restricting] its production [materials] is, of
    course, fundamental. The former is a private activity; the latter is necessarily a
    governmental function, since, in the very nature of things, one person may not be
    [e]ntrusted with the power to regulate the business of another, and especially of a
    competitor.
    Id. at 311 (emphasis added) (citing, inter alia, A.L.A. Schechter Poultry Corp. v. U.S., 295 U.S.
    495, 537 (1935)).
    307.
Operating an ICS is a private function (accompanied by First Amendment rights),
but the power (i.e., Section 230 regulatory authority) to regulate the business or personal affairs of
another is necessarily a governmental function. A private entity cannot be entrusted with the power
to regulate the business of another, as was the case in the Facebook Lawsuit, in which Facebook
regulated its own competitor, Fyk. Most, if not all, “Community Standards” are not
enforced (i.e., prosecuted) uniformly or in the interest of the general public. A private entity has
the First Amendment right to arbitrarily take actions against another if those actions are entirely
voluntary; but, if its actions are somehow induced or directed by governmental obligation (which
is absolutely the case for all tech companies functioning within CDA protections), the entity is no
longer acting entirely voluntarily as a private entity; it is functioning as a governmental agent /
state-authorized actor regulating the speech, liberty, and property of others and destroying lives,
like Fyk’s, in the process. “Private actions” to regulate speech are constitutionally protected, while
“state actions” to regulate the speech, liberty, and property of others are (for the most part)
constitutionally prohibited.
    308.
    The answer to whether an online provider is acting as a private entity or as a state
    actor resides in the definition and placement of a singular word – “voluntarily.” In order for the
    “action taken” to be a constitutionally protected act, the action must be taken entirely “voluntarily”
    (i.e., not induced by government obligation). Section 230(c)(2)(A) reads, in pertinent part, as
    follows: “any action voluntarily taken in good faith to restrict access to or availability of material
    that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent,
    harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
    Id. (emphasis added). The statute does not read as “any voluntary action taken… .”
    309.
    Webster’s Dictionary defines the word “voluntary” as follows: “done by design or
    intention; acting or done of one’s own free will without valuable consideration or legal
    obligation.” 105 Simply stated, any action taken by a private entity, in its entirety, must be done
    “without valuable consideration or legal obligation” (i.e., not for protection or under inducement
    or directive) for the action to be considered a “voluntary” private act.
    310.
    If a provider or user takes any action “voluntarily” (i.e., as a private actor under no
government obligation or for any immunity consideration), it cannot legally seek statutory
    “protection” (i.e., the immunity consideration) because if a provider or user seeks the “protection,”
    it must have taken its action under the legal obligation (i.e., the directive of government – state
action to block and screen offensive material in “good faith” as a “Good Samaritan”). The line
between entirely voluntary private activity and obligatory governmental function is blurred
within the CDA. The placement of the word “voluntarily,” however, serves to define the line
    between what acts are private and what acts are state activity.
    311.
    Section 230(c)(2)(A) reads, in pertinent part, as follows: “any action voluntarily
taken to restrict … material.” It does not read: “any voluntary action taken to restrict … material.”
    105
Merriam-Webster Dictionary, Voluntary, https://www.merriam-webster.com/dictionary/voluntary. See Ex. I
(emphasis added).
    In order to seek protection consideration (i.e., immunity), any action taken by an ICS must be done
    in order to restrict materials in “good faith” as a “Good Samaritan” within the confines of Section
    230(c)(2)(A)’s state directive. The choice of whether or not to engage in the state activity is
voluntary (a private act). Had the statute read “any voluntary action taken to restrict …
materials,” the activities protected would not be mandated ones; the statute would instead capture
any act the private entity chose to engage in. Put differently, Section 230(c)(2)(A) is a private entity’s voluntary
    choice to engage in state activity (i.e., act under the state directive / obligation; e.g., to restrict
    access to or availability of obscene, lewd, lascivious, filthy, excessively violent, harassing, or
    otherwise objectionable materials) if it seeks statutory protection (i.e., immunity / consideration).
    312.
The term “voluntarily” creates an irreconcilability within Section 230(c)(2)(A). The
actions taken must be voluntary private actions; but, at the same time, the actions must also follow
the state directive, and any action taken by an ICS within the statutory framework (which, again,
is not voluntary but obligatory state-delegated action) can never be
classified as private activity. Put differently, and to be abundantly clear, none of the actions taken
    by an ICS under the protection and provisions of Section 230(c)(2)(A) (i.e., to restrict materials at
    the directive of state) can ever be entirely voluntary (i.e., only the choice of whether to engage in
    state activity is voluntary) because, if the ICS voluntarily takes any action to restrict materials
under Section 230(c)(2)(A) and seeks Section 230(c)(2)(A)’s protection, the private entity must have
    acted at the directive of state (i.e., to restrict access to or availability of obscene, lewd, lascivious,
    filthy, excessively violent, harassing, or otherwise objectionable material in “good faith” as a
    “Good Samaritan”).
    313.
The statutory term “voluntarily” (i.e., interpreted as a private activity, not as a
private choice to engage in state activity) is irreconcilable with its own obligatory (i.e., mandated)
    governmental function to block and screen obscene, lewd, lascivious, filthy, excessively violent,
    harassing, or otherwise objectionable material. A private entity simply cannot (at least not in a
    reconcilable fashion) act entirely “voluntarily” while simultaneously acting under obligation or for
    consideration (i.e., under governmental directive or for civil liability protection). Section
    230(c)(2)(A) is irreconcilable with its own statutory use / directive and, as a result, Section
    230(c)(2)(A) (at minimum) must be struck. 106
    314.
    In addition to the “voluntarily” quandary is the quandary surrounding the
    understanding, scope, and application of the phrase “development, even in part.”
    315.
    There are two active roles and one passive role played by an ICS that are protected
    from liability under Section 230: (a) an inactive entity – passively hosting information, which is
    the purpose of Section 230(c)(1) protection; (b) an active entity – actively restricting offensive
    information, voluntarily following the directive of state, which is the purpose of Section
    230(c)(2)(A) protection; or (c) an active entity – actively providing the tools necessary to a third-
    party to restrict information themselves, but the ICS is passive in relation to hosting the information
    of another, which is the purpose of Section 230(c)(2)(B).
    316.
    There are two active roles played by an ICP that are not protected from liability, in
    any way, under Section 230: (a) an active entity, responsible for (i.e., liable for) creating, in whole
    or in part, information provided online; or (b) an active entity, responsible for (i.e., liable for)
    developing, in whole or in part, information provided online. “Creation” means to bring
    information into existence, whereas “development” means any divisible manipulation of
    information already in existence.
    106
    As noted above, Section 230(c)(1) could stand in current form if the word “the” was to be given actual effect.
    317.
    An “interactive computer service” (ICS) is defined under Section 230(f)(2) as “any
    information service, system, or access software provider that provides or enables computer access
    by multiple users to a computer server, including specifically a service or system that provides
    access to the Internet and such systems operated or services offered by libraries or educational
    institutions.” Id.
    318.
    And an “information content provider” (ICP) is defined under Section 230(f)(3) as
    “any person or entity that is responsible, in whole or in part, for the creation or development of
    information provided through the Internet or any other interactive computer service.” Id.
    319.
    Section 230(c)(2)(A) specifically provides an ICS with liability protections when
    it, itself, takes certain restrictive actions, whereas Section 230(c)(2)(B) contemplates an ICS
providing the tools to another to restrict materials for themselves. And if the ICS fails to restrict
offensive materials (a Section 230(c)(2)(A) omission), it cannot be treated as the entity or person who
    provided the information because of Section 230(c)(1)’s definitional protection.
    320.
    All of Section 230’s protection provisions fall squarely within protecting
    information restriction actions (i.e., blocking and screening actions). Section 230 does not,
    however, provide any liability protections for any information provision actions. An Information
    Content “Provider” (ICP) is not afforded any liability protections for the creation or development
    of any information in whole or even “in part” (i.e., in any insignificant divisible measure). An
    ICS’ role is to passively host information (i.e., provide only the service, not the information) or
    restrict certain information (in a “good faith” and “Good Samaritan” way). An ICS’ role is not to
    create or develop any information, even in part. As soon as the ICS steps over the line of creation
    or development, even in part, it transforms itself into an ICP and is subject to liability for its own
    material contribution, whether or not the underlying content was originally provided by another.
    321.
Courts have struggled with the proper interpretation and application of the phrase
“development, in part.” The phrase “development, in part” seems relatively intelligible to the
ordinary person – if an entity is responsible for any divisible contribution (i.e., even “in part,” and
no matter how (in)significant) that solicits, sponsors, advances, alters, expounds upon, makes
available (i.e., allows or provides), modifies, manipulates, organizes, or promotes the growth of
information (especially by deliberate effort over time), the entity is, by definition, an information
content provider (ICP) developing information and is accordingly liable for its own provisional
conduct and content (i.e., liable for its own secondary material “development” contribution), even
if that information was initially provided (i.e., created or developed) “by another.”
    322.
    This is where the irreconcilability of the phrase “development, even in part” begins
    to take shape. Under Section 230, an ICS can have absolutely no active role in the provision of
    information (i.e., no creation or development, even in part) in order to maintain civil liability
    protections. An ICS cannot be both an ICS and an ICP at the same time (if CDA immunity is to be
    had) because an entity cannot have both an active development role and also not have an active
    development role simultaneously – such is irreconcilable / impossible.
    323.
    Whenever an ICS “considers” information, it is acting in a traditional editorial role.
    Section 230(c)(2)(A) limits (i.e., under the proper, narrow read of the provision) that editorial role
to the exclusion of materials; but an inherent result (i.e., by proxy) of exclusion consideration is
inclusion consideration (i.e., content provision / development in part).
    324.
    Development, in whole or in part, is the exclusive role of an ICP by Section
    230(f)(3) definition and by the purpose of 230(c)(1); but, the ICS’ role as an information content
    restrictor also allows the ICS (by proxy) to act as an ICP who can “knowingly distribute” (i.e.,
    “allow”) unlawful information. This is at odds with the “Good Samaritan” general provision (i.e.,
intelligible principle) of the statute and creates an irreconcilable conflict between Sections
230(c)(2) and 230(c)(1), and with Section 230(f)(3)’s definition of an ICP.
    325.
    Information “consideration” (i.e., restriction, or allowance / provision by proxy)
gave rise to the mistaken Zeran decision. Any information that is “considered” (i.e., traditional
editorial responsibility) and “allowed” (i.e., knowingly provided / advanced; i.e., developed in part)
by an ICS is development in part and must be subject to civil liability; otherwise, all distribution,
publishing, and information content providing liability is eliminated, including liability for unlawful
distribution and publishing (i.e., knowingly causing harm). The statute cannot be reconciled in a
    way that distinguishes between “development by proxy” (as an inherent result of information
    content consideration – 230(c)(2)(A)) and “development in part” (information content provision –
    in conflict with 230(c)(1) and 230(f)(3)).
    326.
    The definition of an ICP is at odds with (i.e., irreconcilable with) the role of an ICS,
    who is acting as an ICP when “considering information” to restrict because it is also considering
    what to “allow” (i.e., developing information in part – by proxy) under Section 230(c)(2)(A).
Section 230(f)(3)’s ICP definition and its use of the phrase “development, in part” are irreconcilable
with the function of Section 230(c)(2)(A) and the purpose of Section 230(c)(1). Section
230(c)(2)(A) is irreconcilable with its own statutory terms / use / function and, as a result, Section
230(c)(2)(A) of the CDA, at minimum, must be struck.
E. Conclusion
    327.
    As stated at the outset of this constitutional challenge, this Court has the ability to
    strike down laws on the grounds that they are unconstitutional, a power reserved to the courts
    through judicial review. See Marbury v. Madison, 5 U.S. 137, 177 (1803) (“[i]t is emphatically the
    province and duty of the judicial department to say what the law is”).
    328.
In its current state (with some of the problem being facial, some as applied, some a matter
of statutory construction, and some a matter of judicial misinterpretation, as demonstrated above),
the CDA strips United States citizens of their constitutionally protected First Amendment and
Fifth Amendment rights (case in point being the Facebook Lawsuit, primarily implicating
deprivation of Fyk’s Fifth Amendment rights, but also implicating his First Amendment rights).
That untenable end result comes from the CDA being so badly broken in myriad ways (e.g.,
vague, misunderstood, misapplied, overbroad, and unconstitutional). Section 230, on its face
and / or as applied, violates the Non-Delegation Doctrine / Major Questions Doctrine, the
Void-for-Vagueness Doctrine, the Substantial Overbreadth Doctrine, the Harmonious-Reading
Canon, the Irreconcilability Canon, the Whole-Text Canon, the Surplusage Canon, and the
Absurdity Canon. Under any of these legal tenets (again, with the end result being the deprivation
of constitutionally guaranteed rights), and whether considered separately or together, the CDA is
unconstitutional and / or legally untenable and is due to be struck. And this is precisely the
declaratory judgment that Fyk respectfully requests from this Court – striking all of the CDA as
unconstitutional and / or legally untenable (in the primary) or striking a portion of the CDA as
unconstitutional (in the alternative).
    329.
Alternatively, the Court has the power and obligation to rein in Section 230 by
conforming the application of Section 230 narrowly to legislative intent, to
constitutional tenets / mandates, and to the actual language of Section 230. If there were ever a
way to fix the CDA in the alternative, given the CDA’s presently broken condition, the “fix”
would necessarily have to involve the following immunity analysis (there is simply no other way
in which the CDA could work; any underlying immunity analysis other than the following fails
for one or more of the various reasons discussed throughout this constitutional challenge):
    (a)
    The first logical point of Section 230(c) immunity analysis is the intelligible
    principle / general directive / general provision found in the very title of 230(c) – “Good
    Samaritan[ism].” If an ICS (e.g., Facebook, Twitter, Google, YouTube) is not acting as a “Good
    Samaritan” (for example, one cannot be an anti-competitor and a “Good Samaritan” at the same
    time, that is prima facie oxymoronic, as observed in Enigma), then the Section 230(c) immunity
    analysis stops there; i.e., does not proceed to the subsections of Section 230(c). If the “Good
    Samaritan” threshold is cleared, then the immunity analysis of 230(c)’s subsections necessarily
    begins to unfold as follows.
    (b)
    Section 230(c)(1) immunizes a passive (inactive) ICS / provider / host / platform
    when the ICS takes no action with respect to the content provided by another ICP / user; i.e., the
    provider or user is not treated as another publisher for the conduct of, or the content (exclusively)
    provided by, another. It makes perfect sense that where there is no harm inflicted by the ICS
    because there was no action taken by the ICS as to another ICP’s content, there is no foul that can
be called against the ICS. 107 Section 230(c)(1) could remain intact if clarified and narrowed down
    to its proper interpretation and application, namely proper effect being given to the actual statutory
    language (“the publisher”) rather than the make-believe statutory language (“a publisher”) that has
    somehow come about.
    107
    Originating with Barnes, courts have often applied a three-part test to determine Section 230(c)(1) immunity (or
    not) at the threshold. Unfortunately, this three-part Section 230(c)(1) immunity test lacks critical elements and converts
    “the publisher” (the actual language of Section 230(c)(1)) to “a publisher,” which creates the irreconcilable conflict
    between Section 230(c)(1) and Section 230(c)(2) and otherwise renders Section 230(c)(2)(A) mere surplusage to
    Section 230(c)(1). The incorrect Barnes three-part Section 230(c)(1) test goes as follows – Section 230(c)(1) immunity
    from liability exists for (a) a provider or user of an interactive computer service, (b) whom a plaintiff seeks to treat,
    under a state law cause of action, as “a” publisher or speaker, (c) of information provided by another information
    content provider. The correct Section 230(c)(1) test would actually be four parts and would go like this – Section
    230(c)(1) immunity from liability exists for (a) a “Good Samaritan,” (b) who is a provider or user of an interactive
    computer service, (c) whom a plaintiff seeks to treat, under a state law cause of action, as “the” publisher or speaker,
    (d) of information provided exclusively by another information content provider.
    (c)
    Section 230(c)(2)(A) must be struck. Under no circumstance can Section
    230(c)(2)(A) constitutionally delegate regulatory authority (i.e., governmental obligation for
consideration) to self-interested private entities (acting as agents of government) to deny United
States citizens their Constitutional rights to free speech and due process. Section 230(c)(2)(A) must
be struck and sent back to the legislature to be rewritten in accordance with the Constitutional
doctrines. It makes perfect sense that an ICS should be able to delete patently offensive or
universally recognized impermissible material provided by another ICP / user, without fear of
liability. Decisions as to what is considered patently “offensive” or impermissible and immune
from liability, however, cannot be left to self-interested private corporations, as has been
demonstrated by one or more of the various reasons discussed throughout this constitutional
challenge. Instead, an official regulatory commission must be formed 108 to determine universal
contemporary community standards; an ICS would be granted immunity provided it
restricted materials in accordance with the commission’s standards. In its present form, Section
    230(c)(2)(A) must be struck as unconstitutional. The proper (i.e., constitutionally sufficient)
    legislative rewrite of Section 230(c)(2)(A) should read as follows: “No provider or user of an
    interactive computer service shall be held liable on account of – (A) any action voluntarily
    undertaken in good faith by the provider or user to restrict access to or availability of material that
    the provider or user reasonably considers objectionable pursuant to universal contemporary
    community standards defined by the regulatory commission’s prohibitions.” 109
    108
The legislature is at an impasse with regard to repealing or amending Section 230. Some legislators want Section
    230 gone, while others want to keep it in place, giving them a formidable censorship and competitive weapon. The
    votes needed to repeal the statute, through the legislative process, are unattainable. This Court’s actions could finally
    break the impasse (through the judicial process) and force the legislature to go back to the table and get it right by way
    of, among other things, setting up an impartial official regulatory commission tasked with setting up (“filling up the
    details”) universal contemporary community standards and regulatory guidelines for all United States Internet
    companies to follow.
109
Rather than “regulatory commission,” the actual name of the “regulatory commission” would be inserted here –
the FCC, for example.
    (d)
    Section 230(c)(2)(B) (which expressly relates back to Section 230(c)(1) because it
    is the same kind of inaction situation in a slightly different context) immunizes an ICS when the
    ICS takes no action with respect to the content of another ICP #2 / user #2 but provides the tools /
    services to an ICP #1 / user #1 to take action on the content of ICP #2 / user #2 – it makes sense
that an ICS would not be subject to any liability for giving a parent / user / ICP (ICP #1) the tools
needed to protect a child by, for example, eradicating pornography posted on the Internet by
another user / ICP (ICP #2). Section 230(c)(2)(B), however, does include an exploitable flaw – an
    ICS could potentially provide the tools to ICP #1, with the instructions or directive to act upon
    ICP #2, thus laundering the ICS’ own actions through a proxy ICP, analogous to Section
    230(c)(2)(A). In its present form, Section 230(c)(2)(B) must be struck as unconstitutional. The
proper (i.e., constitutionally sufficient) legislative rewrite of Section 230(c)(2)(B) should read as
follows: “(B) any action taken to enable or make available to information content providers or
    others the technical means to restrict access to material described by the (regulatory commission)
    is subject to the definitional protection of paragraph 230(c)(1).”
    330.
In sum, Section 230(c)(1) could stand alone if the word “the” in “the publisher” is
given proper (i.e., literal) effect and the definitional protection of Section 230(c)(1) relates only to
the inaction of the ICS. Under no circumstance, however, can Section 230(c)(2)(A) or Section
    230(c)(2)(B) be constitutionally rectified.
    331.
Fyk respectfully requests from this Court the striking of all of the CDA (the most
realistic route) or, alternatively, the striking of a portion of the CDA (Section 230(c)(2)) – the less
realistic, but theoretically conceivable, route. 110
110
It is important to note that, under the current broken CDA landscape, no normal person has the realistic ability to
challenge Big Tech and its ongoing abuses, partly because most courts dismiss actions (on illogical and / or
unintelligible grounds) without ever considering the merits. It takes a man like Elon Musk (one of the richest human
beings on the planet), who is trying to acquire Twitter in an attempt to restore some semblance of free speech online.
For just about everybody else (i.e., folks not as rich as Elon Musk), the justice system is the last resort; hence, this
constitutional challenge.
    COUNT I – DECLARATORY JUDGMENT AS TO CDA UNCONSTITUTIONALITY
    Plaintiff, Jason Fyk, re-alleges Paragraphs 1-331 above as if fully set forth herein and
    further alleges as follows.
    332.
    Fyk was harmed by the application of the CDA (see Facebook Lawsuit,
    summarized in Ex. B) which had the effect of violating his Fifth Amendment due process rights
    and / or suppressing his First Amendment rights.
    333.
    Fyk has a bona fide, actual, and present need for declarations as to his rights, status,
    and privileges under the CDA, as to the constitutionality of the CDA, and / or as to the construction
    of the CDA.
    334.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Non-Delegation Doctrine (see, e.g., ¶¶ 3, 28-30, 36, 38, 49, 52, 63-103, 328, supra) and,
    thus, unconstitutional. As a result of Section 230’s unconstitutionality, Fyk respectfully requests
    further declaration from this Court that Section 230 is hereby struck.
    335.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Major Questions Doctrine (see, e.g., ¶¶ 3, 31, 34-38, 48-49, 52, 63-103, 114, 270, 328, n.
    29, supra) and, thus, unconstitutional. As a result of Section 230’s unconstitutionality, Fyk
    respectfully requests further declaration from this Court that Section 230 is hereby struck.
    336.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Void-for-Vagueness Doctrine (see, e.g., ¶¶ 3, 39, 50, 52, 104-120, 206, 235-236, 328, supra)
    and, thus, unconstitutional. As a result of Section 230’s unconstitutionality, Fyk respectfully
    requests further declaration from this Court that Section 230 is hereby struck.
    337.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Substantial Overbreadth Doctrine (see, e.g., ¶¶ 3, 51-52, 121-260, 289, 328, supra) and,
    thus, unconstitutional. As a result of Section 230’s unconstitutionality, Fyk respectfully requests
    further declaration from this Court that Section 230 is hereby struck.
    338.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Harmonious-Reading Canon (see, e.g., ¶¶ 3, 23-25, 262-304, 328, supra) and, thus, legally
    untenable. As a result of Section 230 being legally untenable under this canon of statutory
    construction (as well as unconstitutional under the above doctrines), Fyk respectfully requests
    further declaration from this Court that Section 230 is hereby struck.
    339.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Irreconcilability Canon (see, e.g., ¶¶ 3, 23, 26-27, 269, 287-288, 293, 295-296, 305-326,
    328, supra) and, thus, legally untenable. As a result of Section 230 being legally untenable under
    this canon of statutory construction (as well as unconstitutional under the above doctrines), Fyk
    respectfully requests further declaration from this Court that Section 230 is hereby struck.
    340.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Whole-Text Canon (see, e.g., ¶¶ 3, 24-25, 262-304, 328, supra) and, thus, legally untenable.
    As a result of Section 230 being legally untenable under this canon of statutory construction (as
    well as unconstitutional under the above doctrines), Fyk respectfully requests further declaration
    from this Court that Section 230 is hereby struck.
    341.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Surplusage Canon (see, e.g., ¶¶ 3, 25, 262-304, 328, supra) and, thus, legally untenable. As
    a result of Section 230 being legally untenable under this canon of statutory construction (as well
    as unconstitutional under the above doctrines), Fyk respectfully requests further declaration from
    this Court that Section 230 is hereby struck.
    342.
    Fyk respectfully requests a declaration from this Court that Section 230 is violative
    of the Absurdity Canon (see, e.g., ¶¶ 3, 71, 75, 109, 127, 132, 152, 212, 243, 248-249, 252, 256,
    258, 262-304, 328, supra) and, thus, legally untenable. As a result of Section 230 being legally
    untenable under this canon of statutory construction (as well as unconstitutional under the above
    doctrines), Fyk respectfully requests further declaration from this Court that Section 230 is hereby
    struck.
    343.
In the alternative to the declarations Fyk seeks in Paragraphs 334-342 above
(which primary declarations are, again, likely the only realistic declarations here given the
pervasive, multi-dimensional brokenness of the CDA), Fyk seeks an alternative declaration from
this Court that Section 230 immunity follows the precise analysis set forth in Paragraphs 329-330
and footnote 107 above.
    344.
    As a direct, foreseeable, and proximate result of the unconstitutionality and / or
    illegality of Section 230 (and / or Defendant’s enacting and / or maintaining an unconstitutional /
    illegal law), Fyk has suffered and continues to suffer harm along with millions of others.
    345.
Fyk has no remedy by which to receive the aforementioned declarations other than this
lawsuit.
    346.
    As a further result of the unconstitutionality / illegality of Section 230 (and / or
    Defendant’s enacting and / or maintaining an unconstitutional / illegal law), Fyk has been forced
    to retain legal counsel to represent him in this matter and is accordingly entitled to recover his
    reasonable attorneys’ fees and costs pursuant to Title 28, United States Code, Section 2412 or as
    otherwise awardable.
    WHEREFORE, Plaintiff, Jason Fyk, respectfully requests that this Court declare and
    construe the CDA and enter declaratory judgment, as follows: declare the CDA unconstitutional,
    accordingly inoperative, and hereby struck. See ¶¶ 334-342, supra. 111 In conjunction with same,
    Fyk further respectfully requests (a) the Court’s entry of declaratory judgment in his favor, for all
    declaratory and supplemental relief within the declaratory jurisdiction of this Court; (b) the Court’s
    taxation of costs and / or award of reasonable attorneys’ fees in favor of Fyk pursuant to Title 28,
    United States Code, Section 2412 or as otherwise taxable / awardable; and (c) the Court’s awarding
    Fyk any other relief deemed equitable, just, or proper. 112
    Dated: April 26, 2022.
    Respectfully Submitted,
    CALLAGY LAW, P.C.
    1900 N.W. Corporate Blvd., Suite 310W
    Boca Raton, FL 33431
    (561) 405-7966; (201) 549-8753 (f)
    /s/ Jeffrey L. Greyber
    Jeffrey L. Greyber, Esq.
    D.C. Bar No. 1031923
    jgreyber@callagylaw.com
    hcasebolt@callagylaw.com
    Attorney for Plaintiff
    PUTTERMAN / YU / WANG LLP
    /s/ Constance J. Yu
    Constance J. Yu, Esq.
    Pending Pro Hac Vice Admission
    345 California St., Suite 1160
    San Francisco, CA 94104-2626
    (415) 839-8779; (415) 737-1363 (f)
    cyu@plylaw.com
    Attorney for Plaintiff
    111
    Alternatively, see ¶¶ 4, 329-330 and 343, and n. 107, supra.
    112
    Finally, in the spirit of full transparency, we advise the Court that it is likely Fyk will file (in the not-so-distant
future) a motion for a nationwide injunction as to the (non-)application of Section 230(c), which injunction would
    / should remain in effect until this constitutional challenge is resolved.