
I have just seen this video on Youtube:

"Republican Texas governor Greg Abbott has signed a law that forbids social media companies from banning people from their platforms based on their location and/or their politics. "

I presume this is intended to prevent things like Twitter banning Trump from posting.

Is this law unconstitutional, on the grounds that it violates freedom of the press? As I understand it, the First Amendment means the government can't tell any publisher what political opinions they must publish, or must not publish. If that is correct, does 'the press' include social media?

Pete
  • Twitter didn't ban Trump for his politics, they banned him for inciting violence, which is in violation of their Terms & Services – BlueRaja - Danny Pflughoeft Sep 21 '21 at 07:15
  • If we treat them purely as private companies, then this might be seen as an attempt to make political orientation a protected class, but that still wouldn't allow someone to express those views on those platforms (although you may be able to make a case against that). The alternative is that they're trying to challenge the idea that they are private companies, instead of essential service providers, and free speech protection should extend to those platforms (to some degree). But that would need to be fought in the Supreme Court (and creating a law to start this process is not unheard of). – NotThatGuy Sep 21 '21 at 10:00
  • @BlueRaja-DannyPflughoeft Well, there are plenty of people with different politics that advocated for violence that didn't suffer any consequences despite being reported, so it's not that clear cut. If there's a rule that's only getting applied to one group of people and not another, it's not accurate to claim just the rule is the cause; it is also which group the person belongs to. – ColleenV Sep 21 '21 at 13:45
  • @ColleenV how many of them were addressing a crowd that later became a violent mob? – Caleth Sep 21 '21 at 15:11
  • @Caleth I'm noting the argument, not arguing that any particular action was or was not warranted. The comment was a criticism of the question. My point is simply that criticism isn't as straightforward as it was stated. – ColleenV Sep 21 '21 at 15:14
  • The first amendment only says that the government can't restrict free speech. Twitter is not the government, they can do whatever they want. – Darrel Hoffman Sep 21 '21 at 15:18
  • My question is not about Twitter's actions. My question is about the government's actions telling social media what to do. – Pete Sep 21 '21 at 15:32
  • I would like to see how government mandating how you use your intellectual property passes a SCOTUS test. This may very well have far reaching effects in regards to intellectual property rights. – Neil Meyer Sep 21 '21 at 19:35
  • @DarrelHoffman websites are non tangible property. They are intellectual property like a patent or a trade secret. People don't seem to understand that. – Neil Meyer Sep 21 '21 at 19:37
  • @NeilMeyer social media sites such as Facebook and Twitter are more than intellectual property. They enjoy some protection under copyright and trademark law, and perhaps under some patents. They are also mechanisms whereby people can publish information, and the government has a well established capacity to regulate such communications media. For example, they are "interactive computer services" under section 230. The government has chosen to absolve such services of liability for content supplied by users, but it didn't have to. – phoog Sep 21 '21 at 19:57
  • "Republican Texas governor Greg Abbott has signed a law that forbids social media companies from banning people from their platforms based on their location" If social media sites ban all Texans, I'm not sure Texas would have any jurisdiction. – Acccumulation Sep 22 '21 at 00:34
  • This might be a separate question, but why does Governor Abbott think that companies such as Facebook and Twitter fall within his jurisdiction? – Dawood ibn Kareem Sep 22 '21 at 20:16
  • @AndrewT. That bill was put out of enforcement by the Florida federal circuit. – Trish Sep 22 '21 at 23:18

4 Answers


Most likely.

Under the First Amendment, the government cannot compel speech. Forcing a company to host people's messages on its platform is compelled speech by the company.

It is well established that the government can't force a newspaper to carry messages it does not want to publish. The key case is Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), which held it unconstitutional to require a newspaper to print a political candidate's reply, in equal space, to coverage in which the paper had disparaged that candidate.

While Miami Herald turned on the newspaper's own editorial conduct, Wooley v. Maynard, 430 U.S. 705 (1977), held that the state could not force a citizen to display its motto, or for that matter any message:

The State may not constitutionally require an individual to participate in the dissemination of an ideological message by displaying it on his private property in a manner and for the express purpose that it be observed and read by the public.

Forcing a public web page to host advertisements or speech at a government's behest, or under threat of government action, is compelled speech and violates the rulings in Miami Herald, Wooley, and related cases.

There is a little light for the government under PruneYard, Turner Broadcasting, and Rumsfeld, but none of them applies here. Turner Broadcasting concerned cable operators required to carry broadcast television signals, not the operator's own speech; PruneYard concerned a shopping center that does not host its own speech, and it rests on the California constitution, so it is useful only in California; and Rumsfeld dealt with military recruitment, which is always treated specially.

A similar Florida law was deemed very likely unconstitutional by the (federal) Northern District of Florida (Injunction Text).

Addendum

A joint lawsuit by NetChoice & CCIA was filed against Texas on September 22, 2021 (Complaint), asking for a preliminary injunction. NetChoice posts its filings on its website.


December 1, 2021 Update

Indeed, the relevant parts of HB 20 were enjoined from enforcement on December 1, 2021, with the court reasoning that:

Social media platforms have a First Amendment right to moderate content disseminated on their platforms. See Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1932 (2019) (recognizing that “certain private entities[] have rights to exercise editorial control over speech and speakers on their properties or platforms”). Three Supreme Court cases provide guidance. First, in Tornillo, the Court struck down a Florida statute that required newspapers to print a candidate’s reply if a newspaper assailed her character or official record, a “right of reply” statute. 418 U.S. at 243.

[...]

In Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos., the Supreme Court held that a private parade association had the right to exclude a gay rights group from having their own float in their planned parade without being compelled by a state statute to do otherwise. 515 U.S. 557, 572–73 (1995).

[...]

Finally, the Supreme Court ruled that California could not require a private utility company to include a third party’s newsletters when it sent bills to customers in Pac. Gas & Elec. Co. v. Pub. Utilities Comm’n of California, 475 U.S. 1, 20–21 (1986).

HB 20 compels social media platforms to significantly alter and distort their products. Moreover, “the targets of the statutes at issue are the editorial judgments themselves” and the “announced purpose of balancing the discussion—reining in the ideology of the large social-media providers—is precisely the kind of state action held unconstitutional in Tornillo, Hurley, and PG&E.” Id. HB 20 also impermissibly burdens social media platforms’ own speech. Id. at *9 (“[T]he statutes compel the platforms to change their own speech in other respects, including, for example, by dictating how the platforms may arrange speech on their sites.”). For example, if a platform appends its own speech to label a post as misinformation, the platform may be discriminating against that user’s viewpoint by adding its own disclaimer. HB 20 restricts social media platforms’ First Amendment right to engage in expression when they disagree with or object to content.

For these reasons, IT IS ORDERED that the State’s motion to dismiss, (Dkt. 23), is DENIED.

IT IS FURTHER ORDERED that Plaintiffs’ motion for preliminary injunction, (Dkt. 12), is GRANTED. Until the Court enters judgment in this case, the Texas Attorney General is ENJOINED from enforcing Section 2 and Section 7 of HB 20 against Plaintiffs and their members. Pursuant to Federal Rule of Civil Procedure 65(c), Plaintiffs are required to post a $1,000.00 bond.

IT IS FINALLY ORDERED that Plaintiffs’ motion to strike, (Dkt. 43), is DISMISSED WITHOUT PREJUDICE AS MOOT.

Trish

That will ultimately be determined by the courts. You might think so, if Facebook and the like were strictly private platforms, but Knight First Amdt. Inst. at Columbia Univ. v. Trump, 928 F.3d 226, somewhat undermines that thinking. The court there held that Trump could not exercise the ordinary control (such as blocking users) that an account holder would have over his account. In reviewing the lower court's findings:

After concluding that the defendants had created a public forum in the interactive space of the Account, the court concluded that, by blocking the Individual Plaintiffs because of their expressed political views, the government had engaged in viewpoint discrimination.

The appeals court said that

we agree that in blocking the Individual Plaintiffs the President engaged in prohibited viewpoint discrimination

The court rejected the contention that

the Account is exclusively a vehicle for his own speech to which the Individual Plaintiffs have no right of access and to which the First Amendment does not apply.

An analogous law was passed in Florida. There was a lawsuit, and enforcement was stayed, which means that the court found that the arguments against the law were likely to prevail. The answer with respect to the Texas law hinges on the differences between the Florida and Texas laws, and on the basis for the injunction against the Florida law.

user6726
  • Knight Institute however was mooted and voided. – Trish Sep 20 '21 at 17:02
  • Yes, for the reason that Trump is no longer president. The reasoning in the ruling was not touched. – user6726 Sep 20 '21 at 17:27
  • It seems that this would be far closer to the Florida case than to Knight Institute. That result was predicated on the finding that Trump's Twitter account was a mechanism for government communication. Because there is no such claim as to the social media companies more broadly, there's not much force to the argument that they should be stripped of their First Amendment protections. – bdb484 Sep 20 '21 at 17:55
  • Knight argued that Trump created a public forum on his personal Twitter account. While the public forum existed, it was impermissible for the government (including the President) to deny access to that forum because of protected political speech. However, Knight did not rule that Twitter was in general a public forum, just Trump's account. – A. R. Sep 20 '21 at 20:19
  • Hard agree with what @AndrewRay said, and more importantly, Knight did not even decide that Twitter can't ban people from viewing Trump's account on the basis of their political opinion, it decided that Trump can't do that. That entire case hinged on the critical detail that Trump is a government official making use of Twitter as a part of his job in office, and this does not apply here in general. – DreamConspiracy Sep 21 '21 at 00:51
  • @DreamConspiracy more precisely, Knight holds that the President of the United States is constrained by the reasoning we're discussing, which is why the case was dismissed when Trump ceased to be President of the United States. As you note, it's not about creating a "public forum," which Twitter is in general, but about the use of Twitter for official government purposes. – phoog Sep 21 '21 at 09:29
  • @AndrewRay: Indeed, Twitter actually did deny some people (in fact, everyone) access to Trump's Twitter account after the events of January 6, 2021. Obviously the court did not prohibit them from doing that. – Kevin Sep 21 '21 at 18:01
  • Are you sure that enforcement of the Florida law being stayed means that the court judged the plaintiffs likely to prevail? Wouldn't it be enough for the court to decide (1) that there is a genuine controversy, and (2) that the plaintiffs would suffer irreparable harm if the state proceeded with enforcement while that controversy was being adjudicated? – John Bollinger Sep 22 '21 at 19:14
  • @JohnBollinger "To get a preliminary injunction, a party must show that they will suffer irreparable harm unless the injunction is issued. Preliminary injunctions may only be issued after a hearing. When determining whether to grant preliminary injunctions, judges consider the extent of the irreparable harm, each party's likelihood of prevailing at trial, and any other public or private interests implicated by the injunction. Parties may appeal the judge's decisions on whether to award a preliminary injunction." https://www.law.cornell.edu/wex/preliminary_injunction – Acccumulation Sep 23 '21 at 06:36
  • @JohnBollinger From the Florida court itself: “The plaintiffs are likely to prevail on the merits of their claim that these statutes violate the First Amendment. There is nothing that could be severed and survive.” – Trish Sep 23 '21 at 08:31
  • That's useful @Trish, but it speaks to the specifics of this case. The answer says "enforcement was stayed, which means that [...]", and in my understanding, the fact that enforcement of a law was stayed in a situation like this does not mean or require the court to take such a strong position on the likely disposition of the case. – John Bollinger Sep 23 '21 at 10:42

No one knows how the courts will eventually rule on this law, or on the somewhat similar Florida law now being litigated.

It is true, as the answer by Trish says, that the government cannot compel a publisher to publish things against its wishes, and that an individual cannot generally be compelled to make statements of political views. In West Virginia State Board of Education v. Barnette, 319 U.S. 624 (1943) the US Supreme Court wrote:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.

However Section 230(c)(1) of the Communications Decency Act says that:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Thus requiring that a person be allowed to have an account is not requiring the owner of the service to publish anything, since the owner is not the publisher of any user's content, nor does it require the owner or anyone else to make any statement, and in particular not to make any statement specified by the government.

It is constitutional, in general, for a state to pass anti-discrimination laws, or laws requiring that a business give access to particular groups of people. If a social media platform is treated as a service allowing people to publish their own views, then the state may pass laws limiting the exclusion of people from that service on particular grounds.

That does not prove that the law is constitutional, but it shifts the ground of inquiry significantly. The exact provisions of the Texas and Florida laws may be significant in the eventual decision.

I doubt that Knight Institute will be found relevant. That case was based on the very unusual situation that a public official, and in particular the President of the United States, was using social media as a means for distributing official policy statements and as a public forum for comments on them; it was held that Trump could not, in those circumstances, exclude specific individuals, as that would constitute government action against those people. That situation is quite different from the ones under the Texas and Florida social media laws.

ColleenV
David Siegel
  • Miami Herald and Wooley v Maynard as well, but to a lesser degree. – Trish Sep 21 '21 at 16:37
  • I have always thought that Section 230c companies enjoying special protections as "not publishers" while simultaneously censoring information based on the opinion of the company is strange. You can't have your cake and eat it too (well, anywhere other than under the law I guess ;)) – ColleenV Sep 21 '21 at 17:01
  • @ColleenV If they were liable for content, almost everything controversial would be blocked or removed. If they couldn't moderate protected speech, you'd very quickly have e.g. pornographic political memes all over. – Caleth Sep 22 '21 at 14:11
  • Sure, @Caleth, but when section 230 was first enacted, the problem it addressed was that information service providers couldn't effectively filter content while still providing service at all. The implied quid pro quo was that in exchange for providing a useful service that was understood not to be filtered, information service providers were excused from responsibility for the content of the speech they conveyed. Now that providers have taken to exercising effective editorial control over the speech they convey, the premises of section 230 have been undermined. – John Bollinger Sep 22 '21 at 19:38
  • @JohnBollinger I don't remember it ever being assumed that content wouldn't be filtered. Human moderation and blacklisting profanity predates that legislation – Caleth Sep 22 '21 at 19:43
  • @Caleth Although the things that social media companies do block (not ban; accounts are not banned for this content AFAIK but the content itself is blocked) are generally egregious in nature, such as graphic videos of death, graphic sexuality, and so on. No matter how much you do or don't like Trump's tweets, you'd be hard pressed to equate them to something like that. – Ertai87 Sep 22 '21 at 20:16
  • @JohnBollinger I do not think there was any provision or understanding that online services be completely unfiltered, and to the best of my knowledge none of the major ones ever were. Rather the issue was the reverse: Prior to sec 230, doing any content moderation might impose liability for the content on the service. One purpose of sec 230 was to allow a service to do some content moderation without assuming responsibility for things that slipped by. Can you cite any source for the idea that sec 230 was based on the understanding that content was to be un-filtered? – David Siegel Sep 22 '21 at 20:35
  • @JohnBollinger: Your interpretation frankly makes no sense. Under Cubby v. CompuServe and Stratton Oakmont v. Prodigy Services, what you describe was already the law before section 230 had been enacted (i.e. you're excused from liability as long as you don't filter anything). Congress enacted section 230 as part of the Communications Decency Act, which was specifically intended to clean up the internet and reduce the amount of "obscene" content on it. They wanted more filtering, not less. – Kevin Sep 22 '21 at 21:55
  • @ColleenV Punishing entities for not helping people spread lies would also be rather strange. It would create a perverse incentive for entities to not remove any libelous material, because once they do, they become a "publisher", and have to remove all libelous material, even material they aren't aware is libelous. If you kick someone out of your house because they're spouting lies, are you now a "publisher" and liable for anything one of your guests says? – Acccumulation Sep 23 '21 at 06:51
  • @Acccumulation It's strange when folks pretend a $48B company that serves millions of people is somehow akin to an individual in their living room. If the public isn't getting the benefit of more speech, why should we give them extra protection from the law? The intent of 230c was not to give corporations free rein to suppress any speech they disliked without consequences. If corps are going to start picking and choosing which speech is harmful to the public, the public needs the recourse of the courts. – ColleenV Sep 23 '21 at 11:15
  • The consequence of removing 230c protections would be that the SM corps would have to apply their rules more consistently or not at all. I'm not sure why everyone jumps to the "not at all" option when we live in the age of the computer algorithm that can analyze our posts in real time to serve us ads. We have far more capability to identify and remove unlawful speech than we imagined was possible in 1995. – ColleenV Sep 23 '21 at 12:54
  • @ColleenV yes we do, but IMO as a software developer, we do not yet have any program which can reliably do so to a standard that a court would approve. – David Siegel Sep 23 '21 at 14:17
  • @DavidSiegel I don't want to completely derail your answer, but I think that there is plenty of protection in the US legal system to keep companies from being held civilly liable for impossible standards. The billion $ corp does have a big advantage in court. Maybe I'm just old and have more faith in the US legal system. It's far from perfect, but it works pretty well. Corps should be allowed to set terms of use for their products, but they shouldn't be protected from liability once they've attained the sort of power the mega tech corps currently wield. – ColleenV Sep 23 '21 at 14:57
  • I don't think you are coming anywhere close to derailing my answer. What rights and powers a corp that runs an SM service should have is a policy decision, and so not really on-topic here on Law. The standards that were being imposed on moderation prior to Sec 230 are not currently attainable by purely automated processes; indeed I think they would require fully sapient human-level decision makers for final review. Whether to change those standards and if so how, or to grant immunity, or take some other action is again a policy decision. Any of those routes could be enacted into law. – David Siegel Sep 23 '21 at 15:07
  • @DavidSiegel I meant "derailing the discussion under your answer"... I know it's annoying sometimes to get pinged by a discussion that's not really relevant to the answer. – ColleenV Sep 23 '21 at 15:42
  • No problem. It is an interesting discussion even if a bit aside from the question. I think you are expecting more of automated processes than they can manage. Such things can help, but IMO humans are needed in the loop if effective moderation to the level of eliminating actionable defamation and other problematic posts is to be achieved, and that is not free. – David Siegel Sep 23 '21 at 15:50
  • I guess my feeling is that whatever method of moderation the company chooses only needs to be good enough to prevent most cases from being brought, not perfect. The company can decide what legal exposure they can abide. I'm against "corporate welfare" type laws on principle. Either the company should enjoy protection and be more hands off, or forgo protection and have complete control. – ColleenV Sep 23 '21 at 15:55
  • I also find that technology needs challenges to overcome. Maybe the only reason we don't have good automated moderation is because we've never really needed it. – ColleenV Sep 23 '21 at 15:58
  • We do need challenges, but this one is being worked on. The firm I work for is putting sizable resources behind so-called "natural language processing" (NLP) not to moderate social media, but to turn dictated notes from a doctor working on a case into useful medical codes and notes in standard formats. This work could also be applied to moderation. That is one way I know how hard a job designing such a program would be. (I also have a decades-long interest in AI.) – David Siegel Sep 23 '21 at 16:19
  • Undoubtedly there are many technologies being actively developed that might be useful for moderation. However we will probably see huge strides forward if something happens like FaceBook being unable to find enough human moderators to wade through the horrific posts they have to deal with, and not taking those down quickly enough could end up costing the company enough to worry them. – ColleenV Sep 23 '21 at 16:27

The Knight ruling does not address the rights of the company owning the platform and its ability to set rules, regulations, and policies on its services. Each person may have rights, but they quickly diminish if he or she violates the terms of service. The idea that these platforms are a public forum will not stand up to long-term scrutiny. No way. People think the idea of free speech is sacred, and it is, but free enterprise reigns supreme, and nobody is going to tell a private company how it should regulate its own IP. That's far more dangerous than any bogus free speech argument here, of which there is none.

  • There are already laws which regulate what a company can do with its IP. I think you are correct that the Knight case will not apply, but that does not settle the issue at all. Can you cite any source or case for "The idea they are of a public forum will not stand up ..." please? – David Siegel Sep 23 '21 at 15:45