Michael Hiltzik, Tribune News Service
Almost no one noticed in 1996 when Congress gave online social media platforms sweeping legal immunity from what their users posted on them.
The provision crafted by then-Rep. Christopher Cox and then-Rep. Ron Wyden was known as Section 230 of the Communications Decency Act. It has since been dubbed the “Magna Carta of the internet” and “the twenty-six words that created the internet.”
Without Section 230, according to Jeff Kosseff, the law professor whose book on the section bears the latter title, the social media world as we know it today “simply could not exist.”
That’s why advocates of online speech — indeed, of internet communications generally — are very, very nervous that the Supreme Court has taken up a case that could determine Section 230’s limits or even, in an extreme eventuality, its constitutionality.
The Supreme Court’s decision to review two lower court rulings, including an appellate case from the 9th Circuit Court of Appeals in San Francisco, marks the first time the court has chosen to review Section 230, after years in which it consistently turned away cases involving the law.
That may not reflect a change in its view of the legal issues so much as a change in how society views the internet platforms at the center of the cases — Google, Facebook, Twitter and other sites that allow users to post their own content with minimal review.
“We’ve been in the midst of a multiyear tech-lash, representing the widely held view that the internet has gone wrong,” says Eric Goldman, an expert in high-tech and privacy law at Santa Clara University Law School. “The Supreme Court is not immune to that level of popular opinion — they’re people too.”
Disgruntlement with the big tech platforms stretches from one side of the political spectrum to the other.
Conservatives cherish the notion that the platforms are liberal fronts that have been hiding behind their content-moderation policies to disproportionately block conservative users and suppress conservative viewpoints; progressives complain that the platforms’ policies haven’t been successful in eradicating harmful content, including disinformation and racism and other hate speech.
The harvest has been laws and legislative proposals aiming to dictate how the platforms moderate content.
Florida enacted a law prohibiting social media firms from shutting down politicians’ accounts based on proponents’ assertions that “big tech oligarchs in Silicon Valley” aim to silence conservatives to favor a “radical leftist agenda,” as a federal appeals court observed in a decision overturning the law.
Texas enacted a law forbidding the firms to remove posts based on a user’s political viewpoint. That law was upheld by a federal appeals court. Both laws may be destined to come before the Supreme Court.
As I’ve reported before, congressional hoppers are brimming with proposals to regulate tweets, Facebook posts and the methods those platforms use to winnow out objectionable content posted by their users.
Efforts to place collars on social media platforms haven’t emerged exclusively from red states or conservative mouthpieces. Last month, California Gov. Gavin Newsom signed a law requiring those firms to make public a host of information about their rules governing user behavior and activities.
The platforms are required to report twice a year how they define and deal with hate speech, content that might radicalize users, misinformation, disinformation and other content, as well as how often they took action respecting such content. The law sets stiff monetary penalties for violations.
It should be obvious that laws purporting to open online platforms to “neutral” judgments about content do nothing of the kind: They’re almost invariably designed to favor one color of opinion over others.
There’s no evidence that the online platforms have systematically suppressed conservative opinion — that’s just a talking point of conservatives such as Sen. Ted Cruz, R-Texas, and former President Donald Trump. And progressives haven’t been militating against conservative speech, but hate speech and harmful misinformation, which the major platforms themselves claim to officially prohibit.
Before exploring the implications of the Supreme Court’s review further, here’s a primer on what Section 230 says.
The 26 words cited by Kosseff state, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That places the social media platforms, as well as other platforms that host outsiders’ content or images, such as newspaper reader content threads or consumer reviews, in the same position as owners of bookstores or magazine stands: They can’t be held liable for the content of the books or magazines they sell. Liability rests only with the actual content producers.
There’s a bit more to Section 230. It specifically allows, even encourages, the online platforms to moderate content on their sites by making good-faith judgments about whether content should be taken down or refused.
In other words, a site that blocks some content can’t for that reason be held responsible for whatever it leaves online. Nor does Section 230 require sites to be “neutral,” however that term could ever be satisfactorily defined. (Almost any definition would presumably run afoul of the 1st Amendment.)
The power of Section 230 wasn’t evident when it was passed in 1996. Google, Facebook, Twitter and YouTube didn’t even exist at the time; the impetus for the law came from some legal rulings affecting CompuServe and Prodigy, interactive services that no longer exist as independent operations today.
The fortunes of today’s social media giants have been built upon the freewheeling content provided by their users at no charge. The nature of public discussion has also been transformed through the networks of users on the platforms.
From a commercial standpoint, the companies have been reluctant to get in the way of the torrent, unless it’s so noisome that it crosses an inescapable line. Where that line is, and who should draw it, is the issue at the heart of most of the controversy over the supposed power of the big tech companies to affect public discourse.
That brings us back to the California case before the Supreme Court. It was brought against Google, the owner of YouTube, by the family of Nohemi Gonzalez, an American who was killed in the Islamic State attacks in Paris on Nov. 13, 2015.
The plaintiffs blame YouTube for amplifying the militant group’s message by steering users who viewed its videos to other videos — either posted by the group or addressing the same themes of violent terrorism — typically through algorithmic recommendations.
The legal system’s perplexity about how to regulate online content was evident from the outcome of the Gonzalez case at the 9th Circuit. The three-judge panel fractured into issuing three rulings, though the effective outcome was to reject the family’s claim about algorithmic recommendations. The lead opinion by Judge Morgan Christen found that Section 230 protected YouTube.
There is little to suggest that tampering with Section 230 will address all the issues that the public has with the current state of online speech. The real danger is that almost nothing the court could do would make the issues swirling around online content moderation better — only worse.