Opinion editor's note: Star Tribune Opinion publishes a mix of national and local commentaries online and in print each day.
The coming review of Section 230, the law that frames the landscape for content moderation
The Supreme Court holds the internet's fate in its hands.
By Michael Hiltzik

•••
Almost no one noticed in 1996 when Congress gave online social media platforms sweeping legal immunity from what their users posted on them.
The provision, known as Section 230 of the Communications Decency Act, has since been dubbed the "twenty-six words that created the internet."
Without Section 230, according to Jeff Kosseff, the law professor whose book on the section bears that title, the social media world as we know it today "simply could not exist."
That's why advocates of online speech are nervous that the Supreme Court has taken up a case that could determine Section 230's limits, or even its constitutionality.
The Supreme Court's decision to review two lower court rulings, including an appellate case from the 9th U.S. Circuit Court of Appeals in San Francisco, marks the first time the court has chosen to review Section 230, after years in which it consistently turned away cases involving the law.
That may not reflect a change in its view of the legal issues, so much as a change in how society views the internet platforms at the center of the cases — Google, Facebook, Twitter and other sites that allow users to post their own content with minimal review.
"We've been in the midst of a multiyear tech-lash, representing the widely held view that the internet has gone wrong," says Eric Goldman, an expert in high-tech and privacy law at Santa Clara University Law School. "The Supreme Court is not immune to that level of popular opinion — they're people too."
Disgruntlement with the Big Tech platforms stretches from one side of the political spectrum to the other.
Conservatives cherish the notion that the platforms are liberal fronts that have been hiding behind their content-moderation policies to disproportionately block conservative users and suppress conservative viewpoints. Progressives complain that the platforms' policies haven't been successful in eradicating harmful content, including disinformation and racism and other hate speech.
The harvest has been laws and legislative proposals aiming to dictate how the platforms moderate content.
Florida enacted a law prohibiting social media firms from shutting down politicians' accounts. A federal appeals court overturned the law.
Texas enacted a law forbidding the firms to remove posts based on a user's political viewpoint. That law was upheld by a federal appeals court.
Both laws may be destined to come before the Supreme Court.
Meanwhile, as I've reported before, congressional hoppers are brimming with proposals to regulate tweets, Facebook posts and the methods those platforms use to winnow out objectionable content posted by their users.
Efforts to place collars on social media platforms haven't emerged exclusively from red states or conservative mouthpieces. Last month, California Gov. Gavin Newsom signed a law requiring those firms to make public a host of information about their rules governing user behavior and activities.
It should be obvious that laws purporting to open online platforms to "neutral" judgments about content do nothing of the kind: They're almost invariably designed to favor one color of opinion over others.
Before exploring the implications of the Supreme Court's review further, here's a primer on what Section 230 says.
The 26 words cited by Kosseff state, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
That places the social media platforms, as well as other platforms that host outsiders' content or images, such as newspaper reader comment threads or consumer reviews, in the same position as owners of bookstores or magazine stands: They can't be held liable for the content of the books or magazines they sell. Liability rests only with the actual content producers.
There's a bit more to Section 230. It specifically allows, even encourages, the online platforms to moderate content on their sites by making good-faith judgments about whether content should be taken down or refused.
In other words, a site that blocks some content doesn't thereby become responsible for whatever it leaves online.
The fortunes of today's social media giants have been built upon the freewheeling content provided by their users at no charge. The nature of public discussion has also been transformed through the networks of users on the platforms.
From a commercial standpoint, the companies have been reluctant to get in the way of the torrent, unless it's so noisome that it crosses an inescapable line. Where that line is, and who should draw it, is the issue at the heart of most of the controversy.
That brings us back to the California case before the Supreme Court. It was brought against Google, the owner of YouTube, by the family of Nohemi Gonzalez, an American who was killed in an attack by the militant group Islamic State, also known by the acronym ISIS, in Paris on Nov. 13, 2015.
The plaintiffs blame YouTube for amplifying the message of ISIS videos posted on the service by steering users who viewed the videos, typically via automated algorithms, to other videos either posted by ISIS or addressing the same themes of violent terrorism.
The legal system's perplexity about how to regulate online content was evident from the outcome of the Gonzalez case at the 9th Circuit. The three-judge panel fractured, issuing three separate opinions, though the effective outcome was to reject the family's claim about algorithmic recommendations. The lead opinion found that Section 230 protected YouTube.
In legal terms, the question is whether YouTube and other platforms move beyond the role of mere distributors of someone else's content when they make "targeted recommendations" steering users to related content, including when they do so via automated algorithms.
But that argument risks the creation of a legal minefield. Publishers and distributors constantly take steps to steer audience members toward content they might find provocative, piquant or interesting; newspapers signal the importance or relevance of some articles by placing them on the front page or in sections with themes such as local or national news; news programs do the same through the order that they present stories on the air.
More worrisome, however, may be this Supreme Court's tendency to legislate on its own. "The court has shown consistently that it doesn't care about other sources of power," Goldman told me. There appear to be few grounds for the justices to drastically narrow Section 230, but given this court's overreach on principles as well-established as abortion rights, Goldman says, "all bets are off."
There is little to suggest that tampering with Section 230 would address all the issues that the public has with the state of online speech today.
A world in which platforms lose their ability to exercise their own judgment about content, or in which that ability is constrained by a court decision, will be indistinguishable from an open sewer, which wouldn't be healthy for anyone. A Supreme Court decision in that direction will be hard for Congress to undo.
What keeps advocates of Section 230 up at night is the possibility that the same Supreme Court justices who overturned the right to abortion and narrowed the application of the Voting Rights Act might see the potential for partisan advantage in removing the immunity enjoyed by online services for more than a quarter-century.
"We've now put power into the hands of nine justices who have embraced the culture wars," Goldman says, "and they're going to decide how we talk to each other."