Back after a four-week winter recess and a federal holiday on Monday, the Supreme Court this week takes up two highly important tests of the laws governing the Internet, including legal rules on the operation of some of the biggest online platforms.
The Court will broadcast “live” the audio (no video) of the hearings on its homepage, supremecourt.gov. To listen, click on “Live Audio” and follow the prompt when the courtroom scene appears lower on the page. The audio also will be available, under the title of each case, on C-SPAN at this link: cspan.org/supremecourt
This report examines the case to be heard on Tuesday, mainly involving Google and its video offshoot, YouTube. The Wednesday case, centering on Twitter, will be analyzed in this space tomorrow. Another major platform, Facebook, is supporting the arguments of Google and Twitter. An odd fact about these cases is that both were decided in a single lower-court ruling; however, they are being heard separately by the Justices.
The lawsuits against those Internet giants were filed by American relatives of individuals who were killed in separate terrorist attacks in Paris and Istanbul. They sued for money damages under federal anti-terrorism laws. Those laws, first passed in 1996 and then expanded in 2016, allow Americans to sue for harms done by a foreign terrorist group or its supporters. If such a lawsuit succeeds, the award can be triple damages. In the lower appeals court, the relatives lost against Google and YouTube but won against Twitter.
Tuesday’s hearing: Gonzalez v. Google LLC. The hearing begins at 10 a.m. and is scheduled for 70 minutes.
Background: The Digital Age probably got its start about six decades ago, with the creation in the late 1960s of the first workable prototype of what would later become the Internet, an interactive, computer-based communications system. The law governing the operation of online platforms, however, has developed slowly over that time span. It was not until 1996 that Congress first enacted a law meant to help the Internet develop by giving online platforms a wide umbrella of legal protection from being sued when content on their sites caused harm to someone else.
Now, after 27 years, that law is finally having its first test in the Supreme Court, in tomorrow’s Google case. At about the time that law was enacted, the nation was developing a lively fascination with the new electronic phenomenon, the Internet, as it enabled ordinary people to post thoughts and ideas on a widely available kind of bulletin board. No longer were commercial print publishers or TV and radio networks the dominant mass distributors of information.
The fledgling Internet got legal encouragement from a federal court ruling in New York in 1991, which concluded that CompuServe, one of the first major online service providers, was not legally the publisher of a newsletter article that harmed the reputation of a competing online newsletter run by Cubby, Inc. CompuServe had merely hosted the site, allowing a separate firm to post the article there, that court concluded.
But, four years later, a state court in New York decided that the Prodigy platform, host for a site called Money Talk, was legally responsible for a user’s post that accused a New York City investment firm of fraud. Prodigy, that court said, was at fault because it exercised some editorial choice over what it would allow on its site. The court thus applied the traditional theory of defamation law long used for print publishers, extending it to the new technology. The theory is that the publisher of defamatory information is legally responsible for causing the harm, even if it did not realize that the information was harmful.
Congress reacted quickly to the Prodigy decision, passing one year later what is now called Section 230 of the Communications Decency Act. For online platforms, it displaced state and local defamation laws. Section 230 provides two forms of immunity to lawsuits: first, a platform that simply hosts information from another source cannot be sued as a publisher for that entry, and, second, no operator of an interactive computer service can be sued if it makes a “good faith” effort to block information that is obscene, excessively violent, harassing, or “otherwise objectionable.”
It is the meaning of the first of those provisions that the Supreme Court will examine in the Google case.
Facts of this case: On November 13, 2015, a series of attacks by the global terrorist organization known as ISIS (Islamic State of Iraq and Syria) occurred at a theater and at other locations in Paris. Three gunmen fired into a café where an American student, 23-year-old Nohemi Gonzalez, was dining with friends. She was among those killed. A day later, ISIS claimed responsibility and posted a YouTube video of the attacks.
Suing Google and YouTube under the anti-terrorism laws, Gonzalez family members claimed that YouTube was often used by ISIS to spread its propaganda and to help recruit new followers. The lawsuit particularly challenged Google’s use of automated algorithms to channel viewers toward ISIS videos on YouTube. In other words, the lawsuit challenged the company’s recommendations to users of the platform, which are generated by mathematical formulas applied to what Google has learned about each viewer’s preferences in online material. (Twitter and Facebook were also sued by the family, but that part of their claims is at issue in the separate case – Twitter’s appeal, coming before the Court on Wednesday.)
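For readers unfamiliar with the technology, a purely hypothetical sketch may help show what an algorithmic “recommendation” is. The toy program below (which is not Google’s actual algorithm, whose details are far more elaborate and not public) scores candidate videos by how closely their topic tags match the tags of videos a user has already watched, then surfaces the closest matches first. The dispute in the case is whether making that kind of automated suggestion is covered by Section 230’s shield.

```python
# Hypothetical illustration only -- not YouTube's or Google's actual system.
# A toy "recommender" that ranks candidate videos by how much their topic
# tags overlap with the tags of videos in a user's watch history.

from collections import Counter

def recommend(watch_history, candidates, top_n=3):
    """Rank candidate videos by similarity to the user's viewing history."""
    # Count how often each tag appears in the videos the user has watched.
    tag_weights = Counter(tag for video in watch_history for tag in video["tags"])

    def score(video):
        # A candidate's score is the summed weight of its tags in that history.
        return sum(tag_weights[tag] for tag in video["tags"])

    # The highest-scoring candidates are "recommended" first.
    return sorted(candidates, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    history = [
        {"title": "Cooking pasta", "tags": ["cooking", "italian"]},
        {"title": "Knife skills", "tags": ["cooking", "tutorial"]},
    ]
    candidates = [
        {"title": "Baking bread", "tags": ["cooking", "baking"]},
        {"title": "Car repair basics", "tags": ["cars", "tutorial"]},
        {"title": "Regional Italian dishes", "tags": ["cooking", "italian"]},
    ]
    for video in recommend(history, candidates):
        print(video["title"])
```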
A federal trial judge in California, relying in large part on the Section 230 immunity protection, dismissed the Gonzalez family’s lawsuit seeking to enforce anti-terrorism laws. A federal appeals court upheld that result, finding that Google and YouTube were channeling viewers toward ISIS materials based upon what the platform had learned about viewers’ history, and thus they were carrying out the basic function of a search engine. The family appealed to the Supreme Court.
The questions before the Court: Does Section 230 provide legal immunity to social media platforms only for their basic decision to accept or reject an entry from another source? Or does it also shield those platforms when they recommend that users go to additional material from that source?
The Biden Administration’s Justice Department has entered the case and is urging the Justices to adopt a narrower view of Section 230 than the appeals court did, contending that immunity should be largely confined to the editorial choice of allowing or rejecting content offered to the platform. Section 230, the Department said in its brief, bars the anti-terrorism claims only to the extent that they allege YouTube “failed to block or remove ISIS videos from its site”; the statute does not bar claims based on YouTube’s targeted recommendations of ISIS content.
Significance: There was a time, early in the development of social media and the Internet, when major platforms functioned basically as electronic bulletin boards for users who logged on to say something or to search for something. That basic mode was outgrown years ago. With increasing technological ingenuity, major platforms developed ways to gather vast amounts of private data about their users, and created novel ways to channel the platform’s offerings to hold the attention of the existing audience and to constantly expand that audience globally.
Two factors were crucial to sustaining that business model: unending growth of the user audience in order to attract lucrative advertising revenues to finance their creative engineering, and a broad legal immunity to foster an environment of free and experimental communication to satisfy the interests of a global audience.
Because this business model allowed the larger platforms to gain enormous power in the marketplace of ideas, an enduring and increasingly urgent question has hung over these platforms: how can the use of that power be held accountable? Those platforms were inheritors of an American constitutional tradition of robust free speech, but how was the exercise of that freedom to be restrained in the new technology to prevent its abuse? Would competition in the marketplace be sufficient, or is there a need for official regulation?
It is in this larger context that the Google case is now unfolding in the Supreme Court, and it does so against the harrowing background of global terrorism, which has developed its own sophisticated ways of using social media to spread its propaganda and build its menacing ranks. Is a 27-year-old federal law up to the task of mediating the flow of information in this advanced state of the Digital Age?
A trio of recent appeals court decisions has interpreted Section 230 in an expansive way, with the result that the new business model – with algorithms directing users to posts or videos they might find interesting – has been significantly strengthened.
On Wednesday, the Supreme Court will turn to Twitter’s appeal, focusing on the scope of online platforms’ legal duty, under U.S. anti-terrorism laws, to more diligently exclude information provided by terrorist organizations.