- The Supreme Court of the United States (SCOTUS) has begun hearing two pivotal lawsuits that will, for the first time, ask it to interpret Section 230 of the U.S. Communications Decency Act of 1996, the law that has shielded tech companies from liability for decades.
- The lawsuits pose a long-standing question: should digital companies be held liable for the content that users post on their platforms?
What are the two lawsuits?
- Both lawsuits have been brought by families of those killed in Islamic State (ISIS) terror attacks.
- The first lawsuit, Gonzalez v. Google, has been filed by the family of Nohemi Gonzalez, a 23-year-old American killed while studying in Paris during the 2015 ISIS terror attacks that claimed 129 lives.
- The family is suing YouTube-parent Google for “affirmatively recommending ISIS videos to users” through its recommendations algorithm.
- The Court filings say that the video-sharing platform YouTube “aided and abetted” the Islamic State in carrying out acts actionable under U.S. anti-terrorism law.
- The second case, Twitter, Inc. v. Taamneh, pertains to a lawsuit filed by the family of a Jordanian citizen killed in an ISIS attack on a nightclub in Istanbul, Turkey, in 2017.
- The lawsuit relies on the Antiterrorism Act, which allows U.S. nationals to sue anyone who “aids and abets” international terrorism “by knowingly providing substantial assistance.”
- The family argues that despite knowing that their platforms played an important role in ISIS’s terrorism efforts, Twitter and other tech companies failed to take action to keep ISIS content off those platforms.
- It also says that the platforms assisted the growth of ISIS by recommending extremist content through their algorithms.
What is Section 230?
- If a person posts on Facebook that a certain individual is a fraud, then under Section 230 of the U.S. Communications Decency Act, that individual cannot sue the platform; they can only sue the person who posted it.
- It is essentially a “safe harbour” or “liability shield” for social media platforms or any website on the internet that hosts user-generated content, such as Reddit, Wikipedia, or Yelp.
- The statute's key provision reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
- Section 230 also allows interactive computer service providers to engage in content moderation, removing posts that violate their guidelines or are obscene. According to the statute, these platforms can remove content posted on them as long as it is done in "good faith".
What are tech companies saying?
- In January this year, a group of tech companies, websites, academics, internet users, and rights groups filed amicus curiae briefs in the Supreme Court, urging it not to change Section 230 and outlining the sweeping impact such a move could have on the Internet.
- Twitter argued that Section 230 has allowed platforms to moderate huge volumes of content and present the "most relevant" information to users.
- It added that the company has frequently relied on the statute to protect it from "myriad lawsuits".
- Digital rights and free speech activist Evan Greer also pointed out that holding platforms liable for what their recommendation algorithms present could lead to the suppression of legitimate third-party information of political or social importance, such as content created by minority rights groups.
SOURCE: THE HINDU, THE ECONOMIC TIMES, PIB