The U.S. Supreme Court on Thursday sided with Twitter and Google in two cases that questioned whether social media networks should be held liable for harmful content posted on their platforms.
In doing so, the justices sidestepped ruling on Section 230, the legal protection that generally shields internet companies from being sued over what users post.
"Internet companies today are breathing a big sigh of relief," Anupam Chander, an expert on tech regulation and professor at Georgetown Law, told Semafor.
Here are three takeaways from the rulings and the implications they could have for the future of the internet.
1. Section 230 survives
In the Google case, the family of U.S. college student Nohemi Gonzalez, who was killed in a 2015 ISIS attack, accused YouTube (which is owned by Google) of promoting terrorist videos through its algorithm.
The lawsuit could have given the court an opening to roll back some Section 230 protections, potentially exposing tech platforms to more liability, with wide-reaching implications for the web. Instead, the justices dismissed the case and left a lower court ruling in place.
The ruling was cheered by internet freedom advocates like the tech industry group NetChoice, which called the decision "a huge win for free speech on the internet."
The ruling was also notable, Chander said, because it focused on the platforms' own recommendation algorithms, rather than just user-generated content.
The plaintiffs' reasoning "would have made search engines liable for surfacing material that later turned out to be harmful, which would make search engines essentially illegal," Chander said.
2. Social platforms likened to cell phones
In the Twitter case, the Supreme Court justices unanimously ruled that the platform would not have to face accusations of aiding terrorism for hosting tweets posted by ISIS.
The case was brought by the family members of Jordanian citizen Nawras Alassaf, who was killed in an ISIS attack in 2017. They claimed Twitter had helped ISIS by allowing some of the group's posts to remain on the site.
Given that real-world harm can often be traced back to the internet in some way, this case was a "three-alarm fire" for platforms, Chander said.
But the ruling suggests the court holds social media platforms to the same liability standard as other technologies, even when they are sometimes used for harmful purposes.
"It might be that bad actors like ISIS are able to use platforms like defendants’ for illegal — and sometimes terrible — ends," Justice Clarence Thomas wrote in the decision. "But the same could be said of cell phones, email, or the internet generally."
3. Leaving it to Congress
With the Supreme Court leaving Section 230 in place, its future could now be in the hands of Congress.
Members of both parties have called for the statute to be overhauled — or abolished altogether.
While Democrats want to reform Section 230 to help prevent the spread of misinformation, many Republicans don't like the law because they believe it allows big tech companies to get away with censorship.
Jeff Kosseff, a cybersecurity law professor who wrote a book on the history of Section 230, said Thursday that the court's ruling "maybe slightly" increases the chances of Congress amending the law.
"The proposals -- and visions for platform moderation -- are wildly different," he tweeted. "I don't see much consensus."