Matt Vasilogambros, Tribune News Service
As deadly wildfires raged in Los Angeles this month, local officials were forced to address a slew of falsehoods spreading quickly online. From artificial intelligence-generated images of the famous Hollywood sign surrounded by fire to baseless rumors that firefighters were using women's handbags full of water to douse the flames, misinformation has been rampant. While officials in Southern California fought fire and falsehoods, Meta, the parent company of Facebook and Instagram, announced it would eliminate its fact-checking program in the name of free expression.
That has some wondering what, if anything, state governments can do to stop the spread of harmful lies and rumors that proliferate on social media. Emergency first responders are now experiencing what election officials have had to contend with in recent years, as falsehoods about election fraud, stemming from President Donald Trump's refusal to acknowledge his 2020 loss, have spread widely. One California law, which passed along party lines last year, requires online platforms to remove posts containing deceptive, AI-generated content related to the state's elections within 72 hours of a user's complaint.
The measure allows California politicians and election officials harmed by the content to sue social media companies and force compliance. However, federal law, Section 230 of the Communications Decency Act, broadly shields social media companies from liability for content posted by their users. "Meta's recent announcement that they were going to follow the X model of relying on a community forum rather than experts goes to show why the bill was needed and why voluntary commitments are not sufficient," Democratic Assemblymember Marc Berman, who introduced the measure, wrote in an email to Stateline. X, the company formerly known as Twitter, sued California in November over the measure, likening the law to state-sponsored censorship.
“Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms,” the company wrote in the suit. The law clearly violates the First Amendment, the suit argues. Further hearings on the lawsuit are likely to come this summer. Berman said he’s confident the law will prevail in the courts since it’s narrowly tailored to protect the integrity of elections. California’s measure was the first of its kind in the nation. Depending on how it plays out in the courts, it could inspire legislation in other states, Berman said.
The spread of misinformation about the Los Angeles fires, amplified by algorithms that boost divisive content, shows how social media companies cannot handle, and are not handling, this "crisis moment," said Jonathan Mehta Stein, executive director of California Common Cause, a pro-democracy advocacy organization. States need to do more, he said. "You're not getting information from fire agencies or from the local authorities unless the social media companies ensure that you do," he said in an interview. "And, unfortunately, the social media companies not only aren't doing it, they're actively working to make it harder for government to do anything about online mis- and disinformation."
The two words are sometimes used interchangeably, but “misinformation” applies to false and misleading information, while “disinformation” refers to falsehoods that are spread deliberately by people who know the information is inaccurate. California Common Cause and its California Initiative for Technology and Democracy project helped craft Berman’s bill and are working to promote similar state legislation around the country.
Misinformation laws in other states have been far more limited. In Colorado, for example, Democratic lawmakers last year passed legislation that requires the attorney general to develop statewide resources and education initiatives aimed at preventing the spread of online misinformation. But it doesn’t target social media companies.
In July, the US Supreme Court put on hold laws in Florida and Texas that would have prevented social media companies from banning or restricting content from politicians. Social media companies argued those laws violated their First Amendment protections. The laws were a response to what Republican state lawmakers saw as anti-conservative bias in social media companies, especially after Trump was banned from Twitter and Facebook in the aftermath of the Jan. 6, 2021, riot at the US Capitol. The justices unanimously agreed that the legal issues need further study in lower courts.