The sight of social media executives sitting uncomfortably before congressional committees has become a familiar one over the past few years.
In April 2018, July 2019 and, most recently, March 2021, Congress called Facebook CEO Mark Zuckerberg and others to testify on topics ranging from data privacy to Big Tech monopolies to online misinformation. The March hearing happened in part because of the Jan. 6 assault on the United States Capitol, before which groups looking to overturn the results of the 2020 presidential election communicated and even coordinated with one another online.
"We've always had lies. We've always had rumors. We've always had misinformation," said Harrison Mantas, a fact-checking reporter at the Poynter Institute. "We just have a faster way of getting it out now. It's a serious issue in the sense that democracies live and die on facts."
According to the Pew Research Center, 50% of Americans said that made-up news or information was a bigger problem than violent crime, climate change, racism, illegal immigration, terrorism and sexism. Additionally, to combat the problem in their daily lives, 52% of Americans say they have changed the way they use social media and 43% say they have reduced their overall news intake.
In 2020, a peer-reviewed study in the Harvard Kennedy School Misinformation Review found that, on average, 71.08% of respondents agreed that social media companies should provide fact-checks of statements made by politicians. Support differed by party, though: about 86% of Democrats favored fact-checking politicians, compared with 56% of Republicans.
Social media companies have increasingly taken action against misinformation, from flagging content to banning high-profile users such as former President Donald Trump. But just how effective are their efforts?
News now
According to the Pew Research Center, about half of Americans get at least some of their news from social media.
Facebook is the biggest source of news for Americans among social media sites, with 36% of Americans getting at least some news there. Twitter, YouTube and then Instagram, in decreasing order, are other popular sources of news.
Although Twitter does not have the largest American news audience among social media platforms, it has the highest share of active users who get news on its site: 59% of its users regularly get news there.
Despite the large number of Americans getting news on social media, 59% of social media news consumers in 2020 said they expect news on social media to be "largely inaccurate."
Social media companies have begun changing their content recommendation algorithms and practices to address some of these issues.
Label-makers in action
Since 2016, Facebook’s third-party fact-checkers have been committed to labeling misinformation that pops up on the social media platform. It’s important behind-the-scenes work meant to root out lies about significant institutions and processes, be it the 2020 election or the COVID-19 vaccines.
Good fact-checkers, Mantas said, reach out to the people whose posts they flag. Users have the opportunity to rebut, and it can get contentious.
Reuters, PolitiFact and other major media organizations are signatories of the International Fact-Checking Network’s code of principles. The five principles outline commitments to nonpartisanship and to transparency about sourcing, funding, methodology and corrections policies.
The IFCN, hosted by Poynter, created the code, and its network includes organizations in dozens of countries that have all signed on. It investigates organizations that want to join and acts as a filter to find committed fact-checkers, Mantas said. Facebook chooses which ones to work with from there.
“We set the standard,” Mantas said. “We make sure people live up to it.”
However, third-party fact-checkers are not enough to stop the spread of all misinformation on Facebook. Human content moderators are spread all over the world, coming from different cultures with different belief systems.
"It really doesn't come as a huge surprise then that you would find that a content moderator in the Philippines might arrive at a really different content-related decision under the same set of rules than a content moderator based in San Francisco," said Jonathan Peters, an associate professor in the University of Georgia's College of Journalism and Mass Communication and the UGA School of Law.
Mantas said one area where Facebook needs to improve is boosting its content moderation in more countries. Language barriers aside, moderators in other countries might not even know whether a post contains misinformation, he said.
According to Facebook, its 60 partnered fact-checking organizations as of last May helped flag 180 million pieces of content viewed on the site from March 1 to Nov. 4, 2020. They also helped flag 50 million pieces of pandemic-related content last April.
Mantas said artificial intelligence and Facebook users kick off the fact-checking process by identifying potential misinformation. Fact-checkers then log into an interface that compiles the posts in need of verification and check the claims' validity.
But fact-checkers can’t manage misinformation at this scale on their own. Facebook employs AI to ramp up the effort, according to multiple 2020 blog posts published by the company.
Its SimSearchNet++ image recognition system is “resilient to a wider array of image manipulation,” according to a November 2020 blog post, so it can detect copies of, and slight changes to, misinformation shared on Facebook. It’s meant to scale fact-checking while reducing the number of legitimate posts that receive labels.
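For readers curious what that kind of image matching looks like in practice, here is a minimal sketch in Python built around a simple difference hash. It is not SimSearchNet++, which Facebook has not released; the file name, index and distance threshold are assumptions for illustration only.

```python
# Illustrative near-duplicate image matching with a difference hash (dHash).
# This is NOT SimSearchNet++; it only sketches the general idea: reduce each
# image to a compact fingerprint so that copies and lightly edited versions
# (recompression, small crops, added text) land close to one another.
from PIL import Image  # requires the Pillow package


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Return a 64-bit difference hash of the image at image_path."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical index of hashes for images already rated false by fact-checkers.
known_false_hashes = {dhash("debunked_meme.jpg")}  # placeholder file name


def looks_like_known_misinfo(image_path: str, max_distance: int = 10) -> bool:
    """Flag an image whose hash sits close to any already-debunked image."""
    candidate = dhash(image_path)
    return any(hamming_distance(candidate, h) <= max_distance
               for h in known_false_hashes)
```

Real systems rely on learned image embeddings rather than a fixed hash, but the overall shape is similar: fingerprint the image, then compare it against an index of content fact-checkers have already rated.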
Although the number of flagged posts related to the pandemic and the election is staggering, it's a fraction of all the information shared globally on Facebook every day. Dangerous misinformation can still fall through the cracks.
“We’re not going to be able to fact-check our way out of this,” Mantas said. “I think the biggest thing that needs to happen is that companies could be more transparent in the way they share their data so researchers can look at the data and say ‘Oh, these are the solutions that would work.’”
Right now, data on how posts spread and how Facebook’s News Feed algorithm prioritizes certain posts is lacking, Mantas said.
And, according to Facebook, false news does not violate its community standards. The company says it wants to avoid “stifling productive public discourse.” It also wants to avoid labeling satire and opinion as misinformation. So, false news stays on the site. Users can still share it too, although Facebook says it reduces the distribution of flagged posts by making them appear lower in people’s news feeds.
"Content rated either 'False' or 'Altered' makes up the worst of the worst kind of misinformation," wrote Facebook's Keren Goldschlager in a post on the site's Journalism Project page on Aug. 11, 2020. "As such, these ratings will result in our most aggressive actions: we will dramatically reduce the distribution of these posts, and apply our strongest warning labels."
Algorithms and AI: better or worse than humans?
Twitter has a policy “against misleading information about civic integrity, COVID-19, and synthetic and manipulated media.”
Twitter labels tweets that violate these policies, and its algorithm does not recommend labeled tweets to users. Users cannot retweet or reply to labeled tweets, but the tweets are not fully removed from the platform.
In 2018, MIT researchers published a study showing that false information travels faster than true information on Twitter. Given that finding, slowing down users' trigger fingers when it comes to retweeting may need to be as much of a priority as flagging content.
Twitter is addressing this with a new prompt that encourages users to actually read an article before retweeting it.
While the prompt at least slows users down and makes them rethink the retweet, it does not actually prevent them from retweeting without reading. It also appears only for linked articles, not for screenshots or other types of content that may carry the same message.
AI’s effectiveness at finding and removing illegal content and misinformation is up for debate. According to a report published in Security and Human Rights, AI excels in “screening for and identifying fake bot accounts — techniques known as bot-spotting and bot-labelling.”
Digital platforms such as Google, Facebook and Twitter use these techniques to identify and remove trolls and fake accounts. But AI may also inadvertently flag accurate information or miss false content, producing false positives and false negatives, according to the report.
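To see why bot-spotting can misfire, consider a toy, rule-based bot scorer like the one below. The features, weights and threshold are made up for illustration and are far cruder than the machine-learned systems the report describes.

```python
# A toy bot-spotting heuristic illustrating false positives and negatives.
# The features, weights and threshold are invented for this example; real
# platforms use far richer signals and machine-learned models.
from dataclasses import dataclass


@dataclass
class Account:
    name: str
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int


def bot_score(a: Account) -> float:
    """Higher score = more bot-like under this crude heuristic."""
    score = 0.0
    if a.posts_per_day > 100:                   # unusually high posting volume
        score += 0.5
    if a.account_age_days < 30:                 # very new account
        score += 0.3
    if a.following > 10 * max(a.followers, 1):  # follows far more than it is followed
        score += 0.3
    return score


THRESHOLD = 0.6

accounts = [
    Account("election_livetweeter", 150, 20, 800, 300),  # a real person on a new account
    Account("patient_spam_bot", 40, 400, 5, 4000),        # an aged, slower-posting bot
]
for account in accounts:
    flagged = bot_score(account) >= THRESHOLD
    print(account.name, "flagged" if flagged else "not flagged")
# The prolific human is flagged (a false positive), while the aged bot that
# paces its posts slips through (a false negative).
```

Richer signals and learned models shrink both kinds of error, but they do not eliminate them entirely.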
“AI is 100% a function of the people who develop the algorithms,” said Jason Anastasopoulos, assistant professor of political science at the University of Georgia.
He said bias within programmers can find its way into the program itself and influence what content is labeled as misinformation on social media platforms.
“Think of AI as simply an extension of the preferences of the people who program it.”
To moderate or not to moderate?
Given the proliferation of content flags and high-profile social media bans, questions have been raised about why and how social media companies choose to limit misinformation and whether their actions are legal.
"As non-governmental entities, they themselves enjoy First Amendment rights," Peters said.
Essentially, this means social media companies have the right to develop their own content and community guidelines and generally enforce those rules as they please. That includes the ability to de-platform users, to suspend accounts and to create systems by which they can block users or flag their content.
While it is completely within their power to remove any content that does not meet their community guidelines, they are under no obligation to do so.
Section 230 of the Communications Decency Act, part of a larger telecommunications law passed in 1996, shields these companies from liability for content their users post. A platform avoids liability if it does not remove content that is objectionable or flagged as misinformation, and it also avoids liability if it chooses to remove such content. The drafters of Section 230 "wanted the web to be a place that would be a forum for diverse viewpoints," Peters said.
"section 230's encouragement for platforms to police their own content is by design. that is a feature that's not a bug, it was meant to put the power in the hands of the platforms," said Peters
The bottom line: The government doesn't have a way to enforce anti-misinformation practices other than to publicly shame Big Tech CEOs during congressional hearings. And while Facebook and Twitter have the legal right to remove anyone spreading misinformation on their sites, it's still their choice to do so or not.
"They're trying to thread this needle of being a place where people can connect ... while not being seen as too heavy-handed in terms of the way that they moderate content," Mantas said.
"Do I think that they actually care?" he said. "To the most extent, I'd say yes. But people would disagree with me on that."