According to the Pew Research Center, about half of Americans get at least some of their news from social media.
Facebook is the biggest social media news source for Americans, with 36% of Americans getting at least some news on the site. Twitter, YouTube and Instagram follow, in decreasing order, as other popular sources of news.
Although Twitter does not have the largest American news audience among social media platforms, it has the highest share of active users who get news on its site: 59% of Twitter users regularly get news there.
Label-makers in action
Since 2016, Facebook’s third-party fact-checkers have been committed to labeling misinformation that pops up on the social media platform. It’s important behind-the-scenes work meant to root out lies about significant institutions and processes, be it the 2020 election or the COVID-19 vaccines.
Good fact-checkers, Mantas said, reach out to the people whose posts they flag. Users have the opportunity to rebut, and it can get contentious.
According to Facebook, its 60 partner fact-checking organizations had, as of last May, helped flag 180 million pieces of content viewed on the site between March 1 and Nov. 4, 2020. They had also helped flag 50 million pieces of pandemic-related content as of last April.
Mantas said artificial intelligence and Facebook users begin the fact-checking process by identifying potential misinformation. Fact-checkers then log into an interface that compiles the posts in need of verification and check their validity.
“We’re not going to be able to fact-check our way out of this,” Mantas said. “I think the biggest thing that needs to happen is that companies could be more transparent in the way they share their data so researchers can look at the data and say ‘Oh, these are the solutions that would work.’”
Right now, data on how posts spread and how Facebook’s News Feed algorithm prioritizes certain posts is lacking, Mantas said.
And, according to Facebook, false news does not violate its community standards. The company says it wants to avoid “stifling productive public discourse” and to avoid labeling satire and opinion as misinformation. So, false news stays on the site, and users can still share it, although Facebook says it reduces the distribution of flagged posts by making them appear lower in people’s news feeds.
"Content rated either 'False' or 'Altered' makes up the worst of the worst kind of misinformation," wrote Facebook's Keren Goldschlager in a post on the site's Journalism Project page on Aug. 11, 2020. "As such, these ratings will result in our most aggressive actions: we will dramatically reduce the distribution of these posts, and apply our strongest warning labels."
Algorithms and AI: better or worse than humans?
Twitter has a policy “against misleading information about civic integrity, COVID-19, and synthetic and manipulated media.”
Twitter labels tweets that violate these policies, and its algorithm does not recommend labeled tweets to users. Users cannot retweet or reply to these tweets, but the tweets are not fully removed from the platform.
A new Twitter prompt addresses this issue by encouraging users to actually read an article before retweeting it.
While the prompt at least slows users down and makes them rethink their decision to retweet, it does not actually prevent them from retweeting without reading. It also appears only for linked articles, not for screenshots or other types of content that may carry the same message.