
Facebook utilizes fact-checkers to prevent spread of misinformation

  • Writer: Ashley Stalnecker
  • Apr 25, 2020
  • 2 min read

While scrolling through Facebook, users are starting to notice that some posts are blurred out and tagged by Facebook as false or partly false.


After much criticism of the role social media plays in the spread of misinformation, Facebook started paying third-party fact-checkers like Snopes to evaluate the truthfulness of certain posts.

Screenshot of a post that has been fact-checked.


Facebook's role in the 2016 election

Although Facebook began fact-checking posts in 2015 by allowing users to report posts that seemed false, it was criticized for its inability to curb the spread of misinformation during the 2016 election.


During the 2016 election, Russia ran a disinformation campaign built on Facebook pages designed to reach people organically. The research director of the Tow Center for Digital Journalism at Columbia University found that posts from just six publicly known Russia-linked Facebook pages had been shared 340 million times. In total, 470 pages were linked to Russian operatives, according to The Atlantic.


Russian operatives also purchased advertising to spread misinformation during the 2016 election, and the campaign went largely unnoticed by Facebook.


There is no way to know whether this actually influenced the outcome of the election, but the world finally woke up to the idea that the spread of misinformation on social media can be extremely dangerous to the public good. After the 2016 election, Facebook committed to curbing the spread of misinformation.


Should Facebook be responsible for fact-checking its content?

Yet it is still unclear whether Facebook should be the one deciding what is true or false on its site.


Facebook invests a relatively small amount in paying third-party fact-checkers to evaluate posts that users have reported as possibly false. Even with a far larger investment, it would be nearly impossible to keep pace with content on a site that hosts over 200 million posts a day.


A 2020 study also found that adding labels to false posts can cause the implied truth effect: when some posts are marked as false, users may assume that all unmarked posts are true. However, the only posts that get evaluated are those that receive a significant amount of feedback from users.



Screenshot of a page after a user chooses to report a post.


There is also the possibility of bias creeping into the fact-checking process. Recently, the Daily Caller, a conservative website, allegedly mislabeled a New York Post opinion piece about the coronavirus as false.


Clearly, fact-checking on social media is difficult to manage. Facebook might have better luck investing in media literacy workshops that help users learn to identify false information on their own.

