Facebook has been under pressure to address the proliferation of fake news for over a year.
According to a widely publicized analysis, the top 20 fake news stories about the 2016 US Presidential Election generated more engagement on Facebook than the top 20 real news stories.
Another study found that roughly one in four Americans had seen at least one fake news article.
Fake news readership broke down along political lines. Pro-Trump readers were three times more likely to visit fake news sites promoting Trump than pro-Clinton readers were to visit fake news sites promoting Clinton. More significantly, mildly left-leaning users were more likely to read pro-Trump fake news than pro-Clinton fake news.
Could these have been the swing votes that determined the course of the election?
In any case, Facebook has taken the threat seriously. A few months after the election, Facebook began tagging articles with “trust indicators,” alerting users to stories that appeared to contain false content.
A year into this policy, however, it seems to have failed.
The phenomenon of fake news
Before looking at Facebook’s specific policy, let’s start by defining our terms.
By fake news, we mean media articles that are patently false. We are not talking about partisan news. We are talking about headlines such as “Malia Obama Expelled from Harvard.” That didn’t happen.
Fake news has sensationalist headlines designed to grab attention: for example, a headline accusing Clinton of a “Sudden Move of $1.8 Billion to Qatar Central Bank.”
It doesn’t matter too much whether the story is plausible, as long as the topic is right for the targeted readers.
Let’s use NationalReport.net as an example. This website publishes stories that are admittedly false. (A disclaimer on the site states that “any resemblance to the truth is purely coincidental.”)
Jestin Coler, its founder, explained the philosophy behind fake news. “When it comes to the fake stuff, you really want it to be red meat… It doesn’t have to be offensive. It doesn’t have to be outrageous. It doesn’t have to be anything other than just giving them what they already wanted to hear.”
Consumption of fake news
That is a key point: the mildly left-leaning readers mentioned above represented only a minority of fake news consumers. The majority were highly partisan and highly politically active. The most conservative 10 percent of readers accounted for over 60 percent of the visits to fake news sites. They were readers who had already made up their minds.
Moreover, it turns out that even these voracious consumers of fake news read far more real news. Fake news made up just 6 percent of the media diet of Trump supporters, and just 1 percent of the consumption of Clinton supporters.
And no one seems to know whether these readers actually believed the fake news. Perhaps sharing stories was more of a way to fling insults than to persuade.
This makes the claim that fake news changed the course of the election far less convincing.
Teaching readers to recognize fake news
Of course, we’d like to use fake news as a teaching point: readers need to learn how to separate truth from fiction. Perhaps a general political awareness could help readers recognize implausible headlines. And we could teach them to fact-check stories by consulting other media outlets.
Facebook, it turns out, seems to be thinking along these lines. In December 2017, a year after the rollout of its “trust indicators,” Facebook changed its policy on fake news.
Instead of flagging news that appears suspicious, Facebook now displays fact-check articles next to the suspicious news. The fact checks give information about other articles written by the same sources, and about whether articles are advertisements or journalistic reports.
Facebook executives say that the policy change puts more discretion in the hands of users: “We believe in giving people a voice and that we cannot become arbiters of truth ourselves.”
Now, even the suggestion of Facebook as an arbiter of truth is guaranteed to grind gears. Still, isn’t this a move in the positive direction of teaching readers to search out the facts for themselves?
Without a will, there is no way
The saying goes that “where there’s a will, there’s a way.” But in this case, the will seems to be lacking.
We’ve already said that people who share fake news are 1) highly partisan, 2) not concerned with plausibility, and 3) interested in reading “what they already wanted to hear.”
Facebook’s initial strategy of deploying “trust indicators” didn’t dissuade these people from visiting fake news sites because – ironically – they didn’t trust Facebook. For them, Facebook was less an arbiter of truth than a purveyor of deception.
This has been a leitmotif of the analysis of the US Presidential Election: Trump’s victory was a surprise because pundits failed to realize how many Americans felt alienated by liberal policies and a left-leaning mainstream media.
The phenomenon of fake news serves as another warning about the significance of this alienated demographic.
Cause and effect
Indeed, we could argue that much analysis of fake news makes an incorrect connection between cause and effect. The popular impression is that the existence of a large, conservative, and apparently gullible demographic is the cause enabling the proliferation of fake news. Fake news, in this narrative, emerged to target the gullible population.
But perhaps the existence of a population susceptible to fake news is actually an effect. Ironically, it is an effect of real news: real news so partisan and so disconnected from a large demographic that it renders that demographic suspicious of mainstream media.
As Coler, one of the leading American producers of fake news during the 2016 election, observed:
While some suggest fake news is responsible for the decline in trust in traditional media sources, I would argue the opposite. Fake news is the result of declining trust. As consumers of content become more disheartened by trusted sources, they seek information from sources that are less credible. In that regard, President Trump may be a blessing in that his continued criticism of the media has led to deep conversations about the future of journalism and the important role played by the fourth estate.
Perhaps this demographic has turned to fake news because it was dissatisfied with the liberal media that did not understand it. It was unwilling to listen to Facebook’s “trust indicators,” and it will probably be unwilling to read fact-checks as well.
Facebook’s flagging of a fake news article is rather like saying “no” to a rebellious child: it only stokes the fire. The real issue is the underlying rebellion.
Similarly, in Facebook’s case, the problem may lie not with the fake news, but with the news that is real and slanted.
Jeffrey Pawlick is a PhD Candidate in Electrical Engineering at New York University Tandon School of Engineering.