Suppose you went to a party with a group of friends, one of whom is a rather outspoken person we’ll call Ms A. After an hour or so, someone you’ve never met before comes up to you and says, “I couldn’t help noticing that you came here with Ms A. Aren’t you concerned that her views are a little extreme? I can give you the number of somebody who can help her.”
How would you react?
I don’t know about you, but my first thought would be, “Who the — are you to be judging my friend?”
The whole thing smacks of authoritarian control and monitoring on the part of the snoop who expressed concern about Ms A. Yet in early July, Facebook announced that it was going to do a trial of a system that essentially does that very thing. And I know someone it’s already happened to.
Here is the way Facebook explains what it’s doing, as reported by Reuters on July 1: “This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk.”
That sounds reasonable—after all, groups such as Al Qaeda used platforms like Facebook to recruit US citizens to their cause, and if there’s something Facebook can do to keep that from happening again, it sounds like it’s worth doing.
I’m involved in a group that meets monthly to discuss an article from First Things, a journal of religion and public life. Most of us are over 50, and a more harmless group of non-radicals is hard to imagine. For a time, a woman attended who later joined the Roman Catholic Church. From my limited interactions with her, I would say she was conservative, but not radically so, and unusually articulate about various social problems, including abortion. She no longer attends our discussion group, but some of us still follow her on Facebook.
Her Facebook followers were surprised the other day when Facebook asked them if they thought the Catholic woman was becoming an extremist.
I don’t use Facebook, so I can’t say what material she might have posted that inspired Facebook to ask this question. But based on what I know about the woman, at the very least Facebook is wasting its time. And more seriously, this anonymous action on the part of a powerful corporation exerts a chilling effect on the tattered, bedraggled thing we once called free speech.
The fly in this otherwise admirable-sounding ointment of extremism prevention is the question of just what counts as “extremist.” One person’s extremist is another’s enthusiast.
Also on July 1, Fox News reported the comments of several people who had received such warnings, which typically read “Are you concerned that someone you know is becoming an extremist?” followed by an option to “Get Help” which leads to an organization called “Life After Hate.” One user who received this type of notice neatly summed up the dilemma that Facebook faces: “‘Confidential help is available?’ Who do they think they are? Either they’re a publisher and a political platform legally liable for every bit of content they host, or they need to STAY OUT OF THE WAY.”
The reason Facebook isn’t liable for every bit of content they host, as a conventional newspaper publisher would be, is Section 230 of the Communications Decency Act, which exempts platform hosts from being liable for what third parties place on their platforms.
Perhaps in the early days of the Internet, this protection was needed to encourage investment in the young, struggling things that were Google and Facebook. But now that social media constitute a major, if not the primary, source of political and cultural news in the US, the pretence that they are insignificant people-connectors who barely make enough money from ads to stay in business and need special protection from the government looks more ridiculous every day.
Not only is Facebook deciding who is an extremist, it’s getting help deciding what truth is from the White House. Biden Administration press secretary Jen Psaki said on July 15 that they are “identifying ‘problematic’ posts for Facebook to censor because they contain ‘misinformation’ about COVID-19.”
Again, this sounds reasonable at first glance. Some things that people are saying on Facebook about COVID-19 and vaccinations for it are ludicrous and harmful. But what happened to the old saying “I disapprove of what you say, but I will defend to the death your right to say it”? According to Wikipedia, this quotation comes from a biography of Voltaire by one Evelyn Beatrice Hall, writing as S. G. Tallentyre.
Hall was trying to illustrate one of Voltaire’s principles: a radical (there’s that word again) belief in free speech, one of the pillars of what should now be called classical liberalism, along with democratic governance and freedom of religion.
The American Civil Liberties Union adhered to radical free-speech principles until a few decades ago, even defending such scurrilous extremists as a neo-Nazi group that wanted to stage a march in a Chicago suburb where many Holocaust survivors lived. This was in 1978, and although the ACLU itself has a page on its website describing this episode, I think it’s fair to say that the current ACLU is finding other things to do with its time.
Facebook wants to have things both ways. They want to receive plaudits as the platform for the little guy where a thousand free-speech flowers bloom, and they also want to avoid opprobrium (and lawsuits, and fines) for hosting material that is illegal, libellous, or harmful in someone’s eyes.
But as the gentleman quoted above implied, you can’t have freedom without responsibility. Editing or censoring one thing on Facebook means the whole thing is now an edited entity. You can’t be just a little bit pregnant, and you can’t pretend a platform is free if parts of it aren’t—especially when the parts that aren’t change from day to day, or from White House instruction to embarrassing news report.