Back in the days when people were still working out whether social media companies could be forced to take responsibility for hosting antisemitic and other hateful content, a Facebook page called ‘Jewish Ritual Murder’ became the test case for many Jewish organisations. I was personally involved in years of frustrating arguments with Facebook executives who insisted that they were not doing anything wrong by giving a platform to medieval Jew-hatred of the worst kind; really, they said, the responsibility for dealing with it should fall on users - which in reality meant Jews - who were supposed to argue back against it. More speech is always better, was their mantra; better for their bottom line, was how we always felt. This argument lasted from at least 2014 until 2018, when Facebook finally deleted this most antisemitic of pages.
This is what came to mind this week when Mark Zuckerberg announced that Meta - which owns Facebook, Instagram and Threads - is “going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.” This, after all, is what free expression looked like when the platform was still in its “roots” phase: anything goes, and Facebook took responsibility for removing very little of it. Zuckerberg’s list of changes, notably the radical reduction in Meta’s automated filtering of hate speech (of which more in a moment), suggests the clock may well get turned back to those days.
Much of the media reporting has focused on Zuckerberg’s decision to replace fact-checking with a Community Notes-style system, similar to X. Rather than “independent experts” making objective decisions about whether something is true or not, Meta will now rely on the wisdom of crowds. It is a capitulation to the notion that there is no such thing as objective truth; rather, truth becomes whatever most people think it is. Zuckerberg says Meta don’t want to be the “arbiters of truth”, which is another way of saying they are absolving themselves of any responsibility for their own technology being used to spread dangerous lies. If enough people believe it is a fact that Jews run the banks, homosexuality is evil, and women are inferior, what is to stop these ‘facts’ becoming embedded in Meta’s new Community Notes ecosystem? Meta’s argument is that the fact-checkers proved to be “politically biased”, but at times biases and prejudices take hold across entire societies, and now these prejudices have the chance to establish themselves as Meta’s ‘truth’.
Zuckerberg appears to know this, saying that Meta’s policies were imposing “restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.” This is a weaselly way of saying that as racism and misogyny have become more widespread in society and politics, so Meta wants to hold on to those racist, sexist users rather than losing them to other platforms. Remember, more engagement means a bigger bottom line - and if you can save money on fact-checking by outsourcing it for free to users, then all the better.
Incidentally, this will have a profound knock-on effect across many other platforms and media organisations, because an entire cottage industry of fact-checking organisations has grown in recent years thanks to funding from Meta and other tech companies. If that funding disappears, a lot of independent fact-checking companies are going to struggle to stay afloat. Who knows, perhaps old-fashioned newspapers and their boring old editorial standards will make something of a comeback, answering a need for something reliable and worthy, like the information equivalent of the resurgence of vinyl records.
However, the most consequential change to Meta’s approach is not the end of fact-checking, but the withdrawal of the automated content filters that block or delete harmful content on Meta platforms, except for the most egregious material. This has been largely missed in the media coverage so far but is likely to have a far bigger negative impact on the average user’s daily experience of life on a Meta platform. As Joel Kaplan, Meta’s new Chief Global Affairs Officer, explained:
“Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. So, we’re going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action.”
Note what is missing from that list of “high-severity violations”: hate speech. According to Meta’s own data, around 95% of all hate speech on Facebook is caught by their automated filters, rather than being reported by human beings; on Instagram it is over 98%. Once those filters are removed, like the opening of a dam, all but the most extreme and illegal hate speech will flood through. Kaplan says the problem is that their filters were making too many mistakes, but he also estimated that these mistakes accounted for between 10 and 20% of what had been blocked - meaning that 80 to 90% of that previously-blocked content was indeed hate speech, correctly caught by their automated filters, and will now be allowed into your social media feeds.
Much of this will not be illegal or violent, but it will be unpleasant enough: the kind of bigoted comments that, if overheard in day-to-day life, would be enough to ruin your day. Similar to X, in fact, where the general atmosphere has simply become more toxic and less fun, even if the hostility isn’t directed at you personally. Experience has taught us by now that this is what more free speech online actually entails, more often than not.
Extremists are early adopters of every new communications technology because it allows them to circumvent their exclusion from traditional media, and you can be sure that hate actors and foreign manipulators will be testing these new Meta rules and pushing the boundaries as far as they can go. This is what happened when Elon Musk bought X and allowed a whole range of previously-banned extremists back onto the platform, and this will be no different. At the time, Meta seemed to want to compete with Musk by being different, launching Threads as a more civilised, family-friendly alternative to X. Now they have jumped the other way and fallen in line with the new Muskovite social media sensibility.
This all serves to confirm the belief held by many people - myself included - that social media platforms only improved their performance in tackling online hate when they were forced to do so for financial, legal or reputational reasons. They never truly believed in it as an ethical or moral imperative, and now they don’t have to pretend. It’s shameful, really. By removing filters for hate speech and only getting involved when someone reports a post, the richest companies on the planet - whose technology is now the primary promoter of hate and extremism - are effectively washing their hands of it all, leaving responsibility for cleaning up the mess they have created to their own customers. In reality, this burden will fall onto bodies like CST, Tell MAMA, the ADL and other community-based, charitable and non-profit organisations. Charities having to spend their resources chasing hate speech around the internet while tech billionaires gather up ever bigger profits doesn’t seem especially fair.
But Zuckerberg is going even further, pledging to “work with President Trump to push back on governments around the world” who are “going after American companies and pushing to censor more.” That means Britain’s Online Safety Act, the European Union’s Digital Services Act, and others besides. This threat of political and economic confrontation comes at a time when Elon Musk, who is due to take up a position in the new US administration, is very openly speculating about directly trying to influence politics in several countries: calling for new elections in Britain, trying to remove Nigel Farage as leader of Reform UK, backing the far-right AfD in Germany, and so on. Even if he does not directly fund any of these parties, it is reasonable to expect that he will use X to tip the social media battlefield in favour of whoever he chooses to support. We have grown used to the idea that Russia is a hostile state actor using social media to manipulate and influence our politics; it will take a 1984-level reality inversion to get our collective heads around the notion that the United States might now play a similar role.
It has seemed clear for a while that the social media landscape was fragmenting along several different axes. There is the generational one, of course, with younger people using apps like TikTok and Snapchat that their parents barely understand. There is a growing political division, with X heading rightwards and its liberal users starting to migrate to Bluesky. But perhaps the most significant, and least porous, will be regulatory and geographical fragmentation, with the online world divided between the United States’ absolutist approach to free speech; the authoritarian stance of China, Russia, and other non- or semi-democracies; and Europe and the UK somewhere in the middle. In 2013, Mark Zuckerberg wrote that Facebook’s mission was “to make the world more open and connected.” That dream seems further away than ever.
It's true that the "more speech" argument doesn't hold up when the haters so far outnumber the correctors, as they do on the issue of antisemitism - but the "independent experts" haven't done much of a job of moderating or removing anti-Jewish and anti-Israel hate speech and blood libels either.
Some readers may recall that several years ago Mark Zuckerberg defended Facebook's then "policy" of NOT removing Holocaust denial from its platform, on the grounds that the accounts sharing such material might sincerely believe that what they were sharing was true.
As regards the removal of blatantly antisemitic materials, I've been told time and time again that whatever I've reported doesn't "violate community standards". An Indigenous Canadian friend assures me that she's had the same experience reporting hateful content denigrating her community, some of it even overtly advocating violence.