Facebook’s Big Disinformation Bust Is Cold Comfort

The company found, and removed, possible election interference on its platforms. But the government, and the world, is too reliant on the company to protect democracy.


Facebook announced today that it has removed pages, events, and accounts involved in “coordinated inauthentic behavior” on its social-media platforms, including Facebook and Instagram. The posts and accounts in question appeared to have been created to sow discord in advance of a second “Unite the Right” rally in Washington, D.C., meant to memorialize last year’s deadly white-supremacist protest in Charlottesville. The material Facebook found and removed included false counter-protest events, job ads for protest coordinators, and content referencing diversity and the #AbolishICE movement, among other topics.

In Facebook’s analysis, the behavior is inauthentic and the actors bad not because the content is objectionable on its face, but because it does not represent earnest speech. Instead, the posts appear to have been created to give those who might encounter their messages and oppose them a sense of injustice or distress, in order to precipitate discord. This type of propaganda has been common on Facebook and Instagram in the past, including during the run-up to the 2016 U.S. election.

The company didn’t explicitly connect these posts to efforts to interfere with the U.S. midterm elections this year, nor could it confirm the identities of the parties responsible. But it did draw parallels between the new material and those earlier disinformation campaigns. It also found possible connections between the banned accounts and the Russia-based Internet Research Agency (IRA), which was responsible for creating material that reached millions of Americans thanks to likes, shares, page follows, and ads.

Facebook is still reeling from the blowback of its data-extraction and election-interference scandals dating back to well before 2016. On top of that, the company’s stock has shed over 20 percent of its value since late last week, after it revealed that its astronomical growth would slow. Facebook’s announcement is clearly meant, in part, to give its users and investors confidence that the company has learned from its mistakes and is taking more proactive action to protect citizens against misinformation on its platform.

The obvious question: Will it be enough?


During the run-up to the 2016 election and after, the IRA and other propagandists created Facebook pages that published and advertised posts meant to heighten political tensions, especially around race and identity. Those posts, which numbered in the tens of thousands and reached millions of Americans, often sowed discord by inspiring prejudice and jingoism. A post on Instagram, for example, depicts a woman in a hijab, with a caption listing everything she supposedly hates—Europeans, Christians, ham, wine, dogs, and more. The punchline reads, “Complains about Islamophobia.” This image, which was originally created by an IRA-operated disinformation account called Merican Fury, was picked up by legitimate conservative accounts with large followings. That was the intended goal: Create memes that would inflame a target group and entrench its beliefs in the positions those materials advanced. In other cases, these trolls affirmed messages that appealed to Bernie Sanders or Jill Stein voters, with the ultimate goal of splitting the Democratic vote so as to undermine Hillary Clinton.

But the accounts and posts Facebook removed today are a little different. Instead of playing to the existing conceits of conservative voters, Facebook has implied that they were created as counter-provocations representing fictional leftist positions. According to sample posts provided by Facebook, one of the now-banned accounts, called Resisters, posted feminist affirmations (“women do not have to be thin, cook for you, wear makeup,” and so on). That account also posted events advertising counter-Trump rallies. An account called Black Resistance posted imagery and messages affirmative toward African Americans, and another called Aztlan Warriors did so for native Mexican cultures, some with text encouraging “Resistance” or urging readers to “Be proud of who you are.” A more curious banned account for which Facebook shared posts, Mindful Being, posted New Age images and messages, including one that read, “We must unlearn what we have learned because a conditioned mind cannot comprehend the infinite.”

In all these cases, the posts are presumably intended to simulate a thriving opposition movement, one populated by constituencies and brandishing messages odious to certain conservative groups and voters. Those incitements can spread to others, stoking the anger they arouse. In other cases, such as the false “No Unite the Right 2 - DC” counter-protest, real people signed up to help organize or attend, which could put them in real physical danger were they to show up.

This is what Facebook means by “coordinated inauthentic behavior”—a phrase that appears several times in today’s announcement. It’s carefully crafted rhetoric. The activity is inauthentic because it is Potemkin activism, in effect—a facade for a deeper advocacy that doesn’t exist, at least not from the agents that appear to be sharing it. And it’s coordinated because small, seemingly disconnected grassroots or community organizations are actually puppets being manipulated by another actor. As the journalist Aaron Sankin observed, much of the bad-actor activity Facebook shared today appears to be related to “making the accounts themselves appear authentic,” presumably so that they could be used to spread further disinformation in the future.


Facebook explained that the agents responsible for these accounts are getting smarter, adapting to many of the checks the company has been using to identify obvious trolls. They are concealing their location, for example, using VPNs so as not to appear to post from foreign nations. They are also funneling payments for ads through third parties based in North America. Even so, some of the accounts exposed possible connections to others that Facebook knew to be IRA-operated, which is one way that the company identified them as bad actors. The Resisters page, for example, had an IRA account as a page admin for a short time.

Facebook’s explanations are lucid and detailed, a new feature of the company’s public comments about influence on its platform. In a wide-ranging interview with Re/Code’s Kara Swisher published two weeks ago, for example, Mark Zuckerberg spoke in detail about the Russian disinformation campaigns, demonstrating a deeper understanding of geopolitics than he had evinced during his testimony before Congress this spring. That didn’t stop Zuckerberg from committing some blunders, like taking a perhaps-too-charitable view of Holocaust deniers, but it did convey a deeper engagement with the global issues that have caught the company flat-footed in the past.

Coining the phrase “coordinated inauthentic behavior” is another rhetorical move that makes Facebook look with-it and on top of things. It didn’t invent the term “fake news,” although the company has been living with the consequences of that term’s rise in popularity since 2016. There’s no reason to believe Facebook is trying to hide something in “coordinated inauthentic behavior”—it looks like an earnest attempt to give a strange phenomenon a clearer description. But it also connects Facebook’s security measures to broader counterintelligence efforts; “coordinated inauthentic behavior” sounds like a Beltway intelligence-community term, infused with all the gravitas and bureaucracy that community commands. Facebook’s statement also makes mention of U.S. law-enforcement agencies, and Zuckerberg covered similar ground in his interview with Swisher. Not only does that allow Facebook to pass some responsibility to the U.S. government (“we provide our best attribution publicly and report the specific information to the appropriate government authorities”), but it also shows that the company does not believe that it alone has the power to stop election interference via social media, and rightly so.

Even so, the propaganda and election-interference problems that democracies all around the world now face would be impossible without the global information infrastructure the internet provides, and a set of popular, centralized services on those networks that can reach a large group of people—Facebook and Instagram being among the most important. The fact that Facebook is finally taking some charge of its role in undermining democracy comes as cold comfort to those citizens and governments whose futures have been steered down paths they might not otherwise have taken had greater care guided the creation and growth of those platforms. Facebook isn’t the only blameworthy party there, either—the Obama administration chose to perform on social media rather than to regulate it.

Finally, Facebook’s assurances also offer some reason for greater worry. Today’s announcement doesn’t detail everything that led the company to identify and ban these particular accounts, pages, events, and posts, but it does suggest that a few missteps by those in charge of these campaigns contributed to their unmasking—especially the error of connecting the new accounts to previously known IRA-operated ones. Those actors might already have learned from that lesson, making today’s news the tip of an iceberg nobody has yet seen or anticipated. Intelligence and security have become an arms race, a new kind of cold war fought with memes instead of warheads. And yet, the U.S. government hasn’t established a sufficient approach to the threat. In an unrelated briefing today, a senior congressional aide told reporters that “keeping the pressure on platform companies [like Facebook] is probably the most we can do,” when it comes to election interference.

Facebook is now one of the most important agents in that struggle, all around the globe. Like it or not, if you’re a human who lives and votes in a nation on Earth, you’ll have to trust the company to play its part.

Ian Bogost is a contributing writer at The Atlantic.