
It’s not like dishonest political ads didn’t exist before Facebook. In election years, cable networks sometimes encounter a tricky scenario: A political candidate wants to spend thousands of dollars on an ad attacking an opponent, but some of the facts don’t add up. Did the opponent really vote 50 times to raise taxes, as the ad says? Does it matter if some of those “taxes” were actually fees? When does an exaggeration turn into a falsehood? Rejecting an ad means giving up revenue and exposing the network to accusations of bias. But cable stations (and, yes, newspapers) accept those hits, because the consequences of running an inaccurate ad are much worse: misleading viewers and poisoning democratic discourse.
Facebook, though, conducts no such scrutiny of political ads — a failure that has grown more alarming as social media’s role in politics expands and its impact on global elections becomes more decisive. The company camouflages its laziness as principle, claiming that for the Silicon Valley behemoth to police political ads would amount to censorship. But that’s a lame cop-out. Until the company can find a way to vet the truthfulness of the political ads on its platform, it should stop running them altogether, following the lead of Twitter, which announced Wednesday it would stop accepting political ads.
Twitter gave up potential revenue because the company recognized it had social responsibilities that take precedence over profit. It was a bold move — one that Facebook could afford to copy. As the company’s recently posted earnings confirm, Facebook has plenty of money. What it increasingly lacks, though, is public trust — a trend it might start to reverse if it stopped taking political ads.
The social media company has been under a cloud since the 2016 election, when its platform was used to spread conspiracy theories, fake news, and Russian propaganda. Paid ads were only one facet of the problem — but they were among the most egregious. Not only did Facebook allow advertisers to target ads in alarming ways (letting them buy ads aimed at “Jew haters,” for instance), but it did not reveal the sources of ads or verify their content. More than 11 million US Facebook users were exposed to ads paid for by Russia, a congressional investigation found.
The company has made some modest improvements since then, and now requires funders of political and issue ads to identify themselves and allows users to see how the ads have been targeted. It has also tackled misinformation in user posts, tweaking its algorithms to stop posts containing fake news or vaccine misinformation from showing up in recommendations. But the company has held the line at ad content: It insists that it shouldn’t have to reject factually false ads, and even abandoned an earlier policy that prohibited ads that had been debunked by third-party fact checkers. The divergence between Facebook and traditional media was on vivid display earlier this month, when the Trump campaign sought to run an inaccurate attack ad against former vice president and potential Democratic nominee Joe Biden. CNN rejected the ad; Facebook allowed it.
Of course, one reason news outlets try to root out falsehoods in ads is to protect themselves from liability. Unlike Facebook, which as an online platform has immunity from some types of lawsuits, a cable network or newspaper can be sued if an advertiser makes false and defamatory claims on its pages or airwaves. But many news outlets refuse to knowingly publish factually inaccurate information even when there is no legal risk in doing so. (Traditional over-the-air TV stations play by their own rules: They must almost always accept ads from candidates for federal office but can’t be held liable if those ads are false and defamatory.)
Mark Zuckerberg, Facebook’s founder, has defended the social network’s hands-off policy by conflating paid political ads with user speech. The right of users to post even false information is what sets Facebook apart from repressive Internet regimes like China’s, he said in a recent speech. But it’s quite a leap from rejecting censorship of user speech in China to rejecting fact-checking of paid speech in the United States. Even Zuckerberg’s own employees aren’t buying the company’s logic: Hundreds of Facebook employees signed a letter opposing the policy. “Free speech and paid speech are not the same thing,” it said. “Allowing paid civic misinformation to run on the platform . . . communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.”
Facebook does indeed face real ethical dilemmas when it comes to policing user speech: how to deal with hateful, bullying, or false posts without impinging on the rights of users. But ad content ought to be a far easier call. When he announced Twitter’s ban on political ads, CEO Jack Dorsey put it this way: “It’s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, but if someone pays us to target and force people to see their political ad . . . well . . . they can say whatever they want!’”
Vetting ads is hard work. It reduces profits and can create friction with advertisers. But the power of Silicon Valley to shape the political climate in the United States and the rest of the world is too great for it simply to wash its hands of those tough decisions. Until Facebook is ready to live up to that responsibility, it should stop selling political ads.