The United States on Wednesday broke with 18 governments and top American tech firms by declining to endorse a New Zealand-led response to the live-streamed shootings at two Christchurch mosques, saying free-speech concerns kept the White House from formally joining the largest campaign to date targeting extremism online.
The "Christchurch Call," unveiled at an international gathering in Paris, commits foreign countries and tech giants to be more vigilant about the spread of hate on social media. It reflects heightened global frustrations with the inability of Facebook, Google, and Twitter to restrain hateful posts, photos, and videos that have spawned real-world violence.
Leaders from across the globe pledged to counter online extremism, including through new regulation, and to "encourage media outlets to apply ethical standards when depicting terrorist events online." Companies including Facebook, Google, and Twitter, meanwhile, said they'd work more closely to ensure their sites don't become conduits for terrorism. They also committed to accelerated research and information sharing with governments in the wake of recent terrorist attacks.
The call is named after the New Zealand city where a shooter killed 51 people in a March attack broadcast on social-media sites. Facebook, Google, and Twitter struggled to take down copies of the violent video as fast as it spread on the Web, prompting an international backlash from regulators who felt malicious actors had evaded Silicon Valley’s defenses too easily. Before the attack, the shooter also posted online a hate-filled manifesto that included references to previous mass killings.
New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron organized the call to action, part of Ardern’s international plea this year for greater social-media accountability. Along with New Zealand and France, countries such as Australia, Canada, and the United Kingdom endorsed the document, as did American tech giants including Amazon, Facebook, Google, Microsoft, and Twitter.
"We've taken practical steps to try and stop what we experienced in Christchurch from happening again," Ardern said in a statement.
America's top tech giants celebrated the call — a voluntary effort, not full regulation — as an important step toward tackling one of the Web's biggest challenges. Amazon, Facebook, Google, Microsoft, and Twitter issued a joint statement saying "it is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence."
But the White House opted against endorsing the call to action, and President Trump did not join world leaders and tech executives in attending the gathering in Paris. In a statement, US officials said they stand "with the international community in condemning terrorist and violent extremist content online" and support the goals of the Christchurch Call. But the White House still said it is "not currently in a position to join the endorsement."
A day earlier, as negotiations progressed, White House officials raised concerns that the document might run afoul of the First Amendment.
"We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press," the White House said Wednesday. "Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging."
Around the world, the Christchurch attack sparked renewed scrutiny of social media. Facebook, Google, and Twitter each have hired thousands of reviewers and created new artificial-intelligence tools with the goal of thwarting hate speech, extremism, and terrorism online. Despite those efforts, the tech giants were unable to stop the spread of the Christchurch videos.
Fewer than 200 people watched the live stream during the attack; Facebook said it removed the video 29 minutes after the broadcast began. But within 24 hours, users had attempted to re-upload the video to Facebook more than 1.5 million times. About 300,000 of those copies slipped through and were published before being taken down by the site's content-moderation teams and systems designed to automatically remove blacklisted content.
In response, tech companies on Wednesday committed to enforcing policies that prohibit terrorist and extremist content, improving technology that can spot harmful posts in real time, and issuing regular reports about their progress. Facebook, Google, and Twitter already have such rules and tools in place, though at times they've fielded sharp criticism for failing to use them effectively. They also agreed to share more information with one another, particularly to stop the real-time spread of extremism during emergencies like the Christchurch attack.
These companies also promised to implement "appropriate checks on livestreaming," with the aim of ensuring that videos of violent attacks aren't broadcast widely, in real time, online. To that end, Facebook this week announced that users who violate its rules — such as sharing content from known terrorist groups — could be prohibited from using its live-streaming tools. The company has said such a restriction might have prevented the Christchurch shooter from broadcasting the attack using his account.
US officials also have struggled with the rise of online extremism and its ability to incite real-world violence. Self-proclaimed neo-Nazis used Facebook as an organizing tool ahead of their deadly 2017 rally in Charlottesville, Va., for example, and the shooter who opened fire on a synagogue in Pittsburgh last year had long posted anti-Semitic screeds on fringe websites.