In March 2019, before a gunman murdered 51 people at two mosques in Christchurch, New Zealand, he went live on Facebook to broadcast his attack. In October of that year, a man in Germany broadcast his own mass shooting live on Twitch, the Amazon-owned livestreaming site popular with gamers.
On Saturday, a gunman in Buffalo, New York, mounted a camera to his helmet and livestreamed on Twitch as he killed 10 people and injured three more at a grocery store in what authorities said was a racist attack. In a manifesto posted online, Payton S. Gendron, the 18-year-old whom authorities identified as the shooter, wrote that he had been inspired by the Christchurch gunman and others.
Twitch said it reacted swiftly to take down the video of the Buffalo shooting, removing the stream within two minutes of the start of the violence. But two minutes was enough time for the video to be shared elsewhere.
By Sunday, links to recordings of the video had circulated widely on other social platforms. A clip from the original video — which bore a watermark suggesting it had been recorded with free screen-recording software — was posted on a site called Streamable and viewed more than 3 million times before it was removed. And a link to that video was shared hundreds of times across Facebook and Twitter in the hours after the shooting.
Mass shootings — and live broadcasts — raise questions about the role and responsibility of social media sites in allowing violent and hateful content to proliferate. Many of the gunmen in such shootings have written that they developed their racist and antisemitic beliefs by trawling online forums like Reddit and 4chan, and were spurred on by watching other shooters stream their attacks live.
“It’s a sad fact of the world that these kind of attacks are going to keep on happening, and the way that it works now is there’s a social media aspect as well,” said Evelyn Douek, a senior research fellow at Columbia University’s Knight First Amendment Institute who studies content moderation. “It’s totally inevitable and foreseeable these days. It’s just a matter of when.”
Questions about the responsibilities of social media sites are part of a broader debate over how aggressively platforms should moderate their content. That debate has intensified since Elon Musk, the chief executive of Tesla, recently agreed to purchase Twitter and said he wants to make unfettered speech on the site a primary objective.
Social media and content moderation experts said Twitch’s quick response was the best that could reasonably be expected. But the fact that the response did not prevent the video of the attack from being spread widely on other sites also raises the issue of whether the ability to livestream should be so easily accessible.
“I’m impressed that they got it down in two minutes,” said Micah Schaffer, a consultant who has led trust and safety decisions at Snapchat and YouTube. “But if the feeling is that even that’s too much, then you really are at an impasse: Is it worth having this?”
In a statement, Angela Hession, Twitch’s vice president of trust and safety, said the site’s rapid action was a “very strong response time considering the challenges of live content moderation, and shows good progress.” Hession said the site was working with the Global Internet Forum to Counter Terrorism, a nonprofit coalition of social media sites, as well as other social platforms to prevent the spread of the video.
“In the end, we are all part of one internet, and we know now that that content or behavior rarely — if ever — will stay contained on one platform,” she said.
There may be no easy answers. Platforms like Facebook, Twitch and Twitter have made strides in recent years, the experts said, in removing violent content and videos faster. In the wake of the shooting in New Zealand, social platforms and countries around the world joined an initiative called the Christchurch Call to Action and agreed to work closely to combat terrorist and violent extremist content. One tool that social sites have used is a shared database of hashes, or digital footprints of images, that can flag inappropriate content and have it taken down quickly.
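The hash-matching idea behind that shared database can be sketched in a few lines. This is a hypothetical, simplified illustration using an exact cryptographic hash; the function names and the blocklist here are invented for the example, and real industry systems also rely on perceptual hashes that can match re-encoded or slightly altered copies, which a plain cryptographic hash cannot.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's digital footprint."""
    return hashlib.sha256(data).hexdigest()

# A shared blocklist of known violative content, keyed by hash.
# (Illustrative only: one fake entry standing in for a real database.)
shared_hash_db = {fingerprint(b"known violative video bytes")}

def should_block(upload: bytes) -> bool:
    """Flag an upload whose hash matches a known entry in the shared database."""
    return fingerprint(upload) in shared_hash_db

print(should_block(b"known violative video bytes"))  # exact copy matches
print(should_block(b"re-encoded copy of the video")) # altered bytes do not
```

The limitation the sketch exposes is exactly the one at issue in this case: any re-encoding, cropping or screen-recording of a video changes its bytes, so exact hashes alone cannot catch the copies that spread across platforms.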
But in this case, Douek said, Facebook seemed to have fallen short despite the hash system. Facebook posts that linked to the video posted on Streamable generated more than 43,000 interactions, according to CrowdTangle, a web analytics tool, and some posts were up for more than nine hours.
When users tried to flag the content as violating Facebook’s rules, which do not permit content that “glorifies violence,” they were told in some cases that the links did not run afoul of Facebook’s policies, according to screenshots viewed by The New York Times.
Facebook has since started to remove posts with links to the video, and a Facebook spokesperson said the posts do violate the platform’s rules. Asked why some users were notified that posts with links to the video did not violate its standards, the spokesperson did not have an answer.
Twitter had not removed many posts with links to the shooting video, and in several cases, the video had been uploaded directly to the platform. A company spokesperson initially said the site might remove some instances of the video or add a sensitive content warning, then later said Twitter would remove all videos related to the attack after the Times asked for clarification.
A spokesperson at Hopin, the video conferencing service that owns Streamable, said the platform was working to remove the video and delete the accounts of people who had uploaded it.
Removing violent content is “like trying to plug your fingers into leaks in a dam,” Douek said. “It’s going to be fundamentally really difficult to find stuff, especially at the speed that this stuff spreads now.”