
Elon Musk’s Twitter takeover and hateful conduct surge: How the company is tackling this

With Elon Musk’s takeover of Twitter, there are fears about how content and hate speech will be moderated on the platform. Reports have shown a surge in hateful conduct on Twitter. Yoel Roth, Twitter’s Head of Safety & Integrity, has posted a detailed thread on how the company is dealing with this surge in hateful content, and Musk has endorsed Roth’s thread and added to it.
Twitter and the surge in hateful content
Roth wrote that since last Saturday, the platform has seen a surge of ‘hateful conduct.’ He revealed that the company has removed more than “1500 accounts” and reduced impressions on this content to ‘nearly zero’. He added that the company’s measure of success when countering hate speech is ‘impressions’, which is how many times the harmful content has been seen by others on the platform.
His claim is that Twitter has “almost entirely eliminated impressions on this content in search and elsewhere” across the platform. “Impressions on this content typically are extremely low, platform-wide. We’re primarily dealing with a focused, short-term trolling campaign. The 1500 accounts we removed don’t correspond with 1500 people; many are repeat bad actors,” Roth wrote, adding that the company will continue to invest in policy and technology to make things better.
He also addressed another issue: users found that reporting hate speech resulted in notices saying the speech was not a violation. Twitter appears to be aware that its automated systems are not flagging such hate speech correctly, and the company is trying to fix this.
“To try to understand the context behind potentially harmful Tweets, we treat first-person and bystander reports differently. First person: This hateful interaction is happening to or targeting me. Bystander: This is happening to someone else,” he wrote.
He added that this difference is critical because “bystanders don’t always have the full context, we have a higher bar for bystander reports in order to find a violation.” This, he said, is one reason why many reports that do violate policies get marked non-violative in the first review.
Check out his tweets

Our primary success measure for content moderation is impressions: how many times harmful content is seen by our users. The changes we’ve made have almost entirely eliminated impressions on this content in search and elsewhere across Twitter. pic.twitter.com/AnJuIu2CT6
— Yoel Roth (@yoyoel) October 31, 2022

We’re changing how we enforce these policies, but not the policies themselves, to address the gaps here.
You’ll hear more from me and our teams in the days to come as we make progress. Talk is cheap; expect the data that proves we’re making meaningful improvements.
— Yoel Roth (@yoyoel) October 31, 2022
Roth said Twitter will be changing how it “enforce[s] these policies.” Keep in mind, the policies against hate speech themselves remain the same.
Meanwhile, Bloomberg reported that many employees in Twitter’s Trust and Safety organisation are currently unable to alter or penalise accounts that break rules around misleading information, offensive posts and hate speech, except for the most high-impact violations.
Roth replied to this stating, “This is exactly what we (or any company) should be doing in the midst of a corporate transition to reduce opportunities for insider risk. We’re still enforcing our rules at scale.”
But didn’t Elon Musk say he was for free speech?
Musk has long advocated ‘free speech’ on the platform, though he has tempered this with caveats that speech on the platform needs to remain within legal limits. He also noted in his letter to advertisers that he does not wish for Twitter to become a hellscape. None of Twitter’s content moderation policies have changed for now, though a new council with ‘divergent’ views is supposed to eventually take over.

Twitter’s content moderation council will include representatives with widely divergent views, which will certainly include the civil rights community and groups who face hate-fueled violence
— Elon Musk (@elonmusk) November 2, 2022
He has also said that “Twitter will not allow anyone who was de-platformed for violating Twitter rules back on the platform” until there is a clear process for doing so, which will likely take a few more weeks. Regarding the content moderation council, Musk tweeted that it will “include representatives with widely divergent views, which will certainly include the civil rights community and groups who face hate-fueled violence.”
