Bumble will start removing users who falsely report others for their identity

Bumble has announced a new policy that explicitly bans identity-based hate and commits the platform to taking action against users who intentionally submit false reports targeting others for their identity. The company noted in its press release that 90 per cent of the reports it received against gender-nonconforming people did not actually identify any violation of its terms and were eventually dismissed.
Such reports often contained language about the reported person’s gender and speculated that their profile might be fake. Taking a tough stance on users who file false reports like these, Bumble said it may even boot repeat offenders from the platform.
The dating app also says it will review each report and take appropriate action. The rollout of the policy includes implicit bias training and discussion sessions with all safety moderators to examine how bias can surface when moderating. Azmina Dhrodia, Bumble’s safety policy lead, said in a statement, “We always want to lead with education and give our community a chance to learn and improve. However, we will not hesitate to permanently remove someone who consistently goes against our policies or guidelines.”

The company defines identity-based hate as “content, imagery or conduct that promotes or condones hate, dehumanisation, degradation, or contempt against marginalized or minoritised communities with the following protected attributes: race, ethnicity, national origin/nationality, immigration status, caste, sex, gender, gender identity or expression, sexual orientation, disability, serious health condition, or religion/belief,” according to the press statement.
“We want this policy to set the gold standard of how dating apps should think about and enforce rules around hateful content and behaviours. We were very intentional to tackle this complex societal issue with principles celebrating diversity and understanding how those with overlapping marginalized identities are disproportionately targeted with hate,” added Dhrodia.
Aside from human moderation, Bumble already uses automated measures to protect users against harmful comments and images. The company says that with these safeguards it has detected 80 per cent of community guidelines violations before they were even reported, which is “part of the company’s commitment to reduce and prevent harm before it happens.”
Content on Bumble that’s not automatically detected can be brought to moderators’ attention through the Block + Report feature, which allows users to report someone for identity-based hate, either straight from their profile or via a chat.
