A systematic trolling campaign was launched against Twitter shortly after Elon Musk took over the company last week. According to Yoel Roth, Twitter’s head of safety and integrity, the coordinated effort was intended to make users believe that the platform’s rules had been relaxed. Roth also said the company was working to shut down the campaign, which had driven a spike in hate speech and other offensive behaviour on the site. The executive has since posted an update on Twitter’s cleanup efforts, stating that over 1,500 accounts involved in the trolling have been removed and that the company has made “measurable progress” since Saturday.
Those 1,500 accounts didn’t belong to 1,500 different people, according to Roth; many of them were repeat offenders, he tweeted. The executive added that impressions, the number of times a piece of content is viewed by users, are Twitter’s main yardstick for measuring the performance of content moderation, and that the company was able to almost completely eliminate impressions of the vile content that had flooded its platform.
Beyond updating users on how the latest trolling campaign is being handled, Roth also discussed how Twitter is changing the way it enforces its rules against harmful tweets. The company treats bystander reports differently from first-person reports, he said, adding that “we have a higher threshold for bystander reports in order to discover a violation” because bystanders “don’t necessarily have full context.” As a result, even when hostile behaviour on the platform is reported by uninvolved third parties, those reports are frequently marked as non-violations of the site’s rules.
Roth concluded his series of tweets with a pledge to share more information about how the site is changing the way it enforces its rules. A recent Bloomberg report, however, calls into question how Twitter’s personnel will be able to uphold those rules in the coming days. According to the outlet, Twitter has restricted the majority of its staff from accessing the internal tools used for content moderation.
Most of Twitter’s Trust and Safety team appears to have lost the authority to take action against accounts that violate the platform’s policies on hate speech and disinformation. With the US midterm elections on November 8 only a few days away, employees are understandably concerned about how Twitter will be able to curb the spread of false information in light of this change.
According to Bloomberg, the decision to limit employees’ access to moderation tools is part of a broader effort to freeze Twitter’s code and prevent staff from making changes to the service while it changes hands. The outlet added that Musk has asked the team to review several Twitter policies, notably the misinformation policy that penalizes posts containing false information about politics and COVID-19. Musk also reportedly urged the team to review the portion of Twitter’s hateful conduct policy that penalizes posts containing “targeted misgendering or deadnaming of transgender individuals.”