Roth explained that those 1,500 accounts didn’t correspond to 1,500 people. “Many were repeat bad actors,” he tweeted. The executive also said that Twitter’s primary success measure for content moderation is impressions, meaning the number of times a piece of content is seen by users, and that the company was able to reduce impressions on the hateful content that flooded its website to nearly zero.
Our primary success measure for content moderation is impressions: how many times harmful content is seen by our users. The changes we’ve made have almost entirely eliminated impressions on this content in search and elsewhere across Twitter. pic.twitter.com/AnJuIu2CT6
— Yoel Roth (@yoyoel) October 31, 2022
In addition to providing an update about dealing with the recent trolling campaign on Twitter, Roth also talked about how the website is changing the way it enforces its policies regarding harmful tweets. He explained that the company treats first-person and bystander reports differently: “Because bystanders don’t always have full context, we have a higher bar for bystander reports in order to find a violation.” That’s why reports by uninvolved third parties about hateful conduct on the platform often get marked as non-violations even if they do violate its policies.
Roth ended his series of tweets with a promise to reveal more about how the website is changing how it enforces its rules. However, a new Bloomberg report calls into question how Twitter’s staff can enforce those policies in the coming days. According to the news organization, Twitter has frozen most employees’ access to internal tools used for content moderation.
Apparently, most members of Twitter’s Trust and Safety organization have lost the ability to penalize accounts that break rules regarding hateful conduct and misinformation. This has understandably raised concerns among employees about how Twitter will be able to keep the spread of misinformation in check, with the November 8th US midterm elections just days away.
Bloomberg said the restriction placed upon employees’ access to moderation tools is part of a broader plan to freeze Twitter’s software code, which will prevent staff members from pushing changes to the website as it changes ownership. The organization also said that Musk asked the Twitter team to review some of its policies, including its misinformation rule that penalizes posts containing falsehoods about politics and COVID-19. Another rule Musk reportedly asked the team to review is a section of Twitter’s hateful conduct policy that penalizes posts containing “targeted misgendering or deadnaming of transgender individuals.”
Original Story At https://www.engadget.com/twitter-removed-1500-accounts-coordinated-trolling-campaign-073056421.html?src=rss