Amid rising concerns that X has become less safe under billionaire Elon Musk, the platform formerly known as Twitter is seeking to assure advertisers and critics that it still polices harassment, hate speech and other offensive content.
From January to June, X suspended 5.3 million accounts and removed or labeled 10.7 million posts for violating its rules against posting child sexual exploitation materials, harassment and other harmful content, the company said in a 15-page transparency report released Wednesday. X said it received more than 224 million user reports during the first six months of this year.
It’s the first time X has released a formal global transparency report since Musk completed his acquisition of Twitter in 2022. The company said last year that it was reviewing how it approaches transparency reporting, but still released data about how many accounts and how much content were pulled down.
Safety issues have long dogged the social media platform, which has faced criticism from advocacy groups, regulators and others that the company doesn’t do enough to moderate harmful content. But those fears intensified after Musk took over Twitter and laid off more than 6,000 people.
The release of X’s transparency report also comes as advertisers plan to cut their spending on the platform next year and the company escalates its battle with regulators. This year, X Chief Executive Linda Yaccarino told U.S. lawmakers that the company was restructuring its trust and safety teams and building a trust and safety center in Austin, Texas.
Musk, who said last year that advertisers who were boycotting his platform could “go f— yourself,” also has moderated his tone. At this year’s Cannes Lions International Festival of Creativity, he said that “advertisers have a right to appear next to content that they find compatible with their brands.”
When Musk took over Twitter, several changes he made raised alarms among safety experts. X reinstated previously suspended accounts, including those of white nationalists, stopped enforcing its policy against COVID-19 misinformation and abruptly disbanded its Trust and Safety Council, an advisory group that included human rights activists, child safety organizations and other experts.
“All of those things add up to a less safe environment,” said Stephen Balkam, the founder and chief executive of the Family Online Safety Institute. The group was part of the Trust and Safety Council before it was shuttered.
X’s transparency report is “very opaque” and appears to contradict Musk’s image as a “free speech absolutist” because it shows that the company is taking down content, Balkam said.
Eirliani Abdul Rahman, co-founder of YAKIN, short for Youth, Adult Survivors & Kin In Need, who was among the members who resigned from Twitter’s Trust and Safety Council, said the report was “laudatory” but insufficient.
“This would be for me lip service when the owner himself doesn’t actually abide by the rules,” Abdul Rahman said.
A spokesperson for X could not be immediately reached, but Yaccarino said on X: “Our commitment to transparency and safety continues.”
X has also grappled with criticism that it has become less transparent under Musk’s leadership. The company, which was once publicly traded, became private after Musk purchased it for $44 billion.
The change meant that the social media platform no longer reported its quarterly user numbers and revenue publicly. Last year, X started charging for access to its data, making it tougher for researchers to conduct studies about the platform.
Concern about the lack of moderation on X also has posed a threat to its advertising business. The World Bank in September halted paid advertising on the platform after its ads showed up under a racist post. About 25% of advertisers expect to decrease their spending on X next year and only 4% of advertisers think the platform’s ads provide brand safety, according to a survey by the market research firm Kantar.
Some of the top problems that users reported on X involved posts that allegedly violated the platform’s rules on harassment, violent content and hateful conduct, the platform’s transparency report shows.
Musk has said on X that his approach to enforcing the platform’s rules is to restrict the reach of potentially offensive posts rather than take them down. Citing free speech concerns, he sued California last year over a state law that lawmakers say aims to make social networks more transparent.
X’s transparency report shows that roughly 2.8 million accounts were suspended for violating the platform’s rules against child sexual exploitation, making up more than half of the 5.3 million accounts that were pulled down.
But the report also showed that X resorted to labeling user content in some cases rather than removing or suspending accounts.
X applied 5.4 million labels to content reported for abuse, harassment and hateful conduct, relying heavily on automated technology. Roughly 2.2 million pieces of content were taken down for violating those rules.
The platform’s rules state that the site doesn’t allow media depicting hateful imagery such as the Nazi swastika in live videos, account bios, profiles or header images. Other instances, though, must be marked as sensitive media. This week, X also made changes to a feature that allows users to block people on the platform. People whom users have blocked will be able to see their posts but not engage with them.
X also suspended nearly 464 million accounts for violating its rules against platform manipulation and spam. Musk vowed to “defeat the spam bots” on Twitter before he took over the platform. The company’s report included a metric called the “post violation rate” that showed users are unlikely to come across content that breaks the site’s rules.
Meanwhile, X continues to face legal challenges in several countries including Brazil, whose Supreme Court blocked the site because Musk failed to comply with court orders to suspend certain accounts for posting hate speech. The company bowed to legal demands this week in an attempt to get reinstated. It has also been reporting content moderation data to regulators in places such as Europe and India.
“He’s running up against limitations perhaps for the first time in his career around content issues,” Balkam said. “My only guess is the executives of his companies must be tearing their hair out while he spends his days and hours into the middle of the night using this platform to troll people.”
The report included the number of requests X gets from government and law enforcement agencies. The company received 18,737 government requests for user account information and it disclosed information in about 53% of these cases.
Twitter started publicly reporting in 2012 the number of government requests it received for user information and content removal. The company’s first transparency report, which included data about copyright takedown notices, came after Google started releasing such information in 2010.
After revelations surfaced in 2013 that the National Security Agency had access to user data that Apple, Google, Facebook and other tech giants collected, a growing number of online platforms started to disclose more information about requests they received from the government and law enforcement.