Viral video app TikTok admitted it previously had a policy in place which limited the reach of videos posted by disabled users on the site, claiming that the “blunt and temporary policy” was aimed at curbing bullying.
German tech blog Netzpolitik first reported on the policy on Monday, citing leaked documents that it obtained from TikTok which outlined its former moderation guidelines, as well as interviews with a source at TikTok with knowledge of the policies.
According to Netzpolitik, TikTok's moderation guidelines laid out rules for “Imagery depicting a subject highly vulnerable to cyberbullying.” It went on to describe users covered under the policy as people who are “susceptible to harassment or cyberbullying based on their physical or mental condition.”
Screenshots of the policy showed that the examples listed included facial disfigurements, autism, and Down syndrome.
According to Netzpolitik, TikTok's moderation guidelines limited the visibility of content produced by those users, and people on the app who had disabilities were categorized as “Risk 4,” meaning their videos were only visible in the country where they were uploaded. Some users who were deemed by moderators to be particularly vulnerable had their videos hidden from the app's main “For You” feed once they exceeded a certain number of views, which further limited the videos' reach.
The policy was in place until at least September 2019, according to the report.
TikTok admitted to using the policy but said it was “never designed to be a long-term solution.”
“Early on, in response to an increase in bullying on the app, we implemented a blunt and temporary policy,” a spokesperson for TikTok said in a statement.
“This was never designed to be a long-term solution, but rather a way to help manage a troubling trend. While the intention was good, the approach was wrong and we have since changed the earlier policy in favour of more nuanced anti-bullying policies and in-app protections. We continue to grow our teams and capacity and refine and improve our policies, in our ongoing commitment to providing a safe and positive environment for our users.”
TikTok has come under fire in recent weeks for its moderation policies after it suspended the account of US teenager Feroza Aziz, who posted a viral video on the app disguised as a makeup tutorial. The video criticized the Chinese government's treatment of Uighur Muslims in China's western autonomous region of Xinjiang.
The company claimed that the suspension of Aziz's account was due to “human error,” then issued a lengthy public apology before reinstating her account. In a statement to Business Insider in response to the controversy, TikTok said it “took a blunt approach to minimizing conflict” in its early moderation policies.
“A previous version of our moderation guidelines allowed penalties to be given for things like content that promotes conflict between religious sects or ethnic groups, spanning a number of regions around the world. The old guidelines in question are outdated and no longer in use.”
A report compiled by the Australian Strategic Policy Institute last month also alleged that ByteDance, the company that owns TikTok, is working closely with China's government to facilitate human rights abuses against Uighurs through its Chinese apps, an allegation the company denies.