Reddit recently announced that it removed 6% of the content uploaded to the platform in 2020, up slightly from the 5% removed in 2019. Reddit admins attribute the increase to policy changes reflecting a harsher stance against hate speech and racism.
The change came alongside Reddit banning controversial subreddits, including “The_Donald,” for violating content policies directed at preventing hate speech. The company says there was a noticeable drop in hate speech after the changes were implemented.
Of the content removed from the website, 2% was taken down by Reddit staff themselves, while subreddit moderators, including automated bots, removed the other 4%. The amount of content removed by community moderators, both human and otherwise, increased 61% over 2019.
The company attributes the increase in removals to the adoption of automated moderation tools, as well as a 49% rise in content review submissions compared to the previous year. Automods have proven particularly effective at removing potentially bad posts and comments before they are reported for manual removal.
Company employees report removing 82,858 comments over violations related to hate speech, harassment, violent crime, and porn.
One notable takeaway from these statistics is that, according to Reddit, around 99% of the content removed was manipulated content, including community interference, spam, and vote manipulation, rather than genuine policy infringements.
Less than 1% of the total removed content was attributed to harassment and hate speech. Most permanent bans on Reddit were because of spam, rather than hate speech and the like.
These statistics suggest that it’s much easier for messaging companies and social media sites to police platform-wide bad behavior, such as spam, than to catch individual posts and comments that violate content policies. Other social media sites, including Facebook, say they plan to take a similar approach to tackling misinformation.