By now, you’ve probably heard of the new rules restricting hate speech on Twitter, Instagram, Spotify and Facebook, as well as other major platforms, including Google+ and YouTube.
You may have heard that these new rules are being enforced in the U.S., Canada, Mexico, Australia and New Zealand, and are part of a larger crackdown on hate speech on the web.
The new rules do not define any specific category of hate speech, and they are likely to result in an increase in the number of accounts deleted, according to the Electronic Frontier Foundation, a nonprofit organization dedicated to civil liberties and free speech online.
In an interview with Wired, Twitter spokesperson Ben Goertzel said, “The new guidelines do not apply to our existing content that was published or shared by users in the past.”
Goertzel said the company had already received complaints from users about the new guidelines.
“We have a number of different ways to protect the user experience,” he said.
“In some cases, we may not have an active account or user account that’s being used to post hateful content, but we are making sure that those are blocked.”
However, Goertzel’s statement does not address the possibility that users will still be able to access these sites and use them to post hate speech.
That may be because of a change in how Twitter’s moderation system works, which may be making it harder for users to remove content, said Matt Blaze, director of digital and public policy at the Electronic Privacy Information Center (EPIC).
In September, the Center for Democracy and Technology (CDT) released a report on Twitter’s practices in terms of reporting abuse, which revealed that Twitter had received over 20,000 reports of abuse from users in 2017.
That was a dramatic increase from just a year earlier, when the company received only about a quarter as many reports.
Blaze said that despite the new regulations, he is concerned that the new systems might actually create an incentive for hate speech to continue on Twitter.
“If you have a tool that lets you report hate speech and a bunch of people pile onto it, that’s one way to discourage people from reporting it,” he told me.
“If it’s an effective tool, and it’s used well, and the people reporting hate speech are using it well, that could actually incentivize more of those kinds of things to happen.”
The Center for Public Integrity has previously reported on the extent of this type of harassment on Twitter, which can include the use of bots and automated accounts to flood the site with abusive content.
In September 2017, a Twitter bot using the alias “Skeet” started flooding the site with tweets about President Donald Trump.
It had already gotten over 3,000 likes, and by the end of the day it had over 5,000 followers.
In response to the influx of tweets, users quickly began reporting that the bot was tweeting racist, homophobic, and misogynistic content.
Twitter responded by deleting tweets from “Skeet” and other users who were sharing racist, sexist, homophobic and other hate speech, in the hope that it would be reported as spam, according to the Center.
The bot’s account was eventually shut down, but the flood of hate speech continued.
Blaze said the number and severity of hate crimes on Twitter have gone up in the wake of the election, and that the company is being pressured by government agencies to act more aggressively against hate speech than before.
“Twitter is basically saying ‘We don’t care about your hate speech anymore.
We’re going to take a closer look at your reports and we’re going on the offensive,'” he said, adding that the increased crackdown on speech may have caused the increased reports.
“It’s probably an effort to put pressure on Twitter to do more, to shut down people who are spreading hate speech more effectively and to report more cases of abuse.
That probably caused the rise in the hate crimes.”
Blaze also said that EPIC is now working with Twitter to “ensure that users report hate crime incidents more quickly and effectively.”
Blaze told me that Twitter’s policy is already working well in the United States, where he said the site’s automated reporting system alerts the company when users believe another user has committed a hate crime.
“When a user reports a crime, it’s really easy for the company to act on that report,” he explained.
“The system has been in place for years, and I can say it’s been effective at reporting hate crimes, and actually preventing hate crimes from happening.”
Blaze said he believes the new enforcement guidelines are just another step in a broader effort by Twitter to combat hate speech online, part of an overall trend of the company going after accounts that publish or distribute hate speech that could be harmful to people.