
Twitter to Label Altered Media, Remove If It May Cause Harm

Deciding what counts as harm could prove difficult, though.


Twitter will begin to label and in some cases remove doctored or manipulated photos, audio and videos that are designed to mislead people.

The company said Tuesday that the new rules prohibit sharing synthetic or manipulated material that's likely to cause harm. Material that is manipulated but isn't necessarily harmful may get a warning label.

Under the new guidelines, the slowed-down video of House Speaker Nancy Pelosi in which she appeared to slur her words could get the label if someone tweets it after the rules take effect March 5. If the video were also deemed harmful, Twitter could remove it.

But what counts as harm could be difficult to define, and some material will likely fall into a gray area.

“This will be a challenge and we will make errors along the way — we appreciate the patience,” Twitter said in a blog post. “However, we’re committed to doing this right.”

Twitter said it considers threats to the safety of a person or a group to be serious harm, along with the risk of mass violence or widespread civil unrest. But harm could also mean threats to people's privacy or their ability to express themselves freely, Twitter said. This could include stalking, voter suppression and intimidation, epithets, and “material that aims to silence someone.”

Google, Facebook, Twitter and other technology services are under intense pressure to prevent interference in the 2020 U.S. elections after they were manipulated four years ago by Russia-connected actors. On Monday, Google's YouTube clarified its policy around political manipulation, reiterating that it bans election-related “deepfake” videos. Facebook has also been ramping up its election security efforts.

As with many of Twitter's policies, including those banning hate speech or abuse, success will be measured by how well the company can enforce the new rules. Even with rules in place, enforcement can be uneven and slow. This is likely to be especially true for misinformation, which can spread quickly on social media even with safeguards in place.

Facebook, for instance, has been using third-party fact-checkers to debunk false stories on its site for three years. While the efforts are paying off, the battle against misinformation is far from over.

Twitter said it was committed to seeking input from its users on such rules. The company posted a survey in six languages and received 6,500 responses from around the world. According to the company, the majority of respondents said misleading tweets should be labeled, though not everyone agreed on whether they should be removed or left up.

Copyright AP - Associated Press