For the First Time, Facebook Spells Out What It Forbids

The updated community standards will mirror the rules the social network's 7,600 moderators use to review questionable posts and decide whether they should be pulled off Facebook

If you've ever wondered exactly what sorts of things Facebook would like you not to do on its service, you're in luck. For the first time, the social network is publishing detailed guidelines on what does and doesn't belong on its service — 27 pages' worth of them, in fact.

So please don't make credible violent threats or revel in sexual violence; promote terrorism or the poaching of endangered species; attempt to buy marijuana, sell firearms, or list prescription drug prices for sale; post instructions for self-injury; depict minors in a sexual context; or commit multiple homicides at different times or locations.

Here are just some examples of what the rules ban. Note: Facebook has not changed the actual rules — it has just made them public.

CREDIBLE VIOLENCE
Is there a real-world threat? Facebook looks for "credible statements of intent to commit violence against any person, groups of people, or place (city or smaller)." Is there a bounty or demand for payment? The mention or an image of a specific weapon? A target and at least two details, such as location, method or timing? A statement of intent to commit violence against a vulnerable person or group, such as "heads-of-state, witnesses and confidential informants, activists, and journalists"?

Also banned: instructions "on how to make or use weapons if the goal is to injure or kill people," unless there is "clear context that the content is for an alternative purpose (for example, shared as part of recreational self-defense activities, training by a country's military, commercial video games, or news coverage)."

HATE SPEECH
"We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status," Facebook says. As to what counts as a direct attack, the company says it's any "violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation." There are three tiers of severity, ranging from comparing a protected group to filth or disease to calls to "exclude or segregate" a person our group based on the protected characteristics. Facebook does note that it does "allow criticism of immigration policies and arguments for restricting those policies."

GRAPHIC VIOLENCE
Prohibited are images of violence against "real people or animals" with comments or captions that express enjoyment of suffering or humiliation, speak positively of the violence, or include remarks "indicating the poster is sharing footage for sensational viewing pleasure." Captions and context matter here because Facebook does allow such images in some cases, such as when they are condemned, shared as news, or posted in a medical setting. Even then, the post must be restricted so that only adults can see it, and Facebook adds a warning screen.

CHILD SEXUAL EXPLOITATION
"We do not allow content that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images," Facebook says. Then, it lists at least 12 specific instances of children in a sexual context, saying the ban includes, but is not limited to these examples. This includes "uncovered female nipples for children older than toddler-age."

ADULT NUDITY AND SEXUAL ACTIVITY
"We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring," Facebook says. That said, the company says it "defaults" to removing sexual imagery to prevent the sharing of non-consensual or underage content. The restrictions apply to images of real people as well as digitally created content, although art — such as drawings, paintings or sculptures — is an exception.

Facebook already banned most of these actions on its previous "community standards" page, which sketched out the company's standards in broad strokes. But on Tuesday it will spell out the sometimes gory details.

The updated community standards will mirror the rules its 7,600 moderators use to review questionable posts and decide whether they should be pulled off Facebook, and sometimes whether to call in the authorities.

The standards themselves aren't changing, but the details reveal some interesting tidbits. Photos of breasts are OK in some cases — such as breastfeeding or in a painting — but not in others. The document details what counts as sexual exploitation of adults or minors, but leaves room to ban more forms of abuse, should they arise.

Since Facebook doesn't allow serial murderers on its service, its new standards even define the term. Anyone who has committed two or more murders over "multiple incidents or locations" qualifies. But you're not banned if you've only committed a single homicide. It could have been self-defense, after all.

Reading through the guidelines gives you an idea of how difficult the jobs of Facebook moderators must be. These are people who have to read and watch objectionable material of every stripe and then make hard calls — deciding, for instance, if a video promotes eating disorders or merely seeks to help people. Or what crosses the line from joke to harassment, from theoretical musing to direct threats, and so on.

Moderators work in 40 languages. Facebook's goal is to respond to reports of questionable content within 24 hours. But the company says it doesn't impose quotas or time limits on the reviewers.

The company has made some high-profile mistakes over the years. For instance, human rights groups say Facebook has mounted an inadequate response to hate speech and the incitement of violence against Muslim minorities in Myanmar. In 2016, Facebook backtracked after removing an iconic 1972 Associated Press photo featuring a screaming, naked girl running from a napalm attack in Vietnam. The company initially insisted it couldn't create an exception for that particular photograph of a nude child, but soon reversed itself, saying the photo had "global importance."

Monika Bickert, Facebook's head of product policy and counterterrorism, said the detailed public guidelines have been a long time in the works. "I have been at this job five years and I wanted to do this that whole time," she said. Bickert said Facebook's recent privacy travails, which forced CEO Mark Zuckerberg to testify for 10 hours before Congress, didn't prompt their release now.

The policy is an evolving document, and Bickert said updates go out to the content reviewers every week. Facebook hopes it will give people clarity if posts or videos they report aren't taken down. Bickert said one challenge is having the same document guide vastly different "community standards" around the world. What passes as acceptable nudity in Norway may not pass in Uganda or the U.S.

There are more universal gray areas, too. For instance, what exactly counts as political protest? How can you know that the person in a photo agreed to have it posted on Facebook? That latter question is the main reason for Facebook's nudity ban, Bickert said, since it's "hard to determine consent and age." Even if the person agreed to be taped or photographed, for example, they may not have agreed to have their naked image posted on social media.

Facebook uses a combination of human reviewers and artificial intelligence to weed out content that violates its policies. But its AI tools aren't close to the point where they could pinpoint subtle differences in context and history — not to mention shadings such as humor and satire — that would let them make judgments as accurate as those of humans.

And of course, humans make plenty of mistakes themselves.

Copyright AP - Associated Press