Facebook Clarifies Policy on Extreme Violence

Facebook has reacted to worldwide outrage, from everyday users to national leaders, by taking steps to clarify its policy regarding which extremely violent photos and video clips will be allowed and why others might be removed. On the company’s website, the policy is best reduced to two critical points.

  • “…when we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video, and will remove content that celebrates violence.”

  • On the other hand: “…we will consider whether the person posting the content is sharing it responsibly, such as accompanying the video or image with a warning and sharing it with an age-appropriate audience.”

What they seem to be saying is that “people who share violent, graphic content” should do so in a “responsible manner.”

This latest effort to adjust its operational guidelines comes only a week after Facebook relaxed the restrictions that had previously prohibited teenagers from sharing content from their sites with “strangers.” The relaxed sharing policy was accompanied by assurances to parents that young users could enjoy more choices while continuing to be protected from cyberbullying and predators.

While not ready to disclose details about its content monitoring policy, Facebook has said it has been working to expand user controls over content available on its sites. The company has claimed it is improving its advance warnings about violent or explicit images. But the policy pages appear to contain no information about how the relaxed rules for teenagers will be kept from increasing opportunities for predatory or bullying behavior.

Some might argue that the ambiguity of the clarified policy is an open door for censorship by Facebook. That was certainly the position taken by Cuban artist Erik Ravelo in his interview posted by the Huffington Post on Sept. 10, 2013. Ravelo’s photographs depicting children with symbolic images of adults reportedly attracted some 18,000 “likes” before the photos were “censored by Facebook,” according to the Huffington Post story.

However, a simple search using the words “erik ravelo the untouchables” turns up at least one Facebook page where seven of the images can be viewed without any warning about offensive content preceding them.

Images are not the only targets of Facebook’s content control policy. Arizona governor Jan Brewer publicly complained that her site had been “censored” because, she claims, in 2011 she was posting information opposing what she described as the Obama administration’s inaction on immigration and outright refusal to enforce the law. Public attention to the removal of Brewer’s site drew an apology from Facebook, but the apology contained no explanation for why the site was removed in the first place.

How Facebook’s system triggers reviews may be as controversial as the selective enforcement of the policy itself. Anyone who visits a site or receives a post they consider offensive can click on the “flag” tab and identify the site or posting as “spam.” If or when enough of these flags are received by the company, the site is taken down and reviewed by a group of people who determine its future. Lots of flags usually lead to site removal.
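The flag-then-review process described above amounts to a simple counting threshold. The sketch below is a hypothetical illustration of that idea in Python; the threshold value, function names, and data structures are assumptions for illustration, not Facebook’s actual system:

```python
# Hypothetical sketch of a flag-threshold trigger. The threshold value
# and names here are illustrative assumptions, not Facebook's real system.
from collections import Counter

FLAG_THRESHOLD = 100  # assumed number of flags that triggers human review

flag_counts = Counter()  # post_id -> number of "spam" flags received
review_queue = []        # posts awaiting review by a human team

def flag_post(post_id):
    """Record one user flag; queue the post for review once the
    flag count reaches the threshold (queued exactly once)."""
    flag_counts[post_id] += 1
    if flag_counts[post_id] == FLAG_THRESHOLD:
        review_queue.append(post_id)

# Simulate a coordinated flagging campaign against one post.
for _ in range(FLAG_THRESHOLD):
    flag_post("photo-123")

print(review_queue)  # ['photo-123']
```

A purely count-based trigger like this is what makes the “flag spamming” campaigns described below possible: the counter cannot distinguish genuine complaints from an organized attack.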

But, like so much of the cyber-world, there are groups that launch campaigns in which participants attack sites with flags, intending to force their removal. This has become such a common method of attempting to silence political opposition or shut down a religious site that it has earned its own name: “flag spamming.”

Facebook’s attempt to clarify its content control policy and explain the criteria it uses to remove some content while leaving other, nearly identical, content online might seem imperfect. But considering the billions of users and nearly unlimited variables in personal preference, its policy might well be the best system available. It really comes down to personal responsibility.

by Marcus Murray