
Facebook’s community report reveals content that violates its policies



Facebook has disclosed enforcement metrics for ten policies on Facebook and four policies on Instagram worldwide, including Pakistan, describing the categories of content that violate its rules and lead to removal or a ban.

In the fourth edition of its Community Standards Enforcement Report, covering Q2 and Q3 2019, the company detailed the content that violated its policies and was subsequently removed or blocked on its social networking platforms.

Facebook's metrics include the prevalence of violating content; the content it took action against, known as actioned content; and proactive detection, meaning how much violating content was found and acted on before anyone reported it.

The metrics also include appealed content, which measures how much content people appealed after Facebook took action, and restored content, which measures how much content was restored after Facebook initially took action.

In its first report for Instagram, Facebook provides data on four policy areas: child nudity and child sexual exploitation; regulated goods, specifically illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda.

According to Facebook, while it uses the same proactive detection systems to find and remove harmful content on both Instagram and Facebook, the metrics may differ across the two services.

Facebook has also recently strengthened its policies around self-harm and made improvements to its technology to find and remove more violating content.

“On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.1% we detected proactively,” Facebook said in a statement.

Instagram saw similar progress: the company removed about 835,000 pieces of content in Q2 2019, of which 77.8% was detected proactively, and about 845,000 pieces of content in Q3 2019, of which 79.1% was detected proactively.

Facebook's Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on its services. The company has identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports covered only its efforts against al Qaeda, ISIS and their affiliates, as Facebook focused its measurement on the groups understood to pose the broadest global threat. The new report expands to cover actions taken against all terrorist organizations. While the rate at which Facebook detects and removes content associated with al Qaeda, ISIS and their affiliates has remained above 99%, the rate at which it proactively detects content affiliated with any terrorist organization is 98.5% on Facebook and 92.2% on Instagram. The company says it will continue to invest in automated techniques to combat terrorist content and iterate on its tactics, because bad actors will continue to change theirs.

The report also adds, for the first time, prevalence metrics for content that violates the suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies. Because Facebook says it cares most about how often people may see violating content, it measures prevalence, the frequency at which people may encounter this content on its services. For the policy areas addressing the most severe safety concerns – child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda – the likelihood that people view violating content is very low, and much of it is removed before anyone sees it.

As a result, when Facebook samples views of content to measure prevalence for these policy areas, it often does not find enough violating samples, or sometimes any, to reliably estimate a metric. Instead, it estimates an upper limit on how often someone would see content that violates these policies. In Q3 2019, that upper limit was 0.04%, meaning that for each of these policies, out of every 10,000 views on Facebook or Instagram, no more than 4 are estimated to have contained content that violated that policy.
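To make the arithmetic behind the 0.04% upper bound concrete, here is a minimal sketch; the figures come from the report, but the function name is ours for illustration.

```python
# Illustrative only: convert the report's prevalence ceiling (a percentage)
# into an upper bound on violating views for a given number of sampled views.
def max_violating_views(total_views: int, upper_bound_pct: float = 0.04) -> float:
    """Upper bound on views containing violating content, given a prevalence ceiling."""
    return total_views * upper_bound_pct / 100

# Out of every 10,000 views, at most 4 contained violating content.
print(max_violating_views(10_000))  # 4.0
```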

Over the last two years, Facebook has invested in proactive detection of hate speech so that it can catch this harmful content before people report it, and sometimes before anyone sees it. Its detection techniques include text and image matching, which identifies images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at signals such as language, as well as the reactions and comments on a post, to assess how closely it matches common phrases, patterns and attacks previously seen in content that violates its policies against hate.
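The "text matching" idea described above can be sketched in miniature: keep fingerprints of strings already removed, then flag identical strings in new posts. This is a hypothetical illustration of the concept only; all names are ours, and production systems are far more sophisticated.

```python
import hashlib

# Fingerprints of text previously removed as violating (illustrative store).
removed_fingerprints: set[str] = set()

def fingerprint(text: str) -> str:
    # Normalize lightly so trivially identical strings match.
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def record_removal(text: str) -> None:
    """Remember a removed post so identical copies can be caught proactively."""
    removed_fingerprints.add(fingerprint(text))

def matches_removed(text: str) -> bool:
    """True if this text is identical (after normalization) to removed content."""
    return fingerprint(text) in removed_fingerprints

record_removal("Some previously removed post")
print(matches_removed("some previously removed post"))  # True
print(matches_removed("A brand-new post"))              # False
```

Exact matching like this only catches verbatim copies, which is why the article notes that machine-learning classifiers are layered on top to catch paraphrases and variants.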

Facebook says it will continue to invest in systems that let it combat hateful content proactively across its services, as well as in the processes it uses to remove violating content accurately while safeguarding content that discusses or condemns hate speech.

Just as decisions made by its content review team are reviewed to monitor their accuracy, Facebook's teams routinely review removals by its automated systems to make sure policies are being enforced correctly. The company also reviews content again when people appeal and say a post was removed in error.

Facebook has also launched a new page where people can view examples of how its Community Standards apply to different types of content and see where it draws the line.
