New Facebook Content Moderation Can Affect Your Marketing Campaigns


Facebook, now Meta, employs an algorithm that moderates the content that appears in user feeds. Ultimately, the goal is to surface content each user is most likely to engage with. Unfortunately, one outcome of this technology was the proliferation of hate speech and misinformation. In response, artificial intelligence (AI) and machine learning tools were developed to help reduce these problems. Over the last few years, Facebook content moderation has integrated AI to screen user-generated content (UGC).

How does this affect your Facebook advertising campaigns? Staying current on the latest Facebook content moderation changes helps your campaigns run smoothly: prior to launch, you can account for potential roadblocks and develop counter-strategies. Most importantly, major algorithm updates will start rolling out at the beginning of 2022. So, continue reading to learn how this will affect your paid social bottom line.

How Does Facebook Content Moderation Work with AI?

Older AI models required thousands of examples to learn what harmful content looks and sounds like. Once a model learned the language, it was programmed to remove posts containing blacklisted words from user feeds. Collecting and labeling those examples took engineers months, and in the meantime, users were producing hate speech and misinformation at a much faster rate.

However, on December 8, 2021, Meta announced that it had developed a new AI called Few-Shot Learner (FSL). It can evolve alongside UGC and learn to spot harmful speech faster than before. From Meta:

Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it. But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

To tackle this, we’ve built and recently deployed Few-Shot Learner (FSL), an AI technology that can adapt to take action on new or evolving types of harmful content within weeks instead of months. This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks. FSL can be used in more than 100 languages and learns from different kinds of data, such as images and text. This new technology will help augment our existing methods of addressing harmful content.

How Few-Shot Learner (FSL) Identifies Harmful Content

FSL identifies harmful content in three ways, as sketched in the example after this list:

  • Zero-shot: Policy descriptions with no examples
  • Few-shot with demonstration: Policy descriptions with a small set of examples (n<50)
  • Low-shot with fine-tuning: Machine learning developers can fine-tune the FSL base model with a small number of training examples
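
To make the zero-shot mode concrete, here is a minimal sketch using an off-the-shelf natural language inference classifier from the open-source Hugging Face transformers library. This is our own illustration of the general technique, not Meta's FSL system; the model choice and policy labels are assumptions for demonstration.

```python
# Minimal zero-shot sketch: score a post against policy descriptions with
# no labeled examples, using a publicly available NLI-based classifier.
# This illustrates the general zero-shot technique only; it is not FSL.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "Does that guy need all his teeth?"
# Hypothetical policy descriptions standing in for real moderation policies
policy_labels = ["incites violence", "health misinformation", "benign"]

result = classifier(post, candidate_labels=policy_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

The few-shot and low-shot modes follow the same pattern but supply a handful of labeled examples, or a light fine-tuning pass, to sharpen the model before it scores new posts.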

To monitor how FSL learns, Facebook tested it by deploying it to find misleading or sensationalized information about COVID-19. Another test looked for nuanced language inciting violence that earlier AI technologies couldn’t detect. An example provided by Meta is, “Does that guy need all his teeth?” The phrase contains no overtly violent words, yet it implies a threat.

Earlier AI technology would miss this post because it doesn’t overtly use violent words or hate speech. But in context, the words do incite violence and are hate speech. FSL can identify this language as violent.

FSL has also been successful at finding harmful posts when used in combination with existing AI technology. Overall, FSL has helped reduce misinformation and hate speech on Facebook.

FSL Can Adapt to New and Evolving Harmful Speech

Because news events and speech evolve over time, Facebook engineers developed FSL with the goal of adapting to this changing content.

Few-Shot Learner (FSL) can adapt to take action on new or evolving types of harmful content within weeks instead of months. It not only works in more than 100 languages, but it also learns from different kinds of data, such as images and text, and it can strengthen existing AI models that are already deployed to detect other types of harmful content.

With FSL, Facebook adds more capability to its search for harmful content and speeds up training by requiring fewer examples of harmful speech. Rather than relying on large labeled datasets, FSL is first trained on general language concepts across multiple languages.

[FSL is] first trained on billions of generic and open-source language examples. Then, the AI system is trained with policy-violating content and borderline content we’ve labeled over the years. Finally, it’s trained on condensed text explaining a new policy. Unlike previous systems that relied on pattern-matching with labeled data, FSL is pretrained on general language, as well as policy-violating and borderline content language, so it can learn the policy text implicitly.
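
As a rough illustration of that staged recipe, the sketch below strings the three training phases together in order. Everything here is a toy stand-in: the datasets, the StagedModel class, and its fine_tune method are illustrative assumptions, since Meta's actual training code is not public.

```python
# Toy sketch of the three-stage curriculum quoted above. A real system
# would run gradient updates over each dataset; this stub just records
# the order of the stages to show the structure of the recipe.

class StagedModel:
    """Illustrative stand-in for a large pretrained language model."""
    def __init__(self):
        self.curriculum = []

    def fine_tune(self, dataset, stage):
        # Placeholder for a real fine-tuning loop (e.g., PyTorch updates)
        self.curriculum.append((stage, len(dataset)))
        return self

# Stage 1: generic, open-source language examples (billions in practice)
generic_corpus = ["an ordinary sentence", "another ordinary sentence"]
# Stage 2: policy-violating and borderline content labeled over the years
labeled_policy_data = [("violating post", 1), ("borderline post", 0)]
# Stage 3: condensed text explaining a brand-new policy, with no examples
new_policy_text = ["Posts that sensationalize COVID-19 cures violate policy."]

model = StagedModel()
model.fine_tune(generic_corpus, "general language pretraining")
model.fine_tune(labeled_policy_data, "policy fine-tuning")
model.fine_tune(new_policy_text, "implicit new-policy learning")
print(model.curriculum)
```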

FSL is a new technology and will continue to adapt as it learns what harmful speech looks like according to Facebook’s policies. Developers are still learning what the technology can do and how best to use it to find harmful content in text, image, and video posts. You can learn more about FSL and other AI projects by following the Meta AI blog and reading this PDF.

How Does FSL AI Affect My Paid Ads on Facebook?

Businesses that advertise on the Facebook platform would be wise to watch closely how FSL is deployed. You don’t want your ads showing up next to harmful posts on Facebook, but you may also find that the AI causes problems with your ads.

From paid ads to organic posts, businesses rely on the News Feed to reach their target consumers on Facebook. With advanced AI filtering posts, your business may have less insight into how and where its organic posts and paid ads appear.

We recommend monitoring Meta’s updates as FSL progresses. In the meantime, we can offer guidance on profitable actions to increase your ROI.

FSL Does Have Some Drawbacks

Tom Simonite of Wired discusses potential issues with FSL’s AI technology: “The impressive capabilities—and many unknowns—about giant AI creations like Facebook’s prompted Stanford researchers to recently launch a center to study such systems, which they call ‘foundation models’ because they appear set to become an underpinning of many tech projects.”

Similar AI projects will be used in healthcare and finance as well as across the tech industry. Researchers say that while it is exciting to direct a foundation model with text alone, its capabilities are poorly understood. A potential drawback of less curated training is that engineers lose some control and oversight of how the AI works.

Work With the Facebook Experts

Rely on a digital advertising team that proactively responds to industry breakthroughs. Our paid social advertising services include Facebook, Instagram, LinkedIn, Pinterest, and more. Contact us for a custom digital advertising strategy that gains leads without sacrificing your bottom line.
