Facebook’s Answer To Deceptive Advertising: 1,000 Additional Human Moderators


Social Media Week

Battling growing concerns over brand safety, Facebook is building an army of ad-checkers.


Facebook announced this week that it will add 1,000 jobs for people who will review and verify the authenticity and appropriateness of ads placed on its platform.

The news comes on the heels of an investigation into how Russia used targeted Facebook ads to meddle in the U.S. presidential election and create public discord in the months that preceded it. (Just this week, Facebook revealed that the ads were shown to approximately 10 million people.)

TechCrunch reports that Facebook is hiring the staffers to be more proactive when it comes to ensuring brand safety. This includes assessing not only the contents of the ads at “face value,” but also the full context of the media buy, i.e., which audiences the advertiser is targeting. No doubt certain patterns of deceptive buys will emerge, allowing human moderators to reject more of these ads before they’re pushed out into the ecosystem.

We can expect to see an expansion of Facebook’s current policy regarding such ads, which today bans “shocking content, direct threats, and the promotion of the sale or use of weapons,” to disallow ads that include “subtle expressions of violence.”

New verification steps will also be added. USA Today reports that advertisers will be required to present “more thorough documentation” as well as confirm the business or organization they represent before they can purchase any ads.

Brand safety has become a major concern for advertisers who have benefitted from the scale and targeting that major platforms like Facebook and Google provide. Automated (programmatic) buying makes it easy for marketers to reach relevant audiences quickly and efficiently, but as the ease of the actual media buy increases, the risk does as well—despite technologies used to weed out inappropriate content.

A recent study from the CMO Council found that more than 70 percent of CMOs are “feeling pressure” from brand safety issues, and 37 percent have pulled content as a result.

In March, more than 250 brands pulled their ads from Google’s YouTube platform amid concerns that their ads were appearing in-stream alongside extremist videos and other inappropriate content. Media agencies have responded by partnering with companies like OpenSlate, a data company that helps buyers verify their content placements. Google responded by hiring an army of its own staffers.

Other platforms, like dailymotion, have made moves to stop monetizing user-generated content altogether, instead prioritizing content from trusted partners.

The trend across all of these efforts: a human presence. While automation is great for buying targeted media efficiently, machine learning has not proven to be a panacea for brand safety issues across the major platforms. As a result, advertisers and platforms alike are relying on a combination of man and machine to ensure that automation efforts are deployed successfully, and that ad partners are vetted by trained human moderators where and when it makes sense.




Katie Perry

Contributor, Social Media Week

Katie Perry is a marketing & content strategist and contributor to SMW News, a leading news platform covering startups, tech, brands and the future of work. You can follow Katie on Twitter at @katieeperry.









