
Facebook’s Answer To Deceptive Advertising: 1,000 Additional Human Moderators

Battling growing concerns over brand safety, Facebook is building an army of ad-checkers.

Facebook announced this week that it will add 1,000 jobs for people who will review and verify the authenticity and appropriateness of ads placed on its platform.

The news comes on the heels of an investigation into how Russia used targeted Facebook ads to meddle in the U.S. presidential election and sow public discord in the months that preceded it. (Just this week, Facebook revealed that the ads were shown to approximately 10 million people.)

TechCrunch reports that Facebook is hiring the staffers to be more proactive about ensuring brand safety. This includes assessing not only the contents of ads at “face value,” but also the full context of the media buy, i.e., which audiences the advertiser is targeting. Certain patterns of deceptive buying are likely to emerge over time, allowing human moderators to reject more of these ads before they’re pushed out into the ecosystem.

We can also expect Facebook to expand its current ad policy, which today bans “shocking content, direct threats, and the promotion of the sale or use of weapons,” to disallow ads that include “subtle expressions of violence.”

New verification steps will also be added. USA Today reports that advertisers will be required to present “more thorough documentation” as well as confirm the business or organization they represent before they can purchase any ads.

Brand safety has become a major concern for advertisers who have benefited from the scale and targeting that major platforms like Facebook and Google provide. Automated (programmatic) buying makes it easy for marketers to reach relevant audiences quickly and efficiently, but as the media buy itself gets easier, the risk rises as well, despite the technologies used to weed out inappropriate content.

A recent study from the CMO Council found that more than 70 percent of CMOs are “feeling pressure” from brand safety issues, and that 37 percent have pulled content as a result.

In March, more than 250 brands pulled their ads from Google’s YouTube platform amid concerns that they were appearing in-stream alongside extremist videos and other inappropriate content. Media agencies have responded by partnering with companies like OpenSlate, a data firm that helps buyers verify their content placements. Google responded by hiring an army of its own staffers.

Other platforms, like dailymotion, have made moves to stop monetizing user-generated content altogether, instead prioritizing content from trusted partners.

The common thread across all of these efforts: a human presence. While automation is great for buying targeted media efficiently, machine learning has not proven to be a panacea for brand safety issues on the major platforms. As a result, advertisers and platforms alike are relying on a combination of humans and machines to ensure that automation is deployed successfully and that ad partners are vetted by trained moderators where and when it makes sense.
