
What Will Facebook’s White Nationalism Ban Look Like in Action?



Facebook’s ban on white nationalism and white separatism goes into effect this week. What will it truly mean for the atmosphere of the site?

This week, Facebook’s landmark policy against white nationalism and white separatism will take effect across Facebook, Messenger, and Instagram. The policy, announced last week in a company blog post and in a series of statements to the press, is intended to enact “a ban on praise, support and representation” of these dangerous ideologies. Now that the policy is live and the platforms have mobilized to enforce it, what will it mean for the user experience?

What Content Will Be Impacted?

At present, Facebook seems to only have the ability to target and remove explicit expressions of white supremacy/nationalism/separatism. Statements that outwardly praise, support, or represent these ideologies will be removed. However, more coded or implicit expressions of these principles will likely evade detection and removal—at least initially. “Implicit and coded white nationalism and white separatism will not be banned immediately, in part because the company said it’s harder to detect and remove,” Vice’s Motherboard reported upon the statement’s release.

BuzzFeed News revealed an additional quirk of the policy as presently written: Holocaust denial, another hotly contested type of content that frequently proliferates on the site, will not be included in this policy. Rather, statements of this nature will be flagged as “misinformation” (noted, but ultimately permissible), as opposed to hate speech (which is expressly prohibited). The curiously selective methodology Facebook has employed to determine what language is and isn’t allowed has drawn the concern of Kristen Clarke, executive director of the national Lawyers’ Committee for Civil Rights Under Law:

“For too long, Facebook has maintained a policy that carved out an indefensible distinction between white supremacy and white nationalism and white separatism, and that carve-out allowed violent white supremacists to openly exploit the platform to incite violence across the country, and frankly across the globe.”

Why The Shift in Policy?

Clarke’s mention of the global reach of such violence likely refers to the scrutiny Facebook received after last month’s Christchurch mosque attack was broadcast via Facebook Live. Beyond prompting rumored reconsideration of the site’s live-streaming rules, this major shift in Facebook’s thinking is likely tied to the extensive work COO Sheryl Sandberg and others in the organization have been doing with civil rights organizations. Over the past three months, that work has yielded changes to the ad platform’s targeting measures; as this policy rolls out, we will see how it impacts users on a larger scale.

It should be noted that white supremacy has been banned on the site for a few years now. Brian Fishman, Facebook’s Policy Director of Counterterrorism, described the philosophical shift that helped move this new rule forward: “we decided that the overlap between white nationalism, white separatism, and white supremacy is so extensive that we can’t really make a meaningful distinction between them.” Citing the civil rights groups and scholars it has consulted, the company’s blog post affirmed the same conclusion: “It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.”

How Will Facebook Find Offending Content?

As mentioned above, Facebook’s systems are not yet advanced enough to remove all content that could be deemed objectionable under this policy. But according to Adweek, the policy will be enforced with “a combination of machine-learning tools, artificial intelligence, and human review.” To its credit, Facebook appears to be applying techniques similar to those used to scrub the site of content associated with ISIS, Al Qaeda, and other terrorist groups, a sign that the company, unlike other sites or entities, does see this threat as one that could be classified as terrorism.
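Neither Facebook nor the coverage spells out how those pieces fit together, but a common pattern in automated moderation is to let a classifier handle the clear-cut cases and route borderline ones to human reviewers. The Python sketch below illustrates that general triage pattern only; the classifier, thresholds, and phrases are hypothetical stand-ins, not anything Facebook has described.

```python
from dataclasses import dataclass

# Hypothetical confidence thresholds; a real system would tune these per policy.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60


@dataclass
class Post:
    post_id: str
    text: str


def classifier_score(post: Post) -> float:
    """Stand-in for a machine-learning model that estimates how likely a post
    is to praise, support, or represent a banned ideology (0.0 to 1.0)."""
    banned_phrases = ("white nationalism", "white separatism")  # toy heuristic only
    return 0.99 if any(p in post.text.lower() for p in banned_phrases) else 0.10


def triage(post: Post) -> str:
    """Route a post to automatic removal, human review, or no action."""
    score = classifier_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "allow"


if __name__ == "__main__":
    print(triage(Post("1", "Join our white nationalism reading group")))  # -> remove
    print(triage(Post("2", "Photos from my weekend hike")))              # -> allow
```

The point of the middle band is that ambiguous or coded language, the kind the company admits it cannot yet detect reliably, ends up in front of a person rather than being silently allowed or wrongly removed.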

Among the techniques cited is “content matching,” an algorithmic process that identifies images previously flagged as hateful or otherwise in violation and deletes them. User-flagged content can also be removed, presumably with an option for individual users to flag content as indicative or supportive of white supremacy.
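The reporting doesn’t describe the matching mechanics, but content matching of this kind generally means comparing a fingerprint of each upload against a database of fingerprints from images reviewers have already judged to violate the policy. Below is a minimal sketch of that general approach, assuming a simple byte-exact hash and a hypothetical banlist and file path; production systems typically use perceptual hashes so that re-encoded or cropped copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical banlist: fingerprints of images that reviewers have already
# confirmed as violating the white nationalism/separatism policy.
KNOWN_VIOLATING_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}


def image_fingerprint(path: Path) -> str:
    """Return a fingerprint for an uploaded image.

    A byte-exact SHA-256 keeps the example simple; a real content-matching
    system would use a perceptual hash so near-duplicates also match."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def matches_known_violation(upload: Path) -> bool:
    """True if the upload matches an image previously flagged under the policy."""
    return image_fingerprint(upload) in KNOWN_VIOLATING_FINGERPRINTS


if __name__ == "__main__":
    candidate = Path("uploads/example.jpg")  # hypothetical upload path
    if candidate.exists() and matches_known_violation(candidate):
        print(f"{candidate}: matches known violating content, queue for removal")
    else:
        print(f"{candidate}: no match, falls through to classifiers or user reports")
```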

Will It Work?

It looks good for Facebook to be able to make such sweeping declarations of intent, particularly on an issue that poses so much danger to its users and to society as a whole. After all, they want to be better! They’re listening to their critics! Posters will be directed to anti-hate resources! But as Vice’s reporters rightly point out, “a social media policy is only as good as its implementation and enforcement.”

To effectively eliminate this threat, Facebook must commit—in action as well as in word—to advancing its technology to a point where it can identify explicit and implicit threats. It must continue listening to its users, employees, and experts to get a fuller view of the issues at hand. Clarke said in a statement, “[they] also need to enforce those policies consistently, provide meaningful transparency around any AI techniques used to address the problem, and adequately train its personnel.” And crucially, they need to acknowledge the power that enacting a policy like this implies. Vera Eidelman, staff attorney for the ACLU, offers a caution even as she supports this landmark move on Facebook’s part: “every time Facebook makes the choice to remove content, a single company is exercising an unchecked power to silence individuals and remove them from what has become an indispensable platform.”
