4 Major Brands and Platforms Addressing Digital Literacy and Fake News in 2020
Spanning policy updates, literacy campaigns, challenges and visual databases, here’s how four major brands and platforms are tackling deepfakes and fake news in 2020.
Join us for #SMWONE May 5 - 28, 2020 and hear from 300+ speakers across 150 sessions.
Most marketers recognize the problems posed by fake news and "deepfake" techniques: they skew the information we're exposed to and make it harder to separate fact from fiction. We face a critical point in our industry, with many brands and platforms under increasing pressure to set a benchmark for detecting this kind of manipulated content.
Here are a few that are taking action in 2020.
Tumblr’s Digital Literacy Initiative “World Wide What”
With the 2020 election on the horizon, social media platforms are making moves to update their strategies to curb the spread of misinformation. The latest to join the bandwagon is Tumblr, which recently launched an internet literacy campaign designed to help younger demographics entering the voting scene spot fake news and unsavory posts.
The initiative, World Wide What, was developed in partnership with UK-based internet literacy organization Ditch the Label. The campaign emphasizes six core community topics, presented in video form: fake news and skewed views, authenticity, cyberbullying, the importance of minimizing screen time, how much we share online, and creating a safer internet through moderation.
Unlike traditional literacy materials, the platform is tapping into visual, culturally fluent messaging such as GIFs, memes, and short text in line with imagery native to the Tumblr brand. Videos will also feature outside experts and industry leaders tackling specific subjects through a series of Q&As in the coming weeks and months.
“We are constantly striving to learn and utilize new ways to create a safe place for our communities,” Tumblr shared in a statement on the World Wide What site.
Google x Jigsaw Visual Database of Deepfakes
In September 2019, Google tapped Jigsaw in an effort to develop a dataset of visual deepfakes aimed at boosting early detection efforts. The tech giant worked with paid, consenting actors to record hundreds of videos, which were then crafted into deepfakes. The final products, including both real and fake videos, were incorporated into the FaceForensics benchmark maintained by the Technical University of Munich and made widely available for testing synthetic video detection methods.
Fast forward to November: Jigsaw built on this momentum by releasing what it refers to as "the largest public data set of comments and annotations with toxicity labels and identity labels." The goal of incorporating these labels is to more accurately measure bias within AI comment classification systems. Traditionally, such systems are evaluated with synthetic data generated from template sentences, which often fails to capture the complexity and variety of real comments.
"By labeling identity mentions in real data, we are able to measure bias in our models in a more realistic setting, and we hope to enable further research into unintended bias across the field," Jigsaw shared in a recent Medium post. The key in the ever-evolving deepfake tech space will be a healthy and growing research community.
Twitter Policies Targeting Synthetic and Manipulated Media
Twitter is looking to its community for support in fleshing out its strategy for addressing synthetic and manipulated media, which the company defines as "any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning."
As a draft to its policy, the platform has outlined that it will:
- Place a notice next to Tweets that share synthetic or manipulated media
- Warn people before they share or like Tweets with synthetic or manipulated media
- Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated
The platform also vowed to remove any deepfake believed capable of threatening someone or leading to serious harm. This raises the question of how it would handle manipulated media that spreads a falsehood without technically causing harm, or that uses newer creation methods that outpace current detection techniques.
To garner feedback from users, the platform created a multiple-choice survey addressing the broader question of removing versus flagging (e.g. should altered photos and videos be removed, carry warning labels, or be left up entirely). The survey has since closed, and the platform is reportedly working on an official policy that will be announced 30 days prior to rollout.
Facebook’s “Deepfake Challenge” and Ban
This past fall, Facebook teamed up with Amazon Web Services (AWS), Microsoft, and academics from Cornell Tech, the University of Oxford, UC Berkeley, the University of Maryland, and SUNY Albany to launch the Deepfake Detection Challenge. The DFDC, as it's referred to, includes a data set of 100k+ videos featuring paid actors, plus grants and awards, all aimed at inspiring new ways of detecting and preventing AI-manipulated media.
The DFDC will run through the end of March 2020, with the goal of "producing technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer." According to the official website, a winner will be determined based on "a test mechanism that enables teams to score the effectiveness of their models, against one or more black-box tests from our founding partners."
"'Deepfake' techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online," shared Facebook CTO Mike Schroepfer in a recent blog post.
In addition to these efforts, the platform followed up with a new policy that would remove synthesized or edited content in ways that “aren’t apparent to an average person and would likely mislead,” or deepfake posts that use AI technologies to “merge, replace, or superimpose content onto a video, making it appear authentic.”
Again, the question becomes how we as an industry will move forward walking the fine line between malicious deepfakes and those with less-harmful intents of creative parodies or satire.
Learn more about this topic as part of our 2020 theme HUMAN.X through the lens of the subtheme Privacy Matters. Read the official announcement here and secure your early-bird discount today to save 20% on your full-conference pass to #SMWNYC (May 5-7, 2020).