Instagram’s Fighting Back Against Bullying with an Array of New Features



Mosseri seeks to strike a balance between his strong stance against bullying and moderation aggressive enough to alienate users.



Instagram has garnered a reputation for putting a bright and positive veneer on social media. Its growth has been meteoric, and its potential for revenue has risen along with it. But that veneer is increasingly used to cover a darker side of the platform—a side Instagram head Adam Mosseri is finally ready to combat.

In a recent interview with Time, as well as a product announcement on the company’s blog, Mosseri espoused his goals for lessened bullying on the site. In the process, he revealed two concrete moves the company is making to take power away from bullies. “We can do more to prevent bullying from happening on Instagram,” he said, “and we can do more to empower the targets of bullying to stand up for themselves.”

To Rethink, and To Restrict

Mosseri and Instagram’s commitment “to leading the industry in the fight against online bullying” is taking two forms. The first is an AI-assisted feature designed to head off potentially hateful, hurtful, or offensive comments before they’re posted. As the offending user prepares to submit the comment, an interstitial pops up asking, “Are you sure you want to post this?” This prompt, combined with a reminder to keep Instagram supportive, has reportedly “encourage[d] some people to undo their comment and share something less hurtful once they have had a chance to reflect.” The hope? That, when prompted to really think about what they’re about to put out into the world, the poster will reverse course, softening the inflammatory remark or abandoning it altogether.

In the realm of empowering the targets of bullying, Instagram is preparing to roll out a feature called Restrict. When feedback revealed that the bullied can suffer retaliation for using features like Block and Report, particularly if the interactions are an extension of a face-to-face relationship, this intermediate step was developed and tested. “If you restrict someone,” TechCrunch reports, “their comments on your posts will only be visible to you and them, unless you approve the comment for general consumption. They also won’t be able to see if you’re active on Instagram or if you’ve read their DMs.”

This approach differs notably from a mute function, where these messages would be flat-out invisible to the target; does making the audience smaller for these abusive comments actually minimize their ability to hurt their subject? Instagram believes so.

Lesser-Reported Advances

These are far from the only measures that Instagram is taking, or has taken, to address the problem of bullying on the platform. Indeed, the company already operates “an offensive comment filter that automatically screens bullying comments that ‘contain attacks on a person’s appearance or character, as well as threats to a person’s well-being or health’ as well as a similar feature for photos and captions.” In total, three separate classifiers for bullying behavior are currently operating on the site: one each for text, photos, and videos. They flag violating content once per hour, yet they still miss a good deal and misidentify more. In my view, another feature in the works holds tremendous promise for preemptively identifying opportunities for abuse.

Time reported that an added step to product development is being considered. “Before anything is launched, it is now vetted for all the ways it could be weaponized, including for bullying. And if it becomes clear that a feature is being abused frequently, [Instagram’s head of public policy Karina] Newton says that they will consider taking it away, even if it’s popular.” This addition to the development process could have huge implications for how a platform creates an ecosystem. Instagram and its parent company Facebook, as well as other tech companies across the wider landscape, seem only now to be realizing that not all user experiences are created equal.

For those who are disproportionately bullied, harassed, or targeted online, this sort of consideration matters; Instagram’s stated commitment to thinking these challenges out in advance could—if it’s done with a diverse panel of stakeholders—head off some problematic consequences.

A Welcome, if Tricky, Intervention

Instagram’s willingness to address this problem so directly represents a marked shift in how content and its impact have previously been handled on social media. “For years, internet companies distanced themselves from taking responsibility for content on their platforms,” Time notes. “But as political scrutiny has mounted, executives have struck a more accountable tone.” Mosseri’s tone in his first US interviews since taking the helm of Instagram does acknowledge the difficulty of the task (“I do worry that if we’re not careful, we might overstep”), but his bottom line, as quoted by Time, is unequivocal: “We will make decisions that mean people use Instagram less, if it keeps people more safe.”

Given this dedication to creating a safer and less combative online ecosystem, with any luck Instagram’s usage won’t have to wane, and the platform can reclaim the positive mantle that so many find there.
