Is Twitter’s Political Quarantine an Escape Room? and Other Questions About The New Policy



Accountability is finally coming for politicians who use their platform on Twitter to spread hate and harassment…or is it?



At long last, there appears to be a sheriff in town.

After years of a laissez-faire approach that left potentially dangerous tweets up on the service in the name of newsworthiness, Twitter is reportedly implementing a policy that will make it harder for harmful content to spread. But upon learning of its details, we have more than a few questions. We ask and answer some of them here, while also acknowledging that some of them are incredibly challenging to address.

So what’s the new rule?

According to The Verge, the new policy will do two things to tweets from political figures whose content is deemed dangerous or harmful:

  • The post will merit an interstitial, warning a user accessing it that the content ahead has violated the Terms of Service of the site.
  • The post’s reach will be limited on the site.

The interstitial will reportedly read, “The Twitter rules about abusive behavior apply to this tweet. However, Twitter has determined that it may be in the public interest for the Tweet to remain available.” To view the content, users will have to click past this notice—not unlike similar measures that have been taken on Reddit for r/the_donald (more on that in a moment).

In terms of limited reach, flagged tweets “will no longer appear in Safe Search, the Top Tweets timeline, live event pages, recommended push notifications, or the Explore page.” Put another way, any elevation of the post will have to happen manually via followers; it will receive no algorithmic assistance.

Who does this new rule affect?

Twitter has an incredibly narrow definition of who this policy applies to. Wired reports that applicable accounts must meet all of the following criteria:

  • Verified by Twitter
  • Have more than 100,000 followers
  • “Represent a government official, [are] running for public office, or [are] considered for a government position (i.e. next in line, awaiting confirmation, [or] named successor to an appointed position)”

In interviews on the topic, Twitter CEO Jack Dorsey insists, “a critical function of our service is providing a place where people can openly and publicly respond to their leaders and hold them accountable.”

Is this rule just about…well, you know?

Probably. While Twitter itself hasn’t named who they anticipate applying the rule to, countless reporters have filled in that blank for them. There is precedent for Twitter removing the tweets of a world leader who isn’t named Donald Trump; in February 2019, a tweet traced to Iran’s Ayatollah Ali Khamenei was removed for its threatening nature toward author Salman Rushdie. But that removal actually wouldn’t hold under these new rules, because the account in question isn’t verified through Twitter’s official channels.

It’s also worth noting that these measures are not retroactive, which means that any posts that currently exist and violate this standard would remain untouched. Further, any restrictions this rule would impose do not extend to members of a political administration or surrogates for the individual. So no matter who the rule might apply to, there are still holes that merit further questioning.

Can this rule really limit the reach of these tweets?

On the platform? Maybe. It’s incredibly easy to scroll past content that doesn’t immediately present itself to you.

At the same time, maybe not. Dorsey is right in one way: many of the tweets that this policy would flag do make the news. And that fact will extend their reach far wider than any algorithm ever could. Much has been said about just how small Twitter’s user base is, both in number and representative ideology. But tweets notoriously live a much longer life with the help of the media. And the company’s CEO knows this.

In a 2017 interview, Dorsey said to Wired:

We’re not taking something down that people should be able to report on and actually show that this is what the source said. It’s really important to make sure that we provide the source for the right reporting, and to minimize bias in articles.

Leaving a tweet up, with one additional step to reach it, only slows down individuals reading it on the site. For those accessing the sentiment of these tweets via screenshots plastered across cable news or embedded in news articles, this “speed bump” is far less effective, and unlikely to slow down the spread of this information. So if this is an effort to quarantine the spread of hate or harm, someone should probably check the exits.

Vanity Fair said of the policy, “it’s a potentially major step toward rooting out harmful content on the site, but raises familiar concerns for a company that has long been inept at policing content, and hesitant to do so in the first place.” We have answers to some of the questions that could give this policy teeth, but several more—who determines what gets quarantined? Will embedded tweets also carry the warning? Can a user be removed after a number of these tweets take place? If so, how many?—remain unanswered.
