Social Media Censorship Needs To Be Changed Very, Very Soon



Social Media Week is a leading news platform and worldwide conference that curates and shares the best ideas and insights into social media and technology's impact on business, society, and culture.


Why do a majority of social media platforms allow abusive, violent, pornographic, and racist imagery to appear in our feeds?
When will they monitor this content, and how will they control who sees it?

It is vital that we track and measure the filters our colleagues, peers, and family see; without that, we are just savages of our own minds. As I browse through my Facebook, Twitter, and Tumblr feeds, I notice the way they morph into a sub-reality of our thoughts and imagination. We have become what we comment on. And in this phase of our hyper-digital identities, we barely take time to realize the impact it will have on our children, the elderly, or our community.

Even though my Tumblr feed boasts a selection of imagery based on the artistic and cultural preferences I have filtered into it, I am often shocked at how free the platform is when it comes to nudity. It’s as if these visual brands cannot exist without a bit of shock factor. Some days I see more blood, more sex, and more violence there than on the news. Often toxic and open to interpretation, each image has to be taken at face value. But an image doesn’t stand alone as is; it asks to be commented on. And what is an image without a comment? Only the start of a piece of content’s viral life.

I researched Tumblr’s policies and found that many people are concerned about how the platform censors NSFW and adult content. Tumblr essentially explains that it relies on its flagging and filtering features. You can now avoid such blogs by enabling safe mode, so that neither I nor anyone else will come across them, though the blogs will still be promoted in third-party search engines. But how many children are going to change their settings to enable safe mode and filter out bigotry, defamation, pornography, or content about mental illness? They won’t, and that is where my concern lies: they are so overexposed to this content that it will affect how they approach their real lives versus their virtual ones.

I have often been spammed on both Facebook and Twitter with seriously uncomfortable imagery: posts tearing apart the church with acts of aggression, or women sharing body parts with the public that should not be seen on social media platforms. The posters have gone to great lengths to tag me, or the company or brand I work for, so that I am aware of such content. I can’t fathom how these platforms let it slip onto the web without making a scene, while Instagram will remove a fashion editorial’s image over a nip slip. Where are we going with our censorship, and how will we get there? Will social media platforms be on the same page going forward, or has it come down to each their own?

I want safer visibility and a safer experience for our next generation. Millennials and born-frees have become accustomed to this mindset and may have become desensitized to such content, but do we want the next generation to completely detach from their virtual selves, or to feel again?

What content do you think needs censorship and how do we communicate the urgency to these social media platforms?
