Shadowbanning: Real, Rumor, Or Just Poor Content?
Do you suspect some of the social media platforms you use of implementing shadowbanning?
Shadowbanning has become a hot topic over the last 10 years as more and more political discussions, campaigns, causes and entertainment move online.
Each platform has its own set of rules that determine what sort of content it allows. But how far should platforms take content moderation? Are they willing to shadowban users in an effort to control certain narratives?
In this post, we look at suspected occurrences of shadowbanning, the evidence for its existence, and how content moderation and visibility filtering play huge roles in what we perceive as “shadowbanning.”
What is shadowbanning?
Shadowbanning, sometimes referred to as “stealth banning,” is a term whose definition changes depending on who you ask.
In its strictest sense, it refers to a practice in which a social media platform prevents a user’s posts from being visible to other users even though those posts still appear normal to the original poster.
This is why the practice is called “shadowbanning.” Instead of outright banning the user, the platform lets them keep posting while quietly preventing its algorithm from boosting those posts.
Even worse, the platform might hide the user’s posts from the rest of the platform entirely, so they don’t appear even when other users browse the profile directly.
This means that although the user can carry on publishing content as normal, that content no longer receives views or engagement.
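For the technically curious, here’s a bare-bones sketch of that strict definition in Python. Nothing here reflects any platform’s actual code; the function and field names are made up purely for illustration.

```python
# A minimal illustration of the strict definition of shadowbanning:
# the shadowbanned author still sees their own posts; everyone else
# does not. Purely hypothetical, not any platform's real code.

def post_visible_to(viewer_id: int, author_id: int, author_shadowbanned: bool) -> bool:
    """A shadowbanned author still sees their own posts; other users don't."""
    if viewer_id == author_id:
        return True  # the poster sees nothing unusual
    return not author_shadowbanned

print(post_visible_to(viewer_id=1, author_id=1, author_shadowbanned=True))  # True
print(post_visible_to(viewer_id=2, author_id=1, author_shadowbanned=True))  # False
```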
Is shadowbanning real?
Before we share our evidence, we can cut a long explanation short by saying that yes, shadowbanning is real.
Multiple platforms have been caught engaging in the practice. Some even actively use it as a form of content moderation.
The line between content moderation and shadowbanning is often blurred because most platforms don’t see certain forms of content moderation as shadowbanning.
They see it as a necessary precaution to take against users who create certain kinds of content, use certain kinds of language or outright violate community guidelines.
And among the platforms that do limit reach, many neither inform users when they do so nor call it “shadowbanning.”
We’ll expand on this more later. For now, let’s go over a few instances in which social media platforms seemed to be shadowbanning users.
The Twitter Files
The Twitter Files are, without a doubt, the biggest controversy surrounding shadowbanning.
Conservatives had long suspected Twitter of suppressing right-wing voices on the platform. As such, when Elon Musk bought the company in 2022, he turned over some of its internal documents to a handful of journalists and authors.
This turnover occurred in a series of installments between December 2022 and March 2023. The documents in question have been given the name “The Twitter Files.”
Most of these cases came to light when journalist Bari Weiss published her findings in the second installment of The Twitter Files on December 8, 2022.
She discovered that several conservative accounts had internal labels, including “Trends Blacklist,” “Search Blacklist” and “Do Not Amplify.”
Accounts assigned these labels would not have their posts appear in the Trends section of Twitter (now X) or in the platform’s search results.
“Do Not Amplify” prevented a user’s posts from appearing in other users’ home feeds.
The company calls these labels “visibility filtering,” or “VF.”
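To make the mechanics concrete, here’s a minimal Python sketch of how label-based visibility filtering could work. The three label names come straight from the Twitter Files; the function, data shapes and surface names are our own invention, not Twitter’s code.

```python
# The label names are real (reported in the Twitter Files); everything
# else here is a hypothetical illustration.

TRENDS_BLACKLIST = "Trends Blacklist"
SEARCH_BLACKLIST = "Search Blacklist"
DO_NOT_AMPLIFY = "Do Not Amplify"

def eligible_surfaces(account_labels: set[str]) -> set[str]:
    """Return the surfaces where a labeled account's posts may still appear."""
    surfaces = {"profile", "trends", "search", "home_feed"}
    if TRENDS_BLACKLIST in account_labels:
        surfaces.discard("trends")
    if SEARCH_BLACKLIST in account_labels:
        surfaces.discard("search")
    if DO_NOT_AMPLIFY in account_labels:
        surfaces.discard("home_feed")
    # The account's own profile stays visible, which is why the
    # user may never notice anything is wrong.
    return surfaces

# An account tagged "Do Not Amplify" still appears everywhere except home feeds:
print(eligible_surfaces({DO_NOT_AMPLIFY}))
```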
In her report, Weiss stated a senior Twitter employee said to “think about visibility filtering as being a way for us to suppress what people see to different levels. It’s a very powerful tool.”
She also stated a Twitter engineer told her that “we control visibility quite a bit. And we control the amplification of your content quite a bit. And normal people do not know how much we do.” Two additional Twitter employees confirmed these remarks.
The documents even revealed an internal group at the company that was in charge of deciding whether or not to limit an account’s visibility.
It was called the Strategic Response Team – Global Escalation Team, or SRT-GET, and it handled up to 200 cases per day.
Above that group sat a more senior group called the Site Integrity Policy – Policy Escalation Support, or SIP-PES. Among its members were Twitter executives Vijaya Gadde (Head of Legal, Policy and Trust), Yoel Roth (Global Head of Trust & Safety), and Jack Dorsey (CEO).
Some accounts were given the internal label “Do Not Take Action on User Without Consulting With SIP-PES,” meaning moderation teams were not allowed to decide the fate of such accounts without consulting senior members of the company.
Before we move on, it’s worth noting that, as revealing as The Twitter Files were, many have criticized the way they were released.
The full documents are not available to the public. This means that although we now know that Twitter was, in fact, limiting the visibility of conservative posts on the platform, we don’t know if they were also limiting visibility of left-leaning voices or to what degree.
We also don’t know exactly what led to these accounts receiving the internal labels they received.
Even so, the files demonstrate a social media platform’s ability and willingness to limit an account’s visibility.
Note: It’s entirely likely that visibility filtering has changed since the platform became more right-leaning.
TikTok limiting visibility of political content
Whether the following findings count as shadowbanning depends on your definition of the term.
The strict definition describes a situation in which a user can still post as normal, but their content no longer shows up for other users, even when those users look for it directly, such as by visiting the poster’s profile.
The examples we found describe situations in which TikTok limited visibility on individual videos or hashtags rather than limiting visibility on entire profiles.
We’ll start with our most recent example.
In May of 2025, TikTok creator Dylan Page published a video about Ibrahim Traoré, the president of Burkina Faso.
The video eventually returned to the platform and now has over 11 million views, but it disappeared from Page’s profile for at least a week after earning over 6 million views in a little over a day.
Page had this to say after users accused him of deleting the video:
“If you go on my profile, for me, it shows up. There it is. Right there. It has, however, been stuck on 6 million views for awhile and I probably don’t talk about this enough, to be honest with you, but TikTok does this sometimes. They hide the video for certain people, in certain regions and stuff like that… And it’s not even just with this one. There’s probably a lot more videos, in fact, I know there’s a lot of videos that you guys have never seen because of this issue.”
So, while the video wasn’t outright banned or deleted from the platform, its visibility was limited for users in certain regions, including the United States, where I reside.
After watching Page’s follow-up about the disappearance, I couldn’t find the original video on his profile, despite having first seen it on my For You Page.
In a report released in 2020, the Australian Strategic Policy Institute (ASPI) discovered that TikTok was suppressing and outright shadowbanning videos that used specific hashtags.
TikTok videos that used certain LGBTQ-related hashtags were being suppressed in eight languages. Even worse, they were suppressed in TikTok’s code in exactly the same way as content related to terrorist organizations, illegal substances and profanity.
Videos with hashtags critical of Russian president Vladimir Putin and former Indonesian president Joko Widodo were shadowbanned. Instead of deleting these videos, TikTok suppressed their visibility: they still appeared on the platform for the users who uploaded them, but other users could not view or even find them.
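Here’s a hypothetical Python sketch of that kind of hashtag-level suppression: videos carrying a suppressed hashtag stay visible to their uploader but are hidden from everyone else. The hashtag list and function names are placeholders, not TikTok’s code.

```python
# Placeholder hashtag; ASPI's report named real ones, but the
# mechanism is what matters here.
SUPPRESSED_HASHTAGS = {"#exampleblockedtag"}

def discoverable(video: dict, viewer_id: int) -> bool:
    """Hide suppressed videos from everyone except their own uploader."""
    if viewer_id == video["uploader_id"]:
        return True  # the uploader still sees the video and suspects nothing
    return not (set(video["hashtags"]) & SUPPRESSED_HASHTAGS)

video = {"uploader_id": 7, "hashtags": ["#exampleblockedtag"]}
print(discoverable(video, viewer_id=7))  # True  (the uploader)
print(discoverable(video, viewer_id=8))  # False (everyone else)
```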
TikTok’s response to the report stated that the suppression was due to the platform’s “localised approach to moderation” since “some terms that the ASPI provided were partially restricted due to relevant local laws.”
They also stated that some hashtags were suppressed “because they were primarily used when looking for pornographic content.”
What social media platforms have to say about shadowbanning
Some instances of shadowbanning aren’t obvious enough to warrant a full report. Even so, most social media platforms have been accused of shadowbanning at one point or another.
Let’s talk about how platforms have responded to such accusations.
Twitter (X)
In 2018, Twitter users noticed that specific accounts were not showing up in the platform’s search auto-suggestions, even when their account names were typed directly into the search bar.
Users accused the platform of shadowbanning these accounts.
In response to these accusations, Twitter stated that they “do not shadow ban,” and that “you are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile).”
They also explained that they do, in fact, rank tweets, and that their ranking models may lead to situations where certain tweets and accounts receive more visibility in feeds than others.
As for the auto search issue, they clarified that it was only happening to auto suggestions. Users who weren’t appearing as search suggestions were still appearing in search results themselves, and their tweets still showed up when other users viewed their accounts.
They gave these explanations for the issue:
- It affected hundreds of thousands of accounts.
- No single political party or geographic location was targeted.
- *Certain accounts may have been affected because of the way users interacted with those accounts.
*Twitter explained that certain communities were trying to boost each other’s engagement through coordinated efforts. They suspected that some of the accounts missing from auto search were on the receiving end of these efforts, and because of the way the platform implemented auto search at the time, the combination of the two caused these accounts to stop showing up as search suggestions altogether.
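A short Python sketch helps illustrate the distinction Twitter was drawing: flagged accounts drop out of typeahead suggestions while remaining in full search results. Every name and data shape here is invented for illustration.

```python
# Hypothetical sketch of the 2018 behavior Twitter described: accounts
# flagged for coordinated engagement are skipped by typeahead but still
# appear in full search results.

def typeahead_suggestions(query: str, accounts: list[dict]) -> list[str]:
    """Suggest handles as the user types, skipping flagged accounts."""
    return [
        a["handle"]
        for a in accounts
        if a["handle"].startswith(query) and not a["flagged"]
    ]

def full_search(query: str, accounts: list[dict]) -> list[str]:
    """Full search results: flagged accounts still appear here."""
    return [a["handle"] for a in accounts if query in a["handle"]]

accounts = [
    {"handle": "example_news", "flagged": True},
    {"handle": "example_user", "flagged": False},
]
print(typeahead_suggestions("example", accounts))  # ['example_user']
print(full_search("example", accounts))            # both handles
```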
In December of 2022, X owner Elon Musk claimed the platform was working on a tool that would let users see when they’ve been shadowbanned and give them a way to appeal.
This tool has yet to be released but still seems to be in development as of 2025.
Instagram
While Instagram won’t admit to outright shadowbanning users, the platform does let you check whether your account is ineligible to be recommended to other users. This is called the Account Status feature.
With this feature, you can see exactly which posts are causing the issue. You can also edit or remove content that violates the platform’s community guidelines from here and submit requests to have Instagram’s support staff review content you feel should not be suppressed.
Instagram also engages in a content moderation technique called “deprioritization,” in which the recommendation algorithm is instructed to surface a piece of content less often.
Social media platforms, including Instagram, use this technique to suppress posts that do not violate community guidelines but are deemed “not appropriate” for a platform’s “global community.”
Instagram, in particular, suppresses such content from the Explore and search feeds. However, followers of these accounts can still see deprioritized content in their home feeds.
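Here’s a rough Python sketch of deprioritization as described above: the same post keeps its normal ranking score for followers but is down-weighted on discovery surfaces. The field names and the decision to zero out the score are assumptions, not Instagram’s actual values.

```python
# Illustrative only: deprioritized content is hidden from discovery
# surfaces but ranks normally in followers' home feeds.

def ranking_score(post: dict, surface: str, base_score: float) -> float:
    """Adjust a post's ranking score for a given surface."""
    if post.get("deprioritized") and surface in ("explore", "search"):
        return 0.0  # effectively invisible in discovery
    return base_score  # followers' home feeds are unaffected

post = {"id": 123, "deprioritized": True}
print(ranking_score(post, "explore", 0.9))    # 0.0
print(ranking_score(post, "home_feed", 0.9))  # 0.9
```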
In direct response to accusations of shadowbanning, Instagram has stated that they have a system for ranking content that controls which content the algorithm recommends to other users.
They said that although this system might lead to some posts being ignored by the algorithm, causing inconsistent engagement, it’s not the platform’s intention and that it’s in their best “interest as a business to ensure that creators are able to reach their audiences and get discovered so they can continue to grow and thrive on Instagram.”
Facebook
Facebook actively relies on a deprioritization technique it calls “content demotion.” Content that’s flagged for demotion is not deleted, but Facebook greatly reduces how often it appears in Feed.
Your content might get demoted if it’s sexually suggestive, hateful, low in quality, filled with profanity or contains gore.
Your content might also face demotion if you regularly violate the platform’s community standards.
Most demotions apply globally, but Facebook states that it can also scope them to a specific region or to specific “critical events.”
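As a rough illustration, here’s how scoped demotion might look in Python: a demotion applies either everywhere or only to viewers in the listed regions. The multiplier and data shapes are assumptions, not Facebook’s real numbers.

```python
# Hypothetical sketch of globally vs. regionally scoped demotion.

def feed_score(base_score: float, demotion: dict | None, viewer_region: str) -> float:
    """Reduce a post's Feed score if a demotion applies to this viewer."""
    if demotion is None:
        return base_score
    regions = demotion.get("regions")  # None means the demotion is global
    if regions is None or viewer_region in regions:
        return base_score * demotion["multiplier"]
    return base_score

demotion = {"multiplier": 0.2, "regions": {"US"}}  # a regional demotion
print(feed_score(1.0, demotion, "US"))  # 0.2 (demoted for US viewers)
print(feed_score(1.0, demotion, "FR"))  # 1.0 (unaffected elsewhere)
```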
How to avoid shadowbanning
Most social media platforms are not as transparent as Instagram when it comes to content moderation. This is why many cases of content moderation turn into conspiracy theories about shadowbanning.
Even so, we can conclude that politically charged content, and content that includes illicit behavior or imagery, is more likely to face content moderation than other forms of content.
This includes what’s known as “borderline content”: content that isn’t illicit but comes close. An example would be an image that contains no nudity but is sexually suggestive.
To avoid facing this level of moderation, it’s best to steer clear of profanity, illicit imagery, hate speech and claims that aren’t based on fact.
Overall, your content should be consistent both in quality and type. This will help you avoid getting deprioritized based on audience interest.
To put it simply, if you suspect your social media account is the victim of shadowbanning, you’re probably experiencing content moderation, deprioritization or visibility filtering, whatever you want to call it.
In that case, take a good, hard look at your content and determine whether something in it is causing the algorithm, or your audience, to ignore it or lose interest.
