Shadow banning: Why don't some posts appear on your social media feed?
Social media application icons displayed on a mobile phone. (Photo by AFP)
Why is it that on platforms like Facebook, X, Instagram, and
TikTok, some users’ posts don’t show up on their followers’ timelines unless
those followers visit the accounts directly and scroll through?
It has been a much-debated subject for years, increasingly so given
the power of social media algorithms – the opaque systems that decide which
posts land at the top of your feed and which you might never see, based on the content
you interact with or share, the users you engage with most, and what’s trending.
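To make that concrete, here is a minimal, purely illustrative Python sketch of how a feed ranker might combine such signals. The signal names and hand-tuned weights are hypothetical assumptions; real platform rankers are proprietary, machine-learned systems, not simple weighted sums.

    # Hypothetical weights for the three signals described above.
    WEIGHTS = {"topic": 3.0, "author": 2.0, "trending": 1.0}

    def rank_score(post: dict, viewer: dict) -> float:
        """Score one candidate post for one viewer."""
        topic = viewer["topic_affinity"].get(post["topic"], 0.0)     # content you interact with
        author = viewer["author_affinity"].get(post["author"], 0.0)  # accounts you engage with most
        return (WEIGHTS["topic"] * topic
                + WEIGHTS["author"] * author
                + WEIGHTS["trending"] * post["trending"])            # what's trending

    def build_feed(posts: list[dict], viewer: dict) -> list[dict]:
        """Order candidate posts from highest to lowest score."""
        return sorted(posts, key=lambda p: rank_score(p, viewer), reverse=True)

    viewer = {"topic_affinity": {"politics": 0.9}, "author_affinity": {"@activist": 0.8}}
    posts = [
        {"author": "@activist", "topic": "politics", "trending": 0.2},
        {"author": "@brand", "topic": "sports", "trending": 0.9},
    ]
    for p in build_feed(posts, viewer):
        print(p["author"], round(rank_score(p, viewer), 2))  # @activist 4.5, @brand 0.9

Under this toy model, a post a viewer is likely to engage with outranks a merely trending one; whatever lands far down the list may simply never be seen.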
But algorithmic design aside, there is a perception that social
media companies deliberately suppress some posts without notifying account
holders – a practice commonly referred to as ‘shadow banning’ or ‘algorithmic suppression.’
It occurs when a user’s content becomes less visible through
reduced reach, lower ranking in feeds, or exclusion from search results, without
an explicit ban or suspension from a platform.
Unlike account suspensions, shadow bans are subtle, analysts
say, making them difficult to detect.
Earlier this month, popular Kenyan activist Boniface Mwangi complained
on X that he had been shadow-banned on several platforms, accusing President
William Ruto’s administration of “banning keywords and using algorithm
manipulation to reduce my reach”.
“I have over 3 million followers across all social media
platforms, but some posts get fewer than 5,000 views,” he wrote on September 6,
drawing a flurry of comments from followers who said they had seen few of his
posts on their timelines in recent weeks.
Yet Mwangi, who recently announced his bid to run for president in 2027, is not the first; politicians, activists, and
businesses around the world have for years accused social media giants of algorithmic
censorship, even as the workings of those algorithms remain largely opaque.
Generally, social media platforms moderate what is posted in
order to limit content that might get them in trouble with authorities, users,
or advertisers.
This includes cracking down on misinformation or anything
that goes against their respective community guidelines.
X, the platform formerly known as Twitter, says on its website
that it does not shadow ban content: “We don’t shadow ban! Ever. We do rank
posts to create a more relevant experience for you, however, and you’re always
able to see posts from people you follow.”
Following his 2022 buyout of the platform, however, the
American billionaire Elon Musk released internal documents that showed that
Twitter was using “visibility filtering” to limit the reach of certain
accounts, particularly around sensitive or controversial topics.
Meta, which owns Facebook, Instagram, and Threads, has
denied deliberately suppressing particular voices.
Even so, the company last year announced it would begin
limiting political content on users’ feeds – posts “likely to mention
governments, elections, or social topics that affect a group of people and/or
society at large,” it said.
A July 2025 blog post on Meta’s website says the company now
treats political content “more like other types of content on our platforms”
and may recommend it from accounts that people don't already follow.
Facebook, Instagram, and Threads now
have settings that users can update to adjust the amount of political content
they see or have recommended to them.
Instagram’s chief, Adam Mosseri, said this February that the
company does not limit the reach of posts to a user’s own followers, although for recommendations
to non-followers, “there are some instances where we will limit reach, because
we bear more responsibility when we’re showing content from accounts people
don’t follow.”
The platform also has an “Account Status” section where creators
can see if their content is being restricted.
At the same time, Meta acknowledges that posts can be “downranked”
in feeds if they are flagged as misinformation, violate community guidelines,
or come close to prohibited content.
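A hedged sketch of what such downranking could look like in principle: the post is never removed, but a multiplier quietly scales its ranking score down, which is why the effect is so hard for a poster to detect. The flag labels and penalty values below are invented for illustration, not any platform’s actual rules.

    # Illustrative only: a "downranking" step applied after normal scoring.
    PENALTIES = {
        "flagged_misinformation": 0.1,  # heavily demoted, but not removed
        "borderline_content": 0.5,      # "comes close" to violating guidelines
    }

    def visibility_adjusted_score(base_score: float, flags: set[str]) -> float:
        """Scale a post's ranking score down once per moderation flag."""
        for flag in flags:
            base_score *= PENALTIES.get(flag, 1.0)
        return base_score

    # A post scoring 80 with a misinformation flag drops to 8.0:
    # still technically in the feed, just ranked far below unflagged posts.
    print(visibility_adjusted_score(80.0, {"flagged_misinformation"}))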
Google-owned YouTube has previously said that if a video
doesn’t violate the platform’s community guidelines but “comes close” to doing so,
it may be ineligible for algorithmic recommendations.
Meanwhile, TikTok, the popular Chinese video-sharing platform,
maintains that it moderates only to uphold community guidelines, but leaked
internal documents in 2019 showed that the network instructed its moderators to
censor videos criticising, among many other things, China’s socialist system.
Additionally, the documents showed instructions to limit the
visibility of content from users the platform deemed unattractive, including
people with disabilities.
Governments can always reach out to social media companies: Meta,
for instance, hands over data on millions of users’ accounts to governments
upon request “in accordance with applicable law and our terms of service.” This
could be for legal process or emergency disclosure reasons.
It is, however, difficult to prove a government’s role in the
shadow banning of posts by social media users in its country.
Still, some researchers say authorities can influence platforms’
moderation policies, which can result in certain content being quietly
deprioritised.
This is especially true in countries where governments seek to crack
down on free expression or to police certain forms of speech aggressively; there,
platforms must frequently respond to demands to remove material or risk
consequences such as fines or bans.
As a workaround, digital strategists suggest using analytics
tools to monitor engagement metrics and detect unusual drops that could
indicate suppressed visibility, and expanding to multiple social media
platforms to help cushion against algorithmic changes.
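As a rough illustration of what such an analytics check does under the hood, the Python sketch below flags days whose views fall well below a trailing average. The seven-day window and 50 per cent threshold are arbitrary assumptions for the example, not any tool’s actual method, and a sudden dip can have many innocent explanations.

    from statistics import mean

    def detect_drops(daily_views: list[int], window: int = 7,
                     threshold: float = 0.5) -> list[int]:
        """Return indices of days whose views fall below threshold * trailing mean."""
        drops = []
        for i in range(window, len(daily_views)):
            baseline = mean(daily_views[i - window:i])
            if baseline > 0 and daily_views[i] < threshold * baseline:
                drops.append(i)
        return drops

    views = [5200, 4800, 5100, 5400, 4900, 5000, 5300, 1200, 900, 5100]
    print(detect_drops(views))  # [7, 8]: two days stand out as unusual dips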
Internet creators are also advised to build parallel
channels, such as newsletters and websites, where they have more control.

