Telegram Remains a Haven for Hate Speech Shielded by Corporate Secrecy
We know too little about the platform providing 'bulletproof' chat services to the Internet's worst actors
A recent article published in Mother Jones, titled "How Telegram Became the Center of the Internet," concerns me because of its unfairly flattering portrayal of the social media company and platform Telegram.
While the author acknowledges that the platform’s unmoderated nature has made it a haven for criminals, extremists, and foreign actors seeking to spread disinformation and propaganda, the article fails to adequately examine Telegram's lax moderation policies themselves. It is important to understand the role those policies play in enabling crime, psychological operations, and the general perpetuation of harmful conspiracy theories aimed at the West. The author’s suggestion that the platform has become the “spiritual home” of the Internet is outrageously facile and laughable on its face, until you consider the myriad conspiracy theories with a theological bent that are disseminated on the app.
The article also lacks any critical analysis of Telegram's business model, funding, or origins.
As Professor Megan Squire noted with concern in her article “Alt-Tech & the Radical Right, Part 3: Why Do Hate Groups and Terrorists Love Telegram?”:
Unlike other social platforms, Telegram charges no fees, offers no advertisements, and has no apparent business model outside of accepting donations from its founders.
Examining Telegram's financial structure and its backers could provide valuable insight into the platform's motivations and potential biases. As you may be aware, Telegram and VKontakte have linked origins: both were founded by Pavel Durov. Reportedly, the founders set out to make Telegram ‘bulletproof’, or takedown-resistant, after leaving Russia.
According to The Washington Post:
To make that happen, they registered a network of shell companies around the world, the better to avoid taxes, contract with local data centers, and disguise the app’s true ownership.
Publicly, Durov has also said this arrangement was intended to deter subpoenas and other requests from government.
Telegram's use of shell companies makes it difficult to hold the company accountable for its actions, as it is unclear who is ultimately responsible for the platform's decisions, or which government has the jurisdiction to hold it to account.
While the Mother Jones article did not touch on Telegram's legal structure or how it shields the platform from accountability, I hope another investigative journalist will probe these areas more deeply.
Further, the organization’s Russian founding and its founders’ previous association with the Kremlin raise serious concerns about the platform's potential for misuse by state actors. Telegram's hands-off approach to moderation has raised widespread concerns that the platform may be catering to specific political or ideological agendas. The potential motives behind Telegram's lax policies deserve exploration.
As the Carnegie Endowment for International Peace recently described in its September 2023 article on social media-linked violence in Myanmar:
If these three areas are gaps in Meta’s content moderation, Telegram takes those inadequacies to an extreme: the company’s small production team almost certainly lacks the capacity and the competency to moderate content from its rapidly growing global user base of over 700 million monthly active users—and it does not even try.
[Telegram] explicitly presents itself as a free speech alternative to other platforms and consequently not only has very limited platform guidelines but also seems to neglect rules that do exist.
Although [Facebook] bans explicit calls to violence and reportedly took down hundreds of calls to violence in the wake of the January 6 attack (which was largely organized over its platform), Telegram and many other fringe platforms are renowned for letting hateful content go largely unchecked.
While Telegram provides some basic reporting capabilities, such as the ability to flag illegal content, these features are mere fig leaves: they rarely result in visible moderation activity, and no U.S. law enforcement entity can claim an effective investigative capability on Telegram, owing to a general lack of cooperation from the platform's owners.
At least some of this appears intentional, as the company has demonstrated the ability to detect and segregate harmful content when doing so is required for listing in U.S. app stores. According to the cybersecurity company Avast, in a 2021 blog post by David Strom titled “Hate speech on Telegram is on the rise”:
To get approval for its app on Google Play and the App Store, Telegram has put in place self-censorship “flags” so that mobile users can’t view the most heinous posts. However, all of this content is easily viewed in a web browser.
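To see why such platform-scoped flags are so easily sidestepped, consider the following minimal sketch in Python. Everything in it is hypothetical: the field name restricted_platforms, the render function, and the sample data are illustrative assumptions rather than Telegram's actual API. The point it demonstrates is architectural: when a restriction is enforced only by the viewing client, and the server delivers the same content to every client, any client that ignores the flag sees everything.

from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    # Hypothetical: platforms whose official apps are asked to hide
    # this post, e.g. to satisfy mobile app-store review.
    restricted_platforms: set = field(default_factory=set)

def render(post: Post, client_platform: str) -> str:
    # Enforcement happens here, in the client, not on the server.
    # The server still delivers the full post to every client.
    if client_platform in post.restricted_platforms:
        return "[This content is unavailable on your platform.]"
    return post.text

post = Post("heinous channel post", restricted_platforms={"ios", "android"})
print(render(post, "ios"))  # mobile app shows only a placeholder
print(render(post, "web"))  # a web browser shows the post in full

In other words, the “flag” changes presentation, not availability; the underlying content remains served to all, which is consistent with Avast's observation that the same posts hidden in the mobile apps are easily viewed in a web browser.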
I believe these issues merit a more thorough investigation, don’t you?