Telegram Child Safety Measures: Platform Partners with Internet Watch Foundation to Combat CSAM

Telegram partners with the Internet Watch Foundation to combat child sexual abuse content, marking a major shift in the platform’s approach to moderation and user safety.

Telegram, a messaging platform long criticized for its insufficient moderation of child sexual abuse material (CSAM), has taken a significant step by partnering with the U.K.-based Internet Watch Foundation (IWF). This marks the first time Telegram has collaborated with a major international organization to address such content, signaling a potential shift in its approach to user safety, according to reporting by cybersecurity reporter Kevin Collier for NBC News, published Tuesday.

A History of Controversy
Known for its commitment to privacy and minimal moderation, Telegram has gained a reputation as a platform that some exploit for illegal activities, including the distribution of CSAM. For years, major organizations like the IWF, the U.S. National Center for Missing and Exploited Children, and the Canadian Centre for Child Protection expressed frustration with Telegram’s refusal to cooperate in removing harmful content.

Telegram’s co-founder and CEO, Pavel Durov, has faced increasing scrutiny. In August, Durov was arrested in France as part of a broader investigation into allegations of the platform’s complicity in illegal schemes, including the dissemination of explicit materials involving minors. Although Durov remains free on bail, his arrest appears to have prompted a reevaluation of Telegram’s content moderation policies.

A New Partnership with the IWF
The partnership with the IWF enables Telegram to implement advanced tools to combat CSAM. Using the IWF’s database of known abuse imagery, Telegram will be able to automatically identify, flag, and block such content. The technology will also block links to websites hosting illegal material and detect AI-generated abuse imagery.
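Neither Telegram nor the IWF has published details of how these tools are wired into the platform, so the following Python sketch is only a hypothetical illustration of the general idea: an uploaded image’s hash is compared against a list of known-abuse hashes, and message links are checked against a URL blocklist. The hash value, hostname, and function names are invented for the example, and a real deployment would rely on the IWF’s licensed perceptual-hash and URL lists rather than plain SHA-256.

```python
import hashlib
from urllib.parse import urlparse

# Hypothetical placeholders: real hash lists and URL lists are licensed only
# to vetted partners and are not public. SHA-256 stands in for the perceptual
# hashing such tools actually use.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}
BLOCKED_URL_HOSTS = {"blocked-host.example"}


def should_block_image(image_bytes: bytes) -> bool:
    """Flag an upload whose hash matches the known-abuse hash list."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_HASHES


def should_block_link(url: str) -> bool:
    """Flag a message link whose host appears on the URL blocklist."""
    return urlparse(url).hostname in BLOCKED_URL_HOSTS


if __name__ == "__main__":
    print(should_block_image(b"harmless test bytes"))              # False
    print(should_block_link("https://blocked-host.example/page"))  # True
```

Exact hashing like this only catches bit-identical files; production systems use perceptual hashes precisely so that resized or re-encoded copies still match.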

According to the IWF, this collaboration represents a “transformational first step.” Derek Ray-Hill, the IWF’s interim CEO, emphasized the importance of the partnership, stating, “Telegram can begin deploying our world-leading tools to help make sure this material cannot be shared on the service.”

Remi Vaughn, Telegram’s head of media relations, echoed this sentiment in the IWF’s press release, describing the partnership as a move to “further ensure that Telegram can continue to effectively delete child abuse materials before they can reach any users.”

A Shift in Telegram’s Approach
Telegram has historically resisted moderation efforts, citing its commitment to privacy and free speech. This stance has made it popular among certain groups advocating for minimal censorship but has also drawn criticism from child safety advocates.

Following his arrest, Durov announced plans to “significantly improve” moderation on Telegram, marking a departure from the platform’s earlier stance. The partnership with the IWF demonstrates a tangible effort to follow through on these promises.

Looking Ahead
While the partnership with the IWF is a significant step, experts caution that it is only the beginning. Effective moderation will require sustained commitment and expansion of these tools to ensure they cover all aspects of the platform. Ray-Hill noted, “This is a transformational first step on a much longer journey.”

Telegram’s decision to collaborate with the IWF could set a precedent for other platforms resistant to content moderation. For now, child safety advocates will be watching closely to see if Telegram’s new measures result in meaningful change.

What About Other Apps & CSAM?
Other major social media platforms, such as Facebook, Instagram, Twitter (now X), TikTok, and YouTube, have implemented a range of protections to combat online CSAM. These platforms commonly collaborate with organizations like the U.S. National Center for Missing and Exploited Children (NCMEC), leveraging its CyberTipline to report and remove illegal content. Many also employ advanced technologies, including AI and hashing tools like Microsoft’s PhotoDNA, to detect and block known CSAM before it can spread.
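PhotoDNA itself is proprietary and distributed only under license, so the snippet below is a rough, hypothetical stand-in that uses the open-source imagehash package (pip install imagehash pillow) to show what perceptual-hash matching looks like in principle: hashes of known material are compared against a candidate image’s hash, and small Hamming distances still count as matches so that resizing or re-encoding does not defeat detection. The hash value and threshold are invented for illustration.

```python
from PIL import Image
import imagehash

# Hypothetical blocklist entry; real hash lists are shared only with vetted
# partners. Sixteen hex characters encode a 64-bit perceptual hash.
KNOWN_HASHES = [imagehash.hex_to_hash("ffd8bfc0a0c0e0f0")]

# A small distance tolerates benign edits such as resizing or re-encoding.
HAMMING_THRESHOLD = 5


def matches_known_hash(image_path: str) -> bool:
    """Return True if the image's perceptual hash is near any known hash."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= HAMMING_THRESHOLD for known in KNOWN_HASHES)
```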

For example, Meta (which owns Facebook and Instagram) utilizes a combination of AI and human moderation to scan content and detect harmful material. TikTok has introduced proactive moderation systems that identify and remove CSAM while also working with law enforcement agencies globally to report offenders. YouTube employs machine learning tools to flag potentially harmful videos and trains moderators to review flagged content. These platforms also regularly update their policies and algorithms to adapt to emerging threats, including AI-generated abuse imagery and new methods of exploitation.

Despite these measures, gaps remain. Critics argue that platforms often rely too heavily on automated tools that may not detect more nuanced or disguised content. Additionally, encrypted messaging services like WhatsApp and Signal face challenges in balancing user privacy with content moderation, as their end-to-end encryption makes monitoring for CSAM more difficult. While many platforms have made progress, the fight against online CSAM is an ongoing challenge that requires continuous innovation and global cooperation.
