Telegram stepped up its moderation in 2024, using advanced AI technology to delete 15.4 million groups and channels that shared harmful content.
Since Pavel Durov's arrest in France over allegations that the app hosted harmful material, the platform has tightened its security. At the very least, it is doing what has long been needed.
AI Moderation in Telegram
The company credits its massive crackdown to advanced AI moderation tools designed to identify and eliminate problematic content efficiently. As TechCrunch reports, they target everything from fraudulent schemes to terrorism-related content so that the platform becomes a safer space for its millions of users.
Telegram revealed these efforts on a newly launched moderation page created to enhance transparency about its content management processes. The page reveals a significant spike in enforcement activity since Durov's arrest in August 2024.
A Timeline of Escalated Moderation
In September 2024, Telegram formally declared its commitment to stricter content regulation. Since then, the platform has responded aggressively to complaints of harmful content.
In a message published on his official Telegram channel, Durov stressed the platform's responsibility to provide a safe space for users.
Telegram Only Did What Should Be Done
Durov's arrest put Telegram in the world spotlight, forcing the platform to act more decisively to meet legal and ethical expectations. The use of AI moderation is not only a technological step forward but also a public relations move toward rebuilding trust with users and international regulators.
This does not mean harmful content will stop circulating on the platform entirely, but such content is now being minimized.
Moderation Efficiency Improved through AI
AI-powered moderation tools let Telegram scan vast amounts of data in real time, flagging and removing harmful content far faster than any human process could. They are also effective at detecting subtle violations, such as coded language or obfuscated threats, that may slip past traditional moderation.
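Telegram has not published the details of its system, but the basic flagging pipeline that automated moderation tools implement can be sketched in a few lines. The patterns, weights, and threshold below are purely illustrative assumptions invented for this example, not Telegram's actual rules or method.

```python
import re

# Illustrative rule set -- these patterns and weights are invented for
# demonstration and are NOT Telegram's actual moderation rules.
RULES = [
    (re.compile(r"guaranteed\s+returns", re.I), 0.6),      # scam-style language
    (re.compile(r"double\s+your\s+money", re.I), 0.8),
    (re.compile(r"free\s+crypto\s+giveaway", re.I), 0.7),
]
FLAG_THRESHOLD = 0.7  # hypothetical cutoff for queuing a message for removal


def score_message(text: str) -> float:
    """Sum the weights of every rule the message matches."""
    return sum(weight for pattern, weight in RULES if pattern.search(text))


def should_flag(text: str) -> bool:
    """True if the message's score crosses the removal threshold."""
    return score_message(text) >= FLAG_THRESHOLD


messages = [
    "Join now for a free crypto giveaway, double your money overnight!",
    "Meeting moved to 3pm, see you there.",
]
flags = [should_flag(m) for m in messages]
print(flags)  # the first message is flagged, the second passes
```

Real systems replace the hand-written regex rules with trained classifiers, but the pipeline shape (score each message, compare against a threshold, queue hits for removal or human review) is the same.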
The Reputation Impact on Telegram
The removal of 15.4 million harmful groups and channels is considered one of Telegram's biggest milestones in responsible content regulation. However, critics see it as a move to contain Durov's legal troubles, while supporters believe it reflects a genuine shift toward meeting global standards.
Given that some Telegram groups have trafficked in sexually explicit videos and images, scams, and other dark schemes, the platform's shift toward AI moderation tools is commendable.
The system is not perfect by any means, but it attempts to balance user privacy with the need for effective content control. That is a real help for users who just want to use the messaging app for genuine connection.
With over 700 million users worldwide, Telegram's success in cleaning up its platform will be crucial to its reputation and to providing a secure digital space for its growing community.
© Copyright 2024 Mobile & Apps, All rights reserved. Do not reproduce without permission.