Short-video platform TikTok has reported the removal of more than 580,000 videos posted from Kenya during the third quarter of 2025 for violating its Community Guidelines, underscoring the scale of its automated moderation efforts in the East African market.
According to TikTok’s Q3 2025 Community Guidelines Enforcement Report, released this week and covering July to September 2025, 99.7 per cent of the offending clips were flagged and taken down before any user reported them, reflecting the platform’s deepening reliance on artificial intelligence systems to identify and remove content that breaches its rules.
The data shows that 94.6 per cent of the removed videos were deleted within 24 hours of being uploaded, indicating that TikTok’s automated tools act rapidly once violating content reaches the platform. Beyond recorded videos, TikTok also interrupted about 90,000 live streams during the same period for guideline violations, approximately 1 per cent of all live broadcasts in Kenya.
The latest report forms part of TikTok’s broader transparency initiative, which aims to detail how the company enforces its policies on safety, harm reduction, and content violations across its global user base.
Global Moderation Trends Mirror Local Patterns
The Kenyan figures come against a backdrop of intensified moderation worldwide. Across all TikTok markets, the platform removed more than 204 million videos in the third quarter of 2025, accounting for roughly 0.7 per cent of all uploaded content. Of those, 99.3 per cent were deleted proactively before receiving user reports, and 94.8 per cent were taken down within 24 hours.
Beyond video content, TikTok reported large-scale removals of problematic user accounts, including more than 118 million fake profiles created to manipulate engagement metrics and more than 22 million accounts suspected of belonging to children under the age of 13. These removals form part of measures aimed at protecting platform integrity and complying with child safety standards.
TikTok said its investment in automated moderation technologies enabled a record 91 per cent of all violative content to be removed by AI systems, with human moderators handling more complex or nuanced cases that require contextual judgement.
In a statement accompanying the report, the company said integrating advanced automated tools with thousands of trust and safety professionals helps ensure rapid, consistent enforcement of content rules, even as harmful material evolves in form and sophistication.
TikTok’s enforcement disclosures also highlighted new features introduced in late 2025, including Time and Well-Being tools designed to help users, particularly younger audiences, manage screen time and engage more mindfully with content. These initiatives form part of ongoing efforts to balance engagement with safety amid growing concerns around harmful content, misinformation, and the impact of algorithmic distribution.