Can AI be your shield? How Kenyans are using tech to fight cyberbullying

Image used for representation only: code projected over a woman. PHOTO/Pexels

In Kenya’s increasingly connected digital landscape, social media platforms like X, Instagram, TikTok, Facebook, and WhatsApp are spaces of influence, information, and interaction.

But with increased connectivity comes a darker reality: cyberbullying.

As internet penetration surpasses 50 percent and online discourse becomes more charged, especially among vibrant communities like Kenyans on Twitter (KOT), the number of users experiencing online harassment is rising sharply.

In the ongoing battle against cyberbullying, artificial intelligence is fast becoming a frontline defence across Kenya’s digital platforms. From social media timelines to private group chats, AI is now being used to detect and filter harmful language in real time. This shift towards automated moderation is helping reduce exposure to toxic content, offering users a layer of emotional protection before abuse reaches their screens.

Once developed, these systems can be integrated across multiple platforms, making them a scalable solution for tech firms seeking to maintain safer digital environments. AI’s strength also lies in its ability to adapt. With the right training, it can learn to decode local slang and context-specific insults, allowing it to detect more subtle forms of cyberbullying that might slip past human moderators.

Taken together, these capabilities make artificial intelligence (AI) a powerful weapon in the fight against cyberbullying, combining real-time detection, content filtering, and proactive response mechanisms.

In Kenya, where digital expression often treads a thin line between humour and hostility, tech innovators and social media users alike are turning to AI tools to shield themselves from online abuse.

AI as digital shield

AI technologies such as natural language processing (NLP) and machine learning are already being used globally to identify and address harmful online content. These tools analyse massive volumes of user-generated data to flag hate speech, abusive comments, and even coordinated online attacks.
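As a loose illustration of how such NLP-based flagging works (not any platform's actual system), the idea can be sketched as a tiny Naive Bayes text classifier. The training comments below are invented toy data, and a production system would be trained on millions of labelled examples:

```python
# Sketch of an NLP abuse flagger: a minimal Naive Bayes text classifier.
# Training examples are invented toy data, not a real moderation dataset.
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

class AbuseClassifier:
    def __init__(self, examples: list[tuple[str, int]]):
        # examples: (comment, label) pairs with 1 = abusive, 0 = benign
        self.counts = {0: Counter(), 1: Counter()}
        self.doc_counts = {0: 0, 1: 0}
        for text, label in examples:
            self.counts[label].update(tokenize(text))
            self.doc_counts[label] += 1
        self.vocab = set(self.counts[0]) | set(self.counts[1])

    def score(self, text: str, label: int) -> float:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        logp = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        total_words = sum(self.counts[label].values())
        for word in tokenize(text):
            count = self.counts[label][word] + 1
            logp += math.log(count / (total_words + len(self.vocab)))
        return logp

    def is_abusive(self, text: str) -> bool:
        return self.score(text, 1) > self.score(text, 0)

examples = [
    ("you are stupid and worthless", 1),
    ("nobody wants you here, idiot", 1),
    ("what an ugly, pathetic post", 1),
    ("great work, congratulations", 0),
    ("thanks for sharing, very helpful", 0),
    ("love this, keep it up", 0),
]
clf = AbuseClassifier(examples)
```

The same pattern scales up: the models platforms actually deploy swap the hand-rolled classifier for large transformer networks, but the core task of scoring a comment against learned examples of abuse is the same.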

In Kenya, the shift is slowly gaining traction. Platforms are integrating AI-powered moderation systems to filter toxic content before it reaches users.

According to Meta’s Transparency Report, a significant portion of cyberbullying content on Instagram is taken down proactively by AI systems before any user flags it.

On the local front, tech firms like Eveminet Communications are developing AI solutions tailored to Kenya’s digital culture—tools that can understand and contextualise local dialects like Sheng, helping differentiate between playful banter and genuine bullying.

Recent cyberbullying cases

In May 2025, Dagoretti North MP Beatrice Elachi became a target of online vitriol following the tragic loss of her son, Elvis Namenya.

Elachi’s grieving process was hijacked by online trolls—many from Gen Z—who mocked and weaponised her pain. During a local TV appearance on May 29, 2025, Elachi revealed that she had stopped reading online messages altogether and instead relied on aides to filter her communication.

“Gemini has been a game-changer,” Elachi said. “It scans messages and comments, alerting me before things spiral out of control. It gives me peace of mind and allows me to focus on serving my constituents.”

Dagoretti North MP Beatrice Elachi. PHOTO/https://www.facebook.com/profile.php?id=100089274136665

She revealed that, as a coping mechanism against past cyberbullying, she had used AI tools like Gemini and Siri to navigate the digital space without a SIM card.

Elachi said the experience pushed her to stop reading or watching anything said about her online.

Her approach mirrors what AI tools like sentiment analysis software can do: filtering out distressing content before it reaches users. It also demonstrates the pressing need for AI-based digital support for public figures navigating personal crises.

Politics of online backlash

In June 2025, Charlene Ruto, daughter of President William Ruto, found herself at the centre of an online storm after engaging youth leaders on issues of national unity. Her comments, perceived by many KOT users as dismissive, triggered a wave of trolling and character attacks. On June 20, she publicly decried cyberbullying and cancel culture, warning that they could radicalise youth and erode democratic engagement.

In the aftermath, several tech experts suggested that deploying AI-driven chatbots or moderation tools on her social media pages could help flag or auto-remove abusive comments. Her case highlights the role AI could play in managing reputational risks and psychological well-being for individuals in the public eye.

TikTok

TikTok’s updated Bullying Prevention policies, as of January 21, 2025, reflect growing pressure to address rising cases of online harassment, particularly in Kenya, where digital spaces have become battlegrounds for both creativity and cruelty. While the platform has introduced AI-driven content moderation and reporting tools, cyberbullying remains a persistent challenge.

With support from TikTok’s regional policy leads and digital safety advocates, survivors are reclaiming their spaces—some using the very platform that harmed them to raise awareness, educate followers, and advocate for more localised protections in the fight against online abuse.

For TikTok sensation and media personality Azziad Nasenya, cyberbullying has become an unfortunate part of her digital journey. In 2025, after posting a series of dance videos, she was once again subjected to body-shaming and trolling. The backlash reignited the #SayNoToCyberBullying conversation online.

TikTok sensation and media personality Azziad Nasenya. PHOTO/Screengrab by K24Digital from @Azz_iad

In response, some local developers proposed AI tools that could track toxic comments in real time using NLP. Tools like those from Eveminet could automatically detect repeated patterns of hate or coordinated attacks, allowing Azziad and other creators to mute or block offenders preemptively. Her resilience, despite constant online hate, shows how AI could be a valuable line of defence for digital creatives.
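The pattern-tracking idea proposed by those developers could look roughly like the sketch below. The keyword lexicon, time window, and mute threshold are illustrative assumptions for this example, not Eveminet's actual system:

```python
# Hedged sketch: spotting repeat harassment so a creator can pre-emptively
# mute an offender. Lexicon, window, and threshold are illustrative only.
import time
from collections import defaultdict, deque
from typing import Optional

ABUSIVE_TERMS = {"ugly", "worthless", "stupid"}  # placeholder lexicon
WINDOW_SECONDS = 3600                            # look back one hour
MUTE_THRESHOLD = 3                               # 3 flagged comments -> mute

class HarassmentTracker:
    def __init__(self) -> None:
        # user -> timestamps of their recent flagged comments
        self.flagged = defaultdict(deque)

    def record(self, user: str, comment: str,
               now: Optional[float] = None) -> bool:
        """Record a comment; return True if the user should be muted."""
        now = time.time() if now is None else now
        if any(term in comment.lower() for term in ABUSIVE_TERMS):
            timestamps = self.flagged[user]
            timestamps.append(now)
            # drop flags that have aged out of the window
            while timestamps and now - timestamps[0] > WINDOW_SECONDS:
                timestamps.popleft()
            return len(timestamps) >= MUTE_THRESHOLD
        return False
```

A real deployment would replace the keyword check with a trained classifier like the one above and add cross-account correlation to catch coordinated pile-ons, but the sliding-window logic for "repeated patterns of hate" is the core of the idea.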

Kenya’s digital space is both empowering and perilous. KOT, while known for driving civic campaigns like #JusticeForKenyans and #LowerFoodPrices, is also infamous for its cancel culture and mob-style takedowns.

The Communications Authority of Kenya (CA) has stepped in by providing a reporting portal through the National KE-CIRT/CC, allowing users to report cyberbullying for investigation by the Directorate of Criminal Investigations (DCI). However, the process can be slow and emotionally draining for victims. AI tools, especially those designed for the Kenyan context, offer faster mitigation.

Eveminet and similar firms are pushing for AI systems that understand local language use, digital behaviour patterns, and cultural nuances. These efforts seek to ensure that online safety tools don’t simply copy-paste Western solutions but rather reflect Kenya’s digital reality.

However, the same precision that makes AI effective can also be a source of concern. Satirical posts and humorous content—especially common in Kenya’s vibrant online culture—can be wrongly flagged as harmful, raising questions about freedom of expression. In a society where coded language and layered meaning are part of everyday communication, the risk of false positives looms large.

There are also ethical issues around how these systems operate. AI monitoring tools often require access to large volumes of personal data, sparking debate over privacy, particularly when children are involved.

Moreover, while the technology shows promise, it remains largely inaccessible to smaller organisations and grassroots movements due to cost constraints. Most systems in use today are trained on datasets from different cultural contexts, meaning they may misinterpret local phrases or expressions—leading to inaccurate moderation and potential biases.

Kenya must adopt a collaborative, multi-sectoral approach to combat cyberbullying using AI. The government should consider subsidising AI tools for schools, NGOs, and community hubs. Local developers must refine AI models to handle Sheng and Swahili, ensuring relevance and accuracy.

Meanwhile, platforms should balance automated moderation with human review to avoid unjust censorship. Public awareness campaigns must also continue, promoting digital etiquette and encouraging responsible online interaction.

As Kenya’s digital footprint grows, the need for smart, culturally aware, and ethical tech solutions has never been greater. For public figures like Beatrice Elachi, Charlene Ruto, and Azziad Nasenya, who continue to navigate the highs and lows of online visibility, AI offers hope—not as a cure-all, but as a critical layer of protection.

This calls on Kenyans to champion responsible digital citizenship, support homegrown tech innovations, and embrace AI-driven tools to build a safer, more compassionate online ecosystem.

The integration of AI into Kenya’s digital spaces underscores both the opportunities and dilemmas of tech-driven moderation.