AI Agents for Counter-Extremism: Deployment Frameworks for Covert and Overt Digital Deradicalization
DOI: https://doi.org/10.51483/IJAIML.5.2.2025.23-42

Keywords: Artificial intelligence, Counter-extremism, Islamic theology, Digital radicalization, AI ethics, Counter-narrative, Deradicalization, EU AI Act, Community engagement

Abstract
This article analyzes strategies for deploying AI agents to counter online
extremism, focusing on digital radicalization in Islamic contexts. It assesses
technical feasibility, legal constraints (including the EU AI Act), ethical concerns,
and theological implications. Drawing on recent case studies—such as ISIS’s
2023 AI propaganda guide and the shift of extremist content to gaming
platforms—it highlights the dangers of definitional confusion around terms
like “Keyboard Jihad,” which may lead to misidentifying legitimate Islamic
discourse. The study evaluates three AI deployment models: overt analytical
agents, direct engagement agents, and covert engagement agents. It concludes
that transparent, community-partnered models—especially those offering
authentic theological guidance—are the most effective and ethically sound.
These models help fill gaps in Islamic knowledge that extremists exploit, while
avoiding the legal and strategic pitfalls of covert influence operations, which
violate current EU AI regulations. The article recommends a three-track strategy:
immediate use of overt analytical agents with safeguards; piloting direct
engagement agents through community and theological consultation; and
halting covert operations until explicitly authorized by law. Rooted in the Islamic
legal principle of maslaha (public interest), the framework prioritizes
authenticity, transparency, and respect for democratic values in counter-extremism
efforts.