What we found is both alarming and instructive: a threat actor with limited coding expertise using AI to automate tool development, write phishing emails, and streamline malicious infrastructure. Unlike traditional actors who rely on underground communities for toolkits or scripts, this individual, identified as Toby, has effectively weaponized accessible AI to scale and professionalize his operations.
This case illustrates a clear shift in the cyber threat landscape: artificial intelligence is no longer a future risk—it’s an active enabler of cybercrime today.
Get our paper: "Toby: S2COTHie Analysis Report of Nigerian Use of AI"
AI as a Developer for Cybercrime
Unlike many cybercriminals who rely on jailbroken AI models (e.g., WormGPT or EvilGPT), Toby takes a bolder approach: he uses mainstream, commercial AI platforms to develop software. Through a series of basic prompts, he directed ChatGPT to help him build a custom Email Portal Validator, a Python-based tool designed to scan a list of domains and detect their webmail login portals.
This allowed him to validate targets efficiently and match stolen credentials to live login portals, an essential step in launching Business Email Compromise (BEC) attacks.
ChatGPT Prompts for Email Portal Enumeration Tool by Toby
As these simple prompts demonstrate, the cybercriminal was able to develop a working GUI application that searches for webmail portals associated with the domains he obtained.
Crafting Phishing Kits with Grok
Beyond automation, Toby also used Grok to generate Webmail HTML phishing kits and write convincing phishing emails. One campaign impersonated a postmaster system notification, urging users to verify their account or risk suspension. These AI-generated messages were grammatically sound, visually polished, and tailored to mimic legitimate administrative emails.
What makes this particularly dangerous is AI's ability to bypass traditional phishing detection. Those systems, often trained to flag poor grammar, reused templates, or suspicious keywords, struggle to catch the dynamically generated, high-quality content produced by modern language models.
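To make the evasion point concrete, here is a toy sketch of the kind of rule-based scoring that legacy phishing filters rely on. This is a hypothetical illustration, not any vendor's actual detection logic: it counts crude tells such as misspellings, stock scam phrasing, and shouty all-caps words. A sloppy, hand-written lure trips several rules, while a polished, AI-style system notification trips none.

```python
import re

# Hypothetical rule set of the kind legacy filters were trained on.
# (?i) makes the phrase patterns case-insensitive; the all-caps
# pattern is deliberately left case-sensitive.
TELLS = [
    r"(?i)\bverifiy\b", r"(?i)\bacount\b",   # common misspellings
    r"(?i)\bdear customer\b", r"(?i)\bkindly\b",  # stock scam phrasing
    r"!{2,}",                                # excessive punctuation
    r"\b[A-Z]{5,}\b",                        # shouty all-caps words
]

def crude_score(text: str) -> int:
    """Count how many crude tells appear in the message."""
    return sum(1 for pattern in TELLS if re.search(pattern, text))

crude = "DEAR CUSTOMER kindly verifiy your acount NOW!!!"
polished = ("Your mailbox has reached 98% of its storage quota. "
            "Please review your account settings to avoid interruption.")

print(crude_score(crude))     # → 6, flagged by every rule
print(crude_score(polished))  # → 0, sails past the filter
```

The second message reads like the postmaster-style lures described above: grammatically sound and free of the surface-level tells this class of filter depends on, which is exactly why AI-generated phishing undermines it.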
Webmail Phishkit Prompts
The phishing kit developed with Grok is both convincing and distinctive. While it resembles known webmail phishing kits circulating on underground forums, it includes notable enhancements: it is designed to capture credentials up to four times to validate their accuracy, and it does not require a full web server, as all necessary resources are embedded directly within the phishing page.
The Impact: Low Skill, High Capability
This case underscores a larger trend: AI is lowering the barrier to entry for cybercrime. Threat actors no longer need advanced programming skills or expensive underground developers. With access to AI, individuals like Toby can independently develop, test, and deploy malicious tools, quickly and at scale.
Why It Matters
Scalability: AI accelerates tool development and campaign execution
Evasion: AI-generated content can bypass security filters and appear legitimate
Accessibility: Even low-skill actors can become operationally effective
This misuse of AI highlights the urgent need for stronger safeguards, better detection of AI-assisted phishing, and more aggressive monitoring of how commercial AI tools are used in real-world threat campaigns.
Conclusion
The case of Toby is not an anomaly—it’s a preview of where cybercrime is heading. As AI continues to evolve, so too will its misuse. Defenders must adapt quickly, recognizing that the same tools used to protect can also be used to exploit.

