Abstract
This comparative study examines patterns of Large Language Model (LLM) weaponization through systematic analysis of four major exploitation incidents spanning 2023–2025. While existing research focuses on isolated incidents or theoretical vulnerabilities, this study provides one of the first comprehensive comparative frameworks analyzing exploitation patterns across state-sponsored cyber-espionage (the Anthropic Claude incident), academic security research (GPT-4 autonomous privilege escalation), social engineering platforms (the SpearBot phishing framework), and underground criminal commoditization (the WormGPT/FraudGPT ecosystem). Through comparative analysis across eight dimensions (adversary sophistication, target selection, exploitation techniques, autonomy levels, detection evasion, attribution challenges, defensive gaps, and capability democratization), this research identifies critical cross-case patterns that inform defensive prioritization. Findings reveal three recurring exploitation mechanisms that transcend adversary types: autonomous goal decomposition via chain-of-thought reasoning (present in all four cases), dynamic tool invocation and code generation (3/4 cases), and adaptive social engineering (4/4 cases). Analysis demonstrates progressive capability democratization: state-level sophistication (Claude: 80–90% autonomy) transitioning to academic accessibility (GPT-4: 33–83% success rates), specialized criminal tooling (SpearBot: generative-critique architecture), and mass commoditization (WormGPT: $200–1,700/year subscriptions). Comparative findings identify four cross-cutting defensive imperatives applicable regardless of adversary type: multi-turn conversational context monitoring, behavioral fingerprinting to distinguish legitimate from malicious complex workflows, federated threat intelligence enabling rapid cross-organizational learning, and capability-based access controls proportional to LLM reasoning sophistication.
| Original language | American English |
|---|---|
| Pages (from-to) | 125-146 |
| Number of pages | 22 |
| Journal | International Journal of Academic Studies in Science and Education (IJASSE) |
| Volume | 3 |
| Issue number | 2 |
| DOIs | |
| State | Published - Dec 31 2025 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 4 Quality Education
- SDG 9 Industry, Innovation, and Infrastructure
- SDG 16 Peace, Justice and Strong Institutions
Keywords
- Large language models
- Comparative analysis
- Cyber exploitation patterns
- LLM weaponization
- Autonomous agents
- Capability democratization
- Underground AI
- Defensive frameworks
Organization custom fields
- Author/co-author in international publications
- Organization is International
Research output
- 1 Paper
Patterns of LLM Weaponization: A Comparative Analysis of Exploitation Incidents Across Commercial AI Systems
Antoniou, G., Dec 15 2025. Research output: Contribution to conference › Paper
Open Access