Agentic ProbLLMs: Exploiting AI Computer-Use and Coding Agents
This talk demonstrates end-to-end prompt injection exploits that compromise agentic systems. Specifically, we will discuss exploits that target computer-use and coding agents, such as Anthropic's Claude Code, GitHub Copilot, Google Jules, Devin AI, ChatGPT Operator, Amazon Q, AWS Kiro, and others.
Exploits will impact confidentiality, system integrity, and the future of AI-driven automation, including remote code execution, exfiltration of sensitive information such as access tokens, and even joining Agents to traditional command and control infrastructure. Which are known as "ZombAIs", a term first coined by the presenter as well as long-term prompt injection persistence in AI coding agents.
Additionally, we will explore how nation state TTPs such as ClickFix apply to Computer-Use systems and how they can trick AI systems and lead to full system compromise (AI ClickFix).
Finally, we will cover current mitigation strategies and forward-looking recommendations and strategic thoughts.
During the Month of AI Bugs (August 2025), I responsibly disclosed over two dozen security vulnerabilities across all major agentic AI coding assistants. This talk distills the most severe findings and patterns observed.
Key highlights include:
* Critical prompt-injection exploits enabling zero-click data exfiltration and arbitrary remote code execution across multiple platforms and vendor products
* Recurring systemic flaws such as over-reliance on LLM behavior for trust decisions, inadequate sandboxing of tools, and weak user-in-the-loop controls.
* How I leveraged AI to find some of these vulnerabilities quickly
* The AI Kill Chain: prompt injection, confused deputy behavior, and automatic tool invocation
* Adaptation of nation-state TTPs (e.g., ClickFix) into AI ClickFix techniques that can fully compromise computer-use systems.
* Insights about vendor responses: from quick patches and CVEs to months of silence, or quiet patching
* AgentHopper will highlight how these vulnerabilities combined could have led to an AI Virus
Finally, the session presents practical mitigations and forward-looking strategies to reduce the growing attack surface of probabilistic, autonomous AI systems.
Licensed to the public under http://creativecommons.org/licenses/by/4.0
Typography is the art of arranging type to make written language legible, readable, and appealing when displayed. However, for the neophyte, typography is mostly apprehended as the juxtaposition of characters displayed on the screen while for the expert, typography means typeface, scripts, unicode,
Who uses Palantir software in Germany and who plans to do so in the near future? What are the legal requirements for the use of such analysis tools? And what is Interior Minister Alexander Dobrindt planning for the federal police forces in the matter of Palantir?
Palantir software analyzes the data
Reports of GNSS interference in the Baltic Sea have become almost routine — airplanes losing GPS, ships drifting off course, and timing systems failing. But what happens when a group of engineers decides to build a navigation system that simply *doesn’t care* about the jammer?
Since 2017, we’ve bee