This talk explores advanced prompt injection exploits targeting widely used LLM applications, including Microsoft Copilot, Google Gemini, Google NotebookLM, Apple Intelligence, GitHub Copilot Chat, Anthropic Claude and others. Using real-world demonstrations, we will discuss the following threats in detail:
- Misinformation, Phishing, and Scams: Including advanced techniques such as conditional instructions.
- Automatic Tool Invocation: Exploiting tool integration to escalate privileges, extract sensitive data, or modify system configurations.
- Data Exfiltration: Leveraging strategies, such as markdown and hidden payloads, to bypass security controls and leak data.
- SpAIware and Persistence: Manipulating LLM memory for long-term control and persistence.
- ASCII Smuggling: How LLMs can hide secrets and craft hidden text invisible to users.
For each threat category, we will discuss mitigations and show how vendors are addressing these vulnerabilities.