→ We Are Still Unable to Secure LLMs from #Malicious Inputs
schneier.com/blog/archives/202…
“This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks.”
“It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.”
#AI #LLMs #stop #agents #secure #attacks #problem
We Are Still Unable to Secure LLMs from Malicious Inputs - Schneier on Security
Nice indirect prompt injection attack: Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.Bruce Schneier (Schneier on Security)