Prompt injection is still the number one LLM security risk, and what to do about it.

A hacker tricked an AI chatbot into giving away $47,000, exposing a major flaw in AI security known as prompt injection. As AI agents gain more access to sensitive data, this incident highlights the urgent need for stronger data security frameworks to mitigate growing risks. In this article, we describe the main LLM security risks and how you can reduce them with access management.

Last week, a hacker tricked an AI chatbot into handing over USD 47,000. This wouldn’t be particularly newsworthy if the chatbot hadn’t been programmed to never, under any circumstances, hand out prizes. Incidents like this are why prompt injection is still the number one vulnerability in OWASP’s top 10 list of LLM security vulnerabilities for 2025.

This is particularly interesting from a data security perspective, because it means we cannot rely on AI agents to enforce whatever guardrails they are programmed to apply. If a chatbot whose sole purpose is to not hand out a prize can be manipulated into handing it out anyway, it is pretty evident that you cannot rely on AI agents to self-regulate on data security.

With AI agents getting access to data and tooling, it will be extremely important to implement a scalable framework for data security that enables fine-grained access management and monitoring across the data and tooling landscape. Again, we cannot rely on AI agents to do this securely, as proven by the other LLM security risks in OWASP’s top 10.
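
To make this concrete, here is a minimal sketch of what enforcing guardrails outside the model could look like: every tool call an agent proposes is checked against an explicit, deny-by-default policy before anything is executed. The policy table, tool names and `execute_tool` function are illustrative assumptions, not any particular product or API.

```python
# Minimal sketch: enforce access policy outside the LLM, so a prompt-injected
# agent cannot grant itself permissions it was never given.
# All names (POLICY, ToolCall, execute_tool) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ToolCall:
    user: str        # identity of the human on whose behalf the agent acts
    tool: str        # tool the agent wants to invoke
    arguments: dict  # arguments proposed by the agent

# Deny-by-default policy: a user/tool pair not listed here is rejected.
POLICY = {
    ("alice", "search_knowledge_base"),
    ("alice", "create_ticket"),
}

def execute_tool(call: ToolCall) -> str:
    """Placeholder for the real tool execution layer."""
    return f"executed {call.tool} with {call.arguments}"

def handle_agent_request(call: ToolCall) -> str:
    # The agent's output is treated as untrusted input:
    # the access decision is made here, not by the model.
    if (call.user, call.tool) not in POLICY:
        return f"DENIED: {call.user} may not call {call.tool}"
    return execute_tool(call)

print(handle_agent_request(ToolCall("alice", "transfer_funds", {"amount": 47000})))
# -> DENIED: alice may not call transfer_funds
```

The point of the design is that even a fully compromised prompt can only choose among the tools the policy already allows.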

In this article, we discuss the importance of data security to address some of the top 10 LLM security vulnerabilities.

Sensitive information disclosure 

The risk of sensitive information disclosure has rightly moved up from sixth to second position after the many cases of hackers using prompt injection to expose sensitive data, and the countless Copilot data leakages. The risk arises when your AI agent is fine-tuned on sensitive information or when it is granted access to your organisation’s data through Retrieval-Augmented Generation (RAG). In those cases, your AI agent can accidentally disclose sensitive information to unauthorised users.

With RAG becoming the dominant architecture, it will be very important to carefully limit access to data and to closely monitor data access and usage by your AI agents for anomalies.
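
As an illustration, one way to do this in a RAG pipeline is to apply the end user’s permissions at retrieval time, so documents the user may not read never reach the model. The document store, group-based ACLs and retrieval function below are assumptions made for the sake of the sketch.

```python
# Sketch: apply the end user's permissions at retrieval time, so the LLM
# never sees documents the requesting user is not allowed to read.
# Documents, ACLs and the (trivial) ranking are illustrative assumptions.

DOCUMENTS = [
    {"id": 1, "text": "Public product FAQ",       "allowed_groups": {"everyone"}},
    {"id": 2, "text": "Q3 salary review sheet",   "allowed_groups": {"hr"}},
    {"id": 3, "text": "Internal incident report", "allowed_groups": {"security"}},
]

def retrieve(query: str, user_groups: set[str], top_k: int = 5) -> list[dict]:
    """Return only documents the requesting user is authorised to read."""
    authorised = [
        doc for doc in DOCUMENTS
        if doc["allowed_groups"] & (user_groups | {"everyone"})
    ]
    # A real system would rank by vector similarity; here we keep it trivial.
    return authorised[:top_k]

# A user in the 'hr' group sees HR documents; a marketing user does not.
print([d["id"] for d in retrieve("salaries", {"hr"})])         # [1, 2]
print([d["id"] for d in retrieve("salaries", {"marketing"})])  # [1]
```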

Supply chain risk

With the growing adoption of third-party AI models and LLMs, organisations face an increased risk of security incidents via third-party software or packages. Hugging Face, for instance, is rife with corrupted files that give hackers backdoor access to your infrastructure and data. OWASP recommends practical supply chain security controls, but at the speed AI teams have to deliver, it is impossible to vet every application or library. Therefore, it is also recommended to assume someone in your organisation has already deployed a corrupted package giving attackers access to your infrastructure. This means applying zero-trust principles, whereby access to data and resources is limited to the absolute minimum.
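
One modest control along these lines is to pin third-party model artifacts to known-good hashes and refuse to load anything that has not been vetted. The file names and hashes below are placeholders; in practice the allow-list would come from your own review process.

```python
# Sketch: verify third-party model artifacts against an internal allow-list
# of SHA-256 hashes before loading them. Paths and hashes are placeholders.

import hashlib
from pathlib import Path

APPROVED_ARTIFACTS = {
    # filename -> SHA-256 recorded when the artifact was reviewed
    "model.safetensors": "replace-with-hash-recorded-at-review-time",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    expected = APPROVED_ARTIFACTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} has not been vetted; refusing to load")
    if sha256_of(path) != expected:
        raise RuntimeError(f"{path.name} does not match its approved hash")

# verify_artifact(Path("models/model.safetensors"))  # raises unless the hash matches
```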

Data and model poisoning

Data poisoning happens when the data used to train and fine-tune LLMs, or the data fed into LLMs through RAG, is compromised and manipulated to introduce security vulnerabilities into the LLM. Such a backdoor can then be used to extract sensitive information at a later point in time.

Access management to data and infrastructure plays a pivotal role in preventing data poisoning. Organisations that give LLMs access to data will have to:

  • Limit and carefully monitor write access to data repositories used for training and fine-tuning LLMs (see the sketch after this list).
  • Limit access to code repositories to prevent unauthorised access to credentials.
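
A minimal sketch of the first point, assuming a hard-coded writer allow-list and a plain audit log rather than any specific access management product, could look like this:

```python
# Sketch: gate and log every write to a training-data repository.
# The writer allow-list, logger and write_object function are
# illustrative assumptions.

import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("training-data-writes")

APPROVED_WRITERS = {"data-pipeline-service"}  # identities allowed to write

def write_object(identity: str, path: str, payload: bytes) -> None:
    if identity not in APPROVED_WRITERS:
        audit_log.warning("blocked write by %s to %s", identity, path)
        raise PermissionError(f"{identity} may not write to training data")
    audit_log.info("write by %s to %s (%d bytes)", identity, path, len(payload))
    # ... the actual storage call would go here ...

write_object("data-pipeline-service", "datasets/train/part-001.jsonl", b"{}")
```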

Excessive agency

AI agents are becoming so advanced that they can complete tasks on your behalf. Beyond generating code or analysing data, these AI agents can now perform actions on your computer by looking at your screen, clicking buttons and typing text. This creates the risk of the LLM performing harmful actions, either through hallucinations or through prompt injection.
It will be very important for programmers to limit their AI agents’ functionality, but it will be equally important to apply least-privilege access management to data and applications to prevent a rogue AI agent from wreaking havoc.
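
To illustrate what least privilege can look like at the agent level, the sketch below confines a desktop-style agent to two file actions inside a single working directory; the action names and sandbox path are assumptions.

```python
# Sketch: least-privilege wrapper for an agent that can act on a machine.
# The allowed actions and sandbox directory are illustrative assumptions.

from pathlib import Path

ALLOWED_ACTIONS = {"read_file", "write_file"}   # no shell, no network
SANDBOX = Path("/home/user/agent-workspace").resolve()

def run_action(action: str, target: str, content: str = "") -> str:
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not permitted")

    path = (SANDBOX / target).resolve()
    if SANDBOX not in path.parents and path != SANDBOX:
        # Reject path traversal out of the sandbox (e.g. '../../etc/passwd').
        raise PermissionError(f"{target!r} is outside the agent workspace")

    if action == "read_file":
        return path.read_text()
    path.write_text(content)
    return f"wrote {len(content)} characters to {path}"

# run_action("delete_everything", "/")        -> PermissionError
# run_action("write_file", "../../etc/passwd", "x") -> PermissionError
# run_action("write_file", "notes.txt", "x")  -> allowed, stays in the sandbox
```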

It is clear that organisations will have to invest in better data security if they want to operationalise AI agents. With prompt injection remaining the most important LLM vulnerability, data becoming ever more accessible, and AI agents gaining more and more agency, organisations will have to invest in an identity-centric, universal data security framework that implements fine-grained access controls and monitors access and usage across data stores, code repositories and applications.

Talk to the team