One of the key takeaways from Black Hat is the growing consensus among security practitioners that GenAI poses unique and unparalleled data security risks. Or as one attendee put it:
“Usually when I go to conferences I return inspired and full of new ideas. This year, I came back genuinely scared.”
In one particularly daunting presentation, Michael Bargury exposed the terrifying security risks that arise when an attacker gains access to Copilot. Copilot is a GenAI agent that answers a user’s questions using internal documents, data, and emails. When a user asks a question, Copilot looks up the relevant information using Retrieval-Augmented Generation (RAG). This has the potential to be an extremely powerful productivity tool, as it significantly lowers the technical threshold for getting valuable information and reduces context switching. This is why Morgan Stanley has given its financial advisors and support staff access to more than 100,000 documents using OpenAI’s GPT-4.
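The core retrieval step can be sketched in a few lines. This is an illustrative toy, not Copilot’s actual implementation: it uses a bag-of-words embedding and cosine similarity, where a real system would use a neural embedding model and pass the retrieved context to an LLM.

```python
# Minimal RAG retrieval sketch (illustrative only; not Copilot's internals).
from collections import Counter
from math import sqrt

STOP = {"the", "is", "a", "what", "for", "was", "of"}

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system uses a neural model.
    return Counter(t for t in text.lower().split() if t not in STOP)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Q3 revenue grew 12 percent year over year",
    "The cafeteria menu changes every Monday",
    "Revenue guidance for Q4 was raised last week",
]
context = retrieve("what is the revenue outlook", docs)
# The retrieved context is then prepended to the user's question in the
# prompt sent to the language model -- which is exactly why an attacker
# who can poison the indexed documents can steer the model's answers.
```

Note how the retrieval step happily pulls in whatever documents score highest; it has no notion of whether the requesting user should be allowed to see them. That gap is where the access control problems discussed below come from.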
However, as organisations implement RAG, we are becoming aware of several limitations that undermine trust in its output, such as naive ranking, information loss due to chunking, and parsing issues. While the industry resolves these issues with multi-agent AI systems, we still face important security risks such as data poisoning and data loss through RAG.
Handing over the keys to your company’s body of knowledge to Copilot introduces cybersecurity risks that make even the most seasoned cybersecurity professional squirm. If attackers gain access to the agent, they can exfiltrate or poison virtually all of your company’s data. Until recently, Microsoft made it very easy for external attackers to reach your Copilot by making it publicly accessible over the web by default.
Even when access is limited to employees, the security risks remain significant. Credential theft is still the most popular mode of attack, and 62% of GenAI security breaches originate from internal parties, so the risk of data breaches through Copilot persists even with employee-only access.
It’s clear that Copilot, and by extension GenAI, has the potential to truly democratize access to information for non-technical users, but without good data security controls it poses huge cybersecurity risks. Below are five data security controls that will significantly reduce that risk:
Raito offers a central platform to manage fine-grained access and data security controls to structured and unstructured data used for GenAI. With Raito, you can RAG your body of knowledge without creating undue security risks.
Monitor
Raito lets you centrally monitor data access and usage, track your data security maturity, and detect and remediate data security risks in your multi-cloud environment. This helps you detect excessive access privileges for GenAI, service accounts being used to RAG data, and unusual access patterns.
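To make the idea of detecting unusual access patterns concrete, here is a minimal sketch. The account names, thresholds, and baseline logic are hypothetical illustrations, not Raito’s actual detection engine: it flags any account whose access count today sits far above its historical baseline, the kind of spike you would see when a GenAI agent bulk-reads documents or stolen credentials are abused.

```python
# Hypothetical anomaly-detection sketch (illustrative, not Raito's API):
# flag accounts whose access volume today far exceeds their baseline.
from statistics import mean, pstdev

def flag_unusual_access(history: dict[str, list[int]],
                        today: dict[str, int],
                        z_threshold: float = 3.0) -> list[str]:
    flagged = []
    for account, counts in history.items():
        mu, sigma = mean(counts), pstdev(counts)
        current = today.get(account, 0)
        if sigma == 0:
            # Perfectly flat baseline: any increase is suspicious.
            if current > mu:
                flagged.append(account)
        elif (current - mu) / sigma > z_threshold:
            # Access count is more than z_threshold standard deviations
            # above the historical mean.
            flagged.append(account)
    return flagged

# Daily document-access counts per account (hypothetical data).
history = {"alice": [10, 12, 9, 11], "svc-copilot": [100, 110, 95, 105]}
today = {"alice": 11, "svc-copilot": 5000}
suspicious = flag_unusual_access(history, today)
```

A production system would of course use richer signals (time of day, data sensitivity, geography), but the principle is the same: baseline normal behaviour, then alert on deviations.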
Manage
With Raito you can centrally manage a GenAI agent’s access to all your company’s structured and unstructured data. Raito’s identity-centric access controls, access management federation, and integration with DevOps let you implement consistent, universal, fine-grained permissions at scale in a multi-cloud environment.
Automate
RAG’s non-deterministic nature and access to large amounts of data mean that traditional access control technologies like ACLs and RBAC do not scale. RAG requires a more dynamic and scalable approach: Attribute-Based Access Control (ABAC), where access is determined dynamically from data and user attributes. Raito’s ABAC policies let you dynamically grant access, and mask and filter data, using metadata from your data providers or data catalogs.