What is Zero Trust AI Access (ZTAI)?

The rapid maturation of large language models (LLMs) is revolutionizing how we interact with technology. Most enterprises are still in the exploratory stage of LLM deployment, identifying potential use cases through small-scale pilots. However, many employees have already adopted unsanctioned generative AI applications, either directly (ChatGPT, Gemini, etc.) or via third-party extensions to sanctioned applications.

Software as a Service (SaaS) vendors are trying to keep pace by incorporating AI features, such as AI-based sales insights, AI design assistance, and coding copilots, into their services, giving customers the advantages of LLM technology within the tools they already use. However, this unmanaged adoption of AI also introduces significant security risks, making zero-trust AI access (ZTAI) a vital component of a corporate AI cybersecurity strategy.

Learn more: Implementing Zero Trust Security

The Challenges of Zero Trust AI Security

Many organizations are working toward an implementation of the zero trust security framework to manage corporate cybersecurity risks. However, the emergence of generative AI (GenAI) creates significant challenges for this practice, including:

  • Blurred Boundaries: Zero-trust security models have traditionally relied on a clear distinction between users and applications. However, LLM-integrated applications disrupt this distinction, functioning simultaneously as both.
  • Powerful, Naive Agents: These “AI agents” can be viewed as intelligent yet naive entities within an organization. They are capable of sophisticated interactions and content generation based on vast knowledge but often lack an understanding of corporate norms and security policies.
  • GenAI Vulnerabilities: Deployment of these new user/application entities, which cannot be trusted blindly, poses several unique security challenges, such as prompt injection and data poisoning.
  • AI Data Leaks: Unsanctioned use of GenAI creates data and access risks, such as data leakage, risky access to online resources, and access to corporate resources on behalf of unauthorized users.
  • Sensitive Data Exposure: Users may enter sensitive corporate or customer data into AI applications, where it might be used as training data for the LLM and exposed to other users (a minimal pre-submission redaction sketch follows this list).
  • Prompt Injection: Specially crafted inputs may cause an LLM to misbehave, resulting in actions that evade system guardrails and place the company and its data at risk.
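
The sensitive data exposure risk above can be reduced before a prompt ever leaves the organization. Below is a minimal, illustrative Python sketch of a pre-submission redaction step; the regex patterns, placeholder names, and example prompt are assumptions for demonstration, not a complete DLP policy.

```python
import re

# Minimal sketch of a pre-submission DLP check: scrub obviously sensitive
# patterns from a prompt before it leaves the organization for an external
# GenAI service. The patterns below are illustrative assumptions, not a
# complete data loss prevention policy.

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize this ticket from jane.doe@example.com, card 4111 1111 1111 1111."
    safe_prompt, findings = redact(raw)
    print(safe_prompt)   # placeholders instead of the sensitive values
    print(findings)      # ['email', 'credit_card'] -> log for the security team
```

In practice, such a check would typically sit in a gateway or browser-level control in front of both sanctioned and unsanctioned GenAI applications rather than inside each client.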

LLMs from a Zero-Trust Perspective

To address these challenges in LLM deployment, a dedicated set of zero-trust measures is imperative:

  1. Carefully validating, monitoring, and enforcing the input, output, training data, and access attempts of LLM-integrated applications is crucial. To mitigate the associated risks, organizations need visibility into any injection or poisoning attempt, unintentional inclusion of sensitive data in training, and unauthorized access (a minimal input/output validation sketch follows this list).
  2. From a zero-trust perspective, these newly introduced dynamic, autonomous, and creative entities within organizations cannot be blindly trusted. This requires a new approach to security when deploying them.
  3. LLMs’ unpredictable nature, expansive knowledge, and susceptibility to manipulation call for a revised, zero-trust-based AI access framework. This will enable security practitioners to ensure robust data protection, security, and access that complies with corporate policies.
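
To make the first point concrete, the sketch below shows one way an input/output validation gate might wrap a model call: inbound prompts are screened for common injection phrasing and outbound responses for content that should never reach the user. The marker phrases, blocklist patterns, and the call_llm() stub are illustrative assumptions, not a specific product's API.

```python
import re

# Illustrative zero-trust validation gate around an LLM call: inspect the
# input for common prompt-injection phrasing and the output for data that
# should not reach the user. Heuristics here are assumptions for the example.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",  # crude role-override heuristic
]

OUTPUT_BLOCKLIST = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),               # possible payment card number
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),    # leaked key material
]

def call_llm(prompt: str) -> str:
    # Placeholder for the real model call (e.g., an internal LLM gateway).
    return f"(model response to: {prompt!r})"

def guarded_call(prompt: str) -> str:
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("Prompt rejected: possible injection attempt")

    response = call_llm(prompt)

    if any(pattern.search(response) for pattern in OUTPUT_BLOCKLIST):
        # Never trust the model to withhold sensitive content on its own.
        return "[response withheld: sensitive content detected]"
    return response

print(guarded_call("Summarize today's release notes."))
```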

Zero Trust AI Checklist

With a Zero Trust AI access (ZTAI) approach in mind, it’s crucial to view LLM-integrated applications as entities that require strict access control policies, even stricter than those applied to the average employee. We cannot trust the LLM’s decision-making about which content to fetch and which websites to access. Accordingly, we cannot trust the data being fetched and presented to the user, which must undergo diligent security checks.

Security Checklist for LLM-Integrated Applications’ Access to the Internet

Internet access dramatically increases the security risks associated with LLM-integrated applications. Some best practices for managing these risks include:

  • Deny Access When Possible: Unless it’s an absolute necessity, do not provide your model access to the internet, as it dramatically increases the attack surface.
  • Implement Access Controls: When augmenting an LLM with internet access, make sure strict access control policies are in place, in line with your organizational policies. Do not outsource this control to external vendors; apply your existing access control policies to these new AI users.
  • Block Restricted Websites: Implement URL filtering and phishing protection for your LLM-based applications to prevent them from accessing restricted websites (a default-deny allowlist sketch follows this list).
  • Strictly Control Accessible Data: Limit the destinations the agent can access and the data it can fetch, based on the use case. Limitations should include appropriate content, file types, and website categories.
  • Perform Data Validation: Continuously validate the data fetched and presented to the user by the LLM. Do not trust the LLM to return only harmless data; it may hallucinate or reproduce content from poisoned training data.
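
As an illustration of the access control and URL filtering items above, the sketch below applies a default-deny allowlist to an LLM browsing tool. The domain names and the fetch_for_agent() helper are hypothetical; in a real deployment, the enforcement point would more likely be the corporate secure web gateway than application code.

```python
# Minimal sketch of default-deny URL filtering for an LLM browsing tool:
# the agent may only fetch from an explicit allowlist of domains; everything
# else is blocked and logged. Domain names here are illustrative assumptions.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {
    "docs.example-vendor.com",    # documentation the use case actually requires
    "status.example-vendor.com",
}

def is_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS or any(
        host.endswith("." + domain) for domain in ALLOWED_DOMAINS
    )

def fetch_for_agent(url: str) -> str:
    if not is_allowed(url):
        # Default deny: log the attempt and refuse rather than trusting
        # the model's choice of destination.
        print(f"BLOCKED fetch attempt: {url}")
        return "[blocked by policy]"
    # Placeholder for the real, inspected fetch (content and file-type checks here).
    return f"[contents of {url}]"

print(fetch_for_agent("https://docs.example-vendor.com/guide"))
print(fetch_for_agent("https://unknown-site.io/payload"))
```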

A zero-trust AI access framework does not trust the LLM-integrated application’s behavior or decision-making when accessing corporate resources. This includes which resources are accessed and when, what data is exported to which user, and what actions can be performed using those resources.

Security Checklist for LLM-Integrated Applications’ Access to Corporate Resources

Like providing access to the Internet, allowing LLM-integrated applications to access corporate resources may be necessary but can also be dangerous. Security best practices for managing these risks include:

  • Restrict Elevated Privilege Use: Limit the privileges of LLM-integrated applications, especially for high-risk operations, such as modification and deletion of corporate data.
  • Implement Least Privilege: Limit the corporate resources your LLM can access to the absolute minimum.
  • Require Re-Authentication and Session Timeouts: Before the agent performs any operation using corporate resources, require the user to re-authenticate, and limit the duration of the session.
  • Perform Behavioral Analytics: Monitor the behavior and decision-making of LLM-integrated applications. Factors to watch include which resources are accessed, what data is exported to which user, and what operations are performed using corporate resources.
  • Maintain Real-Time Visibility: Make sure to have clear, detailed, and real-time visibility into the online or corporate resources an agent is accessing and the operations being executed.
  • Track Anomalous Behavior: Implement a system to detect abnormal behavioral patterns by the LLM, such as unexpected data access or deviation from typical responses or access patterns.
  • Implement Role-Based Access Controls: Implement a dynamic, user-based access control policy for the agent, meaning the LLM inherits the same access rights as the end user in each session (a minimal sketch of this pattern follows this list).
  • Minimize Cross-LLM Communication: Damage can compound rapidly when one LLM can interact with another. Make sure to limit communication between LLM-based applications to the absolute minimum.
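
The sketch below illustrates the least privilege, re-authentication/timeout, role-based access, and visibility items working together: every operation the agent attempts is checked against the end user's own entitlements and the session age, and is logged. The permission table, timeout value, and resource names are assumptions made for the example.

```python
# Sketch of user-inherited, least-privilege access control for an LLM agent:
# before any operation on a corporate resource, the request is checked against
# the end user's entitlements and the session age. Values are illustrative.
import time

SESSION_TIMEOUT_SECONDS = 15 * 60

USER_PERMISSIONS = {
    "alice": {("crm", "read"), ("wiki", "read")},
    "bob":   {("crm", "read"), ("crm", "write")},
}

class AgentSession:
    def __init__(self, user: str):
        self.user = user
        self.authenticated_at = time.time()

    def authorize(self, resource: str, action: str) -> None:
        if time.time() - self.authenticated_at > SESSION_TIMEOUT_SECONDS:
            raise PermissionError("Session expired: re-authentication required")
        if (resource, action) not in USER_PERMISSIONS.get(self.user, set()):
            # The agent never gets rights the human user does not have.
            raise PermissionError(f"{self.user} may not {action} {resource}")
        print(f"AUDIT: {self.user} -> {action} {resource}")  # real-time visibility

session = AgentSession("alice")
session.authorize("crm", "read")        # allowed
try:
    session.authorize("crm", "write")   # denied: alice has no write access
except PermissionError as err:
    print(err)
```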

Zero Trust Networks with Harmony Infinity

The introduction of LLMs and GenAI makes implementing a zero-trust security architecture more important than ever. At the same time, the capabilities of these systems also dramatically complicate zero-trust implementations and make ZTAI essential.

Implementing ZTAI can be complex and requires granular visibility and control over all actions performed by LLM-integrated applications within an organization’s environment. Check Point’s Infinity offers the tools that your organization needs to implement effective zero-trust in the age of AI, including integrated AI capabilities for advanced threat prevention and response.
