Extend AI Security to Private Clouds
Prisma AIRS™ AI Runtime provides comprehensive security and monitoring for AI
workloads in private clouds, protecting against threats like data exfiltration and prompt
injection.
You can secure and monitor AI workloads deployed in private clouds, such as those
built on ESXi and KVM servers. This protection extends to your AI applications and
models even when they interact with public cloud Large Language Model (LLM) providers:
by inspecting the traffic between your private cloud workloads and external LLMs, you
can safeguard against data exfiltration, prompt injection, and other threats specific
to AI interactions. This capability is essential for organizations with hybrid cloud
strategies. It ensures that security is not a barrier to adopting AI, and that you
retain control and visibility over your AI ecosystem regardless of where your data and
applications reside.
To enable this, manually deploy and bootstrap the Prisma AIRS™ AI Runtime: Network
intercept in your private cloud environment. This deployment provides a crucial
security layer for AI workloads that reside outside public cloud infrastructure. Once
deployed, the firewall can be centrally managed by either Strata™ Cloud Manager or
Panorama, providing consistent policy enforcement and monitoring across your entire
network.
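As an illustration, bootstrapping a firewall VM of this kind typically involves
supplying an init-cfg.txt file at first boot (for example, on a bootstrap ISO or
datastore attached to the VM). The sketch below assumes the VM-Series-style bootstrap
method; every value shown (hostname, Panorama address, template stack, and device
group names) is a hypothetical placeholder, and the exact keys supported depend on
your software release and whether you manage the firewall with Panorama or Strata
Cloud Manager:

```
# init-cfg.txt — hypothetical bootstrap sketch; placed in the /config
# directory of the bootstrap package attached to the VM at first boot.
type=dhcp-client                  # obtain the management IP via DHCP
hostname=airs-intercept-01        # example hostname; choose your own
panorama-server=203.0.113.10     # placeholder Panorama address
tplname=AIRS-Template-Stack      # example template stack name
dgname=AIRS-Device-Group         # example device group name
dns-primary=203.0.113.53         # placeholder DNS server
```

If you manage the firewall with Strata Cloud Manager instead of Panorama, the
bootstrap parameters differ (registration is keyed to your tenant rather than a
Panorama address), so consult the deployment guide for your release for the exact
key set.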