Prisma AIRS
Palo Alto Networks Prisma AIRS™ is a purpose-built, centralized, comprehensive security platform. It protects the entire AI ecosystem, including AI applications, AI models, AI data, and AI agents, and embeds protection across OSI layers 1-7.
Prisma AIRS includes features such as AI Runtime Firewall, AI Runtime API, AI Model Security, AI Red Teaming, and posture management.
- AI Runtime Firewall protects your organization’s cloud network architecture from AI-specific and conventional network attacks by leveraging real-time, AI-powered security. It secures your next-generation AI applications, AI models, and AI datasets from network threats such as prompt injections, sensitive data leakage, insecure output (for example, malware and URLs), and model DoS attacks.
- AI Runtime API secures your AI applications, AI data, and AI agents by embedding Security-as-Code directly into your source code. The APIs let you programmatically scan prompts and model responses to identify potential threats, and they return actionable recommendations. The APIs embed protection at OSI layers 4-7.
- AI Model Security is a comprehensive solution designed to ensure that only secure and vulnerability-free AI/ML models are deployed and used in production environments. It protects against critical risks such as arbitrary code execution during model loading or inference, deserialization threats, neural backdoors, and runtime threats. This is achieved by scanning model versions against defined security rules to identify vulnerabilities and enforce consistent security standards prior to deployment.
- AI Red Teaming is an automated solution designed to assess AI systems, including Large Language Models (LLMs) and LLM-powered applications, for safety and security vulnerabilities. It simulates real-world threats by sending crafted attack prompts to a target AI system and evaluating its responses for compromised content. The service then generates comprehensive reports with a risk score, enabling organizations to proactively identify and mitigate vulnerabilities during AI development.
What's New
- November 2025
- October 2025
- September 2025