See all the new features made available for Prisma AIRS AI Model
Security.
Customize Security Groups and View Enhanced Scan Results
December 2025
Supported for:
Prisma AIRS (Managed by Strata Cloud Manager)
The AI Model Security web interface has been enhanced with the following new features, which provide deeper insight into model violations and greater flexibility for customizing your security configuration:
Customize Security Groups—In addition to the default groups created for all new users, you can now create custom model security groups directly in Strata Cloud Manager. You can also edit the names and descriptions of existing security groups and delete unused groups (except the default ones).
File Explorer—For each scan, you can now visualize every file scanned by AI Model Security in its original file structure and view detailed, file-level violation information for each file.
Enhanced JSON View—You can now view the JSON responses returned directly by the API for scans, violations, and rule evaluations. The JSON view also provides detailed instructions for retrieving this data from your local machine (see the sketch after this list).
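For instance, such a retrieval might look like the following minimal Python sketch. The base URL, endpoint path, authentication header, and scan ID below are illustrative assumptions rather than the documented API; use the exact request that the JSON view displays for your tenant.

```python
# Minimal sketch of retrieving scan-result JSON from a local machine.
# ASSUMPTIONS: the base URL, endpoint path, and "x-pan-token" header are
# hypothetical placeholders; copy the real request shown in the JSON view
# in Strata Cloud Manager.
import json
import requests

API_BASE = "https://api.example.com/ai-model-security"  # hypothetical base URL
API_TOKEN = "YOUR_API_TOKEN"                            # tenant credential
SCAN_ID = "YOUR_SCAN_ID"                                # shown in the scan details

response = requests.get(
    f"{API_BASE}/scans/{SCAN_ID}",       # hypothetical endpoint path
    headers={"x-pan-token": API_TOKEN},  # hypothetical auth header
    timeout=30,
)
response.raise_for_status()

# Pretty-print the same JSON the web interface displays for the scan,
# including its violations and rule evaluations.
print(json.dumps(response.json(), indent=2))
```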
Secure AI Models with AI Model Security
October 2025
Supported for:
Prisma AIRS (Managed by Strata Cloud Manager)
Models serve as the foundation of AI/ML
workloads and power critical systems across organizations today. Prisma AIRS now
features AI Model Security, a comprehensive solution that ensures only secure,
vulnerability-free models are used while maintaining your desired security
posture.
AI/ML models pose significant security risks because they can execute arbitrary code during loading or inference, a critical vulnerability that existing security tools fail to adequately detect. Compromised models have been exploited in high-impact attacks, including cloud infrastructure takeovers, sensitive data theft, and ransomware deployments. The valuable training datasets and inference data these models process make them prime targets for cybercriminals seeking to infiltrate AI-powered systems. AI Model Security addresses these risks with the following capabilities:
Model Security Groups—Create Security Groups that apply different managed
rules based on where your models come from. Set stricter policies for external
sources like HuggingFace, while tailoring controls for internal sources like
Local or Object Storage.
Model Scanning—Scan any model version against your Security Group rules. Get clear pass/fail results with supporting evidence for every finding, so you can confidently decide whether a model is safe to deploy, as illustrated in the sketch below.
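As a rough end-to-end illustration of these two capabilities, the following Python sketch creates a stricter Security Group for an external source and scans a model version against it. Every endpoint, payload field, and response field here (security-groups, scans, verdict, findings, and so on) is a hypothetical assumption for illustration, not the documented Prisma AIRS API.

```python
# Minimal sketch of the Security Group + Model Scanning workflow.
# ASSUMPTIONS: all endpoints, payload fields, and response fields below
# are hypothetical illustrations, not the documented Prisma AIRS API.
import requests

API_BASE = "https://api.example.com/ai-model-security"  # hypothetical base URL
HEADERS = {"x-pan-token": "YOUR_API_TOKEN"}             # hypothetical auth header

# 1. Create a stricter Security Group for models from an external source.
group = requests.post(
    f"{API_BASE}/security-groups",
    headers=HEADERS,
    json={
        "name": "external-sources-strict",
        "description": "Strict managed rules for HuggingFace models",
        "source": "huggingface",  # e.g., huggingface, local, object-storage
    },
    timeout=30,
).json()

# 2. Scan a specific model version against that group's rules.
scan = requests.post(
    f"{API_BASE}/scans",
    headers=HEADERS,
    json={
        "model": "org/example-model",
        "version": "v1.2.0",
        "security_group_id": group["id"],
    },
    timeout=30,
).json()

# 3. Act on the pass/fail verdict and its supporting evidence.
if scan["verdict"] == "pass":
    print("Model is safe to deploy.")
else:
    for finding in scan["findings"]:
        print(f"{finding['rule']}: {finding['evidence']}")
```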
Key Benefits:
Prevent Security Risks Before Deployment: Identify vulnerabilities, malicious
code, and security threats in AI models before they reach production
environments.
Enforce Consistent Security Standards: Apply organization-wide security policies
across all model sources, ensuring every model meets your requirements
regardless of origin.
Accelerate Secure AI Adoption: Reduce manual security review time with automated
scanning, enabling teams to deploy models faster without compromising
security.
Maintain Compliance and Governance: Demonstrate security due diligence with
detailed scan evidence and audit trails for regulated industries and internal
compliance requirements.