AI Red Teaming
Focus
Focus
Prisma AIRS

AI Red Teaming

Table of Contents

AI Red Teaming

Learn about new features for Prisma AIRS AI Red Teaming.
Here are the new Prisma AIRS AI Red Teaming features.

AI Summary for AI Red Teaming Scans

January 2026
Supported for:
  • Prisma AIRS (Managed by Strata Cloud Manager)
When you complete an AI Red Teaming scan, you receive an AI Summary (in the scan report) that synthesizes key risks and their implications. This executive summary eliminates the need for manual interpretation of technical data, allowing you to quickly understand which attack categories or techniques pose the greatest threats to your systems and what the potential business impact might be.
The AI Summary contains the scan configuration, key risks, and implications.
This capability is particularly valuable when you need to communicate AI risk assessment results across different organizational levels or when preparing briefings for leadership meetings. Rather than struggling to translate technical vulnerability reports into business language, you can rely on the AI Red Teaming generated report to articulate security, safety, compliance, brand, and business risks in terms that resonate with executive audiences. This summary is also valuable in prioritising remediation measures which teams can adopt for a safer deployment of AI systems in production.

Enhanced AI Red Teaming with Brand Reputation Risk Detection

January 2026
Supported for:
  • Prisma AIRS (Managed by Strata Cloud Manager)
You can now assess and protect your AI systems against Brand reputation risks using Prisma AIRS enhanced AI Red Teaming capabilities. This feature addresses a critical gap in AI security by identifying vulnerabilities that could damage your organization's reputation when AI systems interact with users in production environments. Beyond the existing Security, Safety, and Compliance risk categories, you can now scan for threats including Competitor Endorsements, Brand Tarnishing Content, Discriminating Claims, and Political Endorsements.
The enhanced agent assessment capabilities automatically generate goals focused on Brand Reputational risk scenarios that could expose your organization to public relations challenges or regulatory scrutiny. You benefit from specialized evaluation methods designed to detect subtle forms of reputational risk, including false claims and inappropriate endorsements that traditional security scanning might miss. This comprehensive approach allows you to proactively identify and address potential brand vulnerabilities before deploying AI systems to production environments, protecting both your technical infrastructure and corporate reputation in an increasingly AI-driven business landscape.

Error Logs and Partial Scan Reports

December 2025
Supported for:
  • Prisma AIRS (Managed by Strata Cloud Manager)
When you conduct AI Red Teaming scans using Prisma AIRS, you may encounter situations where scans fail completely or complete only partially due to target system issues or connectivity problems. The Error Logs and Partial Scan Reports feature provides you with comprehensive visibility into scan failures and enables you to generate actionable reports even when your scans don't complete successfully. You can access detailed error logs directly within the scan interface, both during active scans on the progress page and after completion in the scan logs section, allowing you to quickly identify whether issues stem from your target AI system or the Prisma AIRS platform itself.
This feature particularly benefits you when conducting Red Teaming assessments against enterprise AI systems that may have intermittent availability or response issues. When your scan completes the full simulation but doesn’t receive valid responses for all attacks, AI Red Teaming marks it as partially complete rather than failed. You can then choose to generate a comprehensive report based on the available test results, giving you valuable security insights even from incomplete assessments. AI Red Teaming transparently informs you about credit consumption before report generation and clearly marks any generated reports as partial scans, indicating the percentage of attacks that received responses.
By leveraging this capability, you can maximize the value of your Red Teaming efforts, troubleshoot scanning issues more effectively, and maintain continuous security assessment workflows even when facing target system limitations or temporary connectivity challenges during your AI security evaluations.

Automated AI Red Teaming

October 2025
Supported for:
  • Prisma AIRS (Managed by Strata Cloud Manager)
Palo Alto Networks' is an automated solution designed to scan any AI system—including LLMs and LLM-powered applications—for safety and security vulnerabilities.
The tool performs a Scan against a specified Target (model, application, or agent) by sending carefully crafted attack prompts to simulate real-world threats. The findings are compiled into a comprehensive Scan Report that includes an overall Risk Score (ranging from 0 to 100), indicating the system's susceptibility to attacks.
Prisma AIRS offers three distinct scanning modes for thorough assessment:
  1. Attack Library Scan: Uses a curated, proprietary library of predefined attack scenarios, categorized by Security (e.g., Prompt Injection, Jailbreak), Safety (e.g., Bias, Cybercrime), and Compliance (e.g., OWASP LLM Top 10).
  2. Agent Scan: Utilizes a dynamic LLM attacker to generate and adapt attacks in real-time, enabling full-spectrum Black-box, Grey-box, and White-box testing.
  3. Custom Attack Scan: Allows users to upload and execute their own custom prompt sets alongside the built-in library.
A key feature of the service is its single-tenant deployment model, which ensures complete isolation of compute resources and data for enhanced security and privacy.