Prisma AIRS API Scan Logs
Focus
Focus
Prisma AIRS

Prisma AIRS API Scan Logs

Table of Contents

Prisma AIRS API Scan Logs

View the Prisma AIRS AI Runtime: API intercept scan logs.
Where Can I Use This?What Do I Need?
  • Prisma AIRS AI Runtime: API intercept threat logs
This page summarizes the API threat logs detected by the Prisma AIRS AI Runtime: APIs.
Before you begin, onboard Prisma AIRS AI Runtime: API intercept in Strata Cloud Manager to add your application, generate an API key, and create a security profile.
To view the PPrisma AIRS AI Runtime: API intercept threat logs.
  1. Navigate to AI Security API Applications.
  2. In the API Scan Log page, you can:
    • View the number of text records, API calls made, and threats detected.
    • See the scan logs organized into Benign and threat categories.
    • Get details on the Scan ID, API Key, Profile ID, Profile Name, Application Name, Model Name, Report ID, prompt detection types (request or response), verdict, and the corresponding action taken.
    • Customize the columns displayed in Settings.
    • Choose the Past 24 Hours drop-down at the top right to filter the logs by your desired time.

Multi-category Configurability and Reporting for Toxicity Detection

Prisma AIRS AI Runtime supports multi-category toxicity detection which allows the AI Runtime API to identify specific types of harmful content by moving beyond a single general toxicity classification. It allows you to distinguish between various sub-categories of toxicity based on your specific AI application use cases and compliance requirements. AI Runtime classifies toxic content into one or more sub-categories, which are viewable in the AI Runtime scan API response. The table below describes these categories:
Violent CrimesAny content depicting, promoting, or excusing criminal violent actions toward people, animals, or other entities. This includes, but limited to, genocide, extremism, terrorism, murder, battery/assult, kidnapping, hate crimes, torture
Suicide & Self-HarmAny content depicting, promoting, or excusing physical actions intended to purposely hurt, injure, damage one’s body or kill oneself, including, but not limited to, suicide, non-suicidal self-injury, and eating disorders.
Indiscriminate WeaponsAny content depicting, promoting, or excusing the creation of weapons of mass destruction, including chemical, biological, radiological, nuclear, or high-yield explosive weapons.
HateAny content that discriminates, criticizes, insults, denounces, stereotypes or dehumanizes a person or group based on differentiating/protected attributes. This also includes content that promotes discriminatory practices based on the aforementioned attributes, including employment discrimination.
Sexual ContentAny content related to descriptions of sexual activity, genitals, and other forms of sexual content, including those portrayed as an assault or a forced sexual violent act against one’s will, and child sexual exploitation.
Controlled / Regulated SubstancesAny content depicting, promoting, or excusing the production, possession, and use of substances regulated by law, often due to potential for abuse, addiction, or harm (e.g. prescription medications, illegal drugs, chemicals for manufacturing illegal substances).
CybercrimesAny content depicting, promoting, or excusing acts of cybercrime.
Other Non-Violent Crime & MisconductAny content:
  • Depicting, promoting, or excusing non-violent criminal actions, including financial, property, weapons, and intellectual property crimes.
  • Depicting, promoting, or excusing acts of fraudulent or dishonest behavior, such as misleading advertising and exploitative short term lending practices.
  • Depicting, promoting, or excusing a form of personal privacy violation.
    This does not include cybercrimes.
To configure multi-category toxicity detection:
  1. Access the AI Security Profile settings in Strata Cloud Manager.
    1. Select AI Runtime from the navigation menu.
    2. Navigate to Policies > AI Security Profiles.
    3. Select an existing AI Security Profile or Add Profile to create a new one.
  2. Configure Model Protection settings within the selected AI Security Profile.
    1. Within the AI Security Profile details, navigate to the Model Protection tab.
    2. Locate the Toxic Content Detection section.
  3. Enable and customize granular toxic content policies.
    1. Toggle Enable Granular Toxic Content Policies.
    2. For each listed toxic content category, configure the desired action and confidence level thresholds. From the drop-down menu for each category, select either Allow or Block.
    3. For each selected action, specify the behavior for different confidence levels; for Moderate Confidence: Select Allow or Block; for High Confidence: Select Allow or Block.
  4. Save the updated AI Security Profile.
  5. Verify the deployment and effectiveness of the new policies:
    1. Monitor Scan Reports and SLS logs within Strata Cloud Manager.
    2. Look for entries where ai_subtype_details includes the newly configured categories and actions.
    3. Test your AI applications with content that should trigger the new granular policies and observe the expected behavior.

Configure Session URL in API Logs

To configure the Session URL in API Logs feature, you must ensure that your Prisma AIRS environment is correctly forwarding logs to your SIEM via the Strata Logging Service. The feature works by automatically appending a session_URL field to the standard API log schema.
  1. Navigate to Incidents and Alerts Log Viewer.Verify that you see the AIRS AI Runtime Security API log type.
  2. Select Firewall/AI Security.
  3. Ensure Strata Logging Service log forwarding is enabled. This is typically done while associating your deployment profile with a Tenant Service Group (TSG).
    Strata Logging Service generates the AI security logs when AI security threats are detected between AI applications and AI models. These logs include detailed threat snippet identification and reporting and provide in-depth threat information and reports for different protection types such as AI model protection, AI application protection, and AI data protection. For more information, see Threat Logs and AI Security Logs.
  4. To receive logs containing the session_URL in your SIEM:
    1. Create a Log Forwarding Profile: In SCM, specify the log type (Prisma AIRS API) and the destination (your SIEM's IP/URL).
    2. Select attributes. While the session_URL is a mandatory field in the new schema, ensure your SIEM is configured to parse this new String field from the incoming JSON or Syslog stream.
      When you configure log forwarding to a SIEM, Prisma AIRS automatically enriches the standard API scan log with the session_URL field. Below is a representation of how this appears in the JSON structure of a log entry sent to your SIEM:
      { "timestamp": "2025-10-01T14:10:45Z", "scan_id": "[scan ID]", "app_name": "SmartSync", "verdict": "Block", "threat_type": "Prompt Injection", "severity": "Critical", "model_name": "GPT-5", "user_ip": "[IP address]", "session_URL": "https://stratacloudmanager.paloaltonetworks.com/ai-runtime/sessions/[session-number]" }
      The log includes standard metadata alongside the new mandatory field.
      There are a few key fields in the log output related to the session URL:
      • scan_id: A unique identifier for the specific atomic API call that was scanned.
      • session_URL: The new String field containing the direct link to the full conversation context in Strata Cloud Manager (SCM).
      • verdict: Indicates the action taken (e.g., Block), which identifies this entry as a violation.