Prisma AIRS AI Runtime: API Intercept Enhancements
Focus
Focus
What's New in the NetSec Platform

Prisma AIRS AI Runtime: API Intercept Enhancements

Table of Contents

Prisma AIRS AI Runtime: API Intercept Enhancements

Learn the new features introduced in Prisma AIRS AI Runtime: API intercept.
Prisma AIRS AI Runtime: API Intercept
  • Accelerate Python Application Seucrity with Prisma AIRS Python SDK
    Introducing the Prisma AIRS API Python SDK, which seamlessly integrates advanced AI security scanning into Python applications. It supports Python versions 3.9 through 3.13, offering synchronous and asynchronous scanning, robust error handling, and configurable retry strategies. This SDK empowers developers to "shift left" security, embedding real-time AI-powered threat detection and prevention directly into their Python applications. By providing a streamlined interface for scanning prompts and responses for malicious content, data leaks, and other threats, it helps secure your AI models, data, and applications from the ground up.
  • API Detection for the European Region
    You can now use Strata Cloud Manager to manage the API detection services hosted in the EU (Germany) region. When creating a deployment profile, you select your preferred region, and all subsequent scan requests are routed to the corresponding regional API endpoint. This allows for localized hosting and processing of your AI security operations.
    By enabling regional deployment of AI security services, you can: comply with data residency requirements, reduce latency by processing security scans closer to your European users and infrastructure.
  • Automatic Sensitive Data Masking in API Payloads
    Automatic detection and masking of sensitive data patterns are now available in the scan API output, which scans the prompts and responses in Large Language Models (LLM). This feature replaces sensitive information such as Social Security Numbers and bank account details with "X" characters while maintaining the original text length. API scan logs indicate sensitive content with the new "Content Masked" column.
    As LLMs become more prevalent, the risk of inadvertently exposing sensitive data increases. This automatic masking capability enhances data privacy and maintains compliance with data protection regulations. Proactively obscuring sensitive information reduces the risk of data leakage, strengthens the security posture of AI applications, and builds greater trust in the use of AI models by ensuring sensitive details are never fully exposed in logs or intermediary steps.
  • Protect AI Agents on Low-Code/No-Code Platforms
    You can now protect and monitor AI agents against unauthorized actions and system manipulation. This feature extends security to AI agents developed on low-code/no-code platforms, like Microsoft Copilot Studio, AWS Bedrock, GCP Vertex AI, and VoiceFlow, as well as custom workflows. As AI agents become more prevalent, they introduce new attack surfaces. This protection is crucial for ensuring the integrity and secure operation of your AI agents, regardless of how the agents were developed.
  • Validating LLM Outputs for Contextual Grounding
    You can now enable Contextual Grounding detection in your LLM response, which detects responses that contain information not present in or contradicting the provided context. This feature works by comparing the LLM's generated output against a defined input context. If the response includes information that wasn't supplied in the context or directly contradicts it, the detection flags these inconsistencies, helping to identify potential hallucinations or factual inaccuracies. Ensuring that LLM responses are grounded in the provided context is critical for applications where factual accuracy and reliability are paramount.
  • Define AI Content Boundaries with Custom Topic Guardrails
    You can enable the Custom Topic Guardrails detection service to identify a topic violation in the given prompt or response. This feature allows you to define specific topics that must be allowed or blocked within the prompts and responses processed by your LLM models. The system then monitors content for violations of these defined boundaries, ensuring that interactions with your LLMs stay within acceptable or designated subject matter.
    Custom Topic Guardrails provide granular control over the content your AI models handle, offering crucial protection against various risks. For example, you can prevent misuse, maintain brand integrity, ensure compliance, and enhance the focus of the LLM's outputs.