Prisma AIRS AI Runtime Enhancements
Focus
Focus
What's New in the NetSec Platform

Prisma AIRS AI Runtime Enhancements

Table of Contents

Prisma AIRS AI Runtime Enhancements

Prisma AIRS AI Runtime: API intercept
Prisma AIRS have been enhanced to detect malicious code and identify toxic content threats.
Prisma AIRS AI Runtime: API Intercept
  • Malicious Code Detection in LLM Outputs
    AI application protection now includes Malicious Code Detection, which analyzes code snippets generated by Large Language Models (LLMs) to identify potential security threats. The feature supports scanning for malicious code in JavaScript, Python, VBScript, PowerShell, Batch, Shell, and Perl. You can enable this detection by updating the API Security Profile. This feature is vital for preventing supply chain attacks, enhancing application security, maintaining code integrity, and mitigating AI risks in the deployment and utilization of generative AI.
  • Secure LLMs Through Toxic Content Detection
    Add Toxic Content Detection in LLM model requests and responses to protect them from generating or responding to inappropriate content. Toxic content includes references to hateful, sexual, violent, or profane themes. This advanced detection is designed to counteract sophisticated prompt injection techniques that malicious actors might use to bypass standard LLM guardrails. This capability is crucial for maintaining the ethical integrity and safety of your AI applications by protecting brand reputation, ensuring user safety, mitigating misuse, and promoting a responsible AI.
For details on using the scan APIs refer to the API reference documentation.