AI Runtime Security: API Intercept: Toxic Content Detection
Released in March
AI Security Profile Customization
AI Model Protection: Added Toxic Content Detection in LLM model requests and responses to protect models from generating or responding to inappropriate content. Toxic content includes references to hateful, sexual, violent, or profane themes. Malicious threat actors can easily bypass an LLM's built-in guardrails against toxic content through direct or indirect prompt injection.
Enable this detection by updating the API security profile. For details on using the scan APIs, refer to the API reference documentation.
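As a rough sketch, a synchronous scan request that checks a prompt against an AI security profile with Toxic Content Detection enabled could be built as shown below. The endpoint path, header name, and payload fields here are assumptions about the scan API and should be verified against the API reference documentation; the profile name is hypothetical.

```python
import json

# Assumed endpoint for the synchronous scan API -- confirm against the
# API reference documentation before use.
API_ENDPOINT = "https://service.api.aisecurity.paloaltonetworks.com/v1/scan/sync/request"

def build_scan_request(prompt: str, profile_name: str) -> dict:
    """Build a scan payload that evaluates an LLM prompt against an AI
    security profile (field names are assumptions, not confirmed schema)."""
    return {
        "tr_id": "1234",                            # caller-chosen transaction id
        "ai_profile": {"profile_name": profile_name},
        "contents": [{"prompt": prompt}],           # a model response can be scanned too
    }

payload = build_scan_request(
    prompt="Tell me something hateful.",
    profile_name="toxic-content-profile",           # hypothetical profile name
)
print(json.dumps(payload, indent=2))

# Sending the request would require an API key, e.g.:
# requests.post(API_ENDPOINT, json=payload,
#               headers={"x-pan-token": "<your-api-key>"})
```

The service returns a verdict per the configured profile; the request itself only carries the content to scan and the profile to evaluate it against.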
AI Runtime Security: Network Intercept Managed by Panorama
Released in February
You can now manage and monitor your AI Runtime Security: Network intercept (AI firewall) with Panorama. AI security policies can now be defined, and their logs observed, in Panorama.
To get started:
Select “Panorama for Management (with Log Collector)” when creating a deployment profile for Panorama in the Customer Support Portal.
Create and manage multiple AI security profiles and their revisions.
AI Security Profile Customization
AI Application Protection: Enhanced application security with advanced URL filtering options, including custom allow and block lists for the predefined URL security categories.
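To illustrate how custom allow and block lists typically interact with predefined category verdicts, here is a toy, client-side sketch. The hostnames, category names, and precedence order (explicit block, then explicit allow, then category verdict) are all assumptions for illustration, not the product's actual evaluation logic.

```python
from urllib.parse import urlparse

# Hypothetical custom lists and predefined blocked categories.
CUSTOM_ALLOW = {"docs.example.com"}
CUSTOM_BLOCK = {"evil.example.net"}
BLOCKED_CATEGORIES = {"malware", "phishing"}

def url_verdict(url: str, category: str) -> str:
    """Return 'allow' or 'block' for a URL given its predefined category.
    Assumed precedence: custom block > custom allow > category verdict."""
    host = urlparse(url).hostname or ""
    if host in CUSTOM_BLOCK:        # explicit block always wins
        return "block"
    if host in CUSTOM_ALLOW:        # explicit allow overrides the category verdict
        return "allow"
    return "block" if category in BLOCKED_CATEGORIES else "allow"

print(url_verdict("https://evil.example.net/x", "news"))     # blocked by custom list
print(url_verdict("https://docs.example.com/a", "malware"))  # allowed despite category
```

In the product, these lists are configured in the AI security profile rather than in client code.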
AI Data Protection: Expanded data loss prevention (DLP) profile selection. You can now define custom DLP profiles for AI security.
Database Security Detection: Enable database security detection to regulate database security threats in prompts and responses. This feature lets you allow or block malicious SQL queries, preventing unauthorized actions on your database. (For detailed instructions on implementing this feature and using the scan APIs, refer to the API intercept overview section.)
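The actual allow/block verdict comes from the service per the configured profile; purely to illustrate the kind of SQL a database security detection might flag, here is a toy classifier. The statement list and the regex heuristic are assumptions for illustration only, not the product's detection logic.

```python
import re

# Toy heuristic: treat statements that can modify or destroy data as the
# kind of "unauthorized action" a database security detection would block.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b", re.IGNORECASE)

def sql_action(query: str) -> str:
    """Return 'block' for statements matching the destructive pattern,
    'allow' otherwise (illustrative sketch, not real detection logic)."""
    return "block" if DESTRUCTIVE.match(query) else "allow"

print(sql_action("SELECT name FROM users WHERE id = 1"))  # allow
print(sql_action("DROP TABLE users"))                     # block
```

In practice you would enable the detection in the AI security profile and let the scan APIs evaluate prompts and responses, rather than classifying SQL client-side.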