AI Runtime Security
Use case: Inspect Traffic between User and AI Medical Assistant
AI Runtime Security automatically discovers your AI datasets, applications, models, and their traffic flows. Because your chatbot relies on an AI model, you need to protect it against prompt injection and malicious URLs, and ensure that personal health information does not leak into model outputs. We'll set up an AI Security profile to address these concerns.
1. Log in to Strata Cloud Manager.
2. Select Manage → Configuration → NGFW and Prisma Access.
3. From the top menu, select Security Services → AI Security.
4. Select Add Profile.
5. Configure the following:
- AI Model Protection: Enable prompt injection detection with action Block.
- AI App Protection: Allow benign URLs by setting the default URL action to "Allow". Then select the following 14 harmful URL categories and set their action to "Block": command-and-control, copyright-infringement, dynamic-dns, extremism, grayware, malware, newly-registered-domain, parked, phishing, proxy-avoidance-and-anonymizers, questionable, ransomware, scanning-activity, unknown.
- AI Data Protection: Import the predefined Enterprise DLP profile "PHI" to alert on personal health information detected in model responses.
- Latency setting: Set the maximum detection latency to 5 seconds, and continue detection asynchronously afterward so that any threats found later are still reported offline.
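The profile settings above can be summarized as a configuration sketch. This is a hypothetical, illustrative representation only; the field names are not the Strata Cloud Manager API schema, and the profile name is an invented example. Configure the actual profile in the UI as described.

```python
# Illustrative sketch of the AI Security profile described above.
# All field names and the profile name are hypothetical, NOT the
# Strata Cloud Manager schema.
ai_security_profile = {
    "name": "ai-medical-assistant-profile",  # example name
    "ai_model_protection": {
        # Detect prompt injection in requests and block them.
        "prompt_injection_detection": True,
        "action": "block",
    },
    "ai_app_protection": {
        # Allow benign URLs by default; block the 14 harmful categories.
        "default_url_action": "allow",
        "blocked_url_categories": [
            "command-and-control", "copyright-infringement", "dynamic-dns",
            "extremism", "grayware", "malware", "newly-registered-domain",
            "parked", "phishing", "proxy-avoidance-and-anonymizers",
            "questionable", "ransomware", "scanning-activity", "unknown",
        ],
    },
    "ai_data_protection": {
        # Alert on PHI found in model responses via the predefined
        # Enterprise DLP profile.
        "direction": "model-response",
        "dlp_profile": "PHI",
        "action": "alert",
    },
    "latency": {
        # Cap inline detection at 5 seconds, then continue asynchronously.
        "max_detection_seconds": 5,
        "continue_async": True,
    },
}
```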
- Create a Security Policy Rule.
- Add the AI Security profile to a security group, then add that group to the security policy you created between the chatbot and the AI model. As the user app interacts with the AI Medical Assistant, the AI Runtime Security instance monitors the traffic and generates logs.
- Select Incidents and Alerts → Log Viewer.
- Select Firewall/AI Security.
- Review the logs to see traffic blocked according to your AI Security profile.
- Analyze log entries for `ai-model-protection`, `ai-data-protection`, and `ai-application-protection`.
For example, if a prompt injection is detected in a request to an LLM, the traffic is blocked (per the action set in the AI Security profile) and a log entry is generated in the AI Security log viewer. Similarly, if personal health information is detected in a model response, an AI Security log is generated and you will see a data-leakage alert for the model.
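To illustrate the triage step, the sketch below groups exported log entries by their protection subtype. The record layout and field names here are hypothetical sample data, not the actual Strata Cloud Manager log schema; in practice you would review these entries in the Log Viewer.

```python
# Hypothetical log-triage sketch: count AI Security log entries per
# protection subtype. The record layout is illustrative only.
from collections import Counter

# Sample entries mirroring the three subtypes mentioned above.
sample_logs = [
    {"subtype": "ai-model-protection", "action": "block",
     "reason": "prompt injection in request"},
    {"subtype": "ai-data-protection", "action": "alert",
     "reason": "PHI detected in model response"},
    {"subtype": "ai-application-protection", "action": "block",
     "reason": "URL category: malware"},
]

def summarize(logs):
    """Count log entries per AI Security subtype."""
    return Counter(entry["subtype"] for entry in logs)

print(summarize(sample_logs))
```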