This page provides an example configuration to protect traffic flow between an AI
medical patient assistant and its end users. The AI application provides answers on
medical diagnoses and queries patient data where needed.
The Prisma AIRS AI Runtime: Network intercept will
automatically discover your AI datasets, applications, models, and their traffic
flow. Given that your chatbot uses an AI model, it’s essential to protect against
prompt injections and malicious URLs, and ensure personal health data does not leak
into outputs. We’ll set up an AI Security profile to address these concerns.
Navigate to Manage →
Configuration → NGFW and Prisma
Access.
From the top menu, select Security Services →
AI Security.
Select Add Profile.
Configure the following:
AI Model Protection: Enable prompt injection detection with the
action Block.
AI App Protection: Allow benign URLs by setting the
default URL action to Allow. Select the following 14 harmful URL
categories and choose the action Block: Command and control,
copyright-infringement, dynamic DNS, extremism, grayware, malware,
newly-registered-domain, parked, phishing,
proxy-avoidance-and-anonymizers, questionable, ransomware,
scanning-activity, unknown.
AI Data Protection: Specifically for model responses, import the
predefined Enterprise DLP profile "PHI" to alert on personal health
information in model responses.
Latency setting: Set the maximum detection latency to 5 seconds,
and continue detection asynchronously afterward so any threats are
reported offline.
Add the AI security profile to a security group. Add this security
group to the security policy you created between the Chatbot and the AI
model. As the user app interacts with the AI Medical Assistant, the Prisma AIRS: Network intercept monitors the traffic
and generates logs.
Select Incidents and Alerts → Log
Viewer.
Select Firewall/AI Security.
Review the logs to see traffic blocked according to your AI Security
profile.
Analyze log entries for `ai-model-protection`, `ai-data-protection`, and
`ai-application-protection`.
For example, if a prompt injection is detected in a request to an
LLM, the traffic is blocked (based on the setting in the AI Security
profile), and a log will be generated in the AI Security log viewer.
Additionally, if personal health information is detected in a model
response, an AI Security log will be generated, and you will see an alert
for data leakage from the model.