Use case: Inspect Traffic between User and AI Medical Assistant
This page provides an example configuration to protect the traffic flowing between an AI
medical patient assistant and its end users. The AI app answers questions about medical
diagnoses and queries patient data where needed.
AI Runtime Security automatically discovers your AI
datasets, applications, models, and their traffic flows. Because the chatbot uses
an AI model, it's essential to protect against prompt injections and malicious
URLs, and to ensure personal health data does not leak into model outputs. We'll set up an AI
Security profile to address these concerns.
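To make the three concerns concrete, here is an illustrative sketch of the intended profile settings as plain Python data. The dictionary keys and action values are assumptions for illustration only, not AI Runtime Security's actual profile schema.

```python
# Illustrative only: this mirrors the intent of the AI Security profile
# described above (block prompt injections and malicious URLs, and flag
# personal health data in outputs). Field names are hypothetical, not
# the product's actual configuration schema.
ai_security_profile = {
    "name": "medical-assistant-profile",
    "ai-model-protection": {"prompt-injection": "block"},
    "ai-application-protection": {"malicious-urls": "block"},
    "ai-data-protection": {"personal-health-information": "block"},
}

# Every protection in this example is set to a blocking or alerting action.
for protection, settings in ai_security_profile.items():
    if protection == "name":
        continue
    print(protection, "->", settings)
```

Keeping the three protection areas as separate keys matches how the logs are later categorized (`ai-model-protection`, `ai-data-protection`, `ai-application-protection`), which makes it easier to trace a log entry back to the profile setting that produced it.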
Add the AI Security profile to a security group, then add this security
group to the security policy you created between the chatbot and the AI
model. As the user app interacts with the AI Medical Assistant, the AI
Runtime Security instance monitors the traffic and generates logs.
Select Incidents and Alerts → Log Viewer.
Select Firewall/AI Security.
Review the logs to see traffic blocked according to your AI Security
profile.
Analyze log entries for `ai-model-protection`, `ai-data-protection`, and
`ai-application-protection`.
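When analyzing exported log data, one practical approach is to filter entries down to the three AI Security categories named above. The sketch below assumes a simple list-of-dicts log representation with hypothetical field names (`category`, `threat`, `action`); the real log schema is not shown in this example, so treat this purely as an illustration of the filtering step.

```python
# Hypothetical log entries; the field names and values are assumptions,
# not the actual AI Security log schema.
logs = [
    {"category": "ai-model-protection", "threat": "prompt-injection", "action": "block"},
    {"category": "ai-data-protection", "threat": "phi-leak", "action": "alert"},
    {"category": "ai-application-protection", "threat": "malicious-url", "action": "block"},
    {"category": "traffic", "threat": None, "action": "allow"},
]

# The three categories the text says to analyze.
protection_categories = {
    "ai-model-protection",
    "ai-data-protection",
    "ai-application-protection",
}

# Keep only the entries produced by the AI Security profile.
ai_security_logs = [e for e in logs if e["category"] in protection_categories]
for entry in ai_security_logs:
    print(entry["category"], entry["threat"], entry["action"])
```

In this toy data, the ordinary `traffic` entry is dropped and the three protection entries remain, which is the same triage you would do visually in the log viewer.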
For example, if a prompt injection is detected in a request to an
LLM, the traffic is blocked (based on the setting in the AI Security
profile) and a log is generated in the AI Security log viewer.
Likewise, if personal health information is detected in a model
response, an AI Security log is generated and you will see an alert
for data leakage from the model.
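The two behaviors just described, blocking an injected request and alerting on PHI in a response, can be sketched as a minimal decision function. This is not the product's implementation; the function name, verdict labels, and returned event tuples are all hypothetical, chosen only to show how the profile action drives the block-versus-alert outcome.

```python
def inspect(direction, verdicts, injection_action="block"):
    """Illustrative decision logic only, not AI Runtime Security's code.

    direction: "request" (user -> model) or "response" (model -> user)
    verdicts: set of detections found in the traffic (hypothetical labels)
    injection_action: the profile's configured action for prompt injection
    Returns a list of (action, log_category) events.
    """
    events = []
    # A prompt injection in a request is blocked (per the profile setting)
    # and logged under ai-model-protection.
    if direction == "request" and "prompt-injection" in verdicts:
        events.append((injection_action, "ai-model-protection"))
    # PHI detected in a model response raises a data-leakage alert,
    # logged under ai-data-protection.
    if direction == "response" and "phi" in verdicts:
        events.append(("alert", "ai-data-protection"))
    return events

print(inspect("request", {"prompt-injection"}))
print(inspect("response", {"phi"}))
print(inspect("request", set()))
```

Clean traffic produces no events, an injected request produces a block event, and a PHI-bearing response produces a data-leakage alert, matching the log behavior described above.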