Use case: Inspect Traffic between User and AI Medical Assistant
This page provides an example configuration to protect traffic flow between an AI medical patient assistant and its end users. The AI app provides answers on medical diagnoses and queries patient data where needed.
Inspecting Traffic Between User and App
AI Runtime Security automatically discovers your AI datasets, applications, models, and their traffic flows. Because your chatbot uses an AI model, it's essential to protect against prompt injections and malicious URLs, and to ensure personal health data does not leak into model outputs. We'll set up an AI Security profile to address these concerns.
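The profile you will build in the steps below can be summarized as structured data. The following is an illustrative sketch only: the field and variable names are hypothetical and do not correspond to the SCM API or configuration schema.

```python
# Hypothetical sketch of the AI Security profile configured below.
# All field names are illustrative; this is NOT the SCM API schema.
BLOCKED_URL_CATEGORIES = [
    "command-and-control", "copyright-infringement", "dynamic-dns",
    "extremism", "grayware", "malware", "newly-registered-domain",
    "parked", "phishing", "proxy-avoidance-and-anonymizers",
    "questionable", "ransomware", "scanning-activity", "unknown",
]

ai_security_profile = {
    # Block detected prompt injections in requests to the model.
    "ai_model_protection": {"prompt_injection": "block"},
    # Allow benign URLs by default; block the 14 harmful categories.
    "ai_app_protection": {
        "default_url_action": "allow",
        "blocked_url_categories": BLOCKED_URL_CATEGORIES,
    },
    # Alert on personal health information in model responses,
    # using the predefined Enterprise DLP profile "PHI".
    "ai_data_protection": {
        "dlp_profile": "PHI",
        "direction": "model-responses",
        "action": "alert",
    },
    # Cap inline detection at 5 seconds, then continue asynchronously.
    "latency": {"max_detection_seconds": 5, "async_after_limit": True},
}
```

Keeping a sketch like this alongside the UI steps makes it easy to review the intended policy at a glance before clicking through SCM.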
  1. Log in to SCM.
  2. Select
    Manage
    → Configuration
    → NGFW and Prisma Access
    .
  3. From the top menu, select
    Security Services
    → AI Security
    .
  4. Select
    Add Profile
    .
  5. Configure the following:
    • AI Model Protection
      : Enable prompt injection detection with action Block.
    • AI App Protection
      : Allow benign URLs by setting the default URL action to "Allow". Then select the following 14 harmful URL categories and set their action to "Block": command-and-control, copyright-infringement, dynamic-dns, extremism, grayware, malware, newly-registered-domain, parked, phishing, proxy-avoidance-and-anonymizers, questionable, ransomware, scanning-activity, unknown.
    • AI Data Protection
      : Specifically for model responses, import the predefined Enterprise DLP profile "PHI" to alert on personal health information in model responses.
    • Latency setting
      : Set the maximum detection latency to 5 seconds and continue detection asynchronously after that limit, so any threats found later are still reported offline.
  6. Add the AI security profile to a security group. Add this security group to the security policy you created between the Chatbot and the AI model. As the user app interacts with the AI Medical Assistant, the AI Runtime Security instance monitors the traffic and generates logs.
  7. Select
    Incidents and Alerts
    → Log Viewer
    .
    • Select
      Firewall/AI Security
      .
    • Review the logs to see traffic blocked according to your AI Security profile.
    • Analyze log entries for `ai-model-protection`, `ai-data-protection`, and `ai-application-protection`.
    For example, if a prompt injection is detected in a request to an LLM, the traffic is blocked (based on the setting in the AI Security profile) and a log will be generated in the AI Security log viewer. Additionally, if personal health information is detected in a model response, an AI Security log will be generated, and you will see an alert for data leakage from the model.
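When reviewing exported logs, it can help to tally verdicts per protection type to confirm the profile is firing as expected. This is a minimal sketch assuming entries have already been exported as dictionaries; the `threat_type` and `action` field names and the sample entries are hypothetical, not the actual AI Security log schema.

```python
from collections import Counter

# Hypothetical exported log entries; field names and values are
# illustrative only, not the actual AI Security log schema.
logs = [
    {"threat_type": "ai-model-protection", "action": "block"},        # prompt injection
    {"threat_type": "ai-data-protection", "action": "alert"},         # PHI in response
    {"threat_type": "ai-application-protection", "action": "block"},  # harmful URL
    {"threat_type": "ai-model-protection", "action": "block"},
]

# Count hits per protection type and collect blocked traffic.
by_type = Counter(entry["threat_type"] for entry in logs)
blocked = [e for e in logs if e["action"] == "block"]

print(dict(by_type))   # e.g. {'ai-model-protection': 2, ...}
print(len(blocked))    # number of blocked entries in the sample
```

A quick tally like this makes it obvious whether, say, `ai-data-protection` alerts are appearing at all, which is the signal that the PHI profile is matching model responses.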
