Create Model Groups for Customized Protections

Create Model Groups to group AI models and apply specific AI application protection, AI model protection, and AI data protection to each group. You can add a model group to an existing AI security profile or to a new AI security profile.

Create Customized Model Groups

  1. Log in to Strata Cloud Manager (SCM).
  2. Select Manage → Configuration → NGFW and Prisma Access → Security Services → AI Security → Add Profile.
  3. Select Add Model Group.
    The security profile has a default model group that defines the behavior of models not assigned to any specific group. If a supported AI model isn't part of a designated group, the default model group's protection settings apply (see the sketch after this procedure).
    1. Enter a Name.
    2. Choose the AI models supported by the cloud provider in the Target Models section. See the AI Models on Public Clouds Support Table for reference.
    3. Set the Access Control to Allow or Block for the model group.
      When you select the Allow access control, you can configure the protection settings for request and response traffic.
      When you set the access control to Block for a model group, the protection settings are disabled and any traffic to the models in this group is blocked for this profile.
    4. Configure the following Protection Settings for the Request and Response traffic:
      Request
        • AI Model Protection - Enable Prompt injection detection and set it to Alert or Block.
        • AI Application Protection - Set the default URL security behavior to Allow, Alert, or Block.
        • AI Data Protection - Select the predefined or custom DLP rules.
      Response
        • AI Model Protection - N/A
        • AI Application Protection - Set the default URL security behavior to Allow, Alert, or Block.
        • AI Data Protection - Select the predefined or custom DLP rules.
      URL filtering monitors the AI traffic by inspecting the model request and response payloads.
      You can also copy and import request and response configurations for the common protection settings, including AI application protection and AI data protection.
  4. Select Add to create the model group and add this security profile to Security Profile Groups.
  5. Select Manage → Operations → Push Config and push the security configurations for the security rule from SCM to the AI Runtime Security instance.
    As users interact with the app and the app makes requests to an AI model, AI security logs are generated for each of these policy rules. Check the specific logs in the AI Security report under the AI Security Log Viewer.
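The grouping and access-control behavior described in this procedure, where each model belongs to at most one group, ungrouped models fall back to the default model group, and blocking a group disables its protection settings, can be summarized in a short sketch. This is a minimal, hypothetical illustration in Python; the class names, field names, and values are assumptions made for this example and do not represent an actual Strata Cloud Manager or AI Runtime Security API.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class ProtectionSettings:
    # Hypothetical fields mirroring the Request/Response settings above;
    # not an SCM schema.
    prompt_injection: Optional[str] = None    # request only: "alert" or "block"
    url_security: str = "allow"               # "allow", "alert", or "block"
    dlp_rule: Optional[str] = None            # predefined or custom DLP rule name


@dataclass
class ModelGroup:
    name: str
    access_control: str                       # "allow" or "block"
    target_models: Set[str] = field(default_factory=set)
    request: ProtectionSettings = field(default_factory=ProtectionSettings)
    response: ProtectionSettings = field(default_factory=ProtectionSettings)


def resolve_group(model: str, groups: List[ModelGroup],
                  default_group: ModelGroup) -> ModelGroup:
    """Each model belongs to at most one group; an ungrouped model
    falls back to the profile's default model group."""
    for group in groups:
        if model in group.target_models:
            return group
    return default_group


def effective_action(model: str, groups: List[ModelGroup],
                     default_group: ModelGroup) -> str:
    group = resolve_group(model, groups, default_group)
    if group.access_control == "block":
        # Blocking a group disables its protection settings; all traffic
        # to its models is blocked for this profile.
        return f"block ({group.name})"
    # Allowed groups inspect traffic with their request/response settings.
    return f"allow ({group.name}: inspect request and response)"
```

For example, a request to a model that isn't listed in any group resolves to the default model group and is allowed or blocked according to that group's settings.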

Edit Model Groups

  1. In the AI Security profile, select a Model Group.
  2. Update the Target Models in the model group.
    Each model can be associated with only one model group. Select the AI models from the AWS, Azure, and GCP cloud providers. Refer to the AI Models on Public Clouds Support Table for a complete list of supported pre-trained models from the public cloud providers.
    Select the Model Name from the available models.
  3. Select Update to save the model group changes.
    You can then add this security profile with customized model group protections to a security profile group.
    What's Next: Configure the Security Profile Groups and add the AI Security profile to the profile group. You can then attach this profile group to a security policy rule, as sketched below.
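To make the What's Next chain concrete, the following sketch shows how the AI Security profile, the security profile group, and the security policy rule reference one another. The object and field names are purely hypothetical, invented for this illustration, and are not an SCM object model.

```python
# Hypothetical objects showing the reference chain; names and fields are
# invented for this example and are not an SCM or AI Runtime Security schema.
ai_security_profile = {
    "name": "ai-sec-profile-1",
    "model_groups": ["default", "prod-llms"],          # groups configured above
}

security_profile_group = {
    "name": "genai-best-practice",
    "ai_security_profile": ai_security_profile["name"],
}

security_policy_rule = {
    "name": "allow-genai-apps",
    "action": "allow",
    "profile_group": security_profile_group["name"],   # attach the profile group here
}
```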
