AI Runtime Security
Create Model Groups for Customized Protections
Create model groups to organize your AI models and apply specific AI application protection, AI model protection, and AI data protection to each group.
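Conceptually, each model group pairs a set of target models with an access control setting and per-direction protection settings. The sketch below is purely illustrative: the field names and model names are hypothetical and are not an AI Runtime Security schema or API.

```python
# Hypothetical, illustrative representation of one model group's settings.
# All keys and values are invented for clarity; they are not the product's schema.
model_group = {
    "name": "prod-models",
    # Models this group applies to (placeholder names).
    "target_models": ["model-a", "model-b"],
    # "Allow" enables the protection settings below; "Block" drops all traffic
    # to these models and disables the protection settings.
    "access_control": "Allow",
    # Request traffic supports model, application, and data protection.
    "request": {
        "ai_model_protection": {"prompt_injection_detection": "Alert"},
        "ai_application_protection": {"default_url_action": "Alert"},
        "ai_data_protection": {"dlp_rule": "predefined"},
    },
    # Response traffic supports application and data protection only.
    "response": {
        "ai_application_protection": {"default_url_action": "Block"},
        "ai_data_protection": {"dlp_rule": "predefined"},
    },
}

print(model_group["access_control"])  # Allow
```

Note that the response direction intentionally has no model-protection entry, mirroring the Request/Response settings described below.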
- Log in to Strata Cloud Manager.
- Select Manage → Configuration → NGFW and Prisma Access → Security Services → AI Security → Add Profile.
- Select Add Model Group.
  The security profile has a default model group that defines the behavior of models not assigned to any specific group. If a supported AI model isn't part of a designated group, the default model group's protection settings apply.
- Enter a Name.
- Choose the AI models supported by the cloud provider in the Target Models section. See the AI Models on Public Clouds Support Table for reference.
- Set the Access Control to Allow or Block for the model group.
  - When you select Allow, you can configure the protection settings for request and response traffic.
  - When you select Block, the protection settings are disabled, and any traffic to the models in this group is blocked for this profile.
- Configure the following Protection Settings for the Request and Response traffic:
  - The Request settings include AI model protection, AI application protection, and AI data protection.
  - The Response settings include AI application protection and AI data protection.
Protection | Request | Response
---|---|---
AI Model Protection | Enable Prompt injection detection and set it to Alert or Block. | N/A
AI Application Protection | Set the default URL security behavior to Allow, Alert, or Block. This is the default action when a URL is detected in the content of the model input. You can override the default behavior for each custom URL setting. | Set the default URL security behavior to Allow, Alert, or Block. This is the default action when a URL is detected in the content of the model output. You can override the default behavior for specific URL categories in the exception table.
AI Data Protection | Select the predefined or custom DLP rule for detecting sensitive data in the model input. | Select the predefined or custom DLP rules for detecting sensitive data in the model output.

URL filtering monitors the AI traffic by inspecting the model request and response payloads. You can also copy and import request and response configurations for the common protection settings, including AI application protection and AI data protection.

- Select Add to create the model group, and add this security profile to Security Profile Groups.
- Select Manage → Operations → Push Config and push the security configuration for the security rule from Strata Cloud Manager to the AI Runtime Security instance.

As the user interacts with the app and the app makes requests to an AI model, AI security logs are generated for each of these policy rules. Check the specific logs in the AI Security Report under AI Security Log Viewer.

Edit Model Groups
- In your AI security profile, select a Model group.
- Update the Target Models in the model group. You can associate each model with only one model group. Select AI models from the AWS, Azure, and GCP cloud providers; refer to the AI Models on Public Clouds Support Table for a complete list of supported public cloud provider pre-trained models. Select the model name from the available models.
- Update the access control to Allow or Block.
- Configure the Request and Response protection settings.
- Select Update to save the model group changes.

You can then add this security profile with customized model group protections to a security profile group.

What's Next: Configure the Security Profile Groups and add the AI Security profile to the profile group. You can then attach this profile group to a security policy rule.
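The allow/block behavior described above (resolve a model to its group, fall back to the default group when the model is unassigned, then apply that group's access control) can be sketched as follows. This is a conceptual illustration only; the function names, field names, and model names are hypothetical and are not part of any AI Runtime Security API.

```python
# Conceptual sketch of model-group resolution and access control.
# Names and structures are illustrative, not the product's implementation.

def resolve_group(model: str, groups: dict, default_group: dict) -> dict:
    """Return the first group that targets the model, else the default group."""
    for group in groups.values():
        if model in group["target_models"]:
            return group
    return default_group

def evaluate(model: str, groups: dict, default_group: dict) -> str:
    group = resolve_group(model, groups, default_group)
    # Block disables the group's protection settings: traffic is simply dropped.
    if group["access_control"] == "Block":
        return "blocked"
    # Allow: request/response protection settings would then be applied.
    return "allowed"

groups = {
    "prod-models": {"target_models": {"model-a"}, "access_control": "Allow"},
    "restricted": {"target_models": {"legacy-model"}, "access_control": "Block"},
}
default_group = {"target_models": set(), "access_control": "Allow"}

print(evaluate("model-a", groups, default_group))           # allowed
print(evaluate("legacy-model", groups, default_group))      # blocked
print(evaluate("unassigned-model", groups, default_group))  # allowed (default group)
```

The last call shows the fallback: a model in no designated group inherits the default model group's behavior.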