AI Runtime Security
Create Model Groups for Customized Protections
Create model groups to organize AI models and apply specific AI application protection, AI model protection, and AI data protection settings to each group.
- Log in to Strata Cloud Manager (SCM).
- Select Manage → Configuration → NGFW and Prisma Access → Security Services → AI Security → Add Profile.
- Select Add Model Group.
  The security profile has a default model group that defines the behavior of models not assigned to any specific group. If a supported AI model isn't part of a designated group, the default model group's protection settings apply (see the sketch after this procedure).
- Enter a Name.
- Choose the AI models supported by the cloud provider in the Target Models section. See the AI Models on Public Clouds Support Table for reference.
- Set the Access Control to Allow or Block for the model group.
  When you select Allow, you can configure the protection settings for request and response traffic. When you set the access control to Block, the protection settings are disabled and all traffic to the models in this group is blocked for this profile.
- Configure the following Protection Settings for the Request and Response traffic:
Request | Response
---|---
AI Model Protection - Enable Prompt injection detection and set it to Alert or Block | N/A
AI Application Protection - Set the default URL security behavior to Allow, Alert, or Block | AI Application Protection - Set the default URL security behavior to Allow, Alert, or Block
AI Data Protection - Select the predefined or custom DLP rules | AI Data Protection - Select the predefined or custom DLP rules

  URL filtering monitors the AI traffic passing to AI models by inspecting the model request and response payloads. You can also copy and import request and response configurations for the common protection settings, including AI application protection and AI data protection.
- Select Add to create the model group, and add this security profile to Security Profile Groups.
- Select Manage → Operations → Push Config to push the security configuration for the security rule from SCM to the AI Runtime Security instance.

As the user interacts with the app and the app makes requests to an AI model, AI security logs are generated for each of these policy rules. Check the specific logs in the AI Security Report under AI Security Log Viewer.
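To make the grouping behavior concrete, here is a minimal Python sketch of how a model group's settings fit together and how the default model group acts as a fallback for unassigned models. This is an illustration only, not the Strata Cloud Manager API or configuration schema; the class, function, field, and model names (ModelGroup, resolve_group, example-model-a, and so on) are hypothetical.

```python
# Illustrative sketch only: models the grouping logic described above as plain
# Python structures. It is NOT the Strata Cloud Manager API or schema; all
# names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ProtectionSettings:
    prompt_injection: str = "alert"   # applies to request traffic only ("alert" or "block")
    url_security: str = "allow"       # "allow", "alert", or "block"
    dlp_rules: list[str] = field(default_factory=list)  # predefined or custom DLP rule names


@dataclass
class ModelGroup:
    name: str
    target_models: set[str]           # supported public-cloud model names in this group
    access_control: str = "allow"     # "allow" or "block"; "block" drops traffic, protections disabled
    request: ProtectionSettings = field(default_factory=ProtectionSettings)
    response: ProtectionSettings = field(default_factory=ProtectionSettings)


def resolve_group(model: str, groups: list[ModelGroup], default: ModelGroup) -> ModelGroup:
    """Return the model group whose protections apply to a model.

    A model not assigned to any designated group falls back to the default
    model group, mirroring the profile behavior described in the procedure.
    """
    for group in groups:
        if model in group.target_models:
            return group
    return default


# Example: one custom group plus the profile's default model group.
default_group = ModelGroup(name="default", target_models=set())
chat_models = ModelGroup(
    name="chat-models",
    target_models={"example-model-a", "example-model-b"},  # hypothetical model names
    access_control="allow",
    request=ProtectionSettings(prompt_injection="block", url_security="alert", dlp_rules=["pii-basic"]),
    response=ProtectionSettings(url_security="alert", dlp_rules=["pii-basic"]),
)

group = resolve_group("example-model-c", [chat_models], default_group)
print(group.name)  # "default" -- an unassigned model inherits the default group's settings
```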
Edit Model Groups
- In the AI Security profile, select a Model group.
- Update the Target Models in the model group.
  Each model can be associated with only one model group (see the sketch at the end of this section). Select the AI models from the AWS, Azure, and GCP cloud providers. Refer to the AI Models on Public Clouds Support Table for a complete list of supported public cloud provider pre-trained models.
- Select the Model Name from the available models.
- Update to save the model group changes.

You can then add this security profile with customized model group protections to a security profile group.

What's Next: Configure the Security Profile Groups and add the AI Security profile to the profile group. You can then attach this profile group to a security policy rule.
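As a companion to the edit steps above, here is a minimal sketch of the rule that each AI model can belong to only one model group. Again, this is an illustration under assumed names (assign_model, validate_unique_membership, and the group and model names), not an SCM API.

```python
# Illustrative sketch only (not an SCM API): a minimal check for the rule that
# each AI model can be associated with only one model group. Group and model
# names are hypothetical.

def assign_model(model: str, target_group: str, groups: dict[str, set[str]]) -> None:
    """Assign a model to target_group, removing it from any other group first
    so it is never a member of more than one group at a time."""
    for members in groups.values():
        members.discard(model)
    groups[target_group].add(model)


def validate_unique_membership(groups: dict[str, set[str]]) -> None:
    """Raise if any model appears in more than one model group."""
    seen: dict[str, str] = {}
    for name, members in groups.items():
        for model in members:
            if model in seen:
                raise ValueError(f"{model} is in both {seen[model]} and {name}")
            seen[model] = name


groups = {
    "default": set(),
    "chat-models": {"example-model-a", "example-model-b"},
}
assign_model("example-model-a", "default", groups)  # move a model during an edit
validate_unique_membership(groups)                  # passes: no duplicate membership
print(groups)
```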