In this section, you will create an API security profile and configure AI model
protection, AI application protection, AI data protection, and AI agent protection.
You can use this API security profile to trigger the scan APIs to detect specific
threats.
Toxic content detection, contextual grounding, and custom topic
guardrails are supported in English.
Follow these steps if you are onboarding Prisma AIRS AI
Runtime: API intercept for the first time or when you have already onboarded the API
intercept and want to manage the API security profile.
To update an existing API security profile, navigate to Insights Prisma AIRS Prisma AIRS AI Runtime: API intercept.
In the top right corner, click Manage and select Security
Profiles.
To create a new security profile or update an existing one, select Create
Security profile and follow the below configurations:
Enter a Security Profile Name.
Select the following protections with Allow or Block
actions:
Protection Type
Configurations
AI Model Protection
Enable Prompt Injection Detection and
set it to Allow or Block.
Contextual Grounding: Enable this
detection and set the action to Allow or
Block.
This detection protects enterprise
AI models that use AI to summarize context from
various data sources, as the detections helps
prevent hallucinations. It addresses the risk in
AI model responses containing information not
present in or contradicting the provided
context.
Contextual grounding detection
service evaluates model responses based on the
following criteria:
The output contains factual information that
was not present in the provided context.
The output includes factual information that
contradicts information in the context.
Each piece of content
evaluated receives a "grounded" or "ungrounded"
verdict. Content is "grounded" when it aligns with
information provided in the context, and
"ungrounded" when it contains factual information
not present in or contradicting the given
context.
This
detection is only available for the response
received from the LLM models, not the
request.
The scan API needs an input prompt,
response, and "context" parameter, containing the
reference information provided to the LLM model
against which grounding is evaluated.
Enable Custom Topic Guardrails with
Allow or Block actions.
Add Allowed Topic: Add topics that
should be allowed in prompts and responses.
Add Blocked Topic: Add topics to be
blocked in prompts and responses.
The custom topic guardrails detection service
identifies a topic violation in the given prompt
or response. A topic violation indicates if the
prompt/response contains content that violates the
topic guardrails you configured while creating the
custom topics.
Enable Toxic Content Detection in LLM
model requests or responses.
This
feature helps protect the LLM models from
generating or responding to inappropriate
content.
The actions include Allow
or Block, with the following severity
levels:
Moderate: Content that may be
considered toxic, but may have some ambiguity. The
default value is Allow.
High: Content that is toxic with little
to no ambiguity. The default value is Allow.
Toxic content includes references to hate speech,
sexual content, violence, criminal actions,
regulated substances, self-harm, and profanity.
Malicious threat actors can easily bypass the LLM
guardrails against toxic content through direct or
indirect prompt injection.
Refer to the API reference
docs to trigger the scan APIs against this
API security profile. The reference docs include
details on the request and response report with
sample use cases.
AI Application Protection
Enable Malicious Code Detection
This feature analyzes code snippets generated
by Large Language Models (LLMs) and identifies
potential security threats.
Set the action to Block to prevent the
execution of potentially malicious code or set it
to Allow to ignore the “malicious” verdict
if needed.
To test your LLMs, trigger a scan API with a
response containing malicious code for supported
languages (JavaScript, Python, VBScript,
Powershell, Batch, Shell, and Perl).
The system provides a verdict on the code
snippet and generates a detailed report with
SHA-256, file type, known verdict, code action,
and malware analysis.
Enable Malicious URL Detection
Basic: Enable the Malicious URL
Detection in a prompt or AI model response and
set the action to Allow or Block.
This is to detect the predefined malicious
categories.
Advanced: Provide URL security
exceptions:
The default action
(Allow or Block) is applied to all
the predefined URL security
categories.
In the URL Security Exceptions table, you
can override the default behavior by specifying
actions for individual URL categories.
Select the plus (+) icon to add the
predefined URL categories and set an action for
each.
Refer to the API reference
docs to trigger the scan APIs with the
intended detections. Also, generate the reports
with the report ID and scan ID displayed in the
output snippet.
AI Data Protection
Enable Sensitive Data Detection to scan
the prompt and/or response for sensitive data such
as bank account numbers, credit card numbers, API
keys, and other sensitive data, to detect potential
data exposure threats.
Enable sensitive data
protection with Basic or Advanced
options.
Basic: Enable sensitive DLP protection
for predefined data patterns with Allow or
Block action.
Masking Sensitive Data: Enable this
detection to mask sensitive data patterns in the
API output response, which scans the LLM prompt
and responses.
To mask sensitive data, enable
Sensitive Data Detection with the
Block action in your API security
profile.
Masking sensitive data replaces
sensitive information, such as Social Security
Numbers and bank account details, with "X"
characters equivalent to the original text
length.
Masking the sensitive
data feature is only available for a basic DLP
profile and only when you select the Block
action for sensitive data detection in the API
security profile.
Advanced: Select the predefined or
custom DLP profile.
The
drop-down list shows your custom DLP profiles and
all the DLP profiles linked to the tenant service
group (TSG) associated with your Prisma AIRS API
deployment profile.
Navigate to
Manage > Configuration > Data Loss Preventions
> Data Profiles to create a new DLP profile
(Add a data
profile).
The prompts and
responses will be run against this DLP profile
attached to the AI security profile, and action
(allow or block) will be taken based on the output
of the DLP profile.
Enable Database Security
Detection:
This detection is for AI
applications using genAI to generate database
queries and regulate the types of queries
generated.
Set an Allow or
Block action on the database queries
(Create, Read, Update, and Delete) to prevent
unauthorized actions.
Refer to the API reference
docs to trigger the scan APIs against this
API security profile. The reference docs include
details on the request and response report with
sample use cases.
AI Agent Protection
Enable AI Agent Protection with an
Allow or Block action.
This detection secures low-code/no-code
AI agents by detecting threats, such as attempts
to leak function schema, invoke tools directly, or
manipulate memory.
When a threat is detected, the system
takes the action you've configured, allowing or
blocking the request.
If you enable AI Agent
Protection without configuring an AI Agent
framework in your application definition, then the
AI Agent detection service only enables
model-based protections and not the
patterns.
Refer to the API reference
docs to trigger the scan APIs against this
API Security profile with the intended detections.
You can use a single API key to manage
multiple AI security profiles for testing.
Latency Configuration: Define acceptable API response times by
setting a latency threshold in seconds. This threshold will determine
when API responses exceed the permissible limit, impacting how quickly
threats are detected and actions are executed. You can set the action to
Allow or Block when the latency threshold
exceeds.