Prisma AIRS AI Runtime: API Intercept Overview
    Use Prisma AIRS AI Runtime: API intercept to embed Prisma AIRS
        Security-as-Code in your applications.
    
  
    
  
| Where Can I Use This? | What Do I Need? | 
|---|
    
| Security-in-Code with Prisma AIRS AI
                                        Runtime: API intercept
 |  | 
 
  
 
  
 Prisma AIRS AI Runtime: API intercept is a threat
            detection service designed to secure AI applications. It helps discover and protect
            applications using REST APIs by embedding Security-as-Code directly into source
            code.
The Scan API service scans prompts and models responses to identify
            potential threats and provides actionable recommendations.
The APIs protect your AI models, applications, and datasets by programmatically scanning
            prompts and models for threats, enabling robust protection across public and private
            models with model-agnostic functionality. Its model-agnostic design ensures seamless
            integration with any AI model, regardless of its architecture or framework. This enables
            consistent security across diverse AI models without any model-specific
            customization.
You can use this API in your application to send prompts or model responses and
            receive a threat assessment, along with the recommended actions based on your API
            security profile.
Key Features:
- Simple integration: Secure AI application models and datasets from insecure model
                outputs, prompt injections, and sensitive data loss.
- Comprehensive threat detection: Provides extensive app, model, and data threat
                detection while maintaining ease of use.
- Exceptional flexibility and defense: Integrates API-based threat detection to
                deliver unmatched adaptability and layered protection.
Activation and Onboarding Workflow
Use Cases- Secure AI models in production: Validate prompt requests and
                    responses to protect deployed AI models.
- Detect data poisoning: Identify contaminated training data
                    before fine-tuning.
- Protect adversarial input: Safeguard AI agents from malicious
                    inputs and outputs while maintaining workflow flexibility.
- Prevent sensitive data leakage: Use API-based threat detection
                    to block sensitive data leaks during AI interactions.
 Limitations
- One API key per deployment profile - Each deployment profile in the
                         Customer Support Portal-  allows a
                    single API key. 
- Each API key created in a specific region can only be used within that region.
                Cross-region use of API keys isn’t supported. A region can have multiple API keys
                associated with it.
- 2 MB maximum payload size per synchronous scan request - Limited to a maximum of 100
                URLs per request.
- 5 MB maximum payload size per asynchronous scan request - Limited to a maximum of
                100 URLs per request.
- Asynchronous requests are limited to a maximum of 25 batched requests.