AI Runtime Security: API Intercept Overview
AI Runtime Security-as-Code
AI Runtime Security: API Intercept is a threat detection service designed to
secure AI applications. It helps discover and protect applications using REST APIs by
embedding Security-as-Code directly into source code.
The Scan API service scans prompts and models responses to identify
potential threats and provides actionable recommendations.
The APIs protect your AI models, applications, and datasets by programmatically scanning
prompts and models for threats, enabling robust protection across public and private
models with model-agnostic functionality. Its model-agnostic design ensures seamless
integration with any AI model, regardless of its architecture or framework. This enables
consistent security across diverse AI models without any model-specific
customization.
You can use this API in your application to send prompts or model responses and
receive a threat assessment, along with the recommended actions based on your AI
security profile.
Key Features:
- Simple integration: Secure AI application models and datasets from insecure model
outputs, prompt injections, and sensitive data loss.
- Comprehensive threat detection: Provides extensive app, model, and data threat
detection while maintaining ease of use.
- Exceptional flexibility and defense: Integrates API-based threat detection to
deliver unmatched adaptability and layered protection.
Use Cases- Secure AI models in production: Validate prompt requests and
responses to protect deployed AI models.
- Detect data poisoning: Identify contaminated training data
before fine-tuning.
- Protect adversarial input: Safeguard AI agents from malicious
inputs and outputs while maintaining workflow flexibility.
- Prevent sensitive data leakage: Use API-based threat detection
to block sensitive data leaks during AI interactions.
Limitations