Prisma AIRS
Secure Your AI Models with AI Model Security
AI Model Security is an application designed to ensure that your internal and external AI models meet rigorous security standards.
What is a Model?
A model is the collection of files required to perform a single base inference pass. A model has the following structure:
- Source—Where the AI model lives.
- Model—Logical entity (like "sentiment-analyzer").
- ModelVersion—Specific AI model version (like "v1.2.3").
- Files—Artifacts of that AI model version.
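To make the hierarchy concrete, here is a minimal sketch in Python. The class and field names (`ModelFile`, `ModelVersion`, `Model`) are illustrative assumptions for this sketch, not the AI Model Security API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Source -> Model -> ModelVersion -> Files
# hierarchy. Names are hypothetical, not the product's schema.
@dataclass
class ModelFile:
    path: str          # artifact path, e.g. "model.safetensors"
    size_bytes: int

@dataclass
class ModelVersion:
    version: str       # specific AI model version, e.g. "v1.2.3"
    files: list[ModelFile] = field(default_factory=list)

@dataclass
class Model:
    name: str          # logical entity, e.g. "sentiment-analyzer"
    source: str        # where the model lives, e.g. "HuggingFace"
    versions: list[ModelVersion] = field(default_factory=list)

# One version of "sentiment-analyzer": the files needed for a single
# base inference pass.
sentiment = Model(
    name="sentiment-analyzer",
    source="HuggingFace",
    versions=[
        ModelVersion(
            version="v1.2.3",
            files=[
                ModelFile("config.json", 1_204),
                ModelFile("model.safetensors", 438_000_000),
            ],
        )
    ],
)
```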
Models are the foundational asset of AI/ML workloads and already power many of your key systems today. AI model security focuses on securing your models against threats like:
- Deserialization Threats: Protecting your models from executing malicious and unknown code at load time (illustrated in the sketch after this list).
- Neural Backdoors: Detecting models with hidden, trigger-activated behavior ("Manchurian Candidate" models).
- Runtime Threats: Protecting your models from executing malicious and unknown code at inference time.
- Invalid Licenses: Ensuring that your models are not using invalid licenses.
- Insecure Formats: Ensuring that your models use file formats that help prevent these threats.
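As an illustration of the deserialization threat, the sketch below shows how a pickle-based model file can execute arbitrary code the moment it is loaded, and how a scanner can statically inspect the pickle opcode stream for suspicious imports without executing it. This is a minimal demonstration of the general technique, not the product's scanning logic:

```python
import pickle
import pickletools

# A malicious "model" object: pickle calls os.system during loading.
class MaliciousPayload:
    def __reduce__(self):
        import os
        # __reduce__ tells pickle how to rebuild the object; here it
        # instructs the loader to run a shell command instead.
        return (os.system, ("echo arbitrary code ran at load time",))

payload = pickle.dumps(MaliciousPayload(), protocol=0)

# Static check in the spirit of a deserialization rule: walk the pickle
# opcodes WITHOUT executing them and flag any imported callables.
def find_imports(pickle_bytes: bytes) -> list[str]:
    flagged = []
    for opcode, arg, _pos in pickletools.genops(pickle_bytes):
        if opcode.name in ("GLOBAL", "STACK_GLOBAL"):
            flagged.append(str(arg))  # e.g. "posix system" (os.system)
    return flagged

print(find_imports(payload))   # ['posix system'] on Linux, ['nt system'] on Windows
# pickle.loads(payload)        # would execute the shell command above
```

This is why formats that cannot embed executable code, such as safetensors, are generally considered safer carriers for model weights than pickle-based ones.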
What is AI Model Security?
AI Model Security is an enterprise application designed to enforce comprehensive
                security standards for both internal and external machine learning models deployed
                in production environments. The application addresses a critical gap in
                organizational security practices where machine learning models, despite their
                significant impact on business operations, often lack the rigorous security
                validation that is standard for other data inputs and systems.
In most enterprise environments, traditional data inputs such as PDF files undergo
                extensive security scrutiny before processing, yet machine learning models that
                drive critical business decisions frequently bypass equivalent security measures.
                This disparity creates substantial operational risk, as compromised or inadequately
                validated models can impact business logic, data integrity, and decision-making
                processes. AI Model Security solves this problem by providing comprehensive model
                validation through automated security assessments, multi-source support for both
                internally developed and third-party models, and proactive identification and
                remediation of model-related security vulnerabilities.
By implementing AI Model Security, organizations can establish consistent security
                standards across all ML model deployments, significantly reducing operational risk
                from unvalidated or compromised models. It enables secure adoption of third-party ML
                solutions while maintaining organizational security rules and industry compliance
                standards. Additionally, it provides comprehensive audit trails and compliance
                reporting capabilities, ensuring that AI Model Security assessments meet regulatory
                requirements and internal governance standards.
Impact of Model Vulnerabilities
Models can execute arbitrary code, and most existing tooling does not check for this. Compromised models have been found at the root of cloud takeover attacks and can be used to exfiltrate data or even to execute ransomware. The sensitivity of the data that models are trained on and exposed to at inference time makes them a prime target for attackers.
AI Model Security Core Components
AI Model Security gives you flexible controls to secure, validate, and manage AI models across different sources through Security Groups, Sources, Rules, and Scans.
AI Model Security delivers a comprehensive framework to establish and enforce
                security standards for AI models across your organization. Unlike traditional
                security tools that simply scan for malware, AI Model Security recognizes that AI
                models require more nuanced security considerations that incorporate license
                validation, file format verification, and context-specific security checks based on
                the teams and environments using the models. 
The AI Model Security approach moves beyond the simplified first-party versus third-party model distinction to provide granular security controls that scale with enterprise needs. This approach centers on four key components: Security Groups, Sources, Rules, and Scans.
| Entity | Description |
|---|---|
| Security Groups | Serve as the foundation of your AI Model Security posture, allowing you to combine specific rules and requirements for models from a particular source. |
| Source | Each Security Group is assigned to a specific Source, which represents where model artifacts reside, such as HuggingFace for external models or Local Storage and Object Storage for internal models. The source designation is crucial as it provides metadata that powers specific security rules applicable to models from that source. |
| Rules | Within each Security Group, you configure Rules that define the specific evaluations performed on models. Rules can verify proper licensing, check for approved file formats, scan for malicious code, and detect architectural backdoors. Each Rule can be enabled or disabled and configured as blocking or non-blocking, giving you precise control over which security issues prevent model usage versus those that simply generate warnings. |
| Scan | When models are evaluated against these Rules, a Scan is performed, documenting the verdict across all rules. These Scans create an audit trail of security evaluations and serve as decision points to either promote secure models forward or block potentially threatening ones early in your workflow (for example: Scan of fraud-detector:v2.1.0 using the S3-Production group). |
AI Model Security leverages rules to help organizations establish sophisticated,
                scalable security frameworks tailored to their specific requirements. This flexible
                approach enables teams to enforce strict blocking mechanisms for high-severity
                threats while maintaining non-disruptive alerting for compliance monitoring—allowing
                security teams to effectively manage risk without hindering developer productivity.
                The result delivers dual benefits: end users gain confident access to vetted models
                through a seamless experience, while security teams receive comprehensive protection
                for their AI/ML infrastructure.
To implement AI Model Security effectively, you'll typically need at least two
                Security Groups: one for external models using HuggingFace as a Source, and another
                for internal models using Local or Object Storage Sources. This separation allows
                you to apply appropriate security standards based on the origin and intended use of
                models across your organization.
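A sketch of that two-group separation, reusing the illustrative schema from the previous example (again, the names and rule identifiers are assumptions, not the product's configuration format):

```python
# Two illustrative Security Groups: strict blocking for external models,
# relaxed license handling for internal ones.
security_groups = [
    {
        "name": "HuggingFace-External",
        "source": "HuggingFace",
        "rules": [
            {"rule": "deserialization-threats", "enabled": True, "blocking": True},
            {"rule": "license-validation",      "enabled": True, "blocking": True},
        ],
    },
    {
        "name": "Internal-ObjectStorage",
        "source": "Object Storage",
        "rules": [
            {"rule": "deserialization-threats", "enabled": True, "blocking": True},
            # Internal models: license issues warn instead of block.
            {"rule": "license-validation",      "enabled": True, "blocking": False},
        ],
    },
]
```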