AI Model Security leverages a community of 16,000+ security researchers to find AI model
vulnerabilities missed by traditional vulnerability databases, scanning both public and
private models securely.
Where Can I Use This?
  Prisma AIRS (Model Security)
What Do I Need?
  Prisma AIRS Model Security License
Our comprehensive vulnerability detection begins with the industry's most extensive
dataset of model security issues. This intelligence is driven by our Huntr community—a
network of over 16,000 security researchers who continuously discover novel attack
vectors through our bug bounty program and strategic partnership with Hugging
Face.
You can explore our most recent discoveries in the Insights DB, where we publish
vulnerability assessments from scanning every public model in the Hugging Face
repository. The platform also lets security researchers and ML practitioners challenge
or validate our findings.
Insights DB classifies each model as Safe, Unsafe, or Suspicious. To dispute a finding
in Insights DB, select the specific model from the list, choose Report an
issue, and then select one of the following:
- Report your finding if you've found a new threat.
- Report an incorrect threat if you disagree with our findings.
This continuous intelligence stream enables us to identify a broad spectrum of
vulnerabilities in your models, including many AI-specific security issues that fall
outside the scope of traditional software vulnerabilities—threats that typically won't
appear in the National Vulnerability Database (NVD) or conventional security feeds.
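One well-known example of such an AI-specific issue is unsafe pickle deserialization: a pickle-based model file can name arbitrary Python callables that execute the moment the model is loaded. The sketch below illustrates the idea by inspecting a pickle stream's opcodes without ever deserializing it; the denylist and the STACK_GLOBAL heuristic are illustrative assumptions, far simpler than a production scanner:

```python
import pickle
import pickletools

# Illustrative denylist (an assumption for this sketch) -- real scanners
# use far broader heuristics than a handful of module names.
DANGEROUS = {"os", "posix", "nt", "subprocess", "builtins"}

def suspicious_pickle_imports(data: bytes) -> set[str]:
    """Flag denylisted modules referenced by GLOBAL/STACK_GLOBAL opcodes.

    Only walks the opcode stream; never calls pickle.loads on the data.
    """
    flagged = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # arg is "module qualname" as a single space-joined string
            module = arg.split(" ", 1)[0]
            if module in DANGEROUS:
                flagged.add(module)
        elif op.name == "STACK_GLOBAL" and i >= 2:
            # Heuristic: the module name is typically the string pushed two
            # opcodes earlier (module, then attribute, then STACK_GLOBAL).
            maybe_module = ops[i - 2][1]
            if isinstance(maybe_module, str) and maybe_module in DANGEROUS:
                flagged.add(maybe_module)
    return flagged

# A benign payload is clean; a classic GLOBAL-opcode os.system payload is flagged.
benign = pickle.dumps({"weights": [0.1, 0.2]})
malicious = b"cos\nsystem\n(S'echo pwned'\ntR."
```

Because the scan happens at the opcode level, a malicious payload is detected without giving it any chance to run, which is exactly why this class of threat never surfaces in feeds built around traditional software CVEs.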
Model Security operates as a centralized repository of models drawn from third-party
sources such as Hugging Face.
Additionally, we provide an on-premises scanning solution that deploys directly within
your infrastructure, enabling you to assess your proprietary models locally without
transmitting data to our systems—ensuring complete privacy and security of your assets.
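Conceptually, an on-premises workflow starts by enumerating local model artifacts and triaging them by serialization format, since pickle-based formats can execute arbitrary code at load time while safetensors stores raw tensors and metadata only. The following is a minimal, fully offline sketch; the extension lists and triage logic are illustrative assumptions, not the product's actual detection logic:

```python
from pathlib import Path

# Illustrative extension lists (assumptions for this sketch).
# Pickle-based formats can run arbitrary code when deserialized;
# safetensors is a tensors-plus-metadata format with no code execution on load.
RISKY = {".pkl", ".pickle", ".pt", ".pth", ".bin", ".ckpt"}
SAFER = {".safetensors"}

def triage(model_dir: str) -> dict[str, list[str]]:
    """Bucket every file under model_dir by serialization risk, fully offline."""
    report: dict[str, list[str]] = {"risky": [], "safer": [], "other": []}
    for f in sorted(Path(model_dir).rglob("*")):
        if not f.is_file():
            continue
        if f.suffix in RISKY:
            report["risky"].append(f.name)
        elif f.suffix in SAFER:
            report["safer"].append(f.name)
        else:
            report["other"].append(f.name)
    return report
```

Everything happens on the local filesystem — no model bytes leave the machine, which mirrors the privacy property the on-premises deployment is designed to guarantee.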