Enhanced Scan Results and Remediation Guidance
What's New in the NetSec Platform

Enhanced Scan Results now provide contextual remediation guidance to help you quickly resolve policy violations and security findings.
When you scan AI models with AI Model Security, you receive detailed security analysis that helps you understand not just what went wrong, but exactly how to fix it. Enhanced Scan Results and Remediation Guidance transforms blocked scan verdicts into actionable intelligence by providing contextual remediation steps, security explanations, and direct links to relevant documentation and threat intelligence.
The feature delivers immediate value when your model scans encounter policy violations or security threats. If you scan a model stored in an unapproved format such as pickle when your security group allows only safetensors, you receive specific guidance on converting the model to an approved format, complete with code examples tailored to your framework. When a scan detects a security threat, you see a clear explanation of why the threat matters, what risks it poses, such as data exfiltration or arbitrary code execution, and step-by-step instructions for investigating and resolving the issue. For internal models that fail security checks, you receive critical escalation procedures that guide you through quarantining the model, auditing your training pipeline, and coordinating with your security team on incident response.
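To illustrate the format-violation scenario, the sketch below shows one way to check whether a model file on disk is pickle-based or safetensors before uploading it for a scan. This is a hypothetical helper, not part of the NetSec platform; it relies only on well-known file signatures (safetensors files begin with an 8-byte little-endian header length followed by a JSON header, pickle streams at protocol 2+ begin with the 0x80 opcode, and torch.save archives are ZIP files starting with "PK").

```python
import json
import struct


def detect_model_format(path):
    """Best-effort check of a model file's serialization format.

    Hypothetical helper for pre-scan triage; not the platform's own logic.
    """
    with open(path, "rb") as f:
        head = f.read(8)
    if head[:2] == b"PK":
        # torch.save produces a ZIP archive containing pickled data
        return "zip (torch.save pickle archive)"
    if head[:1] == b"\x80":
        # raw pickle stream, protocol 2 or higher
        return "pickle"
    if len(head) == 8:
        # safetensors: u64 little-endian header length, then a JSON header
        (header_len,) = struct.unpack("<Q", head)
        with open(path, "rb") as f:
            f.seek(8)
            try:
                json.loads(f.read(header_len))
                return "safetensors"
            except (ValueError, OverflowError, MemoryError):
                pass
    return "unknown"
```

Running a check like this locally lets you convert offending files (for example, re-saving the weights with the `safetensors` library) before the scan blocks them.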
This feature eliminates the research burden of interpreting security findings and determining appropriate remediation actions. Instead of generic error messages that leave you to investigate solutions independently, you get remediation steps specific to your organization's security policies, showing exactly which formats, licenses, or locations your security group approves. When scanning public models from platforms like Hugging Face, you receive direct links to threat intelligence in the AI Model Security Insights database, so you can review detailed analysis of the specific vulnerability detected in that model. For compliance and audit purposes, you gain access to point-in-time snapshots of rule configurations at scan time, so you can demonstrate which policies were in effect when a particular model was evaluated. Together, this guidance accelerates the path from a blocked scan to a compliant, secure model deployment while maintaining the security posture your organization requires.
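The audit use of a point-in-time snapshot can be sketched as follows. The field names (`captured_at`, `allowed_formats`, `allowed_licenses`) and the evaluation logic are illustrative assumptions, not the platform's actual schema; the point is that re-checking a model against the snapshotted rules, rather than the current ones, reproduces the verdict as it stood at scan time.

```python
# Illustrative snapshot of a security group's rules as captured at scan time.
# Field names are hypothetical, not the platform's real schema.
snapshot = {
    "captured_at": "2024-06-01T12:00:00Z",
    "allowed_formats": ["safetensors"],
    "allowed_licenses": ["apache-2.0", "mit"],
}


def violations(model, snapshot):
    """Return the policy violations for a model under a snapshotted rule set."""
    issues = []
    if model["format"] not in snapshot["allowed_formats"]:
        issues.append(f"format '{model['format']}' not in approved list")
    if model["license"] not in snapshot["allowed_licenses"]:
        issues.append(f"license '{model['license']}' not approved")
    return issues
```

For example, evaluating `{"format": "pickle", "license": "mit"}` against this snapshot reports only the format violation, demonstrating what the policy rejected at that moment even if the live rules have since changed.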