Secure Custom AI Models on Private Endpoints

Enable custom model support in the AI security profile.
You can extend AI security inspection to LLMs that are hosted on privately managed endpoints or that use input/output schemas that are not publicly known. When you enable this support in your AI security profile, all traffic that matches a security policy rule is forwarded to the AI cloud service for threat inspection, regardless of whether the model is a well-known public service or a custom-built private one. This ensures comprehensive security coverage for your entire AI ecosystem.
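
The sketch below illustrates the general idea of registering a privately hosted model and its input/output schema so that the profile can parse otherwise unknown request and response formats. All names, URLs, field names, and the management endpoint are hypothetical placeholders for illustration only; they are not the documented Strata Cloud Manager or Prisma AIRS API.

```python
import json
import urllib.request

# Hypothetical definition of a privately hosted LLM endpoint and its
# input/output schema, so the AI security profile knows where to find
# prompts and completions in a non-public message format.
custom_model = {
    "name": "internal-support-llm",
    "endpoint": "https://llm.internal.example.com/v1/generate",
    "schema": {
        # Assumed JSONPath-style locations of the prompt and completion
        # fields inside the request and response bodies.
        "prompt_path": "$.input.messages[*].content",
        "response_path": "$.output.text",
    },
}

# Hypothetical management-API call that attaches the custom model to an
# existing AI security profile; URL, path, and token are placeholders.
req = urllib.request.Request(
    url="https://api.example.com/ai-security-profiles/prod-ai-profile/custom-models",
    data=json.dumps(custom_model).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <api-token>",
    },
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```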
The new AI security profile inspects and secures AI traffic between AI applications and LLMs as it passes through Prisma AIRS: Network intercept deployments managed by Strata Cloud Manager or Panorama. The profile protects against threats such as prompt injection and sensitive data leakage.
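
To make the protection concrete, the following minimal sketch shows what an inspection verdict for one intercepted prompt might look like and how a block decision could be derived from it. The verdict structure and field names are assumptions for illustration, not the actual Prisma AIRS report format.

```python
# Illustrative verdict for a single intercepted request that matched
# a security policy rule; field names are hypothetical.
verdict = {
    "profile": "prod-ai-profile",
    "model": "internal-support-llm",
    "direction": "request",   # prompt sent by the AI application
    "threats": [
        {"category": "prompt-injection", "action": "block"},
    ],
    "sensitive_data": [],     # no data-leakage patterns matched
}

def is_blocked(verdict: dict) -> bool:
    """Return True if any detected threat carries a block action."""
    return any(t.get("action") == "block" for t in verdict.get("threats", []))

if is_blocked(verdict):
    print("Request dropped by AI security profile:",
          [t["category"] for t in verdict["threats"]])
```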