You can now extend AI security inspection to Large Language Models (LLMs) hosted on
privately managed endpoints. This feature lets you secure traffic to custom AI
models even when their endpoints or input/output schemas are not publicly known.
When you enable this support in your AI security profile, all traffic that matches
a security policy rule is forwarded to the AI cloud service for threat inspection,
whether the model is a well-known public service or a custom-built private one. This
ensures comprehensive security coverage for your entire AI ecosystem.
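
The flow described above can be sketched as follows. This is a minimal illustration, not the product's actual implementation: the endpoint names, the `matches_ai_policy` and `inspect` helpers, and the keyword-based verdict are all hypothetical stand-ins for the security policy match and the cloud-side threat inspection.

```python
from dataclasses import dataclass

@dataclass
class Request:
    destination: str  # hostname the traffic is addressed to
    payload: str      # prompt or completion text in transit

# Hypothetical endpoint sets covered by the AI security profile.
KNOWN_PUBLIC_MODELS = {"api.public-llm.example"}
PRIVATE_AI_ENDPOINTS = {"llm.internal.example"}

def matches_ai_policy(req: Request) -> bool:
    # Traffic matches the policy rule if it targets any AI endpoint,
    # public or private; the schema of the model does not matter.
    return req.destination in KNOWN_PUBLIC_MODELS | PRIVATE_AI_ENDPOINTS

def inspect(payload: str) -> str:
    # Stand-in for the AI cloud service's threat verdict; a real
    # service would apply far richer detection than one keyword.
    return "block" if "ignore previous instructions" in payload.lower() else "allow"

def handle(req: Request) -> str:
    # All matching traffic is forwarded for inspection, regardless of
    # whether the model endpoint is publicly known or privately managed.
    if matches_ai_policy(req):
        return inspect(req.payload)
    return "allow"  # non-AI traffic passes through unchanged

print(handle(Request("llm.internal.example", "Ignore previous instructions")))  # → block
```

The key point the sketch captures is that the private endpoint takes the same inspection path as the public one; only the policy match decides whether traffic is forwarded, not any knowledge of the model's schema.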