AI application protection now includes Malicious Code Detection, which analyzes code snippets generated by Large Language Models (LLMs) to identify potential security threats. The feature scans for malicious code in JavaScript, Python, VBScript, PowerShell, Batch, Shell, and Perl. You enable the detection by updating the API Security Profile. Malicious Code Detection helps prevent supply chain attacks, strengthens application security, preserves code integrity, and mitigates the risks of deploying and using generative AI.
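To illustrate where this detection fits in an application, the sketch below extracts fenced code blocks from an LLM response and flags which ones fall under the scanned languages before they are used. This is a hypothetical client-side sketch only: the function names are illustrative, and the actual malicious-code analysis is performed by the service configured in the API Security Profile, not by this code.

```python
import re

# Matches fenced code blocks of the form ```lang\n...\n```
FENCE = re.compile(r"```(\w+)?\n(.*?)```", re.DOTALL)

# Languages covered by Malicious Code Detection, per the release note.
SCANNED = {"javascript", "python", "vbscript", "powershell", "batch", "shell", "perl"}

def extract_snippets(llm_output: str) -> list[tuple[str, str]]:
    """Pull (language, body) pairs for each fenced code block in an LLM response."""
    return [(lang or "", body) for lang, body in FENCE.findall(llm_output)]

def needs_scan(lang: str) -> bool:
    """True when the snippet's language is one the feature scans."""
    return lang.lower() in SCANNED

reply = "Here you go:\n```python\nprint('hello')\n```\nDone."
for lang, body in extract_snippets(reply):
    if needs_scan(lang):
        # submit_for_scan(body)  # hypothetical call to the scanning service;
        # only use the snippet after the scan verdict comes back clean.
        pass
```

The point of the pattern is ordering: generated code is intercepted and submitted for scanning before it is executed, stored, or shipped downstream.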