Detect malicious code embedded in plain-text API fields by providing defense against
code injection threats in AI applications.
Malicious code embedded directly in plain-text fields of API prompts or
responses is detected across both synchronous and asynchronous scan services. Even
if the code isn’t in a traditional file format, it is identified and analyzed. For
testing purposes, send malicious code in plain text within the
API “prompt” or “response” fields to confirm detection.
As AI applications become more integrated, the risk of malicious code
injection through user input or model responses increases. This feature helps
safeguard your AI models and applications by providing a layer of defense against
such threats, even when the code is embedded in formats other than traditional
files.