Define AI Content Boundaries with Custom Topic Guardrails
You can enable the Custom Topic Guardrails detection service to identify topic violations in a given prompt or response.
This feature allows you to define specific topics that are allowed or blocked
within the prompts and responses processed by your LLM models. The system then
monitors content for violations of these defined boundaries, ensuring that
interactions with your LLMs stay within the designated subject matter.
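To make the allowed/blocked boundary concrete, here is a minimal sketch of how a topic-violation check might work. This is an illustrative simplification: the topic names, keyword lists, and `find_topic_violations` function are all hypothetical, and a production detection service would use trained topic classifiers rather than keyword matching.

```python
# Hypothetical blocked-topic definitions; a real guardrail service would
# use topic classifiers, not keyword lists.
BLOCKED_TOPICS = {
    "financial_advice": ["invest", "stock tip", "portfolio"],
    "medical_advice": ["diagnosis", "prescription", "dosage"],
}

def find_topic_violations(text, blocked):
    """Return the names of blocked topics whose keywords appear in text."""
    lowered = text.lower()
    return [
        topic
        for topic, keywords in blocked.items()
        if any(kw in lowered for kw in keywords)
    ]

prompt = "Can you give me a stock tip for my portfolio?"
violations = find_topic_violations(prompt, BLOCKED_TOPICS)
# A non-empty result means the prompt violates a defined boundary
# and would be flagged or blocked.
```

The same check can be applied symmetrically to model responses, so that both directions of an interaction are held to the defined boundaries.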
Custom Topic Guardrails provide granular control over the content your AI
models handle, protecting against a range of risks. For example, you
can prevent misuse, maintain brand integrity, ensure compliance, and keep
the LLM's outputs focused on their intended purpose.