Prevent Inaccuracies in LLM Outputs with Contextual Grounding
Detects LLM responses that contain information not in the provided context or that contradict it, helping identify hallucinations and ensure factual accuracy.
You can now enable Contextual Grounding detection for your LLM responses. The detection compares the LLM's generated output against a defined input context; if the response includes information that was not supplied in the context, or that directly contradicts it, the detection flags the inconsistency, helping you identify potential hallucinations and factual inaccuracies.
Ensuring that LLM responses are grounded in the provided context is critical for applications where factual accuracy and reliability are paramount.
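As a rough illustration of how such a check is typically wired into an application, the sketch below sends the grounding context together with the LLM's response to a scan endpoint and inspects the returned verdict. The endpoint URL, header name, and payload fields here are assumptions chosen for illustration, not the documented API; refer to the platform's API reference for the actual request schema.

```python
# Minimal sketch of submitting an LLM response for contextual grounding analysis.
# The endpoint, header, and payload field names below are illustrative assumptions,
# not the documented API schema.
import requests

SCAN_URL = "https://api.example.com/v1/scan/sync/request"  # hypothetical endpoint


def check_grounding(context: str, llm_response: str, api_key: str) -> dict:
    """Send the input context and the LLM output for grounding analysis.

    The detection compares the response against the supplied context and
    flags content that is absent from, or contradicts, that context.
    """
    payload = {
        "contents": [
            {
                "context": context,        # the grounding source material
                "response": llm_response,  # the LLM output to verify
            }
        ],
    }
    headers = {"x-api-key": api_key}  # header name is an assumption
    resp = requests.post(SCAN_URL, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    context = "Our refund policy allows returns within 30 days of purchase."
    answer = "You can return items within 90 days for a full refund."
    result = check_grounding(context, answer, api_key="YOUR_API_KEY")
    # An ungrounded verdict in the result would indicate the response contains
    # information not supported by (or contradicting) the provided context.
    print(result)
```

In this sketch the 90-day claim in the response is not supported by the 30-day policy in the context, which is exactly the kind of inconsistency the detection is designed to surface.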