Grounding is a set of techniques employed to minimise the chance that an LLM will hallucinate while generating a completion. This is done in two broad ways:
- By providing external, authoritative context, rather than relying purely on the model weights. This is done with standard retrieval techniques.
- By providing feedback to the model to correct or constrain its responses. This is typically done with feedback loops.
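The first point above can be sketched as retrieving authoritative text and injecting it into the prompt so the model answers from context rather than from its weights. This is a minimal sketch: the naive word-overlap retriever, the sample corpus, and the prompt template are all illustrative assumptions standing in for a real retriever (e.g. BM25 or embedding search) and a real prompt design.

```python
# Minimal sketch of retrieval-based grounding: fetch authoritative text
# and inject it into the prompt instead of relying on model weights alone.
# The corpus, scoring function, and prompt template are illustrative.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a real retriever such as BM25 or embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt that instructs the model to answer only from context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "The refund window is 30 days from the date of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "Shipping to the EU takes 5 to 7 business days.",
]
print(grounded_prompt("What is the refund window?", corpus))
```

The completion is then generated from this prompt, so the authoritative snippet, not the model's parametric memory, is the source of the answer.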
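The second point, a feedback loop, can be sketched as validating each completion against a deterministic check and feeding the failure back into the prompt until the model produces an acceptable answer. Here `call_model` is a hypothetical stand-in for a real LLM API call, and JSON validity is the example constraint.

```python
# Sketch of a feedback loop: validate the model's output, and on failure
# append the error to the prompt and retry. `call_model` is a hypothetical
# stand-in for a real LLM call; it fails once, then "corrects" itself.
import json

def call_model(prompt: str, attempt: int) -> str:
    """Hypothetical model: returns invalid JSON on the first attempt."""
    if attempt == 0:
        return "Sure! Here is the data: {'price': 10}"  # not valid JSON
    return '{"price": 10}'

def validate(output: str) -> str | None:
    """Return an error message if the output is not valid JSON, else None."""
    try:
        json.loads(output)
        return None
    except json.JSONDecodeError as exc:
        return f"Invalid JSON: {exc}"

def generate_with_feedback(prompt: str, max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        output = call_model(prompt, attempt)
        error = validate(output)
        if error is None:
            return output
        # Feed the validation error back to constrain the next response.
        prompt += f"\nPrevious answer rejected: {error}. Reply with valid JSON only."
    raise RuntimeError("Model failed validation after all attempts")

result = generate_with_feedback("Return the price as JSON.")
```

The same loop shape works for any deterministic validator: a schema check, a linter, a compiler, or a test suite.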
Another good practice is to offload as much work as possible to deterministic scripts.
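One way to offload work is to have the model emit only a structured description of the operation, and let plain code execute it deterministically. This is a hedged sketch: the tool-call JSON shape and the `TOOLS` registry are assumptions, not a specific library's API.

```python
# Sketch of offloading deterministic work: rather than asking the model to
# compute a sum (which it may hallucinate), the model names the operation
# and arguments, and a plain script does the arithmetic. The "model output"
# below is a hypothetical structured tool call.
import json

TOOLS = {
    "sum": lambda args: sum(args),
    "mean": lambda args: sum(args) / len(args),
}

def run_tool_call(model_output: str) -> float:
    """Parse a structured tool call emitted by the model and execute it
    deterministically in Python."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](call["args"])

model_output = '{"tool": "sum", "args": [19, 4, 120]}'
print(run_tool_call(model_output))  # deterministic result: 143
```

The model's only job is to choose the tool and the arguments; the arithmetic itself can never be hallucinated.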
Resources
- https://lexler.github.io/augmented-coding-patterns/patterns/offload-deterministic/