- Inference from data of uncertain quality and/or provenance
- Unintended consequences of the analytical method the system uses
- Failure to provide relevant context in the initial prompt
While there’s still a degree of uncertainty about how these factors interact, there are steps that can be taken to mitigate hallucination:
- Clarity in user prompts: what exactly are you asking for?
- Multiple rounds of prompting, with examples of what we are looking for (few-shot prompting)
- Active mitigation using model parameters such as temperature (presumably end users rely on these being exposed through user-facing interfaces, e.g. Bing AI’s three conversation styles); the sketch after this list illustrates this alongside few-shot prompting
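
To make the last two mitigations concrete, here is a minimal sketch, assuming the OpenAI Python client (`openai >= 1.0`): the message list carries worked examples of the behaviour we want, including an explicit “the context does not say” case to discourage guessing, and a low `temperature` keeps sampling close to the model’s highest-probability output (roughly what Bing AI’s “Precise” setting does). The model name, system prompt, and example pairs are illustrative placeholders, not a prescribed recipe.

```python
# Minimal sketch: few-shot prompting plus a low temperature to
# reduce hallucination. Assumes the OpenAI Python client (>= 1.0)
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot prompting: show the model the format and grounding we
# expect, including an explicit "I don't know" example so that
# refusal is a sanctioned answer.
messages = [
    {"role": "system",
     "content": "Answer only from the supplied context. "
                "If the context does not contain the answer, say so."},
    {"role": "user",
     "content": "Context: The report was published in 2021.\n"
                "Question: When was the report published?"},
    {"role": "assistant", "content": "2021."},
    {"role": "user",
     "content": "Context: The report was published in 2021.\n"
                "Question: Who wrote the report?"},
    {"role": "assistant", "content": "The context does not say."},
    {"role": "user",
     "content": "Context: The survey closed in March 2023.\n"
                "Question: When did the survey close?"},
]

# Model parameters: a low temperature keeps sampling close to the
# model's highest-probability output, trading creativity for precision.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
    temperature=0.2,
)
print(response.choices[0].message.content)
```

The refusal example is the key design choice here: demonstrating that “the context does not say” is an acceptable answer gives the model a sanctioned alternative to inventing one.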
