Why do generative AI systems hallucinate?

  • Inference from data of uncertain quality and/or provenance
  • Unintended consequence of the analytical method the system is using
  • Failure to provide relevant context in the initial prompt

While there’s still a degree of uncertainty about how these factors interact, there are steps that can be taken to mitigate hallucination:

  • Clarity in user prompts: what exactly are you asking for?
  • Multiple rounds of prompting, with examples of what you are looking for (few-shot prompting)
  • Active mitigation strategies using model parameters (end users are presumably reliant on these being built into user-facing interfaces, e.g. Bing AI’s three settings)
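The second and third strategies above can be sketched together in code: assemble a prompt that includes worked examples, and request a conservative sampling temperature. This is a minimal illustration, not a definitive recipe; the request shape is modelled on a chat-completions style API, and the model name, system instruction, and temperature value are all assumptions for illustration.

```python
def build_request(question, examples, temperature=0.2):
    """Assemble a chat request using few-shot examples and conservative sampling."""
    messages = [{"role": "system",
                 "content": "Answer only from well-established facts; "
                            "say 'I don't know' if unsure."}]
    # Few-shot rounds: each example demonstrates the format and
    # grounding we want the model to imitate.
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": question})
    return {
        "model": "gpt-4o-mini",      # assumed model name, for illustration only
        "messages": messages,
        # Lower temperature narrows the sampling distribution, trading
        # creativity for more conservative, repeatable answers.
        "temperature": temperature,
    }

examples = [
    ("Who wrote 'Pride and Prejudice'?", "Jane Austen."),
    ("Who wrote 'Middlemarch'?", "George Eliot."),
]
request = build_request("Who wrote 'North and South'?", examples)
```

The request ends up with one system message, two user/assistant example rounds, and the real question last, so the model sees the desired behaviour before it answers.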
