[-] [email protected] 13 points 1 year ago

I believe this phenomenon is called "hallucination". It happens when a language model goes beyond its training data and makes up information out of thin air. All language models have this flaw, not just ChatGPT.

[-] [email protected] 3 points 1 year ago
[-] [email protected] 0 points 1 year ago

Then stop using GPT4ALL and use a better language model!
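For what it's worth, swapping models is roughly a one-line change if you're using the GPT4All Python bindings. A minimal sketch, assuming `pip install gpt4all`; the model filename here is only an example, substitute whichever GGUF model you actually download:

```python
# Minimal sketch using the GPT4All Python bindings (pip install gpt4all).
# The model filename below is just an example; any GGUF model from the
# GPT4All model list can be substituted to try a different LLM.
from gpt4all import GPT4All

# Downloads the model on first use, then runs it locally.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Explain what model hallucination means.",
        max_tokens=200,
    )
    print(reply)
```

Which model counts as "better" is up to you, but trying a larger or more recent model is usually the easiest way to see whether the hallucinations improve.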
