
When AI Makes Things Up (“Hallucinations”)

GPT-4 is not the World Brain, but it could be a valuable component of it, if we use it wisely and ethically. It could help us to create a more informed, rational, and creative humanity, ready to face the future with confidence and hope.

Well said, H.G.-PT!

The varieties of hallucinatory AIxperience

An admission: I don’t like the term “hallucinations.” To my ear, it somehow sounds both euphemistic (“Relax, it’s just some goofy nonsense!”) and unduly alarming (“Watch out, hippie! This GPT stuff could make you jump off a roof!”) Also, it’s covering a lot of ground. By my count, there are at least four different kinds of “hallucinations” LLMs can produce:

1) Nonsensical. These are probably the least problematic kind, because they’re the easiest to identify.

2) Plausible, but incorrect. These are arguably the most problematic kind, because they can be quite hard to identify, specifically because LLMs like GPT-4 have become so good at presenting information with convincing authority.

3) Responses where the LLM seems to claim capacities it doesn’t actually have, such as sentience or emotion, or (per Microsoft’s Sydney) saying it could spy on users, order a pizza, or take any number of actions that language-prediction software can’t actually do.

4) Deliberate and destructive hallucinations, such as when a user prompts an LLM to generate false information that the user intends to use to mislead, confuse, or produce some other negative effect.
