
Impromptu: Amplifying Our Humanity Through AI

Obviously, in all these different forms, hallucinations have been a big part of the narratives about new LLMs like ChatGPT and Microsoft's Bing/Sydney. Today, when LLM hallucinations are novel and often unsettling, they're understandably generating lots of attention. In part, I believe this is because hallucinations contradict established expectations for how highly evolved AIs are supposed to behave. We thought we were getting all-knowing, supremely logical, and infallibly even-tempered automata; instead, we get a simulation of that smart but sometimes sketchy dude we've been arguing with on Reddit?!

It must be said, though, that the attention also comes because this hallucinatory behavior really does create new potential harms. A confident chatbot telling people how to hotwire a car might inspire them to act on its guidance more than an old, inert web page with the same information. So concerns are not unfounded. But as we try to fully consider LLMs' pros and cons, I would add these points:

● In some circumstances, the power of "good enough knowledge" can be profoundly great.

● Before we decide that LLMs like GPT-4 produce too many errors to tolerate, we should try to understand how many errors they make—and how many errors we already accept in other sources.

● In certain contexts, an LLM's ability to generate non-factual information can be tremendously useful. (In humans we call it "imagination," and it's one of the qualities we most prize in ourselves.)
