
WHEN AI MAKES THINGS UP (“HALLUCINATIONS”)

When OpenAI introduced ChatGPT to the world via a “research preview” on November 30, 2022, a company blog post warned that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

In just five days, one million people signed up to give ChatGPT a spin. As users shared their experiences with it, ChatGPT’s “hallucinations” (as its errors, fabrications, and other algorithmic oddities are often described) became a major theme in the social media chatter and news coverage that initially helped define this uncanny new chatbot. So forgive me if some of these examples and quotes sound like old news:

● A Harvard University researcher said you should “double-check everything” it presents as fact, and always remember that it’s “only one source.”

● A Wired magazine reporter asked whether this was really a productive step forward or just a new way of “unleashing misinformation on the masses.”

● When a well-known journalist saw a biography it had crafted for him speculating about his role in the assassi-

Impromptu by Reid Hoffman with GPT-4 - Page 160