
When AI Makes Things Up ("Hallucinations")

Limits, rules . . . and patience

Long story short, I see a similar dynamic playing out with GPT-4 and related technologies. In fact—as I noted in the introduction to this travelog (and as I touch on in a later chapter on GPT-4 and journalism)—I believe LLMs have the capacity to answer a much wider range of questions than Wikipedia or any other source; I believe they can answer these questions faster; and I believe they can do so through an intuitive interface that makes information retrieval highly accessible to a wide range of users.

What does this all add up to? Because LLMs offer such advantages in breadth, efficiency, and accessibility, I believe they've already achieved the status of "good enough knowledge," despite their hallucinations. More importantly, I'm very confident that from here, things are chiefly going to get better.

So when we hear urgent calls to regulate LLMs like we regulate many other industries, we should remember that today's car and drug regulations did not arise fully fledged. They were informed by years of actual usage, and the associated, measurable problems and negative outcomes. Of course I'm not saying we should wait for "enough" IRL chatbot-related tragedies before we draft meaningful AI safety rules—but I also don't think we have enough information and context yet to determine what regulations we do need.

In the meantime, it's vital to get a more quantitative and systematic handle on the problems and challenges LLMs will present. That's easier said than done, in part because the developers who have the best data on LLM error rates have so far released
