
Introduction: Moments of Enlightenment

ligence. Thus I believe that when describing LLMs, it’s acceptable, even useful, to use words like “knowledge” and “understands” in a not-strictly-literal way, just as Richard Dawkins uses the phrase “the selfish gene” in his 1976 book of that name. A gene doesn’t have conscious agency or self-conception in the way the word “selfish” suggests. But the phrase, the metaphor, helps us humans wrap our inevitably anthropocentric minds around how the gene functions.

Similarly, GPT-4 doesn’t have the equivalent of a human mind. It’s still helpful to think in terms of its “perspective,” anthropomorphizing it a bit, because using language like “perspective” helps convey that GPT-4 does in fact operate in ways that are not entirely fixed, consistent, or predictable. In this way, it actually is like a human. It makes mistakes. It changes its “mind.” It can be fairly arbitrary.

Because GPT-4 exhibits these qualities, and often behaves in ways that make it feel like it has agency, I’ll sometimes use terminology which, in a metaphorical sense, suggests that it does. Moving forward, I’ll dispense with the quotation marks. Even so, I hope that you, as reader, will keep the fact that GPT-4 is not a conscious being at the front of your own wondrously human mind. In my opinion, this awareness is key to understanding how, when, and where to use GPT-4 most productively and most responsibly.

At its essence, GPT-4 predicts flows of language. Trained on massive amounts of text taken from publicly available internet sources to recognize the relationships that most commonly exist between individual units of meaning (including full or partial words, phrases, and sentences), LLMs can, with great
