Impromptu: Amplifying Our Humanity Through AI

frequency, generate replies to users' prompts that are contextually appropriate, linguistically facile, and factually correct. They can also sometimes generate replies that include factual errors, explicitly nonsensical utterances, or made-up passages that may seem (in some sense) contextually appropriate but have no basis in truth. Either way, it's all just math and programming. LLMs don't (or at least can't yet) learn facts or principles that let them engage in commonsense reasoning or make new inferences about how the world works. When you ask an LLM a question, it has no awareness of or insight into your communicative intent. As it generates a reply, it's not making factual assessments or ethical distinctions about the text it is producing; it's simply making algorithmic guesses at what to compose in response to the sequence of words in your prompt.

In addition, because the corpora² on which LLMs train typically come from public web sources that may contain biased or toxic material, LLMs can also produce racist, sexist, threatening, and otherwise objectionable content.

Developers can take actions to better align their LLMs with their specific objectives. OpenAI, for example, has chosen to deliberately constrain the outputs that GPT-4 and its other LLMs can produce, to reduce their capacity to generate harmful, unethical, and unsafe outputs, even when users desire such results. To do this, OpenAI takes a number of steps. These include removing hate speech, offensive language, and other objectionable content from some datasets its LLMs are trained on; devel-

² "Corpora" is the plural of "corpus," which in this context refers to a collection of written texts used for language research.
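The "algorithmic guesses" described above can be illustrated with a deliberately tiny sketch. This is not how GPT-4 actually works (real LLMs use neural networks over tokens, not word-frequency tables), but a toy bigram model shows the underlying idea: each next word is chosen purely by how often it followed the previous word in the training text, with no notion of truth or intent.

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, how often every other word follows it."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def guess_next(model, word):
    """Pick a continuation weighted by observed frequency.

    This is a statistical guess, not a factual assessment: the model
    has no idea whether the resulting sentence is true or sensible.
    """
    counts = model.get(word)
    if not counts:
        return None  # never saw this word followed by anything
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# A hypothetical miniature "corpus" for illustration only.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(guess_next(model, "the"))  # "cat" or "mat", weighted by frequency
```

Scaled up by many orders of magnitude, with neural networks instead of lookup tables, this frequency-driven guessing is the family of techniques behind the fluent (and occasionally fabricated) replies the chapter describes.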