AI is mastering language. Should we trust what it says?

“I think it leaves us more thoughtful and more aware of safety issues,” Altman says. “Part of our strategy is: gradual change in the world is better than sudden change.” Or as OpenAI VP Mira Murati put it when I asked her about the safety team’s work to limit open access to the software, “If we’re going to learn how to deploy these powerful technologies, let’s start when the stakes are very low.”

While GPT-3 itself runs on the 285,000 CPU cores of the supercomputer cluster in Iowa, OpenAI operates out of a refurbished luggage factory in San Francisco’s Mission District. Last November, I met Ilya Sutskever there, in an attempt to elicit a layman’s explanation of how GPT-3 really works.

“Here’s the underlying idea of GPT-3,” Sutskever said intently, leaning forward in his chair. He has an intriguing way of answering questions: a few false starts – “I can give you a description that almost matches the one you asked for” – interrupted by long, contemplative pauses, as if he were mapping out the entire answer in advance.

“The underlying idea of GPT-3 is a way of connecting an intuitive notion of understanding to something that can be measured and understood mechanistically,” he finally said, “and that is the task of predicting the next word in a text.” Other forms of artificial intelligence try to hard-code knowledge about the world: the chess strategies of grandmasters, the principles of climatology. But the intelligence of GPT-3, if intelligence is the right word for it, comes from the bottom up: through the elementary act of next-word prediction. To train GPT-3, the model is given a “prompt” – a few sentences or paragraphs of text from, say, a newspaper article, a novel or a scientific paper – and is then asked to suggest a list of potential words that could complete the sequence, ranked by probability. In the early stages of training, the suggested words are nonsense. Prompt the algorithm with a sentence like “The author has omitted the very last word of the first . . .” and the guesses will be a kind of gibberish: “satellite,” “puppy,” “Seattle,” “therefore.” But somewhere down the list – perhaps thousands of words down – the correct missing word appears: “section.” The software then strengthens whatever random neural connections generated that suggestion and weakens all the connections that generated incorrect guesses. And then it moves on to the next prompt. Over time, with enough repetition, the software learns.
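That training loop is easier to see in miniature. The sketch below, in Python with PyTorch, is illustrative only: the toy text, the tiny embedding-plus-linear network and all the parameter choices are assumptions for demonstration, nothing like GPT-3 itself, which is a transformer with some 175 billion parameters.

```python
# A toy next-word predictor, illustrating the training loop described above.
# Sketch only: GPT-3 is a 175-billion-parameter transformer, not this model.
import torch
import torch.nn as nn

text = ("the author has omitted the very last word of the first section "
        "and the software learns to predict the next word in the text").split()
vocab = sorted(set(text))
stoi = {w: i for i, w in enumerate(vocab)}

# Training pairs: every word in the text predicts the word that follows it.
xs = torch.tensor([stoi[w] for w in text[:-1]])
ys = torch.tensor([stoi[w] for w in text[1:]])

model = nn.Sequential(
    nn.Embedding(len(vocab), 32),   # map each word to a 32-dimensional vector
    nn.Linear(32, len(vocab)),      # score every vocabulary word as the next word
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

for step in range(300):
    logits = model(xs)
    # Penalize the model when it puts low probability on the actual next word.
    loss = nn.functional.cross_entropy(logits, ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # strengthen connections behind good guesses, weaken the rest

# After training: rank candidate next words for the prompt word "the".
probs = torch.softmax(model(torch.tensor([stoi["the"]])), dim=-1)[0]
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{vocab[i.item()]}: {p.item():.3f}")
```

The gradient step plays the role the passage describes: connections that contributed to the correct guess get strengthened, and those behind wrong guesses get weakened, one prompt at a time.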

Last January, OpenAI added a feature that allowed users to give GPT-3 direct instructions as a prompt, rather than just asking it to extend a sample passage of text. For example, using the “instruct” mode, I once prompted GPT-3: “Write an essay discussing the role of metafiction in Italo Calvino’s work.” In return, the software delivered a tightly constructed five-paragraph précis, which began as follows:

The Italian author Italo Calvino is considered a master of metafiction, a writing genre in which the author breaks the fourth wall to discuss the act of writing itself. For Calvino, metafiction is a way of exploring the nature of reality and the ways in which stories can shape our perception of the world. His novels often incorporate playful, labyrinthine structures that play with the boundaries between reality and fiction. In If on a Winter’s Night a Traveler, for example, the reader is constantly interrupted by discussions at the meta-level about the act of reading and the nature of storytelling. . . .

You can give GPT-3 the exact same prompt, over and over again, and each time it will generate a unique response, some more compelling than others but almost all of them remarkably well worded. Instruction prompts can take all sorts of forms: “Give me a list of all the ingredients in Bolognese sauce,” “Write a poem about a French coastal village in the style of John Ashbery,” “Explain the Big Bang in language that an 8-year-old will understand.” The first few times I fed GPT-3 prompts of this kind, I felt a genuine shiver run down my spine. It seemed almost impossible that a machine could generate text so lucid and responsive based entirely on the elementary training of next-word prediction.
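For readers curious what such prompting looks like in code, here is a sketch of an instruct-style request sent through OpenAI’s API, assuming the pre-1.0 openai Python client in use around the time described; the model name and parameter values are illustrative assumptions, not a record of the author’s actual calls.

```python
# A sketch of instruct-mode prompting, assuming the pre-1.0 openai Python
# client; "text-davinci-002" stands in for an instruct-era model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Write an essay discussing the role of metafiction "
           "in Italo Calvino's work.",
    max_tokens=400,
    temperature=0.7,  # sampling, not argmax: the same prompt yields a new response each run
)
print(response.choices[0].text.strip())
```

The nonzero temperature is what makes each run unique: rather than always taking the single most probable next word, the model samples from the ranked list it produces.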

But AI has a long history of creating the illusion of intelligence or understanding without actually delivering the goods. In a much-discussed paper published last year, Emily M. Bender, a professor of linguistics at the University of Washington, former Google researcher Timnit Gebru, and a group of co-authors argued that large language models were merely “stochastic parrots”: that is, the software uses randomization to simply remix sentences originally written by humans. “What has changed is not a step over a threshold towards ‘AI,’” Bender recently told me via email. Rather, she said, what has changed is “hardware, software and economic innovations that enable the accumulation and processing of huge datasets” – as well as a technological culture in which “people who build and sell such things can get away with building them on the basis of uncurated data.”