These strange new minds: How AI learned to talk and what it means

Christopher Summerfield

Book - 2025

"An insider look at the Large Language Models (LLMs) that are revolutionizing our relationship to technology, exploring their surprising history, what they can and should do for us today, and where they will go in the future--from an AI pioneer and neuroscientist. In this accessible, up-to-date, and authoritative examination of the world's most radical technology, neuroscientist and AI researcher Christopher Summerfield explores what it really takes to build a brain from scratch. We have entered a world in which disarmingly human-like chatbots, such as ChatGPT, Claude and Bard, appear to be able to talk and reason like us--and are beginning to transform everything we do. But can AI 'think', 'know' and 'und...erstand'? What are its values? Whose biases is it perpetuating? Can it lie and if so, could we tell? Does their arrival threaten our very existence? These Strange New Minds charts the evolution of intelligent talking machines and provides us with the tools to understand how they work and how we can use them. Ultimately, armed with an understanding of AI's mysterious inner workings, we can begin to grapple with the existential question of our age: have we written ourselves out of history or is a technological utopia ahead?" --

Genres
Informational works
Published
New York, NY : Viking, 2025.
Language
English
Main Author
Christopher Summerfield (author)
Edition
First United States edition
Item Description
First published in hardcover in Great Britain by Viking, part of Penguin Random House Group of companies, Penguin Random House Ltd., London, in 2025.
Physical Description
ix, 373 pages ; 24 cm
Bibliography
Includes bibliographical references (pages 349-360) and index.
ISBN
9780593831717
Contents
  • Part 1. How did we get here?
  • Part 2. What is a language model?
  • Part 3. Do language models think?
  • Part 4. What should a language model say?
  • Part 5. What could a language model do?
  • Part 6. Are we all doomed?
Review by Booklist Review

Summerfield, neuroscientist and former researcher at DeepMind, offers one of the most balanced and realistic assessments of the current state of AI technology as well as a summary of how AI was first conceived and developed. In doing so, he spends as much time exploring linguistics and neuroscience as he does technology: Noam Chomsky is as much a part of this story as Alan Turing. Summerfield examines the philosophies of the major players in the AI sphere and spends some time assessing the various hopes and fears people have for it. Among the slate of AI-focused, pop-sci books hitting the shelves recently, this one does the best job of explaining for a lay reader how AI is structured and trained. But Summerfield takes it further, comparing the ways AI functions to the workings of the human brain to show not just the potential of AI for true intelligence and creativity but also that AI is fundamentally different from human intelligence. Those differences pose the biggest challenges to the next steps of AI development. This technology contains both tremendous potential and very real danger. Summerfield tackles all of it with humor, wit, and candor.

From Booklist, Copyright (c) American Library Association. Used with permission.
Review by Publisher's Weekly Review

This superlative study from Oxford University neuroscientist Summerfield (Natural General Intelligence) explores how large language models work and the thorny questions they raise. He explains that neural networks learn by guessing the relationships between data points and developing "weights" that prioritize the processing pathways most likely to produce correct answers. Wading into debates around whether LLMs possess knowledge or merely proffer predictions, Summerfield makes the provocative argument that human learning is essentially predictive, depending on the same trial-and-error strategy LLMs use; according to the author, this indicates human knowledge is comparable to AI knowledge. Summerfield is remarkably levelheaded in his assessment of AI's capabilities, suggesting that while obstacles to designing AI assistants that can book trips and pay bills may be resolved in the next several years, it's unlikely LLMs will ever become sentient given their inability to experience physical sensation. The lucid analysis also makes clear that technological improvements will never overcome such pitfalls as deciding when to present answers as definitive or as open to debate, since such problems depend on subjective judgment. By inquiring into the nature of knowledge and consciousness, Summerfield brings some welcome nuance and clarity to discussions of LLMs. In a crowded field of AI primers, this rises to the top. Agent: Rebecca Carter, Rebecca Carter Literary. (Mar.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Book Review

A close-up look at the large language models that have radically changed computer technology. The world fundamentally changed when artificial intelligence systems learned to talk, says this intriguing book. It meant that humans no longer had a monopoly on cooperation, knowledge generation and sharing, and conceptualization. Summerfield, a professor of cognitive neuroscience at the University of Oxford and a staff research scientist at Google DeepMind, is well placed to explain the origins and nature of the large language models (or LLMs) that have taken AI systems to a more advanced level. He is wary of this new generation, concerned that they are developing faster than the means of human control. While acknowledging that LLMs can be very useful at organizing and providing information, the author provides plenty of examples of dangers they could unleash, including fake legal cases or stock market crashes. Developers have tried to prevent these risks by providing more context for AI responses, but that raises the issue of AI systems reflecting the biases of the programmers. The fact that LLM-informed systems can communicate with each other also means that some things occur without human involvement. Does all this mean that the systems "think"? Summerfield's conclusion: whatever they're doing looks a lot like it. The key problem is that we are plunging ahead with ever-smarter systems without understanding their impact, and Summerfield thus calls for coordinated research into the field from developers, regulators, and governments. "The era we have just entered--where AI can speak, both to us and to each other--is a watershed moment," Summerfield writes. "We don't yet understand what it will mean for humanity, but it's going to be exciting--and slightly terrifying--to find out." A clear-minded, accessible examination of how AI systems work. Copyright (c) Kirkus Reviews, used with permission.