If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

Eliezer Yudkowsky and Nate Soares

Book - 2025

"In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next. For decades, two signatories of that letter -- Eliezer Yudkowsky and Nate Soares -- have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us -- and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn&#...039;t even be close. How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies." --

Genres
Informational works
Published
Little, Brown and Company, 2025.
Language
English
Main Author
Eliezer Yudkowsky
Other Authors
Nate Soares
Physical Description
272 p.
ISBN
9780316595643
Review by Publishers Weekly

In this urgent clarion call to prevent the creation of artificial superintelligence (ASI), Yudkowsky and Soares, co-leaders of the Machine Intelligence Research Institute, argue that while they can't predict the actual pathway that the demise of humanity would take, they are certain that if ASI is developed, everyone on Earth will die. The profit motive incentivizes AI companies to build smarter and smarter machines, according to the authors, and if "machines that think faster and better than humanity" get created, perhaps even by AIs doing AI research, they wouldn't choose to keep humans around. Not only would such machines no longer need humans, they might use people's bodies to meet their own ends, perhaps by burning all life-forms for energy. The authors moderate their ominous outlook by noting that ASI does not yet exist, and it can be prevented. They propose international treaties banning AI research that could result in superintelligence, and laws that limit the number of graphics processing units that can be linked together. To drive home their point, Yudkowsky and Soares make extensive use of parables and analogies, some of which are less effective than others. They also present precious few opposing viewpoints, even though not all experts agree with their dire perspective. Still, this is a frightening warning that deserves to be reckoned with. (Sept.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Reviews

Making the unimaginable feel real, apocalyptic scenarios visualize AI's potential to tip humanity into extinction. As signaled by their alarming title, Yudkowsky and Soares issue a stark warning: Unless we act now to contain powerful superintelligent AI systems, humanity may not survive. Yudkowsky, co-founder of the Machine Intelligence Research Institute, and Soares, its president, target politicians, CEOs, policymakers, and the general public in their urgent plea. The book opens with an accessible breakdown of what AI is, how it's built, and why even its creators often can't comprehend the accelerating complexity of their own systems. Through parable-like vignettes, the authors expose the underlying realities of AI algorithms--advanced AIs are not engineered so much as grown, operating with opaque and unpredictable results, untethered from human values. The most chilling passages describe how AIs could escape computers and manipulate the physical and financial worlds, eventually repurposing Earth's resources to serve alien objectives or replacing humanity with their own "favorite things." The authors warn, "Nobody has the knowledge or skill to make a superintelligence that does their bidding," arguing that world governments must cooperate to restrict or, ideally, halt AI research. Policymakers have not yet grasped the full implications of these advanced systems, and the public hasn't felt the impact in their lives, but the authors caution that both must be persuaded to act immediately. While some scenarios seem extreme or unrealistic, including the hope that global leaders can agree on defining the problem or collaborate on solutions, the book's arguments that the risks are elevated and time is short are persuasive. There is excellent information and food for thought here, including links to resources for readers motivated to join the fray. A timely and terrifying education on the galloping havoc AI could unleash--unless we grasp the reins and take control.

Copyright (c) Kirkus Reviews, used with permission.