Review by Publishers Weekly
In this urgent clarion call to prevent the creation of artificial superintelligence (ASI), Yudkowsky and Soares, co-leaders of the Machine Intelligence Research Institute, argue that while they can't predict the actual pathway that the demise of humanity would take, they are certain that if ASI is developed, everyone on Earth will die. The profit motive incentivizes AI companies to build smarter and smarter machines, according to the authors, and if "machines that think faster and better than humanity" get created, perhaps even by AIs doing AI research, they wouldn't choose to keep humans around. Such machines would not only no longer need humans but might use people's bodies to meet their own ends, perhaps by burning all life-forms for energy. The authors moderate their ominous outlook by noting that ASI does not yet exist and can still be prevented. They propose international treaties banning AI research that could result in superintelligence, along with laws limiting the number of graphics processing units that can be linked together. To drive home their point, Yudkowsky and Soares make extensive use of parables and analogies, some of which are less effective than others. They also present precious few opposing viewpoints, even though not all experts agree with their dire perspective. Still, this is a frightening warning that deserves to be reckoned with. (Sept.)
(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Reviews
Making the unimaginable feel real, apocalyptic scenarios visualize AI's potential to tip humanity into extinction. As signaled by their alarming title, Yudkowsky and Soares issue a stark warning: Unless we act now to contain powerful superintelligent AI systems, humanity may not survive. Yudkowsky, co-founder of the Machine Intelligence Research Institute, and Soares, its president, target politicians, CEOs, policymakers, and the general public in their urgent plea. The book opens with an accessible breakdown of what AI is, how it's built, and why even its creators often can't comprehend the accelerating complexity of their own systems. Through parable-like vignettes, the authors expose the underlying realities of AI algorithms: advanced AIs are not engineered so much as grown, operating with opaque and unpredictable results, untethered from human values. The most chilling passages describe how AIs could escape computers and manipulate the physical and financial worlds, eventually repurposing Earth's resources to serve alien objectives or replacing humanity with their own "favorite things." The authors warn, "Nobody has the knowledge or skill to make a superintelligence that does their bidding," arguing that world governments must cooperate to restrict, or ideally halt, AI research. Policymakers have not yet grasped the full implications of these advanced systems, and the public hasn't felt the impact in their lives, but the authors caution that both must be persuaded to act immediately. While some scenarios seem extreme or unrealistic, including the hope that global leaders can agree on defining the problem, much less collaborate on solutions, the book's arguments that the risks are high and time is short are persuasive. There is excellent information and food for thought here, including links to resources for readers motivated to join the fray. A timely and terrifying education on the galloping havoc AI could unleash, unless we grasp the reins and take control.
Copyright (c) Kirkus Reviews, used with permission.