The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale

John A. List, 1968-

Book - 2022

"A leading economist answers one of today's trickiest questions: Why do some great ideas make it big while others fail to take off? "Scale" has become a favored buzzword in the startup world. But scale isn't just about accumulating more users or capturing more market share. It's about whether an idea that takes hold in a small group can do the same in a much larger one-whether you're growing a small business, rolling out a diversity and inclusion program, or delivering billions of doses of a vaccine. Translating an idea into widespread impact, says University of Chicago economist John A. List, depends on one thing only: whether it can achieve "high voltage"-the ability to be replicated at scale. ...In The Voltage Effect, List explains that scalable ideas share a common set of attributes, while any number of attributes can doom an unscalable idea. Drawing on his original research, as well as fascinating examples from the realms of business, policymaking, education, and public health, he identifies five measurable vital signs that a scalable idea must possess, and offers proven strategies for avoiding voltage drops and engineering voltage gains. You'll learn: How celebrity chef Jamie Oliver expanded his restaurant empire by focusing on scalable "ingredients" (until it collapsed because talent doesn't scale) Why the failure to detect false positives early on caused the Reagan-era drug-prevention program to backfire at scale How governments could deliver more services to more citizens if they focused on the last dollar spent How one education center leveraged positive spillovers to narrow the achievement gap across the entire community Why the right set of incentives, applied at scale, can boost voter turnout, increase clean energy use, encourage patients to consistently take their prescribed medication, and more. By understanding the science of scaling, we can drive change in our schools, workplaces, communities, and society at large. Because a better world can only be built at scale"--

Published
New York : Currency [2022]
Language
English
Main Author
John A. List, 1968- (author)
Edition
First edition
Physical Description
265 pages ; 25 cm
Bibliography
Includes bibliographical references and index.
ISBN
9780593239483
9780593443521
  • Introduction
  • Part 1. Can Your Idea Scale?
  • 1. Dupers and False Positives
  • 2. Know Your Audience
  • 3. Is It the Chef or the Ingredients?
  • 4. Spillovers
  • 5. The Cost Trap
  • Part 2. Four Secrets to High-Voltage Scaling
  • 6. Incentives That Scale
  • 7. Revolution on the Margins
  • 8. Quitting Is for Winners
  • 9. Scaling Culture
  • Conclusion
  • Acknowledgments
  • Notes
  • Index

Part One: CAN YOUR IDEA SCALE?

1. Dupers and False Positives

On September 14, 1986, First Lady Nancy Reagan appeared on national television to address the nation from the West Sitting Hall of the White House. She sat on a sofa next to her husband, President Ronald Reagan, and gazed into the camera. "Today there's a drug and alcohol abuse epidemic in this country and no one is safe from it," she said. "Not you, not me, and certainly not our children."

This broadcast was the culmination of all the traveling the First Lady had done over the preceding five years to raise awareness among American youth about the dangers of drug use. She had become the public face of the preventative side of President Reagan's War on Drugs, and her message hinged on a catchphrase that millions of people still remember, which she employed once again that evening on television. "Not long ago, in Oakland, California," Nancy Reagan told viewers, "I was asked by a group of children what to do if they were offered drugs. And I answered, 'Just say no.' "

Although there are different accounts of where this infamous slogan originated--with an academic study, an advertising agency, or the First Lady herself--its "stickiness," to use the parlance of marketing, was undeniable. The phrase appeared on billboards, in pop songs, and on television shows; school clubs took it as a name. And in the popular imagination it became inseparable from what government and law enforcement officials saw as the crown jewel of the Reagan-era drug prevention campaign: Drug Abuse Resistance Education, or D.A.R.E.

In 1983, Los Angeles chief of police Daryl Gates announced a shift in his department's approach to the War on Drugs: instead of busting kids in possession of illegal substances, the new focus would be on preventing those drugs from getting into their hands in the first place. This was how D.A.R.E., with its iconic logo of red letters set against a black background, was born.

D.A.R.E. was an educational program built on a theory from psychology called social inoculation, which took from epidemiology the concept of vaccination--administering a small dose of an infectious agent to induce immunity--and applied it to human behavior. The approach of the program was to bring uniformed officers into schools, where they would use role-playing and other educational techniques to inoculate kids against the temptations of drugs.

It certainly sounded like a great idea, and the early research on D.A.R.E. was encouraging. As a result, the government opened its taxpayer-funded faucet, and soon the program was scaled up in middle schools and high schools across the country. Over the next twenty-four years, 43 million children from over forty countries would graduate from D.A.R.E.

There was only one problem: D.A.R.E. didn't actually work.

In the decades since Nancy Reagan urged the nation's youth to "just say no" to drugs, numerous studies have demonstrated that D.A.R.E. did not in fact persuade kids to just say no. It provided children with a great deal of information about drugs such as marijuana and alcohol, but it failed to produce statistically significant reductions in drug use when these same kids were presented with opportunities to use them. One study even found that the program spurred participants' curiosity about drugs and increased the likelihood of experimentation.

It is hard to overstate the cost of D.A.R.E.'s voltage drop at scale.
For years, the program consumed the time and effort of thousands of teachers and law enforcement officers who were deeply invested in the well-being of our greatest natural resource: future generations. Yet all of this hard work and time, never mind taxpayer dollars, was wasted on scaling D.A.R.E. because of a fundamentally erroneous premise. Worse, it diverted support and resources away from other initiatives that might have yielded real results. Why D.A.R.E. became the disaster that it did is a textbook example of the first pitfall everyone hoping to scale an idea or enterprise must avoid: a false positive.

The Truth About False Positives

A first truth about false positives is that they can be thought of as "lies" or "false alarms." At the most basic level, a false positive occurs when you interpret some piece of evidence or data as proof that something is true when in fact it isn't. For example, when I visited a high-tech plant in China that produced headsets, if a properly working headset was marked as defective due to human error, that was a false positive. When I was called for jury duty, a false positive would have occurred had we determined that an innocent suspect was guilty. False positives also show up in medicine, a phenomenon that gained attention during the pandemic, when some unreliable test results showed that people had contracted the virus when in reality they had not. Unfortunately, false positives are ubiquitous across contexts; consider a 2005 study that found that between 94 and 99 percent of burglar-alarm calls turn out to be false alarms, and that false alarms make up between 10 and 20 percent of all calls to police.

In the case of D.A.R.E., the National Institute of Justice's 1985 assessment involving 1,777 children in Honolulu, Hawaii, found evidence "favoring the program's preventative potential," and a subsequent study conducted soon after in Los Angeles among nearly as many students also concluded that D.A.R.E. led to a reduction in drug experimentation. These purportedly strong results drove schools, police departments, and the federal government to just say yes to expanding D.A.R.E. nationwide. Yet numerous scientific analyses over the following decade, examining all of the known studies and data on the program, yielded incontrovertible proof that D.A.R.E. didn't actually have a meaningful impact.

So what happened? The simple answer is this: it is not uncommon for data to "lie." In the Honolulu study, for example, the researchers had calculated that there was a 2 percent chance their data would yield a false positive. Subsequent research shows that either they underestimated that probability or they simply fell within that 2 percent. There was never any voltage in D.A.R.E.

How can something like this happen in the hallowed halls of science? First, I should clarify that when I say the data are "lying," what I'm actually referring to is "statistical error." When you draw a sample of children from a certain population (say, children living in a single city in Hawaii), random differences among them might produce an "outlier group" that leads you to a false conclusion. Had the researchers gone back to the original population of children in Honolulu and tested D.A.R.E. again with a new group of students, they would likely have found that the program didn't work.
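To make the statistics concrete, here is a minimal simulation, mine rather than the book's, of how a study of a program with zero true effect can still clear a 2 percent false-positive threshold purely by chance. Only the 2 percent figure comes from the passage above; the sample sizes, the test, and the random seed are illustrative assumptions.

```python
# Illustrative sketch (not from the book): repeatedly "study" a program that
# truly does nothing, and count how often chance alone produces a result that
# looks significant at the ~2 percent level the Honolulu researchers used.
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(42)
CRITICAL = 2.33   # approximate two-sided critical value for p = 0.02 (large samples)
N = 250           # children per arm; hypothetical, not the actual study size
TRIALS = 2000     # number of simulated studies

false_positives = 0
for _ in range(TRIALS):
    # Both groups are drawn from the SAME population: the true effect is zero.
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(0, 1) for _ in range(N)]
    if abs(welch_t(treated, control)) > CRITICAL:
        false_positives += 1

print(f"'Significant' results with no true effect: "
      f"{false_positives / TRIALS:.1%} (expected ~2%)")
```

Run it with different seeds and roughly one simulated study in fifty flags a "significant" difference even though both groups came from the same population; that is exactly the kind of "lie" a single study, like the Honolulu one, can tell.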
(A related kind of inference problem is when the results from one group don't generalize to another; we take up this issue in Chapter 2.)

Unfortunately, statistical failures of this sort happen all the time. As we saw with D.A.R.E., false positives can be very costly because they lead to misinformed decisions with downstream consequences--time and money that would have been better invested elsewhere. This is especially true when the "lie" or error is missed early on, causing enterprises that were never actually successful to begin with to suffer an inevitable voltage drop at scale. In other words, eventually the truth will come out, as it did for D.A.R.E. when its critics produced overwhelming empirical evidence that the program didn't work. I have witnessed this firsthand in my own work in the business world.

Excerpted from The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale by John A. List. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.