Rethinking Medications: Truth, Power, and the Drugs You Take

Jerry Avorn

Book - 2025

"Groundbreaking research has given us many remarkable new medicines, but America's drug evaluation process, once the envy of the world, is being seriously compromised. Under pressure from drugmakers, the FDA has been lowering its approval standards and has let poorly effective or risky products enter the market--while our prescription prices, the highest in the world, put crucial treatments beyond the reach of many. In Rethinking Medications, Dr. Jerry Avorn explains how we got here and what we can do to ensure that our medicines are dependably effective, safe, and affordable"--

Published
New York : Simon & Schuster, 2025.
Language
English
Main Author
Jerry Avorn (author)
Edition
First Simon & Schuster hardcover edition
Physical Description
x, 498 pages ; 24 cm
Bibliography
Includes bibliographical references (pages 455-478) and index.
ISBN
9781668052846
Review by Booklist Review

Harvard doctor Avorn makes a strong case that at times pharmaceutical companies dramatically put profits ahead of patient health. For example, Merck tried to hide the risks of its blockbuster drug Vioxx, which reduced inflammation but increased the risk of heart attacks. Contrary to popular belief, the FDA itself does not test drugs before they're allowed on the market. Avorn sprinkles in fascinating historical tidbits, including how the altruistic inventors of insulin essentially gave their patent to their university and how polio-vaccine creator Jonas Salk also didn't care about personal gain. Today, too many pharmaceutical companies see drugs primarily in terms of money, charging exorbitant prices such as $1,000 a pill for a hepatitis C medication. They invest heavily in lobbying, spending $378 million in 2023. Their goal? Maximizing profit. Avorn wants to help the medical profession rediscover how to help people who are suffering. This eye-opening look at the pharmaceutical industry should make FDA officials want to scrutinize drug approvals more carefully, doctors want to prescribe more carefully, and patients want to consume more carefully.

From Booklist, Copyright (c) American Library Association. Used with permission.
Review by Publishers Weekly Review

In this troubling report, Avorn (Powerful Medicines), a professor of medicine at Harvard Medical School, explores problems with the development and administration of medicine. Charting America's history of underregulating drugs, Avorn notes that listing a medication's contents on its container only became mandatory in 1906, and that a 1962 law marked the first time pharmaceutical manufacturers had to prove to the Food and Drug Administration that a medication works before bringing it to market. Such safety measures are imperiled today, Avorn contends, warning that since the 1980s, successful efforts by pharmaceutical industry lobbyists to loosen testing standards for FDA approval have left doctors with a more limited understanding of new drugs' side effects. Avorn blames the pharmaceutical industry's lack of transparency for endangering patients, recounting how the drug manufacturer Merck received approval for the painkiller Vioxx, which was later found to increase patients' risk of heart attack, based on distorted research that withheld evidence of the side effect. Sensible suggestions for ameliorating such harms include requiring pharmaceutical companies to upload all trial results to a public database and stipulating that clinical trials must compare a new drug's efficacy and safety with the best available current treatment, instead of just a placebo. A damning survey of the drug development system's many failures, this enlightens even as it infuriates. Agent: Michael Carlisle, InkWell Management. (Apr.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Book Review

Unsettling news about prescription drugs. Avorn, professor of medicine at Harvard Medical School, reminds readers that pharmaceutical companies, despite bitter opposition, were required to prove that their drugs worked beginning only in 1962. Ironically, their first mass support came from radicals when 1980s AIDS activists denounced the neglect and slow pace of anti-AIDS drug approval. Responding, the FDA created an Accelerated Approval program to release drugs quickly based on "surrogate" clinical endpoints. For example: If, early in research, an anti-diabetic drug lowers blood sugar, that's a hopeful surrogate sign, and it may be approved. But lowering blood sugar does nothing to prevent heart disease, blindness, infections, and other diseases that afflict diabetics. Sensibly, the FDA insists that research proceed, but drug companies avoid this. A drug proven effective does not increase profits because it's already approved, and failure is disaster. As a result, most drugs approved today haven't been shown to benefit patients, and an unnerving number, many wildly expensive, are considered useless by experts, if not by their manufacturers. Others are toxic, but, Avorn writes, "for decades the FDA had a kind of attention deficit disorder concerning drugs it has already approved." Vioxx, an anti-inflammatory similar to Motrin or Advil, became the world's bestseller after its 1999 approval. It also greatly increased the risk of heart attacks and strokes, although five years passed before it was pulled from the market. Similar debacles abound, so readers may breathe a sigh of relief at the author's diversions into his life and career at Harvard Medical School, where, he writes, "one eminent department chair…had a standard response to faculty recruits who balked at the paltry academic salaries he was offering them: 'Just think of it as a base. You can earn much more, maybe double that amount, by consulting for drug companies.'" Avorn provides sensible solutions, but many involve increased government oversight, which seems unlikely these days. A masterful assessment of a highly flawed health care system. Copyright (c) Kirkus Reviews, used with permission.


Chapter 1: How Do We Know?

We have access to more information and evidence than ever, but facts seem to have lost their power.
--This Is Not Propaganda, a book about the Soviet Union

In 2021, the Food and Drug Administration gave its approval to Aduhelm, a new drug for Alzheimer's disease that didn't work, could cause brain damage, and was poised to cost the nation each year a sum the size of NASA's annual budget. How did the world's once best prescription drug regulatory body fall so low? And how does this decline impact the medications that Americans take every day? We need to start by considering how a drug is evaluated to determine whether it works, and what we even mean by "working." It took us over a century to learn how to rethink this question; knowing about that journey is key to understanding where we've ended up, and to contemplating the more primitive approach to which we may be returning.

Before 1906 anybody could put anything they wanted in a bottle and call it a medicine, without even having to reveal what was in it. A manufacturer could then make any claims it wanted about the product's effectiveness for any condition. They were called "patent medicines," even though they were generally not patented. Many of them did no good at all, and some were downright dangerous. Pills and elixirs promoted to treat pain, depression, cancer, "female troubles," liver disease, and a host of other complaints filled store shelves and mail-order catalogues. Many were physiologically inert, but some contained hefty amounts of alcohol, opium, cocaine, or a combination of them. Yes, sick babies given narcotic or alcohol elixirs did seem to become more comfortable, stopped crying, and slept better. Many of them also stopped breathing.

On the picker-upper side of the medicine cabinet, it's widely known that Coca-Cola got its name because the active ingredient in the original formulation was cocaine. Apart from its substantial addictive potential, this explains why so many people believed that things really did go better with Coke. That ingredient was removed in the early 1900s.

A Uniquely American Condition?

Around the same time, the Rexall company introduced its "Americanitis Elixir" to treat the ills caused by a rapidly industrializing society. The product was "as necessary as food and drink," its ads proclaimed, continuing,

This unique medical discovery strengthens and tones the nerves.... It supplies to the body phosphorous in soluble form--a thing never before considered possible. Rexall Americanitis has accomplished wonderful results all over the country and its merits are now universally recognized.

The part about phosphorus was utterly meaningless; the product's real active ingredients appear to have been 15 percent alcohol and some chloroform, explaining the ad's tagline "Note how quickly that feeling of nervous strain disappears." Companion advertisements for the product were directed at "nervous, over-worked, and run-down women," noting that the product "acts directly on the nerves." (Yes, alcohol and chloroform will do that.) The ad for women continued,

Rexall Americanitis Elixir is the only remedy of its kind in existence. As its name implies, it's a specific for the peculiar exhausted nervous conditions resulting from the continuous rush and tension under which Americans live. This remedy fills an important gap in the line of medicines.
Other promotion in the early 1900s from the Bayer company touted its two recently invented compounds: Aspirin for fevers (that's worked out well over the years), and Heroin for cough (not so much). Both drugs had been created by the same chemist during the same period in 1897. Bayer's Aspirin found its way into nearly every home medicine cabinet, while Bayer's Heroin helped set the stage for a crippling epidemic of addiction, discussed more fully in chapter 20.

This was before the invention of the categories of controlled substances or prescription-only drugs, so any doctor could recommend any substance to any patient. Nor was a doctor even needed: such substances could be bought directly by the consumer, with no requirement or guarantee that any of them be either safe or effective. It wasn't until the Progressive Era at the start of the twentieth century that the nation began to wonder whether government should do something about this chaotic abundance of sometimes-toxic choices.

The nation's first attempt at drug regulation simply proposed that manufacturers should be required to label what was in their products, which would be helpful for people trying to limit their inadvertent intake of opioids, cocaine, or alcohol. As modest as the requirement was, like all attempts to regulate medications over the decades it was met with charges of government overreach encroaching on the rights of citizens. But cooler if still timid heads prevailed, and in 1906 Congress passed the first Pure Food and Drug Act, creating the Food and Drug Administration. This small step did nothing to ensure that any of these products worked, or even were safe: manufacturers just needed to state what was inside the bottle or tablet. The country still was not ready for something as modest as a law requiring that medicines not be poisonous; that didn't fall into place until over three decades later, in 1938 (see chapter 5). And then, for another quarter century after that, drugmakers still didn't have to prove that their products really worked.

That revolutionary concept was proposed in legislation introduced in 1961 by Senator Estes Kefauver, a Democrat from Tennessee. Along with other proposed laws that dealt with the high prices of medicines--a recurring theme in American history--he introduced the radical idea that a manufacturer should be required to show that its product helped patients before it could be sold or promoted. No other country required that; at the time, this idea was seen as far too liberal, and the initiative seemed headed for certain defeat. The proposed reforms were met with the usual objections, this time put forward by an increasingly powerful pharmaceutical industry: the new rules would impose excessive government control, limiting the rights of doctors to prescribe whatever they chose and of patients to ingest anything they wanted. Furthermore, the argument went, it would harm the capacity of drugmakers to discover new products.

A Golden Era for Drug Evaluation

In one of those accidents of history that no one saw coming, the early-1960s Kefauver amendments were implausibly rescued at the last minute by the thalidomide tragedy, in which thousands of babies worldwide were born with congenital defects caused by a drug their mothers took during pregnancy (see chapter 5).
Although a central goal of the Kefauver amendments was the containment of high drug prices and the thalidomide tragedy concerned drug safety, the birth defect debacle led to the passage of his legislative package and gave the government new powers in yet a third domain: medication effectiveness. The new 1962 law required a manufacturer to provide the FDA with credible evidence that a new product actually worked before it could be sold. Nothing like that had been put into place anywhere: it changed everything about how people think about and use medications, both in the U.S. and eventually around the world. This evidence would have to come from what the law defined as "well-conducted studies"; that usually meant randomized controlled trials (RCTs) in patients.

The logic behind the RCT is as powerful as it is simple. Many diseases wax and wane on their own. Enthusiastic doctors may attribute any improvement to something they had done, and patients often perceive benefit from ingesting compounds with no biologic effect at all. The RCT handles these problems elegantly through a remarkably simple approach: take a large group of patients with a given disease and randomly allocate some to get the drug being studied, and some to get a comparison treatment--often an inert substance, the placebo. The approach makes vivid use of the concept "all things being equal." If a large group is assigned to get treatment A or treatment B by the flip of a coin (or a computer random number generator), all things other than the treatment really are rendered equal across the two groups. It's also important that neither the patient nor the doctor know who got what. That key feature has traditionally been known as "double-blinding," but in deference to visually impaired people and their advocates, some now prefer the term "double-masking." At the end of the trial the data are unblinded (unmasked?); if the randomization worked well and the sample size is good enough to make a chance finding unlikely, then any differences that are seen in the group that got the drug are extremely likely to have been caused by the medication and nothing else.

This simple approach, which wasn't in routine use until after World War II, utterly transformed our ability to know what works and what doesn't in medicine. For the same reasons, it also proved useful for understanding the frequency and severity of side effects, as described in chapter 6: patients sometimes develop symptoms they attribute to placebos, known as the "nocebo" effect, from the Latin root for "noxious." Randomization and double-blinding/masking help take care of that as well.

The RCT became the mainstay of drug evaluation relatively recently, in the second half of the twentieth century. Before that, respected authorities would decide what drugs worked based primarily on their own clinical experience (often an unreliable indicator), or assumptions about mechanisms of physiology and pharmacology predicting which drugs ought to work. For hundreds of years, medicine was under the sway of the utterly wrong precepts of the second-century Greek physician Galen, who taught that the body operated through a system of four "humors": black bile, yellow bile, blood, and phlegm; these had to be balanced to maintain or restore health. That gave us treatments like bloodletting and purgatives that created many side effects and occasionally death, but very little in the way of actual curing.
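[Editor's illustration, not from the book.] To make the "all things being equal" logic of randomization described above concrete, here is a minimal simulation sketch; the cohort size, effect sizes, and variable names are all invented for illustration. It randomly assigns a made-up group of patients to a drug or placebo arm and shows that characteristics nobody selected on (baseline severity, age) come out balanced between the arms, so the difference in outcomes can reasonably be credited to the treatment.

# Illustrative sketch only: hypothetical numbers, not data from any real trial.
import random
import statistics

random.seed(0)

def simulate_trial(n_patients=2000, drug_effect=-12.0, placebo_effect=-4.0):
    """Toy two-arm RCT on an invented severity score (lower is better)."""
    patients = []
    for _ in range(n_patients):
        patients.append({
            "baseline": random.gauss(150, 15),  # severity before treatment
            "age": random.gauss(60, 10),        # a covariate no one selected on
        })

    # Randomization: a coin flip decides the arm, so baseline severity and age
    # end up balanced across the two groups on average.
    for p in patients:
        p["arm"] = "drug" if random.random() < 0.5 else "placebo"
        effect = drug_effect if p["arm"] == "drug" else placebo_effect
        p["outcome"] = p["baseline"] + effect + random.gauss(0, 10)  # natural waxing and waning

    def mean(arm, key):
        return statistics.mean(p[key] for p in patients if p["arm"] == arm)

    for key in ("baseline", "age"):
        print(f"{key:8s}  drug {mean('drug', key):6.1f}   placebo {mean('placebo', key):6.1f}")
    print(f"change    drug {mean('drug', 'outcome') - mean('drug', 'baseline'):6.1f}"
          f"   placebo {mean('placebo', 'outcome') - mean('placebo', 'baseline'):6.1f}")

simulate_trial()

Run as written, the two arms show nearly identical average baseline severity and age, while the simulated drug arm improves roughly three times as much as the placebo arm: the balance is what the coin flip buys, and the outcome gap is the kind of difference that blinding and an adequate sample size let investigators attribute to the drug rather than to chance or enthusiasm.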
That is why the Kefauver amendments of 1962 were so important: they put the full force of law behind the potent idea of science-based evaluation through RCTs, empowering the government to mandate rigorous scientific assessment of new drugs for the first time. Comparisons with other progressive turning points of that era are hard to ignore: the right of every citizen to cast a vote is a good idea, but withers without congressional action that makes it a crime to deny it; the right of a woman to control her own fertility is just a theoretical construct without a Supreme Court decision that transforms it into a legal right. But just as with the Voting Rights Act that was passed a few years after the Kefauver amendments, and the Roe v. Wade Supreme Court decision of several years after that, the foes of these reforms didn't just accept defeat and go away quietly. Instead, in each case they spent the ensuing years laying the judicial and legislative foundations to undo each advance.

To erode each of these reforms, the opposition built a well-funded, persistent, and highly organized counterattack. They elected sympathetic lawmakers and then applied relentless political and financial pressure on them; they amassed huge sums of money and deployed it in powerful lobbying efforts in Washington and the states; they designed creative legal onslaughts and employed novel constitutional arguments, presenting carefully chosen cases to sympathetic conservative judges appointed to key positions over years of disciplined political effort. This is what has happened to medication policy as well.

To put the degradation of our prescription drug-related policies into context, it helps to look at the legal and ideological arguments that were used to decimate those other now-crippled reforms. It wouldn't have gone over well to propose rolling back the Voting Rights Act to make it harder for Black people to vote. But arguments about states' rights and federal government overreach have the patina of judicial logic to them, just as the same doctrines were used to justify preserving slavery and segregation for so many years. Disenfranchisement was cloaked in the garb of redistricting, limitations on voting procedures, and other administrative maneuvers. Similarly, removing protections for abortion rights was redefined as a more faithful reading of the Constitution, protection of life and religious freedom, and restoring these decisions to the states, a level of government said to be closer to the people. We've seen where those arguments have gotten us. Similar retrenchments are happening to the legal structures that enabled the government to protect us from poorly effective prescription drugs, but those changes have been far less visible to the public.

The Compromise Begins

Criticism of our approach to drug assessment has come from all parts of the ideological spectrum. One of the most important transformations of the FDA's evaluation approach started out as a well-meaning program to help some of the nation's most vulnerable patients. The agency's fall from grace began a while ago, with the best of intentions.

Tony Fauci had a problem. The epidemic was advancing week by week, its death toll rising daily. Patients and potential future victims were panicking. Why wasn't the government doing more to address the crisis?
The federal agency he headed, the National Institute of Allergy and Infectious Diseases (NIAID), was tasked with leading the country's research agenda for all communicable diseases; why did it seem to be dragging its feet so badly on this? And why were his colleagues at the FDA taking so long to approve new treatments that could save hundreds of thousands of people right now--people at risk of dying of this new fatal disease? Couldn't NIAID fund more research, and couldn't the FDA approve promising-looking new treatments faster to get them to the public? In the face of an unprecedented epidemic, many argued for the need to simply launch new treatments that might hold some promise and get them out there for patients to try, instead of watching so many people die of a lethal new disease while federal agencies slogged through their obsessive work as usual.

The angst, panic, and outrage weren't over Covid in 2020. They were about AIDS in the late 1980s--another new and potentially fatal infectious disease whose cause and transmission were not yet understood. A much younger Dr. Fauci was then at the start rather than the end of his forty-year career leading NIAID, his head still sporting a dense shock of black hair. Back then, as in 2020, Fauci was demonized as an unfeeling federal bureaucrat murdering innocent people. Larry Kramer, an outspoken leader of the growing AIDS activist movement, said as much. But instead of just focusing on the science and ignoring the public assaults on his motivation and character, young Dr. Fauci took the opposite approach. He met with his fiercest critics to understand their concerns about the government's rigidity and slowness in combating the AIDS epidemic, and to hear their demands about balancing rigorous review of new treatments with the urgent need to deal with a public health emergency. In the era of AIDS, he came to be seen as an ally trying to move the lumbering bureaucracy forward to help get new drugs out to the public.

The AIDS protesters of the late 1980s were enraged by what they saw as the sluggishness of Dr. Fauci's FDA colleagues in reviewing and approving promising new drugs. Their friends, their lovers, their whole community were dying. Even if new medications were developed to reduce the burden of this plague, they feared that the agency's cumbersome review process--a once-valued legacy of the Kefauver regulations--meant that many more people would succumb while the lengthy evaluation process trudged on. The crisis was personal and very urgent.

In October 1988, FDA employees coming to work were astonished to find the lobby of their headquarters in Rockville, Maryland, occupied by over a thousand furious AIDS protesters sitting and waving placards; one sign bore a red-stained palm print and read "THE GOVERNMENT HAS BLOOD ON ITS HANDS--ONE AIDS DEATH EVERY HALF HOUR." Another read "FDA--UNSAFE AT ANY DOSAGE." The activists chanted, "Hey, hey, FDA! How many people did you kill today?" and covered the lobby floor with a red liquid they said was blood. This was especially distressing for the FDA doctors, many of whom had chosen work at a government agency over the in-your-face stress of patient-facing jobs. The protesters used a clever strategy: contact with blood was known to be an effective means of transmitting AIDS, even though walking through it with shoes on was not a clear risk factor, especially if the liquid wasn't really infected blood. The demonstrators were right that the FDA was acting slowly in approving drugs for AIDS.
It was applying its standard meticulous review process, mandated over twenty-five years earlier by the 1962 law requiring randomized controlled trials, which could take a year or much more to evaluate new medicines, even those with particular promise. Cancer patients joined the AIDS activists and argued that, for them as well, new treatments that could reduce their own risk of imminent death seemed to be taking forever for the FDA to review as it lumbered through its seemingly endless assessment processes.

Until the AIDS crisis, the FDA appeared to be at the top of its game in evaluating new medications. After 1962, the world took admiring note of that productive marriage among science, government, and the pharmaceutical industry (probably better to call it a thruple). The nation's rigorous but fair drug evaluation system became the envy of the world as governments all over the globe sent representatives to the U.S. to study and then replicate its approach.

But that product of the progressive legislative era of the 1960s didn't fare well in the more combative 1980s. Ronald Reagan had been elected at the start of the decade on a platform of reducing federal involvement in the life of the nation. His first inaugural speech in 1981 set the tone for regulators' status in the coming decade when he announced that government could not solve the nation's problems because the government was the problem. As dark as it was, his formulation was gentler than that of Grover Norquist, the virulent anti-tax activist who said his goal was to "shrink government down to a size where you can drown it in a bathtub." An increasingly powerful Republican presence in Congress was eager to put its legislative muscle behind this vision. The AIDS problem may not have been too salient for President Reagan, who did not even utter the name of the disease in public during the first years of his presidency as the epidemic was growing and destroying more lives each day.

The fiscal stringencies that flowed from this conservative worldview took their toll on the nation's capacity to evaluate and approve medications, among many other things. Budgets for federal agencies were constrained, including that of the FDA. Beyond its culture of careful, sometimes obsessive scientific review (which its critics described as mere sluggishness), the agency was truly hampered by inadequate staffing. The 1962 Kefauver legislation had required it to apply an unprecedented level of scientific scrutiny to new drugs, and despite complaints by the drug industry that such evaluation would limit innovation, the productivity of the biomedical enterprise was increasing sharply year by year. A growing budget for the National Institutes of Health and new drug discoveries (many of them taxpayer-funded) laid the groundwork for more and more new medicines. And though the budget Congress allotted to the FDA increased, it didn't grow apace with this explosion of therapeutic discovery. A mandated follow-up program to evaluate scores of drugs that had been approved before the new efficacy criteria were in force had dragged on for two decades--which was fine with manufacturers, since the often-useless products couldn't be taken off the market until they were assessed.

By the late 1980s, when George H. W. Bush took the helm from Reagan and continued his anti-big government policies--enshrined in his menacing and ultimately self-destructive slogan "Read my lips, no new taxes!"--the FDA budget was simply inadequate to support enough scientists to review all those new drug applications efficiently. But such stinginess fit in well with the conservative ideology that was becoming more popular in Washington during those years. Conservative economist Milton Friedman had quipped that if the federal government were put in charge of the Sahara Desert, within five years there would be a shortage of sand. The second Bush president, George W., liked to call himself "the Decider," but when it came to government policies, he declared that "we don't believe in planners and deciders making decisions on behalf of Americans." In this climate, the most logical solution to the FDA's inability to get promising discoveries onto the market more quickly--giving it an adequate budget to hire enough scientists to review new drugs--was a political nonstarter.

Using Surrogates to Give Birth to Medicines

One regulatory response to the AIDS crisis was a new FDA program called Accelerated Approval, eventually made into law in 1992. It began as a plausible and well-intentioned attempt to address the concerns of the activists criticizing the FDA's slowness. A new social contract was offered: instead of the previous requirement of two or more randomized trials showing that a new drug improved patients' health, an innovative new pathway was created. For a serious condition with no satisfactory treatment, a pharmaceutical manufacturer could now get a product approved if the drug produced an encouraging change in a "surrogate measure"--a lab test such as an assessment of the viral load in the blood of an AIDS patient, or an improvement in an imaging study defining the size of a cancer patient's tumor--even if these weren't the same as showing an improvement in patients' health or survival. All that would be needed to win accelerated approval was to change a laboratory test or a scan result in a way that would be "reasonably expected" to predict future clinical improvement. The key second part of this social contract was that the manufacturer would then have to conduct follow-up studies once the drug was in use to measure actual clinical outcomes, such as how well patients functioned or how long they lived, to prove that the new treatment was truly beneficial.

So far, so good. But the second part of that social contract--the follow-up confirmatory studies--was often neglected or delayed. In the years that followed, under relentless pressure from the drug industry and its supporters in Congress, the FDA allowed this surrogate-measure accelerated approval system to expand so widely that it has begun to fray the agency's once-legendary drug review system. The current approach is similar to a car salesman whose dealership is located at the top of a hill suggesting that a prospective buyer take a vehicle out for a test drive and note its quick pickup and the strength of its engine; at the bottom of the hill, the salesman kindly offers to drive it back up to the office--the part of the trip on which those qualities aren't so evident.

Much of the work our group has done on FDA policies has been spearheaded by Aaron Kesselheim, a brilliant physician-lawyer who began working with me when he was still a resident; he now leads that effort in a very productive group we call the Program On Regulation, Therapeutics, And Law, or PORTAL.
In 2016, Congress passed the "21st Century Cures Act" that many of us worried might further loosen approval standards; beyond accelerated approval, it created another expedited review process for so-called breakthrough drugs that seemed to many of us to be more hype than science. The term leads prescribers and patients to think that these are major new developments, but the designation just means a new drug is unusual--not that it actually works. For pharmaceutical companies, a lower evidentiary bar means shorter and less costly clinical trials, hastening their ability to get a drug to market much sooner. It also sharply raised the prospects that a new product would be approved: it's much easier to show a change in a lab test or a scan than to prove eventual patient benefit. A product could thus have more time to generate revenues before its patent expired, whether it really worked or not.

The drugmakers' efforts were abetted by Congress and an administration marinating in funds from the pharmaceutical lobby, one of the most well-endowed pressure groups in Washington. Backup support came from vociferous concerned patient groups: many were utterly sincere, many were funded by those same companies, and many were both. With impressive synergy, the industry's enormous financial clout in Congress swayed key legislation; the pressure was transmitted in parallel to FDA officials through influence in the executive branch, whichever party was in power. More than half of new drugs are now evaluated through one or more expedited pathways that use lower standards of evidence. These developments laid the groundwork for the 2021 approval of Aduhelm, the intravenous treatment for Alzheimer's disease that didn't work. A full autopsy of that drug is presented in the next chapter; for now, we'll consider more closely how the well-intentioned AIDS-era accelerated approval system was captured by special interests who have used it to weaken the evidence that doctors use to prescribe drugs, often the ones we give to our sickest patients.

Lowering the Bar

Even as the AIDS epidemic waned, the FDA was becoming more and more flexible about what surrogate measures might be "reasonably expected" to predict future benefit for an unproven drug. The accelerated approval pathway whizzed way past its original goal of green-lighting promising drugs for untreatable diseases and spun out of orbit, allowing companies to use lab tests, scans, or other findings of dubious relevance as a free pass to early marketing of treatments for many diseases.

This was not a new idea. Goodhart's law is named after a British economist who observed that when a measure becomes a policy target, it stops being a good measure because people learn how to game it. In her book Counting, Deborah Stone provides other telling examples: Uzbek cotton pickers paid by weight soaked their harvest in water before bringing it to market; Soviet factories that were required to produce a certain number of meters of fabric each week adjusted their looms to make long narrow strips; railroad companies paid on the basis of how many miles of track they put down laid it out in winding paths. Daniel Kahneman wisely observed that people prefer to replace hard questions with easy ones. And we all know the classroom distortions that occur when educators start "teaching to the test." So we should not be surprised that the FDA's growing use of surrogate measures incentivizes the use of assessments that don't require showing that a new drug produces clear patient benefits.
Oncology has been an especially fertile field for such criterion-bending. In 2022, I was asked by JAMA (the publication formerly known as the Journal of the American Medical Association) to write a commentary on a study that examined cancer drugs approved on the basis of surrogate measures (there were many) and how many were then subjected to the required follow-up analyses to measure actual patient benefit (there were fewer). In virtually all cases, the medication remained in use even if the follow-up studies were not done, and even--amazingly--if they were performed and failed to confirm a benefit. I titled the article "A Finger Pointing at the Moon," referring to a legend of the Buddha trying to teach a lesson to his students by showing them the moon. But the acolytes instead gathered in a circle and stared intently at his finger, totally missing the idea. The finger was the surrogate marker, of course, and the moon the more distant goal of making patients better.

The use of surrogate measures has gone well past cancer and now constitutes a "Get Out of Jail Free" card for manufacturers of drugs for muscular dystrophy, ALS, Alzheimer's disease, and diabetes, among other conditions. All too often, companies take their accelerated approval, rush ahead marketing the product, and then don't get around to completing the mandated studies the law requires to determine whether the drug really helps patients. In a 2024 paper in JAMA, my PORTAL colleagues reported on over a hundred cases of cancer drugs granted accelerated approval based on surrogate measures. On follow-up, fewer than half had been shown to produce actual patient benefit, though they generally remained in use. Not much rethinking going on there.

Clinical and Ideological Justifications of Lower Standards

There is a legitimate policy argument here: We don't want to release a new drug into routine use before it's been adequately studied, but we also don't want to make that assessment so long and cumbersome that it keeps effective treatments from the patients who need them. How much evidence of effectiveness is enough for approval, and how much is too little? How much is too much? The AIDS activists' delay-causes-death case often comes up when drug manufacturers and patient groups advocate for quicker approvals, and sometimes it has merit. Hundreds of millions of dollars can also ride on this timing; a company that may have made a large investment in developing a drug (or didn't) cannot begin to make a profit until it can sell product. But arguments about speeding drugs to market don't go over well if they are justified primarily in terms of increasing industry revenue. It works much better to cite a clinical rationale that involves patients (see chapter 3). And sometimes those arguments make sense.

An important and laudable movement took hold in the 1990s in health care in general to rely more on well-collected evidence to guide everything we do in medicine. Before long, two clever but snarky satirical pieces appeared in the respected publication BMJ (formerly the British Medical Journal). BMJ has a time-honored tradition of running humorous articles in its Christmas issue each year, and two of these provide provocative challenges to our understanding of the role of randomized trials in medicine. The first purported to be a systematic review of all published clinical trials on the effectiveness of parachutes used when jumping from airplanes "in preventing major trauma related to gravitational challenge."
Mimicking a critique often made by advocates of evidence-based medicine, the authors bemoaned the fact that even though parachutes are a very commonly used intervention, there was not a single published RCT documenting their effectiveness. They wryly concluded:

Advocates of evidence-based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence-based medicine organised and participated in a double blind, randomized, placebo controlled, crossover trial of the parachute.

My Rethink conclusion: you don't always need an RCT to know if something works.

A team of American doctors returned to the topic in the BMJ Christmas issue several years later. They actually conducted such a controlled trial, enrolling twenty-three volunteers to jump from a small plane or helicopter after being randomized to wear either a parachute or an empty backpack. Surprisingly, the authors reported no difference in the rate of injuries between the two groups. Illustrating the need to evaluate a paper's methodology carefully before inferring much about its findings, it took a dive into the study's design to figure out that the trial was conducted with the aircraft stationary on the ground. Rethink lesson: you have to read the details of any clinical trial carefully to understand how relevant its findings might be to actual practice.

Hunter and the Hunted

A more serious and compelling example of the RCT debate concerned the treatment of Hunter syndrome, a rare and devastating genetic disorder in which children can't make a key enzyme to break down large sugar molecules. Its victims suffer from delayed growth, hearing loss, and declining brain function, and die young. In the 1990s, researchers discovered how to partially replace the missing enzymes in Hunter syndrome and diseases like it. A new product, Elaprase, showed promise in early studies. Nothing like it had ever been seen for this condition; young patients given the new treatment in early evaluations did much better than expected. But the FDA declined to approve the new drug until those promising findings were confirmed in a yearlong randomized controlled trial in which some of the affected children would be assigned to get a placebo. Those requirements led to a long delay in its availability nationwide, and to further deterioration in the kids randomly allocated to the control group.

In a compelling 2012 opinion piece in National Affairs magazine, a doctor pointed to that as an example of how excessive regulation kept a lifesaving medication out of the hands of Americans who needed it, for no good reason. That was true, even though it was an uncommon error in FDA judgment. The exception doesn't prove the rule; that's why it's called an exception. But it put flesh on the drug industry's favored argument that the FDA's too-strict evidence requirements were depriving the public of medicines that could save lives. These clinical arguments were used to strengthen the case made by conservatives and drugmakers that the FDA was full of sluggish bureaucrats whose obsessiveness and stubbornness made the review process so slow and complicated that Americans couldn't get the drugs they need. But nothing could be further from the truth; most FDA reviewers are sharp, committed scientists who usually do their work astutely; when major gaffes occur, they are usually committed by FDA leadership, who sometimes have other priorities.
The FDA's overall efficiency has been demonstrated clearly in studies by our group led by Kesselheim, and by our Yale colleague Joe Ross and his collaborators. In a series of detailed papers, we've all found that the data show convincingly that the FDA is as fast on average as any drug-regulatory body in the world, often approving new drugs before Europeans or Canadians have access to them. That period of review is now down to six months for urgent decisions, although the agency can move even faster in emergency situations: the first two vaccines against Covid-19 were approved remarkably quickly after the agency's receipt of their initial data on efficacy and safety. But the overly obsessive bureaucrat meme was a durable one, and is often used as a seemingly patient-friendly excuse to encourage Congress to lower the FDA's approval standards.

The author of the Hunter syndrome article in National Affairs was Dr. Scott Gottlieb, whom President Trump named FDA commissioner five years later. Trump then fired him in 2019 after just two years on the job, after Gottlieb tried to crack down on tobacco industry-backed vaping companies over their promoting their products to youngsters. Gottlieb landed on his feet, though: he joined the corporate board of Pfizer, where he earned $553,000 in 2022, and became a partner in New Enterprise Associates, one of the world's largest venture capital firms, advising them on new drug development and securing FDA approval. He also became a partner at the biotech firm Illumina, where he earns over $420,000 annually. More on that golden revolving door later.

Despite the very rare exceptions to the need for randomized trials, the bold Kefauver requirement that clinical effectiveness had to be shown before a drug could be approved has sometimes been watered down to an FDA message that in effect says this for many products: "You can market your drug if it makes a lab test look better in a short study, compared to a placebo. We won't be on your case too much about those confirmatory follow-up studies." Even apart from the accelerated approval pathway, this combination of surrogate outcomes and placebo controls means that a new drug for diabetes, for example, can be approved on the basis of a twelve-week trial showing it lowers blood sugar more than no treatment. But a main reason we want to lower blood sugar in people with diabetes is to prevent the damaging outcomes that patients and doctors really care about: heart attack, stroke, kidney failure, blindness, nerve damage. Yet demonstrating an effect on these important clinical outcomes isn't required for the FDA to approve a new diabetes drug. What matters for approval is lowering the blood sugar, even though we now know that some widely used diabetes drugs like Januvia (sitagliptin) do only that, while others like Jardiance (empagliflozin) or Ozempic (semaglutide) lower blood sugar and prevent heart attacks and kidney damage--a huge difference.

"Do Your Own Research"

Ideally, the approval of a new drug should be exclusively the province of science, but for a half-trillion-dollar-a-year industry, it couldn't possibly remain so. The same libertarian posture of the earlier twentieth century--the spirit that opposed government's right to require accurate labeling and prevent toxicity--lives on in the insistence by some advocates on the far right that the government shouldn't even be in the business of determining whether a drug works or not.
Physicians and patients could determine which drugs work best and which don't, their argument goes, through decisions reflecting their individual clinical experiences. This is such a bad idea that it's hard to know where to start in debunking it. Here are some basics: Some of the detailed data the FDA receives from a drug's manufacturer is considered the company's private property and is kept secret, so any outside reviewer isn't playing with a full deck. Furthermore, evaluating the results of a clinical trial can be tricky: Were the study groups truly comparable at the start of the trial? Were the randomization and blinding done appropriately? What statistical methods were used to compare outcomes? If the differences were statistically significant, were they also large enough to be clinically meaningful? Were the patients studied comparable to people a doctor is treating, or (as often occurs) healthier and younger? If the comparison drug was a placebo, how does the new drug stack up against all the evidence on other relevant treatment choices out there (perhaps including nondrug options) that weren't in the trial?

Beyond all that, the issue of selective publication of favorable results has bedeviled all of us who look to the peer-reviewed medical literature to guide our decisions about how well drugs work, a problem several researchers have documented. A worrisome analysis of this issue was published in the New England Journal of Medicine by Erick Turner, a psychiatrist who had spent several years at the FDA reviewing new drug applications. While there, he noticed that the more favorable studies that crossed his desk were more likely to end up being published in medical journals than the less favorable ones. Once he left the agency, he and his colleagues followed up on the concern that drugmakers who sponsor studies have in the past published the results they liked and spent far less effort to get non-favorable trial findings into the medical literature. Turner et al. reviewed the raw data on seventy-four clinical trials submitted to the FDA evaluating twelve different antidepressants and found that almost a third of them had never been published. Virtually all those that depicted favorable outcomes made it into medical journals; but of the studies with negative or questionable results, nearly all were never published, or appeared with a positive spin on the results. How are we clinicians or our patients supposed to independently rethink that, if the totality of the evidence never sees the light of day? Put differently, Turner's study found that if you looked at the then-extant medical literature you'd find that 91 percent of published trials of antidepressants found the drugs were effective; by contrast, only about half of all the original studies submitted to the FDA showed the medications worked.

In response to problems like these, reforms were passed in 2007 to require public disclosure of plans for all clinical trials before they are launched. The less good news is that disclosure of their results is still far from complete (see chapter 9). Still, many libertarians argue that Americans should just be able to do their own research to decide which drugs work and which don't. But going over the terabytes of data the FDA receives for a new drug submission takes large teams of smart, dedicated, specialized scientists months to get right.
Over many years, we've found how hard it is to do this work well in our educational outreach programs, where we try to synthesize such data to guide doctors toward better prescribing decisions (see chapter 17). So how could it make sense to let individual freedom decide what drugs are available for use? Surely no responsible government scientist would advocate for that, right?

One odd presentation of this anti-big government perspective was offered in an op-ed in the Wall Street Journal that argued for an approach in which the FDA wouldn't assess the effectiveness of new drugs. Instead, the author proposed, the agency should just make sure new products aren't terribly unsafe and then release them to the magic of the marketplace, so doctors and patients could figure out which ones work and which don't. This approach is reminiscent of a decision rule used in the Albigensian Crusade in thirteenth-century France: when an invading army was having trouble differentiating loyal Catholics from heretics among the townspeople, the monk leading the charge is said to have commanded, "Novit enim Dominus qui sunt eius," which roughly translates to "Kill them all and let God sort 'em out." That strange WSJ op-ed was written by Dr. Andrew von Eschenbach, appointed by George W. Bush in 2005 as FDA commissioner. While at the FDA, Dr. von Eschenbach, whose clinical expertise was as a prostate surgeon, prolonged the agency's yearslong refusal to approve greater access to the morning-after contraceptive pill despite its proven safety and effectiveness. Apparently he felt there are some issues the marketplace shouldn't be allowed to decide on its own.

The same motif of laissez-faire, caveat emptor has inspired a nationwide "right to try" movement for unapproved medications, pursued aggressively in several states by conservative legislators seeking to enable patients to take unproven drugs. But this is a solution to a problem that doesn't really exist. For many years, to avoid being in the middle of this unwinnable debate, the FDA has allowed any physician to ask a company for access to an investigational drug that hasn't been approved by the FDA. The agency itself approves about 99 percent of such requests; when there is an access problem, it's usually the company that resists making the product available. But if a drug hasn't been determined to work, should such liberated patients expect their health insurer, or a government program, to pay for it? And if there is a dangerous side effect, would they expect society to cover the costs of caring for those consequences as well? When my colleagues and I wrote a paper for the New England Journal of Medicine about this issue, we used a common term to describe the policy: "compassionate use." The editors wisely made us change that to "expanded access," pointing out that there's not necessarily anything compassionate about helping people take an untested drug that may not work and could hurt them.

A Legacy of the AIDS Era

It's now well over three decades since the fraught days of those AIDS demonstrations in the FDA lobby in the late 1980s. Thanks to a wide variety of very effective drugs to treat HIV, many patients with that diagnosis are now living well into old age. A medical student working with me has even studied the interactions between drugs for geriatric conditions and the medications that aging HIV-positive patients take--a wonderful outcome few of us saw coming in those dark times.
But world-changing events like the AIDS epidemic cast a long shadow, and interest groups of all stripes understood that a crisis should never go to waste. Changes in FDA policy put in place during the AIDS era--and extended several times in the years since--mean that now a large proportion of new drugs are rushed to market under one or another of the FDA's "expedited pathways": accelerated approval, fast-track review, breakthrough designation, priority review, orphan drug status, and others. In one recent year, fully 65 percent of new drugs were approved on one or more of these pathways. Drugmakers have become adept at linking up with patient advocacy groups to urge reliance on these tentative measures. Yet many drugs that received accelerated approval underwent "confirmatory trials" that were reported years later using the very same surrogate measures. And many of these products turn out not to work well--or at all--when subjected to more careful scrutiny. Worse, other speedily approved drugs never undergo the follow-up testing that was mandated following those quick approvals. One PORTAL paper found that only a fifth of new cancer drugs approved on the accelerated pathway were shown to prolong patients' overall survival. But sick people keep on taking them, and insurers and government programs are required to pay for them.

For a company, foot-dragging on confirmatory studies makes sense: it can continue to charge full price for a drug approved on preliminary data; problematic follow-up trials can only derail that gravy train. We've allowed these evidentiary limitations to be omitted from the drug's official descriptions or advertising or price, so patients and doctors have no way of knowing about the problem--a lucrative omission we continue to permit. It's widely known that a vampire cannot enter your house unless you invite him to cross your threshold. In late 2022, the inspector general of the Department of Health and Human Services issued a critical report noting that for more than a third of drugs granted accelerated approval, a follow-up study had not been submitted on time, citing "ongoing concerns that sponsors of drug applications granted accelerated approval fail to complete their statutorily required confirmatory trials on schedule, and concerns that FDA's oversight of the trials is lax." The FDA announced new policies in 2023 to address these problems, but it isn't clear how effectively they will be implemented.

The electoral victory of Donald Trump, combined with Republican ascendancy in Congress, makes it far less likely that we will upgrade our evaluation and regulation of drugs anytime soon. Evidence for this can be found in Trump's enthusiastic advocacy of Robert F. Kennedy Jr. to head the federal Department of Health and Human Services, which oversees the FDA--a man with little scientific background who promoted drugs that don't work, such as ivermectin and hydroxychloroquine for Covid, and vehemently disputed the safety and usefulness of vaccines. Before November 2024, many of us thought the pressing policy goal would be to fine-tune the nation's drug review process. Now we are just hoping to save it.

Gregg Gonsalves was one of the AIDS activists in the demonstrations against the pace of the FDA's approval process in the late 1980s, when he feared that slowness could put his own life at risk.
He's now on the faculty at Yale, and in 2023 wrote this:

It thus deeply pains me to see patient groups today--not for AIDS, but for a host of other diseases--distort what we were fighting for, and use it for counter-productive purposes. Sometimes this stems from the sheer terror and desperation that I know so well, but it often emerges from thoughtlessness and outright collusion with drug companies. Their end goal appears to be to dismantle the FDA as we know it. As someone who fought alongside so many to change the way we develop and regulate drugs in the USA--including the role of the FDA--and who is only alive because of the fights that we won, I feel certain that these groups are making a terrible mistake.

Excerpted from Rethinking Medications: Truth, Power, and the Drugs You Take by Jerry Avorn. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.