Do We Need Pfizer?

A plan to take on Big Pharma — and create and distribute lifesaving drugs.

Illustrations by Michael DeForge

Around the time of World War I, the Metropolitan Life Insurance Company estimated that a ten-year-old with diabetes could expect to live a mere two or three years more; by World War II, however, it was predicting that such a child could expect to live another forty-five years. A death sentence had been lifted by a team of researchers at the University of Toronto who, in the early 1920s, developed insulin.

Yet according to today’s dogma on drug discovery, this should never have happened: we’re told it is only the promise of prodigious profits — guaranteed by patent protection — that incentivizes the discovery of lifesaving drugs. But two of the key developers of insulin — Frederick Banting and John Macleod — declined, for ethical reasons, to even have their names on the first insulin patent, as noted by Rob Hegele in The Lancet Diabetes & Endocrinology last May. The researchers who did take out the patent did so only to sell it back to their university for $1, later writing that they intended to ensure that “no one could secure a profitable monopoly.”

Now, let’s move forward a hundred years or so. Patent monopolies are the source of windfall profits and profound misery. Insulin is modified again and again over the years, and the new versions are patented and sold at high premiums, at the expense of those who need them. During years when Alex Azar — now Trump’s nominee to lead the Department of Health and Human Services — headed the US branch of drug giant Eli Lilly, the company jacked up the price of one modified insulin product by more than 300 percent. And while Azar amassed millions for his service, others did less well — like Shane Patrick Boyle, a young man who couldn’t afford his insulin. “As he waited to fill his expensive prescription, he rationed what he had left, stretching it by taking smaller doses,” Tonic reported. The acid levels in Boyle’s blood began to rise. Then, he died.

Today, there is an overwhelming mandate for change in our pharmaceutical system. There is even a broad consensus that government should use its bargaining power to leverage down drug prices, through Medicare today — as Trump once promised to do during the campaign — or with a single-payer system tomorrow, as most other nations already do.

That would be a major step forward, but there are reasons to think it wouldn’t be enough. First, even if we began negotiating drug prices at the national level, other flaws in the way we develop new drugs would persist. We have a privatized system of drug R&D that has warped research priorities toward the most profitable drugs (and not necessarily the most needed ones), and distorted the scientific record through conflicts of interest. Second, negotiations have limits: they can “fail” because a firm with a drug monopoly is the final arbiter on price and can walk away from the negotiating table if it doesn’t get the price it wants — leaving us without important medicines. And third, even if we were to achieve much lower drug prices through negotiations, the American health care financing system — in which many are uninsured, and even the insured often can’t afford out-of-pocket payments — would continue to impede access to medicines.

The question is not merely whether we can lower drug costs (we obviously can); it is whether we can envision a system that would actually be superior in the race to discover new and safer therapies — to achieve the next heroic medical advance like insulin — while, at the same time, creating a right to medicine for all.

Profitable Results

In January, the Wall Street Journal reported that drug giant Pfizer had given up trying to find new treatments for Alzheimer’s and Parkinson’s, two terrible scourges of the modern age. There wasn’t a thing the public could do about it. Lowering drug prices wouldn’t change this — it is the reality of a privatized R&D system where business considerations structure research priorities.

Our current model of drug development goes something like this. First, scientists — predominantly supported by the publicly funded National Institutes of Health (NIH) — spend their careers uncovering the basic mechanisms of health and disease. They illuminate the complex cascade of molecular events that produce disease within cells, tissues, and organs, and identify targets for intervention — foci where a particular molecule, shoved into the right place in the right patient, might stop (or at least slow) the grinding gears of pathology.

Next, a drug is developed to hit one of those targets (though the process can also be more serendipitous). Sometimes this product comes directly out of NIH-funded research. There was a time when agents developed in this manner could not be patented and effectively became public goods. That changed in 1980 with the passage of the Bayh-Dole Act, a law that permitted researchers to patent their government-supported work products and sell them to industry.

The modern biopharmaceutical age was born. As Howard Markel wrote in the New England Journal of Medicine, the law is “beloved by the biotechnology and investment communities,” and by the shareholders who profited from the shift.

Yet even drugs that haven’t directly emerged from federally funded research rest on the large body of basic research funded by the NIH. Either way, a new molecule is born and then, of course, patented. If the molecule is promising, the firm will test it on animals, and then — in three stages of clinical trials of increasing size — on humans. If approved by the Food and Drug Administration, the drug can be brought to market, where it is typically heavily promoted, both to doctors and the public. For their investment in developing the agents and testing them, we give drug firms a period of marketing exclusivity that allows them to price drugs at effectively whatever they wish.

The most familiar way this system fails us is that it gives Big Pharma extortionary pricing power. Patent monopolies allow drug firms to price drugs as high as “the market will bear.” Combined with our lack of universal health care, the harms of this are obvious: according to a 2016 poll by the Kaiser Family Foundation, one in five Americans don’t fill prescriptions because of cost, while one in eight split their pills in half or omit doses of medicine to save money.

What’s more, the system is less innovative than its proponents claim. A drug can be obscenely profitable whether or not it actually works any better than existing products, meaning that much of the money spent on R&D could have been better spent. In 2004, Marcia Angell, former editor of the New England Journal of Medicine, excoriated the drug industry for pumping dollars into developing “their own versions of blockbuster drugs to cut into a market that has already proved both lucrative and expandable.”

Why plunge money into the risky business of developing a novel treatment at all? In one infamous case, AstraZeneca developed the heartburn drug Nexium, a trivial and clinically insignificant modification of one of its existing heartburn drugs, which was coming off patent and would soon be sold as a generic. With enough marketing, the replacement became a new blockbuster: the firm proceeded to take in nearly $50 billion in revenue on Nexium from 2006 to 2016 for work that yielded zero social value.

Notwithstanding some major advances, research confirms that most drugs developed by industry today fail to provide gains over the medications we already have. For instance, a 2016 study in Healthcare Policy by Joel Lexchin of York University found that less than 10 percent of a sample of drugs approved in Canada between 1997 and 2012 could be considered “innovative,” defined as representing a real advance in therapy.

The final problem with the drug-development status quo is that it distorts science. Leaving drug testing — i.e., human clinical trials, the gold standard when it comes to knowing if drugs actually work — in the hands of pharmaceutical companies is tantamount to letting the fox guard the henhouse. “[P]harmaceutical companies have used techniques leading to bias in the content of clinical research,” as Lexchin writes in Science and Engineering Ethics, “at every stage in its production.” Firms have an incentive to hide evidence, to present results in misleading ways, and to design trials to maximize the chance of a positive (read: profitable) result.

A host of reforms could do much to improve this system, from the widely supported step of negotiating drug prices downward to higher standards for drug approval. Yet there is ultimately a trade-off that progressives do not always admit: at some point, firms will walk away from finding new treatments, at least for certain conditions, once they find the pursuit unprofitable, as Pfizer did with Alzheimer’s.

Simply put, we need a far more fundamental change in the way we develop drugs. We need public drug invention, testing, and production capacity.

The Public Prize

What matters is not so much the ownership of drug manufacturers as the ownership of drug knowledge. The public at large could become the “owner” of more and more drugs, yet it would benefit from this ownership not through the profits of a patent-monopoly regime, but instead by having drugs that are priced close to the cost of manufacture and available, for free, to everyone.

This idea of public drug ownership is not a new one. For instance, Jonas Salk — when asked who owned the patent on his polio vaccine — would famously respond, “The people . . . There is no patent.”

More recently, Dean Baker, co-founder of the Center for Economic and Policy Research, outlined an approach to drug development that eschews patents. In 2002, for instance, he and Noriko Chatani presented the case for a public drug R&D program. They noted that such a program could have a multitude of benefits apart from cost savings: it would reduce “me-too” drug research, spending on armies of patent lawyers and drug advertising, and efforts to conceal drug research as trade secrets. Their proposal was subsequently used as a model for Representative Dennis Kucinich’s 2004 “Free Market Drug Act,” which would have set up ten public research laboratories within the NIH to develop new drugs outside the patent system. In 2008, Baker also described a system of publicly funded clinical trials, which would be a good idea for scientific reasons (i.e., eliminating conflicts of interest) quite apart from its economic benefits.

Another prominent proponent of an approach to drug development outside of the patent system is James Love, director of Knowledge Ecology International (KEI), who has emphasized the benefits of alternative R&D systems from the global perspective. In low-income nations, after all, the protection of drug patents is often tantamount to a form of mass violence, which the especially abhorrent history of HIV medications made clear. Will we ever truly know, for instance, how many died in South Africa as a result of the effort by pharmaceutical companies and their political supporters in the Clinton administration to protect drug patents and stymie that nation’s production or importation of cheap generic HIV drugs, as the Observer reported in 1999? Love and KEI emphasize so-called “delinkage”: separating the financing of drug development from drug prices through various mechanisms, including “prize funds” that would provide lump sums as an award for the development of new drugs in lieu of patents; drugs could then be manufactured cheaply anywhere.

Finally, Senator Bernie Sanders has for many years floated a bill based on these ideas, the “Medical Innovation Prize Fund Act,” which would go further in ending the system of drug patent monopolies, instead funding new drug development entirely through “prizes” amounting to 0.55 percent of US gross domestic product (roughly $100 billion a year at current GDP).

All of these approaches have merit. In truth, however, “prizes” — large lump-sum awards that function as a substitute for the profits of patent monopolies — are probably not necessary. The inventors of insulin may have been chasing, to some extent, fame and glory, yet they weren’t expecting to become billionaires. There is no reason that those developing drugs need different incentives than those working to discover the basic molecular pathways by which disease affects cells.

The NIH, in short, should fund and oversee the development of new drugs as well as their testing in clinical trials, much as it funds basic science research now. The products of this public enterprise would be our collective property, and could then be manufactured relatively cheaply across the globe.

The Long Road

Yet even with drug-price negotiations at the national level, and even with a public drug-development program, we might still be held captive by Big Pharma, especially in the short term. For one thing, it will be many years before the first public drugs begin to reach patients. Meanwhile, for Pharma’s patented agents, drug negotiations might still fail: if there is one best drug for a condition, a single-payer system has only so much leverage, and drug firms might decline to accept a “reasonable price.” And finally, even without patents, generic manufacturers can still gouge the public whenever they corner the market on a particular generic drug that nobody else is interested in producing.

Clearly, we need a plan to remake our pharmaceutical system from the bottom up — and not only in the United States. To fill this gap, a group of US and Canadian health scholars, physicians, and advocates is working on a blueprint for change that incorporates many of the reforms discussed above, and goes even further.

For instance, what do we do when pharmaceutical companies refuse to offer a reasonable price on a drug they have patented? One important tool is compulsory licensing, which the US government could begin using today without any new legislation. As professors Amy Kapczynski and Aaron Kesselheim described in a 2016 article in Health Affairs, 28 U.S.C. Section 1498 gives the government the right to effectively bust a patent in order to produce (or allow a generic manufacturer to produce) a branded drug at low cost, while paying the patent owner a relatively small royalty. Similarly, the aforementioned Bayh-Dole Act allows the government to “march in” and break a drug patent when there is a public interest in doing so. An administration could be pressured to take these steps without a new act of Congress; indeed, as reported in Kaiser Health News, there is already a push in Louisiana to use Section 1498 to make hepatitis C treatment affordable to all.

But even with a tool for patent-busting, we can still be scammed by generic manufacturers. Pharma bro Martin Shkreli, after all, did not have a patent on the very old drug pyrimethamine (used mostly to treat a parasitic infection in people with AIDS), whose price he raised through the ceiling: it was a relatively tiny market in which no other firms were interested. There was no patent to bust, and if we had regulated prices downward, he might simply have stopped producing the drug. We also have a serious problem with drug shortages in this country, which arises from a multitude of supply-chain issues. A public manufacturer could serve as a backstop to produce drugs in both scenarios, and there is precedent here: as health policy scholar Marc-André Gagnon notes, the public network of pharmacies in Sweden has its own drug-manufacturing capacity, which accounts for 1.2 percent of all drug sales in that nation.

A host of other reforms will be needed, however, ranging from upgrading standards of drug approval and monitoring (which have become far too deferential to the prerogatives of industry) to changing the way drugs are promoted (tens of billions of dollars are wasted each year pushing pills on both doctors and the public). Yet the most important reform pertains to the final “stage” in our pharmaceutical system — the way drugs are made available to the public.

Effective Treatments Must Be Free

In the decade after the discovery of insulin, a young Scot named Archie Cochrane went to a rally outside London to advocate for a National Health Service (NHS). Later in life, Cochrane would become famous as the founding father of what is today called “evidence-based medicine,” the idea that medical practice should be grounded in the best science and, in particular, that drugs and other treatments should be proven efficacious in objective, randomized, controlled clinical trials (an idea that pharmaceutical companies, eager to have their products approved, would like to see rolled back today). In the 1930s, however, Cochrane was a medical student and a member of the Socialist Medical Association who would go on to volunteer in an ambulance unit during the Spanish Civil War. As he recalled at the beginning of his book Effectiveness and Efficiency: Random Reflections on Health Services, he went to the rally that day armed with a banner emblazoned with a slogan that would inform his life’s work: “All effective treatments must be free.”

This is a slogan we would be wise to resurrect — and write on signs — today. Achieving it would require two simple things: first, knowing which treatments are, indeed, effective, something that necessitates separating medical science and clinical trials from the multifaceted chicanery of the profiteers, some of which has been described in this article; and second, making those treatments free.

For some, the idea of free medicines for all might seem utopian, were it not already a reality in places. When it was founded in 1948, the United Kingdom’s NHS included drugs among the forms of health care available to all, free at the point of use. Cost overruns, however, led the Labour Party to introduce co-pays on dental services and prescription drugs three years later. What is less well known is that in the twenty-first century, in the parts of the UK outside England, medicine co-pays have once again been eliminated. And last year — ten years after Wales ended these co-pays — the BBC reported that the Welsh government had found that the policy of free medicine may actually have saved it money.

We should not expect similar savings in the United States: precisely because so many people currently go without medications they cannot afford, eliminating drug co-payments is certain to increase drug utilization — as it should. Still, such experiences demonstrate that a world in which effective medicines are “free” is not merely desirable, but eminently achievable.

Pharmaceutical history thus shows us two things. First, researchers need not be promised riches to struggle to create new and effective treatments — patents could instead belong to the public. Second, all effective treatments can indeed be made free. With a universal health care program providing first-dollar coverage, and a public drug-development program decommodifying the creation of new pharmaceutical knowledge, a radically new, and better, stage in human health might finally dawn.