In determining healthcare cost, one size doesn’t fit all
– Peter Pitts is president of the Center for Medicine in the Public Interest and a former FDA Associate Commissioner. The views expressed are his own. –
As part of its healthcare reform bills, Congress is calling for a more aggressive use of comparative effectiveness research (CER). What does this mean? Is comparative effectiveness the same thing as cost effectiveness?
No. There's a big difference.
Cost effectiveness research is what the United Kingdom's National Institute for Health and Clinical Excellence (NICE) does. NICE uses a measure known as a Quality Adjusted Life Year (QALY) to assess whether a treatment is cost-effective. If providing an additional year of life costs more than $50,000 — the average price of a fully-loaded Land Rover — NICE won't recommend that treatment.
For example, NICE's preliminary decision was that four new kidney cancer drugs — Torisel, Avastin, Nexavar, and Sutent — should not be reimbursed by the National Health Service (NHS) because, despite clinical evidence that these drugs can actually help, they weren't "cost effective."
Currently, the only available treatment for metastatic renal cell cancer is immunotherapy, which halts the disease's progress for just four months on average. But if patients aren't candidates for immunotherapy, or the treatment doesn't work, that's it. They have no other treatment options.
NICE agreed that patients tended to live longer when they were given these drugs. But when it put the data from the trials into its QALY-driven computer models, it found that the drugs cost about £20,000 to £35,000 ($39,000 to $68,000) per patient each year. NICE deemed this too pricey and didn't recommend that the NHS cover these drugs.
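The arithmetic behind a decision like this is straightforward: divide the extra cost of a treatment by the extra quality-adjusted life years it buys, then compare the result to a threshold. Here is a minimal sketch in Python; the figures and the 0.7 quality weight are illustrative assumptions, not NICE's actual model inputs.

```python
# Illustrative cost-per-QALY (incremental cost-effectiveness ratio) calculation.
# All numbers below are made-up examples, not NICE's real model inputs.

def cost_per_qaly(extra_cost, extra_life_years, quality_weight):
    """Cost of one quality-adjusted life year gained by the new treatment."""
    extra_qalys = extra_life_years * quality_weight  # e.g. 0.7 for a year lived with side effects
    return extra_cost / extra_qalys

# A drug costing 28,000 GBP more per patient per year, extending life by
# one year at an assumed 0.7 quality weight:
icer = cost_per_qaly(extra_cost=28_000, extra_life_years=1.0, quality_weight=0.7)
threshold = 30_000  # a commonly cited upper bound of NICE's range, GBP per QALY

print(f"ICER: {icer:,.0f} GBP per QALY")  # 40,000 GBP per QALY
print("Recommend" if icer <= threshold else "Do not recommend")  # Do not recommend
```

The same drug at the same price would pass or fail depending on the quality weight and threshold chosen — which is precisely why these model parameters, not just clinical results, decide patients' access.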
The result is that the government saves money and patients receive an expedited death sentence. That's not hyperbole; that's the simple truth about cost effectiveness research.
Comparative effectiveness research is different.
It strives to show which medicines are most effective for a given disease. In other words, CER asks whether drug A or drug B is the "more effective" statin. It examines which of a variety of therapies is the "more effective" treatment for depression. Most of the world refers to comparative effectiveness research as Healthcare Technology Assessment.
But CER raises some serious problems. For instance, how do you compare two molecules (or three or more) that perform differently depending on a patient’s personal genetic make-up?
It's for this reason in particular that CER often leads to a "one-size-fits-all" approach to treatment.
The concept behind comparative effectiveness research is good, but the tools aren't.
CER relies heavily on findings from randomized clinical trials. While these trials are essential to demonstrating the safety and efficacy of new medical products, the results are based on large population averages that rarely, if ever, indicate which treatments are "best" for which patients. This is why it is so important for physicians to maintain the ability to supplement study findings with their own expertise and knowledge of their patient in order to make optimal treatment decisions.
Government-sponsored studies that conduct head-to-head comparisons of drugs in "real world" clinical settings are a valuable source of information for coverage and reimbursement decisions — if not for making clinical decisions.
Two studies — the Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) study, and the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) study — are examples of "practice-based" clinical trials, sponsored in part by the National Institutes of Health, to determine whether older, less expensive medicines were as effective in achieving certain clinical outcomes as newer, more expensive ones.
The findings of both CATIE and ALLHAT were highly controversial, but one thing is not: even well-funded CER can be swiftly superseded by trials based on better mechanistic understanding of disease pathways and pharmacogenomics. And, since most comparative effectiveness studies are underpowered, they don't capture the genetic variations that explain why different patients respond to medicines in different ways.
But it's important to move beyond criticizing comparative effectiveness in its current form, and instead focus on a policy roadmap for integrating more patient-centric science into comparative effectiveness research.
Much like the U.S. Food and Drug Administration created something called the Critical Path Initiative to apply 21st-century science to the development of personalized medicine, another national goal should be to create a Critical Path Initiative to apply new approaches to data analysis and new clinical insights to promoting patient-centric healthcare.
Why? Because comparative effectiveness research should reflect and measure individual responses to treatments based on a combination of genetic, clinical, and demographic factors. The first steps have been taken. For example, the Department of Health and Human Services has invested in electronic patient records and genomics.
One way to complement this would be to encourage the Centers for Medicare and Medicaid Services to adopt the use of data that takes into account individual patient needs.
We also need to develop proposals that modernize the information used in the evaluation of treatments. Just as the FDA Critical Path Initiative uses genetic variations and biomedical informatics to predict individual responses to treatment, we must establish a science-based process that incorporates personalized medicine into reimbursement decisions.
For instance, the FDA has developed a Critical Path opportunities list that provides 76 concrete examples of how new scientific discoveries in fields such as genomics and proteomics could be used to improve the testing of investigational medical products.
We need to begin the process of developing a similar list of ways new discoveries and tools (such as electronic patient records) can be used to improve CER.
It's a complicated proposition, but the goal is simple and essential: cost must never be allowed to trump care, and short-term savings must not be allowed to trump medical outcomes. Just as we need new and better tools for drug development, so too do we need them for comparative effectiveness research.