Comparative effectiveness is the government’s attempt to establish what works and what doesn’t
Attempts to establish the effectiveness of medical products, devices and procedures are making news today. At the forefront of such efforts is the Agency for Healthcare Research and Quality, which conducts studies on behalf of Medicare and Medicaid. Some payer organizations, including the Blue Cross and Blue Shield Association, have established technology assessment programs as well. (See “Putting Technology to the TEC,” May/June 2007 Journal of Healthcare Contracting.) Many JHC readers are familiar with ECRI Institute, which has been producing evidence-based patient information for some time. And more recently, Premier Healthcare Alliance’s QUEST program was designed, in part, to measure the effectiveness of various medical technologies. (See March/April 2009 JHC.)
But the phrase “comparative effectiveness” really hit the streets in February 2009, when Congress enacted the American Recovery and Reinvestment Act of 2009, the so-called “stimulus package.” Part of the act called for $1.1 billion to be invested in comparative effectiveness research: research that weighs the benefits and harms of various ways to prevent, diagnose, treat or monitor clinical conditions, in order to determine which work best for particular types of patients and in different settings and circumstances.
A Federal Coordinating Council for Comparative Effectiveness Research was charged with developing and disseminating research assessing the comparative effectiveness of healthcare treatments and strategies. The Council was also asked to encourage the development of clinical registries, clinical data networks and other forms of electronic health data that could be used to generate or obtain outcomes data.
Many in the industry applauded the comparative effectiveness provisions of the stimulus package. But others decried them. The Washington Times, for example, compared the program to one in Adolf Hitler’s Germany under which “unproductive” members of society were to be euthanized. Talk radio host Rush Limbaugh said that the comparative-effectiveness program was the government’s way of telling seniors to “get out of the way and die.” And former Alaska Governor Sarah Palin reportedly warned of the creation of “death panels,” which would make life-or-death decisions for ordinary Americans.
Why the vitriol? Because detractors fear comparative effectiveness will lead to its cousin, comparative cost-effectiveness. They fear that with clinical data in place, policymakers and payers will decide whether the merits of various procedures and technologies are enough to justify their cost. This is something that the United Kingdom’s National Institute for Health and Clinical Excellence (NICE) already does.
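For readers unfamiliar with how a body like NICE weighs cost against benefit, the standard yardstick in cost-effectiveness analysis is the incremental cost-effectiveness ratio, or ICER. The sketch below is a general illustration with hypothetical figures, not a description of NICE’s exact method or of any actual coverage decision:

ICER = (cost of new treatment − cost of current standard) ÷ (health gained with new treatment − health gained with current standard)

Health gain is usually measured in quality-adjusted life years (QALYs). So a therapy that costs $40,000 more than the current standard and delivers an extra half QALY has an ICER of $80,000 per QALY gained; a payer applying a willingness-to-pay threshold of, say, $50,000 per QALY would judge it not cost-effective. NICE has historically worked with thresholds on the order of £20,000 to £30,000 per QALY.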
In June, at the request of Congress, the Institute of Medicine published a report recommending 100 health topics that should get priority attention and funding from the comparative effectiveness effort. The report also spelled out actions and resources needed to ensure that comparative effectiveness research will be a sustained effort with a continuous process for updating priorities as needed.
One member of the 23-person IOM committee (called the Committee on Comparative Effectiveness Research Prioritization) was Sean Tunis, M.D., founder and director of the Center for Medical Technology Policy in Baltimore, Md., a nonprofit organization focused on improving the value of clinical research for decision-making. Tunis received a bachelor’s degree in biology and history of science from the Cornell University School of Agriculture, and a medical degree and a master’s degree in health services research from the Stanford University School of Medicine. He did his residency training in emergency medicine and internal medicine at UCLA and the University of Maryland. He is board-certified in internal medicine and holds adjunct faculty positions at the Johns Hopkins and Stanford University Schools of Medicine.
Tunis was a senior research scientist with the Technology Assessment Group, where his focus was on the design and implementation of prospective comparative effectiveness trials and clinical registries. He also served as the director of the health program at the Congressional Office of Technology Assessment and as a health policy advisor to the U.S. Senate Committee on Labor and Human Resources, where he participated in policy development regarding pharmaceutical and device regulation.
He joined the Centers for Medicare and Medicaid Services in 2000 as director of the Coverage and Analysis Group. Through September 2005, he was director of the Office of Clinical Standards and Quality and chief medical officer at CMS. In this role, he had lead responsibility for clinical policy and quality for the Medicare and Medicaid programs, which provide health coverage to more than 100 million U.S. citizens. He supervised the development of national coverage policies, quality standards for Medicare and Medicaid providers, quality measurement and public reporting initiatives, and the Quality Improvement Organization program. As chief medical officer, he served as the senior advisor to the CMS administrator on clinical and scientific policy. He also co-chaired the CMS Council on Technology and Innovation.
In 2005, he founded the Center for Medical Technology Policy. He spoke with JHC recently about comparative effectiveness.
Journal of Healthcare Contracting: Judging from your resume, it’s obvious you’ve been interested in comparative effectiveness for quite some time. What stimulated your interest in it?
Sean Tunis, M.D.: Throughout my clinical training at Stanford, UCLA, the University of Maryland and Johns Hopkins, I was always interested in the underlying science that is used to guide clinical practice. And that’s really what technology assessment and comparative effectiveness are about.
JHC: You worked for CMS from 2000 to 2005. How did Medicare and Medicaid make their coverage decisions? What were the strengths of that process? What were the limitations?
Tunis: I was recruited to CMS to try to bolster and formalize the coverage decision-making process: specifically, to make it more evidence-based and transparent. At that time, the process wasn’t well defined. There were no particular time frames, and no formal documents were created to explain how a particular coverage decision had been reached. It was a black box, and it lacked consistency and formality.
So we developed a process, with timelines; we upgraded the technical sophistication of the staff; and, probably most important, we started publishing documents describing our decisions and the scientific evidence they were based on. If I had to point to one thing that made the biggest difference, it was the lesson we learned about transparency. If you force yourself to be accountable, to defend your decisions publicly where anyone can point out an error or omission, nothing is more powerful in driving process improvement. I liken it to restaurants where you can actually see into the kitchen. I imagine their cookware is cleaner than others’. If you let people see what you’re doing, you’ll get better.
JHC: In your experience, did you find that the federal government lagged behind or exceeded the capabilities of private payers in this regard?
Tunis: The Medicare program is not primarily a technology assessment group. We did some of that, but we were a coverage decision-making body. We made binding decisions about which products and services would or would not be covered for Medicare patients. Medicare is a decision-making body that uses evidence, but it does not analyze or synthesize evidence itself. We tried to figure out where the existing evidence was sufficient to make our decisions, and to organize stakeholders to conduct the studies that were missing.
JHC: Why did you found the Center for Medical Technology Policy?
Tunis: At Medicare, after we had developed a structured process to look at evidence, it was blatantly obvious that no one had done the studies we were hoping to see. So I became interested in how Medicare could use its statutory authority and resources to promote the development of better information, instead of limiting itself to the information that already existed. The Center for Medical Technology Policy grew out of that. I thought if I could work with private payers and other stakeholders, we might be able to leverage more people and resources to fill some of the critical gaps in information.
JHC: The Center for Medical Technology Policy advocates what you call “pragmatic” trials of products and procedures. What are pragmatic trials, and how do they differ from other clinical trials?
Tunis: Pragmatic trials are intentionally designed to be relevant to real-world patients and doctors. They are more informative to people who have to make clinical or policy decisions. When you develop the study protocol, you’re thinking about who needs the information and what kind of decisions they have to make. Pragmatic trials are one tool to use in comparative effectiveness research, but there are others.
JHC: Our understanding is that the FDA does not routinely compare the effectiveness of a new technology with existing technologies, but rather, its efficacy. Do you think comparative-effectiveness research will affect the way the FDA looks at devices and pharmaceuticals?
Tunis: It’s not universally true that the FDA doesn’t require comparative studies, but it is true in most circumstances. A simple way to look at efficacy vs. effectiveness is this: Efficacy tells you how something works under optimal circumstances; effectiveness looks at how it works in the real world. Also, keep in mind that the FDA clears many products for marketing through the 510(k) process; manufacturers don’t have to provide much – or any – evidence of a product’s clinical effectiveness because the product is deemed substantially equivalent to a product already on the market.
I don’t expect that the FDA will change its approach to making regulatory decisions based on comparative effectiveness in the near term. What I imagine to be true – and this is the area we’re working in – is that companies that are developing products may want to do studies that satisfy the FDA but are also more informative to payers, patients and clinicians.
JHC: The Institute of Medicine’s Committee on Comparative Effectiveness Research Prioritization was composed of people representing many different fields and interest groups – physicians, hospitals, consumer groups, academia, insurers, etc. How did you navigate those differences to arrive at a consensus on the suggested research topics?
Tunis: The many different backgrounds and perspectives on the committee made it interesting, informative and educational. There weren’t a lot of battles about the priorities, but there were points of difference on the policy questions. For example, if a long-term entity was going to be set up for comparative effectiveness, what should its governance look like? Who should have oversight of it? Fortunately, answering those questions wasn’t part of our assignment.
The real challenge for the committee was the time frame. The legislation said that we needed to issue a report by June 30, but the committee didn’t start work until April. Trying to get 23 people together, sifting through a couple of thousand suggestions for priorities, creating a methodology for priority-setting, and writing a report by June 30 was an interesting challenge. The fact is, the bulk of the work was done during a three-day retreat in Virginia. Everyone was there, including Carolina Hinestrosa of the National Breast Cancer Coalition, who attended despite being extremely ill.
JHC: The concept of comparative effectiveness has drawn the ire of many, who believe it will lead to decisions based on the cost-effectiveness of certain medical procedures and products. Are the two – comparative effectiveness and decisions based on cost-effectiveness – connected?
Tunis: My personal opinion is that one of the major limitations of doing good cost-effectiveness studies is the lack of good information on clinical effectiveness. The more important point is this: The venom and hostility don’t come from the fact that we’d be looking at cost-effectiveness or placing a value on human life. It’s the most fundamental question of who’s making the decision: government, patients, doctors? When people start to use the rhetoric of Nazism, that’s usually about big government messing around in your healthcare.
But it’s deceptive to say, “We don’t want government to look at our decisions,” and then forget that there are consequences to how we set up a system. The simple truth is, you end up having to choose the lesser of two or three evils. People forget that “everybody gets whatever they want whenever they want it” is not actually a valid option. That only works if you don’t worry about the 63-year-old uninsured person who can’t afford drugs for his hypertension.
JHC: What difficulties do you foresee in getting providers, patients, payers, etc., to not only review comparative effectiveness reports, but to use them when making medical decisions? After all, don’t guidelines exist today?
Tunis: I would offer three observations about that. First, if there is significant progress with electronic medical records, comparative-effectiveness information will be available to doctors and patients at the point of care. But that’s still a ways off.
Second, if healthcare reform is enacted that corrects some of the perverse incentives embedded in fee-for-service medicine, then there will be good reason for clinicians and patients to be informed decision-makers. There will be much more demand for checking out the best evidence. And once that demand exists, there will be suppliers to provide it. In a healthcare system that pays more for doing more, there hasn’t been much of a demand for that kind of information.
Third, a compelling comparative-effectiveness study can change how people practice medicine. An example would be the COURAGE trial, which compared stents with medical management for coronary artery disease. Such studies come up with a definitive result that says, “This approach has no impact,” or “This approach does.” Part of the reason a lot of research fails to change medical practice is that it is not well designed. But when somebody answers an important question with a high degree of reliability, people pay attention.
As better, more reliable evidence becomes available, and as incentives are put in place for people to pay attention to it, you have the makings of a revolutionary transformation in healthcare.