AHRQ National Summit on Public Reporting for Consumers

Yesterday, I spoke at the Agency for Healthcare Research and Quality (AHRQ) National Summit on Public Reporting for Consumers in Washington, D.C. I have included my remarks below.

Thank you for the invitation to speak at this important gathering. I welcome the opportunity – and I ask for your indulgence with my many personal references – but this is, in large part, a personal reflection.

The past 25 years have seen a large-scale shift in the way the public at large and the medical and health care communities view the issue of quality and safety in health care. I have been privileged to participate in this transformation at the national level in several roles, and I would like to describe that evolution.

Now, however, I have the job of leading an academic medical enterprise, and am faced with the task of putting lofty ideas into practice at the local level. I remain very committed to the effort, but we are challenged daily to put the best ideas into practice.

In the spring of 1986, I was nominated by President Reagan and confirmed by the Senate for the position of administrator of the federal Health Care Financing Administration, with responsibility for the Medicare and Medicaid programs nationally. For the previous three years I had served on the White House domestic policy staff.

As I navigated the confirmation process, events that led to the first large-scale public reporting effort began to unfold – but more on that in a moment.

When Medicare was enacted in 1965, the principal focus of the legislation was enhancing access to health care services for older Americans. The history of the legislation featured a major battle over the roles of government and of physicians in guiding health care – and the 1965 statute included the now quaint notion that decisions about medical care should be left entirely to doctors.

During the 1970s and early 1980s, widespread and growing concerns about the cost of care and the quality of services overtook the idea of further expansion of access, and became the dominant health care issues confronting the federal government.

Jack Wennberg’s seminal work showed wide geographic variation in the practice of medicine. Bob Brook found that a large portion of medical services was judged to have been unnecessary by a panel of experts. And David Eddy highlighted the lack of clear evidence for the effectiveness of many medical practices.

There was a major national debate over the high cost of hospital services in the early 1980s, leading to the enactment of the hospital prospective payment system (or “DRG” system). By the time I came to head the Medicare program in mid-1986, the “quicker and sicker” issue was before the Congress – the concern that the new methodology for paying hospitals was leading to patients being discharged too soon, and to their detriment.

My friend Sam Mitchell gave me some very valuable advice: rather than be pummeled for Medicare supposedly letting quality slip for financial reasons, I should launch a very public initiative to improve Medicare quality. That advice paid off sooner than I had expected.

The Medicare agency had undertaken some innovative analysis of hospital data that had been collected through the Peer Review Organization program. Dr. Henry Krakauer, a very creative combination of physician and statistician, did much of this early work. Robert Pear, a reporter for The New York Times, learned that HCFA had this information, which analyzed large-scale patterns of use and outcomes, focusing on hospital-specific mortality rates adjusted for diagnosis and other factors.

Instead of just releasing the data, we decided to publish it ourselves, as part of a major effort to improve the quality of care in Medicare. There are many folks that I need to thank for their hard work on this project – Henry Desmarais, who was acting HCFA administrator until I got there in May 1986; Glenn Hackbarth, who was my deputy; Tom Morford and John Spiegel, who oversaw the quality program at HCFA; and many others.

An earlier experience I had as county health officer in my hometown of Birmingham, Alabama – publishing restaurant inspection results in the local newspaper – helped shape my thinking about the importance of giving the public information for their decision making.

We were careful to say that the Medicare hospital mortality information was not to be used on its own to make health care decisions – but it ought to be used to raise questions and to aid in the complex process of choosing where to get health care services.

We committed to an annual publication of this information, and embarked on a broad effort to improve quality in the Medicare program.

From 1986 until 1991, HCFA published Medicare hospital mortality information each year. The hospital industry was very strongly against this effort, at least initially, saying that it would do more harm than good. Hospitals complained that our severity adjustments were not adequate and that our cut points for outliers were arbitrary.

I repeatedly said that we welcomed their help with improving the reporting initiative, but that the perfect should not be the enemy of the good.

Fred Mosteller, Jack Wennberg, and Bob Brook all gave us advice. And a largely unreported part of this story is that Dr. W. Edwards Deming gave us his insights and advice too.

Through this process all of us, including the hospitals, learned a great deal about hospital performance and began to explore how to evaluate health care. Shortly thereafter, we decided to launch a new effort that we called HCFA’s “Effectiveness Initiative,” which sought to use the Medicare data to improve health care. My colleagues – Glenn Hackbarth, Henry Krakauer and Bill Winkenwerder – and I published a paper in the NEJM in 1988 entitled “Effectiveness in Health Care: An Initiative to Evaluate and Improve Medical Practice.” And Bud Relman wrote an accompanying editorial, which he called “Assessment and Accountability: The Third Revolution in Medical Care.”

We believed that information about health care effectiveness was a public good and therefore that the government should play a key role in data collection and evaluation of effectiveness. We sought funding for a research program to drive this process, and the following year the Congress established the Agency for Health Care Policy and Research, later renamed the Agency for Healthcare Research and Quality.

The Institute of Medicine, under Sam Thier’s and then Ken Shine’s leadership, played a very important role across the decade of the 1990s, defining “quality” in health care and pointing to problems in quality and patient safety. Bill Richardson led a multi-year IOM initiative that included the groundbreaking report To Err is Human in 2000, and then Crossing the Quality Chasm in 2001.

These reports were a clarion call for action – especially making the point that a systems approach was required to deal effectively with these issues.

In the past several years, a number of organizations have contributed mightily to this effort – including NCQA, the Joint Commission, the Institute for Healthcare Improvement, the Leapfrog Group, AHRQ, CMS, the Hospital Quality Alliance, the AQA, and the National Quality Forum, whose board I am now privileged to chair.

I believe it is important to point out how far we have come in the past 25 years.

Twenty-five years after HCFA’s first foray into public reporting, the world could not be more different. There are now hundreds of reporting sites providing information to consumers, patients and other stakeholders.

• There are public sector sites, such as the HHS Healthcare Compare series, and many sites maintained by states, especially ones addressing safety issues such as health care-acquired infections.

• Reporting on health care results is now part of mainstream product/service evaluation efforts, such as those of Consumers Union and U.S. News & World Report’s top rankings.

• It has also taken root at the community level, with sites sponsored by community partnership efforts, business coalitions, provider groups, states and others. Let me add that a number of community reports are being supported by AHRQ, through its technical assistance program for 24 multi-stakeholder community collaboratives referred to as Chartered Value Exchanges, and by RWJF, through its technical assistance program for more than 15 multi-stakeholder community collaboratives referred to as Aligning Forces sites.

Although the earlier efforts focused on hospital quality, performance reporting has spread to virtually all levels and sites of care, and is beginning to address overuse and cost.

While there are spirited debates around the measures, reporting formats, who can and should be held accountable, use of the information in payment programs, and more, there is no doubt that public reporting has come of age. It is here to stay.

As I look back on the previous 25 years, I think there were three important streams of activity that together have brought us to where we are today.

First, a broad consciousness-raising about the serious gaps in safety and quality. When HCFA first released the mortality reports, few within the health professions understood the magnitude of the quality gap. Although there had been some important publications in peer-reviewed journals, they were few in number. Thanks to the great work by our colleagues in the health services research community (and to the support provided by AHCPR, now AHRQ), the body of evidence supporting quality and safety gaps grew significantly, to nearly 100 publications in peer-reviewed journals during the 1990s. That substantial body of evidence provided a solid foundation for the series of landmark reports issued by the Institute of Medicine – To Err is Human, Crossing the Quality Chasm and others. To Err is Human alone received saturation coverage in the print and broadcast media for nearly three days following its release.

After that, the quality and safety of health care became a national issue worthy of public discourse on Oprah, 60 Minutes, and other primetime programs. Congress held hearings; John Eisenberg (the head of AHRQ at that time) implemented an expansive set of safety and quality initiatives across the federal agencies; and leadership within each of the major health professions took action, as did others in the hospital industry and academic medical centers. The “horse was out of the barn,” and quality and safety became a national movement.

Second, building a quality measurement and reporting enterprise. While public awareness of the quality issues provided important momentum and demand for public accountability, investing in what is sometimes called the Quality Measurement Enterprise has been critical to our success. Those less familiar with the quality issues often find it difficult to understand why measuring and reporting on quality and safety is so difficult. I know first-hand that it is exceedingly complicated.

The fact that we now have a reasonably robust quality measurement enterprise is a tribute to the work of President Clinton’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry, which issued a report in 1998 recommending, among other things, the creation of what is now known as the National Quality Forum.

The President’s Commission report made it very clear that if we are going to have a more transparent health system – one that measures, reports and holds itself accountable – we must establish the necessary infrastructure to produce valid, reliable and useful performance information.

Over the last decade, we have made great strides in establishing a quality measurement and reporting enterprise that includes: setting priorities, developing and endorsing measures, building a data platform, selecting the best measures for different applications (be that payment or public reporting), and creating public websites. Let’s talk about each of these.

Setting Priorities. As the head of a large system, I know all too well the demands on our doctors, nurses and other members of the care team. We need to make wise choices in focusing our measurement and improvement activities on areas that will remove health system waste and produce better outcomes for patients. In 2008, NQF established the National Priorities Partnership, a collaborative of 42 major public and private sector stakeholders, which put forward a set of six major priorities—patient and family engagement, safety, care coordination, overuse, palliative care, and population health—along with specific goals (such as reducing unnecessary hospital readmissions and avoidable emergency department visits). NPP also called for a focus on eliminating “disparities in everything we do.” These priorities have helped to focus many of our efforts. We are all very pleased that under the ACA, the HHS Secretary is charged with establishing a National Quality Strategy (soon to be released) with National Priorities Partnership input.

Developing and Endorsing Measures. As many of you know, developing good performance measures is challenging. Our capacity in this area has grown significantly through the pioneering efforts of the National Committee for Quality Assurance, the Joint Commission, the Physician Consortium for Performance Improvement, AHRQ and CMS—all of whom develop measures, along with others in academic settings, health plans, and provider organizations. NQF’s process for evaluating measures and identifying the “best in class” has improved markedly over the past decade, and we now have a portfolio of over 600 national standardized measures applicable to virtually all settings, sub-populations and major conditions.

Data Platform. Perhaps the area where we have lagged behind some of our European colleagues is in creating an electronic data platform to support measurement and improvement, but we are now catching up. The lack of interoperable electronic health records and personal health records has slowed our progress, and this is best evidenced by what is NOT being measured and reported: there is a noticeable absence of measures of patient outcomes (like health functioning), care coordination, and resource use and value. With the passage of HITECH and the leadership of the Office of the National Coordinator, we have the opportunity to close this gap.

Selection of Measures for Use in Payment and Public Reporting Programs. It can also be challenging to identify the best available and most appropriate performance measures for use in various programs. For public reporting to patients, we want measures that will help them make decisions about selecting providers or choosing between treatment programs. For value-based purchasing programs, we want measures that will enable and reward providers for delivering safe, effective and coordinated care, and also measures that will detect unintended consequences. With HHS support, NQF is now convening a Measure Applications Partnership, a multi-stakeholder group that will provide advice to HHS and others on which measures are best for different applications. This is an area where we all have a lot to learn. Determining which measures are most salient and useful for different audiences and applications really is a pressing challenge.

Reporting Programs. Lastly, we need trusted sources that provide performance results to the public. Many states have led the way here, with about two-thirds engaged in various forms of public reporting, including the important pioneering efforts of New York and Pennsylvania in the area of cardiac care dating back to the 1990s. Private sector initiatives, such as Consumers’ Checkbook and NCQA’s HEDIS reporting for health plans, have also paved the way. In recent years, HHS’s Healthcare Compare series has had significant impact.

In developing the quality measurement and improvement enterprise, we have learned that this must be a public and private sector partnership—neither can do it alone. We also need to strike the right balance between national and community efforts. Although there is a clear role for a “national quality measurement and reporting enterprise,” it must be informed by, and responsive to, community-level needs and initiatives. Some of the most innovative work is carried out at the community level. Minnesota, for example, has been a leader in measurement and public reporting for over two decades.

And third, slow but important cultural change. The third stream of activity that has contributed to our progress over the last 25 years has been the recognition that performance measurement and public reporting involve profound cultural change.

First and foremost, the voice of consumers must be at every table. Public reporting of performance results is fundamental to creating a patient-centered health care system. As Don Berwick has said, the patient is “True North.” Or as the IOM’s Crossing the Quality Chasm said, “Nothing about me, without me.” We have an opportunity over the coming decade to create a health system that is truly patient-centered, but to be successful, we must listen to the patient by measuring patient experience and perceptions of care through instruments like CAHPS, and by engaging consumer representatives in our decision-making processes, whether that be setting national priorities, developing and endorsing measures, or designing public websites.

Health care providers and clinicians also must be involved at every step of the way. Measure developers such as the PCPI, and NQF through its endorsement process, have gone to great lengths to develop open, transparent processes that engage providers and seek consensus. By consensus, I don’t mean unanimity – that can lead to the least common denominator. The way we use the term at the NQF is “allowing all voices to be heard.” Providing many opportunities for providers and clinicians to have input leads to more informed decisions and a broader base of support for decisions. But I can assure you, it does not lead to unanimous support for the decision. In fact, when it comes to performance measures, there is almost always someone who is unhappy. Transparency of performance information fundamentally changes the clinician-patient relationship. It puts us on the road to developing a culture of shared decision-making that provides each and every patient the opportunity to participate in care decisions with their clinician, should they so desire.

In summary, it has been a long journey—25 years is much longer than I thought it would take when HCFA first released the hospital mortality reports. But we have come a long way and we have learned a great deal that we can apply to other areas of health system change.
