The Commonwealth Fund just published its fourth Mirror, Mirror on the Wall study comparing the U.S. health care system with other countries, and as in all previous studies, we ranked as the absolute worst health care system in the developed world, bar none. Yikes. The Commonwealth Fund studied many health care domains, and we didn’t rank first in anything. The best we managed to do was place a lackluster third in the subcategory of Effective Care. The United Kingdom, on the other hand, with its socialized medicine system, took first place in almost every category, and the Swiss came in second. That’s almost enough to drive a proud American into deep despair, and as the report bluntly states, “The claim that the United States has ‘the best health care system in the world’ is clearly not true.” To add insult to injury, ours is also clearly the most expensive system in the world, and no, that doesn’t count as being #1 at something.
The authors of the Commonwealth Fund report are graciously doing their best to cheer us up and give us hope, by pointing out that “[s]ince the data in this study were collected, the U.S. has made significant strides adopting health information technology and undertaking payment and delivery system reforms spurred by the Affordable Care Act”. It may be okay to hope that the next Mirror, Mirror report will show us moving up a couple of notches, instead of continuing to be the laughingstock of all developed nations. So how do we go about improving our scores? Adopting health IT is obviously the first thing, and then we need to “encourage more affordable access and more efficient organization and delivery of health care, and allow investment in preventive and population health measures”. Sounds like a plan.
Except, one thing in that picture looks very peculiar. The United Kingdom, the poster child of frugal and immaculate perfection, scored almost as badly as we did in the only domain that can be regarded as an outcome: health. The bon vivant French people, with the worst access to care and horrific patient-centeredness, seem to enjoy the healthiest lives of all (and Jefferson is finally vindicated). Looking further, it seems that Sweden, where care is of abysmal quality, but most equitable and efficient, came in second in healthy lives and third overall. Can something even be simultaneously of low quality and very efficient? Can a country have dangerous, ineffective care, like Norway, and still be ranked comfortably in the middle of the pack? For inquiring minds of the confused variety, the study provides more granular data points to peruse, so let’s dive in.
Over at the Incidental Economist blog, Dr. Aaron Carroll is warning us to stay away from “Zombie arguments defending the US healthcare system”. Fair enough. Let’s not worry about the U.S. system, or any system, and let’s even hold back on questioning the much too flawless results of this or that system. Let’s just look at the data. There are five major domains in the study: quality, access, efficiency, equity and healthy lives. Without splitting hairs, healthy lives could be considered an outcome of efforts in all the other domains, but of course it shouldn’t be, since the study authors themselves acknowledge that the health care system is “just one of many factors, including social and economic well-being, that influence the health of a nation”. Completely agree. In which case, it is unclear to me why healthy lives measures are factored into the rankings of health care systems, straight up with no weighting or adjustments.
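To see why that matters, here is a minimal sketch of what folding an outcome domain straight into an unweighted composite does to an overall ranking. Every number below is hypothetical and purely illustrative; this is not the Commonwealth Fund’s actual scoring model, just the arithmetic of an unweighted average of domain ranks.

```python
# Toy illustration with hypothetical ranks (1 = best), NOT the study's data:
# folding "healthy lives" straight into an unweighted mean lets a non-system
# outcome move a country up or down the overall table.

domain_ranks = {
    # quality, access, efficiency, equity, healthy lives
    "Country A": [3, 3, 3, 3, 10],
    "Country B": [4, 4, 4, 4, 1],
}

def composite(ranks, include_outcome=True):
    used = ranks if include_outcome else ranks[:-1]
    return sum(used) / len(used)  # simple unweighted mean of domain ranks

for name, ranks in domain_ranks.items():
    print(name,
          "with healthy lives:", composite(ranks),
          "| without:", composite(ranks, include_outcome=False))
# Country A: 4.4 with vs 3.0 without; Country B: 3.4 with vs 4.0 without --
# the ordering flips depending on whether the outcome domain is averaged in.
```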
Let’s dig in a little deeper. The quality domain is divided into four subdomains: effective care, safe care, coordinated care and patient-centeredness. Without debating this particular definition of quality, let’s look at how effectiveness is measured on two axes, preventive care and chronic care, each one assessed based on a series of data points. So for example, the first three prevention measures are: 1) the ease of printing out lists of patients due for preventive care; 2) patients who received preventive care reminders; and 3) patients routinely sent computerized reminders for preventive and routine care. I would call this triple dipping, because the only measure that actually counts here is whether patients received reminders, and how they responded to them, the latter of which was not measured at all. Whether it is easy to “print out” lists, or whether people are bombarded with computer calls that nobody picks up the phone for, is irrelevant.
The U.S. was ranked 3rd for patients receiving reminders and 7th for the other two useless measures. The UK ranked 1st for the useless measures and 5th for the mildly pertinent one. For the remaining preventive measures, dealing with lifestyle advice provided by physicians to their patients, the U.S. ranked 1st and 2nd overall. To assess effectiveness, I would have expected perhaps a ratio of reminders sent to reminders acted upon by patients, or at least to reminders received, instead of an average of those two scores plus some strange measure about printing lists out on paper.
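For what it’s worth, here is a back-of-the-envelope sketch of that difference, with made-up numbers of my own (none of them from the study): averaging three overlapping process measures lets the list-printing item prop up the score, while a simple ratio of reminders received to reminders sent captures the thing that actually matters.

```python
# Hypothetical survey results for one country (illustration only).
easy_to_print_lists = 0.95          # practices able to print lists of patients due for care
patients_sent_reminders = 0.80      # patients routinely sent computerized reminders
patients_received_reminders = 0.30  # patients who report actually receiving a reminder

# The "triple dipping" version: three correlated process measures averaged
# as if they were independent signals of effective preventive care.
averaged_score = (easy_to_print_lists
                  + patients_sent_reminders
                  + patients_received_reminders) / 3

# The measure I would have expected instead: how much of the reminder
# effort actually reaches patients.
reach_ratio = patients_received_reminders / patients_sent_reminders

print(f"averaged score: {averaged_score:.2f}")  # ~0.68, flattered by the list-printing item
print(f"reach ratio:    {reach_ratio:.2f}")     # ~0.38 of reminders sent actually land
```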
The chronic care portion of the effectiveness subdomain illustrates yet another logical flaw in the study. Similar to the preventive care measures, here too the U.S. scores decently on actual chronic care activities, and poorly on ease of producing lists. But the bigger issue is the one measure evaluating cost barriers to adherence, and as expected the U.S. scored poorly on affordability, which is what this measure is all about. It may be fine to blast the U.S. system for being expensive, but to say that we are paying too much for a bad system, while assessing badness based on the system being expensive, is circular logic that should have no place in serious scientific conversation.
Another rather perplexing methodological flaw is the many repetitive questions with conflicting answers that were nevertheless dutifully added, as is, to the averages. Questions of this type are routinely included in surveys to validate answers, but are not meant to be independent data points. For example, how is it possible to have a bad score on primary care docs receiving discharge summaries in general, coupled with a good score on receiving discharge summaries in a timely manner? You can keep on digging if you are so inclined, but my general impression is that the data in the analyzed surveys are neither sufficient nor pertinent enough to allow for meaningful rankings of national health care systems. Let’s also note that all the data are derived from surveys. Even if surveys could be classified as objective observation, which they cannot, how can we infer causality from a narrow observational study?
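To be clear about how such paired items are normally handled, here is a minimal sketch, with hypothetical field names, of using the discharge summary pair as a consistency check rather than as two independent terms in an average.

```python
# Hypothetical paired survey items (illustration only): discharge summaries
# received at all vs. received in a timely manner. The timely share cannot
# logically exceed the overall share, so the pair is a validation check,
# not two separate data points.
response = {"summary_received": 0.45, "summary_received_timely": 0.70}

def consistent(r):
    # Flag logically impossible answer pairs instead of averaging them.
    return r["summary_received_timely"] <= r["summary_received"]

score = response["summary_received"] if consistent(response) else None
print("usable" if score is not None else "inconsistent pair, excluded from score")
```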
And here is my biggest problem with these rankings and the subsequent conclusions drawn by the Commonwealth Fund, i.e., more computerization, more preventive care, more population management, or in other words more corporate and data-driven health care. The study authors are basing their findings on a subset of indicators subjectively selected by the survey designers. What about the heaps and troves of other indicators that may also be pertinent to these findings? For instance, the average primary care panel size in the UK is a little over half the average panel in the U.S., and most primary care there is delivered in small private practices, by primary care physicians who are better paid than UK specialists. Are those things pertinent to the UK’s stellar performance on all study domains? I don’t know, and neither does anybody else until a proper study is conducted.
Looking under the lamppost for lost keys is not a scientific method of inquiry, and when we can’t find said keys, it is not proper to blame the low wattage of the lamp. There is nothing in those surveys supporting the conclusions and recommendations put forward by this report, other than faith and preconceived opinions, which were neither validated nor disproved by survey respondents. There is no indication that the U.S. is an outlier in health information technology, preventive care or population management, and there is zero indication that these are the most salient factors in the performance of a health care system.
Does the fact that the U.S. is the only country in this cohort where poor people are segregated away into special insurance plans that pay doctors and hospitals below cost have anything to do with our poor numbers for equity and access? Does the for-profit nature of our system affect the exorbitance of our costs, and hence all study domains? Are these things perhaps a tad more important than the ease of generating and printing out lists of patients? We may never know.
It is proper to observe, as the Commonwealth study does, that all other countries have universal health care, while the U.S. does not. It may even be logical to assume that such a huge systemic difference must in some ways adversely affect our outcomes. But it is nothing short of perplexing to conclude that the remedy consists of mixing a tiny bit of our overpriced (and yes, best in the world) medicine with lots of corporate-run analytic dashboards, followed by universal administration of this homeopathic concoction to innocent people.