Such a situation has now been revealed to be true for the US health care debate. Specifically, Wonderboy and his party's members have made a point of claiming, to the point of exhaustion, that the US health care system is ranked "37th in the world." This, we are told, is the reason for redesigning the whole mess. When, that is, the administration isn't instead claiming that excessive health care costs, as a burden on business, are the leading reason to redesign the health care system.
Last Wednesday's Wall Street Journal carried a fascinating piece by Carl Bialik in his column, The Numbers Guy, entitled "Ill-Conceived Ranking Makes for Unhealthy Debate."
It should be required reading for every Congressional and administration member, as well as any voter interested in the health care debate. Because Mr. Bialik's piece is so important and, this being the WSJ, won't be accessible to non-paying readers, I've reposted substantial portions of his column in order to accurately present the big picture of the WHO data.
Before launching into Mr. Bialik's piece, let me provide a general orientation to the topic. Having been a quantitative type from my early years in undergraduate business school, I am fairly well-versed both in conducting quantitative research and in critiquing the work of others. Just because someone cites a study, that doesn't mean the study has any bearing on the topic at hand.
Research methodology, comprising definitions of terms, measurement approaches, and analytical techniques, among other components, has many dimensions, each of which may be evaluated either for error or for the context it provides in interpreting results and conclusions.
Mr. Bialik very capably reveals many aspects of the WHO report which militate against its use as the Democrats and liberals have been applying it in the current health care debate.
Mr. Bialik begins his article,
"During the health-care debate, one damning statistic keeps popping up in newspaper columns and letters, on cable television and in politicians' statements: The U.S. ranks 37th in the world in health care. The trouble is, the ranking is dated and flawed, and has contributed to misconceptions about the quality of the U.S. medical system.
Among all the numbers bandied about in the health-care debate, this ranking stands out as particularly misleading. It is based on a report released nearly a decade ago by the World Health Organization and relies on statistics that are even older and incomplete.
Few people who cite the ranking are aware that some public-health officials were skeptical of the report from the outset. The ranking was faulted because it judges health-care systems for problems -- cultural, behavioral, economic -- that aren't controlled by health care."
So we learn at the outset that even those in the medical field don't take the vaunted WHO report at its face value.
"It's a very notorious ranking," says Mark Pearson, head of health for the Organization for Economic Cooperation and Development, the 30-member, Paris-based organization of the world's largest economies. "Health analysts don't like to talk about it in polite company. It's one of those things that we wish would go away."
"More recent efforts to rank national health systems have been inconclusive. On measures such as child mortality and life expectancy, the U.S. has slipped since the 2000 rankings. But some researchers say that factors beyond the control of the health-care system are to blame, such as dietary habits. Studies that have attempted to exclude these factors from the equation don't agree on whether the U.S. system looks better or worse."
Here we see a failing on the very first element of research design. One of my graduate school mentors, Wharton Marketing Professor Jerry Wind, taught me that if I could not construct the data analysis plan and tables prior to going into the field with my research instrument (questionnaire), then I wasn't ready to field the instrument. A good researcher already has the data analysis plan detailed, with each individual analysis specified, just from knowing what the questions and form of the responses will be.
Foremost among these dimensions are the so-called objective variables, i.e., the outcome measures for which causal or related variables are being sought and whose effects are being measured.
On that topic, controlling for factors which may affect the outcome but aren't strictly part of the study is critical. In the area of medical system effectiveness, common sense dictates that you need to control for incoming patients' uncontrollable initial conditions. This would include lifestyle variables: diet, exercise, even genetics. Apparently, none of this was done in the WHO study.
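To make the point about controlling for initial conditions concrete, here is a minimal sketch of covariate adjustment. All the numbers and country data are invented for illustration; this is a crude residualization, not anything the WHO actually did. The idea: regress the outcome on lifestyle covariates the health system doesn't control, then compare countries on what's left over.

```python
import numpy as np

# Hypothetical data: life expectancy for six countries, plus two
# lifestyle covariates (smoking rate, obesity rate) that the health
# system does not control. All numbers are invented for illustration.
life_exp = np.array([78.0, 80.5, 79.2, 81.0, 77.5, 82.1])
covariates = np.array([
    [0.25, 0.30],  # smoking rate, obesity rate
    [0.20, 0.15],
    [0.30, 0.28],
    [0.18, 0.12],
    [0.35, 0.33],
    [0.15, 0.10],
])

def adjusted_scores(outcome, X):
    """Residualize the outcome on the covariates: what remains is the
    part of the outcome NOT explained by lifestyle factors, a crude
    proxy for health-system performance."""
    X1 = np.column_stack([np.ones(len(outcome)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X1, outcome, rcond=None)
    return outcome - X1 @ beta  # residuals

raw_rank = np.argsort(-life_exp)                          # best raw outcome first
adj_rank = np.argsort(-adjusted_scores(life_exp, covariates))
print("rank by raw life expectancy:   ", raw_rank)
print("rank after covariate adjustment:", adj_rank)
```

The two orderings can differ: a country with mediocre raw life expectancy may look better once its unfavorable lifestyle profile is accounted for, which is exactly why comparing unadjusted outcomes across countries says little about the health systems themselves.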
"The WHO ranking was ambitious in its scope, grading each nation's health care on five factors. Two of these were relatively uncontroversial: health level, which is roughly the average healthy lifespan of a nation's residents; and responsiveness, which is a sort of customer-service rating encompassing factors such as the system's speed, choice and quality of amenities. The other three measure inequality in health-care outcomes; responsiveness; and individual spending.
These last three measures struck some analysts as problematic, because a country with unhealthy people could rank above a healthier one where there was a bigger gap between healthy and unhealthy people. It is certainly possible that spreading health care as evenly as possible makes a society healthier, but the rankings struck some health-care researchers as assuming that, rather than demonstrating it."
Here, we see the problems arising from what I just mentioned. Health-care outcomes seem a reasonable measure, but "individual spending"? This latter item would absolutely be conditioned on the initial health situation of individuals. Thus, without some control across the populations measured, this measure is meaningless.
Yet health care spending is one of the two major clubs with which liberals beat the US health care system as poorly performing.
"An even bigger problem was shared by all five of these factors: The underlying data about each nation generally weren't available. So WHO researchers calculated the relationship between those factors and other, available numbers, such as literacy rates and income inequality. Such measures, they argued, were linked closely to health in those countries where fuller health data were available. Even though there was no way to be sure that link held in other countries, they used these literacy and income data to estimate health performance.
Philip Musgrove, the editor-in-chief of the WHO report that accompanied the rankings, calls the figures that resulted from this step "so many made-up numbers," and the result a "nonsense ranking." Dr. Musgrove, an economist who is now deputy editor of the journal Health Affairs, says he was hired to edit the report's text but didn't fully understand the methodology until after the report was released. After he left the WHO, he wrote an article in 2003 for the medical journal Lancet criticizing the rankings as "meaningless."
The objects of his criticism, including Christopher Murray, who oversaw the ranking for the WHO, responded in a letter to the Lancet arguing that WHO "has an obligation to provide the best available evidence in a timely manner to Member States and the scientific community." It also credited the report with achieving its "original intent" of stimulating debate and focus on health systems."
This is stunning! First, we learn that many of the desired data items were simply absent. So rather than reduce the scope of the study, or redesign it to use available data, the researchers simply grabbed at presumably correlated measures in some countries, then applied those correlative associations to completely different societies and economies. This technique essentially assumes the result: one uses assumed relationships to churn out "results," which, of course, come out as expected, because they were created from an a priori formula.
The results are truly useless, as Dr. Musgrove contends. Further, as Mr. Murray states, the study was only supposed to stimulate "debate and focus on health systems." This would mean it wasn't intended to be conclusive, or used as conclusive evidence about anything actionable. Only to spur debate.
In effect, the WHO designed a flawed study which was only meant, anyway, to provide some rough directional findings for debate. Not to conclude that this or that country was demonstrably better or worse on health care system performance. Precisely what US liberals wishing to nationalize health care are now doing.
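The imputation step criticized above can be sketched in a few lines. Everything here is invented for illustration: a relationship between a proxy (literacy) and a health measure is fitted where both are observed, then used to "fill in" the health measure for countries where it was never measured at all.

```python
import numpy as np

# Countries where both the proxy (literacy) and the health measure
# were actually observed. All numbers are invented for illustration.
literacy_obs = np.array([0.99, 0.95, 0.90, 0.85, 0.80])
health_obs   = np.array([0.82, 0.78, 0.74, 0.70, 0.66])

# Fit the assumed relationship: health = a * literacy + b
a, b = np.polyfit(literacy_obs, health_obs, 1)

# Countries with NO observed health data -- only literacy is known.
literacy_missing = np.array([0.75, 0.70, 0.60])
health_imputed = a * literacy_missing + b

print("imputed health scores:", health_imputed)
```

Note what this produces: the "health scores" for the missing countries are a deterministic function of the proxy, so by construction they correlate perfectly with literacy. Any ranking built on them is restating the fitted assumption, not measuring anything new, which is the sense in which Dr. Musgrove called them "so many made-up numbers."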
"Prof. Murray, now director of the Institute for Health Metrics and Evaluation at the University of Washington, Seattle, says that "the biggest problem was just data" -- or the lack thereof, in many cases. He says the rankings are now "very old," and acknowledges they contained a lot of uncertainty. His institute is seeking to produce its own rankings in the next three years. The data limitations hampering earlier work "are why groups like ours are so focused on trying to get rankings better."
A WHO spokesman says the organization has no plans to update the rankings, and adds, "We would not consider it current."
So the WHO itself concedes that this 10-year-old study is no longer current. Does anyone citing it seriously believe otherwise?
More importantly, do you rip up and redesign one of the world's more complex and effective health care systems, some 15% of the US economy, on the unreliable conclusions of a decade-old study not even intended to provide firm conclusions for any actions?
"And yet many people apparently do. The 37th place ranking is often cited in today's overhaul debate, even though, in some ways, the U.S. actually ranked a lot higher. Specifically, it placed 15th overall, based on its performance in the five criteria. But for the most widely publicized form of its rankings, the WHO took the additional step of adjusting for national health expenditures per capita, to calculate each country's health-care bang for its bucks. Because the U.S. ranked first in spending, that adjustment pushed its ranking down to 37th. Dominica, Costa Rica and Morocco ranked 42nd, 45th and 94th before adjusting for spending levels, compared to the U.S.'s No. 15 ranking. After adjustment, all three countries ranked higher than the U.S.
Still, people often claim that the 37th-place ranking refers to quality or outcomes. High spending rates pushed the ranking down but didn't degrade the quality of care. Among those who have recently failed to make that distinction in published comments are Colorado Rep. Diana DeGette; Iowa Democratic Sen. Tom Harkin; and Margaret E. O'Kane, president of the National Committee for Quality Assurance, an advocacy group.
Representatives for Ms. DeGette and Mr. Harkin didn't respond to requests for comment. A spokeswoman for the National Committee for Quality Assurance said, "WHO is a respected organization. ...We have no reason to believe it is inaccurate, and we would never knowingly misrepresent or misuse another organization's data."
Here we see that many US politicians don't even know the details of the WHO study they cite. This is, as I noted in my earlier comments, perhaps the worst way in which research studies are used. Yet that is precisely what liberal US Congressional members are doing with this flawed WHO study of a decade ago.
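The spending adjustment Mr. Bialik describes, which dropped the US from 15th to 37th, is easy to illustrate. The sketch below uses a crude division of attainment by per-capita spending; the WHO's actual efficiency method was more elaborate, and all the figures here are invented, not the WHO's.

```python
# Hypothetical "bang for the buck" adjustment: divide an attainment
# score by per-capita spending. A high-spending, high-attainment
# country can fall below a low-spending, mid-attainment one.
# All numbers are invented; they are not the WHO's actual figures.
countries = {
    # name: (attainment score, per-capita spending in dollars)
    "A (high spend, high attainment)": (93.0, 4500.0),
    "B (low spend, mid attainment)":   (75.0, 600.0),
    "C (mid spend, mid attainment)":   (80.0, 2000.0),
}

def rank(scores):
    """Return country names ordered best-first by score."""
    return sorted(scores, key=scores.get, reverse=True)

raw        = {name: att for name, (att, _) in countries.items()}
per_dollar = {name: att / spend for name, (att, spend) in countries.items()}

print("raw ranking:        ", rank(raw))
print("adjusted for spend: ", rank(per_dollar))
```

Country A leads on raw attainment yet finishes last once spending is divided in, mirroring how the US could rank 15th on performance but 37th on the widely quoted, spending-adjusted list. Quoting the adjusted number as if it measured quality of care conflates two very different things.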
"The flawed WHO report shouldn't obscure that the U.S. is lagging its peers in some major barometers of public health. For instance, the U.S. slipped from 18th to 24th in male life expectancy from 2000 to 2009, according to the United Nations, and from 28th to 35th in female life expectancy. Its rankings in preventing male and female under-5 mortality also fell, and placed in the 30s.
But even such analyses, more limited in scope than the WHO's effort, face similar problems: How to differentiate between the quality of the medical system and other factors, such as diet, exercise and violent-crime rates.
Some think that if the U.S. health-care system isn't responsible for troubling outcomes, trying to fix it doesn't provide the best return on investment.
"We might get more bang for the buck by setting aside some of our health-care money to support novel approaches to improve nutrition, education, exercise or public safety," says Alan Garber, an economist and professor of medicine at Stanford University. "Not every health problem has a medical solution." Nor can everything be ranked -- especially health-care systems. "I think it's a fool's errand," says Dr. Musgrove."
And here we have the "Sunday punch," as it were, from misusing the WHO study's report.
Let's take the two negative health trends in US male and female life expectancy cited above.
Any sensible person would first ask,
"Which countries moved up in the rankings, which are above the US, and what is different in those societies and countries from the US?"
You wouldn't just assume the difference was due to the countries' health care systems. Diet, lifestyles, genetics, or other cultural differences could all figure more prominently than health care systems.
There is so much detailed commentary one could make on the WHO study's flaws that it's impossible to cover them all from the outside. But it's a pretty good bet that, to study the various aspects of comparative health care systems' effectiveness, a collection of smaller studies focused on specific topics in more controlled settings would have produced far more valuable, credible and actionable findings.
From what I read in Mr. Bialik's excellent piece, I have no confidence in any part of the WHO report on health data from a decade ago. It should scare you, as it does me, that our politicians are taking this report at all seriously.