“This means that when the data say ‘bad surgeon,’ the surgeon might, in fact, be a Top Gun — a technically gifted Morbidity Hunter — the last hope of the poor and sick.”
Having been an STS coordinator for five years, responsible for essentially the entire process (data collection, interpreting and applying the data-field definitions, entering data into whatever third-party vendor's system my institution currently uses, and ultimately submitting a data harvest to the STS, the Society of Thoracic Surgeons' gold standard for public and peer review), I have come to the conclusion that it is mostly an illusion. And that illusion makes a lot of people a lot of money.
STS has essentially become the prerequisite standard by which a cardiac program is measured; poor numbers mean reduced referrals and reimbursement, a sort of scarlet letter for a hospital that fails to achieve at least a two-star rating (three stars is the maximum, which basically implies you are in the top half of one percent). A one-star rating will literally kill a program.
STS makes its money by charging participating hospitals, as well as the vendors those hospitals use for the software that uploads to the STS national database. Hospitals are effectively forced to comply and participate in order to be taken seriously, and vendors pay (and charge) a premium to stay in the mix. The bottom line? Hospitals end up paying a huge annual fee to STS, a hefty fee to whatever vendor delivers the data, and the salaries of the data management team that enters and harvests the data. That is expensive, to say the least.
Compounding this ever-growing behemoth, the number of data fields continues to multiply like rabbits, and most of the new ones are clearly not measurements of performance quality but unabashed data mining. There is little rhyme or reason to the newly added or deleted fields, at least to the average layperson; presumably some think tank, somewhere, came up with ideas such as the “5-meter walking test” for cardiac patients, a sort of exclusionary check mark against questionable or unnecessary heart surgery. I can tell you that no heart surgeon I know has ever embraced that particular diagnostic gem in routine practice.
All of this brings me to the following article, forwarded to me by Dr. Marnix Verhofste (thank you 🙂), about practicing physicians who have become wary, and subsequently more cautious, in their patient selection, perhaps avoiding higher-risk patients in order not to take a “hit” from some sort of grading or regulatory body.
Nobody wants finger-pointing.
And finger-pointing tends to lead to professional character assassination, when one surgeon (or group) feels that another surgeon (or group) is padding their numbers, misrepresenting their predicted versus observed morbidity and mortality, or simply avoiding the more complex cases.
Ultimately, these performance benchmarks drive up the cost of medicine and encourage defensive practice, where a surgeon may opt out of a procedure altogether or order a battery of unnecessary tests (the dreaded “standard of care” mantra makes a cameo appearance here), until both the caregiver and the patient become casualties of greed and indecision.
Thank you again Dr. Verhofste for the insightful read below 🙂
When performance report cards became available, cardiac surgeons in New York and Pennsylvania avoided high-risk patients. Could something similar happen, nationally, after the forthcoming revolution in transparency inspired by ProPublica’s data release?
Cherry Picker lives on the Upper East Side of New York. His patients give him great reviews on Yelp. His patients read every comment on Yelp before making any decision. Cherry Picker has a beautiful family. When he smiles, light refracts from his shiny teeth.
Cherry regularly appears on TV. He writes for the sleek, metrosexual publication, FHM. Cherry specializes in knee injuries in weekend warriors. His patients often call him from the ski slopes in Colorado, Whistler, and Zermatt. Cherry is good at his craft. But his patients are even better at their craft — post-operative recovery. Cherry doesn’t actively seek such patients. His patients are selected for him by his zip code, reputation, long waiting list and Yelp.
Morbidity Hunter’s real name is Harjinder Singh. He migrated from Punjab and works in a safety net hospital in North Philadelphia. Singh wanted to work in Beverly Hills, but to convert his J1 visa to a green card, he had to work in an area of need. Once he started working, he liked his job. His daughters liked their school, and his wife liked the house they bought. Singh doesn’t have shiny teeth. He hasn’t appeared on TV, although his daughters tease that he could play Sonny from The Best Exotic Marigold Hotel.
Singh’s colleagues named him Morbidity Hunter because he operates regardless of how sick his patients are. He never says no. Nearly all his patients are obese and diabetic. The school of public health sends students to shadow him to learn about polypharmacy. The hospital went on a spree of hiring hospitalists when Singh started.
His patients, straddling the Federal Poverty Level, don’t rate him on Yelp. His patients don’t use Yelp. Even if they were informed consumers they would have to choose Singh, because there are very few orthopedic surgeons willing to operate on them in that zip code. His patients haven’t heard of Cherry Picker. They don’t ski, ballroom dance or run half marathons.
Singh, too, is good at his craft. Technically excellent, to be precise. You wouldn’t know that from looking at the rates of readmission, infection, and deep vein thrombosis in his patients. But the staff in the operating room know that, as do his colleagues, whom he has often helped out in tough operations. Even Cherry admires him.
Singh is not in for the money. He doesn’t make as much money as Cherry, but makes enough. He doesn’t operate for glory. He operates for professional pride — an ethereal concept that eludes some health economists.
It’s hard to zap the morale of this sturdy lad from the Punjab. But the data transparency movement achieved that. He always knew that operating on the sickest, poorest and most disenfranchised section of society was not going to be lucrative. But he never knew he was going to be made the captain of their ship — he was happy to captain the placement of their total hip — but what happened before or after they entered the operating room was not his fault, he felt.
People began to call Singh an incompetent surgeon. He objected, but he could not understand the logic behind the numbers which were incriminating him. His complication rates were the highest in Philadelphia. Numbers don’t lie, supposedly. This was too much for him to bear. He didn’t mind losing the pitiful bonuses that CMS was withholding from him, but the reason broke his heart: his poor quality.
Singh was puzzled by people who claimed to lose sleep over the poor. The chasm between their sentimentality and actions baffled him. Punjab began to make more sense than Philadelphia.
But then Cherry invited Singh to join his practice in New York. Cherry promised Singh that he could operate on technically challenging patients. Grudgingly, Singh accepted the offer, which made his wife very excited about shopping for Indian food in Queens. She insisted, though, that Singh had to see a dentist first.
Homo sapiens have always sought redemption. Today it is through data. Numbers have replaced Yahweh and Indra. But, just like the old gods were, numbers can be moody, arbitrary and, occasionally, downright unfair. Numbers are a human construct, after all.
My favorite is Simpson’s paradox, where the conclusion drawn from the aggregated data is precisely the opposite of what the stratified data support: for example, a study appearing to show the superiority of an inferior treatment, and vice versa.
You can imagine the God of the Old Testament yelling as Abraham was about to sacrifice Isaac, “Stop! That’s Simpson’s paradox. Now I know you fear God. Put the knife down, slowly. Isaac, promise daddy you will learn your times tables.”
The data release by ProPublica is a reservoir of Simpson’s paradox. This means that when the data say “bad surgeon,” the surgeon might, in fact, be a Top Gun — a technically gifted Morbidity Hunter — the last hope of the poor and sick.
Aren’t you intrigued and perturbed by this paradox? This means that data may not be just telling half-truths, but flat out lying. I thought we were done with burning innocents at Salem.
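To make the paradox concrete, here is a toy calculation with invented case counts (nothing below comes from the ProPublica data; the surgeons and numbers are purely illustrative): a surgeon can have the lower complication rate in every risk stratum and still have the higher rate overall, simply because of case mix.

```python
# Toy illustration of Simpson's paradox with invented case counts.
# "Singh" beats "Cherry" within each risk stratum, yet looks worse
# overall because his caseload is dominated by high-risk patients.

cases = {
    # surgeon: {stratum: (complications, procedures)}
    "Singh":  {"high_risk": (80, 800), "low_risk": (2, 200)},
    "Cherry": {"high_risk": (12, 100), "low_risk": (18, 900)},
}

def rate(complications, procedures):
    return complications / procedures

for surgeon, strata in cases.items():
    total_c = sum(c for c, _ in strata.values())
    total_n = sum(n for _, n in strata.values())
    per_stratum = {s: rate(*cn) for s, cn in strata.items()}
    print(surgeon, per_stratum, "overall:", rate(total_c, total_n))

# Singh's stratum rates (0.10 high-risk, 0.01 low-risk) are both lower
# than Cherry's (0.12 and 0.02), yet Singh's aggregate rate (0.082) is
# far worse than Cherry's (0.030). The aggregate "report card" ranks
# the better surgeon last.
```

The aggregate number reverses the stratified verdict, which is exactly the trap a raw complication rate sets for a surgeon who takes the sickest patients.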
This graph made by John Alan Tucker, PhD (@JohnTuckerPhD), an analyst, is instructive. Let’s assume that a complication rate of 2.5 percent is the national benchmark of surgeons. Let’s say that a surgeon, Good Enough, MD, in reality meets the benchmark. The probability that the data will show a complication rate of 4 percent in Good Enough — that is 60 percent over the benchmark — is nearly 40 percent, if we sample his last 35 procedures. Forty percent! Would you accept that degree of uncertainty for a new statin in the market? Are surgeons more expendable than Lipitor?
Chance, it seems, favors the high-volume surgeon. But don’t get too excited. If Good Enough’s last 200 procedures are recorded, the chances that the data will show a 4 percent complication rate, when his true rate is actually 2.5 percent, are 15 percent. This is not as high as 40 percent, but it is hardly reassuring.
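Tucker's point can be checked with a back-of-the-envelope binomial model. This is only a sketch: it treats each procedure as an independent event with a true complication probability of 2.5 percent, and its probabilities will not exactly match the figure's (which depends on Tucker's own criteria), but the qualitative conclusion survives.

```python
from math import ceil, comb

def prob_rate_at_least(n, p, threshold):
    """P(observed complication rate >= threshold) over n procedures,
    assuming each procedure independently has true complication prob p."""
    k_min = ceil(threshold * n)
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

true_rate = 0.025   # Good Enough, MD really does meet the benchmark
flagged = 0.04      # ...but the report card shows 4 percent or worse

for n in (35, 200):
    print(n, round(prob_rate_at_least(n, true_rate, flagged), 3))

# Under this simple model, a surgeon who truly meets the benchmark is
# falsely "flagged" at 4 percent or worse roughly 22% of the time over
# 35 procedures, and roughly 13% of the time over 200. Noise shrinks
# with volume, but slowly.
```

Even a crude model like this shows that small-sample report cards will misclassify a substantial fraction of perfectly average surgeons.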
This figure, in which Tucker included the much-touted risk adjustment, is no more reassuring. You can see that the complication rates of hip surgeons in California fall within the 95 percent confidence interval. This means that the difference between the nearly worst and nearly best hip surgeon could be the roll of a die. Of course, that’s not true. The technical skills of surgeons lie on a spectrum. The numbers do not replicate the spectrum.
This highlights a curious ethical issue. If a patient has a right to know about a surgeon’s performance, the point estimate, should they not also be informed about the methodological limitations of that measurement, the confounders, the confidence interval — which is as large as the elephant in the room?
No, some will say, it’s information overload. But limiting the information we give patients just because we think they can’t handle the information has a name. What’s it called again? Ah yes, paternalism.
We are at the brink of a revolution in accountability in medicine. We are at the brink of some revolution, or other, a dozen times a day. The future of the transparency movement is bright. The future for patients who need Morbidity Hunter, MD is not so good. Harjinder from Punjab won’t look after the poorest and sickest — even for a green card — if you call him a bad surgeon.
Saurabh Jha is a radiologist and can be reached on Twitter @RogueRad. This article originally appeared in the Health Care Blog.
Image credit: Shutterstock.com