The reader will remember the stirring account of my recent neurosurgical adventure. The initial part of the adventure was essentially a misdiagnosis by my ophthalmologist. Despite the facts that, one, my eyesight was clearly deteriorating, and, two, the very small cataracts in each eye were unchanged, my ophthalmologist nonetheless sent me home with literature on cataract removal and instructions to tell him when I was ready to have them removed. I struggled between my feeling, on the one hand, that something was more profoundly wrong than that, and my respect for a doctor’s opinion on the other. Silly me. Eventually, as I exerted pressure for a better diagnosis, my pituitary macroadenoma impinging on my optic chiasm was revealed as the cause of my progressive blindness.
What I didn’t anticipate was the response of fellow physicians to this misdiagnosis. One friend, plastic surgeon Steve Daane, was outraged: “You were misdiagnosed!” My pediatrician friend Arnie Blustein noted that this is a classic case of the kind of mistake in doctor thinking described in Jerry Groopman’s book, How Doctors Think. As Arnie said, once they get an idea what the diagnosis is, they twist logic to come to that conclusion and just hang on. In my case, the cataracts had nothing to do with my decrease in vision, and if the ophthalmologist had used his head and gotten a simple visual fields test, he would have discovered my pituitary tumor impinging on the optic nerves. But he didn’t. Funny, I had read the book, but it took Arnie’s observation for me to connect my plight to the cases Groopman recounts. I guess I’m not as smart as I sometimes think I am; I’ll have to reread it.
But what’s even more interesting is the number of doctors who recounted similar incidents in their own medical care! Here is the list:
• One pediatric colleague had had GI symptoms for some months, and his doctor said, “You’re just constipated.” My colleague didn’t think so, but went along with the suggested remedies. He subsequently arrived in the ER with fecal obstruction. The diagnosis in the ER was constipation. My colleague objected, insisted on an MRI and pulled strings to get it, and an obstructing colon cancer was found. (He is now fine, having finally gone to UCSF for treatment.)
• Another pediatric colleague, a runner, was recovering from back surgery when he took a turn for the worse. The diagnosis was that he hadn’t rested enough and was “pushing it.” He objected to the diagnosis, insisted on blood tests, and he was ultimately diagnosed with osteomyelitis at the surgical site.
• Another colleague had had nagging pain in the right buttock and leg. He was advised to get physiotherapy. He insisted on an MRI, got his internist to sign off on it, and a five-pound tumor was found in his buttock.
• Just yesterday, a neonatologist colleague told me about his searing right upper quadrant abdominal pain, diagnosed by a GI specialist as a clear case of a spastic gall bladder, which should be removed at once. My colleague suggested an MRI or CT scan, which the specialist refused to order because the case was so clear-cut. My friend went back to his internist to get his wish for a study fulfilled, and the culprit turned out to be a ureteral stone just above the bladder, not a gall bladder case at all.
What to make of these cases? An anecdote is just that, an anecdote, and even multiple anecdotes do not rise to the status of data. But while they don’t prove anything, I think these anecdotes really do illuminate a probable truth: that diagnostic medicine as commonly practiced has a very high rate of error. A sobering thought, but probably true. It’s what my father told me long ago; as a community-based neurosurgeon he saw a lot of how medicine is practiced, and it didn’t inspire confidence. His counsel to me was to be very careful whom I chose to be my doctor.
All these anecdotes concern doctors as patients. It’s possible that doctors get worse care than ordinary patients do. Treating physicians might be more nervous with a doctor as a patient, and might not treat doctor-patients as they do other patients: they might leave things out, try extra hard to appear confident, and so on. But I think it is more probable that doctors are simply more alert to physician mistakes than lay people are. We just know more about what goes on, just as a general can judge a bad battle plan, and bad commander reactions to enemy actions, better than an architect can. I think a lot of this bad diagnostic medicine is practiced all the time and everywhere, and doctors get away with it.
From another point of view, it's amazing that such a poor diagnostic landscape exists. After all, there is a very active movement in medical care to measure quality of care. I won’t recount the history here, because it is long and complex. But the amazing thing is this – none of these adventures in misdiagnosis would be caught and cited by any current effort to measure quality of care!!! (That’s right, three exclamation points. After all, what could be more important than these misdiagnoses, and yet they are off the radar screen, by design! OK, now I’m down to one exclamation point.)
In outpatient medicine, it is very hard to measure quality of care, or even to define it. The most recent quality movement has been P4P, or Pay for Performance. P4P uses data from billings – the ease of data acquisition is P4P’s biggest selling point – and assesses to what extent regular, stereotypical procedures have been carried out. For instance, immunizations: there is a regular schedule for giving immunizations to children, and each shot should be billed for. Thus, one quality indicator is, what percentage of patients in a practice have gotten all the required shots by age 2? Or, another example: among patients hospitalized for a heart attack, what percentage had a beta-blocker prescribed by the time of discharge? In each case the ideal percentage is 100%, so the rating is unambiguous. These are stereotypic procedures that should be done for every patient of that age or with that diagnosis, every time.
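To make this concrete, here is a minimal sketch, in Python, of how a billing-based indicator like the immunization measure gets computed. The billing codes, patient records, and function names are hypothetical and simplified, mine rather than any insurer’s actual system; the point is the shape of the computation, and why a misdiagnosis is invisible to it by construction.

```python
# A minimal sketch of a billing-based P4P indicator. All codes and
# records below are hypothetical; real systems use actual CPT codes
# and full immunization schedules, but the shape is the same.

# Hypothetical billing codes for the shots required by age 2.
REQUIRED_SHOT_CODES = {"90700", "90707", "90713", "90716"}

# Hypothetical billing records: patient -> set of codes billed by age 2.
billings = {
    "patient_a": {"90700", "90707", "90713", "90716"},  # fully immunized
    "patient_b": {"90700", "90707"},                    # missing two shots
    "patient_c": {"90700", "90707", "90713", "90716"},  # fully immunized
}

def immunization_rate(billings):
    """Percentage of patients whose billings include every required code."""
    fully_immunized = sum(
        1 for codes in billings.values() if REQUIRED_SHOT_CODES <= codes
    )
    return 100.0 * fully_immunized / len(billings)

print(f"P4P immunization indicator: {immunization_rate(billings):.0f}%")
# Prints 67%. Every input is a billing code: whether any diagnosis was
# right or wrong never enters the computation, so a missed tumor leaves
# no trace in a measure like this.
```

That ease, everything reduced to counting billed codes, is exactly why P4P favors the stereotypic procedures and nothing else.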
In the case of my ophthalmologist, a regular, stereotypic procedure would be, does the practice measure intraocular pressure every year, to detect glaucoma? I assure you, in this practice, they measure 100% on this; I have never been there without my intraocular pressure being measured.
But how could there be a method for detecting the timely (let alone early) diagnosis of my problem? How could the diagnostic acumen of the doctors in the other cases of misdiagnosis recounted above be assessed? Actually, I can think of several ways, but each would be difficult, expensive, and not possible in the typical small office. Quality assessment is really hard to do.
But I do have a proposal. It goes against the grain of current quality assurance and quality improvement programs, because it is not quantitative – quantitation has infected medicine worse than MRSA has. You just can’t mention anything in quality assessment before somebody’s left brain kicks in and someone pipes up, as though they had a personal connection to the scientific Taliban: “How can you measure that exactly?” As though they were the smartest ones in the room. Drives me nuts.
Anyway, right now practices get paid extra (actually, their withhold of payment is restored, but that’s not the way the insurers portray it) if they meet the quantitative measure of keeping enough patients fully immunized. My proposal is this: practices should be paid if they can show that they regularly review diagnostic problems within their practices and discuss them with each other.
The scientific measurement Taliban will reject this as “inexact,” but still: what gets talked about and paid for gets the attention, and the more we ignore diagnostic prowess, the worse it will get. So I say, tell practices they should have these procedures of diagnostic review, set some criteria for them, and pay the practices that set such a review up.
There is a legal problem that would have to be solved for this proposal to work. In the hospital a quality committee’s deliberations are legally non-discoverable. This confidentiality allows true quality work to proceed, for obvious reasons. Without that protection, the quality committee’s work would be adversarial from the start and no one would agree to staff it. The diagnostic quality review committee in outpatient care would need similar legal protection.
So, in summary: I think diagnostic accuracy in community medical care is probably sorely deficient. If my colleagues and I can’t get it, you probably can’t get it either. So first, caveat emptor – get that second and third opinion whenever you feel uneasy. And second, I propose that the quality movement change its focus from being exclusively on repetitive, stereotypic, quantifiable procedures, and start to focus more on the harder-to-measure, but ultimately at least as important, accuracy of diagnosis.
Note to insurance company – I am available as a consultant, as I am not currently otherwise occupied occupying Oakland.
Budd Shenkin