
Atul Gawande’s NY Times Op-Ed

Atul Gawande, the surgeon-writer, is outraged (“A Lifesaving Checklist,” 12/30/07). Apparently, an ongoing medical study that he endorsed was stopped because of a bureaucratic rule. The study was trying to answer this question: Are there fewer hospital infections if doctors follow a checklist, as airline pilots do on take-off (sample: Have you washed your hands? Have you draped the patient with a sterile cover?), than if they treat patients according to their own rules? This, Dr. Gawande fumed, is bureaucracy at its worst. Stopping the study is outrageous, because the benefit of checklists is so obvious. (He did not ask why, if the benefit is obvious, the study should be done at all.) 

The study asked the doctors for half the patients to use checklists to prepare for a procedure while the doctors for the other half would prepare as they saw fit. The reason the bureaucracy—in this case the Office for Human Research Protections (OHRP)—gave to cancel the study was that the investigators had not asked patients to consent to participate. Some might have refused. After all, if you were the patient, would you want to be under the care of the doctor who did not use a checklist?  It is, of course, possible that using a checklist makes no difference at all, that doctors do the right thing all the time, but, if you are the patient, do you really want to take that chance? Shouldn’t you be asked? 

Dr. Gawande thought that insensitive bureaucracy had stupidly ended a useful study. He missed a larger point. First, we do not know whether the checklist does or does not make a difference (preliminary studies suggest it does), so it is perfectly appropriate to do the test—improving our knowledge is clearly worthwhile. Second, an experiment on a patient is not just good or bad. Human experiments engage several value systems, not all of which are easily seen. The conflict between Dr. Gawande’s wish to continue and OHRP’s command to stop was a conflict between, and a different weighting of, these different sets of values. 

In research using patients, probably the highest value is accorded to patient autonomy.  A research patient has a fundamental right: to refuse to participate. This right trumps any perceived benefit of the research. OHRP clearly focused on this value; patients should have been given the right to refuse. 

The second most highly ranked value is that of privacy. This right was emphasized in the first days of the AIDS epidemic, to protect against discrimination or the public release of personal information.  To protect this right, medical researchers now are burdened with cumbersome regulations; all patient-identifying information must be removed from all documents.  Research data must be kept in locked cabinets. Physicians may not transmit information about patients to anyone, including family, except another treating physician, unless a patient expressly consents (a ruling that applies to all aspects of patient care).  This is not only a matter of ethics; in many states it is a matter of law. Researchers and treating physicians find it exhausting to comply with the paperwork engendered by these rules.  Privacy rules delay research; burdensome regulations cause some potential researchers not to enter the field at all.  Since little private information is at risk in the checklist research, Dr. Gawande seems to rate the privacy value low; he would dispense with the bureaucratic requirements. 

A third value can be summarized by this sentence: Medical research should provide the greatest good to the greatest number.  The checklist study would do this. The investigators may have assumed that checklists and observation of doctor behavior do not count as research. They may be correct. Audits of practice, whether or not one collects additional data, are usually exempt from research rules.  Furthermore, the agencies that rule over human research exempt some research from consent—studies of population behavior, for instance—and provide expedited approvals for low-risk research.    The researchers should have asked for an exemption, but they certainly know that local human research monitoring boards are inconsistent.  When the same research is done at several different sites, as the checklist study was, one local board may consider it exempt from consent rules while another may require full consent.  Such inconsistencies exasperate researchers and delay research, if not halt it altogether. Had OHRP been asked before the experiment began, it might have ruled that checklists are practice audits exempt from consent, or it might not. One does not know until one asks. 

Society values innovation and rapid dissemination of research results.  These values contradict the values of autonomy and privacy. Innovation and dissemination demand flexible experiment rules and open, ready access by the public; autonomy and privacy restrict the rate of flow and narrow the volume of information that may be shared.  Even the creation of a widely desired national medical database is hobbled by privacy concerns. 

The value “do no harm” has a very high priority. Pragmatically, however, the reality of a study is closer to “do less harm” or “do more good” than it is to “do no harm.” Consider the comparison of a new drug with an old one; the new one is thought to be safer, but it may not be. If the new drug is better, those receiving the old will be (in one sense) harmed by the experiment; if the new drug turns out to be, in fact, more toxic, those receiving it will be harmed. Or consider the circumstance in which early surgery is thought to give a better cure rate than medical treatment. Either the surgical or the medically treated patient group could be harmed or helped, depending on what the answer turns out to be. In some cases an experiment demonstrating good yields a very small, one might say trivial, improvement; in another case the difference might be as great as that between death and cure. The amount of potential good or potential harm can be weighed. Committees that judge the overall value of a research study on humans use common-sense rules but apply no formal measurement scales to these values. Absent a metric, OHRP’s and Dr. Gawande’s opinions collide. 

So what to do?  Protect the privacy of the research subject?  Facilitate the rapid conduct of research? Protect at all costs against possible harm? Or make possible a public good? 

One cannot reconcile the conflicting demands.  One can try to make the committees more uniform and the rules more efficient, but these are minor fixes. Instead, it would be best to concede that our contradictory rules derive from different societal values, and that no one rule is absolute.  We can assign point systems (degrees of risk) with positive numbers for benefit and negative numbers for risk of harm, instead of ruling by yes/no criteria. There are different levels of privacy and autonomy, and different degrees of good and harm. Scientific importance can be quantified; so can the potential public gain.  The numbers can be summed, and the potential worth of the research scored. 

The table below gives examples. How the scoring system is actually constructed is not the important point. What is important is that investigators should document how their proposals address each of these values, and that the committee should systematically weigh each value when it decides to approve or disallow a specific project of human research. When that happens, Dr. Gawande, instead of railing against a faceless bureaucracy, will see its decision from another point of view, and OHRP will understand that some quantitatively small risks can be waived.

                                                    

Potential research questions to be studied, with rankings of benefit (0 to +4) and harm (0 to -4):

| Value weighed | Does a checklist reduce hospital infection? | Is new (very similar) drug A better than (widely used) old drug B? | Is new drug A better than (widely used) old drug B? (completely new biologic principle, first use in humans) | Is (similar) new drug A better than (widely used) old drug B? (treatment of HIV or other very private concern) |
|---|---|---|---|---|
| Good to society if the intervention is successful | 4 | 1 | 3 | 2 |
| Increase in knowledge if the intervention is successful | 1 | 1 | 3 | 2 |
| Good to the patient if the intervention is successful | 1 | 1 | 2 | 2 |
| Harm to the patient if privacy is lost | 0 | -1 | -1 | -4 |
| Harm to the patient if the intervention has adverse effects | 0 | -2 | -4 | -2 |
| Harm to the patient if the intervention is not given (placebo or refusal) | -1 | 0 | -1 | 0 |
| Overall score (sum) | 5 | 0 | 2 | 0 |
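The tallying the table performs can be sketched in a few lines of code. This is only an illustration of the proposed point system, not any real review procedure; the category names and numbers below simply restate the illustrative values from the table.

```python
# Sketch of the proposed point system: each value dimension gets a
# positive score (benefit) or a negative score (risk of harm), and the
# overall worth of a study proposal is the sum of its scores.
# Values restate the illustrative table above.

CHECKLIST_STUDY = {
    "good to society if successful": 4,
    "increase in knowledge if successful": 1,
    "good to the patient if successful": 1,
    "harm if privacy is lost": 0,
    "harm from adverse effects": 0,
    "harm if the intervention is not given": -1,
}

SIMILAR_NEW_DRUG = {
    "good to society if successful": 1,
    "increase in knowledge if successful": 1,
    "good to the patient if successful": 1,
    "harm if privacy is lost": -1,
    "harm from adverse effects": -2,
    "harm if the intervention is not given": 0,
}

def overall_score(rankings: dict) -> int:
    """Sum the benefit (+) and harm (-) rankings into one score."""
    return sum(rankings.values())

print(overall_score(CHECKLIST_STUDY))   # 5: the checklist study scores well
print(overall_score(SIMILAR_NEW_DRUG))  # 0: a marginal new drug breaks even
```

The point of such a scheme is exactly what the essay argues: a committee that sums explicit, documented weights for each value can defend its decision, rather than issuing an unexplained yes or no.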
 