Robert M. Califf, MD

Our hearts beat around 100,000 times per day

One of the great things about an academic medical center is the opportunity created by proximity to a university with all of its assets and ideas. As I have become more involved in activities across campus, the difference in culture between the “medical complex” and the “campus” is more profound than I would have imagined. The two systems face different pressures and enjoy different advantages. The biomedical complex runs 24/7 because of the demands of medical care, while the “campus” has a much more measured and orderly ambience. A 7 am meeting is “late” for surgeons, but anathema to many parts of campus. Summer and fall are no different for clinical medicine, but summer is a time for scholarly reflection on campus. Budgets on campus are oriented around the tuition model, while on the medical side the combination of revenue from patient care and grant funding dwarfs tuition.

Despite all these differences, when synergy is found it can stimulate new ways of doing things and offers the chance to infuse creative thinking on both sides.

An area to watch at Duke will be “innovation and entrepreneurship.” Today, the Duke Institute for Health Innovation (DIHI) has its kickoff with a host of international experts and dignitaries on hand. The institute will provide a place to develop creative ideas about how to deliver the most effective healthcare. The “after dinner lecture” by Uwe Reinhardt promises to be a highlight, but the entire meeting is full of extraordinarily bright people with big ideas.

A major university initiative called Duke I&E is bringing together students, faculty, alumni, and supporters, led by Dr. Eric Toone. As we have worked on this project, it has become clear that I&E can be a hub for interaction between “campus” and “Duke medicine.” In particular, the Duke Translational Medicine Institute can play a creative role in the area of biotechnology, bringing together medical center faculty and campus faculty with students to solve problems, and taking advantage of a large and talented alumni pool.

Smoking in Europe

I couldn’t help but notice what seemed to be a tremendous rate of cigarette smoking, both in Amsterdam itself and among the cardiologists assembled for the European Society of Cardiology congress. The habit seems outmoded in many parts of America, although stamping it out will be almost impossible. The EU report on smoking showed rates of 25–30% across the EU. Rough estimates give similar rates in China (accounting for the aphorism that there are “more smokers in China than people in the US”). According to the CDC, slightly less than 20% of Americans smoke. If this continues, cardiologists will not have to worry about future business!

The Doctor as Employee

Over the past week I have been in multiple meetings in which various people expressed the view that doctors offer patients the full range of treatment options and discuss the risks and benefits as unbiased professionals. Both in the HHS hearings on consent for comparisons of treatments already on the market and at the ESC in discussions of the choice of anticoagulant in atrial fibrillation, people referred to “not interfering with medical practice.” In the HHS hearing, it was assumed that in practice doctors discuss the risks and benefits of the various options, choosing the one that best matches the evidence and the patient’s preferences. From that perspective, limiting options through randomized assignment could be problematic because it would take away readily available options that would otherwise be personalized. In the case of anticoagulants, there was a belief that the decision often rested with the doctor.

However, in both cases it seemed that across the US and Europe, physicians are now employees of government or corporate systems that limit options as a matter of policy.  Certain drugs and devices are allowed on the formulary, and others are not, often based on financial considerations for the aggregate interests of the government or corporate delivery system.  Patients are usually not informed of the options that have been “taken off the table” and doctors who go “off the reservation” are disciplined.

There are many good aspects of this situation, especially the use of protocols to ensure reliable delivery of known effective interventions.  However, the fundamental basis for much thinking about clinical trial consent and personalization of therapy is that decisions in practice are made by a physician with a dominant professional responsibility to the well-being of the patient.  It will be interesting to see how this changing basal condition of routine practice will impact clinical trials and introduction of new therapies.

Consent for Research in a Learning Healthcare System

I had the privilege of participating in an HHS meeting (unfortunately coinciding with the MLK “I Have a Dream” commemoration on the Mall) in which many people gave their views about informed consent, clinical trials assessing comparative clinical care strategies, and the SUPPORT trial. Here is my testimony, which can also be found on the OHRP website. The verbal presentation can be found at:

Below is the written statement:

I am pleased and honored to have the opportunity to address the Committee on each of the items listed in “Section III. Issues for Discussion” of the “Notice of a Department of Health and Human Services Public Meeting and Request for Comments on Matters Related to the Protection of Human Subjects and Research Studying Standard of Care Interventions” published on p. 38,343 of the Federal Register on June 26, 2013. I have a number of roles and associations that are relevant to the questions at hand. However, the views I express in this document do not represent the official positions of the organizations with which I associate or that fund my activities—they represent my personal opinions only.

I have worked as a clinical investigator for more than 30 years. My research interests include clinical trials and outcomes research involving development of new interventions, but also the evaluation of interventions already in use and the delivery of health services. I lead a clinical and translational research enterprise at Duke University with hundreds of faculty investigators and more than 1000 employees, and thus have a definite interest in promoting research. Most recently, I was chosen to serve as the Principal Investigator of the Coordinating Center for the NIH Healthcare Systems Research Collaboratory, a major effort to develop our national capacity to answer important questions about medical practice. I am also the co-Chair of the Clinical Trials Transformation Initiative (CTTI), a Cooperative Agreement between the FDA and Duke University that includes more than 60 member organizations representing industry, academia, government, and advocacy groups.    

My career as a doctor began in 1978, and I have spent many years directing a busy coronary care unit and attending in a large outpatient clinical practice. During this time, I have witnessed many instances in which ignorance about the efficacy and relative risks of interventions led to harm to individuals and populations, despite the best intentions of healthcare providers, patients, families, and the healthcare system as a whole. We simply did not know, for instance, that Class I antiarrhythmic drugs caused sudden death rather than preventing it; that hormone replacement therapy in postmenopausal women caused cardiovascular events rather than preventing them; or that high-dose erythropoiesis-stimulating agents (ESAs) caused tumor growth and adverse vascular events rather than improving quality of life in cancer patients with anemia or in patients with renal failure. My own specialty, cardiovascular medicine, is the most evidence-intensive field of medicine, yet our own research has shown that 85% of major recommendations in this field lack high-quality evidence to support them.[1]

Erythropoiesis-stimulating agents deserve special mention in light of recent controversies about blood transfusions. At the time this class of drugs first emerged, it seemed self-evident that “normalizing” hemoglobin levels would benefit patients, a concept supported by observational studies. Small early clinical trials (probably biased by selective reporting) also showed trends toward benefit with more aggressive use of high-dose ESAs. This aggressive use was recommended in clinical practice guidelines well before adequately powered randomized trials were done, resulting in widespread clinical uptake. When randomized trials were finally done, there was no evidence of benefit with high-dose ESAs and considerable evidence of harm.[2],[3] Relying on pilot studies and trials with inadequate statistical power to guide treatment decisions is treacherous and potentially dangerous to patients.

Finally, as I have gotten older, I have encountered my own chronic health problems, and multiple members of my family have developed health issues that required “educated guesses” together with their healthcare providers. Doctors, administrators, policymakers, patients, and family members must make decisions every day about the best course of action—decisions that will affect individual patients, the clinic, the hospital, or the health system as a whole. Prior to the 1970s these decisions were, with few exceptions, informed by experience stored in a clinician’s memory or on paper, or by reasoning based on our current understanding of human biology.

However, in recent decades we have benefited from three major advances in healthcare research methods relevant to this discussion: the advent of computerized storage and analysis of medical data, the maturation of the disciplines of biostatistics and epidemiology, and the widespread acceptance of the epidemiological tool of randomization. We have learned more about the specific ways in which human observation and memory are flawed, and that methods to account for bias and confounding are essential to avoiding faulty conclusions that can lead in turn to devastating consequences.

We have known for some time that most beneficial interventions have a modest effect; and for effects with a risk reduction smaller than 40% or so, randomization is essential to avoid drawing incorrect conclusions. But despite this knowledge, we have been limited in our ability to aggregate high-quality data in ways that would allow us to efficiently conduct controlled trials and evaluate interventions, because healthcare transactions were not based on electronic standards. Thanks to continuing advances in technology and information systems, we now have the ability to gather needed information very rapidly. Unfortunately, the existing culture of healthcare, perhaps best exemplified by the divide between research and practice and by inhibitions about the use of randomized allocation of interventions, keeps us from advancing life-saving knowledge as effectively as we should.

Given the choice between a pure guess and expert opinion, I would take expert opinion, but given the choice between opinion and high-quality evidence, an expert informed by high-quality evidence is vastly superior. The question before us is how to unleash the power of the knowledge we can gain while respecting and protecting the interests of individual participants in the system.

My detailed views on the subject under discussion accord with those expressed in the Institute of Medicine’s CERIC Committee document, which I have signed and endorsed. I will, however, provide some brief additional comments. I believe there are three key changes in the background that necessitate a reconsideration of the approach to consent for comparing “standard of care” interventions:

First, we are in the midst of a revolution in medical knowledge that could have a major effect in reducing death and disability both nationally and globally. This revolution is occurring thanks to our rapidly expanding ability to aggregate data that can truly inform which interventions are best for a given patient or patients. (By “interventions” I mean drugs, devices, behavioral therapies, and health services strategies at the level of individuals, clinics, hospitals, and health systems.)  When my career began, the concept of aggregating information on a large scale could be imagined but was not yet technically feasible. Now, we will soon have more than 300 million Americans with electronic health records, and tabulations of patient characteristics and outcomes are being made routinely at every level of the system.

Second, as a consequence of the availability of these vast amounts of data, there is a pressing need to reform rules governing research that made perfect sense in the “old world” but are now hindering further progress. These issues are well described in the Hastings Center Report[4],[5] already discussed in other documents submitted for this meeting. The bottom line, however, is that the current artificial separation of research and practice reduces the rate at which we are able to provide evidence that is critically needed to inform the myriad of decisions heretofore relegated to the best guesses of experts armed with inadequate methods. Furthermore, when health systems collect data systematically, current norms encourage the use of suboptimal methods. As documented in the Hastings Center Report, this likely leads to numerous bad decisions, because employing the best methods and disseminating the continuously aggregating knowledge would lead to labeling the learning activity as “research,” thereby triggering a set of highly regulated activities that are costly and interrupt the flow of practice.

Finally, I believe that the best way to cut through the layers of bureaucracy that are holding us back is to do the empirical research that will provide perspective on the expectations of patients and research participants, and then to involve the public more broadly in the research enterprise. There is a substantial body of literature about what the vast majority of people would consider reasonable ways to inform them and gain their consent. I am confident that empirical research will lead to a new norm in which learning is much more continuous, the intensity of oversight is gauged to the level of risk, and systematic approaches to informing, including, and gaining the consent of research participants become more sensible and streamlined. Such a norm would also allow a reduction in the length and complexity of current consent documents, which are both broadly unpopular and poorly suited to accomplishing their ostensible purpose.


Comments on Specific Questions Raised by OHRP

1. How should an IRB assess the risks of standard of care interventions provided to subjects in the research context? A. Under what circumstances should an IRB consider those to be risks that may result from the research?

A reasonable approach to this issue is to focus on the incremental risks beyond those of the standard of care. This concept has been well-articulated by others, especially in the document submitted by the Association of American Medical Colleges.[6]

A significant and related issue is the requirement to discuss all reasonable alternatives and their risks and benefits when learning takes place under the rubric of research. Because most decisions in medicine are not informed by high-quality evidence, and because informing patients about the uncertainty inherent in clinical practice is difficult, the researcher must “make up for lost ground,” explaining not only the incremental risks and benefits of the study but also the uncertainties involved in routine practice.


B. Under what circumstances should an IRB refrain from considering those risks as related to the research?

In many cases where research concerning alternatives within usual care is involved, the interventions being compared are not known to have a net difference in risk/benefit balance.  Arguments are often based on indirect comparisons, opinions, and non-systematic empirical tabulations and are typically accompanied by considerable bias as well as intellectual and financial conflicts of interest. When the proper evaluation has been done to establish that high-quality evidence does not provide a clear choice between alternative interventions, I believe that the IRB should consider these known or perceived risks not as being related to the research, but instead as part and parcel of risks inherent to clinical practice.

Others have pointed out that in cases when there is equipoise between two interventions, the net benefits and risks may be the same, but the specific benefits and risks may differ—for example, one may cause headaches and the other may cause myalgia. In such circumstances patients known to be at high risk for adverse outcomes from one of the treatments should be excluded, and when a reasonable person might choose one over the other, the patient should be informed.


C. What type of evidence should an IRB evaluate in identifying these risks?

An IRB should evaluate all available evidence, but the most persuasive form of evidence should be well-conducted clinical trials, or well-conducted registries or epidemiological studies that have been published in the peer-reviewed literature. Opinion uninformed by high-quality data should be considered, but it should also be clearly understood for what it is, and the amount of uncertainty should be considered on a continuum. 

The skills needed to inform these determinations are considerable, and I believe that the complexity of the task of evidence evaluation is one of many reasons to involve multiple centers and leverage the composite expertise of IRBs or central IRBs to debate these issues. The Internet is well-suited for enabling local IRBs to participate in joint sessions, an approach that is being piloted in the CTSA organizations funded by the NIH.[7]


2. What factors should an IRB consider in determining that the research-related risks of standard of care interventions, provided to research subjects in the research context, are reasonably foreseeable and therefore required to be disclosed to subjects?

This question is difficult to answer conclusively because the evidence for the incremental risks of a given intervention exists on a continuum of severity and uncertainty. For this reason, the “reasonable person” yardstick may be the most appropriate measure to apply. For a properly constituted IRB, research risks should be disclosed when they exceed the threshold above which a reasonable person would want to be informed. As noted above, I believe these determinations are best made through discussions that provide perspective and a chance to air and consider differences of opinion before decisions are made. The widely documented variation among IRBs suggests that discussions across IRBs would lead to more consistent, higher-quality outcomes.


A. What criteria should be used by the IRB to evaluate whether the risks to subjects are reasonably foreseeable?

It may be useful here to distinguish “foreseeable risks” from hypotheses and study aims. Patients can understand that a study is needed because the best choice of intervention is not yet known. When describing a trial, the expressed purpose of the study should include the outcomes under evaluation, because it is within the realm of prospectively defined outcomes that uncertainty can be reduced through pre-specified hypothesis testing. The fact that a trial is evaluating total mortality, for example, does not mean that increased mortality is an expected risk, but rather that it is a possible risk or in fact a possible benefit (remembering that in a comparison within usual care, a risk of one therapy is a benefit of the comparator) that is worth measuring and may be discovered as a result of the trial.


3. How should randomization be considered in research studying one or more interventions within the standards of care? Should the randomization procedure itself be considered to present a risk to the subjects? Why or why not? If so, is the risk presented by randomization more than minimal risk?  Should an IRB be allowed to waive informed consent for research involving randomization of subjects to one or more standard of care interventions? Why or why not?

The act of randomization undeniably reduces autonomy, but in many cases there is no reasonable evidence that it increases risk. In fact, there is substantial evidence that participating in a randomized trial may reduce overall risk relative to standard practice. I believe that potential research participants should be informed of this body of knowledge. 

Overall, it is appropriate that OHRP is focused on randomization, as it is a necessary part of developing reliable evidence to support decisions. But randomization is also a complex concept that is poorly understood even by many healthcare professionals. Much can be learned through well-conducted observational studies, and there is broad agreement that randomized trials and observational studies are complementary. Yet, as recent work by the Observational Medical Outcomes Partnership[8] makes clear, even with advanced, recently developed methods, we cannot sort out the confounding and bias when making treatment comparisons for typical effect sizes (i.e., smaller than 40%). In plain English: without applying the tool of randomization to assess interventions with modest effects, we run a significant risk of drawing the wrong conclusions!
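The point about confounding at modest effect sizes can be made concrete with a small simulation. The sketch below is my own toy illustration with invented numbers, not an analysis of OMOP data: an unmeasured "severity" variable drives both treatment choice and outcomes, the treatment itself does nothing, and yet the naive observational comparison suggests a roughly 20% increase in risk, while randomized assignment recovers the null.

```python
import random

random.seed(42)
N = 100_000

def simulate(randomized):
    """Return the treated-vs-untreated relative risk when the true effect is null."""
    stats = {True: [0, 0], False: [0, 0]}   # treated? -> [events, patients]
    for _ in range(N):
        severity = random.random()           # unmeasured illness severity
        if randomized:
            treated = random.random() < 0.5
        else:
            # confounding by indication: sicker patients get treated more often
            treated = random.random() < 0.2 + 0.6 * severity
        # outcome depends only on severity; the treatment truly does nothing
        event = random.random() < 0.05 + 0.10 * severity
        stats[treated][0] += event
        stats[treated][1] += 1
    rate = lambda s: s[0] / s[1]
    return rate(stats[True]) / rate(stats[False])

print(f"observational relative risk: {simulate(False):.2f}")   # ~1.2 despite no true effect
print(f"randomized relative risk:    {simulate(True):.2f}")    # ~1.0
```

A spurious relative risk of about 1.2 sits squarely in the 10–30% range of typical treatment effects, which is exactly why, when the confounder is unmeasured, no amount of statistical adjustment can be trusted at these effect sizes.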

Randomization raises a number of issues in practical application, not least because it directly calls attention to the uncertainty inherent in medical practice. Randomization constitutes a tacit admission that we know so little about the relative effects of interventions that our best available option is to “flip a coin” to determine who gets which intervention. Explaining these concepts as part of the informed consent process is far more time-consuming, and causes much more discomfort for patients, families, and providers, than does consent to use data to make observations with no intrusion into treatment selection.

The alternative, however, is what we have now: practitioners, clinics, and systems routinely recommending one intervention over another without explicitly recognizing the degree of uncertainty or the prevalent financial and professional conflicts that shade these recommendations. Under the rubrics of “operations improvement” and “competitive intelligence,” health systems often support such decisions with proprietary analyses of observational data that are performed using suboptimal methods and that lack explicit public accountability or airing of the methods and results. Yet the best systematic review performed to date found no evidence that participation in a randomized trial increases risk, and the trend in the evidence is toward reduction in risk.[9] This finding is consistent with results from the SUPPORT study, in which babies included in the trial had a lower rate of mortality, regardless of randomized treatment assignment, when compared with allowing the physician and parents to choose the oxygen saturation level within the range of standard care.[10] 

Cluster randomization adds a particular twist to this discussion, because changes are made at the level of a provider, clinic, unit, or system. This approach allows delivery of the intervention to proceed efficiently according to the same standard with each patient. Many cluster randomized trials have not required consent at all, because a determination has been made by an IRB that the intervention involved only minimal risk to patients and that obtaining consent of the sort typical of standard randomized controlled trials would be impractical. My personal belief is that empirical research will demonstrate that the vast majority of people will agree that standard consent is not needed for this type of research when its incremental risks are judged to be minimal by an IRB, but that people will want to be informed. However, this empirical research needs to be done.

A consequence of the move of physicians from private practice to an employment model is that they become employees of systems that make corporate decisions. However, while health system formularies are often restricted and services are provided or withheld at unit levels without explicit acknowledgement to patients or families of the options that could have been considered (and without public display of the findings that generated the policies), cluster randomization explicitly attempts to learn, in the most rigorous manner possible, which of several options is best. Cluster randomization mimics what happens in “real-world” healthcare, where intervention decisions are increasingly made at the level of a practice, unit, or hospital to provide consistency and reliable delivery.

In summary, I do not believe randomization itself imposes incremental risk as a general principle, although each case should be assessed on its merits.


4. How, and to what extent, does uncertainty about risk within the standard of care affect the answers to these questions? What if the risk significantly varies within the standard of care?

The only legitimate reason to undertake a research study involving humans is that enough uncertainty exists that reasonable people would agree, after a rational evaluation of all available evidence, that the best choice for those being studied is not known. The level of risk almost always varies within the standard of care, because the harms of interventions depend on patient characteristics and on the proficiency and other characteristics of the healthcare providers implementing the intervention. In my opinion, the inflection point is reached when there is some proven level of risk, as opposed to a situation in which the possible risk is unproven and opinions differ within the patient and provider communities about its presence and extent. Additionally, if risk without countervailing benefit exists in an easily identifiable subpopulation, those patients should be excluded from participation to protect them from harm.


5. Under what circumstances do potential risks qualify as reasonably foreseeable risks? For example, is it sufficient that there be a documented belief in the medical community that a particular intervention within the standard of care increases the risk of harm, or is it necessary that there be published studies identifying the risk?

The criteria for explicitly identifying “foreseeable risks” should, in general, require credible evidence. The history of “beliefs in the medical community” is replete with errors due to bias, both conscious and unconscious, and with the complexities of detecting and interpreting treatment effects in the ranges typically achieved by medical interventions (10%–30%).

In conclusion, the widespread adoption of electronic health records and the development of methods for leveraging novel sources of data could enable us to increase the number of important healthcare decisions informed by high-level evidence by an order of magnitude or more, thereby saving countless lives and preventing disabilities caused by suboptimal knowledge regarding the most effective practices. But in order to enable such dramatic progress, we must also develop sensible approaches to the oversight of research within a learning healthcare system, ones that encourage the adoption of the best methods and the timely provision of relevant information to participants in a healthcare system that meets their expressed needs. This effort must include use of randomization, together with its attendant explicit acknowledgement of uncertainty. Consequently, we should develop streamlined approaches for encouraging learning with methods that will yield reliable results capable of benefiting both research participants and—as high-quality information accumulates—future patients as well. Empirical research on how to inform and acquire consent in this new world could pave the way to a revolution in knowledge about how to provide the best healthcare possible.





Robert M. Califf, MD, MACC

Vice Chancellor for Clinical and Translational Research

Director, Duke Translational Medicine Institute

Donald F. Fortin Professor of Cardiology

Duke University School of Medicine

[1] Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831-41.

[2] Singh AK, Szczech L, Tang KL, Barnhart H, Sapp S, Wolfson M, Reddan D; CHOIR Investigators. Correction of anemia with epoetin alfa in chronic kidney disease. N Engl J Med. 2006;355(20):2085-98.

[3] Solomon SD, Uno H, Lewis EF, Eckardt KU, Lin J, Burdmann EA, de Zeeuw D, Ivanovich P, Levey AS, Parfrey P, Remuzzi G, Singh AK, Toto R, Huang F, Rossert J, McMurray JJ, Pfeffer MA; Trial to Reduce Cardiovascular Events with Aranesp Therapy (TREAT) Investigators. Erythropoietic response and outcomes in kidney disease and type 2 diabetes. N Engl J Med. 2010;363(12):1146-55.

[4] Kass NE, Faden RR, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. The research-treatment distinction: a problematic approach for determining which activities should have ethical oversight. Hastings Cent Rep. 2013 Jan-Feb;Spec No:S4-S15.

[5] Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. 2013 Jan-Feb;Spec No:S16-27.

[6] Bonham AC. Letter on behalf of the AAMC re: Notice of a Department of Health and Human Services Public Meeting and Request for Comments on Matters Related to the Protection of Human Subjects and Research Studying Standard of Care Interventions. 78 FR 38343, Docket No. HHS-OPHS-2013-0004.

[7] IRBshare website. Available at: Accessed August 22, 2013.

[8] Observational Medical Outcomes Partnership website. Available at: (accessed August 23, 2013).

[9] Vist GE, Bryant D, Somerville L, Birminghem T, Oxman AD. Outcomes of patients who participate in randomized controlled trials compared to similar patients receiving similar interventions who do not participate. Cochrane Database Syst Rev. 2008;(3):MR000009.

[10] SUPPORT Study Group of the Eunice Kennedy Shriver NICHD Neonatal Research Network. Target ranges of oxygen saturation in extremely preterm infants. N Engl J Med. 2010;362(21):1959-69.

Medical Evidence–A New Perspective

This Coldplay spoof makes some excellent points:


The Word Innovative

I was intrigued by this Wall Street Journal article some time ago:

But now I’ve seen enough to agree completely. The word “innovative” means nothing at this point, especially in medicine. We need words that actually convey what differentiates one activity from another, lest we become characters in a Dilbert strip.

Another favorite is “payer mix.”  When this term is used in “improved payer mix,” it typically means we cleverly figured out how to send poor people to the other guy’s health system!

NIH Director’s Blog Post

The post above appeared on the NIH Director’s blog. Great work by Voora, Ginsburg, and the Duke Clinical Research Unit team in a supporting role. It provides a glimpse into the complexity of human biology and medication mechanisms. We built the DCRU and linked it to units in Singapore and Delhi because we really believed that drugs, although typically developed against a specific biological target, affect multiple pathways, and that deep phenotyping is needed. Over 100 years after aspirin became available, we are now learning about these “ancillary” off-target effects.

The ability of the scientific community to create, curate, and mine massive biological data sets on individual patients and populations is continuing to advance rapidly.  The era of deep phenotyping is here–we need to develop efficient systems to generate the information.  I am certain we will find that our current naive notions of how medications work will be completely overhauled–the race is on.

Do Drugs Work?

I was intrigued by the article in the New York Times below, but more intrigued by how many people sent me a link to it.  I wish the whole article had been about page three.  I found page one to be a bit problematic.  There seems to be an implication that the problem is the clinical trials, and that if we only knew how to analyze them better, we could unravel the “responder/non-responder” issue on an individual, personalized basis.  It is almost impossible to believe that a neutral study isn’t composed of some people with a great response to treatment and others with an adverse response.  Unfortunately, when heterogeneity of treatment response appears to be present, it is very often random variation.  There are many examples of chasing these leads down through independent validation trials, and the vast majority turn out to be negative.  In a rough sense, in a reasonably sized clinical trial, if one looks at 20 subgroups, roughly one in 20 will seem to have a significant treatment effect.  But it’s just random variation.  As we come to understand biology better, however, as skeptical as I am about most claims to date, even I must agree there will be valid stratification by biomarkers and panels of biomarkers.
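The one-in-20 figure is simply the expected false-positive count when 20 subgroup tests are run at a 5% significance threshold.  A quick simulation makes the point concrete; this is a sketch with hypothetical event rates and sample sizes, using a simple two-proportion z-test, not the analysis of any real trial:

```python
import math
import random

random.seed(7)

def two_prop_p(e1, n1, e2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

def significant_subgroups(n_subgroups=20, n_per_arm=200,
                          event_rate=0.30, alpha=0.05):
    """Count subgroups that look 'significant' when the true effect is zero."""
    hits = 0
    for _ in range(n_subgroups):
        # Both arms are drawn from the SAME event rate: no real treatment effect.
        treated = sum(random.random() < event_rate for _ in range(n_per_arm))
        control = sum(random.random() < event_rate for _ in range(n_per_arm))
        if two_prop_p(treated, n_per_arm, control, n_per_arm) < alpha:
            hits += 1
    return hits

n_trials = 500
avg = sum(significant_subgroups() for _ in range(n_trials)) / n_trials
print(f"Average 'significant' subgroups per trial (no true effect): {avg:.2f}")
```

Even with zero true treatment effect anywhere, on average about one subgroup per trial crosses the p < 0.05 threshold, which is exactly the random variation that independent validation trials then fail to reproduce.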

Page two of the essay touches on some very important issues, but doing these topics justice would require much more space.

Page three is great–more people need to know about the concept of I-SPY.  The most important part, at least in my view, is that patients and their doctors have taken charge and are designing studies that directly compare alternative cancer treatments–and inferior therapies will be identified and dropped.  Also, effective stratification findings can be validated or refuted as an intrinsic part of the research design.  The Bayesian statistical design is an interesting feature, but one that very knowledgeable people still debate.  All in all, this is a terrific effort, and we need more trials like it.

In the end, because biological processes have a stochastic nature (Barry Coller has a great lecture on this), deterministic thinking will always fall short and we will need probabilistic predictions based on populations.

An interesting retort, most of which I agree with:


Ignorance or Learning?

Fueled by the controversy over the SUPPORT Trial, HHS has announced that there will be public proceedings about the issue of consent for participation in practical randomized trials asking questions about best practice.  These discussions are critical to our future.

Rounding in the Duke CCU earlier in the month, I was acutely aware of how often we make decisions about therapy based on belief, with no empirical evidence to support our choices.  Faced with the same question, different doctors would recommend different treatments.  For any single decision, this variation in practice probably makes little difference, but the cumulative effect of these differences across a population may be huge.

In 1977, as a third-year medical student, I had the chance to dream with Eugene Stead, Bob Rosati, and Frank Starmer about the “living textbook of medicine.”  We knew then that someday it would be possible to learn continuously by recording high-quality data that could be turned into information and knowledge through high-quality analytics.  Thirty-five years later, we can do it!

The critical issue that needs to be resolved is how to involve patients and their families in understanding our level of uncertainty and the consequences of doctors guessing about the best approach, when it is eminently feasible to increase our ability to answer critical questions by an order of magnitude.  The arguments about the details of the consent form in SUPPORT pale in comparison to the importance of moving to a learning system–we can always make notification and consent better, and efforts should be made to do so, but the extreme roadblocks to attempts to learn in healthcare really need to be overcome.

If we were all committed to learning, a big improvement in the health of individuals and populations would be rapid and sustained.

Dancing Clinical Investigators

Who said clinical researchers are boring?  This clip, made by the DCRI fellows for their end-of-year bash, reveals certain talents in our faculty that were previously unknown to me.  Despite the intensity of our work, it’s really good to see people having fun.  It’s especially good to look financial stress in the face and laugh at it, all the while knowing that we’re pulling together to answer critical questions by diversifying funding sources.

© Robert M. Califf, MD. Blog design by Hopkins Design Group Ltd. HTML, CSS, and WordPress Theme by Digital Mettle, LLC.