By Zack Dumont

Healthcare and medical literature is fraught with limitations: It’s complex, vulnerable to various interpretations, and ultimately not well served by fleeting headlines. To effectively translate the knowledge uncovered in a research study into clinical practice, one must first determine whether the research methods are valid (i.e., did they conduct a good study?). Sadly, the answer is often no.

Next comes assessing the meaning of the findings (i.e., what might these numbers suggest?). Sadly, the answer is often not much, or worse, unclear. Finally, we try to apply the information back to the care provided to a specific patient (i.e., does my patient care about what this study tells us?). Sadly, it’s often unlikely … despite what the author stretches to conclude.

There are textbooks, expensive web tutorials, and even entire university courses (!) dedicated to honing the skills required to critically appraise medical literature. As a teacher of evidence-based medicine and critical appraisal for the last decade, I’ve worked mostly with busy clinicians — e.g., physicians, pharmacists, nurses — who challenge my every ounce of empathy when they say something akin to “The activity of critical appraisal is simply not practical. I’m just going to have to take a leap of faith on the author’s conclusions.” This is usually stated in a moment of frustration, and I understand where they’re coming from; however, given the complexity and what’s at stake, taking a leap of faith is not an option that our patients can afford for us to take.

After being repeatedly thrust into what felt like a crossroads — to teach the full technical critical appraisal process all at once, or to just give up — I’ve found there’s another path: Teach some aspects of critical appraisal, and keep it simple. Get the learners into the ballpark with a few simple tools, and build on these foundational skills over time.

One foundational approach I’ve found effective is to lean on a tool that I believe everyone, scientist or not, can use. This tool is heavily embedded into science-based and research curricula: the PICO format. PICO is an acronym for “patient/population,” “intervention” (i.e., a new medication, medical device, procedure, etc.), “comparator” (e.g., placebo, Drug Y, etc.), and “outcome.” It’s a framework for structuring questions before digging into the available literature.

Think of a Google search: While someone might ask, “Should I take Drug X to prevent myself from having a heart attack?”, the search results will be of much poorer quality than if the question were asked using the PICO format. Using PICO, the search question might look more like, “For an otherwise healthy person in their sixties (patient/population), does Drug X (intervention), compared with placebo (comparator), reduce the risk of heart attacks (outcome)?”, which will likely bring up higher quality evidence. PICO can be used, and is sometimes force-functioned, in very sophisticated search engines that healthcare providers use. Since the acronym is so well ingrained, I’ve leveraged it to help those new to critical appraisal to think more critically. I think everyone can apply this technique.

Though it does not align with the original intention of PICO, when looking at a scientific claim you can ask yourself the following:

  • P — Did they study this product/service in people who are like me? Unfortunately, studies are often conducted in relatively “simple” patients with single ailments; their health status certainly doesn’t represent the large part of the bell curve. For example, in a study of a medication for people with a heart condition, the researchers will often exclude people with depression and anxiety; in the real world, heart health and mental health very often walk hand in hand.
  • I — Is the product/service practical and implementable in the real world? Often interventions are studied only in certain geographical regions and/or include intensive follow-up that simply isn’t achievable in the real world. For example, a medication that lowers your risk of death may also increase the amount of potassium in your blood, which, when high enough, can carry severe consequences. Your potassium levels can be measured, but the required frequency of monitoring and assessment is time- and resource-consuming and relies on an imperfect healthcare system.
  • C — Has the product/service been compared to the gold standard? For example, omeprazole is effective at lowering stomach acid levels and is therefore used to treat heartburn; but if the study compared omeprazole to placebo rather than to other treatments we can use for heartburn (e.g., other medications, weight loss, trigger avoidance, etc.), then the argument that it works is certainly less compelling.
  • O — Has the product/service been shown to increase the likelihood of outcomes that I care about? For example, I’m sure many patients with type 2 diabetes want their blood sugar levels lowered. Did you know that we have multiple medications that lower blood sugar but, compared to placebo, do not positively impact the rate of heart attacks, strokes, and other bad outcomes associated with diabetes? Scientific claims should always be backed by evidence of an impact on patient-valued outcomes.

So next time you see or hear a scientific claim, try to apply the PICO tool to test if you can poke holes in it. Critical appraisal isn’t always about finding the truth; it can also be about identifying when others (intent aside) might be off the mark.

This article appears in the September 2019 issue of Critical Links.