MDCalc is a 13-year-old medical reference started by two practicing emergency medicine physicians, Dr. Joe Habboushe and Dr. Graham Walker. A recent survey by EB Medicine has shown that MDCalc’s 370+ tools are now used weekly by 65% of U.S. attending physicians and nearly 80% of U.S. residents, which may make it the most broadly used medical reference. It is still owned and run by the physicians who started it, without outside investment or outside corporate influence (just like Medgadget).
To better understand how new medical calculators are created and to gain more insight into what MDCalc has achieved, we are joined today by MDCalc’s co-founder and CEO, Dr. Joe Habboushe (an emergency physician); Dr. Jarone Lee, Medical Director and Quality Director in Massachusetts General Hospital’s Department of Surgery (a critical care physician and member of the MDCalc Editorial Board); and MDCalc’s Head of Content, Dr. Rachel Kwon (a surgeon).
Alice Ferng, Medgadget: What was the state of medical calculators in clinical practice prior to the founding of MDCalc, and what growth have you seen in the past few years? Why do you think these calculators have become increasingly popular?
Dr. Joe Habboushe, Co-Founder & CEO: There are few medical calculators that are more than 20 years old: Glasgow Coma Scale, FENa, Ranson’s. Even Wells’ Criteria’s first study was just 17 years ago! When MDCalc was started in 2005 – out of the idea that docs shouldn’t waste their time trying to memorize esoteric scores (and, more importantly, that scores need not be limited by their “memorizability”) – there were just a couple dozen scores. But the world of “evidence-based medicine” – and specifically, clinical decision tools – has exploded since then. Four years ago, MDCalc had 80 scores and was used by about 15% of all U.S. docs. But no one expected the recent explosion in clinical scores: MDCalc now has 370 scores, with 600+ in its pipeline, and they are used by 65%+ of all U.S. docs every week.
This is largely very good for medicine and for our patients, as clinical decision scores create an evidence base that – when properly applied – is combined with physician judgment to improve patient care. The trend towards clinical decision tools is only accelerating.
Medgadget: What are point-of-care references that physicians use these days, and do they have calculators and rubrics available?
Dr. Jarone Lee, MDCalc Editorial Board Member: There seem to be a few big categories: the modern version of an “electronic textbook” for when you have 5+ minutes and need to read up on a topic – references such as UpToDate and emedicine. There are drug references – Epocrates was a favorite when I was in school, but there’s a bunch of others that are just as good – Micromedex, RXList, and some EHRs even have them built in. And then there are “medical calculators”, which weren’t as important 10+ years ago as they are now – MDCalc is the main one here for nearly all medical calculators. There are also some specialty-specific references – a great ortho reference, or derm reference, for example. Finally, once in a while we need to look up society guidelines – this is fairly rare, but there’s also no good quick way to do it without reading through long guideline documents, which aren’t really designed for point-of-care use.
Medgadget: What is driving the creation of new calculators, and why is there a rise of the number of them now? Is there something that is lacking in current clinical practice that is moving these calculators forward? How did clinicians make similar judgment calls in the past before the calculators? Was there a reason why these calculators were just not as “popular”?
Dr. Lee: You’re right – there are a ton more calculators / clinical decision tools now than there were just a few years ago. There are many reasons for this: evidence-based medicine (EBM) is still quite young, and many more traditional specialties are just now really embracing its value in practice. Also, because technology allows for easy access and use of calculators, dissemination and adoption have skyrocketed. On the research side, more research, sophistication, and resources are being put into analyzing big healthcare databases, and that research usually produces some type of prediction rule or calculator.
We also have a new culture of physicians who want to apply medical calculators because they see how it benefits their patients, medical researchers who understand how best to create new scores, and medical journals that will support and publish their work.
MDCalc has changed the game as well – by making these scores easily accessible to doctors, calculator creators know their work will be used and can be more sophisticated with the design of their scores, since the scores no longer need to be as memorizable.
Finally, payers looking for consistent treatment and for rules on when payments should be made, along with providers such as hospital systems looking for ways to track “value” in new value-based healthcare systems, are hungry for EBM tools that provide at least some quantitative measure of disease, treatment, diagnosis, and value.
Overall, this is great for patients, who get better care when smart tools are combined with good clinical judgment at the bedside.
Medgadget: Let’s talk about two things: 1) How are calculators actually created? Validated? Tested? And 2) How do doctors and other healthcare workers know which calculators to use in clinical practice, and how do they know if they’re using them for the correct intended use?
Dr. Habboushe: The way we see it, the expertise needed to create and validate a medical calculator is quite different from the expertise needed to make the calculator usable by clinicians at the bedside.
The first is done by the hundreds of calculator creators: talented academic researchers with a strong foundation in statistics. Typically a score is first derived by identifying multiple potentially correlated patient characteristics across a patient data set and applying a multiple regression analysis. The score should then be validated on a new patient population, hopefully one that is as different as possible from the first, and ideally validated by a new set of researchers. These are the calcs that make it onto MDCalc: not only derived, but also rigorously validated with a proven clinical utility. We don’t include every calculator under the sun (there are thousands – many don’t make the mark).
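As a rough, hypothetical sketch of the derive-then-validate process described above (not MDCalc’s or any real score’s methodology), the statistical core can look something like the following; the predictors, synthetic data, and point conversion here are invented purely for illustration:

```python
# Hypothetical sketch: derive a score via logistic regression on a derivation
# cohort, convert coefficients to integer "points", then check discrimination
# on a separate cohort. All data and predictors are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic "derivation cohort": three binary candidate predictors
# (e.g. age > 65, abnormal vitals, comorbidity) and an adverse outcome.
n = 1000
X_derive = rng.integers(0, 2, size=(n, 3))
true_effects = np.array([0.9, 0.6, 1.2])          # assumed, for simulation only
logits = -2.0 + X_derive @ true_effects
y_derive = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X_derive, y_derive)

# Coefficients are often rounded to integer points to form a bedside score.
points = np.round(model.coef_[0] / model.coef_[0].min()).astype(int)
print("Points per criterion:", points)

# "Validation cohort": ideally a different population studied by different
# researchers; here it is simply new synthetic data.
X_valid = rng.integers(0, 2, size=(500, 3))
y_valid = rng.random(500) < 1 / (1 + np.exp(-(-2.0 + X_valid @ true_effects)))
score = X_valid @ points
print("Validation AUC:", round(roc_auc_score(y_valid, score), 2))
```

In real derivations the candidate predictors, modeling choices, and validation cohorts are of course far more carefully chosen than in this toy example.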
The second part is what we do at MDCalc: bringing the calculators to the clinicians at the bedside. Originally this was a simple user interface – doctors who knew the name of a calculator and its ins and outs, but needed a quick calculation, would come to us while seeing patients. About seven years ago we started building content around each calc that a clinician could quickly read to help them best apply the score: when to use it, pearls and pitfalls, a synopsis of the evidence, and what to do next after a result is calculated – even advice from the calculator creator themselves. This allowed docs who may not be as familiar with a score to use it. Finally, we’ve recently created smart search tools so a clinician can essentially say, “I have a patient with a PE whom I would like to prognosticate and get treatment advice for.” By picking appropriate filters, they get a list of potential calculators with descriptions, and sometimes a short comparison article explaining when to use each score. This way docs who don’t even know the names of the scores can still incorporate robust evidence-based medicine into their practice.
Medgadget: For our clinical readership out there, can we run through an example of how to create a calculator and think about this process?
Dr. Rachel Kwon, Head of Content: One recent example of a high-impact score is the Hestia Criteria for pulmonary embolism (PE). It helps answer an important and practical clinical question: which patients with PE can be safely treated at home? Outpatient treatment is often better for patients, provided that it’s safe and they are low risk for complications, as it’s more convenient and avoids the risks that come with being admitted, like infection. The original Hestia study was published in 2011 on a multicenter prospective cohort of 297 patients in the Netherlands. The authors set predefined criteria for who would be eligible for outpatient treatment – criteria that are easily obtainable and make good clinical sense, such as hemodynamic stability, not requiring supplemental oxygen, absence of other medical and social reasons requiring admission, and others. The criteria were developed from several small observational studies on outpatient PE treatment. The Hestia authors applied the criteria to their own cohort, looking at patient-important clinical outcomes including recurrent VTE, major bleeding, and mortality, and with 100% follow-up, found that the criteria were able to determine who could be safely treated at home.
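To make the logic of a criteria-based tool like this concrete, here is a simplified, hypothetical sketch of an all-criteria-must-be-absent checklist; the items are a partial paraphrase for illustration only, and the full, exact Hestia Criteria are on MDCalc:

```python
# Hypothetical, simplified illustration of a Hestia-style checklist:
# a patient is considered for outpatient treatment only if every
# exclusion criterion is absent. Not the actual Hestia Criteria.
from dataclasses import dataclass

@dataclass
class PEPatient:
    hemodynamically_unstable: bool
    needs_supplemental_oxygen: bool
    high_bleeding_risk: bool
    medical_or_social_reason_for_admission: bool

def eligible_for_home_treatment(p: PEPatient) -> bool:
    """Return True only if no exclusion criterion is met."""
    exclusions = (
        p.hemodynamically_unstable,
        p.needs_supplemental_oxygen,
        p.high_bleeding_risk,
        p.medical_or_social_reason_for_admission,
    )
    return not any(exclusions)

# Example: a stable patient with no exclusions could be considered for
# outpatient treatment (always alongside clinical judgment).
patient = PEPatient(False, False, False, False)
print(eligible_for_home_treatment(patient))  # True
```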
Other academic researchers then studied the criteria in several other cohorts to validate them. The key studies can be seen in our Evidence Appraisal of the score on MDCalc, but briefly: Beam and colleagues looked specifically at rivaroxaban (Xarelto) to treat 106 patients at home meeting Hestia Criteria and found similar results (no deaths, major bleeding, or recurrent VTE); Den Exter and colleagues found in a randomized non-inferiority trial that among 275 patients (of 550 randomized) discharged who met Hestia Criteria, VTE recurrence was 1.1% (below 7%, which is generally considered acceptable); and Weeda and colleagues retrospectively studied a cohort of 577 patients and found no deaths among the 149 patients who would have been deemed low risk by the Hestia Criteria. This is typical of external validations, which may have slightly different results (either better or worse than what the original authors found), but in the case of a well-validated tool like the Hestia Criteria, an acceptable level of accuracy and safety holds across several different cohorts.
The original study authors also published post-hoc analyses, which are not quite external validations but can sometimes add additional insights from statistical analysis. These showed the feasibility and safety of discharging patients who met the criteria, including patients with right ventricular dysfunction, who are often considered high risk, and showed that the Hestia Criteria identified more patients who could safely be treated as outpatients than an alternative score, the simplified PESI (PE Severity Index).
Fun fact: unlike a lot of scores whose names are acronyms chosen as mnemonic devices, Hestia is named after the Greek goddess of domesticity and the home.
Medgadget: I know we’ve previously talked about how these medical calculators are clinical decision tools, and not rules to be followed. Can you please talk about the context for which these clinical decision tools should be used in conjunction with physician judgment?
Dr. Lee: There’s a lot of chatter about AI in medicine – and some of the folks pushing it may be missing the boat. These calculators fall apart when you remove clinical judgment from the picture – as they were never designed to replace clinical judgment, but rather to support it and be used along with it. That’s why the concept of comparing a score versus clinical judgment doesn’t really make sense. It’s a misunderstanding.
The modern clinician should have a strong understanding of the statistical aspects of the scores they apply, so they can use them correctly. For example, some docs mistakenly use one-directional rules in two directions. The Canadian CT Head Rule is an amazing tool for ruling out the need for head CTs after trauma, in up to ~70% of patients. It was designed specifically to be very sensitive, but not very specific – which means that if you meet the criteria, you very rarely need to get a CT (because it’s so sensitive), but if you do NOT meet the criteria, the rule doesn’t say you need to get a scan (it’s not designed to be specific). In fact, the rule is silent on those patients, and the clinician can and should use their judgment – not necessarily scanning the patient. This and other principles about scores are often misunderstood, which adversely affects patient care.
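A minimal sketch of this one-directional logic, using a hypothetical, simplified rule-out tool (not the actual Canadian CT Head Rule): a highly sensitive rule can say “imaging not needed” when its criteria are satisfied, but when they are not satisfied it makes no recommendation at all.

```python
# Hypothetical sketch of a one-directional (sensitive rule-out) decision tool.
# "Criteria not met" does NOT mean "order imaging"; the rule is silent there.
from enum import Enum

class Recommendation(Enum):
    IMAGING_NOT_NEEDED = "imaging not needed (rule-out criteria met)"
    USE_CLINICAL_JUDGMENT = "rule is silent; use clinical judgment"

def sensitive_rule_out(meets_low_risk_criteria: bool) -> Recommendation:
    if meets_low_risk_criteria:
        return Recommendation.IMAGING_NOT_NEEDED
    # The rule was not designed to be specific, so it makes no
    # recommendation for patients who do not meet the criteria.
    return Recommendation.USE_CLINICAL_JUDGMENT

print(sensitive_rule_out(True).value)
print(sensitive_rule_out(False).value)
```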
MDCalc does a good job at teaching these principles in the content around each score.
Medgadget: Is there an end in sight to the number of scores? What else is next?
Dr. Habboushe: Frankly, the rate of calculator creation just continues to increase with time… we went from 20 calculators 13 years ago, to 80 just four years ago, to over 365 today – with even more than that in our pipeline that may eventually be added to the site.
In addition to calculators, several medical societies have asked us to create official “guideline summaries” for them, to make the guidelines much more accessible and easier to use by the doctor at the bedside. We’ve now partnered with over six peer-reviewed medical journals, often creating a special section of the journal to publish content around clinical decision tools. We’re also working with other partners to provide CME credit when doctors read the content around each score, to be launched by 2019.
Finally – and probably most exciting – we’re developing robust and smart EHR integrations with several hospital partners and have over 15 large medical systems signed up in our EHR pilot program.
Link: MDCalc…