OrderRex: Doctors who ordered this also ordered…
If your family member was ill and arrived at the hospital, you might expect them to receive a consistently excellent standard of care regardless of where they are and which particular doctor is on shift that day. The reality is instead pervasive variability and uncertainty in medical practice, leading some people to receive subpar or inefficient healthcare. Examples range from 25% of patients with a heart attack not receiving the aspirin they should, to overall compliance with evidence-based guidelines varying from 20% to 80%. To be fair, much of this variability stems from uncertainty: the majority of clinical decisions (e.g., a third of surgeries to place pacemakers or ear tubes, and up to 90% of clinical practice guideline recommendations) lack high-quality scientific evidence to support or refute their value. The status quo is thus medical practice routinely driven by individual expert opinion and anecdotal experience.
The progressive adoption of electronic medical records to digitally record patient information and physician medical decisions creates new opportunities to inform medical practice with the collective expertise of many practitioners. Instead of consulting an individual doctor for advice, data-driven approaches can allow us to effectively consult every doctor on how they care for their similar patients. Just as Netflix or Amazon can determine “customers who liked this movie also like this movie,” we developed an analogous computer algorithm to make determinations like “doctors who ordered this lab test also tended to order this medication.” In this paper, we investigate whether such an approach is actually predictive of real physician practices and patient outcomes.
To achieve this, we extracted one year of electronic medical records from Stanford University Hospital (>5.4 million structured data items from >18,000 patients, including physician orders, lab results, and diagnosis codes). From this, we implemented an algorithm based on Amazon’s item recommendation algorithm to systematically identify clinical event co-occurrences (e.g., “how often does a patient get a chest X-ray within 1 hour of getting an EKG,” or “how often does a patient admitted for pneumonia require intensive care mechanical ventilation within 24 hours”). For a separate set of patient emergency room records, we used the above system to anticipate the ten most likely physician orders the patient would need during their initial hospitalization, improving upon the benchmark of using a generic “best-seller” list of common physician orders.
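The core idea of item-based co-occurrence recommendation can be sketched in a few lines. The toy data, item names, and the `recommend` scoring function below are illustrative assumptions, not the paper's actual implementation; OrderRex operates on millions of structured EMR items and uses richer association statistics, but the basic "patients who had this item also had that item" counting looks roughly like this:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy data: each patient is a set of clinical items
# (physician orders, lab results, diagnoses) drawn from the EMR.
patients = [
    ["chest_xray", "ekg", "aspirin", "troponin"],
    ["ekg", "aspirin", "troponin", "heparin"],
    ["chest_xray", "ekg", "troponin"],
    ["cbc", "chest_xray", "blood_culture"],
]

item_counts = Counter()  # number of patients with each item
pair_counts = Counter()  # number of patients with both items

for items in patients:
    unique = set(items)
    item_counts.update(unique)
    for a, b in combinations(sorted(unique), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def recommend(query_item, top_n=3):
    """Rank co-occurring items by P(item | query_item): the fraction
    of patients with the query item who also had the candidate."""
    scores = {
        b: pair_counts[(a, b)] / item_counts[query_item]
        for (a, b) in pair_counts
        if a == query_item
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(recommend("ekg"))  # e.g., troponin ranks first (all EKG patients had it)
```

A "best-seller" baseline, by contrast, would simply rank items by `item_counts` alone, ignoring the query item; the paper's contribution is showing that query-conditioned rankings like this anticipate actual physician orders better than that baseline.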
We further reasoned that, if this technical infrastructure can predict the probability that a doctor will order an aspirin, it should similarly be able to predict any clinical event, including patient outcomes like death or the need for intensive care life support. Based on information from a patient's first hospital day, we indeed found we could distinguish patients who would die within 30 days, or require life support within 1 week, from those who did not with better than 84% accuracy, on par with previous state-of-the-art prognosis scoring systems.
The key concern with these results is whether the common practice patterns they are based on represent ideal ones. Further evaluation is necessary to determine whether previous clinical practice reflects the wisdom of the crowd or the tyranny of the mob. Ultimately, however, whether such algorithmic recommendations are "correct" or "smarter" than a human physician may not be the most relevant question. Rather, we compare the status quo of physicians practicing based on individual experience against the possibility of your doctor making medical decisions informed by the collective experience of thousands of other physicians, right at the point of care.
OrderRex: Clinical order decision support and outcome predictions by data-mining electronic medical records.
Chen JH, Podchiyska T, Altman RB.
J Am Med Inform Assoc. 2015 Jul 21