Trust The AI? You Decide

Posted in biotech/medical, information science, robotics/AI

Trust in AI. If you're a clinician, would you trust this AI?

Clearly, sepsis treatment deserves focused attention, which is what Epic gave it. But in doing so, the company raised several thorny questions. Should the model be recalibrated for each discrete implementation? Are its workings transparent? Should such algorithms publish confidence along with their predictions? Are humans sufficiently in the loop to ensure that the algorithm's outputs are being interpreted and implemented appropriately?
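The question of publishing confidence alongside a prediction can be made concrete. Here is a minimal sketch, not Epic's actual model: the feature names, weights, and threshold are invented for illustration. The point is simply that a scoring function can return both the binary alert a dashboard would show and the probability behind it, so a clinician can judge how much weight the alert deserves:

```python
import math

def sepsis_risk(features, weights, bias, threshold=0.5):
    """Logistic-regression-style risk score.

    Returns (alert, probability): the binary alert a dashboard might
    display, plus the model's confidence, so both can be published
    rather than the alert alone.
    """
    logit = bias + sum(w * features[name] for name, w in weights.items())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability >= threshold, probability

# Invented example weights -- purely illustrative, not clinical guidance.
weights = {"heart_rate": 0.03, "temp_c": 0.8, "lactate": 0.5}
vitals = {"heart_rate": 110, "temp_c": 38.5, "lactate": 3.2}
alert, p = sepsis_risk(vitals, weights, bias=-35.0)
```

Surfacing `p` alongside `alert` is what lets a hospital distinguish a marginal 51% call from a near-certain one, which bears directly on whether humans stay meaningfully in the loop.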


Earlier this year, I wrote about fatal flaws in algorithms developed to mitigate the COVID-19 pandemic. Researchers found two general types of flaws. The first: model makers used small data sets that didn't represent the universe of patients the models were intended to serve, leading to sample-selection bias. The second: modelers failed to disclose their data sources, their data-modeling techniques, and the potential for bias in either the input data or the algorithms used to train their models, leading to design-related bias. As a result of these fatal flaws, such algorithms were inarguably less effective than their developers had promised.
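The first kind of flaw can sometimes be caught with a sanity check before any training happens: compare the training cohort's makeup against the population the model will actually serve. A toy sketch, with prevalence figures invented purely for illustration:

```python
def prevalence_gap(train_labels, population_rate, alarm=0.10):
    """Crude sample-selection-bias check.

    Compares the positive rate in the training labels against the
    known rate in the deployment population and flags a large gap.
    The 10-point alarm threshold is an arbitrary choice.
    """
    train_rate = sum(train_labels) / len(train_labels)
    gap = abs(train_rate - population_rate)
    return gap, gap > alarm

# Invented numbers: a cohort in which 50% of patients are positive,
# destined for a population where the true rate is around 8%.
gap, biased = prevalence_gap([1, 1, 0, 0, 1, 0, 0, 1, 1, 0], 0.08)
```

A check like this wouldn't prove a model is fair or accurate, but a 40-point gap between cohort and population is exactly the kind of mismatch the COVID-era models shipped with.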

Now comes a flurry of articles on an algorithm developed by Epic to provide an early warning tool for sepsis. According to the CDC, “sepsis is the body’s extreme response to an infection. It is a life-threatening medical emergency and happens when an infection you already have triggers a chain reaction throughout your body. Without timely treatment, sepsis can rapidly lead to tissue damage, organ failure, and death. Nearly 270,000 Americans die as a result of sepsis.”
