PhD position: Causal Inference Solutions for the Prediction Paradox in Deployed AI-based Prediction Models
Join the Data Science group at the Julius Center (UMC Utrecht) to develop, evaluate, and apply innovative methods to enhance AI-based prediction in healthcare. Contribute to cutting-edge research in a collaborative environment.
This PhD position is part of the VIDI project The MOT (‘APK’) for safe and effective predictive AI in healthcare: methods for periodic tests and revision, funded by ZonMw. The project brings together experts in prediction modeling, causal inference, and data science/AI, and aims to yield novel insights into how we can better monitor the performance of (AI-based) prediction models that are deployed in healthcare settings. This is especially challenging for prediction models that are used to support medical decisions that influence the very outcome being predicted.
The PhD candidate will:
The Data Science team at the Julius Center is a growing group of researchers working on methods and applications of AI in healthcare. The PhD candidate will be embedded in the AI methods lab of the UMC Utrecht. Furthermore, you will work in close collaboration with clinical experts and with experts on the deployment and quality control of AI at the UMC Utrecht.
You will work together in a diverse team of excellent researchers in the field of prediction models and causal inference at the Julius Center. The supervision team will consist of Dr Maarten van Smeden, Prof Dr Ewout Steyerberg, Dr Wouter van Amsterdam and Dr Oisin Ryan.
Background
Prediction models based on Artificial Intelligence (AI) play an increasingly important role across medical specialties, with the aim of supporting medical decision making for individual patients. While it is widely recognized that AI tools need periodic tests and maintenance (“updating”) to guarantee safety and effectiveness in medical decision making, there is currently no agreement on how and how frequently such tests must be done. Using in-depth methods research and real-world use cases of implemented AI-based prediction models, this project explores how to determine how often an AI-based prediction model needs testing, and to what degree deteriorated predictions can be foreseen and prevented. The ambition is to provide a new framework with concrete guidance on performing periodic tests and maintenance: an MOT, not for motor vehicles, but for safe and effective AI in healthcare.
The prediction paradox describes the phenomenon that predictions can influence behaviors that in turn invalidate those predictions. Consider the following hypothetical example: a model predicting 1-year cardiovascular disease risk is developed and deployed in a hospital, where only patients with a predicted risk above 5% receive preventive treatment. The preventive treatment is effective, meaning that the actual 1-year cardiovascular disease risk of treated patients is lowered. A little more than a year post-deployment, the hospital monitoring the performance of the prediction model finds that the predicted risks of patients with a predicted risk above 5% are higher than the observed risks of these patients, precisely because these patients received preventive treatment. Clearly, the prediction model has done its job: it identified high-risk patients who then benefited from treatment. However, its predictions no longer correspond with observed risks, so the model appears to be miscalibrated.
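To make the mechanism concrete, the following is a minimal simulation sketch of this hypothetical scenario in Python. The logistic form of the baseline risks and the 40% relative risk reduction from treatment are illustrative assumptions introduced here; only the 5% treatment threshold comes from the example above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical patient population; true 1-year CVD risk if left UNTREATED.
# Assume the deployed model predicts this untreated risk perfectly at deployment.
x = rng.normal(size=n)
risk_untreated = 1 / (1 + np.exp(-(-3.5 + 1.0 * x)))  # baseline risks, roughly 0-30%
predicted_risk = risk_untreated                        # model is correct pre-treatment

# Hospital policy from the example: treat everyone with predicted risk > 5%.
treated = predicted_risk > 0.05

# Assumption: preventive treatment is effective, here a 40% relative risk reduction.
risk_actual = np.where(treated, 0.6 * risk_untreated, risk_untreated)

# Observed outcomes about a year after deployment.
events = rng.binomial(1, risk_actual)

# Post-deployment monitoring: compare mean predicted vs observed risk in treated patients.
print("treated patients (predicted risk > 5%):")
print("  mean predicted risk:", round(predicted_risk[treated].mean(), 3))
print("  observed event rate:", round(events[treated].mean(), 3))
# Observed rate falls below mean predicted risk precisely because the model's
# predictions triggered effective treatment: the prediction paradox.
```

In this sketch the model's predictions are exactly right for untreated patients, yet monitoring restricted to the treated group shows an observed event rate well below the mean predicted risk, reproducing the apparent miscalibration described above.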
Profile
© BSL Media & Learning, onderdeel van Springer Nature