Abstract — Machine learning (ML) has the potential to improve healthcare outcomes, but it must be applied carefully to avoid amplifying existing health disparities. To that end, researchers should identify and address unjust biases in ML algorithms by testing for shortcut learning, which occurs when a predictive model relies on spurious correlations in its training data rather than genuinely meaningful features. Researchers should also take a holistic approach to mitigating unfairness in ML-based systems by considering all plausible sources of bias. This article was authored by Alexander Brown, Nenad Tomasev, Jan Freyberg, and others.
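To make the idea of shortcut learning concrete, the following is a minimal synthetic sketch (not taken from the article): a classifier is trained on data where a "shortcut" feature, standing in for something like a scanner ID or hospital site, happens to agree with the label 95% of the time, alongside a weak but genuine signal. When the spurious correlation is broken at evaluation time, accuracy collapses, revealing that the model leaned on the shortcut. All names and parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_corr):
    """Synthetic cohort with one genuine and one spurious feature."""
    y = rng.integers(0, 2, n)
    # genuine but noisy predictor of the label
    signal = y + rng.normal(0.0, 1.5, n)
    # shortcut feature: matches the label with probability `shortcut_corr`
    agrees = rng.random(n) < shortcut_corr
    shortcut = np.where(agrees, y, 1 - y).astype(float)
    X = np.column_stack([signal, shortcut])
    return X, y

# training data: shortcut agrees with the label 95% of the time
X_tr, y_tr = make_data(5000, 0.95)
# deployment-like data: the correlation is broken (50% = pure noise)
X_te, y_te = make_data(5000, 0.50)

model = LogisticRegression().fit(X_tr, y_tr)
acc_tr = model.score(X_tr, y_tr)
acc_te = model.score(X_te, y_te)
print(f"train accuracy: {acc_tr:.2f}, shifted-test accuracy: {acc_te:.2f}")
```

Comparing performance across such distribution shifts is one simple way to probe for shortcut reliance; the large train-to-test gap here is the symptom the abstract warns about.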