
Faculty, Staff and Student Publications
Publication Date
5-1-2025
Journal
PLOS Digital Health
Abstract
Predictive models in biomedicine must deliver equitable and reliable outcomes for the populations to which they are applied. Biases in AI models for medical prediction can lead to unfair treatment and widen disparities, underscoring the need for effective mitigation techniques. However, current approaches struggle to simultaneously mitigate biases induced by multiple sensitive features in biomedical data. To enhance fairness, we introduce a framework based on a Multiple Domain Adversarial Neural Network (MDANN), which incorporates multiple adversarial components. In an MDANN, adversarial modules learn fair representations by back-propagating negated gradients across multiple sensitive features (i.e., patient characteristics that should not influence a prediction outcome and that may, intentionally or unintentionally, produce disparities in clinical decisions). The MDANN applies loss functions based on the Area Under the Receiver Operating Characteristic Curve (AUC) to address class imbalance, promoting equitable classification performance for minority groups (i.e., subsets of the population that are underrepresented or disadvantaged). Moreover, we use pre-trained convolutional autoencoders (CAEs) to extract deep representations of the data, aiming to improve both prediction accuracy and fairness. Combining these mechanisms, we mitigate multiple biases and disparities to provide reliable and equitable disease prediction. We empirically demonstrate that the MDANN achieves better accuracy and fairness than other adversarial networks when predicting disease progression from brain imaging data while mitigating multiple demographic biases in Alzheimer's Disease and Autism populations.
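To make the mechanism the abstract describes concrete, the sketch below shows a minimal multi-adversary network in PyTorch: a shared encoder feeds a disease classifier, while one adversarial head per sensitive attribute sits behind a gradient reversal layer, and a pairwise surrogate loss approximates AUC for imbalanced classes. This is an illustrative sketch under stated assumptions, not the authors' implementation: the names (`MDANNSketch`, `GradReverse`, `pairwise_auc_loss`), layer sizes, and the plain linear encoder standing in for the paper's pre-trained CAE are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; negates (and scales) gradients in backward,
    so the encoder is pushed to hide sensitive-feature information."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class MDANNSketch(nn.Module):
    """Shared encoder -> disease classifier, plus one adversarial head per
    sensitive feature (e.g., sex, age group), each behind gradient reversal.
    A linear encoder stands in for the paper's pre-trained CAE features."""
    def __init__(self, in_dim, hidden_dim, n_classes, sensitive_dims, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, n_classes)
        # One adversary per sensitive attribute; its reversed gradients train
        # the encoder toward representations the adversaries cannot exploit.
        self.adversaries = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in sensitive_dims])

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.classifier(z)
        adv_logits = [adv(grad_reverse(z, self.lambd)) for adv in self.adversaries]
        return y_logits, adv_logits


def pairwise_auc_loss(scores, labels):
    """Smooth surrogate for 1 - AUC: a logistic penalty on every
    positive/negative pair where the positive is not ranked higher."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.new_tensor(0.0)
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)  # all pairwise score gaps
    return F.softplus(-diff).mean()
```

In a training loop under these assumptions, the total objective would combine the AUC surrogate on the disease scores with cross-entropy terms on each adversary's logits; because the adversary gradients pass through `grad_reverse`, minimizing that sum trains the heads to predict the sensitive attributes while simultaneously training the encoder to remove them from the shared representation.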
DOI
10.1371/journal.pdig.0000830
PMID
40445951
PMCID
PMC12124548
PubMedCentral® Posted Date
5-30-2025
PubMedCentral® Full Text Version
Post-print
Published Open-Access
yes