
Faculty, Staff and Student Publications
Publication Date
1-1-2025
Journal
AMIA Joint Summits on Translational Science Proceedings
Abstract
Accurate identification and categorization of suicidal events can yield better suicide precautions, reduce operational burden, and improve care quality in high-acuity psychiatric settings. Pre-trained language models offer promise for identifying suicidality from unstructured clinical narratives. We evaluated the performance of four BERT-based models using two fine-tuning strategies (multiple single-label classifiers and a single multi-label classifier) for detecting coexisting suicidal events in 500 annotated psychiatric evaluation notes. The notes were labeled for suicidal ideation (SI), suicide attempts (SA), exposure to suicide (ES), and non-suicidal self-injury (NSSI). RoBERTa outperformed the other models under the binary-relevance (multiple single-label) strategy (acc=0.86, F1=0.78), and MentalBERT (F1=0.74) exceeded BioClinicalBERT (F1=0.72). RoBERTa fine-tuned with a single multi-label classifier improved performance further (acc=0.88, F1=0.81), indicating that pre-training on domain-relevant data and the single multi-label classification strategy together enhance both efficiency and performance.
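The single multi-label strategy described in the abstract attaches one classification head with four independent sigmoid outputs (SI, SA, ES, NSSI) to the encoder, so a single forward pass scores all co-occurring events. Below is a minimal sketch of that setup, assuming the HuggingFace transformers and PyTorch libraries; the checkpoint name, decision threshold, and example note are illustrative and are not the authors' actual code or data.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    LABELS = ["SI", "SA", "ES", "NSSI"]  # ideation, attempt, exposure, non-suicidal self-injury

    # One encoder, one head with len(LABELS) outputs; problem_type selects
    # BCEWithLogitsLoss so the model can be fine-tuned on multi-label annotations.
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base",
        num_labels=len(LABELS),
        problem_type="multi_label_classification",
    )

    # Hypothetical note text for illustration only.
    note = "Patient reports passive suicidal ideation; denies prior attempts or self-injury."
    inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits       # shape (1, 4): one logit per event type
    probs = torch.sigmoid(logits).squeeze(0)  # independent probabilities, not a softmax
    predicted = [lab for lab, p in zip(LABELS, probs) if p > 0.5]
    print(dict(zip(LABELS, [round(p.item(), 3) for p in probs])), predicted)

Under the binary-relevance (multiple single-label) alternative, the same encoder would instead be fine-tuned once per label as four separate binary classifiers, which multiplies training and inference cost and is why the single multi-label head is the more efficient choice reported above.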
PMID
40502237
PMCID
PMC12150747
PubMedCentral® Posted Date
6-10-2025
PubMedCentral® Full Text Version
Post-print
Published Open-Access
yes
Included in
Medical Sciences Commons, Mental and Social Health Commons, Psychiatry Commons, Psychiatry and Psychology Commons