Faculty, Staff and Student Publications
Language
English
Publication Date
2-1-2026
Journal
Journal of Applied Clinical Medical Physics
DOI
10.1002/acm2.70495
PMID
41708070
PMCID
PMC12916175
PubMedCentral® Posted Date
2-18-2026
PubMedCentral® Full Text Version
Post-print
Abstract
Background: Usability engineering is essential for ensuring the safety and effectiveness of medical software, as design-related issues are a leading cause of use errors in clinical settings. Heuristic evaluation provides a practical approach to identifying usability problems, but its outcomes depend heavily on expert interpretation. Large Language Models (LLMs), such as ChatGPT, offer a potential means to augment heuristic evaluation by generating structured, context-aware usability feedback. This study explored the use of ChatGPT to support heuristic assessment of the Radiation Planning Assistant (RPA), a web-based radiotherapy planning tool designed to support clinical teams in low- and middle-income countries.
Methods: ChatGPT was provided with the RPA user and technical guides, training videos for each functional dashboard, and Zhang et al.'s 14 usability heuristics. The model was instructed to score each dashboard against these heuristics on Zhang et al.'s 0-4 severity scale and to propose concrete interface improvements. The resulting feedback was reviewed and scored independently by the RPA developer team and by 13 users during a dedicated User Meeting. Comparative analysis was then performed among the ChatGPT, developer, and user ratings (an illustrative sketch of such a comparison follows the abstract).
Results: ChatGPT identified 26 potential usability issues across six heuristic domains. The developer team considered nine of these actionable, though all were classified as minor (severity ≤ 2). User ratings showed wide variability, with nine suggestions achieving mean scores ≥ 1.5. Qualitative agreement between users and developers was limited, underscoring the importance of diverse perspectives in heuristic evaluation. Three suggestions (enhanced upload logs, reversible actions ("reopen request"), and stronger error prevention) were rated as potentially high priority by a minority of users. ChatGPT's ratings were consistent across dashboards.
Conclusions: While ChatGPT did not reveal any critical usability failures, its heuristic assessment proved valuable in prompting discussion, identifying minor refinements, and enriching both developer and user engagement with the RPA's interface design. This study demonstrates that LLMs can serve as an effective, low-cost complement to conventional heuristic evaluation, supporting early-stage usability review and stakeholder dialogue in the development of medical software.
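Illustrative sketch
The abstract does not specify how the comparative analysis was implemented; the Python sketch below shows one minimal way such a comparison could look. All suggestion names, scores, and field names are hypothetical and are not taken from the study; only Zhang et al.'s 0-4 severity scale and the ≥ 1.5 mean-score threshold mentioned in the Results are drawn from the abstract.

    # Minimal sketch (hypothetical data): comparing severity ratings assigned by
    # ChatGPT, the developer team, and individual users to the same usability
    # suggestions, all on Zhang et al.'s 0-4 severity scale.
    from statistics import mean

    # Each suggestion maps to one ChatGPT score, one consensus developer score,
    # and a list of individual user scores (values are illustrative only).
    ratings = {
        "enhanced upload logs":        {"chatgpt": 2, "developers": 1, "users": [3, 2, 1, 2, 0]},
        "reversible 'reopen request'": {"chatgpt": 2, "developers": 2, "users": [2, 3, 1, 1, 2]},
        "stronger error prevention":   {"chatgpt": 3, "developers": 2, "users": [2, 2, 3, 1, 1]},
    }

    for suggestion, r in ratings.items():
        user_mean = mean(r["users"])
        # Flag items whose mean user rating reaches the 1.5 severity cut-off,
        # mirroring the shortlisting threshold described in the Results.
        flag = "candidate for follow-up" if user_mean >= 1.5 else "low priority"
        print(f"{suggestion}: ChatGPT={r['chatgpt']}, developers={r['developers']}, "
              f"user mean={user_mean:.1f} -> {flag}")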
Keywords
Humans, Software, Radiotherapy Planning, Computer-Assisted, Heuristics, User-Computer Interface, Programming Languages, Neoplasms, Large Language Models, Heuristic evaluation, Large language models, Usability, User interface design
Published Open-Access
yes
Recommended Citation
Court, Laurence E; Smit, Jacobus; Strauss, Lourens; et al., "Leveraging Large Language Models for Heuristic Usability Assessment of Medical Software: Insights With the Radiation Planning Assistant" (2026). Faculty, Staff and Student Publications. 6325.
https://digitalcommons.library.tmc.edu/uthgsbs_docs/6325
Included in
Bioinformatics Commons, Biomedical Informatics Commons, Genetic Phenomena Commons, Medical Genetics Commons, Oncology Commons