Faculty, Staff and Student Publications
Language
English
Publication Date
1-1-2026
Journal
Artificial Intelligence Review
DOI
10.1007/s10462-025-11423-3
PMID
41278248
PMCID
PMC12634706
PubMedCentral® Posted Date
11-20-2025
PubMedCentral® Full Text Version
Post-print
Abstract
The exponential growth of Large Language Models (LLMs) continues to highlight the need for efficient strategies to meet ever-expanding computational and data demands. This survey provides a comprehensive analysis of two complementary paradigms: Knowledge Distillation (KD) and Dataset Distillation (DD), both aimed at compressing LLMs while preserving their advanced reasoning capabilities and linguistic diversity. We first examine key methodologies in KD, such as task-specific alignment, rationale-based training, and multi-teacher frameworks, alongside DD techniques that synthesize compact, high-impact datasets through optimization-based gradient matching, latent space regularization, and generative synthesis. Building on these foundations, we explore how integrating KD and DD can produce more effective and scalable compression strategies. Together, these approaches address persistent challenges in model scalability, architectural heterogeneity, and the preservation of emergent LLM abilities. We further highlight applications across domains such as healthcare and education, where distillation enables efficient deployment without sacrificing performance. Despite substantial progress, open challenges remain in preserving emergent reasoning and linguistic diversity, enabling efficient adaptation to continually evolving teacher models and datasets, and establishing comprehensive evaluation protocols. By synthesizing methodological innovations, theoretical foundations, and practical insights, our survey charts a path toward sustainable, resource-efficient LLMs through the tighter integration of KD and DD principles.
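Illustrative note: the abstract names classic knowledge-distillation objectives without spelling them out; the snippet below is a minimal, hedged sketch of the standard soft-label KD loss (temperature-softened KL between teacher and student plus hard-label cross-entropy, in the style of Hinton et al.), not a method taken from this paper. The temperature, weighting, and toy tensors are illustrative assumptions.

# Minimal sketch of soft-label knowledge distillation; values below are toy assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Temperature-softened log-probabilities for teacher and student.
    soft_teacher = F.log_softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL(teacher || student) on softened distributions, rescaled by T^2 as is conventional.
    kd = F.kl_div(soft_student, soft_teacher, log_target=True, reduction="batchmean") * (T * T)
    # Ordinary cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: random logits for 4 examples over a 10-way output space.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))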
Keywords
Large language models, Knowledge distillation, Dataset distillation, Efficiency, Model compression, Survey
Published Open-Access
yes
Recommended Citation
Fang, Luyang; Yu, Xiaowei; Cai, Jiazhang; et al., "Knowledge Distillation and Dataset Distillation of Large Language Models: Emerging Trends, Challenges, and Future Directions" (2026). Faculty, Staff and Student Publications. 6736.
https://digitalcommons.library.tmc.edu/uthgsbs_docs/6736
Included in
Bioinformatics Commons, Biomedical Informatics Commons, Genetic Phenomena Commons, Medical Genetics Commons, Oncology Commons