Faculty, Staff and Student Publications

Language

English

Publication Date

10-1-2025

Journal

Nature Communications

DOI

10.1038/s41467-025-63825-0

PMID

41034198

PMCID

PMC12488916

PubMedCentral® Posted Date

10-1-2025

PubMedCentral® Full Text Version

Post-print

Abstract

Speech brain-computer interfaces (BCIs) combine neural recordings with large language models to achieve real-time intelligible speech. However, these decoders rely on dense, intact cortical coverage and are challenging to scale across individuals with heterogeneous brain organization. To derive scalable transfer learning strategies for neural speech decoding, we used minimally invasive stereo-electroencephalography recordings in a large cohort performing a demanding speech motor task. A sequence-to-sequence model enabled decoding of variable-length phonemic sequences prior to and during articulation. This supported a cross-subject transfer learning framework that isolates shared latent manifolds while allowing individualized model initialization. The group-derived decoder significantly outperformed models trained on individual data alone, remaining robust despite variable coverage and activation. These results highlight a pathway toward generalizable neural prostheses for speech and language disorders by leveraging large-scale intracranial datasets with distributed spatial sampling and shared task demands.
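The cross-subject idea summarized above (subject-specific mappings into a shared latent space, with a group decoder reused across individuals) can be illustrated with a minimal toy sketch. This is not the paper's model: the linear encoders, nearest-centroid "phoneme" classifier, subject names, channel counts, and simulated data below are all illustrative assumptions, standing in for the seq2seq decoder and real stereo-EEG recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a shared k-dim latent "speech manifold" drives every subject's
# recordings, but each subject has a different (hypothetical) electrode count,
# mimicking heterogeneous coverage.
k, T = 4, 600                                   # latent dims, samples per subject
channels = {"S1": 24, "S2": 40, "S3": 16}       # assumed channel counts

Z = rng.standard_normal((T, k))                 # shared latent trajectories
labels = (Z[:, 0] + 0.5 * Z[:, 1] > 0).astype(int)  # toy 2-class "phoneme" label

data = {}
for s, c in channels.items():
    mix = rng.standard_normal((k, c))           # subject-specific electrode mixing
    data[s] = Z @ mix + 0.1 * rng.standard_normal((T, c))

def fit_encoder(X, Z_target):
    """Least-squares linear map from a subject's channels into the shared latent."""
    W, *_ = np.linalg.lstsq(X, Z_target, rcond=None)
    return W

# Fit per-subject encoders, pool latents from two subjects, and train one
# shared group decoder (here a nearest-centroid classifier) on the pooled data.
enc = {s: fit_encoder(X, Z) for s, X in data.items()}
pooled = np.vstack([data[s] @ enc[s] for s in ("S1", "S2")])
pooled_y = np.concatenate([labels, labels])
centroids = np.stack([pooled[pooled_y == c].mean(axis=0) for c in (0, 1)])

def predict(latent):
    d = ((latent[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Transfer: for the new subject S3, only a cheap subject-specific encoder is
# fit; the group decoder is reused unchanged.
z3 = data["S3"] @ enc["S3"]
acc = (predict(z3) == labels).mean()
print(f"transfer accuracy on new subject: {acc:.2f}")
```

In this sketch, the group decoder generalizes to the third subject because all subjects share the latent structure; only the per-subject projection differs, which is the transfer-learning intuition the abstract describes.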

Keywords

Humans, Brain-Computer Interfaces, Speech, Brain, Electroencephalography, Male, Female, Adult, Young Adult, Middle Aged, Learning, Neural decoding, Brain injuries

Published Open-Access

yes
