Final Project - Data Science B.Sc. Program - TAU - 2024-2025: Speech Decoding from Brain's Single Neuron Recordings Using Deep Learning Architectures
Individuals with neurological disorders, e.g., ALS, brainstem stroke, or traumatic brain injury, may experience severe impairments in their ability to speak, leaving them unable to communicate even their most basic needs. In this project, we aimed to tackle this important problem by developing a model that decodes speech directly from electrical brain signals.
Our project aimed to develop and compare Deep Learning (DL) models for offline decoding of vowel articulations directly from the electrical activity of single neurons in the brains of epilepsy patients.
Experimental results show that, among the models we tested, no single model performs best across all individuals; at the same time, they demonstrate the technical feasibility of decoding speech elements from single-neuron recordings.
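To illustrate the kind of per-patient model comparison this finding implies, here is a minimal, purely hypothetical sketch. The real data and DL architectures are not public, so synthetic spike-count features and simple off-the-shelf classifiers stand in for them; all sizes, names, and models below are assumptions for illustration only.

```python
# Hypothetical sketch of a per-patient model comparison.
# Synthetic spike counts stand in for the (private) single-neuron data,
# and simple classifiers stand in for the project's DL architectures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

N_TRIALS, N_NEURONS, N_VOWELS = 120, 20, 5  # assumed sizes

def synthetic_patient(seed):
    """Fake spike-count matrix (trials x neurons) with vowel labels."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, N_VOWELS, N_TRIALS)
    centers = rng.normal(0, 1, (N_VOWELS, N_NEURONS))
    X = centers[y] + rng.normal(0, 1.5, (N_TRIALS, N_NEURONS))
    return X, y

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

# The best-scoring model may differ from patient to patient, mirroring
# the finding that no single architecture wins across individuals.
best_per_patient = {}
for patient in range(3):
    X, y = synthetic_patient(patient)
    scores = {name: cross_val_score(m, X, y, cv=5).mean()
              for name, m in models.items()}
    best_per_patient[patient] = max(scores, key=scores.get)
print(best_per_patient)
```

The per-patient winner is selected by cross-validated accuracy; with real recordings, the same loop would simply swap in the actual feature extraction and candidate networks.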
This project was conducted as a collaboration between the School of Industrial & Intelligent Systems Engineering at Tel Aviv University and Dr. Tankus from Sourasky Medical Center (Ichilov). The work builds upon earlier research by Dr. Tankus, which demonstrated the classification of two vowel sounds from the neural recordings of a single patient. Our project took this further, extending it to more phonemes and more patients. Ultimately, this work could pave the way for brain-computer interfaces (BCIs) that restore speech communication in completely paralyzed patients.
The project ran from October 2024 to July 2025 and was selected as one of the outstanding projects of our program (2025).
The project's final grade: 97.
Full implementation details can be found in the Final Report (PDF) and the Executive Summary (PDF) in this repository.
To protect patient privacy, the project's data and code cannot be shared here.