Code for brain-to-text communication via handwriting research paper
This repository provides code for reproducing high-performance neural decoding of attempted handwriting movements, enabling brain-to-text communication. It is intended for researchers and engineers in the BCI and neuroscience fields. The project offers a complete pipeline from raw neural data to text output, with significant performance gains achieved through language model integration.
How It Works
The pipeline decodes neural data into character sequences using a Recurrent Neural Network (RNN), specifically GRU layers, which are well-suited for sequential data. It incorporates a multi-stage language modeling approach, starting with a bigram model and progressing to a GPT-2 rescoring step. This layered approach refines the decoded output, significantly reducing character and word error rates by leveraging linguistic context.
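As a rough illustration of this layered idea (a minimal sketch, not the repository's actual implementation), the code below shows the two ingredients named above: a single GRU time step that turns a neural feature vector into an updated hidden state, and a simple rescoring function that combines RNN log-probabilities with bigram language-model scores to pick among candidate transcriptions. The function names, weight layout, and the `alpha` mixing weight are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU time step on feature vector x with hidden state h.

    p holds the nine GRU parameters: (Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh),
    i.e. input/recurrent weights and biases for each gate.
    """
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = p
    z = sigmoid(x @ Wz + h @ Uz + bz)             # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)             # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh + bh)  # candidate state
    return (1 - z) * h + z * h_cand

def lm_rescore(candidates, rnn_logprobs, bigram_logprob, alpha=0.5, oov=-10.0):
    """Pick the candidate maximizing RNN log-prob + alpha * bigram LM log-prob.

    bigram_logprob maps character pairs to log-probabilities; unseen pairs
    fall back to the oov penalty.
    """
    best, best_score = None, -np.inf
    for cand, rnn_lp in zip(candidates, rnn_logprobs):
        lm_lp = sum(bigram_logprob.get(pair, oov) for pair in zip(cand, cand[1:]))
        score = rnn_lp + alpha * lm_lp
        if score > best_score:
            best, best_score = cand, score
    return best
```

In the actual pipeline the per-timestep GRU outputs feed a character classifier, and the rescoring happens over full candidate sentences (with the final GPT-2 pass playing the role of a far stronger language model than the bigram table sketched here).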
Quick Start & Requirements
Highlighted Details
Maintenance & Community
The project is associated with a specific manuscript and preprint. No information on community channels or ongoing maintenance is provided in the README.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The project pins older dependency versions (notably TensorFlow 1.15), which may pose compatibility challenges in modern environments. The ~100 GB of generated data and external dependencies such as Kaldi and GPT-2 models add further complexity to setup and reproduction.