Vision-language model interface for reMarkable 2 e-ink tablet
This project provides a framework for using the reMarkable 2 tablet as an interface to vision-language models (VLMs), enabling users to interact with AI by writing and drawing on the device. It targets reMarkable 2 and Paper Pro users who want to leverage advanced AI capabilities directly on their e-ink device for tasks such as generating images from handwritten prompts or assisting with note-taking.
How It Works
The core approach is to capture screen content (a screenshot or drawing) from the reMarkable, send it along with a text prompt to a selected model (e.g., OpenAI's GPT-4o, Anthropic's Claude, Google's Gemini), and display the model's response back on the reMarkable's screen. It supports several output formats, including rasterized dots and SVGs, and includes modes for text-based assistance via a virtual keyboard. The system is designed to be extensible, allowing different model backends and prompt/tool configurations.
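As a rough illustration of this round trip (not ghostwriter's actual code), the sketch below sends a captured screen image plus a text prompt to OpenAI's chat completions endpoint; the file name, prompt text, and destination path are placeholder assumptions.

```
# Illustrative sketch only: send a screen capture plus a text prompt
# to a vision-capable model. screen.png stands in for a framebuffer
# capture pulled off the device.
IMG_B64=$(base64 -w0 screen.png)   # -w0: no line wrapping (GNU base64)

curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "model": "gpt-4o",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "text", "text": "Answer the handwritten question on this page."},
      {"type": "image_url",
       "image_url": {"url": "data:image/png;base64,${IMG_B64}"}}
    ]
  }]
}
EOF
```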
Quick Start & Requirements
Copy the ghostwriter binary to the reMarkable with scp. Make it executable (chmod +x ./ghostwriter), set your API key (export OPENAI_API_KEY=your-key-here), and run ./ghostwriter or ./ghostwriter --model <model-name> on the reMarkable. Trigger the assistant by tapping the upper-right corner of the screen.
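A typical setup pass, assuming the tablet is reachable over USB at its usual 10.11.99.1 address (the address, destination path, and model name here are illustrative, not prescribed by the project):

```
# Hypothetical install-and-run sequence from a host machine.
scp ghostwriter root@10.11.99.1:/home/root/   # copy the binary over USB
ssh root@10.11.99.1                           # log in to the tablet
chmod +x ./ghostwriter                        # make it executable
export OPENAI_API_KEY=your-key-here           # credentials for the chosen backend
./ghostwriter --model gpt-4o                  # or plain ./ghostwriter for the default
```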
Highlighted Details
Maintenance & Community
The project is actively developed, with frequent updates and additions noted in the journal. The developer has been responsive to community feedback, including requests for reMarkable Paper Pro support.
Licensing & Compatibility
The README does not explicitly state a license, so suitability for commercial use or closed-source linking is unspecified.
Limitations & Caveats
The drawing output mechanism can be unstable, sometimes causing the reMarkable to "freak out" and render large black areas. Support for newer reMarkable OS versions and their corresponding Linux kernels may require manual intervention or downgrades due to potential compatibility issues with modules like uinput. The project is experimental, with ongoing work on stability and feature completeness.
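If input injection stops working after an OS update, a first diagnostic pass might look like the commands below, run on the device over ssh; this is a generic Linux check, not a documented ghostwriter procedure.

```
# Hypothetical troubleshooting: confirm the kernel still provides uinput,
# which pen/keyboard event injection depends on.
uname -r              # kernel version shipped with the current OS build
lsmod | grep uinput   # is the uinput module loaded?
ls -l /dev/uinput     # does the device node exist?
modprobe uinput       # try loading the module if it is missing
```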