Local LLM UI for web/mobile, supporting multiple models
LLM-X provides a user-friendly, offline-first web and Chrome extension interface for interacting with local Large Language Models (LLMs). It targets users who run LLM inference locally via tools like Ollama, LM Studio, or AUTOMATIC1111, offering a private and convenient way to chat, generate images, and manage conversations.
How It Works
LLM-X leverages a React/TypeScript frontend with Vite for building a Progressive Web App (PWA) and a Chrome extension. It connects to local LLM servers via their APIs, supporting multiple simultaneous connections and models. The application prioritizes privacy by performing all operations locally, with no external API calls, and stores chat history in IndexedDB for offline access.
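As an illustration of the kind of local API traffic involved, the sketch below streams a chat reply from an Ollama server on its default port (11434). This is not LLM-X's actual code; the model name, endpoint, and helper function are assumptions made for the example.

```typescript
// Minimal sketch: stream a chat reply from a local Ollama server.
// Assumes Ollama's default port 11434 and an already-pulled model ("llama3" here).
async function streamChat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });

  // Ollama streams newline-delimited JSON chunks until a chunk reports done: true.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let reply = "";

  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      reply += chunk.message?.content ?? "";
    }
  }
  return reply;
}

streamChat("Hello!").then(console.log);
```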
Quick Start & Requirements
Start your local LLM server with CORS enabled for the hosted app:
OLLAMA_ORIGINS=https://mrdjohnson.github.io ollama serve (for Ollama)
lms server start --cors=true (for LM Studio)
Then navigate to https://mrdjohnson.github.io/llm-x/. For the Chrome extension, allow the extension origin instead:
OLLAMA_ORIGINS=chrome-extension://iodcdhcpahifeligoegcmcdibdkffclk ollama serve
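As a quick way to confirm the CORS setup, a check like the one below (run from the browser console on the LLM-X page) should list your locally pulled models. The /api/tags endpoint and default port are standard Ollama; the snippet itself is illustrative and not part of the project.

```typescript
// Illustrative check: succeeds only if OLLAMA_ORIGINS permits the LLM-X origin.
async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) throw new Error(`Ollama responded with ${res.status}`);
  const data: { models: { name: string }[] } = await res.json();
  return data.models.map((m) => m.name);
}

listLocalModels()
  .then((names) => console.log("Available models:", names))
  .catch((err) => console.error("Connection or CORS problem, check OLLAMA_ORIGINS:", err));
```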
Highlighted Details
Maintenance & Community
The project is actively maintained by mrdjohnson. Continuous deployment is set up, with changes to the master branch automatically deploying to https://mrdjohnson.github.io/llm-x/.
Licensing & Compatibility
The project appears to be MIT licensed, allowing for commercial use and integration with closed-source applications.
Limitations & Caveats
The README mentions initial difficulties with LangChain.js stream handling, though LangChain.js has since been integrated. Code highlighting is limited to common languages.