Wisdom-Weasel by scukeqi

LLM-powered input method for enhanced Chinese text prediction

Created 3 months ago
270 stars

Top 95.2% on SourcePulse

Project Summary

Wisdom-Weasel enhances the Rime/Weasel Chinese input method by integrating Large Language Model (LLM) powered intelligent prediction. It targets users seeking advanced, context-aware text completion beyond traditional input methods, offering a smarter typing experience while preserving Rime's flexibility and extensive ecosystem.

How It Works

This project augments the Rime input method framework with LLM-based candidate generation while retaining Rime's full scheme and dictionary support. An LLM predicts candidate words from the current input and recent history, and three backend types are supported: OpenAI-compatible APIs (e.g., Ollama), local llama.cpp inference with GGUF models, and a Python backend (hf_constraint) for pinyin-constrained generation. The user's input history is kept as prediction context, with optional asynchronous LLM-based compression of long histories.
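To illustrate the idea behind pinyin-constrained generation, here is a minimal sketch (not the project's actual code) that filters LLM suggestions so only words matching the typed pinyin survive. The tiny pinyin table and the candidate list are hypothetical stand-ins for a real dictionary.

```python
# Minimal sketch of pinyin-constrained candidate filtering. The pinyin table
# and candidates below are hypothetical stand-ins, not the project's data.

# Hypothetical character-to-pinyin table (a real backend would use a full
# dictionary, e.g. pypinyin or Rime's own data).
PINYIN = {
    "你": "ni", "好": "hao", "很": "hen", "号": "hao", "呢": "ne",
}

def to_pinyin(word: str) -> str:
    """Concatenate the pinyin of each character in the word."""
    return "".join(PINYIN.get(ch, "?") for ch in word)

def filter_candidates(typed: str, llm_candidates: list[str]) -> list[str]:
    """Keep only LLM suggestions whose pinyin starts with what was typed."""
    return [w for w in llm_candidates if to_pinyin(w).startswith(typed)]

# The user has typed "nihao"; the LLM proposed several continuations.
print(filter_candidates("nihao", ["你好", "你很", "你号", "很好"]))
# → ['你好', '你号']
```

Constraining generation this way lets the LLM rank plausible continuations while guaranteeing every shown candidate is still a valid conversion of the typed pinyin.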

Quick Start & Requirements

  • Primary install: Download and run the installer from the Releases page. Configuration is done via weasel.yaml in the Rime user directory.
  • Prerequisites:
    • Rime/Weasel must be installed and deployed.
    • OS: Windows 8.1 through Windows 11.
    • LLM Backends:
      • OpenAI-compatible: Requires an accessible API endpoint or local service (e.g., Ollama).
      • llamacpp: Requires a local GGUF model file (4GB+ VRAM or sufficient RAM recommended).
      • hf_constraint: Requires a Python environment set up using hf_backend/requirements.txt.
  • Links: Rime Project Homepage: https://rime.im
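For orientation only, a backend section in weasel.yaml might look roughly like the sketch below. Every key name here is a hypothetical assumption, so consult the project's documentation for the actual schema.

```yaml
# Hypothetical sketch of LLM settings in weasel.yaml; key names are
# illustrative assumptions, not the project's actual schema.
llm:
  backend: openai                        # assumed: openai | llamacpp | hf_constraint
  api_base: http://localhost:11434/v1    # e.g., a local Ollama endpoint
  model: qwen2.5:0.5b                    # any chat-capable model served locally
  max_candidates: 5                      # how many LLM predictions to show
```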

Highlighted Details

  • Supports multiple LLM prediction backends: OpenAI-compatible, llama.cpp (local GGUF), and hf_constraint (Python).
  • Maintains user input history for context and offers optional LLM-based memory compression.
  • Predictions are displayed alongside Rime's native candidates rather than replacing them, so existing schemes continue to work.
  • Enables local LLM inference for privacy and offline use.
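The history-plus-compression behavior highlighted above can be sketched as follows. This is an illustrative design in Python, with summarize() standing in for an LLM call; the class and all of its names are assumptions, not the project's actual API.

```python
# Sketch of input-history tracking with optional asynchronous "memory
# compression". summarize() is a stub for an LLM summarization call; the
# class and its names are illustrative, not the project's actual API.
import threading
from collections import deque

def summarize(texts: list[str]) -> str:
    """Stand-in for an LLM summarization call: keep only the tail."""
    return " ".join(texts)[-20:]

class InputHistory:
    def __init__(self, max_items: int = 4):
        self.max_items = max_items
        self.items: deque[str] = deque()
        self.summary = ""            # compressed older context
        self._lock = threading.Lock()

    def add(self, text: str) -> None:
        with self._lock:
            self.items.append(text)
            overflow = len(self.items) > self.max_items
        if overflow:
            # Compress off the hot path so typing latency is unaffected.
            threading.Thread(target=self._compress).start()

    def _compress(self) -> None:
        with self._lock:
            old = [self.items.popleft() for _ in range(len(self.items) // 2)]
            self.summary = summarize([self.summary] + old)

    def context(self) -> str:
        with self._lock:
            return (self.summary + " " + " ".join(self.items)).strip()

hist = InputHistory()
hist.add("今天天气")
hist.add("不错")
print(hist.context())  # → 今天天气 不错
```

Compressing on a background thread keeps the hot path (candidate lookup while typing) free of LLM latency, which is presumably why the project makes compression asynchronous and optional.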

Maintenance & Community

Issues specific to Wisdom-Weasel's LLM features, build, or configuration should be reported in the project's GitHub Issues. General Rime input method issues should be directed to Rime Home. Pull Requests are welcomed.

Licensing & Compatibility

  • License: GPLv3.
  • Compatibility: GPLv3 is a strong copyleft license: derivative works must also be distributed under GPLv3, which can restrict linking with closed-source applications.

Limitations & Caveats

The project is Windows-only, supporting Windows 8.1 through Windows 11. It has a hard dependency on a pre-installed and configured Rime/Weasel input method. Setting up the LLM backends requires specific user configuration, such as providing API keys, local model paths, or Python environments.

Health Check

  • Last Commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 5
  • Issues (30d): 10
  • Star History: 270 stars in the last 30 days

Explore Similar Projects

Starred by Eric Zhu (Coauthor of AutoGen; Research Scientist at Microsoft Research), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 1 more.

textgrad by zou-group (3k stars, 0.4%)

Autograd engine for textual gradients, enabling LLM-driven optimization
Created 1 year ago · Updated 8 months ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Jeff Hammerbacher (Cofounder of Cloudera), and 7 more.

LLMLingua by microsoft (6k stars, 0.4%)

Prompt compression for accelerated LLM inference
Created 2 years ago · Updated 3 days ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Travis Fischer (Founder of Agentic), and 2 more.

Memori by MemoriLabs (13k stars, 1.4%)

LLM memory engine for context-aware AI
Created 8 months ago · Updated 1 day ago