anime.gf by cyanff

Open-source LLM frontend for local character chat

Created 1 year ago
494 stars

Top 62.6% on SourcePulse

View on GitHub
Project Summary

This project provides a local, open-source frontend for interacting with various Large Language Models (LLMs), aiming to be a desktop alternative to services like CharacterAI. It targets users who want a private, customizable chat experience with AI characters, supporting multiple API backends and offering features like character card management and theme customization.

How It Works

The application is built with Electron and acts as a unified desktop interface to multiple LLM APIs, including OpenAI, Anthropic, Mistral, and Together AI, as well as any OpenAI-compatible endpoint such as Groq or Ollama. Users configure their chosen provider, API key, and model in the application's settings to start chatting.
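
To make "OpenAI-compatible" concrete, the sketch below shows the kind of chat-completions request a frontend like this sends on the user's behalf. It is a minimal illustration, not anime.gf's actual code: the function name, example base URLs, and model strings are assumptions.

```typescript
// Illustrative sketch only: a minimal OpenAI-compatible chat request.
// Endpoint, API key, and model come from the app's settings in practice.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function chatCompletion(
  baseUrl: string,   // e.g. "https://api.openai.com/v1" or a local OpenAI-compatible server
  apiKey: string,    // some local backends ignore this, but the header is sent regardless
  model: string,     // provider-specific model identifier (placeholder assumption)
  messages: ChatMessage[]
): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model, messages }),
  });
  if (!res.ok) {
    throw new Error(`Provider returned ${res.status}: ${await res.text()}`);
  }
  const data = await res.json();
  // OpenAI-compatible providers return the reply in choices[0].message.content.
  return data.choices[0].message.content;
}
```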

Quick Start & Requirements

  • Install: Download the latest release and run the provided setup executable.
  • Prerequisites: The release build lists no additional prerequisites; building from source requires Node.js and related build tooling.
  • Links: Discord

Highlighted Details

  • Supports multiple LLM API providers and OpenAI-compatible endpoints.
  • Features character card creation, import/export, editing, and a deleted item restore function (a minimal card sketch follows this list).
  • Includes message rewind and response regeneration capabilities.
  • Electron auto-updater is integrated for seamless updates.
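
As referenced above, here is a minimal sketch of what a character card record and its import/export round trip could look like. The field names and JSON file format are illustrative assumptions, not taken from anime.gf's source.

```typescript
// Hypothetical character card shape and JSON import/export round trip.
import { promises as fs } from "node:fs";

interface CharacterCard {
  name: string;          // display name of the character
  description: string;   // persona / background provided to the model
  greeting: string;      // first message the character sends in a new chat
  avatarPath?: string;   // optional path to the character's avatar image
}

// Export a card to a JSON file so it can be shared or backed up.
async function exportCard(card: CharacterCard, file: string): Promise<void> {
  await fs.writeFile(file, JSON.stringify(card, null, 2), "utf8");
}

// Import a card from a JSON file, with a minimal shape check.
async function importCard(file: string): Promise<CharacterCard> {
  const raw = JSON.parse(await fs.readFile(file, "utf8"));
  if (typeof raw.name !== "string" || typeof raw.description !== "string") {
    throw new Error(`${file} does not look like a character card`);
  }
  return raw as CharacterCard;
}
```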

Maintenance & Community

Recent releases added Linux and macOS binaries and auto-updater functionality, though the repository has had no commits in the past year (see Health Check below). A Discord server is available for community support, bug reporting, and feature requests.

Licensing & Compatibility

The project does not explicitly state a license in the provided README. Verify the repository's licensing before commercial use or integration into closed-source projects.

Limitations & Caveats

The application is in Alpha Early Access (v0.0.1™®). Theme editing within the app is a work in progress (WIP); customization currently requires manual CSS edits in the source code. Local inference is planned but not yet implemented.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering", "Designing Machine Learning Systems"), Pietro Schirano (founder of MagicPath), and 1 more.

SillyTavern by SillyTavern

LLM frontend for power users
Top 1.1% · 18k stars · Created 2 years ago · Updated 1 day ago