LLMUnity by undreamai

Unity package for LLM integration, enabling AI characters

created 1 year ago
1,227 stars

Top 32.7% on sourcepulse

Project Summary

This package enables the integration of Large Language Models (LLMs) directly within the Unity game engine, allowing developers to create interactive AI characters with conversational abilities and knowledge retrieval. It targets Unity developers seeking to enhance game immersion through AI-driven characters, offering local, cross-platform LLM execution and a built-in RAG system for dynamic knowledge bases.

How It Works

LLM for Unity leverages the llama.cpp library for efficient, local LLM inference across CPU and GPU (Nvidia, AMD, Apple Metal). It supports various LLM architectures in .gguf format and provides a RAG system using usearch for semantic search and data augmentation. The package facilitates seamless integration via Unity components, enabling character interaction, chat history management, and custom prompt engineering.
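
The RAG workflow described above can be sketched roughly as follows. This is an illustrative Unity C# snippet, not a verbatim copy of the package's API: the `RAG` component and its `Add`/`Search` methods follow the pattern shown in the project's documentation, but exact signatures may differ between versions, and the lore strings are hypothetical.

```csharp
using UnityEngine;
using LLMUnity;

public class KnowledgeBase : MonoBehaviour
{
    public RAG rag; // RAG component, assigned in the Inspector

    async void Start()
    {
        // Index a few knowledge snippets (hypothetical game lore)
        await rag.Add("The blacksmith lives in the northern district.");
        await rag.Add("The dragon sleeps beneath the old mill.");

        // Retrieve the entry closest in meaning to the player's question
        (string[] results, float[] distances) =
            await rag.Search("Where can I find the blacksmith?", 1);
        Debug.Log(results[0]);
    }
}
```

Retrieved snippets are then typically prepended to the character's prompt so replies stay grounded in the game's knowledge base.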

Quick Start & Requirements

  • Install: Via Unity Package Manager (Asset Store or GitHub URL: https://github.com/undreamai/LLMUnity.git).
  • Prerequisites: Unity 2021 LTS or newer. Models are typically .gguf format.
  • Setup: Add LLM and LLMCharacter components to GameObjects. Download or load .gguf models via the LLM component's manager.
  • Docs: LLM for Unity Documentation
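
Once the components are in place, character interaction is driven from script. The sketch below follows the chat pattern shown in the project's documentation, assuming a recent package version; the class name `NPCDialogue` and the wiring details are illustrative.

```csharp
using UnityEngine;
using LLMUnity;

public class NPCDialogue : MonoBehaviour
{
    public LLMCharacter llmCharacter; // LLMCharacter component, assigned in the Inspector

    void HandleReply(string reply)
    {
        // Called as the reply streams in; forward it to your dialogue UI
        Debug.Log(reply);
    }

    public void Ask(string playerMessage)
    {
        // Send the player's message and stream the character's answer
        _ = llmCharacter.Chat(playerMessage, HandleReply);
    }
}
```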

Highlighted Details

  • Cross-platform support: Windows, Linux, macOS, iOS, Android, VisionOS.
  • Local execution: No internet required, data stays within the game.
  • Fast inference on CPU and GPU (Nvidia, AMD, Apple Metal).
  • Retrieval-Augmented Generation (RAG) for semantic search and knowledge enhancement.
  • Supports custom grammars (GBNF) for structured LLM output and function calling.
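
GBNF is llama.cpp's grammar format for constraining token generation. As a minimal illustration (the rule names and command vocabulary here are made up for the example), a grammar that forces the model to reply with a structured command might look like:

```
# Constrain replies to "<action> <target>", e.g. "attack goblin"
root   ::= action " " target
action ::= "attack" | "defend" | "flee"
target ::= [a-z]+
```

Supplying such a grammar guarantees parseable output, which makes it practical to map LLM replies directly onto game logic or function calls.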

Maintenance & Community

  • Active development with regular updates.
  • Discord community available for support and discussion.
  • GitHub Repository

Licensing & Compatibility

  • MIT License for the core package.
  • Third-party components use MIT and Apache licenses.
  • Model licenses vary; users must review individual model terms.
  • Permissive for commercial use and closed-source linking.

Limitations & Caveats

Mobile deployment is limited to smaller models (roughly 1-2 billion parameters) due to hardware constraints. Models must either be bundled with the build or fetched at runtime; mobile builds typically download the model on first launch to keep the app size manageable, so plan model download and management as part of deployment.

Health Check

  • Last commit: 2 weeks ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 2
  • Issues (30d): 5
  • Star History: 177 stars in the last 90 days
