gigax by GigaxGames

Runtime for LLM-powered game NPCs

Created 1 year ago
331 stars

Top 82.7% on SourcePulse

View on GitHub
1 Expert Loves This Project
Project Summary

This project provides a framework for creating LLM-powered non-player characters (NPCs) that perform defined actions within a game environment while running locally on the user's hardware. It targets game developers and researchers seeking to integrate dynamic, intelligent characters with low-latency inference.

How It Works

Gigax leverages the outlines library for structured generation, ensuring LLM outputs consistently adhere to predefined formats for actions and parameters. It ships fine-tuned open-weight models (Llama-3, Phi-3, Mistral) optimized for NPC behavior, with GGUF builds available for CPU inference via llama_cpp. The core interaction runs through an NPCStepper class, which takes the current scene context and character data and returns a structured NPC action.
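
As a sketch of that flow, the snippet below loads one of the fine-tuned models through outlines and asks the stepper for an action. The module paths, class names, and get_action signature follow the project's README but may differ across gigax versions, so treat every identifier as illustrative.

    # Minimal sketch of the NPCStepper flow; all names are illustrative.
    from outlines import models
    from gigax.step import NPCStepper
    from gigax.scene import Character, Location, ProtagonistCharacter

    # Load one of the fine-tuned NPC models from Hugging Face (GPU recommended).
    llm = models.transformers("Gigax/NPC-LLM-7B", device="cuda")
    stepper = NPCStepper(model=llm)

    # Build a tiny scene: where the NPC is, who it is, and who the player is.
    square = Location(name="Old Town", description="A quiet town square.")
    guard = Character(name="John", description="A fearless guard", current_location=square)
    player = ProtagonistCharacter(
        name="Aldren", description="A curious adventurer", current_location=square
    )

    # The stepper folds the scene and characters into a prompt and returns a
    # structured action (command plus parameters) for the NPC.
    action = stepper.get_action(
        context="The market opens at dawn.",
        locations=[square],
        NPCs=[guard],
        protagonist=player,
        items=[],
        events=[],
    )
    print(action)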

Quick Start & Requirements

  • Install via pip install gigax.
  • Requires Python. GPU acceleration is supported and recommended for optimal performance.
  • Models are available on Hugging Face, including GGUF versions for CPU (a CPU loading sketch follows this list).
  • See Usage for code examples.
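
For CPU-only setups, the GGUF builds can be loaded with llama-cpp-python and wrapped for outlines before being handed to the stepper. The repo id, file name, and wrapper class below are assumptions based on the README and the Hugging Face listings, so check them against the current releases.

    # Rough sketch of CPU inference with a GGUF build via llama-cpp-python;
    # the repo id, file name, and wrapper are assumptions, not verified values.
    from llama_cpp import Llama
    from outlines import models
    from gigax.step import NPCStepper

    # Download and load the quantized model (CPU-friendly GGUF format).
    llm = Llama.from_pretrained(
        repo_id="Gigax/NPC-LLM-3_8B-GGUF",  # assumed repo id
        filename="npc-llm-3_8B.gguf",       # assumed file name
        n_ctx=2048,
    )

    # Wrap the llama.cpp model so outlines can drive structured generation,
    # then hand it to the stepper exactly as in the GPU example above.
    stepper = NPCStepper(model=models.LlamaCpp(llm))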

Highlighted Details

  • GPU inference under 1 second on most machines.
  • Supports fine-tuned models from Llama-3, Phi-3, and Mistral.
  • Structured generation with outlines for reliable output parsing (illustrated in the sketch after this list).
  • Upcoming features include a local server mode with a language-agnostic API and API access for runtime quest generation and memory management.
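
The structured-generation point is what keeps NPC output machine-parseable: decoding is constrained to a schema instead of free text. The snippet below is a generic outlines illustration of that idea, not gigax's internal code; the schema and prompt are invented for the example.

    # Generic illustration of outlines-style structured generation (not gigax
    # internals): output is constrained to a Pydantic schema, so the game can
    # always parse the NPC's action without regexes or retry loops.
    from enum import Enum
    from pydantic import BaseModel
    import outlines

    class Command(str, Enum):
        say = "say"
        move = "move"
        attack = "attack"

    class NPCAction(BaseModel):
        command: Command
        target: str

    model = outlines.models.transformers("Gigax/NPC-LLM-7B")
    generator = outlines.generate.json(model, NPCAction)

    # The returned object is already a validated NPCAction instance.
    action = generator("John the guard sees the player draw a sword. What does he do?")
    print(action.command, action.target)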

Maintenance & Community

  • Active development with new features planned.
  • Links to Twitter and Discord are provided for community engagement.

Licensing & Compatibility

  • The specific license is not explicitly stated in the README. Compatibility for commercial use or closed-source linking would require clarification.

Limitations & Caveats

The project is still under active development, with several key features like local server mode and advanced memory management listed as "Coming soon." The licensing is not clearly defined, which may impact commercial adoption.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 3 stars in the last 30 days

Explore Similar Projects

Starred by Junyang Lin (Core Maintainer at Alibaba Qwen), Georgi Gerganov (Author of llama.cpp, whisper.cpp), and 1 more.

LLMFarm by guinmoon

0.4%
2k
iOS/MacOS app for local LLM inference
Created 2 years ago
Updated 1 month ago
Starred by Andrej Karpathy (Founder of Eureka Labs; Formerly at Tesla, OpenAI; Author of CS 231n), Gabriel Almeida (Cofounder of Langflow), and 2 more.

torchchat by pytorch

0.1%
4k
PyTorch-native SDK for local LLM inference across diverse platforms
Created 1 year ago
Updated 1 week ago
Starred by Lianmin Zheng (Coauthor of SGLang, vLLM), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 1 more.

MiniCPM by OpenBMB

0.4%
8k
Ultra-efficient LLMs for end devices, achieving 5x+ speedup
Created 1 year ago
Updated 1 week ago