gigax by GigaxGames

Runtime for LLM-powered game NPCs

created 1 year ago
327 stars

Top 84.6% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

This project provides a framework for creating LLM-powered Non-Player Characters (NPCs) that perform developer-defined actions within a game environment, with inference running locally on the user's hardware. It targets game developers and researchers who want to integrate dynamic, intelligent characters with low-latency inference.

How It Works

Gigax leverages the outlines library for structured generation, ensuring LLM outputs consistently adhere to predefined formats for actions and their parameters. It ships fine-tuned open-weight models (Llama-3, Phi-3, Mistral) optimized for NPC behavior, with GGUF versions for CPU inference via llama_cpp. The core interaction is an NPCStepper class that takes scene context and character data and returns the NPC's next action.
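
The mechanism behind reliable action output can be sketched with the structured-generation pattern that gigax builds on. The snippet below is a minimal illustration, not gigax's own code: the action schema, prompt, and model name are assumptions, and it targets the outlines 0.x API (models.transformers / generate.json).

    from enum import Enum
    from pydantic import BaseModel
    import outlines

    # Illustrative action schema; a real game would register whatever
    # actions and parameter types its NPCs are allowed to use.
    class Command(str, Enum):
        say = "Say"
        move = "Move"
        attack = "Attack"

    class NPCAction(BaseModel):
        command: Command
        target: str

    # Any Hugging Face causal LM works here; the model name is a
    # placeholder, not one of the fine-tuned gigax checkpoints.
    model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

    # Constrained decoding: the output is guaranteed to be valid JSON
    # matching NPCAction, so the game loop can parse it directly.
    generate_action = outlines.generate.json(model, NPCAction)

    context = (
        "Location: Old Town square. "
        "NPC: John the guard. "
        "Player says: 'Have you seen anything suspicious?'"
    )
    action = generate_action(context + "\nWhat does John do next?")
    print(action.command, action.target)

Because decoding is constrained to the schema, malformed outputs are ruled out at generation time rather than caught and retried afterwards, which helps keep per-step latency low.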

Quick Start & Requirements

  • Install via pip install gigax.
  • Requires Python. GPU acceleration is supported and recommended for optimal performance.
  • Models are available on Hugging Face, including GGUF versions for CPU.
  • See Usage in the project README for full code examples; a minimal CPU setup is sketched after this list.
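
As a rough sketch of the CPU path, a GGUF checkpoint can be loaded with llama-cpp-python and prompted directly. The file name below is a placeholder for whichever GGUF model is downloaded from the project's Hugging Face page; wiring the model into gigax's stepper is covered in the README's Usage section.

    # pip install gigax llama-cpp-python
    from llama_cpp import Llama

    # Placeholder path: download a GGUF checkpoint from Hugging Face first.
    llm = Llama(model_path="./npc-llm.Q4_K_M.gguf", n_ctx=2048)

    out = llm(
        "Location: tavern. NPC: innkeeper. Player asks for a room.\n"
        "Next action:",
        max_tokens=64,
    )
    print(out["choices"][0]["text"])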

Highlighted Details

  • GPU inference under 1 second on most machines.
  • Supports models fine-tuned from Llama-3, Phi-3, and Mistral.
  • Structured generation with outlines for reliable output parsing.
  • Upcoming features include a local server mode exposing a language-agnostic API, plus API access for runtime quest generation and memory management.

Maintenance & Community

  • Active development with new features planned.
  • Links to Twitter and Discord are provided for community engagement.

Licensing & Compatibility

  • The specific license is not explicitly stated in the README. Compatibility for commercial use or closed-source linking would require clarification.

Limitations & Caveats

The project is still under active development, with several key features like local server mode and advanced memory management listed as "Coming soon." The licensing is not clearly defined, which may impact commercial adoption.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

  • 4 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems).

JittorLLMs by Jittor

  • Low-resource LLM inference library
  • 2k stars; created 2 years ago; updated 5 months ago

Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Nat Friedman (former CEO of GitHub), and 32 more.

llama.cpp by ggml-org

  • C/C++ library for local LLM inference
  • 84k stars; created 2 years ago; updated 23 hours ago