MiroFish-Offline by nikmcfly

Offline multi-agent simulation and prediction engine

Created 4 weeks ago

1,860 stars

Top 22.9% on SourcePulse

Project Summary

MiroFish-Offline offers an entirely local, multi-agent simulation engine for public opinion, market sentiment, and social dynamics. This English-UI fork of MiroFish replaces cloud dependencies with Neo4j and Ollama, enabling researchers and analysts to simulate reactions to documents like press releases or policy drafts on their own hardware without external API costs or data privacy concerns.

How It Works

The engine ingests documents and builds a Neo4j knowledge graph from extracted entities and relationships. It then generates hundreds of AI agents with distinct personalities and memories and simulates their interactions on social platforms, tracking sentiment and influence dynamics. A ReportAgent analyzes the outcomes, and users can interrogate agents directly. The architecture features an abstract GraphStorage interface and a weighted hybrid search (0.7 vector similarity + 0.3 BM25 keyword score).
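The weighted hybrid search described above can be sketched as a simple score blend. The function names, data shapes, and the assumption that both scores are pre-normalized to [0, 1] are illustrative, not MiroFish-Offline's actual API:

```python
def hybrid_score(vector_sim: float, bm25_score: float,
                 w_vector: float = 0.7, w_bm25: float = 0.3) -> float:
    """Blend a vector-similarity score and a BM25 keyword score
    using the 0.7/0.3 weighting from the project description.
    Both inputs are assumed normalized to [0, 1]."""
    return w_vector * vector_sim + w_bm25 * bm25_score

def rank(candidates):
    """candidates: list of (doc_id, vector_sim, bm25_score) tuples.
    Returns (doc_id, blended_score) pairs, best match first."""
    scored = [(doc, hybrid_score(v, b)) for doc, v, b in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

With this weighting, a document that is semantically close (high vector similarity) outranks one that merely matches keywords: `rank([("a", 0.9, 0.1), ("b", 0.5, 1.0)])` puts `"a"` (0.66) ahead of `"b"` (0.65).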

Quick Start & Requirements

Docker Compose is the recommended setup. Prerequisites are Docker, Python 3.11+, Node.js 18+, Neo4j 5.15+, and Ollama. Setup involves cloning the repository, copying .env.example to .env, running docker compose up -d, and pulling the Ollama models (e.g., qwen2.5:32b, nomic-embed-text). The interface is then available at http://localhost:3000. A manual setup path is also documented.
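The steps above amount to a short shell session; the repository URL is a guess from the author name, and everything else follows the project's stated commands:

```shell
# Clone (URL is hypothetical) and enter the project
git clone https://github.com/nikmcfly/MiroFish-Offline.git
cd MiroFish-Offline

# Create the local config from the template, then edit as needed
cp .env.example .env

# Start the app, Neo4j, and supporting services in the background
docker compose up -d

# Pull the models the engine expects from Ollama
ollama pull qwen2.5:32b       # simulation LLM
ollama pull nomic-embed-text  # embedding model

# UI is now at http://localhost:3000
```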

Highlighted Details

  • Local Stack: Replaces cloud services (Zep Cloud, DashScope) with Neo4j CE and Ollama for offline operation.
  • English UI: Comprehensive translation from the original Chinese interface.
  • Hybrid Search: Integrates vector similarity (0.7) and BM25 keyword search (0.3).
  • Agent Interaction: Direct chat with simulated agents, preserving memory and personality.
  • Configurable LLM: Supports any OpenAI-compatible API via LLM_BASE_URL and LLM_MODEL_NAME.
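Since LLM_BASE_URL and LLM_MODEL_NAME accept any OpenAI-compatible endpoint, pointing the engine at a local Ollama server might look like the following .env fragment (the variable names come from the project; the values are assumptions):

```shell
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL_NAME=qwen2.5:32b
```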

Maintenance & Community

MiroFish-Offline is a fork of MiroFish, a project originally supported by Shanda Group; its simulation engine is powered by OASIS/CAMEL-AI. No community channels or active-maintenance signals are documented.

Licensing & Compatibility

The project is licensed under AGPL-3.0, which requires derivative works to be released under AGPL-3.0 as well. This copyleft requirement warrants careful consideration before commercial use or closed-source integration.

Limitations & Caveats

CPU-only inference is significantly slower than GPU inference. Substantial hardware resources (16GB+ system RAM, 10GB+ VRAM for smaller models, 24GB+ VRAM for larger ones) are recommended for acceptable performance.

Health Check

  • Last Commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 24
  • Issues (30d): 14
  • Star History: 1,875 stars in the last 28 days
