MiroShark by aaronjmars

AI-driven multi-agent simulation engine for scenario analysis

Created 3 weeks ago

654 stars

Top 51.0% on SourcePulse

Project Summary

MiroShark is a swarm intelligence engine for simulating public reaction to documents. Users upload any text-based document, and the engine generates hundreds of AI agents with distinct personalities to simulate social media discourse, track opinion shifts, and analyze influence dynamics. This tool is aimed at researchers, PR professionals, and policymakers seeking to understand and predict societal responses to information.

How It Works

The engine first ingests a document, extracting entities and relationships into a Neo4j knowledge graph. It then generates hundreds of AI personas, each with unique biases, reaction speeds, and influence levels. These agents engage in simulated social media interactions, including posting, replying, and arguing, with sentiment and influence tracked in real time. A ReportAgent analyzes simulation outcomes and produces structured insights. Users can also chat directly with individual agents or query groups of them.
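The round-based dynamics described above can be illustrated with a minimal, self-contained sketch. This is not MiroShark's actual implementation (the real engine runs LLM-backed OASIS/CAMEL-AI agents); the `Persona` fields and the sentiment-blending rule are illustrative assumptions.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of one simulation round: each persona may post,
# and overall sentiment is an influence-weighted average of the posts.

@dataclass
class Persona:
    name: str
    bias: float            # -1.0 (hostile) .. 1.0 (supportive)
    influence: float       # weight of this agent's posts
    reaction_speed: float  # probability of posting in a given round

def run_round(personas, doc_sentiment):
    """Simulate one round; return the posts and the weighted average sentiment."""
    posts = []
    for p in personas:
        if random.random() < p.reaction_speed:
            # an agent's expressed sentiment blends the document's tone with its bias
            sentiment = 0.5 * doc_sentiment + 0.5 * p.bias
            posts.append((p.name, sentiment, p.influence))
    total_weight = sum(w for _, _, w in posts) or 1.0
    avg = sum(s * w for _, s, w in posts) / total_weight
    return posts, avg

random.seed(0)
swarm = [Persona(f"agent{i}", random.uniform(-1, 1), random.uniform(0, 1), 0.8)
         for i in range(100)]
posts, avg_sentiment = run_round(swarm, doc_sentiment=0.2)
print(len(posts), round(avg_sentiment, 3))
```

In the real engine each "post" would be generated by an LLM call conditioned on the persona profile and the knowledge graph, rather than a numeric blend.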

Quick Start & Requirements

MiroShark offers multiple setup options:

  • Cloud API: Requires an OpenAI-compatible API key (e.g., OpenRouter, Anthropic) and local Neo4j (or Docker). No GPU is needed.
  • Docker (Local Ollama): Clone the repo, run Docker Compose, and pull the required Ollama models.
  • Manual (Local Ollama): Involves starting Neo4j and Ollama locally, pulling models, and running setup commands.
  • Claude Code: Leverages a Claude Pro/Max subscription via the CLI for LLM tasks (graph building, agent profiles, reporting), but still requires Ollama or a cloud API for embeddings and CAMEL-AI simulation rounds.

All options require Python 3.11+, Node.js 18+, and Neo4j 5.15+ or Docker. Setup involves configuring environment variables in a .env file.
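For the cloud-API option, the `.env` file might look like the following sketch. The variable names here are assumptions for illustration; check the repository's own `.env.example` for the actual keys.

```
# Hypothetical .env sketch -- variable names are illustrative, not confirmed
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://openrouter.ai/api/v1
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=changeme
```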

Highlighted Details

  • Supports diverse LLM providers including OpenAI, Anthropic (via Claude Code), and local Ollama, offering flexibility in cost and performance.
  • Features a "Smart Model" capability to route specific, high-reasoning workflows (e.g., report generation, ontology extraction) to a more powerful or expensive model while using a default model for high-volume tasks.
  • Enables use cases such as PR crisis testing, trading signal simulation based on news sentiment, and policy analysis.
  • Allows direct persona chat interaction with individual agents or groups within the simulation.
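The "Smart Model" routing mentioned above amounts to dispatching by task type: high-reasoning workflows go to a stronger (and pricier) model, while high-volume tasks stay on a cheap default. A minimal sketch follows; the task labels and model names are illustrative assumptions, not MiroShark's actual configuration.

```python
# Hypothetical sketch of "Smart Model" routing. Task labels and model
# names are placeholders -- the project's real config may differ.

SMART_TASKS = {"report_generation", "ontology_extraction"}

def pick_model(task: str,
               default_model: str = "llama3.1:8b",
               smart_model: str = "gpt-4o") -> str:
    """Route high-reasoning tasks to the smart model, the rest to the default."""
    return smart_model if task in SMART_TASKS else default_model

print(pick_model("agent_reply"))        # high-volume task -> default model
print(pick_model("report_generation"))  # high-reasoning task -> smart model
```

The design keeps cost proportional to where quality matters: hundreds of per-agent replies run on the cheap model, while the handful of report and ontology calls justify the expensive one.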

Maintenance & Community

The project is built upon MiroFish by 666ghj, with Neo4j and Ollama layers adapted from MiroFish-Offline by nikmcfly. The simulation engine is powered by OASIS (CAMEL-AI). Specific community links or active contributor details are not prominently featured in the provided README.

Licensing & Compatibility

MiroShark is licensed under AGPL-3.0. This copyleft license requires derivative works to also be open-sourced under the same license, which may impose restrictions on integration into closed-source commercial products.

Limitations & Caveats

Using Claude Code incurs a ~2-5 second overhead per LLM call due to subprocess spawning, making it best suited for smaller simulations or hybrid setups. Local Ollama deployments require significant RAM and VRAM (16GB+ RAM, 10GB+ VRAM minimum) for larger models, with specific hardware recommendations provided for different model sizes. Claude Code does not handle embeddings or the core CAMEL-AI simulation rounds directly, necessitating separate LLM configurations for these components.

Health Check

  • Last Commit: 1 day ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 19
  • Issues (30d): 1
  • Star History: 660 stars in the last 22 days
