MiroShark by aaronjmars: AI-driven multi-agent simulation engine for scenario analysis
Top 51.0% on SourcePulse
MiroShark is a universal swarm intelligence engine designed for simulating public reaction to documents. It empowers users to upload any text-based document, generating hundreds of AI agents with distinct personalities to simulate social media discourse, track opinion shifts, and analyze influence dynamics. This tool is ideal for researchers, PR professionals, and policymakers seeking to understand and predict societal responses to information.
How It Works
The engine first ingests a document, extracting entities and relationships into a Neo4j knowledge graph. It then generates hundreds of AI personas, each with unique biases, reaction speeds, and influence levels. These agents engage in simulated social media interactions, posting, replying, and arguing, while sentiment and influence are tracked in real time. A ReportAgent analyzes simulation outcomes and offers structured insights. Users can interact directly with individual agents or query groups of them.
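The persona-driven loop described above can be sketched in miniature. This is an illustrative model, not MiroShark's actual API: the `Persona` and `Simulation` names, the 0.1 damping factor, and the influence update rule are all assumptions chosen to show how bias, reaction speed, and influence could interact.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    bias: float            # -1.0 (hostile) .. 1.0 (supportive) toward the document
    reaction_speed: float  # probability this agent posts in a given round
    influence: float       # how strongly this agent's posts sway others

@dataclass
class Simulation:
    personas: list
    sentiment: dict = field(default_factory=dict)

    def __post_init__(self):
        # Each agent's sentiment starts at its own bias.
        self.sentiment = {p.name: p.bias for p in self.personas}

    def run_round(self, rng):
        # Collect this round's posts; stance is frozen at posting time.
        posts = []
        for p in self.personas:
            if rng.random() < p.reaction_speed:
                posts.append((p, self.sentiment[p.name]))
        # Each post nudges every other agent's sentiment toward the
        # author's stance, weighted by the author's influence.
        for author, stance in posts:
            for p in self.personas:
                if p is not author:
                    delta = author.influence * 0.1 * (stance - self.sentiment[p.name])
                    self.sentiment[p.name] += delta
        return posts

rng = random.Random(42)
sim = Simulation([
    Persona("skeptic", bias=-0.8, reaction_speed=0.9, influence=0.7),
    Persona("fan", bias=0.9, reaction_speed=0.5, influence=0.4),
    Persona("lurker", bias=0.0, reaction_speed=0.1, influence=0.1),
])
for _ in range(20):
    sim.run_round(rng)
print(sim.sentiment)
```

Because each update moves a sentiment partway toward a value already in [-1, 1], opinions stay bounded while drifting toward the more active, more influential agents, which is the "opinion shift" dynamic the engine tracks.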
Quick Start & Requirements
MiroShark offers multiple setup options. All of them require Python 3.11+, Node.js 18+, and either Neo4j 5.15+ or Docker, and setup involves configuring environment variables in a .env file.
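A .env file for this stack might look as follows. The variable names here are assumptions based on common Neo4j and Ollama conventions, not confirmed against the MiroShark README; check the project's own .env.example for the actual keys.

```
# Hypothetical .env sketch -- variable names are illustrative.
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-password
OLLAMA_BASE_URL=http://localhost:11434
```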
Maintenance & Community
The project is built upon MiroFish by 666ghj, with Neo4j and Ollama layers adapted from MiroFish-Offline by nikmcfly. The simulation engine is powered by OASIS (CAMEL-AI). Specific community links or active contributor details are not prominently featured in the provided README.
Licensing & Compatibility
MiroShark is licensed under AGPL-3.0. This copyleft license requires derivative works to also be open-sourced under the same license, which may impose restrictions on integration into closed-source commercial products.
Limitations & Caveats
Using Claude Code as an LLM backend incurs a ~2-5 second overhead per call due to subprocess spawning, making it best suited for smaller simulations or hybrid setups. Local Ollama deployments require significant resources for larger models (16 GB+ RAM and 10 GB+ VRAM minimum), with specific hardware recommendations provided for different model sizes. Claude Code does not handle embeddings or the core CAMEL-AI simulation rounds directly, so those components need separate LLM configurations.
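The per-call overhead compounds quickly at simulation scale. A back-of-envelope estimate, using the ~2-5 second figure from above and assuming one LLM call per posting agent per round (the agent and round counts are illustrative, not from the project):

```python
def spawn_overhead_seconds(agents, rounds, per_call_overhead, posting_rate=1.0):
    """Total extra wall-clock time added by subprocess spawning,
    assuming one LLM call per posting agent per round."""
    calls = agents * rounds * posting_rate
    return calls * per_call_overhead

# 100 agents, 10 rounds, worst-case 5 s spawn overhead per call:
worst = spawn_overhead_seconds(100, 10, 5.0)
print(f"{worst:.0f} s ({worst / 3600:.1f} h) of pure spawn overhead")
```

Even at the optimistic 2-second end, a few hundred agents over multiple rounds adds hours of overhead, which is why the hybrid setup (Claude Code for analysis, a faster backend for the simulation rounds) is the recommended pattern.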