infiAgent by ChenglinPoly

AI agent framework enabling unlimited runtime with zero context compression

Created 2 weeks ago

373 stars

Top 76.0% on SourcePulse

View on GitHub
Project Summary

This framework provides an AI agent system with unlimited runtime, designed for complex, long-running tasks without context degradation. It enables users to build domain-specific, state-of-the-art agents, particularly for research and scientific computing, by leveraging configuration files and a novel multi-level agent architecture. The primary benefit is the ability to handle extensive workflows, such as academic paper writing and scientific simulations, autonomously and persistently.

How It Works

InfiAgent employs a multi-level agent hierarchy, orchestrating agents in a tree structure for focused roles and clear delegation. Its core innovation is a file-centric architecture paired with a "Ten-Step Strategy": every ten steps, agent state is rebuilt from file-system changes rather than accumulated in the context window, eliminating the need for context compression. A nested attention mechanism extracts only the relevant information from large documents, preserving context efficiency. Batch file operations and a task ID system tied to workspace paths provide persistent memory across sessions, allowing truly unlimited runtime without performance degradation.
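The ten-step, file-centric refresh described above can be sketched as follows. This is a minimal illustration, not InfiAgent's actual implementation: `FileCentricAgent`, `snapshot_workspace`, and `REFRESH_EVERY` are hypothetical names, and real state would be richer than file hashes.

```python
import hashlib
from pathlib import Path


def snapshot_workspace(root: Path) -> dict[str, str]:
    """Hash every file under the workspace so state changes are detectable."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


class FileCentricAgent:
    """Toy agent that rebuilds its state from the file system every
    REFRESH_EVERY steps instead of accumulating conversational context."""

    REFRESH_EVERY = 10  # the "Ten-Step Strategy"

    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.step_count = 0
        self.state = snapshot_workspace(workspace)

    def step(self, action):
        action(self.workspace)  # actions mutate files, not the context window
        self.step_count += 1
        if self.step_count % self.REFRESH_EVERY == 0:
            new = snapshot_workspace(self.workspace)
            changed = [k for k in new if self.state.get(k) != new[k]]
            self.state = new  # replace, don't append: state stays bounded
            return changed
        return None
```

Because the state dict is replaced rather than appended to, memory stays proportional to the workspace, not to the number of steps taken — the property that makes "unlimited runtime" plausible.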

Quick Start & Requirements

The recommended installation is via Docker.

  1. Install Docker.
  2. Pull the image: docker pull chenglinhku/mlav3:latest
  3. Run Web UI mode:
    docker run -d --name mla \
      -e HOST_PWD=$(pwd) \
      -v $(pwd):/workspace$(pwd) \
      -v ~/.mla_v3:/root/mla_v3 \
      -v mla-config:/mla_config \
      -p 8002:8002 -p 9641:9641 -p 4242:4242 -p 5002:5002 \
      chenglinhku/mlav3:latest webui
    
    Access at http://localhost:4242.
  4. Configure API keys at http://localhost:9641 by editing run_env_config/llm_config.yaml.

Local installation requires Python >= 3.10, pip install -e ., and playwright install chromium.
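The local installation steps above might look like the following; the repository URL and layout are assumptions based on the project name, not confirmed from the README:

```shell
# Hypothetical local setup (assumed repo path; verify against the project page)
git clone https://github.com/ChenglinPoly/infiAgent.git
cd infiAgent
python3 -m venv .venv && source .venv/bin/activate  # needs Python >= 3.10
pip install -e .             # editable install of the framework
playwright install chromium  # browser used by web-related tools
```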

Highlighted Details

  • Unlimited Runtime: Achieved via file-centric state management and a ten-step strategy, avoiding context degradation.
  • End-to-End Research Workflows: Capable of literature search, experiments, plotting, and LaTeX paper writing, with claims of producing papers that pass EI/IEEE conference peer reviews.
  • Zero Context Compression: State is managed via the file system, eliminating the need for context compression techniques.
  • Multi-Level Agent Hierarchy: Serial execution with tree-structured orchestration for focused agents and clear delegation.
  • Batch File Operations: Uses list-based parameters for tool calls (e.g., file_read(paths=[...])) to save tokens.

Maintenance & Community

The project shows active development, with frequent updates in early 2026 addressing bugs and adding features such as Web UI enhancements and improved LLM integration. Key contributors include Chenglin Yu and Yuchen Wang. Contact is available via email (yuchenglin96@qq.com, etc.) and GitHub.

Licensing & Compatibility

A LICENSE file is included in the repository, but the specific open-source license is not stated in the README. Compatibility with commercial use or closed-source linking is not detailed.

Limitations & Caveats

The framework currently supports Python projects for coding tasks, with potential for other languages in the future. A temporary fix for the Web UI is noted, with a full resolution pending. Older versions had restricted command execution; newer versions allow all commands, but using Docker is recommended for tasks that might modify system files.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 35
  • Issues (30d): 13
  • Star History: 382 stars in the last 18 days

Starred by Elie Bursztein (Cybersecurity Lead at Google DeepMind), Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), and 7 more.

Explore Similar Projects

SuperAGI by TransformerOptimus
  • 17k stars
  • Open-source framework for autonomous AI agent development
  • Created 2 years ago; updated 11 months ago