Chen-zexi: LLM agents execute code for efficient data processing
Top 56.2% on SourcePulse
This project implements Programmatic Tool Calling (PTC) for LLM agents, addressing the inefficiency and token bloat of traditional tool-use methods. By enabling LLMs to generate and execute Python code directly within a sandboxed environment, it significantly reduces token consumption (85-98%) and enhances agent capabilities for complex data processing tasks. It's designed for developers building sophisticated LLM-powered applications that handle large datasets or require intricate, multi-step operations.
How It Works
PTC leverages the LLM's strength in code generation. Instead of making discrete JSON tool calls, the LLM writes Python code that orchestrates workflows. This code is then executed in a secure Daytona sandbox environment. Data processing, filtering, aggregation, and transformations occur locally within the sandbox, with only the final, concise output returned to the LLM's context. This approach, built upon langchain-ai's deep-agent and Daytona for sandboxing, drastically cuts down on token usage, especially when dealing with large structured or time-series data.
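To make the idea concrete, here is a minimal sketch of the kind of code the agent might generate and run inside the sandbox. The dataset path, column names, and the print-as-result convention are illustrative assumptions, not the project's actual API; the real generated code depends on the task.

```python
# Hypothetical agent-generated code executed inside the Daytona sandbox.
# Instead of streaming thousands of raw rows through the model's context as
# JSON tool results, the heavy lifting happens locally and only a compact
# summary is returned to the LLM.
import pandas as pd

# Assumed file path; in practice the agent would first locate files with its
# Glob/Grep tools.
df = pd.read_csv("data/sensor_readings.csv", parse_dates=["timestamp"])

# Filter, resample, and aggregate entirely inside the sandbox.
recent = df[df["timestamp"] >= "2024-01-01"]
daily = recent.set_index("timestamp").resample("D")["value"].agg(["mean", "max"])

# Only this concise result re-enters the LLM's context.
print(daily.tail(7).to_string())
```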
Quick Start & Requirements
- Install dependencies with uv sync.
- Provide the required API keys in a .env file; additional keys for services like Tavily or cloud storage are optional but recommended for full functionality (see the configuration sketch below).
- An example notebook (PTC_Agent.ipynb) is provided for a quick demonstration.
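As a rough sketch of the configuration step, assuming python-dotenv and illustrative variable names (the authoritative key names are those given in the project's own documentation):

```python
# Minimal .env sanity check; the key names below are assumptions for
# illustration, not the project's confirmed configuration.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads key=value pairs from a local .env file

REQUIRED = ["DAYTONA_API_KEY"]   # sandbox execution is core functionality
OPTIONAL = ["TAVILY_API_KEY"]    # web search and similar optional extras

missing = [k for k in REQUIRED if not os.getenv(k)]
if missing:
    raise SystemExit(f"Missing required keys in .env: {', '.join(missing)}")

for k in OPTIONAL:
    if not os.getenv(k):
        print(f"Optional key {k} not set; related features will be disabled.")
```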
Highlighted Details
- Includes glob, grep, and other file manipulation tools (illustrated below).
- Built-in tools: execute_code, Bash, Read, Write, Edit, Glob, and Grep.
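To show why code execution can replace many discrete tool calls, here is a hypothetical snippet of sandbox-executed code that collapses a chain of Glob and Grep round-trips into a single pass; the directory and search pattern are made up for illustration.

```python
# Hypothetical agent-generated code: one execute_code call instead of many
# Glob/Grep tool calls, with only the matching lines returned to the model.
import glob
import re

pattern = re.compile(r"TODO|FIXME")  # illustrative search pattern
hits = []
for path in glob.glob("src/**/*.py", recursive=True):
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            if pattern.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")

# Only this compact match list re-enters the LLM's context.
print("\n".join(hits[:50]) or "No matches found.")
```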
Maintenance & Community
The provided README does not detail maintainers, community channels (such as Discord or Slack), or roadmap information.
Licensing & Compatibility
The project is released under the MIT License, which is permissive and generally allows for commercial use, modification, and distribution.
Limitations & Caveats
Setup requires obtaining and configuring API keys for multiple external services, including Daytona, which is essential for the core sandboxing functionality. The project's reliance on these external services and specific LLM configurations may introduce dependencies and potential points of failure.