amlalabs/amla-sandbox: Securely execute LLM-generated code with capability enforcement
Summary
amla-sandbox addresses agent-framework security by providing a WebAssembly (WASM) sandbox that replaces insecure subprocess/exec() execution and infrastructure-heavy Docker or VM isolation. It enables secure, isolated execution of LLM-generated code through efficient scripting with strict capability-based access controls, mitigating prompt-injection risks.
How It Works
Built on wasmtime and WASI, the sandbox runs JavaScript and shell scripts in memory-isolated WASM. The core idea is capability-based security: agents can invoke only explicitly defined tools, with constrained parameters and call budgets. Because one script can chain several tool invocations, multiple LLM tool calls collapse into a single script execution, cutting LLM round trips. The Python host manages tool execution and validation, ensuring agents operate within defined boundaries.
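The host-side pattern is straightforward to sketch. The following is a minimal, hypothetical illustration of capability-based tool dispatch (not amla-sandbox's actual API): the host registers explicitly granted tools with parameter and call-count limits, and validates every invocation coming from the sandboxed script.

```python
# Hypothetical sketch of capability-based tool dispatch. Names and
# structure are illustrative only, not amla-sandbox's real interface.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Capability:
    func: Callable[..., Any]      # the host-side tool implementation
    allowed_params: set[str]      # parameters the agent may pass
    max_calls: int                # per-session call budget
    calls: int = 0

class CapabilityHost:
    def __init__(self) -> None:
        self._tools: dict[str, Capability] = {}

    def grant(self, name: str, func: Callable[..., Any],
              allowed_params: set[str], max_calls: int) -> None:
        # Only explicitly granted tools are ever callable.
        self._tools[name] = Capability(func, set(allowed_params), max_calls)

    def invoke(self, name: str, **kwargs: Any) -> Any:
        # Every call from the sandboxed script is validated here.
        cap = self._tools.get(name)
        if cap is None:
            raise PermissionError(f"tool {name!r} was never granted")
        if cap.calls >= cap.max_calls:
            raise PermissionError(f"tool {name!r} exceeded its call budget")
        if not set(kwargs) <= cap.allowed_params:
            raise PermissionError(f"unexpected parameters for {name!r}")
        cap.calls += 1
        return cap.func(**kwargs)

host = CapabilityHost()
host.grant("add", lambda a, b: a + b, allowed_params={"a", "b"}, max_calls=3)
print(host.invoke("add", a=2, b=3))   # OK: granted tool, valid params
# host.invoke("shell", cmd="rm -rf /") would raise PermissionError.
```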
Quick Start & Requirements
Install via pip: pip install "git+https://github.com/amlalabs/amla-sandbox". No Docker or VM is required. Use create_sandbox_tool to expose JS/shell execution to an agent; run amla-precompile to cache the compiled WASM module for faster loads. The project's Website, Examples, and Docs are linked from the repository.
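A quick-start sketch under stated assumptions: create_sandbox_tool is the entry point named above, but the import path, parameters, and call shape below are guesses for illustration; consult the project's Docs and Examples for the real interface.

```python
# Hypothetical quick-start sketch; import path and call shape assumed.
# Running amla-precompile once beforehand caches the compiled WASM module.
from amla_sandbox import create_sandbox_tool  # module name assumed

# Expose a sandboxed JS/shell execution tool to an agent framework.
run_script = create_sandbox_tool()

# One script run can replace several individual LLM tool calls.
result = run_script("""
const lines = ["hello", "from", "the", "sandbox"];
console.log(lines.join(" "));
""")  # hypothetical invocation; the actual API may differ
print(result)
```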
Highlighted Details
Sandboxed filesystem access is limited to /workspace and /tmp.
Maintenance & Community
Project links include Website, Examples, and Docs. No specific community channels (Discord/Slack) are mentioned.
Licensing & Compatibility
The Python code is MIT-licensed. The core WASM runtime binary is proprietary: it may be used with the package but not redistributed independently. Open-sourcing the WASM runtime is planned.
Limitations & Caveats
There is no full Linux environment, no native modules, and no GPU access. Infinite-loop protection is limited: the step counter tracks WASM yields, not individual script instructions. The proprietary WASM binary remains the key constraint on redistribution. The sandbox is optimized for running controlled code snippets, not as a full VM replacement.