TelegramMessenger: Confidential AI inference on a decentralized network
Confidential Compute Open Network (COCOON) provides a decentralized platform for running AI inference within Trusted Execution Environments (TEEs) on the TON blockchain. It targets GPU owners seeking to monetize compute power and developers needing secure, private AI inference, ultimately enabling users to access AI services with enhanced confidentiality.
How It Works
COCOON uses TEEs, specifically Intel TDX (Trust Domain Extensions), to create isolated, encrypted environments for AI model execution, so that even the host system cannot access the data or the model during inference. The system operates as a decentralized network in which compute providers (workers) serve AI models and developers or users consume those services, with transactions and coordination handled via the TON cryptocurrency.
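The README snippet does not document a client API, but the flow described above can be pictured as a rough sketch: fetch the worker's TDX attestation, verify it, and only then send the prompt over a channel that terminates inside the enclave. The worker address, endpoint paths, payload shape, and the verify_tdx_quote helper below are hypothetical placeholders, not part of COCOON.

```python
# Hypothetical client-side sketch of the confidential inference flow.
# None of the endpoints or fields below come from the COCOON codebase.
import json
import urllib.request

WORKER_URL = "https://worker.example:8443"  # placeholder worker address


def verify_tdx_quote(report: dict) -> bool:
    # Placeholder: real verification would check the TDX quote signature and
    # compare the measured VM image / model hashes against expected values.
    return bool(report.get("quote"))


def run_confidential_inference(prompt: str) -> str:
    # 1. Fetch the worker's TEE attestation report.
    with urllib.request.urlopen(f"{WORKER_URL}/attestation") as resp:
        attestation = json.load(resp)

    # 2. Refuse to send anything if the attestation does not check out.
    if not verify_tdx_quote(attestation):
        raise RuntimeError("attestation check failed; not sending prompt")

    # 3. Send the prompt; the idea is that only the enclave sees plaintext.
    req = urllib.request.Request(
        f"{WORKER_URL}/infer",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["completion"]
```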
Quick Start & Requirements
Run ./scripts/build-image prod to build the production VM image.
Run ./scripts/prepare-worker-dist ../cocoon-worker-dist to prepare the worker distribution.
Run ./scripts/build-model <ModelName> to generate verifiable model archives.
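A minimal sketch that chains these documented steps, assuming they are run from the repository root; the strict sequencing and any prerequisites beyond what the snippet shows are assumptions.

```python
# Runs the three documented build steps in sequence; stops on the first failure.
import subprocess

MODEL_NAME = "<ModelName>"  # placeholder from the README; substitute a real model name


def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


run(["./scripts/build-image", "prod"])                           # production VM image
run(["./scripts/prepare-worker-dist", "../cocoon-worker-dist"])  # worker distribution
run(["./scripts/build-model", MODEL_NAME])                       # verifiable model archive
```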
Maintenance & Community
No specific details regarding maintainers, community channels (like Discord/Slack), or roadmap were provided in the README snippet.
Licensing & Compatibility
The license is stated to be in the LICENSE file; the specific type and any compatibility restrictions for commercial use are not detailed in the provided text.
Limitations & Caveats
The system relies on specific hardware support for Intel TDX, potentially limiting adoption. The setup process involves building VM images and model distributions, which may require significant technical expertise and resources. The project appears to be in a preview stage, suggesting potential for instability or breaking changes.
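Since TDX-capable hardware is a hard requirement, it may help to check for it before attempting a worker setup. Below is a rough sketch assuming a recent Linux kernel; the exact flags and sysfs paths vary by kernel version and are not COCOON-specific.

```python
# Quick, kernel-dependent checks for Intel TDX availability on Linux.
from pathlib import Path


def cpu_flag_present(flag: str) -> bool:
    # /proc/cpuinfo lists CPU feature flags; "tdx_guest" appears inside a TD guest.
    text = Path("/proc/cpuinfo").read_text()
    return any(flag in line for line in text.splitlines() if line.startswith("flags"))


def host_kvm_tdx_enabled() -> bool:
    # Present on kernels built with KVM TDX host support (kvm_intel.tdx).
    param = Path("/sys/module/kvm_intel/parameters/tdx")
    return param.exists() and param.read_text().strip() in {"Y", "1"}


if __name__ == "__main__":
    print("inside TDX guest:", cpu_flag_present("tdx_guest"))
    print("host KVM TDX enabled:", host_kvm_tdx_enabled())
```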