aiac by gofireflyio

AI-as-code generator, CLI tool, and Go library

created 2 years ago
3,701 stars

Top 13.4% on sourcepulse

Project Summary

This tool generates Infrastructure-as-Code (IaC) templates, configurations, and scripts using Large Language Models (LLMs) from providers like OpenAI, Amazon Bedrock, and Ollama. It's designed for developers and DevOps engineers who need to quickly create or iterate on cloud infrastructure definitions, CI/CD pipelines, and policy-as-code.

How It Works

aiac uses LLMs to translate natural language prompts into structured code. Users supply a prompt describing the desired output (e.g., "terraform for AWS EC2") and select a target LLM backend defined in the configuration file. The tool composes the request, sends it to the LLM, and prints the generated code, offering an interactive session for refinement or saving the result to files. This automates the creation of boilerplate code, reducing manual effort and the errors that come with it.
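
The sketch below is not aiac's source; it is a minimal Go illustration of the same compose-request, call-LLM, print-code cycle, assuming the OpenAI chat completions endpoint and an OPENAI_API_KEY environment variable.

    // Minimal Go sketch of the flow described above (not aiac's actual code):
    // compose a chat request, send it to an LLM provider, print the reply.
    // Assumes the OpenAI chat completions API and an OPENAI_API_KEY env var.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    type chatRequest struct {
        Model    string    `json:"model"`
        Messages []message `json:"messages"`
    }

    type chatResponse struct {
        Choices []struct {
            Message message `json:"message"`
        } `json:"choices"`
    }

    func main() {
        prompt := "terraform for AWS EC2" // the natural language request

        body, err := json.Marshal(chatRequest{
            Model:    "gpt-4o",
            Messages: []message{{Role: "user", Content: prompt}},
        })
        if err != nil {
            panic(err)
        }

        req, err := http.NewRequest("POST",
            "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out chatResponse
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        if len(out.Choices) == 0 {
            panic("empty response")
        }
        fmt.Println(out.Choices[0].Message.Content) // generated code, ready to save
    }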

Quick Start & Requirements

  • Installation: brew install aiac, docker pull ghcr.io/gofireflyio/aiac, or go install github.com/gofireflyio/aiac/v5@latest.
  • Prerequisites: LLM provider API keys/endpoints (OpenAI, Azure OpenAI, Amazon Bedrock) or an Ollama server URL. Configuration is managed via a TOML file (~/.config/aiac/aiac.toml); a sketch of such a file follows this list.
  • Setup: configure at least one LLM backend (provider credentials or an Ollama URL) before first use.
  • Docs: https://github.com/gofireflyio/aiac
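
For illustration only, a named-backends configuration file might look like the following; the key names here are assumptions based on the description above, not confirmed field names, so consult the project README for the real schema.

    # ~/.config/aiac/aiac.toml -- illustrative sketch only; key names are
    # assumptions, not the confirmed schema (see the project README).
    default_backend = "openai_prod"

    [backends.openai_prod]
    type = "openai"
    api_key = "YOUR_API_KEY"

    [backends.local_ollama]
    type = "ollama"
    url = "http://localhost:11434"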

Highlighted Details

  • Supports multiple LLM backends (OpenAI, Azure OpenAI, Amazon Bedrock, Ollama) via a configurable TOML file.
  • Generates various code types: IaC (Terraform, Pulumi, CloudFormation), Dockerfiles, Kubernetes manifests, CI/CD pipelines, policies, and scripts.
  • Offers interactive chat, code saving to files, and clipboard integration.
  • Version 5 introduced a significant configuration overhaul, moving to named backends defined in a TOML file.

Maintenance & Community

  • Maintained by gofireflyio; recent activity is summarized under Health Check below.
  • Community support channels are not explicitly mentioned in the README.

Licensing & Compatibility

  • Apache License 2.0.
  • Permissive license suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

  • Since v5, only chat models are supported; support for completion models has been dropped.
  • The tool does not verify if a selected model actually exists or is suitable for the task, relying on the provider API for validation.
  • The tool does not handle provider rate limits; users integrating it programmatically must implement their own retry/backoff, as sketched below.
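
aiac leaves rate limiting to the caller, so anyone embedding the library or invoking the CLI in a loop needs their own retry policy. Below is a minimal Go sketch of exponential backoff with jitter around an arbitrary HTTP call, assuming the provider signals rate limiting with HTTP 429; it is generic and not part of aiac.

    // Generic client-side backoff for provider rate limits (HTTP 429).
    // Not part of aiac; a sketch of what callers might wrap around each request.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "net/http"
        "time"
    )

    // doWithBackoff retries fn while the provider reports rate limiting,
    // sleeping exponentially longer (plus jitter) between attempts.
    func doWithBackoff(fn func() (*http.Response, error), maxRetries int) (*http.Response, error) {
        delay := time.Second
        for attempt := 0; ; attempt++ {
            resp, err := fn()
            if err != nil {
                return nil, err
            }
            if resp.StatusCode != http.StatusTooManyRequests {
                return resp, nil
            }
            resp.Body.Close()
            if attempt == maxRetries {
                return nil, errors.New("rate limited: retries exhausted")
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(time.Second))))
            delay *= 2
        }
    }

    func main() {
        resp, err := doWithBackoff(func() (*http.Response, error) {
            return http.Get("https://example.com/") // stand-in for a provider call
        }, 5)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }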

Health Check

  • Last commit: 9 months ago
  • Responsiveness: 1+ week
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

  • 59 stars in the last 90 days
