Screenshot-to-code tool for generating code from visual inputs
Top 0.2% on sourcepulse
This project provides a tool to convert screenshots and mockups into functional code (HTML/Tailwind, React/Vue, Bootstrap, Ionic) using AI. It targets designers and developers seeking to rapidly prototype or generate boilerplate code from visual designs, offering a significant time-saving benefit.
How It Works
The application uses AI models such as Claude 3.7 Sonnet and GPT-4o to interpret visual input (screenshots, Figma designs) and generate corresponding code. It supports several frontend frameworks and styling libraries. An experimental feature also converts video screen recordings into functional prototypes.
Quick Start & Requirements
Create .env files for the backend and frontend, then run poetry install and poetry run uvicorn main:app --reload --port 7001 for the backend, and yarn followed by yarn dev for the frontend. Docker Compose is also available.
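The quick-start steps above can be sketched as shell commands. This is a minimal sketch: the backend/ and frontend/ directory names and the API-key variable name are assumptions not confirmed by this document, so check the repository's own README for the exact layout.

```shell
# Backend: add an API key, install dependencies, start the server.
# OPENAI_API_KEY and the backend/ directory are assumptions.
cd backend
echo "OPENAI_API_KEY=your-key-here" >> .env
poetry install
poetry run uvicorn main:app --reload --port 7001

# Frontend (in a second terminal, since uvicorn blocks):
cd frontend
yarn
yarn dev
```

Alternatively, a single docker compose up from the repository root should bring up both services if a compose file is provided.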
Maintenance & Community
The project is actively maintained by the author, with community feedback encouraged via GitHub Issues and Twitter.
Licensing & Compatibility
The repository appears to be under the MIT License, allowing for commercial use and integration with closed-source projects.
Limitations & Caveats
While powerful, the quality of generated code is dependent on the AI model used and the clarity of the input screenshot. The experimental video-to-code feature may have varying results.