Web app demo inspired by OpenAI's Sora text-to-video model
This project provides a web interface for generating AI videos, inspired by OpenAI's Sora. It targets users interested in exploring text-to-video generation capabilities, offering a platform to experiment with AI-driven video creation.
How It Works
The project uses Next.js for its full-stack architecture, PostgreSQL for data storage, pnpm for dependency management, and Tailwind CSS for frontend styling. The backend likely handles API interactions and data processing, though no specific AI model integration is documented beyond a note that the example videos were produced by OpenAI's "red team".
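Since the data layer is not documented, here is a minimal hypothetical sketch of how a Next.js backend might validate the `POSTGRES_URL` setting before opening a connection pool. The variable name comes from the project's setup instructions; the helper itself is an illustrative assumption, not code from the repository.

```typescript
// Hypothetical helper: read and sanity-check the POSTGRES_URL environment
// variable a Next.js backend would use to connect to PostgreSQL.
export function getDatabaseUrl(
  env: Record<string, string | undefined> = process.env
): string {
  const url = env.POSTGRES_URL;
  if (!url || !url.startsWith("postgres")) {
    throw new Error("POSTGRES_URL is missing or malformed; set it in .env.local");
  }
  return url;
}
```

A server-side module (e.g. an API route) could call this once at startup so a missing variable fails fast instead of surfacing as an opaque connection error.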
Quick Start & Requirements
To deploy with Docker, build the image and run it bound to localhost:

```shell
sudo docker build -f deploy/Dockerfile -t sorafm:latest .
sudo docker run -itd -p 127.0.0.1:8014:8080 --restart=always sorafm:latest
```
For local development, run `pnpm install`, create a `.env.local` file containing `POSTGRES_URL` and `WEB_BASE_URI`, then start the dev server with `pnpm dev --port 3000`.
Maintenance & Community
The project is maintained by idoubicc, with contact available via Twitter: https://twitter.com/idoubicc.
Licensing & Compatibility
The repository does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The project states that the "Sora text-to-video API is not available," and displayed videos are generated by OpenAI's "red team," implying this project does not directly interface with OpenAI's Sora model. The specific AI models used for generation within this project are not detailed.