# story-flicks: Video generator for story creation via LLMs
This project enables users to generate short, high-definition story videos from a simple text prompt. It targets content creators and individuals looking to quickly produce visual narratives, offering a streamlined workflow for creating AI-powered multimedia content.
## How It Works
The system leverages a backend powered by Python and FastAPI, orchestrating calls to various AI model providers for different media generation tasks. It supports multiple providers for text (OpenAI, Aliyun, Deepseek, Ollama, SiliconFlow) and image (OpenAI, Aliyun, SiliconFlow) generation, allowing flexibility in model selection. The generated text content, AI-created images, synthesized audio, and subtitles are then compiled into a cohesive video.
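Because most of these providers expose OpenAI-compatible APIs, a backend like this typically routes requests by swapping the base URL and API key per provider. The following is a minimal sketch of that pattern, not the project's actual code; the registry contents, function names, and environment-variable names are illustrative assumptions:

```python
# Sketch (hypothetical names): routing a text-generation request to one of
# several OpenAI-compatible providers by selecting a base URL and key source.
from dataclasses import dataclass


@dataclass(frozen=True)
class Provider:
    base_url: str  # root of the provider's OpenAI-compatible endpoint
    key_env: str   # environment variable expected to hold the API key


# Illustrative registry; real URLs and key names come from each provider's docs.
PROVIDERS = {
    "openai": Provider("https://api.openai.com/v1", "OPENAI_API_KEY"),
    "deepseek": Provider("https://api.deepseek.com/v1", "DEEPSEEK_API_KEY"),
    "ollama": Provider("http://localhost:11434/v1", "OLLAMA_API_KEY"),
}


def resolve_provider(name: str) -> Provider:
    """Look up a provider by (case-insensitive) name, failing loudly if unknown."""
    try:
        return PROVIDERS[name.lower()]
    except KeyError:
        raise ValueError(f"unsupported text provider: {name!r}") from None
```

With a table like this, adding a provider is a one-line registry change, and the rest of the generation pipeline stays provider-agnostic.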
## Quick Start & Requirements
1. Clone the repository:

   ```bash
   git clone https://github.com/alecm20/story-flicks.git
   ```

2. Backend: enter `backend/`, copy `.env.example` to `.env`, and configure your API keys and model providers. Install dependencies with `pip install -r requirements.txt` and run the server with `uvicorn main:app --reload`.
3. Frontend: enter `frontend/`, install dependencies with `npm install`, and start the dev server with `npm run dev`.
4. Docker (alternative): run `docker-compose up --build` from the project root.

## Highlighted Details
## Maintenance & Community

## Licensing & Compatibility

## Limitations & Caveats
The project requires users to obtain and configure API keys for various third-party AI services, which may incur costs. The quality and performance of the generated videos are dependent on the selected AI models and providers.
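For a sense of what that key configuration involves, here is a hedged sketch of a backend `.env`; the variable names below are hypothetical, so consult the project's `.env.example` for the real ones:

```shell
# Hypothetical variable names; see backend/.env.example for the actual keys.
text_provider=openai
image_provider=aliyun
OPENAI_API_KEY=replace-with-your-key
```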