Kolo: CLI tool for local LLM fine-tuning automation
Kolo streamlines LLM fine-tuning by automating environment setup and providing a unified interface for popular tools like Unsloth, Torchtune, Llama.cpp, and Ollama. It targets AI researchers and developers seeking a rapid, hassle-free local fine-tuning experience, reducing setup time to minutes.
How It Works
Kolo leverages Docker to create a consistent, pre-configured environment, eliminating dependency conflicts. It integrates Unsloth for faster training and lower VRAM usage, Torchtune for PyTorch-native fine-tuning (including AMD GPU and CPU support), and Llama.cpp for GGUF conversion and quantization. Ollama manages model deployment, and Open WebUI provides a testing interface. This stacked approach offers flexibility and performance for local LLM experimentation.
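Kolo's own scripts aren't reproduced here, but a minimal sketch of the post-training hand-off this stack implies looks roughly like the following, assuming a container named kolo, the standard llama.cpp and Ollama CLIs, and hypothetical workspace paths:

# Convert the merged Hugging Face checkpoint to GGUF with llama.cpp.
docker exec kolo python /llama.cpp/convert_hf_to_gguf.py `
    /workspace/outputs/merged --outfile /workspace/model-f16.gguf

# Quantize the GGUF file (Q4_K_M is a common size/quality trade-off).
docker exec kolo /llama.cpp/llama-quantize `
    /workspace/model-f16.gguf /workspace/model-q4_k_m.gguf Q4_K_M

# Register the quantized model with Ollama via a Modelfile that points at it.
docker exec kolo ollama create my-finetune -f /workspace/Modelfile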
Quick Start & Requirements
Kolo is driven by PowerShell scripts and runs everything inside Docker; NVIDIA GPUs are the default target, with _amd variants of the build and run scripts for AMD.
1. Build the image: ./build_image.ps1 (or ./build_image_amd.ps1 for AMD).
2. Create and start the container: ./create_and_run_container.ps1 (or ./create_and_run_container_amd.ps1 for AMD).
3. Copy your training data into the container: ./copy_training_data.ps1 (a sketch of a possible data format follows this list).
4. Fine-tune with ./train_model_unsloth.ps1 or ./train_model_torchtune.ps1.
5. Install the fine-tuned model for serving: ./install_model.ps1.
6. Open localhost:8080 in a browser to test the model in Open WebUI.
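The exact schema copy_training_data.ps1 expects isn't documented in this summary; as an assumption, a common instruction-tuning layout is JSONL, one JSON object per line:

{"instruction": "What does Kolo automate?", "output": "Docker-based environment setup and local LLM fine-tuning."}
{"instruction": "Which trainers does it wrap?", "output": "Unsloth and Torchtune."}

Whatever format you use, it has to match what the chosen training script (Unsloth or Torchtune) is configured to parse.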
Highlighted Details
Maintenance & Community
Last repository activity was about 4 months ago, and the project is currently marked inactive.
Licensing & Compatibility
Limitations & Caveats
To remove an installed model, run ./delete_model.ps1.