LoRA fine-tune for a documentation assistant (PoC)
This project demonstrates using a fine-tuned Llama 7B model (LoRA) with Unreal Engine 5 documentation to create a specialized, locally hosted documentation assistant. It targets developers and researchers seeking alternatives to cloud-based LLM APIs and vector databases for niche, context-aware information retrieval.
How It Works
The project fine-tunes Meta's Llama 7B model using a LoRA adapter trained on Unreal Engine 5.1 documentation. This approach allows for efficient, local adaptation of a base LLM to a specific domain, enabling it to answer queries about UE5 features like Nanite and Mass Avoidance with higher accuracy than a general-purpose model.
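The core idea can be sketched with toy numbers (this is an illustration of the LoRA technique, not the project's actual training code): the base weight matrix W stays frozen, and only two small low-rank matrices A and B are trained, with the adapter output scaled by alpha/r and added to the base output.

```python
# Minimal LoRA forward-pass sketch with hypothetical toy dimensions:
# y = W x + (alpha / r) * B (A x), where W is frozen and only A, B train.

def matvec(m, v):
    """Multiply matrix m (a list of rows) by vector v."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

def lora_forward(W, A, B, x, alpha, r):
    base = matvec(W, x)                # frozen base-model output
    update = matvec(B, matvec(A, x))   # low-rank adapter output
    scale = alpha / r                  # standard LoRA scaling factor
    return [b + scale * u for b, u in zip(base, update)]

# Toy 2x2 layer with a rank-1 adapter (r=1).
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weights (identity)
A = [[0.1, 0.1]]              # 1 x 2 down-projection
B = [[0.5], [0.5]]            # 2 x 1 up-projection
x = [1.0, 2.0]
print(lora_forward(W, A, B, x, alpha=2, r=1))
```

Because only A and B are updated, the adapter for a 7B-parameter model is a few tens of megabytes rather than a full model checkpoint, which is what makes local domain adaptation practical.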
Quick Start & Requirements
Requires the dataset file unreal_docs.txt (provided in the repo), placed in text-generation-webui/training/datasets.
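The placement step amounts to the following (a sketch assuming text-generation-webui is cloned in the current directory; the `touch` line stands in for the repo's real unreal_docs.txt so the snippet is self-contained):

```shell
# Create the webui's training dataset directory if it does not exist yet.
mkdir -p text-generation-webui/training/datasets
# Placeholder for the dataset file shipped with this repo.
touch unreal_docs.txt
# Copy the dataset where the webui's training tab looks for it.
cp unreal_docs.txt text-generation-webui/training/datasets/
```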
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The model is prone to hallucinations and may generate incorrect information. Output quality could be improved with a UE5-tailored character YAML file or by formatting the dataset as instruction/response pairs. The included web scraping script is inefficient.
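The instruction/response reformatting suggested above could look like the following sketch. The section-splitting heuristic and the question template are hypothetical, not the project's code; the output follows the Alpaca-style instruction/input/output layout commonly used for LoRA training datasets.

```python
import json

def to_instruction_pairs(raw_text):
    """Split raw scraped docs on blank lines and wrap each chunk as an
    instruction/response record (hypothetical conversion heuristic)."""
    chunks = [c.strip() for c in raw_text.split("\n\n") if c.strip()]
    records = []
    for chunk in chunks:
        # Treat the first line of each chunk as the topic title.
        title = chunk.splitlines()[0]
        records.append({
            "instruction": f"Explain the following UE5 topic: {title}",
            "input": "",
            "output": chunk,
        })
    return records

raw = "Nanite\nVirtualized geometry system.\n\nMass Avoidance\nCrowd steering."
pairs = to_instruction_pairs(raw)
print(json.dumps(pairs[0], indent=2))
```

Pairing each documentation section with an explicit question in this way gives the fine-tune a consistent prompt shape to learn, which tends to reduce the free-form drift that raw-text training produces.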