LoRA tuning research for AI-assisted software development
Top 34.7% on SourcePulse
This repository provides a collection of LoRA models and training code for enhancing AI-driven software development efficiency. It targets engineers and researchers interested in fine-tuning large language models like LLaMA and ChatGLM for tasks such as user story generation, test code creation, code completion, and text-to-SQL conversion. The project offers pre-trained LoRA models and detailed tutorials for replicating the training process.
How It Works
The project leverages LoRA (Low-Rank Adaptation) to fine-tune pre-trained models on datasets tailored to specific software engineering tasks. It standardizes the AI-assisted development process by breaking work down into granular steps and training the models on data for each step. The approach aims to maximize the "copy-paste" effect of AI by producing accurate output for each micro-task. Datasets are prepared with OpenAI, which generates user tasks and stories and produces code and test cases from class information.
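To make the workflow concrete, below is a minimal sketch of LoRA fine-tuning with Hugging Face PEFT and Transformers. The base checkpoint, dataset file, prompt format, and hyperparameters are illustrative assumptions, not this project's exact configuration.

```python
# A minimal LoRA fine-tuning sketch using Hugging Face PEFT and Transformers.
# Model name, dataset file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "huggyllama/llama-7b"  # assumed base checkpoint, not the repo's exact one
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters to the attention projections; only these are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Instruction data in Alpaca-style JSON (instruction / input / output fields).
data = load_dataset("json", data_files="user_story_train.json")["train"]

def tokenize(example):
    text = f"{example['instruction']}\n{example['input']}\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=3e-4,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # writes only the small adapter weights
```

Because only the adapter matrices are trained, the saved output is a few megabytes rather than a full model checkpoint, which is what lets the project publish separate LoRA weights per task.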
Quick Start & Requirements
Training recipes are provided as Jupyter notebooks (alpaca-lora.ipynb, chatglm-tuning.ipynb) and Python scripts; a sketch of loading a released adapter follows the links below.
- Alpaca-LoRA training code (upstream): https://github.com/tloen/alpaca-lora
- ChatGLM tuning notebook: https://github.com/unit-mesh/unit-minions/blob/main/chatglm-tuning.ipynb
- Data preparation tooling: https://github.com/unit-mesh/minions-data-prepare
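Assuming one of the released adapters is published on the Hugging Face Hub, a quick-start inference pass might look like the following. The base-model and adapter identifiers are hypothetical placeholders; substitute the actual names from the repository.

```python
# Loading a LoRA adapter on top of a frozen base model for inference with PEFT.
# Both model identifiers below are hypothetical placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "huggyllama/llama-7b"    # assumed base checkpoint
adapter_id = "unit-mesh/userstory-lora"  # hypothetical adapter name

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)

# PeftModel applies the low-rank adapter during the forward pass.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "Instruction: Write user stories for a hotel booking feature.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```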
Highlighted Details
Maintenance & Community
Maintained by the unit-mesh organization.
Licensing & Compatibility
Limitations & Caveats
Last updated about 1 year ago; the project appears inactive.