LoRA tuning script for ChatGLM-6B
This project provides LoRA fine-tuned weights for the ChatGLM-6B model, aimed at improving its instruction-following ability, particularly in Chinese. It targets researchers and developers who want to adapt the base model to understand and respond to instructions more reliably without the cost of full fine-tuning.
How It Works
The project leverages the LoRA (Low-Rank Adaptation) technique to fine-tune the ChatGLM-6B model on various instruction datasets. This approach injects trainable low-rank matrices into the existing model layers, significantly reducing the number of parameters that need to be updated during fine-tuning. This makes the fine-tuning process more memory-efficient and faster compared to full model fine-tuning, while still achieving competitive performance.
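As a concrete illustration, the sketch below shows how LoRA adapters are typically injected using the Hugging Face peft library. The rank, scaling factor, and target module names here are illustrative assumptions, not settings taken from this repository.

```python
# Minimal sketch of LoRA injection using the Hugging Face peft library.
# The hyperparameters (r, lora_alpha, target_modules) are illustrative
# assumptions, not the exact settings used by this project.
from transformers import AutoModel
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the trainable low-rank matrices
    lora_alpha=32,                        # scaling applied to the LoRA updates
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # ChatGLM-6B's fused QKV projection
)

model = get_peft_model(model, config)
# Only the injected low-rank matrices are trainable; the base weights stay frozen.
model.print_trainable_parameters()
```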
Quick Start & Requirements
Install the Python dependencies:
pip install -r requirements.txt
Download the LoRA weights from Baidu Netdisk: https://pan.baidu.com/s/1c-zRSEUn4151YLoowPN4YA?pwd=hxbr (extraction code: hxbr).
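After downloading and extracting the weights, they can be attached to the base model for inference. Below is a minimal sketch assuming the peft library; "./chatglm-6b-lora" is a hypothetical path for the extracted checkpoint, not a path defined by this repository.

```python
# Sketch: attach the downloaded LoRA weights to the base ChatGLM-6B model.
# "./chatglm-6b-lora" is a hypothetical path to the extracted checkpoint.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
base = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = PeftModel.from_pretrained(base, "./chatglm-6b-lora")

# ChatGLM exposes a chat() helper; attribute access is forwarded through PeftModel.
response, _ = model.chat(tokenizer, "用一句话介绍你自己", history=[])  # "Introduce yourself in one sentence"
print(response)
```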
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats