Chinese finance LLM fine-tuning project, LoRA weights for LLaMA
Cornucopia-LLaMA-Fin-Chinese provides instruction-tuned LLaMA models fine-tuned on Chinese financial knowledge. It targets users who need improved LLaMA performance in the financial domain, offering open-source LoRA weights and a lightweight training framework (the model resources themselves are restricted to academic research use).
How It Works
The project fine-tunes LLaMA-based models using instruction datasets derived from Chinese financial Q&A data, including publicly available and scraped sources. This approach aims to enhance LLaMA's capabilities in financial question answering by leveraging curated financial domain data and instruction-following techniques. Future work includes expanding datasets with GPT-4 and knowledge graphs for multi-task SFT and RLHF.
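To make the instruction-tuning setup concrete, the sketch below shows an Alpaca-style instruction record and prompt template of the kind commonly used for this sort of SFT. The field names, the template wording, and the example record are assumptions for illustration; the actual schema of the repo's `./instruction_data/fin_data.json` may differ.

```python
# Hypothetical Alpaca-style training record; the real schema of
# ./instruction_data/fin_data.json may differ.
record = {
    "instruction": "What is a convertible bond and what are its main risks?",
    "input": "",
    "output": "A convertible bond is a corporate bond that can be exchanged "
              "for a fixed number of the issuer's shares. Its main risks "
              "include credit risk, interest-rate risk, and dilution.",
}

# A common instruction-SFT prompt template (assumed, not taken from the repo).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_example(rec: dict) -> str:
    """Render one supervised example: the prompt followed by the target answer."""
    return PROMPT_TEMPLATE.format(instruction=rec["instruction"]) + rec["output"]

example = build_example(record)
# During fine-tuning, the loss is typically computed only on the tokens
# after the "### Response:" marker, so the model learns to produce answers.
```

The same `build_example` rendering (minus the appended `output`) would be reused at inference time, with the model generating the text after the response marker.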
Quick Start & Requirements
Install dependencies: pip install -r requirements.txt
Download base model weights: ./base_models/load.sh
Inference: ./scripts/infer.sh (single model) or ./scripts/comparison_test.sh (multi-model)
Fine-tuning: ./scripts/finetune.sh (requires data in ./instruction_data/fin_data.json format)
Highlighted Details
Base models: decapoda-research/llama-7b-hf (V1.0) and Linly-AI/Chinese-LLaMA-7B (V1.1).
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project explicitly states that model resources are for academic research only and strictly prohibited for commercial use. The accuracy of model-generated content is not guaranteed due to computational factors, randomness, and quantization precision loss. Outputs should not be considered investment advice.