Chinese legal dialogue model fine-tuned from ChatGLM-6B
This project provides LAW-GPT (XieZhi), a Chinese legal large language model designed to offer professional and reliable answers to legal questions. It targets individuals facing legal issues, aiming to make legal information accessible and contribute to a more lawful society.
How It Works
LAW-GPT is built on the ChatGLM-6B model and fine-tuned with 16-bit LoRA instruction tuning. Its training data combines existing legal Q&A datasets with high-quality legal Q&A pairs generated via Self-Instruct, guided by statutes and real cases. This approach improves the model's performance in the legal domain and makes its responses more reliable and professional, notably by providing statutory references.
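To make the recipe concrete, here is a minimal sketch of 16-bit LoRA instruction tuning on ChatGLM-6B using the Hugging Face peft library. The rank, alpha, dropout, and target module names are illustrative assumptions, not the project's published configuration.

```python
# Minimal sketch of 16-bit LoRA instruction tuning on ChatGLM-6B with peft.
# Rank, alpha, dropout, and target modules are illustrative assumptions,
# not the project's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # 16-bit weights, matching the 16-bit tuning setup
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained

# Training then iterates over (instruction, answer) pairs drawn from legal
# Q&A datasets and Self-Instruct generations, using a standard causal-LM loss.
```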
Quick Start & Requirements
1. Clone the repository, enter the src directory, and run pip install -r requirements.txt. The peft library requires local installation (cd peft && pip install -e .).
2. Download the fine-tuned model parameters (into ./model), the retrieval model parameters (into ./retriver), and the text2vec-base-chinese model parameters (into ./text2vec-base-chinese).
3. Run CUDA_VISIBLE_DEVICES=$cuda_id python ./demo.py for basic interaction, or CUDA_VISIBLE_DEVICES=$cuda_id python ./demo_r.py for retrieval-augmented interaction (see the sketch after this list).
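As a rough illustration of step 3, the sketch below approximates what ./demo.py does: load the ChatGLM-6B base model, attach the LoRA weights downloaded into ./model, and answer a question. The hub id, the chat() call (ChatGLM-6B's published interface), and the example question are assumptions for illustration; the repository's actual script may differ.

```python
# Rough sketch of what ./demo.py likely does: load ChatGLM-6B, attach the
# fine-tuned LoRA weights from ./model, and answer a legal question.
# Details are assumed; the repository's actual script may differ.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
base = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True, torch_dtype=torch.float16
).cuda()

# "./model" is where the Quick Start places the downloaded LoRA parameters.
model = PeftModel.from_pretrained(base, "./model").eval()

# Hypothetical example question: "I have a labor contract dispute with my
# employer; what statutes apply?" ChatGLM's chat() returns (response, history).
query = "我和公司发生劳动合同纠纷，有哪些法律依据可以参考？"
response, history = model.chat(tokenizer, query, history=[])
print(response)
```

Presumably, demo_r.py performs the same loading and additionally retrieves relevant statutes with the downloaded retrieval and text2vec-base-chinese components before generating an answer.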
Limitations & Caveats
The project's disclaimer states that the pre-trained model is for reference and research only, and its accuracy and reliability are not guaranteed. It explicitly warns against using the model for actual applications or decision-making, with users assuming all risks.