ChatGLM Fine-Tuned for Chinese Medical QA
This repository provides a Chinese medical instruction-tuned version of the ChatGLM-6B model, aimed at improving its performance in the healthcare domain. Researchers and developers working with Chinese medical data can leverage this fine-tuned model for enhanced medical question answering capabilities.
How It Works
The project fine-tunes the ChatGLM-6B model using a custom-built Chinese medical instruction dataset. This dataset is constructed by leveraging a medical knowledge graph (cMeKG) and the GPT-3.5 API to generate diverse question-answer pairs covering diseases, drugs, and examination indicators. The fine-tuning process aims to adapt the base ChatGLM model to understand and respond to medical queries more effectively.
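To illustrate the data-construction step, here is a minimal sketch of turning a knowledge-graph triple into an instruction-style QA pair via the GPT-3.5 API. The triple format, prompt wording, and helper name are assumptions for illustration, not the repository's actual pipeline:

# Hypothetical sketch of instruction-data generation; not the repo's exact code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def qa_from_triple(head, relation, tail):
    """Ask GPT-3.5 to write one medical QA pair grounded in a cMeKG-style triple."""
    prompt = (
        f"Based on the medical fact ({head}, {relation}, {tail}), "
        "write one Chinese medical question and a detailed answer."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example triple: (diabetes, clinical manifestation, excessive thirst)
print(qa_from_triple("糖尿病", "临床表现", "多饮"))

Pairs generated this way across diseases, drugs, and examination indicators form the instruction dataset used for fine-tuning.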
Quick Start & Requirements
Install the dependencies, then run the inference script:
pip install -r requirements.txt
python infer.py
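For programmatic use, a minimal inference sketch follows, using the standard ChatGLM-6B chat interface from Hugging Face transformers. The local checkpoint path "./model" is an assumption; see the repository for where the fine-tuned weights actually live:

# Hypothetical inference sketch; "./model" stands in for the real checkpoint path.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("./model", trust_remote_code=True)
model = AutoModel.from_pretrained("./model", trust_remote_code=True).half().cuda()
model = model.eval()

# Ask a medical question; `chat` returns the reply and the updated dialogue history.
question = "糖尿病的常见症状有哪些？"  # "What are the common symptoms of diabetes?"
response, history = model.chat(tokenizer, question, history=[])
print(response)

Passing the returned history back into subsequent chat calls carries the conversation forward across turns.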
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The authors note that, because ChatGLM-6B's original training strategy is not open-sourced, this instruction tuning may degrade the base model's general capabilities. Owing to copyright considerations around ChatGLM, future iterations will focus on fully open-source base models. The dataset quality is acknowledged as limited and under ongoing iteration, and model-generated content must not be used for actual medical diagnosis.