MedicalGPT by shibing624

Medical LLM training pipeline using ChatGPT techniques

created 2 years ago
4,014 stars

Top 12.4% on sourcepulse

Project Summary

MedicalGPT provides a comprehensive pipeline for training domain-specific large language models, focusing on the medical field. It enables users to replicate ChatGPT-like training methodologies, including pre-training, supervised fine-tuning (SFT), and preference optimization techniques like RLHF, DPO, ORPO, and GRPO. This project is valuable for researchers and developers aiming to build specialized medical AI assistants or enhance existing LLMs with medical knowledge and conversational capabilities.

How It Works

The project implements a multi-stage training process inspired by the ChatGPT pipeline. It starts with optional incremental pre-training (PT) on large domain-specific datasets to adapt the model to the medical domain. This is followed by supervised fine-tuning (SFT) using instruction-following datasets to align the model with user intents and inject medical knowledge. For further alignment with human preferences, it supports Reinforcement Learning from Human Feedback (RLHF) as well as Direct Preference Optimization (DPO), ORPO, and GRPO; the latter three refine the model's behavior directly on preference data, without the reward-model-plus-PPO machinery that RLHF requires.
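As a rough sketch, a full run executes the stages in order, each consuming the previous stage's output. The script names and flags below are illustrative assumptions derived from the stage names, not verified paths in the repository:

    # Illustrative stage order only; actual script names and flags may differ.

    # Stage 1 (optional): incremental pre-training on domain-specific text
    python pretraining.py --model_name_or_path path_to_base_model --train_file_dir ./data/pretrain

    # Stage 2: supervised fine-tuning on medical instruction-following data
    python supervised_finetuning.py --model_name_or_path path_to_base_model --train_file_dir ./data/finetune

    # Stage 3: preference optimization on chosen/rejected response pairs (pick one method)
    python dpo_training.py --model_name_or_path path_to_sft_model --train_file_dir ./data/preference
    # RLHF instead trains a reward model first, then optimizes the SFT model against it with PPO.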

Quick Start & Requirements

  • Install dependencies: pip install -r requirements.txt --upgrade
  • Hardware: VRAM requirements vary significantly by model size and training method, ranging from 4GB for QLoRA 2-bit 7B models to over 2400GB for full parameter training of 8x22B models.
  • Supported models include Llama, Llama2, Llama3, Qwen, Qwen1.5, Qwen2, Qwen2.5, Mistral, Mixtral, Baichuan, ChatGLM, and more.
  • Demo: CUDA_VISIBLE_DEVICES=0 python gradio_demo.py --base_model path_to_llama_hf_dir
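For reference, a single-GPU LoRA fine-tuning launch might look roughly like the following. The script name and flags mirror the stage naming above and are assumptions to be checked against the repository's own run scripts:

    # Assumed script name and flags; consult the repo's SFT run script for the exact arguments.
    CUDA_VISIBLE_DEVICES=0 python supervised_finetuning.py \
        --model_type llama \
        --model_name_or_path path_to_llama_hf_dir \
        --train_file_dir ./data/finetune \
        --use_peft True \
        --output_dir outputs-sft-v1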

Highlighted Details

  • Supports a wide range of optimization techniques: PT, SFT, RLHF, DPO, ORPO, GRPO.
  • Integrates with popular LLM architectures like Llama3, Qwen2.5, and Mixtral 8x7B.
  • Includes features for context length extension (RoPE interpolation, S²-Attn) and embedding noise injection (NEFTune).
  • Offers a ChatPDF module for Retrieval-Augmented Generation (RAG) with custom knowledge bases.
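Features such as context extension and NEFTune are typically switched on through training flags. The flag names below are hypothetical placeholders meant only to show where such options would plug in, not verified arguments of this project:

    # Hypothetical flag names for illustration; check the training script's --help for the real options.
    CUDA_VISIBLE_DEVICES=0 python supervised_finetuning.py \
        --model_name_or_path path_to_llama_hf_dir \
        --rope_scaling linear \
        --shift_attn True \
        --neftune_noise_alpha 5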

Maintenance & Community

The project is actively maintained, with frequent updates adding support for new models and training methods. Community engagement is encouraged via GitHub issues.

Licensing & Compatibility

The code is licensed under the Apache License 2.0, which permits commercial use. Model weights and data, however, are restricted to research purposes only. A disclaimer is provided, and any product built on the project must credit MedicalGPT in its product description.

Limitations & Caveats

While the project supports numerous models and training methods, the setup and execution can be resource-intensive, requiring significant VRAM and computational power, especially for full parameter training. The README notes that the code is "still rough" and encourages community contributions for improvements and testing.

Health Check

  • Last commit: 3 weeks ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 3

Star History

  • 174 stars in the last 90 days
