Medical LLM for intelligent Q&A, diagnosis support, and medical knowledge access
WiNGPT2 is a large language model specialized for the healthcare domain, offering intelligent medical Q&A, diagnostic support, and knowledge services. It aims to enhance efficiency and quality in healthcare by integrating professional medical knowledge and data. The project targets medical professionals and general users seeking reliable health information.
How It Works
WiNGPT2 is built on the Transformer architecture, using models such as Gemma and Llama 3 as pre-trained bases. It incorporates techniques such as RoPE relative positional encoding, the SwiGLU activation, and RMSNorm for improved performance. The model is trained on a large, curated corpus of medical literature, clinical guidelines, and clinical data, with targeted optimizations for a range of healthcare scenarios and tasks.
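The building blocks named above can be sketched in plain Python (illustrative only; real implementations operate on tensors across full hidden dimensions):

```python
import math

def rms_norm(x, gain, eps=1e-6):
    # RMSNorm: rescale by the root mean square of the activations
    # (no mean subtraction or bias term, unlike LayerNorm).
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gain, x)]

def swiglu(gate, value):
    # SwiGLU: SiLU(gate) multiplied elementwise against a second projection.
    return [(g / (1.0 + math.exp(-g))) * v for g, v in zip(gate, value)]

def rope_pair(x1, x2, pos, inv_freq):
    # RoPE: rotate each 2-D feature pair by an angle proportional to the
    # token position, so attention dot products encode relative offsets.
    angle = pos * inv_freq
    return (x1 * math.cos(angle) - x2 * math.sin(angle),
            x1 * math.sin(angle) + x2 * math.cos(angle))
```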
Quick Start & Requirements
Inference uses the Hugging Face `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WiNGPT2-7B-Chat"  # or another available model
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).to("cuda")
```
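A minimal generation call might look like the following sketch. The prompt format shown is an assumption for illustration; consult the project's model card for the exact chat template it expects.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WiNGPT2-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).to("cuda")

# Hypothetical single-turn prompt; the real chat template may differ.
prompt = "User: What lifestyle changes help manage hypertension?\nAssistant: "
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```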
Requirements: the `transformers` library (with PyTorch). A GPU with sufficient VRAM is recommended for optimal performance. A demo script (demo.py) is included.
Highlighted Details
Maintenance & Community
Contact the team by email (wair@winning.com.cn) or through the website (https://www.winning.com.cn).
Licensing & Compatibility
Limitations & Caveats