Open-source LLM for interpretable mental health analysis
Top 94.9% on sourcepulse
MentaLLaMA provides open-source instruction-following large language models for interpretable mental health analysis on social media. It targets researchers and developers who need to analyze mental health discourse and generate explanations, and it offers a novel dataset and benchmark for this specialized domain.
How It Works
MentaLLaMA is built upon LLaMA and Vicuna foundation models, fine-tuned on the Interpretable Mental Health Instruction (IMHI) dataset. This dataset comprises 105K instruction samples across 8 mental health analysis tasks derived from public social media data. The models are designed to follow instructions for mental health analysis and provide high-quality, interpretable explanations for their predictions.
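As a sketch of what instruction-following for these tasks looks like, the snippet below builds an IMHI-style prompt that pairs a social media post with an analysis question; the wording and structure here are illustrative assumptions, not the dataset's literal template.

```python
def build_prompt(post: str, question: str) -> str:
    """Pair a social media post with a mental health analysis question.

    The template is a hypothetical example of the IMHI instruction
    style (label + explanation), not the dataset's exact format.
    """
    return (
        'Consider this post: "' + post + '"\n'
        "Question: " + question + "\n"
        "Answer with a label and an explanation of your reasoning."
    )

prompt = build_prompt(
    "I haven't slept properly in weeks and nothing feels worth doing.",
    "Does the poster show symptoms of depression?",
)
```

The model is expected to respond with both a classification and a natural-language explanation, which is what makes the analysis interpretable.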
Quick Start & Requirements
# Load a MentaLLaMA checkpoint; MODEL_PATH can be a local directory or a Hugging Face Hub ID
from transformers import LlamaTokenizer, LlamaForCausalLM

MODEL_PATH = "path/to/MentaLLaMA-checkpoint"  # replace with your model location
tokenizer = LlamaTokenizer.from_pretrained(MODEL_PATH)
model = LlamaForCausalLM.from_pretrained(MODEL_PATH, device_map='auto')
For the 33B variant, the Vicuna-33B base weights are expected at the local path ./vicuna-33B.
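Once the model and tokenizer are loaded, inference follows the standard transformers generate/decode pattern. The helper below is a minimal sketch; the generation settings are illustrative assumptions, not the project's official defaults.

```python
def generate_analysis(model, tokenizer, prompt, max_new_tokens=256):
    """Run one mental health analysis query against a loaded model.

    `model` and `tokenizer` are the objects loaded in the Quick Start;
    `max_new_tokens` is an assumed, illustrative setting.
    """
    # Tokenize the prompt and move tensors onto the model's device.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Greedy generation; decode the full sequence (prompt + answer).
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

A call such as `generate_analysis(model, tokenizer, prompt)` then returns the model's label and explanation as plain text.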
Highlighted Details
Maintenance & Community
Last commit: 1 year ago; the repository is marked inactive.
Licensing & Compatibility
Limitations & Caveats