Research paper exploring LLMs through the lens of MBTI personality types
This project provides a suite of 32 large language models (LLMs) covering the 16 MBTI personality types in both Chinese and English (one model per type per language). It aims to explore the intersection of LLMs and personality psychology, offering tailored models for nuanced interactions and insights.
How It Works
The MM series models are built upon foundational LLMs like Baichuan and LLaMA2. They are developed through a multi-stage process involving pre-training, fine-tuning, and Direct Preference Optimization (DPO) using a custom-built, extensive MBTI dataset. This approach aims to achieve stable personality alignment, differentiating it from prompt-based personality manipulation.
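As a concrete illustration of the final alignment stage, here is a minimal sketch of the DPO preference loss; the function names and tensor shapes are illustrative assumptions, not the project's actual training code.

```python
# Minimal sketch of the DPO objective used for personality alignment.
# Names and shapes are illustrative, not the project's actual code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of (chosen, rejected) response pairs.

    Each argument is the summed log-probability of a response under the
    trainable policy or the frozen reference model, shape (batch,).
    """
    # Log-ratio of policy to reference for preferred and dispreferred responses.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between chosen and rejected, scaled by beta.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probs for a batch of 4 preference pairs:
b = 4
loss = dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
print(loss.item())
```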
Quick Start & Requirements
Models are available on Hugging Face and ModelScope. Specific installation and usage instructions depend on the chosen base model and framework (e.g., LLaMA-Efficient-Tuning). Access to the models and datasets is provided via Hugging Face and ModelScope links.
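For example, a per-type checkpoint can be loaded with the Hugging Face transformers library; the repository id below is a hypothetical placeholder, so substitute the actual id for the desired MBTI type and language.

```python
# Loading one of the per-type checkpoints with transformers.
# The repo id is a hypothetical placeholder; substitute the real
# Hugging Face or ModelScope id for the chosen type and language.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/MM-en-INTJ"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "How do you usually approach a new project?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```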
Maintenance & Community
The project has active partnerships with ModelScope and Hugging Face for model hosting and demos. The core team includes contributors from Peking University and FarReel AI Lab. Further collaboration inquiries can be directed via email.
Licensing & Compatibility
Code is licensed under Apache 2.0. Model weights for English versions follow the LLaMA2 license. Chinese versions are based on Baichuan and are subject to its open-source license, with specific commercial use details provided in linked documents.
Limitations & Caveats
The provided evaluation scores reflect intentional overfitting on personality data to study the impact of data imbalance, not general performance. For practical use, mixing the custom dataset with original training data is recommended.
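A minimal sketch of such mixing is shown below, blending the personality data with general-purpose instruction data before fine-tuning; the file names and the 1:4 mixing ratio are illustrative assumptions, not values prescribed by the project.

```python
# Sketch of blending MBTI personality data with general instruction
# data to avoid the overfitting noted above. File names and the
# 1:4 ratio are illustrative assumptions.
from datasets import load_dataset, concatenate_datasets

personality = load_dataset("json", data_files="mbti_intj.jsonl", split="train")
general = load_dataset("json", data_files="general_sft.jsonl", split="train")

# Keep roughly four general examples per personality example.
n_general = min(len(general), 4 * len(personality))
mixed = concatenate_datasets([
    personality,
    general.shuffle(seed=42).select(range(n_general)),
]).shuffle(seed=42)
print(f"{len(mixed)} training examples after mixing")
```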