agent-topia
Framework for LLM agents with dynamic, evolving Jungian personalities
Top 63.1% on SourcePulse
This framework enables Large Language Models (LLMs) to develop dynamic, evolving personalities grounded in Carl Jung's psychological theories. It addresses the need for more structured and adaptive AI agents, offering a novel approach for applications ranging from game NPCs to personalized assistants and social simulations. The primary benefit is providing LLMs with interpretable and controllable personalities that can adapt to contexts and evolve over time.
How It Works
The Jungian Personality Adaptation Framework (JPAF) employs three core mechanisms to achieve dynamic personality modeling. Dominant-Auxiliary Coordination ensures core personality consistency, while Reinforcement-Compensation allows for short-term adaptation to specific interaction contexts. The Reflection Mechanism drives long-term personality evolution, enabling the LLM's persona to grow and change. This psychologically grounded approach, based on Jung's eight psychological types and weighted differentiation, offers a structured alternative to more ad-hoc personality implementations.
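The three mechanisms above can be pictured as updates over weighted Jungian function-attitudes. The mechanism names come from the README; the class, the weight values, the opposite-function pairing, and the update rules below are illustrative assumptions, not the framework's actual API.

```python
# Illustrative sketch (not the real JPAF API): a persona as weights over
# Jung's eight function-attitudes, per the README's "weighted differentiation".

JUNG_FUNCTIONS = ["Te", "Ti", "Fe", "Fi", "Se", "Si", "Ne", "Ni"]

# Assumed opposite-attitude pairing used by the compensation step.
OPPOSITE = {"Te": "Fi", "Fi": "Te", "Ti": "Fe", "Fe": "Ti",
            "Se": "Ni", "Ni": "Se", "Si": "Ne", "Ne": "Si"}

class JungianPersona:
    def __init__(self, weights):
        self.base = dict(weights)                        # long-term weights
        self.context = {f: 0.0 for f in JUNG_FUNCTIONS}  # short-term offsets

    def dominant_auxiliary(self):
        """Dominant-Auxiliary Coordination: the two highest-weighted
        functions anchor the persona, keeping core behavior consistent."""
        ranked = sorted(self.base, key=self.base.get, reverse=True)
        return ranked[0], ranked[1]

    def reinforce(self, function, amount=0.1):
        """Reinforcement-Compensation: boosting one function for the current
        context partially dampens its opposite attitude."""
        self.context[function] += amount
        self.context[OPPOSITE[function]] -= amount / 2

    def reflect(self, decay=0.5):
        """Reflection Mechanism: fold a fraction of the short-term context
        offsets into the long-term base weights, then reset the context.
        This is the long-term evolution step."""
        for f in JUNG_FUNCTIONS:
            self.base[f] = max(0.0, self.base[f] + decay * self.context[f])
            self.context[f] = 0.0

    def effective_weights(self):
        """Weights actually used to condition the LLM's persona prompt."""
        return {f: self.base[f] + self.context[f] for f in JUNG_FUNCTIONS}
```

Under this toy model, repeated `reinforce` calls adapt behavior within a conversation, while periodic `reflect` calls let those adaptations slowly reshape the base personality.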
Quick Start & Requirements
Create a conda environment (`conda create -n jpaf python=3.10`), activate it (`conda activate jpaf`), and install the dependencies (`pip install -r requirement.txt`). Then copy `para.env.example` to `para.env` and fill in the respective API keys, base URLs, and model names.
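Collected in order, the setup steps read as follows (file names `requirement.txt` and `para.env.example` are as given by the project; the `cp` invocation is an assumed equivalent of "copy the file"):

```shell
# Create and activate an isolated Python 3.10 environment
conda create -n jpaf python=3.10
conda activate jpaf

# Install dependencies (note: the file is "requirement.txt", singular)
pip install -r requirement.txt

# Copy the environment template, then edit para.env to add your
# API keys, base URLs, and model names
cp para.env.example para.env
```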
Maintenance & Community
No specific community channels (e.g., Discord, Slack) or notable contributors/sponsorships are mentioned in the provided README.
Licensing & Compatibility
The provided README does not state a software license. Until one is confirmed, suitability for commercial use or closed-source integration remains unclear.
Limitations & Caveats
The framework requires users to configure and provide their own LLM API keys, which may incur usage costs. Performance metrics indicate variability across different LLM families, with Llama models showing lower accuracy in certain aspects compared to GPT and Qwen. The absence of a clearly stated license is a significant caveat for adoption.
Last updated: 1 month ago · Activity: Inactive