evolving_personality by agent-topia

Framework for LLM agents with dynamic, evolving Jungian personalities

Created 1 month ago
491 stars

Top 63.1% on SourcePulse

View on GitHub
Project Summary

This framework enables Large Language Models (LLMs) to develop dynamic, evolving personalities grounded in Carl Jung's psychological theories. It addresses the need for more structured and adaptive AI agents, offering a novel approach for applications ranging from game NPCs to personalized assistants and social simulations. The primary benefit is providing LLMs with interpretable and controllable personalities that can adapt to contexts and evolve over time.

How It Works

The Jungian Personality Adaptation Framework (JPAF) employs three core mechanisms to achieve dynamic personality modeling. Dominant-Auxiliary Coordination ensures core personality consistency, while Reinforcement-Compensation allows for short-term adaptation to specific interaction contexts. The Reflection Mechanism drives long-term personality evolution, enabling the LLM's persona to grow and change. This psychologically grounded approach, based on Jung's eight psychological types and weighted differentiation, offers a structured alternative to more ad-hoc personality implementations.

Quick Start & Requirements

  • Installation: Clone the repository, create and activate a conda environment (conda create -n jpaf python=3.10, conda activate jpaf), and install dependencies (pip install -r requirement.txt).
  • Prerequisites: Python 3.10+, conda, and API credentials for supported LLMs (OpenAI, Qwen, Llama). Users must copy para.env.example to para.env and fill in their respective API keys, base URLs, and model names.
  • Links: The repository itself serves as the primary resource; specific quick-start or demo links are not detailed beyond the provided README instructions.
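The installation steps above, condensed into one sequence (the repository URL is left as a placeholder; use the "View on GitHub" link):

```shell
# Quick-start steps as described in the README; replace <repo-url> with the GitHub URL.
git clone <repo-url> && cd evolving_personality
conda create -n jpaf python=3.10
conda activate jpaf
pip install -r requirement.txt    # note: the file is named "requirement.txt" per the README
cp para.env.example para.env      # then fill in API keys, base URLs, and model names
```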

Highlighted Details

  • Psychologically Grounded: Models personalities based on Jung's eight psychological types with fine-grained expression.
  • Triple Adaptive Mechanisms: Features Dominant-Auxiliary Coordination, Reinforcement-Compensation, and Reflection for personality consistency, adaptation, and evolution.
  • Cross-Model Compatibility: Validated on multiple LLM families including GPT-4, Llama, and Qwen.
  • Experimental Highlights: Achieved 100% MBTI alignment accuracy on tested models, high type activation accuracy (GPT/Qwen > 90%, Llama 65–95%), and personality evolution accuracy (GPT/Qwen 100%, Llama 92%). Supports all 16 MBTI types.
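As an illustration of how a weighted eight-function profile can be aligned with the 16 MBTI types, the sketch below takes the two highest-weighted functions as dominant and auxiliary and looks up the standard Jungian function stack. The lookup table follows the conventional dominant/auxiliary pairings; the repository's actual alignment procedure is not documented in this summary, so `mbti_from_weights` is an assumption.

```python
# Hypothetical MBTI alignment from a weighted function profile; the mapping table
# follows standard Jungian function stacks, but the function itself is illustrative.
DOMINANT_AUX_TO_MBTI = {
    ("Ni", "Te"): "INTJ", ("Te", "Ni"): "ENTJ",
    ("Ni", "Fe"): "INFJ", ("Fe", "Ni"): "ENFJ",
    ("Si", "Te"): "ISTJ", ("Te", "Si"): "ESTJ",
    ("Si", "Fe"): "ISFJ", ("Fe", "Si"): "ESFJ",
    ("Ti", "Ne"): "INTP", ("Ne", "Ti"): "ENTP",
    ("Fi", "Ne"): "INFP", ("Ne", "Fi"): "ENFP",
    ("Ti", "Se"): "ISTP", ("Se", "Ti"): "ESTP",
    ("Fi", "Se"): "ISFP", ("Se", "Fi"): "ESFP",
}

def mbti_from_weights(weights: dict) -> str:
    """Pick the two highest-weighted functions as dominant/auxiliary, then look up the type."""
    dominant, auxiliary = sorted(weights, key=weights.get, reverse=True)[:2]
    return DOMINANT_AUX_TO_MBTI.get((dominant, auxiliary), "unresolved")
```

For example, a profile dominated by Fe with auxiliary Ni resolves to ENFJ; profiles whose top pair is not a valid dominant/auxiliary stack fall through to "unresolved".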

Maintenance & Community

No specific community channels (e.g., Discord, Slack) or notable contributors/sponsorships are mentioned in the provided README.

Licensing & Compatibility

The provided README does not explicitly state the software license. This omission requires further investigation for compatibility with commercial use or closed-source integration.

Limitations & Caveats

The framework requires users to configure and provide their own LLM API keys, which may incur usage costs. Performance metrics indicate variability across different LLM families, with Llama models showing lower accuracy in certain aspects compared to GPT and Qwen. The absence of a clearly stated license is a significant caveat for adoption.

Health Check

  • Last Commit: 1 month ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 125 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Elvis Saravia (founder of DAIR.AI), and 1 more.

TinyTroupe by microsoft

0.2% · 7k stars
LLM-powered multiagent simulation for business insights and imagination
Created 1 year ago · Updated 2 weeks ago