Personality Traits in Large Language Models

https://arxiv.org/pdf/2307.00184.pdf

Authors: Mustafa Safdari, Gregory Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, Maja Matarić

Word count: Approximately 4900 words

Estimated read time: 25-30 minutes

Source code repo: Not provided

Supporting links: References section contains 113 links to related work and methodological sources

Summary: This paper presents a comprehensive methodology for characterizing, measuring, and shaping personality traits synthesized in the text generated by large language models (LLMs). The authors administer validated personality inventories from psychology to probe the Big Five traits (extraversion, agreeableness, conscientiousness, neuroticism, openness) in LLMs such as PaLM. They establish the construct validity of these LLM-simulated personality scores using best practices from psychometrics. The authors find that larger, instruction fine-tuned LLMs like Flan-PaLM exhibit more human-like patterns of personality under rigorous statistical assessment. They also demonstrate methods to precisely shape the levels of LLM-simulated personality traits using lexical cues, and show that the shaped traits persist in downstream LLM behaviors such as open-ended text generation. The authors discuss implications for responsible AI, human alignment, transparency, and application development.
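
To make the measurement setup concrete, below is a minimal sketch of how a single psychometric inventory item might be administered to an LLM as a constrained rating task. The persona text, item wording, five-point scale, and the `query` callable are illustrative assumptions, not the exact prompts, inventories, or API used by the authors.

```python
from typing import Callable

# Illustrative five-point Likert scale (the paper's instruments may differ).
LIKERT_OPTIONS = {
    1: "strongly disagree",
    2: "disagree",
    3: "neither agree nor disagree",
    4: "agree",
    5: "strongly agree",
}

def build_item_prompt(persona: str, item: str) -> str:
    """Frame one inventory item as a constrained rating task."""
    options = "\n".join(f"{k}. {v}" for k, v in LIKERT_OPTIONS.items())
    return (
        f"{persona}\n\n"
        "Rate how accurately the following statement describes you.\n"
        f'Statement: "{item}"\n'
        f"Options:\n{options}\n"
        "Answer with a single number from 1 to 5."
    )

def score_item(query: Callable[[str], str], persona: str, item: str,
               reverse_keyed: bool = False) -> int:
    """Send the item to the model and map its reply to a 1-5 score."""
    reply = query(build_item_prompt(persona, item))
    digits = [c for c in reply if c.isdigit()]
    raw = int(digits[0]) if digits else 3  # fall back to the scale midpoint
    return 6 - raw if reverse_keyed else raw

if __name__ == "__main__":
    # Stand-in for a real model call; replace with an actual LLM client.
    mock_llm = lambda prompt: "4"
    persona = "You are simulating a survey respondent who is outgoing and talkative."
    print(score_item(mock_llm, persona, "I am the life of the party."))  # -> 4
```

Item scores keyed to each Big Five trait would then typically be aggregated into per-trait scores, which is the level at which reliability and validity analyses like those in the paper operate.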

Evaluation: This paper provides a robust, psychometrically grounded methodology for characterizing the personality synthesized in LLM outputs. The statistical analyses quantify the reliability, dimensionality, and external validity of LLM-simulated personality, establishing sound techniques for measuring these emergent phenomena. The lexical prompting method, which shapes personality traits across multiple levels, also enables fine-grained control over LLM behavior.
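
As a rough illustration of the lexical shaping idea, the sketch below builds a persona prefix from trait adjectives combined with intensity qualifiers, targeting a chosen level on a simple 1-7 scale. The adjective lists, qualifier wording, and scale granularity are assumptions for illustration and do not reproduce the paper's exact prompt design.

```python
# Trait adjective pairs: (high-pole adjectives, low-pole adjectives).
TRAIT_ADJECTIVES = {
    "extraversion": (["outgoing", "talkative", "energetic"],
                     ["quiet", "reserved", "shy"]),
    "agreeableness": (["kind", "cooperative", "trustful"],
                      ["critical", "cold", "quarrelsome"]),
}

# Linguistic qualifiers keyed by distance from the scale midpoint (4 = neutral).
QUALIFIERS = {1: "a bit", 2: "very", 3: "extremely"}

def shaping_prefix(trait: str, level: int) -> str:
    """Build a persona prefix targeting `trait` at `level` on a 1-7 scale."""
    high, low = TRAIT_ADJECTIVES[trait]
    if level == 4:
        return f"For the following task, respond as neither {high[0]} nor {low[0]}."
    adjectives = high if level > 4 else low
    qualifier = QUALIFIERS[abs(level - 4)]
    qualified = ", ".join(f"{qualifier} {adj}" for adj in adjectives)
    return f"For the following task, respond as someone who is {qualified}."

if __name__ == "__main__":
    print(shaping_prefix("extraversion", 7))
    # -> "... extremely outgoing, extremely talkative, extremely energetic."
    print(shaping_prefix("agreeableness", 2))
    # -> "... very critical, very cold, very quarrelsome."
```

A prefix like this would be prepended to the survey or downstream generation prompt, which is how shaped trait levels can be checked for persistence in generated text.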

These methods have clear applications for developing safer, better-aligned LLMs. The personality profiling and shaping techniques can steer LLM outputs away from toxic traits, and they increase transparency about how LLMs may perceive users based on synthesized personality. For conversational agents and interactive LLMs, these methods open pathways to customizing personality and increasing user engagement. The demonstrated personality shaping is also useful for domain-specific LLMs that benefit from particular traits. Overall, this work provides a rigorous basis for responsible application development around emergent aspects of LLMs such as synthetic personality, and the techniques generalize to social constructs beyond personality.

Related: https://lemmy.intai.tech/post/125756