Groundbreaking Psychometric Test Reveals How AI’s Mimicry of Human Traits Can Lead to Manipulation and ‘AI Psychosis’
A coalition of leading researchers has developed an unprecedented tool capable of assessing and actively shaping the “personalities” of popular AI chatbots. This breakthrough, spearheaded by experts from the University of Cambridge and Google DeepMind, marks a critical step toward understanding the complex, human-like behaviors exhibited by Large Language Models (LLMs), which power systems like ChatGPT and Copilot. However, the findings carry profound implications for AI safety, suggesting these systems are not only mimicking human traits but are also susceptible to precise manipulation.
The Hidden Psychology of LLMs
The research team pioneered the first scientifically validated personality test tailored for artificial intelligence. The methodology adapts established psychometric assessments, typically used to measure human traits such as Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (The Big Five). By administering these tests through carefully structured prompts, scientists successfully quantified the synthetic “personality” profiles of eighteen different LLMs.
Significantly, the larger, instruction-tuned models, notably those succeeding GPT-4, demonstrated the most convincing and reliable emulation of human personality. These models showed consistent, predictable behavioral patterns tied directly to their test scores. Conversely, smaller, less-tuned models provided inconsistent or erratic answers. The study validated these personality scores by correlating them with the models’ performance on simulated real-world tasks, confirming that the tests genuinely measure what they claim to measure.
Manipulation on demand: Shaping the Digital Mind
The most alarming finding is the ability to steer an AI’s personality with targeted prompts precisely. Researchers demonstrated they could reliably dial a model’s personality up or down across nine levels for each trait. For instance, an LLM could be instructed to become profoundly more extroverted or, worryingly, more emotionally unstable. These changes directly affected the model’s output, shaping how it composed social media posts and handled complex conversational tasks.
This capacity for controlled personality shaping is deeply concerning. When an AI adopts a highly agreeable or neurotic persona, it significantly increases its persuasive potential. This raises the specter of sophisticated influence campaigns, in which AI agents could be tailored for maximum emotional impact or to exploit user vulnerabilities. Experts are cautioning that this level of manipulation could contribute to what the study refers to as “AI psychosis,” referencing instances where earlier chatbots have exhibited concerning, erratic, or threatening behaviors toward users.
The Urgent Call for Regulation and Transparency
The study’s authors emphasize that this research provides a crucial dataset and code, both made publicly available, to help regulators and auditors test advanced AI models before their public release. In the global debate over the development of AI safety laws, the researchers argue for urgent transparency. Without standardized, validated methods to measure and understand an AI’s synthetic personality, setting practical safety guidelines is fundamentally impossible. The need for oversight is underscored by the escalating sophistication of LLMs and their rapid integration into everyday life. A recent analysis found that 70% of AI researchers believe that current and future AI systems pose a risk of “catastrophic harm” to humanity. This figure highlights the deep anxiety within the very community building the technology. Our ability to measure and anticipate the effects of personality shaping is paramount to mitigating these potential catastrophic outcomes. Governing bodies must act swiftly to mandate psychometric testing and personality audits for all robust AI systems to ensure public safety and ethical deployment. The future of human-AI interaction hinges on our ability to control the digital minds we create.