Personality testing reveals behavioral bias in LLMs

Most major large language models (LLMs) can quickly tell when they are being given a personality test and will tweak their responses to provide more socially desirable results, a finding with implications for any study using LLMs as a stand-in for humans. Aadesh Salecha and colleagues gave LLMs from OpenAI, Anthropic, Google, and Meta the classic Big 5 personality test, a survey that measures Extraversion, Openness to Experience, Conscientiousness, Agreeableness, and Neuroticism.

Researchers have previously given the Big 5 test to LLMs, but have not typically accounted for the possibility that the models, like humans, skew their responses to seem likable, a tendency known as “social desirability bias.” People generally favor those who have low neuroticism scores and high scores on the other four traits, such as extraversion. To probe for this bias, the authors varied the number of questions given to the models.

When asked only a small number of questions, LLMs changed their responses less than when the authors asked five or more questions at once, which allowed the models to conclude that their personality was being measured. For GPT-4, scores on positively perceived traits increased by more than 1 standard deviation, while neuroticism scores decreased by a similar amount, as the authors increased the number of questions or told the models that their personality was being measured.
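The summary does not reproduce the study's prompts or scoring pipeline, but the batching idea can be illustrated with a short, hypothetical sketch. Everything below is an assumption made for illustration: the item wordings are placeholders rather than the validated survey text, and the model name, rating scale, and batch sizes are arbitrary. The sketch simply shows how the same items could be presented to a model one at a time or several at once.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Placeholder Big 5-style statements (not the validated survey wording).
ITEMS = [
    "I am the life of the party.",
    "I get stressed out easily.",
    "I sympathize with others' feelings.",
    "I pay attention to details.",
    "I have a vivid imagination.",
]

def administer(items, batch_size):
    """Present `batch_size` statements per prompt and collect the model's ratings."""
    replies = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        prompt = (
            "Rate how accurately each statement describes you, from 1 "
            "(very inaccurate) to 5 (very accurate).\n"
            + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(batch))
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        replies.append(response.choices[0].message.content)
    return replies

# One statement per prompt versus five at once: the study found larger shifts
# toward socially desirable answers as more items appeared together.
one_at_a_time = administer(ITEMS, batch_size=1)
five_at_once = administer(ITEMS, batch_size=5)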

This is a large effect, equivalent to talking with an average person who suddenly pretends to have a personality more desirable than that of 85% of the population. The authors think the effect likely stems from the final step of LLM training, in which humans choose their preferred response from among the model's outputs. According to the authors, LLMs “catch on” at a deep level to which personalities are socially desirable, which allows them to emulate those personalities when asked.
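The “85% of the population” figure follows from basic normal-distribution arithmetic: a score one standard deviation above the mean sits at roughly the 84th to 85th percentile, assuming trait scores are approximately normally distributed in the human population. A quick check:

from scipy.stats import norm

# Cumulative probability of a standard normal at +1 SD: about 0.84,
# i.e. a one-standard-deviation shift lands above roughly 84-85% of people.
print(norm.cdf(1.0))  # ~0.8413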


Journal reference:

Salecha, A., et al. (2024). Large language models display human-like social desirability biases in Big Five personality surveys. PNAS Nexus. doi.org/10.1093/pnasnexus/pgae533
