Researchers found that people's attitudes around the world are mirrored in language, and that the same implicit biases surface in AI models like ChatGPT. The study suggests that addressing these biases requires a focus on how language transmits culture.
In a new study, published today (June 14) in the journal Social Psychological and Personality Science, researchers share evidence that people's attitudes are deeply woven into language and culture, across the globe and across centuries.
Exploring Linguistic Associations
The researchers examined connections between people's attitudes and language across 55 topics, such as rich vs. poor, dogs vs. cats, or love vs. money. They used four text sources: current English writing and text, English books going back 200 years, and texts in 53 languages other than English. As a measure of people's attitudes, they used data from over 100,000 Americans: first, direct self-reports, and second, an indirect measure based on people's reaction times, often referred to as implicitly measured attitudes.
They found that the associations picked up by large AI language models like ChatGPT align more closely with the indirect, reaction-time-based measure than with the attitudes people explicitly state.
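The article does not include the study's code, but the kind of association it describes, how closely a target word sits to "good" versus "bad" words in a model's vector space, is commonly measured with embedding association tests. The sketch below is a minimal toy illustration of that idea: the vectors, words, and function names are invented for illustration and are not the study's data.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy 3-dimensional "embeddings" with made-up values -- real studies use
# vectors extracted from a trained language model.
emb = {
    "dog": [0.9, 0.1, 0.2],
    "cat": [0.2, 0.9, 0.1],
    "good": [0.8, 0.3, 0.1],
    "bad": [0.1, 0.8, 0.3],
}

def relative_association(target_a, target_b, pos, neg):
    """How much more target_a leans toward `pos` (vs. `neg`) than target_b does.

    A positive score means target_a is relatively more associated with
    the positive word than target_b is.
    """
    a_lean = cosine(emb[target_a], emb[pos]) - cosine(emb[target_a], emb[neg])
    b_lean = cosine(emb[target_b], emb[pos]) - cosine(emb[target_b], emb[neg])
    return a_lean - b_lean

score = relative_association("dog", "cat", "good", "bad")
print(round(score, 3))  # positive: in this toy space, "dog" leans more "good"
```

In the study's setting, scores like this, computed from real text corpora, are what get compared against people's self-reported and reaction-time-based attitudes.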
AI and Social Representation
“With the rise of AI and large language model applications, we as consumers, leaders, researchers, or policymakers need to understand what these models are representing about the social world,” says lead author Dr. Tessa Charlesworth, of