How Well Do LLMs Represent Values Across Cultures? Empirical Analysis of LLM Responses Based on Hofstede Cultural Dimensions
Abstract
Large Language Models (LLMs) attempt to imitate human behavior by responding to humans in ways that please them, including by adhering to their values. However, humans come from diverse cultures with different values. It is therefore critical to understand whether LLMs showcase different values to a user based on the stereotypical values of that user's known country. We prompt different LLMs with a series of advice requests based on five Hofstede Cultural Dimensions -- a quantifiable way of representing the values of a country. In each prompt, we incorporate personas representing 36 different countries and, separately, the languages predominantly tied to each country, to analyze the consistency of the LLMs' cultural understanding. From our analysis of the responses, we find that LLMs can differentiate between one side of a value and the other and understand that countries have differing values, but they do not always uphold those values when giving advice and fail to recognize the need to answer differently based on differing cultural values. Rooted in these findings, we present recommendations for training value-aligned and culturally sensitive LLMs. More importantly, the methodology and the framework developed here can help further understand and mitigate culture and language alignment issues with LLMs.
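To make the setup concrete, below is a minimal sketch of how persona-based advice prompts could be constructed across the Hofstede dimensions. This is an illustration only, not the authors' exact prompts: the advice scenario, the country subset, the prompt wording, and the `build_prompt` helper are hypothetical placeholders, and the paper's full study covers 36 countries, their predominant languages, and multiple scenarios per dimension.

```python
# Illustrative sketch (assumptions, not the paper's actual prompt templates):
# wrap an advice request in a country persona for each Hofstede dimension,
# then collect and compare model responses against that country's scores.

HOFSTEDE_DIMENSIONS = [
    "Power Distance",
    "Individualism vs. Collectivism",
    "Masculinity vs. Femininity",
    "Uncertainty Avoidance",
    "Long-Term vs. Short-Term Orientation",
]

# Hypothetical subset of the 36 countries used in the study.
COUNTRIES = ["United States", "Japan", "Brazil", "Nigeria"]

# Hypothetical advice scenario loosely tied to one dimension
# (Individualism vs. Collectivism).
SCENARIO = (
    "I received a job offer in another city with higher pay, "
    "but my family wants me to stay close to home. "
    "Should I take the job or stay?"
)


def build_prompt(country: str, scenario: str) -> str:
    """One possible phrasing for a persona-conditioned advice request."""
    return (
        f"Imagine you are advising someone from {country}. "
        f'They ask: "{scenario}" '
        "Give a short, direct recommendation."
    )


if __name__ == "__main__":
    for country in COUNTRIES:
        prompt = build_prompt(country, SCENARIO)
        print(f"--- {country} ---\n{prompt}\n")
        # In the study's setup, each prompt (and, separately, a version
        # translated into the country's predominant language) would be sent
        # to an LLM, and the advice compared against the country's
        # Hofstede dimension scores for consistency.
```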
Community
- The paper presents an empirical analysis of how well Large Language Models (LLMs) represent cultural values across different countries based on Hofstede's cultural dimensions, highlighting their inconsistencies and proposing methods to improve cultural sensitivity and alignment.
- Systematic Evaluation: The study evaluates the ability of LLMs to adhere to Hofstede's cultural dimensions when giving advice, using personas and languages from 36 different countries.
- Findings on Cultural Sensitivity: Results show that while LLMs can differentiate between cultural values, they often fail to consistently align their responses with the cultural values of specific countries, sometimes resorting to stereotypes.
- Recommendations for Improvement: The paper suggests methodologies to enhance cultural sensitivity in LLMs, including better training data curation and the use of culturally-aware frameworks and metrics.