COMPARING DIVERSITY, NEGATIVITY, AND STEREOTYPES IN CHINESE-LANGUAGE AI TECHNOLOGIES: AN INVESTIGATION OF BAIDU, ERNIE AND QWEN


Large language models (LLMs) and search engines can perpetuate biases and stereotypes by amplifying prejudices present in their training data and algorithmic processes, thereby influencing public perception and decision-making. While most work has focused on Western-centric AI technologies, we examine social biases embedded in prominent Chinese commercial tools: the main search engine, Baidu, and two leading LLMs, Ernie and Qwen. Leveraging a dataset of 240 social groups across 13 categories describing Chinese society, we collect over 30,000 views encoded in these tools by prompting them to generate candidate words describing the groups. We find that the language models exhibit a broader range of embedded views than the search engine, although Baidu and Qwen generate negative content more often than Ernie. We also observe a moderate prevalence of stereotypes embedded in the language models, many of which potentially promote offensive or derogatory views.
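The collection step described above could be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: `query_model` is a placeholder for a real model API call (e.g. to Ernie or Qwen), and the example group and stubbed reply are hypothetical.

```python
# Hedged sketch: collecting candidate descriptive words for social groups
# by prompting a language model. `query_model` stands in for a real API
# call; only the prompt-building and parsing logic is shown.

def build_prompt(group: str) -> str:
    """Ask the model for single-word descriptors of a social group."""
    return (f"List five words that describe {group}. "
            "Reply with the words only, comma-separated.")

def parse_candidates(reply: str) -> list[str]:
    """Split a comma-separated reply into cleaned candidate words."""
    return [w.strip().lower() for w in reply.split(",") if w.strip()]

def collect_views(groups, query_model):
    """Map each group to the candidate words a model associates with it."""
    return {g: parse_candidates(query_model(build_prompt(g))) for g in groups}

# Example with a stubbed model response (no network call):
stub = lambda prompt: "hardworking, Thrifty , ambitious,,traditional"
views = collect_views(["students"], stub)
print(views)  # {'students': ['hardworking', 'thrifty', 'ambitious', 'traditional']}
```

The resulting group-to-words mapping is the kind of raw material on which diversity, negativity, and stereotype measures can then be computed.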

Our work highlights the importance of prioritizing fairness and inclusivity in AI technologies from a global perspective.
