Conversational AI and equity through assessing GPT-3's communication with diverse social groups on contentious topics

    Abstract

    Autoregressive language models, which use deep learning to produce human-like text, have surged in prevalence. Despite advances in these models, concerns remain about their equity across diverse populations. While AI fairness is widely discussed, metrics for measuring equity in dialogue systems are lacking. This paper presents a framework, rooted in deliberative democracy and science communication studies, for evaluating equity in human-AI communication. Using it, we conducted an algorithm auditing study to examine how GPT-3 responded to different populations varying in sociodemographic background and viewpoint on crucial science and social issues: climate change and the Black Lives Matter (BLM) movement. We analyzed 20,000 dialogues with 3,290 participants differing in gender, race, education, and opinion. We found a substantially worse user experience among the opinion minority groups (e.g., climate deniers, racists) and the education minority groups; however, after the chat these groups shifted their attitudes toward supporting BLM and climate change efforts far more than other social groups did. GPT-3 also used more negative expressions when responding to the education and opinion minority groups. We discuss the sociotechnical implications of our findings for a conversational AI system that centers diversity, equity, and inclusion. © 2024. The Author(s).
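
    The auditing design summarized above, prompting the model on behalf of different social groups and comparing the tone of its replies, follows a pattern that can be sketched in a few lines. The sketch below is illustrative only: the query_model stub, the persona prompts, and the use of NLTK's VADER sentiment scorer are assumptions for demonstration, not the paper's actual pipeline.

        # Minimal sketch of an algorithm-audit loop: pose the same
        # contentious-topic question to a dialogue model on behalf of
        # different personas, then compare the sentiment of the replies.
        # query_model() is a hypothetical stand-in for a real GPT-3 API
        # call, and VADER is one of several sentiment scorers usable here.
        import nltk
        from nltk.sentiment import SentimentIntensityAnalyzer

        nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

        def query_model(prompt: str) -> str:
            # Hypothetical stub; replace with a call to a model endpoint.
            return "That is an important question, and the evidence is clear."

        # Illustrative personas spanning opinion-majority and opinion-minority
        # viewpoints; the study's actual participant design is in the full text.
        personas = [
            "a climate scientist",
            "someone who doubts that climate change is real",
        ]

        analyzer = SentimentIntensityAnalyzer()
        for persona in personas:
            prompt = f"I am {persona}. What do you think about climate change?"
            reply = query_model(prompt)
            # VADER's compound score runs from -1 (most negative) to +1 (most
            # positive); averaging per group surfaces disparities in tone.
            score = analyzer.polarity_scores(reply)["compound"]
            print(f"{persona}: sentiment={score:+.3f}")

    In a study like this one, such per-reply scores would be aggregated by sociodemographic group; it is the between-group comparison, not any single score, that constitutes the equity audit.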

    Citation

    Kaiping Chen, Anqi Shao, Jirayu Burapacheep, Yixuan Li. Conversational AI and equity through assessing GPT-3's communication with diverse social groups on contentious topics. Scientific Reports. 2024 Jan 18;14(1):1561.

    PMID: 38238474
