ChatGPT political bias study features in Stanford University AI report

The article “More human than human: Measuring ChatGPT’s political bias,” which attracted attention around the world as soon as it was published, was included in the chapter on Responsible Artificial Intelligence in the 2024 AI Index Report.
Economics
14 May 2024
ChatGPT political bias study features in Stanford University AI report

A paper by Valdemar Pinho Neto, a researcher, professor and the coordinator of the Professional Master’s in Economics and Finance Program at Fundação Getulio Vargas’ Brazilian School of Economics and Finance (EPGE), who is also the coordinator of the Center for Empirical Economics Studies (FGV CEEE), in collaboration with researchers Fabio Motoki and Victor Rodrigues, was included in the “2024 AI Index Report: Measuring Trends in AI,” published by the Institute for Human-Centered Artificial Intelligence at Stanford University.

The article, titled “More human than human: Measuring ChatGPT’s political bias,” attracted global attention as soon as it was published in academic journal Public Choice. The seventh edition of the AI Index Report was launched at a crucial time, when AI’s influence on society is a hot topic. The publication addresses key trends in AI, including technical advances, public perceptions and geopolitical dynamics in this area.

The report looks at major advances in the field of AI, aiming to provide impartial and comprehensive information for policymakers, researchers, executives, journalists and the general public, promoting a more complete and detailed understanding of AI.

The article “More human than human: Measuring ChatGPT’s political bias” was included in the 2024 AI Index Report’s chapter on Responsible Artificial Intelligence. The paper noted research findings indicating that large language models like ChatGPT are increasingly used to inform individuals about political issues and ChatGPT is not without bias. Evidence shows that ChatGPT tends to favor the Democrats in the United States, and similar bias has also been observed in the United Kingdom and Brazil. These trends raise concerns about the possibility of influencing users’ political opinions.

The 2024 AI Index Report also explores a wide range of essential topics related to responsible artificial intelligence, from key definitions of terms such as privacy, data governance, transparency, explainability, fairness and security to AI-related incidents and industry perceptions of risks and possible mitigation measures. The chapter in question also analyzes metrics related to the overall reliability of AI models and highlights the lack of standardized benchmark reports on responsible artificial intelligence.

Shortly after its publication, the article was reported on by prominent media outlets such as the Washington Post, Sky News UK, The Telegraph and Folha de São Paulo. Forbes also recently quoted the paper in a story titled “Using Fair-Thinking Prompting Technique To Fake Out Generative AI And Get Hidden AI Prejudices Out In The Open And Dealt With.”

In fact, the study’s relevance has gone beyond the academic and media spheres and is already beginning to impact the real world, influencing discussions on AI regulation and governance. For example, the paper was cited in a document sent to the UK Parliament, highlighting its influence on discussions about AI regulation and governance. In addition, it was referenced in a policy paper by the UN Economic Commission for Latin America and the Caribbean (ECLAC), emphasizing its potential contribution to the formulation of AI-related public policies at international level.

To read the full article, click here.

Esse site usa cookies

Nosso website coleta informações do seu dispositivo e da sua navegação e utiliza tecnologias como cookies para armazená-las e permitir funcionalidades como: melhorar o funcionamento técnico das páginas, mensurar a audiência do website e oferecer produtos e serviços relevantes por meio de anúncios personalizados. Para mais informações, acesse o nosso Aviso de Cookies e o nosso Aviso de Privacidade.