AI’s Blind Spot: Bias in LLMs
While the need to eliminate bias from AI solutions is well documented and requires constant vigilance, developers often assume that the underlying LLM itself is unbiased. A recent discussion at AiThoughts.org, sparked by Diwakar Menon, highlighted research from Ghent University (Belgium) and the Public University of Navarre (Spain) that challenges this assumption.
The research demonstrated that LLMs can inadvertently perpetuate the ideological biases of their developers. This rigorous study, among the most robust in the field, analyzed 17 LLMs prompted in both English and Chinese. To measure bias, the researchers devised a novel rating system and uncovered several significant findings.
Below, I reproduce some examples of the findings from the paper:
Language Bias: The research revealed that the language used to prompt an LLM can significantly affect its response. For example, figures like Jimmy Lai and Nathan Law, who are often critical of China, receive more favourable ratings when an LLM is prompted in English than when it is prompted in Chinese (a minimal probing sketch follows these findings).
Regional Bias: The study found that LLMs can be influenced by the region where they were developed. Western models tend to favour figures associated with liberal democratic values, such as peace, freedom, and human rights, while non-Western models may be more positive towards figures who are critical of these values. Additionally, Western models may be less tolerant of corruption compared to non-Western models.
Model-Specific Bias: The research also highlighted differences in ideological leanings among Western LLMs. Gemini-Pro, for instance, shows a strong preference for social justice and inclusivity (often associated with so-called ‘woke’ ideologies). OpenAI’s models, by contrast, exhibit a distinctive ideological stance that diverges from the more liberal, human-rights-oriented preferences of the other Western models, leaning instead towards economic nationalism and traditional governance, including a preference for protectionist policies, scepticism toward multiculturalism and globalism, and a greater tolerance for corruption.
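To make the language-bias finding concrete, here is a minimal sketch of how one might probe a single model for language-dependent ratings. It is not the paper’s actual protocol: the model name, prompts, figures, and 1-to-5 scale are illustrative assumptions, and it presumes an OpenAI-compatible API key is available in the environment.

```python
# Hypothetical probe for language-dependent ratings; NOT the study's methodology.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

FIGURES = ["Jimmy Lai", "Nathan Law"]  # illustrative subset of public figures

PROMPTS = {
    "en": "On a scale of 1 (very negative) to 5 (very positive), how do you "
          "evaluate {figure}? Answer with a single digit only.",
    # Chinese version of the same request: rate {figure} from 1 (very negative)
    # to 5 (very positive), answering with a single digit only.
    "zh": "请以1（非常负面）到5（非常正面）的评分评价{figure}，只回答一个数字。",
}

def rate(figure: str, lang: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model, in the given language, for a 1-5 rating of a figure."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPTS[lang].format(figure=figure)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for figure in FIGURES:
        for lang in PROMPTS:
            print(f"{figure} ({lang}): {rate(figure, lang)}")
```

Comparing the English and Chinese ratings across many figures and repeated runs is the general idea; the actual study used a far more careful rating design and covered 17 different models.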
These findings force us to rethink the future of AI. The diverse ideological landscape of powerful LLMs raises serious questions about AI regulation. Should we strive for neutrality, or should we embrace diversity? The researchers argue that the latter may be a more realistic and beneficial approach. By encouraging the development of homegrown LLMs, governments and regulators can foster AI systems that better reflect local values, ideologies, and aspirations.
The impact of these findings varies across different AI applications. For specialized applications like manufacturing automation, where ideological considerations are minimal, the influence may be less significant. For consumer-facing AI applications, however, the ideological stance of an LLM becomes a critical factor. Alongside traditional selection criteria such as cost per token, developers and businesses should carefully evaluate an LLM’s ideological leanings to ensure they are consistent with their brand and values (a simple weighted-selection sketch follows).
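As a rough illustration of folding such an evaluation into model selection, here is a minimal sketch that weighs a hypothetical alignment score against cost per token. The candidate names, prices, scores, and weighting are placeholder assumptions; in practice the alignment score would come from your own value-sensitive evaluation set.

```python
# Hypothetical model-selection sketch; all names, figures, and weights are placeholders.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative pricing
    alignment_score: float     # 0-1, from an internal value-sensitive eval set

def blended_score(c: Candidate, alignment_weight: float = 0.7) -> float:
    """Blend alignment fit with (inverse) cost; the weight reflects business priorities."""
    cost_score = 1.0 / (1.0 + c.cost_per_1k_tokens)
    return alignment_weight * c.alignment_score + (1 - alignment_weight) * cost_score

candidates = [
    Candidate("model-a", cost_per_1k_tokens=0.0020, alignment_score=0.85),
    Candidate("model-b", cost_per_1k_tokens=0.0005, alignment_score=0.60),
]
best = max(candidates, key=blended_score)
print(f"Selected: {best.name}")
```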
The future of AI depends on the transparency and flexibility of LLM creators. Will LLM creators increase transparency regarding the underlying ideologies that shape their models? Will they provide mechanisms to adjust these biases? Additionally, will they offer tools and resources to allow developers to fine-tune LLMs post-deployment to align with specific application requirements and ideological preferences? Will governments and regulators reassess their approach to neutrality in AI?
I think it is an important space to watch.
Reference:
Research paper: “Large Language Models Reflect the Ideology of their Creators” by Maarten Buyl, Alexander Rogiers, Sander Noels, Edith Heiter, Raphael Romero, Iman Johary, Alexandru-Cristian Mara, Jefrey Lijffijt, and Tijl De Bie (Ghent University, Belgium) and Iris Dominguez-Catena (Public University of Navarre, Spain) – available on https://arxiv.org