Ross Pomeroy
Large language models (LLMs) are increasingly integrated into daily life as chatbots, digital assistants, and search aids. These artificial intelligence (AI) systems consume vast amounts of text to learn associations, can generate a wide range of written material when prompted, and can hold intelligent conversations with users. As LLMs grow in power and ubiquity, so does their influence on society and culture.
It is therefore important for these AI systems to remain neutral on contested political questions. Unfortunately, that doesn't appear to be the case, according to a new analysis recently published in PLoS ONE.
Artificial intelligence researcher David Rozado of Otago Polytechnic and Heterodox Academy administered 11 different political orientation tests to 24 leading LLMs, including OpenAI's GPT-3.5 and GPT-4, Google's Gemini, Anthropic's Claude, and xAI's Grok. He found that they consistently leaned slightly to the political left.
“The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy,” Rozado commented.
This raises a key question: Why do LLMs generally gravitate toward left-leaning political views? Are the models' creators fine-tuning their AI systems in that direction, or is the bias inherent in the massive datasets on which they are trained? Rozado couldn't conclusively answer that question.
“The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs,” he cautioned. “If political biases are being introduced after pre-training, the consistent political leanings we observed in our analysis of conversational LLMs may be an unintended byproduct of annotators' instructions or of dominant cultural norms and behaviors.”
Rozado wrote that ensuring LLM neutrality should be a top priority.
“LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society,” he wrote. “Therefore, it is crucial to critically examine and address potential political biases in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”
Source: Rozado D (2024) The political preferences of LLMs. PLOS ONE 19(7): e0306621. https://doi.org/10.1371/journal.pone.0306621
This article was originally published by RealClearScience and provided via RealClearWire.