OpenAI has quietly revised one of its recently published policy documents, removing a reference to AI models being "politically unbiased by default."
The language appeared in the company's initial draft of its "economic blueprint" for the AI industry in the United States, which suggested that AI systems should aim for political neutrality.
The revised draft, made public on Monday, omits that wording, drawing attention to the evolving debate over political bias in AI.
Asked about the change, an OpenAI spokesperson said it was part of an effort to "streamline" the document and stressed that the company's other documentation, including the Model Spec published in May, continues to emphasize objectivity in its AI systems.
The Model Spec outlines how OpenAI intends its various models to behave.
The removal of the phrase underscores how sensitive and often contentious discussions around AI bias have become. The issue has been widely debated in recent months, particularly amid criticism from the political right accusing OpenAI of pushing a liberal agenda in its chatbot's responses.
Prominent figures such as Elon Musk and David Sacks, a well-known AI investor, have voiced concerns that platforms like OpenAI's ChatGPT may suppress conservative viewpoints.
Musk has been outspoken about the influence of Silicon Valley's "woke" culture, saying that AI models built in the region reflect the ideological leanings of their developers.
The subject of AI bias has taken on greater weight as both technology companies and their products face close scrutiny over potential political leanings. Musk, whose own AI company, xAI, has also contended with accusations of bias, has argued that AI models are shaped by the philosophies of the communities in which they are developed.
A study by UK-based researchers published last August suggested that OpenAI's ChatGPT exhibited a liberal bias, particularly on sensitive topics such as immigration, climate change, and same-sex marriage. In response, OpenAI has consistently maintained that any biases in its systems are "bugs, not features."
Despite this criticism, the OpenAI spokesperson reaffirmed that objectivity remains central to the company's philosophy. Still, the latest revision of its policy document shows how difficult it is for AI companies to navigate the minefield of political and ideological bias while trying to maintain users' trust in their products.