An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also raised concerns and criticism over a familiar problem with AI: racial and gender bias.
And while OpenAI, the company behind the program, called DALL·E, has sought to address the issues, the efforts have also come under scrutiny for what some technologists have claimed is a superficial way of fixing systemic problems underlying AI systems.
“It’s not just a technical problem. It’s a problem that involves the social sciences,” said Kai-Wei Chang, an associate professor at the UCLA Samueli School of Engineering who studies artificial intelligence. There will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect them, Chang said.
OpenAI released the second version of its DALL·E image generator in April to rave reviews. The program asks users to enter a series of related words – for example, “an astronaut playing basketball with cats in space in a minimalist style.” With an awareness of space and objects, DALL·E then creates four original images that are supposed to reflect the words, according to its website.
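As a rough illustration of the prompt-to-image workflow described above, here is a minimal sketch using the OpenAI Python SDK’s image endpoint. The model name, image size and credential handling are assumptions for illustration only; the research preview discussed in this article was accessed through a web interface, not necessarily this API.

```python
# Minimal sketch of submitting a text prompt and getting four images back.
# Model name, size and API-key setup are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="An astronaut playing basketball with cats in space in a minimalist style",
    n=4,                 # the program returns four candidate images per prompt
    size="1024x1024",
)

for image in response.data:
    print(image.url)  # each result is returned as a hosted image URL
```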
As with many AI programs, it did not take long for some users to start pointing out what they saw as signs of bias. OpenAI used the example caption “a builder,” which produced images featuring only men, while the caption “a flight attendant” produced only images of women. Anticipating such biases, OpenAI published a “Risks and Limitations” document with the program’s limited release, before the allegations of bias surfaced, noting that “DALL·E 2 inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”
DALL·E 2 is built on another piece of AI technology created by OpenAI called GPT-3, a natural language processing program that draws on hundreds of billions of examples of language from books, Wikipedia and the open internet to create a system that can approximate human writing.
Last week, OpenAI announced it was implementing new mitigation techniques that helped DALL·E generate more diverse images that better reflect the world’s population – and it said internal users were 12 times more likely to say the images included people from diverse backgrounds.
The same day, Max Woolf, a BuzzFeed data scientist who was one of a few thousand people granted access to test DALL·E, started a Twitter thread pointing out that the updated technology was less accurate than before at creating images based on written prompts.
Other Twitter users who have tested DALL·E 2 replied to Woolf’s thread to share the same problem – particularly with regard to race and gender bias. They suspect that OpenAI’s diversity fix was as simple as the AI appending gender- or race-identifying words to users’ written prompts without their knowledge, to produce artificially diverse sets of images.
“The way it’s rumored to work is that it adds ‘man’ or ‘woman,’ or ‘Black,’ ‘Asian’ or ‘white,’ at random to the prompts,” Woolf said in a phone interview.
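To make the suspicion concrete, here is a toy sketch of how that kind of prompt augmentation could be implemented. This is speculation made executable, not OpenAI’s actual code; the word lists, the keyword check and the probability are all assumptions.

```python
import random

# Illustrative sketch of the rumored mitigation: randomly appending an
# identity term to prompts. The lists and probability below are assumptions.
GENDER_TERMS = ["man", "woman"]
RACE_TERMS = ["Black", "Asian", "white"]

def augment_prompt(prompt: str, probability: float = 0.5) -> str:
    """Sometimes append a random identity term to a prompt that mentions a person."""
    if "person" in prompt.lower() and random.random() < probability:
        term = random.choice(GENDER_TERMS + RACE_TERMS)
        return f"{prompt}, {term}"
    return prompt

print(augment_prompt("a portrait of a person holding flowers"))
```

Such a change would diversify outputs without retraining the model, which is why testers argued it treats the symptom rather than the training data itself.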
OpenAI published a blog post last month about its attempt to fix biases by reweighting certain training data; it did not mention anything about adding gender or race descriptors to prompts.
“We believe it’s important to address bias and safety at all levels of the system, which is why we pursue a range of approaches,” an OpenAI spokesperson said in an email. “We are researching other ways to correct bias, including best practices for adjusting training data.”
Concerns about bias in AI systems have grown in recent years as examples in automated hiring, health care and algorithmic moderation have been found to discriminate against various groups. The issue has sparked discussions about government regulation. New York City passed a law in December that will prohibit the use of AI in screening job candidates unless the AI passes a “bias audit.”
A large part of the problem with AI bias stems from the data that teaches AI models how to make the right decisions and produce the desired outputs. The data that is scraped often has biases and stereotypes baked in because of societal bias or human error, such as photo datasets that depict men as executives and women as assistants.
AI companies, including OpenAI, then use data filters to keep graphic, explicit or otherwise unwanted results – in this case, images – from appearing. Once the training data has passed through the data filter, what OpenAI calls “bias amplification” can produce results that are even more skewed than the original training data.
That makes AI bias particularly difficult to correct after a model has been built.
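A toy example, shown below under assumed numbers, illustrates the effect: if a content filter removes examples from one group at a higher rate than another, the filtered dataset the model learns from becomes more lopsided than the data it started with. This is only a sketch of the general phenomenon, not OpenAI’s pipeline or its actual figures.

```python
# Toy illustration of "bias amplification": filtering training data at
# different rates across groups shifts the group balance the model sees.
# All numbers here are made up for demonstration.
from collections import Counter

# Hypothetical examples: (group label, whether a content filter keeps the example)
raw_data = ([("group_a", True)] * 60 + [("group_a", False)] * 10
            + [("group_b", True)] * 20 + [("group_b", False)] * 10)

def group_shares(rows):
    counts = Counter(group for group, _ in rows)
    total = sum(counts.values())
    return {group: round(count / total, 2) for group, count in counts.items()}

filtered_data = [row for row in raw_data if row[1]]  # keep only what the filter allows

print("before filter:", group_shares(raw_data))       # group_a: 0.7, group_b: 0.3
print("after filter: ", group_shares(filtered_data))  # group_a: 0.75, group_b: 0.25
```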
“The only way to really fix this is to retrain the entire model on unbiased data, and that would not happen in the short term,” Woolf said.
Chirag Shah, an associate professor at the University of Washington Information School, said that bias is a common problem in AI and that the fix OpenAI appeared to have found did not solve the underlying problems with its program.
“The common thread is that all of these systems are trying to learn from existing data,” Shah said. “They are superficially, on the surface, fixing the problem without solving the underlying problem.”
Jacob Metcalf, a researcher at Data & Society, a nonprofit research institute, said one step forward would be for companies to be open about how they build and train their AI systems.
“For me, the issue is transparency,” he said. “I think it’s great that DALL·E exists, but the only way these systems are going to be safe and fair is maximalist transparency about how they are governed.”