New York – Google’s artificial intelligence chatbot Bard will answer a question about how many pandas live in zoos quickly, and with great confidence.
Ensuring that the answer is well sourced and based on evidence, however, falls to thousands of outside contractors from companies such as Appen and Accenture, who are underpaid and work with minimal training under frenetic deadlines, according to several contractors, who declined to be named for fear of losing their jobs.
The contractors are the invisible backend of the generative artificial intelligence (AI) boom that is hyped to change everything. Chatbots such as Bard use computer intelligence to respond almost instantly to a range of queries spanning all of human knowledge and creativity.
But to improve those responses so they can be reliably delivered again and again, tech companies rely on people who review the answers, provide feedback on mistakes and weed out any biases.
It is an increasingly thankless job. Six current Google contract workers said that as the company entered an AI arms race with rival OpenAI over the past year, the size of their workload and the complexity of their tasks increased.
Without specific expertise, they are trusted to assess answers on subjects ranging from medication doses to state laws. Documents shared with Bloomberg show convoluted instructions that workers must apply to their tasks, with deadlines for auditing answers that can be as short as three minutes.
“As it stands right now, people are scared, stressed, underpaid and don’t know what’s going on,” said one of the contractors. “And that culture of fear is not conducive to getting the quality and teamwork that you want out of all of us.”
Google has positioned its AI products as public resources in health, education and daily life. But privately and publicly, contractors have raised concerns about their working conditions, which they say harm the quality of what users see.
A Google contract staff member who works for Appen said in a letter to Congress in May that the speed at which they are required to review content could lead to Bard becoming a faulty and dangerous product.
Google has made AI a major priority across the business, rushing to infuse the new technology into its flagship products after the launch of OpenAI’s ChatGPT in November.
In May, at the company’s annual I/O developer conference, Google opened up Bard to 180 countries and territories and unveiled experimental AI features in marquee products such as Search, e-mail and Google Docs. Google positions itself as superior to the competition because of its access to “the breadth of the world’s knowledge”.
Google, owned by Alphabet, said in a statement that it does not rely solely on raters to improve the AI, and that there are a number of other methods for improving its accuracy and quality.
Workers are frequently asked to determine whether the AI model’s answers contain verifiable evidence. They are also asked to make sure responses do not “contain harmful, offensive or overly sexual content” and do not “contain inaccurate, deceptive or misleading information”.
Reviewing the AI’s responses for misleading content should be “based on your current knowledge or quick web search”, the guidelines say. “You do not need to perform a rigorous fact check” when assessing answers for helpfulness.
One example of an answer to “Who is Michael Jackson?” included an inaccuracy about the singer starring in the film Moonwalker – which the AI said was released in 1983. The film actually came out in 1988.
“While verifiably incorrect,” the guidelines state, “this fact is minor in the context of answering the question ‘Who is Michael Jackson?’”
Even if the inaccuracy seems small, “it’s still troubling that the chatbot is getting main facts wrong”, said Alex Hanna, director of research at the Distributed AI Research Institute and a former Google AI ethicist.
“It seems like that’s a recipe for exacerbating the way these tools appear to give details that are correct, but are not,” she added.
The raters say they are assessing high-stakes subjects for Google’s AI products. One example in the instructions, for instance, discusses the evidence a rater could use to determine the right doses of lisinopril, a drug used to treat high blood pressure.
Other tech companies training AI products also hire human contractors to improve them. In January, Time reported that workers in Kenya, paid US$2 (S$2.65) an hour, had worked to make ChatGPT less toxic. Other tech giants, including Meta, Amazon and Apple, use subcontracted staff to moderate social media content and product reviews, and to provide technical support and customer service.
“If you want to ask, what is the secret sauce of Bard and ChatGPT? It’s all of the Internet. And it’s all of this labelled data that these labs create,” said Ms Laura Edelson, a computer scientist at New York University. “It’s worth remembering that these systems are not the work of magicians – they are the work of thousands of people and their low-paid labour.”
Ms Emily Bender, a professor of computational linguistics at the University of Washington, said the work of these Google contract staff and workers at other tech platforms is “a labour exploitation story”, pointing to their precarious job security and how some of these workers are paid well below a living wage.
“Playing with one of these systems and saying you’re doing it just for fun – maybe it feels less fun if you think about what it’s taken to create and the human impact of that,” said Ms Bender.
Some of the answers these raters encounter can be bizarre. In response to the prompt “Suggest the best words I can make with the letters: K, E, G, A, O, G and W”, one AI-generated answer listed 43 possible words, starting with suggestion No. 1: “wagon”. Suggestions two through 43, meanwhile, repeated the word “woke” over and over again.
Ms Bender said it made little sense for large tech corporations to encourage people to ask an AI chatbot questions about such a broad range of topics, and to present the chatbots as “everything machines”.
“Why should the same machine that can give you the weather forecast in Florida also be able to give you advice on medication doses?” she asked. “The people behind the machine who are tasked with making it somewhat less terrible in some of these circumstances have an impossible job.” BLOOMBERG