Just a few days after the release of the full version of OpenAI's o1 model, a company employee now claims that the company has achieved artificial general intelligence (AGI).
"In my opinion," OpenAI employee Vahid Kazemi wrote in a post on X-formerly-Twitter, "we have already achieved AGI and it is even clearer with O1."
If you were expecting a pretty massive caveat, you weren't wrong.
"We haven't been able to do 'better than any human at any task,'" he continued, "but what we have is 'better than most humans at most tasks.'"
Critics will note that Kazemi is leaning on a practical but unconventional definition of AGI. He isn't saying that the company's AI is better than a skilled or expert human at any given task, but rather that it can perform such a wide variety of tasks, even if the results are sometimes dubious, that no single human can compete at that scale.
A member of the firm’s technical team, Kazemi then reflected on the nature of LLMs and whether or not they “just follow a recipe.”
"Some say that LLMs only know how to follow a recipe," he wrote. "First, no one can really explain what a trillion-parameter deep neural network can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize, and verify."
While this seems somewhat defensive, it also goes to the heart of OpenAI's public vision: that simply pouring more and more data and processing power into existing machine learning systems will ultimately lead to human-level intelligence.
"Good scientists can produce better hypotheses based on their intuition, but that intuition itself was built by a lot of trial and error," Kazemi continued. "There is nothing that cannot be learned through examples."
Notably, this missive was written just after news broke that OpenAI had removed the "AGI" clause from the terms of its agreement with Microsoft, so the commercial implications of the assertion are unclear.
But one thing is certain: we have yet to see AI that can seriously compete with a human worker in the job market in any general way. If that happens, the Kazemis of the world will have our attention.
Learn more about AGI: AI safety researcher leaves OpenAI, saying its trajectory alarms him