In the 1991 film “Terminator 2: Judgment Day,” a killer robot travels back in time to stop the rise of artificial intelligence. The robot locates the computer scientist whose work will lead to the creation of Skynet, a computer system that will destroy the world, and convinces him that the development of AI must be stopped immediately. Together, they go to the headquarters of Cyberdyne Systems, the company behind Skynet, and blow it up. The research on AI is destroyed, and the course of history is altered, at least for the rest of the movie. (There have been four more sequels.)
In the science-fiction world of “Terminator 2,” it’s clear what it means for an AI to become “self-aware,” or to pose a danger to humanity; it’s equally obvious what might be done to stop it. But in real life, the thousands of researchers who have spent their careers working on AI disagree about whether today’s systems think, or could become capable of it; they aren’t sure what kinds of regulations or scientific advances could allow the technology to flourish while preventing it from becoming dangerous. Because some people hold strong and unambiguous views on these subjects, it’s easy to get the impression that the AI community is divided neatly into factions, one worried about risk and the other eager to push forward. But most researchers are somewhere in the middle. They’re still mulling over the scientific and philosophical complexities; they want to proceed carefully, whatever that might mean.
OpenAI, the research organization behind ChatGPT, has long represented this middle-of-the-road position. It was founded in 2015, as a nonprofit, with major investments from Peter Thiel and Elon Musk, who were (and are) concerned about the risks posed by AI. OpenAI’s goal, as stated in its charter, was to develop artificial general intelligence, or AGI, in a manner that is “safe and beneficial” for humanity. Even as it tries to build “highly autonomous systems that outperform humans at most economically valuable work,” it plans to ensure that AI will not “harm humanity or unduly concentrate power.” These two goals may well be incompatible; building systems that can replace human workers has a natural tendency to concentrate power. Still, the organization has sought to honor its charter through a hybrid arrangement. In 2019, it split into two units, one for-profit and one nonprofit, with the for-profit part overseen by the nonprofit. At least in theory, OpenAI’s for-profit side would act like a startup, focused on accelerating and commercializing the technology; the nonprofit side would act as a watchdog, preventing the creation of Skynet while pursuing research that could answer important questions about AI safety. Profits and investments from commercialization would fund the nonprofit’s research.
The approach was unusual but productive. With the help of more than thirteen billion dollars in investment from Microsoft, OpenAI developed DALL-E, ChatGPT, and other industry-leading AI products, and began turning GPT, its series of powerful large language models, into the engine of a much broader software ecosystem. This year, it began to seem that OpenAI might consolidate a lead over Google, Facebook, and other tech companies building capable AI systems, even as its nonprofit side launched initiatives focused on reducing the technology’s risks. This centaur managed to keep galloping until last week, when OpenAI’s four-person board of directors, which was widely seen as sensitive to the risks of AI, fired the company’s CEO, Sam Altman, setting off a head-spinning chain of events. By way of explanation, the board alleged that Altman, who came to OpenAI after running the startup accelerator Y Combinator, had not been “consistently candid in his communications”; at an all-hands meeting after the firing, Ilya Sutskever, a board member and OpenAI’s chief scientist, reportedly said that the board had “done its duty.” Many interpreted this as signaling a disagreement over AI safety. But OpenAI’s employees were not convinced, and chaos ensued. More than seven hundred of them, nearly the entire company, signed a letter demanding the board’s resignation and Altman’s reinstatement; meanwhile, Altman and Greg Brockman, an OpenAI co-founder and board member, were offered positions leading an AI division at Microsoft. Employees who signed the letter threatened to follow them there; it seemed possible that OpenAI, the most exciting company in tech, recently valued at ninety billion dollars, might be toast.
Today’s AI systems often work by noticing similarities and drawing analogies. People think that way, too: in the days following Altman’s firing, observers compared it to Apple’s board ousting Steve Jobs, in 1985, or to “Game of Thrones.” When I prompted ChatGPT to suggest comparable stories, it named “Succession” and “Jurassic Park.” Of the latter, it wrote: “John Hammond pushes to open a dinosaur park quickly, ignoring experts’ warnings about the risks, paralleling Altman’s eagerness against the caution urged by others at OpenAI.” It’s not quite an accurate analogy: although Altman wants to see AI become widely used and extremely profitable, he has also spoken frequently about its dangers. In May, he told Congress that rogue AI could pose an existential risk to humanity. Hammond never told the park’s visitors that they stood a good chance of being eaten.
In truth, no one outside a small inner circle knows what really motivated Altman’s firing. Still, on X (formerly known as Twitter) and Substack, speculative takes have multiplied on an industrial scale. At first, many characterized the move as a coup by Sutskever. But that seemed less likely on Monday, when Sutskever tweeted his remorse: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” he wrote. And, in a surprising about-face, he signed the letter demanding Altman’s reinstatement.
Altman’s firing, then, wasn’t exactly a takeover; still, it could plausibly be read as the consequence of long-simmering tensions between “accelerationists” and “doomers” within OpenAI. With basic questions about AI safety still unanswered, did it make sense to launch a “GPT Store,” where developers would be able to sell GPTs that could take on “real tasks in the real world”? Some observers, such as the veteran tech journalist Eric Newcomer, raised the possibility that Altman’s dismissal was justified. “We shouldn’t let poor public messaging blind us to the fact that Altman lost the trust of the board that was supposed to legitimize OpenAI’s integrity,” Newcomer wrote. “Once you add in the possibility of existential risk from a super-powerful artificial intelligence . . . that only increases the potential risk of any breach of trust.” Perhaps the board felt cut off by Altman from updates on OpenAI’s rapid progress, or came to believe that he was dismissive of safety concerns, and decided to use the only truly powerful tool at its disposal, his dismissal, to put on the brakes.
On Monday, the board appointed Emmett Shear, the former CEO of the live-streaming site Twitch, as interim CEO. Shear has said that he wants to slow the pace of AI research considerably. (“If we’re at a speed of 10 right now . . . I think we should aim for a 1-2 instead,” he tweeted in September.) But just seventy-two hours later, on Wednesday, the company announced that Altman would return as CEO. “We have reached an agreement in principle for Sam to return to OpenAI,” the company posted on X. Only one member of the old board would remain: Adam D’Angelo, a co-founder of the question-and-answer site Quora. D’Angelo would be joined by the economist Larry Summers and by Bret Taylor, a veteran of Google, Facebook, Twitter, and Salesforce.