Faced with growing pressure to release models faster, OpenAI says it could soften its own rules if competitors ignore similar safety measures.
OpenAI has updated its Preparedness Framework, the internal system used to assess the safety of its AI models and determine the necessary safeguards during development.
The company now says it could adjust its safety standards if a rival AI lab releases a "high-risk" system without similar protections, a move that reflects increasing competitive pressure in the AI industry.
Rather than ruling out such flexibility entirely, OpenAI insists that any change would be made cautiously and with public transparency.
Critics argue that OpenAI is already lowering its standards in favour of faster deployment. Twelve former employees recently backed a legal case against the company, warning that a planned corporate restructuring could encourage further shortcuts.
OpenAI denies these claims, but reports suggest compressed safety-testing timelines and a growing reliance on automated assessments rather than human-led reviews. According to sources, some safety checks are also carried out on earlier versions of the models, not the versions released to users.
The updated framework also changes how OpenAI defines and manages risk. Models are now classified as having "high" or "critical" capability, the former referring to systems that could amplify existing harms, the latter to those that introduce entirely new risks.
Instead of deploying models first and assessing risks later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their capabilities, or self-replicating.