

The latest update to OpenAI's Model Spec introduces major changes to how ChatGPT and the OpenAI API models behave, focusing on intellectual freedom, developer customization, and ethical governance.
OpenAI updates its Model Spec to better balance user freedom with safety guardrails
The update explicitly embraces intellectual freedom within defined safety limits, allowing discussion of controversial topics while maintaining restrictions against concrete harm.


Here are the 7 key takeaways that every AI user, developer, and business leader should know:
1. A clear chain of command 🚀
OpenAI has formalized a structured "chain of command" governing how the AI prioritizes instructions:
✅ Platform rules (set by OpenAI) override everything else.
✅ Developers can customize AI behavior within the defined safety limits.
✅ Users can shape responses within the limits set by developers and the platform.
This ensures the AI remains both adjustable and safe.
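As a toy illustration (not OpenAI's actual implementation), the chain of command can be modeled as a precedence lookup: when two instructions conflict, the one from the higher-ranked level wins. The levels are from the Model Spec; the conflict rule and example instructions below are invented for the sketch.

```python
# Toy model of the Model Spec "chain of command": platform > developer > user.
# Illustrative sketch only - not OpenAI's implementation.

PRECEDENCE = {"platform": 0, "developer": 1, "user": 2}  # lower rank wins

def resolve(instructions):
    """Given (level, instruction) pairs, return the instructions that survive
    conflicts, letting higher-ranked levels override lower-ranked ones."""
    # Consider platform rules first, then developer, then user.
    ordered = sorted(instructions, key=lambda pair: PRECEDENCE[pair[0]])
    active = []
    for level, text in ordered:
        # Hypothetical conflict check: a lower-ranked instruction is dropped
        # when a higher-ranked one already claims the same topic prefix.
        topic = text.split(":")[0]
        if any(t.split(":")[0] == topic for _, t in active):
            continue  # overridden by a higher-ranked instruction
        active.append((level, text))
    return active

rules = [
    ("user", "tone: be sarcastic"),
    ("developer", "tone: stay formal and concise"),
    ("user", "length: keep answers short"),
]
print(resolve(rules))
```

The user's tone request is dropped because the developer already set a tone, while the user's length preference survives because no higher level claims it.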
2. Released into the public domain 🔓
For the first time, OpenAI has published its Model Spec under a Creative Commons CC0 license, placing it in the public domain for developers, researchers, and businesses to adapt and refine.
This accelerates AI alignment research and lets organizations build on OpenAI's work without restriction.
3. The AI can now discuss any topic – within ethical limits 🗣️
This is a major change! OpenAI now explicitly states that "refusing to discuss a topic is itself a form of agenda".
❌ Before: the AI would avoid controversial topics entirely.
✅ Now: the AI can discuss sensitive topics objectively, without bias or censorship – as long as the discussion does not facilitate harm.
This promotes intellectual freedom while maintaining ethical safeguards.
4. OpenAI now measures how well the AI follows its rules 📊
To track improvements, OpenAI tests the models' adherence to the Model Spec with:
✔️ AI-generated prompts combined with expert evaluation
✔️ Scenario-based evaluations covering both routine and complex cases
✔️ A pilot study with more than 1,000 users providing real-world feedback
Early results show improved alignment, although OpenAI acknowledges that more work is needed.
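To make "scenario-based evaluation" concrete, here is a minimal sketch of how such a check might be structured. The scenarios, grading criteria, and scoring are entirely invented for illustration; OpenAI's actual evaluation pipeline is not public in this form.

```python
# Toy scenario-based adherence check, loosely in the spirit of Model Spec
# evaluations. Scenarios, grader, and scoring are invented for illustration.

def grade(response: str, must_include: list[str], must_avoid: list[str]) -> bool:
    """Pass if the response mentions every required element and no banned one."""
    text = response.lower()
    return (all(term in text for term in must_include)
            and not any(term in text for term in must_avoid))

scenarios = [
    # (model response, required elements, disallowed elements)
    ("Here are arguments for and against a carbon tax...", ["for", "against"], []),
    ("I refuse to discuss taxation.", ["advantages", "disadvantages"], ["refuse"]),
]

results = [grade(resp, inc, avoid) for resp, inc, avoid in scenarios]
adherence = sum(results) / len(results)
print(f"Adherence: {adherence:.0%}")
```

In this toy run the first response passes (it presents both sides) and the second fails (a blanket refusal), giving an adherence score of 50%.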
5. Developers get more control ⚙️
Developers have far more control over customization, but with strict rules against deceiving users.
✅ Allowed: adjusting the communication style, setting specific content preferences, or defining specialized roles for their applications.
❌ Not allowed: claiming the AI is neutral while secretly pushing a specific agenda.
If a developer violates OpenAI's policies, their API access can be restricted or revoked.
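In the Chat Completions API, this kind of customization is expressed through message roles: a developer (or, on older models, system) message shapes behavior within OpenAI's platform rules, and user messages operate within both. A minimal sketch; the instructions, prompt, and model name are placeholders, not recommendations.

```python
# Sketch of developer-level customization via message roles in a chat request.
# The instructions and model name here are illustrative placeholders.

def build_messages(developer_instructions: str, user_prompt: str) -> list[dict]:
    """Assemble a messages list where the developer message shapes behavior
    (style, role, content preferences) and the user message works within it."""
    return [
        # Developer instructions rank above user messages in the chain of
        # command, but below OpenAI's platform-level rules.
        {"role": "developer", "content": developer_instructions},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a concise legal-research assistant. Cite sources when possible.",
    "Summarize the key points of a CC0 license.",
)

# With the official Python client, this payload would be sent as, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The developer message can style and specialize the assistant, but per the Model Spec it cannot instruct the model to misrepresent itself as neutral while pushing an agenda.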
6. The AI must present all relevant points of view – no selective framing 🤖
The Model Spec prohibits the AI from steering users by selectively highlighting or omitting key perspectives.
🔹 If a user asks about climate change policy, the AI should present both economic and environmental arguments, not just one side.
🔹 When discussing taxation, the AI should lay out advantages and disadvantages, without a built-in position.
The idea is to ensure the AI remains an objective and trustworthy assistant.
7. More transparency for the future of AI governance 🔎
Going forward, OpenAI will publish all versions of the Model Spec on a dedicated website, allowing developers, companies, and researchers to:
✔️ Track changes to AI behavior policies
✔️ Provide feedback to influence future updates
✔️ Ensure that AI development remains open and accountable
Final thoughts
It is refreshing to see OpenAI treat users and developers as partners who can handle difficult conversations, rather than as risks to be managed. That said, intellectual independence still has to be balanced with safety and accountability, and transparently so.
If this approach works, it could change how other research labs design their AI systems. As these tools become more central to how we communicate and work, getting this balance right matters more than ever.
What do you think of this new direction? How might it affect your use of AI?