OpenAI may soon require organizations to complete an ID verification process to access certain future AI models, according to a support page published on the company’s website last week.
According to the page, the verification process, called Verified Organization, is a new way for developers to access the most advanced models and capabilities on the OpenAI platform. Verification requires a government-issued ID from one of the countries supported by the OpenAI API. An ID can only verify one organization every 90 days, and not all organizations are eligible for verification.
The page contains the following message: “At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely. Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
Abuse by North Korean parties
The new verification process appears intended to strengthen security around OpenAI’s products as they become more advanced and capable. The company has published several reports on its efforts to detect and prevent malicious use of its models, including by groups allegedly based in North Korea.
The process also appears aimed at preventing theft of intellectual property. According to a Bloomberg report published earlier this year, OpenAI investigated whether a group linked to the Chinese company DeepSeek had extracted large amounts of data through the API in late 2024, possibly to train its own models, which would violate OpenAI’s terms of use.
Services blocked in China
OpenAI had already blocked access to its services in China last summer. Chinese AI companies then moved quickly to win over users of OpenAI’s technology.