OpenAI has once again delayed the release of its highly anticipated open-weight AI model, citing the need for further safety evaluations and internal testing. This marks the second time the company has pushed back the launch in recent weeks, leaving developers, researchers, and industry observers frustrated and uncertain about the company’s roadmap.
Originally expected to be released in June, the model was first postponed to an unspecified date later in the summer. Now, OpenAI says there is no concrete timeline, stating only that the model will be released “when it meets our highest standards for safety and responsibility.”
The open-weight model, which would let developers freely download its weights and run it on their own infrastructure, is widely believed to be one of the most capable language models OpenAI has produced outside its proprietary GPT-4 and GPT-4o offerings. It has been described internally as roughly comparable to the o3 series in reasoning and code generation tasks. Its release was expected to significantly shift the balance in the open-source AI ecosystem, where competitors like Meta and Mistral have already gained ground.

However, OpenAI now says that despite being “technically ready,” the model poses unresolved concerns around safety, misuse potential, and long-term risk. Leadership pointed to the irreversible nature of releasing model weights: once out in the world, they cannot be recalled. This raises the stakes around issues like disinformation, automated spam, impersonation, and unauthorized fine-tuning for harmful applications.
“We’re proud of the model’s capabilities,” said a senior researcher familiar with the project. “But we’re not just shipping software—we’re putting a powerful tool into the wild. We need to be absolutely sure we understand the implications.”
The decision has sparked mixed reactions across the AI community. Advocates for responsible AI development have applauded the move, saying that OpenAI’s cautious approach is appropriate given the potential for abuse. Critics, however, argue that the company’s delays reflect a lack of transparency and undermine the principles of openness it was originally founded on.
For developers and AI startups, the delay is more than a philosophical issue—it’s a practical setback. Many were hoping to begin integrating the model into products and workflows this summer. Some had even adjusted their development timelines in anticipation of its release.
“Every delay forces us to reconsider our build plans,” said one founder of an AI tooling startup. “We’re trying to stay aligned with OpenAI’s ecosystem, but these shifting timelines make that difficult.”
The delay also comes at a time when the generative AI landscape is rapidly evolving. Competitors around the world, including in China and Europe, have begun releasing powerful open models with fewer restrictions. Some of these have already outperformed OpenAI’s smaller models on specific benchmarks. OpenAI’s decision to hold back its model, while principled, could create room for others to take the lead in the open-source space.
At the same time, recent controversies involving misbehaving AI systems on rival platforms have highlighted just how quickly an AI deployment can go wrong. For OpenAI, avoiding a public safety or ethics scandal may outweigh the costs of moving slowly.
Still, the lack of a clear timeline has led some in the AI community to speculate that OpenAI is rethinking its open-weight strategy altogether. The company has increasingly focused on its proprietary GPT models and enterprise offerings, leading some to wonder if the open model will ever see the light of day.
For now, OpenAI insists that the release is still coming—just not yet.
“We remain committed to making powerful AI broadly accessible in a safe and responsible way,” the company said in a brief statement. “This delay reflects that commitment.”
Whether that reassurance is enough to satisfy its growing developer base—or fend off mounting competition—remains to be seen.