OpenAI calls o3 its ‘smartest and most capable model yet’

On Wednesday, OpenAI introduced two new reasoning models, o3 and o4-mini, just a few days after launching GPT-4.1 in its API for developers. According to a post by OpenAI on X, these “reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.”
“o3 and o4-mini are out! They are very capable. o4-mini is a ridiculously good deal for the price. They can use and combine every tool within ChatGPT. Multimodal understanding is particularly impressive. o3 and o4-mini are super good at coding, so we are releasing a new product, Codex CLI, to make them easier to use. This is a coding agent that runs on your computer. It is fully open source and available today; we expect it to rapidly improve,” wrote Sam Altman, CEO of OpenAI, in a post on X on Wednesday.
Altman also confirmed that a more capable o3-Pro model will be made available to Pro users “in a few weeks.”
OpenAI’s model naming can be quite confusing, especially if you’re trying to keep track of its best-ever models. Just a few days ago, Sam Altman acknowledged the issue in a post on X, writing, “How about we fix our model naming this summer, and everyone gets a few more months to make fun of us (which we very much deserve) until then?” Judging by the names of the latest models, it seems we might indeed have to wait a bit longer for that fix from OpenAI.
OpenAI describes o3 as a powerful model capable of handling tasks across domains including coding, math, science, and visual reasoning. o4-mini, meanwhile, is a smart yet cost-efficient model with significantly higher usage limits than o3, offering “a strong high-volume, high-throughput option for everyone with questions that benefit from reasoning.”
“Thinking with Images” has been one of our core bets in Perception since the earliest o-series launch. We quietly shipped o1 vision as a glimpse—and now o3 and o4-mini bring it to life with real polish. Huge shoutout to our amazing team members, especially:– @mckbrando, for… https://t.co/WzqoOl8EwW
— Jiahui Yu (@jhyuxm) April 16, 2025
These are also the first two language models from OpenAI that can process uploaded images as part of their chain of thought — meaning they don’t just see an image, they think about it.
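For developers, this image-reasoning capability is exposed through the API’s standard multimodal message format rather than anything model-specific. The sketch below is illustrative only: it assumes the official OpenAI Python SDK, an OPENAI_API_KEY set in the environment, and a placeholder image URL.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask o3 to reason over an image alongside a text prompt.
# The image URL below is a placeholder, not a real asset.
response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this whiteboard sketch describe?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/whiteboard.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```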
ChatGPT Plus and Pro users will get access to o3, o4-mini, and o4-mini-high starting today, replacing o1, o3-mini, and o3-mini-high. ChatGPT Enterprise and Edu users will gain access within a week. Developers can also access o3 and o4-mini via the Chat Completions API and the Responses API.
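What a call to each interface might look like with the official OpenAI Python SDK is sketched below; the prompt and model choices are illustrative, and the snippet again assumes an OPENAI_API_KEY set in the environment.

```python
from openai import OpenAI

client = OpenAI()

# Chat Completions API, here with the cost-efficient o4-mini.
chat = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Summarize the Collatz conjecture in two sentences."}],
)
print(chat.choices[0].message.content)

# The same request through the newer Responses API, here with o3.
resp = client.responses.create(
    model="o3",
    input="Summarize the Collatz conjecture in two sentences.",
)
print(resp.output_text)
```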