OpenAI released two new artificial intelligence (AI) models on Wednesday. Dubbed o3 and o4-mini, these are the company’s latest reasoning-focused models with visible chain-of-thought (CoT). The San Francisco-based AI firm stated that the models come with visual reasoning capability, meaning they can analyse and “think” about an image to answer more complex user queries. Successors to o1 and o3-mini, the models are currently available to paid ChatGPT subscribers. Notably, the company also released the GPT-4.1 series of AI models earlier this week.
OpenAI’s New Reasoning Models Arrive With Improved Performance
In a post on X (formerly known as Twitter), the official handle of OpenAI announced the release of the new large language models (LLMs). Calling them the company’s “smartest and most capable models,” the AI firm highlighted that these models now come with visual reasoning capability.
Visual reasoning essentially means that these AI models can better analyse images, extracting contextual and even implicit information from them. On its website, OpenAI said these are the first models from the company that can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.
This means the o3 and o4-mini models can look up an image on the web, manipulate it by zooming, cropping, flipping, and enhancing, and even run Python code to extract information. OpenAI said this allows the models to find information even in imperfect images.
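OpenAI has not published the internals of this tool use, but the manipulations described correspond to standard Python image operations. Below is a minimal sketch of comparable transforms using the Pillow library; the file names and crop coordinates are placeholders, not details from OpenAI.

```python
from PIL import Image  # pip install Pillow

# Open a local image; the filename is purely illustrative.
img = Image.open("bus_sign.jpg")

# Crop to a region of interest: (left, upper, right, lower) in pixels.
region = img.crop((100, 50, 400, 200))

# "Zoom" by upscaling the cropped region.
zoomed = region.resize((region.width * 2, region.height * 2), Image.LANCZOS)

# Flip horizontally, or rotate the whole image,
# e.g. to make upside-down handwriting readable.
flipped = zoomed.transpose(Image.FLIP_LEFT_RIGHT)
upright = img.rotate(180)

upright.save("readable.jpg")
```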
Some of the tasks these models can now perform include reading handwriting in a notebook held upside down, reading a faraway sign with barely legible text, picking out a particular question from a long list, finding a bus schedule in a picture of a bus, solving a puzzle, and more.
On performance, OpenAI claimed that the o3 and o4-mini models outperform GPT-4o and o1 on the MMMU, MathVista, VLMs are Blind, and CharXiv benchmarks. The company did not share any performance comparisons with third-party AI models.
OpenAI also highlighted several limitations of these models. They may perform unnecessary image-manipulation steps and tool calls, resulting in overly long chains of thought. The o3 and o4-mini are also susceptible to perception errors and can misinterpret visual information, leading to incorrect responses. Further, the firm noted that the models may also face reliability issues.
Both the o3 and o4-mini models are being made available to ChatGPT Plus, Pro, and Team users, replacing the o1, o3-mini, and o3-mini-high models in the model selector. Enterprise and Edu users will get access next week. Developers can access the models via the Chat Completions and Responses application programming interfaces (APIs), as sketched below.
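As a rough illustration of that developer access, here is a minimal sketch using OpenAI’s official Python SDK. The model identifier strings ("o3", "o4-mini") and the prompt are assumptions for illustration and should be checked against OpenAI’s API documentation.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Chat Completions API; "o4-mini" as the model name is an assumption.
chat = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Summarise the text on this sign."}],
)
print(chat.choices[0].message.content)

# Responses API; the same caveat applies to the "o3" model identifier.
resp = client.responses.create(
    model="o3",
    input="Summarise the text on this sign.",
)
print(resp.output_text)
```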