OpenAI revealed on Thursday that it is launching GPT-4.5, the highly anticipated AI model code-named Orion. This new release is the company’s largest yet, leveraging more computing power and data than any previous model.
However, OpenAI clarifies in a white paper that it does not classify GPT-4.5 as a frontier model.
Starting today, ChatGPT Pro subscribers—OpenAI’s $200-a-month plan—can access GPT-4.5 in ChatGPT as part of a research preview. Developers on OpenAI’s paid API tiers can also begin using the model immediately. Other ChatGPT users, including those on ChatGPT Plus and ChatGPT Team, should receive access next week, according to an OpenAI spokesperson.
The AI industry has eagerly awaited Orion, which many see as a key test for traditional AI training methods. OpenAI built Orion using the same core approach as previous models—scaling up computing power and data during an unsupervised learning phase called pre-training.
In past GPT versions, this scaling led to major performance leaps in areas like math, writing, and coding. OpenAI claims GPT-4.5 benefits from “deeper world knowledge” and “higher emotional intelligence.” However, the impact of scaling seems to be leveling off. On several AI benchmarks, GPT-4.5 lags behind newer reasoning models from DeepSeek, Anthropic, and even OpenAI itself.
Running Orion is also costly. OpenAI acknowledges its high operational expenses and is considering whether to keep offering it in the API long-term. Developers must pay $75 per million input tokens (about 750,000 words) and $150 per million output tokens. In contrast, GPT-4o costs just $2.50 per million input tokens and $10 per million output tokens.
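To make the pricing gap concrete, here is a minimal sketch of the per-request cost at the published rates. Only the per-million-token prices come from the announcement; the helper function and request sizes are illustrative assumptions, not part of any OpenAI API:

```python
# Per-million-token rates (USD) as published for each model.
GPT_45_INPUT, GPT_45_OUTPUT = 75.00, 150.00   # GPT-4.5
GPT_4O_INPUT, GPT_4O_OUTPUT = 2.50, 10.00     # GPT-4o

def cost(input_tokens: int, output_tokens: int,
         input_rate: float, output_rate: float) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# Example: a request with 10,000 input tokens and 2,000 output tokens.
gpt45 = cost(10_000, 2_000, GPT_45_INPUT, GPT_45_OUTPUT)  # $1.05
gpt4o = cost(10_000, 2_000, GPT_4O_INPUT, GPT_4O_OUTPUT)  # $0.045
print(f"GPT-4.5: ${gpt45:.3f}  GPT-4o: ${gpt4o:.3f}  ratio: {gpt45 / gpt4o:.1f}x")
```

At these rates the same request costs roughly 23 times more on GPT-4.5 than on GPT-4o, which helps explain OpenAI's hesitation about keeping the model in the API long-term.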

GPT-4.5 vs. GPT-4o: Strengths and Trade-offs
OpenAI makes it clear—GPT-4.5 is not a direct replacement for GPT-4o, the core model behind most of its API and ChatGPT. While Orion supports features like file uploads, image processing, and the ChatGPT canvas tool, it lacks ChatGPT's realistic two-way voice mode.
On the plus side, GPT-4.5 delivers stronger performance than GPT-4o and many competing models.
In OpenAI’s SimpleQA benchmark, which tests AI on factual questions, GPT-4.5 outperforms GPT-4o and OpenAI’s reasoning models, o1 and o3-mini. It also hallucinates less frequently, reducing the risk of generating false information.
However, OpenAI did not compare GPT-4.5 against its top reasoning model, deep research, on SimpleQA. When asked, an OpenAI spokesperson told TechCrunch that deep research’s performance on this benchmark has not been publicly disclosed. Interestingly, Perplexity’s Deep Research model, which performs similarly to OpenAI’s deep research in other tests, surpasses GPT-4.5 in factual accuracy on SimpleQA.

GPT-4.5 and the Future of AI Training
OpenAI calls GPT-4.5 “the frontier of unsupervised learning.” That claim may hold, but the model’s limitations reinforce expert predictions that pre-training “scaling laws” are losing their edge.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, warned in December that AI has “achieved peak data” and that “pre-training as we know it will unquestionably end.” His statement aligns with concerns AI investors, founders, and researchers shared with TechCrunch in November.
To overcome these challenges, the industry—including OpenAI—is shifting toward reasoning models. These models take longer to process tasks but deliver more consistent results. AI labs believe increasing computational effort for reasoning will unlock major performance gains.
OpenAI plans to merge its GPT series with its “o” reasoning models, starting with GPT-5 later this year. Orion, reportedly costly to train, delayed multiple times, and underwhelming internally, may not dominate AI benchmarks. But OpenAI likely views it as a critical step toward something far more advanced.