OpenAI may soon release a groundbreaking AI tool designed to control your PC and complete tasks on your behalf. Tibor Blaho, a software engineer known for accurately leaking upcoming AI products, has shared new insights about OpenAI’s Operator tool. Previous reports, including one from Bloomberg, describe Operator as an “agentic” system capable of autonomously performing tasks such as coding and booking travel.
According to The Information, OpenAI is eyeing a January launch for Operator. Blaho’s latest findings add weight to this timeline.
Over the weekend, Blaho discovered hidden options in OpenAI’s macOS ChatGPT client, including settings to “Toggle Operator” and “Force Quit Operator.” He also found unpublished references to Operator on OpenAI’s website, among them tables comparing Operator’s performance to other AI systems, though these may be placeholders. If the data is genuine, it suggests Operator is not yet fully reliable across all tasks.

OpenAI Operator: Progress, Challenges, and Industry Context
On the OSWorld benchmark, which simulates a real computer environment, the “OpenAI Computer Use Agent (CUA)”—likely the AI model behind Operator—scores 38.1%. This outpaces Anthropic’s computer-controlling model but falls short of the 72.4% achieved by humans. On WebVoyager, a benchmark for navigating and interacting with websites, CUA surpasses human performance, but the leaked data shows it struggles on WebArena, another web-based test.
Operator also has difficulty with tasks that are simple for humans. In tests, it succeeded in signing up for a cloud provider and launching a virtual machine only 60% of the time. Creating a Bitcoin wallet proved even harder, with a success rate of just 10%.
The Competitive and Evolving AI Agent Space
OpenAI’s foray into AI agents comes as competitors like Anthropic and Google invest heavily in the emerging segment. While these systems remain experimental, tech leaders tout them as the next frontier in AI. Research firm MarketsandMarkets projects the AI agent industry could reach $47.1 billion by 2030.
Despite their potential, current AI agents are still primitive. Some experts have flagged safety concerns, especially if the technology evolves rapidly. One leaked chart shows Operator performing well on safety tests designed to prevent illicit activities and protect sensitive personal data. OpenAI has reportedly prioritized safety testing, extending the development timeline.
In a recent X post, OpenAI co-founder Wojciech Zaremba criticized Anthropic for launching an agent he claims lacks sufficient safety measures. “I can only imagine the backlash if OpenAI did something similar,” he noted.
Still, OpenAI faces scrutiny over its approach to safety. Critics, including former staff, have accused the company of prioritizing rapid product development over safety considerations.
The future of OpenAI’s Operator and other AI agents will depend on how well they address these challenges while meeting market demand.