OpenAI Releases GPT-5.4-Cyber Model to Specific Users, Dedicated to Cybersecurity Defense
2026-04-15 10:18

en.Wedoany.com Reported - On April 14, 2026 local time, OpenAI announced the launch of GPT-5.4-Cyber, a fine-tuned variant of its latest flagship model GPT-5.4 designed specifically for defensive cybersecurity tasks. Just one week earlier, on April 7, competitor Anthropic had released its cutting-edge AI model Mythos in a targeted, invitation-only beta under its Project Glasswing initiative. OpenAI stated that GPT-5.4-Cyber will initially be made available on a targeted basis to vetted security vendors, organizations, and researchers, with hundreds of users testing it at first and plans to expand to thousands in the coming weeks.

GPT-5.4-Cyber lowers the refusal boundary for defensive cybersecurity scenarios, significantly reducing restrictions on legitimate security instructions. According to OpenAI's official documentation, the model supports advanced defensive workflows such as binary reverse engineering, allowing security professionals to analyze compiled software without source code to detect potential vulnerabilities and malware risks. OpenAI introduced cyber-specific security training starting with GPT-5.2 and categorizes cybersecurity capability as "High" in GPT-5.4. The base GPT-5.4 model refuses roughly 8% of cyber-related requests, while GPT-5.4-Cyber reduces the refusal rate for legitimate defensive tasks to about 1%, significantly enhancing its practicality for sensitive tasks such as vulnerability research and analysis.

Anthropic's Mythos model has demonstrated strong vulnerability discovery capabilities since its release. The model scored 83.1% on cybersecurity vulnerability reproduction benchmarks, far surpassing Claude Opus 4.6's 66.6%, and has identified thousands of high-risk vulnerabilities across major operating systems, browsers, and foundational software. Concerned about potential misuse by hackers and other malicious actors, Anthropic chose not to release Mythos publicly, authorizing only a few partners, such as Amazon and Apple, for defensive use through its Project Glasswing program. OpenAI's release of GPT-5.4-Cyber just one week after its competitor's announcement reflects the intensifying competition between the two companies in AI for cybersecurity.

OpenAI has linked access to GPT-5.4-Cyber to its "Cybersecurity Trusted Access" program, launched in February of this year. The company has added a new tiered access mechanism to the program: users who pass the highest level of vetting gain access to GPT-5.4-Cyber and encounter significantly fewer model restrictions when conducting vulnerability research and analysis. Individual users can apply to join the program by completing identity verification on the OpenAI website, while enterprise customers can apply for trusted access for their teams through dedicated representatives. OpenAI also emphasized that in low-transparency usage scenarios, such as "zero data retention" mode or access through third-party platforms, the model's advanced permissions may be restricted.
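The gating logic described above can be sketched as a simple policy check. Note this is a hypothetical illustration only: the tier names, the `User` fields, and the `can_use_cyber_model` function are assumptions for the sketch, not OpenAI's actual "Cybersecurity Trusted Access" implementation, which has not been published.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    tier: str                  # hypothetical vetting tier: "basic", "verified", or "trusted"
    zero_data_retention: bool  # opaque usage mode mentioned in the article
    via_third_party: bool      # accessed through a third-party platform

def can_use_cyber_model(user: User) -> bool:
    """Grant the specialized model only to the highest vetting tier,
    and withhold advanced permissions in low-transparency scenarios."""
    if user.tier != "trusted":          # only the top KYC-vetted tier qualifies
        return False
    if user.zero_data_retention or user.via_third_party:
        return False                    # low-transparency access is restricted
    return True

print(can_use_cyber_model(User("alice", "trusted", False, False)))  # True
print(can_use_cyber_model(User("bob", "trusted", True, False)))     # False
```

The point of the sketch is that eligibility is decided by objective, checkable attributes (vetting tier, transparency of the access path) rather than case-by-case judgment, which matches the "objective criteria" framing OpenAI describes.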

At the strategic security level, OpenAI has established three core principles for GPT-5.4-Cyber: democratized access, iterative deployment, and ecosystem resilience. The company determines access to advanced features based on objective criteria such as "Know Your Customer" identity verification, avoiding subjective judgments about legitimate users' eligibility. On ecosystem development, OpenAI launched a cybersecurity grant program in 2023 and released the Codex Security tool this year, which automatically monitors codebases and suggests fixes and has already helped remediate over 3,000 high- and critical-severity vulnerabilities. Given the dual-use nature of AI for both offense and defense, OpenAI believes security safeguards must expand in tandem with model capabilities. The limited deployment strategy for GPT-5.4-Cyber is an attempt to strike a balance between safety and utility.

This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com