Codex-Max from OpenAI handles coding tasks around the clock
OpenAI has released GPT-5.1-Codex-Max, an upgrade to its agentic coding model that uses a new "compaction" technique to sustain development sessions lasting more than 24 hours, working across multiple context windows on long-running, difficult tasks.
The specifics:
- Codex-Max outperforms the new Gemini 3 Pro on coding workloads and shows significant gains over Codex-High across development benchmarks.
- Thanks to improved reasoning efficiency, the model completes real-world tasks noticeably faster while using 30% fewer tokens than its predecessor.
- Compaction lets Max "prune" session history while preserving context, enabling it to work across millions of tokens and for more than 24 hours straight.
- The model is available now to Plus, Pro, and Enterprise users through OpenAI's Codex CLI and IDE extensions, with API access to follow shortly.
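The compaction idea can be illustrated with a minimal sketch: once a session's history exceeds a token budget, older entries are collapsed into a short summary so the agent can continue in a fresh context window. The function names, the character-based token estimate, and the placeholder summary below are all illustrative assumptions, not OpenAI's actual implementation.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per four characters."""
    return max(1, len(text) // 4)

def compact(history: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """Prune older history entries into a summary once the budget is exceeded.

    Keeps the most recent `keep_recent` entries verbatim and replaces
    everything older with a single summary line.
    """
    total = sum(estimate_tokens(entry) for entry in history)
    if total <= budget or len(history) <= keep_recent:
        return history  # still within budget; nothing to prune
    old, recent = history[:-keep_recent], history[-keep_recent:]
    # Stand-in for a model-generated summary of the pruned turns.
    summary = f"[summary of {len(old)} earlier steps]"
    return [summary] + recent
```

In a real agent loop the summary would itself be model-generated, so the pruned turns survive as distilled context rather than being discarded outright; that is what lets a session roll past a single context window.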
Even with Gemini 3 stealing the show this week, coding was one of the few areas where it still trailed, and Codex-Max, an incremental improvement rather than a major release, raises the bar even higher. The 24-hour coding sessions also continue the up-only trend in how long the best AI models can work on a single task.