OpenAI releases GPT-5.4 mini in Codex; 2x faster, using 30% of GPT-5.4's tokens
OpenAI API · release · model · feature · developers.openai.com

GPT-5.4 mini Now Available in Codex

OpenAI has released GPT-5.4 mini as a new model option within Codex. The model improves across coding, reasoning, image understanding, and tool use while maintaining a lightweight footprint and significantly lower resource consumption.

Key Performance Improvements

  • 2x faster execution compared to GPT-5 mini
  • Uses 30% of GPT-5.4's tokens, letting comparable tasks run roughly 3.3x longer on the same budget
  • Improved performance across coding, reasoning, image understanding, and tool use
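The 3.3x figure follows directly from the token ratio: at 30% of GPT-5.4's token usage, a fixed token budget covers about 1/0.30 ≈ 3.3 times as much work. A quick sketch of the arithmetic:

```python
# Token-budget arithmetic behind the announcement's 3.3x claim.
# If GPT-5.4 mini uses 30% of GPT-5.4's tokens per unit of work,
# the same budget stretches 1 / 0.30 ~= 3.3x further.
mini_token_ratio = 0.30
budget_extension = 1 / mini_token_ratio
print(round(budget_extension, 1))  # 3.3
```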

Recommended Use Cases

GPT-5.4 mini is designed for:

  • Codebase exploration and analysis
  • Large-file code review
  • Processing supporting documents
  • Less reasoning-intensive subagent work

For more complex planning, coordination, and final judgment, developers should continue using GPT-5.4.
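This split suggests a simple routing pattern: send lightweight subtasks to the mini model and reserve GPT-5.4 for planning and final judgment. A minimal sketch of such a router, assuming a hypothetical task-label scheme (the helper and labels are illustrative, not part of Codex):

```python
# Hypothetical model router following the announcement's guidance:
# exploration, review, document processing, and subagent work go to
# GPT-5.4 mini; planning, coordination, and final judgment go to GPT-5.4.
LIGHTWEIGHT_TASKS = {"explore", "review", "process_docs", "subagent"}

def pick_model(task: str) -> str:
    """Return the model name suited to a task label (labels are illustrative)."""
    return "gpt-5.4-mini" if task in LIGHTWEIGHT_TASKS else "gpt-5.4"

print(pick_model("review"))  # gpt-5.4-mini
print(pick_model("plan"))    # gpt-5.4
```

In practice the returned model name would be passed to whatever invokes the model, e.g. the CLI's `--model` flag described below.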

How to Use

GPT-5.4 mini is immediately available across all Codex platforms:

CLI: Start a new thread with codex --model gpt-5.4-mini, or use the /model command during an active session

IDE Extension: Select GPT-5.4 mini from the model selector in the composer

Codex App & Web: Choose GPT-5.4 mini from the model selector in the composer

Users should update to the latest versions of the CLI, IDE extension, or Codex app if GPT-5.4 mini is not yet visible.