GitHub Copilot gains GPT-5.4 mini model with faster response times and stronger codebase exploration
GitHub Copilot · release · feature · model · integration · github.blog

GPT-5.4 mini Now Available in GitHub Copilot

OpenAI's GPT-5.4 mini has rolled out to GitHub Copilot, bringing improved performance for developers using the AI pair programmer. According to GitHub, this is OpenAI's highest-performing mini model to date, with notable improvements in response latency and codebase exploration capabilities.

Key Capabilities

The new model excels in several areas:

  • Faster response times: The fastest time to first token of OpenAI's mini models to date
  • Improved codebase exploration: Better at understanding and navigating large codebases
  • Tool effectiveness: Especially strong when using grep-style search tools and similar utilities
  • Agentic features: Full support across chat, ask, edit, and agent modes
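The "grep-style search tools" mentioned above are the shell utilities an agentic model typically invokes while exploring a repository. As a rough illustration, this sketch runs such a search over a throwaway directory; the file names and pattern are hypothetical, not from the announcement.

```python
"""Sketch of the kind of grep-style search an agent issues when
exploring a codebase (hypothetical files and search pattern)."""
import pathlib
import subprocess
import tempfile

# Set up a tiny throwaway "codebase" to search.
root = pathlib.Path(tempfile.mkdtemp())
(root / "auth.py").write_text("def handle_auth():\n    pass\n")
(root / "api.py").write_text("from auth import handle_auth\n")

# -r recurse into subdirectories, -n prefix each match with its line number.
result = subprocess.run(
    ["grep", "-rn", "handle_auth", str(root)],
    capture_output=True, text=True,
)
print(result.stdout)  # one path:line:match entry per occurrence
```

A model that uses tools like this effectively can locate definitions and call sites without reading every file, which is what "codebase exploration" amounts to in practice.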

Availability and Rollout

GPT-5.4 mini is available to Copilot Pro, Pro+, Business, and Enterprise users. You can select the model from the model picker in:

  • Visual Studio Code, Visual Studio, JetBrains IDEs, Xcode, and Eclipse
  • github.com and GitHub Mobile (iOS/Android)
  • GitHub CLI

For the best experience, GitHub recommends upgrading to the latest version of your IDE, as newer versions support improved prompting and model parameters.

Enabling the Model

Enterprise and Business users: Administrators must enable the GPT-5.4 mini policy in Copilot settings. Once enabled, users will see the model in the VS Code model picker.

Bring Your Own Key: Users can select Manage Models from the picker, choose GPT-5.4 mini, and provide their own OpenAI API key for access.

Pricing

The model launches with a 0.33x premium request multiplier, though GitHub notes this rate is tentative and may change. Full pricing details are available in the Copilot documentation.
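As a minimal sketch of what a 0.33x multiplier means for quota usage (the function name is illustrative; the rate is the tentative one from the announcement):

```python
# Hypothetical helper showing how a premium request multiplier is applied:
# each model request consumes multiplier-many premium requests from a plan's quota.
MULTIPLIER = 0.33  # tentative 0.33x rate announced for GPT-5.4 mini

def premium_requests_used(model_requests: int, multiplier: float = MULTIPLIER) -> float:
    """Premium requests consumed by a number of model requests."""
    return model_requests * multiplier

print(premium_requests_used(100))  # 100 requests at 0.33x consume 33 premium requests
```

In other words, the same monthly premium request allowance stretches roughly three times further with this model than with a 1x model.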