Vercel AI Gateway adds MiniMax M2.7 model in standard and high-speed variants
MiniMax M2.7 Available on AI Gateway
MiniMax's M2.7 model is now accessible through Vercel's AI Gateway, offering developers a significant upgrade for software engineering and agentic tasks. The model is available in two variants:
- Standard: full model quality for general-purpose workloads
- High-speed: roughly 2x throughput at ~100 tokens per second, optimized for latency-sensitive applications
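Since both variants sit behind the same Gateway API, switching between them is just a matter of which model ID you pass. A minimal sketch of that choice (the model IDs are the ones documented below under Getting Started; the `preferSpeed` flag and helper name are illustrative, not part of the SDK):

```typescript
// Gateway model IDs for the two M2.7 variants.
const M2_7_STANDARD = 'minimax/minimax-m2.7';
const M2_7_HIGHSPEED = 'minimax/minimax-m2.7-highspeed';

// Use the high-speed variant on latency-sensitive paths and the
// standard variant everywhere else. `preferSpeed` is a hypothetical
// application-level flag, not a Gateway setting.
function pickM27Model(preferSpeed: boolean): string {
  return preferSpeed ? M2_7_HIGHSPEED : M2_7_STANDARD;
}
```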
Key Capabilities
M2.7 introduces native support for advanced workflows:
- Multi-agent collaboration and complex skill orchestration
- Dynamic tool search for intelligent agentic workflows
- Enhanced production debugging and end-to-end project delivery
- Improvements in professional office tasks and software engineering
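Dynamic tool search means the agent looks up relevant tools from a larger registry at each step rather than receiving every tool definition up front. The sketch below illustrates that idea in plain TypeScript; the registry shape, tool names, and keyword matching are illustrative assumptions, not MiniMax's or the Gateway's implementation:

```typescript
// Illustrative tool descriptor; not an SDK type.
interface ToolEntry {
  name: string;
  description: string;
}

// A toy registry the agent searches instead of loading
// every tool definition into its context window.
const registry: ToolEntry[] = [
  { name: 'read_logs', description: 'read production alert logs' },
  { name: 'list_deploys', description: 'list recent deployments' },
  { name: 'open_pr', description: 'open a pull request with a fix' },
];

// Naive keyword match standing in for a real relevance search.
function searchTools(query: string): ToolEntry[] {
  const words = query.toLowerCase().split(/\s+/);
  return registry.filter((t) =>
    words.some((w) => t.description.includes(w)),
  );
}
```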
Getting Started
To use M2.7 with the Vercel AI SDK, set the model parameter to either `minimax/minimax-m2.7` or `minimax/minimax-m2.7-highspeed`. The example below demonstrates using the high-speed variant for analyzing production alerts and submitting fixes:
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: 'minimax/minimax-m2.7-highspeed',
  prompt: `Analyze the production alert logs from the last hour,
    correlate them with recent deployments, identify the
    root cause, and submit a fix with a non-blocking
    migration to restore service.`,
});

// Print the response as it streams in.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
AI Gateway Features
M2.7 joins other models on Vercel's unified AI Gateway, which provides:
- Unified API for calling multiple models
- Usage tracking and cost monitoring
- Configurable retries and failover strategies
- Built-in observability and provider routing
- Bring Your Own Key (BYOK) support for cost optimization
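Retry and failover are configured on the Gateway side, but as a rough mental model they behave like the ordered-fallback loop below. This is a conceptual sketch in plain TypeScript, not the Gateway's actual code; the `ProviderCall` type and `withFailover` helper are made up for illustration:

```typescript
// Hypothetical upstream call; stands in for a real provider request.
type ProviderCall = (model: string) => Promise<string>;

// Try each provider in order, falling back to the next on failure --
// a simplified model of an ordered failover strategy.
async function withFailover(
  providers: ProviderCall[],
  model: string,
): Promise<string> {
  let lastError: unknown;
  for (const call of providers) {
    try {
      return await call(model);
    } catch (err) {
      lastError = err; // this provider failed; try the next one
    }
  }
  throw lastError;
}
```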