What's New
Vercel's AI Gateway now supports OpenAI's Responses API, a modern alternative to the Chat Completions API. Developers can point the OpenAI SDK directly at AI Gateway's base URL and use the creator/model naming convention to route requests across providers.
Key Features
The Responses API integration includes:
- Text generation and streaming: Send prompts and receive responses with token streaming
- Tool calling: Define functions for models to invoke with result feedback loops
- Structured output: Constrain responses to JSON schemas for predictable data formats
- Reasoning: Control model computational effort with configurable reasoning levels
While all this functionality was previously accessible through AI Gateway via the AI SDK and Chat Completions API, developers can now use the Responses API directly for a more streamlined experience.
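As a sketch of the tool-calling feedback loop mentioned above: the `get_weather` tool, its stubbed result, and the two-request loop below are illustrative assumptions, not part of AI Gateway or the SDK itself.

```typescript
// Hypothetical tool definition in the Responses API's flattened format.
const weatherTool = {
  type: 'function' as const,
  name: 'get_weather',
  description: 'Get the current weather for a city',
  parameters: {
    type: 'object',
    properties: { city: { type: 'string' } },
    required: ['city'],
    additionalProperties: false,
  },
  strict: true,
};

// Sketch of the loop: run the model, execute any function calls it made,
// then send the results back in a follow-up request. `client` is assumed
// to be the OpenAI SDK instance configured against AI Gateway.
async function runWithTools(client: any, prompt: string) {
  const first = await client.responses.create({
    model: 'anthropic/claude-sonnet-4.6',
    input: prompt,
    tools: [weatherTool],
  });

  // Turn each function call into a function_call_output item.
  // The { tempC: 21 } result is a stub standing in for a real lookup.
  const toolOutputs = first.output
    .filter((item: any) => item.type === 'function_call')
    .map((item: any) => ({
      type: 'function_call_output' as const,
      call_id: item.call_id,
      output: JSON.stringify({ city: JSON.parse(item.arguments).city, tempC: 21 }),
    }));

  // Second request: replay the prior turn plus the tool results so the
  // model can produce a final answer.
  return client.responses.create({
    model: 'anthropic/claude-sonnet-4.6',
    input: [
      { role: 'user' as const, content: prompt },
      ...first.output,
      ...toolOutputs,
    ],
  });
}
```

The same loop generalizes to multiple tools: filter on `item.name` to dispatch each `function_call` item to the matching local function.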
Getting Started
Install the OpenAI SDK and configure it to point at AI Gateway:
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.AI_GATEWAY_API_KEY,
  baseURL: 'https://ai-gateway.vercel.sh/v1',
});
Then use the Responses API as you normally would, with support for both TypeScript and Python.
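A minimal sketch of plain generation and token streaming, assuming the `client` configured above (the prompt is arbitrary):

```typescript
// Assumes `client` is the OpenAI SDK instance pointed at AI Gateway.
async function generate(client: any) {
  // One-shot generation: `output_text` is the SDK's convenience accessor
  // for the concatenated text output of the response.
  const response = await client.responses.create({
    model: 'anthropic/claude-sonnet-4.6',
    input: 'Write a haiku about edge networks.',
  });
  console.log(response.output_text);

  // Streaming: `stream: true` yields server-sent events; text arrives
  // incrementally as `response.output_text.delta` events.
  const stream = await client.responses.create({
    model: 'anthropic/claude-sonnet-4.6',
    input: 'Write a haiku about edge networks.',
    stream: true,
  });
  for await (const event of stream) {
    if (event.type === 'response.output_text.delta') {
      process.stdout.write(event.delta);
    }
  }
}
```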
Example: Structured Output with Reasoning
Combine reasoning levels with JSON schemas to get structured, predictable responses:
const response = await client.responses.create({
  model: 'anthropic/claude-sonnet-4.6',
  input: 'Build a Next.js app with auth and a dashboard page.',
  reasoning: { effort: 'high' },
  text: {
    format: {
      type: 'json_schema',
      name: 'app_plan',
      strict: true,
      schema: { /* your schema */ },
    },
  },
});
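With a strict JSON schema, the model's text output is a single JSON document, so consuming it is one parse away. A small hypothetical helper, with a stubbed `output_text` standing in for a real model response:

```typescript
// Hypothetical helper: parse the schema-constrained JSON payload from a
// Responses API result. With `strict: true`, `output_text` is guaranteed
// to conform to the supplied schema.
function parsePlan(response: { output_text: string }): unknown {
  return JSON.parse(response.output_text);
}

// Stubbed response object for illustration (no network call is made).
const plan = parsePlan({
  output_text: '{"steps":["scaffold app","add auth","build dashboard"]}',
});
console.log(plan);
```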
For detailed documentation and more examples, visit the AI Gateway Responses API docs.