# Deploy LiteLLM on Vercel
Vercel now supports deploying LiteLLM server, a lightweight proxy server that provides OpenAI-compatible access to language models. This integration enables developers to run a unified LLM gateway on Vercel's platform, abstracting away provider-specific APIs.
## What You Get
LiteLLM Gateway Features:
- OpenAI-compatible API endpoints for seamless client integration
- Support for multiple LLM providers through a single interface
- Native integration with Vercel AI Gateway for optimized routing
- Simple Python-based deployment with minimal configuration
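Because the gateway speaks the OpenAI chat-completions format, switching providers means changing only the `model` field in an otherwise identical request. A minimal sketch of that single-interface idea (the model names here are illustrative route names, not ones defined in this post):

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload.

    The same payload shape works for every model routed through the
    gateway; only the `model` field changes per provider route.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Identical request shape, different (hypothetical) upstream routes:
openai_req = chat_request("gpt-5.4-gateway", "Hello!")
claude_req = chat_request("claude-gateway", "Hello!")
assert openai_req["messages"] == claude_req["messages"]
print(json.dumps(openai_req))
```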
## Getting Started
Deployment is straightforward with a basic `app.py`:

```python
# app.py — expose the LiteLLM proxy's ASGI app for Vercel to serve
from litellm.proxy import proxy_server

app = proxy_server.app
```
Configure which models to route through Vercel AI Gateway or other providers using `litellm_config.yaml`. For example, routing GPT-5.4 through the AI Gateway:
```yaml
# litellm_config.yaml
model_list:
  - model_name: gpt-5.4-gateway
    litellm_params:
      model: vercel_ai_gateway/openai/gpt-5.4
      api_key: os.environ/VERCEL_AI_GATEWAY_API_KEY
```
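Once deployed, any OpenAI-compatible client can target the proxy. A minimal standard-library sketch of the request shape; the deployment URL and API key are placeholders, and the request is only constructed here, not sent:

```python
import json
import urllib.request

# Hypothetical deployment URL; substitute your Vercel deployment's domain.
BASE_URL = "https://my-litellm.vercel.app"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Construct (but do not send) an OpenAI-style chat-completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/chat/completions",  # OpenAI-compatible endpoint path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("gpt-5.4-gateway", "Hello!", api_key="sk-placeholder")
# To actually send it: urllib.request.urlopen(req)
```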
## Next Steps
Deploy your LiteLLM instance using Vercel's deployment tools, or consult the documentation for advanced routing configurations and multi-provider setups. This is particularly useful for teams needing cost optimization or provider failover capabilities.
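As one sketch of what a failover setup could look like, the config below adds a hypothetical secondary route and a fallback rule. The second model entry, its environment variable, and the `router_settings.fallbacks` field are assumptions here; check LiteLLM's routing documentation for the exact supported fields:

```yaml
model_list:
  - model_name: gpt-5.4-gateway
    litellm_params:
      model: vercel_ai_gateway/openai/gpt-5.4
      api_key: os.environ/VERCEL_AI_GATEWAY_API_KEY
  # Hypothetical secondary route used as a fallback target
  - model_name: gpt-5.4-direct
    litellm_params:
      model: openai/gpt-5.4
      api_key: os.environ/OPENAI_API_KEY

router_settings:
  # If the gateway route fails, retry the request on the direct route
  fallbacks:
    - gpt-5.4-gateway: ["gpt-5.4-direct"]
```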