Zed launches Zeta2 edit prediction model with 30% better acceptance rate
feature · model release · open-source · zed.dev

Zeta2: A Major Step Forward

Zed has released Zeta2, the next generation of its edit prediction model, now available as the default for all users. The model delivers a 30% improvement in acceptance rate over Zeta1 and represents a complete reimagining of the training pipeline and infrastructure.

Key Improvements

Training at Scale: Zeta1 was trained on approximately 500 hand-curated examples. Zeta2 scales this dramatically to nearly 100,000 examples collected on an opt-in basis from Zed users working in open-source licensed repositories. This expansion required building an entirely new data pipeline for collection, processing, orchestration, and evaluation.

Better Context Understanding: The model now leverages LSP-based context retrieval (the same infrastructure that powers go-to-definition) to access type definitions and symbols across your codebase. This removes guesswork about code dependencies and gives Zeta2 richer information for its predictions, which also now arrive with lower latency.
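To make the retrieval mechanism concrete: a language server answers definition lookups via JSON-RPC requests like the one below. This is the standard LSP `textDocument/definition` request that go-to-definition issues; the file path and position here are purely illustrative, and Zed's internal retrieval layer may batch or extend such queries in ways not described in the announcement.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///project/src/main.rs" },
    "position": { "line": 41, "character": 12 }
  }
}
```

The server replies with the location(s) of the symbol's definition, which can then be fed to the model as additional context alongside the edited buffer.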

Open Weight Release: Zeta2 is released as open-weight and trained entirely on open-source code. Users who contributed data actively opted in rather than having to opt out. The model is available on Hugging Face, allowing developers to inspect it, run it locally, or fine-tune it on their own codebases.

What's Next

Future plans include "jumps"—a feature that helps developers follow compiler-flagged changes across call sites automatically. The primary focus, however, is continuous improvement through Direct Preference Optimization (DPO) and prompt format experimentation. Users can contribute to model improvement by enabling training data collection in the edit prediction settings.
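Zed names Direct Preference Optimization as its main avenue for improvement but does not publish training details. As a generic illustration of the DPO objective itself (function name and scalar formulation are my own; real training operates on batched token log-probabilities), a minimal sketch for a single preference pair:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen, rejected) edit pair.

    Pushes the policy to assign the preferred edit a higher likelihood
    than the rejected one, measured relative to a frozen reference
    model; beta controls how far the policy may drift from the reference.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): loss falls as the chosen edit's margin grows
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With equal log-probabilities on both sides the margin is zero and the loss is log 2; accepted predictions from opted-in users would serve as the "chosen" side of such pairs.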

Getting Started

Zeta2 is the default edit prediction model in Zed today. Developers new to edit predictions can enable them through settings or consult the documentation.
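For reference, enabling edit predictions comes down to a couple of entries in Zed's `settings.json`. The keys below reflect Zed's settings schema as I understand it and may vary by version; consult the official documentation for the authoritative names.

```json
{
  "features": {
    "edit_prediction_provider": "zed"
  },
  "show_edit_predictions": true
}
```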