IBM releases Mellea 0.4.0 with Granite Libraries for structured AI workflows
release · features · sdk · open-source · huggingface.co

Mellea 0.4.0 Release

IBM Research has released Mellea 0.4.0, an open-source Python library designed to bring structure and predictability to generative AI programs. Unlike general-purpose orchestration frameworks, Mellea replaces probabilistic prompt behavior with deterministic, maintainable workflows through constrained decoding, structured repair loops, and composable pipelines.
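To make the idea of constrained decoding concrete, here is a minimal, self-contained sketch. It is not Mellea's actual API; `propose`, `allowed`, and `is_done` are hypothetical names standing in for the model's token ranking, the schema constraint, and the stopping condition. The key property it illustrates: tokens that would violate the constraint are masked before they can be emitted, so the output is valid by construction rather than by luck.

```python
def constrained_generate(propose, allowed, is_done, max_steps=32):
    """Toy greedy constrained decoding.

    propose(out) -> candidate tokens in model-preference order
    allowed(out, tok) -> True if appending tok keeps the output schema-valid
    is_done(out) -> True when the output is complete
    All names are illustrative, not Mellea's real interface.
    """
    out = []
    for _ in range(max_steps):
        if is_done(out):
            break
        for tok in propose(out):
            if allowed(out, tok):  # mask any token that breaks the constraint
                out.append(tok)
                break
        else:
            raise ValueError("constraint admits none of the proposed tokens")
    return "".join(out)


# Usage: the "model" prefers "maybe", but the constraint only admits yes/no,
# so the decoder falls through to the highest-ranked valid token.
vocab = ["maybe", "yes", "no"]
answer = constrained_generate(
    propose=lambda out: vocab,
    allowed=lambda out, tok: not out and tok in ("yes", "no"),
    is_done=lambda out: len(out) == 1,
)
print(answer)  # -> yes
```

Real constrained decoders apply the same masking at the logit level inside the model's sampling loop; the structure of the guarantee is the same.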

What's New in Mellea 0.4.0

The latest release expands on the foundational primitives from version 0.3.0 and introduces several key improvements:

  • Native Granite Libraries Integration: Direct support for the new Granite Libraries with standardized APIs that leverage constrained decoding to guarantee schema correctness
  • Instruct-Validate-Repair Pattern: New rejection sampling strategies for iterative validation and repair of generated outputs
  • Observability Hooks: Event-driven callbacks to monitor, track, and debug generative workflows in production
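The last two bullets can be sketched together in a few lines of plain Python. This is an illustration of the pattern, not Mellea's API: `generate`, `validate`, `on_event`, and `loop_budget` are hypothetical names. The loop samples a candidate, checks it against requirements, and on failure re-instructs with the unmet requirements appended (rejection sampling with repair); an optional callback fires at each stage, which is the shape an observability hook takes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RepairLoop:
    """Illustrative instruct-validate-repair loop with an event hook.

    generate(prompt) -> candidate output
    validate(candidate) -> list of unmet requirements (empty means success)
    on_event(name, data) -> observability callback, fired at each stage
    """
    generate: Callable[[str], str]
    validate: Callable[[str], list]
    on_event: Callable[[str, dict], None] = lambda name, data: None
    loop_budget: int = 3

    def run(self, instruction: str) -> str:
        prompt = instruction
        for attempt in range(self.loop_budget):
            self.on_event("generate", {"attempt": attempt, "prompt": prompt})
            candidate = self.generate(prompt)
            failures = self.validate(candidate)
            self.on_event("validate", {"attempt": attempt, "failures": failures})
            if not failures:
                return candidate  # all requirements met: accept the sample
            # Repair: re-instruct with the unmet requirements appended.
            prompt = f"{instruction}\nFix the following: {'; '.join(failures)}"
        raise RuntimeError("loop budget exhausted without a valid sample")


# Usage with a stub "model" that fails once, then succeeds.
outputs = iter(["helo", "hello world"])
loop = RepairLoop(
    generate=lambda p: next(outputs),
    validate=lambda s: [] if "hello" in s else ["must contain 'hello'"],
    on_event=lambda name, data: print(name, data),
)
print(loop.run("Say hello."))  # -> hello world, on the second attempt
```

The loop budget bounds cost: a misbehaving model fails loudly instead of retrying forever, which is what makes the pattern usable in production pipelines.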

Granite Libraries: Specialized Model Adapters

The Granite Libraries represent a new approach to LLM composition: rather than relying on general-purpose prompting, each library contains specialized LoRA adapters fine-tuned for specific tasks. Three libraries are launching today for the granite-4.0-micro model:

  • granitelib-core-r1.0: Requirements validation within the instruct-validate-repair loop
  • granitelib-rag-r1.0: Pre-retrieval, post-retrieval, and post-generation tasks for agentic RAG pipelines
  • granitelib-guardian-r1.0: Safety, factuality, and policy compliance checking

This modular approach increases task-specific accuracy at modest parameter cost without disrupting base model capabilities.

Getting Started

Developers can access the libraries through multiple channels.

The release includes companion research papers and video tutorials explaining the architectural patterns and use cases.