IBM releases Mellea 0.4.0 and three Granite Libraries for structured AI workflows

Mellea 0.4.0 Release

IBM Research has released Mellea 0.4.0, an open-source Python library designed to replace brittle, probabilistic prompting with structured AI workflows. Unlike general-purpose orchestration frameworks, Mellea uses constrained decoding, structured repair loops, and composable pipelines to make LLM-based programs predictable and maintainable.
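Constrained decoding makes output validity a property of the decoding loop rather than of the prompt: at each step, only tokens the target grammar or schema permits may be emitted. The toy below illustrates the idea with a mock model and a tiny JSON-shaped grammar; it is a hypothetical sketch of the technique, not Mellea's API.

```python
# Toy sketch of constrained decoding (hypothetical, not Mellea's API):
# candidate tokens from a mock model are masked against a grammar that
# only accepts the shape {"name": <word>}.

GRAMMAR = ['{"name":', '"', None, '"', '}']  # None = any word token


def mock_model_candidates(step):
    """Stand-in for model logits: ranked next-token proposals per step."""
    proposals = [
        ['Sure,', '{"name":', 'Here'],
        ['"', "'"],
        ['Ada', 'the'],
        ['"', ')'],
        ['}', '.'],
    ]
    return proposals[step]


def constrained_decode():
    out = []
    for step, slot in enumerate(GRAMMAR):
        for token in mock_model_candidates(step):
            # Mask: keep only the highest-ranked token the grammar allows here.
            if slot is None or token == slot:
                out.append(token)
                break
        else:
            raise ValueError(f"no grammar-valid token at step {step}")
    return "".join(out)


print(constrained_decode())  # → {"name":"Ada"}
```

Note that the model's top choice at step 0 ("Sure,") is rejected by the mask, so the output is schema-valid by construction, not by luck.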

Key Features in 0.4.0

  • Native Granite Libraries Integration: Direct API support for specialized model adapters with guaranteed schema correctness through constrained decoding
  • Instruct-Validate-Repair Pattern: Rejection sampling strategies to validate and repair LLM outputs
  • Observability Hooks: Event-driven callbacks for monitoring and tracking generative workflows
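The instruct-validate-repair pattern and observability hooks above can be sketched together as a plain rejection-sampling loop: generate, validate against a requirement, report the attempt to a callback, and resample on failure until a loop budget runs out. All names below are hypothetical illustrations of the pattern, not Mellea's actual API.

```python
# Toy sketch of instruct-validate-repair via rejection sampling, with an
# event callback for observability. Hypothetical names, not Mellea's API.

def make_mock_model():
    """Mock LLM that violates the requirement twice before succeeding."""
    outputs = iter(["HELLO WORLD", "Hello World", "hello world"])
    return lambda prompt: next(outputs)


def instruct_validate_repair(model, prompt, validate, loop_budget=5, on_event=None):
    for attempt in range(1, loop_budget + 1):
        candidate = model(prompt)
        ok = validate(candidate)
        if on_event:
            on_event({"attempt": attempt, "valid": ok})  # observability hook
        if ok:
            return candidate  # accept the first sample meeting the requirement
        # otherwise reject and resample ("repair" here is plain rejection)
    raise RuntimeError("loop budget exhausted without a valid output")


events = []
result = instruct_validate_repair(
    make_mock_model(),
    "say hello",
    validate=lambda s: s == s.lower(),  # requirement: lowercase output
    on_event=events.append,
)
print(result, len(events))  # → hello world 3
```

The event stream records every attempt, valid or not, which is the kind of signal an observability hook would feed into monitoring for generative workflows.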

Three New Granite Libraries

IBM simultaneously released three specialized Granite Libraries, each comprising LoRA adapters for the Granite-4.0-micro model:

  • Granitelib-core-r1.0: Handles requirements validation within Mellea's instruct-validate-repair loop
  • Granitelib-rag-r1.0: Covers pre-retrieval, post-retrieval, and post-generation tasks for agentic RAG pipelines
  • Granitelib-guardian-r1.0: Focuses on safety, factuality, and policy compliance checking

Each library provides specialized fine-tuned adapters that improve task accuracy at a modest parameter cost, without degrading the base model's general capabilities.
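The three RAG stages that granitelib-rag targets can be sketched as three small functions: a pre-retrieval query rewrite, a post-retrieval relevance filter, and a post-generation groundedness check. The stage logic below is a plain-Python stand-in to show where each stage sits in the pipeline, not the adapters' actual behavior.

```python
# Toy sketch of the pre-retrieval, post-retrieval, and post-generation
# stages in an agentic RAG pipeline (hypothetical stand-in logic).

DOCS = {
    "mellea": "Mellea is an open-source Python library from IBM Research.",
    "granite": "Granite-4.0-micro is a small IBM Granite model.",
}


def pre_retrieval_rewrite(query):
    """Pre-retrieval: normalize the user query into retrieval keywords."""
    return [word.strip("?").lower() for word in query.split()]


def post_retrieval_filter(keywords, docs):
    """Post-retrieval: keep only documents matching a query keyword."""
    return [text for key, text in docs.items() if key in keywords]


def post_generation_check(answer, context):
    """Post-generation: flag answers unsupported by the retrieved context."""
    return any(answer in text for text in context)


keywords = pre_retrieval_rewrite("What is Mellea?")
context = post_retrieval_filter(keywords, DOCS)
answer = "Mellea is an open-source Python library"
print(post_generation_check(answer, context))  # → True: answer is grounded
```

In the real pipeline, each stage would be served by a dedicated LoRA adapter rather than string matching, but the control flow (rewrite, retrieve, filter, generate, check) is the same.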

Getting Started

Developers can access Mellea via the GitHub repository and PyPI, with comprehensive documentation available at docs.mellea.ai. The Granite Libraries are available on Hugging Face.