IBM releases Mellea 0.4.0 with three Granite Libraries for structured AI workflows
release · feature · open-source · sdk · huggingface.co

Mellea 0.4.0 Release

IBM Research has released Mellea 0.4.0, an open-source Python library for building predictable, structured generative AI workflows. Rather than relying on the unpredictable behavior of free-form prompting, Mellea uses constrained decoding, structured repair loops, and composable pipelines to produce maintainable LLM-based programs.

Key Features in 0.4.0

This release builds on foundational work from v0.3.0 with several important additions:

  • Native Granite Libraries Integration: New standardized API that uses constrained decoding to guarantee schema correctness
  • Instruct-Validate-Repair Pattern: Rejection sampling strategies for robust output handling
  • Observability Hooks: Event-driven callbacks to monitor and track workflow execution
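
The instruct-validate-repair pattern with rejection sampling can be pictured with a short self-contained sketch. This is a generic illustration of the pattern, not Mellea's actual API: sample candidate outputs until one passes every requirement or the loop budget runs out.

```python
def instruct_validate_repair(generate, requirements, loop_budget=3):
    """Rejection-sampling repair loop: draw up to `loop_budget` candidates
    and return the first that satisfies every requirement, plus a success
    flag; if none passes, return the last attempt with the flag unset."""
    candidate = None
    for _ in range(loop_budget):
        candidate = generate()
        if all(req(candidate) for req in requirements):
            return candidate, True
    return candidate, False

# Toy "model": a fixed stream of candidate outputs stands in for an LLM.
candidates = iter([1, 7, 8])
result, ok = instruct_validate_repair(
    generate=lambda: next(candidates),
    requirements=[lambda x: x % 2 == 0, lambda x: x > 3],
)
# 1 fails (odd), 7 fails (odd), 8 passes both checks.
```

In a real workflow, `generate` would call the model and each requirement would be a validator (a schema check, or a Granitelib-Core-style requirement check), with repair happening by resampling.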

Introducing Granite Libraries

Three specialized library collections are now available for the granite-4.0-micro model, each comprising LoRA (Low-Rank Adaptation) adapters for specific tasks:

  • Granitelib-Core-r1.0: Requirements validation in the instruct-validate-repair loop
  • Granitelib-RAG-r1.0: Agentic RAG pipeline tasks including pre-retrieval, post-retrieval, and post-generation steps
  • Granitelib-Guardian-r1.0: Safety, factuality, and policy compliance checking
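
The three RAG stages named above can be sketched as composable functions. The following is a minimal self-contained illustration: the function names and the toy logic inside them are hypothetical stand-ins for the adapter-backed steps, not Mellea's or Granitelib-RAG's actual API.

```python
import re

def pre_retrieval(query):
    # Pre-retrieval: normalize the user query before search.
    return re.sub(r"[^\w\s]", "", query.lower()).strip()

def post_retrieval(docs, query):
    # Post-retrieval: rerank documents by term overlap with the query.
    terms = set(query.split())
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))

def post_generation(answer, docs):
    # Post-generation: flag answers not supported by the retrieved context.
    supported = any(answer.lower() in d.lower() for d in docs)
    return {"answer": answer, "supported": supported}

docs = [
    "Granite is a model family from IBM.",
    "Mellea is a Python library for generative programming.",
]
ranked = post_retrieval(docs, pre_retrieval("What is Mellea?"))
check = post_generation("Mellea is a Python library", ranked[:1])
```

Because each stage is a plain function of its inputs, stages can be swapped, reordered, or backed by a specialized adapter without changing the rest of the pipeline.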

By using specialized adapters instead of general-purpose prompting, these libraries increase accuracy for each task while maintaining modest parameter counts and preserving the base model's capabilities.
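
One way to picture the adapter-per-task design is a dispatch table: each task routes to a small specialized component rather than one general-purpose prompt. A minimal sketch, where the task names and checkers are hypothetical placeholders, not the actual Granite adapters:

```python
# Hypothetical registry: each task name maps to a specialized checker,
# standing in for a task-specific LoRA adapter on a shared base model.
ADAPTERS = {
    "requirement_check": lambda text, req: req.lower() in text.lower(),
    "safety_check": lambda text: "forbidden" not in text.lower(),
}

def run_task(task, *args):
    # Route the request to the specialized component for this task.
    if task not in ADAPTERS:
        raise KeyError(f"no adapter registered for task: {task}")
    return ADAPTERS[task](*args)
```

For example, `run_task("requirement_check", "The reply is polite.", "polite")` returns `True`; adding a new capability means registering a new adapter, not retraining or re-prompting the whole system.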

Getting Started

Mellea is available on PyPI with full documentation at docs.mellea.ai. Granite Libraries can be accessed through the Hugging Face collection. The project is open-source and available on GitHub.