Monday, August 18, 2025

IBM Research Introduces Mellea: A Structured, Programmable Library for Generative Computing


IBM Research unveiled Mellea, a groundbreaking open-source library designed to usher in a new era of generative computing, offering a structured, programmable alternative to conventional prompt engineering.

IBM describes generative computing as a paradigm shift that moves beyond prompt engineering toward structured, programmable interaction with large language models (LLMs). The traditional approach, prompting an LLM with full English sentences and tweaking the phrasing until the desired result appears, is too brittle for mission-critical or repeatable enterprise workflows: slight variations in phrasing or model version can lead to drastically different outcomes.

At this year’s Think conference, IBM researchers introduced generative computing as a new approach that positions LLMs as software components requiring development tools and programming structures, rather than ad-hoc prompts. IBM Research emphasized that a runtime equipped with programming abstractions replaces the traditional API that interacts with models via tokens.

The development team has created abstractions aimed at reducing unpredictability in LLM behavior. These include structured instructions that yield consistent results across different models, sampling strategies to moderate randomness, and intrinsic safety guardrails defining expected model behavior.
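The article does not show Mellea's actual API, but the idea of a structured instruction paired with a moderated sampling strategy can be illustrated in plain Python. The `Instruction` class, `sample_until_valid` function, and the toy stand-in for a model call below are all hypothetical names invented for this sketch, not Mellea's interface:

```python
# Hypothetical sketch (not Mellea's actual API): a structured instruction
# pairs a task description with machine-checkable requirements, and a
# sampling loop re-queries the model until the requirements pass or a
# retry budget is exhausted.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Instruction:
    description: str
    requirements: list = field(default_factory=list)  # list of str -> bool checks

def sample_until_valid(generate: Callable[[str], str],
                       instruction: Instruction,
                       budget: int = 3) -> Optional[str]:
    """Sample repeatedly until every requirement holds, or give up."""
    for _ in range(budget):
        output = generate(instruction.description)
        if all(req(output) for req in instruction.requirements):
            return output
    return None  # caller decides how to handle a failed budget

# Toy "model" standing in for an LLM call: first sample violates the
# length requirement, second sample satisfies it.
outputs = iter(["far " * 40, "Concise summary."])
result = sample_until_valid(
    lambda prompt: next(outputs),
    Instruction("Summarize in one short sentence.",
                requirements=[lambda s: len(s.split()) <= 10]),
)
print(result)
```

The point of the pattern is that the requirement checks, not the prompt wording, carry the contract, so the same instruction can be reused across models and versions.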

To implement these abstractions, IBM Research developed activated low-rank adapters (aLoRAs), which enhance foundational models with specialized, task-specific capabilities at inference time without delay. These capabilities include rewriting user queries, assessing document relevance, determining answerability, estimating uncertainty, detecting hallucinations, and generating sentence-level citations. These tools are now available to developers on platforms including Hugging Face and vLLM.


“We believe that generative computing demands new programming models for using LLMs, new fundamental low-level operations performed by LLMs, and new ways of building LLMs themselves,” stated David Cox, Vice President of AI Models at IBM Research.

Cox’s team has created Mellea, a library tailored to this vision. Mellea enables developers to replace inconsistent agents and brittle prompts with structured, maintainable, robust, and efficient AI workflows. The objective is to move away from large, unwieldy prompts and instead formulate AI tasks as structured, maintainable “mellea problems.”

Mellea is open source and available now on GitHub, with compatibility across many inference services and model families.

IBM Research highlights Mellea as an initial step into the broader realm of generative computing. The research team anticipates that further advancements will continue shaping how enterprises build AI, moving from fickle prompt engineering to reliable programmatic systems.
