
Log Prompts with Models

Prompts are a core building block of GenAI applications. Managing the association between prompts and models is crucial for tracking how models evolve and for ensuring consistency across environments. The MLflow Prompt Registry integrates with MLflow's model tracking capability, allowing you to track which prompts (and which versions) are used by your models and applications.

Basic Usage

To log a model with associated prompts, use the prompts parameter of the log_model method. The prompts parameter accepts a list of prompt URIs or prompt objects to associate with the model. The associated prompts are displayed in the MLflow UI for the model run.

import mlflow

with mlflow.start_run():
    mlflow.<flavor>.log_model(
        model,
        ...,
        # Specify a list of prompt URIs or prompt objects.
        prompts=["prompts:/summarization-prompt/2"],
    )
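As noted above, the prompts parameter also accepts prompt objects instead of URI strings. A minimal sketch, assuming a prompt was previously registered under the name summarization-prompt:

import mlflow

# Load the registered prompt and pass the object itself to the prompts parameter.
prompt = mlflow.load_prompt("prompts:/summarization-prompt/2")

with mlflow.start_run():
    mlflow.<flavor>.log_model(
        model,
        ...,
        prompts=[prompt],
    )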
warning

The prompts parameter for associating prompts with models is only supported for GenAI flavors such as OpenAI, LangChain, LlamaIndex, and DSPy. Refer to the GenAI flavors documentation for the full list of supported flavors.

Example 1: Logging Prompts with LangChain

1. Create a prompt

If you haven't already created a prompt, register one first, for example as sketched below.
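A minimal sketch of registering the summarization prompt used in this example. The template text and variable names ({{ num_sentences }}, {{ sentences }}) are illustrative and should match the inputs your chain passes in; note that the steps below reference version 2 of this prompt, and registering the same name again creates a new version.

import mlflow

# Register an illustrative summarization prompt.
# MLflow prompt templates use double-brace variables ({{ ... }}).
prompt = mlflow.register_prompt(
    name="summarization-prompt",
    template=(
        "Summarize the following sentences in {{ num_sentences }} sentence(s):\n"
        "{{ sentences }}"
    ),
)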

2. Define a Chain using the registered prompt

import mlflow
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Load the registered prompt
prompt = mlflow.load_prompt("prompts:/summarization-prompt/2")

# Create a LangChain prompt object
langchain_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            # IMPORTANT: Convert the prompt template from double to single curly braces format
            prompt.to_single_brace_format(),
        ),
        ("placeholder", "{messages}"),
    ]
)

# Define the LangChain chain
llm = ChatOpenAI()
chain = langchain_prompt | llm

# Invoke the chain
response = chain.invoke({"num_sentences": 1, "sentences": "This is a test sentence."})
print(response)
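The to_single_brace_format() call is needed because MLflow prompt templates use double-brace variables ({{ var }}), while LangChain templates expect single braces ({ var }). A small illustrative check (the printed templates are assumptions of what the registered prompt looks like):

prompt = mlflow.load_prompt("prompts:/summarization-prompt/2")

print(prompt.template)                  # MLflow format, e.g. using {{ num_sentences }}
print(prompt.to_single_brace_format())  # LangChain-compatible format using {num_sentences}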

3. Log the Chain to MLflow

Then log the chain to MLflow and specify the prompt URI in the prompts parameter:

with mlflow.start_run(run_name="summarizer-model"):
    mlflow.langchain.log_model(
        chain, artifact_path="model", prompts=["prompts:/summarization-prompt/2"]
    )
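Optionally, you can load the logged chain back and invoke it to confirm it works as before. A minimal sketch, assuming the logging run above is the most recent run in the current session:

# Look up the run that just logged the chain (assumes it is the last active run).
run_id = mlflow.last_active_run().info.run_id

# Load the chain back from the run's "model" artifact path and invoke it.
loaded_chain = mlflow.langchain.load_model(f"runs:/{run_id}/model")
print(loaded_chain.invoke({"num_sentences": 1, "sentences": "This is a test sentence."}))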

Now you can view the prompts associated with the model in the MLflow UI:

Associated Prompts

Moreover, on the prompt details page you can view the list of models (runs) that use a specific prompt:

Associated Prompts

Example 2: Automatic Prompt Logging with Models-from-Code

Models-from-Code is a feature that allows you to define and log models as code. Logging a model as code brings several benefits, such as portability, readability, and avoiding serialization issues.

Combined with the MLflow Prompt Registry, this feature unlocks even more flexibility for managing prompt versions. Notably, if your model code loads a prompt from the MLflow Prompt Registry, MLflow automatically logs it with the model for you.

In the following example, we use LangGraph to define a simple chatbot that uses the registered prompt.

1. Create a prompt

import mlflow

# Register a new prompt
prompt = mlflow.register_prompt(
    name="chat-prompt",
    template="You are an expert in programming. Please answer the user's question about programming.",
)

2. Define a Graph using the registered prompt

Create a Python script chatbot.py with the following content.

If you are using a Jupyter notebook, you can uncomment the %%writefile magic command and run the following code in a cell to generate the script.

# %%writefile chatbot.py

import mlflow
from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: list


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)
system_prompt = mlflow.load_prompt("prompts:/chat-prompt/1")


def add_system_message(state: State):
    return {
        "messages": [
            {
                "role": "system",
                "content": system_prompt.to_single_brace_format(),
            },
            *state["messages"],
        ]
    }


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("add_system_message", add_system_message)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "add_system_message")
graph_builder.add_edge("add_system_message", "chatbot")
graph_builder.add_edge("chatbot", END)

graph = graph_builder.compile()

mlflow.models.set_model(graph)

3. Log the Graph to MLflow

Specify the file path to the script in the lc_model parameter:

with mlflow.start_run():
    model_info = mlflow.langchain.log_model(
        lc_model="./chatbot.py",
        artifact_path="graph",
    )

We didn't specify the prompts parameter this time, but MLflow automatically associates the prompt loaded within the script with the logged model. Now you can view the associated prompt in the MLflow UI:

Associated Prompts

4. Load the graph back and invoke

Finally, let's load the graph back and invoke it to see the chatbot in action.

# Enable MLflow tracing for LangChain to view the prompt passed to the LLM.
mlflow.langchain.autolog()

# Load the graph
graph = mlflow.langchain.load_model(model_info.model_uri)

graph.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What is the difference between multi-threading and multi-processing?",
            }
        ]
    }
)

Chatbot