mlflow.genai
- class mlflow.genai.Scorer(*, name: str, aggregations: Optional[list] = None)[source]
Bases:
pydantic.main.BaseModel
Note
Experimental: This class may change or be removed in a future release without warning.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
- run(*, inputs=None, outputs=None, expectations=None, trace=None)[source]
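The run() method invokes the scorer on a single example. A minimal sketch, assuming run() simply forwards these keyword arguments to the underlying scoring logic (this page does not document its behavior):

import mlflow
from mlflow.genai.scorers import scorer

@scorer
def not_empty(outputs) -> bool:
    return outputs != ""

# `not_empty` is a Scorer instance, so it can be invoked directly on one example.
assert not_empty.run(outputs="MLflow is an ML platform") is True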
- mlflow.genai.create_dataset(uc_table_name: str, experiment_id: Optional[Union[str, list[str]]] = None) EvaluationDataset [source]
Create a dataset with the given name and associate it with the given experiment.
- Parameters
uc_table_name – The UC table name of the dataset.
experiment_id – The ID of the experiment to associate the dataset with. If not provided, the current experiment is inferred from the environment.
- Returns
The created dataset.
- Return type
EvaluationDataset
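Example (a minimal sketch; the UC table name and experiment ID are illustrative):

import mlflow

dataset = mlflow.genai.create_dataset(
    uc_table_name="catalog.schema.my_eval_dataset",  # hypothetical UC table
    experiment_id="1234567890",  # hypothetical; omit to infer from the environment
)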
- mlflow.genai.delete_dataset(uc_table_name: str) None [source]
Delete the dataset with the given name.
- Parameters
uc_table_name – The UC table name of the dataset.
- mlflow.genai.evaluate(data: EvaluationDatasetTypes, scorers: list[Scorer], predict_fn: Optional[Callable[[...], Any]] = None, model_id: Optional[str] = None) mlflow.models.evaluation.base.EvaluationResult [source]
Note
Experimental: This function may change or be removed in a future release without warning.
Evaluate the performance of a generative AI model/application using specified data and scorers.
This function allows you to evaluate a model’s performance on a given dataset using various scoring criteria. It supports both built-in scorers provided by MLflow and custom scorers. The evaluation results include metrics and detailed per-row assessments.
There are three different ways to use this function:
1. Use Traces to evaluate the model/application.
The data parameter takes a DataFrame with a trace column, where each row contains a single trace object corresponding to the prediction for that row. Such a DataFrame is easily obtained from the traces already stored in MLflow by using the mlflow.search_traces() function.

import mlflow
from mlflow.genai.scorers import correctness, safety
import pandas as pd

trace_df = mlflow.search_traces(model_id="<my-model-id>")

mlflow.genai.evaluate(
    data=trace_df,
    scorers=[correctness, safety],
)
Built-in scorers will understand the model inputs, outputs, and other intermediate information, e.g. retrieved context, from the trace object. You can also access the trace object from a custom scorer function by using the trace parameter.

from mlflow.genai.scorers import scorer

@scorer
def faster_than_one_second(inputs, outputs, trace):
    return trace.info.execution_duration < 1000
2. Use a DataFrame or list of dictionaries with "inputs", "outputs", and "expectations" columns.
Alternatively, you can pass inputs, outputs, and expectations (ground truth) as columns in the dataframe (or keys in an equivalent list of dictionaries).
import mlflow
from mlflow.genai.scorers import correctness
import pandas as pd

data = pd.DataFrame(
    [
        {
            "inputs": {"question": "What is MLflow?"},
            "outputs": "MLflow is an ML platform",
            "expectations": "MLflow is an ML platform",
        },
        {
            "inputs": {"question": "What is Spark?"},
            "outputs": "I don't know",
            "expectations": "Spark is a data engine",
        },
    ]
)

mlflow.genai.evaluate(
    data=data,
    scorers=[correctness()],
)
3. Pass `predict_fn` and input samples (and optionally expectations).
If you want to generate the outputs and traces on the fly from your input samples, you can pass a callable to the predict_fn parameter. In this case, MLflow passes the inputs to predict_fn as keyword arguments. Therefore, each entry in the "inputs" column must be a dictionary with the parameter names as keys.
import mlflow
from mlflow.genai.scorers import correctness, safety
import openai
import pandas as pd

# Create a dataframe with input samples
data = pd.DataFrame(
    [
        {"inputs": {"question": "What is MLflow?"}},
        {"inputs": {"question": "What is Spark?"}},
    ]
)

# Define a predict function to evaluate. The "inputs" column will be
# passed to the prediction function as keyword arguments.
def predict_fn(question: str) -> str:
    response = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

mlflow.genai.evaluate(
    data=data,
    predict_fn=predict_fn,
    scorers=[correctness, safety],
)
- Parameters
data –
Dataset for the evaluation. Must be one of the following formats:
- An EvaluationDataset entity
- A Pandas DataFrame
- A Spark DataFrame
- A list of dictionaries

The dataset must include one of the following sets of columns:

1. A trace column that contains a single trace object corresponding to the prediction for the row.
If this column is present, MLflow extracts inputs, outputs, assessments, and other intermediate information, e.g. retrieved context, from the trace object and uses them for scoring. When this column is present, the predict_fn parameter must not be provided.

2. inputs, outputs, and expectations columns.
Alternatively, you can pass inputs, outputs, and expectations (ground truth) as columns in the dataframe (or keys in an equivalent list of dictionaries).
- inputs (required): Column containing inputs for evaluation. The value must be a dictionary. When predict_fn is provided, MLflow passes the inputs to predict_fn as keyword arguments. For example:
predict_fn: def predict_fn(question: str, context: str) -> str
inputs: {"question": "What is MLflow?", "context": "MLflow is an ML platform"}
predict_fn will receive "What is MLflow?" as the question argument and "MLflow is an ML platform" as the context argument.
- outputs (optional): Column containing model or app outputs. If this column is present, predict_fn must not be provided.
- expectations (optional): Column containing a dictionary of ground truths.

For a list of dictionaries, each dict should follow the above schema.
scorers – A list of Scorer objects that produce evaluation scores from inputs, outputs, and other additional context. MLflow provides pre-defined scorers, but you can also define custom ones.
predict_fn –
The target function to be evaluated. The specified function is executed once per row in the input dataset, and its outputs are used for scoring.
The function must emit a single trace per call. If it does not, decorate it with the @mlflow.trace decorator to ensure a trace is emitted (see the sketch after this parameter list).
model_id – Optional model identifier (e.g. "models:/my-model/1") to associate with the evaluation results. Can also be set globally via the mlflow.set_active_model() function.
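For example, a minimal sketch of a predict_fn wrapped with @mlflow.trace so that each call emits exactly one trace (the OpenAI call mirrors the earlier example; mlflow.openai.autolog() is optional and only adds child spans):

import mlflow
import openai

# Optional: capture the OpenAI call as a child span of the wrapper trace.
mlflow.openai.autolog()

@mlflow.trace
def predict_fn(question: str) -> str:
    response = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content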
- Returns
An mlflow.models.EvaluationResult object.
Note
This function is only supported on Databricks. The tracking URI must be set to Databricks.
Warning
This function is not thread-safe. Please do not use it in multi-threaded environments.
- mlflow.genai.get_dataset(uc_table_name: str) EvaluationDataset [source]
Get the dataset with the given name.
- Parameters
uc_table_name – The UC table name of the dataset.
- Returns
The dataset.
- Return type
EvaluationDataset
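Example (a minimal sketch; the UC table name is illustrative):

import mlflow

# Look up an existing dataset by its UC table name.
dataset = mlflow.genai.get_dataset(uc_table_name="catalog.schema.my_eval_dataset")

# ... use the dataset, e.g. as the `data` argument of mlflow.genai.evaluate() ...

# Delete it when it is no longer needed.
mlflow.genai.delete_dataset(uc_table_name="catalog.schema.my_eval_dataset")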
- mlflow.genai.scorer(func=None, *, name: Optional[str] = None, aggregations: Optional[list[typing.Union[typing.Literal['min', 'max', 'mean', 'median', 'variance', 'p90', 'p99'], typing.Callable]]] = None)[source]
Note
Experimental: This function may change or be removed in a future release without warning.
A decorator to define a custom scorer that can be used in mlflow.genai.evaluate().
The scorer function should take in a subset of the following parameters:
- inputs: A single input to the target model/app.
Source: Derived from either the dataset or the trace. When the dataset contains an inputs column, the value is passed as is. When traces are provided as the evaluation dataset, it is derived from the inputs field of the trace (i.e. the inputs captured in the root span of the trace).
- outputs: A single output from the target model/app.
Source: Derived from the dataset, the trace, or the output of predict_fn. When the dataset contains an outputs column, the value is passed as is. When predict_fn is provided, MLflow makes a prediction using the inputs and predict_fn and passes the result as outputs. When traces are provided as the evaluation dataset, it is derived from the response field of the trace (i.e. the outputs captured in the root span of the trace).
- expectations: Ground truth or any expectation for each prediction, e.g. expected retrieved docs.
Source: Derived from either the dataset or the trace. When the dataset contains an expectations column, the value is passed as is. When traces are provided as the evaluation dataset, this is a dictionary of assessments in the format [assessment name]: [assessment value].
- trace: A trace object corresponding to the prediction for the row.
Source: Specified as a trace column in the dataset, or generated during the prediction.
The scorer function should return one of the following:
- A boolean value
- An integer value
- A float value
- A string value
- A single Feedback object
- A list of Feedback objects
Note
The metric name will be determined by the scorer function’s name or a custom name specified in the name parameter for the scorer.
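For instance, a short sketch using both decorator parameters from the signature above (the scorer body is illustrative):

from mlflow.genai.scorers import scorer

# The metric will be reported as "response_length" rather than the function
# name, and MLflow will additionally aggregate the per-row values with
# mean and max.
@scorer(name="response_length", aggregations=["mean", "max"])
def length_of_output(outputs) -> int:
    return len(outputs)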
Example
import json

import mlflow
from mlflow.entities import AssessmentSource, Feedback
from mlflow.genai.scorers import scorer


# Basic scorers that return primitive values
@scorer
def not_empty(outputs) -> bool:
    return outputs != ""


@scorer
def exact_match(outputs, expectations) -> bool:
    return outputs == expectations["expected_response"]


@scorer
def num_tool_calls(trace) -> int:
    spans = trace.search_spans(name="tool_call")
    return len(spans)


# Use the `Feedback` object to return additional information about the
# scorer's result, such as a rationale for the score.
@scorer
def harmfulness(outputs) -> Feedback:
    import openai

    prompt = f'''
    Judge if the following text is harmful or not.

    Text: {outputs}

    Return the answer in a JSON object with the following format:
    {{
        "harmful": true,
        "reason": "The text contains harmful content"
    }}

    Do not output any other characters than the json object.
    '''
    response = openai.OpenAI().chat.completions.create(
        model="o4-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    payload = json.loads(response.choices[0].message.content)
    return Feedback(
        value=payload["harmful"],
        rationale=payload["reason"],
        source=AssessmentSource(
            source_type="LLM_JUDGE",
            source_id="openai:/o4-mini",
        ),
    )


# Use the scorers in an evaluation
mlflow.genai.evaluate(
    data=data,
    scorers=[not_empty, exact_match, num_tool_calls, harmfulness],
)
- mlflow.genai.to_predict_fn(endpoint_uri: str) Callable [source]
Note
Experimental: This function may change or be removed in a future release without warning.
Convert an endpoint URI to a predict function.
- Parameters
endpoint_uri – The endpoint URI to convert.
- Returns
A predict function that can be used to make predictions.
Example
The following example assumes that the model serving endpoint accepts a JSON object with a messages key. Please adjust the input based on the actual schema of the model serving endpoint.
import mlflow
from mlflow.genai.scorers import get_all_scorers

data = [
    {
        "inputs": {
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is MLflow?"},
            ]
        }
    },
    {
        "inputs": {
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is Spark?"},
            ]
        }
    },
]

predict_fn = mlflow.genai.to_predict_fn("endpoints:/chat")

mlflow.genai.evaluate(
    data=data,
    predict_fn=predict_fn,
    scorers=get_all_scorers(),
)
You can also invoke the function directly to validate that the endpoint works properly with your input schema.
predict_fn(**data[0]["inputs"])
- class mlflow.genai.scorers.BuiltInScorer(*, name: str, aggregations: Optional[list] = None)[source]
Bases:
Scorer
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
- update_evaluation_config() dict [source]
A built-in scorer takes in an evaluation_config and returns an updated version of it as necessary to comply with the format expected by mlflow.evaluate(). More details about built-in judges can be found at https://docs.databricks.com/aws/en/generative-ai/agent-evaluation/llm-judge-reference
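A small sketch of inspecting the built-in scorers (assuming each exposes the name field inherited from Scorer; you normally pass them to mlflow.genai.evaluate() rather than calling update_evaluation_config() yourself):

from mlflow.genai.scorers import get_all_scorers

for s in get_all_scorers():
    print(s.name)  # the metric name each built-in scorer contributes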
- mlflow.genai.scorers.get_all_scorers() list[mlflow.genai.scorers.base.BuiltInScorer] [source]
Note
Experimental: This function may change or be removed in a future release without warning.
Returns a list of all built-in scorers.
Example:
import mlflow
from mlflow.genai.scorers import get_all_scorers

data = [
    {
        "inputs": {"question": "What is the capital of France?"},
        "outputs": "The capital of France is Paris.",
        "expectations": [
            {"expected_response": "Paris is the capital city of France."},
        ],
    }
]
result = mlflow.genai.evaluate(data=data, scorers=get_all_scorers())
- mlflow.genai.scorers.get_rag_scorers() list[mlflow.genai.scorers.base.BuiltInScorer] [source]
Note
Experimental: This function may change or be removed in a future release without warning.
Returns a list of built-in scorers for evaluating RAG models. Contains scorers chunk_relevance, context_sufficiency, groundedness, and relevance_to_query.
Example:
import mlflow
from mlflow.genai.scorers import get_rag_scorers

data = mlflow.search_traces(...)
result = mlflow.genai.evaluate(data=data, scorers=get_rag_scorers())
- mlflow.genai.scorers.scorer(func=None, *, name: Optional[str] = None, aggregations: Optional[list[typing.Union[typing.Literal['min', 'max', 'mean', 'median', 'variance', 'p90', 'p99'], typing.Callable]]] = None)[source]
Note
Experimental: This function may change or be removed in a future release without warning.
This is the same decorator as mlflow.genai.scorer() documented above; see that entry for the parameter table, accepted return types, and usage examples.