MLflow Open Inference Protocol End to End Workflow

In this example we are going to train a model with MLflow, pack its environment, and deploy it with Seldon Core on a local kind cluster.

Prerequisites before running this notebook:

  • install and configure the MinIO client mc (see the Setup MinIO section below)

  • run this Jupyter notebook in a conda environment:

$ conda create --name python3.8-mlflow-example python=3.8 -y
$ conda activate python3.8-mlflow-example
$ pip install jupyter
$ jupyter notebook

Setup seldon-core and minio

Setup Seldon Core

Use the setup notebook to Setup Cluster with Ambassador Ingress and Install Seldon Core. Instructions are also available online.

Setup MinIO

Use the provided notebook to install MinIO in your cluster and configure the mc CLI tool. Instructions are also available online.

Train elasticnet wine model using mlflow

Install mlflow and required dependencies to train the model
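The exact pins may vary; a typical install for this example (standard PyPI package names) looks like:

$ pip install mlflow scikit-learn pandas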

Define where the model artifacts will be saved
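A minimal sketch of this step — the directory name elasticnet_wine is an assumption, not fixed by the example:

```python
from pathlib import Path

# Keep the packed model artifacts under a local directory
# (the name "elasticnet_wine" is a hypothetical choice)
model_dir = Path.cwd() / "elasticnet_wine"
model_dir.mkdir(parents=True, exist_ok=True)
print(f"Artifacts will be written under {model_dir}")
```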

Define training function

Train the elasticnet_wine model

Install dependencies to be able to pack and deploy the model on seldon-core

We are going to use conda-pack to pack the Python environment. We also need the mlserver dependencies. We are planning to simplify this workflow in future releases.
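A typical install — mlserver-mlflow is MLServer's MLflow runtime package:

$ pip install conda-pack mlserver mlserver-mlflow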

Pack the conda environment
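With the environment active, conda pack archives it into a tarball (the output name environment.tar.gz is an assumption; -f overwrites an existing archive):

$ conda pack -o environment.tar.gz -f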

Configure mc to access the minio service in the local kind cluster

Note: make sure that the MinIO IP is reflected properly below; run kubectl get service -n minio-system to check it.
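A sketch of the mc configuration — the alias name minio-local and the minioadmin/minioadmin credentials are assumptions; substitute the cluster IP reported by kubectl:

$ kubectl get service -n minio-system
$ mc config host add minio-local http://<minio-cluster-ip>:9000 minioadmin minioadmin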

Copy the model artifacts to minio
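A hedged sketch of the copy step — the bucket name models, the target prefix elasticnet_wine, and the default mlruns layout are assumptions; substitute your actual run ID:

$ mc mb --ignore-existing minio-local/models
$ mc cp --recursive mlruns/0/<run-id>/artifacts/model/ minio-local/models/elasticnet_wine/
$ mc cp environment.tar.gz minio-local/models/elasticnet_wine/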

Create model deployment configuration
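A SeldonDeployment sketch for serving the model over the Open Inference Protocol (protocol: v2). The deployment name mlflow, graph name classifier, bucket URI, and the secret seldon-init-container-secret are assumptions — adjust them to your setup:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: mlflow
spec:
  protocol: v2
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        implementation: MLFLOW_SERVER
        modelUri: s3://models/elasticnet_wine
        envSecretRefName: seldon-init-container-secret
```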

Deploy the model on the local kind cluster
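Assuming the configuration above was saved as mlflow-deployment.yaml (a hypothetical filename), apply it and wait for the deployment to report Available:

$ kubectl apply -f mlflow-deployment.yaml
$ kubectl get sdep mlflow -o jsonpath='{.status.state}'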

Get prediction from the service using REST
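A hedged sketch of an Open Inference Protocol (V2) request. The ingress host/port, the seldon namespace, and the model name classifier are assumptions — adjust the URL to match your deployment:

```python
import json

# Endpoint layout for Seldon's V2 protocol (host/namespace/model name assumed)
url = "http://localhost:8003/seldon/seldon/mlflow/v2/models/classifier/infer"

# One row with the 11 wine features, sent as a flat FP32 tensor
payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 11],
            "datatype": "FP32",
            "data": [7.4, 0.7, 0.0, 1.9, 0.076, 11.0, 34.0, 0.9978, 3.51, 0.56, 9.4],
        }
    ]
}

print(json.dumps(payload, indent=2))

# To send it for real (requires the deployment and ingress to be up):
# import requests
# response = requests.post(url, json=payload)
# print(response.json())
```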

Delete the model deployment
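Clean up by deleting the SeldonDeployment (the name mlflow matches the assumed configuration above):

$ kubectl delete sdep mlflow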
