Model Explainer Open Inference Protocol Example

In this notebook we will show examples that illustrate how to explain models using [MLServer](https://github.com/SeldonIO/MLServer).

MLServer is a Python server that exposes your machine learning models through REST and gRPC interfaces, fully compliant with KFServing's V2 Dataplane spec.

Running this Notebook

The commands below should install the required package dependencies; if they do not, please also:

  • install and configure `mc` by following the relevant section in this link

  • run this Jupyter notebook in a conda environment:

```
$ conda create --name python3.8-example python=3.8 -y
$ conda activate python3.8-example
$ pip install jupyter
$ jupyter notebook
```
!pip install scikit-learn alibi

Setup Seldon Core

Follow the instructions to Setup Cluster with Ambassador Ingress and Install Seldon Core.

Then port-forward to that ingress on localhost:8003 in a separate terminal either with:

  • Ambassador: kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

  • Istio: kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:8080

Setup MinIO

Use the provided notebook to install MinIO in your cluster and configure the `mc` CLI tool. Instructions are also available online.

Train iris model using sklearn

Train model
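A minimal training step might look like the following (a sketch assuming a `LogisticRegression` classifier on a held-out split; the notebook's actual model choice may differ):

```python
# Train a simple classifier on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```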

Save model
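The trained model can then be serialised with `joblib`. MLServer's sklearn runtime looks for a file named `model.joblib` by default, so that filename is used here (sketch; adjust the path to where your model artifacts live):

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Serialise to the default filename expected by mlserver-sklearn
joblib.dump(model, "model.joblib")
```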

Create AnchorTabular explainer

Create explainer artifact

Save explainer

Install dependencies to pack the environment for deployment

Pack environment

Copy artifacts to the object store (MinIO)

Configure `mc` to access the MinIO service in the local kind cluster

Note: make sure that the MinIO IP is reflected properly below; to find it, run:

  • kubectl get service -n minio-system

  • mc config host add minio-seldon [ip] minioadmin minioadmin

Deploy to local kind cluster

Create deployment CRD
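A SeldonDeployment manifest for this setup might look roughly like the following. This is a sketch: the deployment name, namespace, bucket paths, and secret name are all assumptions to adapt to your cluster, and the `protocol` value may need to be `kfserving` instead of `v2` on older Seldon Core versions:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris
  namespace: seldon
spec:
  protocol: v2
  predictors:
    - name: default
      graph:
        name: classifier
        implementation: SKLEARN_SERVER
        modelUri: s3://models/iris
        envSecretRefName: seldon-init-container-secret
      explainer:
        type: AnchorTabular
        modelUri: s3://models/iris-explainer
      replicas: 1
```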

Deploy

Test explainer
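The deployed explainer speaks the Open Inference (V2) protocol, so a request is a JSON body with an `inputs` list carrying the name, shape, datatype, and flat data of each tensor. A minimal sketch of building such a payload (the ingress URL, namespace, and model name in the commented request are assumptions to adapt to your deployment):

```python
import json

def build_v2_request(rows):
    """Build an Open Inference (V2) protocol request for a batch of iris rows."""
    return {
        "inputs": [
            {
                "name": "predict",
                "shape": [len(rows), len(rows[0])],
                "datatype": "FP64",
                "data": rows,
            }
        ]
    }

payload = build_v2_request([[5.1, 3.5, 1.4, 0.2]])
print(json.dumps(payload, indent=2))

# To query the explainer through the port-forwarded ingress, something like:
# import requests
# endpoint = ("http://localhost:8003/seldon/seldon/iris-explainer/"
#             "default/v2/models/iris/infer")
# response = requests.post(endpoint, json=payload)
# print(response.json())
```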
