Custom LightGBM Prepackaged Server

Note: Seldon has adopted the industry-standard Open Inference Protocol (OIP) and is no longer maintaining the Seldon and TensorFlow protocols. This transition allows for greater interoperability among various model serving runtimes, such as MLServer. To learn more about implementing OIP for model serving in Seldon Core 1, see MLServer.

We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.

In this notebook we create a new custom LIGHTGBM_SERVER prepackaged server with two versions:

  • A Seldon protocol LightGBM model server

  • A KFServing Open Inference Protocol (V2 protocol) version using MLServer for running LightGBM models

The Seldon model server is defined in the lightgbmserver folder.

Prerequisites

  • A Kubernetes cluster with kubectl configured

  • curl

Setup Seldon Core

Use the Setup Cluster notebook to set up Seldon Core with an ingress, either Ambassador or Istio.

Then port-forward to that ingress on localhost:8003 in a separate terminal either with:

  • Ambassador: kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

  • Istio: kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:8080

!kubectl create namespace seldon

Training (can be skipped)

Update Seldon Core with Custom Model
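A custom prepackaged server is registered by adding an entry to the `predictor_servers` section of the Seldon Core Helm values, with one image per protocol. A minimal sketch, in which the image names and version tags are assumptions for illustration (the Seldon-protocol image would be the one built from the lightgbmserver folder):

```shell
# Sketch: register LIGHTGBM_SERVER as a custom prepackaged server.
# Image names and tags below are placeholders, not official images.
cat <<EOF > values-lightgbm.yaml
predictor_servers:
  LIGHTGBM_SERVER:
    protocols:
      seldon:
        defaultImageVersion: "0.1"      # placeholder tag
        image: seldonio/lightgbmserver  # built from the lightgbmserver folder
      kfserving:
        defaultImageVersion: "1.3.0"    # placeholder MLServer tag
        image: seldonio/mlserver
EOF

# Apply the new configuration to the running Seldon Core operator
helm upgrade seldon-core seldon-core-operator \
  --repo https://storage.googleapis.com/seldon-charts \
  --namespace seldon-system \
  --reuse-values \
  -f values-lightgbm.yaml
```

With this in place, a SeldonDeployment can simply reference `implementation: LIGHTGBM_SERVER` and the operator picks the image matching the deployment's protocol.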

Deploy LightGBM Model with Seldon Protocol
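A deployment using the custom server with the default Seldon protocol might look like the following sketch; the deployment name and `modelUri` are hypothetical placeholders:

```shell
# Sketch: deploy a LightGBM model with the custom LIGHTGBM_SERVER
# (Seldon protocol). modelUri is a hypothetical artifact location.
kubectl apply -n seldon -f - <<EOF
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: lightgbm-iris
spec:
  name: iris
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier
      implementation: LIGHTGBM_SERVER
      modelUri: gs://my-bucket/lightgbm/iris  # placeholder
EOF
```

Once the pods are ready, a Seldon-protocol prediction request can be sent through the port-forwarded ingress, e.g. `curl -s http://localhost:8003/seldon/seldon/lightgbm-iris/api/v1.0/predictions -H 'Content-Type: application/json' -d '{"data":{"ndarray":[[5.1,3.5,1.4,0.2]]}}'`.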

Wait for new webhook certificates to be loaded
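One way to wait for the operator to come back up with its new configuration is to watch the controller-manager rollout; a sketch, assuming the default `seldon-system` install:

```shell
# Wait for the Seldon operator to restart after the Helm upgrade so the
# new predictor_servers configuration and webhook certificates are active.
kubectl rollout status -n seldon-system deployment/seldon-controller-manager
sleep 30  # crude extra wait for the webhook certificates to propagate
```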

Deploy Model with KFServing Protocol
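The V2 version differs mainly in setting `protocol: kfserving` on the SeldonDeployment, which makes the operator select the MLServer image registered for that protocol; names and URIs are again placeholders:

```shell
# Sketch: same model served over the Open Inference (V2) protocol.
kubectl apply -n seldon -f - <<EOF
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: lightgbm-iris-v2
spec:
  protocol: kfserving
  predictors:
  - name: default
    graph:
      name: classifier
      implementation: LIGHTGBM_SERVER
      modelUri: gs://my-bucket/lightgbm/iris  # placeholder
EOF

# Query with a V2 inference request via the port-forwarded ingress
curl -s http://localhost:8003/seldon/seldon/lightgbm-iris-v2/v2/models/classifier/infer \
  -H 'Content-Type: application/json' \
  -d '{"inputs":[{"name":"input-0","shape":[1,4],"datatype":"FP64","data":[[5.1,3.5,1.4,0.2]]}]}'
```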
