Custom Protobuf Data Example

  • Wrap a scikit-learn Python model for use as a prediction microservice in Seldon Core

    • Run locally on Docker to test

    • Deploy on Seldon Core running on a Kubernetes cluster

Dependencies

  • Seldon Core v1.0.3+ installed

  • pip install scikit-learn seldon-core protobuf grpcio grpcio-tools

Train locally

import os

import joblib
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


def main():
    clf = LogisticRegression()
    p = Pipeline([("clf", clf)])
    print("Training model...")
    p.fit(X, y)
    print("Model trained!")

    filename_p = "IrisClassifier.sav"
    print("Saving model in %s" % filename_p)
    joblib.dump(p, filename_p)
    print("Model saved!")


if __name__ == "__main__":
    print("Loading iris data set...")
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    print("Dataset loaded!")
    main()

Custom Protobuf Specification

First, we need to define the custom protobuf specification for the payload our model will receive.
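As a sketch of what such a specification could look like (the file name iris.proto, the message name IrisPredictRequest, and the features field are illustrative, not the notebook's exact spec):

```protobuf
syntax = "proto3";

package iris;

// Hypothetical request message carrying the four iris measurements
// (sepal length/width, petal length/width) as a flat list of doubles.
message IrisPredictRequest {
  repeated double features = 1;
}
```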

Custom Protobuf Compilation

We will need to compile our custom protobuf for Python so that we can unpack the customData field passed to our predict method later on.
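Assuming the specification lives in iris.proto (an illustrative filename) and that grpcio-tools is installed, the Python bindings can be generated with:

```shell
python -m grpc_tools.protoc \
    -I. \
    --python_out=. \
    --grpc_python_out=. \
    ./iris.proto
```

This produces iris_pb2.py (and iris_pb2_grpc.py), which our predict method will import to unpack customData.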

gRPC test
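The mechanism customData relies on is google.protobuf.Any pack/unpack. Before wiring gRPC end to end, the round trip can be sanity-checked with a well-known type standing in for the compiled iris message (the payload string below is purely illustrative):

```python
from google.protobuf import any_pb2
from google.protobuf.wrappers_pb2 import StringValue

# What a client does before setting customData: pack a message into an Any.
payload = StringValue(value="5.1,3.5,1.4,0.2")
packed = any_pb2.Any()
packed.Pack(payload)

# What the server side does: unpack the Any back into the concrete type.
unpacked = StringValue()
assert packed.Unpack(unpacked)
print(unpacked.value)  # → 5.1,3.5,1.4,0.2
```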

Wrap model using s2i
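A sketch of the model class that s2i wraps (the file name MyModel.py, the generated module iris_pb2, and the features field are assumptions carried over from the earlier sketches). By defining predict_raw instead of predict, the class receives the full SeldonMessage and can unpack customData itself:

```python
# MyModel.py -- sketch of the wrapper class s2i packages into the image.
import numpy as np


class MyModel:
    def __init__(self):
        import joblib

        # IrisClassifier.sav comes from the training step above.
        self._model = joblib.load("IrisClassifier.sav")

    def predict_raw(self, msg):
        """Receive the raw SeldonMessage and unpack our custom payload."""
        import iris_pb2  # generated by the protoc step above

        request = iris_pb2.IrisPredictRequest()
        msg.customData.Unpack(request)
        X = np.array(request.features, dtype=float).reshape(1, -1)
        return self._model.predict_proba(X).tolist()
```

With a .s2i/environment file setting MODEL_NAME=MyModel, API_TYPE=GRPC, SERVICE_TYPE=MODEL, and PERSISTENCE=0, the image can then be built with something like s2i build . seldonio/seldon-core-s2i-python3:1.0.3 iris-custom-proto:0.1 (builder image and tag are illustrative).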

Serve the model locally
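Assuming the image built above was tagged iris-custom-proto:0.1 (an illustrative tag), it can be served locally with Docker, exposing the wrapper's port:

```shell
docker run --name iris_predictor --rm -d -p 5000:5000 iris-custom-proto:0.1
```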

Test using custom protobuf payload
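A sketch of a gRPC client call against the locally served model (iris_pb2 and its features field are assumptions from the proto sketch above; the feature values are illustrative):

```python
import grpc

import iris_pb2
from seldon_core.proto import prediction_pb2, prediction_pb2_grpc

# Pack our custom message into the SeldonMessage customData field.
request = iris_pb2.IrisPredictRequest(features=[6.8, 2.8, 4.8, 1.4])
msg = prediction_pb2.SeldonMessage()
msg.customData.Pack(request)

channel = grpc.insecure_channel("localhost:5000")
stub = prediction_pb2_grpc.ModelStub(channel)
response = stub.Predict(msg)
print(response)
```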

Stop serving model
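Assuming the container name used above:

```shell
docker rm -f iris_predictor
```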

Setup Seldon Core

Use the setup notebook to set up Seldon Core with an ingress - either Ambassador or Istio.

Then, in a separate terminal, port-forward to that ingress on localhost:8003 with one of:

  • Ambassador: kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080

  • Istio: kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80

Deploy your Seldon Model

We first create a configuration file:
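A sketch of what the SeldonDeployment configuration could look like (the resource names, image tag, and the file name seldon_iris_deployment.yaml are illustrative):

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: seldon-deployment-example
spec:
  name: sklearn-iris-deployment
  predictors:
    - name: sklearn-iris-predictor
      replicas: 1
      componentSpecs:
        - spec:
            containers:
              - name: sklearn-iris-classifier
                image: iris-custom-proto:0.1
      graph:
        name: sklearn-iris-classifier
        type: MODEL
        endpoint:
          type: GRPC
```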

Run the model in our cluster

Apply the Seldon Deployment configuration file we just created
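Assuming the configuration was saved as seldon_iris_deployment.yaml (an illustrative filename):

```shell
kubectl apply -f seldon_iris_deployment.yaml
```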

Check that the model has been deployed
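Assuming the deployment name seldon-deployment-example from the illustrative configuration, wait for the state to become Available and for the pods to be Running:

```shell
kubectl get sdep seldon-deployment-example -o jsonpath='{.status.state}'
kubectl get pods
```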

Test by sending prediction calls

An IrisPredictRequest is sent via the customData field.
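A sketch of a gRPC call through the port-forwarded ingress on localhost:8003 (the deployment name seldon-deployment-example and the default namespace are assumptions; Seldon routes gRPC requests by the seldon and namespace metadata entries):

```python
import grpc

import iris_pb2
from seldon_core.proto import prediction_pb2, prediction_pb2_grpc

request = iris_pb2.IrisPredictRequest(features=[6.8, 2.8, 4.8, 1.4])
msg = prediction_pb2.SeldonMessage()
msg.customData.Pack(request)

channel = grpc.insecure_channel("localhost:8003")
stub = prediction_pb2_grpc.SeldonStub(channel)
metadata = [("seldon", "seldon-deployment-example"), ("namespace", "default")]
response = stub.Predict(request=msg, metadata=metadata)
print(response)
```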

Cleanup our deployment
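Assuming the same illustrative configuration file:

```shell
kubectl delete -f seldon_iris_deployment.yaml
```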
