Protocol Examples
Prerequisites
A kubernetes cluster with kubectl configured
curl
grpcurl
pygmentize
Examples
Note: Seldon has adopted the industry-standard Open Inference Protocol (OIP) and is no longer maintaining the Seldon and TensorFlow protocols. This transition allows for greater interoperability among various model serving runtimes, such as MLServer. To learn more about implementing OIP for model serving in Seldon Core 1, see MLServer.
We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.
Setup Seldon Core
Use the Setup Cluster notebook to set up Seldon Core with an ingress, either Ambassador or Istio.
Then, in a separate terminal, port-forward to that ingress on localhost:8003 with either:
Ambassador:
kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080
Istio:
kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:8080
Seldon Protocol Model
We will deploy a REST model that uses the Seldon protocol by specifying the attribute protocol: seldon
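As a sketch of what a client sends to such a deployment, the snippet below builds a Seldon-protocol REST payload (a "data" object carrying an "ndarray" tensor). The deployment name "example" and namespace "seldon" in the URL are assumptions for illustration; substitute your own.

```python
import json

def seldon_request(ndarray):
    # Seldon protocol REST payload: a "data" object with an "ndarray" tensor
    return {"data": {"ndarray": ndarray}}

payload = seldon_request([[1.0, 2.0, 3.0, 4.0]])

# Hypothetical deployment "example" in namespace "seldon"; through the
# ingress port-forwarded above, the prediction endpoint pattern is:
url = "http://localhost:8003/seldon/seldon/example/api/v1.0/predictions"

print(json.dumps(payload))
```

You would POST this JSON body to the URL with curl or any HTTP client; the response echoes the same "data"/"ndarray" structure with the model's output.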
Seldon protocol Model with ModelUri with two custom models
Tensorflow Protocol Model
We will deploy a model that uses the TensorFlow protocol by specifying the attribute protocol: tensorflow
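For the TensorFlow protocol, requests follow TensorFlow Serving's REST predict format: an "instances" list of input rows. The deployment name "example" and model name "halfplustwo" below are assumptions for illustration.

```python
import json

def tf_predict_request(instances):
    # TensorFlow Serving REST predict payload: a list of input instances
    return {"instances": instances}

payload = tf_predict_request([[1.0], [2.0], [3.0]])

# Hypothetical deployment "example" serving model "halfplustwo";
# the ingress path mirrors TensorFlow Serving's REST API:
url = "http://localhost:8003/seldon/seldon/example/v1/models/halfplustwo/:predict"

print(json.dumps(payload))
```

The response carries a corresponding "predictions" list, one entry per instance.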
V2 Protocol Model
We will deploy a REST model that uses the V2 protocol by specifying the attribute protocol: v2
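The V2 (Open Inference) protocol uses named, typed, shaped tensors rather than a bare array. The snippet below builds such a request; the input name "predict", the deployment and model name "example", and the FP32 datatype are assumptions for illustration.

```python
import json

def v2_infer_request(name, data, datatype="FP32"):
    # Open Inference Protocol (V2) payload: each input tensor declares
    # a name, a shape, a datatype, and the flattened or nested data
    return {
        "inputs": [
            {
                "name": name,
                "shape": [len(data), len(data[0])],
                "datatype": datatype,
                "data": data,
            }
        ]
    }

payload = v2_infer_request("predict", [[1.0, 2.0, 3.0, 4.0]])

# Hypothetical deployment and model both named "example"; the V2 REST
# inference endpoint through the ingress follows the pattern:
url = "http://localhost:8003/seldon/seldon/example/v2/models/example/infer"

print(json.dumps(payload))
```

The response returns an "outputs" list with the same name/shape/datatype/data structure, which is what makes OIP clients reusable across serving runtimes.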