
Install Locally

This guide runs through how to set up and install Seldon Core in a Kubernetes cluster running on your local machine. By the end, you'll have Seldon Core up and running and be ready to start deploying machine learning models.

Prerequisites

In order to install Seldon Core locally, you'll need the following tools:


Docker or Podman

Docker and Podman are container engines. Kind needs a container engine (like Docker or Podman) to actually run the containers inside your clusters. You only need one of the two. Note that Docker Desktop is no longer free for commercial use at large companies.


If you're using Podman, remember to set the alias docker=podman.
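For example, you could add the alias to your shell profile (a minimal sketch for a POSIX-compatible shell):

```bash
# Route `docker` invocations to Podman for tools that shell out to docker
alias docker=podman
```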

Kind

Kind is a tool for running Kubernetes clusters locally. We'll use it to create a cluster on your machine so that you can install Seldon Core into it. If you don't already have kind installed on your machine, you'll need to follow their installation guide.

Kubectl

kubectl is the Kubernetes command-line tool. It allows you to run commands against Kubernetes clusters, which we'll need to do as part of setting up Seldon Core.

Helm

Helm is a package manager that makes it easy to find, share and use software built for Kubernetes. If you don't already have Helm installed locally, follow the Helm installation guide.

Set Up Kind

Once kind is installed on your system, you can create a new Kubernetes cluster by running:
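```bash
# Create a local Kubernetes cluster; "seldon" is the cluster name assumed
# throughout this guide (giving the kind-seldon context mentioned below)
kind create cluster --name seldon
```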

After kind has created your cluster, you can configure kubectl to use the cluster by setting the context:
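```bash
# Switch kubectl to the kind cluster's context
# (assumes the cluster was created with the name "seldon" above)
kubectl config use-context kind-seldon
```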

From now on, all commands run using kubectl will be directed at your kind cluster.


Kind prefixes your cluster names with kind-, so your cluster context is kind-seldon, not just seldon.

Install Cluster Ingress

Ingress is a Kubernetes object that provides routing rules for your cluster. It manages the incoming traffic and routes it to the services running inside the cluster.

Seldon Core supports using either Istio or Ambassador to manage incoming traffic. Seldon Core automatically creates the objects and rules required to route traffic to your deployed machine learning models.

Istio is an open source service mesh. If the term service mesh is unfamiliar to you, it's worth reading a little more about Istio.

Download Istio

For Linux and macOS, the easiest way to download Istio is using the following command:
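```bash
# Download and unpack the latest Istio release into the current directory
# using Istio's official download script
curl -L https://istio.io/downloadIstio | sh -
```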

Move to the Istio package directory. For example, if the package is istio-1.11.4:
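```bash
# Change into the extracted release directory
# (adjust the version to match the release you downloaded)
cd istio-1.11.4
```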

Add the istioctl client to your path (Linux or macOS):
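```bash
# Make the istioctl binary available on PATH for the current shell session
export PATH=$PWD/bin:$PATH
```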

Install Istio

Istio provides a command-line tool, istioctl, to make the installation process easy. The demo configuration profile has a good set of defaults that will work on your local cluster. As a sketch, assuming istioctl is now on your PATH, the install looks like this:
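```bash
# Install Istio into the cluster using the demo configuration profile
istioctl install --set profile=demo -y
```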

The namespace label istio-injection=enabled instructs Istio to automatically inject proxies alongside anything we deploy in that namespace. We'll set it up for our default namespace:
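```bash
# Enable automatic Istio sidecar injection for workloads in the default namespace
kubectl label namespace default istio-injection=enabled
```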

Create Istio Gateway

In order for Seldon Core to use Istio's features to manage cluster traffic, we need to create an Istio Gateway by running the following command:
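The sketch below creates a Gateway that accepts HTTP traffic on port 80 for any host; the name seldon-gateway is a convention rather than a requirement:

```bash
# Create an Istio Gateway on the default ingress gateway, listening on port 80
kubectl apply -f - << END
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: seldon-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
END
```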


For custom configuration and more details on installing Seldon Core with Istio, please see the Istio Ingress page.

Install Seldon Core

To install Seldon Core itself, refer to the Seldon Core installation page.
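As a rough sketch of what that involves, a typical Helm-based install (assuming the seldon-core-operator chart from the seldon-charts repository, a seldon-system namespace, and Istio as the ingress) looks something like this:

```bash
# Namespace for the Seldon Core operator
kubectl create namespace seldon-system

# Install the Seldon Core operator with Istio support enabled
helm install seldon-core seldon-core-operator \
    --repo https://storage.googleapis.com/seldon-charts \
    --set usageMetrics.enabled=true \
    --set istio.enabled=true \
    --namespace seldon-system
```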

Local Port Forwarding

Because your Kubernetes cluster is running locally, we need to forward a port on your local machine to one inside the cluster so that it can be accessed externally. You can do this by running:
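```bash
# Forward local port 8080 to port 80 on Istio's ingress gateway service
# (assumes the default istio-ingressgateway service in the istio-system namespace)
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
```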

This will forward any traffic from port 8080 on your local machine to port 80 inside your cluster.

You have now successfully installed Seldon Core on a local cluster and are ready to start deploying models as production microservices.
