Docker Installation
Preparation
git clone https://github.com/SeldonIO/seldon-core --branch=v2
Build Seldon CLI
Install Docker Compose (or install it directly from a GitHub release if not using Docker Desktop).
Install make. This will depend on your version of Linux; for example, on Ubuntu run sudo apt-get install build-essential.
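As an optional sanity check (not part of the official steps), you can confirm that the required tooling is available before continuing:
# Confirm the prerequisites are on the PATH; exact versions will vary by system
git --version
docker --version
docker compose version
make --version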
Deploy
From the project root run:
make deploy-local
This will run with the latest images for the components.
Note: Triton and MLServer are large images at present (11G and 9G respectively) so will take time to download on first usage.
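Once the command completes, a simple, hedged way to check that the component containers came up is to list the running containers and their images:
# The Seldon Core v2 component containers should appear in this list
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'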
Run a particular version
To run a particular release, set the environment variable CUSTOM_IMAGE_TAG to the desired version before running the command, e.g.:
export CUSTOM_IMAGE_TAG=0.2.0
make deploy-local
GPU support
To enable GPU support on the servers:
Make sure that nvidia-container-runtime is installed (see its installation documentation).
Enable GPU:
export GPU_ENABLED=1
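Before deploying, an optional check (not part of the official steps) that the GPU and the NVIDIA runtime are visible to Docker:
# The host GPU(s) should be listed here
nvidia-smi
# The "Runtimes" line should include nvidia once nvidia-container-runtime is set up
docker info | grep -i runtimes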
Local Models
To deploy with a local folder available for loading models, set the environment variable LOCAL_MODEL_FOLDER to the folder, e.g.:
export LOCAL_MODEL_FOLDER=/home/seldon/models
make deploy-local
This folder will be mounted at /mnt/models. You can then specify models as shown below:
# samples/models/sklearn-iris-local.yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "/mnt/models/iris"
  requirements:
  - sklearn
If you have set the local model folder as above, this model will be loaded from /home/seldon/models/iris.
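Assuming the local deployment is running and you have built the Seldon CLI, loading and testing this model might look like the following sketch (the exact CLI flags and inference payload are illustrative and may differ for your version):
# Load the model and wait for it to become available
seldon model load -f samples/models/sklearn-iris-local.yaml
seldon model status iris -w ModelAvailable
# Send a simple test request
seldon model infer iris '{"inputs": [{"name": "predict", "shape": [1, 4], "datatype": "FP32", "data": [[1, 2, 3, 4]]}]}'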
Tracing
The default local install will provide Jaeger tracing at http://0.0.0.0:16686/search.
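If you prefer a command-line check that Jaeger is reachable, one option (assuming the standard Jaeger query API is served on the same port) is:
# Lists the services that have reported traces to Jaeger
curl -s http://localhost:16686/api/services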
Metrics
The default local install will expose Grafana at http://localhost:3000.
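Similarly, you can confirm Grafana is responding with its standard health endpoint (this is generic Grafana behaviour, not anything Seldon-specific):
# Should return a small JSON document with the Grafana version and database status
curl -s http://localhost:3000/api/health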
Undeploy
From the project root run:
make undeploy-local