
MLServer

An open source inference server for your machine learning models.

Overview

MLServer aims to provide an easy way to start serving your machine learning models through a REST and gRPC interface, fully compliant with KFServing's V2 Dataplane spec. Watch a quick video introducing the project here.

  • Multi-model serving, letting users run multiple models within the same process.
  • Ability to run inference in parallel for vertical scaling across multiple models through a pool of inference workers.
  • Support for adaptive batching, to group inference requests together on the fly.
  • Scalability with deployment in Kubernetes native frameworks, including Seldon Core and KServe (formerly known as KFServing), where MLServer is the core Python inference server used to serve machine learning models.
  • Support for the standard V2 Inference Protocol on both the gRPC and REST flavours, which has been standardised and adopted by various model serving frameworks.

You can read more about the goals of this project on the initial design document.
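As a rough illustration of the REST flavour of the V2 Inference Protocol, a request against a running MLServer instance might look like the sketch below. The model name, port, and input payload are placeholders rather than anything defined on this page:

```python
# Hypothetical example of a V2 Inference Protocol request over REST.
# The model name ("my-model"), port (8080) and input data are illustrative
# placeholders, not values taken from this document.
import requests

inference_request = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[1.0, 2.0, 3.0, 4.0]],
        }
    ]
}

response = requests.post(
    "http://localhost:8080/v2/models/my-model/infer",
    json=inference_request,
)
print(response.json())
```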

Usage

You can install the mlserver package by running:

pip install mlserver

Note that to use any of the optional inference runtimes, you'll need to install the relevant package. For example, to serve a scikit-learn model, you would need to install the mlserver-sklearn package:

pip install mlserver-sklearn

For further information on how to use MLServer, you can check any of the available examples.
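For instance, serving a scikit-learn model typically means saving the trained model and describing it in a model-settings.json file that MLServer reads on start-up. The sketch below is only illustrative; the model name, file paths, and implementation class are assumptions based on the mlserver-sklearn runtime, not values defined on this page:

```python
# Illustrative sketch: train and save a scikit-learn model, then write the
# model-settings.json that the mlserver-sklearn runtime would load it from.
# All names and paths here are assumptions, not taken from this document.
import json

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)
joblib.dump(model, "model.joblib")

settings = {
    "name": "iris-classifier",
    "implementation": "mlserver_sklearn.SKLearnModel",  # assumed runtime class
    "parameters": {"uri": "./model.joblib"},
}
with open("model-settings.json", "w") as f:
    json.dump(settings, f, indent=2)

# The server can then be started from this folder with: mlserver start .
```

With that in place, MLServer exposes the model over both the REST and gRPC flavours of the V2 Inference Protocol.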

Inference Runtimes

Inference runtimes allow you to define how your model should be used within MLServer. You can think of them as the backend glue between MLServer and your machine learning framework of choice. You can read more about inference runtimes in their documentation page.

Out of the box, MLServer comes with a set of pre-packaged runtimes which let you interact with a subset of common frameworks. This allows you to start serving models saved in these frameworks straight away. However, it's also possible to write custom runtimes, as sketched below.
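As a rough idea of what a custom runtime can look like, the sketch below subclasses MLModel and implements the load and predict hooks. It assumes the mlserver Python API and the decode_args codec helper; treat the class and method names as an informal sketch rather than a definitive reference:

```python
# Hypothetical sketch of a custom inference runtime. The wrapped "model" is a
# placeholder; only the general MLModel shape is intended to be illustrative.
import numpy as np

from mlserver import MLModel
from mlserver.codecs import decode_args


class MyCustomRuntime(MLModel):
    async def load(self) -> bool:
        # Load (or build) the underlying model here. A trivial stand-in:
        self._model = lambda x: x.sum(axis=1)
        return True

    @decode_args
    async def predict(self, payload: np.ndarray) -> np.ndarray:
        # Delegate to the wrapped model; decode_args takes care of turning
        # the V2 request payload into a NumPy array and encoding the result.
        return self._model(payload)
```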

Out of the box, MLServer provides support for:

| Framework     | Supported | Documentation             |
| ------------- | --------- | ------------------------- |
| Scikit-Learn  | ✅        | MLServer SKLearn          |
| XGBoost       | ✅        | MLServer XGBoost          |
| Spark MLlib   | ✅        | MLServer MLlib            |
| LightGBM      | ✅        | MLServer LightGBM         |
| CatBoost      | ✅        | MLServer CatBoost         |
| Tempo         | ✅        | github.com/SeldonIO/tempo |
| MLflow        | ✅        | MLServer MLflow           |
| Alibi-Detect  | ✅        | MLServer Alibi Detect     |
| Alibi-Explain | ✅        | MLServer Alibi Explain    |
| HuggingFace   | ✅        | MLServer HuggingFace      |

Supported Python Versions

🔴 Unsupported

🟠 Deprecated: To be removed in a future version

🟢 Supported

🔵 Untested

| Python Version | Status |
| -------------- | ------ |
| 3.7            | 🔴     |
| 3.8            | 🔴     |
| 3.9            | 🟢     |
| 3.10           | 🟢     |
| 3.11           | 🟢     |
| 3.12           | 🟢     |
| 3.13           | 🔴     |

Examples

To see MLServer in action, check out our full list of examples. You can find below a few selected examples showcasing how you can leverage MLServer to start serving your machine learning models.

  • Serving a scikit-learn model
  • Serving a xgboost model
  • Serving a lightgbm model
  • Serving a catboost model
  • Serving a tempo pipeline
  • Serving a custom model
  • Serving an alibi-detect model
  • Serving a HuggingFace model
  • Multi-Model Serving with multiple frameworks
  • Loading / unloading models from a model repository

Developer Guide

Versioning

Both the main mlserver package and the inference runtimes packages try to follow the same versioning schema. To bump the version across all of them, you can use the ./hack/update-version.sh script.

We generally keep the version as a placeholder for an upcoming version.

For example:

./hack/update-version.sh 0.2.0.dev1

Testing

To run all of the tests for MLServer and the runtimes, use:

make test

To run tests for a single file, use something like:

tox -e py3 -- tests/batch_processing/test_rest.py
