MLServer

An open source inference server for your machine learning models.

Overview

MLServer aims to provide an easy way to start serving your machine learning models through a REST and gRPC interface, fully compliant with KFServing's V2 Dataplane spec. Watch a quick video introducing the project here.

  • Multi-model serving, letting users run multiple models within the same process.

  β€’ Ability to run inference in parallel for vertical scaling across multiple models through a pool of inference workers.

  β€’ Support for adaptive batching, to group inference requests together on the fly (see the configuration sketch after this list).

  β€’ Scalability with deployment in Kubernetes native frameworks, including Seldon Core and KServe (formerly known as KFServing), where MLServer is the core Python inference server used to serve machine learning models.

  β€’ Support for the standard V2 Inference Protocol on both the gRPC and REST flavours, which has been standardised and adopted by various model serving frameworks.
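
Batching like this is configured per model. As a minimal, illustrative sketch (field values are placeholders; see MLServer's docs for the full set of options), the relevant knobs live in the model's model-settings.json:

    {
      "name": "my-model",
      "max_batch_size": 8,
      "max_batch_time": 0.25
    }

Roughly, max_batch_size caps how many requests are grouped into a single batch, while max_batch_time bounds (in seconds) how long the server waits to fill one; the pool of inference workers used for parallel inference is sized through the server-wide settings (for example, a parallel_workers entry in settings.json).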

You can read more about the goals of this project on the initial design document.

Usage

You can install the mlserver package by running:

    pip install mlserver

Note that to use any of the optional inference runtimes, you'll need to install the relevant package. For example, to serve a scikit-learn model, you would need to install the mlserver-sklearn package:

    pip install mlserver-sklearn

For further information on how to use MLServer, you can check any of the available examples.
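
To make this concrete, here is a minimal sketch of serving a scikit-learn model, assuming a model saved at ./model.joblib (names and paths are placeholders). The model is described in a model-settings.json file:

    {
      "name": "my-sklearn-model",
      "implementation": "mlserver_sklearn.SKLearnModel",
      "parameters": {
        "uri": "./model.joblib"
      }
    }

The server can then be started from that folder with:

    mlserver start .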

Inference Runtimes

Inference runtimes allow you to define how your model should be used within MLServer. You can think of them as the backend glue between MLServer and your machine learning framework of choice. You can read more about inference runtimes in their documentation page.

Out of the box, MLServer comes with a set of pre-packaged runtimes which let you interact with a subset of common frameworks. This allows you to start serving models saved in these frameworks straight away. However, it's also possible to write custom runtimes.
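
As a rough sketch of what writing one involves, a custom runtime subclasses mlserver.MLModel and implements its load() and predict() hooks. The toy echo runtime below uses illustrative names, and the exact codec helpers may vary between MLServer versions:

    from mlserver import MLModel
    from mlserver.codecs import NumpyCodec
    from mlserver.types import InferenceRequest, InferenceResponse

    class EchoRuntime(MLModel):
        async def load(self) -> bool:
            # A real runtime would load its model artifacts here.
            self.ready = True
            return self.ready

        async def predict(self, payload: InferenceRequest) -> InferenceResponse:
            # Decode the first input tensor into a numpy array.
            data = self.decode(payload.inputs[0], default_codec=NumpyCodec)
            # Echo the decoded tensor straight back as the output.
            return InferenceResponse(
                model_name=self.name,
                outputs=[NumpyCodec.encode_output(name="output-0", payload=data)],
            )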

Out of the box, MLServer provides support for:

| Framework | Supported | Documentation |
| --- | --- | --- |
| Scikit-Learn | βœ… | MLServer SKLearn |
| XGBoost | βœ… | MLServer XGBoost |
| Spark MLlib | βœ… | MLServer MLlib |
| LightGBM | βœ… | MLServer LightGBM |
| CatBoost | βœ… | MLServer CatBoost |
| Tempo | βœ… | github.com/SeldonIO/tempo |
| MLflow | βœ… | MLServer MLflow |
| Alibi-Detect | βœ… | MLServer Alibi Detect |
| Alibi-Explain | βœ… | MLServer Alibi Explain |
| HuggingFace | βœ… | MLServer HuggingFace |

Supported Python Versions

πŸ”΄ Unsupported
🟠 Deprecated: To be removed in a future version
🟒 Supported
πŸ”΅ Untested

| Python Version | Status |
| --- | --- |
| 3.7 | πŸ”΄ |
| 3.8 | πŸ”΄ |
| 3.9 | 🟒 |
| 3.10 | 🟒 |
| 3.11 | 🟒 |
| 3.12 | 🟒 |
| 3.13 | πŸ”΄ |
Examples

To see MLServer in action, check out our full list of examples. You can find below a few selected examples showcasing how you can leverage MLServer to start serving your machine learning models.

  β€’ Serving a scikit-learn model

  β€’ Serving a xgboost model

  β€’ Serving a lightgbm model

  β€’ Serving a catboost model

  β€’ Serving a tempo pipeline

  β€’ Serving a custom model

  β€’ Serving an alibi-detect model

  β€’ Serving a HuggingFace model

  β€’ Multi-Model Serving with multiple frameworks

  β€’ Loading / unloading models from a model repository
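
Complementing the examples above, querying a served model through the REST flavour of the V2 Inference Protocol looks roughly like this. This is a sketch assuming a model named my-model running on MLServer's default HTTP port 8080; tensor names and values are illustrative:

    import requests

    # A V2 inference request with a single FP32 input tensor of shape [1, 3].
    inference_request = {
        "inputs": [
            {
                "name": "input-0",
                "shape": [1, 3],
                "datatype": "FP32",
                "data": [1.0, 2.0, 3.0],
            }
        ]
    }

    response = requests.post(
        "http://localhost:8080/v2/models/my-model/infer",
        json=inference_request,
    )
    print(response.json())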

Developer Guide

Versioning

Both the main mlserver package and the inference runtimes packages try to follow the same versioning schema. To bump the version across all of them, you can use the ./hack/update-version.sh script.

We generally keep the version as a placeholder for an upcoming version.

For example:

    ./hack/update-version.sh 0.2.0.dev1

Testing

To run all of the tests for MLServer and the runtimes, use:

    make test

To run tests for a single file, use something like:

    tox -e py3 -- tests/batch_processing/test_rest.py
