An open source inference server for your machine learning models.
It offers multi-model serving, letting users run multiple models within the same process.
You can install the `mlserver` package by running:
Out of the box, MLServer provides support for:
| Framework | Supported |
| --- | --- |
| Scikit-Learn | ✅ |
| XGBoost | ✅ |
| Spark MLlib | ✅ |
| LightGBM | ✅ |
| CatBoost | ✅ |
| Tempo | ✅ |
| MLflow | ✅ |
| Alibi-Detect | ✅ |
| Alibi-Explain | ✅ |
| HuggingFace | ✅ |
- 🔴 Unsupported
- 🟠 Deprecated: to be removed in a future version
- 🟢 Supported
- 🔵 Untested

| Python Version | Status |
| --- | --- |
| 3.7 | 🔴 |
| 3.8 | 🔴 |
| 3.9 | 🟢 |
| 3.10 | 🟢 |
| 3.11 | 🟢 |
| 3.12 | 🟢 |
| 3.13 | 🔴 |
We generally keep the in-repository version as a placeholder for the upcoming release. For example, if the latest release were `1.3.0`, the repository might carry a placeholder such as `1.4.0.dev0` until the next release is cut.
To run all of the tests for MLServer and the runtimes, use:
To run the tests for a single file, use something like:
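The exact targets live in the repository's build configuration; as a rough sketch, assuming a standard Makefile/tox layout (both commands and the file path below are illustrative, not the repo's confirmed targets):

```shell
# Run the full test suite for MLServer and the runtimes
# (illustrative; assumes the Makefile exposes a `test` target)
make test

# Run the tests for a single file by passing it through to the
# test runner (illustrative path)
tox -- tests/test_settings.py
```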
MLServer aims to provide an easy way to start serving your machine learning models through a REST and gRPC interface, fully compliant with the V2 Inference Protocol. Watch a quick video introducing the project.
Ability to run inference in parallel across multiple models through a pool of inference workers.
Support for adaptive batching, to group inference requests together on the fly.
Scalability with deployment in Kubernetes native frameworks, including Seldon Core and KServe (formerly known as KFServing), where MLServer is the core Python inference server used to serve machine learning models.
Support for the standard V2 Inference Protocol on both the gRPC and REST flavours, which has been standardised and adopted by various model serving frameworks.
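To give a feel for the REST flavour of the protocol, the request below is a sketch only: it assumes a model named `my-model` is already loaded on a local MLServer instance, and the tensor payload is illustrative.

```shell
# Hypothetical V2 inference request against a local MLServer instance
# (default HTTP port 8080; model name and payload are illustrative).
curl -X POST http://localhost:8080/v2/models/my-model/infer \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": [
          {
            "name": "input-0",
            "shape": [1, 3],
            "datatype": "FP32",
            "data": [[1.0, 2.0, 3.0]]
          }
        ]
      }'
```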
You can read more about the goals of this project in its initial design document.
Note that to use any of the optional inference runtimes, you'll need to install the relevant package. For example, to serve a scikit-learn model, you would need to install the `mlserver-sklearn` package:
For further information on how to use MLServer, you can check any of the available examples.
Inference runtimes allow you to define how your model should be used within MLServer. You can think of them as the backend glue between MLServer and your machine learning framework of choice. You can read more about them in the inference runtimes documentation.
Out of the box, MLServer comes with a set of pre-packaged runtimes which let you interact with a subset of common frameworks. This allows you to start serving models saved in these frameworks straight away. However, it's also possible to write custom runtimes.
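As an illustration, a pre-packaged runtime is typically selected through a `model-settings.json` file. The sketch below assumes the scikit-learn runtime; the model name and file path are hypothetical.

```json
{
  "name": "my-sklearn-model",
  "implementation": "mlserver_sklearn.SKLearnModel",
  "parameters": {
    "uri": "./model.joblib"
  }
}
```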
To see MLServer in action, check out the worked-out examples. You can find below a few selected ones showcasing how you can leverage MLServer to start serving your machine learning models.
Both the main `mlserver` package and the inference runtimes try to follow the same versioning schema. To bump the version across all of them, you can use the provided helper script.