SKLearn

This package provides an MLServer runtime compatible with Scikit-Learn.

Usage

You can install the runtime, alongside `mlserver`, as:

```bash
pip install mlserver mlserver-sklearn
```

For further information on how to use MLServer with Scikit-Learn, you can check out the worked out example on serving Scikit-Learn models.
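
As a quick sketch of what that setup usually looks like, a `model-settings.json` file can point the runtime at a serialised model. The model name and the `model.joblib` path below are placeholder assumptions:

```json
{
  "name": "my-sklearn-model",
  "implementation": "mlserver_sklearn.SKLearnModel",
  "parameters": {
    "uri": "./model.joblib"
  }
}
```

With that file in place, running `mlserver start .` from the same folder should load and serve the model.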

Content Types

If no content type is present on the request or metadata, the Scikit-Learn runtime will try to decode the payload as a NumPy Array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
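
For instance, to mark the payload explicitly as a NumPy array, you can attach the content type to the input's `parameters`. A minimal sketch, where the input name, shape and data are illustrative:

```json
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "INT32",
      "shape": [2, 2],
      "data": [1, 2, 3, 4],
      "parameters": { "content_type": "np" }
    }
  ]
}
```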

Model Outputs

The Scikit-Learn inference runtime exposes a number of outputs depending on the model type. These outputs correspond to the `predict`, `predict_proba` and `transform` methods of the Scikit-Learn model.

| Output | Returned By Default | Availability |
| --- | --- | --- |
| `predict` | ✅ | Available on most models, but not in Scikit-Learn pipelines. |
| `predict_proba` | ❌ | Only available on non-regressor models. |
| `transform` | ❌ | Only available on Scikit-Learn pipelines. |

By default, the runtime will only return the output of `predict`. However, you can control which outputs you want back through the `outputs` field of your `InferenceRequest` payload.

For example, to only return the model's `predict_proba` output, you could define a payload such as:

```json
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "INT32",
      "shape": [2, 2],
      "data": [1, 2, 3, 4]
    }
  ],
  "outputs": [
    { "name": "predict_proba" }
  ]
}
```
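
As a rough sketch of how such a payload could be sent, the snippet below POSTs it to MLServer's default V2 inference endpoint. The host, port and model name (`my-sklearn-model`) are assumptions to adapt to your own deployment:

```python
import requests

# Payload asking only for the model's `predict_proba` output, as above.
payload = {
    "inputs": [
        {
            "name": "my-input",
            "datatype": "INT32",
            "shape": [2, 2],
            "data": [1, 2, 3, 4],
        }
    ],
    "outputs": [{"name": "predict_proba"}],
}

# ASSUMPTION: model name and address; 8080 is MLServer's default HTTP port.
response = requests.post(
    "http://localhost:8080/v2/models/my-sklearn-model/infer",
    json=payload,
)
print(response.json()["outputs"])
```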
