LightGBM


This package provides an MLServer runtime compatible with LightGBM.

Usage

You can install the runtime, alongside mlserver, as:

pip install mlserver mlserver-lightgbm
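As a sketch, a minimal `model-settings.json` for serving a saved LightGBM booster could look like the following. The model name and URI are placeholder values; `mlserver_lightgbm.LightGBMModel` is the runtime's implementation class:

```json
{
    "name": "my-lightgbm-model",
    "implementation": "mlserver_lightgbm.LightGBMModel",
    "parameters": {
        "uri": "./model.bst"
    }
}
```

With this file in place, running `mlserver start .` from the same folder would load the model and expose it over MLServer's inference endpoints.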

For further information on how to use MLServer with LightGBM, you can check out this worked out example.

Content Types

If no content type is present on the request or metadata, the LightGBM runtime will try to decode the payload as a NumPy Array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
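To make the decoding explicit, a request can carry the content type in its `parameters`. The snippet below is a minimal sketch of a V2 inference request payload built with the standard library only; the input name and data are placeholder values, and `"np"` is assumed here to be the identifier for the NumPy Array content type:

```python
import json

# Sketch of a V2 inference request that declares the content type
# explicitly on the input, instead of relying on the runtime's
# NumPy Array fallback. All names and values are placeholders.
inference_request = {
    "inputs": [
        {
            "name": "predict",
            "shape": [1, 3],
            "datatype": "FP32",
            "parameters": {"content_type": "np"},
            "data": [0.1, 0.2, 0.3],
        }
    ]
}

# Serialise the request body, ready to POST to the model's
# /v2/models/<name>/infer endpoint.
payload = json.dumps(inference_request)
print(payload)
```

Declaring the content type per-request like this overrides whatever default the runtime would otherwise apply.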
