This package provides an MLServer runtime compatible with XGBoost.
You can install the runtime, alongside `mlserver`, as:
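```bash
pip install mlserver mlserver-xgboost
```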
For further information on how to use MLServer with XGBoost, you can check out this worked-out example.
The XGBoost inference runtime will expect that your model is serialised via one of the following methods:
| Extension | Docs | Example |
| --- | --- | --- |
| `*.json` | JSON format | `booster.save_model("model.json")` |
| `*.ubj` | Binary JSON (UBJ) format | `booster.save_model("model.ubj")` |
| `*.bst` | (Old) binary format | `booster.save_model("model.bst")` |
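As a minimal sketch of producing such an artifact (the dataset, model settings, and file name are illustrative), you could train and serialise a model like this:

```python
import xgboost as xgb
from sklearn.datasets import make_classification

# Illustrative example: train a small classifier and serialise it
# as JSON, one of the formats the runtime can load.
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = xgb.XGBClassifier(n_estimators=10)
model.fit(X, y)
model.save_model("model.json")
```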
If no content type is present on the request or metadata, the XGBoost runtime will try to decode the payload as a NumPy Array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
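For instance, a request that explicitly sets the NumPy content type on its input could look like the sketch below (the input name, shape, and values are illustrative):

```json
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "FP32",
      "shape": [2, 2],
      "data": [1.0, 2.0, 3.0, 4.0],
      "parameters": {"content_type": "np"}
    }
  ]
}
```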
The XGBoost inference runtime exposes a number of outputs depending on the model type. These outputs match the `predict` and `predict_proba` methods of the XGBoost model.
By default, the runtime will only return the output of `predict`. However, you can control which outputs you want back through the `outputs` field of your {class}`InferenceRequest <mlserver.types.InferenceRequest>` payload.
For example, to only return the model's `predict_proba` output, you could define a payload such as:
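```json
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "INT32",
      "shape": [2, 2],
      "data": [1, 2, 3, 4]
    }
  ],
  "outputs": [
    { "name": "predict_proba" }
  ]
}
```

The input values here are illustrative; the key part is the `outputs` entry requesting `predict_proba`.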
| Output | Returned By Default | Availability |
| --- | --- | --- |
| `predict` | ✅ | Available on all XGBoost models. |
| `predict_proba` | ❌ | Only available on non-regressor models (i.e. `XGBClassifier` models). |