XGBoost
This package provides an MLServer runtime compatible with XGBoost.

You can install the runtime, alongside `mlserver`, as:
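The install command below assumes the runtime is published as the `mlserver-xgboost` package on PyPI:

```shell
pip install mlserver mlserver-xgboost
```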
For further information on how to use MLServer with XGBoost, you can check out the worked-out example in the MLServer documentation.
The XGBoost inference runtime will expect that your model is serialised via one of the following methods:

| Extension | Serialisation method |
| --- | --- |
| `*.json` | `booster.save_model("model.json")` |
| `*.ubj` | `booster.save_model("model.ubj")` |
| `*.bst` | `booster.save_model("model.bst")` |
The XGBoost inference runtime exposes a number of outputs depending on the model type. These outputs map to the `predict` and `predict_proba` methods of the XGBoost model.

| Output | Returned by default | Availability |
| --- | --- | --- |
| `predict` | ✅ | Available on all XGBoost models. |
| `predict_proba` | ❌ | Only available on non-regressor models (i.e. `XGBClassifier` models). |
By default, the runtime will only return the output of `predict`. However, you can control which outputs you want back through the `outputs` field of your `InferenceRequest` (`mlserver.types.InferenceRequest`) payload.

For example, to only return the model's `predict_proba` output, you could define a payload such as:
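A minimal sketch of such a request body, where the input name, shape, and data are illustrative placeholders:

```json
{
  "inputs": [
    {
      "name": "my-input",
      "datatype": "FP32",
      "shape": [2, 4],
      "data": [1, 2, 3, 4, 5, 6, 7, 8]
    }
  ],
  "outputs": [
    { "name": "predict_proba" }
  ]
}
```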
If no content type is present on the request or the model's metadata, the XGBoost runtime will try to decode the payload as a NumPy array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
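One way to pin the content type in the model's metadata is through its `model-settings.json`. The sketch below assumes the runtime's implementation class is `mlserver_xgboost.XGBoostModel` and uses the `np` (NumPy) content type; the model name, input name, and shape are illustrative:

```json
{
  "name": "my-xgboost-model",
  "implementation": "mlserver_xgboost.XGBoostModel",
  "inputs": [
    {
      "name": "my-input",
      "datatype": "FP32",
      "shape": [-1, 4],
      "parameters": { "content_type": "np" }
    }
  ]
}
```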