Serving Alibi-Detect models
Out of the box, mlserver supports the deployment and serving of alibi_detect models. Alibi Detect is an open-source Python library focused on outlier, adversarial and drift detection. In this example, we will cover how to create a drift detector configuration and then serve it using mlserver.
Fetch reference data
The first step will be to fetch the reference data and other relevant metadata for an alibi-detect model.
For that, we will use the alibi library to get the adult dataset, which contains demographic features from a 1996 US census.
Install the `alibi` library (for the dataset dependencies) and the `alibi_detect` library (for the detector configuration) from PyPI:
```python
!pip install alibi alibi_detect
```

```python
import alibi
import matplotlib.pyplot as plt
import numpy as np
```

```python
# Fetch the adult dataset together with its feature metadata
adult = alibi.datasets.fetch_adult()
X, y = adult.data, adult.target
feature_names = adult.feature_names
category_map = adult.category_map
```

```python
# Split the data into a reference set and two test sets
n_ref = 10000
n_test = 10000

X_ref, X_t0, X_t1 = X[:n_ref], X[n_ref:n_ref + n_test], X[n_ref + n_test:n_ref + 2 * n_test]

# Passing None lets the detector infer the categories of each categorical feature
categories_per_feature = {f: None for f in list(category_map.keys())}
```
Drift Detector Configuration
This example is based on the Categorical and mixed type data drift detection on income prediction example from the alibi-detect documentation.
Creating detector and saving configuration
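As a sketch of this step, we can initialise a `TabularDrift` detector on the reference data and persist it with `save_detector`; the `p_val` threshold and the artifact folder name are illustrative assumptions:

```python
from alibi_detect.cd import TabularDrift
from alibi_detect.saving import save_detector

# Initialise the drift detector on the reference split; p_val is the
# significance threshold for the statistical tests (illustrative value)
cd = TabularDrift(X_ref, p_val=0.05, categories_per_feature=categories_per_feature)

# Persist the detector so that mlserver can load it later on
# (the folder name is an assumption for this example)
save_detector(cd, "./alibi-detector-artifacts")
```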
Detecting data drift directly
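Before serving the detector, we can sanity-check it in-process by calling its `predict` method on one of the held-out splits; a minimal sketch:

```python
# Run the detector directly on the first test split; the returned dict
# contains an `is_drift` flag plus the underlying test statistics
preds = cd.predict(X_t0)

labels = ["No", "Yes"]
print(f"Drift detected? {labels[preds['data']['is_drift']]}")
```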
Serving
Now that we have the reference data and other configuration parameters, the next step will be to serve the detector using mlserver. For that, we will need to create two configuration files:
- `settings.json`: holds the configuration of our server (e.g. ports, log level, etc.).
- `model-settings.json`: holds the configuration of our model (e.g. input type, runtime to use, etc.).
settings.json
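A minimal `settings.json` could look as follows (enabling debug logging is an assumption for this example):

```json
{
    "debug": true
}
```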
model-settings.json
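And a sketch of `model-settings.json`, assuming the `mlserver_alibi_detect` runtime is installed; the model name, version and artifact URI are assumptions that must match the folder used when saving the detector:

```json
{
    "name": "income-drift-detector",
    "implementation": "mlserver_alibi_detect.AlibiDetectRuntime",
    "parameters": {
        "uri": "./alibi-detector-artifacts",
        "version": "v0.1.0"
    }
}
```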
Start serving our model
Now that we have our config in place, we can start the server by running the `mlserver start` command. This needs to either be run from the same directory where our config files are or be pointed to the folder where they are.
Since this command will start the server and block the terminal, waiting for requests, it will need to be run in the background or in a separate terminal.
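For example, assuming the terminal is currently in the folder that contains both config files:

```bash
mlserver start .
```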
Send test inference request
We now have our alibi-detect model being served by mlserver. To make sure that everything is working as expected, let's send a request from our test set.
For that, we can use the Python types that mlserver provides out of the box, or we can build our request manually.
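As a sketch of the manual approach, we can POST a V2 inference payload to the server's REST endpoint; the model name must match the one in `model-settings.json`, the input name is an illustrative choice, and the port assumes mlserver's default HTTP port of 8080:

```python
import requests

# Build a V2 inference payload from the first test split
x = X_t0
inference_request = {
    "inputs": [
        {
            "name": "income",
            "shape": list(x.shape),
            "datatype": "INT64",
            "data": x.flatten().tolist(),
        }
    ]
}

endpoint = "http://localhost:8080/v2/models/income-drift-detector/infer"
response = requests.post(endpoint, json=inference_request)
```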
View model response
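The response follows the V2 inference protocol, so we can inspect its named output tensors; assuming the runtime maps the detector's prediction dictionary (e.g. `is_drift`, `p_val`) onto those outputs:

```python
import json

# Pretty-print the V2 response; each entry in `outputs` is a named tensor,
# e.g. the drift flag and the test statistics returned by the detector
response_dict = response.json()
print(json.dumps(response_dict, indent=2))
```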