Outlier Detection
In a production environment, it is critical to monitor the data your machine learning model runs inference on, as changes in data can adversely affect the performance of ML models.
In this demo, you will learn how to identify outliers in your inference data using Alibi Detect's VAE outlier detection method.
In this demo, we will:
Launch an image classification deployment with a model trained on the CIFAR-10 dataset.
Set up a VAE outlier detector for this model.
Send a request to get an image classification.
Send a perturbed request to identify an outlier instance.
This demo requires Knative to be installed on the cluster, because the outlier detector is deployed as a Knative service (kservice). See the Knative installation instructions for the necessary setup.
In the Overview page, click Create new deployment.
Enter the deployment details as follows:
Name: cifar10-classifier
Namespace: seldon
Type: Seldon Deployment
Configure the default predictor as follows:
Runtime: Triton (ONNX, PyTorch, Tensorflow, TensorRT)
Model Project: default
Model URI: gs://seldon-models/triton/tf_cifar10
Storage Secret: (leave blank/none)
Model Name: cifar10
Click the Next button until the end and, finally, click Launch.
If your deployment is launched successfully, it will have the Available status.
From the cifar10-classifier deployment dashboard, click Add inside the Outlier Detection card.
Configure the detector with the following values:
Detector Name: outlier-detect
Storage URI: gs://seldon-models/scv2/examples/cifar10/outlier-detector
Project: default
Storage Secret: (leave blank)
Reply URL: http://seldon-request-logger.seldon-logs
Click Create Detector to complete the setup.
After a short while, the detector should become available.
Now that the outlier detector is available, we can use it to identify outliers in the inference data. We will send two requests to the model: one with a normal image and one with a perturbed image, to identify the outlier.
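Both request files follow the Open Inference Protocol (also known as the KServe V2 protocol). As a rough sketch of how such a payload is structured, the following builds a request for a single CIFAR-10 image; note that the input tensor name `input_1` and the random placeholder pixels are assumptions for illustration, not taken from the demo files:

```python
import json
import random

def build_oip_request(image, input_name="input_1"):
    """Build an Open Inference Protocol (V2) payload for one CIFAR-10 image.

    `image` is a flat list of 32*32*3 float pixel values. `input_name` is an
    assumption here and must match the model's actual input tensor name.
    """
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": [1, 32, 32, 3],  # batch of 1 CIFAR-10 image (32x32 RGB)
                "datatype": "FP32",
                "data": image,
            }
        ]
    }

# A placeholder image of random pixels, standing in for the real frog image.
image = [random.random() for _ in range(32 * 32 * 3)]
payload = json.dumps(build_oip_request(image))
```

The demo's JSON files below contain payloads of this general shape, with the real image data.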
A frog image from the CIFAR-10 dataset in the Open Inference Protocol (OIP) format:
A perturbed image of the same frog in the Open Inference Protocol (OIP) format:
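A perturbed input like this one can be produced by adding small random noise to the original pixel values. The sketch below is illustrative only (the demo file was generated separately, and the noise level shown is an arbitrary choice):

```python
import random

def perturb(pixels, stddev=0.2, seed=0):
    """Add Gaussian noise to [0, 1]-normalized pixel values and clip back into range."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, stddev))) for p in pixels]

# Perturb a placeholder image; the real demo perturbs the frog image's pixels.
original = [0.5] * (32 * 32 * 3)
perturbed = perturb(original)
```

The perturbed image still looks broadly similar to a human, but its pixel statistics differ enough for the VAE detector to flag it.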
In the deployment dashboard, click Predict in the left pane.
Click Browse to upload the cifar10-frog-oip.json file.
Click Predict. The prediction request is processed and the response is displayed.
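The response also follows the Open Inference Protocol, with the class probabilities in an output tensor. A hedged sketch of mapping such a response to a CIFAR-10 label (it assumes the first output tensor holds the 10 class probabilities, which may not match the model's exact output layout):

```python
# The ten CIFAR-10 class labels, in their standard order.
CIFAR10_LABELS = ["airplane", "automobile", "bird", "cat", "deer",
                  "dog", "frog", "horse", "ship", "truck"]

def top_class(response):
    """Return the CIFAR-10 label with the highest probability.

    Assumes the first output tensor in the OIP response holds the
    10 class probabilities as a flat list.
    """
    probs = response["outputs"][0]["data"]
    return CIFAR10_LABELS[probs.index(max(probs))]
```

For the unperturbed frog image, the top class should be "frog".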
Click Remove to remove the uploaded file.
Click Browse again to make a prediction with the perturbed image of the frog using the cifar10-frog-perturbed-oip.json file. The prediction request is processed and the response is displayed.
Navigate to the Requests page using the left navigation drawer to view the requests made to the model and their prediction responses. Outlier scores are displayed on the right side of each instance.
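Conceptually, each instance's outlier score is compared against the detector's threshold to decide whether to flag it. A minimal sketch of that decision (the scores and threshold below are made-up illustrative values, not taken from the demo):

```python
def flag_outliers(scores, threshold):
    """Return True for each instance whose outlier score exceeds the threshold."""
    return [score > threshold for score in scores]

# Illustrative scores: the second instance would be flagged as an outlier.
flags = flag_outliers([0.002, 0.081, 0.0015], threshold=0.01)
```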
You can also highlight outlier prediction responses using the Highlight Outliers toggle on the top-right of the screen.
Furthermore, you can filter the requests to show only outliers by clicking the Filter icon in the top right (next to the page number) and toggling the Outliers option.
It is important to be able to monitor the outlier detection requests in real-time to ensure that the model is performing as expected and to take corrective actions when necessary.
Click Monitor in the left pane.
Select the Outlier Detection tab to view a timeline graph of outlier/inlier requests.
Congratulations, you've successfully demonstrated how to identify outliers in your inference data using Alibi Detect's VAE outlier detection method! 🥳
Why not try our other demos? Ready to dive in? Read our operations guide to learn more about how to use Enterprise Platform.
If you experience issues with this demo, see the troubleshooting docs and also the Knative or Elasticsearch sections.
The Model Name is linked to the name defined in the model-settings.json file located in the Google Cloud Storage location. Changing the name in the JSON file requires changing the Model Name accordingly, and vice versa.
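For example, a model-settings.json along these lines would match the Model Name used in this demo (only the `name` field is shown; any other fields in the actual file are omitted here):

```json
{
  "name": "cifar10"
}
```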