Image Explanations
Understanding how complex models make predictions is essential for promoting transparency and trust, and for identifying potential biases. Model explainers offer insights into how features influence outcomes, supporting debugging and model refinement.
We are going to demonstrate model explanations using Alibi Explain's Anchor Images method, noting the segments of an input image that most influenced the prediction, as well as the anchor's precision and coverage metrics.
In this demo we will:
Launch an image classification deployment
Send prediction requests to the deployment
Create an explainer for the deployment
Generate explanations for previously sent prediction requests
The model used in this demo was trained to classify images from the CIFAR10 dataset.
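The following is a minimal sketch of loading a CIFAR10 test image of a frog with TensorFlow and scaling it for the classifier; the float32, [0, 1] preprocessing is an assumption about the model's expected input, so adjust it if your model differs.

```python
# Minimal sketch: load a CIFAR10 frog image and scale it for the classifier.
# Assumes the model expects 32x32x3 float32 pixels in [0, 1] (an assumption,
# not confirmed by the model's configuration).
import numpy as np
import tensorflow as tf

# CIFAR10 class labels; index 6 is "frog".
CLASS_NAMES = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Pick the first frog in the test set.
frog_idx = int(np.where(y_test.flatten() == 6)[0][0])
image = x_test[frog_idx].astype("float32") / 255.0  # shape (32, 32, 3)

print(CLASS_NAMES[int(y_test[frog_idx][0])], image.shape)
```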
Click the Create new deployment button.
Enter the deployment details as follows:
Name: cifar10-classifier
Namespace: seldon
Type: Seldon Deployment
Configure the default predictor as follows:
Runtime: Triton (ONNX, PyTorch, Tensorflow, TensorRT)
Model Project: default
Model URI: gs://seldon-models/triton/tf_cifar10
Storage Secret: (leave blank/none)
Model Name: cifar10
Click the Next button until the end and, finally, click Launch.
If your deployment is launched successfully, it will have the Available status.
We will make a prediction request using the image of a frog from the CIFAR10 dataset. The image is encoded as a JSON file in the REST format of the Open Inference Protocol (OIP).
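For reference, such a payload can also be built by hand. Below is a minimal sketch that constructs the OIP (V2) REST request for a single image and writes it to a JSON file; the input tensor name input_1 is an assumption and must match the name the Triton model actually exposes.

```python
# Minimal sketch: build an Open Inference Protocol (V2) REST payload for one
# CIFAR10 image and save it to a file that can be uploaded on the Predict page.
# The tensor name "input_1" is an assumption; use the name your model exposes.
import json
import numpy as np

image = np.random.rand(32, 32, 3).astype("float32")  # replace with a real CIFAR10 image

payload = {
    "inputs": [
        {
            "name": "input_1",            # assumed input tensor name
            "shape": [1, 32, 32, 3],
            "datatype": "FP32",
            "data": image.flatten().tolist(),
        }
    ]
}

with open("cifar10_frog.json", "w") as f:
    json.dump(payload, f)
```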
Click on the cifar10-classifier deployment created in the previous section to enter the deployment dashboard.
Inside the deployment dashboard, click the Predict button.
On the Predict page, click Browse to select and upload the previously downloaded prediction file.
Click the Predict button. The prediction request should succeed and the response will be shown.
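If you prefer to call the deployment programmatically rather than through the UI, the sketch below posts the same payload over REST; the ingress host and URL path are assumptions and should be replaced with the endpoint shown for your cluster and deployment.

```python
# Minimal sketch: send the saved OIP payload to the deployment over REST.
# The endpoint below is an assumed URL pattern; substitute your own ingress
# host and the path shown for your deployment.
import json
import requests

ENDPOINT = "http://<ingress-host>/seldon/seldon/cifar10-classifier/v2/models/cifar10/infer"

with open("cifar10_frog.json") as f:
    payload = json.load(f)

response = requests.post(ENDPOINT, json=payload)
response.raise_for_status()

# The response follows the OIP format: output tensors containing class scores.
print(json.dumps(response.json(), indent=2))
```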
From the cifar10-classifier deployment dashboard, click Add inside the Model Explanation card.
For step 1 of the Explainer Configuration Wizard, select the Image model data type and click Next.
For step 2, make sure Anchor is selected, then click Next.
For step 3, enter the following values in the Explainer URI tab:
Explainer URI: gs://seldon-models/tfserving/cifar10/cifar10_anchor_image_py3.7_alibi-0.7.0
Model Project: default
Storage Secret: (leave blank/none)
Click the Next button until the end and, finally, click Launch.
After a short while, the explainer should become available.
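For context, the explainer artifact referenced by the Explainer URI was prepared ahead of time with Alibi. The sketch below shows one way such an Anchor Images explainer could be created and saved; the predict function, segmentation settings, and output path are illustrative rather than the exact artifact used here.

```python
# Minimal sketch: build and save an Alibi AnchorImage explainer. The predictor,
# segmentation settings, and save path are illustrative assumptions.
import numpy as np
from alibi.explainers import AnchorImage

def predict_fn(x: np.ndarray) -> np.ndarray:
    """Stand-in for the deployed classifier; in practice this would call the
    cifar10 model (locally or over REST) and return class probabilities."""
    return np.random.rand(x.shape[0], 10)

explainer = AnchorImage(
    predictor=predict_fn,
    image_shape=(32, 32, 3),
    segmentation_fn="slic",  # superpixel segmentation of the input image
    segmentation_kwargs={"n_segments": 15, "compactness": 20, "sigma": 0.5},
)

explainer.save("cifar10_anchor_image")  # directory later served by the explainer
```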
Now that the explainer is available, we can make use of it to generate an explanation for the prediction request we made earlier.
Click on the cifar10-classifier deployment created in the previous sections to enter the deployment dashboard.
Navigate to the Requests page using the left navigation drawer, and you will see the request you made earlier.
Click the View explanation button to generate an explanation for the request.
After a few seconds, the explanation should be generated and displayed on the page.
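The explanation highlights the anchor (the most influential superpixels) alongside its precision and coverage. For context, the sketch below shows how the same fields could be generated and inspected locally with Alibi; the stub predictor, threshold, and paths are illustrative.

```python
# Minimal sketch: generate an explanation locally and read its anchor,
# precision, and coverage. The stub predictor and paths are illustrative.
import numpy as np
from alibi.saving import load_explainer

def predict_fn(x: np.ndarray) -> np.ndarray:
    """Stand-in predictor; in practice this would call the deployed model."""
    return np.random.rand(x.shape[0], 10)

explainer = load_explainer("cifar10_anchor_image", predictor=predict_fn)

image = np.random.rand(32, 32, 3).astype("float32")  # replace with the frog image
explanation = explainer.explain(image, threshold=0.95)

print("precision:", explanation.precision)  # fraction of perturbed samples keeping the prediction
print("coverage:", explanation.coverage)    # how broadly the anchor applies across perturbations
# explanation.anchor is the image masked to the influential superpixels.
print("anchor shape:", np.asarray(explanation.anchor).shape)
```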
Congratulations, you've created an explanation for the request! 🥳
Why not try our other demos? Ready to dive in? Read our operations guide to learn more about how to use Enterprise Platform.
The Model Name is linked to the name described in the model-settings.json file, located in the Google Cloud Storage location. Changing the name in the JSON file would also require changing the Model Name, and vice versa.