Image Explanations

Understanding how complex models make predictions is crucial for ensuring transparency, building trust, and identifying potential biases. Model explainers provide insights into how features influence outcomes, aiding in debugging and refining models.

In this demonstration, you can learn about using Alibi Explain's Anchor Images method to explore model explanations. This includes identifying the segments of an input image that had the most influence on the prediction and analyzing the anchor's Precision and Coverage metrics.

This demo helps you learn about:

  • Launching an image classification pipeline

  • Sending prediction requests to the pipeline

  • Creating an explainer for the pipeline

  • Generating explanations for previously sent prediction requests

The model used in this demo is already trained on the CIFAR10 dataset to classify images into its ten classes.
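As context for interpreting the prediction responses later in this demo, the short Python sketch below maps a model output vector to one of the ten CIFAR10 class labels. It assumes the model returns class scores in the conventional CIFAR10 label order, which is typical for CIFAR10 demos but is an assumption here, not something stated by the platform.

  import numpy as np

  # The ten CIFAR10 class labels in their conventional index order (0-9).
  CIFAR10_CLASSES = [
      "airplane", "automobile", "bird", "cat", "deer",
      "dog", "frog", "horse", "ship", "truck",
  ]

  def label_for(probabilities):
      """Return the class label with the highest predicted probability."""
      return CIFAR10_CLASSES[int(np.argmax(probabilities))]

  # Illustrative output vector: index 6 has the highest score, which
  # corresponds to "frog" in the conventional ordering.
  example_probabilities = [0.0, 0.0, 0.01, 0.02, 0.0, 0.01, 0.95, 0.0, 0.01, 0.0]
  print(label_for(example_probabilities))  # prints "frog"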

Create Seldon Pipeline

  1. In the Overview page, click Create new deployment.

  2. Enter the following details for the deployment:

    • name: cifar10-classifier

    • namespace: seldon

    • Type: Seldon ML Pipeline

Deployment details
  3. Configure the default predictor as follows:

    • Runtime: Tensorflow

    • Model Project: default

    • Model URI:

     gs://seldon-models/triton/tf_cifar10
    • Storage Secret: (leave blank/none)

  4. Click Next for the remaining step and click Launch.

  5. When your deployment is launched successfully, the status of the deployment becomes Available.

Make Predictions

You can make a prediction request using the image of a frog from the CIFAR10 dataset. The image is provided as a JSON file in the REST format of the Open Inference Protocol.
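For reference, the sketch below shows how an Open Inference Protocol request like cifar10-frog-oip.json can be constructed and sent programmatically with Python. The input tensor name, the image values, and the endpoint URL are placeholders; take the actual inference URL from the deployment dashboard and the input name from the model's metadata.

  import json
  import numpy as np
  import requests

  # A 32x32 RGB image as float32 values; a random image stands in here for
  # the frog picture shipped with the demo.
  image = np.random.rand(32, 32, 3).astype(np.float32)

  # Open Inference Protocol (V2) REST payload, similar to cifar10-frog-oip.json.
  payload = {
      "inputs": [
          {
              "name": "input_1",                 # placeholder: use the model's real input name
              "shape": [1, 32, 32, 3],
              "datatype": "FP32",
              "data": image.flatten().tolist(),  # tensor contents in row-major order
          }
      ]
  }

  # Placeholder endpoint: copy the actual inference URL from the deployment dashboard.
  endpoint = "http://<ingress-host>/v2/models/cifar10-classifier/infer"
  response = requests.post(endpoint, json=payload)
  print(json.dumps(response.json(), indent=2))   # the "outputs" field holds the class scores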

  1. In the Overview page, click the cifar10-classifier pipeline that you created.

  2. In the deployment dashboard, click Predict in the left pane.

  3. In the Predict page, click Browse and upload the cifar10-frog-oip.json file.

  4. Click Predict.

Successful prediction using a JSON file

Add an Anchor Images Explainer

  1. In the cifar10-classifier deployment dashboard, click Add inside the MODEL EXPLANATION card.

  2. In the Explainer Configuration Wizard, choose Image and click Next.

Explainer Model Data Type
  3. In the Explainer Types step, choose the Anchor option for Explainer Algorithms supported and click Next.

  4. In the Explainer URI step, set the following details:

    • Explainer URI: gs://seldon-models/tfserving/cifar10/cifar10_anchor_image_py3.7_alibi-0.7.0

    • Model Project: default

    • Storage Secret: (leave blank/none)

Explainer URI
  5. Click Next for the remaining steps without changing any fields, and click Launch.

    After some time, the explainer becomes available.
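As background, the explainer artifact referenced above was produced with the open-source Alibi library (alibi 0.7.0, as the URI suggests). The Python sketch below shows how an Anchor Images explainer of this kind is typically created and queried; the model path, preprocessing, and segmentation settings are illustrative assumptions, not the exact configuration used to build the artifact.

  import tensorflow as tf
  from alibi.explainers import AnchorImage

  # Load the trained CIFAR10 classifier (placeholder path) and a test image,
  # scaled the same way the model expects.
  model = tf.keras.models.load_model("cifar10_model")          # placeholder path
  (_, _), (x_test, _) = tf.keras.datasets.cifar10.load_data()
  x_test = x_test.astype("float32") / 255.0
  image = x_test[0]

  def predict_fn(x):
      # Batch of images with shape (N, 32, 32, 3) -> class scores.
      return model.predict(x)

  # Segment each image into superpixels and search for an "anchor": a set of
  # superpixels that (almost) always leads to the same prediction.
  explainer = AnchorImage(
      predict_fn,
      image_shape=(32, 32, 3),
      segmentation_fn="slic",
      segmentation_kwargs={"n_segments": 10, "compactness": 20, "sigma": 0.5},
  )

  explanation = explainer.explain(image, threshold=0.95)
  print(explanation.precision)   # fraction of perturbed samples that keep the prediction
  print(explanation.coverage)    # how much of the perturbation space the anchor covers
  # explanation.anchor holds the image with only the anchor superpixels visible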

Get an Explanation for the Request

  1. In the cifar10-classifier deployment dashboard, click Requests in the left pane.

  2. Click the View explanation button to generate explanations for the request.

Previously made prediction request with its predicted response

After some time, the explanation for the request is displayed. It highlights the frog image segments that influenced the prediction the most, reports the anchor's Precision and Coverage metrics, and shows perturbed samples of the prediction request that comply with the prediction. Precision is the fraction of perturbed samples containing the anchor for which the model returns the same prediction, and Coverage is the fraction of the perturbation space to which the anchor applies.
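To make the two metrics concrete, here is a small, purely illustrative Python sketch of how they are defined. The sample values are made up; the platform computes the real numbers from the perturbed images it generates.

  import numpy as np

  # Precision: among perturbed images that contain the anchor superpixels,
  # the fraction for which the model still predicts the original class.
  predictions_with_anchor = np.array(["frog", "frog", "frog", "deer", "frog"])
  precision = np.mean(predictions_with_anchor == "frog")      # 0.8

  # Coverage: the fraction of perturbed images (the perturbation space)
  # to which the anchor applies at all.
  anchor_applies = np.array([True, False, True, True, False, True])
  coverage = np.mean(anchor_applies)                          # 0.5

  print(f"precision={precision:.2f}, coverage={coverage:.2f}")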

Next Steps

Try the other demos or read our operations guide to learn more about how to use Seldon Enterprise Platform.
