Model Explanations

Seldon provides model explanations using its Alibi library.

The v1 explainer server supports explainers saved with Python 3.7. However, for the Open Inference Protocol (or V2 protocol) using MLServer, this requirement no longer applies.

| Package | Version |
|---------|---------|
| alibi   | 0.6.4   |

Available Methods

Seldon Core supports a subset of the methods currently available in Alibi. Presently this includes the following:

| Method | Explainer Key |
|--------|---------------|
| Anchor Tabular | AnchorTabular |
| Anchor Text | AnchorText |
| Anchor Images | AnchorImages |
| Kernel Shap | KernelShap |
| Integrated Gradients | IntegratedGradients |
| Tree Shap | TreeShap |

Creating your explainer

For Alibi explainers that need to be trained you should:

  1. Use Python 3.7, as the Seldon Alibi Explain Server also runs Python 3.7.10 when it loads your explainer.

  2. Follow the Alibi docs for your particular desired explainer. The Seldon wrapper presently supports: Anchors (Tabular, Text and Image), KernelShap and Integrated Gradients.

  3. Save your explainer using the explainer.save method and store it in an object store or PVC in your cluster (see the sketch after this list). We support various cloud storage solutions through our init container.
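As an illustration, here is a minimal sketch of these steps using Alibi's AnchorTabular on a toy scikit-learn model. The dataset, model, and output directory are placeholders; substitute your own training data and predictor:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Toy data and model; replace with your own training set and predictor
data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# The predictor must accept a batch of rows and return predictions
explainer = AnchorTabular(clf.predict, feature_names=list(data.feature_names))
explainer.fit(data.data)  # fit the sampler on the training data

# Save to a local directory, then upload that directory to the object store
# or PVC that your deployment's init container will pull from
explainer.save("./explainer/")
```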

The runtime environment in our Alibi Explain Server is locked using Poetry. See our e2e example here on how to use that definition to train your explainers.

Open Inference Protocol for explainers using MLServer

Going forward, support for the Open Inference Protocol is handled by MLServer. This is experimental and only works for black-box explainers.

For an end-to-end example, please check the AnchorTabular notebook here.
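As a rough sketch (the host, explainer model name, and input values are hypothetical placeholders), querying a black-box explainer served with MLServer over the Open Inference Protocol could look like:

```python
import requests

# Hypothetical host and explainer model name
url = "http://localhost:8080/v2/models/income-explainer/infer"

# Standard Open Inference Protocol request body with one row of features
payload = {
    "inputs": [
        {
            "name": "explain",
            "shape": [1, 12],
            "datatype": "FP32",
            "data": [39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9],
        }
    ]
}

resp = requests.post(url, json=payload)
print(resp.json())  # the explanation is returned in the response outputs
```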

Explain API

Note: Seldon has adopted the industry-standard Open Inference Protocol (OIP) and is no longer maintaining the Seldon and TensorFlow protocols. This transition allows for greater interoperability among various model serving runtimes, such as MLServer. To learn more about implementing OIP for model serving in Seldon Core 1, see MLServer.

We strongly encourage you to adopt the OIP, which provides seamless integration across diverse model serving runtimes, supports the development of versatile client and benchmarking tools, and ensures a high-performance, consistent, and unified inference experience.

For the Seldon protocol, an explain endpoint path is exposed for each deployed explainer. For example, if you deployed an explainer and were port forwarding to Ambassador or Istio on localhost:8003, the API call would follow the Seldon URI pattern listed below.

The explain method is also supported for the tensorflow and Open Inference protocols. The full list of endpoint URIs is:

| Protocol   | URI |
|------------|-----|
| v2         | http://<host>/<ingress-path>/v2/models/<model-name>/infer |
| seldon     | http://<host>/<ingress-path>/api/v1.0/explain |
| tensorflow | http://<host>/<ingress-path>/v1/models/<model-name>:explain |

Note: for the tensorflow protocol we support a similar non-standard extension as for the prediction API, http://<host>/<ingress-path>/v1/models/:explain.
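As an illustration, a call to the Seldon protocol explain endpoint might look like the sketch below; the namespace, deployment and predictor names, and the payload values are hypothetical placeholders:

```python
import requests

# Hypothetical ingress path: for the Seldon protocol this is typically
# /seldon/<namespace>/<deployment-name>/<predictor-name>
url = "http://localhost:8003/seldon/seldon/income-explainer/default/api/v1.0/explain"

# Seldon protocol request body with a single hypothetical row of features
payload = {"data": {"ndarray": [[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}}

resp = requests.post(url, json=payload)
print(resp.json())  # returns the explanation produced by the Alibi explainer
```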
