  • Overview
    • Introduction
    • Getting Started
    • Algorithm Overview
    • White-box and black-box models
    • Saving and loading
    • Frequently Asked Questions
  • Explanations
    • Methods
    • Examples
      • Alibi Overview Examples
      • Accumulated Local Effects
      • Anchors
        • Anchor explanations for fashion MNIST
        • Anchor explanations for ImageNet
        • Anchor explanations for income prediction
        • Anchor explanations on the Iris dataset
        • Anchor explanations for movie sentiment
      • Contrastive Explanation Method
      • Counterfactual Instances on MNIST
      • Counterfactuals Guided by Prototypes
      • Counterfactuals with Reinforcement Learning
      • Integrated Gradients
      • Kernel SHAP
      • Partial Dependence
      • Partial Dependence Variance
      • Permutation Importance
      • Similarity explanations
      • Tree SHAP
  • Model Confidence
    • Methods
    • Examples
  • Prototypes
    • Methods
    • Examples
  • API Reference
    • alibi.api
    • alibi.confidence
    • alibi.datasets
    • alibi.exceptions
    • alibi.explainers
    • alibi.models
    • alibi.prototypes
    • alibi.saving
    • alibi.utils
    • alibi.version

Anchors

  • Anchor explanations for fashion MNIST
  • Anchor explanations for ImageNet
  • Anchor explanations for income prediction
  • Anchor explanations on the Iris dataset
  • Anchor explanations for movie sentiment

Last updated 5 months ago
