Similarity explanations

  • Similarity explanations for 20 newsgroups dataset
  • Similarity explanations for ImageNet
  • Similarity explanations for MNIST
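
The examples above walk through Alibi's similarity explainer on text, image and digit data. As a rough orientation, the sketch below shows the general shape of the `GradientSimilarity` API from `alibi.explainers`; the toy model, the random data, the choice of `sim_fn='grad_cos'` and the key read off the returned explanation are illustrative assumptions, not excerpts from the linked notebooks, and details may differ between Alibi versions.

```python
# Minimal sketch of a gradient-based similarity explanation (assumptions noted above).
import numpy as np
import tensorflow as tf
from alibi.explainers import GradientSimilarity

# Toy classifier and data, standing in for the models/datasets used in the examples.
X_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=100)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(X_train, y_train, epochs=1, verbose=0)

# Fit the explainer on the training set, then ask which training instances are
# most similar (in gradient space) to a given instance.
explainer = GradientSimilarity(
    model,
    loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(),
    sim_fn='grad_cos',        # 'grad_dot' is the default; 'grad_cos' is an assumed alternative here
    task='classification',
)
explainer.fit(X_train, y_train)
explanation = explainer.explain(X_train[:1], y_train[:1])

# Training-set indices ordered from most to least similar for the first instance
# (the exact fields of explanation.data may vary across Alibi versions).
print(explanation.data['ordered_indices'][0][:5])
```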