  • Overview
    • Introduction
    • Getting Started
    • Algorithm Overview
    • White-box and black-box models
    • Saving and loading
    • Frequently Asked Questions
  • Explanations
    • Methods
    • Examples
      • Alibi Overview Examples
      • Accumulated Local Effects
      • Anchors
      • Contrastive Explanation Method
        • Contrastive Explanations Method (CEM) applied to Iris dataset
        • Contrastive Explanations Method (CEM) applied to MNIST
      • Counterfactual Instances on MNIST
      • Counterfactuals Guided by Prototypes
      • Counterfactuals with Reinforcement Learning
      • Integrated Gradients
      • Kernel SHAP
      • Partial Dependence
      • Partial Dependence Variance
      • Permutation Importance
      • Similarity explanations
      • Tree SHAP
  • Model Confidence
    • Methods
    • Examples
  • Prototypes
    • Methods
    • Examples
  • API Reference
    • alibi.api
    • alibi.confidence
    • alibi.datasets
    • alibi.exceptions
    • alibi.explainers
    • alibi.models
    • alibi.prototypes
    • alibi.saving
    • alibi.utils
    • alibi.version

Contrastive Explanation Method

  • Contrastive Explanations Method (CEM) applied to Iris dataset
  • Contrastive Explanations Method (CEM) applied to MNIST

Last updated 5 months ago
