  • Overview
    • Introduction
    • Getting Started
    • Algorithm Overview
    • White-box and black-box models
    • Saving and loading
    • Frequently Asked Questions
  • Explanations
    • Methods
      • ALE
      • Anchors
      • CEM
      • CF
      • CFProto
      • CFRL
      • IntegratedGradients
      • KernelSHAP
      • LinearityMeasure
      • PartialDependence
      • PartialDependenceVariance
      • PermutationImportance
      • ProtoSelect
      • Similarity
      • TreeSHAP
      • TrustScores
    • Examples
      • Alibi Overview Examples
      • Accumulated Local Effects
        • Accumulated Local Effects for classifying flowers
        • Accumulated Local Effects for predicting house prices
      • Anchors
        • Anchor explanations for fashion MNIST
        • Anchor explanations for ImageNet
        • Anchor explanations for income prediction
        • Anchor explanations on the Iris dataset
        • Anchor explanations for movie sentiment
      • Contrastive Explanation Method
        • Contrastive Explanations Method (CEM) applied to Iris dataset
        • Contrastive Explanations Method (CEM) applied to MNIST
      • Counterfactual Instances on MNIST
      • Counterfactuals Guided by Prototypes
        • Counterfactual explanations with one-hot encoded categorical variables
        • Counterfactual explanations with ordinally encoded categorical variables
        • Counterfactuals guided by prototypes on California housing dataset
        • Counterfactuals guided by prototypes on MNIST
      • Counterfactuals with Reinforcement Learning
        • Counterfactual with Reinforcement Learning (CFRL) on Adult Census
        • Counterfactual with Reinforcement Learning (CFRL) on MNIST
      • Integrated Gradients
        • Integrated gradients for a ResNet model trained on Imagenet dataset
        • Integrated gradients for text classification on the IMDB dataset
        • Integrated gradients for MNIST
        • Integrated gradients for transformers models
      • Kernel SHAP
        • Distributed KernelSHAP
        • KernelSHAP: combining preprocessor and predictor
        • Handling categorical variables with KernelSHAP
        • Kernel SHAP explanation for SVM models
        • Kernel SHAP explanation for multinomial logistic regression models
      • Partial Dependence
        • Partial Dependence and Individual Conditional Expectation for predicting bike renting
      • Partial Dependence Variance
        • Feature importance and feature interaction based on partial dependence variance
      • Permutation Importance
        • Permutation Feature Importance on “Who’s Going to Leave Next?”
      • Similarity explanations
        • Similarity explanations for 20 newsgroups dataset
        • Similarity explanations for ImageNet
        • Similarity explanations for MNIST
      • Tree SHAP
        • Explaining Tree Models with Interventional Feature Perturbation Tree SHAP
        • Explaining Tree Models with Path-Dependent Feature Perturbation Tree SHAP
  • Model Confidence
    • Methods
      • Measuring the linearity of machine learning models
      • Trust Scores
    • Examples
      • Measuring the linearity of machine learning models
        • Linearity measure applied to fashion MNIST
        • Linearity measure applied to Iris
      • Trust Scores
        • Trust Scores applied to Iris
        • Trust Scores applied to MNIST
  • Prototypes
    • Methods
      • ProtoSelect
    • Examples
      • ProtoSelect on Adult Census and CIFAR10
  • API Reference
    • alibi.api
      • alibi.api.defaults
      • alibi.api.interfaces
    • alibi.confidence
      • alibi.confidence.model_linearity
      • alibi.confidence.trustscore
    • alibi.datasets
      • alibi.datasets.default
      • alibi.datasets.tensorflow
    • alibi.exceptions
    • alibi.explainers
      • alibi.explainers.ale
      • alibi.explainers.anchors
        • alibi.explainers.anchors.anchor_base
        • alibi.explainers.anchors.anchor_explanation
        • alibi.explainers.anchors.anchor_image
        • alibi.explainers.anchors.anchor_tabular
        • alibi.explainers.anchors.anchor_tabular_distributed
        • alibi.explainers.anchors.anchor_text
        • alibi.explainers.anchors.language_model_text_sampler
        • alibi.explainers.anchors.text_samplers
      • alibi.explainers.backends
        • alibi.explainers.backends.cfrl_base
        • alibi.explainers.backends.cfrl_tabular
        • alibi.explainers.backends.pytorch
          • alibi.explainers.backends.pytorch.cfrl_base
          • alibi.explainers.backends.pytorch.cfrl_tabular
        • alibi.explainers.backends.tensorflow
          • alibi.explainers.backends.tensorflow.cfrl_base
          • alibi.explainers.backends.tensorflow.cfrl_tabular
      • alibi.explainers.cem
      • alibi.explainers.cfproto
      • alibi.explainers.cfrl_base
      • alibi.explainers.cfrl_tabular
      • alibi.explainers.counterfactual
      • alibi.explainers.integrated_gradients
      • alibi.explainers.partial_dependence
      • alibi.explainers.pd_variance
      • alibi.explainers.permutation_importance
      • alibi.explainers.shap_wrappers
      • alibi.explainers.similarity
        • alibi.explainers.similarity.backends
          • alibi.explainers.similarity.backends.pytorch
            • alibi.explainers.similarity.backends.pytorch.base
          • alibi.explainers.similarity.backends.tensorflow
            • alibi.explainers.similarity.backends.tensorflow.base
        • alibi.explainers.similarity.base
        • alibi.explainers.similarity.grad
        • alibi.explainers.similarity.metrics
    • alibi.models
      • alibi.models.pytorch
        • alibi.models.pytorch.actor_critic
        • alibi.models.pytorch.autoencoder
        • alibi.models.pytorch.cfrl_models
        • alibi.models.pytorch.metrics
        • alibi.models.pytorch.model
      • alibi.models.tensorflow
        • alibi.models.tensorflow.actor_critic
        • alibi.models.tensorflow.autoencoder
        • alibi.models.tensorflow.cfrl_models
    • alibi.prototypes
      • alibi.prototypes.protoselect
    • alibi.saving
    • alibi.utils
      • alibi.utils.approximation_methods
      • alibi.utils.data
      • alibi.utils.discretizer
      • alibi.utils.distance
      • alibi.utils.distributed
      • alibi.utils.distributions
      • alibi.utils.download
      • alibi.utils.frameworks
      • alibi.utils.gradients
      • alibi.utils.kernel
      • alibi.utils.lang_model
      • alibi.utils.mapping
      • alibi.utils.missing_optional_dependency
      • alibi.utils.tf
      • alibi.utils.visualization
      • alibi.utils.wrappers
    • alibi.version
Trust Scores

  • Trust Scores applied to Iris
  • Trust Scores applied to MNIST
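As a quick orientation before the notebooks: a trust score compares the distance from a test instance to the class predicted by the model against the distance to the nearest other class, so that scores above 1 indicate agreement with nearby training data. The sketch below is a minimal, plain-NumPy illustration of that ratio, not the alibi `TrustScore` implementation (which filters outliers and uses k-d trees); the function name and toy data are invented for the example.

```python
import numpy as np

def trust_score(X_train, y_train, x, pred):
    """Minimal trust-score sketch: distance to the nearest non-predicted
    class divided by distance to the predicted class. Scores > 1 mean the
    instance lies closer to its predicted class than to any other."""
    # nearest-neighbour distance from x to each class in the training set
    dists = {c: np.min(np.linalg.norm(X_train[y_train == c] - x, axis=1))
             for c in np.unique(y_train)}
    d_pred = dists[pred]
    d_other = min(d for c, d in dists.items() if c != pred)
    return d_other / d_pred

# toy data: two well-separated clusters in 2-d
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])

x_test = np.array([0.05, 0.0])
print(trust_score(X, y, x_test, pred=0))  # large: prediction agrees with neighbours
print(trust_score(X, y, x_test, pred=1))  # < 1: prediction disagrees with neighbours
```

The notebooks linked above show the production version (`alibi.confidence.TrustScore`) applied to Iris and MNIST, including the filtering and latent-space steps this sketch omits.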

Last updated 24 days ago