Counterfactuals guided by prototypes on MNIST
pip install alibi[tensorflow]

import tensorflow as tf
tf.get_logger().setLevel(40) # suppress deprecation messages
tf.compat.v1.disable_v2_behavior() # disable TF2 behaviour as alibi code still relies on TF1 constructs
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
from time import time
from alibi.explainers import CounterfactualProto
print('TF version: ', tf.__version__)
print('Eager execution enabled: ', tf.executing_eagerly()) # False

Load and prepare MNIST data
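The loading and preprocessing code is not included in this export. A minimal sketch (variable names such as `x_train` are illustrative, and the shift to the `[-0.5, 0.5]` range is one common choice):

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# load MNIST, scale pixels from [0, 255] to [-0.5, 0.5] and add a channel axis
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32').reshape(-1, 28, 28, 1) / 255. - .5
x_test = x_test.astype('float32').reshape(-1, 28, 28, 1) / 255. - .5

# one-hot encode the labels for the softmax classifier
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
```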

Define and train CNN model
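The model definition is missing from this page. A plausible Keras CNN for this task (the exact architecture and hyperparameters are assumptions, not necessarily the notebook's verbatim model):

```python
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, Input, MaxPooling2D
from tensorflow.keras.models import Model

def cnn_model():
    x_in = Input(shape=(28, 28, 1))
    x = Conv2D(32, 2, padding='same', activation='relu')(x_in)
    x = MaxPooling2D(pool_size=2)(x)
    x = Dropout(0.3)(x)
    x = Conv2D(64, 2, padding='same', activation='relu')(x)
    x = MaxPooling2D(pool_size=2)(x)
    x = Dropout(0.3)(x)
    x = Flatten()(x)
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.5)(x)
    x_out = Dense(10, activation='softmax')(x)
    return Model(inputs=x_in, outputs=x_out)

cnn = cnn_model()
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# cnn.fit(x_train, y_train, batch_size=64, epochs=3)  # train on the prepared data
```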
Define and train auto-encoder
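A simple convolutional autoencoder sketch; its encoder output is what defines the class prototypes later on. Layer sizes and the 14x14x1 encoding shape are illustrative choices:

```python
from tensorflow.keras.layers import Conv2D, Input, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

def ae_model():
    # encoder: 28x28x1 image -> 14x14x1 encoding
    x_in = Input(shape=(28, 28, 1))
    x = Conv2D(16, 3, activation='relu', padding='same')(x_in)
    x = Conv2D(16, 3, activation='relu', padding='same')(x)
    x = MaxPooling2D(pool_size=2, padding='same')(x)
    encoded = Conv2D(1, 3, activation=None, padding='same')(x)
    encoder = Model(x_in, encoded)

    # decoder: 14x14x1 encoding -> 28x28x1 reconstruction
    dec_in = Input(shape=(14, 14, 1))
    x = Conv2D(16, 3, activation='relu', padding='same')(dec_in)
    x = UpSampling2D(size=2)(x)
    x = Conv2D(16, 3, activation='relu', padding='same')(x)
    decoded = Conv2D(1, 3, activation=None, padding='same')(x)
    decoder = Model(dec_in, decoded)

    autoencoder = Model(x_in, decoder(encoder(x_in)))
    return autoencoder, encoder, decoder

ae, enc, dec = ae_model()
ae.compile(optimizer='adam', loss='mse')
# ae.fit(x_train, x_train, batch_size=128, epochs=4)  # reconstruct the inputs
```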

Generate counterfactual guided by the nearest class prototype



Prototypes defined by the $k$ nearest encoded instances


Remove the autoencoder loss term $L_{AE}$

Specify prototype classes



Speed up the counterfactual search by removing the predict function loss term


