The CVM drift detector is a non-parametric drift detector, which applies feature-wise two-sample Cramér-von Mises (CVM) tests. For two empirical distributions $F(z)$ and $F_{ref}(z)$, the CVM test statistic is defined as

$$W = \sum_{k=1}^{N} \big(F(z_k) - F_{ref}(z_k)\big)^2,$$

where $k$ runs over the joint sample. The CVM test is an alternative to the Kolmogorov-Smirnov (K-S) two-sample test, which uses the maximum distance between two empirical distributions $F(z)$ and $F_{ref}(z)$. By using the full joint sample, the CVM test can exhibit greater power against shifts in higher moments, such as variance changes.
For multivariate data, the detector applies a separate CVM test to each feature, and the p-values obtained for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur. As with other univariate detectors such as the Kolmogorov-Smirnov detector, for high-dimensional data we typically want to reduce the dimensionality before computing the feature-wise univariate CVM tests and aggregating those via the chosen correction method. See Dimension Reduction for more guidance on this.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
p_val
: p-value used for significance of the CVM test. If the FDR correction method is used, this corresponds to the acceptable q-value.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
correction
: Correction type for multivariate data. Either 'bonferroni' or 'fdr' (False Discovery Rate).
n_features
: Number of features used in the CVM test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
input_shape
: Shape of input data.
data_type
: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized drift detector example:
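A minimal initialization sketch, assuming `x_ref` is a NumPy array of reference instances (the random data and keyword arguments shown here are purely illustrative):

```python
import numpy as np
from alibi_detect.cd import CVMDrift

x_ref = np.random.randn(500, 10)  # hypothetical reference data with 10 features

cd = CVMDrift(x_ref, p_val=.05, correction='bonferroni')
```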
We detect data drift by simply calling predict
on a batch of instances x
. We can return the feature-wise p-values before the multivariate correction by setting return_p_val
to True. The drift can also be detected at the feature level by setting drift_type
to 'feature'. No multivariate correction will take place since we return the output of n_features univariate tests. For drift detection on all the features combined with the correction, use 'batch'. return_p_val
equal to True will also return the threshold used by the detector (either for the univariate case or after the multivariate correction).
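A hedged sketch of the call and the returned keys, assuming `cd` is the detector initialized above and `x` is a test batch with the same number of features as `x_ref`:

```python
import numpy as np

x = np.random.randn(200, 10)  # hypothetical test batch

preds = cd.predict(x, drift_type='batch', return_p_val=True, return_distance=True)
print(preds['data']['is_drift'])  # 1 if drift was detected, 0 otherwise
print(preds['data']['p_val'])     # feature-wise p-values before the correction
```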
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains feature-level p-values if return_p_val
equals True.
threshold
: for feature-level drift detection the threshold equals the p-value used for the significance of the CVM test. Otherwise the threshold after the multivariate correction (either bonferroni or fdr) is returned.
distance
: feature-wise CVM statistics between the reference data and the new batch if return_distance
equals True.
The drift detector applies feature-wise two-sample Kolmogorov-Smirnov (K-S) tests. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur.
For high-dimensional data, we typically want to reduce the dimensionality before computing the feature-wise univariate K-S tests and aggregating those via the chosen correction method. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs (BBSDs) as out-of-the-box preprocessing methods and note that PCA can also be easily implemented using scikit-learn
. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. The adversarial detector which is part of the library can also be transformed into a drift detector picking up drift that reduces the performance of the classification model. We can therefore combine different preprocessing techniques to figure out if there is drift which hurts the model performance, and whether this drift can be classified as input drift or label shift.
Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detect drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from HuggingFace's transformer package but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the Text drift detection on IMDB movie reviews notebook.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
p_val
: p-value used for significance of the K-S test. If the FDR correction method is used, this corresponds to the acceptable q-value.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique.
correction
: Correction type for multivariate data. Either 'bonferroni' or 'fdr' (False Discovery Rate).
alternative
: Defines the alternative hypothesis. Options are 'two-sided' (default), 'less' or 'greater'.
n_features
: Number of features used in the K-S test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
input_shape
: Shape of input data.
data_type
: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized drift detector example:
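A minimal initialization sketch, assuming `x_ref` is a NumPy array of reference instances (the data and keyword arguments are illustrative):

```python
import numpy as np
from alibi_detect.cd import KSDrift

x_ref = np.random.randn(500, 10)  # hypothetical reference data with 10 features

cd = KSDrift(x_ref, p_val=.05, alternative='two-sided', correction='bonferroni')
```

Drift is then detected with `cd.predict(x)` in the same way as for the other univariate detectors described above.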
We detect data drift by simply calling predict
on a batch of instances x
. We can return the feature-wise p-values before the multivariate correction by setting return_p_val
to True. The drift can also be detected at the feature level by setting drift_type
to 'feature'. No multivariate correction will take place since we return the output of n_features univariate tests. For drift detection on all the features combined with the correction, use 'batch'. return_p_val
equal to True will also return the threshold used by the detector (either for the univariate case or after the multivariate correction).
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains feature-level p-values if return_p_val
equals True.
threshold
: for feature-level drift detection the threshold equals the p-value used for the significance of the K-S test. Otherwise the threshold after the multivariate correction (either bonferroni or fdr) is returned.
distance
: feature-wise K-S statistics between the reference data and the new batch if return_distance
equals True.
Drift detection on molecular graphs
The drift detector applies feature-wise Chi-Squared tests for the categorical features. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur. Similarly to the other drift detectors, a preprocessing step can be applied, but the output features need to be categorical.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
p_val
: p-value used for significance of the Chi-Squared test. If the FDR correction method is used, this corresponds to the acceptable q-value.
categories_per_feature
: Optional dictionary with as keys the feature column index and as values the number of possible categorical values for that feature or a list with the possible values. If you know how many categories are present for a given feature you could pass this in the categories_per_feature
dict in the Dict[int, int] format, e.g. {0: 3, 3: 2}. If you pass N categories this will assume the possible values for the feature are [0, ..., N-1]. You can also explicitly pass the possible categories in the Dict[int, List[int]] format, e.g. {0: [0, 1, 2], 3: [0, 55]}. Note that the categories can be arbitrary int values. If it is not specified, categories_per_feature
is inferred from x_ref
.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique. Needs to return categorical features for the Chi-Squared detector.
correction
: Correction type for multivariate data. Either 'bonferroni' or 'fdr' (False Discovery Rate).
n_features
: Number of features used in the Chi-Squared test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
data_type
: can specify data type added to metadata. E.g. 'tabular'.
Initialized drift detector example:
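A minimal initialization sketch, assuming `x_ref` holds integer-encoded categorical reference data (the data and keyword arguments are illustrative):

```python
import numpy as np
from alibi_detect.cd import ChiSquareDrift

# hypothetical categorical reference data: 3 features with integer-encoded categories
x_ref = np.random.randint(0, 4, size=(500, 3))

# categories_per_feature is inferred from x_ref when not passed explicitly
cd = ChiSquareDrift(x_ref, p_val=.05, correction='fdr')
```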
We detect data drift by simply calling predict
on a batch of instances x
. We can return the feature-wise p-values before the multivariate correction by setting return_p_val
to True. The drift can also be detected at the feature level by setting drift_type
to 'feature'. No multivariate correction will take place since we return the output of n_features univariate tests. For drift detection on all the features combined with the correction, use 'batch'. return_p_val
equal to True will also return the threshold used by the detector (either for the univariate case or after the multivariate correction).
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains feature-level p-values if return_p_val
equals True.
threshold
: for feature-level drift detection the threshold equals the p-value used for the significance of the Chi-Square test. Otherwise the threshold after the multivariate correction (either bonferroni or fdr) is returned.
distance
: feature-wise Chi-Square test statistics between the reference data and the new batch if return_distance
equals True.
The Maximum Mean Discrepancy (MMD) detector is a kernel-based method for multivariate 2 sample testing. The MMD is a distance-based measure between 2 distributions p and q based on the mean embeddings $\mu_{p}$ and $\mu_{q}$ in a reproducing kernel Hilbert space $F$:

$$MMD(F, p, q) = || \mu_{p} - \mu_{q} ||^2_{F}$$
We can compute unbiased estimates of $MMD^2$ from the samples of the 2 distributions after applying the kernel trick. We use by default a radial basis function kernel, but users are free to pass their own kernel of preference to the detector. We obtain a $p$-value via a permutation test on the values of $MMD^2$.
For high-dimensional data, we typically want to reduce the dimensionality before computing the permutation test. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs (BBSDs) as out-of-the-box preprocessing methods and note that PCA can also be easily implemented using scikit-learn
. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift.
Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detect drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from HuggingFace's transformer package but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the Text drift detection on IMDB movie reviews notebook.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
backend
: TensorFlow, PyTorch and KeOps implementations of the MMD detector are available. Specify the backend (tensorflow, pytorch or keops). Defaults to tensorflow.
p_val
: p-value used for significance of the permutation test.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique.
kernel
: Kernel used when computing the MMD. Defaults to a Gaussian RBF kernel (from alibi_detect.utils.pytorch import GaussianRBF
, from alibi_detect.utils.tensorflow import GaussianRBF
or from alibi_detect.utils.keops import GaussianRBF
dependent on the backend used). Note that for the KeOps backend, the diagonal entries of the kernel matrices kernel(x_ref, x_ref)
and kernel(x_test, x_test)
should be equal to 1. This is compliant with the default Gaussian RBF kernel.
sigma
: Optional bandwidth for the kernel as a np.ndarray
. We can also average over a number of different bandwidths, e.g. np.array([.5, 1., 1.5])
.
configure_kernel_from_x_ref
: If sigma
is not specified, the detector can infer it via a heuristic and set sigma
to the median (TensorFlow and PyTorch) or the mean pairwise distance between 2 samples (KeOps) by default. If configure_kernel_from_x_ref
is True, we can already set sigma
at initialization of the detector by inferring it from x_ref
, speeding up the prediction step. If set to False, sigma
is computed separately for each test batch at prediction time.
n_permutations
: Number of permutations used in the permutation test.
input_shape
: Optionally pass the shape of the input data.
data_type
: can specify data type added to the metadata. E.g. 'tabular' or 'image'.
Additional PyTorch keyword arguments:
device
: cuda or gpu to use the GPU and cpu for the CPU. If the device is not specified, the detector will try to leverage the GPU if possible and otherwise fall back on CPU.
Additional KeOps keyword arguments:
batch_size_permutations
: KeOps computes the n_permutations
of the MMD^2 statistics in chunks of batch_size_permutations
. Defaults to 1,000,000.
Initialized drift detector examples for each of the available backends:
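A minimal sketch, assuming `x_ref` is a NumPy array of reference features (KeOps additionally requires `pykeops` to be installed):

```python
import numpy as np
from alibi_detect.cd import MMDDrift

x_ref = np.random.randn(500, 32).astype(np.float32)  # hypothetical reference features

cd_tf = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
cd_pt = MMDDrift(x_ref, backend='pytorch', p_val=.05)
cd_ke = MMDDrift(x_ref, backend='keops', p_val=.05)
```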
We can also easily add preprocessing functions for the TensorFlow and PyTorch frameworks. Note that we can also combine for instance a PyTorch preprocessing step with a KeOps detector. The following example uses a randomly initialized image encoder in PyTorch:
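A hedged sketch of such a setup, assuming channels-first 3x32x32 images (the encoder architecture, feature dimension and batch size are illustrative):

```python
from functools import partial
import numpy as np
import torch
import torch.nn as nn
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.pytorch import preprocess_drift

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# hypothetical channels-first image reference data, e.g. 3x32x32
x_ref = np.random.randn(500, 3, 32, 32).astype(np.float32)

# randomly initialized encoder projecting images onto a 32-dimensional feature space
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=0), nn.ReLU(),
    nn.Conv2d(64, 128, 4, stride=2, padding=0), nn.ReLU(),
    nn.Conv2d(128, 512, 4, stride=2, padding=0), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(2048, 32)  # 2048 assumes 32x32 inputs
).to(device).eval()

preprocess_fn = partial(preprocess_drift, model=encoder, device=device, batch_size=512)

# PyTorch preprocessing combined with a KeOps detector
cd = MMDDrift(x_ref, backend='keops', p_val=.05, preprocess_fn=preprocess_fn)
```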
The same functionality is supported in TensorFlow and the main difference is that you would import from alibi_detect.cd.tensorflow import preprocess_drift
. Other preprocessing steps such as the output of hidden layers of a model or extracted text embeddings using transformer models can be used in a similar way in both frameworks. TensorFlow example for the hidden layer output:
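A hedged sketch of the hidden-layer-output variant; the small `clf` model below is a stand-in for your own trained classifier and the shapes are illustrative:

```python
from functools import partial
import numpy as np
import tensorflow as tf
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

x_ref = np.random.randn(500, 32, 32, 3).astype(np.float32)  # hypothetical image data

# stand-in classifier; in practice this would be your trained model
clf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 4, strides=2, activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])

# HiddenOutput exposes the output of a chosen layer; layer=-1 takes the softmax output
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(clf, layer=-1), batch_size=128)

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
```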
Check out the Drift detection on CIFAR10 example for more details.
Alibi Detect also includes custom text preprocessing steps in both TensorFlow and PyTorch based on Huggingface's transformers package:
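A hedged PyTorch sketch; the model name, hidden-state layers, maximum length and the assumption that `x_ref` is a list of reference strings are all illustrative choices:

```python
from functools import partial
import torch
from transformers import AutoTokenizer
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.pytorch import preprocess_drift
from alibi_detect.models.pytorch import TransformerEmbedding

model_name = 'bert-base-cased'  # illustrative pre-trained transformer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# extract embeddings from a selection of hidden states; the layer choice is illustrative
embedding = TransformerEmbedding(model_name, embedding_type='hidden_state',
                                 layers=[-5, -4, -3, -2, -1])

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
preprocess_fn = partial(preprocess_drift, model=embedding.to(device), tokenizer=tokenizer,
                        max_len=100, batch_size=32, device=device)

# x_ref is an assumed list of reference text strings
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn)
```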
Again the same functionality is supported in TensorFlow but with from alibi_detect.cd.tensorflow import preprocess_drift
and from alibi_detect.models.tensorflow import TransformerEmbedding
imports. Check out the Text drift detection on IMDB movie reviews example for more information.
We detect data drift by simply calling predict
on a batch of instances x
. We can return the p-value and the threshold of the permutation test by setting return_p_val
to True and the maximum mean discrepancy metric and threshold by setting return_distance
to True.
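For example, with `cd` any of the MMD detectors initialized above and `x` an assumed test batch:

```python
preds = cd.predict(x, return_p_val=True, return_distance=True)
print(preds['data']['is_drift'], preds['data']['p_val'], preds['data']['distance'])
```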
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains the p-value if return_p_val
equals True.
threshold
: p-value threshold if return_p_val
equals True.
distance
: MMD^2 metric between the reference data and the new batch if return_distance
equals True.
distance_threshold
: MMD^2 metric value from the permutation test which corresponds to the p-value threshold.
Drift detection on molecular graphs
Scaling up drift detection with KeOps
The learned-kernel drift detector (Liu et al., 2020) is an extension of the Maximum Mean Discrepancy drift detector where the kernel used to define the MMD is trained using a portion of the data to maximise an estimate of the resulting test power. Once the kernel has been learned a permutation test is performed in the usual way on the value of the MMD.
This method is closely related to the classifier drift detector which trains a classifier to discriminate between instances from the reference window and instances from the test window. The difference here is that we train a kernel to output high similarity on instances from the same window and low similarity between instances from different windows. If this is possible in a generalisable manner then drift must have occurred.
As with the classifier-based approach, we should specify the proportion of data to use for training and testing respectively as well as training arguments such as the learning rate and batch size. Note that a new kernel is trained for each test set that is passed for detection.
Arguments:
x_ref
: Data used as reference distribution.
kernel
: A differentiable TensorFlow or PyTorch module that takes two sets of instances as inputs and returns a kernel similarity matrix as output.
Keyword arguments:
backend
: TensorFlow, PyTorch and KeOps implementations of the learned kernel detector are available. The backend can be specified as tensorflow, pytorch or keops. Defaults to tensorflow.
p_val
: p-value threshold used for the significance of the test.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed. If the input data type is of type List[Any]
then update_x_ref
needs to be set to None and the reference set remains fixed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
n_permutations
: The number of permutations to use in the permutation test once the MMD has been computed.
var_reg
: Constant added to the estimated variance of the MMD for stability.
reg_loss_fn
: The regularisation term reg_loss_fn(kernel) is added to the loss function being optimized.
train_size
: Optional fraction (float between 0 and 1) of the dataset used to train the classifier. The drift is detected on 1 - train_size.
retrain_from_scratch
: Whether the kernel should be retrained from scratch for each set of test data or whether it should instead continue training from where it left off on the previous set. Defaults to True.
optimizer
: Optimizer used during training of the kernel. From torch.optim
for PyTorch and tf.keras.optimizers
for TensorFlow.
learning_rate
: Learning rate for the optimizer.
batch_size
: Batch size used during training of the kernel.
batch_size_predict
: Batch size used for the trained drift detector predictions.
preprocess_batch_fn
: Optional batch preprocessing function. For example to convert a list of generic objects to a tensor which can be processed by the kernel.
epochs
: Number of training epochs for the kernel.
verbose
: Verbosity level during the training of the kernel. 0 is silent and 1 prints a progress bar.
train_kwargs
: Optional additional kwargs for the built-in TensorFlow (from alibi_detect.models.tensorflow import trainer
) or PyTorch (from alibi_detect.models.pytorch import trainer
) trainer functions.
dataset
: Dataset object used during training of the kernel. Defaults to alibi_detect.utils.pytorch.TorchDataset
(an instance of torch.utils.data.Dataset
) for the PyTorch and KeOps backends and alibi_detect.utils.tensorflow.TFDataset
(an instance of tf.keras.utils.Sequence
) for the TensorFlow backend. For PyTorch or KeOps, the dataset should only take the windows x_ref and x_test as input, so when e.g. TorchDataset is passed to the detector at initialisation, during training TorchDataset(x_ref, x_test) is used. For TensorFlow, the dataset is an instance of tf.keras.utils.Sequence
, so when e.g. TFDataset is passed to the detector at initialisation, during training TFDataset(x_ref, x_test, batch_size=batch_size, shuffle=True) is used. x_ref and x_test can be of type np.ndarray or List[Any].
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (e.g. tabular, image or time-series). Added to metadata.
Additional PyTorch and KeOps keyword arguments:
device
: cuda or gpu to use the GPU and cpu for the CPU. If the device is not specified, the detector will try to leverage the GPU if possible and otherwise fall back on CPU.
dataloader
: Dataloader object used during training of the kernel. Defaults to torch.utils.data.DataLoader
. The dataloader is not initialized yet; this is done during init of the detector using the batch_size
. Custom dataloaders can be passed as well, e.g. for graph data we can use torch_geometric.data.DataLoader
.
num_workers
: The number of workers used by the DataLoader
. The default (num_workers=0
) means multi-process data loading is disabled. Setting num_workers>0
may be unreliable on Windows.
Additional KeOps only keyword arguments:
batch_size_permutations
: KeOps computes the n_permutations
of the MMD^2 statistics in chunks of batch_size_permutations
. Defaults to 1,000,000.
Any differentiable PyTorch or TensorFlow module that takes as input two instances and outputs a scalar (representing similarity) can be used as the kernel for this drift detector. However, in order to ensure that MMD=0 implies no-drift the kernel should satisfy a characteristic property. This can be guaranteed by defining a kernel as

$$k(x,y) = (1-\epsilon)\,k_a(\Phi(x), \Phi(y)) + \epsilon\,k_b(x,y),$$

where $\Phi$ is a learnable projection, $k_a$ and $k_b$ are simple characteristic kernels (such as a Gaussian RBF), and $\epsilon>0$ is a small constant. By letting $\Phi$ be very flexible we can learn powerful kernels in this manner.
This is easily implemented using the DeepKernel
class provided in alibi_detect
. We demonstrate below how we might define a convolutional kernel for images using PyTorch. By default GaussianRBF
kernels are used for $k_a$ and $k_b$ and here we specify $\epsilon=0.01$, but we could alternatively set eps='trainable'
.
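A minimal sketch of such a deep kernel, assuming channels-first 3x32x32 image inputs (the projection architecture is illustrative):

```python
import torch.nn as nn
from alibi_detect.utils.pytorch import DeepKernel

# learnable projection Phi; assumes 3x32x32 channels-first image inputs
proj = nn.Sequential(
    nn.Conv2d(3, 8, 4, stride=2, padding=0), nn.ReLU(),
    nn.Conv2d(8, 16, 4, stride=2, padding=0), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2, padding=0), nn.ReLU(),
    nn.Flatten(),
)

# k_a and k_b default to GaussianRBF kernels; eps could instead be set to 'trainable'
kernel = DeepKernel(proj, eps=0.01)
```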
It is important to note that, if retrain_from_scratch=True
and we have not initialised the kernel bandwidth sigma
for the default GaussianRBF
kernel $k_a$ and optionally also for $k_b$, we will initialise sigma
using a median (PyTorch and TensorFlow) or mean (KeOps) bandwidth heuristic for every detector prediction. For KeOps detectors specifically, this could form a computational bottleneck and should be avoided by already specifying a bandwidth in advance. To do this, we can leverage the library's built-in heuristics:
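One way to pre-compute such a bandwidth is to apply a median heuristic by hand on (a flattened view of) the reference data and pass the result to the `GaussianRBF` used as $k_a$. The sketch below does this manually with PyTorch; the exact scaling convention of the library's internal heuristic may differ, and `x_ref` is assumed to be a NumPy array:

```python
import numpy as np
import torch
from alibi_detect.utils.pytorch import GaussianRBF

# median of the non-zero pairwise distances on the flattened reference data
x = torch.as_tensor(np.asarray(x_ref, dtype=np.float32)).reshape(len(x_ref), -1)
pairwise = torch.cdist(x, x)
sigma = pairwise[pairwise > 0].median().reshape(1)  # scaling convention may differ from the built-in heuristic

kernel_a = GaussianRBF(sigma=sigma, trainable=True)  # pass as kernel_a when constructing the DeepKernel
```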
Instantiating the detector is then as simple as passing the reference data and the kernel as follows:
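A hedged sketch with the PyTorch backend, using the deep kernel defined above (the training arguments are illustrative); the KeOps and TensorFlow alternatives mentioned next differ mainly in the `backend` argument and in how the kernel itself is defined:

```python
from alibi_detect.cd import LearnedKernelDrift

# kernel is the DeepKernel defined above; training arguments are illustrative
cd = LearnedKernelDrift(x_ref, kernel, backend='pytorch', p_val=.05,
                        epochs=1, batch_size=32)
```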
We could have alternatively defined the kernel and instantiated the detector using KeOps:
Or by using TensorFlow as the backend:
We detect data drift by simply calling predict
on a batch of instances x
. return_p_val
equal to True will also return the p-value of the test, return_distance
equal to True will return a notion of strength of the drift and return_kernel
equals True will also return the trained kernel.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
threshold
: the user-defined p-value threshold defining the significance of the test
p_val
: the p-value of the test if return_p_val
equals True.
distance
: MMD^2 metric between the reference data and the new batch if return_distance
equals True.
distance_threshold
: MMD^2 metric value from the permutation test which corresponds to the p-value threshold if return_distance
equals True.
kernel
: The trained kernel if return_kernel
equals True.
Drift detection on molecular graphs
The classifier-based drift detector (Lopez-Paz and Oquab, 2017) simply tries to correctly distinguish instances from the reference set vs. the test set. The classifier is trained to output the probability that a given instance belongs to the test set. If the probabilities it assigns to unseen test instances are significantly higher (as determined by a Kolmogorov-Smirnov test) than those it assigns to unseen reference instances, then the test set must differ from the reference set and drift is flagged. Alternatively, the detector also allows binarizing the classifier predictions (0 or 1) and applying a binomial test on the binarized predictions of the reference vs. the test data. To leverage all the available reference and test data, stratified cross-validation can be applied and the out-of-fold predictions are used for the significance test. Note that a new classifier is trained for each test set or even each fold within the test set.
Arguments:
x_ref
: Data used as reference distribution.
model
: Binary classification model used for drift detection. TensorFlow, PyTorch and Sklearn models are supported.
Keyword arguments:
backend
: Specify the backend (tensorflow, pytorch or sklearn). This depends on the framework of the model
. Defaults to tensorflow.
p_val
: p-value threshold used for the significance of the test.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed. If the input data type is of type List[Any]
then update_x_ref
needs to be set to None and the reference set remains fixed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
preds_type
: Whether the model outputs 'probs' (probabilities - for 'tensorflow', 'pytorch', 'sklearn' models), 'logits' (for 'pytorch', 'tensorflow' models), 'scores' (for 'sklearn' models if decision_function
is supported).
binarize_preds
: Whether to test for discrepancy on soft (e.g. probs/logits/scores) model predictions directly with a K-S test or binarise to 0-1 prediction errors and apply a binomial test. Defaults to False and therefore applies the K-S test.
train_size
: Optional fraction (float between 0 and 1) of the dataset used to train the classifier. The drift is detected on 1 - train_size. Cannot be used in combination with n_folds
.
n_folds
: Optional number of stratified folds used for training. The model preds are then calculated on all the out-of-fold predictions. This allows to leverage all the reference and test data for drift detection at the expense of longer computation. If both train_size
and n_folds
are specified, n_folds
is prioritized.
seed
: Optional random seed for fold selection.
optimizer
: Optimizer used during training of the classifier. From torch.optim
for PyTorch and tf.keras.optimizers
for TensorFlow.
learning_rate
: Learning rate for the optimizer. Only relevant for tensorflow and pytorch backends.
batch_size
: Batch size used during training of the classifier. Only relevant for tensorflow and pytorch backends.
epochs
: Number of training epochs for the classifier. Applies to each fold if n_folds
is specified. Only relevant for tensorflow and pytorch backends.
verbose
: Verbosity level during the training of the classifier. 0 is silent and 1 prints a progress bar. Only relevant for tensorflow and pytorch backends.
train_kwargs
: Optional additional kwargs for the built-in TensorFlow (from alibi_detect.models.tensorflow import trainer
) or PyTorch (from alibi_detect.models.pytorch import trainer
) trainer functions.
dataset
: Dataset object used during training of the classifier. Defaults to alibi_detect.utils.pytorch.TorchDataset
(an instance of torch.utils.data.Dataset
) for the PyTorch backend and alibi_detect.utils.tensorflow.TFDataset
(an instance of tf.keras.utils.Sequence
) for the TensorFlow backend. For PyTorch, the dataset should only take the data x and the array of labels y as input, so when e.g. TorchDataset is passed to the detector at initialisation, during training TorchDataset(x, y) is used. For TensorFlow, the dataset is an instance of tf.keras.utils.Sequence
, so when e.g. TFDataset is passed to the detector at initialisation, during training TFDataset(x, y, batch_size=batch_size, shuffle=True) is used. x can be of type np.ndarray or List[Any] while y is of type np.ndarray.
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (e.g. tabular, image or time-series). Added to metadata.
Additional PyTorch keyword arguments:
device
: cuda or gpu to use the GPU and cpu for the CPU. If the device is not specified, the detector will try to leverage the GPU if possible and otherwise fall back on CPU.
dataloader
: Dataloader object used during training of the model. Defaults to torch.utils.data.DataLoader
. The dataloader is not initialized yet; this is done during init of the detector using the batch_size
. Custom dataloaders can be passed as well, e.g. for graph data we can use torch_geometric.data.DataLoader
.
Additional Sklearn keyword arguments:
use_calibration
: Whether to use calibration. Calibration can be used on top of any model. Only relevant for 'sklearn' backend.
calibration_kwargs
: Optional additional kwargs for calibration. Only relevant for 'sklearn' backend. See https://scikit-learn.org/stable/modules/generated/sklearn.calibration.CalibratedClassifierCV.html for more details.
use_oob
: Whether to use out-of-bag (OOB) predictions. Supported only for RandomForestClassifier
.
Initialized TensorFlow drift detector example:
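A hedged sketch, assuming 32x32x3 image data; the classifier architecture, fold count and epochs are illustrative:

```python
import numpy as np
import tensorflow as tf
from alibi_detect.cd import ClassifierDrift

x_ref = np.random.randn(500, 32, 32, 3).astype(np.float32)  # hypothetical image data

# simple binary classifier discriminating reference from test instances
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 4, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(16, 4, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax')
])

cd = ClassifierDrift(x_ref, model, backend='tensorflow', p_val=.05,
                     preds_type='probs', n_folds=5, epochs=2)
```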
A similar detector using PyTorch:
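A comparable PyTorch sketch, assuming channels-first 3x32x32 inputs and a classifier that outputs raw logits (again illustrative):

```python
import numpy as np
import torch.nn as nn
from alibi_detect.cd import ClassifierDrift

x_ref = np.random.randn(500, 3, 32, 32).astype(np.float32)  # channels-first for PyTorch

model = nn.Sequential(
    nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2)  # raw logits; 8x8 assumes 32x32 inputs
)

cd = ClassifierDrift(x_ref, model, backend='pytorch', p_val=.05,
                     preds_type='logits', n_folds=5, epochs=2)
```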
We detect data drift by simply calling predict
on a batch of instances x
. return_p_val
equal to True will also return the p-value of the test, return_distance
equal to True will return a notion of strength of the drift and return_probs
equals True also returns the out-of-fold classifier model prediction probabilities on the reference and test data (0 = reference data, 1 = test data) as well as the associated out-of-fold reference and test instances.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
threshold
: the user-defined threshold defining the significance of the test
p_val
: the p-value of the test if return_p_val
equals True.
distance
: a notion of strength of the drift if return_distance
equals True. Equal to the K-S test statistic assuming binarize_preds
equals False or the relative error reduction over the baseline error expected under the null if binarize_preds
equals True.
probs_ref
: the instance level prediction probability for the reference data x_ref
(0 = reference data, 1 = test data) if return_probs
is True.
probs_test
: the instance level prediction probability for the test data x
if return_probs
is true.
x_ref_oof
: the instances associated with probs_ref
if return_probs
equals True.
x_test_oof
: the instances associated with probs_test
if return_probs
equals True.
Model-uncertainty drift detectors aim to directly detect drift that's likely to affect the performance of a model of interest. The approach is to test for change in the number of instances falling into regions of the input space on which the model is uncertain in its predictions. For each instance in the reference set the detector obtains the model's prediction and some associated notion of uncertainty. For example, for a classifier this may be the entropy of the predicted label probabilities, or for a regressor with dropout layers dropout Monte Carlo can be used to provide a notion of uncertainty. The same is done for the test set and if significant differences in uncertainty are detected (via a Kolmogorov-Smirnov test) then drift is flagged. The detector's reference set should be disjoint from the model's training set (on which the model's confidence may be higher).
ClassifierUncertaintyDrift
should be used with classification models whereas RegressorUncertaintyDrift
should be used with regression models. They are used in much the same way.
By default ClassifierUncertaintyDrift
uses uncertainty_type='entropy'
as the notion of uncertainty for classifier predictions and a Kolmogorov-Smirnov two-sample test is performed on these continuous values. However uncertainty_type='margin'
can also be specified to deem the classifier's predictions uncertain if they fall within a margin (e.g. in [0.45,0.55] for binary classifier probabilities) (similar to Sethi and Kantardzic (2017)) and a Chi-Squared two-sample test is performed on these 0-1 flags of uncertainty.
By default RegressorUncertaintyDrift
uses uncertainty_type='mc_dropout'
and assumes a PyTorch or TensorFlow model with dropout layers as the regressor. This evaluates the model under multiple dropout configurations and uses the variation as the notion of uncertainty. Alternatively a model that outputs (for each instance) a vector of independent model predictions can be passed and uncertainty_type='ensemble'
can be specified. Again the variation is taken as the notion of uncertainty and in both cases a Kolmogorov-Smirnov two-sample test is performed on the continuous notions of uncertainty.
Arguments:
x_ref
: Data used as reference distribution. Should be disjoint from the model's training set.
model
: The model of interest whose performance we'd like to remain constant.
Keyword arguments:
p_val
: p-value used for the significance of the test.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
input_shape
: Optionally pass the shape of the input data.
data_type
: Optionally specify the data type (e.g. tabular, image or time-series). Added to metadata.
ClassifierUncertaintyDrift
-specific keyword arguments:
preds_type
: Type of prediction output by the model. Options are 'probs' (in [0,1]) or 'logits' (in [-inf,inf]).
uncertainty_type
: Method for determining the model's uncertainty for a given instance. Options are 'entropy' or 'margin'.
margin_width
: Width of the margin if uncertainty_type = 'margin'. The model is considered uncertain on an instance if the highest two class probabilities it assigns to the instance differ by less than this.
RegressorUncertaintyDrift
-specific keyword arguments:
uncertainty_type
: Method for determining the model's uncertainty for a given instance. Options are 'mc_dropout' or 'ensemble'. For the former the model should have dropout layers and output a scalar per instance. For the latter the model should output a vector of predictions per instance.
n_evals
: The number of times to evaluate the model under different dropout configurations. Only relevant when using the 'mc_dropout' uncertainty type.
Additional arguments if batch prediction required:
backend
: Framework that was used to define model. Options are 'tensorflow' or 'pytorch'.
batch_size
: Batch size to use to evaluate model. Defaults to 32.
device
: Device type to use. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu' or 'cpu'. Only relevant for 'pytorch' backend.
Additional arguments for NLP models
tokenizer
: Tokenizer to use before passing data to model.
max_len
: Max length to be used by tokenizer.
Drift detector for a TensorFlow classifier outputting probabilities:
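A minimal sketch, where `clf` is an assumed pre-trained TensorFlow classifier outputting probabilities and `x_ref` is reference data disjoint from its training set:

```python
from alibi_detect.cd import ClassifierUncertaintyDrift

# clf and x_ref are assumed to be defined already
cd = ClassifierUncertaintyDrift(x_ref, model=clf, backend='tensorflow', p_val=.05,
                                preds_type='probs', uncertainty_type='entropy')
```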
Drift detector for a PyTorch regressor (with dropout layers) outputting scalars:
Note that for the PyTorch RegressorUncertaintyDrift detector the dropout layers need to be defined within the nn.Module
init to be able to set them to train mode when computing the uncertainty estimates, e.g.:
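A hedged sketch covering both the module definition and the detector; the layer sizes, input dimension and `n_evals` are illustrative, and in practice the regressor would already be trained:

```python
import torch
import torch.nn as nn
from alibi_detect.cd import RegressorUncertaintyDrift

class Regressor(nn.Module):
    def __init__(self):
        super().__init__()
        # dropout defined in __init__ (not applied functionally in forward) so the
        # detector can switch it to train mode when computing MC dropout estimates
        self.dense1 = nn.Linear(32, 64)   # 32 input features is an assumption
        self.dropout = nn.Dropout(p=0.5)
        self.dense2 = nn.Linear(64, 1)

    def forward(self, x):
        x = torch.relu(self.dense1(x))
        x = self.dropout(x)
        return self.dense2(x)

reg = Regressor()  # assumed to be trained already in practice

cd = RegressorUncertaintyDrift(x_ref, model=reg, backend='pytorch', p_val=.05,
                               uncertainty_type='mc_dropout', n_evals=100)
```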
We detect data drift by simply calling predict
on a batch of instances x
. return_p_val
equal to True will also return the p-value of the test and return_distance
equal to True will return the test-statistic.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
threshold
: the user-defined threshold defining the significance of the test.
p_val
: the p-value of the test if return_p_val
equals True.
distance
: the test-statistic if return_distance
equals True.
Drift detection on molecular graphs
The spot-the-diff drift detector is an extension of the Classifier drift detector where the classifier is specified in a manner that makes detections interpretable at the feature level when they occur. The detector is inspired by the work of Jitkrittum et al. (2016) but various major adaptations have been made.
As with the usual classifier-based approach, a portion of the available data is used to train a classifier that can discriminate reference instances from test instances. If the classifier can learn to discriminate in a generalisable manner then drift must have occurred. Here we additionally enforce that the classifier takes the form

$$\mathrm{logit}\big(\hat{p}_T(x)\big) = b_0 + \sum_{i} b_i\, k(x, w_i),$$

where $\hat{p}_T$ is the predicted probability that instance $x$ is from the test window (rather than reference), $k(\cdot,\cdot)$ is a kernel specifying a notion of similarity between instances, $w_i$ are learnable test locations and $b_i$ are learnable regression coefficients.
If the detector flags drift and $b_i >0$ then we know that it reached its decision by considering how similar each instance is to the instance $w_i$, with those being more similar being more likely to be test instances than reference instances. Alternatively if $b_i < 0$ then instances more similar to $w_i$ were deemed more likely to be reference instances.
In order to provide less noisy and therefore more interpretable results, we define each test location as

$$w_i = \bar{x} + d_i,$$

where $\bar{x}$ is the mean reference instance. We may then interpret $d_i$ as the additive transformation deemed to make the average reference more ($b_i>0$) or less ($b_i<0$) similar to a test instance. Defining the test locations in this way allows us to instead learn the difference $d_i$ and apply regularisation such that non-zero values must be justified by improved classification performance. This allows us to more clearly identify which features any detected drift should be attributed to.
As with the standard classifier-based approach, we should specify the proportion of data to use for training and testing respectively as well as training arguments such as the learning rate and batch size. Note that a new classifier is trained for each test set that is passed for detection.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
backend
: Specify the backend (tensorflow or pytorch) to use for defining the kernel and training the test locations/differences.
p_val
: p-value threshold used for the significance of the test.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
kernel
: A differentiable TensorFlow or PyTorch module that takes two instances as input and returns a scalar notion of similarity as output. Defaults to a Gaussian radial basis function.
n_diffs
: The number of test locations to use, each corresponding to an interpretable difference.
initial_diffs
: Array used to initialise the diffs that will be learned. Defaults to Gaussian for each feature with equal variance to that of reference data.
l1_reg
: Strength of l1 regularisation to apply to the differences.
binarize_preds
: Whether to test for discrepancy on soft (e.g. probs/logits) model predictions directly with a K-S test or binarise to 0-1 prediction errors and apply a binomial test.
train_size
: Optional fraction (float between 0 and 1) of the dataset used to train the classifier. The drift is detected on 1 - train_size. Cannot be used in combination with n_folds
.
n_folds
: Optional number of stratified folds used for training. The model preds are then calculated on all the out-of-fold instances. This allows to leverage all the reference and test data for drift detection at the expense of longer computation. If both train_size
and n_folds
are specified, n_folds
is prioritized.
retrain_from_scratch
: Whether the classifier should be retrained from scratch for each set of test data or whether it should instead continue training from where it left off on the previous set.
seed
: Optional random seed for fold selection.
optimizer
: Optimizer used during training of the kernel. From torch.optim
for PyTorch and tf.keras.optimizers
for TensorFlow.
learning_rate
: Learning rate for the optimizer.
batch_size
: Batch size used during training of the kernel.
preprocess_batch_fn
: Optional batch preprocessing function. For example to convert a list of generic objects to a tensor which can be processed by the kernel.
epochs
: Number of training epochs for the kernel.
verbose
: Verbosity level during the training of the kernel. 0 is silent and 1 prints a progress bar.
train_kwargs
: Optional additional kwargs for the built-in TensorFlow (from alibi_detect.models.tensorflow import trainer
) or PyTorch (from alibi_detect.models.pytorch import trainer
) trainer functions.
dataset
: Dataset object used during training of the classifier. Defaults to alibi_detect.utils.pytorch.TorchDataset
(an instance of torch.utils.data.Dataset
) for the PyTorch backend and alibi_detect.utils.tensorflow.TFDataset
(an instance of tf.keras.utils.Sequence
) for the TensorFlow backend. For PyTorch, the dataset should only take the data x and the array of labels y as input, so when e.g. TorchDataset is passed to the detector at initialisation, during training TorchDataset(x, y) is used. For TensorFlow, the dataset is an instance of tf.keras.utils.Sequence
, so when e.g. TFDataset is passed to the detector at initialisation, during training TFDataset(x, y, batch_size=batch_size, shuffle=True) is used. x can be of type np.ndarray or List[Any] while y is of type np.ndarray.
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (e.g. tabular, image or time-series). Added to metadata.
Additional PyTorch keyword arguments:
device
: cuda or gpu to use the GPU and cpu for the CPU. If the device is not specified, the detector will try to leverage the GPU if possible and otherwise fall back on CPU.
dataloader
: Dataloader object used during training of the classifier. Defaults to torch.utils.data.DataLoader
. The dataloader is not initialized yet; this is done during init of the detector using the batch_size
. Custom dataloaders can be passed as well, e.g. for graph data we can use torch_geometric.data.DataLoader
.
Any differentiable PyTorch or TensorFlow module that takes as input two instances and outputs a scalar (representing similarity) can be used as the kernel for this drift detector. By default a simple Gaussian RBF kernel is used. Keeping the kernel simple can aid interpretability, but alternatively a "deep kernel" of the form

$$k(x,y) = (1-\epsilon)\,k_a(\Phi(x), \Phi(y)) + \epsilon\,k_b(x,y),$$

where $\Phi$ is a (differentiable) projection, $k_a$ and $k_b$ are simple kernels (such as a Gaussian RBF) and $\epsilon>0$ a small constant, can be used. The DeepKernel
class found in either alibi_detect.utils.tensorflow
or alibi_detect.utils.pytorch
aims to make defining such kernels straightforward. You should not allow too many learnable parameters however as we would like the classifier to discriminate using the test locations rather than kernel parameters.
Instantiating the detector is as simple as passing your reference data and selecting a backend, but you should also consider the number of "diffs" you would like your model to use to discriminate reference from test instances and the strength of regularisation you would like to apply to them.
Using n_diffs=1
is the simplest to interpret and seems to work well in practice. Using more diffs may result in stronger detection power but the diffs may be harder to interpret due to interactions and conditional dependencies.
The strength of the regularisation (l1_reg
) to apply to the diffs should also be specified. Stronger regularisation results in sparser diffs as the classifier is encouraged to discriminate using fewer features. This may make the diff more interpretable but may again come at the cost of detection power.
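A hedged PyTorch instantiation sketch reflecting these choices; `x_ref` is an assumed NumPy array of reference instances and the regularisation strength and training settings are illustrative:

```python
from alibi_detect.cd import SpotTheDiffDrift

cd = SpotTheDiffDrift(x_ref, backend='pytorch', p_val=.05,
                      n_diffs=1, l1_reg=1e-3,
                      epochs=10, batch_size=32)
```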
Alternatively we could have used the TensorFlow backend and defined a deep kernel with a convolutional structure:
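For instance, a sketch along these lines, assuming 32x32x3 image inputs (the projection architecture and $\epsilon$ value are illustrative):

```python
import tensorflow as tf
from alibi_detect.cd import SpotTheDiffDrift
from alibi_detect.utils.tensorflow import DeepKernel

# convolutional projection; assumes 32x32x3 image inputs
proj = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 4, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(16, 4, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Flatten()
])
kernel = DeepKernel(proj, eps=0.01)

cd = SpotTheDiffDrift(x_ref, backend='tensorflow', p_val=.05,
                      kernel=kernel, n_diffs=1, l1_reg=1e-3)
```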
We detect data drift by simply calling predict
on a batch of instances x
. return_p_val
equal to True will also return the p-value of the test, return_distance
equal to True will return a notion of strength of the drift, return_probs
equals True returns the out-of-fold classifier model prediction probabilities on the reference and test data (0 = reference data, 1 = test data) as well as the associated out-of-fold reference and test instances, and return_kernel
equals True will also return the trained kernel.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
diffs
: a numpy array containing the diffs used to discriminate reference from test instances.
diff_coeffs
: a coefficient corresponding to each diff. A coefficient greater than zero implies that the corresponding diff makes the average reference instance more similar to a test instance on average, while less than zero implies less similar.
threshold
: the user-defined p-value threshold defining the significance of the test
p_val
: the p-value of the test if return_p_val
equals True.
distance
: a notion of strength of the drift if return_distance
equals True. Equal to the K-S test statistic assuming binarize_preds
equals False or the relative error reduction over the baseline error expected under the null if binarize_preds
equals True.
probs_ref
: the instance level prediction probability for the reference data x_ref
(0 = reference data, 1 = test data) if return_probs
is True.
probs_test
: the instance level prediction probability for the test data x
if return_probs
is true.
x_ref_oof
: the instances associated with probs_ref
if return_probs
equals True.
x_test_oof
: the instances associated with probs_test
if return_probs
equals True.
kernel
: The trained kernel if return_kernel
equals True.
Interpretable Drift detection on MNIST and the Wine Quality dataset
Although powerful, modern machine learning models can be sensitive. Seemingly subtle changes in a data distribution can destroy the performance of otherwise state-of-the-art models, which can be especially problematic when ML models are deployed in production. Typically, ML models are tested on held out data in order to estimate their future performance. Crucially, this assumes that the process underlying the input data $\mathbf{X}$ and output data $\mathbf{Y}$ remains constant.
Drift is said to occur when the process underlying $\mathbf{X}$ and $\mathbf{Y}$ at test time differs from the process that generated the training data. In this case, we can no longer expect the model’s performance on test data to match that observed on held out training data. At test time we always observe features $\mathbf{X}$, and the ground truth then refers to a corresponding label $\mathbf{Y}$. If ground truths are available at test time, supervised drift detection can be performed, with the model’s predictive performance monitored directly. However, in many scenarios, such as the binary classification example below, ground truths are not available and unsupervised drift detection methods are required.
To explore the different types of drift, consider the common scenario where we deploy a model $f: \boldsymbol{x} \mapsto y$ on input data $\mathbf{X}$ and output data $\mathbf{Y}$, jointly distributed according to $P(\mathbf{X},\mathbf{Y})$. The model is trained on training data drawn from a distribution $P_{ref}(\mathbf{X},\mathbf{Y})$. Drift is said to have occurred when $P(\mathbf{X},\mathbf{Y}) \ne P_{ref}(\mathbf{X},\mathbf{Y})$. Writing the joint distribution as

$$P(\mathbf{X},\mathbf{Y}) = P(\mathbf{Y}|\mathbf{X})P(\mathbf{X}) = P(\mathbf{X}|\mathbf{Y})P(\mathbf{Y}),$$

we can classify drift under a number of types:
Covariate drift: Also referred to as input drift, this occurs when the distribution of the input data has shifted $P(\mathbf{X}) \ne P_{ref}(\mathbf{X})$, whilst $P(\mathbf{Y}|\mathbf{X})$ = $P_{ref}(\mathbf{Y}|\mathbf{X})$. This may result in the model giving unreliable predictions.
Prior drift: Also referred to as label drift, this occurs when the distribution of the outputs has shifted $P(\mathbf{Y}) \ne P_{ref}(\mathbf{Y})$, whilst $P(\mathbf{X}|\mathbf{Y})=P_{ref}(\mathbf{X}|\mathbf{Y})$. This can affect the model's decision boundary, as well as the model's performance metrics.
Concept drift: This occurs when the process generating $y$ from $x$ has changed, such that $P(\mathbf{Y}|\mathbf{X}) \ne P_{ref}(\mathbf{Y}|\mathbf{X})$. It is possible that the model might no longer give a suitable approximation of the true process.
Note that a change in one of the conditional probabilities $P(\mathbf{X}|\mathbf{Y})$ and $P(\mathbf{Y}|\mathbf{X})$ does not necessarily imply a change in the other. For example, consider the pneumonia prediction example from Lipton et al., whereby a classification model $f$ is trained to predict $y$, the occurrence (or not) of pneumonia, given a list of symptoms $\boldsymbol{x}$. During a pneumonia outbreak, $P(\mathbf{Y}|\mathbf{X})$ (e.g. pneumonia given cough) might rise, but the manifestations of the disease $P(\mathbf{X}|\mathbf{Y})$ might not change. In many cases, knowledge of underlying causal structure of the problem can be used to deduce that one of the conditionals will remain unchanged.
Below, the different types of drift are visualised for a simple two-dimensional classification problem. It is possible for a drift to fall under more than one category, for example the prior drift below also happens to be a case of covariate drift.
It is relatively easy to spot drift by eyeballing these figures. However, the task becomes considerably harder for high-dimensional real problems, especially since real-time ground truths are not typically available. Some types of drift, such as prior and concept drift, are especially difficult to detect without access to ground truths. As a workaround, proxies are required; for example, a model's predictions can be monitored to check for prior drift.
Alibi Detect offers a wide array of methods for detecting drift (see here), some of which are examined in the NeurIPS 2019 paper Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift. Generally, these aim to determine whether the distribution $P(\mathbf{z})$ has drifted from a reference distribution $P_{ref}(\mathbf{z})$, where $\mathbf{z}$ may represent input data $\mathbf{X}$, true output data $\mathbf{Y}$, or some form of model output, depending on what type of drift we wish to detect.
Due to natural randomness in the process being modelled, we don't necessarily expect observations $\mathbf{z}_1,\dots,\mathbf{z}_N$ drawn from $P(\mathbf{z})$ to be identical to $\mathbf{z}^{ref}_1,\dots,\mathbf{z}^{ref}_M$ drawn from $P_{ref}(\mathbf{z})$. To decide whether differences between $P(\mathbf{z})$ and $P_{ref}(\mathbf{z})$ are due to drift or just natural randomness in the data, statistical two-sample hypothesis testing is used, with the null hypothesis $P(\mathbf{z})=P_{ref}(\mathbf{z})$. If the $p$-value obtained is below a given threshold, the null is rejected and the alternative hypothesis $P(\mathbf{z}) \ne P_{ref}(\mathbf{z})$ is accepted, suggesting drift is occurring.
Since $\mathbf{z}$ is often high-dimensional (even a 200 x 200 greyscale image has 40k dimensions!), performing hypothesis testing in the full-space is often either computationally intractable, or unsuitable for the chosen statistical test. Instead, the pipeline below is often used, with dimension reduction as a pre-processing step.
:::{figure} images/drift_pipeline.png :align: center :alt: Drift detection pipeline
Figure inspired by Figure 1 in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift. :::
Hypothesis testing involves first choosing a test statistic $S(\mathbf{z})$, which is expected to be small if the null hypothesis $H_0$ is true, and large if the alternative $H_a$ is true. For observed data $\mathbf{z}$, $S(\mathbf{z})$ is computed, followed by a $p$-value $\hat{p} = P(\text{such an extreme } S(\mathbf{z}) | H_0)$. In other words, $\hat{p}$ represents the probability of such an extreme value of $S(\mathbf{z})$ occurring given that $H_0$ is true. When $\hat{p}\le \alpha$, results are said to be statistically significant, and the null $P(\mathbf{z})=P_{ref}(\mathbf{z})$ is rejected. Conveniently, the threshold $\alpha$ represents the desired false positive rate.
The test statistics available in Alibi Detect can be broadly split into two categories, univariate and multivariate tests:
Univariate:
Chi-Squared (for categorical data)
Cramér-von Mises (for continuous data)
Fisher's Exact Test (for binary data)
Kolmogorov-Smirnov (for continuous data)
Multivariate:
Least-Squares Density Difference (LSDD)
Maximum Mean Discrepancy (MMD)
When applied to multidimensional data with dimension $d$, the univariate tests are applied in a feature-wise manner. The obtained $p$-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur. If the tests (i.e. each feature dimension) are independent, these corrections preserve the desired false positive rate (FPR). However, usually this is not the case, resulting in FPRs up to $d$ times lower than desired, which becomes especially problematic when $d$ is large. Additionally, since the univariate tests examine the feature-wise marginal distributions, they may miss drift in cases where the joint distribution over all $d$ features has changed, but the marginals have not. The multivariate tests avoid these problems, at the cost of greater complexity.
Given an input dataset $\mathbf{X}\in \mathbb{R}^{N\times d}$, where $N$ is the number of observations and $d$ the number of dimensions, the aim is to reduce the data dimensionality from $d$ to $K$, where $K\ll d$. A drift detector can then be applied to the lower dimensional data $\hat{\mathbf{X}}\in \mathbb{R}^{N\times K}$, where distances more meaningfully capture notions of similarity/dissimilarity between instances. Dimension reduction approaches can be broadly categorised under:
Linear projections
Non-linear projections
Feature maps (from ML model)
Model uncertainty
Alibi Detect allows for a high degree of flexibility here, with a user’s chosen dimension reduction technique able to be incorporated into their chosen detector via the preprocess_fn
argument (and sometimes preprocess_batch_fn
and preprocess_at_init
, depending on the detector). In the following sections, the three categories of techniques are briefly introduced. Alibi Detect offers the following functionality using either TensorFlow or PyTorch backends and preprocessing utilities. For more details, see the examples.
This includes dimension reduction techniques such as principal component analysis (PCA) and sparse random projections (SRP). These techniques involve using a transformation or projection matrix $\mathbf{R}$ to reduce the dimensionality of a given data matrix $\mathbf{X}$, such that $\hat{\mathbf{X}} = \mathbf{XR}$. A straightforward way to include such techniques as a pre-processing stage is to pass them to the detectors via the preprocess_fn
argument, for example for the scikit-learn
library’s PCA
class:
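The original snippet is not reproduced here; a minimal sketch of the pattern might look as follows, where `X_train` and `X_ref` are assumed to be disjoint samples from the reference distribution (see Note 1 below):

```python
import numpy as np
from sklearn.decomposition import PCA
from alibi_detect.cd import MMDDrift

# Hypothetical data: X_train is used only to fit the projection,
# X_ref is the held-out reference set handed to the detector (see Note 1).
X_train = np.random.randn(1000, 32).astype(np.float32)
X_ref = np.random.randn(500, 32).astype(np.float32)

# Fit a linear projection on the training split...
pca = PCA(n_components=2)
pca.fit(X_train)

# ...and pass its transform as the detector's preprocessing step.
cd = MMDDrift(X_ref, backend='tensorflow', p_val=.05, preprocess_fn=pca.transform)
```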
:::{admonition} Note 1: Disjoint training and reference data sets
Astute readers may have noticed that in the code snippet above, the data X_train
is used to “train” the PCA
model, but the MMDDrift
detector is initialised with X_ref
. This is a subtle yet important point. If a detector’s preprocessor (a dimension reduction or other input preprocessing step) is trained on the reference data (X_ref
), any over-fitting to this data may make the resulting detector overly sensitive to differences between the reference and test data sets.
To avoid an overly discriminative detector, it is customary to draw two disjoint datasets from $P_{ref}(\mathbf{z})$, a training set and a held-out reference set. The training data is used to train any input preprocessing steps, and the detector is then initialised on the reference set, and used to detect drift between the reference and test set. This also applies to the learned drift detectors, which should be trained on the training set not the reference set. :::
A common strategy for obtaining non-linear dimension reducing representations is to use an autoencoder, but other non-linear techniques can also be used. Autoencoders consist of an encoder function $\phi : \mathcal{X} \mapsto \mathcal{H}$ and a decoder function $\psi : \mathcal{H} \mapsto \mathcal{X}$, where the latent space $\mathcal{H}$ has lower dimensionality than the input space $\mathcal{X}$. The output of the encoder $\hat{\mathbf{X}} \in \mathcal{H}$ can then be monitored by the drift detector. Training involves learning both the encoding function $\phi$ and the decoding function $\psi$ in order to reduce the reconstruction loss, e.g. if MSE is used: $\phi, \psi = \underset{\phi,\psi}{\arg\min} \lVert \mathbf{X}-(\psi \circ \phi)\mathbf{X}\rVert^2$. However, untrained (randomly initialised) autoencoders can also be used. For example, a pytorch
autoencoder can be incorporated into a detector by packaging it as a callable function using {func}~alibi_detect.cd.pytorch.preprocess.preprocess_drift
and {func}~functools.partial
:
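A sketch of this pattern is given below; the encoder architecture, data shapes and names are illustrative only:

```python
from functools import partial
import numpy as np
import torch
import torch.nn as nn
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.pytorch import preprocess_drift

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Untrained (randomly initialised) encoder mapping 32-d inputs to a 2-d latent space.
encoder_net = nn.Sequential(
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 2)
).to(device).eval()

# Package the encoder as a callable preprocessing function.
preprocess_fn = partial(preprocess_drift, model=encoder_net, device=device, batch_size=512)

X_ref = np.random.randn(500, 32).astype(np.float32)  # placeholder reference data
cd = MMDDrift(X_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn)
```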
Following Detecting and Correcting for Label Shift with Black Box Predictors, feature maps can be extracted from existing pre-trained black-box models such as the image classifier shown below. Instead of using the latent space as the dimensionality-reducing representation, other layers of the model such as the softmax outputs or predicted class-labels can also be extracted and monitored. Since different layers yield different output dimensions, different hypothesis tests are required for each.
:::{figure} images/BBSD.png :align: center :alt: Black box shift detection
Figure inspired by this MNIST classification example from the timeserio package. :::
Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift shows that extracting feature maps from existing models can be an effective technique, which is encouraging since this allows the user to repurpose existing black-box models for use as drift detectors. The syntax for incorporating existing models into drift detectors is similar to the previous autoencoder example, with the added step of using {class}~alibi_detect.cd.tensorflow.preprocess.HiddenOutput
to select the model’s network layer to extract outputs from. The code snippet below is borrowed from Maximum Mean Discrepancy drift detector on CIFAR-10, where the softmax layer of the well-known ResNet-32 model is fed into an MMDDrift
detector.
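A condensed sketch of that pattern is shown below; `clf` stands in for the pre-trained classifier (here a dummy Keras model rather than the actual ResNet-32), and the reference images are placeholders:

```python
from functools import partial
import numpy as np
import tensorflow as tf
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

# Stand-in for an existing pre-trained classifier.
clf = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

X_ref = np.random.rand(500, 32, 32, 3).astype(np.float32)  # placeholder reference images

# Feed the model's softmax outputs (the final layer, layer=-1) into the MMD detector.
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(clf, layer=-1), batch_size=128)
cd = MMDDrift(X_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
```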
The model uncertainty-based drift detector uses the ML model of interest itself to detect drift. These detectors aim to directly detect drift that's likely to affect the performance of a model of interest. The approach is to test for change in the number of instances falling into regions of the input space on which the model is uncertain in its predictions. For each instance in the reference set the detector obtains the model's prediction and some associated notion of uncertainty. The same is done for the test set and if significant differences in uncertainty are detected (via a Kolmogorov-Smirnov test) then drift is flagged. The model's notion of uncertainty depends on the type of model. For a classifier this may be the entropy of the predicted label probabilities. For a regressor with dropout layers, dropout Monte Carlo can be used to provide a notion of uncertainty.
The model uncertainty-based detectors are classed under the dimension reduction category since a model's uncertainty is by definition one-dimensional. However, the syntax for the uncertainty-based detectors is different to the other detectors. Instead of passing a pre-processing step to a detector via a preprocess_fn
(or similar) argument, the dimension reduction (in this case computing a notion of uncertainty) is performed internally by these detectors.
Dimension reduction is a common preprocessing task (e.g. for covariate drift detection on tabular or image data), but some modalities of data (e.g. text and graph data) require other forms of preprocessing in order for drift detection to be performed effectively.
When dealing with text data, performing drift detection on raw strings or tokenized data is not effective since they don’t represent the semantics of the input. Instead, we extract contextual embeddings from language transformer models and detect drift on those. This procedure has a significant impact on the type of drift we detect. Strictly speaking we are not detecting covariate/input drift anymore since the entire training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract.
:::{figure} images/BERT.png :align: center :alt: The DistilBERT language representation model
Figure based on Jay Alammar’s excellent visual guide to the BERT model :::
Alibi Detect contains functionality to leverage pre-trained embeddings from HuggingFace’s transformer package. Popular models such as BERT or DistilBERT (shown above) can be used, but Alibi Detect also allows you to easily use your own embeddings of choice. A subsequent dimension reduction step can also be applied if necessary, as is done in the Text drift detection on IMDB movie reviews example, where the 768-dimensional embeddings from the BERT model are passed through an untrained AutoEncoder to reduce their dimensionality. Alibi Detect allows various types of embeddings to be extracted from transformer models, using {class}~alibi_detect.models.tensorflow.embedding.TransformerEmbedding
.
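A sketch of how such an embedding can be plugged into a detector is given below; the model name, layer choice and `max_len` are illustrative, and `x_ref` is assumed to be a list of reference documents (a tiny placeholder here):

```python
from functools import partial
from transformers import AutoTokenizer
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import preprocess_drift
from alibi_detect.models.tensorflow import TransformerEmbedding

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Extract embeddings from the hidden states of the last 8 transformer layers.
layers = [-_ for _ in range(1, 9)]
embedding = TransformerEmbedding(model_name, 'hidden_state', layers)

# Placeholder reference documents.
x_ref = ['a first reference document', 'a second reference document']

preprocess_fn = partial(preprocess_drift, model=embedding, tokenizer=tokenizer,
                        max_len=100, batch_size=32)
cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
```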
In a similar manner to text data, graph data requires preprocessing before drift detection can be performed. This can be done by extracting graph embeddings from graph neural network (GNN) encoders, as shown below, and demonstrated in the Drift detection on molecular graphs example.
For a simple example, we'll use the MMD detector to check for drift on the two-dimensional binary classification problem shown previously (see notebook). The MMD detector is a kernel-based method for multivariate two-sample testing. Since the number of dimensions is already low, a dimension reduction step is not necessary here. For a more advanced example using the MMD detector with dimension reduction, check out the Maximum Mean Discrepancy drift detector on CIFAR-10 example.
The true model/process is defined as:
where the slope $s$ is set as $s=-1$.
The reference distribution is defined as a mixture of two Normal distributions:
with the standard deviation set at $\sigma=0.8$, and the weights set to $\phi_1=\phi_2=0.5$. Reference data $\mathbf{X}^{ref}$ and training data $\mathbf{X}^{train}$ (see Note 1) can be generated by sampling from this distribution. The corresponding labels $\mathbf{Y}^{ref}$ and $\mathbf{Y}^{train}$ are obtained by evaluating true_model()
.
For a model, we choose the well-known decision tree classifier. As well as training the model, this is a good time to initialise the MMD detector with the held-out reference data $\mathbf{X}^{ref}$ by calling:
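Assuming the sampled reference data is stored in an array named X_ref, the call might look like the following sketch:

```python
from alibi_detect.cd import MMDDrift

# X_ref: the held-out reference sample of shape (N, 2) generated above.
detector = MMDDrift(X_ref, backend='tensorflow', p_val=0.05)
```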
The significance threshold is set at $\alpha=0.05$, meaning the detector will flag results as drift detected when the computed $p$-value is less than this i.e. $\hat{p}< \alpha$.
Before introducing drift, we first examine the case where no drift is present. We resample from the same mixture of Gaussian distributions to generate test data $\mathbf{X}$. The individual data observations are different, but the underlying distributions are unchanged, hence no true drift is present.
Unsurprisingly, the model's mean test accuracy is relatively high. To run the detector on test data, the .predict()
method is used:
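For example, with X_test denoting the freshly sampled test batch (name assumed):

```python
preds = detector.predict(X_test)
print(preds['data']['is_drift'])  # 0: no drift detected, 1: drift detected
print(preds['data']['p_val'])     # p-value from the permutation test
```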
For the test statistic $S(\mathbf{X})$ (we write $S(\mathbf{X})$ instead of $S(\mathbf{z})$ since the detector is operating on input data), the MMD detector uses the kernel trick to compute unbiased estimates of $\text{MMD}^2$. The $\text{MMD}$ is a distance-based measure between the two distributions $P$ and $P_{ref}$, based on the mean embeddings $\mu$ and $\mu_{ref}$ in a reproducing kernel Hilbert space $F$:
A $p$-value is then obtained via a permutation test on the estimates of $\text{MMD}^2$. As expected, since we are sampling from the reference distribution $P_{ref}(\mathbf{X})$, the detector’s prediction is 'is_drift':0
here, indicating that drift is not detected. More specifically, the detector’s $p$-value (p_val
) is above the threshold of $\alpha=0.05$ (threshold
), indicating that no statistically significant drift has been detected. The .predict()
method also returns $\hat{S}(\mathbf{X})$ (distance_threshold
), which is the threshold in terms of the test statistic $S(\mathbf{X})$ i.e. when $S(\mathbf{X})\ge \hat{S}(\mathbf{X})$ statistically significant drift has been detected.
To impose covariate drift, we apply a shift to the mean of one of the normal distributions:
The test data has drifted into a previously unseen region of feature space, and the model is now misclassifying a number of test observations. If true test labels are available, this is easily detectable by monitoring the test accuracy. However, labels are not always available at test time, in which case a drift detector monitoring the covariates comes in handy. In this case, the MMD detector successfully detects the covariate drift.
In a similar manner, a proxy for prior drift can be monitored by initialising a detector on labels from the reference set, and then feeding it a model’s predicted labels:
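One possible sketch of this, using the Chi-Squared detector on the (categorical) labels; the original example may use a different univariate detector, and the names clf, y_ref and X_test are assumed from the steps above:

```python
from alibi_detect.cd import ChiSquareDrift

# Initialise on the reference labels (binary, hence categorical)...
label_detector = ChiSquareDrift(y_ref.reshape(-1, 1), p_val=0.05)

# ...and feed it the model's predicted labels on the test batch as a proxy.
y_pred = clf.predict(X_test)
preds = label_detector.predict(y_pred.reshape(-1, 1))
```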
It can often be challenging to specify a test statistic $S(\mathbf{z})$ that is large when drift is present and small otherwise. Alibi Detect offers a number of learned detectors, which try to explicitly learn a test statistic which satisfies this property:
Learned kernel
Classifier
Spot-the-diff(erence)
These detectors can be highly effective, but require training hence potentially increasing data requirements and set-up time. Similarly to when training preprocessing steps, it is important that the learned detectors are trained on training data which is held-out from the reference data set (see Note 1). A brief overview of these detectors is given below. For more details, see the detectors’ respective pages.
The MMD detector uses a kernel $k(\mathbf{z},\mathbf{z}^{ref})$ to compute unbiased estimates of $\text{MMD}^2$. The user is free to provide their own kernel, but by default a Gaussian RBF kernel is used. The Learned kernel drift detector (Liu et al., 2020) extends this approach by training a kernel to maximise an estimate of the resulting test power. The learned kernel is defined as
where $\Phi$ is a learnable projection, $k_a$ and $k_b$ are simple characteristic kernels (such as a Gaussian RBF), and $\epsilon>0$ is a small constant. By letting $\Phi$ be very flexible we can learn powerful kernels in this manner.
The figure below compares the use of a Gaussian and a learned kernel for identifying differences between two distributions $P$ and $P_{ref}$. The distributions are each equal mixtures of nine Gaussians with the same modes, but each component of $P_{ref}$ is an isotropic Gaussian, whereas the covariance of $P$ differs in each component. The Gaussian kernel (c) treats points isotropically throughout the space, based upon $\lVert \mathbf{z} - \mathbf{z}^{ref} \rVert$ only. The learned kernel (d) behaves differently in different regions of the space, adapting to local structure and therefore allowing better detection of differences between $P$ and $P_{ref}$.
:::{figure} images/deep_kernel.png :align: center :alt: Gaussian and deep kernels
Original image source: Liu et al., 2020. Captions modified to match notation used elsewhere on this page. :::
The classifier-based drift detector (Lopez-Paz and Oquab, 2017) attempts to detect drift by explicitly training a classifier to discriminate between data from the reference and test sets. The statistical test used depends on whether the classifier outputs probabilities or binarized (0 or 1) predictions, but the general idea is to determine whether the classifier's performance is statistically different from random chance. If the classifier can learn to discriminate better than randomly (in a generalisable manner) then drift must have occurred.
Liu et al. show that a classifier-based drift detector is actually a special case of the learned kernel. An important difference is that to train a classifier we maximise its accuracy (or a cross-entropy proxy), while for a learned kernel we maximise the test power directly. Liu et al. show that the latter approach is empirically superior.
The spot-the-diff(erence) drift detector is an extension of the Classifier drift detector, where the classifier is specified in a manner that makes detections interpretable at the feature level when they occur. The detector is inspired by the work of Jitkrittum et al. (2016) but various major adaptations have been made.
As with the usual classifier-based approach, a portion of the available data is used to train a classifier that can discriminate reference instances from test instances. However, the spot-the-diff detector is specified such that when drift is detected, we can inspect the weights of the classifier to shine light on exactly which features of the data were used to distinguish reference from test samples, and therefore caused drift to be detected. The Interpretable drift detection with the spot-the-diff detector on MNIST and Wine-Quality datasets example demonstrates this capability.
So far, we have discussed drift detection in an offline context, with the entire test set $\{\mathbf{z}_i\}_{i=1}^{N}$ compared to the reference dataset $\{\mathbf{z}^{ref}_i\}_{i=1}^{M}$. However, at test time, data sometimes arrives sequentially. Here it is desirable to detect drift in an online fashion, allowing us to respond as quickly as possible and limit the damage it might cause.
One approach is to perform a test for drift every $W$ time-steps, using the $W$ samples that have arrived since the last test. In other words, we compare $\{\mathbf{z}_i\}_{i=t-W+1}^{t}$ to $\{\mathbf{z}^{ref}_i\}_{i=1}^{M}$. Such a strategy could be implemented using any of the offline detectors implemented in alibi-detect, but being both sensitive to slight drift and responsive to severe drift is difficult. If the window size $W$ is too large the delay between consecutive statistical tests hampers responsiveness to severe drift, but an overly small window is unreliable. This is demonstrated below, where the offline MMD detector is used to monitor drift in data $\mathbf{X}$ sampled from a normal distribution $\mathcal{N}\left(\mu,\sigma^2 \right)$ over time $t$, with the mean starting to drift from $\mu=0$ to $\mu=0.5$ at $t=40$.
An alternative strategy is to perform a test each time data arrives. However, the usual offline methods are not applicable because the process for computing $p$-values is too expensive. Additionally, they don't account for correlated test outcomes when using overlapping windows of test data, leading to miscalibrated detectors operating at an unknown False Positive Rate (FPR). Well-calibrated FPRs are crucial for judging the significance of a drift detection. In the absence of calibration, drift detection can be useless since there is no way of knowing what fraction of detections are false positives. To tackle this problem, Alibi Detect offers specialist online drift detectors:
These detectors leverage the calibration method introduced by Cobb et al. (2021) in order to ensure they are well-calibrated when used in a sequential manner. The detectors compute a test statistic $S(\mathbf{z})$ during the configuration phase. Then, at test time, the test statistic is updated sequentially at a low cost. When no drift has occurred the test statistic fluctuates around its expected value, and once drift occurs the test statistic starts to drift upwards. When it exceeds some preconfigured threshold value, drift is detected. The online detectors are constructed in a similar manner to the offline detectors, for example for the online MMD detector:
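A minimal sketch, where X_ref is assumed to be a large array of reference instances and the parameter values are illustrative:

```python
from alibi_detect.cd import MMDDriftOnline

# ert: expected run-time in time-steps; window_size: size of the sliding test-window.
online_detector = MMDDriftOnline(X_ref, ert=200, window_size=20,
                                 backend='tensorflow', n_bootstraps=5000)
```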
In addition to providing the detector with reference data, the expected run-time (see below) and the size of the sliding window must also be specified. Another important difference is that the online detectors make predictions on single data instances:
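For example, assuming x_t is a single incoming observation (no batch dimension):

```python
pred = online_detector.predict(x_t, return_test_stat=True)
pred['data']['is_drift']    # 1 once the test statistic exceeds its threshold
pred['data']['test_stat']   # current value of the estimated test statistic
```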
This can be seen in the animation below, where the online detector considers each incoming observation/sample individually, instead of considering a batch of observations like the offline detectors.
Unlike offline detectors which require the specification of a threshold $p$-value, which is equivalent to a false positive rate (FPR), the online detectors in alibi-detect require the specification of an expected run-time (ERT) (an inverted FPR). This is the number of time-steps that we insist our detectors, on average, should run for in the absence of drift, before making a false detection.
Usually we would like the ERT to be large, however this results in insensitive detectors which are slow to respond when drift does occur. Hence, there is a tradeoff between the expected run time and the expected detection delay (the time taken for the detector to respond to drift in the data). To target the desired ERT, thresholds are configured during an initial configuration phase via simulation (n_bootstraps
sets the number of bootstrap simulations used here). This configuration process is only suitable when the amount of reference data is relatively large (ideally around an order of magnitude larger than the desired ERT). Configuration can be expensive (less so with a GPU), but allows the detector to operate at a low cost at test time. For a more in-depth explanation, see Drift Detection: An Introduction with Seldon.
The FET drift detector is a non-parametric drift detector. It applies Fisher's Exact Test (FET) to each feature, and is intended for application to Bernoulli distributions, with binary univariate data consisting of either (True, False)
or (0, 1)
. This detector is ideal for use in a supervised setting, monitoring drift in a model's instance level accuracy (i.e. correct prediction = 0, and incorrect prediction = 1).
The detector is primarily intended for univariate data, but can also be used in a multivariate setting. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur. As with other univariate detectors such as the Kolmogorov-Smirnov detector, for high-dimensional data, we typically want to reduce the dimensionality before computing the feature-wise univariate FET tests and aggregating those via the chosen correction method. See Dimension Reduction for more guidance on this.
For the $j^{th}$ feature, the FET detector considers the 2x2 contingency table between the reference data $x_j^{ref}$ and test data $x_j$ for that feature:
| | True (1) | False (0) |
|---|---|---|
| Reference $x_j^{ref}$ | $N^{ref}_1$ | $N^{ref}_0$ |
| Test $x_j$ | $N_1$ | $N_0$ |
where $N^{ref}_1$ represents the number of 1's in the reference data (for the $j^{th}$ feature), $N^{ref}_0$ the number of 0's, and so on. These values can be used to define an odds ratio:
The null hypothesis is $H_0: \widehat{OR}=1$. In other words, the proportion of 1's to 0's is unchanged between the test and reference distributions, such that the odds of 1's vs 0's is independent of whether the data is drawn from the reference or test distribution. The offline FET detector can perform one-sided or two-sided tests, with the alternative hypothesis set by the alternative
keyword argument:
If alternative='greater'
, the alternative hypothesis is $H_a: \widehat{OR}>1$ i.e. proportion of 1's versus 0's has increased compared to the reference distribution.
If alternative='less'
, the alternative hypothesis is $H_a: \widehat{OR}<1$ i.e. the proportion of 1's versus 0's has decreased compared to the reference distribution.
If alternative='two-sided'
, the alternative hypothesis is $H_a: \widehat{OR} \ne 1$ i.e. the proportion of 1's versus 0's has changed compared to the reference distribution.
The p-value returned by the detector is then the probability of obtaining an odds ratio at least as extreme as that observed (in the direction specified by alternative
), assuming the null hypothesis is true.
Arguments:
x_ref
: Data used as reference distribution. Note this should be the raw data, for example np.array([0, 0, 1, 0, 0, 0])
, not the 2x2 contingency table.
Keyword arguments:
p_val
: p-value used for significance of the FET test. If the FDR correction method is used, this corresponds to the acceptable q-value.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique.
correction
: Correction type for multivariate data. Either 'bonferroni' or 'fdr' (False Discovery Rate).
alternative
: Defines the alternative hypothesis. Options are 'greater' (default), 'less' or 'two-sided'.
n_features
: Number of features used in the FET test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
input_shape
: Shape of input data.
data_type
: can specify data type added to metadata. E.g. 'tabular' or 'image'.
Initialized drift detector example:
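A sketch of initialisation on a univariate stream of binary outcomes (the data and parameter values are assumptions):

```python
import numpy as np
from alibi_detect.cd import FETDrift

# Raw binary reference data, e.g. instance-level correct (0) / incorrect (1) indicators.
x_ref = np.random.choice([0, 1], size=(1000,), p=[0.95, 0.05])

cd = FETDrift(x_ref, p_val=0.05, alternative='greater')
```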
We detect data drift by simply calling predict
on a batch of instances x
. We can return the feature-wise p-values before the multivariate correction by setting return_p_val
to True. The drift can also be detected at the feature level by setting drift_type
to 'feature'. No multivariate correction will take place since we return the output of n_features univariate tests. For drift detection on all the features combined with the correction, use 'batch'. return_p_val
equal to True will also return the threshold used by the detector (either for the univariate case or after the multivariate correction).
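For example, for a new batch x of binary outcomes (name assumed):

```python
preds = cd.predict(x, drift_type='batch', return_p_val=True, return_distance=True)
preds['data']['is_drift']   # 0 or 1
preds['data']['p_val']      # feature-wise p-values
preds['data']['distance']   # feature-wise odds ratios
```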
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains feature-level p-values if return_p_val
equals True.
threshold
: for feature-level drift detection the threshold equals the p-value used for the significance of the FET test. Otherwise the threshold after the multivariate correction (either bonferroni or fdr) is returned.
distance
: Feature-wise test statistics between the reference data and the new batch if return_distance
equals True. In this case, the test statistics correspond to the odds ratios.
The drift detector applies feature-wise two-sample Kolmogorov-Smirnov (K-S) tests for the continuous numerical features and Chi-Squared tests for the categorical features. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur. Similarly to the other drift detectors, a preprocessing step could be applied, but the output features need to be categorical.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
p_val
: p-value used for significance of the K-S and Chi-Squared test across all features. If the FDR correction method is used, this corresponds to the acceptable q-value.
categories_per_feature
: Dictionary with as keys the column indices of the categorical features and optionally as values the number of possible categorical values for that feature or a list with the possible values. If you know which features are categorical and simply want to infer the possible values of the categorical feature from the reference data you can pass a Dict[int, NoneType] such as {0: None, 3: None} if features 0 and 3 are categorical. If you also know how many categories are present for a given feature you could pass this in the categories_per_feature
dict in the Dict[int, int] format, e.g. {0: 3, 3: 2}. If you pass N categories this will assume the possible values for the feature are [0, ..., N-1]. You can also explicitly pass the possible categories in the Dict[int, List[int]] format, e.g. {0: [0, 1, 2], 3: [0, 55]}. Note that the categories can be arbitrary int values.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique.
correction
: Correction type for multivariate data. Either 'bonferroni' or 'fdr' (False Discovery Rate).
alternative
: Defines the alternative hypothesis for the K-S tests. Options are 'two-sided' (default), 'less' or 'greater'. Make sure to use 'two-sided' when mixing categorical and numerical features.
n_features
: Number of features used in the K-S and Chi-Squared tests. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
data_type
: can specify data type added to metadata. E.g. 'tabular'.
Initialized drift detector example:
We detect data drift by simply calling predict
on a batch of instances x
. We can return the feature-wise p-values before the multivariate correction by setting return_p_val
to True. The drift can also be detected at the feature level by setting drift_type
to 'feature'. No multivariate correction will take place since we return the output of n_features univariate tests. For drift detection on all the features combined with the correction, use 'batch'. return_p_val
equal to True will also return the threshold used by the detector (either for the univariate case or after the multivariate correction).
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains feature-level p-values if return_p_val
equals True.
threshold
: for feature-level drift detection the threshold equals the p-value used for the significance of the K-S and Chi-Squared tests. Otherwise the threshold after the multivariate correction (either bonferroni or fdr) is returned.
distance
: feature-wise K-S or Chi-Squared statistics between the reference data and the new batch if return_distance
equals True.
The online CVM detector is a non-parametric method for online drift detection on continuous data. Like the offline detector, it applies a univariate Cramér-von Mises (CVM) test to each feature. This detector is an adaptation of that proposed by Ross et al.
Warning
This detector is multi-threaded, with Numba used to parallelise over the simulated streams. There is a known issue on MacOS, where Numba's default OpenMP threading layer causes segfaults. A workaround is to use the slightly less performant workqueue
threading layer on MacOS by setting the NUMBA_THREADING_LAYER
environment variable or running:
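A sketch of the programmatic option (set before the detector is configured):

```python
from numba import config

# Use the workqueue threading layer instead of the default OpenMP layer.
config.THREADING_LAYER = 'workqueue'
```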
Online detectors assume the reference data is large and fixed and operate on single data points at a time (rather than batches). These data points are passed into the test-windows, and a two-sample test-statistic between the reference data and test-window is computed at each time-step. When the test-statistic exceeds a preconfigured threshold, drift is detected. Configuration of the thresholds requires specification of the expected run-time (ERT) which specifies how many time-steps that the detector, on average, should run for in the absence of drift before making a false detection. Thresholds are then configured to target this ERT by simulating n_bootstraps
number of streams of length t_max = 2*max(window_sizes) - 1
. Conveniently, the non-parametric nature of the detector means that thresholds depend only on $M$, the length of the reference data set. Therefore, for multivariate data, configuration is only as costly as the univariate case.
Note
In order to reduce the memory requirements of the threshold configuration process, streams are simulated in batches of size $N_{batch}$, set with the batch_size
keyword argument. However, the memory requirements still scale with $O(M^2N_{batch})$. If configuration requires too much memory (or time), consider subsampling the reference data. The quadratic growth of the cost with respect to the number of reference instances $M$, combined with the diminishing increase in test power, often makes this a worthwhile tradeoff.
Specification of test-window sizes (the detector accepts multiple windows of different size $W$) is also required, with smaller windows allowing faster response to severe drift and larger windows allowing more power to detect slight drift. Since this detector requires the windows to be full to function, the ERT is measured from t = min(window_sizes)-1
.
Although this detector is primarily intended for univariate data, it can also be applied to multivariate data. In this case, the detector makes a correction similar to the Bonferroni correction used for the offline detector. Given $d$ features, the detector configures thresholds by targeting the $1-\beta$ quantile of test statistics over the simulated streams, where $\beta = 1 - (1-(1/ERT))^{(1/d)}$. For the univariate case, this simplifies to $\beta = 1/ERT$. At prediction time, drift is flagged if the test statistic of any feature stream exceeds its threshold.
Note
In the multivariate case, for the ERT's upper bound to be accurate, the feature streams must be independent. Regardless of independence, the ERT will still be properly lower bounded.
Arguments:
x_ref
: Data used as reference distribution.
ert
: The expected run-time in the absence of drift, starting from t=min(window_sizes).
window_sizes
: The sizes of the sliding test-windows used to compute the test-statistics. Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift.
Keyword arguments:
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
n_bootstraps
: The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT.
batch_size
: The maximum number of bootstrap simulations to compute in each batch when configuring thresholds. A smaller batch size reduces memory requirements, but can result in a longer configuration run time.
n_features
: Number of features used in the CVM test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
verbose
: Whether or not to print progress during configuration.
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (tabular, image or time-series). Added to metadata.
Initialized drift detector example:
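A sketch with illustrative data and parameter values:

```python
import numpy as np
from alibi_detect.cd import CVMDriftOnline

x_ref = np.random.randn(5000)  # large, fixed (univariate) reference set

cd = CVMDriftOnline(x_ref, ert=150, window_sizes=[50, 100], n_bootstraps=5000)
```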
We detect data drift by sequentially calling predict
on single instances x_t
(no batch dimension) as they each arrive. We can return the test-statistic and the threshold by setting return_test_stat
to True.
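For example, assuming x_stream yields incoming observations one at a time:

```python
for t, x_t in enumerate(x_stream):
    pred = cd.predict(x_t, return_test_stat=True)
    if pred['data']['is_drift']:
        print(f'Drift detected at time-step {t}')
        break
```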
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if any of the test-windows have drifted from the reference data and 0 otherwise.
time
: The number of observations that have so far been passed to the detector as test instances.
ert
: The expected run-time the detector was configured to run at in the absence of drift.
test_stat
: CVM test-statistics between the reference data and the test_windows if return_test_stat
equals True.
threshold
: The values the test-statistics are required to exceed for drift to be detected if return_test_stat
equals True.
The detector's state may be saved with the save_state
method:
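For example (the checkpoint path is hypothetical):

```python
cd.save_state('checkpoints/cvm_online')
```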
The previously saved state may then be loaded via the load_state
method:
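For example, restoring from the same hypothetical path:

```python
cd.load_state('checkpoints/cvm_online')
```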
The online FET detector is a non-parametric method for online drift detection. Like the offline detector, it applies a Fisher's Exact Test (FET) to each feature. It is intended for application to Bernoulli streams, with binary data consisting of either (True, False)
or (0, 1)
. This detector is ideal for use in a supervised setting, monitoring drift in a model's instance level accuracy (i.e. correct prediction = 0, and incorrect prediction = 1).
Online detectors assume the reference data is large and fixed and operate on single data points at a time (rather than batches). These data points are passed into the test-windows, and a two-sample test-statistic (in this case $F=1-\hat{p}$) between the reference data and test-window is computed at each time-step. When the test-statistic exceeds a preconfigured threshold, drift is detected. Configuration of the thresholds requires specification of the expected run-time (ERT) which specifies how many time-steps that the detector, on average, should run for in the absence of drift before making a false detection.
In a similar manner to that proposed by Ross et al., thresholds are configured by simulating n_bootstraps
Bernoulli streams. The length of streams can be set with the t_max
parameter. Since the thresholds are expected to converge after t_max = 2*max(window_sizes) - 1
time steps, we only need to simulate trajectories and estimate thresholds up to this point, and t_max
is set to this value by default. The test statistics are then smoothed using an exponential moving average to remove their discreteness, allowing more precise quantiles to be targeted:
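The precise definition is not reproduced here, but based on the surrounding description the smoothing presumably takes the standard exponential-moving-average form, with $\hat{p}_t$ the FET p-value computed on the window at time $t$:

$$F_t = (1-\lambda)\,F_{t-1} + \lambda\,(1-\hat{p}_t)$$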
For a window size of $W$, at time $t$ the value of the statistic $F_t$ depends on more than just the previous $W$ values. If $\lambda$, set by lam
, is too small, thresholds may keep decreasing well past $2W - 1$ timesteps. To avoid this, the default lam
is set to a high value of $\lambda=0.99$, meaning that discreteness is still broken, but the value of the test statistic depends (almost) solely on the last $W$ observations. If more smoothing is desired, the t_max
parameter can be manually set at a larger value.
Note
The detector must configure thresholds for each window size and each feature. This can be a time consuming process if the number of features is high. For high-dimensional data users are recommended to apply a dimension reduction step via preprocess_fn
.
Specification of test-window sizes (the detector accepts multiple windows of different size $W$) is also required, with smaller windows allowing faster response to severe drift and larger windows allowing more power to detect slight drift. Since this detector requires a window to be full to function, the ERT is measured from t = min(window_sizes)-1
.
Although this detector is primarily intended for univariate data, it can also be applied to multivariate data. In this case, the detector makes a correction similar to the Bonferroni correction used for the offline detector. Given $d$ features, the detector configures thresholds by targeting the $1-\beta$ quantile of test statistics over the simulated streams, where $\beta = 1 - (1-(1/ERT))^{(1/d)}$. For the univariate case, this simplifies to $\beta = 1/ERT$. At prediction time, drift is flagged if the test statistic of any feature stream exceeds its threshold.
Note
In the multivariate case, for the ERT to be accurately targeted the feature streams must be independent.
Arguments:
x_ref
: Data used as reference distribution.
ert
: The expected run-time in the absence of drift, starting from t=min(window_sizes).
window_sizes
: The sizes of the sliding test-windows used to compute the test-statistics. Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift.
Keyword arguments:
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
n_bootstraps
: The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT.
t_max
: Length of streams to simulate when configuring thresholds. If None, this is set to 2 * max(window_sizes
) - 1.
alternative
: Defines the alternative hypothesis. Options are 'greater' (default) or 'less', corresponding to an increase or decrease in the mean of the Bernoulli stream.
lam
: Smoothing coefficient used for exponential moving average. If heavy smoothing is applied (lam
<<1), a larger t_max
may be necessary in order to ensure the thresholds have converged.
n_features
: Number of features used in the FET test. No need to pass it if no preprocessing takes place. In case of a preprocessing step, this can also be inferred automatically but could be more expensive to compute.
verbose
: Whether or not to print progress during configuration.
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (tabular, image or time-series). Added to metadata.
Initialized drift detector example:
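A sketch with illustrative data and parameter values:

```python
import numpy as np
from alibi_detect.cd import FETDriftOnline

# Reference stream of binary outcomes, e.g. 1 = misclassified instance.
x_ref = np.random.choice([0, 1], size=(10000,), p=[0.95, 0.05])

cd = FETDriftOnline(x_ref, ert=300, window_sizes=[100, 250], alternative='greater')
```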
We detect data drift by sequentially calling predict
on single instances x_t
(no batch dimension) as they each arrive. We can return the test-statistic and the threshold by setting return_test_stat
to True.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if any of the test-windows have drifted from the reference data and 0 otherwise.
time
: The number of observations that have so far been passed to the detector as test instances.
ert
: The expected run-time the detector was configured to run at in the absence of drift.
test_stat
: FET test-statistics (1-p_val
) between the reference data and the test_windows if return_test_stat
equals True.
threshold
: The values the test-statistics are required to exceed for drift to be detected if return_test_stat
equals True.
The detector's state may be saved with the save_state
method:
The previously saved state may then be loaded via the load_state
method:
The drift detector applies feature-wise two-sample Kolmogorov-Smirnov (K-S) tests for the continuous numerical features and Chi-Squared tests for the categorical features. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur.
The instances contain a person's characteristics like age, marital status or education while the label represents whether the person makes more or less than $50k per year. The dataset consists of a mixture of numerical and categorical features. It is fetched using the alibi library, which can be installed with pip.
The fetch_adult
function returns a Bunch
object containing the instances, the targets, the feature names and a dictionary with as keys the column indices of the categorical features and as values the possible categories for each categorical variable.
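A sketch of fetching the data (assuming alibi has been installed, e.g. with pip install alibi):

```python
from alibi.datasets import fetch_adult

adult = fetch_adult()
X, y = adult.data, adult.target
feature_names = adult.feature_names
category_map = adult.category_map  # column index -> list of possible category values
```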
We split the data in a reference set and 2 test sets on which we test the data drift:
We need to provide the drift detector with the columns which contain categorical features so it knows which features require the Chi-Squared and which ones require the K-S univariate test. We can either provide a dict with as keys the column indices and as values the number of possible categories or just set the values to None and let the detector infer the number of categories from the reference data as in the example below:
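For example, letting the detector infer the categories of each categorical column from the reference data:

```python
categories_per_feature = {f: None for f in list(category_map.keys())}
```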
Initialize the detector:
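Assuming the reference split is stored in X_ref:

```python
from alibi_detect.cd import TabularDrift

cd = TabularDrift(X_ref, p_val=.05, categories_per_feature=categories_per_feature)
```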
We can also save/load an initialised detector:
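For example, using the saving utilities (the target directory is hypothetical):

```python
from alibi_detect.saving import save_detector, load_detector

filepath = 'my_detector'
save_detector(cd, filepath)
cd = load_detector(filepath)
```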
Now we can check whether the 2 test sets are drifting from the reference data:
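A sketch, assuming the two test splits are stored in X_t0 and X_t1 (names assumed):

```python
labels = ['No', 'Yes']
for name, X_t in [('test set 1', X_t0), ('test set 2', X_t1)]:
    preds = cd.predict(X_t)
    print(f"{name} drift? {labels[preds['data']['is_drift']]}")
```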
Let's take a closer look at each of the features. The preds
dictionary also returns the K-S or Chi-Squared test statistics and p-value for each feature:
None of the feature-level p-values are below the threshold:
If you are interested in individual feature-wise drift, this is also possible:
What about the second test set?
We can again investigate the individual features:
It seems like there is little divergence in the distributions of the features between the reference and test set. Let's visualize this:
While the TabularDrift detector works fine with numerical or categorical features only, we can also directly use a categorical drift detector. In this case, we don't need to specify the categorical feature columns. First we construct a categorical-only dataset and then use the ChiSquareDrift detector:
The online MMD detector is a kernel-based method for online drift detection. The MMD is a distance-based measure between 2 distributions $p$ and $q$ based on the mean embeddings $\mu_{p}$ and $\mu_{q}$ in a reproducing kernel Hilbert space $F$:
Given reference samples $\{X_i\}_{i=1}^{N}$ and test samples $\{Y_i\}_{i=t}^{t+W}$ we may compute an unbiased estimate $\widehat{MMD}^2(F, \{X_i\}_{i=1}^N, \{Y_i\}_{i=t}^{t+W})$ of the squared MMD between the two underlying distributions. The estimate can be updated at low cost as new data points enter into the test-window. We use a Gaussian RBF kernel by default, but users are free to pass their own kernel of preference to the detector.
Online detectors assume the reference data is large and fixed and operate on single data points at a time (rather than batches). These data points are passed into the test-window and a two-sample test-statistic (in this case squared MMD) between the reference data and test-window is computed at each time-step. When the test-statistic exceeds a preconfigured threshold, drift is detected. Configuration of the thresholds requires specification of the expected run-time (ERT) which specifies how many time-steps that the detector, on average, should run for in the absence of drift before making a false detection. It also requires specification of a test-window size, with smaller windows allowing faster response to severe drift and larger windows allowing more power to detect slight drift.
For high-dimensional data, we typically want to reduce the dimensionality before passing it to the detector. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs (BBSDs) as out-of-the-box preprocessing methods and note that PCA can also be easily implemented using scikit-learn
. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift.
Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detect drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from HuggingFace's transformers package but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the Text drift detection on IMDB movie reviews notebook.
Arguments:
x_ref
: Data used as reference distribution.
ert
: The expected run-time in the absence of drift, starting from t=0.
window_size
: The size of the sliding test-window used to compute the test-statistic. Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift.
Keyword arguments:
backend
: Backend used for the MMD implementation and configuration.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
kernel
: Kernel used for the MMD computation, defaults to Gaussian RBF kernel.
sigma
: Optionally set the GaussianRBF kernel bandwidth. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If sigma
is not specified, the 'median heuristic' is adopted whereby sigma
is set as the median pairwise distance between reference samples.
n_bootstraps
: The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT.
verbose
: Whether or not to print progress during configuration.
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (tabular, image or time-series). Added to metadata.
Additional PyTorch keyword arguments:
device
: Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu' or 'cpu'. Only relevant for 'pytorch' backend.
Initialized drift detector example:
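A sketch with the TensorFlow backend, where x_ref and the parameter values are assumptions:

```python
from alibi_detect.cd import MMDDriftOnline

cd = MMDDriftOnline(x_ref, ert=50, window_size=10, backend='tensorflow', n_bootstraps=2500)
```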
The same detector in PyTorch:
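The equivalent sketch with the PyTorch backend:

```python
from alibi_detect.cd import MMDDriftOnline

cd = MMDDriftOnline(x_ref, ert=50, window_size=10, backend='pytorch', n_bootstraps=2500)
```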
We can also easily add preprocessing functions for both frameworks. The following example uses a randomly initialized image encoder in PyTorch:
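A sketch of such a preprocessing function; the encoder architecture and input shape are illustrative, and x_ref is assumed to be a float32 array of channels-first images of shape (N, 3, 32, 32):

```python
from functools import partial
import torch
import torch.nn as nn
from alibi_detect.cd import MMDDriftOnline
from alibi_detect.cd.pytorch import preprocess_drift

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Randomly initialised convolutional encoder for 32x32x3 images, output dimension 32.
encoder_net = nn.Sequential(
    nn.Conv2d(3, 8, 4, stride=2, padding=1),    # 32x32 -> 16x16
    nn.ReLU(),
    nn.Conv2d(8, 16, 4, stride=2, padding=1),   # 16x16 -> 8x8
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 32)
).to(device).eval()

preprocess_fn = partial(preprocess_drift, model=encoder_net, device=device, batch_size=512)

cd = MMDDriftOnline(x_ref, ert=50, window_size=10, backend='pytorch',
                    preprocess_fn=preprocess_fn, n_bootstraps=2500)
```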
The same functionality is supported in TensorFlow and the main difference is that you would import from alibi_detect.cd.tensorflow import preprocess_drift
. Other preprocessing steps such as the output of hidden layers of a model or extracted text embeddings using transformer models can be used in a similar way in both frameworks. TensorFlow example for the hidden layer output:
Again the same functionality is supported in TensorFlow but with from alibi_detect.cd.tensorflow import preprocess_drift
and from alibi_detect.models.tensorflow import TransformerEmbedding
imports.
We detect data drift by sequentially calling predict
on single instances x_t
(no batch dimension) as they each arrive. We can return the test-statistic and the threshold by setting return_test_stat
to True.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the test-window (of the most recent window_size
observations) has drifted from the reference data and 0 otherwise.
time
: The number of observations that have so far been passed to the detector as test instances.
ert
: The expected run-time the detector was configured to run at in the absence of drift.
test_stat
: MMD^2 metric between the reference data and the test_window if return_test_stat
equals True.
threshold
: The value the test-statistic is required to exceed for drift to be detected if return_test_stat
equals True.
The detector's state may be saved with the save_state
method:
The previously saved state may then be loaded via the load_state
method:
Under the hood, drift detectors leverage a function (also known as a test-statistic) that is expected to take a large value if drift has occurred and a low value if not. The power of the detector is partly determined by how well the function satisfies this property. However, specifying such a function in advance can be very difficult.
The classifier-based drift detector simply tries to correctly distinguish instances from the reference data vs. the test set. The classifier is trained to output the probability that a given instance belongs to the test set. If the probabilities it assigns to unseen test instances are significantly higher (as determined by a Kolmogorov-Smirnov test) than those it assigns to unseen reference instances then the test set must differ from the reference set and drift is flagged. To leverage all the available reference and test data, stratified cross-validation can be applied and the out-of-fold predictions are used for the significance test. Note that a new classifier is trained for each test set or even each fold within the test set.
The method works with the PyTorch, TensorFlow, and Sklearn frameworks. We will focus exclusively on the Sklearn backend in this notebook.
The Adult dataset consists of 32,561 instances distributed over 2 classes based on whether the annual income is >50K. We evaluate drift on particular subsets of the data which are constructed based on the education level. As we will discuss further, our reference dataset will consist of people having a low education level, while our test dataset will consist of people having a high education level.
Note: we need to install alibi
to fetch the adult
dataset.
We split the dataset in two based on the education level. We define a low_education
level consisting of: 'Dropout'
, 'High School grad'
, 'Bachelors'
, and a high_education
level consisting of: 'Bachelors'
, 'Masters'
, 'Doctorate'
. Intentionally we included an overlap between the two distributions consisting of people that have a Bachelors
degree. Our goal is to detect that the two distributions are different.
We sample our reference dataset from the low_education
level. In addition, we sample two other datasets:
x_h0
- sampled from the low_education
level to support the null hypothesis (i.e., the two distributions are identical);
x_h1
- sampled from the high_education
level to support the alternative hypothesis (i.e., the two distributions are different);
We perform a binomial test using a RandomForestClassifier
.
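A minimal sketch of this setup with the Sklearn backend follows. The keyword names mirror recent Alibi Detect releases and may differ across versions; x_ref, x_h0 and x_h1 are the datasets sampled above, and the hyperparameter values are illustrative. Setting binarize_preds=True turns the significance test into a binomial test on the binarized predictions.

```python
from sklearn.ensemble import RandomForestClassifier
from alibi_detect.cd import ClassifierDrift

model = RandomForestClassifier(n_estimators=100)
cd = ClassifierDrift(
    x_ref,
    model,
    backend='sklearn',
    p_val=.05,
    n_folds=5,            # stratified CV so every instance gets an out-of-fold prediction
    binarize_preds=True   # binarized predictions -> binomial test
)

print(cd.predict(x_h0)['data'])  # expect no drift
print(cd.predict(x_h1)['data'])  # expect drift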
As expected, when testing against x_h0
, we fail to reject $H_0$, while for the second case there is enough evidence to reject $H_0$ and flag that the data has drifted.
For the classifiers that do not support predict_proba
but offer support for decision_function
, we can perform a K-S test on the scores by setting preds_type='scores'
.
Some models can return a poor estimate of the class label probability or some might not even support probability predictions. We can add calibration on top of each classifier to obtain better probability estimates and perform a K-S test. For demonstrative purposes, we will calibrate a LinearSVC
which does not support predict_proba
, but any other classifier would work.
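A hedged sketch of this calibration step, assuming the Sklearn backend exposes use_calibration and calibration_kwargs keyword arguments for this purpose:

```python
from sklearn.svm import LinearSVC
from alibi_detect.cd import ClassifierDrift

# LinearSVC has no predict_proba; calibrate it to obtain probability estimates
# (assumption: the sklearn backend accepts use_calibration / calibration_kwargs)
cd = ClassifierDrift(
    x_ref,
    LinearSVC(max_iter=10000),
    backend='sklearn',
    p_val=.05,
    n_folds=5,
    use_calibration=True,
    calibration_kwargs={'method': 'sigmoid'}
)
```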
In order to use the entire dataset and obtain unbiased predictions required to perform the statistical test, the ClassifierDrift
detector has the option to perform an n_folds
split. Although appealing due to its data efficiency, this method can be slow since it requires training n_folds
classifiers.
For demonstrative purposes, we will compare the running time of the ClassifierDrift
detector when using a RandomForestClassifier
in two setups: n_folds=5, use_oob=False
and use_oob=True
.
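A sketch of this comparison is given below; the timings depend on the hardware and on n_estimators, and use_oob is assumed to be a keyword argument of the Sklearn backend.

```python
import time
from sklearn.ensemble import RandomForestClassifier
from alibi_detect.cd import ClassifierDrift

for kwargs in [{'n_folds': 5, 'use_oob': False}, {'use_oob': True}]:
    cd = ClassifierDrift(x_ref, RandomForestClassifier(n_estimators=400),
                         backend='sklearn', p_val=.05, **kwargs)
    t0 = time.time()
    cd.predict(x_h1)
    print(kwargs, f'-> {time.time() - t0:.2f}s')
```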
We can observe that in this particular setting, using the out-of-bag predictions can speed up the procedure by a factor of almost 4.
The online Least Squares Density Difference (LSDD) detector is a non-parametric method for online drift detection. The LSDD between two distributions $p$ and $q$ on $\mathcal{X}$ is defined as $$LSDD(p, q) = \int_{\mathcal{X}} \big(p(x) - q(x)\big)^2 \, dx,$$ and it has an empirical estimate $\widehat{LSDD}(\{X_i\}_{i=1}^{N}, \{Y_i\}_{i=t}^{t+W})$ that can be updated at low cost as the test window is updated to $\{Y_i\}_{i=t+1}^{t+1+W}$. The detector is motivated by, but is a modified version of, the approach in the referenced work.
Online detectors assume the reference data is large and fixed and operate on single data points at a time (rather than batches). These data points are passed into the test-window and a two-sample test-statistic (in this case an estimate of LSDD) between the reference data and test-window is computed at each time-step. When the test-statistic exceeds a preconfigured threshold, drift is detected. Configuration of the thresholds requires specification of the expected run-time (ERT) which specifies how many time-steps that the detector, on average, should run for in the absence of drift before making a false detection. It also requires specification of a test-window size, with smaller windows allowing faster response to severe drift and larger windows allowing more power to detect slight drift.
For high-dimensional data, we typically want to reduce the dimensionality before passing it to the detector. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs (BBSDs) as out-of-the-box preprocessing methods and note that PCA can also be easily implemented using scikit-learn
. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift.
Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detect drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the notebook.
Arguments:
x_ref
: Data used as reference distribution.
ert
: The expected run-time in the absence of drift, starting from t=0.
window_size
: The size of the sliding test-window used to compute the test-statistic. Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift.
Keyword arguments:
backend
: Backend used for the LSDD implementation and configuration.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics.
sigma
: Optionally set the bandwidth of the Gaussian kernel used in estimating the LSDD. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If sigma
is not specified, the 'median heuristic' is adopted whereby sigma
is set as the median pairwise distance between reference samples.
n_bootstraps
: The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT.
n_kernel_centers
: The number of reference samples to use as centers in the Gaussian kernel model used to estimate LSDD. Defaults to 2*window_size.
lambda_rd_max
: The maximum relative difference between two estimates of LSDD that the regularization parameter lambda is allowed to cause. Defaults to 0.2 as in the paper.
verbose
: Whether or not to print progress during configuration.
input_shape
: Shape of input data.
data_type
: Optionally specify the data type (tabular, image or time-series). Added to metadata.
Additional PyTorch keyword arguments:
device
: Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu' or 'cpu'. Only relevant for 'pytorch' backend.
Initialized drift detector example:
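A minimal sketch with the TensorFlow backend is shown below. The class name (LSDDDriftOnline) and keyword arguments follow recent Alibi Detect releases; x_ref and the ert, window_size and n_bootstraps values are illustrative.

```python
from alibi_detect.cd import LSDDDriftOnline

cd = LSDDDriftOnline(
    x_ref,
    ert=150,
    window_size=20,
    backend='tensorflow',
    n_bootstraps=2500
)
```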
The same detector in PyTorch:
We can also easily add preprocessing functions for both frameworks. The following example uses a randomly initialized image encoder in PyTorch:
The same functionality is supported in TensorFlow and the main difference is that you would import from alibi_detect.cd.tensorflow import preprocess_drift
. Other preprocessing steps such as the output of hidden layers of a model or extracted text embeddings using transformer models can be used in a similar way in both frameworks. TensorFlow example for the hidden layer output:
Again the same functionality is supported in TensorFlow but with from alibi_detect.cd.tensorflow import preprocess_drift
and from alibi_detect.models.tensorflow import TransformerEmbedding
imports.
We detect data drift by sequentially calling predict
on single instances x_t
(no batch dimension) as they each arrive. We can return the test-statistic and the threshold by setting return_test_stat
to True.
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the test-window (of the most recent window_size
observations) has drifted from the reference data and 0 otherwise.
time
: The number of observations that have so far been passed to the detector as test instances.
ert
: The expected run-time the detector was configured to run at in the absence of drift.
test_stat
: LSDD metric between the reference data and the test_window if return_test_stat
equals True.
threshold
: The value the test-statistic is required to exceed for drift to be detected if return_test_stat
equals True.
The detector's state may be saved with the save_state
method:
The previously saved state may then be loaded via the load_state
method:
The context-aware maximum mean discrepancy drift detector (Cobb and Van Looveren, 2022) is a kernel-based method for detecting drift in a manner that can take relevant context into account. A normal drift detector detects when the distributions underlying two sets of samples $\{x^0_i\}_{i=1}^{n_0}$ and $\{x^1_i\}_{i=1}^{n_1}$ differ. A context-aware drift detector only detects differences that cannot be attributed to a corresponding difference between sets of associated context variables $\{c^0_i\}_{i=1}^{n_0}$ and $\{c^1_i\}_{i=1}^{n_1}$.
Context-aware drift detectors afford practitioners the flexibility to specify their desired context variable. It could be a transformation of the data, such as a subset of features, or an unrelated indexing quantity, such as the time or weather. Everything that the practitioner wishes to allow to change between the reference window and test window should be captured within the context variable.
On a technical level, the method operates in a manner similar to the MMD detector. However, instead of using an estimate of the squared difference between kernel mean embeddings of $X_{\text{ref}}$ and $X_{\text{test}}$ as the test statistic, we now use an estimate of the expected squared difference between the kernel conditional mean embeddings of $X_{\text{ref}}|C$ and $X_{\text{test}}|C$. As well as the kernel defined on the space of data $X$ required to define the test statistic, estimating the statistic additionally requires a kernel defined on the space of the context variable $C$. For any given realisation of the test statistic an associated p-value is then computed using a conditional permutation test.
The detector is designed for cases where the training data contains a rich variety of contexts and individual test windows may cover a much more limited subset. It is assumed that the test contexts remain within the support of those observed in the reference set.
Arguments:
x_ref
: Data used as reference distribution.
c_ref
: Context for the reference distribution.
Keyword arguments:
backend
: Both TensorFlow and PyTorch implementations of the context-aware MMD detector as well as various preprocessing steps are available. Specify the backend (tensorflow or pytorch). Defaults to tensorflow.
p_val
: p-value used for significance of the permutation test.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data x_ref
at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
update_ref
: Reference data can optionally be updated to the last N instances seen by the detector. The parameter should be passed as a dictionary {'last': N}.
preprocess_fn
: Function to preprocess the data (x_ref
and x
) before computing the data drift metrics. Typically a dimensionality reduction technique. NOTE: Preprocessing is not applied to the context data.
x_kernel
: Kernel defined on the data x_*
. Defaults to a Gaussian RBF kernel (from alibi_detect.utils.pytorch import GaussianRBF
or from alibi_detect.utils.tensorflow import GaussianRBF
dependent on the backend used).
c_kernel
: Kernel defined on the context c_*
. Defaults to a Gaussian RBF kernel (from alibi_detect.utils.pytorch import GaussianRBF
or from alibi_detect.utils.tensorflow import GaussianRBF
dependent on the backend used).
n_permutations
: Number of permutations used in the conditional permutation test.
prop_c_held
: Proportion of contexts held out to condition on.
n_folds
: Number of cross-validation folds used when tuning the regularisation parameters.
batch_size
: If not None
, then compute batches of MMDs at a time rather than all at once which could lead to memory issues.
input_shape
: Optionally pass the shape of the input data.
data_type
: can specify data type added to the metadata. E.g. 'tabular' or 'image'.
verbose
: Whether or not to print progress during configuration.
Additional PyTorch keyword arguments:
device
: cuda or gpu to use the GPU and cpu for the CPU. If the device is not specified, the detector will try to leverage the GPU if possible and otherwise fall back on CPU.
Initialized drift detector example with the PyTorch backend:
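A minimal sketch, assuming x_ref and c_ref are arrays with matching first dimensions and that the class is named ContextMMDDrift as in recent Alibi Detect releases:

```python
from alibi_detect.cd import ContextMMDDrift

cd = ContextMMDDrift(x_ref, c_ref, backend='pytorch', p_val=.05)
```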
The same detector in TensorFlow:
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains the p-value if return_p_val
equals True.
threshold
: p-value threshold if return_p_val
equals True.
distance
: conditional MMD^2 metric between the reference data and the new batch if return_distance
equals True.
distance_threshold
: conditional MMD^2 metric value from the permutation test which corresponds to the p-value threshold.
coupling_xx
: coupling matrix $W_\text{ref,ref}$ for the reference data.
coupling_yy
: coupling matrix $W_\text{test,test}$ for the test data.
coupling_xy
: coupling matrix $W_\text{ref,test}$ between the reference and test data.
At any point, the state may be reset to t=0
with the reset_state
method. When saving the detector with save_detector
, the state will be saved, unless t=0
(see ).
Check out the example for more details.
Alibi Detect also includes custom text preprocessing steps in both TensorFlow and PyTorch based on Huggingface's package:
For the RandomForestClassifier
we can avoid retraining n_folds
classifiers by using the out-of-bag predictions. In a RandomForestClassifier
each tree is trained on a separate dataset obtained by sampling with replacement from the original training set, a method known as bagging. On average, only about 63% of the unique samples from the original dataset are used to train each tree. Thus, for each tree, we can obtain predictions for the remaining out-of-bag samples (i.e., the other ~37%). By accumulating the out-of-bag predictions across all the trees we can eventually obtain a prediction for each sample in the original dataset. Note that we used the word 'eventually' because if the number of trees is too small, covering the entire original dataset might be unlikely.
We detect data drift by simply calling predict
on a batch of test or deployment instances x
and contexts c
. We can return the p-value and the threshold of the permutation test by setting return_p_val
to True and the context-aware maximum mean discrepancy metric and threshold by setting return_distance
to True. We can also set return_coupling
to True which additionally returns the coupling matrices $W_\text{ref,test}$, $W_\text{ref,ref}$ and $W_\text{test,test}$. As illustrated in the examples (, ) this can provide deep insights into where the reference and test distributions are similar and where they differ.
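For example, where x and c denote a test batch and its associated contexts:

```python
preds = cd.predict(x, c, return_p_val=True, return_distance=True, return_coupling=True)

print('Drift?', preds['data']['is_drift'])
print('p-value:', preds['data']['p_val'])
W_ref_ref = preds['data']['coupling_xx']    # reference-reference coupling
W_test_test = preds['data']['coupling_yy']  # test-test coupling
W_ref_test = preds['data']['coupling_xy']   # reference-test coupling
```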
In this notebook we show how to detect drift on ECG data given a specific context using the context-aware MMD detector (Cobb and Van Looveren, 2022). Consider the following simple example: we have a heartbeat monitoring system which is trained on a wide variety of heartbeats sampled from people of all ages across a variety of activities (e.g. rest or running). Then we deploy the system to monitor individual people during certain activities. The distribution of the heartbeats monitored during deployment will then be drifting against the reference data which resembles the full training distribution, simply because only individual people in a specific setting are being tracked. However, this does not mean that the system is not working and requires re-training. We are instead interested in flagging drift given the relevant context such as the person's characteristics (e.g. age or medical history) and the activity. Traditional drift detectors cannot flexibly deal with this setting since they rely on the i.i.d. assumption when sampling the reference and test sets. The context-aware detector however allows us to pass this context to the detector and flag drift appropriately. More generally, the context-aware drift detector detects changes in the data distribution which cannot be attributed to a permissible change in the context variable. On top of that, the detector allows you to understand which subpopulations are present in both the reference and test data which provides deeper insights into the distribution underlying the test data.
Useful context (or conditioning) variables for the context-aware drift detector include but are not limited to:
Domain or application specific contexts such as the time of day or the activity (e.g. running or resting).
Conditioning on the relative prevalences of known subpopulations, such as the frequency of different types of heartbeats. It is important to note that while the relative frequency of each subpopulation (e.g. the different heartbeat types) might change, the distribution underlying each individual subpopulation (e.g. each specific type of heartbeat) cannot change.
Conditioning on model predictions. Assume we trained a classifier which detects arrhythmia, then we can provide the classifier model predictions as context and understand if, given the model prediction, the data comes from the same underlying distribution as the reference data or not.
Conditioning on model uncertainties which would allow increases in model uncertainty due to drift into familiar regions of high aleatoric uncertainty (often fine) to be distinguished from that into unfamiliar regions of high epistemic uncertainty (often problematic).
The following settings will be showcased throughout the notebook:
A change in the prevalences of subpopulations (i.e. different types of heartbeats as determined by an unsupervised clustering model or an ECG classifier) which are also present in the reference data is observed. Contrary to traditional drift detection approaches, the context-aware detector does not flag drift as this change in frequency of various heartbeats is permissible given the context provided.
A change in the distribution underlying one or more subpopulations takes place. While we allow changes in the prevalences of the subpopulations accounted for by the context variable, we do not allow changes of the subpopulations themselves. If for instance the ECGs are corrupted by noise on the sensor measurements, we want to flag drift.
We also show how to condition the detector on different context variables such as the ECG classifier model predictions, cluster membership by an unsupervised clustering algorithm and timestamps.
Under setting 1. we want our detector to be well-calibrated (a controlled False Positive Rate (FPR) and more generally a p-value which is uniformly distributed between 0 and 1) while under setting 2. we want our detector to be powerful and flag drift. Lastly, we show how the detector can help you to understand the connection between the reference and test data distributions better.
The dataset contains 5,000 ECGs, originally obtained from Physionet from the BIDMC Congestive Heart Failure Database, record chf07. The data has been pre-processed in 2 steps: first each heartbeat is extracted, and then each beat is made equal length via interpolation. The data is labeled and contains 5 classes. The first class $N$, which contains almost 60% of the observations, is seen as normal while the others are supraventricular ectopic beats ($S$), ventricular ectopic beats ($V$), fusion beats ($F$) and unknown beats ($Q$).
The notebook requires the torch
and statsmodels
packages to be installed, which can be done via pip
:
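For example, in a notebook cell:

```python
!pip install torch statsmodels
```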
Before we start let's fix the random seeds for reproducibility:
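A typical seed-fixing cell might look like the following (the seed value is arbitrary):

```python
import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
```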
First we load the data, show the distribution across the ECG classes and visualise some ECGs from each class.
We can see that most heartbeats can be classified as normal, followed by the unknown class. We will now sample 500 heartbeats to train a simple ECG classifier. Importantly, we leave out the $F$ and $V$ classes which are used to detect drift. First we define a helper function to sample data.
We use a prop_train fraction of all samples to train the classifier and then remove instances from the $F$ and $V$ classes. The rest of the data is used by our drift detectors.
Now we define and train our classifier on the training set.
Let's evaluate our classifier on both the training and drift portions of the datasets.
We start with an example where no drift occurs and the reference and test data are both sampled randomly from all classes present in the reference data (classes 0, 1 and 3). Under this scenario, we expect no drift to be detected by either a normal MMD detector or by the context-aware MMD detector.
Before we can start using the context-aware drift detector, we first need to define our context variable. In our experiments we allow the relative prevalences of subpopulations (i.e. the relative frequency of different types of heartbeats also present in the reference data) to vary while the distributions underlying each of the subpopulations remain unchanged. To achieve this we condition on the prediction probabilities of the classifier we trained earlier to distinguish the different types of ECGs. We can do this because the prediction probabilities can account for the frequency of occurrence of each of the heartbeat types (be it imperfectly given our classifier makes the occasional mistake).
The figure below shows Q-Q (Quantile-Quantile) plots of a random sample from the uniform distribution U[0,1] against the p-values obtained from the vanilla and context-aware MMD detectors, and illustrates how well both detectors are calibrated. A perfectly calibrated detector should have a Q-Q plot which closely follows the diagonal. Only the middle plot in the grid shows the detector's p-values. The other plots correspond to n_runs p-values actually sampled from U[0,1] to contextualise how well the central plot follows the diagonal given the limited number of samples.
As expected we can see that both the normal MMD and the context-aware MMD detectors are well-calibrated.
We now focus our attention on a more realistic problem where the relative frequency of one or more subpopulations (i.e. types of heartbeats) is changing while the underlying subpopulation distribution stays the same. This would be the expected setting when we monitor the heartbeat of a specific person (e.g. only normal heartbeats) and we don't want to flag drift.
While the usual MMD detector only returns very low p-values (mostly 0), the context-aware MMD detector remains calibrated.
In the following example we change the distribution of one or more of the underlying subpopulations (i.e. the different types of heartbeats). Notice that now we do want to flag drift since our context variable, which permits changes in relative subpopulation prevalences, can no longer explain the change in distribution.
We will again sample from the normal heartbeats, but now we will add random noise to a fraction of the extracted heartbeats to change the distribution. This could be the result of an error with some of the sensors. The perturbation is illustrated below:
As we can see from the Q-Q plots and the power of the detector, the changes in the subpopulation are easily detected:
We now use the cluster membership probabilities of a Gaussian mixture model which is fit on the training instances as context variables instead of the model predictions. We will test both the calibration when the frequency of the subpopulations (the cluster memberships) changes as well as the power when the $F$ and $V$ heartbeats are included.
The test statistic $\hat{t}$ of the context-aware MMD detector can be formulated as follows: $\hat{t} = \langle K_{0,0}, W_{0,0} \rangle + \langle K_{1,1}, W_{1,1} \rangle -2\langle K_{0,1}, W_{0,1}\rangle$ where $0$ refers to the reference data, $1$ to the test data, and $W_{.,.}$ and $K_{.,.}$ are the weight and kernel matrices, respectively. The weight matrices $W_{.,.}$ allow us to focus on the distribution's subpopulations of interest. Reference instances which have similar contexts as the test data will have higher values for their entries in $W_{0,1}$ than instances with dissimilar contexts. We can therefore interpret $W_{0,1}$ as the coupling matrix between instances in the reference and the test sets. This allows us to investigate which subpopulations from the reference set are present and which are missing in the test data. If we also have a good understanding of the model performance on various subpopulations of the reference data, we could even try and use this coupling matrix to roughly proxy model performance on the unlabeled test instances. Note that in this case we would require labels from the reference data and make sure the reference instances come from the validation, not the training set.
In the following example we only pick 1 type of heartbeat (the normal one) to be present in the test set while 3 types are present in the reference set. We can then investigate via the coupling matrix whether the test statistic $\hat{t}$ focused on the right types of heartbeats in the reference data via $W_{0,1}$. More concretely, we can sum over the columns (the test instances) of $W_{0,1}$ and check which reference instances obtained the highest weights.
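A sketch of this computation, assuming x_test and c_test hold the normal-heartbeat test instances and their associated contexts:

```python
import numpy as np

preds = cd.predict(x_test, c_test, return_coupling=True)

# sum the reference-test coupling matrix over its columns (the test instances)
# to obtain a weight per reference instance
w_ref = preds['data']['coupling_xy'].sum(axis=1)
```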
As expected no drift was detected since the test set only contains normal heartbeats. We now sort the weights of w_ref
in descending order. We expect the top 400 entries to be fairly high and consistent since these represent the normal heartbeats in the reference set. Afterwards, the weight attribution to the other instances in the reference set should be low. The plot below confirms that this is indeed what happens.
The dataset consists of nicely extracted and aligned ECGs of 140 data points for each observation. However, in reality it is likely that we will continuously or periodically observe instances which are not nicely aligned. We could instead assign a timestamp to the data (e.g. starting from a peak) and use time as the context variable. This is illustrated in the example below.
First we create a new dataset where we split each instance in slices of non-overlapping ECG segments. Each of the segments will have an associated timestamp as context variable. Then we can check the calibration under no change (besides the time-varying behaviour which is accounted for) as well as the power for ECG segments where we add incorrect time stamps to some of the segments.
The drift detector applies feature-wise two-sample Kolmogorov-Smirnov (K-S) tests. For multivariate data, the obtained p-values for each feature are aggregated either via the Bonferroni or the False Discovery Rate (FDR) correction. The Bonferroni correction is more conservative and controls for the probability of at least one false positive. The FDR correction on the other hand allows for an expected fraction of false positives to occur.
For high-dimensional data, we typically want to reduce the dimensionality before computing the feature-wise univariate K-S tests and aggregating those via the chosen correction method. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs (BBSDs) as out-of-the box preprocessing methods and note that PCA can also be easily implemented using scikit-learn
. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. The adversarial detector which is part of the library can also be transformed into a drift detector picking up drift that reduces the performance of the classification model. We can therefore combine different preprocessing techniques to figure out if there is drift which hurts the model performance, and whether this drift can be classified as input drift or label shift.
The method works with both the PyTorch and TensorFlow frameworks for the optional preprocessing step. Alibi Detect does not, however, install PyTorch for you. Check the PyTorch docs for how to do this.
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes. We evaluate the drift detector on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2019). The instances in CIFAR-10-C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the classification model performance. We also check for drift against the original test set with class imbalances.
Original CIFAR-10 data:
For CIFAR-10-C, we can select from the following corruption types at 5 severity levels:
Let's pick a subset of the corruptions at corruption level 5. Each corruption type consists of perturbations on all of the original test set images.
We split the original test set in a reference dataset and a dataset which should not be rejected under the H0 of the K-S test. We also split the corrupted data by corruption type:
We can visualise the same instance for each corruption type:
We can also verify that the performance of a classification model on CIFAR-10 drops significantly on this perturbed dataset:
Given the drop in performance, it is important that we detect the harmful data drift!
First we try a drift detector using the TensorFlow framework for the preprocessing step. We are trying to detect data drift on high-dimensional (32x32x3) data using feature-wise univariate tests. It therefore makes sense to apply dimensionality reduction first. Some dimensionality reduction methods also used in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift are readily available: a randomly initialized encoder (UAE or Untrained AutoEncoder in the paper), BBSDs (black-box shift detection using the classifier's softmax outputs) and PCA.
Random encoder
First we try the randomly initialized encoder:
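A sketch of this preprocessing step and detector initialisation is given below; the encoder architecture and encoding_dim are illustrative, and the class and function names follow recent Alibi Detect releases.

```python
import tensorflow as tf
from functools import partial
from alibi_detect.cd import KSDrift
from alibi_detect.cd.tensorflow import UAE, preprocess_drift

encoding_dim = 32  # dimension of the feature space the K-S tests are applied to

# randomly initialised encoder mapping the 32x32x3 images onto encoding_dim features
encoder_net = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
    tf.keras.layers.Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(encoding_dim)
])
preprocess_fn = partial(preprocess_drift, model=UAE(encoder_net=encoder_net), batch_size=512)

cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn)
```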
The p-value used by the detector for the multivariate data with encoding_dim features is equal to p_val / encoding_dim because of the Bonferroni correction.
Let's check whether the detector thinks drift occurred on the different test sets and time the prediction calls:
As expected, drift was only detected on the corrupted datasets. The feature-wise p-values for each univariate K-S test per (encoded) feature before multivariate correction show that most of them are well above the $0.05$ threshold for H0 and below for the corrupted datasets.
BBSDs
For BBSDs, we use the classifier's softmax outputs for black-box shift detection. This method is based on Detecting and Correcting for Label Shift with Black Box Predictors. The ResNet classifier is trained on data standardised by instance so we need to rescale the data.
Now we initialize the detector. Here we use the output of the softmax layer to detect the drift, but other hidden layers can be extracted as well by setting 'layer' to the index of the desired hidden layer in the model:
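A sketch, assuming clf is the trained ResNet classifier and HiddenOutput is available as in recent Alibi Detect releases:

```python
from functools import partial
from alibi_detect.cd import KSDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

# layer=-1 selects the softmax output; other hidden layers can be chosen via their index
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(clf, layer=-1), batch_size=128)
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn)
```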
Again we can see that the p-value used by the detector for the multivariate data with 10 features (number of CIFAR-10 classes) is equal to p_val / 10 because of the Bonferroni correction.
There is no drift on the original held out test set:
We can also check what happens when we introduce class imbalances between the reference data X_ref and the tested data X_imb. The reference data will use $75$% of the instances of the first 5 classes and only $25$% of the last 5. The data used for drift testing then uses respectively $25$% and $75$% of the test instances for the first and last 5 classes.
Update reference dataset for the detector and make predictions. Note that we store the preprocessed reference data since the preprocess_at_init
kwarg is by default True:
So far we have kept the reference data the same throughout the experiments. It is possible however that we want to test a new batch against the last N instances or against a batch of instances of fixed size where we give each instance we have seen up until now the same chance of being in the reference batch (reservoir sampling). The update_x_ref
argument allows you to change the reference data update rule. It is a Dict which takes as key the update rule ('last' for last N instances or 'reservoir_sampling') and as value the batch size N of the reference data. You can also save the detector after the prediction calls to save the updated reference data.
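For instance, to keep a reservoir of N instances (the value of N is illustrative):

```python
N = 7500
cd = KSDrift(
    X_ref,
    p_val=.05,
    preprocess_fn=preprocess_fn,
    update_x_ref={'reservoir_sampling': N}  # or {'last': N} to keep the last N instances
)
```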
The reference data is now updated with each predict
call. Say we start with our imbalanced reference set and make a prediction on the remaining test set data X_imb, then the drift detector will figure out data drift has occurred.
We can now see that the reference data consists of N instances, obtained through reservoir sampling.
We then draw a random sample from the training set and compare it with the updated reference data. This still highlights that there is data drift but will update the reference data again:
When we draw a new sample from the training set, it highlights that it is not drifting anymore against the reservoir in X_ref.
Instead of the Bonferroni correction for multivariate data, we can also use the less conservative False Discovery Rate (FDR) correction. See here or here for nice explanations. While the Bonferroni correction controls the probability of at least one false positive, the FDR correction controls for an expected amount of false positives. The p_val
argument at initialisation time can be interpreted as the acceptable q-value when the FDR correction is applied.
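For example, only the correction keyword needs to change:

```python
cd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn, correction='fdr')
```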
We can leverage the adversarial scores obtained from an adversarial autoencoder trained on normal data and transform it into a data drift detector. The score function of the adversarial autoencoder becomes the preprocessing function for the drift detector. The K-S test is then a simple univariate test on the adversarial scores. Importantly, an adversarial drift detector flags malicious data drift. We can fetch the pretrained adversarial detector from a Google Cloud Bucket or train one from scratch:
Initialise the drift detector:
Make drift predictions on the original test set and corrupted data:
While X_imb clearly exhibits input data drift due to the introduced class imbalances, it is not flagged by the adversarial drift detector since the performance of the classifier is not affected and the drift is not malicious. We can visualise this by plotting the adversarial scores together with the harmfulness of the data corruption as reflected by the drop in classifier accuracy:
We can therefore use the scores of the detector itself to quantify the harmfulness of the drift! We can generalise this to all the corruptions at each severity level in CIFAR-10-C:
We now compute mean scores and standard deviations per severity level and plot the results. The plot shows the mean adversarial scores (lhs) and ResNet-32 accuracies (rhs) for increasing data corruption severity levels. Level 0 corresponds to the original test set. Harmful scores are scores from instances which have been flipped from the correct to an incorrect prediction because of the corruption. Not harmful means that the prediction was unchanged after the corruption.
Model distillation is a technique that is used to transfer knowledge from a large network to a smaller network. Typically, it consists of training a second model with a simplified architecture on soft targets (the output distributions or the logits) obtained from the original model.
Here, we apply model distillation to obtain harmfulness scores, by comparing the output distributions of the original model with the output distributions of the distilled model, in order to detect adversarial data, malicious data drift or data corruption. We use the following definition of harmful and harmless data points:
Harmful data points are defined as inputs for which the model's predictions on the uncorrupted data are correct while the model's predictions on the corrupted data are wrong.
Harmless data points are defined as inputs for which the model's predictions on the uncorrupted data are correct and the model's predictions on the corrupted data remain correct.
Analogously to the adversarial AE detector, which is also part of the library, the model distillation detector picks up drift that reduces the performance of the classification model.
Moreover, in this example a drift detector that applies two-sample Kolmogorov-Smirnov (K-S) tests to the scores is employed. The p-values obtained are used to assess the harmfulness of the data.
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes. We evaluate the drift detector on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2019). The instances in CIFAR-10-C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the classification model performance.
Original CIFAR-10 data:
For CIFAR-10-C, we can select from the following corruption types at 5 severity levels:
Let's pick a subset of the corruptions at corruption level 5. Each corruption type consists of perturbations on all of the original test set images.
We split the corrupted data by corruption type:
We can visualise the same instance for each corruption type:
We can also verify that the performance of a classification model on CIFAR-10 drops significantly on this perturbed dataset:
Analogously to the adversarial AE detector, which uses an autoencoder to reproduce the output distribution of a classifier and produce adversarial scores, the model distillation detector achieves the same goal by using a simple classifier in place of the autoencoder. This approach is more flexible since it bypasses the instance's generation step, and it can be applied in a straightforward way to a variety of data sets such as text or time series.
We can use the adversarial scores produced by the Model Distillation detector in the context of drift detection. The score function of the detector becomes the preprocessing function for the drift detector. The K-S test is then a simple univariate test between the adversarial scores of the reference batch and the test data. Higher adversarial scores indicate more harmful drift. Importantly, a harmfulness detector flags malicious data drift. We can fetch the pretrained model distillation detector from a Google Cloud Bucket or train one from scratch:
Definition and training of the distilled model
Scores and p-values calculation
Here we initialize the K-S drift detector using the harmfulness scores as a preprocessing function. The KS test is performed on these scores.
Initialise the drift detector:
Calculate scores. We split the corrupted data into harmful and harmless data and visualize the harmfulness scores for various values of corruption severity.
Plot scores
We now plot the mean scores and standard deviations per severity level. The plot shows the mean harmfulness scores (lhs) and ResNet-32 accuracies (rhs) for increasing data corruption severity levels. Level 0 corresponds to the original test set. Harmful scores are scores from instances which have been flipped from the correct to an incorrect prediction because of the corruption. Not harmful means that a correct prediction was unchanged after the corruption.
Plot p-values for contaminated batches
In order to simulate a realistic scenario, we perform a K-S test on batches of instances which are increasingly contaminated with corrupted data. The following steps are implemented:
We randomly pick n_ref=1000
samples from the non-corrupted test set to be used as a reference set in the initialization of the K-S drift detector.
We sample batches of data of size batch_size=100
contaminated with an increasing number of harmful corrupted data and harmless corrupted data.
The K-S detector predicts whether drift occurs between the contaminated batches and the reference data and returns the p-values of the test.
We observe that contamination of the batches with harmful data reduces the p-values much faster than contamination with harmless data. In the latter case, the p-values remain above the detection threshold even when the batch is heavily contaminated.
We repeat the test for 100 randomly sampled batches and we plot the mean and the maximum p-values for each level of severity and contamination below. We can see from the plot that the detector is able to clearly detect a batch contaminated with harmful data compared to a batch contaminated with harmless data when the percentage of corrupted data reaches 20%-30%.
In this notebook we show how to detect drift on text data given a specific context using the context-aware MMD detector (Cobb and Van Looveren, 2022). Consider the following simple example: the upcoming elections result in an increase of political news articles compared to other topics such as sports or science. Given the context (the elections), it is however not surprising that we observe this uptick. Moreover, assume we have a machine learning model which is trained to classify news topics, and this model performs well on political articles. So given that we fully expect this uptick to occur given the context, and that our model performs fine on the political news articles, we do not want to flag this type of drift in the data. This setting corresponds more closely to many real-life settings than traditional drift detection where we make the assumption that both the reference and test data are i.i.d. samples from their underlying distributions.
In our news topics example, each different topic such as politics, sports or weather represents a subpopulation of the data. Our context-aware drift detector can then detect changes in the data distribution which cannot be attributed to a change in the relative prevalences of these subpopulations, which we deem permissible. As a cherry on the cake, the context-aware detector allows you to understand which subpopulations are present in both the reference and test data. This allows you to obtain deep insights into the distribution underlying the test data.
Useful context (or conditioning) variables for the context-aware drift detector include but are not limited to:
Domain or application specific contexts such as the time of day or the weather.
Conditioning on the relative prevalences of known subpopulations, such as the frequency of political articles. It is important to note that while the relative frequency of each subpopulation might change, the distribution underlying each subpopulation cannot change.
Conditioning on model predictions. Assume we trained a classifier which tries to figure out which news topic an article belongs to. Given our model predictions we then want to understand whether our test data follows the same underlying distribution as reference instances with similar model predictions. This conditioning would also be useful in case of trending news topics which cause the model prediction distribution to shift but not necessarily the distribution within each of the news topics.
Conditioning on model uncertainties which would allow increases in model uncertainty due to drift into familiar regions of high aleatoric uncertainty (often fine) to be distinguished from that into unfamiliar regions of high epistemic uncertainty (often problematic).
The following settings will be illustrated throughout the notebook:
A change in the prevalences of subpopulations (i.e. news topics) relative to their prevalences in the training data. Contrary to traditional drift detection approaches, the context-aware detector does not flag drift as this change in frequency of news topics is permissible given the context provided (e.g. more political news articles around elections).
A change in the underlying distribution of one or more subpopulations takes place. While we allow changes in the prevalence of the subpopulations accounted for by the context variable, we do not allow changes of the subpopulations themselves. Let's assume that a newspaper usually has a certain tone (e.g. more conservative) when it comes to politics. If this tone changes (to less conservative) around elections (increased frequency of political news articles), then we want to flag it as drift since the change cannot be attributed to the context given to the detector.
A change in the distribution as we observe a previously unseen news topic. A newspaper might for instance add a classified ads section, which was not present in the reference data.
Under setting 1. we want our detector to be well-calibrated (a controlled False Positive Rate (FPR) and more generally a p-value which is uniformly distributed between 0 and 1) while under settings 2. and 3. we want our detector to be powerful and flag the drift. Lastly, we show how the detector can help you to understand the connection between the reference and test data distributions better.
We use the 20 newsgroups dataset which contains about 18,000 newsgroup posts across 20 topics, including politics, science, sports and religion.
The notebook requires the umap-learn
, torch
, sentence-transformers
, statsmodels
, seaborn
and datasets
packages to be installed, which can be done via pip
:
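For example, in a notebook cell:

```python
!pip install umap-learn torch sentence-transformers statsmodels seaborn datasets
```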
Before we start let's fix the random seeds for reproducibility:
First we load the data, show which classes (news topics) are present and what an instance looks like.
Let's take a look at an instance from the dataset:
We embed the news posts using SentenceTransformers pre-trained embeddings and optionally add a dimensionality reduction step with UMAP. UMAP also allows us to leverage reference data labels.
We define respectively a generic clustering model using UMAP, a model to embed the text input using pre-trained SentenceTransformers embeddings, a text classifier and a utility function to place the data on the right device.
First we train a classifier on a small subset of the data. The aim of the classifier is to predict the news topic of each instance. Below we define a few simple training and evaluation functions.
We now split the data in 2 sets. The first set (x_train
) we will use to train our text classifier, and the second set (x_drift
) is held out to test our drift detector on.
Let's train our classifier. The classifier consists of a simple MLP head on top of a pre-trained SentenceTransformer model as the backbone. The SentenceTransformer remains frozen during training and only the MLP head is finetuned.
We start with an example where no drift occurs and the reference and test data are both sampled randomly from all news topics. Under this scenario, we expect no drift to be detected by either a normal MMD detector or by the context-aware MMD detector.
First we define some helper functions. The first one visualises the clustered text data while the second function samples disjoint reference and test sets with a specified number of instances per class (i.e. per news topic).
We first define the embedding model using the pre-trained SentenceTransformer embeddings and then embed both the reference and test sets.
By applying UMAP clustering on the SentenceTransformer embeddings, we can visually inspect the various news topic clusters. Note that we fit the clustering model on the held out data first, and then make predictions on the reference and test sets.
We can visually see that the reference and test set are made up of similar clusters of data, grouped by news topic. As a result, we would not expect drift to be flagged. If the data distribution did not change, we can expect the p-value distribution of our statistical test to be uniformly distributed between 0 and 1. So let's see if this assumption holds.
Importantly, first we need to define our context variable for the context-aware MMD detector. In our experiments we allow the relative prevalences of subpopulations to vary while the distributions underlying each of the subpopulations remain unchanged. To achieve this we condition on the prediction probabilities of the classifier we trained earlier to distinguish each of the 20 different news topics. We can do this because the prediction probabilities can account for the frequency of occurrence of each of the topics (be it imperfectly given our classifier makes the occasional mistake).
Before we set off our experiments, we embed all the instances in x_drift
and compute all contexts c_drift
so we don't have to call our transformer model every single pass in the for loop.
The figure below shows Q-Q (Quantile-Quantile) plots of a random sample from the uniform distribution U[0,1] against the p-values obtained from the vanilla and context-aware MMD detectors, and illustrates how well both detectors are calibrated. A perfectly calibrated detector should have a Q-Q plot which closely follows the diagonal. Only the middle plot in the grid shows the detector's p-values. The other plots correspond to n_runs p-values actually sampled from U[0,1] to contextualise how well the central plot follows the diagonal given the limited number of samples.
As expected we can see that both the normal MMD and the context-aware MMD detectors are well-calibrated.
We now focus our attention on a more realistic problem where the relative frequency of one or more subpopulations (i.e. news topics) is changing in a way which can be attributed to external events. Importantly, the distribution underlying each subpopulation (e.g. the distribution of hockey news itself) remains unchanged, only its frequency changes.
In our example we assume that the World Series and Stanley Cup coincide on the calendar leading to a spike in news articles on respectively baseball and hockey. Furthermore, there is not too much news on Mac or Windows since there are no new releases or products planned anytime soon.
While the context-aware detector remains well calibrated, the MMD detector consistently flags drift (low p-values). Note that this is the expected behaviour since the vanilla MMD detector cannot take any external context into account and correctly detects that the reference and test data do not follow the same underlying distribution.
We can also easily see this on the plot below where the p-values of the context-aware detector are uniformly distributed while the MMD detector's p-values are consistently close to 0. Note that we limited the y-axis range to make the plot easier to read.
In the following example we change the distribution of one or more of the underlying subpopulations. Notice that now we do want to flag drift since our context variable, which permits changes in relative subpopulation prevalences, can no longer explain the change in distribution.
Imagine our news topic classification model is not as granular as before and instead of the 20 categories only predicts the 6 super classes, organised by subject matter:
Computers: comp.graphics; comp.os.ms-windows.misc; comp.sys.ibm.pc.hardware; comp.sys.mac.hardware; comp.windows.x
Recreation: rec.autos; rec.motorcycles; rec.sport.baseball; rec.sport.hockey
Science: sci.crypt; sci.electronics; sci.med; sci.space
Miscellaneous: misc.forsale
Politics: talk.politics.misc; talk.politics.guns; talk.politics.mideast
Religion: talk.religion.misc; talk.atheism; soc.religion.christian
What if baseball and hockey become less popular and the distribution underlying the Recreation class changes? We will want to detect this as the change in distributions of the subpopulations (the 6 super classes) cannot be explained anymore by the context variable.
In order to reuse our pretrained classifier for the super classes, we add the following helper function to map the predictions on the super classes and return one-hot encoded predictions over the 6 super classes. Note that our context variable now changes from a probability distribution over the 20 news topics to a one-hot encoded representation over the 6 super classes.
We can see that the context-aware detector has the power to detect changes in the distributions of the subpopulations.
Next we illustrate the effectiveness of the context-aware detector to detect new topics which are not present in the reference data. Obviously we also want to flag drift in this case. As an example we introduce movie reviews in the test data.
So far we have conditioned the context-aware detector on the model predictions. There are however many other useful contexts possible. One such example would be to condition on the predictions of an unsupervised clustering algorithm. To facilitate this, we first apply kernel PCA on the embedding vectors, followed by a Gaussian mixture model which clusters the data into 6 classes (same as the super classes). We will test both the calibration under the null hypothesis (no distribution change) as well as the power when a new topic (movie reviews) is injected.
Next we change the number of instances in each cluster between the reference and test sets. Note that we do not alter the underlying distribution of each of the clusters, just the frequency.
Now we run the experiment and show the context-aware detector's calibration when changing the cluster frequencies. We also show how the usual MMD detector will consistently flag drift. Furthermore, we inject instances from the movie reviews dataset and illustrate that the context-aware detector remains powerful when the underlying cluster distribution changes (by including a previously unseen topic).
The test statistic $\hat{t}$ of the context-aware MMD detector can be formulated as follows: $\hat{t} = \langle K_{0,0}, W_{0,0} \rangle + \langle K_{1,1}, W_{1,1} \rangle -2\langle K_{0,1}, W_{0,1}\rangle$ where $0$ refers to the reference data, $1$ to the test data, and $W_{.,.}$ and $K_{.,.}$ are the weight and kernel matrices, respectively. The weight matrices $W_{.,.}$ allow us to focus on the distribution's subpopulations of interest. Reference instances which have similar contexts as the test data will have higher values for their entries in $W_{0,1}$ than instances with dissimilar contexts. We can therefore interpret $W_{0,1}$ as the coupling matrix between instances in the reference and the test sets. This allows us to investigate which subpopulations from the reference set are present and which are missing in the test data. If we also have a good understanding of the model performance on various subpopulations of the reference data, we could even try and use this coupling matrix to roughly proxy model performance on the unlabeled test instances. Note that in this case we would require labels from the reference data and make sure the reference instances come from the validation, not the training set.
In the following example we only pick 2 classes to be present in the test set while all 20 are present in the reference set. We can then investigate via the coupling matrix whether the test statistic $\hat{t}$ focused on the right classes in the reference data via $W_{0,1}$. More concretely, we can sum over the columns (the test instances) of $W_{0,1}$ and check which reference instances obtained the highest weights.
A number of convenient and powerful kernel-based drift detectors such as the MMD detector (Gretton et al., 2012) or the learned kernel MMD detector (Liu et al., 2020) do not scale favourably with increasing dataset size $n$, leading to quadratic complexity $\mathcal{O}(n^2)$ for naive implementations. As a result, we can quickly run into memory issues by having to store the $[N_\text{ref} + N_\text{test}, N_\text{ref} + N_\text{test}]$ kernel matrix (on the GPU if applicable) used for an efficient implementation of the permutation test. Note that $N_\text{ref}$ is the reference data size and $N_\text{test}$ the test data size.
We can however drastically speed up and scale up kernel-based drift detectors to large dataset sizes by working with symbolic kernel matrices instead and leverage the KeOps library to do so. For the user of $\texttt{Alibi Detect}$ the only thing that changes is the specification of the detector's backend, e.g. for the MMD detector:
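A minimal sketch of what this might look like is shown below; the data arrays are random placeholders and only the `backend` string differs between the two detectors.

```python
import numpy as np
from alibi_detect.cd import MMDDrift

x_ref = np.random.randn(10_000, 50).astype(np.float32)   # placeholder reference data
x_test = np.random.randn(10_000, 50).astype(np.float32)  # placeholder test data

cd_torch = MMDDrift(x_ref, backend='pytorch', p_val=.05)  # vanilla PyTorch backend
cd_keops = MMDDrift(x_ref, backend='keops', p_val=.05)    # KeOps backend

preds = cd_keops.predict(x_test)
print(preds['data']['is_drift'])
```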
In this notebook we will run a few simple benchmarks to illustrate the speed and memory improvements from using KeOps over vanilla PyTorch on the GPU (1x RTX 2080 Ti) for both the standard MMD and learned kernel MMD detectors.
We randomly sample points from the standard normal distribution and run the detectors with PyTorch and KeOps backends for the following settings:
$N_\text{ref}, N_\text{test} = [2, 5, 10, 20, 50, 100]$ (batch sizes in '000s)
$D = [2, 10, 50]$
Where $D$ denotes the number of features.
The notebook requires PyTorch and KeOps to be installed. Once PyTorch is installed, KeOps can be installed via pip:
Before we start let’s fix the random seeds for reproducibility:
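A minimal seed-fixing sketch (the seed value itself is arbitrary):

```python
import random
import numpy as np
import torch

def fix_seed(seed: int = 2022) -> None:
    # fix the Python, NumPy and PyTorch random number generators
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

fix_seed()
```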
First we define some utility functions to run the experiments:
As detailed earlier, we will compare the PyTorch and KeOps implementations of the MMD and learned kernel MMD detectors for a variety of reference and test data batch sizes as well as different feature dimensions. Note that for the PyTorch implementation, the portion of the kernel matrix for the reference data itself can already be computed at initialisation of the detector, and this computation is not included when we record the detector's prediction time. The KeOps detector cannot amortise this computation in the same way since it works with lazily evaluated symbolic matrices. Since use cases where $N_\text{ref} >> N_\text{test}$ are quite common, and this is exactly the setting where such amortisation matters most, we will also test for this specific setting.
1. $N_\text{ref} = N_\text{test}$
Note that for KeOps we could further increase the number of instances in the reference and test sets (e.g. to 500,000) without running into memory issues.
Below we visualise the runtimes of the different experiments. We can make the following observations:
The relative speed improvements of KeOps over vanilla PyTorch increase with increasing batch size.
Due to the explicit kernel computation and storage, the PyTorch detector runs out of memory after a little over 10,000 instances in each of the reference and test sets, while KeOps keeps scaling up without any issues.
The relative speed improvements decline with growing feature dimension. Note however that we would not recommend using an (untrained) MMD detector on very high-dimensional data in the first place.
The plots show both the absolute and relative (PyTorch / KeOps) mean prediction times for the MMD drift detector for different feature dimensions $[2, 10, 50]$.
The difference between KeOps and PyTorch is even more striking when we only look at $[2, 10]$ features:
2. $N_\text{ref} >> N_\text{test}$
Now we check whether the speed improvements still hold when $N_\text{ref} >> N_\text{test}$ ($N_\text{ref} / N_\text{test} = 10$) and a large part of the kernel can already be computed at initialisation time of the PyTorch (but not the KeOps) detector.
The below plots illustrate that KeOps indeed still provides large speed ups over PyTorch. The x-axis shows the reference batch size $N_\text{ref}$. Note that $N_\text{ref} / N_\text{test} = 10$.
We conduct similar experiments as for the MMD detector for $N_\text{ref} = N_\text{test}$ and n_features=50
. We use a deep learned kernel with an MLP followed by Gaussian RBF kernels and project the input features on a d_out=2
-dimensional space. Since the learned kernel detector computes the kernel matrix in a batch-wise manner, we can also scale up the number of instances for the PyTorch backend without running out-of-memory.
We again plot the absolute and relative (PyTorch / KeOps) mean prediction times for the learned kernel MMD drift detector for different feature dimensions:
As illustrated in the experiments, KeOps allows you to drastically speed up and scale up drift detection to larger datasets without running into memory issues. The speed benefit of KeOps over the PyTorch (or TensorFlow) MMD detectors decreases as the number of features increases. Note though that it is not advised to apply the (untrained) MMD detector to very high-dimensional data in the first place, and that we can apply dimensionality reduction via the deep kernel for the learned kernel MMD detector.
We illustrate drift detection on molecular graphs using a variety of detectors:
Kolmogorov-Smirnov detector on the output of the binary classification Graph Isomorphism Network to detect prediction distribution shift.
Model Uncertainty detector which leverages a measure of uncertainty on the model predictions (in this case MC dropout) to detect drift which could lead to degradation of model performance.
Maximum Mean Discrepancy detector on graph embeddings to flag drift in the input data.
Learned Kernel detector which flags drift in the input data using a (deep) learned kernel. The method trains a (deep) kernel on part of the data to maximise an estimate of the test power. Once the kernel is learned a permutation test is performed in the usual way on the value of the Maximum Mean Discrepancy (MMD) on the held out test set.
Kolmogorov-Smirnov detector to see if drift occurred on graph level statistics such as the number of nodes, edges and the average clustering coefficient.
We will train a classification model and detect drift on the ogbg-molhiv dataset. The dataset contains molecular graphs with both atom features (atomic number-1, chirality, node degree, formal charge, number of H bonds, number of radical electrons, hybridization, aromatic?, in a ring?) and bond level properties (bond type (e.g. single or double), bond stereo code, conjugated?). The goal is to predict whether a molecule inhibits HIV virus replication or not, so the task is binary classification.
The dataset is split using the scaffold splitting procedure. This means that the molecules are split based on their 2D structural framework. Structurally different molecules are grouped into different subsets (train, validation, test) which could mean that there is drift between the splits.
The dataset is retrieved from the Open Graph Benchmark dataset collection.
Besides alibi-detect
, this example notebook also uses PyTorch Geometric and OGB, both of which can be installed via pip/conda.
We set some samples apart to serve as the reference data for our drift detectors. Note that the allowed format of the reference data is very flexible and can be np.ndarray
or List[Any]
:
Let's plot some graph summary statistics such as the distribution of the node degrees, number of nodes and edges as well as the clustering coefficients:
While the average number of nodes and edges are similar across the splits, the histograms show that the tails are slightly heavier for the training graphs.
We borrow code from the PyTorch Geometric GNN explanation example to visualize molecules from the graph objects.
As our classifier we use a variation of a Graph Isomorphism Network incorporating edge (bond) as well as node (atom) features.
Train and evaluate the model. Evaluation is done using ROC-AUC. If you already have a trained model saved, you can directly load it by specifying the load_path
:
We will first detect drift on the prediction distribution of the GIN model. Since the binary classification model returns continuous numerical univariate predictions, we use the Kolmogorov-Smirnov drift detector. First we define some utility functions:
Because we pass lists with torch_geometric.data.Data
objects to the detector, we need to preprocess the data using the batch_fn
into torch_geometric.data.Batch
objects which can be fed to the model. Then we detect drift on the model prediction distribution.
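A sketch of how this could be wired together is shown below; `model`, `batch_fn`, `x_ref` and `x_test` stand in for the trained GIN, the collate function and the data splits defined in this notebook.

```python
from functools import partial
import numpy as np
from alibi_detect.cd import KSDrift

def model_preds(x: list, model, batch_fn) -> np.ndarray:
    # collate a list of torch_geometric.data.Data objects into a Batch
    # and return the model's 1D predictions as a numpy array
    batch = batch_fn(x)
    return model(batch).detach().cpu().numpy()

cd = KSDrift(
    x_ref,  # list of reference graphs
    p_val=.05,
    preprocess_fn=partial(model_preds, model=model, batch_fn=batch_fn)
)
preds = cd.predict(x_test)
```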
Since the dataset is heavily imbalanced, we will test the detectors on a sample which oversamples from the minority class (molecules which inhibit HIV virus replication):
As expected, prediction distribution shift is detected for the imbalanced sample but not for the random test sample with similar label distribution as the reference data.
The model uncertainty drift detector can pick up when the model predictions drift into areas of changed uncertainty compared to the reference data. This can be a good proxy for drift which results in model performance degradation. The uncertainty is estimated via a Monte Carlo estimate (MC dropout). We use the RegressorUncertaintyDrift detector since our binary classification model returns 1D logits.
Although we didn't pick up drift in the GIN model prediction distribution for the test sample, we can see that the model is less certain about the predictions on the test set, illustrated by the lower ROC-AUC.
We can also detect drift directly on the input data by encoding the data with a randomly initialized GNN to extract graph embeddings. Then we apply our detector of choice, e.g. the MMD detector, on the extracted embeddings.
Instead of applying the MMD detector on the pooling output of a randomly initialized GNN encoder, we use the Learned Kernel detector which trains the encoder and kernel on part of the data to maximise an estimate of the detector's test power. Once the kernel is learned a permutation test is performed in the usual way on the value of the MMD on the held out test set.
Since the molecular scaffolds are different across the train, validation and test sets, we expect that this type of data shift is picked up in the input data (technically not the input but the graph embedding).
We could also compute graph-level statistics such as the number of nodes, edges and clustering coefficient and detect drift on those statistics using the Kolmogorov-Smirnov test with multivariate correction (e.g. Bonferroni). First we define a preprocessing step to extract the summary statistics from the graphs:
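A possible sketch of such a preprocessing function, assuming the graphs are `torch_geometric.data.Data` objects (the exact statistics computed in the notebook may differ slightly):

```python
import numpy as np
import networkx as nx
from torch_geometric.utils import to_networkx

def graph_stats(graphs: list) -> np.ndarray:
    # map each graph to [number of nodes, number of edges, average clustering coefficient]
    stats = []
    for g in graphs:
        g_nx = to_networkx(g, to_undirected=True)
        stats.append([g.num_nodes, g.num_edges, nx.average_clustering(g_nx)])
    return np.array(stats, dtype=np.float32)
```

The resulting `(n, 3)` array can then be passed to a `KSDrift` detector with `correction='bonferroni'`.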
The 3 returned p-values correspond respectively to the number of nodes, the number of edges and the clustering coefficient. We already saw in the EDA that the distributions of the node, edge and clustering coefficients look similar across the train, validation and test sets except for the tails. This is confirmed by running the drift detector on the graph statistics: it does not seem to pick up on the differences in molecular scaffolds between the datasets, unless we heavily oversample from the minority class, in which case the number of nodes and edges, but not the clustering coefficient, differ significantly.
Under the hood drift detectors leverage a function (also known as a test-statistic) that is expected to take a large value if drift has occurred and a low value if not. The power of the detector is partly determined by how well the function satisfies this property. However, specifying such a function in advance can be very difficult. In this example notebook we consider two ways in which a portion of the available data may be used to learn such a function before then applying it on the held out portion of the data to test for drift.
The classifier-based drift detector simply tries to correctly distinguish instances from the reference data vs. the test set. The classifier is trained to output the probability that a given instance belongs to the test set. If the probabilities it assigns to unseen test instances are significantly higher (as determined by a Kolmogorov-Smirnov test) than those it assigns to unseen reference instances, then the test set must differ from the reference set and drift is flagged. To leverage all the available reference and test data, stratified cross-validation can be applied and the out-of-fold predictions are used for the significance test. Note that a new classifier is trained for each test set or even each fold within the test set.
The method works with both the PyTorch and TensorFlow frameworks. Alibi Detect does however not install PyTorch for you. Check the PyTorch docs how to do this.
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes. We evaluate the drift detector on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2019). The instances in CIFAR-10-C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the classification model performance. We also check for drift against the original test set with class imbalances.
Original CIFAR-10 data:
For CIFAR-10-C, we can select from the following corruption types at 5 severity levels:
Let's pick a subset of the corruptions at corruption level 5. Each corruption type consists of perturbations on all of the original test set images.
We split the original test set into a reference dataset and a dataset which should not be flagged as drift. We also split the corrupted data by corruption type:
We can visualise the same instance for each corruption type:
Single fold
We use a simple classification model and try to distinguish between the reference data and the corrupted test sets. The detector defaults to binarize=False
which means a Kolmogorov-Smirnov test will be used to test for significant disparity between continuous model predictions (e.g. probabilities or logits). Initially we'll test at a significance level of $p=0.05$, use $75$% of the shuffled reference and test data for training and evaluate the detector on the remaining $25$%. We only train for 1 epoch.
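A sketch of such a detector is given below; the CNN architecture is purely illustrative and `x_ref` stands in for the reference split defined above.

```python
import tensorflow as tf
from alibi_detect.cd import ClassifierDrift

# small CNN used to discriminate reference from (possibly drifted) test instances
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 4, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(16, 4, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2)
])

cd = ClassifierDrift(
    x_ref, model, backend='tensorflow',
    p_val=.05, train_size=.75, epochs=1, batch_size=128
)
```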
If needed, the detector can be saved and loaded with save_detector
and load_detector
:
Let's check whether the detector thinks drift occurred on the different test sets and time the prediction calls:
As expected, drift was only detected on the corrupted datasets and the classifier could easily distinguish the corrupted from the reference data.
Use all the available data via cross-validation
So far we've only used $25$% of the data to detect the drift since $75$% is used for training purposes. At the cost of additional training time we can however leverage all the data via stratified cross-validation. We just need to set the number of folds and keep everything else the same. So for each test set n_folds
models are trained, and the out-of-fold predictions combined for the significance test:
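In code this amounts to swapping the `train_size` argument for `n_folds`, e.g. (sketch, reusing the classifier from above):

```python
cd = ClassifierDrift(
    x_ref, model, backend='tensorflow',
    p_val=.05, n_folds=5, epochs=1, batch_size=128
)
```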
An alternative to training a classifier to output high probabilities for instances from the test window and low probabilities for instances from the reference window is to learn a kernel that outputs high similarities between instances from the same window and low similarities between instances from different windows. The kernel may then be used within an MMD-test for drift. Liu et al. (2020) propose this learned approach and note that it is in fact a generalisation of the above classifier-based method. However, in this case we can train the kernel to directly optimise an estimate of the detector's power, which can result in superior performance.
Any differentiable PyTorch or TensorFlow module that takes as input two instances and outputs a scalar (representing similarity) can be used as the kernel for this drift detector. However, in order to ensure that MMD=0 implies no-drift the kernel should satisfy a characteristic property. This can be guaranteed by defining a kernel of the form $$k(x,y) = (1-\epsilon)\,k_a(\Phi(x), \Phi(y)) + \epsilon\,k_b(x,y),$$ where $\Phi$ is a learnable projection, $k_a$ and $k_b$ are simple characteristic kernels (such as a Gaussian RBF), and $\epsilon>0$ is a small constant. By letting $\Phi$ be very flexible we can learn powerful kernels in this manner.
This can be implemented as shown below. We use PyTorch instead of TensorFlow this time for the sake of variety. Because we are dealing with images we give our projection $\Phi$ a convolutional architecture.
We may then specify a DeepKernel
in the following manner. By default GaussianRBF
kernels are used for $k_a$ and $k_b$ and here we specify $\epsilon=0.01$, but we could alternatively set eps='trainable'
.
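A minimal sketch, assuming `proj` is the convolutional projection $\Phi$ defined above:

```python
from alibi_detect.utils.pytorch import DeepKernel

kernel = DeepKernel(proj, eps=0.01)
# kernel = DeepKernel(proj, eps='trainable')  # alternatively, learn epsilon as well
```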
Since our PyTorch encoder expects the images in a (batch size, channels, height, width) format, we transpose the data. Note that this step could also be passed to the drift detector via the preprocess_fn
kwarg:
We then pass the kernel to the LearnedKernelDrift
detector. By default $75$% of the data is used to train the kernel and the MMD-test is performed on the other $25$%.
Again, the detector can be saved and loaded:
Finally, let's make some predictions with the detector:
The Maximum Mean Discrepancy (MMD) detector is a kernel-based method for multivariate 2 sample testing. The MMD is a distance-based measure between 2 distributions p and q based on the mean embeddings $\mu_{p}$ and $\mu_{q}$ in a reproducing kernel Hilbert space $F$:

$$MMD(F, p, q) = \| \mu_{p} - \mu_{q} \|^2_{F}$$
We can compute unbiased estimates of $MMD^2$ from the samples of the 2 distributions after applying the kernel trick. We use by default a radial basis function kernel, but users are free to pass their own kernel of preference to the detector. We obtain a $p$-value via a permutation test on the values of $MMD^2$. This method is also described in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift.
The method is implemented in both the PyTorch and TensorFlow frameworks with support for CPU and GPU. Various preprocessing steps are also supported out-of-the box in Alibi Detect for both frameworks and illustrated throughout the notebook. Alibi Detect does however not install PyTorch for you. Check the PyTorch docs how to do this.
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes. We evaluate the drift detector on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2019). The instances in CIFAR-10-C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the classification model performance. We also check for drift against the original test set with class imbalances.
Original CIFAR-10 data:
For CIFAR-10-C, we can select from the following corruption types at 5 severity levels:
Let's pick a subset of the corruptions at corruption level 5. Each corruption type consists of perturbations on all of the original test set images.
We split the original test set into a reference dataset and a dataset which should not be rejected under the H0 of the MMD test. We also split the corrupted data by corruption type:
We can visualise the same instance for each corruption type:
We can also verify that the performance of a classification model on CIFAR-10 drops significantly on this perturbed dataset:
Given the drop in performance, it is important that we detect the harmful data drift!
First we try a drift detector using the TensorFlow framework for both the preprocessing and the MMD computation steps.
We are trying to detect data drift on high-dimensional (32x32x3) data using a multivariate MMD permutation test. It therefore makes sense to apply dimensionality reduction first. Some dimensionality reduction methods also used in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift are readily available: a randomly initialized encoder (UAE or Untrained AutoEncoder in the paper), BBSDs (black-box shift detection using the classifier's softmax outputs) and PCA (using scikit-learn
).
Random encoder
First we try the randomly initialized encoder:
Let's check whether the detector thinks drift occurred on the different test sets and time the prediction calls:
As expected, drift was only detected on the corrupted datasets.
BBSDs
For BBSDs, we use the classifier's softmax outputs for black-box shift detection. This method is based on Detecting and Correcting for Label Shift with Black Box Predictors. The ResNet classifier is trained on data standardised by instance so we need to rescale the data.
Initialisation of the drift detector. Here we use the output of the softmax layer to detect the drift, but other hidden layers can be extracted as well by setting 'layer' to the index of the desired hidden layer in the model:
Again drift is only flagged on the perturbed data.
We can do the same thing using the PyTorch backend. We illustrate this using the randomly initialized encoder as preprocessing step:
Since our PyTorch encoder expects the images in a (batch size, channels, height, width) format, we transpose the data:
The drift detector will attempt to use the GPU if available and otherwise falls back on the CPU. We can also explicitly specify the device. Let's compare the GPU speed up with the CPU implementation:
Notice the over 30x acceleration provided by the GPU.
Similar to the TensorFlow implementation, PyTorch can also use the hidden layer output from a pretrained model for the preprocessing step via:
Model-uncertainty drift detectors aim to directly detect drift that's likely to affect the performance of a model of interest. The approach is to test for change in the number of instances falling into regions of the input space on which the model is uncertain in its predictions. For each instance in the reference set the detector obtains the model's prediction and some associated notion of uncertainty. For example, for a classifier this may be the entropy of the predicted label probabilities, or for a regressor with dropout layers, dropout Monte Carlo can be used to provide a notion of uncertainty. The same is done for the test set and if significant differences in uncertainty are detected (via a Kolmogorov-Smirnov test) then drift is flagged.
It is important that the detector uses a reference set that is disjoint from the model's training set (on which the model's confidence may be higher).
For models that require batch evaluation both PyTorch and TensorFlow frameworks are supported. Alibi Detect does however not install PyTorch for you. Check the PyTorch docs how to do this.
We start by demonstrating how to leverage model uncertainty to detect malicious drift when the model of interest is a classifier.
Dataset
CIFAR10 consists of 60,000 32 by 32 RGB images equally distributed over 10 classes. We evaluate the drift detector on the CIFAR-10-C dataset (Hendrycks & Dietterich, 2019). The instances in CIFAR-10-C have been corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the classification model performance. We also check for drift against the original test set with class imbalances.
Original CIFAR-10 data:
For CIFAR-10-C, we can select from the following corruption types at 5 severity levels:
Let's pick a subset of the corruptions at corruption level 5. Each corruption type consists of perturbations on all of the original test set images.
We split the original test set into a reference dataset and a dataset which should not be rejected under the no-change null H0. We also split the corrupted data by corruption type:
We can visualise the same instance for each corruption type:
We can also verify that the performance of a classification model on CIFAR-10 drops significantly on this perturbed dataset:
Given the drop in performance, it is important that we detect the harmful data drift!
Detect drift
Unlike many other approaches we needn't specify a dimension-reducing preprocessing step as the detector operates directly on the data as it is input to the model of interest. In fact, the two-stage projection input -> prediction -> uncertainty can be thought of as the projection from the input space onto the real line, ready to perform the test.
We simply pass the model to the detector and inform it that the predictions should be interpreted as 'probs' rather than 'logits' (i.e. a softmax has already been applied). By default uncertainty_type='entropy'
is used as the notion of uncertainty for classifier predictions, however uncertainty_type='margin'
can be specified to deem the classifier's predictions uncertain if they fall within a margin (e.g. in [0.45, 0.55] for binary classifier probabilities), similar to Sethi and Kantardzic (2017).
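A sketch of the detector initialisation, where `clf` stands in for the trained CIFAR-10 classifier and `x_ref` for the reference split:

```python
from alibi_detect.cd import ClassifierUncertaintyDrift

cd = ClassifierUncertaintyDrift(
    x_ref,
    model=clf,                   # trained classifier returning softmax probabilities
    backend='tensorflow',
    p_val=.05,
    preds_type='probs',          # softmax already applied to the model outputs
    uncertainty_type='entropy'
)
```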
Let's check whether the detector thinks drift occurred on the different test sets and time the prediction calls:
Note here how drift is only detected for the corrupted datasets on which the model's performance is significantly degraded. For the 'brightness' corruption, for which the model maintains 89% classification accuracy, the change in model uncertainty is not deemed significant (p-value 0.11, above the 0.05 threshold). For the other corruptions, which significantly hamper model performance, the malicious drift is detected.
We now demonstrate how to leverage model uncertainty to detect malicious drift when the model of interest is a regressor. This is a less general approach as regressors often make point-predictions with no associated notion of uncertainty. However, if the model makes its predictions by ensembling the predictions of sub-models then we can consider the variation in the sub-model predictions as a notion of uncertainty. RegressorUncertaintyDrift
facilitates models that output a vector of such sub-model predictions (uncertainty_type='ensemble'
) or deep learning models that include dropout layers and can therefore (as noted by Gal and Ghahramani 2016) be considered as an ensemble (uncertainty_type='mc_dropout'
, the default option).
Dataset
The Wine Quality Data Set consists of 1599 and 4898 samples of red and white wine respectively. Each sample has an associated quality (as determined by experts) and 11 numeric features indicating its acidity, density, pH etc. We consider the regression problem of trying to predict the quality of a red wine sample given these features. We will then consider whether the model remains suitable for predicting the quality of white wine samples or whether the associated change in the underlying distribution should be considered as malicious drift.
First we load in the data.
We can see that the data for both red and white wine samples take the same format.
We shuffle and normalise the data such that each feature takes a value in [0,1], as does the quality we seek to predict.
We split the red wine data into a set on which to train the model, a reference set with which to instantiate the detector, and a set on which the detector should not flag drift. We then instantiate a DataLoader to pass the training data to a PyTorch model in batches.
Regression model
We now define the regression model that we'll train to predict the quality from the features. The exact details aren't important other than the presence of at least one dropout layer. We then train the model for 20 epochs to optimise the mean square error on the training data.
We now evaluate the trained model on both unseen samples of red wine and white wine. We see that, unsurprisingly, the model is better able to predict the quality of unseen red wine samples.
Detect drift
We now look at whether a regressor-uncertainty detector would have picked up on this malicious drift. We instantiate the detector and obtain drift predictions on both the held-out red-wine samples and the white-wine samples. We specify uncertainty_type='mc_dropout'
in this case, but alternatively we could have trained an ensemble model that for each instance outputs a vector of multiple independent predictions and specified uncertainty_type='ensemble'
.
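A sketch of the detector initialisation; `reg`, `x_ref`, `x_red_heldout` and `x_white` stand in for the dropout regression model and the data splits defined in this example.

```python
from alibi_detect.cd import RegressorUncertaintyDrift

cd = RegressorUncertaintyDrift(
    x_ref,                         # red wine reference samples
    model=reg,                     # regression model containing dropout layers
    backend='pytorch',
    p_val=.05,
    uncertainty_type='mc_dropout',
    n_evals=100                    # number of stochastic forward passes per instance
)

preds_red = cd.predict(x_red_heldout)   # no drift expected
preds_white = cd.predict(x_white)       # malicious drift expected
```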
In the context of deployed models, data (model queries) usually arrive sequentially and we wish to detect drift as soon as possible after it occurs. One approach is to perform a test for drift every $W$ time-steps, using the $W$ samples that have arrived since the last test. Such a strategy could be implemented using any of the offline detectors implemented in alibi-detect
, but being both sensitive to slight drift and responsive to severe drift is difficult. If the window size $W$ is too small then slight drift will be undetectable. If it is too large then the delay between test-points hampers responsiveness to severe drift.
An alternative strategy is to perform a test each time data arrives. However the usual offline methods are not applicable because the process for computing p-values is too expensive and doesn't account for correlated test outcomes when using overlapping windows of test data.
Online detectors instead work by computing the test-statistic once using the first $W$ data points and then updating the test-statistic sequentially at low cost. When no drift has occurred the test-statistic fluctuates around its expected value, and once drift occurs the test-statistic starts to drift upwards. When it exceeds some preconfigured threshold value, drift is detected.
Unlike offline detectors which require the specification of a threshold p-value (a false positive rate), the online detectors in alibi-detect
require the specification of an expected run-time (ERT) (an inverted FPR). This is the number of time-steps that we insist our detectors, on average, should run for in the absence of drift before making a false detection. Usually we would like the ERT to be large, however this results in insensitive detectors which are slow to respond when drift does occur. There is a tradeoff between the expected run time and the expected detection delay.
To target the desired ERT, thresholds are configured during an initial configuration phase via simulation. This configuration process is only suitable when the amount of reference data (most likely the training data of the model of interest) is relatively large (ideally around an order of magnitude larger than the desired ERT). Configuration can be expensive (less so with a GPU) but allows the detector to operate at low cost during deployment.
This notebook demonstrates online drift detection using two different two-sample distance metrics for the test-statistic, the maximum mean discrepancy (MMD) and the least-squares density difference (LSDD), both of which can be updated sequentially at low cost.
The online detectors are implemented in both the PyTorch and TensorFlow frameworks with support for CPU and GPU. Various preprocessing steps are also supported out-of-the box in Alibi Detect for both frameworks and an example will be given in this notebook. Alibi Detect does however not install PyTorch for you. Check the PyTorch docs how to do this.
The Wine Quality Data Set consists of 4898 and 1599 samples of white and red wine respectively. Each sample has an associated quality (as determined by experts) and 11 numeric features indicating its acidity, density, pH etc. We consider the regression problem of trying to predict the quality of white wine samples given these features. We will then consider whether the model remains suitable for predicting the quality of red wine samples or whether the associated change in the underlying distribution should be considered as drift.
The Maximum Mean Discrepancy (MMD) is a distance-based measure between 2 distributions p and q based on the mean embeddings $\mu_{p}$ and $\mu_{q}$ in a reproducing kernel Hilbert space $F$:

$$MMD(F, p, q) = \| \mu_{p} - \mu_{q} \|^2_{F}$$
Given reference samples $\{X_i\}_{i=1}^{N}$ and test samples $\{Y_i\}_{i=t}^{t+W}$ we may compute an unbiased estimate $\widehat{MMD}^2(F, \{X_i\}_{i=1}^{N}, \{Y_i\}_{i=t}^{t+W})$ of the squared MMD between the two underlying distributions. Depending on the size of the reference and test windows, $N$ and $W$ respectively, this can be relatively expensive. However, once computed it is possible to update the statistic to estimate the squared MMD between the distributions underlying $\{X_i\}_{i=1}^{N}$ and $\{Y_i\}_{i=t+1}^{t+1+W}$ at a very low cost, making it suitable for online drift detection.
By default we use a radial basis function kernel, but users are free to pass their own kernel of preference to the detector.
First we load in the data:
We can see that the data for both red and white wine samples take the same format.
We shuffle and normalise the data such that each feature takes a value in [0,1], as does the quality we seek to predict. We assume that our model was trained on white wine samples, which therefore form the reference distribution, and that red wine samples can be considered to be drawn from a drifted distribution.
Although it may not be necessary on this relatively low-dimensional data for which individual features are semantically meaningful, we demonstrate how principal component analysis (PCA) can be performed as a preprocessing stage to project raw data onto a lower dimensional representation which more concisely captures the factors of variation in the data. So as not to bias the detector, it is necessary to fit the projection using a split of the data which isn't then passed as reference data. We additionally split off some white wine samples to act as undrifted data during deployment.
Now we define a PCA object to be used as a preprocessing function to project the 11-D data onto a 2-D representation. We learn the first 2 principal components on the training split of the reference data.
Hopefully the learned preprocessing step has learned a projection such that in the lower dimensional space the two samples are distinguishable.
Now we can define our online drift detector. We specify an expected run-time (in the absence of drift) of 50 time-steps, and a window size of 10 time-steps. Upon initialising the detector, thresholds will be computed using 2500 bootstrap samples. These values of ert
, window_size
and n_bootstraps
are lower than a typical use-case in order to demonstrate the average behaviour of the detector over a large number of runs in a reasonable time.
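A sketch of the detector initialisation; the backend choice and the name `pca_fn` (the fitted 2-D PCA projection) are assumptions, with `x_ref` the white wine reference split.

```python
from alibi_detect.cd import MMDDriftOnline

ert, window_size, n_bootstraps = 50, 10, 2500

cd = MMDDriftOnline(
    x_ref,
    ert,
    window_size,
    backend='tensorflow',
    preprocess_fn=pca_fn,        # project onto the 2 principal components
    n_bootstraps=n_bootstraps
)
```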
We now define a function which will simulate a single run and return the run-time. Note how the detector acts on single instances at a time, the run-time is considered as the time elapsed after the test-window has been filled, and that the detector is stateful and must be reset between detections.
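A sketch of such a function is shown below; note that, depending on the alibi-detect version, the reset method is called `reset()` or `reset_state()`.

```python
import numpy as np

def time_run(cd, stream: np.ndarray, window_size: int) -> int:
    # feed instances to the detector one at a time; the run-time is counted
    # from the point at which the test window has been filled
    cd.reset()  # the detector is stateful and must be reset between runs
    for t, x in enumerate(stream):
        pred = cd.predict(x)
        if pred['data']['is_drift']:
            return t + 1 - window_size
    return len(stream) - window_size  # no detection within the stream
```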
Now we look at the distribution of run-times when operating on the held-out data from the reference distribution of white wine samples. We report the average run-time, however note that the targeted run-time distribution, a Geometric distribution with mean ert
, has very high variance, so the empirical average may not be that close to ert
over a relatively small number of runs. However, by inspecting the linearity of a Q-Q plot we can see that the detector accurately targets the desired Geometric distribution.
If we run the detector in an identical manner but on data from the drifted distribution of red wine samples the average run-time is much lower.
Here we address the same problem but using the least-squares density difference (LSDD) as the two-sample distance in a manner similar to Bu et al. (2017). The LSDD between two distributions $p$ and $q$ on $\mathcal{X}$ is defined as $$LSDD(p, q) = \int_{\mathcal{X}} \big(p(x) - q(x)\big)^2 \, dx,$$ and it also has an empirical estimate $\widehat{LSDD}(\{X_i\}_{i=1}^N, \{Y_i\}_{i=t}^{t+W})$ that can be updated at low cost as the test window is updated to $\{Y_i\}_{i=t+1}^{t+1+W}$.
We additionally show that TensorFlow can also be used as the backend and that sometimes it is not necessary to perform preprocessing, making definition of the drift detector simpler. Moreover, in the absence of a learned preprocessing stage we may use all of the reference data available.
And now we define the LSDD-based online drift detector, again with an ert
of 50 and window_size
of 10.
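A sketch of the initialisation, assuming `x_ref` holds all the available white wine reference samples:

```python
from alibi_detect.cd import LSDDDriftOnline

cd_lsdd = LSDDDriftOnline(
    x_ref,
    ert=50,
    window_size=10,
    backend='tensorflow',
    n_bootstraps=2500
)
```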
We run this new detector on the held out reference data and again see that in the absence of drift the distribution of run-times follows a Geometric distribution with mean ert
.
And when drift has occurred the detector is very fast to respond.
Under the hood drift detectors leverage a function of the data that is expected to be large when drift has occurred and small when it hasn't. In the Learned drift detectors on CIFAR-10 example notebook we note that we can learn a function satisfying this property by training a classifier to distinguish reference and test samples. However, we now additionally note that if the classifier is specified in a certain way, then when drift is detected we can inspect the weights of the classifier to shed light on exactly which features of the data were used to distinguish reference from test samples and therefore caused drift to be detected.
The SpotTheDiffDrift
detector is designed to make this process straightforward. Like the ClassifierDrift
detector, it uses a portion of the available data to train a classifier to discriminate between reference and test instances. Letting $\hat{p}_T(x)$ represent the probability assigned by the classifier that the instance $x$ is from the test set rather than the reference set, the difference here is that we use a classifier of the form $$\mathrm{logit}\big(\hat{p}_T(x)\big) = b_0 + \sum_{i=1}^{J} b_i k(x, w_i),$$ where $k(\cdot,\cdot)$ is a kernel specifying a notion of similarity between instances, $w_i$ are learnable test locations and $b_i$ are learnable regression coefficients.
The idea here is that if the detector flags drift and $b_i >0$ then we know that it reached its decision by considering how similar each instance is to the instance $w_i$, with those being more similar being more likely to be test instances than reference instances. Alternatively if $b_i < 0$ then instances more similar to $w_i$ were deemed more likely to be reference instances.
In order to provide less noisy and therefore more interpretable results, we define each test location as $w_i = \bar{x} + d_i$, where $\bar{x}$ is the mean reference instance. We may then interpret $d_i$ as the additive transformation deemed to make the average reference instance more ($b_i>0$) or less ($b_i<0$) similar to a test instance. Defining the test locations in this way allows us to instead learn the difference $d_i$ and apply regularisation such that non-zero values must be justified by improved classification performance. This allows us to more clearly identify which features any detected drift should be attributed to.
This approach to interpretable drift detection is inspired by the work of Jitkrittum et al. (2016), however several major adaptations have been made.
The method works with both the PyTorch and TensorFlow frameworks. Alibi Detect does however not install PyTorch for you. Check the PyTorch docs how to do this.
We start with an image example in order to provide a visual illustration of how the detector works. For this purpose we use the MNIST dataset of 28 by 28 grayscale handwritten digits. To represent the common problem of new classes emerging during the deployment phase we consider a reference set of ~9,000 instances containing only the digits 1-9 and a test set of 10,000 instances containing all of the digits 0-9. We would like drift to be detected in this scenario because a model trained on the reference instances will not know how to process instances from the new class.
This notebook requires the torchvision
package which can be installed via pip
:
When instantiating the detector we should specify the number of "diffs" we would like it to use to discriminate reference from test instances. Here there is a trade off. Using n_diffs=1
is the simplest to interpret and seems to work well in practice. Using more diffs may result in stronger detection power but the diffs may be harder to interpret due to interactions and conditional dependencies.
The strength of the regularisation (l1_reg
) to apply to the diffs should also be specified. Stronger regularisation results in sparser diffs as the classifier is encouraged to discriminate using fewer features. This may make the diff more interpretable but may again come at the cost of detection power.
We should also specify how the classifier should be trained with standard arguments such as learning_rate
, epochs
and batch_size
. By default a Gaussian RBF is used for the kernel but alternatives can be specified via the kernel
kwarg. Additionally the classifier can be initialised with any desired diffs by passing them with the initial_diffs
kwarg -- by default they are initialised with Gaussian noise with standard deviation equal to that observed in the reference data.
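Putting these arguments together, a sketch of the detector and of how the learned diffs might be retrieved (the exact training argument values are illustrative):

```python
from alibi_detect.cd import SpotTheDiffDrift

cd = SpotTheDiffDrift(
    x_ref,
    backend='tensorflow',
    p_val=.05,
    n_diffs=1,            # a single, easily interpretable diff
    l1_reg=1e-3,          # strength of the sparsity-inducing regularisation
    epochs=3,
    batch_size=64,
    learning_rate=1e-2
)

preds = cd.predict(x_test)
diff = preds['data']['diffs'][0]
coeff = preds['data']['diff_coeffs'][0]
```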
When we then call the detector to detect drift on the deployment/test set it trains the classifier (thereby learning the diffs) and the usual is_drift
and p_val
properties can be inspected in the usual way:
As expected, the drift was detected. However we may now additionally look at the learned diffs and corresponding coefficients to determine how the detector reached this decision.
The detector has identified the zero that was missing from the reference data -- it realised that test instances were on average more (coefficient > 0) similar to an instance with below average middle pixel values and above average zero-region pixel values than reference instances were. It used this information to determine that drift had occurred.
To provide an example on tabular data we consider the Wine Quality Data Set consisting of 4898 and 1599 samples of white and red wine respectively. Each sample has an associated quality (as determined by experts) and 11 numeric features indicating its acidity, density, pH etc. To represent the problem of a model being trained on one distribution and deployed on a subtly different one, we take as a reference set the samples of white wine and consider the red wine samples to form a 'corrupted' deployment set.
We can see that the data for both red and white wine samples take the same format.
We extract the features and shuffle and normalise them such that they take values in [0,1].
We then split off half of the reference set to act as an unseen sample from the same underlying distribution for which drift should not be detected.
We instantiate our detector in the same way as we do above, but this time using the PyTorch backend for the sake of variety. We then get the predictions of the detector on both the undrifted and corrupted test sets.
As expected, drift is detected on the red wine samples but not on the held out white wine samples from the same distribution. Now we can inspect the returned diff to determine how the detector reached its decision.
We see that the detector was able to discriminate the corrupted (red) wine samples from the reference (white) samples by noting that on average reference samples (coeff < 0) typically contain more sulfur dioxide and residual sugars but have less sulphates and chlorides and have lower pH and volatile and fixed acidity.
We illustrate drift detection on text data using the following detectors:
Maximum Mean Discrepancy (MMD) detector using pre-trained transformers to flag drift in the embedding space.
Classifier drift detector to detect drift in the input space.
The Amazon dataset contains product reviews with a star rating. We will test whether drift can be detected if the ratings start to drift. For more information, check the WILDS documentation page.
Besides alibi-detect
, this example notebook also uses the Amazon dataset through the WILDS package. WILDS is a curated collection of benchmark datasets that represent distribution shifts faced in the wild and can be installed via pip
:
Throughout the notebook we use detectors with both PyTorch
and TensorFlow
backends.
We first load the dataset and create reference data, data which should not be rejected under the null of the test (H0) and data which should exhibit drift (H1). The drift is introduced later by specifying a specific star rating for the test instances.
The following cell will download the Amazon dataset (if DOWNLOAD=True). The download size is ~7GB and size on disk is ~7GB.
First we embed instances using a pretrained transformer model and detect data drift using the MMD detector on the embeddings.
Helper functions:
Define the transformer embedding preprocessing step:
Define a function which will for a specified number of iterations (n_sample
):
Configure the MMDDrift
detector with a new reference data sample
Detect drift on the H0 and H1 splits
Now we will use the ClassifierDrift detector which uses a binary classification model to try and distinguish the reference from the test (H0 or H1) data. Drift is then detected on the difference between the prediction distributions on out-of-fold reference vs. test instances using a Kolmogorov-Smirnov 2 sample test on the prediction probabilities or via a binomial test on the binarized predictions. We use a pretrained transformer model but freeze its weights and only train the head which consists of 2 dense layers with a leaky ReLU non-linearity:
We can do the same using TensorFlow instead of PyTorch as backend. We first define the classifier again and then simply run the detector:
We detect drift on text data using both the Maximum Mean Discrepancy and Kolmogorov-Smirnov (K-S) detectors. In this example notebook we will focus on detecting covariate shift $\Delta p(x)$ as detecting predicted label distribution drift does not differ from other modalities (check K-S and MMD drift on CIFAR-10).
It becomes however a little bit more involved when we want to pick up input data drift $\Delta p(x)$. When we deal with tabular or image data, we can either directly apply the two-sample hypothesis test on the input or do the test after a preprocessing step with for instance a randomly initialized encoder as proposed in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift (they call it an Untrained AutoEncoder or UAE). It is not as straightforward when dealing with text, whether in string or tokenized format, since neither directly represents the semantics of the input.
As a result, we extract (contextual) embeddings for the text and detect drift on those. This procedure has a significant impact on the type of drift we detect. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract.
The library contains functionality to leverage pre-trained embeddings from HuggingFace's transformers package but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in this notebook.
Note
As is done in this example, it is recommended to pass text data to detectors as a list of strings (List[str]
). This allows for seamless integration with HuggingFace's transformers library.
One exception to the above is when custom embeddings are used. Here, it is important to ensure that the data is passed to the custom embedding model in a compatible format. In the final example, a preprocess_batch_fn
is defined in order to convert list
's to the np.ndarray
's expected by the custom TensorFlow embedding.
The method works with both the PyTorch and TensorFlow frameworks for the statistical tests and preprocessing steps. Alibi Detect does however not install PyTorch for you. Check the PyTorch docs how to do this.
Binary sentiment classification dataset containing $25,000$ movie reviews for training and $25,000$ for testing. Install the nlp
library to fetch the dataset:
Let's take a look at respectively a negative and positive review:
We split the original test set into a reference dataset and a dataset which should not be rejected under the H0 of the statistical test. We also create imbalanced datasets and inject selected words into the reference set.
Reference, H0 and imbalanced data:
Inject words in reference data:
First we need to specify the type of embedding we want to extract from the BERT model. We can extract embeddings from the ...
pooler_output: Last layer hidden-state of the first token of the sequence (classification token; CLS) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training. Note: this output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.
last_hidden_state: Sequence of hidden states at the output of the last layer of the model, averaged over the tokens.
hidden_state: Hidden states of the model at the output of each layer, averaged over the tokens.
hidden_state_cls: See hidden_state but use the CLS token output.
If hidden_state or hidden_state_cls is used as the embedding type, you also need to pass the layer numbers from which to extract the embedding. As an example we extract embeddings from the last 8 hidden states.
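A sketch of how the embedding might be defined; the model name and tokenizer settings are illustrative and `X_ref` stands in for the reference reviews:

```python
from transformers import AutoTokenizer
from alibi_detect.models.tensorflow import TransformerEmbedding

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)

# average the hidden states of the last 8 layers over the tokens
layers = [-i for i in range(1, 9)]
embedding = TransformerEmbedding(model_name, embedding_type='hidden_state', layers=layers)

tokens = tokenizer(list(X_ref[:5]), padding='max_length', max_length=100,
                   truncation=True, return_tensors='tf')
x_emb = embedding(tokens)
print(x_emb.shape)  # (5, 768)
```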
Let's check what an embedding looks like:
So the BERT model's embedding space used by the drift detector consists of a $768$-dimensional vector for each instance. We will therefore first apply a dimensionality reduction step with an Untrained AutoEncoder (UAE) before conducting the statistical hypothesis test. We use the embedding model as the input for the UAE which then projects the embedding on a lower dimensional space.
Let's test this again:
We proceed to initialize the drift detector. From here on the detector works the same as for other modalities such as images. Please check the images example or the K-S detector documentation for more information about each of the possible parameters.
Let’s first check if drift occurs on a similar sample from the training set as the reference data.
Detect drift on imbalanced and perturbed datasets:
Again check the images example or the MMD detector documentation for more information about each of the possible parameters.
H0:
Imbalanced data:
Perturbed data:
We can run the same detector with PyTorch backend for both the preprocessing step and MMD implementation:
H0:
Imbalanced data:
Perturbed data:
So far we used pre-trained embeddings from a BERT model. We can however also use embeddings from a model trained from scratch. First we define and train a simple classification model consisting of an embedding and LSTM layer in TensorFlow.
Load and tokenize data:
Let's check out an instance:
Define and train a simple model:
Extract the embedding layer from the trained model and combine with UAE preprocessing step:
Again, create reference, H0 and perturbed datasets. Also test against the Reuters news topic classification dataset.
H0:
Perturbed data:
The detector is not as sensitive as the Transformer-based K-S drift detector. The embeddings trained from scratch were only trained on a small dataset with a simple model and a cross-entropy loss function for 2 epochs. The pre-trained BERT model on the other hand captures the semantics of the data better.
Sample from the Reuters dataset:
This notebook demonstrates a typical workflow for applying online drift detectors to streams of image data. For those unfamiliar with how the online drift detectors operate in alibi_detect
we recommend first checking out the more introductory example Online Drift Detection on the Wine Quality Dataset where online drift detection is performed for the wine quality dataset.
This notebook requires the wilds
, torch
and torchvision
packages which can be installed via pip
:
We will use the Camelyon17 dataset, one of the WILDS datasets of Koh et al. (2020) that represent "in-the-wild" distribution shifts for various data modalities. It contains tissue scans to be classified as benign or cancerous. The pre-change distribution corresponds to scans from across three hospitals and the post-change distribution corresponds to scans from a new fourth hospital.
Koh et al. (2020) show that models trained on scans from the pre-change distribution achieve an accuracy of 93.2% on unseen scans from the same distribution, but only 70.3% accuracy on scans from the post-change distribution.
First we create a function that converts the Camelyon dataset to a stream in order to simulate a live deployment environment. We extract N instances to act as the reference set on which a model of interest was trained. We then consider a stream of images from the pre-change (same) distribution and a stream of images from the post-change (drifted) distribution.
The following cell will download the Camelyon dataset (if DOWNLOAD=True). The download size is ~10GB and size on disk is ~15GB.
Shown below are samples from the pre-change distribution:
And samples from the post-change distribution:
The images are of dimension 96x96x3. We train an autoencoder in order to define a more structured representational space of lower dimension. This projection can be thought of as an extension of the kernel. It is important that trained preprocessing components are trained on a split of data that doesn't then form part of the reference data passed to the drift detector.
We can train the autoencoder using a helper function provided for convenience in alibi-detect
.
The preprocessing/projection functions are expected to map numpy arrays to numpy arrays, so we wrap the encoder within the function below.
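A sketch of such a wrapper, where `encoder` and `device` are the trained encoder and torch device from above:

```python
import numpy as np
import torch

def encoder_fn(x: np.ndarray) -> np.ndarray:
    # project a numpy batch of images onto the autoencoder's latent space
    x = torch.as_tensor(x).to(device)
    with torch.no_grad():
        x_proj = encoder(x)
    return x_proj.cpu().numpy()
```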
alibi-detect
's online drift detectors window the stream of data in an 'overlapping window' manner such that a test is performed at every time step. We will use an estimator of MMD as the test statistic. The estimate is updated incrementally at low cost. The thresholds are configured via simulation in an initial configuration phase to target the desired expected runtime (ERT) in the absence of change. For a detailed description of this calibration procedure see Cobb et al, 2021.
We define a function which will apply the detector to the streams and return the time at which drift was detected.
First we apply the detector multiple times to the pre-change stream where the distribution is unchanged.
We see that the average runtime in the absence of change is close to the desired ERT, as expected. We can inspect the detector's test_stats
and thresholds
properties to see how the test statistic varied over time and how close it got to exceeding the threshold.
Now we apply it to the post-change stream where the images are from a drifted distribution.
We see that the detector is quick to flag drift when it has occurred.
When true outputs/labels are available, we can perform supervised drift detection, monitoring the model's performance directly in order to check for harmful drift. Two detectors ideal for this application are the Fisher's Exact Test (FET) detector and the Cramér-von Mises (CVM) detector.
The FET detector is designed for use on binary data, such as the instance level performance indicators from a classifier (i.e. 0/1 for each incorrect/correct classification). The CVM detector is designed for use on continuous data, such as a regressor's instance level loss or error scores.
In this example we will use the offline versions of these detectors, which are suitable for use on batches of data. In many cases data may arrive sequentially, and the user may wish to perform drift detection as the data arrives to ensure it is detected as soon as possible. In this case, the online versions of the FET and CVM detectors can be used, as will be explored in a future example.
The palmerpenguins dataset consists of data on 344 penguins from 3 islands in the Palmer Archipelago, Antarctica. There are 3 different species of penguin in the dataset, and a common task is to classify the species of each penguin based upon two features, the length and depth of the penguin's bill, or beak.
Artwork by Allison Horst
This notebook requires the seaborn
package for visualization and the palmerpenguins
package to load data. These can be installed via pip
:
To download the dataset we use the palmerpenguins package:
The data consists of 333 rows (one row is removed as it contains a NaN), one for each penguin. There are 8 features describing the penguins' physical characteristics, their species and sex, the island each resides on, and the year measurements were taken.
For our first example use case, we will perform the popular species classification task. Here we wish to classify the species
based on only bill_length_mm
and bill_depth_mm
. To start we remove the other features and visualise those that remain.
The above plot shows that the Adelie species can primarily be identified by looking at bill length. To further distinguish between Gentoo and Chinstrap, we can then look at the bill depth.
Next we separate the data into inputs and outputs, and encode the species data as integers. Finally, we split the data into three sets: one to train the classifier, one to act as a reference set when testing for drift, and one to test for drift on.
For this dataset, a relatively shallow decision tree classifier should be sufficient, and so we train an sklearn
one on the training data.
As expected, the decision tree is able to give acceptable classification accuracy on the train and test sets.
In order to demonstrate use of the drift detectors, we first need to add some artificial drift to the test data X_test
. We add two types of drift here: to create covariate drift we subtract 5mm from the bill length of all the Gentoo penguins. $P(y|\mathbf{X})$ is unchanged here, but we have clearly introduced a shift $\Delta P(\mathbf{X})$. To create concept drift, we switch the labels of the Gentoo and Chinstrap penguins, so that the underlying process $P(y|\mathbf{X})$ is changed.
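A minimal sketch of how this drift could be injected, assuming `X_test`/`y_test` are numpy arrays with bill length in the first column and `GENTOO`/`CHINSTRAP` the encoded class labels:

```python
import numpy as np

GENTOO, CHINSTRAP = 2, 1  # placeholder encoded labels -- use the values from the label encoding above

# covariate drift: shift the bill length (column 0) of all Gentoo penguins by -5mm
X_cov, y_cov = X_test.copy(), y_test.copy()
X_cov[y_test == GENTOO, 0] -= 5.0

# concept drift: swap the Gentoo and Chinstrap labels so that P(y|X) changes
X_concept, y_concept = X_test.copy(), y_test.copy()
y_concept[y_test == GENTOO] = CHINSTRAP
y_concept[y_test == CHINSTRAP] = GENTOO
```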
We now define a utility function to plot the classifier's decision boundaries, and we use this to visualise the reference data set, the test set, and the two new data sets where drift is present.
These plots serve as a visualisation of the differences between covariate drift and concept drift. Importantly, the model accuracies shown above also highlight the fact that not all drift is necessarily malicious, in the sense that even relatively significant drift does not always lead to degradation in a model's performance indicators. For example, the model actually gives a slightly higher accuracy on the covariate drift data set than on the no drift set in this case. Conversely, the concept drift unsurprisingly leads to severely degraded model performance.
Before getting to the main task in this example, monitoring malicious drift with a supervised drift detector, we will first use the MMD detector to check for covariate drift. To do this we initialise it in an unsupervised manner by passing it the input data X_ref
.
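A sketch of this unsupervised check follows. Note that the concept drift set shares the unchanged inputs X_test; only its labels differ.

```python
from alibi_detect.cd import MMDDrift

cd_mmd = MMDDrift(X_ref, p_val=0.05)

for name, x in [('No drift', X_test),
                ('Covariate drift', X_test_covariate),
                ('Concept drift', X_test)]:   # inputs unchanged for concept drift
    print(name, '-> drift?', bool(cd_mmd.predict(x)['data']['is_drift']))
```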
Applying this detector on the no drift, covariate drift and concept drift data sets, we see that the detector only detects drift in the covariate drift case. Not detecting drift in the no drift case is desirable, but not detecting drift in the concept drift case is potentially problematic.
The fact that the unsupervised detector above does not detect the severe concept drift demonstrates the motivation for using supervised drift detectors that directly check for malicious drift, which can include malicious concept drift.
To perform supervised drift detection we first need to compute the model's performance indicators. Since this is a classification task, a suitable performance indicator is the instance level binary losses, which are computed below.
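For example, reusing the variables from the sketches above:

```python
# 1 = correct classification, 0 = incorrect classification
loss_ref = (clf.predict(X_ref) == y_ref).astype(int)
loss_test = (clf.predict(X_test) == y_test).astype(int)
loss_covariate = (clf.predict(X_test_covariate) == y_test).astype(int)
loss_concept = (clf.predict(X_test) == y_test_concept).astype(int)
```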
As seen above, these losses are binary data, where 0 represents an incorrect classification for each instance, and 1 represents a correct classification.
Since this is binary data, the FET detector is chosen, and initialised on the reference loss data. The alternative
hypothesis is set to less
, meaning we will only flag drift if the proportion of 1s to 0s is reduced compared to the reference data. In other words, we only flag drift if the model's performance has degraded.
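A sketch of the detector initialisation and the checks on the three loss sets:

```python
from alibi_detect.cd import FETDrift

cd_fet = FETDrift(loss_ref, p_val=0.05, alternative='less')

for name, loss in [('No drift', loss_test),
                   ('Covariate drift', loss_covariate),
                   ('Concept drift', loss_concept)]:
    print(name, '-> drift?', bool(cd_fet.predict(loss)['data']['is_drift']))
```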
Applying this detector to the same three data sets, we see that malicious drift isn't detected in the no drift or covariate drift cases, which is unsurprising since the model performance isn't degraded in these cases. However, with this supervised detector, we now detect the malicious concept drift as desired.
To provide a short example of supervised detection in a regression setting, we now rework the dataset into a regression task, and use the CVM detector on the model's squared error.
Warning: Must have scipy >= 1.7.0 installed for this example.
For a regression task, we take the penguins' flipper length and sex as inputs, and aim to predict the penguins' body mass. Looking at a scatter plot of these features, we can see there is substantial correlation between the chosen inputs and outputs.
Again, we split the dataset into the same three sets; a training set, reference set and test set.
This time we train a linear regressor on the training data, and find that it gives acceptable training and test accuracy.
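A sketch of the regression setup, reusing the split and variable names from before; the binary encoding of sex and the split sizes are assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = np.column_stack([df['flipper_length_mm'].to_numpy(),
                     (df['sex'] == 'male').astype(float).to_numpy()])
y = df['body_mass_g'].to_numpy()

X_train, X_rem, y_train, y_rem = train_test_split(X, y, train_size=0.5, random_state=0)
X_ref, X_test, y_ref, y_test = train_test_split(X_rem, y_rem, test_size=0.5, random_state=0)

reg = LinearRegression().fit(X_train, y_train)
print('Train R^2:', reg.score(X_train, y_train))
print('Test R^2:', reg.score(X_test, y_test))
```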
To generate a copy of the test data with concept drift added, we use the model to create new output data, with a multiplicative factor and some Gaussian noise added. The quality of our synthetic output data is of course affected by the accuracy of the model, but it serves to demonstrate the behavior of the model (and detector) when $P(y|\mathbf{X})$ is changed.
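For instance (the multiplicative factor and noise scale below are purely illustrative):

```python
# synthesise concept-drifted targets by scaling the model's predictions and adding noise
rng = np.random.default_rng(0)
y_test_concept = 0.8 * reg.predict(X_test) + rng.normal(0, 100, size=len(X_test))
```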
Unsurprisingly, the concept drift leads to degradation in the model accuracy.
As in the classification example, in order to perform supervised drift detection we need to compute the model's performance indicators. For this regression example, the instance level squared errors are used.
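For example:

```python
# instance-level squared errors as the performance indicator
loss_ref = (reg.predict(X_ref) - y_ref) ** 2
loss_test = (reg.predict(X_test) - y_test) ** 2
loss_concept = (reg.predict(X_test) - y_test_concept) ** 2
```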
The CVM detector is trained on the reference losses:
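A sketch, including the checks on the no drift and concept drift losses:

```python
from alibi_detect.cd import CVMDrift

cd_cvm = CVMDrift(loss_ref, p_val=0.05)

for name, loss in [('No drift', loss_test), ('Concept drift', loss_concept)]:
    print(name, '-> drift?', bool(cd_cvm.predict(loss)['data']['is_drift']))
```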
As desired, the CVM detector does not detect drift on the no drift data, but does on the concept drift data.
The least-squares density difference (LSDD) detector is a method for multivariate two-sample testing. The LSDD between two distributions $p$ and $q$ on $\mathcal{X}$ is defined as

$$LSDD(p, q) = \int_{\mathcal{X}} \big(p(x) - q(x)\big)^2 \, dx$$

Given two samples we can compute an estimate of the $LSDD$ between the two underlying distributions and use it as a test statistic. We then obtain a $p$-value via a permutation test on the values of the $LSDD$ estimates. In practice we actually estimate the LSDD scaled by a factor that maintains numerical stability when dimensionality is high.
Note
$LSDD$ is based on the assumption that a probability density exists for both distributions and hence is only suitable for continuous data. If you are working with tabular data containing categorical variables, we recommend using a detector that supports categorical features instead.
For high-dimensional data, we typically want to reduce the dimensionality before computing the permutation test. Following suggestions in Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs (BBSDs) as out-of-the-box preprocessing methods and note that PCA can also be easily implemented using scikit-learn
. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift.
Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detecting drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc.) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from HuggingFace's transformers package but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the accompanying notebook.
Arguments:
x_ref
: Data used as reference distribution.
Keyword arguments:
backend
: Both TensorFlow and PyTorch implementations of the LSDD detector as well as various preprocessing steps are available. Specify the backend (tensorflow or pytorch). Defaults to tensorflow.
p_val
: p-value used for significance of the permutation test.
preprocess_at_init
: Whether to already apply the (optional) preprocessing step to the reference data at initialization and store the preprocessed data. Dependent on the preprocessing step, this can reduce the computation time for the predict step significantly, especially when the reference dataset is large. Defaults to True. It is possible that it needs to be set to False if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
x_ref_preprocessed
: Whether or not the reference data x_ref
has already been preprocessed. If True, the reference data will be skipped and preprocessing will only be applied to the test data passed to predict
.
update_x_ref
: Reference data can optionally be updated to the last N instances seen by the detector or via reservoir sampling with size N. For the former, the parameter equals {'last': N} while for reservoir sampling {'reservoir_sampling': N} is passed.
preprocess_fn
: Function to preprocess the data before computing the data drift metrics. Typically a dimensionality reduction technique.
sigma
: Optionally set the bandwidth of the Gaussian kernel used in estimating the LSDD. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If sigma
is not specified, the 'median heuristic' is adopted whereby sigma
is set as the median pairwise distance between reference samples.
n_permutations
: Number of permutations used in the permutation test.
n_kernel_centers
: The number of reference samples to use as centers in the Gaussian kernel model used to estimate LSDD. Defaults to 1/20th of the reference data.
lambda_rd_max
: The maximum relative difference between two estimates of LSDD that the regularization parameter lambda is allowed to cause. Defaults to 0.2 as in the paper.
input_shape
: Optionally pass the shape of the input data.
data_type
: can specify data type added to the metadata. E.g. 'tabular' or 'image'.
Additional PyTorch keyword arguments:
device
: cuda or gpu to use the GPU and cpu for the CPU. If the device is not specified, the detector will try to leverage the GPU if possible and otherwise fall back on CPU.
Initialized drift detector example:
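A minimal sketch with the TensorFlow backend (x_ref is the reference data):

```python
from alibi_detect.cd import LSDDDrift

cd = LSDDDrift(x_ref, backend='tensorflow', p_val=.05)
```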
The same detector in PyTorch:
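Sketched with the same arguments, only switching the backend:

```python
cd = LSDDDrift(x_ref, backend='pytorch', p_val=.05)
```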
We can also easily add preprocessing functions for both frameworks. The following example uses a randomly initialized image encoder in PyTorch:
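A sketch of such a preprocessing step, assuming 32x32x3 image inputs; the encoder architecture, encoding dimension and batch size are illustrative:

```python
from functools import partial
import torch
import torch.nn as nn
from alibi_detect.cd import LSDDDrift
from alibi_detect.cd.pytorch import preprocess_drift

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# randomly initialised convolutional encoder mapping 32x32x3 images to 32-dim embeddings
encoder_net = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=0),
    nn.ReLU(),
    nn.Conv2d(64, 128, 4, stride=2, padding=0),
    nn.ReLU(),
    nn.Conv2d(128, 512, 4, stride=2, padding=0),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(2048, 32)
).to(device).eval()

# apply the encoder in batches before the two-sample test
preprocess_fn = partial(preprocess_drift, model=encoder_net, device=device, batch_size=512)

cd = LSDDDrift(x_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn)
```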
The same functionality is supported in TensorFlow and the main difference is that you would import from alibi_detect.cd.tensorflow import preprocess_drift
. Other preprocessing steps such as the output of hidden layers of a model or extracted text embeddings using transformer models can be used in a similar way in both frameworks. TensorFlow example for the hidden layer output:
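For instance, the output of a hidden (or softmax) layer of an existing classifier can be extracted with HiddenOutput; the layer index and batch size below are illustrative:

```python
from functools import partial
from alibi_detect.cd import LSDDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

model = ...  # placeholder: an existing tf.keras classifier trained on the data
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)

cd = LSDDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
```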
We detect data drift by simply calling predict
on a batch of instances x
. We can return the p-value and the threshold of the permutation test by setting return_p_val
to True and the LSDD metric and threshold by setting return_distance
to True.
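For example:

```python
# detect drift on a batch of instances x
preds = cd.predict(x, return_p_val=True, return_distance=True)
```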
The prediction takes the form of a dictionary with meta
and data
keys. meta
contains the detector's metadata while data
is also a dictionary which contains the actual predictions stored in the following keys:
is_drift
: 1 if the sample tested has drifted from the reference data and 0 otherwise.
p_val
: contains the p-value if return_p_val
equals True.
threshold
: p-value threshold if return_p_val
equals True.
distance
: LSDD metric between the reference data and the new batch if return_distance
equals True.
distance_threshold
: LSDD metric value from the permutation test which corresponds to the p-value threshold.
Examples are provided for the related MMDDrift detector. The LSDDDrift detector can be used in exactly the same way as the MMDDrift detector, which is further demonstrated in the example.
Alibi Detect also includes custom text preprocessing steps in both TensorFlow and PyTorch based on HuggingFace's transformers package:
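A sketch of the PyTorch variant, using a pre-trained transformer to embed the text before the test; the model name, selected layers, projection dimension and batch settings are illustrative:

```python
from functools import partial
import torch
import torch.nn as nn
from transformers import AutoTokenizer
from alibi_detect.cd import LSDDDrift
from alibi_detect.cd.pytorch import preprocess_drift
from alibi_detect.models.pytorch import TransformerEmbedding

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)

# embedding built from the hidden states of a few transformer layers,
# followed by a linear projection to a lower-dimensional space
embedding = TransformerEmbedding(model_name, embedding_type='hidden_state', layers=[5, 6, 7])
model = nn.Sequential(embedding, nn.Linear(768, 32)).to(device).eval()

preprocess_fn = partial(preprocess_drift, model=model, tokenizer=tokenizer, max_len=100, batch_size=32)

cd = LSDDDrift(x_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn)
```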
Again the same functionality is supported in TensorFlow but with from alibi_detect.cd.tensorflow import preprocess_drift
and from alibi_detect.models.tensorflow import TransformerEmbedding
imports. Check out the example for more information.