Auto-Encoding Gaussian Mixture Model
The Auto-Encoding Gaussian Mixture Model (AEGMM) Outlier Detector follows the Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection paper (Zong et al., ICLR 2018). The encoder compresses the data while the reconstructed instances generated by the decoder are used to create additional features based on the reconstruction error between the input and the reconstructions. These features are combined with the encodings and fed into a Gaussian Mixture Model (GMM). The AEGMM outlier detector is first trained on a batch of unlabeled, but normal (inlier) data. Unsupervised or semi-supervised training is desirable since labeled data is often scarce. The sample energy of the GMM can then be used to determine whether an instance is an outlier (high sample energy) or not (low sample energy). The algorithm is suitable for tabular and image data.
Parameters:

* `threshold`: threshold value for the sample energy above which the instance is flagged as an outlier.
* `n_gmm`: number of components in the GMM.
* `encoder_net`: `tf.keras.Sequential` instance containing the encoder network; see the initialization example below.
* `decoder_net`: `tf.keras.Sequential` instance containing the decoder network; see the initialization example below.
* `gmm_density_net`: layers for the GMM network wrapped in a `tf.keras.Sequential` class; see the initialization example below.
* `aegmm`: instead of using a separate encoder, decoder and GMM density net, the AEGMM can also be passed as a `tf.keras.Model`.
* `recon_features`: function to extract features from the instance reconstructed by the decoder. Defaults to a combination of the mean squared reconstruction error and the cosine similarity between the original and reconstructed instances.
* `data_type`: optionally specify the data type added to the metadata, e.g. 'tabular' or 'image'.
Initialized outlier detector example:
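The following is a minimal sketch loosely following the `alibi_detect` API; `n_features`, `latent_dim`, the layer sizes and the threshold value are illustrative assumptions, not prescribed values:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, InputLayer
from alibi_detect.od import OutlierAEGMM

n_features = 30   # hypothetical number of tabular features
latent_dim = 2    # size of the encoding
n_gmm = 2         # number of GMM components

encoder_net = tf.keras.Sequential([
    InputLayer(input_shape=(n_features,)),
    Dense(20, activation=tf.nn.tanh),
    Dense(latent_dim, activation=None)
])

decoder_net = tf.keras.Sequential([
    InputLayer(input_shape=(latent_dim,)),
    Dense(20, activation=tf.nn.tanh),
    Dense(n_features, activation=None)
])

# the GMM density net takes the encoding concatenated with the default
# two reconstruction features, hence the input dimension latent_dim + 2
gmm_density_net = tf.keras.Sequential([
    InputLayer(input_shape=(latent_dim + 2,)),
    Dense(10, activation=tf.nn.tanh),
    Dense(n_gmm, activation=tf.nn.softmax)
])

od = OutlierAEGMM(
    threshold=7.5,  # illustrative; can also be set later via infer_threshold
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    gmm_density_net=gmm_density_net,
    n_gmm=n_gmm
)
```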
We then need to train the outlier detector. The following parameters can be specified:
* `X`: training batch as a numpy array of preferably normal data.
* `loss_fn`: loss function used for training. Defaults to the custom AEGMM loss, which is a combination of the mean squared reconstruction error, the sample energy of the GMM and a loss term penalizing small values on the diagonals of the covariance matrices in the GMM to avoid trivial solutions. It is important to balance the loss weights below so no single loss term dominates during the optimization.
* `w_energy`: weight on the sample energy loss term. Defaults to 0.1.
* `w_cov_diag`: weight on the covariance diagonals. Defaults to 0.005.
* `optimizer`: optimizer used for training. Defaults to Adam with a learning rate of 1e-4.
* `epochs`: number of training epochs.
* `batch_size`: batch size used during training.
* `verbose`: boolean whether to print training progress.
* `log_metric`: additional metrics whose progress will be displayed if `verbose` equals `True`.
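For instance, a minimal sketch of a fit call, with a hypothetical `X_train` batch and illustrative training settings:

```python
import numpy as np

# hypothetical batch of normal (inlier) training instances
X_train = np.random.rand(1000, n_features).astype(np.float32)

od.fit(
    X_train,
    epochs=10,       # illustrative value
    batch_size=128,  # illustrative value
    verbose=True
)
```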
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
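A sketch of threshold inference with `infer_threshold`; here we assume roughly 95% of the hypothetical batch `X` is normal:

```python
# X: hypothetical batch containing both normal and outlier instances;
# threshold_perc is the approximate percentage of normal data in X
od.infer_threshold(X, threshold_perc=95)
```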
We detect outliers by simply calling `predict` on a batch of instances `X` to compute the instance level sample energies. We can also return the instance level outlier score by setting `return_instance_score` to `True`.

The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys:
* `is_outlier`: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape (batch size,).
* `instance_score`: contains instance level scores if `return_instance_score` equals `True`.
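A short sketch of the prediction call, reusing the detector and a hypothetical batch `X`:

```python
preds = od.predict(X, return_instance_score=True)

preds['data']['is_outlier']      # 0/1 array of shape (batch size,)
preds['data']['instance_score']  # instance level sample energies
```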