# Getting Started

## Installation

Alibi works with Python 3.7+ and can be installed from [PyPI](https://pypi.org/project/alibi/) or [conda-forge](https://conda-forge.org/) by following these instructions.

### PyPI

Alibi can be installed from [PyPI](https://pypi.org/project/alibi/) with `pip`:

{% tabs %}
{% tab title="Standard" %}
Default installation.

```bash
pip install alibi
```

{% endtab %}

{% tab title="SHAP" %}
Installation with support for computing [SHAP](https://shap.readthedocs.io/en/stable/index.html) values.

```bash
pip install alibi[shap]
```

{% endtab %}

{% tab title="Distributed" %}
Installation with support for [distributed Kernel SHAP](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/examples/distributed_kernel_shap_adult_lr.ipynb).

```bash
pip install alibi[ray]
```

{% endtab %}

{% tab title="TensorFlow" %}
Installation with support for TensorFlow backends. Required for:

* [Contrastive Explanation Method (CEM)](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/CEM.ipynb)
* [Counterfactuals Guided by Prototypes](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/CFProto.ipynb)
* [Counterfactual Instances](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/CF.ipynb)
* [Integrated gradients](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/IntegratedGradients.ipynb)
* [Anchors on Textual data](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/examples/anchor_text_movie.ipynb) with `sampling_strategy='language_model'`
* [Counterfactuals with RL](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/CFRL.ipynb) (one of TensorFlow or Torch is required)

```bash
pip install alibi[tensorflow]
```

{% endtab %}

{% tab title="Torch" %}
Installation with support for Torch backends. One of Torch or TensorFlow is required for:

* [Counterfactuals with RL](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/CFRL.ipynb)
* [Similarity explanations](https://github.com/SeldonIO/alibi/blob/master/docs-gb/source/methods/Similarity.ipynb)

```bash
pip install alibi[torch]
```

{% endtab %}

{% tab title="All" %}
Installs all optional dependencies.

```bash
pip install alibi[all]
```

{% endtab %}
{% endtabs %}

### conda-forge

* To install the conda-forge version it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/), which can be installed to the *base* conda environment with:

```bash
conda install mamba -n base -c conda-forge
```

* `mamba` can then be used to install alibi in a conda environment:

{% tabs %}
{% tab title="Standard" %}
Default installation.

```bash
mamba install -c conda-forge alibi
```

{% endtab %}

{% tab title="SHAP" %}
Installation with support for computing [SHAP](https://shap.readthedocs.io/en/stable/index.html) values.

```bash
mamba install -c conda-forge alibi shap
```

{% endtab %}

{% tab title="Distributed" %}
Installation with support for distributed computation of explanations.

```bash
mamba install -c conda-forge alibi ray 
```

{% endtab %}
{% endtabs %}

## Features

Alibi is a Python package designed to help explain the predictions of machine learning models and gauge the confidence of predictions. The focus of the library is to support the widest range of models using black-box methods where possible.

To get a list of the available model explanation algorithms, run:

```python
import alibi
alibi.explainers.__all__
```

```bash
['ALE', 
'AnchorTabular',
'DistributedAnchorTabular', 
'AnchorText', 
'AnchorImage', 
'CEM', 
'Counterfactual', 
'CounterfactualProto', 
'CounterfactualRL', 
'CounterfactualRLTabular',
'PartialDependence',
'TreePartialDependence',
'PartialDependenceVariance',
'PermutationImportance',
'plot_ale',
'plot_pd',
'plot_pd_variance',
'plot_permutation_importance',
'IntegratedGradients', 
'KernelShap', 
'TreeShap',
'GradientSimilarity']
```

For gauging model confidence:

```python
alibi.confidence.__all__
```

```bash
['linearity_measure',
 'LinearityMeasure',
 'TrustScore']
```

For dataset summarization:

```python
alibi.prototypes.__all__
```

```bash
['ProtoSelect',
 'visualize_image_prototypes']
```

For detailed information on the methods:

* [Overview of available methods](/alibi-explain/overview/algorithms.md)
  * [Accumulated Local Effects](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/ALE.ipynb)
  * [Anchor explanations](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/Anchors.ipynb)
  * [Contrastive Explanation Method (CEM)](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/CEM.ipynb)
  * [Counterfactual Instances](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/CF.ipynb)
  * [Counterfactuals Guided by Prototypes](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/CFProto.ipynb)
  * [Counterfactuals with RL](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/CFRL.ipynb)
  * [Integrated gradients](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/IntegratedGradients.ipynb)
  * [Kernel SHAP](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/KernelSHAP.ipynb)
  * [Linearity Measure](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/LinearityMeasure.ipynb)
  * [ProtoSelect](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/ProtoSelect.ipynb)
  * [PartialDependence](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/PartialDependence.ipynb)
  * [PD Variance](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/PartialDependenceVariance.ipynb)
  * [Permutation Importance](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/PermutationImportance.ipynb)
  * [TreeShap](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/TreeSHAP.ipynb)
  * [Trust Scores](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/TrustScores.ipynb)
  * [Similarity explanations](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/Similarity.ipynb)

## Basic Usage

The alibi explanation API takes inspiration from `scikit-learn`, consisting of distinct initialize, fit and explain steps. We will use the [Anchor method on tabular data](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/methods/Anchors.ipynb#Tabular-Data) to illustrate the API.

First, we import the explainer:

```python
from alibi.explainers import AnchorTabular
```

Next, we initialize the explainer by passing it a [prediction function](/alibi-explain/overview/white_box_black_box.md) and any other required arguments:

```python
explainer = AnchorTabular(predict_fn, feature_names)
```
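For concreteness, here is one way such a prediction function might be obtained. This is a sketch, not part of the Alibi API: it assumes a scikit-learn classifier trained on the iris dataset, with `predict_fn` and `feature_names` built from it.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Load a toy dataset and train an illustrative classifier.
data = load_iris()
X_train, y_train = data.data, data.target
feature_names = data.feature_names

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The prediction function maps a batch of instances to model outputs.
predict_fn = lambda X: clf.predict_proba(X)

probs = predict_fn(X_train[:2])
print(probs.shape)  # (2, 3): two instances, three iris classes
```

Whether the method needs class labels or probabilities depends on the explainer; consult the method's documentation before choosing `clf.predict` versus `clf.predict_proba`.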

Some methods require an additional `.fit` step, which needs access to the training data:

```python
explainer.fit(X_train)
```

```bash
AnchorTabular(meta={
    'name': 'AnchorTabular',
    'type': ['blackbox'],
    'explanations': ['local'],
    'params': {'seed': None, 'disc_perc': (25, 50, 75)}
})
```

Finally, we call the `explain` method on a test instance; this returns an `Explanation` object containing the explanation and any additional metadata computed along the way:

```python
explanation = explainer.explain(x)
```

The returned `Explanation` object has `meta` and `data` attributes which are dictionaries containing any explanation metadata (e.g. parameters, type of explanation) and the explanation itself respectively:

```python
explanation.meta
```

```bash
{'name': 'AnchorTabular',
 'type': ['blackbox'],
 'explanations': ['local'],
 'params': {'seed': None,
  'disc_perc': (25, 50, 75),
  'threshold': 0.95,
  'delta': ...truncated output...
```

```python
explanation.data
```

```bash
{'anchor': ['petal width (cm) > 1.80', 'sepal width (cm) <= 2.80'],
 'precision': 0.9839228295819936,
 'coverage': 0.31724137931034485,
 'raw': {'feature': [3, 1],
  'mean': [0.6453362255965293, 0.9839228295819936],
  'precision': [0.6453362255965293, 0.9839228295819936],
  'coverage': [0.20689655172413793, 0.31724137931034485],
  'examples': ...truncated output...
```

The top level keys of both `meta` and `data` dictionaries are also exposed as attributes for ease of use of the explanation:

```python
explanation.anchor
```

```bash
['petal width (cm) > 1.80', 'sepal width (cm) <= 2.80']
```

Some algorithms, such as [Kernel SHAP](https://github.com/ramonpzg/alibi/blob/rp-alibi-newdocs-dec23/doc/source/methods/KernelSHAP.ipynb), can run batches of explanations in parallel if the number of CPU cores is specified in the algorithm constructor:

```python
distributed_ks = KernelShap(predict_fn, distributed_opts={'n_cpus': 10})
```

Note that this requires installing the dependencies of the distributed backend with `pip install alibi[ray]`.

The exact details will vary slightly from method to method, so we encourage the reader to become familiar with the [types of algorithms supported](/alibi-explain/overview/algorithms.md) in Alibi.

