# Test the Installation

To confirm the successful installation of [Seldon Core 2](/seldon-core-2/installation/production-environment.md), [Kafka](/seldon-core-2/installation/production-environment/kafka.md), and the [service mesh](/seldon-core-2/installation/production-environment/istio.md), deploy a sample model and perform an inference test. Follow these steps:

## Deploy the Iris Model

1. Apply the following configuration to deploy the Iris model in the namespace `seldon-mesh`:

```bash
kubectl apply -f - --namespace=seldon-mesh <<EOF
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://seldon-models/scv2/samples/mlserver_1.3.5/iris-sklearn"
  requirements:
    - sklearn
EOF
```

The output is:

```
model.mlops.seldon.io/iris created
```

2. Verify that the model is deployed in the namespace `seldon-mesh`.

```bash
kubectl wait --for condition=ready --timeout=300s model --all -n seldon-mesh
```

When the model is deployed, the output is similar to:

```bash
model.mlops.seldon.io/iris condition met
```

## Deploy a Pipeline for the Iris Model

{% hint style="info" %}
**Note**: Do not reuse the pipeline name as the name of any individual step within the pipeline. Reusing it causes a Kubernetes validation error: `pipeline iris must not have a step name with the same name as pipeline name`
{% endhint %}

1. Apply the following configuration to deploy a pipeline for the Iris model in the namespace `seldon-mesh`:

```bash
kubectl apply -f - --namespace=seldon-mesh <<EOF
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: irispipeline
spec:
  steps:
    - name: iris
  output:
    steps:
    - iris
EOF
```

The output is:

```
pipeline.mlops.seldon.io/irispipeline created
```

2. Verify that the pipeline is deployed in the namespace `seldon-mesh`.

```bash
kubectl wait --for condition=ready --timeout=300s pipeline --all -n seldon-mesh
```

When the pipeline is deployed, the output is similar to:

```bash
pipeline.mlops.seldon.io/irispipeline condition met
```

## Perform an Inference Test

1. Use `curl` to send a test inference request to the deployed model. Replace \<INGRESS\_IP> with your service mesh's ingress IP address. Ensure that:

* The `Host` header matches the virtual host configured in your service mesh.
* The `Seldon-Model` header specifies the correct model name.

```bash
curl -k http://<INGRESS_IP>:80/v2/models/iris/infer \
  -H "Host: seldon-mesh.inference.seldon" \
  -H "Content-Type: application/json" \
  -H "Seldon-Model: iris" \
  -d '{
    "inputs": [
      {
        "name": "predict",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [[1, 2, 3, 4]]
      }
    ]
  }'
```
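The JSON body passed to `curl` can equally be built in code. A minimal Python sketch of the same V2 inference payload and headers; `INGRESS_IP` remains a placeholder you must fill in, exactly as in the `curl` command:

```python
import json

# Placeholder: replace with your service mesh's ingress IP address.
INGRESS_IP = "<INGRESS_IP>"

url = f"http://{INGRESS_IP}:80/v2/models/iris/infer"
headers = {
    "Host": "seldon-mesh.inference.seldon",
    "Content-Type": "application/json",
    "Seldon-Model": "iris",
}
# V2 inference payload: a single FP32 tensor of shape [1, 4].
payload = {
    "inputs": [
        {
            "name": "predict",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[1, 2, 3, 4]],
        }
    ]
}
body = json.dumps(payload)
```

Sending `body` with those headers to `url` (with any HTTP client) is equivalent to the `curl` invocation above.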

The output is similar to:

```bash
{"model_name":"iris_1","model_version":"1","id":"f4d8b82f-2af3-44fb-b115-60a269cbfa5e","parameters":{},"outputs":[{"name":"predict","shape":[1,1],"datatype":"INT64","parameters":{"content_type":"np"},"data":[2]}]}
```
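The response can also be checked programmatically. A minimal Python sketch that parses the sample response shown above and extracts the predicted class:

```python
import json

# Sample response from the model inference request above.
response = (
    '{"model_name":"iris_1","model_version":"1",'
    '"id":"f4d8b82f-2af3-44fb-b115-60a269cbfa5e","parameters":{},'
    '"outputs":[{"name":"predict","shape":[1,1],"datatype":"INT64",'
    '"parameters":{"content_type":"np"},"data":[2]}]}'
)

result = json.loads(response)
# The predicted Iris class is the first element of the first output's data.
prediction = result["outputs"][0]["data"][0]
print(prediction)  # → 2
```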

2. Use `curl` to send a test inference request through the pipeline to the deployed model. Replace \<INGRESS\_IP> with your service mesh's ingress IP address. Ensure that:

* The `Host` header matches the virtual host configured in your service mesh.
* The `Seldon-Model` header specifies the pipeline name with the `.pipeline` suffix. The suffix routes the request to the pipeline endpoint and distinguishes the pipeline from a model that shares the same base name.

```bash
curl -k http://<INGRESS_IP>:80/v2/models/irispipeline/infer \
  -H "Host: seldon-mesh.inference.seldon" \
  -H "Content-Type: application/json" \
  -H "Seldon-Model: irispipeline.pipeline" \
  -d '{
    "inputs": [
      {
        "name": "predict",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [[1, 2, 3, 4]]
      }
    ]
  }'
```

The output is similar to:

```bash
{"model_name":"","outputs":[{"data":[2],"name":"predict","shape":[1,1],"datatype":"INT64","parameters":{"content_type":"np"}}]}
```
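The only differences between the two `curl` invocations are the request path and the `Seldon-Model` header. A small Python sketch of a helper that derives both for either target (the function name `inference_target` is illustrative, not part of any Seldon API):

```python
def inference_target(name: str, pipeline: bool = False):
    """Return the URL path and Seldon-Model header value for a model
    or a pipeline, mirroring the two curl examples above."""
    path = f"/v2/models/{name}/infer"
    # Pipelines are addressed with a ".pipeline" suffix in the header,
    # which distinguishes them from a model sharing the same base name.
    header = f"{name}.pipeline" if pipeline else name
    return path, header

# Direct model request vs. request routed through the pipeline:
print(inference_target("iris"))  # → ('/v2/models/iris/infer', 'iris')
print(inference_target("irispipeline", pipeline=True))
```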

