# Inference

## Synchronous Requests

### Find the Seldon Service Endpoint
````{tab} Docker Compose
In the default Docker Compose setup, container ports are accessible from the host machine.
This means you can use `localhost` or `0.0.0.0` as the hostname.
The default port for sending inference requests to the Seldon system is `9000`.
This port is set by the `ENVOY_DATA_PORT` environment variable in the Compose configuration.
Putting this together, you can send inference requests to `0.0.0.0:9000`.
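As a concrete illustration, the following sketch sends an Open Inference Protocol (V2) REST request to that endpoint using only the Python standard library. The model name `iris` and its input tensor name, shape, and datatype are assumptions for illustration; substitute the details of a model you have actually deployed.

```python
import json
import urllib.error
import urllib.request

ENDPOINT = "http://0.0.0.0:9000"  # Docker Compose default data-plane port
MODEL_NAME = "iris"  # hypothetical model name; replace with your own

# Open Inference Protocol (V2) request body
payload = {
    "inputs": [
        {
            "name": "predict",  # input tensor name expected by the model
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [1.0, 2.0, 3.0, 4.0],
        }
    ]
}

request = urllib.request.Request(
    f"{ENDPOINT}/v2/models/{MODEL_NAME}/infer",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Seldon's Envoy gateway uses this header to route to the model
        "Seldon-Model": MODEL_NAME,
    },
)

try:
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read()))
except urllib.error.URLError as err:
    # No Seldon system reachable at the endpoint above
    print(f"Request failed: {err}")
```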
````
````{tab} Kubernetes
In Kubernetes, Seldon creates a single `Service` called `seldon-mesh` in the namespace it is installed into.
By default, this namespace is also called `seldon-mesh`.
If this `Service` is exposed via a load balancer, the external IP address and port can be found via:
```bash
kubectl get svc seldon-mesh -n seldon-mesh -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
kubectl get svc seldon-mesh -n seldon-mesh -o jsonpath='{.spec.ports[0].port}'
```
If you are not using a `LoadBalancer` for the `seldon-mesh` `Service`, you can still send inference requests.
For development and testing purposes, you can port-forward the `Service` to your local machine:
```bash
kubectl port-forward svc/seldon-mesh -n seldon-mesh 8080:80
```
Inference requests can then be sent to `localhost:8080`.
If you are using a service mesh like Istio or Ambassador, you will need to use the IP address of the service mesh ingress and determine the appropriate port.
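Before sending inference requests through a port-forward, it can help to confirm the endpoint is reachable. The sketch below, a minimal check assuming the server implements the Open Inference Protocol's readiness probe at `/v2/health/ready`, uses only the Python standard library:

```python
import urllib.error
import urllib.request

ENDPOINT = "http://localhost:8080"  # port-forwarded seldon-mesh Service

def endpoint_ready(base_url: str) -> bool:
    """Return True if the server readiness probe responds with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/v2/health/ready") as resp:
            return resp.status == 200
    except urllib.error.URLError:
        # Endpoint unreachable (e.g. port-forward not running)
        return False

print(endpoint_ready(ENDPOINT))
```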
````