# Server Config

{% hint style="info" %}
**Note**: This section is for advanced usage where you want to define new types of inference servers.
{% endhint %}

Server configurations define how to create an inference server. By default, one is provided for Seldon MLServer and one for NVIDIA Triton Inference Server. Both servers support the Open Inference Protocol, which is a requirement for all inference servers. A ServerConfig defines the Kubernetes ReplicaSet for the server, including the Seldon Agent reverse proxy and an Rclone server for downloading model artifacts. The Kustomize ServerConfig for MLServer is shown below:

{% @github-files/github-code-block url="https://github.com/SeldonIO/seldon-core/blob/v2/operator/config/serverconfigs/mlserver.yaml" %}
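
The full file is long, so the abridged sketch below shows only its overall shape: a `ServerConfig` custom resource whose `spec.podSpec` carries the three containers described above. The image tags here are placeholders rather than the pinned defaults; refer to the embedded `mlserver.yaml` for the real images, ports, probes, and volume mounts.

```yaml
# Abridged sketch of a ServerConfig; image tags are placeholders.
apiVersion: mlops.seldon.io/v1alpha1
kind: ServerConfig
metadata:
  name: mlserver
spec:
  podSpec:
    containers:
      # Rclone sidecar that downloads model artifacts for the server.
      - name: rclone
        image: rclone/rclone:latest          # placeholder tag
      # Seldon Agent reverse proxy in front of the inference server.
      - name: agent
        image: seldonio/seldon-agent:latest  # placeholder tag
      # The inference server itself, speaking the Open Inference Protocol.
      - name: mlserver
        image: seldonio/mlserver:latest      # placeholder tag
```

Once a custom ServerConfig is installed, a `Server` resource selects it by name through its `serverConfig` field. As a minimal sketch, assuming a ServerConfig named `mlserver` exists in the cluster (the Server name `mlserver-custom` is illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-custom
spec:
  serverConfig: mlserver  # name of the ServerConfig to create server pods from
  replicas: 1
```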

