Models can be scaled by setting their replica count, e.g.
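A minimal sketch of a Model scaled to two replicas (the model name and artifact path are illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://seldon-models/mlserver/iris"
  requirements:
  - sklearn
  replicas: 2
```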
Currently, the number of replicas must not exceed the number of replicas of the Server the model is scheduled to.
Servers can be scaled by setting their replica count, e.g.
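For example, a sketch of a Server scaled to two replicas, assuming the default mlserver ServerConfig:

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver
spec:
  serverConfig: mlserver
  replicas: 2
```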
Currently, models scheduled to a server can only scale up to the server replica count.
Seldon Core 2 runs with several control plane and data plane components. The scaling of these resources is discussed below:
Pipeline gateway.
The pipeline gateway handles synchronous REST and gRPC requests to Pipelines. It is stateless and can be scaled based on traffic demand.
Model gateway.
This component pulls model requests from Kafka and sends them to inference servers. It can be scaled up to the partition factor of your Kafka topics. At present we set a uniform partition factor for all topics in one installation of Seldon.
Dataflow engine.
The dataflow engine runs KStream topologies to manage Pipelines. It can run as multiple replicas and the scheduler will balance Pipelines to run across it with a consistent hashing load balancer. Each Pipeline is managed up to the partition factor of Kafka (presently hardwired to one).
Scheduler.
This manages the control plane operations. It is presently required to be one replica as it maintains internal state within a BadgerDB held on local persistent storage (stateful set in Kubernetes). Performance tests have shown this not to be a bottleneck at present.
Kubernetes Controller.
The Kubernetes controller manages resource updates on the cluster and passes them on to the Scheduler. It runs as a single replica by default but has the ability to scale.
Envoy
Envoy replicas get their state from the scheduler for routing information and can be scaled as needed.
Allow configuration of partition factor for data plane consistent hashing load balancer.
Allow Model gateway and Pipeline gateway to use consistent hashing load balancer.
Consider control plane scaling options.
Kubernetes provides the core platform for production use. However, the architecture is agnostic to the underlying cluster technology, as shown by the local Docker installation.
Inference artifacts referenced by Models can be stored in any of the storage backends supported by Rclone. This includes local filesystems, AWS S3, and Google Cloud Storage (GCS), among others. Configuration is provided out-of-the-box for public GCS buckets, which enables the use of Seldon-provided models like in the below example:
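A sketch of such a Model referencing a public GCS artifact directly via the gs: prefix (the exact bucket path is illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://seldon-models/mlserver/iris"
  requirements:
  - sklearn
```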
This configuration is provided by the Kubernetes Secret seldon-rclone-gs-public. It is made available to Servers as a preloaded secret. You can define and use your own storage configurations in exactly the same way.
To define a new storage configuration, you need the following details:
Remote name
Remote type
Provider parameters
A remote is what Rclone calls a storage location. The type defines what protocol Rclone should use to talk to this remote. A provider is a particular implementation for that storage type. Some storage types have multiple providers, such as s3, which covers AWS S3 itself, MinIO, Ceph, and so on.
The remote name is your choice. The prefix you use for models in spec.storageUri must be the same as this remote name.
The remote type is one of the values supported by Rclone. For example, for AWS S3 it is s3 and for Dropbox it is dropbox.
The provider parameters depend entirely on the remote type and the specific provider you are using. Please check the Rclone documentation for the appropriate provider. Note that the Rclone docs for storage types call the parameters properties and provide both config and env var formats; you need to use the config format. For example, the GCS parameter --gcs-client-id described here should be used as client_id.
For reference, this format is described in the Rclone documentation. Note that we do not support the use of opts discussed in that section.
Kubernetes Secrets are used to store Rclone configurations, or storage secrets, for use by Servers. Each Secret should contain exactly one Rclone configuration.
A Server can use storage secrets in one of two ways:
It can dynamically load a secret specified by a Model in its .spec.secretName
It can use global configurations made available via preloaded secrets
The name of a Secret is entirely your choice, as is the name of the data key in that Secret. All that matters is that there is a single data key and that its value is in the format described above.
Note: It is possible to use preloaded secrets for some Models and dynamically loaded secrets for others.
Rather than Models always having to specify which secret to use, a Server can load storage secrets ahead of time. These can then be reused across many Models.
When using a preloaded secret, the Model definition should leave .spec.secretName empty. The protocol prefix in .spec.storageUri still needs to match the remote name specified by a storage secret.
The secrets to preload are named in a centralised ConfigMap called seldon-agent. This ConfigMap applies to all Servers managed by the same SeldonRuntime. By default this ConfigMap only includes seldon-rclone-gs-public, but it can be extended with your own secrets as shown below.
The easiest way to change this is to update your SeldonRuntime.
If your SeldonRuntime is configured using the seldon-core-v2-runtime Helm chart, the corresponding value is config.agentConfig.rclone.configSecrets. This can be used as shown below:
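A sketch of the Helm values, assuming a storage secret named minio-secret exists in the namespace:

```yaml
config:
  agentConfig:
    rclone:
      configSecrets:
        - seldon-rclone-gs-public
        - minio-secret
```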
Otherwise, if your SeldonRuntime is configured directly, you can add secrets by setting .spec.config.agentConfig.rclone.config_secrets. This can be used as follows:
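A sketch of the corresponding SeldonRuntime (resource names are illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: SeldonRuntime
metadata:
  name: seldon
  namespace: seldon-mesh
spec:
  seldonConfig: default
  config:
    agentConfig:
      rclone:
        config_secrets:
          - seldon-rclone-gs-public
          - minio-secret
```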
Assuming you have installed MinIO in the minio-system namespace, a corresponding secret could be:
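A sketch of such a secret, using the name/type/parameters Rclone configuration format described above (the credentials and endpoint are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: seldon-mesh
type: Opaque
stringData:
  s3: |
    type: s3
    name: s3
    parameters:
      provider: minio
      env_auth: false
      access_key_id: minioadmin
      secret_access_key: minioadmin
      endpoint: http://minio.minio-system.svc.cluster.local:9000
```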
You can then reference this in a Model with .spec.secretName:
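For example (model name and artifact path are illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "s3://models/iris"
  secretName: "minio-secret"
  requirements:
  - sklearn
```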
GCS can use service accounts for access. You can generate the credentials for a service account using the gcloud CLI:
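For example, assuming an existing service account (the account email is a placeholder):

```bash
gcloud iam service-accounts keys create gcloud-application-credentials.json \
  --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
```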
The contents of gcloud-application-credentials.json can be put into a secret:
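A sketch using Rclone's google cloud storage type with the service_account_credentials parameter (the inlined JSON is a placeholder for the generated credentials file contents):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcs-secret
  namespace: seldon-mesh
type: Opaque
stringData:
  gs: |
    type: google cloud storage
    name: gs
    parameters:
      service_account_credentials: '{"type":"service_account", "project_id":"my-project", "...":"..."}'
```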
You can then reference this in a Model with .spec.secretName:
For Kubernetes usage we provide a set of custom resources for interacting with Seldon.
SeldonRuntime - for installing Seldon in a particular namespace.
Servers - for deploying sets of replicas of core inference servers (MLServer or Triton).
Models - for deploying single machine learning models, custom transformation logic, drift detectors, outliers detectors and explainers.
Experiments - for testing new versions of models.
Pipelines - for connecting together flows of data between models.
SeldonConfig and ServerConfig define the core installation configuration and machine learning inference server configuration for Seldon. Normally, you would not need to customize these but this may be required for your particular custom installation within your organisation.
ServerConfigs - for defining new types of inference server that can be referenced by a Server resource.
SeldonConfig - for defining how Seldon is installed.
Seldon Core 2 requires Kafka to implement data-centric inference Pipelines. See our architecture documentation to learn more on how Seldon Core 2 uses Kafka.
Note: Kafka integration is required to enable the data-centric inference Pipelines feature. It is highly advised to configure Kafka integration to take full advantage of Seldon Core 2 features.
We list alternatives below.
We recommend using a managed Kafka solution for production installations. This takes away all the complexity of running a secure and scalable Kafka cluster.
We have currently tested and documented integration with the following managed solutions:
Confluent Cloud (security: SASL/PLAIN)
Confluent Cloud (security: SASL/OAUTHBEARER)
Amazon MSK (security: mTLS)
Amazon MSK (security: SASL/SCRAM)
Azure Event Hub (security: SASL/PLAIN)
See our Kafka security section for configuration examples.
Seldon Core 2 requires Kafka to implement data-centric inference Pipelines. To install Kafka for testing purposes in your Kubernetes cluster, we recommend using the Strimzi operator.
Note: This page discusses how to install the Strimzi operator and create a Kafka cluster for trial, dev, or testing purposes. For a production-grade installation consult the Strimzi documentation or use one of the managed solutions mentioned here.
You can install and configure Strimzi using either Helm charts or our Ansible playbooks, both documented below.
The installation of a Kafka cluster requires the Strimzi Kafka operator to be installed in the same namespace. This allows you to directly use the mTLS certificates created by the Strimzi operator. One option to install the Strimzi operator is via Helm.
Note that we are using KRaft instead of ZooKeeper for Kafka. You can enable featureGates during the Helm installation via:
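A sketch of the Helm install, assuming the Strimzi chart repository has already been added as strimzi; the exact feature gate names depend on your Strimzi version:

```bash
helm upgrade --install strimzi strimzi/strimzi-kafka-operator \
  --namespace seldon-mesh \
  --set featureGates='+UseKRaft,+KafkaNodePools'
```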
Warning: Currently the KRaft installation of Strimzi is not production ready. See the Strimzi documentation and related GitHub issue for further details.
Create a Kafka cluster in the seldon-mesh namespace.
Note that a specific Strimzi operator version is associated with a subset of supported Kafka versions.
We provide automation around the installation of a Kafka cluster for Seldon Core 2 to help with development and testing use cases. You can follow the steps defined here to install Kafka via ansible.
You can use our Ansible playbooks to install only the Strimzi operator and a Kafka cluster by setting extra Ansible vars:
You can check kafka-examples for more details.
As we are using KRaft, use Kafka version 3.4 or above.
For security settings check here.
Metrics are exposed for scraping by Prometheus.
We recommend installing kube-prometheus, which provides an all-in-one package with the Prometheus operator.
You will need to modify the default RBAC installed by kube-prometheus as described here.
From the prometheus folder in the project run:
We use a PodMonitor for scraping agent metrics. The envoy and server monitors are there for completeness but are not presently needed.
Includes:
Agent pod monitor. Monitors the metrics port of server inference pods.
Server pod monitor. Monitors the server-metrics port of inference server pods.
Envoy service monitor. Monitors the Envoy gateway proxies.
Pipeline gateway pod monitor. Monitors the metrics port of pipeline gateway pods.
Pod monitors were chosen because the metrics ports are not exposed at the service level: there is no top-level service for server replicas, only one headless service per replica. Future discussions could reference this.
Check metrics for more information.
Note: The default installation will provide two initial servers: one MLServer and one Triton. You only need to define additional servers for advanced use cases.
A Server defines an inference server onto which models will be placed for inference. By default, two server StatefulSets are deployed on installation: one MLServer and one Triton. An example Server definition is shown below:
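A minimal sketch of a Server resource:

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver
spec:
  serverConfig: mlserver
  replicas: 1
```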
The main requirement is a reference to a ServerConfig resource, in this case mlserver.
One can easily utilize a custom image with the existing ServerConfigs. For example, the following defines an MLServer server with a custom image:
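A sketch of such a definition, assuming the Server CR accepts a podSpec override for the mlserver container (the image name is illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-custom
spec:
  serverConfig: mlserver
  podSpec:
    containers:
    - name: mlserver
      image: seldonio/custom-mlserver:latest
```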
This server can then be targeted by a particular model by specifying this server name when creating the model, for example:
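A sketch of such a Model (artifact details are illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://seldon-models/mlserver/iris"
  requirements:
  - sklearn
  server: mlserver-custom
```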
One can also create a Server definition to add a persistent volume to your server. This can be used to allow models to be loaded directly from the persistent volume.
The server can be targeted by a model whose artifact is on the persistent volume as shown below.
A fully worked example for this can be found here.
An alternative would be to create your own ServerConfig for more complex use cases, or if you want to standardise the Server definition in one place.
The Seldon models and pipelines are exposed via a single service endpoint in the install namespace called seldon-mesh. All models, pipelines and experiments can be reached via this single Service endpoint by setting appropriate headers on the inference REST/gRPC request. By this means Seldon is agnostic to any service mesh you may wish to use in your organisation. We provide example integrations for several service meshes below (alphabetical order):
We welcome help to extend these to other service meshes.
The SeldonRuntime resource is used to create an instance of Seldon installed in a particular namespace.
For the definition of SeldonConfiguration above see the SeldonConfig resource.
The specification above contains overrides for the chosen SeldonConfig. To override the PodSpec for a given component, the overrides field needs to specify the component name and the PodSpec needs to specify the container name, along with the fields to override.
For instance, the following overrides the resource limits for cpu and memory in the hodometer component in the seldon-mesh namespace, while using values specified in the seldonConfig elsewhere (e.g. default).
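A sketch of such an override (the resource values are illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: SeldonRuntime
metadata:
  name: seldon
  namespace: seldon-mesh
spec:
  seldonConfig: default
  overrides:
  - name: hodometer
    podSpec:
      containers:
      - name: hodometer
        resources:
          limits:
            cpu: 200m
            memory: 500Mi
```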
As a minimal use you should just define the SeldonConfig to use as a base for this install, for example to install in the seldon-mesh namespace with the SeldonConfig named default:
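A minimal sketch:

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: SeldonRuntime
metadata:
  name: seldon
  namespace: seldon-mesh
spec:
  seldonConfig: default
```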
The Helm chart seldon-core-v2-runtime allows easy creation of this resource and associated default Servers for an installation of Seldon in a particular namespace.
When a SeldonConfig resource changes, any SeldonRuntime resources that reference the changed SeldonConfig will also be updated immediately. If this behaviour is not desired you can set spec.disableAutoUpdate in the SeldonRuntime resource so that it is not updated immediately but only when it changes or any owned resource changes.
Autoscaling in Seldon applies to various concerns:
Inference servers autoscaling
Model autoscaling
Model memory overcommit
Autoscaling of servers can be done via HorizontalPodAutoscaler (HPA).
HPA can be applied to any deployed Server resource. In this case HPA will manage the number of server replicas in the corresponding statefulset according to utilisation metrics (e.g. CPU or memory).
For example, assuming that a triton server is deployed, the user can attach an HPA based on CPU utilisation as follows:
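A sketch of such an HPA manifest (the replica bounds and target utilisation are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: triton
  namespace: seldon-mesh
spec:
  scaleTargetRef:
    apiVersion: mlops.seldon.io/v1alpha1
    kind: Server
    name: triton
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```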
In this case, according to load, the system will add or remove server replicas to or from the triton statefulset.
It is worth considering the following points:
If HPA adds a new server replica, this new replica will be included in any future scheduling decisions. In other words, when deploying a new model or rescheduling failed models this new replica will be considered.
If HPA deletes an existing server replica, the scheduler will first attempt to drain any loaded model on this server replica before the server replica gets actually deleted. This is achieved by leveraging a PreStop hook on the server replica pod that triggers a process before receiving the termination signal. This draining process is capped by terminationGracePeriodSeconds, which the user can set (default is 2 minutes).
Therefore there should generally be minimal disruption to the inference workload during scaling.
For more details on HPA check this Kubernetes walk-through.
Note: Autoscaling of inference servers via seldon-scheduler is under consideration for the roadmap. This would allow for more fine-grained interaction with model autoscaling.
Autoscaling both Models and Servers using HPA and custom metrics is possible for the special case of single model serving (i.e. single model per server). Check the detailed documentation here. For multi-model serving (MMS), a different solution is needed as discussed below.
As each model server can serve multiple models, models can scale across the available replicas of the server according to load.
Autoscaling of models is enabled if at least one of MinReplicas or MaxReplicas is set in the model custom resource. Then, according to load, the system will scale the number of Replicas within this range.
For example the following model will be deployed at first with 1 replica and it can scale up according to load.
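A sketch of such a model (the artifact path is illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://seldon-models/mlserver/iris"
  requirements:
  - sklearn
  replicas: 1
  minReplicas: 1
  maxReplicas: 3
```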
Note that model autoscaling will not attempt to add extra servers if the desired number of replicas cannot be currently fulfilled by the current provisioned number of servers. This is a process left to be done by server autoscaling.
Additionally, when the system autoscales, the initial model spec is not changed (e.g. the number of Replicas), and therefore the user cannot reset the number of replicas back to the initial specified value without an explicit change.
If only Replicas is specified by the user, autoscaling of models is disabled and the system will have exactly the number of replicas of this model deployed regardless of inference load.
The model autoscaling architecture is designed such that each agent decides which models to scale up or down according to some defined internal metrics, and then sends a triggering message to the scheduler. The current metrics are collected from the data plane (inference path) and represent a proxy for how loaded a given model is with fulfilling inference requests.
The main idea is that we keep the "lag" for each model. We define the "lag" as the difference between incoming and outgoing requests in a given time period. If the lag crosses a threshold, then we trigger a model scale-up event. This threshold can be defined via the SELDON_MODEL_INFERENCE_LAG_THRESHOLD inference server environment variable.
For now we keep things simple and trigger model scale-down events if a model has not been used for a number of seconds. This is defined in the SELDON_MODEL_INACTIVE_SECONDS_THRESHOLD inference server environment variable.
Each agent checks the above stats periodically and if any model hits the corresponding threshold, then the agent sends an event to the scheduler to request model scaling.
How often this process executes can be defined via the SELDON_SCALING_STATS_PERIOD_SECONDS inference server environment variable.
The scheduler will perform model autoscale if:
The model is stable (no state change in the last 5 minutes) and available.
The desired number of replicas is within range. Note we always have at least 1 replica of any deployed model and we rely on overcommit to further reduce the resources used.
For scaling up, there is enough capacity for the new model replica.
Servers can hold more models than available memory if overcommit is switched on (default yes). This allows under-utilised models to be evicted from inference server memory so that other models can take their place. Note that these evicted models are still registered, and if future inference requests arrive the system will reload the models back into memory before serving the requests. If traffic patterns for inference of models vary, then this can allow more models than available server memory to be run on the system.
Note: This section is for advanced usage where you want to define how Seldon is installed in each namespace.
The SeldonConfig resource defines the core installation components installed by Seldon. If you wish to install Seldon, you can use the SeldonRuntime resource which allows easy overriding of some parts defined in this specification. In general, we advise core DevOps to use the default SeldonConfig or customize it for their usage. Individual installations of Seldon can then use the SeldonRuntime with a few overrides for special customisation needed in that namespace.
The specification contains core PodSpecs for each core component and a section for general configuration including the ConfigMaps that are created for the Agent (rclone defaults), Kafka and Tracing (open telemetry).
Some of these values can be overridden on a per namespace basis via the SeldonRuntime resource. Labels and annotations can also be set at the component level - these will be merged with the labels and annotations from the SeldonConfig resource in which they are defined and added to the component's corresponding Deployment, or StatefulSet.
The default configuration is shown below.
We support Open Telemetry tracing. By default all components will attempt to send OTLP events to seldon-collector.seldon-mesh:4317, which will export to Jaeger at simplest-collector.seldon-mesh:4317.
The components can be installed from the tracing/k8s folder. In future an Ansible playbook will be created. This installs an OpenTelemetry collector and a simple Jaeger install with a service that can be port-forwarded to at simplest.seldon-mesh:16686.
An example Jaeger trace is shown below:
A Model is the core atomic building block. It specifies a machine learning artifact that will be loaded onto one of the running Servers. A model could be:
a standard machine learning inference component such as a TensorFlow, PyTorch or SKLearn model.
an inference transformation component such as an SKLearn pipeline or a piece of custom Python logic.
a monitoring component such as an outlier detector or drift detector.
an Alibi-Explain model explainer.
An example is shown below for a SKLearn model for iris classification:
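A sketch of such a Model (the artifact path is illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://seldon-models/mlserver/iris"
  requirements:
  - sklearn
```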
Its Kubernetes spec has two core requirements:
A storageUri specifying the location of the artifact. This can be any Rclone URI specification.
A requirements list which provides tags that need to be matched by the Server that can run this artifact type. By default when you install Seldon we provide a set of Servers that cover a range of artifact types.
An Experiment defines a traffic split between Models or Pipelines. This allows new versions of models and pipelines to be tested.
An experiment spec has three sections:
candidates (required): a set of candidate models to split traffic.
default (optional): an existing candidate whose endpoint should be modified to split traffic as defined by the candidates.
Each candidate has a traffic weight. The percentage of traffic will be this weight divided by the sum of traffic weights.
mirror (optional): a single model to which traffic to the candidates is mirrored. Responses from this model will not be returned to the caller.
An example experiment with a default model is shown below:
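A sketch of such an experiment (the names follow the description below):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Experiment
metadata:
  name: experiment-sample
spec:
  default: iris
  candidates:
  - name: iris
    weight: 50
  - name: iris2
    weight: 50
```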
This defines a split of 50% traffic between two models iris and iris2. In this case we want to expose this traffic split on the existing endpoint created for the iris model. This allows us to test new versions of models (in this case iris2) on an existing endpoint (in this case iris). The default key defines the model whose endpoint we want to change. The experiment will become active when both underlying models are in Ready status.
An experiment over two separate models which exposes a new API endpoint is shown below:
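A sketch, with the experiment name matching the header example that follows:

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Experiment
metadata:
  name: experiment-iris
spec:
  candidates:
  - name: iris
    weight: 50
  - name: iris2
    weight: 50
```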
To call the endpoint add the header seldon-model: <experiment-name>.experiment, in this case: seldon-model: experiment-iris.experiment. For example with curl:
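A sketch of the request, assuming the mesh endpoint is reachable at $MESH_IP and that the candidate models accept the V2 payload shown (the payload is illustrative):

```bash
curl http://$MESH_IP/v2/models/iris/infer \
  -H "Content-Type: application/json" \
  -H "seldon-model: experiment-iris.experiment" \
  -d '{"inputs": [{"name": "predict", "shape": [1, 4], "datatype": "FP32", "data": [[1, 2, 3, 4]]}]}'
```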
For examples see the local experiments notebook.
Running an experiment between some pipelines is very similar. The difference is that resourceType: pipeline needs to be defined, and in this case the candidates or mirrors will refer to pipelines. An example is shown below:
For an example see the local experiments notebook.
A mirror can be added easily for model or pipeline experiments. An example model mirror experiment is shown below:
For an example see the local experiments notebook.
An example pipeline mirror experiment is shown below:
For an example see the local experiments notebook.
To allow cohorts to get consistent views in an experiment, each inference request passes back a response header x-seldon-route which can be passed in future requests to an experiment to bypass the random traffic splits and get a prediction from the sequence of models and pipelines used in the initial request.
Note: you must pass the normal seldon-model header along with the x-seldon-route header.
This is illustrated in the local experiments notebook.
Caveats: the models used will be the same but not necessarily the same replica instances. This means at present this will not work for stateful models that need to go to the same model replica instance.
As an alternative you can choose to run experiments at the service mesh level if you use one of the popular service meshes that allow header based routing in traffic splits. For further discussion see here.
Pipelines allow one to connect flows of inference data transformed by Model components. A directed acyclic graph (DAG) of steps can be defined to join Models together. Each Model will need to be capable of receiving a V2 inference request and responding with a V2 inference response. An example Pipeline is shown below:
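A sketch of the pipeline described below (a join of three models, with tensor renaming on the join):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: tfsimples
spec:
  steps:
    - name: tfsimple1
    - name: tfsimple2
    - name: tfsimple3
      inputs:
      - tfsimple1.outputs.OUTPUT0
      - tfsimple2.outputs.OUTPUT1
      tensorMap:
        tfsimple1.outputs.OUTPUT0: INPUT0
        tfsimple2.outputs.OUTPUT1: INPUT1
  output:
    steps:
    - tfsimple3
```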
The steps list shows three models: tfsimple1, tfsimple2 and tfsimple3. These three models each take two tensors called INPUT0 and INPUT1 of integers. The models produce two outputs OUTPUT0 (the sum of the inputs) and OUTPUT1 (subtraction of the second input from the first).
tfsimple1 and tfsimple2 take as inputs the input to the Pipeline: the default assumption when no explicit inputs are defined. tfsimple3 takes one V2 tensor input from each of the outputs of tfsimple1 and tfsimple2. As the outputs of tfsimple1 and tfsimple2 have tensors named OUTPUT0 and OUTPUT1, their names need to be changed to respect the expected input tensors, and this is done with a tensorMap component providing this tensor renaming. This is only required if your models cannot be directly chained together.
The output of the Pipeline is the output from the tfsimple3 model.
The full GoLang specification for a Pipeline is shown below:
Note: This section is for advanced usage where you want to define new types of inference servers.
Server configurations define how to create an inference server. By default one is provided for Seldon MLServer and one for NVIDIA Triton Inference Server. Both these servers support the V2 inference protocol, which is a requirement for all inference servers. They define how the Kubernetes ReplicaSet is defined, which includes the Seldon Agent reverse proxy as well as an Rclone server for downloading artifacts for the server. The Kustomize ServerConfig for MLServer is shown below:
Learn how to jointly autoscale model and server replicas based on a metric of inference requests per second (RPS) using HPA, when there is a one-to-one correspondence between models and servers (single-model serving). This will require:
Having a Seldon Core 2 install that publishes metrics to Prometheus (default). In the following, we will assume that Prometheus is already installed and configured in the seldon-monitoring namespace.
Installing and configuring prometheus-adapter, which allows Prometheus queries on relevant metrics to be published as k8s custom metrics.
Configuring HPA manifests to scale Models and the corresponding Server replicas based on the custom metrics
The Core 2 HPA-based autoscaling has the following constraints/limitations:
HPA scaling only targets single-model serving, where there is a 1:1 correspondence between models and servers. Autoscaling for multi-model serving (MMS) is supported for specific models and workloads via the Core 2 native features described earlier.
Significant improvements to MMS autoscaling are planned for future releases.
Only custom metrics from Prometheus are supported. Native Kubernetes resource metrics such as CPU or memory are not. This limitation exists because of HPA's design: In order to prevent multiple HPA CRs from issuing conflicting scaling instructions, each HPA CR must exclusively control a set of pods which is disjoint from the pods controlled by other HPA CRs. In Seldon Core 2, CPU/memory metrics can be used to scale the number of Server replicas via HPA. However, this also means that the CPU/memory metrics from the same set of pods can no longer be used to scale the number of model replicas.
We are working on improvements in Core 2 to allow both servers and models to be scaled based on a single HPA manifest, targeting the Model CR.
Each Kubernetes cluster supports only one active custom metrics provider. If your cluster already uses a custom metrics provider different from prometheus-adapter, it will need to be removed before being able to scale Core 2 models and servers via HPA.
The Kubernetes community is actively exploring solutions for allowing multiple custom metrics providers to coexist.
The role of the Prometheus Adapter is to expose queries on metrics in Prometheus as k8s custom or external metrics. Those can then be accessed by HPA in order to take scaling decisions.
To install through helm:
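A sketch of the install, assuming Prometheus is exposed by a service named seldon-monitoring-prometheus in the seldon-monitoring namespace (adjust the URL to your setup):

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install hpa-metrics prometheus-community/prometheus-adapter \
  --namespace seldon-monitoring \
  --set prometheus.url='http://seldon-monitoring-prometheus'
```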
These commands install prometheus-adapter as a helm release named hpa-metrics in the same namespace where Prometheus is installed, and point to its service URL (without the port).
The URL is not fully qualified as it references a Prometheus instance running in the same namespace. If you are using a separately-managed Prometheus instance, please update the URL accordingly.
If you are running Prometheus on a different port than the default 9090, you can also pass --set prometheus.port=[custom_port]. You may inspect all the options available as helm values by running helm show values prometheus-community/prometheus-adapter.
Please check that the metricsRelistInterval helm value (defaults to 1m) works well in your setup, and update it otherwise. This value needs to be larger than or equal to your Prometheus scrape interval. The corresponding prometheus-adapter command-line argument is --metrics-relist-interval. If the relist interval is set incorrectly, it will lead to some of the custom metrics being intermittently reported as missing.
We now need to configure the adapter to look for the correct Prometheus metrics and compute per-model RPS values. On install, the adapter has created a ConfigMap in the same namespace as itself, named [helm_release_name]-prometheus-adapter. In our case, it will be hpa-metrics-prometheus-adapter.
Overwrite the ConfigMap as shown in the following manifest, after applying any required customizations. Change the name if you've chosen a different value for the prometheus-adapter helm release name. Change the namespace to match the namespace where prometheus-adapter is installed.
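A sketch of the ConfigMap, matching the rule described below (the release name and namespace follow the assumptions above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hpa-metrics-prometheus-adapter
  namespace: seldon-monitoring
data:
  config.yaml: |-
    "rules":
    - "seriesQuery": 'seldon_model_infer_total{namespace!="", pod!=""}'
      "resources":
        "overrides":
          "model": {group: "mlops.seldon.io", resource: "model"}
          "server": {group: "mlops.seldon.io", resource: "server"}
          "pod": {resource: "pod"}
          "namespace": {resource: "namespace"}
      "name":
        "matches": "seldon_model_infer_total"
        "as": "infer_rps"
      "metricsQuery": 'sum by (<<.GroupBy>>) (rate(<<.Series>>{<<.LabelMatchers>>}[2m]))'
```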
In this example, a single rule is defined to fetch the seldon_model_infer_total metric from Prometheus, compute its per-second change rate based on data within a 2 minute sliding window, and expose this to Kubernetes as the infer_rps metric, with aggregations available at model, server, inference server pod and namespace level.
When HPA requests the infer_rps metric via the custom metrics API for a specific model, prometheus-adapter issues a Prometheus query in line with what is defined in its config.
For the configuration in our example, the query for a model named irisa0 in namespace seldon-mesh would be:
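Expanding the example metricsQuery template gives approximately:

```promql
sum by (model) (
  rate(seldon_model_infer_total{model=~"irisa0", namespace="seldon-mesh"}[2m])
)
```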
It is important to sanity-check the query by executing it against your Prometheus instance. To do so, pick an existing model CR in your Seldon Core 2 install, and send some inference requests towards it. Then, wait for a period equal to at least twice the Prometheus scrape interval (Prometheus default 1 minute), so that two values from the series are captured and a rate can be computed. Finally, you can modify the model name and namespace in the query above to match the model you've picked and execute the query.
If the query result is empty, please adjust it until it consistently returns the expected metric values. Pay special attention to the window size (2 minutes in the example): if it is smaller than twice the Prometheus scrape interval, the query may return no results. A compromise needs to be reached to set the window size large enough to reject noise but also small enough to make the result responsive to quick changes in load.
Update the metricsQuery in the prometheus-adapter ConfigMap to match any query changes you have made during tests.
The rule definition can be broken down into four parts:
Discovery (the seriesQuery and seriesFilters keys) controls what Prometheus metrics are considered for exposure via the k8s custom metrics API.
As an alternative to the example above, all the Seldon Prometheus metrics of the form seldon_model.*_total could be considered, followed by excluding metrics pre-aggregated across all models (.*_aggregate_.*) as well as the cumulative infer time per model (.*_seconds_total):
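A sketch of that alternative discovery section:

```yaml
"seriesQuery": '{__name__=~"^seldon_model.*_total", namespace!="", pod!=""}'
"seriesFilters":
  - "isNot": "^seldon_model.*_aggregate_.*"
  - "isNot": "^seldon_model.*_seconds_total"
```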
For RPS, we are only interested in the model inference count (seldon_model_infer_total).
Association (the resources key) controls the Kubernetes resources that a particular metric can be attached to or aggregated over.
The resources key defines an association between certain labels from the Prometheus metric and k8s resources. For example, in the resources overrides, "model": {group: "mlops.seldon.io", resource: "model"} lets prometheus-adapter know that, for the selected Prometheus metrics, the value of the "model" label represents the name of a k8s model.mlops.seldon.io CR.
One k8s custom metric is generated for each k8s resource associated with a Prometheus metric. In this way, it becomes possible to request the k8s custom metric values for models.mlops.seldon.io/iris or for servers.mlops.seldon.io/mlserver.
The labels that do not refer to a namespace resource generate "namespaced" custom metrics (the label values refer to resources which are part of a namespace); this distinction becomes important when needing to fetch the metrics via kubectl, and in understanding how certain Prometheus query template placeholders are replaced.
Naming (the name key) configures the naming of the k8s custom metric.
In the example ConfigMap, this is configured to take the Prometheus metric named seldon_model_infer_total and expose custom metric endpoints named infer_rps, which when called return the result of a query over the Prometheus metric.
Instead of a literal match, one could also use regex group capture expressions, which can then be referenced in the custom metric name:
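For example, a sketch of a naming section using a regex capture group:

```yaml
"name":
  "matches": "^seldon_model_(.*)_total"
  "as": "${1}_rps"
```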
Querying (the metricsQuery key) defines how a request for a specific k8s custom metric gets converted into a Prometheus query.
The query can make use of the following placeholders:
.Series is replaced by the discovered Prometheus metric name (e.g. seldon_model_infer_total).
.LabelMatchers, when requesting a namespaced metric for resource X with name x in namespace n, is replaced by X=~"x",namespace="n". For example, model=~"iris0", namespace="seldon-mesh". When requesting the namespace resource itself, only the namespace="n" part is kept.
.GroupBy is replaced by the resource type of the requested metric (e.g. model, server, pod or namespace).
Once you have applied any necessary customizations, replace the default prometheus-adapter config with the new one, and restart the deployment (this restart is required so that prometheus-adapter picks up the new config):
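For example, assuming the manifest above is saved as prometheus-adapter-config.yaml and the release name from earlier:

```bash
kubectl apply -f prometheus-adapter-config.yaml
kubectl rollout restart deployment hpa-metrics-prometheus-adapter -n seldon-monitoring
```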
In order to test that the prometheus adapter config works and everything is set up correctly, you can issue raw kubectl requests against the custom metrics API
Note: If no inference requests were issued towards any model in the Seldon install, the metrics configured above will not be available in prometheus, and thus will also not appear when checking via the commands below. Therefore, please first run some inference requests towards a sample model to ensure that the metrics are available — this is only required for the testing of the install.
Listing the available metrics:
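For example:

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
```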
For namespaced metrics, the general template for fetching is:
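A sketch of the template (placeholders in brackets):

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/[namespace]/[api-resource-name]/[resource-name]/[metric-name]"
```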
For example:
Fetching the model RPS metric for a specific (namespace, model) pair (seldon-mesh, irisa0):
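For instance, matching the resources configured in the adapter sketch above:

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/seldon-mesh/models.mlops.seldon.io/irisa0/infer_rps"
```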
Fetching the model RPS metric aggregated at the (namespace, server) level (seldon-mesh, mlserver):
Fetching the model RPS metric aggregated at the (namespace, pod) level (seldon-mesh, mlserver-0):
Fetching the same metric aggregated at namespace level (seldon-mesh):
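For instance (note the different path form for the namespace resource):

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/seldon-mesh/metrics/infer_rps"
```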
For every (Model, Server) pair you want to autoscale, you need to apply 2 HPA manifests based on the same metric: one scaling the Model, the other the Server. The example below only works if the mapping between Models and Servers is 1-to-1 (i.e no multi-model serving).
Consider a model named irisa0 with the following manifest. Please note we don't set minReplicas/maxReplicas. This disables the seldon lag-based autoscaling so that it doesn't interact with HPA (separate minReplicas/maxReplicas configs will be set on the HPA side).
You must also explicitly define a value for spec.replicas. This is the key modified by HPA to increase the number of replicas, and if not present in the manifest it will result in HPA not working until the Model CR is modified to have spec.replicas defined.
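A sketch of the Model manifest (the artifact path and memory request are illustrative):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: irisa0
  namespace: seldon-mesh
spec:
  storageUri: "gs://seldon-models/testing/iris1"
  requirements:
  - sklearn
  memory: 3M
  replicas: 1
```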
Let's scale this model when it is deployed on a server named mlserver, with a target RPS per replica of 3 RPS (higher RPS would trigger scale-up, lower would trigger scale-down):
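A sketch of the two HPA manifests, one targeting the Model and one the Server, both using the same metric and target:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: irisa0-model-hpa
  namespace: seldon-mesh
spec:
  scaleTargetRef:
    apiVersion: mlops.seldon.io/v1alpha1
    kind: Model
    name: irisa0
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Object
    object:
      metric:
        name: infer_rps
      describedObject:
        apiVersion: mlops.seldon.io/v1alpha1
        kind: Model
        name: irisa0
      target:
        type: AverageValue
        averageValue: 3
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mlserver-server-hpa
  namespace: seldon-mesh
spec:
  scaleTargetRef:
    apiVersion: mlops.seldon.io/v1alpha1
    kind: Server
    name: mlserver
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Object
    object:
      metric:
        name: infer_rps
      describedObject:
        apiVersion: mlops.seldon.io/v1alpha1
        kind: Model
        name: irisa0
      target:
        type: AverageValue
        averageValue: 3
```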
It is important to keep both the scaling metric and any scaling policies the same across the two HPA manifests. This is to ensure that both the Models and the Servers are scaled up/down at approximately the same time. Small variations in the scale-up time are expected because each HPA samples the metrics independently, at regular intervals.
Note: If a Model gets scaled up slightly before its corresponding Server, the model is currently marked with the condition ModelReady "Status: False" with a "ScheduleFailed" message until new Server replicas become available. However, the existing replicas of that model remain available and will continue to serve inference load.
In order to ensure similar scaling behaviour between Models and Servers, the number of minReplicas and maxReplicas, as well as any other configured scaling policies, should be kept in sync across the HPA for the model and the server.
The Object metric allows for two target value types: AverageValue and Value. Of the two, only AverageValue is supported for the current Seldon Core 2 setup. The Value target type is typically used for metrics describing the utilization of a resource and would not be suitable for RPS-based scaling.
The example HPA manifests use metrics of type "Object" that fetch the data used in scaling decisions by querying k8s metrics associated with a particular k8s object. The endpoints that HPA uses for fetching those metrics are the same ones that were tested in the previous section using kubectl get --raw. Because you have configured the Prometheus Adapter to expose those k8s metrics based on queries to Prometheus, a mapping exists between the information contained in the HPA Object metric definition and the actual query that is executed against Prometheus. This section aims to give more details on how this mapping works.
In our example, the metric.name: infer_rps gets mapped to the seldon_model_infer_total metric on the Prometheus side, based on the configuration in the name section of the Prometheus Adapter ConfigMap. The Prometheus metric name is then used to fill in the <<.Series>> template in the query (metricsQuery in the same ConfigMap).
Then, the information provided in the describedObject is used within the Prometheus query to select the right aggregations of the metric. For the RPS metric used to scale the Model (and the Server because of the 1-1 mapping), it makes sense to compute the aggregate RPS across all the replicas of a given model, so the describedObject references a specific Model CR.
However, in the general case, the describedObject does not need to be a Model. Any k8s object listed in the resources section of the Prometheus Adapter ConfigMap may be used. The Prometheus label associated with the object kind fills in the <<.GroupBy>> template, while the name gets used as part of the <<.LabelMatchers>>. For example:
If the described object is { kind: Namespace, name: seldon-mesh }, then the Prometheus query template configured in our example would be transformed into:
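Based on the metricsQuery template above, that is approximately:

```promql
sum by (namespace) (
  rate(seldon_model_infer_total{namespace="seldon-mesh"}[2m])
)
```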
If the described object is not a namespace (for example, { kind: Pod, name: mlserver-0 }), then the query will be passed the label describing the object, alongside an additional label identifying the namespace where the HPA manifest resides in:
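Approximately:

```promql
sum by (pod) (
  rate(seldon_model_infer_total{pod=~"mlserver-0", namespace="seldon-mesh"}[2m])
)
```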
The target section establishes the thresholds used in scaling decisions. For RPS, the AverageValue target type refers to the threshold per-replica RPS above which the number of the scaleTargetRef (Model or Server) replicas should be increased. The target number of replicas is computed by HPA according to the following formula:
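A sketch of the formula, matching the numbers in the example that follows:

```
targetReplicas = ceil( sum(custom_metric_value) / target.averageValue )
```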
As an example, if averageValue=50 and infer_rps=150, the targetReplicas would be 3.
Importantly, computing the target number of replicas does not require knowing the number of active pods currently associated with the Server or Model. This is what allows both the Model and the Server to be targeted by two separate HPA manifests. Otherwise, both HPA CRs would attempt to take ownership of the same set of pods, and transition into a failure state.
This is also why the Value target type is not currently supported. In this case, HPA first computes a utilizationRatio:
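Approximately:

```
utilizationRatio = custom_metric_value / threshold_value
```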
As an example, if threshold_value=100 and custom_metric_value=200, the utilizationRatio would be 2. HPA deduces from this that the number of active pods associated with the scaleTargetRef object should be doubled, and expects that once that target is achieved, the custom_metric_value will become equal to the threshold_value (utilizationRatio=1). However, by using the number of active pods, the HPA CRs for both the Model and the Server also try to take exclusive ownership of the same set of pods, and fail.
Each HPA CR has its own timer on which it samples the specified custom metrics. This timer starts when the CR is created, with sampling of the metric being done at regular intervals (by default, 15 seconds).
As a side effect of this, creating the Model HPA and the Server HPA (for a given model) at different times will mean that the scaling decisions on the two are taken at different times. Even when creating the two CRs together as part of the same manifest, there will usually be a small delay between the point where the Model and Server spec.replicas values are changed.
Despite this delay, the two will converge to the same number when the decisions are taken based on the same metric (as in the previous examples).
When showing the HPA CR information via kubectl get, a column of the output will display the current metric value per replica and the target average value in the format [per replica metric value]/[target]. This information is updated in accordance with the sampling rate of each HPA resource. It is therefore expected to sometimes see different metric values for the Model and its corresponding Server.
Some versions of k8s will display [per pod metric value] instead of [per replica metric value], with the number of pods being computed based on a label selector present in the target resource CR (the status.selector value for the Model or Server in the Core 2 case).
HPA is designed so that multiple HPA CRs cannot target the same underlying pod with this selector (with HPA stopping when such a condition is detected). This means that in Core 2, the Model and Server selector cannot be the same. A design choice was made to assign the Model a unique selector that does not match any pods.
As a result, for the k8s versions displaying [per pod metric value], the information shown for the Model HPA CR will be an overflow caused by division by zero. This is only a display artefact, with the Model HPA continuing to work normally. The actual value of the metric can be seen by inspecting the corresponding Server HPA CR, or by fetching the metric directly via kubectl get --raw.
Filtering metrics by additional labels on the prometheus metric:
The prometheus metric from which the model RPS is computed has the following labels managed by Seldon Core 2:
If you want the scaling metric to be computed based on a subset of the Prometheus time series with particular label values (labels either managed by Seldon Core 2 or added automatically within your infrastructure), you can add this as a selector in the HPA metric config. This is shown in the following example, which scales only based on the RPS of REST requests as opposed to REST + gRPC:
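A sketch of the metric definition with a selector; the label name used here (method_type) is an assumption and should be replaced with the actual label exposed on your Prometheus metric:

```yaml
metrics:
- type: Object
  object:
    metric:
      name: infer_rps
      selector:
        matchLabels:
          method_type: rest
    describedObject:
      apiVersion: mlops.seldon.io/v1alpha1
      kind: Model
      name: irisa0
    target:
      type: AverageValue
      averageValue: 3
```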
When deploying HPA-based scaling for Seldon Core 2 models and servers as part of a production deployment, it is important to understand the exact interactions between HPA-triggered actions and Seldon Core 2 scheduling, as well as potential pitfalls in choosing particular HPA configurations.
When using custom metrics such as RPS, the actual number of replicas added during scale-up or reduced during scale-down will entirely depend, alongside the maximums imposed by the policy, on the configured target (averageValue RPS per replica) and on how quickly the inferencing load varies in your cluster. All three need to be considered jointly in order to deliver both an efficient use of resources and meeting SLAs.
Naturally, the first thing to consider is an estimated peak inference load (including some margins) for each of the models in the cluster. If the minimum number of model replicas needed to serve that load without breaching latency SLAs is known, it should be set as spec.maxReplicas, with the HPA target.averageValue set to peak_infer_RPS/maxReplicas.
If maxReplicas is not already known, an open-loop load test with a slowly ramping up request rate should be done on the target model (one replica, no scaling). This would allow you to determine the RPS (inference request throughput) when latency SLAs are breached or (depending on the desired operation point) when latency starts increasing. You would then set the HPA target.averageValue taking some margin below this saturation RPS, and compute spec.maxReplicas as peak_infer_RPS/target.averageValue. The margin taken below the saturation point is very important, because scaling-up cannot be instant (it requires spinning up new pods, downloading model artifacts, etc.). In the period until the new replicas become available, any load increases will still need to be absorbed by the existing replicas.
It is important for the cluster to have sufficient resources for creating the total number of desired server replicas set by the HPA CRs across all the models at a given time.
Not having sufficient cluster resources to serve the number of replicas configured by HPA at a given moment, in particular under aggressive scale-up HPA policies, may result in breaches of SLAs. This is discussed in more detail in the following section.
A similar approach should be taken for setting minReplicas, in relation to the estimated RPS in the low-load regime. However, it's useful to balance lower resource usage against the immediate availability of replicas for inference rate increases from that lowest load point. If low-load regimes only occur for small periods of time, and especially when combined with a high rate of increase in RPS when moving out of the low-load regime, it might be worth setting the minReplicas floor higher in order to ensure SLAs are met at all times.
Each spec.replicas value change for a Model or Server triggers a rescheduling event for the Seldon Core 2 scheduler, which considers any updates that are needed in mapping Model replicas to Server replicas such as rescheduling failed Model replicas, loading new ones, unloading in the case of the number of replicas going down, etc.
Two characteristics in the current implementation are important in terms of autoscaling and configuring the HPA scale-up policy:
The scheduler does not create new Server replicas when the existing replicas are not sufficient for loading a Model's replicas (one Model replica per Server replica). Whenever a Model requests more replicas than are available on any of the available Servers, its ModelReady condition transitions to Status: False with a ScheduleFailed message. However, any replicas of that Model that are already loaded at that point remain available for servicing inference load.
There is no partial scheduling of replicas. For example, consider a model with 2 replicas, currently loaded on a server with 3 replicas (two of those server replicas will have the model loaded). If you update the model replicas to 4, the scheduler will transition the model to ScheduleFailed, seeing that it cannot satisfy the requested number of replicas. The existing 2 model replicas will continue to serve traffic, but a third replica will not be loaded onto the remaining server replica.
In other words, the scheduler either schedules all the requested replicas, or, if unable to do so, leaves the state of the cluster unchanged.
Introducing partial scheduling would make the overall results of assigning models to servers significantly less predictable and ephemeral. This is because models may end up moved back and forth between servers depending on the speed with which various server replicas become available. Network partitions or other transient errors may also trigger large changes to the model-to-server assignments, making it challenging to sustain consistent data plane load during those periods.
Taken together, the two Core 2 scheduling characteristics, combined with a very aggressive HPA scale-up policy and a continuously increasing RPS may lead to the following pathological case:
Based on RPS, HPA decides to increase both the Model and Server replicas from 2 (an example start stable state) to 8. While the 6 new Server pods get scheduled and get the Model loaded onto them, the scheduler will transition the Model into the ScheduleFailed state, because it cannot fulfill the requested replicas requirement. During this period, the initial 2 Model replicas continue to serve load, but are using their RPS margins and getting closer to the saturation point.
At the same time, load continues to increase, so HPA further increases the number of required Model and Server replicas from 8 to 12, before all of the 6 new Server pods had a chance to become available. The new replica target for the scheduler also becomes 12, and this would not be satisfied until all the 12 Server replicas are available. The 2 Model replicas that are available may by now be saturated and the infer latency spikes up, breaching set SLAs.
The process may continue until load stabilizes.
If at any point the number of requested replicas (<= maxReplicas) exceeds the resource capacity of the cluster, the requested server replica count will never be reached and thus the Model will remain permanently in the ScheduleFailed state.
While most likely encountered during continuous ramp-up RPS load tests with autoscaling enabled, the pathological case example is a good showcase for the elements that need to be taken into account when setting the HPA policies.
The speed with which new Server replicas can become available versus how many new replicas HPA may request in a given time:
The HPA scale-up policy should not be configured to request more replicas than can become available in the specified time. The following example reflects a confidence that 5 Server pods will become available within 90 seconds, with some safety margin. The default scale-up config, which also adds a percentage-based policy (double the existing replicas within the set periodSeconds), is not recommended because of this.
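A sketch of the corresponding behavior section for both HPA manifests; the stabilization window and policy values match the points discussed below:

```yaml
behavior:
  scaleUp:
    stabilizationWindowSeconds: 60
    policies:
    - type: Pods
      value: 5
      periodSeconds: 90
```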
Perhaps more importantly, there is no reason to scale faster than the time it takes for replicas to become available - this is the true maximum rate with which scaling up can happen anyway.
The duration of transient load spikes which you might want to absorb within the existing per-replica RPS margins.
The previous example configures a scale-up stabilization window of one minute (stabilizationWindowSeconds: 60). It means that for all of the HPA recommended replicas in the last 60 second window (4 samples of the custom metric considering the default sampling rate), only the smallest will be applied.
Such stabilization windows should be set depending on typical load patterns in your cluster: not being too aggressive in reacting to increased load will allow you to achieve cost savings, but has the disadvantage of a delayed reaction if the load spike turns out to be sustained.
The duration of any typical/expected sustained ramp-up period, and the RPS increase rate during this period.
It is useful to consider whether the replica scale-up rate configured via the policy in the example is able to keep up with this RPS increase rate.
Such a scenario may appear, for example, if you are planning for a smooth traffic ramp-up in a blue-green deployment as you are draining the "blue" deployment and transitioning to the "green" one.
Ambassador provides service mesh and ingress products. Our examples here are based on the Emissary ingress.
We will run through some examples as shown in the notebook service-meshes/ambassador/ambassador.ipynb in our repo.
Seldon Iris classifier model
Default Ambassador Host and Listener
Ambassador Mappings for REST and gRPC endpoints
Seldon provides an Experiment resource for service mesh agnostic traffic splitting, but if you wish to control this via Ambassador an example is shown below to split traffic between two models.
Assumes
You have installed emissary as per their docs
Tested with
Istio provides a service mesh and ingress solution.
We will run through some examples as shown in the notebook service-meshes/istio/istio.ipynb in our repo.
A Seldon Iris Model
An istio Gateway
An istio VirtualService to expose REST and gRPC
Two Iris Models
An istio Gateway
An istio VirtualService with traffic split
Assumes
You have installed istio as per their docs
You have exposed the ingressgateway as an external loadbalancer
Tested with:
Traefik provides a service mesh and ingress solution.
We will run through some examples as shown in the notebook service-meshes/traefik/traefik.ipynb
A Seldon Iris Model
Traefik Service
Traefik IngressRoute
Traefik Middleware for adding a header
Assumes
Tested with traefik-10.19.4
You may want to modify the query in the example to match the one that you typically use in your monitoring setup for RPS metrics. The example query uses a 2 minute sliding window. Values scraped at the beginning and end of the 2 minute window before query time are used to compute the RPS.
A list of all the Prometheus metrics exposed by Seldon Core 2 in relation to Models, Servers and Pipelines is available in the metrics documentation, and those may be used when customizing the configuration.
For a complete reference for how prometheus-adapter can be configured via the ConfigMap, please consult the docs.
Customize the scale-up / scale-down rate & properties by using scaling policies as described in the Kubernetes HPA documentation.
For more resources, please consult the Kubernetes HPA documentation.
Using the default scaling policy, HPA is relatively aggressive on scale-up (responding quickly to increases in load), with a maximum replicas increase of either 4 every 15 seconds or 100% of existing replicas within the same period (whichever is highest). In contrast, scaling-down is more gradual, with HPA only scaling down to the maximum number of recommended replicas in the most recent 5 minute rolling window, in order to avoid flapping. Those parameters can be customized via scaling policies.
If there are multiple models which typically experience peak load in a correlated manner, you need to ensure that sufficient cluster resources are available for k8s to concurrently schedule the maximum number of server pods, with each pod holding one model replica. This can be ensured by using either the Kubernetes cluster autoscaler or, when running workloads in the cloud, any provider-specific cluster autoscaling services.
Note: Traffic splitting does not presently work due to this issue. We recommend you use a Seldon Experiment instead.
emissary-ingress-7.3.2 installed via Helm
Currently not working due to this issue.
Warning: Traffic splitting does not presently work due to this issue. We recommend you use a Seldon Experiment instead.
You have installed Traefik as per their docs into namespace traefik-v2