Resource allocation

Learn more about using taints and tolerations with node affinity or node selector to allocate resources in a Kubernetes cluster.

When deploying machine learning models in Kubernetes, you may need to control which infrastructure resources these models use. This is especially important in environments where certain workloads, such as resource-intensive models, should be isolated from others, or where specific hardware, such as GPUs, needs to be dedicated to particular tasks. Without fine-grained control over workload placement, models might end up running on suboptimal nodes, leading to inefficiencies or resource contention.

For example, you may want to:

  • Isolate inference workloads from control plane components or other services to prevent resource contention.

  • Ensure that GPU nodes are reserved exclusively for models that require hardware acceleration.

  • Keep business-critical models on dedicated nodes to ensure performance and reliability.

  • Run external dependencies like Kafka on separate nodes to avoid interference with inference workloads.

To solve these problems, Kubernetes provides mechanisms such as taints, tolerations, and nodeAffinity or nodeSelector to control resource allocation and workload scheduling.

Taints are applied to nodes and tolerations to Pods to control which Pods can be scheduled on specific nodes within the Kubernetes cluster. Pods without a matching toleration for a node's taint are not scheduled on that node. For instance, if a node has GPUs or other specialized hardware, you can prevent Pods that don't need these resources from running on that node to avoid unnecessary resource usage.

Note: Tolerations alone do not ensure that a Pod runs on a tainted node. Even if a Pod has the correct toleration, Kubernetes may still schedule it on other nodes without taints. To ensure a Pod runs on a specific node, you also need to use nodeAffinity or nodeSelector rules.

When used together, taints and tolerations with nodeAffinity or nodeSelector can effectively allocate certain Pods to specific nodes, while preventing other Pods from being scheduled on those nodes.

In a Kubernetes cluster running Seldon Core 2, this involves two key configurations:

  1. Configuring servers to run on specific nodes using mechanisms like taints, tolerations, and nodeAffinity or nodeSelector.

  2. Configuring models so that they are scheduled and loaded on the appropriate servers.

This ensures that models are deployed on the optimal infrastructure and servers that meet their requirements.


Example: Serving models on dedicated GPU nodes

This example illustrates how to use taints and tolerations with nodeAffinity or nodeSelector to dedicate GPU nodes to specific models.

Note: Configuration options depend on your cluster setup and the desired outcome. The Seldon CRDs for Seldon Core 2 Pods offer complete customization of Pod specifications, allowing you to apply additional Kubernetes customizations as needed.

To serve a model on a dedicated GPU node, follow these steps:

  1. Configure the GPU node.

  2. Configure the inference servers.

  3. Configure the models.

Configuring the GPU node

Note: To dedicate a set of nodes to run only a specific group of inference servers, you must first provision an additional set of nodes within the Kubernetes cluster for the remaining Seldon Core 2 components. For more information about adding labels and taints to the GPU nodes in your Kubernetes cluster, refer to your cloud provider's documentation.

You can add the taint when you are creating the node or after the node has been provisioned. You can apply the same taint to multiple nodes, not just a single node. A common approach is to define the taint at the node pool level.

If you apply a NoSchedule taint to a node after it is created, existing Pods that do not have a matching toleration remain on the node without being evicted. To ensure that such Pods are removed, use the NoExecute taint effect instead.
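For example, the taint used later in this example can be applied to an existing node with kubectl. This is a minimal sketch; the node name gpu-node-1 is a placeholder:

# Apply the taint used in this example to an existing node
kubectl taint nodes gpu-node-1 seldon-gpu-srv=true:NoSchedule

# Use the NoExecute effect instead to also evict Pods that do not tolerate the taint
kubectl taint nodes gpu-node-1 seldon-gpu-srv=true:NoExecute

# Remove the taint again if needed (note the trailing "-")
kubectl taint nodes gpu-node-1 seldon-gpu-srv=true:NoSchedule-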

In this example, the node includes several labels that are used later for the node affinity settings. You may specify some labels yourself, while others are usually added by the cloud provider or by a GPU operator installed in the cluster.
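For instance, the custom pool label used for node selection in this example can be added with kubectl. The node name gpu-node-1 is a placeholder; labels such as cloud.google.com/gke-accelerator or nvidia.com/gpu.product are typically set by the cloud provider or GPU operator rather than by hand:

# Add the custom label referenced later by nodeSelector and nodeAffinity
kubectl label nodes gpu-node-1 pool=infer-srv

# Review the labels and taints now present on the node
kubectl get node gpu-node-1 --show-labels
kubectl describe node gpu-node-1 | grep -A 3 Taints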

Configuring inference servers

To ensure a specific inference server Pod runs only on the nodes you've configured, you can use nodeSelector or nodeAffinity together with a toleration by modifying one of the following:

  • Seldon Server custom resource: Apply changes to each individual inference server.

  • ServerConfig custom resource: Apply settings across multiple inference servers at once.

Configuring Seldon Server custom resource

While nodeSelector requires an exact match of node labels for server Pods to select a node, nodeAffinity offers more fine-grained control. It enables a conditional approach by using logical operators in the node selection process. For more information, see the Kubernetes documentation on affinity and anti-affinity.

In the first Server example below, a nodeSelector and a toleration are set for the Seldon Server custom resource.

In the second example, nodeAffinity and a toleration are set for the Seldon Server custom resource.

You can configure more advanced Pod selection using nodeAffinity, as shown in the third example below.

Configuring ServerConfig custom resource

This configuration automatically affects all servers that use the ServerConfig, unless you specify server-specific overrides, which take precedence.
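After applying these changes, you can check where the inference server Pods were scheduled. This is a quick sketch, assuming the seldon-mesh namespace used in the manifests below:

# List server Pods together with the nodes they were scheduled on
kubectl get pods -n seldon-mesh -o wide

# If a Pod stays Pending, its events show whether taints or affinity rules blocked scheduling
kubectl describe pod <server-pod-name> -n seldon-mesh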

Configuring models

When you have a set of inference servers running exclusively on GPU nodes, you can assign a model to one of those servers in two ways:

  • Custom model requirements (recommended)

  • Explicit server pinning

Here's the distinction between the two methods of assigning models to servers:

  • Custom model requirements: If the assigned server cannot load the model due to insufficient resources, another similarly-capable server can be selected to load the model.

  • Explicit pinning: If the specified server lacks sufficient memory or resources, the model load fails without trying another server.

Custom model requirements

When you specify a requirement matching a server capability in the Model custom resource, the model is loaded on any inference server with a capability that matches the requirement.

Ensure that the additional capability that matches the requirement label is added to the Server custom resource.

Instead of adding a capability using extraCapabilities on a Server custom resource, you may also add it to the list of capabilities in the associated ServerConfig custom resource. This applies to all servers referencing that configuration.

Explicit pinning

When you reference a Server custom resource directly in the Model custom resource, the model is loaded only on replicas of the inference servers created by that Server.

Example manifests

The following manifests show the node, server, and model configurations used in this example.

# Node with labels and a taint dedicating it to GPU inference serving
apiVersion: v1
kind: Node
metadata:
  name: example-node         # Replace with the actual node name
  labels:
    pool: infer-srv          # Custom label
    nvidia.com/gpu.product: A100-SXM4-40GB-MIG-1g.5gb-SHARED  # Sample label from GPU discovery
    cloud.google.com/gke-accelerator: nvidia-a100-80gb      # GKE without NVIDIA GPU operator
    cloud.google.com/gke-accelerator-count: "2"              # Accelerator count
spec:
  taints:
    - effect: NoSchedule
      key: seldon-gpu-srv
      value: "true"

# Server with nodeSelector and a toleration (first example)
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-llm-local-gpu     # <server name>
  namespace: seldon-mesh            # <seldon runtime namespace>
spec:
  replicas: 1
  serverConfig: mlserver            # <reference ServerConfig CR>
  extraCapabilities:
    - model-on-gpu                  # Custom capability for matching Model to this server
  podSpec:
    nodeSelector:                   # Schedule pods only on nodes with these labels
      pool: infer-srv
      cloud.google.com/gke-accelerator: nvidia-a100-80gb  # Example requesting specific GPU on GKE
      # cloud.google.com/gke-accelerator-count: 2          # Optional GPU count
    tolerations:                    # Allow scheduling on nodes with the matching taint
      - effect: NoSchedule
        key: seldon-gpu-srv
        operator: Equal
        value: "true"
    containers:                     # Override settings from ServerConfig if needed
      - name: mlserver
        resources:
          requests:
            nvidia.com/gpu: 1       # Request a GPU for the mlserver container
            cpu: 40
            memory: 360Gi
            ephemeral-storage: 290Gi
          limits:
            nvidia.com/gpu: 2       # Limit to 2 GPUs
            cpu: 40
            memory: 360Gi

# Server with nodeAffinity and a toleration (second example)
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-llm-local-gpu     # <server name>
  namespace: seldon-mesh            # <seldon runtime namespace>
spec:
  podSpec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "pool"
              operator: In
              values:
              - infer-srv
            - key: "cloud.google.com/gke-accelerator"
              operator: In
              values:
              - nvidia-a100-80gb
    tolerations:                     # Allow mlserver-llm-local-gpu pods to be scheduled on nodes with the matching taint
    - effect: NoSchedule
      key: seldon-gpu-srv
      operator: Equal
      value: "true"
    containers:                      # If needed, override settings from ServerConfig for this specific Server
      - name: mlserver
        resources:
          requests:
            nvidia.com/gpu: 1        # Request a GPU for the mlserver container
            cpu: 40
            memory: 360Gi
            ephemeral-storage: 290Gi
          limits:
            nvidia.com/gpu: 2        # Limit to 2 GPUs
            cpu: 40
            memory: 360Gi

# Server with advanced nodeAffinity expressions (third example)
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-llm-local-gpu     # <server name>
  namespace: seldon-mesh            # <seldon runtime namespace>
spec:
  podSpec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
              - key: "cloud.google.com/gke-accelerator-count"
                operator: Gt       # (greater than)
                values: ["1"]
              - key: "gpu.gpu-vendor.example/installed-memory"
                operator: Gt
                values: ["75000"]
              - key: "feature.node.kubernetes.io/pci-10.present" # NFD Feature label
                operator: In
                values: ["true"] # (optional) only schedule on nodes with PCI device 10

    tolerations:                     # Allow mlserver-llm-local-gpu pods to be scheduled on nodes with the matching taint
    - effect: NoSchedule
      key: seldon-gpu-srv
      operator: Equal
      value: "true"

    containers:                      # If needed, override settings from ServerConfig for this specific Server
      - name: mlserver
        env:
          ...                        # Add your environment variables here
        image: ...                   # Specify your container image here
        resources:
          requests:
            nvidia.com/gpu: 1        # Request a GPU for the mlserver container
            cpu: 40
            memory: 360Gi
            ephemeral-storage: 290Gi
          limits:
            nvidia.com/gpu: 2        # Limit to 2 GPUs
            cpu: 40
            memory: 360Gi
        ...                           # Other configurations can go here

# ServerConfig applying nodeSelector and a toleration to all servers that reference it
apiVersion: mlops.seldon.io/v1alpha1
kind: ServerConfig
metadata:
  name: mlserver-llm              # <ServerConfig name>
  namespace: seldon-mesh           # <seldon runtime namespace>
spec:
  podSpec:
    nodeSelector:                  # Schedule pods only on nodes with these labels
      pool: infer-srv
      cloud.google.com/gke-accelerator: nvidia-a100-80gb  # Example requesting specific GPU on GKE
      # cloud.google.com/gke-accelerator-count: 2          # Optional GPU count
    tolerations:                   # Allow scheduling on nodes with the matching taint
      - effect: NoSchedule
        key: seldon-gpu-srv
        operator: Equal
        value: "true"
    containers:                    # Define the container specifications
      - name: mlserver
        env:                       # Environment variables (fill in as needed)
          ...
        image: ...                 # Specify the container image
        resources:
          requests:
            nvidia.com/gpu: 1      # Request a GPU for the mlserver container
            cpu: 40
            memory: 360Gi
            ephemeral-storage: 290Gi
          limits:
            nvidia.com/gpu: 2      # Limit to 2 GPUs
            cpu: 40
            memory: 360Gi
        ...                        # Additional container configurations

# Model requesting a custom capability via requirements
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: llama3           # <model name>
  namespace: seldon-mesh # <seldon runtime namespace>
spec:
  requirements:
  - model-on-gpu         # requirement matching a Server capability

# Server advertising the matching capability through extraCapabilities
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-llm-local-gpu     # <server name>
  namespace: seldon-mesh           # <seldon runtime namespace>
spec:
  serverConfig: mlserver           # <reference ServerConfig CR>
  extraCapabilities:
    - model-on-gpu                 # custom capability that can be used for matching Model to this server
  # Other fields would go here

# ServerConfig adding the capability to the agent container's SELDON_SERVER_CAPABILITIES list
apiVersion: mlops.seldon.io/v1alpha1
kind: ServerConfig
metadata:
  name: mlserver-llm               # <ServerConfig name>
  namespace: seldon-mesh           # <seldon runtime namespace>
spec:
  podSpec:
    containers:
      - name: agent                # note the setting is applied to the agent container
        env:
          - name: SELDON_SERVER_CAPABILITIES
            value: mlserver,alibi-detect,...,xgboost,model-on-gpu  # add capability to the list
        image: ...
    # Other configurations go here

# Model explicitly pinned to a specific Server
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: llama3           # <model name>
  namespace: seldon-mesh # <seldon runtime namespace>
spec:
  server: mlserver-llm-local-gpu   # <reference Server CR>
  requirements:
    - model-on-gpu                # requirement matching a Server capability
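
After applying the Model manifest, you can verify that the model was scheduled onto a suitable server. This is a rough check, assuming the resource names and the seldon-mesh namespace used in the examples above, and that the Seldon Core 2 CRDs are installed in the cluster:

# Check the scheduling and readiness of the model
kubectl get model llama3 -n seldon-mesh
kubectl describe model llama3 -n seldon-mesh

# Check the server that is expected to host the model
kubectl get server mlserver-llm-local-gpu -n seldon-mesh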