Kubernetes Resources

For Kubernetes usage, we provide a set of custom resources for interacting with Seldon:

  • SeldonRuntime - for installing Seldon in a particular namespace.

  • Servers - for deploying sets of replicas of core inference servers (MLServer or Triton).

  • Models - for deploying single machine learning models, custom transformation logic, drift detectors, outlier detectors, and explainers.

  • Experiments - for testing new versions of models.

  • Pipelines - for connecting together flows of data between models.
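
For orientation, the sketch below shows what a minimal Model and a one-step Pipeline manifest might look like. It assumes the mlops.seldon.io/v1alpha1 API group used by Seldon Core v2; the resource names and storage URI are illustrative placeholders, and the Models and Pipelines pages document the full schemas.

```yaml
# Minimal Model sketch; the storageUri is a hypothetical artifact location.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://my-bucket/models/iris"  # placeholder, not a real bucket
  requirements:
  - sklearn          # used to schedule the model onto a compatible Server
---
# A one-step Pipeline that routes inference requests through the model above.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: iris-pipeline
spec:
  steps:
    - name: iris
  output:
    steps:
    - iris
```

Both manifests can be applied with kubectl apply like any other Kubernetes resource.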

Advanced Customization

SeldonConfig and ServerConfig define the core installation configuration and the machine learning inference server configuration for Seldon. Normally you would not need to customize these, but it may be required for a custom installation within your organisation.

  • ServerConfigs - for defining new types of inference server that can be referenced by a Server resource.

  • SeldonConfig - for defining how Seldon is installed.
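
As an illustration of how these fit together, a Server resource selects the ServerConfig it runs through its spec. The sketch below is an assumption: it presumes the mlops.seldon.io/v1alpha1 API group and an existing ServerConfig named mlserver, which you should verify against your installation.

```yaml
# Sketch of a Server running two replicas of the inference server
# defined by the "mlserver" ServerConfig; names are assumptions.
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver
spec:
  serverConfig: mlserver   # references a ServerConfig resource by name
  replicas: 2
```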
