Resources

For Kubernetes usage, we provide a set of custom resources for interacting with Seldon.

  • SeldonRuntime - for installing Seldon in a particular namespace.

  • Servers - for deploying sets of replicas of core inference servers (MLServer or Triton).

  • Models - for deploying single machine learning models, custom transformation logic, drift detectors, outlier detectors and explainers.

  • Experiments - for testing new versions of models.

  • Pipelines - for connecting flows of data between models (example Model and Pipeline manifests follow this list).
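
The sketch below shows what a minimal Model and Pipeline manifest might look like. The `apiVersion`, field names, resource names, and storage URI are assumptions based on common Seldon Core 2 examples rather than a definitive schema; consult the documentation for each resource for the exact fields.

```yaml
# Minimal sketch of a Model resource. The storage URI is a
# placeholder for wherever your model artifact actually lives.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris            # hypothetical model name
spec:
  storageUri: "gs://example-bucket/models/iris"
  requirements:
  - sklearn
---
# Minimal sketch of a Pipeline that chains two deployed Models:
# the output of model-a is routed to model-b, and model-b's
# output becomes the pipeline output.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  steps:
    - name: model-a
    - name: model-b
      inputs:
      - model-a
  output:
    steps:
    - model-b
```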

Advanced Customization

SeldonConfig and ServerConfig define the core installation configuration and the machine learning inference server configuration for Seldon. You will not normally need to customize these, but they may need to be changed for a custom installation within your organisation.

  • ServerConfigs - for defining new types of inference server that can be referenced by a Server resource (see the sketch after this list).

  • SeldonConfig - for defining how Seldon is installed.
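
As a sketch of how these resources fit together, a Server references a ServerConfig by name. The example below assumes a ServerConfig named `mlserver` exists, as it typically does in a default installation; the resource name is hypothetical.

```yaml
# Minimal sketch of a Server that runs two replicas of the
# inference server described by the "mlserver" ServerConfig.
apiVersion: mlops.seldon.io/v1alpha1
kind: Server
metadata:
  name: mlserver-custom    # hypothetical server name
spec:
  serverConfig: mlserver   # must match an existing ServerConfig
  replicas: 2
```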
