Seldon Core 1

Servers

  • Custom Inference Servers
  • Storage Initializers
  • Prepackaged Model Servers
  • Inference Optimization
  • XGBoost Server
  • Triton Inference Server
  • SKLearn Server
  • Tempo Server
  • MLFlow Server
  • HuggingFace Server
  • TensorFlow Serving
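
Most of these pages describe Seldon Core's prepackaged inference servers, which are selected through the implementation field of a SeldonDeployment predictor graph. As a minimal sketch (not taken from this page; the deployment name and the modelUri bucket are placeholders), deploying a model onto the SKLearn Server looks like this:

    apiVersion: machinelearning.seldon.io/v1
    kind: SeldonDeployment
    metadata:
      name: sklearn-iris          # placeholder deployment name
    spec:
      predictors:
        - name: default
          replicas: 1
          graph:
            name: classifier
            # Pick the prepackaged server, e.g. SKLEARN_SERVER,
            # XGBOOST_SERVER, TRITON_SERVER, MLFLOW_SERVER, ...
            implementation: SKLEARN_SERVER
            # Placeholder location of the saved model artifact
            modelUri: gs://<your-bucket>/sklearn/iris

When this manifest is applied, Seldon Core starts the matching prepackaged server image, and the storage initializer downloads the model artifact from modelUri before serving begins.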