Servers

• Custom Inference Servers
• Storage Initializers
• Prepackaged Model Servers
• Inference Optimization
• XGBoost Server
• Triton Inference Server
• SKLearn Server
• Tempo Server
• MLFlow Server
• HuggingFace Server
• TensorFlow Serving
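As a minimal sketch of how a prepackaged model server is selected, the SeldonDeployment manifest below serves a scikit-learn model with the SKLearn Server by setting implementation: SKLEARN_SERVER. The bucket path in modelUri is illustrative; in practice it points at the saved model artifacts that the storage initializer downloads before the server starts.

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-iris
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        # Prepackaged server to run the model with; other values include
        # XGBOOST_SERVER, TRITON_SERVER, TENSORFLOW_SERVER, and MLFLOW_SERVER.
        implementation: SKLEARN_SERVER
        # Illustrative bucket path: the storage initializer fetches the
        # model artifacts from here into the server container at startup.
        modelUri: gs://my-models/sklearn/iris
```

Once applied with kubectl apply -f, Seldon Core provisions the server pod and exposes the model over the prediction API; swapping the implementation value is all that is needed to target a different prepackaged server.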