Custom Inference Servers

If none of Seldon's pre-packaged inference servers fits your use case, you can build your own and select it through the implementation field of the model graph in a SeldonDeployment, for example:
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: nlp-model
spec:
  predictors:
  - graph:
      children: []
      implementation: CUSTOM_INFERENCE_SERVER
      modelUri: s3://our-custom-models/nlp-model
      name: model
    name: default
    replicas: 1

Building a new inference server
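The server code itself is not shown on this page. As a minimal sketch, assuming the Seldon Python wrapper and a local copy of the model artifacts (for example under /mnt/models, where the storage initializer typically downloads modelUri), a reusable server is a class whose predict method Seldon exposes over REST/gRPC; the class name, loader, and paths below are illustrative:

# Minimal sketch of a custom reusable inference server using the Seldon
# Python wrapper. The class name, model path, and joblib loader are
# illustrative assumptions, not part of this page.
import joblib


class CustomInferenceServer:
    def __init__(self, model_uri: str = "/mnt/models"):
        # Load the model once at startup from the locally downloaded artifacts.
        self._model = joblib.load(f"{model_uri}/model.joblib")

    def predict(self, X, features_names=None):
        # Called for each prediction request; X holds the request payload.
        return self._model.predict(X)

The class would then be packaged into a container image whose entrypoint runs something along the lines of seldon-core-microservice CustomInferenceServer --service-type MODEL, so that the same image can serve any model passed in via modelUri.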
Adding a new inference server
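The configuration itself is not reproduced here, but the general mechanism is to register the new server under Seldon Core's predictor_servers configuration (the seldon-config ConfigMap, or the corresponding Helm value) so that its name can be used as the implementation above. A hedged sketch, assuming the Helm values format; the server name, image, and tag are placeholders:

# Sketch of a predictor_servers entry (Helm values / seldon-config).
# Field names follow the predictor_servers schema as best understood;
# check your Seldon Core version for the exact format.
predictor_servers:
  CUSTOM_INFERENCE_SERVER:
    protocols:
      seldon:
        image: our-registry/custom-inference-server
        defaultImageVersion: "0.1.0"

With such an entry in place, the SeldonDeployment at the top of the page resolves CUSTOM_INFERENCE_SERVER to that image.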
Worked Example