# Run Inference
We will show:

- Model inference to a Tensorflow model
  - REST and gRPC using the seldon CLI, curl and grpcurl
- Pipeline inference
  - REST and gRPC using the seldon CLI, curl and grpcurl
```python
%env INFER_ENDPOINT=0.0.0.0:9000
```

```
env: INFER_ENDPOINT=0.0.0.0:9000
```
## Tensorflow Model
```bash
cat ./models/tfsimple1.yaml
```

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: tfsimple1
spec:
  storageUri: "gs://seldon-models/triton/simple"
  requirements:
  - tensorflow
  memory: 100Ki
```
Load the model.
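A minimal sketch of loading it with the Seldon CLI, assuming the `seldon` CLI is installed and configured to talk to your scheduler:

```bash
# Load the model definition onto the inference server
seldon model load -f ./models/tfsimple1.yaml
```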
Wait for the model to be ready.
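One way to block until the model reports ready, again a sketch using the Seldon CLI (`jq` is optional and only pretty-prints the status JSON):

```bash
# Wait until the model reaches the ModelAvailable state
seldon model status tfsimple1 -w ModelAvailable | jq -M .
```

Once available, the model can be called over REST as outlined above. The requests below are sketches for the standard Triton `simple` model, which takes two INT32 tensors `INPUT0` and `INPUT1` of shape `[1,16]`; the endpoint comes from the `INFER_ENDPOINT` variable set earlier:

```bash
# REST inference via the Seldon CLI
seldon model infer tfsimple1 \
  '{"inputs": [{"name":"INPUT0","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]}]}'

# REST inference via curl against the Open Inference Protocol endpoint
curl http://${INFER_ENDPOINT}/v2/models/tfsimple1/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name":"INPUT0","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]}]}'
```

For gRPC, a sketch using grpcurl, assuming the V2 dataplane protos are available locally (here under `apis/`, e.g. from a checkout of the seldon-core repository):

```bash
# gRPC inference via grpcurl using the V2 dataplane ModelInfer RPC
grpcurl \
  -d '{"model_name":"tfsimple1","inputs":[{"name":"INPUT0","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]}]}' \
  -plaintext \
  -import-path apis \
  -proto apis/mlops/v2_dataplane/v2_dataplane.proto \
  ${INFER_ENDPOINT} inference.GRPCInferenceService/ModelInfer
```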