
NVIDIA TensorRT MNIST


This example shows how you can deploy a TensorRT model with NVIDIA Triton Inference Server. In this case we use a prebuilt TensorRT model for NVIDIA V100 GPUs.

Note that this example requires some advanced setup and is intended for those with TensorRT experience.
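A minimal sketch of the kind of SeldonDeployment this involves, created here through the Kubernetes Python client. The `TRITON_SERVER` implementation and `kfserving` protocol are standard Seldon Core options for Triton; the namespace, deployment name, graph name, and `modelUri` below are placeholders rather than the exact values from this example, and GPU resource requests for the V100 node are omitted for brevity.

```python
from kubernetes import client, config

# Connect to the cluster with the local kubeconfig.
config.load_kube_config()

# Hypothetical SeldonDeployment serving a prebuilt TensorRT model plan with
# Triton via Seldon's TRITON_SERVER prepackaged server and the kfserving (V2)
# protocol. Names and modelUri are placeholders; with the kfserving protocol
# the graph name must match the Triton model name.
seldon_deployment = {
    "apiVersion": "machinelearning.seldon.io/v1",
    "kind": "SeldonDeployment",
    "metadata": {"name": "tensorrt-mnist", "namespace": "seldon"},
    "spec": {
        "protocol": "kfserving",
        "predictors": [
            {
                "name": "default",
                "replicas": 1,
                "graph": {
                    "name": "mnist",
                    "implementation": "TRITON_SERVER",
                    "modelUri": "gs://your-bucket/tensorrt-mnist",  # placeholder
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="machinelearning.seldon.io",
    version="v1",
    namespace="seldon",
    plural="seldondeployments",
    body=seldon_deployment,
)
```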

Prerequisites

This example uses the KFServing V2 inference protocol, which is supported by Triton Inference Server and which Seldon also supports.
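Once the deployment is running, the V2 protocol endpoints can be probed directly. This sketch assumes the usual Seldon ingress prefix `/seldon/<namespace>/<deployment-name>` and the placeholder names used above; substitute your own ingress host.

```python
import requests

# Placeholder ingress host and Seldon path prefix; adjust to your cluster.
BASE = "http://localhost:8003/seldon/seldon/tensorrt-mnist"

# Server-level readiness under the V2 (KFServing) protocol.
print("server ready:", requests.get(f"{BASE}/v2/health/ready").status_code)

# Readiness of the Triton model served by the deployment (placeholder name).
print("model ready:", requests.get(f"{BASE}/v2/models/mnist/ready").status_code)
```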

Check metadata of model
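A sketch of the metadata call, reusing the placeholder endpoint and model name from above; the response lists the input and output tensors Triton reports for the TensorRT plan.

```python
import json
import requests

BASE = "http://localhost:8003/seldon/seldon/tensorrt-mnist"  # placeholder

# GET /v2/models/{name} returns the model platform plus its input/output
# tensor names, shapes and datatypes.
metadata = requests.get(f"{BASE}/v2/models/mnist").json()
print(json.dumps(metadata, indent=2))
```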

Test prediction on a random digit
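A sketch of the inference call. The tensor name, shape, and datatype below are placeholders for a 28x28 grayscale MNIST input and must be replaced with whatever the metadata call reported for your model; here a random array stands in for a real test-set digit.

```python
import numpy as np
import requests

BASE = "http://localhost:8003/seldon/seldon/tensorrt-mnist"  # placeholder

# Stand-in for a random MNIST digit: a 1x1x28x28 float image.
digit = np.random.rand(1, 1, 28, 28).astype(np.float32)

# V2 inference request; tensor name/shape/datatype must match the model
# metadata (these values are placeholders).
payload = {
    "inputs": [
        {
            "name": "input",
            "shape": list(digit.shape),
            "datatype": "FP32",
            "data": digit.flatten().tolist(),
        }
    ]
}

response = requests.post(f"{BASE}/v2/models/mnist/infer", json=payload)
outputs = response.json()["outputs"]

# The output tensor holds class scores; argmax gives the predicted digit.
scores = np.array(outputs[0]["data"])
print("predicted digit:", int(scores.argmax()))
```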

