Benchmarking
This page is a work in progress that provides benchmarking and load-testing results. The work is ongoing and we welcome feedback.
Tools
Service Orchestrator
These benchmark tests evaluate the extra latency added by including the service orchestrator.
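As a rough illustration of how this kind of overhead can be measured, the sketch below times a single prediction request sent through a Seldon REST endpoint. The host, deployment name, namespace, and payload are placeholders, not the exact setup used for the numbers below; comparing these timings against requests sent directly to the model container's service would give an estimate of the orchestrator's added latency.

```python
import time
import requests

# Hypothetical endpoint values; replace with your own cluster's details.
HOST = "http://<ingress-host>"   # e.g. the Istio or Ambassador ingress
DEPLOYMENT = "tf-flowers"        # name of the SeldonDeployment
NAMESPACE = "seldon"
URL = f"{HOST}/seldon/{NAMESPACE}/{DEPLOYMENT}/api/v1.0/predictions"

# Illustrative Seldon v1 protocol payload only; the real flowers model
# expects its own input format.
payload = {"data": {"ndarray": [[0.1, 0.2, 0.3, 0.4]]}}

def timed_request(url: str, body: dict) -> float:
    """Send one prediction request and return the round-trip latency in ms."""
    start = time.perf_counter()
    resp = requests.post(url, json=body, timeout=10)
    resp.raise_for_status()
    return (time.perf_counter() - start) * 1000

latencies = [timed_request(URL, payload) for _ in range(100)]
print(f"mean latency: {sum(latencies) / len(latencies):.2f} ms")
```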
Results
On a 3-node DigitalOcean cluster (24 vCPUs, 96 GB of memory), running the Tensorflow Flowers image classifier.
REST: 9ms
gRPC: 4ms
Further work:
Statistical confidence test
Tensorflow
These tests measure the maximum throughput for a single model and the behaviour of the Horizontal Pod Autoscaler (HPA).
Results
On a 3-node DigitalOcean cluster (24 vCPUs, 96 GB of memory), running the Tensorflow Flowers image classifier with HPA enabled and driven at maximum throughput for a single model. There is no ramp-up, as vegeta does not support this. See the notebook for details.
Latencies:
mean: 259.99 ms
50th percentile: 131.92 ms
90th percentile: 310.05 ms
95th percentile: 916.68 ms
99th percentile: 2775.05 ms
Throughput: 23.998 requests/second
Errors: none
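The notebook drives this load with vegeta. As a rough, non-vegeta approximation, the sketch below sends concurrent requests for a fixed duration and reports the same style of statistics (mean and percentile latencies, throughput, errors). The endpoint, payload, worker count, and duration are placeholder assumptions to adapt to your own deployment.

```python
import time
import concurrent.futures

import numpy as np
import requests

# Hypothetical endpoint; replace with your SeldonDeployment's REST URL.
URL = "http://<ingress-host>/seldon/seldon/tf-flowers/api/v1.0/predictions"
PAYLOAD = {"data": {"ndarray": [[0.1, 0.2, 0.3, 0.4]]}}
WORKERS = 32          # concurrent clients
DURATION_S = 60       # length of the load test

def worker(deadline: float):
    """Send requests until the deadline, recording latencies (ms) and errors."""
    latencies, errors = [], 0
    while time.perf_counter() < deadline:
        start = time.perf_counter()
        try:
            requests.post(URL, json=PAYLOAD, timeout=10).raise_for_status()
            latencies.append((time.perf_counter() - start) * 1000)
        except requests.RequestException:
            errors += 1
    return latencies, errors

deadline = time.perf_counter() + DURATION_S
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(worker, [deadline] * WORKERS))

latencies = np.concatenate([np.array(l) for l, _ in results if l])
errors = sum(e for _, e in results)

print(f"mean: {latencies.mean():.2f} ms")
for p in (50, 90, 95, 99):
    print(f"{p}th: {np.percentile(latencies, p):.2f} ms")
print(f"throughput: {len(latencies) / DURATION_S:.2f}/s")
print(f"errors: {errors}")
```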
Flexible Benchmarking with Argo Workflows
We also have an example that shows how to leverage the batch processing workflow showcased in our examples to perform benchmarking of Seldon Core models with Argo Workflows.