Batch Prediction Jobs
Install MinIO alongside Seldon Enterprise Platform.
Set up the namespace with a service account suitable for a production environment. For more information, see the Argo installation section.
This demo helps you learn about:
Deploying a pipeline with a pretrained scikit-learn iris model
Running a batch job to get predictions
Checking the output
Click Create new deployment on the Overview page.
Enter the deployment details as follows:
Name: batch-demo
Namespace: seldon
Type: Seldon ML Pipeline
Configure the default predictor values for only these fields:
Runtime: Scikit Learn
Model URI: gs://seldon-models/scv2/samples/mlserver_1.6.0/iris-sklearn
Model Project: default
Click Next for the remaining pages of the wizard, then click Launch.
When the deployment launches successfully, its status on the Overview page reads Available.
Download the input data file iris-input.txt. The file is formatted according to the Open Inference Protocol.
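If you want to generate your own input file, the sketch below writes one Open Inference Protocol request per line, which is the shape a batch input file takes. The feature values and the tensor name `predict` are illustrative assumptions, not values taken from the downloaded file.

```python
import json

# Two example iris feature rows (sepal length, sepal width,
# petal length, petal width); the values are illustrative.
rows = [
    [6.8, 2.8, 4.8, 1.4],
    [6.1, 3.4, 4.5, 1.6],
]

# One Open Inference Protocol request per line; the tensor name
# "predict" is an assumption for this sketch.
with open("iris-input.txt", "w") as f:
    for row in rows:
        request = {
            "inputs": [
                {
                    "name": "predict",
                    "shape": [1, 4],
                    "datatype": "FP64",
                    "data": [row],
                }
            ]
        }
        f.write(json.dumps(request) + "\n")
```

Each line is a complete, self-contained JSON request, so the batch processor can split the file and fan requests out to workers independently.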
Go to the MinIO browser and create a bucket named data.
Upload the iris-input.txt file to the data bucket.
Click the new pipeline batch-demo tile in the Overview page.
Click the Batch Jobs option in the left pane.
Click Create Your First Job and type the following details:
Input Data Location: minio://data/iris-input.txt
Output Data Location: minio://data/iris-output-{{workflow.name}}.txt
Number of Workers: 5
Number of Retries: 3
Batch Size: 10
Minimum Batch Wait Interval (sec): 0
Method: Predict
Transport Protocol: REST
Input Data Type: Open Inference Protocol (OIP)
Object Store Secret Name: minio-bucket-envvars
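To get a feel for what the Batch Size and Number of Workers fields control, the arithmetic below sketches how a hypothetical 150-row input file would be split up. The input size is an assumption for illustration, and the exact scheduling of batches across workers is an implementation detail of the batch processor; this only shows the counting.

```python
import math

# Job parameters from the form above.
num_workers = 5
batch_size = 10
num_input_lines = 150  # hypothetical input file size

# Requests are grouped into micro-batches of `batch_size` lines,
# so this input yields ceil(150 / 10) = 15 batched requests.
num_batches = math.ceil(num_input_lines / batch_size)

# Spread evenly, each of the 5 workers handles at most
# ceil(15 / 5) = 3 batches.
batches_per_worker = math.ceil(num_batches / num_workers)

print(num_batches, batches_per_worker)
```

A larger batch size reduces per-request overhead at the cost of larger payloads; more workers increase parallelism against the deployed pipeline.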
After a couple of minutes, when the job is complete, refresh the page to see its status.
Inspect the output file in MinIO.
If you open the output file, you should see contents such as:
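The output file contains one Open Inference Protocol response per input request. A single response might look like the following; the model name, tensor name, datatype, and predicted class value are illustrative, not taken from an actual run:

```json
{
  "model_name": "iris",
  "outputs": [
    {
      "name": "predict",
      "shape": [1, 1],
      "datatype": "INT64",
      "data": [1]
    }
  ]
}
```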
If not, see the Argo section for troubleshooting.