Stream Processing with KNative Eventing

In this example we will show how you can enable real-time stream processing in Seldon Core by leveraging the KNative Eventing integration.

We will deploy a simple model containerised with Seldon Core and leverage the basic Seldon Core integration with KNative Eventing, which allows us to connect the model so it receives cloudevents as requests and returns a cloudevent-enabled response that other components can collect.

Pre-requisites

You will require the following in order to go ahead:

- A Kubernetes cluster with kubectl configured
- Seldon Core installed in the cluster
- KNative Eventing installed, with a broker available in the namespace you will deploy to

Deploy your Seldon Model

We will first deploy our model using Seldon Core. In this case we'll use one of the pre-packaged model servers.

We first create a configuration file:

%%writefile ./assets/simple-iris-deployment.yaml

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-deployment
spec:
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/v1.19.0-dev/sklearn/iris
      name: simple-iris-model
      children: []
    name: default
    replicas: 1

Run the model in our cluster

Now we apply the SeldonDeployment configuration file we just created.
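We can apply the file written in the cell above (the path matches the %%writefile cell; add -n <namespace> if you are not working in your default namespace):

kubectl apply -f ./assets/simple-iris-deployment.yaml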

Check that the model has been deployed
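One way to confirm the rollout, using the sdep short name that Seldon Core registers for SeldonDeployment resources; the status.state field should report Available once the pods are ready:

kubectl get sdep iris-deployment -o jsonpath='{.status.state}'
kubectl get pods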

Create a Trigger to reach our model

We want to create a trigger that reaches the model's service directly.

We will be using the Seldon Deployment we created above:
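To see the Kubernetes Service the deployment exposes, we can list the services it created (this assumes the default <deploymentName>-<predictorName> naming convention, i.e. iris-deployment-default):

kubectl get svc | grep iris-deployment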

Create trigger configuration

Create this trigger file, which will forward all cloudevents of type "seldon.<deploymentName>.request" to the model.
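Below is a minimal sketch of that trigger, assuming a recent KNative Eventing release (the eventing.knative.dev/v1 API), a broker named default in the namespace, and that the Seldon Core integration makes the SeldonDeployment directly addressable as the subscriber; the trigger and file names are just examples:

%%writefile ./assets/seldon-knative-trigger.yaml

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: seldon-eventing-sklearn-trigger
spec:
  broker: default
  filter:
    attributes:
      type: seldon.iris-deployment.request
  subscriber:
    ref:
      apiVersion: machinelearning.seldon.io/v1
      kind: SeldonDeployment
      name: iris-deployment

We can then apply it:

kubectl apply -f ./assets/seldon-knative-trigger.yaml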

Check that the trigger is working correctly (you should see "Ready: True"), together with the URL that will be reached.
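Assuming the trigger name used in the sketch above:

kubectl get trigger seldon-eventing-sklearn-trigger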

Send a request to the KNative Eventing default broker

We can send requests by running a curl command from a pod inside the cluster.
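Here is a hedged example of such a request, assuming a recent KNative Eventing install where the broker is reachable at broker-ingress.knative-eventing.svc.cluster.local (older releases exposed it as default-broker.<namespace>.svc.cluster.local), that everything runs in the default namespace, and that the Ce-Type header matches the trigger filter above; the Ce-Id and Ce-Source values are arbitrary:

kubectl run --rm -it curl-client --image=radial/busyboxplus:curl --restart=Never -- \
    curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
        -H "Ce-Id: 1" \
        -H "Ce-Specversion: 1.0" \
        -H "Ce-Type: seldon.iris-deployment.request" \
        -H "Ce-Source: seldon.examples.streaming.curl" \
        -H "Content-Type: application/json" \
        -d '{"data": {"ndarray": [[1.0, 2.0, 3.0, 4.0]]}}'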

Check our model has received it

We can do this by checking the logs (we can query them through the service name) and confirming that the request has been processed.
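For example, assuming the default <deploymentName>-<predictorName> service naming and that the model container is named after the graph node (simple-iris-model):

kubectl logs svc/iris-deployment-default -c simple-iris-model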

Connect a source to listen to the results of the seldon model

Our Seldon model produces results that are sent back to KNative Eventing.

This means that we can connect other subsequent services through a trigger that filters for those response cloudevents.

First create the service that will print the results

This is just a simple pod that prints all the request data into the console.
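A sketch of such an event display, assuming the upstream KNative event_display image (the exact image path varies between KNative releases) plus a Service in front of it so a trigger can address it; the file name is just an example:

%%writefile ./assets/event-display.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-display
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-display
  template:
    metadata:
      labels:
        app: event-display
    spec:
      containers:
        - name: event-display
          image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
---
apiVersion: v1
kind: Service
metadata:
  name: event-display
spec:
  selector:
    app: event-display
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080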

Now run the event display resources
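Using the file written above:

kubectl apply -f ./assets/event-display.yaml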

Check that the event display has been deployed
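For example:

kubectl rollout status deployment/event-display
kubectl get pods -l app=event-display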

Create trigger for event display

We can now create a trigger that sends all cloudevents of the type and source produced by the Seldon Deployment to our event display pod.
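A sketch of that trigger, assuming the responses are published with the type seldon.<deploymentName>.response (mirroring the request type used earlier); you can also add a source attribute to the filter to narrow it to this deployment, and the names below are just examples:

%%writefile ./assets/event-display-trigger.yaml

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display-trigger
spec:
  broker: default
  filter:
    attributes:
      type: seldon.iris-deployment.response
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display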

Apply that trigger
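Assuming the file name from the sketch above:

kubectl apply -f ./assets/event-display-trigger.yaml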

Check our triggers are correctly set up

We should now see both triggers available.
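For example:

kubectl get triggers

Both triggers should report READY as True.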

Send a couple more requests

We can use the same process we outlined above to send a couple more events.

Visualise the requests that come from the service
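One way to do this is to tail the event display logs; each incoming cloudevent should be printed with its attributes and data:

kubectl logs deploy/event-display --tail=100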
