Kafka Integration
Kafka is a core component of the Seldon Core 2 ecosystem that provides scalable, reliable, and flexible communication for machine learning deployments. It serves as the backbone for building complex inference pipelines, handling high-throughput asynchronous predictions, and integrating with event-driven systems, all key requirements for contemporary enterprise-grade ML platforms.
An inference request is a request sent to a machine learning model to produce a prediction based on input data. It is a core concept in deploying machine learning models in production, where models serve predictions to users or systems in real-time or batch mode.
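As an illustration, the following sketch builds a minimal inference request in the Open Inference Protocol (V2) JSON format that Seldon Core 2 models consume. The input name, shape, and feature values are placeholders, not tied to any particular model:

```python
import json

# Minimal inference request in the Open Inference Protocol (V2) format.
# The tensor name "predict" and the feature values are illustrative only.
inference_request = {
    "inputs": [
        {
            "name": "predict",       # input tensor name expected by the model
            "shape": [1, 4],         # one row of four features
            "datatype": "FP32",
            "data": [[0.1, 0.2, 0.3, 0.4]],
        }
    ]
}

# Serialized to JSON, this is the message body that would be sent to the
# model's HTTP endpoint or, in a Kafka-based pipeline, published to the
# model's input topic for asynchronous processing.
payload = json.dumps(inference_request)
print(payload)
```

In a Kafka-backed pipeline, the same payload is produced to an input topic and the prediction is consumed from a corresponding output topic, which is what enables asynchronous, high-throughput inference.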
To use this feature of Seldon Enterprise Platform, you need to integrate with Kafka. You can do so by either:

* integrating a managed Kafka solution from your cloud provider, or
* deploying Kafka directly within a Kubernetes cluster.
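For the self-hosted option, one common approach is to run Kafka on Kubernetes with the Strimzi operator. The commands below are a sketch of that setup, assuming Helm is installed and the namespace name `kafka` is a free choice; consult your own cluster policies before applying it:

```shell
# Add the Strimzi chart repository and install the operator
# into a dedicated namespace (namespace name is an arbitrary choice).
helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
  --namespace kafka --create-namespace
```

Once the operator is running, a Kafka cluster is created by applying a `Kafka` custom resource, and Seldon components are then pointed at the resulting bootstrap address.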
For more information about setting up a managed Kafka solution, refer to the following sections:

* Configuration examples provides the steps to configure some of the managed Kafka services.
* Securing Kafka provides more information about encryption and authentication.