Kafka Integration
Learn how to set up and configure Kafka for Seldon Core in production environments, including cluster setup and security configuration.
Kafka is a core component of the Seldon Core 2 ecosystem, providing scalable, reliable, and flexible communication for machine learning deployments. It serves as the backbone for building complex inference pipelines, handling high-throughput asynchronous predictions, and integrating with event-driven systems, capabilities that are essential for enterprise-grade ML platforms.
An inference request is a request sent to a machine learning model to make a prediction or inference based on input data. It is a core concept in deploying machine learning models in production, where models serve predictions to users or systems in real-time or batch mode.
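As an illustration, an inference request body in the Open Inference Protocol (V2) format, which Seldon Core 2 supports, might be built like the following sketch. The input name, tensor shape, and data values are hypothetical, not taken from this document:

```python
import json

def build_inference_request(input_name, data):
    """Build a V2-style inference request body for a 2D float tensor.

    The field names ("inputs", "name", "shape", "datatype", "data") follow
    the Open Inference Protocol; the concrete values here are illustrative.
    """
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": [len(data), len(data[0])],
                "datatype": "FP32",
                "data": data,
            }
        ]
    }

# Hypothetical single-row request with four float features.
request_body = build_inference_request("predict", [[1.0, 2.0, 3.0, 4.0]])
payload = json.dumps(request_body)
```

A client would POST this JSON payload to a model or pipeline endpoint; when a pipeline is involved, Seldon Core 2 moves the request and intermediate results between steps over Kafka topics.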
To use inference pipelines in Seldon Core 2, you need to integrate with Kafka. You can integrate Kafka through managed cloud services or by deploying it directly within your Kubernetes cluster.
Securing Kafka provides more information about encryption and authentication.
Configuration examples describes the steps to configure some of the managed Kafka services.
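For orientation, connecting Seldon Core 2 to a managed Kafka service typically means pointing the installation at the external bootstrap servers and enabling authentication. The sketch below shows what such Helm values might look like; the exact keys vary by chart version, so treat these names as assumptions and check the values file of the seldon-core-v2-setup chart you are installing:

```yaml
# Hypothetical Helm values fragment for a managed Kafka cluster.
# Key names are assumptions; verify them against your chart version.
kafka:
  bootstrap: my-managed-kafka.example.com:9093   # illustrative endpoint
security:
  kafka:
    protocol: SASL_SSL        # TLS in transit plus SASL authentication
    sasl:
      mechanism: PLAIN        # or SCRAM-SHA-512 / OAUTHBEARER, per provider
```

Credentials themselves are usually supplied through Kubernetes secrets rather than inline values; see the configuration examples for provider-specific steps.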
Self-hosted Kafka
Seldon Core 2 requires Kafka to implement data-centric inference Pipelines. To install Kafka for testing purposes in your Kubernetes cluster, use the Strimzi Operator. For more information, see Self-hosted Kafka.
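As a rough sketch, installing the Strimzi operator with Helm might look like the following; the release name and namespace are illustrative choices, and you should consult the Strimzi and Seldon Core 2 documentation for versions tested together:

```shell
# Add the official Strimzi chart repository and install the operator
# into a dedicated namespace (names here are illustrative assumptions).
helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
  --namespace kafka --create-namespace
```

Once the operator is running, you create a Kafka cluster by applying a `Kafka` custom resource, and then point Seldon Core 2 at that cluster's bootstrap address. This setup is suitable for testing rather than production.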