Kafka Integration
Kafka is a component in the Seldon Core 2 ecosystem that provides scalable, reliable, and flexible communication for machine learning deployments. It serves as a strong backbone for building complex inference pipelines, managing high-throughput asynchronous predictions, and seamlessly integrating with event-driven systems—key features needed for contemporary enterprise-grade ML platforms.
An inference request is a request sent to a machine learning model to make a prediction or inference based on input data. It is a core concept in deploying machine learning models in production, where models serve predictions to users or systems in real-time or batch mode.
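As an illustration of an inference request, here is a minimal sketch of a request body in the Open Inference Protocol (V2) that Seldon Core 2 models speak; the input name, shape, and data values are illustrative only:

```python
import json

# A minimal Open Inference Protocol (V2) request body.
# The tensor name, shape, and values below are illustrative only.
inference_request = {
    "inputs": [
        {
            "name": "predict",           # input tensor name the model expects
            "datatype": "FP32",          # element type of the tensor
            "shape": [1, 4],             # one row of four features
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

# Serialised, this is the JSON body sent to the model's inference
# endpoint (or carried on a Kafka topic when a pipeline runs asynchronously).
body = json.dumps(inference_request)
print(body)
```

The same payload shape is used whether the request travels over HTTP/gRPC in real time or through Kafka topics in an event-driven pipeline.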
To explore this feature of Seldon Core 2, you need to integrate with Kafka. You can integrate Kafka either through a managed Kafka service or by deploying it yourself in your cluster.
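As a hedged sketch of what pointing Seldon Core 2 at an existing Kafka broker can look like, the Helm values below override the Kafka connection settings at install time. The value keys shown (`kafka.bootstrap`, `kafka.topics.*`) are assumptions; verify them against the values file of the chart version you use:

```yaml
# values.yaml (illustrative; verify key names against your chart's values file)
kafka:
  # Address of an existing Kafka bootstrap service
  bootstrap: my-kafka-bootstrap.kafka.svc.cluster.local:9092
  topics:
    # Topic settings for pipeline topics created by Seldon
    replicationFactor: 1
    numPartitions: 4
```

These values would then be passed with `helm install ... -f values.yaml` when installing the Seldon Core 2 runtime.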
More information about encryption and authentication is available in the Kafka security documentation.
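To give a sense of what securing the Kafka connection involves, here is a sketch of standard Kafka client properties for a broker that requires SASL authentication over TLS; the mechanism (`SCRAM-SHA-512`), username, and file paths are illustrative assumptions that depend on how your broker is configured:

```properties
# Standard Kafka client security settings (values are illustrative)
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="seldon" password="<secret>";
ssl.truststore.location=/etc/kafka/certs/ca.p12
ssl.truststore.password=<secret>
```

In a Kubernetes deployment, credentials like these are typically mounted from a `Secret` rather than written in plain text.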
The managed Kafka documentation provides steps to configure several managed Kafka services.
Seldon Core 2 requires Kafka to implement data-centric inference pipelines. To install Kafka for testing purposes in your Kubernetes cluster, use a Kafka operator or a minimal Kafka deployment.
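One common way to run a test Kafka cluster on Kubernetes is with the Strimzi operator (an assumption here—use whichever installer your environment prefers). Assuming Strimzi is installed, a minimal single-node, ephemeral cluster suitable only for testing might look like:

```yaml
# Minimal test-only Kafka cluster via the Strimzi operator (not for production:
# single replica, ephemeral storage, plaintext listener).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: seldon
  namespace: kafka
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
    config:
      # Single-broker cluster, so replication factors must be 1
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator: {}
```

Applying this with `kubectl apply -f` creates a bootstrap service (e.g. `seldon-kafka-bootstrap.kafka:9092`) that Seldon Core 2 can be pointed at.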