Seldon can be run with secure control plane and data plane operations. There are three areas of concern:
The various communication points between services are shown in the diagram below:
TLS control plane activation is switched on and off via the environment variable CONTROL_PLANE_SECURITY_PROTOCOL, whose values can be PLAINTEXT or SSL.
Certificates will be loaded and used for the control plane gRPC services. The secrets or folders will be watched for updates (on certificate renewal) and automatically loaded again.
When installing seldon-core-v2-setup you can set the secret names for your certificates. If using the cert-manager example discussed below, this would be as follows:
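As an illustration, the Helm values might look like the following. This is a sketch: the key layout under security.controlplane and the secret names are assumptions to be checked against your chart version.

```yaml
security:
  controlplane:
    protocol: SSL
    ssl:
      server:
        secret: seldon-controlplane-server          # server certificate secret (illustrative name)
        clientValidationSecret: seldon-controlplane-client
      client:
        secret: seldon-controlplane-client          # client certificate secret (illustrative name)
        serverValidationSecret: seldon-controlplane-server
```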
Kafka secure activation is switched on and off via the environment variable KAFKA_SECURITY_PROTOCOL, whose values can be PLAINTEXT, SSL, or SASL_SSL.
Examples are shown below:
mTLS Strimzi example
mTLS AWS MSK example
SASL PLAIN with Confluent Cloud example
SASL PLAIN with Azure Event Hub example
SASL SCRAM with Strimzi example
SASL SCRAM with AWS MSK example
SASL OAUTH with Confluent Cloud example
TLS data plane activation is switched on and off via the environment variable ENVOY_SECURITY_PROTOCOL, whose values can be PLAINTEXT or SSL.
When activated this ensures TLS is used to communicate with Envoy via the xDS server, and that the SDS service is used to send certificates to Envoy for upstream and downstream networking. Downstream is the external access to Seldon, and upstream is the path from Envoy to the model servers or pipeline gateway.
When installing seldon-core-v2-setup you can set data plane operations to TLS as below. This assumes the secrets are installed by the Helm chart at the end of this section.
The above uses default secret names defined for the certificates installed. You can change the names of the required certificate secrets as shown in a longer configuration below (again using the default names for illustration).
For this we use the following updated Helm values (k8s/samples/values-tls-dataplane-example.yaml):
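A sketch of what such values could contain; the key paths and secret names are assumptions, so compare with the sample file shipped in the repo.

```yaml
security:
  envoy:
    protocol: SSL
    ssl:
      upstream:
        server:
          secret: seldon-upstream-server        # certificate presented by model servers / pipeline gateway
        client:
          secret: seldon-upstream-client        # Envoy's client certificate for mTLS to upstreams
      downstream:
        server:
          secret: seldon-downstream-server      # certificate for the external entrypoint
```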
We use Envoy internally to direct traffic. In Envoy's terminology, upstream refers to the internal model servers called from Envoy, while the downstream server is the entrypoint server running in Envoy that receives gRPC and REST calls. The above settings ensure mTLS for internal "upstream" traffic while providing a standard SSL, non-mTLS entrypoint.
To use the above with the Seldon CLI you would need a custom config file as follows:
We skip SSL verification as these are internal self-signed certificates. For production use you would change this to the correct DNS name under which you expose the Seldon entrypoint.
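A minimal sketch of such a CLI config file; the field names here are assumptions and may differ between CLI versions, so check against your installed CLI's documentation.

```json
{
  "dataplane": {
    "tls": true,
    "skipSSLVerify": true
  }
}
```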
The installer/cluster controller for Seldon needs to provide the certificates. As part of Seldon we provide an example set of certificate issuers and certificates using cert-manager.
You can install certificates into the desired namespace; here we use seldon-mesh as an example.
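For illustration, a cert-manager Certificate for one of these secrets might look like the following; the names, DNS entries, and issuer are assumptions for your environment.

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: seldon-controlplane-server        # illustrative name
  namespace: seldon-mesh
spec:
  secretName: seldon-controlplane-server  # secret the certificate is written to
  issuerRef:
    name: seldon-issuer                   # assumes a cert-manager Issuer exists in the namespace
    kind: Issuer
  dnsNames:
    - seldon-scheduler.seldon-mesh.svc.cluster.local
  duration: 8760h                         # 1 year
  privateKey:
    algorithm: RSA
    size: 2048
```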
New in Seldon Core 2.7.0
Seldon Core 2 can integrate with Confluent Cloud managed Kafka. In this example we use OAuth.
In your Confluent Cloud Console go to and register your Identity Provider.
See Confluent Cloud for further details.
In your Confluent Cloud Console go to and add new identity pool to your newly registered Identity Provider.
See Confluent Cloud for further details.
Seldon Core 2 expects the OAuth credentials to be provided in the form of a Kubernetes Secret.
You will need the following information from Confluent Cloud:
Cluster ID: Cluster Overview → Cluster Settings → General → Identification
Identity Pool ID: Accounts & access → Identity providers → <specific provider details>
The client ID, client secret, and token endpoint URL should come from your identity provider, e.g. Keycloak or Azure AD.
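As a sketch, such a Secret could look like the following; the key names are assumptions based on a typical OAUTHBEARER setup, so check them against your installed chart.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: confluent-kafka-oauth             # illustrative name
  namespace: seldon-mesh
type: Opaque
stringData:
  method: OIDC
  client_id: <client id from your identity provider>
  client_secret: <client secret from your identity provider>
  token_endpoint_url: https://<idp host>/token
  extensions: logicalCluster=<cluster id>,identityPoolId=<identity pool id>
```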
Configure Seldon Core 2 by setting the following Helm values:
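A sketch of the relevant values; the key paths are assumptions, and the secret name refers to the Kubernetes Secret holding your OAuth credentials.

```yaml
security:
  kafka:
    protocol: SASL_SSL
    sasl:
      mechanism: OAUTHBEARER
      client:
        secret: confluent-kafka-oauth   # Secret holding the OAuth credentials (illustrative name)
kafka:
  bootstrap: <bootstrap endpoint from Confluent Cloud>
  topics:
    replicationFactor: 3
    numPartitions: 4
```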
Note you may need to tweak replicationFactor and numPartitions to match your cluster configuration.
Set the kafka config map debug setting to all. For a Helm install you can set kafka.debug=all.
Seldon will run with mTLS.
At present we support mTLS authentication to MSK, which can be run from a Kubernetes cluster inside or outside Amazon. If running outside, your MSK cluster must have a public endpoint.
If you are running your Kubernetes cluster outside AWS, you will need to create a .
You will need to set up Kafka ACLs for your user, where the username is the CommonName of the client's certificate, and allow full topic access. For example, to give a user with CN=myname full operations using the kafka-acls script with :
You will also need to allow the connecting user to perform admin tasks on the cluster so that topics can be created on demand. Group access must be allowed as well.
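For illustration, with the standard kafka-acls tool the three grants could look like this; the admin.properties file holding your admin client settings and the BROKER variable are assumptions.

```shell
# Grant CN=myname full access to all topics, the cluster, and all groups
kafka-acls.sh --bootstrap-server "$BROKER" --command-config admin.properties \
  --add --allow-principal "User:CN=myname" --operation All --topic '*'
kafka-acls.sh --bootstrap-server "$BROKER" --command-config admin.properties \
  --add --allow-principal "User:CN=myname" --operation All --cluster
kafka-acls.sh --bootstrap-server "$BROKER" --command-config admin.properties \
  --add --allow-principal "User:CN=myname" --operation All --group '*'
```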
Create a secret for the client certificate you created. If you followed the you will need to export your private key from the JKS keystore. The certificate and chain will be provided in PEM format when you get the certificate signed. You can use these to create a secret with:
tls.key: PEM-formatted private key
tls.crt: PEM-formatted certificate
ca.crt: certificate chain
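For example, with kubectl; the secret name aws-msk-client, the namespace, and the file paths are illustrative.

```shell
# Create the client certificate secret from the PEM files
kubectl create secret generic aws-msk-client \
  --from-file=tls.key=./tls.key \
  --from-file=tls.crt=./tls.crt \
  --from-file=ca.crt=./ca.crt \
  -n seldon-mesh
```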
To extract certificates from truststore do:
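One way to do this is to convert the JKS truststore to PKCS12 with keytool and then extract the PEM certificates with openssl; the file names and the PASS variable are illustrative.

```shell
# Convert the JKS truststore to PKCS12 format
keytool -importkeystore -srckeystore kafka.client.truststore.jks \
  -destkeystore truststore.p12 -deststoretype PKCS12 \
  -srcstorepass "$PASS" -deststorepass "$PASS"
# Extract the certificates (no private keys) as PEM
openssl pkcs12 -in truststore.p12 -passin pass:"$PASS" -nokeys -out ca.crt
```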
Add ca.crt to a secret.
We provide a template you can extend in k8s/samples/values-aws-msk-kafka-mtls.yaml.tmpl:
Copy this and modify by adding your broker endpoints.
Set the kafka config map debug setting to all. For a Helm install you can set kafka.debug=all.
If you see an error from the producer in the pipeline gateway complaining about not enough in-sync replicas, then the replication factor Seldon is using is less than the cluster setting for min.insync.replicas, which for a default AWS MSK cluster is 2. Ensure this is equal to that of the cluster. This value can be set in the Helm chart with kafka.topics.replicationFactor.
New in Seldon Core 2.5.0
Seldon Core 2 can integrate with Confluent Cloud managed Kafka. In this example we use the SASL/PLAIN security mechanism.
In your Confluent Cloud environment create new API keys. The easiest way to obtain all the required information is to head to Clients -> New client (choose e.g. Go) and generate a new Kafka cluster API key from there.
This will generate for you:
Key (we use it as the username)
Secret (we use it as the password)
Do not forget to also copy the bootstrap.servers from the example config.
See the Confluent Cloud documentation in case of issues.
Seldon Core 2 expects the password to be in the form of a Kubernetes Secret.
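For example, with kubectl; the secret name confluent-kafka-sasl and the namespace are illustrative.

```shell
# Store the Confluent Cloud API secret as the SASL password
kubectl create secret generic confluent-kafka-sasl \
  --from-literal=password='<API key secret from Confluent Cloud>' \
  -n seldon-mesh
```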
Configure Seldon Core 2 by setting the following Helm values:
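A sketch of the relevant values; the key paths are assumptions. The username is the API key, and the secret holds the API secret as the password.

```yaml
security:
  kafka:
    protocol: SASL_SSL
    sasl:
      mechanism: PLAIN
      client:
        username: <API key>
        secret: confluent-kafka-sasl    # Secret holding the password (illustrative name)
kafka:
  bootstrap: <bootstrap.servers from the example config>
```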
You may need to tweak replicationFactor and numPartitions to match your cluster configuration.
Set the kafka config map debug setting to all. For a Helm install you can set kafka.debug=all.
First check Confluent Cloud .
Create a secret for the broker certificate. If following the you will need to export the truststore of Amazon into PEM format and save it as ca.crt.
New in Seldon Core 2.5.0
Seldon Core 2 can integrate with Amazon managed Apache Kafka (MSK). You can control access to your Amazon MSK clusters using sign-in credentials that are stored and secured using AWS Secrets Manager. Storing user credentials in Secrets Manager reduces the overhead of cluster authentication such as auditing, updating, and rotating credentials. Secrets Manager also lets you share user credentials across clusters.
Configuration of the AWS MSK instance itself is out of scope for this example. Please follow the official AWS documentation on how to enable SASL and public access to the Kafka cluster (if required).
To setup SASL/SCRAM in an Amazon MSK cluster, please follow the guide from Amazon's Official documentation.
Do not forget to also copy the bootstrap.servers, which we will use in our Seldon configuration later below.
Seldon Core 2 expects the password to be in the form of a Kubernetes Secret.
Configure Seldon Core 2 by setting the following Helm values:
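A sketch of the relevant values; the key paths and the secret name are assumptions for your environment.

```yaml
security:
  kafka:
    protocol: SASL_SSL
    sasl:
      mechanism: SCRAM-SHA-512
      client:
        username: <SCRAM username stored in Secrets Manager>
        secret: aws-msk-kafka-secret    # Secret holding the SCRAM password (illustrative name)
kafka:
  bootstrap: <bootstrap.servers from MSK>
```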
Note you may need to tweak replicationFactor and numPartitions to match your cluster configuration.
Please check Amazon MSK Troubleshooting documentation.
Set the kafka config map debug setting to all. For a Helm install you can set kafka.debug=all.
If you have installed Strimzi, we have an example Helm chart to create a Kafka cluster for Seldon and an associated user in the kafka/strimzi folder. Ensure tls is enabled with:
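In Strimzi's Kafka custom resource this corresponds to a listener with TLS and mutual-TLS client authentication, e.g. as a fragment of spec.kafka:

```yaml
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls      # brokers require client certificates (mTLS)
```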
The Ansible setup-ecosystem playbook will also install Strimzi and include an mTLS endpoint. See here.
Create a Kafka user seldon in the namespace Seldon was installed in. This assumes the Strimzi Kafka cluster is installed in the same namespace or is running with cluster-wide permissions. Our Ansible scripts to set up the ecosystem will also create this user if TLS is active.
If you don't have this user, you can install it with in your desired namespace (here seldon-mesh):
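For illustration, a minimal KafkaUser for TLS authentication looks like this; the strimzi.io/cluster label must match your Kafka cluster name, assumed here to be seldon.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: seldon
  namespace: seldon-mesh
  labels:
    strimzi.io/cluster: seldon    # must match the Kafka cluster name
spec:
  authentication:
    type: tls
```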
Install Seldon with the Strimzi certificate secrets using a custom values file. This sets the secret created by Strimzi for the user created above (seldon) and targets the server certificate authority secret from the name of the cluster created on install of the Kafka cluster (seldon-cluster-ca-cert).
Configure Seldon Core 2 by setting the following Helm values:
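A sketch of the relevant values; the key paths are assumptions, and the secrets referenced are those created by Strimzi.

```yaml
security:
  kafka:
    protocol: SSL
    ssl:
      client:
        secret: seldon                                   # KafkaUser secret with the client certificate
        brokerValidationSecret: seldon-cluster-ca-cert   # cluster CA secret created by Strimzi
```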
You can now go ahead and install a SeldonRuntime in your desired install namespace (here seldon-mesh), e.g.
Create a Strimzi Kafka cluster with SASL_SSL enabled. This can be done with our Ansible scripts by running the following from the ansible/ folder:
The referenced SASL/SCRAM YAML file looks like the below:
This uses the Strimzi cluster Helm chart provided by the Seldon Core 2 project, with overrides for the cluster authentication type, and also creates a user seldon with password credentials in a Kubernetes Secret.
Install Seldon Core 2 with SASL settings using a custom values file. This sets the secret created by Strimzi for the user created above (seldon) and targets the server certificate authority secret from the name of the cluster created on install of the Kafka cluster (seldon-cluster-ca-cert).
Configure Seldon Core 2 by setting the following Helm values:
New in Seldon Core 2.5.0
Seldon Core 2 can integrate with Azure Event Hub via Kafka protocol.
You will need at least the Standard tier for your Event Hub Namespace, as the Basic tier does not support the Kafka protocol.
Seldon Core 2 creates two Kafka topics for each pipeline and model, plus one global topic for errors. This means the total number of topics will be 2 x (#models + #pipelines) + 1, which will likely exceed the limit of the Standard tier in Azure Event Hub. You can find more information on quotas, such as the number of partitions per Event Hub, here.
To start you will need an Azure Event Hub Namespace. You can create one following the Azure quickstart docs. Note that you do not need to create an Event Hub (topics), as Seldon Core 2 will create all the topics it needs automatically.
To connect to Azure Event Hub provided Kafka API you need to obtain:
Kafka Endpoint
Connection String
You can obtain both using Azure Portal as documented here.
You should get the Connection String at the namespace level, as we will need to dynamically create new topics.
The Connection String should be in the format of:
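An Azure Event Hub namespace-level Connection String typically has this shape:

```
Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key name>;SharedAccessKey=<key value>
```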
Seldon Core 2 expects the password to be in the form of a Kubernetes Secret.
Configure Seldon Core 2 by setting the following Helm values:
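A sketch of the relevant values; the key paths and the secret name are assumptions. Note the literal $ConnectionString username.

```yaml
security:
  kafka:
    protocol: SASL_SSL
    sasl:
      mechanism: PLAIN
      client:
        username: $ConnectionString     # literal value, not a variable to replace
        secret: azure-kafka-secret      # Secret holding the Connection String as the password
kafka:
  bootstrap: <namespace>.servicebus.windows.net:9093
```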
Note you may need to tweak replicationFactor and numPartitions to match your cluster configuration. The username should read $ConnectionString literally; this is not a variable for you to replace.
First check the Azure Event Hub troubleshooting guide.
Set the kafka config map debug setting to all. For a Helm install you can set kafka.debug=all.
Verify that you did not hit quotas for topics or partitions in your Event Hub namespace.
Kubernetes Secrets and mounted files can be used to provide the certificates in PEM format. These are controlled by environment variables for the server or client, depending on the component:
CONTROL_PLANE_SECURITY_PROTOCOL
SSL or PLAINTEXT
For a server (scheduler):
CONTROL_PLANE_SERVER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the certificates
CONTROL_PLANE_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots to verify the client certificate
CONTROL_PLANE_SERVER_TLS_KEY_LOCATION
the path to the TLS private key
CONTROL_PLANE_SERVER_TLS_CRT_LOCATION
the path to the TLS certificate
CONTROL_PLANE_SERVER_TLS_CA_LOCATION
the path to the TLS CA chain for the server
CONTROL_PLANE_CLIENT_TLS_CA_LOCATION
the path to the TLS CA chain for the client for mTLS verification
For a client (agent, modelgateway, hodometer, CRD controller):
CONTROL_PLANE_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the certificates
CONTROL_PLANE_SERVER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots to verify the server certificate
CONTROL_PLANE_CLIENT_TLS_KEY_LOCATION
the path to the TLS private key
CONTROL_PLANE_CLIENT_TLS_CRT_LOCATION
the path to the TLS certificate
CONTROL_PLANE_CLIENT_TLS_CA_LOCATION
the path to the TLS CA chain for the client
CONTROL_PLANE_SERVER_TLS_CA_LOCATION
the path to the TLS CA chain for the server for mTLS verification
KAFKA_SECURITY_PROTOCOL
PLAINTEXT, SSL, or SASL_SSL
KAFKA_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the Kafka client certificate
KAFKA_CLIENT_SERVER_TLS_KEY_LOCATION
the path to the TLS private key
KAFKA_CLIENT_SERVER_TLS_CRT_LOCATION
the path to the TLS certificate
KAFKA_CLIENT_SERVER_TLS_CA_LOCATION
the path to the CA chain for the client
KAFKA_BROKER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots for the kafka broker
KAFKA_BROKER_TLS_CA_LOCATION
the path to the broker validation CA chain
KAFKA_CLIENT_SASL_USERNAME
SASL username
KAFKA_CLIENT_SASL_SECRET_NAME
the name of the namespaced secret which holds the SASL password
KAFKA_CLIENT_SASL_PASSWORD_LOCATION
the path to the file containing the SASL password
Envoy xDS server will use the control plane server and client certificates defined above.
Downstream server
ENVOY_DOWNSTREAM_SERVER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the certificates
ENVOY_DOWNSTREAM_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots to verify the client certificate
ENVOY_DOWNSTREAM_SERVER_TLS_KEY_LOCATION
the path to the TLS private key
ENVOY_DOWNSTREAM_SERVER_TLS_CRT_LOCATION
the path to the TLS certificate
ENVOY_DOWNSTREAM_SERVER_TLS_CA_LOCATION
the path to the TLS CA chain for the server
ENVOY_DOWNSTREAM_CLIENT_TLS_CA_LOCATION
the path to the TLS CA chain for the client for mTLS verification
Downstream client
ENVOY_DOWNSTREAM_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the certificates
ENVOY_DOWNSTREAM_SERVER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots to verify the server certificate
ENVOY_DOWNSTREAM_CLIENT_TLS_KEY_LOCATION
the path to the TLS private key
ENVOY_DOWNSTREAM_CLIENT_TLS_CRT_LOCATION
the path to the TLS certificate
ENVOY_DOWNSTREAM_CLIENT_TLS_CA_LOCATION
the path to the TLS CA chain for the client
ENVOY_DOWNSTREAM_SERVER_TLS_CA_LOCATION
the path to the TLS CA chain for the server for mTLS verification
Upstream server
ENVOY_UPSTREAM_SERVER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the certificates
ENVOY_UPSTREAM_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots to verify the client certificate
ENVOY_UPSTREAM_SERVER_TLS_KEY_LOCATION
the path to the TLS private key
ENVOY_UPSTREAM_SERVER_TLS_CRT_LOCATION
the path to the TLS certificate
ENVOY_UPSTREAM_SERVER_TLS_CA_LOCATION
the path to the TLS CA chain for the server
ENVOY_UPSTREAM_CLIENT_TLS_CA_LOCATION
the path to the TLS CA chain for the client for mTLS verification
Upstream client
ENVOY_UPSTREAM_CLIENT_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the certificates
ENVOY_UPSTREAM_SERVER_TLS_SECRET_NAME
(optional) the name of the namespaced secret which holds the validation ca roots to verify the server certificate
ENVOY_UPSTREAM_CLIENT_TLS_KEY_LOCATION
the path to the TLS private key
ENVOY_UPSTREAM_CLIENT_TLS_CRT_LOCATION
the path to the TLS certificate
ENVOY_UPSTREAM_CLIENT_TLS_CA_LOCATION
the path to the TLS CA chain for the client
ENVOY_UPSTREAM_SERVER_TLS_CA_LOCATION
the path to the TLS CA chain for the server for mTLS verification