Elasticsearch

Elasticsearch Installation

Note: Elasticsearch is an external component outside of the main Seldon stack. Therefore, it is the cluster administrator's responsibility to administer and manage the Elasticsearch instance used by Seldon. Seldon Enterprise Platform does not support OpenSearch as an alternative to Elasticsearch.

Initial Configuration

Copy the default Fluentd config

cp ./seldon-deploy-install/reference-configuration/efk/values-fluentd.yaml fluentd-values.yaml

As the starting Fluentd configuration is crafted for the Open Distro distribution of Elasticsearch, you need to modify the elasticsearch section in the fluentd-values.yaml file:

elasticsearch:
  hosts: ['elasticsearch-master.seldon-logs.svc.cluster.local']
  logstash:
    enabled: true
    prefix: 'kubernetes_cluster'
  auth:
    enabled: false
  scheme: "http"
  sslVerify: false

Ensure Required Namespaces Exist

We'll be installing in the seldon-logs namespace. We'll also set up some config in the seldon-system namespace.

kubectl create namespace seldon-logs || echo "namespace seldon-logs exists"
kubectl create namespace seldon-system || echo "namespace seldon-system exists"

Elasticsearch

Elasticsearch can be installed using Elastic Cloud on Kubernetes (ECK). ECK can be installed using Helm:

helm repo add elastic https://helm.elastic.co
helm repo update
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace --version=2.11.1

Then, create an Elasticsearch instance called seldon in the seldon-logs namespace using the following script:

cat << EOF > elasticsearch.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: seldon
  namespace: seldon-logs
spec:
  nodeSets:
    - config:
        node.store.allow_mmap: false
      count: 3
      name: default
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              resources:
                limits:
                  memory: 2Gi
                requests:
                  cpu: "1"
                  memory: 2Gi
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 8Gi
            storageClassName: standard
  version: 7.17.18
EOF

kubectl apply -f elasticsearch.yaml
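Before moving on, it can be worth confirming that the cluster reaches a green health status. A possible check (the StatefulSet name seldon-es-default follows ECK's <cluster-name>-es-<nodeSet-name> convention):

```shell
# The HEALTH column should eventually report "green".
kubectl get elasticsearch seldon -n seldon-logs

# Optionally block until all Elasticsearch pods are ready.
kubectl rollout status statefulset/seldon-es-default -n seldon-logs --timeout=300s
```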

Note: Currently, we guarantee compatibility with Elasticsearch 7.X. Compatibility with Elasticsearch 8.X is not guaranteed.

Authentication

Security is managed by the ECK operator and cannot be disabled.

The operator will create a secret with the credentials for the elastic user.

We can use this password to provide credentials to other components that need to access Elasticsearch.

To do this, we generate secrets in the seldon-logs (for the request logger) and seldon-system (for Seldon Enterprise Platform) namespaces from the elastic user's password:

ELASTIC_USERNAME=$(echo -n elastic | base64)
ELASTIC_PASSWORD=$(kubectl get secret seldon-es-elastic-user -n seldon-logs -o go-template='{{.data.elastic}}')
cat << EOF > elastic-credentials.yaml
apiVersion: v1
data:
  password: ${ELASTIC_PASSWORD}
  username: ${ELASTIC_USERNAME}
kind: Secret
metadata:
  name: elastic-credentials
type: Opaque
EOF

kubectl apply -f elastic-credentials.yaml -n seldon-logs
kubectl apply -f elastic-credentials.yaml -n seldon-system

Fluentd

We need to modify the fluentd-values.yaml file to point to the Elasticsearch instance we just created, as well as set the appropriate credentials. Retrieve the ELASTIC_PASSWORD:

export ELASTIC_PASSWORD=$(kubectl get secret elastic-credentials -n seldon-logs -o go-template='{{.data.password | base64decode}}')

Make a copy of the values-fluentd.yaml file:

cp values-fluentd.yaml values-elasticsearch-fluentd.yaml

Update the following values in values-elasticsearch-fluentd.yaml:

elasticsearch:
  auth:
    user: "elastic"
    password: <ELASTIC_PASSWORD>
  hosts:
    - seldon-es-http.seldon-logs.svc.cluster.local

We can then install Fluentd using Helm:

helm upgrade --install fluentd fluentd-elasticsearch \
    --version 10.0.1 \
    --namespace seldon-logs -f values-elasticsearch-fluentd.yaml \
    --repo https://kokuwaio.github.io/helm-charts
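Once the chart installs, it is worth checking that the Fluentd pods are running and connecting to Elasticsearch without authentication errors. The label selector below assumes the chart's default app.kubernetes.io/name label:

```shell
# One Fluentd pod should be scheduled per node.
kubectl get pods -n seldon-logs -l app.kubernetes.io/name=fluentd-elasticsearch

# Tail recent logs and look for connection or 401 errors against Elasticsearch.
kubectl logs -n seldon-logs -l app.kubernetes.io/name=fluentd-elasticsearch --tail=20
```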

Kibana (optional)

Kibana is useful for creating visualizations and dashboards for Elasticsearch. It is not required for Seldon Enterprise Platform; however, users may choose to install it for debugging purposes.

As we are using ECK, we can install Kibana with the following script:

cat << EOF > kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: seldon
  namespace: seldon-logs
spec:
  version: 7.17.18
  count: 1
  elasticsearchRef:
    name: seldon
    namespace: seldon-logs
EOF

kubectl apply -f kibana.yaml
kubectl rollout status deployment/seldon-kb -n seldon-logs
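Until ingress is configured, Kibana can be reached through a port-forward. ECK exposes it via a Service named <kibana-name>-kb-http, so seldon-kb-http here; note that Kibana serves HTTPS with a self-signed certificate by default:

```shell
# Forward local port 5601 to the Kibana service created by ECK.
kubectl port-forward -n seldon-logs service/seldon-kb-http 5601:5601

# Then browse to https://localhost:5601 and log in as the "elastic" user.
```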

Configure Seldon Enterprise Platform

The following Helm values need to be set in install-values.yaml.

If you did not install Knative Eventing, requestLogger.trigger.create has to be set to false.

requestLogger:
  create: true
  elasticsearch:
    host: seldon-es-http.seldon-logs.svc.cluster.local
    port: "9200"
    protocol: http
  trigger:
    create: true # false if not using Knative

elasticsearch:
  url: http://seldon-es-http.seldon-logs.svc.cluster.local:9200
  basicAuth: true
  secret:
    name: "elastic-credentials"
    userKey: "username"
    passwordKey: "password"

As the Elasticsearch instance has authentication enabled, we set elasticsearch.basicAuth to true.

We also provide the name of the secret containing the elastic user's credentials in the elasticsearch.secret section.
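To confirm that the credentials in the secret actually work against the cluster, one option is a throwaway curl pod (the pod name es-check and the curlimages/curl image are arbitrary choices):

```shell
# Read the password back from the secret created earlier.
ELASTIC_PASSWORD=$(kubectl get secret elastic-credentials -n seldon-logs \
  -o go-template='{{.data.password | base64decode}}')

# Query the cluster health endpoint using basic auth;
# the response should include "status":"green".
kubectl run es-check -n seldon-logs --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s -u "elastic:${ELASTIC_PASSWORD}" \
  http://seldon-es-http.seldon-logs.svc.cluster.local:9200/_cluster/health
```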

Authorization

The Seldon Enterprise Platform setup needs authorization on the Elasticsearch cluster to create, manage, and search indexes for prediction logging and other monitoring features. The following security privileges are mandatory for the current Seldon Enterprise Platform features to function properly. Read more about security privileges in the Elasticsearch documentation.

Seldon Enterprise Platform user security privileges

| Elasticsearch Privileges | Privilege Level | Index-pattern(s) |
| --- | --- | --- |
| monitor | Cluster | NA |
| index | Index | inference-log-* |
| index | Index | reference-log-* |
| index | Index | drift-log-* |
| read | Index | inference-log-* |
| read | Index | reference-log-* |
| read | Index | drift-log-* |
| read | Index | kubernetes_cluster-* |
| read | Index | * |

Seldon Request Logger user security privileges

| Elasticsearch Privileges | Privilege Level | Index-pattern(s) |
| --- | --- | --- |
| monitor | Cluster | NA |
| create_index | Index | inference-log-*, reference-log-*, drift-log-* |
| index | Index | inference-log-*, reference-log-*, drift-log-* |
| read | Index | inference-log-*, reference-log-*, drift-log-* |
| write | Index | inference-log-*, reference-log-*, drift-log-* |
| manage | Index | inference-log-*, reference-log-*, drift-log-* |
| bulk | Index | inference-log-*, reference-log-*, drift-log-* |
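If you prefer a dedicated user over the built-in elastic superuser, the request-logger privileges above can be expressed as a role through Elasticsearch's security API. A sketch, assuming the role name seldon-request-logger and in-cluster access to the Elasticsearch service, with ELASTIC_PASSWORD set as retrieved earlier (the "write" privilege also covers bulk requests):

```shell
# Role definition mirroring the request-logger privileges table.
cat << 'EOF' > request-logger-role.json
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["inference-log-*", "reference-log-*", "drift-log-*"],
      "privileges": ["create_index", "index", "read", "write", "manage"]
    }
  ]
}
EOF

# Create the role (run where the service DNS name is resolvable,
# e.g. from a pod, or adapt the URL to a port-forward).
curl -s -u "elastic:${ELASTIC_PASSWORD}" \
  -X PUT "http://seldon-es-http.seldon-logs.svc.cluster.local:9200/_security/role/seldon-request-logger" \
  -H 'Content-Type: application/json' -d @request-logger-role.json \
  || echo "Elasticsearch not reachable from here"
```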

Configure EFK Ingress (Optional)

Kibana

It can be useful to access Kibana's UI without having to port-forward.

To expose Kibana externally, it needs to be served under its own path.

This means we need to modify our kibana.yaml to include extra spec.config and spec.http sections:

spec:
  config:
    server.basePath: /kibana
  http:
    tls:
      selfSignedCertificate:
        disabled: true

The server.basePath setting is required because Kibana is served behind a proxy under the /kibana path rather than at the root of the URL.

The tls section is required to disable the self-signed certificate that Kibana uses by default, so that we can use our own certificate.

Then reapply the kibana.yaml:

kubectl apply -f kibana.yaml

Next, configure an Istio VirtualService. The following VirtualService for Kibana, created in kibana-vs.yaml, enables its ingress:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kibana
  namespace: seldon-logs
spec:
  gateways:
  - istio-system/seldon-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /kibana/
    - uri:
        prefix: /kibana
    rewrite:
      uri: /
    route:
    - destination:
        host: seldon-kb-http
        port:
          number: 5601

Apply the configuration using the command kubectl apply -f kibana-vs.yaml.

You can access Kibana at <your-ingress>/kibana.
