Infrastructure prerequisites

This section details the infrastructure prerequisites required to deploy kdb Insights Enterprise on the Kubernetes container orchestration system.

Managed Kubernetes cluster

kdb Insights Enterprise currently supports the managed Kubernetes offerings below.

The Kubernetes cluster should have at least one node pool with a minimum node count of three to support replication in the rook-ceph distributed storage system (see below).
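
As an illustration, the sketch below shows one way to express the minimum node count for a cluster created with eksctl on AWS EKS; the cluster name, region, node group name, and instance type are placeholders, not recommended values, and the other managed Kubernetes offerings have equivalent node pool settings.

```yaml
# Minimal eksctl sketch (AWS EKS assumed; names, region and instance type are illustrative).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: insights-cluster        # hypothetical cluster name
  region: eu-west-1             # hypothetical region
nodeGroups:
  - name: insights-nodes        # hypothetical node group name
    instanceType: m5.2xlarge    # size to your workload
    minSize: 3                  # at least three nodes for rook-ceph replication
    desiredCapacity: 3
    maxSize: 6
```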

Ingress Controller

An ingress controller such as ingress-nginx is required to access the kdb Insights Enterprise dashboards and APIs from outside the cluster.

To use the NGINX Ingress Controller, a valid SSL certificate is required for the ingress endpoint. For details on how certificates are used in kdb Insights Enterprise, see here.
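
For reference only, the sketch below shows the general shape of a Kubernetes Ingress that terminates TLS through the NGINX Ingress Controller. The hostname, secret, and backend service names are hypothetical placeholders, not values used by kdb Insights Enterprise.

```yaml
# Illustrative Ingress sketch (hypothetical names; assumes the NGINX Ingress Controller
# and a TLS certificate stored in the insights-tls secret).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: insights-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # optional: have cert-manager issue the certificate
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - insights.example.com
      secretName: insights-tls
  rules:
    - host: insights.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: insights-gateway    # hypothetical backend service
                port:
                  number: 80
```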

Certificate Manager

The cert-manager installation is required to add certificates and certificate issuers as resource types in the Kubernetes cluster.

Each deployment of kdb Insights Enterprise creates a namespaced certificate issuer to provide mTLS between microservices.

A ClusterIssuer such as letsencrypt can be used with the NGINX Ingress Controller above to provide a certificate for the API endpoints.
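
As a hedged example, a Let's Encrypt ClusterIssuer that solves ACME HTTP-01 challenges through the NGINX Ingress Controller typically looks like the sketch below; the issuer name, secret name, and email address are placeholders.

```yaml
# Illustrative Let's Encrypt ClusterIssuer (names and email are placeholders).
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # replace with a contact address you own
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                  # solve challenges via the NGINX Ingress Controller
```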

Distributed storage system

The data tier in kdb Insights Enterprise requires a shared filesystem such as rook-ceph which can be mounted with read/write permission from multiple pods.

Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for the Ceph distributed storage system.

kdb Insights Enterprise supports the following rook-ceph configurations, with a replication factor of 3 requiring at least three worker nodes (a minimal replication sketch follows the list below).

  • Host Storage Cluster

When using a Host Storage Cluster configuration, Rook configures Ceph to store data directly on the host. This requires nodes with locally attached SSD volumes.

This configuration provides the best rook-ceph performance, but to achieve a higher level of resiliency it is preferable to separate rook-ceph nodes from application nodes. This can be done using labels, taints, and tolerations. See Segregating Ceph From User Applications.

NB: some CSPs present locally attached SSDs with a pre-existing filesystem. To use these SSDs with rook-ceph, the filesystem must be removed. See Zapping Devices.

  • PVC Cluster

When using a PVC Cluster configuration, Ceph persistent data is stored on volumes requested from a storage class of your choice. The storage can be external to the nodes, providing increased volume resiliency at the cost of reduced performance.
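
With either configuration, the shared filesystem is backed by Ceph pools that replicate data three ways. The following is a minimal, illustrative sketch of a CephFilesystem with 3-way replication, assuming the rook-ceph operator runs in the rook-ceph namespace; the filesystem and pool names are placeholders rather than the values used by a kdb Insights Enterprise install.

```yaml
# Illustrative CephFilesystem with 3-way replication (placeholder names; assumes the
# rook-ceph operator is installed in the rook-ceph namespace).
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: shared-fs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3            # three copies, hence at least three worker nodes
  dataPools:
    - name: data0
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
```

A CephFS-backed StorageClass created from this filesystem can then provide the ReadWriteMany volumes that the data tier mounts from multiple pods.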

Network File Storage

kdb Insights Enterprise requires shared file storage for the package manager and currently supports the CSP-specific Network File Storage offerings below.

NB: kdb Insights Enterprise requires a StorageClass named sharedfiles to provision shared file storage instances.
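
For example, on AWS with the EFS CSI driver installed, a sharedfiles StorageClass might look like the sketch below; the filesystem ID is a placeholder, and the other CSPs' Network File Storage offerings use their respective file-storage CSI drivers instead.

```yaml
# Illustrative sharedfiles StorageClass (AWS EFS CSI driver assumed; fileSystemId is a placeholder).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sharedfiles
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap            # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0  # replace with your EFS filesystem ID
  directoryPerms: "700"
```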

DNS record which points to your Kubernetes Ingress

To access your cluster, create a DNS record that resolves to the external IP address of the cluster’s NGINX Ingress Controller. For more information, see DNS Setup.