This section configures kdb Insights Enterprise availability properties. Availability describes how tolerant the application is to failures, whether they originate within the application itself or in the underlying infrastructure.
Kubernetes provides applications with control over where their pods will run through affinities and anti-affinities. kdb Insights Enterprise allows you to tune these settings as required.
By default, the application uses hard anti-affinities to provide node-level resilience for all services: pod replicas are scheduled onto different nodes to protect against node failures. The hard component guarantees that no two pods of the same service are ever scheduled on the same node.
There are four preset affinity types supported by the application:

- `hard`: the default setting for all services. Pods are only scheduled on nodes that do not already have a pod of the same service type running on them. If no node satisfies the anti-affinity, the pod is not scheduled.
- `soft`: similar to `hard`, but two or more pods may be scheduled to the same node if the anti-affinity cannot otherwise be satisfied.
- `hard-az` and `soft-az`: match the behaviors above, but apply to scheduling across availability zones rather than individual nodes.

If auto-scaling of the cluster node pool is enabled, a hard anti-affinity may cause the cluster to scale up to satisfy the scheduling requirements.
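In Kubernetes terms, hard presets map to required anti-affinity rules and soft presets to preferred ones. The following is an illustrative sketch only, not the exact spec the application renders; it reuses the `insights.kx.com/serviceName` label from the custom-affinity example in this section, with an illustrative service name:

```yaml
# Sketch: how "hard" vs "soft" presets map onto Kubernetes pod anti-affinity
podAntiAffinity:
  # hard: scheduling fails rather than co-locating two pods of the service
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
          - key: "insights.kx.com/serviceName"
            operator: In
            values: ["dap"]            # illustrative service name
      topologyKey: "kubernetes.io/hostname"
  # soft: prefer separate nodes, but co-locate if no node satisfies the rule
  preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
            - key: "insights.kx.com/serviceName"
              operator: In
              values: ["dap"]
        topologyKey: "kubernetes.io/hostname"
  # the *-az presets use topologyKey: "topology.kubernetes.io/zone" instead
```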
To configure services with one of the preset anti-affinities, override the default behavior in your install values file; for example, the information-service and dap components can each be given their own preset.
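Such an override might look like the following. This is a minimal sketch that assumes each component chart exposes an `affinity` value under its own key; the exact key names may differ in your chart version:

```yaml
# install values file (sketch; key names assumed)
information-service:
  affinity: soft      # allow co-location if the anti-affinity cannot be met

dap:
  affinity: soft-az   # spread across availability zones where possible
```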
If the presets don't suit your needs, you can specify custom affinities and anti-affinities as below. Refer to the Kubernetes documentation for the complete set of options.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "insights.kx.com/serviceName"
              operator: In
              values:
                - dap    # example service name
        topologyKey: "kubernetes.io/hostname"
kdb Insights Enterprise provides support for overprovisioning your cluster to improve scalability and fault tolerance. This involves deploying an additional chart to your cluster, which causes the node pool to scale. Because scaling a cluster can be slow, this chart is provided as a way of pre-scaling the node pool to mitigate that delay.
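The document does not specify the chart's internals. A common overprovisioning pattern, sketched below with all names and values assumed rather than taken from the actual chart, runs low-priority placeholder pods that reserve node capacity; when real workloads need the space, the placeholders are evicted and the autoscaler adds replacement nodes:

```yaml
# Illustrative overprovisioning pattern (names and sizes assumed)
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1                      # lower priority than any real workload
globalDefault: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2                  # amount of headroom to hold in reserve
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: reserve
          image: registry.k8s.io/pause:3.9   # does nothing; only holds resources
          resources:
            requests:
              cpu: "1"         # capacity reserved per placeholder pod
              memory: 1Gi
```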