Persistent storage¶
The kdb+ Helm charts support the creation of Persistent Volumes (PVs). PVs ensure that if a pod is restarted, its persistent disk, provisioned via a Persistent Volume Claim (PVC), remains available to the pod on whichever node in the cluster it is rescheduled to.
Persistent storage is used for communication between the RDB (realtime database) and the HDB (historical database): the RDB writes out its tables at end of day (EOD), which the HDB can then read.
Another use case for persistent storage is having the tickerplant logs available for replay should the tickerplant pod be rescheduled (moved) by Kubernetes.
Configuration¶
Enable PV for the tickerplant:
$ helm install ticker charts/tp \
--set persistence.enabled=true
Enable PV for the RDB:
$ helm install realtime charts/rdb \
--set persistence.enabled=true
In the examples above, setting persistence.enabled=true tells the charts to use Persistent Volumes.
Examine the values file for further configuration options, including capacity and storage class.
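For example, a values-file override might look like the following sketch. The nesting follows the persistence.* flags above, but the capacity key is illustrative, so confirm the exact names against the chart's values file:
# illustrative values override; confirm key names against the chart's values file
persistence:
  enabled: true              # use a Persistent Volume instead of ephemeral storage
  storageClass: nfs-client   # storage class used to provision the volume
  size: 10Gi                 # hypothetical capacity key; check values.yaml for the real name
Such a file can then be passed to helm install with the -f flag instead of repeating --set arguments.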
Storage class configuration¶
The pod's storage class is modified by setting persistence.storageClass in the values file or as a command-line argument.
Complete NFS configuration examples are available in the charts/kx/examples directory.
The example below demonstrates how to configure NFS PVs shared between the tickerplant and the realtime database:
Tickerplant:
$ helm install ticker charts/tp \
--set persistence.enabled=true \
--set persistence.shared.enabled=true \
--set persistence.storageClass=nfs-client
Realtime database:
$ helm install realtime charts/rdb \
--set persistence.enabled=true \
--set persistence.shared.enabled=true \
--set persistence.storageClass=nfs-client
Options:
option                        description                                            default
--------------------------------------------------------------------------------------------
persistence.enabled           enable persistent storage                              false
persistence.shared.enabled    enable NFS                                             false
persistence.storageClass      NFS storage class to use                               nfs-client
tickerplant.tpPVC             existing Persistent Volume Claim (created by the TP)   kx-tp-pvc
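For illustration, the same options can be collected into a values file for the realtime database chart; treat the layout as a sketch and confirm it against the chart's values.yaml:
# sketch of a values file for charts/rdb; confirm layout against values.yaml
persistence:
  enabled: true              # enable persistent storage
  shared:
    enabled: true            # enable NFS sharing
  storageClass: nfs-client   # NFS storage class to use
tickerplant:
  tpPVC: kx-tp-pvc           # existing Persistent Volume Claim created by the TP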
Shared volumes¶
In highly available, enterprise scenarios it is desirable for certain components to share disk. For example, a new member of an autoscaling RDB replicaset might replay its tickerplant log files to ‘catch up’.
The kdb+ Helm charts let you use shared volumes to serve these shared file stores.
The charts themselves are agnostic as to how shared volumes are served. Instead, they are simply configured to use a shared-volume storage class: specifically, a storage class that can provision a volume with a ReadWriteMany (RWX) accessMode.
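For reference, a claim against such a class requests ReadWriteMany in its accessModes; the name, class and capacity below are placeholders only:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs-pvc          # placeholder name
spec:
  accessModes:
    - ReadWriteMany              # RWX: mountable read-write from many nodes
  storageClassName: nfs-client   # any RWX-capable storage class
  resources:
    requests:
      storage: 5Gi               # placeholder capacity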
There are many options for such RWX storage classes, including:
- a cloud-native storage operator, such as rook.io
- a cloud-provider NFS service (AWS EFS, GCP Filestore, etc.) with an NFS client provisioner
- distributed block or parallel filesystem storage: Ceph, Lustre, or GlusterFS
- enterprise cloud filers, such as NetApp Cloud Volumes
- custom-deployed NFS servers and the NFS provisioner
Once you have
- configured your NFS server, e.g. via Rook
- configured a Persistent Volume
you can set:
- global.persistence.enabled=true: instructs the charts to use PVs instead of ephemeral storage (emptyDir)
- global.persistence.shared.enabled=true: allows the sharing of volumes between certain components of the charts
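Equivalently, the same two flags can be carried in a values file; a minimal sketch:
# minimal sketch of the equivalent values-file settings
global:
  persistence:
    enabled: true         # use PVs instead of ephemeral emptyDir storage
    shared:
      enabled: true       # allow volume sharing between chart components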
Configure NFS¶
To use NFS, we must
- provision an NFS server
- install the NFS client provisioner
- configure a Persistent Volume
Provision an NFS server¶
You can configure your own NFS server, or (recommended) use an NFS service provided by your cloud environment.
Many clouds also support ‘enterprise NAS’ offerings, such as NetApp Cloud Volumes, which can provide better performance characteristics than the typical cloud offerings.
When creating your NFS server, be sure to take note of
- the IP address of the NFS server
- the path of the export, e.g. /volumes or /export
These settings are used in the next step.
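If you are defining the Persistent Volume by hand, the IP address and export path plug directly into an NFS-backed PV manifest; the name, capacity and addresses below are placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kx-nfs-pv              # placeholder name
spec:
  capacity:
    storage: 10Gi              # placeholder capacity
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2           # IP address of the NFS server
    path: /export              # path of the export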
Rook NFS share¶
This is a quick example of setting up NFS storage using Rook. It is intended only for demo purposes. The approach follows Rook NFS Quickstart; consult it for more detail.
Deploy the Rook NFS operator.
URI=https://raw.githubusercontent.com/rook/rook/release-1.5/cluster/examples/kubernetes/nfs
kubectl create -f $URI/common.yaml
kubectl create -f $URI/operator.yaml
Check the pod is ready.
kubectl -n rook-nfs-system get pod
Apply the security policy (recommended, but not strictly needed).
kubectl create -f $URI/psp.yaml
Then create the NFS server, starting with the RBAC initialization:
kubectl create -f $URI/rbac.yaml
Provided you already have a default StorageClass (Minikube does), you can now create the server itself:
kubectl create -f $URI/nfs.yaml
If not, or if you want a different underlying storage mechanism, consult the Rook Quick Start guide for details.
Check the server has been created.
kubectl -n rook-nfs get nfsservers.nfs.rook.io
kubectl -n rook-nfs get pod -l app=rook-nfs
We need a storage class we can reference:
kubectl create -f $URI/sc.yaml
We can now use rook-nfs-share1 as the storage class to provision storage.
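As a quick smoke test that the class provisions storage, you can create a small claim against it; the name and size are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-test-claim        # placeholder name
spec:
  storageClassName: rook-nfs-share1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi                 # tiny placeholder size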
Provision a cluster with shared storage using Rook NFS:
$ helm install kx . --dependency-update \
--set global.metrics.enabled=true \
--set global.persistence.enabled=true \
--set global.persistence.storageClass=rook-nfs-share1 \
--set global.persistence.shared.storageClass=rook-nfs-share1 \
--set global.persistence.shared.enabled=true