

Quickly build kdb+ containers and deploy them to a cloud cluster using Helm and Kubernetes

User guide

The user guide covers all key features, implementation details, and cloud considerations for deploying a cloud-native kdb+ application, including configuration, prerequisites, metrics, and security.

Ensure you meet the prerequisites. In particular, you must

  • build your containers using QPacker (qp)
  • configure a username and email for the KX OnDemand License
  • update your Kubernetes context and pull credentials for the newly created cluster
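As an illustration of the last step, on Azure (AKS) pulling credentials might look like the following; the resource-group and cluster names are placeholders, and AKS is chosen here only because the Azure CLI appears later in this guide (EKS and GKE have equivalent commands):

```shell
# Hypothetical names - substitute your own resource group and cluster
az aks get-credentials --resource-group my-rg --name my-kx-cluster

# Confirm kubectl now points at the new cluster
kubectl config current-context
```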

Building containers

With qp installed, navigate to the ~/kx-insights/code/deployment/containers directory and run qp build:

cd containers/
qp build

qp build instructions are defined by the containers/qp.json file. For example, the tp image is defined as:

  "tp": {
    "depends": [ "qlog" ],
    "entry": [ "src/process/tp.q" ]
  },

Once the container images have been successfully built, use qp to tag and push your Docker images to a Docker repository

qp tag 0.0.1
qp push 0.0.1

For Azure, you may need to log into your container registry using az acr login --name <container registry>

Repeat the tag and push for each of the component containers:

  • tp (tickerplant)
  • rdb (realtime database)
  • hdb (historical database)
  • gw (gateway)

After releasing a new image, update the image.repository and image.tag values in each component chart's values.yaml.

Alternatively, --set may be used on the command line.
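For example, a sketch of rolling out a new image tag without editing values.yaml; the tp.image.* value paths and registry name are assumptions, so confirm them against the component chart's values.yaml:

```shell
# Value paths and registry are illustrative - verify in values.yaml
helm upgrade kx ./charts/kx --reuse-values \
  --set tp.image.repository=myregistry.azurecr.io/tp \
  --set tp.image.tag=0.0.2
```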

Quick-start installation

You can use helm install to install the charts from the source directory.

cd charts/kx
helm install kx . --dependency-update \
  --set global.license.user="Another User" \
  --set global.license.email="" \
  --set global.image.repository="" \
  --set global.image.tag="0.0.1"

Activate your KX On Demand license by clicking the link in the activation email you receive.

When you install the chart, some information will be displayed allowing you to begin interacting with your kdb+ installation.

NAME: kx
LAST DEPLOYED: Thu Jan 28 11:30:00 2021
NAMESPACE: default
STATUS: deployed
You have installed an instance of the Kx Core umbrella chart.


This install is using a Kx OnDemand license for Another User.

Internal Access

You can access your release, called kx, inside the cluster using the following endpoints:


External Access

Please wait a few moments until your containers are pulled and started on the target cluster.

You can obtain the external IP of this process, if configured, by using:


    TP_HOST=`kubectl --namespace default get svc kx-tp -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
    TP_PORT=`kubectl --namespace default get svc kx-tp -o jsonpath="{.spec.ports[0].port}"`
    echo $TP_HOST:$TP_PORT


    GW_HOST=`kubectl --namespace default get svc kx-gw -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
    GW_PORT=`kubectl --namespace default get svc kx-gw -o jsonpath="{.spec.ports[0].port}"`
    echo $GW_HOST:$GW_PORT

Enabled Features

Metrics Sidecar         ...false
ServiceMonitor          ...false
Persistent Volumes      ...false
Shared Volumes          ...false
RBAC                    ...false
RDB Auto-Scaling (HPA)  ...false
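These features are toggled through chart values. A hedged sketch of inspecting and enabling one at upgrade time; the global.metrics.enabled key name is an assumption, so consult the chart's values.yaml for the exact path:

```shell
# Inspect the values the release was installed with
helm get values kx --all

# Key name below is illustrative - confirm it exists in values.yaml
helm upgrade kx ./charts/kx --reuse-values \
  --set global.metrics.enabled=true
```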

Ensure your release has come up:

$ helm ls
NAME        NAMESPACE   REVISION    UPDATED                                 STATUS   CHART     APP VERSION
kx          default     1           2021-01-28 11:30:57.771424403 +0000 UTC deployed kx-0.3.0  4.0.0    


Use kubectl to run a few quick checks on your release.

Pod status

All pods have started successfully and are running:

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
kx-gw-558598d5d-j8fcs          1/1     Running   0          71s
kx-hdb-0                       1/1     Running   0          70s
kx-rdb-0                       1/1     Running   0          70s
kx-tp-6849c5b9fc-wl6c4         1/1     Running   0          71s
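Rather than polling kubectl get pods manually, you can block until the pods are ready; the timeout value here is arbitrary:

```shell
# Wait up to 3 minutes for every pod in the namespace to become Ready
kubectl --namespace default wait pod --all \
  --for=condition=Ready --timeout=180s
```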

Service status

All services have started and are running:

$ kubectl get svc
NAME     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kx-gw    LoadBalancer   <cluster-ip>   <pending>       5000:30306/TCP   8s
kx-hdb   ClusterIP      None           <none>          5010/TCP         63s
kx-rdb   ClusterIP      None           <none>          5020/TCP         62s
kx-tp    LoadBalancer   <cluster-ip>   <external-ip>   5000:32412/TCP   62s

Note the RDB (realtime database) and HDB (historical database) are not assigned an external IP address. The tickerplant has been assigned an external IP, while the GW (gateway) is still pending.

Your cluster may take a few minutes to assign an external IP address.
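One way to script the wait, as a small sketch; the wait_for_ip helper is ours, not part of the install, and it assumes the default namespace:

```shell
# wait_for_ip SERVICE: poll until the LoadBalancer ingress IP is
# assigned, then print it. Assumes the default namespace.
wait_for_ip() {
  local svc=$1 ip=""
  while [ -z "$ip" ]; do
    ip=$(kubectl --namespace default get svc "$svc" \
      -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
    [ -z "$ip" ] && sleep 5
  done
  echo "$ip"
}
```

For example, `wait_for_ip kx-gw` prints the gateway's external IP once it is assigned.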


The simple script containers/src/process/feed.q mimics a feed handler publishing to a tickerplant.

Run the script from the containers/ directory, passing the tickerplant and gateway services’ external IPs.

Check the external IPs of your services again.

$ kubectl get svc
NAME     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kx-gw    LoadBalancer   <cluster-ip>   <external-ip>   5000:30306/TCP   7m24s
kx-hdb   ClusterIP      None           <none>          5010/TCP         8m19s
kx-rdb   ClusterIP      None           <none>          5020/TCP         8m18s
kx-tp    LoadBalancer   <cluster-ip>   <external-ip>   5000:32412/TCP   8m18s
$ cd ../../containers/
$ rlwrap q src/process/feed.q -tpHost "$TP_HOST" -gwHost "$GW_HOST"
/- Check status of handle connections within .conn.procs
process| procType    address           handle connected lastRetry
-------| --------------------------------------------------------
tp     | tickerplant :<tp-host>:5000   3      1
gw     | gateway     :<gw-host>:5000   4      1

With both instances connected, we can assume data is being fed to the tickerplant; we can also query the gateway to confirm.

time                          sym src level bid      asize    bsize    ask   ..
2021.01.28D11:44:36.885418000 a   c   541   282.064  516.7994 572.2282 222.12..
2021.01.28D11:44:36.885418000 a   c   501   555.756  154.6752 938.6488 456.94..
2021.01.28D11:44:36.885418000 a   b   401   891.5985 24.81919 189.3644 333.83..
2021.01.28D11:44:36.885418000 a   b   389   382.9713 170.0977 153.6585 882.36..
2021.01.28D11:44:36.885418000 a   c   639   225.3611 170.5818 718.6551 739.18..

Clean up

To delete the Helm releases, run:

$ helm delete kx

Use helm ls to check the release was removed.

Watch out for Persistent Volumes that may have been created by your install: they will not be deleted by Helm.
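A sketch for finding and removing any leftovers; the app.kubernetes.io/instance label is a common Helm chart convention rather than something this chart is confirmed to set, so list first and check before deleting:

```shell
# List any PVCs and PVs left behind after the release was deleted
kubectl get pvc,pv

# Delete PVCs that belonged to the release; verify the label is
# actually present on the objects before relying on this selector
kubectl delete pvc -l app.kubernetes.io/instance=kx
```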

Kubernetes and Helm charts

Using Helm
Template Guide
kubectl Cheat Sheet