Orchestration¶
Quickly build kdb+ containers and deploy them to a cloud cluster using Helm and Kubernetes
User guide
The user guide covers the key features, implementation details and cloud considerations for deploying a cloud-native kdb+ application, including configuration, prerequisites, metrics and security.
Ensure you meet the prerequisites. In particular, you must:
- build your containers using QPacker (qp)
- configure a username and email for the KX On Demand license
Building containers¶
With qp installed, navigate to the containers directory and run qp build.
qp build instructions are defined by the containers/qp.json file.
{
  "tp": {
    "depends": [ "qlog" ],
    "entry": [ "src/process/tp.q" ]
  },
  ...
}
cd containers/
qp build
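As a quick sanity check you can confirm the built images exist locally before tagging; this is a minimal sketch assuming Docker is the local container engine used by QPacker.
# List locally built images; the component images should appear here
docker image ls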
Once the container images have been successfully built, use qp to tag and push your Docker images to a Docker repository:
qp tag gcr.io/cloudpak/tp 0.0.1
qp push gcr.io/cloudpak/tp 0.0.1
Repeat the tag and push for each of the component containers (a sketch follows the list):
- tp (tickerplant)
- rdb (realtime database)
- hdb (historical database)
- gw (gateway)
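For example, a minimal shell sketch that repeats the tag-and-push pattern above for every component; the registry path gcr.io/cloudpak/ and version 0.0.1 simply mirror the earlier example and assume each component image is addressed by its name under that path.
# Tag and push each component image, following the pattern shown above
for img in tp rdb hdb gw; do
  qp tag gcr.io/cloudpak/$img 0.0.1
  qp push gcr.io/cloudpak/$img 0.0.1
done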
After releasing a new image, update the image.repository and image.tag values in each component chart's values.yaml.
Alternatively, --set may be used on the command line, as in the sketch below.
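A minimal sketch of overriding one component's image at the command line with helm upgrade; the tp.image.repository/tp.image.tag key layout is an assumption based on the per-component values.yaml described above, the tag 0.0.2 is illustrative, and the release name and chart path follow the quick-start below.
# Point the tp component at a newly pushed image without editing values.yaml
helm upgrade kx charts/kx --reuse-values \
  --set tp.image.repository="gcr.io/cloudpak/tp" \
  --set tp.image.tag="0.0.2"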
Quick-start installation¶
You can use helm install to install the charts from the source directory.
cd charts/kx
helm install kx . --dependency-update \
--set global.license.user="Another User" \
--set global.license.email="user@example.com" \
--set global.image.repository="gcr.io/cloudpak/" \
--set global.image.tag="0.0.1"
Activate your KX On Demand license by clicking the link in the activation email you receive.
When you install the chart, information is displayed to help you begin interacting with your kdb+ installation.
NAME: kx
LAST DEPLOYED: Thu Jan 28 11:30:00 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have installed an instance of the Kx Core umbrella chart.
License
=======
This install is using a Kx OnDemand license for Another User (user@example.com).
Internal Access
===============
You can access your release, called kx, inside the cluster using the following endpoints:
kx-tp.default.svc:5000
kx-gw.default.svc:5000
kx-rdb-0.kx.default.svc:5020
kx-hdb-0.kx.default.svc:5010
External Access
===============
Please wait a few moments until your containers are pulled and started on the target cluster.
You can obtain the external IP of this process, if configured, by using:
Tickerplant
TP_PORT=5000
TP_HOST=`kubectl --namespace default get svc kx-tp -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
echo $TP_HOST:$TP_PORT
Gateway
GW_PORT=5000
GW_HOST=`kubectl --namespace default get svc kx-gw -o jsonpath="{.status.loadBalancer.ingress[*].ip}"`
echo $GW_HOST:$GW_PORT
Enabled Features
================
Metrics Sidecar ...false
ServiceMonitor ...false
Persistent Volumes ...false
Shared Volumes ...false
RBAC ...false
RDB Auto-Scaling (HPA) ...false
Ensure your release has come up:
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kx default 1 2021-01-28 11:30:57.771424403 +0000 UTC deployed kx-0.3.0 4.0.0
kubectl¶
Use kubectl to run a few quick checks on your release.
Pod status¶
All pods successfully started and running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kx-gw-558598d5d-j8fcs 1/1 Running 0 71s
kx-hdb-0 1/1 Running 0 70s
kx-rdb-0 1/1 Running 0 70s
kx-tp-6849c5b9fc-wl6c4 1/1 Running 0 71s
Service status¶
All services started and running.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kx-gw LoadBalancer 10.4.15.232 <pending> 5000:30306/TCP 8s
kx-hdb ClusterIP None <none> 5010/TCP 63s
kx-rdb ClusterIP None <none> 5020/TCP 62s
kx-tp LoadBalancer 10.4.15.184 35.225.176.211 5000:32412/TCP 62s
Note that the RDB (realtime database) and HDB (historical database) are not assigned an external IP address. The tickerplant has been assigned an external IP, while the GW (gateway) is still pending.
Your cluster may take a few minutes to assign an external IP address.
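To wait for the addresses, a minimal sketch using the standard kubectl watch flag:
# Watch the services until EXTERNAL-IP changes from <pending> to an address (Ctrl-C to stop)
kubectl --namespace default get svc --watch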
Testing¶
The simple script containers/src/process/feed.q mimics a feed handler publishing to a tickerplant.
Run it from the containers/ directory, passing the external IPs of the tickerplant and gateway services.
First, check again for the external IPs of your services.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kx-gw LoadBalancer 10.4.15.232 34.70.249.82 5000:30306/TCP 7m24s
kx-hdb ClusterIP None <none> 5010/TCP 8m19s
kx-rdb ClusterIP None <none> 5020/TCP 8m18s
kx-tp LoadBalancer 10.4.15.184 35.225.176.211 5000:32412/TCP 8m18s
$ cd ../../containers/
$ rlwrap q src/process/feed.q -tpHost 35.225.176.211 -gwHost 34.70.249.82
/- Check status of handle connections within .conn.procs
q).conn.procs
process| procType address handle connected lastRetry
-------| ---------------------------------------------------------
tp | tickerplant :35.225.176.211:5000 3 1
gw | gateway :34.70.249.82:5000 4 1
With both processes connected, we can assume data is being fed to the tickerplant; we can also query the gateway for the data.
q)gwH:.conn.getProcConnDetails[`gw]`handle
q)gwH(`getQuotesWithin;.z.d;00:00:00;.z.d;.z.t;`a)
time sym src level bid asize bsize ask ..
-----------------------------------------------------------------------------..
2021.01.28D11:44:36.885418000 a c 541 282.064 516.7994 572.2282 222.12..
2021.01.28D11:44:36.885418000 a c 501 555.756 154.6752 938.6488 456.94..
2021.01.28D11:44:36.885418000 a b 401 891.5985 24.81919 189.3644 333.83..
2021.01.28D11:44:36.885418000 a b 389 382.9713 170.0977 153.6585 882.36..
2021.01.28D11:44:36.885418000 a c 639 225.3611 170.5818 718.6551 739.18..
Clean up¶
To delete the Helm release, run:
$ helm delete kx
Use helm ls to check the release was removed.
Watch out for Persistent Volumes that may have been created by your install: they will not be deleted by Helm.
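If persistent volumes were enabled, a minimal sketch for finding and removing the leftover claims with standard kubectl commands; the namespace and claim name below are placeholders for your own.
# List any PersistentVolumeClaims left behind by the release
kubectl --namespace default get pvc

# Delete a claim once you are sure its data is no longer needed
kubectl --namespace default delete pvc <claim-name>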