Installation¶
Tickerplant chart release¶
Before you begin, complete the preflight steps.
First we will step through deploying a tickerplant chart to a cluster.
Set your Docker image and license¶
Once you have built, tagged, and pushed your container images to your repository, update your chart with the image location.
Helm uses variables to template the YAML applied to the Kubernetes cluster. There are a few options for setting these variables.
Values YAML¶
Update the image object keys within the values file, setting the repository and tag fields to your Docker repository and image tag.
image:
repository: gcr.io/cloudpak/
component: tp
pullPolicy: IfNotPresent
tag: 0.0.1
Above, our image base repository is set to gcr.io/cloudpak/ and the component to tp.
These fields combined give the full path to our image: gcr.io/cloudpak/tp.
Within our Helm charts we separate repository and component; more details in Globals.
The image tag is then set to 0.0.1; latest may also be used.
We can quickly see the image path being defined by running helm template on the tickerplant chart.
From within the charts directory:
$ helm template tp | grep image
image: "gcr.io/cloudpak/tp:0.0.1"
imagePullPolicy: IfNotPresent
We must also set the license details. These are used to request the KX On Demand License.
license:
user: "User Name"
email: "user.name@kx.com"
An email is sent to the given address, requesting verification.
Helm set¶
The variables can also be set on the command line.
Again using helm template, we can quickly see the Docker image being defined.
$ helm template tp --set image.tag=0.0.2 | grep image
image: "gcr.io/cloudpak/tp:0.0.2"
imagePullPolicy: IfNotPresent
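Multiple values can be combined in a single command; for example, also overriding the pull policy (the output shown is what we would expect, given the mapping above):
$ helm template tp --set image.tag=0.0.2 --set image.pullPolicy=Always | grep image
image: "gcr.io/cloudpak/tp:0.0.2"
imagePullPolicy: Always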
Deploying your tickerplant chart¶
Once you have configured your image values, you can quickly deploy with helm install.
The install command takes two arguments: your custom release name and the chart name.
$ helm install myrelease tp
NAME: myrelease
LAST DEPLOYED: Mon Jan 18 11:55:06 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Here we call our release myrelease and deploy the tp chart.
To confirm our release, run helm ls or helm list.
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myrelease default 1 2021-01-18 11:55:06.386827932 +0000 UTC deployed tp-0.3.0 4
This lists all your Helm releases in the current cluster namespace.
We can also check the status of our deployment with kubectl.
View pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myrelease-tp-7c977f5b47-2fzmf 1/1 Running 0 7s
View the service:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myrelease-tp LoadBalancer 10.4.14.111 34.72.137.96 5000:32625/TCP 42s
Upgrading a release¶
Helm allows you to upgrade your release. This may include changes to some of your values, including your image tag.
These changes may be made within your values file or via the --set command-line option.
helm upgrade myrelease tp --set image.tag=0.0.2
In this example, we update the Docker image. Kubernetes brings up a new pod and terminates the old one.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myrelease-tp-7547487b9f-n7dv6 1/1 Running 0 7s
myrelease-tp-7c977f5b47-2fzmf 0/1 Terminating 0 8m11s
We see the release has been upgraded within Helm.
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myrelease default 2 2021-01-18 12:13:10.664281104 +0000 UTC deployed tp-0.3.0 4
The revision is now 2.
Uninstall a release¶
To remove a release, run helm uninstall.
This will delete all your release’s resources and pods.
To maintain a history of your release, append the --keep-history option.
$ helm uninstall myrelease --keep-history
release "myrelease" uninstalled
$ helm list --uninstalled
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myrelease default 2 2021-01-18 12:13:10.664281104 +0000 UTC uninstalled tp-0.3.0 4
Configuring the q Instance¶
The Helm chart allows customization of the underlying q instance: the command-line arguments passed to the q instance on startup, schema definitions, or q scripts.
Command-line options¶
Command-line options may be passed to a container image with the component chart values.yaml.
Set key-value pairs under the key instanceParam:
instanceParam:
g: 1
t: 100
eodTime: 23:59:59
exampleParam: "aParam"
In the above example we pass -g 1 -t 100 -eodTime 23:59:59 -exampleParam aParam to the q instance on startup.
The arguments are passed with additional items relating to your deploy:
$ helm template myrelease tp | grep args
args: ["-p", "5000", "-namespace default -chart tp -release myrelease -eodTime 23:59:59 -exampleParam aParam -g 1 -t 100 "]
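As a sketch of how such parameters might be consumed inside the instance, assuming the startup script parses its arguments with .Q.opt (the image’s actual mechanism may differ):
/ parse command-line parameters into a dictionary of string values
params:.Q.opt .z.x
/ e.g. pick out the custom eodTime parameter, casting to a time
eodTime:"T"$first params`eodTime
show params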
Schema¶
You can add schema to a running instance with a combination of the component chart and the built container image.
Adding a .q file to the directory containers/schema/ stores the schema in the Docker image when qp build is run.
This directory is read and its contents loaded during initialization of the component instance.
/ Define a basic example schema
tick:([]sym:"s"$();time:"p"$();number:"j"$())
You can also add schema at deploy time with the component Helm chart.
Similarly, you can add a .q file to the chart’s schema directory, e.g. tp/schema/.
A JSON file may also be used to define instance schema.
Schema JSON has the following structure:
{
"schemaName":{
"schemaGroup": "string",
"keys": ["src", "sym"],
"columns":{
"columnName1":{
"type":"p"
},
"columnName2": {
"type": "s",
"attribute": "g"
},
...
}
}
}
tag | custom tag | type | required | description
---|---|---|---|---
schemaName | yes | string | yes | set to name of schema
schemaGroup | no | string | no | name of schema group to add schema to
keys | no | string[] | no | optional list of column names to apply as keys to schema
columns | no | object | yes | object containing all individual column names and datatypes
columnName1 | yes | object | yes | object containing column datatypes and attributes
type | no | string | yes | datatype of column; accepts char or string, e.g. "j" or "long"
attribute | no | string | no | optional attribute to apply to column
Example JSON with multiple schema
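For illustration, a hypothetical file defining two schemas might look as follows; the table and column names here are examples, not taken from the shipped charts.
{
  "trade": {
    "columns": {
      "time": {"type": "p"},
      "sym": {"type": "s", "attribute": "g"},
      "price": {"type": "f"},
      "size": {"type": "j"}
    }
  },
  "dailyStats": {
    "schemaGroup": "eod",
    "keys": ["sym"],
    "columns": {
      "sym": {"type": "s"},
      "avgPrice": {"type": "f"}
    }
  }
}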
Any or all options may be used to add schema to your running instance.
View charts/tp/schema/ for examples.
You can add or amend schema without restarting the pod
Using the chart to deploy schema allows you to add or amend schema without restarting the pod.
helm upgrade myrelease tp
The new or updated JSON or q files are uploaded to the pod, making them available to the running instance. Manual intervention is required to reload the schema files into memory.
Example calls made on component instance to reload schema:
/ Load schema directory
.table.i.loadSchemaDir[.cfg.schemaDir]
/ Load individual schema JSON
.table.i.loadJsonSchema[`$":/path/to/schema.json"]
Custom code¶
You can include further q scripts in the container images by saving them in the directory containers/src/libs; qp build includes all items in this directory in the Docker image, and they are loaded during instance initialization.
You can also add scripts at deploy time with the component Helm chart.
Q scripts in a chart’s qScripts directory are loaded during initialization.
Scripts are loaded in ascending order based on file name, with the exception of init.q.
If a file with this name is found, it is loaded first; the remainder are loaded in ascending order.
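A minimal q sketch of that load order (the directory path is illustrative):
/ list script file names, load init.q first, then the rest ascending
fs:asc key hsym`$"/opt/kx/qScripts"
fs:($[`init.q in fs;enlist`init.q;()]),fs except `init.q
{system"l /opt/kx/qScripts/",string x} each fs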
View charts/tp/qScripts/ for examples.
Using the chart to deploy q scripts allows you to add or amend scripts without restarting the pod.
helm upgrade myrelease tp
The new or updated q scripts are uploaded to the pod, making them available to the running instance. Manual intervention is required to load the files into memory.
Example calls made on component instance to reread scripts:
/ Load script directory
.addCode.loadCodeDir[]
/ Load individual script
.utils.loadFile[`$":/path/to/scripts/"; `newScript.q]
Instance connections¶
Connection details are defined at deploy time using the component values.yaml and helper functions defined within templates/_helpers.tpl.
The helper functions build default values for service details, but these can be overridden via the values.yaml. The helpers are then used to define a connections JSON file, loaded during instance initialization.
Using the rdb chart’s values.yaml as an example:
tickerplant:
host: kx-tp.default.svc
port: 5000
historical:
host: kx-hdb.default.svc
port: 5010
gateway:
name: gw
host: kx-gw.default.svc
port: 5000
Below are example helper functions, checking for overrides and setting default values for the tickerplant.
_helpers.tpl
{{/*
Define Tickerplant name
*/}}
{{- define "rdb.tp.name" -}}
{{- $default := "tp" }}
{{- if eq "true" ( include "rdb.tp" . ) }}
{{- coalesce (get .Values.tickerplant "name") $default }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
{{/*
Define Tickerplant host
*/}}
{{- define "rdb.tp.host" -}}
{{- $default := printf "%s-tp.%s.svc" .Release.Name .Release.Namespace }}
{{- if eq "true" ( include "rdb.tp" . ) }}
{{- coalesce (get .Values.tickerplant "host") $default }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
{{/*
Define Tickerplant port
*/}}
{{- define "rdb.tp.port" -}}
{{- $default := 5000 }}
{{- if eq "true" ( include "rdb.tp" . ) }}
{{- coalesce (get .Values.tickerplant "port") $default }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
The helpers are then used to populate connections.json in the templates/configMap.yml file:
connections.json: |-
{
{{ include "rdb.tp.name" . | quote }}: {
"processType": "tickerplant",
"host": {{ include "rdb.tp.host" . | quote }},
"port": {{ include "rdb.tp.port" . | quote }}
}
}
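For example, with no overrides in values.yaml and a release named myrelease in the default namespace, the helper defaults would render something like:
connections.json: |-
  {
    "tp": {
      "processType": "tickerplant",
      "host": "myrelease-tp.default.svc",
      "port": "5000"
    }
  }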
RDB release pub sub¶
By releasing both the tickerplant and RDB chart we can create a simple publisher and subscriber.
Update the image details as before for both the tickerplant and the RDB charts.
image:
repository: gcr.io/cloudpak/
component: tp
pullPolicy: IfNotPresent
tag: 0.0.1
Tickerplant release¶
Within the tickerplant chart, add or amend any schema files required, or leave the example schemas provided.
Release the tickerplant chart as before.
$ helm install ticker tp
NAME: ticker
LAST DEPLOYED: Mon Jan 18 13:20:34 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
View service details:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.4.0.1 <none> 443/TCP 66d
ticker-tp LoadBalancer 10.4.6.229 34.72.53.130 5000:31759/TCP 39s
Run a quick test from your local machine using the EXTERNAL-IP:
q)tp:hopen `$":34.72.53.130:5000"
q)tp"tables[`.]"
`lastQuote`quote`tick`trade
q)
RDB release¶
To allow the RDB to subscribe to the tickerplant, we must update the connection details within the RDB values file.
The tickerplant host takes the pattern serviceName.$namespace.svc.
We can see the service name from our call to get svc above, and we have deployed to the default namespace.
tickerplant:
host: ticker-tp.default.svc
port: 5000
We have the option of updating the schema directory by copying the schema files from the tickerplant chart, or allowing the RDB to request the schema definitions from the tickerplant on subscribe.
Once updated, release the RDB chart:
$ helm install sub rdb
NAME: sub
LAST DEPLOYED: Mon Jan 18 13:30:08 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Pod logging¶
Verify communication between the pods by checking the tickerplant and RDB logs.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sub-rdb-0 1/1 Running 0 105s
ticker-tp-98bbfd9ff-7q5mh 1/1 Running 0 11m
$ kubectl logs ticker-tp-98bbfd9ff-7q5mh --tail=2
{"time":"2021-01-18T13:30:23.288","component":"tickerplant","level":"INFO","message":"Received subscription request for `table`syms`handle!(`tick;`symbol$();10i)","chart":"tp","release":"ticker","namespace":"default","pod":"ticker-tp-98bbfd9ff-7q5mh"}
{"time":"2021-01-18T13:30:23.288","component":"tickerplant","level":"INFO","message":"Received subscription request for `table`syms`handle!(`trade;`symbol$();10i)","chart":"tp","release":"ticker","namespace":"default","pod":"ticker-tp-98bbfd9ff-7q5mh"}
Test feed¶
For testing, a quick script has been added to containers/src/process.
feed.q is a simple script that mimics a feed handler publishing to a tickerplant.
Run the script from the containers/ directory, passing the tickerplant service’s external IP address.
rlwrap q src/process/feed.q -tpHost 34.72.53.130
The RDB has not been given an external IP address, so we must open a shell to the container.
kubectl exec -it sub-rdb-0 bash
Quickly set a few environment variables to allow us to make use of q.
$ export QHOME=/opt/kx
$ export QLIC=/opt/kx/lic
$ export PATH=$QHOME/l64:$PATH
$ export LD_PRELOAD=/usr/lib/libexecinfo.so.1
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${QHOME}/l64
$ q
q)h:hopen 5020
q)h"tables[]!count each value each tables[]"
lastQuote| 9
quote | 118000
tick | 118000
trade | 118000
Log recovery¶
The Helm charts can be configured to use Persistent Volumes (PVs). This allows us to recover tickerplant log files on a restart of the tickerplant pod, and to write date partitions at end of day (EOD) from the RDB. Further steps are required to allow these PVCs to be shared between pods, so that the RDB can recover from the tickerplant log files.
HDB release¶
The HDB chart can be configured to read historical data created by an RDB released from the RDB chart.
This requires shared storage between pods, as discussed in the RDB log recovery section, for sharing EOD data with the HDB pod.
First we must update the image and license details as before for the HDB chart.
image:
repository: gcr.io/cloudpak/
component: hdb
pullPolicy: IfNotPresent
tag: 0.0.1
RDB release shared PVC¶
To allow the HDB pod to read from the RDB EOD directory, changes must be made to the RDB values file.
persistence:
storageClass: nfs-client
shared:
enabled: true
Here we turn on persistent storage, enabling the use of the storage class nfs-client and allowing the PVC to be shared.
Release the RDB chart as before.
We can view the PVC by calling:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES ...
sub-rdb-hdb-data-default-rdb-0 Bound pvc-eb17aba3-2bdf-44c3-81d1-8e14dcc1142e 8G RWX ...
Note the access mode: RWX (ReadWriteMany) allows the volume to be mounted by multiple pods.
HDB release shared PVC¶
To allow the HDB to read from the PVC we must update the values file to make the HDB aware of the existing PVC.
By using get pvc we are able to see the name given to the RDB PVC.
realtime:
# The RDB PVC name, set via rdb.persistence.rdbPVC in the
# RDB chart.
#
rdbPVC: sub-rdb-hdb-data-default-rdb-0
# Path to RDB EOD data partitions
#
hdbDir: /opt/kx/hdbdata
The YAML above sets the PVC name in the HDB chart. This will mount the PVC on the HDB pod.
We also set the mount point to /opt/kx/hdbdata.
Once updated, release the HDB chart.
$ helm install historical hdb
NAME: historical
LAST DEPLOYED: Mon Jan 18 13:35:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
HDB reload¶
To allow the HDB to be reloaded on EOD, the RDB must be aware of the HDB and its connection details. These details are again contained within the RDB values file.
To permit the RDB to call a reload on the HDB when EOD completes, update the following object with the details of the HDB service.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
historical-hdb ClusterIP None <none> 5010/TCP,8080/TCP 2d23h
The HDB host takes the pattern serviceName.$namespace.svc.
We can see the service name from our call to get svc above, and we have deployed to the default namespace.
historical:
host: historical-hdb.default.svc
port: 5010
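As a minimal sketch of the reload the RDB might then trigger after writing the day’s partition (the chart’s actual EOD code may differ):
/ open a handle to the HDB service and reload its database root
h:hopen `$":historical-hdb.default.svc:5010"
h"\\l ."
hclose h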
Test historical query¶
The HDB has not been given an external IP address, so we must open a shell to the container.
kubectl exec -it historical-hdb-0 bash
Quickly set a few environment variables to allow us to make use of q.
$ export QHOME=/opt/kx
$ export QLIC=/opt/kx/lic
$ export PATH=$QHOME/l64:$PATH
$ export LD_PRELOAD=/usr/lib/libexecinfo.so.1
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${QHOME}/l64
$ q
q)h:hopen 5010
q)h"date"
2018.09.04 2018.09.05 2018.09.06 2018.09.07 2018.09.10
Note that the mounted PVC has existing date partitions.
Gateway release¶
Before releasing the Gateway chart we must update the image and license details as before.
image:
repository: gcr.io/cloudpak/
component: gw
pullPolicy: IfNotPresent
tag: 0.0.1
Gateway connections¶
The Gateway allows us to query both the RDB and HDB pods externally.
Connections to those target databases are configured within the Gateway values file. The structure resembles the instance connections discussed previously.
values.yaml
connections:
myRDB:
host: kx-rdb-0.kx-rdb.default.svc
port: 5020
type: "realtime"
secondRDB:
host: kx-rdb-1.kx-rdb.default.svc
port: 5020
type: "realtime"
myHDB:
host: kx-hdb-0.kx-hdb.default.svc
port: 5010
type: "historical"
customName:
host: kx-q-1.kx-q.default.svc
port: 5010
type: "other"
Each connection has an alias used as the key to that instance’s details.
In the example above we configure four different instances: two RDBs, an HDB, and a custom instance.
myRDB:
host: kx-rdb-0.kx-rdb.default.svc
port: 5020
type: "realtime"
This creates an entry named myRDB with the given host:port. Targeting is based on the instance connection type.
In our example API, an instance type of realtime is used for RDB query targeting and historical for HDB queries.
We have the option to use pod or service details for the hostname.
Using the pod hostname for each instance allows the Gateway to query each replica individually.
Update the connections object to reflect your RDB and HDB release details.
The pod hostname takes the pattern $release-$chart-$ordinal.$release-$chart.$namespace.svc.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
historical-hdb-0 1/1 Running 0 35h
sub-rdb-0 1/1 Running 0 35h
We can see the pod names from our call to get pods above, and we have deployed to the default namespace.
myRDB:
host: sub-rdb-0.sub-rdb.default.svc
port: 5020
type: "realtime"
myHDB:
host: historical-hdb-0.historical-hdb.default.svc
port: 5010
type: "historical"
Components with the connection details in their values.yaml register with the gateway instance.
This means instances do not need to be explicitly defined within the gateway connections.
Deploying your gateway chart¶
Once the values file has been updated as required, run helm install and verify as before.
$ helm install gateway gw
NAME: gateway
LAST DEPLOYED: Mon Jan 18 14:14:06 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify gateway connections¶
Once the gateway has been deployed and is running we can use the gateway service’s external IP address to verify some of our values configurations.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gateway-gw LoadBalancer 10.4.12.84 34.121.253.228 5000:32663/TCP,8080:31176/TCP 1d
This external IP address allows us to query the gateway locally.
q)h:hopen `$"34.121.253.228:5000" /- Create handle to Gateway
q)h".conn.procs" /- Connected instances
process | procType address handle connect..
--------| ---------------------------------------------------------------------------..
myRDB | realtime :sub-rdb-0.sub-rdb.default.svc:5020 7 1 ..
myHDB | historical :historical-hdb-0.historical-hdb.default.svc:5010 10 1 ..
/- Gateway Target Databases
q)h".gw.targetDBs"
process | procType handle connected lastReturn qCount busy
--------| --------------------------------------------------
myRDB | realtime 7 1 0 0
myHDB | historical 10 1 0 0
Gateway API¶
The gateway chart comes with a few basic example APIs. See gwFunc.q for further details.
// @function getQuotes
// @category Gateway API
//
// @fileOverview Gateway API to target the quote table across multiple instances
//
// @param sDate {date} Start date of query window
// @param eDate {date} End date of query window
// @param syms {symbol[]} List of symbols to filter on
//
// @returns {table} Filtered table result set
//
// @example
// getQuotes[.z.d-1; .z.d; `a`b];
//
getQuotes:{[sDate;eDate;syms]
t:`quote;
getData[t; sDate; 0Np; eDate; 0Np; syms] }
// @function getTrades
// @category Gateway API
//
// @fileOverview Gateway API to target the trade table across multiple instances
//
// @param sDate {date} Start date of query window
// @param eDate {date} End date of query window
// @param syms {symbol[]} List of symbols to filter on
//
// @returns {table} Filtered table result set
//
// @example
// getTrades[.z.d-1; .z.d; `a`b];
//
getTrades:{[sDate;eDate;syms]
t:`trade;
getData[t; sDate; 0Np; eDate; 0Np; syms] }
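These APIs can then be called over a handle to the gateway, for example:
q)h:hopen `$"34.121.253.228:5000"
q)h(`getTrades;.z.d-1;.z.d;`a`b)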
Umbrella chart¶
A kx chart is defined in the charts directory.
This is a parent chart which uses the other charts as dependencies.
Dependencies are defined within kx/Chart.yaml.
dependencies:
- name: tp
version: "0.3.0"
repository: "file://../tp"
- name: rdb
version: "0.3.0"
repository: "file://../rdb"
- name: hdb
version: "0.3.0"
repository: "file://../hdb"
- name: gw
version: "0.3.0"
repository: "file://../gw"
This sets each dependency’s chart name, version, and location.
Umbrella values¶
You can use the values file for the kx chart to override values local to a dependency chart.
Using the dependency’s name from Chart.yaml as the key, we can override the tickerplant’s image.tag.
tp:
image:
tag: 0.0.2
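Once the dependencies have been pulled (see Deploying the umbrella chart below), the override can be confirmed with helm template:
$ helm template umbrella kx | grep "cloudpak/tp"
image: "gcr.io/cloudpak/tp:0.0.2"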
We are also able to set global variables, which allow items to be defined once for use by all dependencies.
Globals¶
Several global variables are defined in the Helm chart.
key | type | default | description
---|---|---|---
global.env | object | {} | optionally add key-value pairs for environment variables to be added to the container
global.image.pullPolicy | string | "IfNotPresent" | image pull policy
global.image.repository | string | "gcr.io/cloudpak/" | repository containing component images
global.image.tag | string | 0.2.1 | component image tag
global.license.email | string | "" | KX License user email
global.license.user | string | "" | KX License user name
global.metrics.enabled | bool | false | enable the capture of instance metrics
global.metrics.image.pullPolicy | string | "IfNotPresent" | image pull policy
global.metrics.image.repository | string | "gcr.io/cloudpak/" | repository containing component images
global.metrics.image.tag | string | 0.2.1 | component image tag
global.metrics.serviceMonitor.enabled | bool | false | enable service monitor
global.objectstore | object | {} | configuration for use of object stores in the HDB par.txt
global.objectstore.buckets | list | [] | list of one or many object-store paths; each is appended to the par.txt file
global.objectstore.enabled | bool | false | append object-store paths to the par.txt file
global.objectstore.sym | object | {} | configuration of the object-store backup sym file and local sym file
global.objectstore.sym.cp | bool | false | if the sym file is not found locally, true enables copying it from the object store
global.objectstore.sym.name | string | sym | local sym file name on disk; the copied object-store file is written to disk under this name
global.objectstore.sym.path | string | "" | full path to the object-store sym file backup location
global.persistence.shared.enabled | bool | false | enable shared storage between pods
global.persistence.storageClass | string | "nfs-client" | persistence storage class
global.rbac.create | bool | false | create RBAC Role and RoleBinding
global.skaffold | bool | false | use Skaffold for image tags
Through the use of helpers, each chart checks for the existence of a global value before falling back on the local value.
Example helper used to check for the existence of global.image.repository:
{{- define "tp.local.image.repo" -}}
{{- $default := "" }}
{{- if kindIs "map" .Values.image }}
{{- coalesce (get .Values.image "repository") $default }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
{{- define "tp.image.repo" -}}
{{- $default := include "tp.local.image.repo" . }}
{{- if ne "true" ( include "tp.globals.image" . ) }}
{{- $default }}
{{- else if hasKey .Values.global.image "repository" }}
{{- .Values.global.image.repository }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
Using globals, we can now set each dependency chart’s repository and tag in one place.
global:
image:
repository: gcr.io/cloudpak/
tag: 0.0.2
pullPolicy: IfNotPresent
license:
user: "User Name"
email: "user.name@kx.com"
Deploying the umbrella chart¶
Before we can deploy the kx chart we must update the dependencies.
$ helm dep update kx/
Hang tight while we grab the latest from your chart repositories...
Update Complete. ⎈Happy Helming!⎈
Saving 4 charts
Deleting outdated charts
This pulls the dependency charts listed in Chart.yaml into the kx chart.
They can be seen in kx/charts.
$ ls kx/charts/
gw-0.3.0.tgz hdb-0.3.0.tgz rdb-0.3.0.tgz tp-0.3.0.tgz
Once this is done and our values.yaml has been configured, we can deploy the kx chart.
$ helm install umbrella kx
NAME: umbrella
LAST DEPLOYED: Mon Jan 18 14:38:50 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
By running get pods we see a pod has been created for each of the dependency charts.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
umbrella-gw-5867d886d7-sfhl6 1/1 Running 0 113s
umbrella-hdb-0 1/1 Running 0 112s
umbrella-rdb-0 1/1 Running 1 112s
umbrella-tp-7c9db9bbbb-thdc9 1/1 Running 0 113s
Test data ingress and egress¶
As before, we can test the flow of data using the test script feed.q.
First we get the external IP for both the tickerplant and the gateway service.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.4.0.1 <none> 443/TCP 67d
umbrella-gw LoadBalancer 10.4.9.99 34.72.53.130 5000:30082/TCP 45s
umbrella-hdb ClusterIP None <none> 5010/TCP 63s
umbrella-rdb ClusterIP None <none> 5020/TCP 63s
umbrella-tp LoadBalancer 10.4.9.60 34.121.254.77 5000:31065/TCP 62s
This time we pass both the gateway and the tickerplant external IP addresses.
$ rlwrap q src/process/feed.q -tpHost 34.121.254.77 -gwHost 34.72.53.130
Check the status of handle connections within .conn.procs.
q).conn.procs
process| procType address handle connected lastRetry
-------| ---------------------------------------------------------
tp | tickerplant :34.122.2.193:5000 3 1
gw | gateway :34.72.137.96:5000 4 1
With both instances connected, we can assume data is being fed to the tickerplant, but we can also query the gateway for data.
q)gwH:.conn.getProcConnDetails[`gw]`handle
q)gwH(`getQuotesWithin;.z.d;00:00:00;.z.d;.z.t;`a)
time sym src level bid asize bsize ask ..
-----------------------------------------------------------------------------..
2021.01.18D15:04:12.087939000 a a 954 448.637 763.0876 581.4644 412.71..
2021.01.18D15:04:12.087939000 a b 698 785.8828 882.6257 536.0788 243.64..
2021.01.18D15:04:12.087939000 a c 99 857.0497 13.03159 624.8044 328.95..
2021.01.18D15:04:12.087939000 a b 368 999.9956 213.9782 823.9519 205.96..
..
Helpers¶
Helm template functions, also known as helpers, are defined in each chart, e.g. tp/templates/_helpers.tpl.
They take the user-defined values.yaml and inject the values into the YAML applied to the Kubernetes cluster.
Helper functions are based on the Go template language combined with Sprig.
In our charts we use helpers to check for globals, print JSON files, and set default values.
Global check:
{{/*
Container image tag
*/}}
{{- define "rdb.local.image.tag" -}}
{{- $default := .Chart.AppVersion }}
{{- if kindIs "map" .Values.image }}
{{- coalesce (get .Values.image "tag") $default }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
{{- define "rdb.image.tag" -}}
{{- $default := include "rdb.local.image.tag" . }}
{{- if eq "true" ( include "rdb.globals.image" . ) }}
{{- coalesce .Values.global.image.tag $default }}
{{- else }}
{{- $default }}
{{- end }}
{{- end }}
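We can check this precedence with helm template; assuming the rdb chart composes its image path the same way as tp, the global tag wins over the chart’s local default:
$ helm template myrelease rdb --set global.image.tag=0.0.3 | grep image
image: "gcr.io/cloudpak/rdb:0.0.3"
imagePullPolicy: IfNotPresent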
Connections JSON:
{{- define "gw.metrics.config" -}}
{{- $json := ""}}
...
{{- if ( kindIs "map" .Values.metrics.handler )}}
{{- $handler := ( .Values.metrics.handler | toPrettyJson ) }}
{{- $json := printf ",\n\"handler\": " | append $jList.json | set $jList "json" }}
{{- $json := printf "\n%s" $handler | append $jList.json | set $jList "json" }}
{{- end }}
...
{{- end }}
Defaults:
{{- define "rdb.schemaDir" -}}
{{- $pers := .Values.persistence }}
{{- coalesce (get $pers "schemaDir") "/opt/kx/schema" }}
{{- end }}
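A helper such as this is then consumed inside a template. A hypothetical use, setting an environment variable (the variable name is illustrative):
env:
  - name: SCHEMA_DIR
    value: {{ include "rdb.schemaDir" . | quote }}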