
Usage data submission

When kdb+, kdb Insights, or kdb Insights Enterprise starts, it automatically captures usage (accounting) data in $PWD/kod.q.acct, which, as part of your agreement with KX, you are required to submit regularly. To use another location, set the environment variable KX_ACCT to the path of the directory to use.

On a host it is recommended that you use a common, shared, non-volatile directory and export the KX_ACCT environment variable:

export KX_ACCT="/path/to/usage/data"
mkdir -p "$KX_ACCT"
$QHOME/l64/q

Warning

If kdb+/q is unable to write its usage data, it will immediately refuse to start and emit an error.

This populates the directory with files corresponding to your usage, which you must retain until they have been successfully submitted to KX.
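To check that data is being captured, list the directory (assuming you exported KX_ACCT as above); the submitted files follow the *.t.koda naming pattern used in the commands below:

ls -l "$KX_ACCT"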

To submit the data you need a (reusable) submission endpoint:

klic accounting url
URL      https://storage.googleapis.com/90d8570f70b2446abe57/...
Expires  2021-12-27T12:13:27

Warning

Note the 7-day expiry date, and make sure to refresh the endpoint URL regularly.

klic accounting url accepts options to choose your preferred cloud provider and geography; see klic accounting targets and klic accounting url --help for details.
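If you prefer to script the submission, you can capture the URL into a shell variable; a minimal sketch, assuming the two-column output format shown above, which you can then substitute for <URL> in the commands below:

URL=$(klic accounting url | awk '$1=="URL"{print $2}')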

Take the provided URL and pass it to the accounting submission tool, along with setting the environment variable SSL_CA_CERT_FILE to your system's local trusted root CA bundle:

If the submission succeeds, the return code is zero; otherwise it is non-zero.
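For example, on a Debian-style host (a sketch; the CA bundle path varies by distribution):

export SSL_CA_CERT_FILE=/etc/ssl/certs/ca-certificates.crt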

cd $KX_ACCT
find . -maxdepth 1 -type f -name '*.t.koda' | xargs -r -I{} curl -s -f -L -X PUT -H 'Content-Type: application/octet-stream' --data-binary @'{}' <URL>
echo $?

Only after a successful submission may you delete the usage data, taking care to retain the current day's data, which may still be volatile:

cd $KX_ACCT
find . -maxdepth 1 -type f -name '*.t.koda' -mmin +10 -delete

Warning

If you wish to use Azure as the provider endpoint, you need to add -H 'x-ms-blob-type: BlockBlob' to the cURL request.
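For example, the upload command shown earlier would become:

cd $KX_ACCT
find . -maxdepth 1 -type f -name '*.t.koda' | xargs -r -I{} curl -s -f -L -X PUT -H 'Content-Type: application/octet-stream' -H 'x-ms-blob-type: BlockBlob' --data-binary @'{}' <URL>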

Multiple Systems

If you have multiple systems, you can run kdb+/q as a 'sink' service that centrally collects the usage data from your instances:

env KX_ACCT=/run/kx/usage $QHOME/l64/q -$ -p 5001

KX also offers a Docker container you can use for this purpose:

docker run -it --rm --name sink \
    -v "$PWD/qlic":/run/qlic:ro \
    -v "$PWD/kod.q.acct":/run/kod/data \
    -e KX_SERVACCT='Bearer ...' \
    -e KX_ACCT="/run/kod/data" \
    -p 0.0.0.0:5001:5001 \
    registry.dl.kx.com/kxi-acc-svc

To point the clients at the sink, again set the environment variable KX_ACCT, this time to the address of the sink:

env KX_ACCT=http://192.0.2.1:5001 $QHOME/l64/q

Automated Usage Submission

If your sink has Internet access (including through a proxy), you can configure the kxi-acc-svc Docker image to handle uploading your usage logs for you.

To do this, create a service account using:

klic serviceaccount create 1453c0e8-9386-11ec-9c64-a747bf6bfc0a test
created new service account 9415f6a8-9394-11ec-a00f-0fae28122ae1
Type          Bearer
Access Token  eyJhbGci...

You then pass this into the container service via the environment variable KX_SERVACCT, for example KX_SERVACCT='Bearer eyJhbGci...'.
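For example, adapting the docker run command above with the token returned by klic serviceaccount create:

docker run -it --rm --name sink \
    -v "$PWD/qlic":/run/qlic:ro \
    -v "$PWD/kod.q.acct":/run/kod/data \
    -e KX_SERVACCT='Bearer eyJhbGci...' \
    -e KX_ACCT="/run/kod/data" \
    -p 0.0.0.0:5001:5001 \
    registry.dl.kx.com/kxi-acc-svc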