Deploying Quickstart

Deploying a Package

By the end of this step, you should be able to...

  1. Add a pipeline to your package
  2. Deploy that package with pipeline
  3. Understand and configure analytics for the database

Looking at the deploy command's help, there is quite a lot of information, but much of it concerns configuring our Insights connection.

The main SOURCE argument is simply a package to deploy (or a reference to a package).

kxi package deploy --help

Usage: kxi package deploy [OPTIONS] SOURCE

  Deploy a package to an insights instance

  --with-version / --without-version
                                  Include the package's version in the
                                  deployment name. If --with-version is set
                                  the deployment will have a name like
                                  pkgname-100. If --without-version is set the
                                  deployment will have a name like pkgname.
                                  Note this is enabled by default to avoid
                                  ambiguity  [default: without-version]
  --remote / --local              Deploy a package from a remote kdb Insights
                                  Enterprise package repo (--remote) [default]
                                  or using a local package (--local) [WARN:
                                  'local' may be deprecated in future]
  --via [operator|controller]     Specify the deployment method. Available
                                  options: operator, controller
  --rm-existing-data              Remove the data associated with the old
                                  deployment
  --force                         Teardown running deployments if they share
                                  any properties with the package
  --db TEXT                       Deploy an existing package's database (must
                                  be defined in the package)
  --pipeline TEXT                 Deploy an existing package's pipeline (must
                                  be defined in the package)
  --url TEXT                      Insights URL[env var: INSIGHTS_URL; default:
  --realm TEXT                    Realm[env var: INSIGHTS_REALM; default:
  --client-id TEXT                Client id[env var: INSIGHTS_CLIENT_ID;
                                  default: test-publisher]
  --client-secret TEXT            Client secret[env var:
                                  INSIGHTS_CLIENT_SECRET; default: special-
  --auth-enabled / --auth-disabled
                                  Will attempt to retrieve bearer token for
                                  request  [env var: KXI_AUTH_ENABLED]
  --help                          Show this message and exit.
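Several of the connection options above fall back to an environment variable and then a built-in default (for example --client-id falls back to INSIGHTS_CLIENT_ID, then test-publisher). The resolution order can be sketched in a few lines; `resolve_option` is an illustrative helper, not the kxi implementation:

```python
import os

def resolve_option(flag_value, env_var, default):
    """Resolve a connection option the way a CLI typically does:
    an explicit flag wins, then the environment variable, then the default."""
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_var, default)

# --client-id falls back to INSIGHTS_CLIENT_ID, then "test-publisher"
os.environ["INSIGHTS_CLIENT_ID"] = "my-publisher"
print(resolve_option(None, "INSIGHTS_CLIENT_ID", "test-publisher"))      # my-publisher
print(resolve_option("cli-id", "INSIGHTS_CLIENT_ID", "test-publisher"))  # cli-id
```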

Deploying Analytics: Pipeline

Let's create a package and add a pipeline:

kxi package init mynewpkg --force
kxi package add --to mynewpkg pipeline --name mypipeline
cat << EOF > mynewpkg/init.q
system"t 1000";[\`publish]
EOF
Creating package at location: mynewpkg
Writing mynewpkg/manifest.yaml
Writing mynewpkg/manifest.yaml

Pipeline entrypoint

The pipeline we added reads init.q by default, which is why we are modifying that module, but it can load any file in the package. To change its entrypoint, modify the spec field in pipelines/mypipeline.yaml in the package.

If we take a look at our package:

kxi package info mynewpkg


name: mynewpkg
version: 0.0.1
metadata:
  description: ''
  authors:
  - name: root
    email: ''
entrypoints:
  default: init.q
pipelines:
  - file: pipelines/mypipeline.yaml
We can see it's referencing a new pipeline spec.

In order to run it, we need to first packit and push:
kxi package packit mynewpkg --tag
kxi package push mynewpkg/0.0.1 --force
Refreshing package before packing...
Writing mynewpkg/manifest.yaml
Creating package from mynewpkg
Package created: /var/folders/6b/_lv5sfrj0l96g_1tc36bh_xr0000gn/T/artifact_store/mynewpkg-0.0.1.kxi
    "mynewpkg": [
            "version": "0.0.1",
            "_status": "InstallationStatus.SUCCESS"

Then we can deploy it:

kxi package deploy mynewpkg/0.0.1
Deploying: name=mynewpkg, ver=0.0.1
    "packageName": "mynewpkg",
    "packageVersion": "0.0.1",
    "uuid": "1375378f-349f-49a6-b0b4-bd3a6b3a8298",
    "status": "Deployed",
    "pipelines": [
    "databases": [
    "assemblies": [
    "streams": [
    "schemas": [
    "update_time": "2023-07-03T13:30:59.946457",
    "instance": "",
    "name": "mynewpkg-001-8298",
    "deploy_name": "mynewpkg-001",
    "error": {
        "content": ""

In this example we have created a package with some analytics and a pipeline. We have then deployed the pipeline and run the analytic inside it.

Where did the DBs come from?

You may notice that a database, streams (I/O buses) and schemas deploy as part of your package even though it contained only a pipeline. This is due to a current limitation of the deployment mechanism, which expects a database to exist. Future versions will remove this dependency so the pipeline can come up alone.

Deploying Analytics: DAP

Databases and their accompanying components (DAPs and the aggregator) take a slightly different path for loading analytics during a deployment.

Whereas pipelines have a dedicated field for specifying their spec (or analytic), the data access and aggregation processes do not. Instead, we define special entrypoints in the manifest that the data access and aggregation processes hook onto. The source files themselves have no specific naming constraints.

The entrypoints must currently be added manually.

entrypoints:
  default: init.q
  data-access: src/da.q
  aggregator: src/agg.q

"entrypoints": {
    "default": "init.q",
    "data-access": "src/da.q",
    "aggregator": "src/agg.q"
}

Let's add a database for illustrative purposes:

kxi package add --to mynewpkg database --name mydb
Writing mynewpkg/manifest.yaml
mkdir mynewpkg/src
mkdir: cannot create directory ‘mynewpkg/src’: File exists
echo "show \"hello\"" > mynewpkg/src/da.q
echo "show \"hello\"" > mynewpkg/src/agg.q

We must also ensure that the database configuration in the package knows which package to load, by adding the package name to the DAP spec:

kxi package add --to mynewpkg patch --name update_db_env
Writing mynewpkg/manifest.yaml
cat << EOF > mynewpkg/patches/update_db_env.yaml
kind: Package
apiVersion: pakx/v1
metadata:
  name: target
databases:
  - name: mydb
    shards:
      - name: mydb-shard
        rdb:
          env:
            - name: KXI_PACKAGES
              value: mynewpkg:0.0.1
        idb:
          env:
            - name: KXI_PACKAGES
              value: mynewpkg:0.0.1
        hdb:
          env:
            - name: KXI_PACKAGES
              value: mynewpkg:0.0.1
EOF
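The patch repeats an identical KXI_PACKAGES entry for each database tier. If you generate patches programmatically, that repetition can be built in one place; a sketch, with the tier/env structure assumed from the shard yaml rather than taken from any pakx schema:

```python
def tier_env_patch(package, version, tiers=("rdb", "idb", "hdb")):
    """Build the per-tier env additions that pin KXI_PACKAGES to one package."""
    return {
        tier: {"env": [{"name": "KXI_PACKAGES", "value": f"{package}:{version}"}]}
        for tier in tiers
    }

patch = tier_env_patch("mynewpkg", "0.0.1")
print(patch["hdb"]["env"][0]["value"])  # mynewpkg:0.0.1
```

A dict like this can then be serialized to YAML and merged into the patch file, keeping the three tiers in sync by construction.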

We can then apply our patch to our DB's local config to ensure our env flags are added:

kxi package overlay mynewpkg
Found patch to overlay: mynewpkg/patches/update_db_env.yaml
Writing /builds/kxdev/kxinsights/documentation/code-kx-com/mynewpkg/manifest.yaml

What source-ry is this?

We have used a patch to programmatically update the database config.

And we can take a look at the resultant yaml:

cat mynewpkg/databases/mydb/shards/mydb-shard.yaml | grep -n KXI_PACKAGES -B 4 -A1
40-    hdb:
41-      allowPartialResults: true
42-      enforceSchema: false
43-      env:
44:      - name: KXI_PACKAGES
45-        value: mynewpkg:0.0.1
55-    idb:
56-      allowPartialResults: true
57-      enforceSchema: false
58-      env:
59:      - name: KXI_PACKAGES
60-        value: mynewpkg:0.0.1
95-    rdb:
96-      allowPartialResults: true
97-      enforceSchema: false
98-      env:
99:      - name: KXI_PACKAGES
100-        value: mynewpkg:0.0.1

We should then be able to deploy the package and see our "hello" log message turn up in our DAPs!

Reading logs

In order to read logs you can:

  • See them from the Insights UI
  • Use the REST endpoint for logs
  • Log onto your cluster and find the processes
kxi package deploy mynewpkg/0.0.1


Deploying Analytics: Aggregator

Though getting analytics into the aggregator works in a similar way to the above, the aggregator is not currently handled by the packaging framework, so these analytics must be added manually during the kdb Insights Enterprise installation (or afterwards, if you have access to the cluster, using kubectl).

Next steps