Deploying Quickstart
Deploying a Package
Before Continuing...
Make sure that you've:
By the end of this step, you should be able to...
- Add a pipeline to your package
- Deploy that package with its pipeline
- Understand and configure analytics for the database
Looking at the deploy command's help, we can see there's quite a lot of information, but much of it relates to configuring the connection to our Insights instance. The main SOURCE argument is simply the package to deploy (or a reference to one).
kxi package deploy --help
Usage: kxi package deploy [OPTIONS] SOURCE
Deploy a package to an insights instance
Options:
--with-version / --without-version
Include the package's version in the
deployment name. If --with-version is set
the deployment will have a name like
pkgname-100. If --without-version is set the
deployment will have a name like pkgname.
Note this is enabled by default to avoid
ambiguity [default: without-version]
--remote / --local Deploy a package from a remote kdb Insights
Enterprise package repo (--remote)[default]
or using a local package (--local) [WARN:
'local' may be deprecated in future
releases]
--via [operator|controller] Specify the deployment method. Available
options: operator, controller
--rm-existing-data Remove the data associated with the old
deployment
--force Teardown running deployments if they share
any properties with the package
--db TEXT Deploy an existing package's database (must
be defined in the package)
--pipeline TEXT Deploy an existing package's pipeline (must
be defined in the package)
--url TEXT Insights URL[env var: INSIGHTS_URL; default:
https://replace.me]
--realm TEXT Realm[env var: INSIGHTS_REALM; default:
insights]
--client-id TEXT Client id[env var: INSIGHTS_CLIENT_ID;
default: test-publisher]
--client-secret TEXT Client secret[env var:
INSIGHTS_CLIENT_SECRET; default: special-
secret]
--auth-enabled / --auth-disabled
Will attempt to retrieve bearer token for
request [env var: KXI_AUTH_ENABLED]
--help Show this message and exit.
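Most of these options describe the connection to our Insights instance, so rather than passing them on every call they can be exported once as environment variables. The variable names below come from the help text above; the values are placeholders to replace with your own deployment's details:
export INSIGHTS_URL=https://your-insights-instance
export INSIGHTS_REALM=insights
export INSIGHTS_CLIENT_ID=your-client-id
export INSIGHTS_CLIENT_SECRET=your-client-secret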
Deploying Analytics: Pipeline
Let's create a package and add a pipeline:
kxi package init mynewpkg --force
kxi package add --to mynewpkg pipeline --name mypipeline
cat << EOF > mynewpkg/init.q
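/ publish a table of 10 random ints once a second via the timer,
/ then run a stream processor that reads from the publish callback and writes to the console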
.z.ts:{publish[([]10?10)]};
system"t 1000";
.qsp.run
.qsp.read.fromCallback[\`publish]
.qsp.write.toConsole[]
EOF
Creating package at location: mynewpkg
Writing mynewpkg/manifest.yaml
Writing mynewpkg/manifest.yaml
Pipeline entrypoint
The pipeline we added reads init.q by default, so that is the file we are modifying, but a pipeline can load any file in the package. To change its entrypoint, modify the spec field in pipelines/mypipeline.yaml in our package.
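For example, to see which file the generated pipeline currently points at, you can inspect that field directly (no output is shown here, as the generated file may differ between versions):
grep -n spec mynewpkg/pipelines/mypipeline.yaml
Editing the path that spec references moves the entrypoint to another file in the package.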
If we take a look at our package:
kxi package info mynewpkg
==PATH==
/builds/kxdev/kxinsights/documentation/code-kx-com/mynewpkg
==OBJECT==
Package
==Manifest==
name: mynewpkg
version: 0.0.1
metadata:
description: ''
authors:
- name: root
email: ''
entrypoints:
default: init.q
pipelines:
mypipeline:
file: pipelines/mypipeline.yaml
- In order to run it, we need to first packit and push:
kxi package packit mynewpkg --tag
kxi package push mynewpkg/0.0.1 --force
Refreshing package before packing...
Writing mynewpkg/manifest.yaml
Creating package from mynewpkg
Package created: /var/folders/6b/_lv5sfrj0l96g_1tc36bh_xr0000gn/T/artifact_store/mynewpkg-0.0.1.kxi
mynewpkg-0.0.1.kxi
{
"mynewpkg": [
{
"version": "0.0.1",
"_status": "InstallationStatus.SUCCESS"
}
]
}
Then we can deploy it:
kxi package deploy mynewpkg/0.0.1
Deploying: name=mynewpkg, ver=0.0.1
{
"packageName": "mynewpkg",
"packageVersion": "0.0.1",
"uuid": "1375378f-349f-49a6-b0b4-bd3a6b3a8298",
"status": "Deployed",
"pipelines": [
"123c398d-39b6-0c34-4f98-9668bc64e262"
],
"databases": [
"61386e60-d05d-888b-7c40-2df184282a8a"
],
"assemblies": [
"b2b9569c-0bde-9d8b-7098-95ebfc7fdebe"
],
"streams": [
"42fc7ade-0f05-21e1-f9f3-086433554706",
"54778e5d-3686-d29c-a6d5-e8a6bc4c9e4d"
],
"schemas": [
"1afca3bf-6516-deaf-cee2-286e863a9d11"
],
"update_time": "2023-07-03T13:30:59.946457",
"instance": "https://example.at.kx.com",
"name": "mynewpkg-001-8298",
"deploy_name": "mynewpkg-001",
"error": {
"content": ""
}
}
In this example we have created a package with some analytics and a pipeline. We have then deployed the pipeline and run the analytic inside the pipeline.
Where did the DBs come from?
You may notice that a DB, streams (I/O buses), and schemas are deployed as part of your package, even though it only contained a pipeline. This is due to a current limitation of the deployment mechanism, which expects a DB to exist. In future versions we will remove this dependency and the pipeline will come up on its own.
Deploying Analytics: DAP
Databases and their accompanying components (DAPs and the aggregator) have a slightly different path for loading analytics during a deployment.
Whereas pipelines have a dedicated field for specifying their spec (or analytic), the data access and aggregation processes do not.
Instead, we define special entrypoints in the manifest that the data access and aggregation processes hook onto.
The src files themselves have no specific naming constraints.
Currently, the entrypoints must be added manually.
entrypoints:
default: init.q
data-access: src/da.q
aggregator: src/agg.q
"entrypoints": {
"default": "init.q",
"data-access": "src/da.q",
"aggregator": "src/agg.q"
}
Let's add a database for illustrative purposes:
kxi package add --to mynewpkg database --name mydb
Writing mynewpkg/manifest.yaml
mkdir -p mynewpkg/src
echo "show \"hello\"" > mynewpkg/src/da.q
echo "show \"hello\"" > mynewpkg/src/agg.q
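With the source files in place, the data-access and aggregator entrypoints shown above still need to be added to mynewpkg/manifest.yaml by hand. As a sketch, assuming the mikefarah yq (v4) YAML processor is available, this can be done from the shell; otherwise simply edit the file in a text editor:
yq -i '.entrypoints."data-access" = "src/da.q" | .entrypoints.aggregator = "src/agg.q"' mynewpkg/manifest.yaml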
We must also ensure that the database configuration in the package is told which package to load, by adding the package name to the DAP spec:
kxi package add --to mynewpkg patch --name update_db_env
cat << EOF > mynewpkg/patches/update_db_env.yaml
kind: Package
apiVersion: pakx/v1
metadata:
name: target
spec:
databases:
- name: mydb
shards:
- name: mydb-shard
daps:
instances:
rdb:
env:
- name: KXI_PACKAGES
value: mynewpkg:0.0.1
idb:
env:
- name: KXI_PACKAGES
value: mynewpkg:0.0.1
hdb:
env:
- name: KXI_PACKAGES
value: mynewpkg:0.0.1
EOF
Writing mynewpkg/manifest.yaml
We can then apply our patch to the DB's local config to ensure our environment variables are added:
kxi package overlay mynewpkg
Found patch to overlay: mynewpkg/patches/update_db_env.yaml
Writing /builds/kxdev/kxinsights/documentation/code-kx-com/mynewpkg/manifest.yaml
What source-ry is this?
We have used a patch to programmatically update the database config.
We can then take a look at the resulting YAML:
cat mynewpkg/databases/mydb/shards/mydb-shard.yaml | grep -n KXI_PACKAGES -B 4 -A1
40- hdb:
41- allowPartialResults: true
42- enforceSchema: false
43- env:
44: - name: KXI_PACKAGES
45- value: mynewpkg:0.0.1
--
55- idb:
56- allowPartialResults: true
57- enforceSchema: false
58- env:
59: - name: KXI_PACKAGES
60- value: mynewpkg:0.0.1
--
95- rdb:
96- allowPartialResults: true
97- enforceSchema: false
98- env:
99: - name: KXI_PACKAGES
100- value: mynewpkg:0.0.1
We should then be able to deploy the package and see our "hello" log message turn up in our DAPs!
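Because the package contents have changed since the last push, it will likely need to be packed and pushed again first, reusing the same commands as before (the --force flag is assumed here to let the push replace the existing 0.0.1 artifact):
kxi package packit mynewpkg --tag
kxi package push mynewpkg/0.0.1 --force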
Reading logs
In order to read logs you can:
- See them from the Insights UI
- Use the REST endpoint for logs
- Log onto your cluster and find the processes
kxi package deploy mynewpkg/0.0.1
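For the third option in the list above, a minimal sketch assuming you have kubectl access to the cluster (pod names and labels vary between installs, so adjust the grep pattern and pod name accordingly):
kubectl get pods | grep mynewpkg
kubectl logs -f <one-of-the-dap-pods>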
Neat!
Deploying Analytics: Aggregator
Though getting analytics into the aggregator works in a similar way to the above, the aggregator is not currently handled by the packaging framework, so this must be configured manually during the kdb Insights Enterprise installation (or afterwards, using kubectl, if you have access to the cluster).
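Purely as an illustration of the kubectl route, and assuming (unverified) that the aggregator picks up packages through the same KXI_PACKAGES variable as the DAPs, the variable could be set on the aggregator's Kubernetes workload after installation:
kubectl set env deployment/<aggregator-deployment> KXI_PACKAGES=mynewpkg:0.0.1
Check your installation before relying on this, as the aggregator's deployment name and configuration mechanism differ between installs.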