The Stream Processor images can be run and managed individually, or as part of a group using Docker Compose or similar container groupings.
See the example workflow for an end-to-end example of running the Stream Processor. This documentation outlines additional options for running pipelines.
For clarity in the examples below, environment variables are used to configure the pipelines, though a config.yaml file could be used as well. More information about using the config.yaml and overriding it with environment variables can be found in the configuration documentation.
Working from the following project directory:

```bash
$ ls
spec.q
```
With the following pipeline specification:

```bash
$ cat spec.q
.qsp.run
  .qsp.read.fromCallback[`upd]
  .qsp.window.timer[00:00:05]
  .qsp.write.toConsole
```
Running in Kubernetes¶
To deploy and run in Kubernetes using the provided Coordinator service, follow the Kubernetes configuration and deployment instructions for launching the Coordinator within the cluster. The instructions also detail how to deploy and teardown a pipeline once the Coordinator service has started.
Running in Docker Compose¶
The above examples can be run in Docker Compose with an appropriate Docker Compose file (`docker-compose.yaml`).
For information about adding Service Discovery or Monitoring, see configuration.
An example Docker Compose file is provided here:
```yaml
# docker-compose.yaml
version: "3.3"
services:
  controller:
    image: registry.dl.kx.com/kxi-sp-controller:0.8.2
    ports:
      - 6000:6000
    environment:
      - KDB_LICENSE_B64
    command: ["-p", "6000"]
    deploy:
      restart_policy:
        condition: on-failure
  worker:
    image: registry.dl.kx.com/kxi-sp-worker:0.8.2
    ports:
      - 5000
    environment:
      - KDB_LICENSE_B64
      - KXI_SP_SPEC=/app/spec.q
      - KXI_SP_PARENT_HOST=controller:6000
    volumes:
      - .:/app
    command: ["-p", "5000"]
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - controller
```
With this Docker Compose file, the Controller and Worker can be created at once with:
```bash
$ docker-compose up
```
Alternatively, to run with multiple Workers as before, change the Controller to expect more Workers:
```yaml
controller:
  ..
  environment:
    ..
    - KXI_SP_MIN_WORKERS=3
  ..
```
Then, change the Worker to use an ephemeral host port (if the Worker needs to be reachable from the host network):
```yaml
worker:
  ..
  ports:
    - 5000
  ..
```
Then scale the deployment by running with the `--scale` flag:

```bash
$ docker-compose up --scale worker=3
```
Or by setting a replica count:

```yaml
worker:
  ..
  deploy:
    replicas: 3
  ..
```
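Putting the two changes together, a sketch of the scaled services might look like the following (image tags and remaining settings as in the full Compose file above):

```yaml
services:
  controller:
    image: registry.dl.kx.com/kxi-sp-controller:0.8.2
    environment:
      - KDB_LICENSE_B64
      - KXI_SP_MIN_WORKERS=3    # Controller expects 3 Workers
    ..
  worker:
    image: registry.dl.kx.com/kxi-sp-worker:0.8.2
    ports:
      - 5000                    # ephemeral host port, so replicas do not clash
    deploy:
      replicas: 3               # start 3 Worker replicas
    ..
```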
Running separate containers¶
Running with one Worker¶
First, create a `kx` network and a Controller to orchestrate and manage the pipeline:

```bash
$ docker network create kx

# Set the kdb+ license to use; in this example, restart the Controller if it dies
$ docker run -it -p 6000:6000 \
    --network=kx \
    -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \
    --restart unless-stopped \
    registry.dl.kx.com/kxi-sp-controller:0.8.2 -p 6000
```
A Controller then needs Workers to orchestrate. The Workers need to know the hostname of the Controller:
```bash
$ docker ps
CONTAINER ID   IMAGE                     ..   PORTS                                       NAMES
0d05f4679db2   kxi-sp-controller:0.8.2   ..   0.0.0.0:6000->6000/tcp, :::6000->6000/tcp   cranky_mclaren
```
Take note of the container ID of the Controller, and change the `$KXI_SP_PARENT_HOST` below to the container ID output from that command.
A Worker can be created with:
```bash
# Bind in the project directory to make the spec available,
# point the Worker at the bound spec file and at its Controller,
# and set the kdb+ license to use; in this example, restart the Worker if it dies
$ docker run -it -p 5000:5000 \
    --network=kx \
    -v "$(pwd)":/app \
    -e KXI_SP_SPEC="/app/spec.q" \
    -e KXI_SP_PARENT_HOST="0d05f4679db2:6000" \
    -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \
    --restart unless-stopped \
    registry.dl.kx.com/kxi-sp-worker:0.8.2 -p 5000
```
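With the Worker listening on host port 5000, data can be pushed into the pipeline from a separate q process by calling the callback over IPC. A minimal sketch, assuming a local q session and the port mapping above (the table contents here are purely illustrative):

```q
/ connect to the Worker through the published host port
h:hopen `::5000

/ calling upd remotely feeds the .qsp.read.fromCallback[`upd] reader;
/ the 5-second timer window then batches the records to the console
h(`upd; ([] time:3#.z.p; val:til 3))
```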
Running with multiple Workers¶
Rather than running the pipeline with a single Worker, some pipelines (such as those reading from Kafka or callback functions) can be parallelized by orchestrating multiple Workers.
To do this, start a new Controller with a greater number of required Workers:
```bash
# KXI_SP_MIN_WORKERS sets this pipeline to use 3 Workers
$ docker run -it -p 6000:6000 \
    --network=kx \
    -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \
    -e KXI_SP_MIN_WORKERS=3 \
    --restart unless-stopped \
    registry.dl.kx.com/kxi-sp-controller:0.8.2 -p 6000
```
Then launch the required number of Workers, here using a loop to give each a known host port. Make sure to change `$KXI_SP_PARENT_HOST` to the new Controller's container ID:

```bash
$ for port in 5001 5002 5003; do
    docker run -it -p $port:5000 \
      --network=kx \
      -v "$(pwd)":/app \
      -e KXI_SP_SPEC="/app/spec.q" \
      -e KXI_SP_PARENT_HOST="0d05f4679db2:6000" \
      -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \
      registry.dl.kx.com/kxi-sp-worker:0.8.2 -p 5000
  done
```
There will now be three Workers up and running on host ports 5001, 5002, and 5003.