
Running the Stream Processor

Run and manage Stream Processor images individually, or as part of a group. See also the Simple example Docker workflow and the Simple example Kubernetes workflow.

For clarity in the examples below, environment variables are used to configure the pipelines; a config.yaml file could be used as well.
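
All of the examples below pass the kdb+ license to the containers through the KDB_LICENSE_B64 environment variable. As a minimal sketch (the license file name and base64 flags are illustrative and vary by platform), it can be exported from a local license file before running any of the commands:

# Illustrative only: base64-encode a local license file into the variable the
# containers expect (on macOS, use base64 -i kc.lic instead of -w0)
export KDB_LICENSE_B64=$(base64 -w0 kc.lic)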

Setup

Working from the following project directory:

$ ls
spec.q

With the following spec.q:

$ cat spec.q
.qsp.run
  .qsp.read.fromCallback[`upd]   / ingest data published to the `upd callback
  .qsp.window.timer[00:00:05]    / batch incoming data into 5-second windows
  .qsp.write.toConsole[]         / write each window to the console

Running in Kubernetes

To deploy and run in Kubernetes using the provided Coordinator service, follow the Kubernetes configuration and deployment instructions for launching the Coordinator within the cluster. The instructions also detail how to deploy and teardown a pipeline once the Coordinator service has started.
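
As a rough sketch of reaching the Coordinator once it is running, its HTTP port can be forwarded to the local machine with kubectl so that pipelines can be deployed and torn down through its REST API. The service name and port below are assumptions; the deployment instructions give the exact names for your installation.

# Illustrative only: forward the Coordinator service so its REST API is
# reachable from the local machine
kubectl port-forward svc/kxi-sp-coordinator 5000:5000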

Running in Docker Compose

The same pipeline can be run in Docker Compose with an appropriate Docker Compose file (docker-compose.yaml).

Additional configuration can be added to this file to enable Service Discovery or Monitoring; see the respective documentation for details.

docker-compose.yaml example:

version: "3.3"
services:
  controller:
    image: registry.dl.kx.com/kxi-sp-controller:1.3.2
    ports:
      - 6000:6000
    environment:
      - KDB_LICENSE_B64
    command: ["-p", "6000"]
    deploy:
      restart_policy:
        condition: on-failure

  worker:
    image: registry.dl.kx.com/kxi-sp-worker:1.3.2
    ports:
      - 5000:5000
    environment:
      - KDB_LICENSE_B64                      # kdb+ license, from the host environment
      - KXI_SP_SPEC=/app/spec.q              # pipeline spec, bound into the container below
      - KXI_SP_PARENT_HOST=controller:6000   # Controller service the Worker registers with
    volumes:
      - .:/app                               # bind in the project directory containing spec.q
    command: ["-p", "5000"]
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - controller

With this Docker Compose file, the Controller and Worker can be created at once with:

docker-compose up
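
To confirm that both services came up, and to follow the Worker's output once data starts flowing into the pipeline, the usual Compose commands apply:

docker-compose ps               # both containers should be listed as Up
docker-compose logs -f worker   # the toConsole writer prints here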

Alternatively, to run the pipeline with multiple Workers, change the Controller to expect more Workers:

  controller:
    ..
    environment:
      ..
      - KXI_SP_MIN_WORKERS=3
    ..

Because multiple Worker containers cannot all bind the same host port, change the Worker to expose only the container port, letting Docker assign an ephemeral host port to each:

  worker:
    ..
    ports:
      - 5000
    ..

Then scale the Workers by running Docker Compose with the --scale argument:

docker-compose up --scale worker=3

or by setting a replica count:

  worker:
    ..
    deploy:
      replicas: 3
    ..
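
With either approach Docker assigns the host ports. A sketch of bringing the pipeline up with the replica count and then discovering the assigned ports; the newer docker compose CLI honors deploy.replicas directly, while the legacy docker-compose v1 may need its --compatibility flag (an assumption to verify against your Compose version):

docker compose up -d                    # Compose V2
docker-compose --compatibility up -d    # legacy v1 CLI (assumption)

docker ps --format "table {{.Names}}\t{{.Ports}}"   # host port assigned to each Worker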

Running separate containers

Running with one Worker

First, create a kx network and a Controller to orchestrate and manage the pipeline.

docker network create kx
docker run -it -p 6000:6000 \
    --network=kx \
    -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \     # Set the kdb+ license to use
    --restart unless-stopped \                  # Restart the Controller if it dies
    registry.dl.kx.com/kxi-sp-controller:1.3.2 -p 6000

The Controller then needs Workers to orchestrate. To point a Worker at the Controller, we need its hostname, which on the kx network is its container ID:

docker ps
CONTAINER ID   IMAGE                   .. PORTS                                       NAMES
0d05f4679db2   kxi-sp-controller:1.3.2 .. 0.0.0.0:6000->6000/tcp, :::6000->6000/tcp   cranky_mclaren

Note the container ID of the Controller, and change the KXI_SP_PARENT_HOST below to the container ID output from that command.
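
To avoid hard-coding the ID, it can instead be captured into a shell variable and substituted into the Worker command below (the ancestor filter is simply a convenience for finding the container running the Controller image):

# Illustrative convenience: look up the Controller's container ID
CONTROLLER_ID=$(docker ps -q --filter "ancestor=registry.dl.kx.com/kxi-sp-controller:1.3.2")
# ...then pass -e KXI_SP_PARENT_HOST="$CONTROLLER_ID:6000" to the Worker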

Bind in the project directory to make the spec available.

A Worker can be created with:

docker run -it -p 5000:5000 \
    --network=kx \
    -v "$(pwd)":/app \                           # Bind in the project directory
    -e KXI_SP_SPEC="/app/spec.q" \               # Point to the bound spec file
    -e KXI_SP_PARENT_HOST="0d05f4679db2:6000" \  # Point Worker to its Controller
    -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \      # Set the kdb+ license to use
    --restart unless-stopped \                   # Restart the Worker if it dies
    registry.dl.kx.com/kxi-sp-worker:1.3.2 -p 5000
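
At this point the pipeline is live: the Worker registers with its Controller and waits for data to be published to the upd callback. To confirm both containers are running on the kx network:

docker ps --filter network=kx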

Set the kdb+ license: KDB_LICENSE_B64 must hold a valid base64-encoded kdb+ license, or the Worker will not start.

Running with multiple Workers

Rather than running the pipeline with a single Worker, some pipelines (such as those reading from Kafka or callback functions) can be parallelized by orchestrating multiple Workers.

To do this, start a new Controller with a greater number of required Workers:

docker run -it -p 6000:6000 \
    --network=kx \
    -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \
    -e KXI_SP_MIN_WORKERS=3 \                   # Set this pipeline to use 3 Workers
    --restart unless-stopped \
    registry.dl.kx.com/kxi-sp-controller:1.3.2 -p 6000

Then launch the required number of Workers. Here we use a loop to map each Worker to a known host port, running the containers detached so the loop does not block on the first one.

Change KXI_SP_PARENT_HOST to the new Controller’s container ID.

for port in 5001 5002 5003;
do
    docker run -d -p $port:5000 \
        --network=kx \
        -v "$(pwd)":/app \
        -e KXI_SP_SPEC="/app/spec.q" \
        -e KXI_SP_PARENT_HOST="0d05f4679db2:6000" \
        -e "KDB_LICENSE_B64=$KDB_LICENSE_B64" \
        registry.dl.kx.com/kxi-sp-worker:1.3.2 -p 5000
done

There will now be three Workers up and running, on ports 5001, 5002, and 5003.
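
To confirm that all three Workers are up and mapped to their expected ports, list the Worker containers:

docker ps --filter "ancestor=registry.dl.kx.com/kxi-sp-worker:1.3.2" \
    --format "table {{.ID}}\t{{.Ports}}"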