Model generation and deployment

The following example provides a sample workflow for:

  1. Generation of a model to be used in a production environment
  2. Persistence of this model to cloud storage for use in deployment
  3. Deployment of the model and preprocessing steps to a production environment

This is intended to provide a sample of such a workflow and is not intended to be fully descriptive; users are encouraged to follow the API documentation here to make full use of the functionality.

Model Generation

1) Start the docker container as a development environment following the instructions here.

Ensure that the image is started so that it points explicitly to a cloud storage bucket; in the example below this is done using S3.


For this example a user is expected to have write access to a pre-generated AWS bucket at s3://my-aws-storage.

docker run -it -p 5000:5000 <INSERT_ML_IMAGE> \
    -aws s3://my-aws-storage -p 5000

2) Retrieve a dataset for generation of a model

In this case we are using the Wisconsin Breast Cancer dataset to predict whether a tumour is malignant or benign. This example broadly follows that outlined in the ml-notebooks here.

q)dataset :.p.import[`sklearn.datasets;`:load_breast_cancer][]
q)features:dataset[`:data]`
q)target  :dataset[`:target]`
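For reference, the scikit-learn call that embedPy wraps here can be run directly in Python; the feature matrix lives on the dataset's `data` attribute and the labels on `target` (0 = malignant, 1 = benign):

```python
# Direct Python equivalent of the embedPy dataset retrieval above.
from sklearn.datasets import load_breast_cancer

dataset = load_breast_cancer()
features = dataset.data    # numeric feature matrix
target = dataset.target    # 0 = malignant, 1 = benign

print(features.shape)  # (569, 30)
print(target.shape)    # (569,)
```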

3) Split the data into a training and testing set to validate model performance

To validate that the model is performing appropriately we set aside a testing set which can be used to independently validate the performance of the model. This is done using the function .ml.trainTestSplit provided with the kdb Insights Machine Learning package, e.g. data:.ml.trainTestSplit[features;target;0.1], which returns a dictionary with keys xtrain, ytrain, xtest and ytest. To ensure enough samples are seen in the training phase, the test size is set to 10% of the original data.
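As a sketch of what the 10% hold-out looks like on the Python side, scikit-learn's `train_test_split` serves as a stand-in for `.ml.trainTestSplit` (the fixed `random_state` is our addition for reproducibility, not part of the original workflow):

```python
# Hold out 10% of the samples for independent validation,
# mirroring the 0.1 test size used with .ml.trainTestSplit.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

dataset = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.1, random_state=42)

print(len(X_train), len(X_test))  # 512 57
```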


4) Build and train a model

In this example we use embedPy to generate the models used. These could equally be created using functionality within the kdb Insights Machine Learning package; however, this example is intended to showcase the use of Python models in this regime.

// Fit a Decision Tree Classifier
q)clf:.p.import[`sklearn.tree][`:DecisionTreeClassifier]
q)clf:clf[`max_depth pykw 3]
q)clf[`:fit][data`xtrain;data`ytrain];

// Fit a Random Forest Classifier
// rfkwargs: any RandomForestClassifier keyword arguments
q)rfkwargs:enlist[`n_estimators]!enlist 100
q)rf:.p.import[`sklearn.ensemble][`:RandomForestClassifier]
q)rf:rf[pykwargs rfkwargs]
q)rf[`:fit][data`xtrain;data`ytrain];

5) Validate model performance

Calculate the accuracy of predictions for each of the models:

q)show .ml.accuracy[clf[`:predict][data`xtest]`;data`ytest];
q)show .ml.accuracy[rf[`:predict][data`xtest]`;data`ytest];
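The fit-and-validate steps above map onto the following plain-Python sketch, using the same estimators and `max_depth=3`; the fixed `random_state` and 90/10 split are assumptions added for reproducibility:

```python
# Train the two classifiers and score them on the held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

dataset = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.1, random_state=42)

# Decision tree with the same max_depth=3 as the embedPy version
clf = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)

# Random forest; n_estimators stands in for whatever rfkwargs holds
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

clf_acc = accuracy_score(y_test, clf.predict(X_test))
rf_acc = accuracy_score(y_test, rf.predict(X_test))
print(clf_acc, rf_acc)
```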

6) Publish models to the Registry

Once you are happy with the performance of the models, publish them to the Machine Learning Registry at s3://my-aws-storage. This follows the documentation outlined in the Registry section here.

// Set the decision tree classifier to the 'Wisconsin' experiment
q).ml.registry.set.model[enlist[`aws]!enlist"s3://my-aws-storage";"Wisconsin";clf;"DecisionTreeClassifier";"sklearn";::]

// Set the random forest classifier to the 'Wisconsin' experiment
q).ml.registry.set.model[enlist[`aws]!enlist"s3://my-aws-storage";"Wisconsin";rf;"RandomForest";"sklearn";::]

Model Docker Deployment

1) Generate a spec.q file defining deployment of the model generated above

// spec.q
.qsp.run
  .qsp.read.fromCallback[`publish]
  .qsp.ml.registry.predict[
    {select from x};
    .qsp.use (!) . flip (
      (`registry ; enlist[`aws]!enlist "s3://my-aws-storage");
      (`model    ; "RandomForest");
      (`version  ; 1 0))]
  .qsp.write.toConsole[]

2) Set up a Docker Compose file for the example

# docker-compose.yaml
version: "3.3"
services:
  controller:
    image: <INSERT_SP_CONTROLLER_IMAGE>
    ports:
      - 6000:6000
    environment:
      - KDB_LICENSE_B64                        # Which kdb+ license to use, see note below
    command: ["-p", "6000"]

  ml-worker:
    image: <INSERT_ML_WORKER_IMAGE>
    ports:
      - 5000:5000
    volumes:
      - .:/app                                 # Bind in the spec.q file
    environment:
      - KXI_SP_SPEC=/app/spec.q                # Point to the bound spec.q file
      - KXI_SP_PARENT_HOST=controller:6000     # Point to the parent Controller
      - KDB_LICENSE_B64
      - AWS_ACCESS_KEY_ID                      # Use AWS_ACCESS_KEY_ID defined in process
      - AWS_SECRET_ACCESS_KEY                  # Use AWS_SECRET_ACCESS_KEY defined in process
      - AWS_REGION                             # Use AWS_REGION defined in process
      - KXI_SP_CHECKPOINT_FREQ=0               # Set the checkpoint frequency to 0
    command: ["-p", "5000"]
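The worker's environment section passes the AWS settings and the kdb+ license through from the host shell rather than hard-coding them, so export them before bringing the stack up (all values below are placeholders):

```shell
# Placeholder values -- substitute your own before running
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"

# Base64-encoded kdb+ license contents
export KDB_LICENSE_B64="..."
```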

3) Start the container and follow logs

$ docker-compose up -d
$ docker-compose logs -f

4) Connect to the q process running the pipeline and push data for prediction

$ q p.q
q)h:hopen 5000
q)data  :.p.import[`sklearn.datasets;`:load_breast_cancer][]
q)feat  :data[`:data]`
q)fnames:`$ssr[;" ";"_"]each data[`:feature_names][`:tolist][]`
q)tab:flip fnames!flip feat
q)h(`publish;tab)

Model Kubernetes Deployment

1) Generate a spec.q file defining deployment of the model generated above

// spec.q
.qsp.run
  .qsp.read.fromCallback[`publish]
  .qsp.ml.registry.predict[
    {select from x};
    .qsp.use (!) . flip (
      (`registry ; enlist[`aws]!enlist "s3://my-aws-storage");
      (`model    ; "RandomForest");
      (`version  ; 1 0))]
  .qsp.write.toConsole[]

2) Follow the Kubernetes setup outlined here to generate a Stream Processor Coordinator.

3) Deploy the SP ML Worker image with the defined specification in step 1 above.

$ jobname=$(curl -X POST http://localhost:5000/pipeline/create -d \
    "$(jq -n --arg spec "$(cat spec.q)" \
    '{
        name     : "ml-example",
        type     : "spec",
        base     : "q-ml",
        config   : { content: $spec },
        settings : { minWorkers: "1", maxWorkers: "1" },
        env      : { AWS_ACCESS_KEY_ID      : "'"$AWS_ACCESS_KEY_ID"'",
                     AWS_SECRET_ACCESS_KEY  : "'"$AWS_SECRET_ACCESS_KEY"'",
                     AWS_REGION             : "'"$AWS_REGION"'",
                     KXI_SP_CHECKPOINT_FREQ : 0}
    }' | jq -asR .)" | jq -r .id)

4) Port-forward the SP worker

$ kubectl port-forward <INSERT_WORKER> 7000:8080

5) In a new process follow the logs of the worker process

$ kubectl logs <INSERT_WORKER> -c spwork -f

6) Start a q process and publish data to the SP ML Pipeline

$ q p.q
q)h:hopen 7000
q)data  :.p.import[`sklearn.datasets;`:load_breast_cancer][]
q)feat  :data[`:data]`
q)fnames:`$ssr[;" ";"_"]each data[`:feature_names][`:tolist][]`
q)tab:flip fnames!flip feat
q)h(`publish;tab)