Observability and metrics integration

Integrating the kdb+ Helm charts with Prometheus

Metrics

The charts support exporting application metrics via an HTTP endpoint, allowing scraping by monitoring systems, typically Prometheus. The charts use the KX Fusion Prometheus Exporter to do this.

Enabling metrics requires building an additional container, called metriccollector.

metriccollector is built by default by QPacker. As with the other containers, you can configure it with the metrics.image values keys.
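
For example, a values snippet overriding the sidecar image might look like the following. The repository and tag key names follow common Helm chart conventions and the registry path is hypothetical; check your chart's values.yaml for the exact keys.

```yaml
tp:
  metrics:
    enabled: true
    image:
      # Hypothetical registry path -- substitute your own
      repository: gcr.io/cloudpak/metriccollector
      tag: latest
```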

Sidecar

Configuring Prometheus is a cluster administrator task, and is a large topic.

Metrics are disabled by default. They can be enabled via the metrics.enabled boolean in the values.

Enabling metrics for a specific component, the tickerplant:

cd charts/kx
helm install kx . --dependency-update \
  --set tp.metrics.enabled=true

Alternatively, enable metrics via your values file:

cd charts/kx
helm install kx . --dependency-update \
  -f examples/gcp-metrics.yaml

Alternatively, to enable metrics globally, for every single component:

cd charts/kx
helm install kx . --dependency-update \
  --set global.metrics.enabled=true

Notice that your deployed component pods now run a second container (with a -metrics suffix). You should now be able to open a shell into your component and scrape its metrics port (TCP/8080 by default).

$ kubectl exec -it deploy/kx-tp -- bash
Defaulting container name to tp.
Use 'kubectl describe pod/kx-tp-79dffddd55-85gws -n acmck' to see all of the containers in this pod.
[root@kx-tp-79dffddd55-85gws /]# curl -s http://127.0.0.1:8080/metrics
# HELP kdb_info process information
# TYPE kdb_info gauge
kdb_info{release_date="2020.08.28", release_version="4", os_version="l64", process_cores="2", license_expiry_date="2021.11.12"} 1

IPC protocol

In the component chart's values, the metrics.protocol key specifies which communication handler to use for IPC between the instrumented component and the metrics sidecar exporter.

Metrics are gathered locally and published using the configured protocol to the metrics sidecar.

Once metrics are in the sidecar they can be scraped via a REST request to it, or via a ServiceMonitor.

The options are:

"uds"   use Unix Domain Sockets for IPC
        (only available on the same host, such as a sidecar)
""      use standard q TCP IPC, useful when aggregating to a remote host
"tls"   communicate with a TLS endpoint over TCP

We use a default of "uds", which is valid when using a sidecar pattern.
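
For instance, if you were aggregating metrics to a remote exporter rather than a local sidecar, a values snippet might set the protocol to standard q TCP IPC. This is an illustrative sketch; "uds" remains the right choice for the sidecar pattern.

```yaml
tp:
  metrics:
    enabled: true
    protocol: ""   # standard q TCP IPC, for a remote aggregator
```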

.z event handlers

Metrics are gathered from the component container by overriding the .z namespace event handlers.

You can toggle metrics for each of the handlers with the relevant metrics.handler.* key in your component values.

By default, every handler has its metrics published.

handler:
  po: true
  pc: true
  wo: true
  wc: true
  pg: true
  ps: true
  ws: true
  ph: true
  pp: true

ServiceMonitor

The charts can be configured to render ServiceMonitor resources. ServiceMonitor is a Custom Resource Definition (CRD) provided by the Prometheus Operator.

ServiceMonitor resources can be enabled with the metrics.serviceMonitor.enabled boolean in values.

The ServiceMonitor is disabled by default, and requires metrics to be enabled.

You can enable the ServiceMonitor, for a given component:

cd charts/kx
helm install kx . --dependency-update \
    --set global.metrics.enabled=true \
    --set global.metrics.serviceMonitor.enabled=true

Or, more likely, via your values file:

cd charts/kx
helm install kx . --dependency-update \
  -f examples/gcp-metrics.yaml

Prometheus

Prometheus is a popular monitoring and alerting toolkit for Kubernetes. A common method of deploying the Prometheus stack is with the Prometheus Operator.

This operator is typically bundled together with other components, such as Grafana for visualizations, and node metrics exporters to gather cluster-wide metrics. One example of this is the kube-prometheus-stack Helm chart, maintained by the Prometheus community.

Deploying the monitoring stack

We shall install the monitoring stack into a newly created monitoring namespace, using Helm.

The Helm release is called kx.

This release name is what the default monitoring configuration relies on to integrate: the ServiceMonitor labels in the example values files match it.

Create the target namespace.

$ kubectl create ns monitoring
namespace/monitoring created

Configure the Helm repo for the kube-prometheus-stack chart.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update

Deploy the chart and note the release is called kx.

$ helm install kx prometheus-community/kube-prometheus-stack -n monitoring
NAME: kx
LAST DEPLOYED: Thu Dec  3 14:49:04 2020
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=kx"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on 
how to create & configure Alertmanager and Prometheus instances using the Operator.

Wait a few moments; then confirm all your pods are up.

$ kubectl --namespace monitoring get pods -l "release=kx"
NAME                                                 READY   STATUS    RESTARTS   AGE
kx-kube-prometheus-stack-operator-7dd7865779-fz9dp   1/1     Running   0          71s
kx-prometheus-node-exporter-4txm7                    1/1     Running   0          71s

Check we can hit the Prometheus UI.

$ kubectl port-forward -n monitoring svc/kx-kube-prometheus-stack-prometheus 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

and connect to http://127.0.0.1:9090.

Check we can access the Grafana UI

Default credentials:

Username: admin
Password: prom-operator

$ kubectl port-forward -n monitoring svc/kx-grafana 8080:80
Forwarding from 127.0.0.1:8080 -> 3000
Forwarding from [::1]:8080 -> 3000

and connect to http://127.0.0.1:8080.

Finally, as we are using the operator, we can inspect our custom resource, called prometheus by default. Running kubectl describe on this resource describes the Prometheus installation.

Note the section Service Monitor Selector and the configured label selector.

$ kubectl describe -n monitoring prometheus
[... SNIP ...]
Spec:
  Alerting:
    Alertmanagers:
      API Version:  v2
      Name:         kx-kube-prometheus-stack-alertmanager
      Namespace:    monitoring
      Path Prefix:  /
      Port:         web
  Enable Admin API:  false
  External URL:      http://kx-kube-prometheus-stack-prometheus.monitoring:9090
  Image:             quay.io/prometheus/prometheus:v2.22.1
  Listen Local:      false
  Log Format:        logfmt
  Log Level:         info
  Paused:            false
  Pod Monitor Namespace Selector:
  Pod Monitor Selector:
    Match Labels:
      Release:  kx
  Port Name:  web
  Probe Namespace Selector:
  Probe Selector:
    Match Labels:
      Release:  kx
  Replicas:   1
  Retention:  10d
  Route Prefix:  /
  Rule Namespace Selector:
  Rule Selector:
    Match Labels:
      App:      kube-prometheus-stack
      Release:  kx
  Security Context:
    Fs Group:         2000
    Run As Group:     2000
    Run As Non Root:  true
    Run As User:      1000
  Service Account Name:  kx-kube-prometheus-stack-prometheus
  Service Monitor Namespace Selector:
  Service Monitor Selector:
    Match Labels:
      Release:  kx
  Version:  v2.22.1
Events:     <none>

Deploy kx Helm charts

To enable metrics, we:

  • enable the sidecar exporter
  • enable the ServiceMonitor resource
  • configure the labels correctly, so that our monitoring system knows to process our ServiceMonitor resources

We configure these settings via a values.yaml file.

Here is a snippet:

tp:
  image:
    repository: gcr.io/cloudpak/
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      additionalLabels:
        release: kx

Note metrics and metrics.serviceMonitor are both enabled.

We also include custom labels, specifically release: kx. This allows the ServiceMonitor to be picked up by the installed Prometheus Operator.
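
The rendered ServiceMonitor then carries the label the operator's Service Monitor Selector matches on. A sketch of the relevant metadata (field values illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kx-tp
  labels:
    release: kx   # matched by the operator's serviceMonitorSelector
```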

Deploy the chart.

$ cd charts/kx
$ helm install --dependency-update kx . \
  -f examples/gcp-metrics.yaml
Hang tight while we grab the latest from your chart repositories...
[...]
Update Complete. ⎈Happy Helming!⎈
Saving 4 charts
Deleting outdated charts
NAME: kx
LAST DEPLOYED: Thu Dec  3 15:30:01 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check your pods are up.

$ kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
kx-gw-686f7c489f-rz4pc                      2/2     Running   0          3h26m
kx-hdb-6979868b86-25hm2                     2/2     Running   0          3h26m
kx-rdb-758dc85c84-xglxp                     2/2     Running   0          3h26m
kx-tp-79dffddd55-85gws                      2/2     Running   0          3h26m

Check the service monitors have been created.

$ kubectl get servicemonitors
NAME     AGE
kx-gw    14m
kx-hdb   14m
kx-rdb   14m
kx-tp    14m

Using the metrics

Open your session to the Prometheus UI.

$ kubectl port-forward -n monitoring svc/kx-kube-prometheus-stack-prometheus 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Connect to http://127.0.0.1:9090 and confirm metrics are available, by typing kdb into the query box.
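
For example, assuming the kdb_info gauge shown earlier, you can query it directly, or filter on one of its labels:

```promql
# Process information for all scraped kdb+ components
kdb_info

# Restrict to processes running kdb+ 4
kdb_info{release_version="4"}
```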

(Screenshot: kdb metrics listed in the Prometheus query UI)

Your metrics should now be available to be visualized by Grafana.

Grafana dashboard

An example dashboard using the captured container metrics:

(Screenshot: example Grafana dashboard)

This is a modified version of the dashboard available with the KX Fusion Prometheus Exporter.

kdb-grafana.json