
Logging with Fluent Bit

The charts perform logging by writing to STDOUT, so you can inspect the logs of any pod to investigate issues. In practice, however, it usually makes more sense to collect the logs in a single place.
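
For example, you can view the STDOUT of a single pod with kubectl; the pod and namespace names below are placeholders for your own:

kubectl logs MY_POD_NAME -n MY_NAMESPACE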

The simplest way to do this is to include the Fluent Bit logging agent with the charts. It is deployed to the Kubernetes cluster as a DaemonSet, so it runs and collects logs on every node.
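
Once the agent is installed, a quick way to confirm it is running on every node is to list the DaemonSet and its pods. This assumes the logging namespace used in the example below:

kubectl get daemonset -n logging
kubectl get pods -n logging -o wide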

You can configure it to write to different destinations, e.g.

  • Amazon CloudWatch
  • Google Stackdriver
  • Azure Log Analytics

A full list of supported destinations is available in the Fluent Bit documentation.

Example

This example deploys the agent DaemonSet configured for AWS CloudWatch. It runs in its own logging namespace.

cd charts/kx-fluent-bit
helm install -n logging kx-log . \
    -f examples/fluent-bit-aws-cw.yaml --create-namespace

The examples/fluent-bit-aws-cw.yaml file contains the agent configuration and can be edited to suit your requirements.

Deploy the kx charts as normal and their logs should be published to CloudWatch. The equivalent example configurations for the other cloud providers are:

GCP      examples/fluent-bit-gcp-stackdriver.yaml
Azure    examples/fluent-bit-azure-logs.yaml
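
For example, to deploy the GCP configuration, reusing the release name and namespace from the CloudWatch example above:

cd charts/kx-fluent-bit
helm install -n logging kx-log . \
    -f examples/fluent-bit-gcp-stackdriver.yaml --create-namespace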

Notes

The Azure file must be updated before deploying the charts: it requires a Log Analytics workspace ID and authentication key in order to write logs. Details on how to find these are also included in the QLog guide.
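
If the Azure CLI is available, one way to look these values up is sketched below; the resource group and workspace names are placeholders for your own:

# Workspace ID
az monitor log-analytics workspace show \
    --resource-group MY_RESOURCE_GROUP --workspace-name MY_WORKSPACE \
    --query customerId -o tsv

# Primary shared (auth) key
az monitor log-analytics workspace get-shared-keys \
    --resource-group MY_RESOURCE_GROUP --workspace-name MY_WORKSPACE \
    --query primarySharedKey -o tsv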

For GCP and AWS, the agent uses the default service account for authentication when writing logs. This account must have the required log-writing permissions.
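
As a sketch (not the only approach), on GCP the log-writing role can be granted to the service account, and on AWS a log-writing policy can be attached to the node role. The names in capitals are placeholders for your own values:

# GCP: allow the service account to write logs
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
    --member="serviceAccount:MY_SERVICE_ACCOUNT_EMAIL" \
    --role="roles/logging.logWriter"

# AWS: attach a log-writing policy to the node instance role
aws iam attach-role-policy --role-name MY_NODE_ROLE \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess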

Each cloud provider offers automatic log collection for Kubernetes clusters; this is usually configured when the cluster is set up. If it is enabled, deploying the agent is not required. You may still want it, however, for greater control, e.g. custom parsing, filtering or different endpoints.

See the QLog Quick Start guide for how to view the logs in cloud logging applications.
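
As a quick command-line check on AWS (assuming the AWS CLI v2 and the log group name configured in your values file), recent entries can be tailed with:

aws logs tail MY_LOG_GROUP_NAME --follow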