The goal of this tutorial is to set up and configure a Kubernetes cluster on GCP to allow users to install a kdb Insights Enterprise.

Terraform artifacts

To gain access to the kdb Insights Terraform modules, contact

You will need to download the artifact and extract it.


For this tutorial you will need:

A Google Cloud account.

A Google Cloud user with admin privileges.

A Google Cloud project with the following APIs enabled:

Cloud Resource Manager API
Compute Engine API
Kubernetes Engine API
Cloud Filestore API

Sufficient Quotas to deploy the cluster.

A client machine with Google Cloud SDK.

A client machine with Docker.


On Linux, additional steps are required to manage Docker as a non-root user.

Environment Setup

To extract the artifact, execute the following:

tar xzvf kxi-terraform-*.tgz

The above command will create the kxi-terraform directory. The commands below are executed within this directory and thus use relative paths.

To change to this directory execute the following:

cd kxi-terraform

The deployment process runs inside a Docker container which includes all tools needed by the provided scripts. A Dockerfile is provided in the config directory for building the Docker image. The image must be named kxi-terraform and can be built with the command below:

docker build -t kxi-terraform:latest ./config
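Once the build completes, you can sanity-check that the image exists. This check is illustrative (not part of the provided scripts) and is guarded so it is skipped where Docker is unavailable:

```shell
# Verify the kxi-terraform image is present in the local image cache
IMAGE="kxi-terraform:latest"
if command -v docker >/dev/null 2>&1; then
  docker image inspect "${IMAGE}" >/dev/null 2>&1 \
    && echo "${IMAGE} is available" \
    || echo "${IMAGE} not found - re-run the build"
fi
```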

Service Account Setup

The Terraform scripts require a Service Account with appropriate permissions which are defined in the kxi-gcp-tf-policy.txt file. The service account should already exist.


The below commands should be run by a user with admin privileges.

Create a JSON key file for the service account:

gcloud iam service-accounts keys create "${SERVICE_ACCOUNT}.json" --iam-account="${SERVICE_ACCOUNT_EMAIL}" --no-user-output-enabled


  • SERVICE_ACCOUNT is the name of an existing service account
  • SERVICE_ACCOUNT_EMAIL is the email address of an existing service account

The command will create the JSON file in the base directory. You will need this filename later when updating the configuration file.

Grant roles to service account:

while IFS= read -r role; do
  gcloud projects add-iam-policy-binding "${PROJECT}" --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" --role="${role}" --condition=None --no-user-output-enabled
done < config/kxi-gcp-tf-policy.txt


  • PROJECT is the GCP project used for deployment
  • SERVICE_ACCOUNT_EMAIL is the email address of the service account
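The loop reads config/kxi-gcp-tf-policy.txt line by line, so the file is expected to contain one IAM role name per line. The roles below are illustrative only; the actual list is shipped with the artifact:

```
roles/compute.admin
roles/container.admin
roles/iam.serviceAccountUser
```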


The Terraform scripts are driven by environment variables which configure how the Kubernetes cluster will be deployed. These variables are stored in the kxi-terraform.env file in the base directory.

Copy the environment file to the base directory:

Linux/macOS:

cp config/kxi-terraform-gcp.env kxi-terraform.env

Windows:

copy config\kxi-terraform-gcp.env kxi-terraform.env

Update kxi-terraform.env file and populate the following variables:

  • TF_VAR_project : The GCP project used for deployment

  • TF_VAR_gcp_project : The GCP project used for deployment

  • GOOGLE_APPLICATION_CREDENTIALS : The path of the JSON key file inside the container, which should start with /terraform. For example, if the filename is account.json the value should be /terraform/account.json

  • ENV : Unique identifier for all resources. You will need to change it if you want to repeat the process and create an additional cluster. The variable can only contain lowercase letters and numbers

  • TF_VAR_region : Region to deploy the cluster. Make sure you update this to your desired region

  • TF_VAR_enable_logging: Enables forwarding of container logs to GCP Stackdriver. This is enabled by default and can be disabled by setting the variable to false.

  • TF_VAR_letsencrypt_account : Email account for Let's Encrypt registration and notifications. If you intend to use cert-manager to issue certificates, provide a valid email address here to receive notifications related to certificate expiration

  • TF_VAR_default_node_type : Instance type for Kubernetes nodes
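As a sketch, a populated kxi-terraform.env might look like the following. The variable names come from the list above; the values (project, region, node type) are examples only and must be replaced with your own, and the `export` syntax is assumed here rather than confirmed by the artifact:

```shell
# Illustrative kxi-terraform.env values - replace with your own
export TF_VAR_project="my-gcp-project"
export TF_VAR_gcp_project="my-gcp-project"
export GOOGLE_APPLICATION_CREDENTIALS="/terraform/account.json"
export ENV="demo"                          # lowercase letters and numbers only
export TF_VAR_region="europe-west2"
export TF_VAR_enable_logging="true"
export TF_VAR_letsencrypt_account="admin@example.com"
export TF_VAR_default_node_type="n2-standard-4"
```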

Autoscaling Consideration

The default node type uses local SSDs to provide the best possible performance. Local SSDs allow the cluster to scale up, but they block scale-down operations: the cluster autoscaler cannot remove nodes running pods that use local storage, even when utilisation is low. Additional costs may therefore be incurred.


To deploy the cluster and apply configuration, execute the following:



A pre-deployment check will be performed before proceeding further. If the check fails, the script will exit immediately to avoid deployment failures. You should resolve all issues before executing the command again.

This script will execute a series of Terraform and custom commands and may take some time to run. If the command fails at any point due to network issues or timeouts, you can execute it again until it completes without errors. If the error is related to the Cloud Provider account (e.g. limits), you should resolve it first before executing the command again.

If any variable in the configuration file needs to be changed, the cluster should be destroyed first and then re-deployed.

For easier searching and filtering, the created resources are named/tagged using the gcp-${ENV} prefix. For example, if the ENV is set to demo, all resource names/tags include the gcp-demo prefix.
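A hedged sketch of using that prefix for filtering: gcloud's --filter flag supports regex matching with the `~` operator, so resources for a given environment can be listed as below. The instance listing assumes an authenticated gcloud and is guarded so it is skipped where gcloud is unavailable:

```shell
# Build the resource prefix from ENV, as described above
ENV="demo"
PREFIX="gcp-${ENV}"

# List compute instances whose names start with the prefix
if command -v gcloud >/dev/null 2>&1; then
  gcloud compute instances list --filter="name~^${PREFIX}" || true
fi
echo "Filtering on prefix: ${PREFIX}"
```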

Cluster Access

To access the cluster, execute the following:


The above command will start a shell session on a Docker container, generate a kubeconfig entry and connect to the VPN. Once the command completes, you will be able to manage the cluster via helm/kubectl.


The kxi-terraform directory on the host is mounted on the container at /terraform. Files and directories created under /terraform persist even after the container is stopped.


If other users require access to the cluster, they will need to download and extract the artifact, build the Docker image, and copy both the kxi-terraform.env file and the terraform/gcp/client.ovpn file (generated during deployment) to the same paths in their own extracted artifact directory. Once these two files are copied, the above script can be used to access the cluster.

Below you can find kubectl commands to retrieve information about the installed components.

List Kubernetes Worker Nodes

kubectl get nodes

List Kubernetes namespaces

kubectl get namespaces

List cert-manager pods running on cert-manager namespace

kubectl get pods --namespace=cert-manager

List nginx ingress controller pod running on ingress-nginx namespace

kubectl get pods --namespace=ingress-nginx

List rook-ceph pods running on rook-ceph namespace

kubectl get pods --namespace=rook-ceph

Environment Destroy

Before you destroy the environment, make sure you don't have any active shell sessions on the Docker container. You can close the session by executing the following:


To destroy the cluster, execute the following:


If the command fails at any point due to network issues or timeouts, you can execute it again until it completes without errors.


Even after the cluster is destroyed, disks created dynamically by the application may still be present and incur additional costs. You should review the GCE disks to verify whether the data is still needed.
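That review can be sketched with the same gcp-${ENV} prefix used for all resources. The gcloud call is illustrative and guarded so it is skipped where gcloud is unavailable:

```shell
# List persistent disks left over from a destroyed "demo" environment
ENV="demo"
FILTER="name~gcp-${ENV}"
if command -v gcloud >/dev/null 2>&1; then
  gcloud compute disks list --filter="${FILTER}" || true
fi
```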