Glossary

Assembly

An assembly comprises:

  • A database to store and access your data.
  • A schema to convert imported data to a format compatible with kdb Insights Enterprise (using kdb+ technology).
  • A stream to push event data into a database.
  • Pipelines to read data from source and write it to a kdb Insights Enterprise database.

Every assembly is labelled with the name you give it.

Ceph

An open-source, scalable, distributed storage solution for data pipelines.

C-SDK

A software development kit to support C/C++ applications.

CLI

The CLI is the command line interface. The CLI runs kdb Insights Enterprise processes and is an alternative to the UI for power users.

Console (UI)

The console is where results from ad hoc queries run in the Scratchpad are presented.

The console is part of the Query window.

Database

A database is a data store built on kdb+ technology. A database offers rdb (real-time database), idb (interval database), and at least one hdb (historic database) storage tiers; sub-tiers of an hdb may also sit on the database.

A database also includes:

  • A schema to convert imported data to a kdb+ compatible format.
  • A stream to help push event data to the database.
  • Optional pipelines to import data to the platform.

I want to build my own database.

Database wizard

A step-by-step guide to help you build a database. At the end of the wizard you will have a fully functional database to store your data.

Entity-tree

The entity-tree is a dynamic menu, always available in the left margin of the kdb Insights Enterprise user interface. The content of the menu changes depending on where you are in the platform. On the Overview page, for example, the entity-tree shows a list of the assemblies, databases, schemas, pipelines, streams, queries, and views you have created. On the pipeline page, the entity-tree lists the nodes used to build data pipelines that import data from source and transform it to a format compatible with a kdb Insights Enterprise database.

hdb

An hdb is a mount for storing historic data on a database. The historic database is the final destination for interval data.

idb

An idb is a mount for storing interval data on a database. It takes data from the real-time database (rdb) and stores it for a set period, e.g. 10 minutes, before the data is written to the historic database (hdb).

Import Wizard

A step-by-step process for building a pipeline to import, transform and write data to your database.

I want to learn more about the import wizard

Java-SDK

A software development kit for developing applications in Java.

kdb+

kdb+ is an ultra-fast, columnar time-series database.

Keycloak

Keycloak is an open-source single sign-on authentication service and management tool. It offers enhanced user security built on existing protocols and can support authentication via social platform providers such as Google, Facebook, or GitHub.

kodbc

An open database connectivity (ODBC) driver for connecting to kdb+ databases.

Kubernetes

Kubernetes is an open-source tool for bundling and managing clusters of containerized applications.

Label

Coming soon.

Mount

A mounted database is ready for use; a database can have hdb, idb, and/or rdb mounts.

Nodes

Nodes are used by pipelines to read data from its source, transform it, and write it to the database.

Each node has a defined function and a set of editable properties. Some nodes allow for q or Python code.

Machine learning nodes offer more advanced manipulations of imported data before it is written to the database.

Object Storage

A data storage system that manages data as objects, in contrast to file-hierarchy or block-based storage architectures. Object storage is used for unstructured data, eliminating the scaling limitations of traditional file storage. This near-limitless scalability is why object storage is the storage of the cloud; Amazon, Google, and Microsoft all employ object storage as their primary storage.

Output Variable

Results from a database query are written to an Output Variable. The Output Variable can then be queried in the scratchpad using q or Python.
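
For example, a minimal scratchpad sketch in q, assuming an earlier query wrote its results to an Output Variable named t (the variable and the sym, price, and size columns are hypothetical):

    / t holds the results of an earlier query
    select avg price by sym from t          / aggregate the queried data
    filtered:select from t where size>100   / derive a new table from t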

Partitioning

When a data table is written to a database it must be partitioned to be compatible with a kdb+ time-series database.

Partitioning is handled by a timestamp column defined in the schema. Every table must have a timestamp column.
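
As a rough q sketch (the table and column names here are hypothetical), the timestamp column is what maps each row to an on-disk partition:

    / hypothetical table with a timestamp column named ts
    sensor:([] ts:2023.01.01D00 2023.01.01D12 2023.01.02D06; reading:1.2 3.4 5.6)
    / the date derived from ts decides which partition each row lands in
    select part:`date$ts, reading from sensor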

Pipeline

Pipelines are a linked set of processes to read data from its source, transform it to a format compatible with kdb Insights Enterprise, then write it to a database for later querying.

Pipelines can be created using the Import Wizard or the visual pipeline builder. The pipeline builder offers a set of nodes to help read, write, or transform data; nodes are connected together in a workspace to form a linked chain of events, or pipeline template. Additional machine learning nodes are available for more advanced data interactions.

Pipelines can be deployed individually or associated with a database; pipelines associated with a database will be deployed and activated when the database is deployed.
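
Pipelines can also be expressed in code. A minimal sketch, assuming the kdb Insights Stream Processor q API (.qsp) is available in the environment, chains a reader, a transform, and a writer (the size column is hypothetical):

    / read from a callback, filter rows, write to the console
    .qsp.run
      .qsp.read.fromCallback[`publish]          / reader node
      .qsp.map[{select from x where size>0}]    / transform node
      .qsp.write.toConsole[]                    / writer node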

I want to learn more about pipelines

Pipeline template

The pipeline template is the layout of the nodes that together make up a pipeline.

Protocol Buffers

Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data; think XML, but smaller, faster, and simpler. You define your data structure once, then use generated source code to read and write structured data, to and from a variety of data streams, in a variety of programming languages.

q

q is the programming language used to query a kdb+ database.
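
A small, self-contained qSQL example (the trade table is hypothetical):

    / build an in-memory table and query it
    trade:([] sym:`AAPL`MSFT`AAPL; price:150.1 250.2 151.3; size:100 200 300)
    select avg price, sum size by sym from trade   / aggregate by symbol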

Query

Interact with your data by building queries in the Query window. Build queries with filters, or execute q or SQL code. Queries reference the name of a table generated by a pipeline; results are written to an Output Variable for use by the Scratchpad.

Ad hoc queries in the scratchpad use q or Python against the Output Variable; results are output to the console as a table or chart.

I want to learn more about data exploration

Reader

Coming soon.

rdb

Real-time event data is stored on the rdb mount of a database before it is written to the interval database (idb).

RT Bridge

An RT Bridge is a q process that enables applications outside of Insights, which already deliver data to a q process, to deliver the same data to Insights without further modification.

RT stream

An RT stream is the same as a kdb Insights stream; see Stream.

Schema

A schema defines how data is converted from its source format to a format compatible with a kdb+ database. Every data table has its own schema.
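
A rough q analogue of a table schema (the table and column names are hypothetical) is an empty table whose columns carry kdb+ types:

    / empty table: column names and kdb+ types, but no rows yet
    trade:([] ts:`timestamp$(); sym:`$(); price:`float$(); size:`long$())
    meta trade   / lists each column's name and type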

I want to learn more about schemas.

Scratchpad

Scratchpad is part of the Query window. With the scratchpad you can make ad hoc queries against an Output Variable generated by a query against a table in the database.

You can also create data tables directly in the scratchpad editor. The scratchpad editor supports q and Python code.

Results from a scratchpad query are presented in the console, or as a table or chart.

SDK

kdb Insights Enterprise uses a number of software development kits (SDKs) to help you organize your data on the platform.

SDKs are available for Java, C/C++ and q.

SQL

SQL (Structured Query Language) is a standard language for accessing databases, and is supported by kdb+ databases.
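
For example, the same aggregation written in SQL and in qSQL (the trade table is hypothetical; the s) prefix shown is a q-session convention that works where SQL support is loaded):

    / SQL, via the s) prefix:
    s)SELECT sym, AVG(price) FROM trade GROUP BY sym
    / equivalent qSQL:
    select avg price by sym from trade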

Stream

A stream is how event data is written to a database. Event data is typically real-time, such as that generated by a price or sensor feed.

Real-time data is stored in a real-time database (rdb) and moved to an interval database (idb) before being written to a historic database (hdb).

Terraform

Terraform is an open-source infrastructure-as-code tool. Terraform scripts are used to provision and manage the cloud infrastructure on which kdb Insights Enterprise is deployed.

Transform

A Transform node is required for most pipelines. A transform node takes imported data and converts it to a kdb+ format suitable for storage in the database.
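
A sketch in q of the kind of conversion a transform performs (the source rows and column names are hypothetical): parsing text columns from a source into kdb+ types.

    / raw rows arrive as strings, e.g. from a CSV source
    raw:([] ts:("2023.01.01D00:00:00";"2023.01.01D00:00:01"); price:("1.5";"2.5"))
    / cast each column to its kdb+ type: "P" = timestamp, "F" = float
    update ts:"P"$ts, price:"F"$price from raw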

UI

UI is the User Interface for kdb Insights Enterprise.

View

Views are how you build visualizations in kdb Insights Enterprise. Views are powered by KX Dashboards technology.

Writer

A Writer node is an essential part of any pipeline. The Writer node takes the transformed (kdb+) data read from its source and writes it to a kdb+ database.