System overview (UI)
Before you sign in to the kdb Insights Enterprise UI, we recommend you review this high-level overview of the workflow for importing data, writing it to a database, and then querying and visualizing that data in kdb Insights Enterprise.
- Create a database and schema to store your data
- Create a pipeline, the process that imports data into your database
- Query your data using q, SQL, or Python
- Visualize your data using the UI's dashboards
We also recommend that you take the interactive guided tour to put your knowledge into practice.
Create a database
Data is stored in kdb Insights Enterprise using kdb+, a column-based, relational time-series database technology. Create a database by giving a new instance a name; you can open a new instance from the [+] header menu, or from Build a Database on the Overview page.
Create a schema
A schema contains table definitions that ensure imported data is compatible with kdb+ data types. You must create a schema for the data that you want to import. You can define the schema manually, or supply it as a JSON file; either way it is created as part of the database.
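As a rough illustration, a schema pairs each table column with a kdb+ type. The table and column names below are hypothetical, and the JSON layout is a simplified sketch, not the exact format kdb Insights Enterprise expects:

```python
import json

# Hypothetical schema sketch: a `trades` table whose columns are mapped
# to kdb+ type names. The real JSON layout used by kdb Insights may differ.
schema = {
    "tables": {
        "trades": {
            "columns": [
                {"name": "time", "type": "timestamp"},
                {"name": "sym", "type": "symbol"},
                {"name": "price", "type": "float"},
                {"name": "size", "type": "long"},
            ]
        }
    }
}

print(json.dumps(schema, indent=2))
```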
Create a pipeline
You must create one or more pipeline templates to import data. A pipeline is built from nodes:
- A reader reads data from a source.
- A transformer applies a schema to the incoming data.
- A writer writes data to the kdb Insights Enterprise database.
Other node types are also available.
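The reader/transformer/writer chain can be sketched in plain Python. Everything here, function names included, is a hypothetical illustration of the pattern, not the kdb Insights pipeline API:

```python
# Hypothetical sketch of a three-node pipeline: read raw records,
# apply a schema (coerce types), then "write" to an in-memory store.

def reader():
    # Stand-in for a source such as a CSV file or a Kafka topic.
    yield {"sym": "AAPL", "price": "150.0"}
    yield {"sym": "MSFT", "price": "300.0"}

def transformer(records, schema):
    # Coerce each field to the type named in the schema.
    casts = {"symbol": str, "float": float}
    for rec in records:
        yield {col: casts[typ](rec[col]) for col, typ in schema.items()}

def writer(records, store):
    # Stand-in for writing to the kdb Insights Enterprise database.
    store.extend(records)

store = []
writer(transformer(reader(), {"sym": "symbol", "price": "float"}), store)
print(store)  # typed records, ready for the database
```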
To import data, you must:
- Deploy the database
- Deploy at least one pipeline
Data ingestion begins when you deploy a pipeline. You can deploy several pipelines at once, or, if resources are limited, one at a time, tearing down each pipeline after the import.
Query data
You can query imported data using q, SQL, or Python. View results in the console window, a formatted table, or a simple chart.
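To show how the three query languages relate, here is the same filter expressed three ways against a hypothetical `trades` table; the Python version is a plain in-memory equivalent, not the Insights client API:

```python
# The same filter, three ways (table `trades` is hypothetical):
#   q:   select from trades where sym=`AAPL
#   SQL: SELECT * FROM trades WHERE sym = 'AAPL'
# In Python, an equivalent filter over an in-memory sample of rows:
trades = [
    {"sym": "AAPL", "price": 150.0},
    {"sym": "MSFT", "price": 300.0},
]
aapl = [row for row in trades if row["sym"] == "AAPL"]
print(aapl)  # [{'sym': 'AAPL', 'price': 150.0}]
```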
Visualize data
Imported data tables are available in a visualization tool. You can incorporate data into charts, maps, and more, and share these views with colleagues.