Get Data - Weather (Object Storage)
To use data in kdb Insights Enterprise, it must be imported into an active database. Object storage is cloud-based file storage; a
weather dataset is hosted on each of the major cloud providers for use with kdb Insights Enterprise.
No kdb+ knowledge required
This example assumes no prior experience with q/kdb+; replace the URL provided with any other object storage URL to achieve similar results.
1. Create and deploy a database
To use a pipeline, your database must be deployed and active.
2. Import Data
Open the import wizard by selecting 2. Import from the Overview page. Next, you are prompted to select a reader node. The import process creates a pipeline: a connected series of nodes that reads data from a source, transforms it into a kdb+ compatible format, and writes it to a kdb Insights Enterprise database.
3. Select a Reader
A reader stores details of data to import, including any required authentication. Select one of the major cloud providers: Amazon, Google and Microsoft.
Select from one of the Cloud providers.
Complete the reader properties for the selected cloud provider. Additional Paths can be added by clicking the add icon. Properties marked with an * are required.
Click Next when done.
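The reader's path property takes an object storage URL. As a loose illustration of how such a URL breaks down (this is not the kdb Insights API, and the URL below is hypothetical), the bucket and object key can be inspected with Python's standard library:

```python
from urllib.parse import urlparse

# Hypothetical object storage URL for a weather CSV file;
# substitute the URL for your chosen cloud provider.
url = "s3://examplebucket/weather/weather.csv"

parts = urlparse(url)
print(parts.scheme)  # storage protocol, e.g. s3
print(parts.netloc)  # bucket name
print(parts.path)    # object key within the bucket
```

The same decomposition applies to the Google and Microsoft URL schemes; only the protocol prefix differs.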
4. Select a Decoder
The decoder node defines the type of data being imported. The
weather data is a CSV file, so select the csv decoder and complete its settings.
Select the csv decoder for the
weather data set.
Click Next when done.
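Conceptually, the csv decoder turns raw text into rows of named string fields. A minimal stdlib sketch of that idea (illustrative only; the column names here are hypothetical, not the actual weather dataset's):

```python
import csv
import io

# A tiny stand-in for the downloaded weather CSV (hypothetical columns).
raw = io.StringIO(
    "timestamp,airtemp\n"
    "2022-01-01T00:00:00,4.5\n"
    "2022-01-01T01:00:00,4.1\n"
)

rows = list(csv.DictReader(raw))
# At this stage every field is still a string; converting fields to
# proper types is the job of the schema step that follows.
print(rows[0])  # {'timestamp': '2022-01-01T00:00:00', 'airtemp': '4.5'}
```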
5. Define the Schema
The schema converts data to types compatible with a kdb+ database. Every imported data table requires a schema, and every data table must have a
timestamp key to be compatible with kdb+'s time-series columnar database.
insights-demo has a predefined schema for weather.

Apply a Schema

Select the insights-demo schema from the dropdown, then the weather table from the schema. Ensure Parse Strings is set to auto for all fields.
Manual entry weather schema
If not adding a schema with a table from a database, add the following columns manually instead. Column descriptions are optional and can be left blank.

Parse Strings determines whether input string data is parsed to other datatypes. Generally, Parse Strings is enabled for all
string fields unless your input is IPC or RT; retain the
Auto default if unsure.
Click Next when done.
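The schema's role, parsing string fields into typed columns with a timestamp, can be sketched in plain Python. This is an illustration of the idea, not the product's implementation, and the field names are hypothetical:

```python
from datetime import datetime

# One decoded CSV row: every value is still a string.
row = {"timestamp": "2022-01-01T00:00:00", "airtemp": "4.5"}

# A toy "schema" mapping each column to a parser, mirroring how
# Parse Strings converts string input to kdb+-compatible types.
schema = {"timestamp": datetime.fromisoformat, "airtemp": float}

typed = {col: parse(row[col]) for col, parse in schema.items()}
print(typed["airtemp"] + 0.5)   # now numeric: 5.0
print(typed["timestamp"].year)  # now a real timestamp: 2022
```

The typed timestamp is what makes the table compatible with a time-series database: rows can be keyed and filtered by time rather than by string comparison.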
6. Configure the Writer
Write transformed data to the kdb Insights Enterprise database.
Optional settings include Write Direct to HDB and Set Timeout Value.
Continue to review the pipeline in the pipeline viewer.
A pipeline created by the import wizard reads data from its source, transforms it into a kdb+ compatible format, and writes it to a kdb Insights Enterprise database.
The Writer - KX Insights Database node is essential for exploring data in a pipeline. The node defines the database to write to, which must be active to receive data. Streaming data uses its own writer node,
Writer - KX Insights Stream.
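The reader → decoder → schema → writer chain built by the wizard is, in essence, a sequence of stages applied to the data in order. A purely illustrative stdlib sketch of that composition (the stage functions and column names are assumptions for the example, not product APIs):

```python
import csv
import io
from datetime import datetime

def read(source: io.StringIO) -> str:
    # Reader: fetch the raw text from the source.
    return source.getvalue()

def decode(text: str) -> list:
    # Decoder: parse CSV text into rows of string fields.
    return list(csv.DictReader(io.StringIO(text)))

def apply_schema(rows: list) -> list:
    # Schema: convert string fields to typed values.
    return [{"timestamp": datetime.fromisoformat(r["timestamp"]),
             "airtemp": float(r["airtemp"])} for r in rows]

# Writer target: a dict of tables standing in for the database.
database = {"weather": []}

def write(rows: list) -> None:
    # Writer: append typed rows to the target table.
    database["weather"].extend(rows)

source = io.StringIO("timestamp,airtemp\n2022-01-01T00:00:00,4.5\n")
write(apply_schema(decode(read(source))))
print(len(database["weather"]))  # 1
```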
7. Review Pipeline
The weather pipeline following the import steps.
8. Save the Pipeline
Save and name the pipeline. The name must be unique to the pipeline; for example, weather-1.

Save the pipeline as weather-1.

Once saved, weather-1 is listed under Pipelines in the left-hand menu.
The list of available pipelines for deployment in the left-hand menu.
A test deploy previews your pipeline prior to deployment by returning a picture of the data at each step along the pipeline. It does not write to the database.
Click Full Test, then click on a node in the pipeline to view the data output from that step in the lower panel.
Test deploy results display in lower panel of pipeline template view.
9. Deploy the Pipeline
Deploy the pipeline to activate it and write its data to the database.
The pipeline runs through the deployment process and returns a status of
Finished under Running Pipelines in the Overview page when successfully deployed.
A successfully deployed pipeline shows as
Finished under Running Pipelines.
Database Deployment: If not already active, ensure
insights-demo, or the database created with the
weather schema table, is deployed from Databases in the left-hand menu so it can receive data from the pipeline.
Tear down an active pipeline when it is no longer required; tearing down a pipeline returns its resources. Click the
X in Running Pipelines on the Overview page to tear down a pipeline.
Click X to tear down a pipeline.
Checking Clear Pipeline State removes all data written to the database; leave it unchecked to continue working with the data in the current session.
Test deploys are automatically torn down on completion.
Teardown a pipeline to free up resources.
Reported errors can be checked against the logs of the deployment process. Click View diagnostics in Running Pipelines of Overview to review the status of a deployment.
10. Query the Data
Ensure the insights-demo database and the weather-1 pipeline are active or finished before querying.
Once the pipeline has successfully written to the database, the data can be queried.
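A query against a time-series table is essentially a filter on the timestamp column plus any value conditions. As a rough stdlib illustration (not the kdb Insights query API; the table and column names are hypothetical):

```python
from datetime import datetime

# A stand-in for rows written to the weather table by the pipeline.
weather = [
    {"timestamp": datetime(2022, 1, 1, 0), "airtemp": 4.5},
    {"timestamp": datetime(2022, 1, 1, 1), "airtemp": 4.1},
    {"timestamp": datetime(2022, 1, 2, 0), "airtemp": 3.9},
]

# Roughly: "select from weather where timestamp within the first day".
start, end = datetime(2022, 1, 1), datetime(2022, 1, 2)
result = [r for r in weather if start <= r["timestamp"] < end]
print(len(result))  # 2
```

Because the schema step produced real timestamps rather than strings, the range comparison above is a genuine time filter, which is the core of most time-series queries.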
11. Visualize the Data
Build a visualization from the data.