KDB-X Database Service API (0.1.0)

OpenAPI spec for the KDB-X Database Service.

Tables

List table names

Returns the names of all tables in the database.

Responses

Request samples

session.list_tables()

Response samples

Content type
application/json
[
  "forex",
  "sensor"
]

Create table

Creates a table from the provided schema definition.

path Parameters
table
required
string

Table name

Request Body schema: application/json
required
One of
type
required
string
Enum: "partitioned" "splayed" "splayed_mem" "basic"
prtnCol
required
string

Partition column to use for partitioned tables. The column chosen is your primary time filter: it is used by a query's startTS/endTS and determines the data layout on disk.

  • Required when table type="partitioned".
  • prtnCol must reference a timestamp column.
  • Always considered UTC.
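As a sketch of how these rules fit together, the request body below creates a partitioned table keyed on a timestamp partition column. The column entries shown (and the `name`/`type` keys of TableColumnSpec) are illustrative assumptions, not confirmed by this page:

```python
import json

# Illustrative create-table body for a partitioned table.
# "realTime" is a timestamp column used as prtnCol, so a query's
# startTS/endTS filters (always interpreted as UTC) operate on it.
create_body = {
    "type": "partitioned",
    "prtnCol": "realTime",
    "columns": [  # TableColumnSpec shape assumed for illustration
        {"name": "realTime", "type": "timestamp"},
        {"name": "sym", "type": "symbol"},
        {"name": "bid", "type": "float"},
        {"name": "ask", "type": "float"},
    ],
    "description": "Quote data",
}

print(json.dumps(create_body, indent=2))
```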
columns
required
Array of objects (TableColumnSpec)

Column definitions for the table.
primaryKeys
Array of strings

Names of columns to use as primary keys for this table. Primary keys identify unique rows. When provided, the table is keyed by these columns, and rows with matching keys update existing records.

blockSize
integer
Default: 10000

Block size for partitioned storage writes.

sortColsMem
Array of strings

A list of columns to sort by in the memory tier. Setting this value sorts data as it arrives in memory.

sortColsOrd
Array of strings

A list of columns to sort by in the ordinal partitioned tier. Setting this value sorts data as it migrates into ordinal partitions.

sortColsDisk
Array of strings

A list of columns to sort by in the normal disk tier. Setting this value sorts data as it is migrated into disk partitions.

description
string

Responses

Request samples

Content type
application/json
{
  "type": "partitioned",
  "prtnCol": "realTime",
  "columns": [ ],
  "blockSize": 10000,
  "sortColsOrd": [ ],
  "sortColsDisk": [ ],
  "description": "Quote data"
}

Response samples

Content type
application/json
{
  "name": "d9726870-1c5c-0d13-4b90-be5fbcb2ad2f",
  "pipeline": "",
  "database": "db",
  "updtype": "schemaChange",
  "status": "completed",
  "details": [ ],
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-30T13:37:44.365774085"
}

Describe table

Returns the table definition (schema/columns) and associated metadata for a given table.

path Parameters
table
required
string

Table name

Responses

Request samples

session.describe_table(table="quote")

Response samples

Content type
application/json
{
  "description": "Quote data",
  "type": "partitioned",
  "blockSize": 10000,
  "prtnCol": "realTime",
  "sortColsOrd": [ ],
  "sortColsDisk": [ ],
  "columns": [ ],
  "name": "quote"
}

Drop table

Drops a table and its associated data.

path Parameters
table
required
string

Table name

Responses

Request samples

session.drop_table(table="quote")

Response samples

Content type
application/json
{
  "name": "c4ceac2c-664a-ce9c-aa5c-b1498f850d22",
  "pipeline": "",
  "database": "db",
  "updtype": "schemaChange",
  "status": "completed",
  "details": [ ],
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-31T05:36:22.900404270"
}

Imports

Get batch ingest job status

Returns the current status of a batch ingest job.

path Parameters
jobId
required
string

Batch ingest job identifier.

Responses

Request samples

session.get_import(job_id="3fa85f64-5717-4562-b3fc-2c963f66afa6")

Response samples

Content type
application/json
{
  "name": "26adfb1d-29bc-fab9-5e32-73f80ea51300",
  "pipeline": "",
  "database": "db",
  "updtype": "ingest",
  "status": "completed",
  "details": { },
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-22T13:25:44.312559923"
}

Cancel batch ingest job

Attempts to cancel a running batch ingest job.

path Parameters
jobId
required
string

Batch ingest job identifier.

Responses

Request samples

session.cancel_import(job_id="3fa85f64-5717-4562-b3fc-2c963f66afa6")

Response samples

Content type
application/json
{
  "name": "26adfb1d-29bc-fab9-5e32-73f80ea51300",
  "pipeline": "",
  "database": "db",
  "updtype": "ingest",
  "status": "completed",
  "details": { },
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-22T13:25:44.312559923"
}

Batch ingest from file(s) into a table

Submit a batch ingest job for a table using files on the server.

For CSV ingest, processing order is types (initial parse), then postparse (transform/derive), then include (final output column selection).

Returns a pending job response; ingestion continues after the response. Final success/failure is available via GET /api/v0/imports/{jobId}.

Request Body schema: application/json
required
table
required
string

Target table name.

path
required
string

Path to input data on the server filesystem.

Accepted forms:

  • Single file
  • Directory
  • Glob pattern

Glob semantics:

  • * matches within a single path segment (does not cross /).
  • ** matches across subdirectories, e.g. quotes/**/quote*.csv.

The service ingests all matched files.

Example values:

  • quote.csv
  • quotes/
  • quote_2026-01-*.csv
  • quotes/**/quote*.csv
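Following the glob semantics above, a batch-ingest body that picks up every matching quote file under a directory tree might look like this (the table and pattern are illustrative):

```python
import json

# Sketch of a batch-ingest request body using a glob path.
# "**" descends into subdirectories; a single "*" stays within
# one path segment and never crosses "/".
ingest_body = {
    "table": "quote",
    "path": "quotes/**/quote*.csv",
    "format": "csv",
    "mode": "merge",
}

print(json.dumps(ingest_body))
```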
format
string
Enum: "qbinary" "csv" "parquet"

Input data format. If omitted, the service may auto-detect it from the file(s).

mode
string
Default: "merge"
Enum: "overwrite" "merge"

Optional ingest mode. If omitted, the service uses merge behavior. Mode behavior is applied per partition.

delimiter
string
Default: ","

Delimiter character for delimited text. Ignored for parquet and qbinary.

decimal
string
Default: "."

Decimal separator character for delimited text. Ignored for parquet and qbinary.

createTable
boolean
Default: false

If true and the table does not exist, create it prior to ingest.

header
Array of strings

If the file has headers, this is optional.

If provided, it overrides header names by position.

If the file has no headers and either postparse is used or the file has unused columns, this is required (names by position).

If createTable=true and the table doesn’t exist, these headers are used as the new table’s column names.
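Putting those rules together, a headerless CSV ingest can name columns by position; a sketch (column names are illustrative):

```python
# Sketch: ingesting a CSV with no header row.
# headerRowIndex=-1 signals there is no header, so "header"
# supplies the column names by position.
headerless_body = {
    "table": "quote",
    "path": "quote.csv",
    "format": "csv",
    "headerRowIndex": -1,
    "header": ["realTime", "sym", "bid", "ask"],
    "createTable": True,  # names above become the new table's columns
}
```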

include
Array of strings

Optional list of column names to ingest (and their order).

Only these columns are kept.

Applied after headers are resolved and postparse transforms run.

types
string

Optional type string in CSV parse format (uppercase), e.g. F=float, J=long, P=timestamp, S=symbol, *=string. See https://code.kx.com/q/ref/tok/ for full details.

postparse
object

A dictionary of data transforms, made up of column names as keys and q expression strings as values.

Existing columns can be modified, and new ones created.

Columns from the file are available for use in the expression via a data dictionary variable, and the file name is available via a path string variable.

For example: {'timestamp': 'data[`date]+data[`time]'}.

Runs after loading and parsing (including typecasting) of data in the file is done.

Note that columns used as inputs in a transform are dropped by default after use, and must be re-added if they are required.

For example, to preserve the date column in the above, the postparse dictionary should be: {'timestamp': 'data[`date]+data[`time]', 'date': 'data[`date]'}
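Combining types with the transform dictionary, an ingest body that parses date and time columns and derives a timestamp might look like the sketch below. The type string and column names are illustrative; the transform values are q expressions evaluated server-side:

```python
# Sketch: CSV ingest with typecasting and parse-time transforms.
csv_body = {
    "table": "quote",
    "path": "quote.csv",
    "format": "csv",
    "types": "DTSFF",  # date, time, symbol, float, float (illustrative)
    "postparse": {
        # derive a timestamp from the parsed date and time columns
        "timestamp": "data[`date]+data[`time]",
        # inputs are dropped after use, so re-add date explicitly
        "date": "data[`date]",
    },
    # final output column selection, applied last
    "include": ["timestamp", "date", "sym", "bid", "ask"],
}
```

The processing order matches the endpoint description: types first, then the transforms, then include.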

headerRowIndex
integer
Default: 0

Header row index (0-based). Use -1 when there is no header row.

Responses

Request samples

Content type
application/json
Example
{
  "table": "quote",
  "path": "quote.csv"
}

Response samples

Content type
application/json
{
  "name": "26adfb1d-29bc-fab9-5e32-73f80ea51300",
  "pipeline": "",
  "database": "db",
  "updtype": "ingest",
  "status": "pending",
  "details": { },
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-22T13:25:44.177025580"
}

Batch ingest from a kdb database

Submit a batch ingest job for a kdb-native data location readable by the service. The provided path may reference a partitioned HDB directory and may also support other kdb-native layouts such as splayed tables or basic/q-binary tables. The service returns a job ID and continues ingestion asynchronously after the response.

Request Body schema: application/json
required
path
required
string

Path to a kdb-native data location readable by the service, such as a partitioned HDB directory, a parent directory containing one or more splayed tables, or a parent directory containing basic/q-binary tables. Path is resolved relative to the configured import root.

mode
string
Default: "merge"
Enum: "overwrite" "merge"

Optional ingest mode. If omitted, the service uses merge behavior. Mode behavior is applied per partition.

createTable
boolean
Default: false

If true and the target table does not exist, create it before ingest.

Responses

Request samples

Content type
application/json
{
  "path": "/path/to/hdb",
  "mode": "merge",
  "createTable": false
}

Response samples

Content type
application/json
{
  "name": "26adfb1d-29bc-fab9-5e32-73f80ea51300",
  "pipeline": "",
  "database": "db",
  "updtype": "ingest",
  "status": "pending",
  "details": { },
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-22T13:25:44.177025580"
}

Ingest data from the JSON content of the request

Insert data provided inline in the request body. Supports either a row-matrix (array of rows) or a list of JSON dictionaries (one object per row). Returns a pending job response; final success/failure is available via GET /api/v0/imports/{jobId}.

Request Body schema: application/json
required
table
required
string

Target table name.

data
required
Array of any or Array of objects

Rows to ingest: either an array of row arrays (row-matrix) or an array of JSON objects (one object per row).
createTable
boolean
Default: false
mode
string
Default: "merge"
Enum: "overwrite" "merge"

Optional ingest mode. If omitted, the service uses merge behavior. Mode behavior is applied per partition.

columnNames
Array of strings
types
string

Optional type string (case-sensitive), e.g. f=float, j=long, p=timestamp, s=symbol, C=string, *=generic list. See https://code.kx.com/q/ref/cast/ for full details.
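The two accepted data shapes for this endpoint can be sketched as follows (table, column names, and values are illustrative):

```python
# Row-matrix form: each inner list is one row, ordered by columnNames.
matrix_body = {
    "table": "quote",
    "mode": "merge",
    "columnNames": ["sym", "bid", "ask"],
    "types": "sff",  # symbol, float, float (illustrative)
    "data": [
        ["EURUSD", 1.08, 1.09],
        ["GBPUSD", 1.26, 1.27],
    ],
}

# List-of-dictionaries form: one JSON object per row; keys name columns.
dict_body = {
    "table": "quote",
    "data": [
        {"sym": "EURUSD", "bid": 1.08, "ask": 1.09},
        {"sym": "GBPUSD", "bid": 1.26, "ask": 1.27},
    ],
}
```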

Responses

Request samples

Content type
application/json
Example
{
  "table": "quote",
  "mode": "merge",
  "data": [ ]
}

Response samples

Content type
application/json
{
  "name": "26adfb1d-29bc-fab9-5e32-73f80ea51300",
  "pipeline": "",
  "database": "db",
  "updtype": "ingest",
  "status": "pending",
  "details": { },
  "tbls": [ ],
  "dates": [ ],
  "progress": { },
  "error": "",
  "warnings": [ ],
  "updated": "2026-03-22T13:25:44.177025580"
}

Query

Structured query (parameterized)

Structured query using explicit parameters (table, time range, filters, aggregations) rather than a free-form query string.

Request Body schema: application/json
required
table
required
string

Table name

startTS
string

Inclusive start of timestamp range

endTS
string

Exclusive end of timestamp range

inputTZ
string

Timezones should be supplied in tz database format, for example America/New_York or Europe/London. inputTZ is used to interpret startTS and endTS.

outputTZ
string

Timezones should be supplied in tz database format, for example America/New_York or Europe/London. outputTZ is used for timestamp conversion in query output.

outputTZCols
Array of strings

Columns that should be converted to 'outputTZ'

filter
Array of strings or Array of arrays

The filter parameter takes a list of filter triples in the following form:

  • [operator, column, values]

It is also possible to nest filters with the logical operators and, not, and or, for example:

  • ["and", ["=", "size", 100], ["<", "price", 500]]

Supported filter operators are =, <>, <, >, <=, >=, in, like, within.
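For example, a structured-query body with a nested filter combining two triples (table, columns, and values are illustrative):

```python
# Sketch: each filter triple is [operator, column, values];
# "and"/"or"/"not" nest triples into compound filters.
filters = ["and", ["=", "sym", "EURUSD"], [">", "bid", 100]]

query_body = {
    "table": "quote",
    "startTS": "2024.11.02T00:00:00.000",
    "endTS": "2024.11.03T00:00:00.000",
    "filter": filters,
}
```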

groupBy
Array of strings

A list of column names to group by

agg
Array of strings

agg controls what the query returns. Use it to select columns, rename columns, or compute aggregated values.

  1. A simple list of column names specifies which columns to include in the result:
["sym", "bid", "ask"]
  2. A list of pairs specifies which columns to include in the result, and renames them:
[["Symbol", "sym"], ["BidPrice", "bid"], ["AskPrice", "ask"]]
  3. A list of triples runs the given aggregation function on the specified column and names the result:
[
  ["o", "first", "bid"],
  ["h", "max", "bid"],
  ["l", "min", "bid"],
  ["c", "last", "bid"]
]

Available aggregation functions are:

all, any, avg, cor, count, cov, dev, distinct, first, last, max, min, prd, scov, sdev, sum, svar, var, wavg, wsum.

Note: distinct always returns a list.
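Putting the triple form together with groupBy, an OHLC-style query body might look like this sketch (table and column names are illustrative):

```python
# Sketch: each agg triple is [outputName, aggFunction, inputColumn].
# Grouping by sym yields one OHLC row per symbol.
ohlc_body = {
    "table": "quote",
    "groupBy": ["sym"],
    "agg": [
        ["o", "first", "bid"],
        ["h", "max", "bid"],
        ["l", "min", "bid"],
        ["c", "last", "bid"],
    ],
}
```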

fill
string

fill can be used to fill in null values:

  • forward will replace nulls with the previous non-null value.
  • zero will replace nulls with the 0 equivalent value of the column type.
temporality
string

Temporality controls how the query interprets the time range between startTS and endTS.

Supported values are:

  • snapshot: Returns one continuous block of data from startTS to endTS. This is the default if temporality is not specified.

  • slice: Returns data between the time of startTS and the time of endTS for each day in the date range. For example, if startTS is 2021.01.01D10:00:00 and endTS is 2021.01.03D13:00:00, data is returned from 10:00:00 to 13:00:00 on each of these dates:

    • 2021.01.01
    • 2021.01.02
    • 2021.01.03

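The slice example above would be expressed as a request body along these lines (timestamp formatting follows the example in the text):

```python
# Sketch: slice temporality returns 10:00-13:00 on each date in range.
slice_body = {
    "table": "quote",
    "startTS": "2021.01.01D10:00:00",
    "endTS": "2021.01.03D13:00:00",
    "temporality": "slice",
}
```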
sortCols
Array of strings

Columns to be sorted, post query execution

limit
Array of integers

Limit of rows to return

Responses

Request samples

Content type
application/json
{
  "table": "quote",
  "startTS": "2024.11.02T00:00:00.000",
  "endTS": "2024.11.03T00:00:00.000",
  "inputTZ": "UTC",
  "outputTZ": "UTC",
  "outputTZCols": [ ],
  "filter": [ ],
  "groupBy": [ ],
  "agg": [ ],
  "fill": "zero",
  "temporality": "slice",
  "sortCols": [ ],
  "limit": [ ]
}

Response samples

Content type
{
  "header": { },
  "payload": [ ]
}

SQL query (free-form)

Executes a SQL query string (SELECT-only) against the database.

Request Body schema: application/json
required
query
required
string

SQL expression. Must be a SELECT statement; INSERT, DROP, and other statements are not allowed.

Responses

Request samples

Content type
application/json
{
  "query": "select sym, bid, ask from quote where bid > 100"
}

Response samples

Content type
{
  "header": { },
  "payload": [ ]
}

q query (free-form)

Executes a q query string against the database.

Request Body schema: application/json
required
query
required
string

Freeform q query

agg
string

The gateway concatenates query results from each tier (RDB, IDB, HDB) by performing a raze. agg specifies a q lambda to be used instead. It can be specified as either:

  • A unary function, for example "distinct".

  • A composition of named unary functions, for example "'[distinct;raze]".

  • A unary lambda, for example "{distinct raze x}".
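As a sketch, a q-query body that replaces the default raze with a unary lambda applied to the per-tier results (the query itself is illustrative):

```python
# Sketch: agg receives the list of results from each tier (RDB, IDB, HDB)
# in place of the default raze concatenation.
q_body = {
    "query": "select from quote where sym=`EURUSD",
    "agg": "{distinct raze x}",
}
```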

Responses

Request samples

Content type
application/json
{
  "query": "select from quote where sym=`DNDF.CAN, price>300.5",
  "agg": "{select minPx:min price by sym from x}"
}

Response samples

Content type
{
  "header": { },
  "payload": [ ]
}