kxi.sp.read

Stream Processor readers.

Readers are a specialized type of operator that allows users to feed data from different external data sources into a streaming pipeline. Each reader has its own start, stop, setup, and teardown functions to handle different streaming lifecycle events. Readers are assumed to be asynchronous and push data into a pipeline using sp.push.

Some readers, such as from_callback and from_kafka, have implemented a stream-partitioning interface. When using these readers, the partitions are distributed over multiple Workers, orchestrated by a Controller.

FileChunking Objects

class FileChunking(IntEnum)

Enum for file chunking options.

Chunking a file splits the file into smaller batches, and streams the batches through the pipeline.

These enum values can be provided as True or False for enabled and disabled respectively.

disabled

Do not split the file into chunks.

enabled

Split the file into chunks.

auto

Automatically determine the size of the target file; if it is sufficiently large (more than a few megabytes), read it in chunks.

FileMode Objects

class FileMode(AutoNameEnum)

Enum for file mode options.

These enum values can be provided as enum member objects (e.g. FileMode.binary), or as strings matching the names of members (e.g. 'binary').

binary

Read the content of the file into a byte vector.

text

Read the content of the file into strings, and split on newlines.

from_callback

@Reader
def from_callback(callback: Union[str, kx.SymbolAtom],
                  *,
                  partitions: List = (),
                  key: Optional[Union[str, kx.SymbolAtom]] = None,
                  replay: Union[bool, kx.BooleanAtom] = False) -> Reader

Read from a callback function defined in the global q namespace.

Notes:

To enable beta features, set the environment variable KXI_SP_BETA_FEATURES to true.

Arguments:

  • callback - The name of the callback function that will be defined in the global q namespace.
  • partitions - A list of partition identifiers to distribute over available workers.
  • key - Name of the field which contains the key of the published event, or None for unkeyed data.
  • replay - (Beta feature) If True, message replay is enabled for the Callback reader. On recovery, messages that arrived after the last checkpoint will be pushed to the pipeline.
  • external - Allow users to execute the callback function via an IPC connection.

Returns:

A from_callback reader, which can be joined to other operators or pipelines.

Examples:

Calling a callback multiple times, with a window to batch the data:

>>> from kxi import sp
>>> import numpy as np
>>> import pykx as kx
>>> sp.init()
>>> sp.run(sp.read.from_callback('publish')
...     | sp.window.timer(np.timedelta64(10, 's'), count_trigger=12)
...     | sp.map(lambda x: x * x)
...     | sp.write.to_console(timestamp='none'))
>>> kx.q('publish', range(4))
pykx.Identity(pykx.q('::'))
>>> kx.q('publish', range(4))
pykx.Identity(pykx.q('::'))
>>> kx.q('publish', range(10))
0 1 4 9 0 1 4 9 0 1 4 9 16 25 36 49 64 81
pykx.Identity(pykx.q('::'))

Callback with message replay:

>>> from kxi import sp
>>> import pykx as kx
>>> import os
>>> os.makedirs('/tmp/journals', exist_ok=True)
>>> os.environ['KXI_SP_JOURNAL_DIR'] = '/tmp/journals'
>>> pipeline = (sp.read.from_callback('publish', replay=True)
...     | sp.write.to_variable('out'))
>>> sp.run(pipeline)
>>> kx.q('publish', 0)
>>> kx.q('publish', 1)
>>> print(kx.q('out'))
0 1
>>> sp.teardown()
>>> kx.q('out: ()')
>>> sp.run(pipeline)
>>> print(kx.q('out'))
0 1

from_expr

@Reader
def from_expr(expr: Union[str, bytes, Callable]) -> Reader

Evaluate expression or function into the pipeline.

Arguments:

  • expr - Either a q expression as a string, which will be evaluated to produce data for the pipeline, or a nullary function, which will be called to produce data for the pipeline.

Returns:

A from_expr reader, which can be joined to other operators or pipelines.
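
Examples:

Evaluating a q expression to seed the pipeline with a single batch of data (a minimal sketch; the expression and console output are illustrative):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_expr('til 10')
...     | sp.write.to_console(timestamp='none'))
0 1 2 3 4 5 6 7 8 9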

from_file

@Reader
def from_file(path: Union[os.PathLike, List[os.PathLike]],
              mode: FileMode = FileMode.binary,
              *,
              offset: int = 0,
              chunking: FileChunking = FileChunking.auto,
              chunk_size: Union[int, CharString] = '1MB') -> Reader

Read file contents into the pipeline.

Arguments:

  • path - A file path, or a list of file paths.
  • mode - How the content of the file should be interpreted by the reader.
  • offset - The number of bytes into the file at which to begin reading.
  • chunking - If/how the file should be split into chunks.
  • chunk_size - The size of chunks to read when chunking is enabled. Can be specified as an integer number of bytes, or as a string with the unit, e.g. '1MB'.

Returns:

A from_file reader, which can be joined to other operators or pipelines.
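
Examples:

Reading a text file in chunks (a minimal sketch; assumes a local file named data.txt exists):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_file('data.txt', mode='text',
...         chunking=True, chunk_size='64KB')
...     | sp.write.to_console(timestamp='none'))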

KafkaOffset Objects

class KafkaOffset(IntEnum)

Where to start consuming a Kafka partition.

beginning

Start consumption at the beginning of the partition.

end

Start consumption at the end of the partition.

from_kafka

@Reader
def from_kafka(topic: str,
               brokers: Union[str, List[str]] = 'localhost:9092',
               *,
               retries: int = 10,
               retry_wait: Timedelta = (1, 's'),
               poll_limit: int = 1000,
               offset: Optional[Dict[int, KafkaOffset]] = None,
               options: Optional[Dict[str, Any]] = None,
               registry: Optional[Union[CharString, str]] = '',
               as_list: Optional[bool] = False) -> Reader

Consume data from a Kafka topic.

Note: Maximum poll limit. The Kafka reader reads multiple messages in a single poll iteration. A global limit is imposed on the number of messages read in a single cycle, to avoid locking the process in a read loop that only serves Kafka messages. By default this limit is 1000 messages. Setting the poll_limit option changes this value. The limit is global, so if multiple readers set it, only one value is used.

Note: Reserved keys. group.id, metadata.broker.list, and consumer.id are reserved values and are maintained directly by the Kafka reader. The Kafka reader will error if these options are used.

Arguments:

  • topic - The name of a topic.
  • brokers - Brokers identified as a 'host:port' string, or a list of 'host:port' strings.
  • retries - Maximum number of retries that will be attempted for Kafka API calls.
  • retry_wait - How long to wait between retry attempts.
  • poll_limit - Maximum number of records to process in a single poll loop.
  • offset - Dictionary mapping from partition IDs to their offsets.
  • options - Dictionary of Kafka consumer options.
  • registry - Optional URL to a Kafka Schema Registry. When provided, Kafka Schema Registry mode is enabled, allowing for payload decoding.
  • as_list - If True, Kafka Schema Registry messages decoded from Protocol Buffer schemas omit field names and return their values as a list.

Returns:

A from_kafka reader, which can be joined to other operators or pipelines.
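
Examples:

Consuming a topic and writing each batch to the console (a minimal sketch; assumes a broker on localhost:9092 serving a topic named 'trades'):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_kafka('trades', 'localhost:9092')
...     | sp.write.to_console(timestamp='none'))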

from_postgres

@Reader
def from_postgres(query: CharString,
                  database: Optional[CharString] = None,
                  server: Optional[CharString] = None,
                  port: Optional[CharString] = None,
                  username: Optional[CharString] = None,
                  password: Optional[CharString] = None) -> Reader

Execute query on a PostgreSQL database.

Any parameter (except for the query) can be omitted, in which case it will be sourced from an environment variable following the format $KXI_SP_POSTGRES_<param> where <param> is the parameter name in uppercase.

Note: The pipeline is torn down after processing a Postgres query. After the query has completed processing, the Postgres reader signals a 'finish' command, which tears down the pipeline if there are no other pending requests.

Arguments:

  • query - Query to execute on the Postgres database.
  • database - Name of the database to connect to. Defaults to $KXI_SP_POSTGRES_DATABASE.
  • server - Address of the database to connect to. Defaults to $KXI_SP_POSTGRES_SERVER.
  • port - Port of the database. Defaults to $KXI_SP_POSTGRES_PORT.
  • username - Username to authenticate with. Defaults to $KXI_SP_POSTGRES_USERNAME.
  • password - Password to authenticate with. Defaults to $KXI_SP_POSTGRES_PASSWORD.

Returns:

A from_postgres reader, which can be joined to other operators or pipelines.
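
Examples:

Running a one-off query, after which the reader signals 'finish' and the pipeline is torn down (a minimal sketch; the query and connection details are hypothetical):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_postgres('SELECT * FROM stocks',
...         database='db', server='localhost', port='5432',
...         username='user', password='pass')
...     | sp.write.to_console(timestamp='none'))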

from_sqlserver

@Reader
def from_sqlserver(query: CharString,
                   database: Optional[CharString] = None,
                   server: Optional[CharString] = None,
                   port: Optional[Union[int, CharString]] = None,
                   username: Optional[CharString] = None,
                   password: Optional[CharString] = None) -> Reader

Execute query on a SQLServer database.

Any parameter (except for the query) can be omitted, in which case it will be sourced from an environment variable following the format $KXI_SP_SQLSERVER_<param> where <param> is the parameter name in uppercase.

Arguments:

  • query - Query to execute on the SQLServer database.
  • database - Name of the database to connect to. Defaults to $KXI_SP_SQLSERVER_DATABASE.
  • server - Address of the database to connect to. Defaults to $KXI_SP_SQLSERVER_SERVER.
  • port - Port of the database. Defaults to $KXI_SP_SQLSERVER_PORT.
  • username - Username to authenticate with. Defaults to $KXI_SP_SQLSERVER_USERNAME.
  • password - Password to authenticate with. Defaults to $KXI_SP_SQLSERVER_PASSWORD.

Returns:

A from_sqlserver reader, which can be joined to other operators or pipelines.
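
Examples:

Sourcing the connection details from environment variables instead of passing them explicitly (a minimal sketch; the values are hypothetical):

>>> import os
>>> from kxi import sp
>>> os.environ['KXI_SP_SQLSERVER_SERVER'] = 'localhost'
>>> os.environ['KXI_SP_SQLSERVER_PORT'] = '1433'
>>> os.environ['KXI_SP_SQLSERVER_USERNAME'] = 'sa'
>>> os.environ['KXI_SP_SQLSERVER_PASSWORD'] = 'password'
>>> sp.init()
>>> sp.run(sp.read.from_sqlserver('SELECT * FROM trades', database='db')
...     | sp.write.to_console(timestamp='none'))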

from_stream

@Reader
def from_stream(table: Optional[str] = None,
                stream: Optional[str] = None,
                *,
                prefix: CharString = '',
                assembly: Optional[str] = None,
                insights: bool = True,
                index: int = 0) -> Reader

Read data using a kdb Insights Stream.

Arguments:

  • table - Name of the table to filter the stream on. By default, no filtering is performed.
  • stream - Name of stream to subscribe to. By default, the stream specified by the $RT_SUB_TOPIC environment variable is used.
  • prefix - Prefix to add to the hostname for RT cluster. By default, the prefix given by the $RT_TOPIC_PREFIX environment variable is used.
  • assembly - The kdb Insights assembly to read from. By default, no assembly is used.
  • insights - Whether the stream being subscribed to uses Insights message formats.
  • index - The position in the stream to replay from.

Returns:

A from_stream reader, which can be joined to other operators or pipelines.
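
Examples:

Subscribing to the 'trades' table on the stream named by $RT_SUB_TOPIC (a minimal sketch; assumes a running kdb Insights stream):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_stream('trades')
...     | sp.write.to_console(timestamp='none'))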

from_amazon_s3

@Reader
def from_amazon_s3(path: Union[CharString, List[CharString]],
                   mode: FileMode = FileMode.binary,
                   *,
                   offset: int = 0,
                   chunking: FileChunking = FileChunking.auto,
                   chunk_size: Union[int, CharString] = '1MB',
                   tenant: CharString = '',
                   domain: CharString = '',
                   region: CharString = 'us-east-1',
                   credentials: CharString = '') -> Reader

Read a file from Amazon S3.

Arguments:

  • path - The path of an object or multiple objects to read from S3.
  • mode - How the content of the file should be interpreted by the reader.
  • offset - The number of bytes into the file at which to begin reading.
  • chunking - A FileChunking enum value, or string equivalent.
  • chunk_size - The size of chunks to read when chunking is enabled. Can be specified as an integer number of bytes, or as a string with the unit, e.g. '1MB'.
  • tenant - The authorization tenant.
  • domain - A custom Amazon S3 domain.
  • region - The AWS region to authenticate against.
  • credentials - The secret name for the Amazon S3 credentials. Refer to the authentication section of the .qsp.read.fromAmazonS3 documentation for more information.

Returns:

A from_amazon_s3 reader, which can be joined to other operators or pipelines.
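
Examples:

Reading an object as text (a minimal sketch; the bucket, object, and credentials secret names are hypothetical):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_amazon_s3('s3://mybucket/trades.csv', mode='text',
...         region='us-east-1', credentials='mysecret')
...     | sp.write.to_console(timestamp='none'))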

from_azure_storage

@Reader
def from_azure_storage(path: Union[CharString, List[CharString]],
                       mode: FileMode = FileMode.binary,
                       *,
                       offset: int = 0,
                       chunking: FileChunking = FileChunking.auto,
                       chunk_size: Union[int, CharString] = '1MB',
                       account: CharString = '',
                       tenant: CharString = '',
                       domain: CharString = '',
                       credentials: CharString = '') -> Reader

Read a file from Azure Blob Storage.

Arguments:

  • path - The path of an object or multiple objects to read from Microsoft Azure Storage.
  • mode - How the content of the file should be interpreted by the reader.
  • offset - The number of bytes into the file at which to begin reading.
  • chunking - A FileChunking enum value, or string equivalent.
  • chunk_size - The size of chunks to read when chunking is enabled. Can be specified as an integer number of bytes, or as a string with the unit, e.g. '1MB'.
  • account - The Azure account to read from.
  • tenant - The authorization tenant.
  • domain - A custom Azure domain.
  • credentials - The secret name for the Azure credentials. Refer to the authentication section of the .qsp.read.fromAzureStorage documentation for more information.

Returns:

A from_azure_storage reader, which can be joined to other operators or pipelines.
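
Examples:

Reading a blob as text (a minimal sketch; the ms:// path, account, and object names are hypothetical):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_azure_storage('ms://mycontainer/trades.csv', mode='text',
...         account='myaccount')
...     | sp.write.to_console(timestamp='none'))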

from_google_storage

@Reader
def from_google_storage(path: Union[CharString, List[CharString]],
                        mode: FileMode = FileMode.binary,
                        *,
                        offset: int = 0,
                        chunking: FileChunking = FileChunking.auto,
                        chunk_size: Union[int, CharString] = '1MB',
                        tenant: Optional[CharString] = None) -> Reader

Read a file hosted on Google Cloud Storage.

Arguments:

  • path - The path of an object or multiple objects to read from Google Cloud Storage.
  • mode - A FileMode enum value, or the string equivalent.
  • offset - The number of bytes into the file at which to begin reading.
  • chunking - A FileChunking enum value, or string equivalent.
  • chunk_size - The size of chunks to read when chunking is enabled. Can be specified as an integer number of bytes, or as a string with the unit, e.g. '1MB'.
  • tenant - The authentication tenant.

Returns:

A from_google_storage reader, which can be joined to other operators or pipelines.
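
Examples:

Reading an object as text (a minimal sketch; the gs:// bucket and object names are hypothetical):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_google_storage('gs://mybucket/trades.csv', mode='text')
...     | sp.write.to_console(timestamp='none'))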

from_http

@Reader
def from_http(url: CharString,
              method: CharString = 'GET',
              *,
              body: CharString = '',
              header: dict = None,
              on_response: OperatorFunction = None,
              follow_redirects: bool = True,
              max_redirects: int = 5,
              max_retry_attempts: int = 10,
              timeout: int = None,
              tenant: CharString = '',
              insecure: bool = False,
              binary: bool = False,
              sync: bool = False,
              reject_errors: bool = True) -> Reader

Requests data from an HTTP endpoint.

Arguments:

  • url - The URL to send a request to.
  • method - The HTTP method for the HTTP request (e.g. GET, POST).
  • body - The payload of the HTTP request.
  • header - A map of header fields to their corresponding values.
  • on_response - A callback applied after each response, allowing the response to be preprocessed or another request to be triggered. Returning None processes the result of the original request immediately. Returning a string issues another request with the returned value as the URL. Returning a dictionary allows any of the operator parameters to be reset before a new HTTP request is issued; a special 'response' key in the returned dictionary replaces the payload of the response, and if 'response' is set to None, no data is pushed into the pipeline.
  • follow_redirects - If set, any redirect return will automatically be followed up to the maximum number of redirects.
  • max_redirects - The maximum number of redirects to follow before reporting an error.
  • max_retry_attempts - The number of times to retry a connection after a request timeout.
  • timeout - The duration in milliseconds to wait for a request to be completed before reporting an error.
  • tenant - The request tenant to use for providing request authentication details.
  • insecure - Indicates if unverified server SSL/TLS certificates should be trusted.
  • binary - Indicates that the resulting payload should be returned as binary data, otherwise text is assumed.
  • sync - Indicates if this request should be made synchronously or asynchronously. Setting the request to be synchronous will block the process until the request is completed.
  • reject_errors - If set, non-successful response codes generate an error and stop the pipeline.

Returns:

A from_http reader, which can be joined to other operators or pipelines.
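
Examples:

Issuing a single GET request and pushing the response body into the pipeline (a minimal sketch; the URL is hypothetical):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_http('https://example.com/data.csv')
...     | sp.write.to_console(timestamp='none'))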

from_mqtt

@Reader
def from_mqtt(topic,
              broker,
              *,
              username: str = "",
              password: str = "") -> Reader

Read from an MQTT broker.

Arguments:

  • topic - The name of the topic to subscribe to.
  • broker - The address of the MQTT broker.
  • username - Username for the MQTT broker.
  • password - Password for the MQTT broker.

Returns:

A from_mqtt reader, which can be joined to other operators or pipelines.
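
Examples:

Subscribing to a topic on a local broker (a minimal sketch; assumes a broker reachable at tcp://localhost:1883 with a topic named 'readings'):

>>> from kxi import sp
>>> sp.init()
>>> sp.run(sp.read.from_mqtt('readings', 'tcp://localhost:1883')
...     | sp.write.to_console(timestamp='none'))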