kxi.sp.encode

Stream Processor encoders.

PayloadType Objects

class PayloadType(AutoNameEnum)

Enum to specify the payload type for a protobuf encoding.

table

Encode the payload as a table.

dict

Encode the payload as a dictionary.

array

Encode the payload as a single array.

arrays

Encode the payload as multiple arrays.

ArrowPayloadType Objects

class ArrowPayloadType(AutoNameEnum)

Enum to specify the payload type for Arrow encoding.

auto

Automatically determine the payload type from the data.

table

Encode the payload as a table.

arrays

Encode the payload as multiple arrays.

CSVHeader Objects

class CSVHeader(AutoNameEnum)

Enum for CSV header options.

These enum values can be provided as enum member objects (e.g. CSVHeader.always), or as strings matching the names of the members (e.g. 'always').

first

Only first batch starts with a header row.

none

Encoded data never starts with a header row.

always

Encoded data always starts with a header row.

arrow

@Encoder
def arrow(*,
          schema: Optional[Union[kx.Table, Dict[str,
                                                Union[str, int,
                                                      kx.CharAtom]]]] = None,
          match_cols: Union[bool, kx.BooleanAtom] = True,
          payload_type: ArrowPayloadType = ArrowPayloadType.auto) -> Encoder

Encodes data as Arrow.

Arguments:

  • schema - A table (or a dictionary mapping column names to type characters) from which the schema is extracted and used when serializing each batch of data. If not provided, the schema is inferred at runtime for each batch.
  • match_cols - When set to True and a schema is provided, incoming tables have their columns rearranged to match the column ordering of the schema. No reordering is done otherwise.
  • payload_type - An ArrowPayloadType enum value (or the matching string) indicating how the message payload will be encoded: one of auto, table, or arrays.

Returns:

An arrow encoder, which can be joined to other operators or pipelines.
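
For example, a minimal sketch: it assumes the usual kxi.sp pipeline pattern (operators joined with |, run with sp.run) and an illustrative schema whose column names and types are not part of this API.

import pykx as kx
from kxi import sp

# Illustrative schema table; the columns are assumptions for the example
schema = kx.q('([] sym: `$(); price: "f"$())')

sp.run(sp.read.from_callback('publish')
       | sp.encode.arrow(schema=schema, match_cols=True)
       | sp.write.to_console())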

csv

@Encoder
def csv(delimiter: Union[str, bytes, kx.CharAtom] = kx.q('","'),
        *,
        header: CSVHeader = CSVHeader.first) -> Encoder

Encodes data as CSV.

Arguments:

  • delimiter - The field separator for the records in the encoded data; defaults to a comma.
  • header - A CSVHeader option controlling when the encoded data starts with a header row.

Returns:

A csv encoder, which can be joined to other operators or pipelines.
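
For example, a sketch using a pipe delimiter and a header row on every batch (same assumed pipeline pattern as the Arrow example above):

from kxi import sp

sp.run(sp.read.from_callback('publish')
       | sp.encode.csv('|', header=sp.encode.CSVHeader.always)
       | sp.write.to_console())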

json

@Encoder
def json(*, split: bool = False) -> Encoder

Encodes data as JSON.

Arguments:

  • split - Whether a batch should be encoded as a single JSON object or split into a separate JSON object for each value in the batch. When the input is a table, splitting encodes each row as a JSON object whose keys are the column names, mapped to the corresponding row values.

Returns:

A json encoder, which can be joined to other operators or pipelines.
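
For example, a sketch that splits each table batch into one JSON object per row (same assumed pipeline pattern):

from kxi import sp

sp.run(sp.read.from_callback('publish')
       | sp.encode.json(split=True)
       | sp.write.to_console())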

protobuf

@Encoder
def protobuf(message: str,
             path: Optional[str] = None,
             *,
             format: Optional[str] = None,
             payload_type: Optional[PayloadType] = None) -> Encoder

Encodes data as protobuf (Protocol Buffers) messages.

Arguments:

  • message - The name of the Protocol Buffer message type to encode.
  • path - The path to a .proto file containing the message type definition.
  • format - A string definition of the Protocol Buffer message format to encode.
  • payload_type - A PayloadType enum value, or the string equivalent.

Returns:

A protobuf encoder, which can be joined to other operators or pipelines.
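
For example, a sketch using an inline format definition; the Trade message and its fields are hypothetical, chosen only to illustrate the parameters:

from kxi import sp

# Hypothetical message definition, for illustration only
trade_format = '''
syntax = "proto3";
message Trade {
    string sym = 1;
    double price = 2;
}
'''

sp.run(sp.read.from_callback('publish')
       | sp.encode.protobuf('Trade', format=trade_format, payload_type='dict')
       | sp.write.to_console())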