kxi.sp.window

Stream Processor windows.

timer

@Window
def timer(period: Union[timedelta, np.timedelta64],
          *,
          count_trigger: int = 2**63 - 1,
          skip_empty_windows: bool = False,
          accept_dictionaries: bool = True) -> Window

Aggregate the stream into windows by processing time.

Arguments:

  • period - The frequency at which windows should fire.
  • count_trigger - The number of buffered records at which the buffer will be flushed automatically.
  • skip_empty_windows - True to only emit non-empty windows.
  • accept_dictionaries - If batches will never be dictionaries, this can be False to increase performance.

Returns:

A timer window, which can be joined to other operators or pipelines.
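The count_trigger behavior shared by the window operators can be illustrated with a small pure-Python sketch (a simplification, not the kxi implementation): records accumulate in a buffer, and the buffer is flushed as a window whenever it reaches the trigger count, with any leftover records remaining buffered until the next timer fire.

```python
def flush_on_count(stream, count_trigger):
    """Sketch of count_trigger semantics: emit a window whenever the
    buffer reaches `count_trigger` records; leftover records stay
    buffered for the next flush."""
    windows, buffer = [], []
    for record in stream:
        buffer.append(record)
        if len(buffer) >= count_trigger:
            windows.append(buffer)
            buffer = []
    return windows, buffer
```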

sliding

@Window
def sliding(period: Timedelta,
            duration: Timedelta,
            time_column: Optional[Union[str, kx.SymbolAtom]] = None,
            *,
            lateness: Timedelta = (0, 's'),
            passthrough: bool = False,
            sort: bool = False,
            count_trigger: int = 2**63 - 1,
            time_assigner: Optional[Union[str, Callable]] = None,
            skip_empty_windows: bool = False,
            accept_dictionaries: bool = True) -> Window

Aggregate the stream into potentially overlapping windows based on event time.

Arguments:

  • period - The frequency at which windows should fire.
  • duration - The length of a window.
  • time_column - Name of the column containing the event timestamps. Mutually exclusive with the time_assigner argument.
  • lateness - The time delay before emitting a window to allow late events to arrive.
  • passthrough - True to send late events through the pipeline with the next batch rather than dropping them.
  • sort - True to sort the window in ascending time order.
  • count_trigger - The number of buffered records at which the buffer will be flushed automatically.
  • time_assigner - A function which will be called with the data (or the parameters specified by the params keyword argument) which should return a list of timestamps with a value for each record in the data. Mutually exclusive with the time_column argument.
  • skip_empty_windows - True to only emit non-empty windows. This can increase performance on sparse historical data.
  • accept_dictionaries - If batches will never be dictionaries, this can be False to increase performance.

Returns:

A sliding window, which can be joined to other operators or pipelines.
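The relationship between period and duration can be sketched in plain Python (integer timestamps stand in for event times; the real operator works on kdb+ temporal types): each record belongs to every window whose half-open interval [start, start + duration) covers its timestamp, with starts aligned to multiples of period.

```python
def sliding_assign(times, period, duration):
    """Sketch: assign each integer timestamp to every aligned window
    [start, start + duration) that covers it. When duration > period,
    windows overlap and records appear in duration/period windows."""
    windows = {}
    for t in times:
        start = (t // period) * period   # latest aligned start <= t
        while start > t - duration:      # window still covers t
            windows.setdefault(start, []).append(t)
            start -= period
    return {s: sorted(v) for s, v in sorted(windows.items())}
```

With period=2 and duration=4, every record lands in two overlapping windows.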

tumbling

@Window
def tumbling(period: Timedelta,
             time_column: Optional[Union[str, kx.SymbolAtom]] = None,
             *,
             lateness: Timedelta = (0, 's'),
             passthrough: bool = False,
             sort: bool = False,
             count_trigger: int = 2**63 - 1,
             time_assigner: Optional[Union[str, Callable]] = None,
             skip_empty_windows: bool = False,
             accept_dictionaries: bool = True) -> Window

Aggregate the stream into non-overlapping windows based on event time.

Arguments:

  • period - The frequency at which windows should fire.
  • time_column - Name of the column containing the event timestamps. Mutually exclusive with the time_assigner argument.
  • lateness - The time delay before emitting a window to allow late events to arrive.
  • passthrough - True to send late events through the pipeline with the next batch rather than dropping them.
  • sort - True to sort the window in ascending time order.
  • count_trigger - The number of buffered records at which the buffer will be flushed automatically.
  • time_assigner - A function which will be called with the data (or the parameters specified by the params keyword argument) which should return a list of timestamps with a value for each record in the data. Mutually exclusive with the time_column argument.
  • skip_empty_windows - True to only emit non-empty windows. This can increase performance on sparse historical data.
  • accept_dictionaries - If batches will never be dictionaries, this can be False to increase performance.

Returns:

A tumbling operator, which can be joined to other operators or pipelines.
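Tumbling windows are the special case where the window length equals the period, so every record falls in exactly one window. A minimal pure-Python sketch of the bucketing (again using integer timestamps in place of kdb+ temporal types):

```python
def tumbling_assign(times, period):
    """Sketch: bucket each integer timestamp into the single aligned
    window [start, start + period) that contains it."""
    windows = {}
    for t in times:
        windows.setdefault((t // period) * period, []).append(t)
    return windows
```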

count

@Window
def count(size: int,
          frequency: Optional[int] = None,
          accept_dictionaries: bool = True) -> Window

Split the stream into evenly sized windows.

Arguments:

  • size - The exact number of records to include in each window.
  • frequency - The number of records between the starts of consecutive windows. If this is less than size, the windows will overlap. If None, it defaults to the size argument.
  • accept_dictionaries - If batches will never be dictionaries, this can be False to increase performance.

Returns:

A count operator, which can be joined to other operators or pipelines.
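The interaction between size and frequency can be sketched in plain Python (a simplification of the operator's behavior): windows start every frequency records, each contains exactly size records, and windows overlap when frequency is less than size.

```python
def count_split(records, size, frequency=None):
    """Sketch: emit every full window of `size` records; windows start
    every `frequency` records, overlapping when frequency < size.
    If frequency is None it defaults to size (non-overlapping)."""
    if frequency is None:
        frequency = size
    out, start = [], 0
    while start + size <= len(records):
        out.append(records[start:start + size])
        start += frequency
    return out
```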

global_window

@Window
def global_window(trigger: OperatorFunction,
                  *,
                  mixed_schemas: bool = False,
                  accept_dictionaries: bool = True) -> Window

Aggregate the stream using a custom trigger.

Note: This window breaks the naming conventions of kxi.sp. Because global is a reserved keyword in Python, defining or using a module attribute named global would be a syntax error, so this function is named global_window as a workaround.

Arguments:

  • trigger - A function that splits the stream (see below).
  • mixed_schemas - True to support batches being tables with different schemas.
  • accept_dictionaries - If batches will never be dictionaries, this can be False to increase performance.

Returns:

A global window, which can be joined to other operators or pipelines.

The trigger function is passed the following parameters:

  • the operator's id
  • the buffered records
  • an offset of where the current batch starts
  • the current batch's metadata
  • the current batch's data

As batches are ingested, the trigger function will be applied to the batch, and data will be buffered. However, the buffering behavior will depend on the output of the trigger function:

  • If the trigger function returns an empty list or generic null, the incoming batch will be buffered and nothing will be emitted.
  • If the trigger function returns numbers, the buffer will be split on those indices, with each index being the start of a new window.

  • Note - Last data batch: The last list will remain in the buffer. It can be emitted by returning the count of the buffer as the last index. To map indices in the current batch to indices in the buffer, add the offset parameter to the indices.

The buffered records cannot be modified from the trigger function.

Batches with mixed schemas are only supported when using the mixed_schemas option.

  • Note - Caveat when using mixed_schemas: When this option is set, the buffer passed to the trigger will be a list of batches rather than a single table. The indices returned by the trigger function will still work as though the buffer were a single list.
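The buffer-splitting contract described above can be sketched in pure Python (a simplification of the operator's internal behavior): given the buffered records and the indices returned by the trigger, the buffer is split at each index, every piece before the final index is emitted, and the tail stays in the buffer.

```python
def split_buffer(buffer, indices):
    """Sketch of the global window contract: split `buffer` at each
    trigger index, emit all pieces up to the last index, and keep the
    tail buffered. Returning len(buffer) as the final index therefore
    flushes everything; an empty index list buffers without emitting."""
    if not indices:
        return [], buffer
    emitted, prev = [], 0
    for i in indices:
        emitted.append(buffer[prev:i])
        prev = i
    return emitted, buffer[prev:]
```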