How to Read Data from Named Pipes (FIFOs)
Learn how to use KDB-X to read data from Unix named pipes (FIFOs) for both simple blocking reads and continuous data streaming from external processes.
Overview
This guide walks you through the two primary methods for reading a named pipe in a KDB-X process. You will learn how to:
- Read raw data: Perform simple, blocking reads into a byte vector using hopen and read1 or read0
- Stream and process data: Continuously read and process data from a pipe using .Q.fps, ideal for integrating with external processes
Prerequisites
Before you begin, ensure you have the following set up:
- A Unix-like environment (such as WSL, Linux, or macOS)
- kdb+ V3.4 or later
Read raw data from a pipe
To read from a named pipe, open a handle using hopen with the `:fifo:// prefix. This returns a read-only handle from which data is read sequentially.
The following examples demonstrate how to read data from a named pipe using both binary and text methods.
Processing data with a specified buffer size
Both functions accept an optional buffer size. When called with only a handle (for example, read1 h), they read all currently available data. When a length is provided (for example, read1 (h; 100)), they read up to that number of bytes or characters.
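For example, here is a minimal sketch of the two calling conventions; datapipe is a hypothetical FIFO with a writer already attached:

q)h: hopen `:fifo://datapipe
q)read1 h        / blocks, then returns all currently available bytes
q)read1 (h; 100) / blocks, then returns at most 100 bytes
q)read0 (h; 100) / same limit, but returns the data as strings
q)hclose h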
Understanding blocking reads
Both methods perform blocking reads: your q process pauses at the read step and waits indefinitely until the writing process pushes data into the pipe or closes it.
Example 1: Reading binary data
This is the standard method for simple interactions with raw byte data.
- Prepare the environment. In your shell, create a named pipe using mkfifo and write to it. The echo command will block until q connects.

  # Terminal 1
  $ mkfifo bytepipe
  $ echo "hello pipe" > bytepipe

- Read from the pipe. Open the pipe, read the available bytes, and close the handle.

  // Terminal 2 (q session)
  q)// Open a handle to the named pipe
  q)h: hopen `:fifo://bytepipe
  q)// Perform a blocking read for all available data
  q)read1 h
  0x68656c6c6f20706970650a
  q)// Close the handle when done
  q)hclose h

End-of-file (EOF) behavior
When read1 reaches the end of the file (EOF), which occurs when the writing process closes its end of the pipe, it returns an empty byte vector (`byte$()).
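For instance, a minimal sketch of a drain loop that uses this sentinel, assuming the bytepipe FIFO from the example above and a writer that eventually closes its end:

q)h: hopen `:fifo://bytepipe
q)// Keep reading until the writer closes the pipe; an empty result signals EOF
q)while[count chunk: read1 h; show chunk]
q)hclose h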
A `:fifo:// handle is also useful for reading certain non-seekable or zero-length system files or devices. For example:
q)a:hopen`:fifo:///dev/urandom
q)read1 (a;8)
0x8f172b7ea00b85e6
q)hclose a
Example 2: Reading text data
While read1 returns raw bytes, read0 reads text and returns it as a list of strings, split on line breaks.
- Prepare the environment. In your shell, create a named pipe and write multiple lines of text to it.

  # Terminal 1
  $ mkfifo textpipe
  $ (echo "first line"; echo "second line"; echo "third line") > textpipe

- Read text strings. Open the pipe and read text data in chunks.

  // Terminal 2 (q session)
  q)h: hopen `:fifo://textpipe
  q)// Read text data (returns strings, not bytes)
  q)read0 (h; 20)
  "first line"
  "second li"
  q)// Read remaining text
  q)read0 (h; 20)
  "ne"
  "third line"
  q)hclose h
Stream and process data from a pipe
For continuous data streams, use the .Q.fps (pipe stream) utility. This is a powerful technique for processing the output of another program, for example decompressing a file on the fly and loading it directly into a KDB-X table.
Performance and efficiency
This streaming approach is highly efficient because it avoids the overhead of saving a large, intermediate file to disk. Data flows directly from the source into your q process's memory.
The following examples demonstrate how to stream data from a compressed CSV file into a trade table.
First, create a sample CSV file named t.csv:
MSFT,12:01:10.000,A,O,300,55.60
AAPL,12:01:20.000,B,O,500,67.70
IBM,12:01:20.100,A,O,100,61.11
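If you prefer to create this file from q rather than a text editor, one possible way (assuming the current directory is writable) is to save the rows as lines of text:

q)// Write the sample rows to t.csv, one line per string
q)`:t.csv 0: ("MSFT,12:01:10.000,A,O,300,55.60";"AAPL,12:01:20.000,B,O,500,67.70";"IBM,12:01:20.100,A,O,100,61.11")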
Example 1: Stream from a Gzip-compressed file
This example uses gunzip to decompress t.csv.gz and pipes the output directly to q for processing.
- Prepare the environment. In your shell, compress the t.csv file. In q, create an empty trade table with the correct schema.

  # Shell
  $ gzip t.csv

- Set up and start the stream. The following q code sets up the pipeline:

  - Creates a named pipe called fifo using mkfifo
  - Executes gunzip in the background (&), directing its output to the fifo
  - Uses .Q.fps to read from the fifo. For each chunk of data x, it parses the CSV content using 0: and inserts it into the trade table using insert

  Caution with rm -f
  The command rm -f fifo forcefully removes the pipe if it exists. Be cautious when using rm -f in scripts to avoid accidentally deleting important files.

  q)// Safely create a new, empty pipe
  q)system "rm -f fifo && mkfifo fifo"
  q)// Define the target table with the correct column types using flip
  q)trade: flip `sym`time`ex`cond`size`price!"STCCFF"$\:()
  q)// Decompress the file in the background using system, piping output to 'fifo'
  q)system "gunzip -cf t.csv.gz > fifo &"
  q)// Start the pipe stream processor
  q)// .Q.fps[function; handle] reads the pipe and applies the function to each data chunk
  q).Q.fps[{`trade insert ("STCCFF";",")0:x}]`:fifo
  q)// Verify the data has been loaded
  q)trade
  sym  time         ex cond size price
  -------------------------------------
  MSFT 12:01:10.000 A  O    300  55.6
  AAPL 12:01:20.000 B  O    500  67.7
  IBM  12:01:20.100 A  O    100  61.11
Example 2: Stream from a ZIP archive
This example uses the unzip command to stream the contents of a ZIP archive, demonstrating the versatility of this pattern.
- Prepare the environment. In your shell, create a ZIP archive containing t.csv. (If the gzip step in Example 1 removed t.csv, recreate it first; gzip deletes the original file by default.)

  # Shell
  $ zip t.zip t.csv

- Set up and start the stream. This code is nearly identical to the gunzip example but substitutes the unzip -p command, which prints file contents to standard output without extracting.

  q)// Safely create a new, empty pipe
  q)system "rm -f fifo && mkfifo fifo"
  q)// Ensure the target table is empty
  q)trade: flip `sym`time`ex`cond`size`price!"STCCFF"$\:()
  q)// Unzip to stdout in the background, piping output to 'fifo'
  q)system "unzip -p t.zip > fifo &"
  q)// Start the same pipe stream processor
  q).Q.fps[{`trade insert ("STCCFF";",")0:x}]`:fifo
  q)// Verify the data
  q)trade
  sym  time         ex cond size price
  -------------------------------------
  MSFT 12:01:10.000 A  O    300  55.6
  AAPL 12:01:20.000 B  O    500  67.7
  IBM  12:01:20.100 A  O    100  61.11
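The chunk function passed to .Q.fps is an ordinary q function, so it can also transform or filter each chunk before inserting. The following is an illustrative sketch, not part of the examples above: it rebuilds each chunk as a table using the trade column names and keeps only rows whose price exceeds 60 (the filter is arbitrary), assuming the fifo and background unzip command have been set up exactly as in Example 2.

q)// Parse the chunk into a table with the trade column names, then insert only the filtered rows
q).Q.fps[{`trade insert select from (flip cols[trade]!("STCCFF";",")0:x) where price>60}]`:fifo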
Summary
In this guide, you:
- Learned the two primary methods for reading from Unix named pipes in KDB-X
- Performed a simple, blocking read from a pipe using hopen, read1, and read0
- Implemented a continuous data stream using .Q.fps to load and process data from external commands like gunzip and unzip
- Understood the performance benefits of streaming data to avoid intermediate disk I/O