Piping Responses

Streaming Responses

As described in the overall design, a SQLFlow job could be either a standard SQL statement or an extended SQL statement, where an extended SQL statement is translated into a Python program. Therefore, each job might generate up to the following four data streams:

  • standard output, where each element is a line of text,
  • standard error, where each element is a line of text,
  • data rows, where the first element consists of the field names/types, and each of the rest is a row of data,
  • status, where each element could be pending, failed, or succeeded.

To create a good user experience, we need to pipe these responses from SQLFlow jobs to the Jupyter Notebook in real time.
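For concreteness, these four streams could be modeled in Go roughly as in the sketch below. The names (JobStreams, JobStatus, Row) are illustrative only, not definitions from the SQLFlow codebase.

    package sqlflow

    // JobStatus enumerates the elements of the status stream.
    type JobStatus int

    const (
        Pending JobStatus = iota
        Failed
        Succeeded
    )

    // Row is one element of the data-rows stream; the first Row sent
    // carries the field names/types.
    type Row []interface{}

    // JobStreams groups the up-to-four streams a running job generates.
    type JobStreams struct {
        Stdout chan string    // each element is a line of text
        Stderr chan string    // each element is a line of text
        Rows   chan Row       // header first, then data rows
        Status chan JobStatus // pending, failed, or succeeded
    }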

Stages in the Pipe

The pipe that streams outputs from SQLFlow jobs to the Jupyter Notebook consists of the following stages:

    Web browser
        |
        | HTTP
        |
    Jupyter Notebook server
        |
        | ZeroMQ streams: Shell, IOPub, stdin, Control, Heartbeat
        |
    IPython kernel
        |
        | IPython magic command framework
        |
    SQLFlow magic command for Jupyter
        |
        | gRPC
        |
    SQLFlow server
        |
        | Go channels
        |
    SQLFlow job manager (Go functions)
        |
        | IPC with Go's standard library
        |
    SQLFlow jobs

In the above figure, everything from the SQLFlow magic command down to the bottom layer is our work.
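As a sketch of how the bottom of the pipe could work, the job manager might start a SQLFlow job as a subprocess using Go's standard os/exec package and forward its stdout and stderr line by line into Go channels. The function name and signature below are hypothetical:

    package sqlflow

    import (
        "bufio"
        "io"
        "os/exec"
    )

    // runJob starts a SQLFlow job as a subprocess and forwards its
    // stdout and stderr, line by line, into two Go channels.
    func runJob(name string, args ...string) (stdout, stderr chan string, err error) {
        cmd := exec.Command(name, args...)
        outPipe, err := cmd.StdoutPipe()
        if err != nil {
            return nil, nil, err
        }
        errPipe, err := cmd.StderrPipe()
        if err != nil {
            return nil, nil, err
        }
        if err := cmd.Start(); err != nil {
            return nil, nil, err
        }
        stdout, stderr = make(chan string), make(chan string)
        forward := func(r io.Reader, ch chan string) {
            defer close(ch)
            s := bufio.NewScanner(r)
            for s.Scan() {
                ch <- s.Text() // blocks until the next stage reads the line
            }
        }
        go forward(outPipe, stdout)
        go forward(errPipe, stderr)
        // A real implementation would also cmd.Wait() after both
        // channels close; omitted here for brevity.
        return stdout, stderr, nil
    }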

Streaming

We had two alternative ideas: multiple streams and a multiplexing stream. We decided to use a multiplexing stream after an unsuccessful trial with the multiple-streams idea, in which we made the job write to various Go channels and forwarded each Go channel to a streaming gRPC call, as shown below.

Multiple Streams

The above figure shows that there are multiple streams between the Jupyter Notebook server and the Jupyter kernels. According to the Jupyter documentation, there are five: Shell, IOPub, stdin, Control, and Heartbeat. These streams are ZeroMQ streams. We don't use ZeroMQ, but we can borrow the idea of having multiple parallel streams in the pipe. In our trial, the gRPC interface looked like the following:

    service SQLFlow {
        rpc File(string sql) returns (int id) {}
        rpc ReadStdout(int id) returns (stream string) {}
        rpc ReadStderr(int id) returns (stream string) {}
        rpc ReadData(int id) returns (stream Row) {}
        rpc ReadStatus(int id) returns (stream int) {}
    }

However, we realized that if the user doesn't call any one of the SQLFlow.Read… calls, there is no forwarding from the corresponding Go channel to Jupyter, so the job blocks forever on the write.
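The hazard is easy to reproduce with plain Go channels. In the toy program below (all names are illustrative), the job goroutine's send blocks until something receives:

    package main

    import "fmt"

    func main() {
        stdout := make(chan string) // unbuffered, like the job's output channel

        go func() {
            stdout <- "a line the job wants to print" // blocks here until read
        }()

        // Comment out the next line to simulate a client that never calls
        // SQLFlow.ReadStdout: the goroutine above then blocks forever.
        fmt.Println("client:", <-stdout)
    }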

A Multiplexing Stream

Another idea is multiplexing all the streams into one. For example, we can have only one ZeroMQ stream, where each element is of a polymorphic type: it could be a text string or a data row. The gRPC interface then shrinks to a single call:

    service SQLFlow {
        rpc Run(string sql) returns (stream Response) {}
    }

    // Only one of the following fields should be set.
    message Response {
        oneof record {
            repeated string head = 1;              // Column names.
            repeated google.protobuf.Any row = 2;  // Cells in a row.
            string log = 3;                        // A line from stderr or stdout.
        }
    }
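With this design, the server has a single stream that it always drains, so the job cannot block on an unread channel. Below is a minimal server-side sketch; the Go types stand in for what protoc would generate and are not the actual SQLFlow API:

    package sqlflow

    // Response mirrors the message above: exactly one field is set.
    type Response struct {
        Head []string      // column names
        Row  []interface{} // cells in a row
        Log  string        // a line from stderr or stdout
    }

    // ResponseStream stands in for the server-side gRPC stream object.
    type ResponseStream interface {
        Send(*Response) error
    }

    // Run forwards everything the job emits, in order, over the single
    // stream, and stops as soon as the client disconnects.
    func Run(responses <-chan *Response, stream ResponseStream) error {
        for r := range responses {
            if err := stream.Send(r); err != nil {
                return err // the client went away; stop forwarding
            }
        }
        return nil
    }

A side benefit of multiplexing is ordering: because log lines and data rows travel through the same channel, the client sees them in the order the job produced them.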