Data Types

In Apache Flink’s Python DataStream API, a data type describes the type of a value in the DataStream ecosystem. It can be used to declare input and output types of operations and informs the system how to serialize elements.

Pickle Serialization

If no type has been declared, data is serialized and deserialized using Pickle. For example, the program below specifies no data types.

    from pyflink.datastream import StreamExecutionEnvironment

    def processing():
        env = StreamExecutionEnvironment.get_execution_environment()
        env.set_parallelism(1)
        env.from_collection(collection=[(1, 'aaa'), (2, 'bbb')]) \
            .map(lambda record: (record[0] + 1, record[1].upper())) \
            .print()  # note: print to stdout on the worker machine
        env.execute()

    if __name__ == '__main__':
        processing()
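When no type is declared, each record is pickled before it crosses the process boundary. The standard-library round trip below is a minimal illustration of what that fallback does for a single record (plain Python, not PyFlink API):

```python
import pickle

# A record shaped like those in the example above.
record = (1, 'aaa')

# Serialize to bytes and back, as the Pickle fallback does per element.
data = pickle.dumps(record)
restored = pickle.loads(data)

assert restored == record
print(restored)  # → (1, 'aaa')
```

Pickle handles arbitrary Python objects, which is why it works without any declaration, but the serialized form is opaque to the Java side of the runtime.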

However, types need to be specified when:

  • Passing Python records to Java operations.
  • Improving serialization and deserialization performance.

Passing Python records to Java operations

Since Java operators and functions cannot interpret Python objects, types must be provided so that Python types can be converted to Java types for processing. For example, types need to be provided if you want to output data using the StreamingFileSink, which is implemented in Java.

    from pyflink.common.serialization import Encoder
    from pyflink.common.typeinfo import Types
    from pyflink.datastream import StreamExecutionEnvironment
    from pyflink.datastream.connectors import StreamingFileSink

    def streaming_file_sink():
        env = StreamExecutionEnvironment.get_execution_environment()
        env.set_parallelism(1)
        env.from_collection(collection=[(1, 'aaa'), (2, 'bbb')]) \
            .map(lambda record: (record[0] + 1, record[1].upper()),
                 output_type=Types.ROW([Types.INT(), Types.STRING()])) \
            .add_sink(StreamingFileSink
                      .for_row_format('/tmp/output', Encoder.simple_string_encoder())
                      .build())
        env.execute()

    if __name__ == '__main__':
        streaming_file_sink()

Improve serialization and deserialization performance

Even though data can be serialized and deserialized through Pickle, performance will be better if types are provided. Explicit types allow PyFlink to use efficient serializers when moving records through the pipeline.

Supported Data Types

You can use pyflink.common.typeinfo.Types to specify types in the Python DataStream API. The table below shows the currently supported types and how to define them:

| PyFlink Type    | Usage                    | Corresponding Python Type |
|-----------------|--------------------------|---------------------------|
| BOOLEAN         | Types.BOOLEAN()          | bool                      |
| SHORT           | Types.SHORT()            | int                       |
| INT             | Types.INT()              | int                       |
| LONG            | Types.LONG()             | int                       |
| FLOAT           | Types.FLOAT()            | float                     |
| DOUBLE          | Types.DOUBLE()           | float                     |
| CHAR            | Types.CHAR()             | str                       |
| BIG_INT         | Types.BIG_INT()          | bytes                     |
| BIG_DEC         | Types.BIG_DEC()          | decimal.Decimal           |
| STRING          | Types.STRING()           | str                       |
| BYTE            | Types.BYTE()             | int                       |
| TUPLE           | Types.TUPLE()            | tuple                     |
| PRIMITIVE_ARRAY | Types.PRIMITIVE_ARRAY()  | list                      |
| ROW             | Types.ROW()              | dict                      |