CSV format

To use the CSV format, you need to add the Flink CSV dependency to your project:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-csv</artifactId>
        <version>1.16.0</version>
    </dependency>

For PyFlink users, the CSV format can be used directly in your jobs, without adding the dependency above.

Flink supports reading CSV files using CsvReaderFormat. The reader utilizes the Jackson library and allows passing the corresponding configuration for the CSV schema and parsing options.

CsvReaderFormat can be initialized and used like this:

    CsvReaderFormat<SomePojo> csvFormat = CsvReaderFormat.forPojo(SomePojo.class);
    FileSource<SomePojo> source =
            FileSource.forRecordStreamFormat(csvFormat, Path.fromLocalFile(...)).build();

In this case, the schema for CSV parsing is automatically derived from the fields of the SomePojo class using the Jackson library.

Note: you might need to add the @JsonPropertyOrder({field1, field2, ...}) annotation to your class definition, with the field order exactly matching that of the CSV file columns.
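
For illustration, a minimal sketch of what such a POJO could look like; the field names here are made up for this sketch and are not part of any Flink API:

    import com.fasterxml.jackson.annotation.JsonPropertyOrder;

    // The declared field order must match the column order of the CSV file.
    @JsonPropertyOrder({"name", "age"})
    public static class SomePojo {
        public String name;
        public int age;
    }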

Advanced configuration

If you need more fine-grained control over the CSV schema or the parsing options, use the lower-level forSchema static factory method of CsvReaderFormat:

    CsvReaderFormat<T> forSchema(Supplier<CsvMapper> mapperFactory,
                                 Function<CsvMapper, CsvSchema> schemaGenerator,
                                 TypeInformation<T> typeInformation)

Below is an example of reading a POJO with a custom column separator:

    // Has to match the exact order of columns in the CSV file
    @JsonPropertyOrder({"city", "lat", "lng", "country", "iso2",
            "adminName", "capital", "population"})
    public static class CityPojo {
        public String city;
        public BigDecimal lat;
        public BigDecimal lng;
        public String country;
        public String iso2;
        public String adminName;
        public String capital;
        public long population;
    }

    Function<CsvMapper, CsvSchema> schemaGenerator = mapper ->
            mapper.schemaFor(CityPojo.class).withoutQuoteChar().withColumnSeparator('|');

    CsvReaderFormat<CityPojo> csvFormat =
            CsvReaderFormat.forSchema(
                    () -> new CsvMapper(), schemaGenerator, TypeInformation.of(CityPojo.class));

    FileSource<CityPojo> source =
            FileSource.forRecordStreamFormat(csvFormat, Path.fromLocalFile(...)).build();

The corresponding CSV file:

    Berlin|52.5167|13.3833|Germany|DE|Berlin|primary|3644826
    San Francisco|37.7562|-122.443|United States|US|California||3592294
    Beijing|39.905|116.3914|China|CN|Beijing|primary|19433000
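
To consume the parsed records, the source can be attached to a job in the usual way; a minimal sketch, assuming a StreamExecutionEnvironment named env is already set up (env is an assumption of this sketch, not part of the example above):

    // Attach the file source to the job; no watermarks are needed for plain CSV records.
    DataStream<CityPojo> cities =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "csv-source");
    cities.print();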

It is also possible to read more complex data types using fine-grained Jackson settings:

    public static class ComplexPojo {
        private long id;
        private int[] array;
    }

    CsvReaderFormat<ComplexPojo> csvFormat =
            CsvReaderFormat.forSchema(
                    CsvSchema.builder()
                            .addColumn(
                                    new CsvSchema.Column(0, "id", CsvSchema.ColumnType.NUMBER))
                            .addColumn(
                                    new CsvSchema.Column(4, "array", CsvSchema.ColumnType.ARRAY)
                                            .withArrayElementSeparator("#"))
                            .build(),
                    TypeInformation.of(ComplexPojo.class));

For PyFlink users, a CSV schema can be defined by manually adding columns; the output type of the CSV source will be a Row with each column mapped to a field.

    schema = CsvSchema.builder() \
        .add_number_column('id', number_type=DataTypes.BIGINT()) \
        .add_array_column('array', separator='#', element_type=DataTypes.INT()) \
        .set_column_separator(',') \
        .build()
    source = FileSource.for_record_stream_format(
        CsvReaderFormat.for_schema(schema), CSV_FILE_PATH).build()

    # the type of record will be Types.ROW_NAMED(['id', 'array'], [Types.LONG(), Types.LIST(Types.INT())])
    ds = env.from_source(source, WatermarkStrategy.no_watermarks(), 'csv-source')

The corresponding CSV file:

    0,1#2#3
    1,
    2,1

Similarly to TextLineInputFormat, CsvReaderFormat can be used in both continuous and batch modes (see TextLineInputFormat for examples).
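
For instance, a minimal sketch of the continuous mode, reusing csvFormat from the first example; the one-second discovery interval is an arbitrary value chosen for illustration:

    // Monitor the input path for new files every second instead of reading it once (batch mode).
    // The discovery interval is a java.time.Duration.
    FileSource<SomePojo> source =
            FileSource.forRecordStreamFormat(csvFormat, Path.fromLocalFile(...))
                    .monitorContinuously(Duration.ofSeconds(1))
                    .build();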

For PyFlink users, CsvBulkWriters can be used to create a BulkWriterFactory to write records to files in CSV format.

    schema = CsvSchema.builder() \
        .add_number_column('id', number_type=DataTypes.BIGINT()) \
        .add_array_column('array', separator='#', element_type=DataTypes.INT()) \
        .set_column_separator(',') \
        .build()
    sink = FileSink.for_bulk_format(
        OUTPUT_DIR, CsvBulkWriters.for_schema(schema)).build()
    ds.sink_to(sink)