CSV Files

Spark SQL provides spark.read().csv("file_name") to read a file or directory of files in CSV format into a Spark DataFrame, and dataframe.write().csv("path") to write to a CSV file. The option() function can be used to customize the read or write behavior, such as controlling the header, the delimiter character, the character set, and so on.

// A CSV dataset is pointed to by path.
// The path can be either a single CSV file or a directory of CSV files
val path = "examples/src/main/resources/people.csv"
val df = spark.read.csv(path)
df.show()
// +------------------+
// |               _c0|
// +------------------+
// |      name;age;job|
// |Jorge;30;Developer|
// |  Bob;32;Developer|
// +------------------+
// Read a CSV file with a custom delimiter; the default delimiter is ","
val df2 = spark.read.option("delimiter", ";").csv(path)
df2.show()
// +-----+---+---------+
// |  _c0|_c1|      _c2|
// +-----+---+---------+
// | name|age|      job|
// |Jorge| 30|Developer|
// |  Bob| 32|Developer|
// +-----+---+---------+
// Read a CSV file with a custom delimiter and a header
val df3 = spark.read.option("delimiter", ";").option("header", "true").csv(path)
df3.show()
// +-----+---+---------+
// | name|age|      job|
// +-----+---+---------+
// |Jorge| 30|Developer|
// |  Bob| 32|Developer|
// +-----+---+---------+
// You can also use options() to set multiple options at once
val df4 = spark.read.options(Map("delimiter" -> ";", "header" -> "true")).csv(path)
// "output" is a folder containing multiple CSV files and a _SUCCESS file
df3.write.csv("output")
// Read all files in a folder; make sure only CSV files are present in the folder
val folderPath = "examples/src/main/resources"
val df5 = spark.read.csv(folderPath)
df5.show()
// Wrong schema because non-CSV files are read
// +-----------+
// |        _c0|
// +-----------+
// | 238val_238|
// |   86val_86|
// | 311val_311|
// |   27val_27|
// | 165val_165|
// +-----------+

Find full example code at “examples/src/main/scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala” in the Spark repo.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// A CSV dataset is pointed to by path.
// The path can be either a single CSV file or a directory of CSV files
String path = "examples/src/main/resources/people.csv";
Dataset<Row> df = spark.read().csv(path);
df.show();
// +------------------+
// |               _c0|
// +------------------+
// |      name;age;job|
// |Jorge;30;Developer|
// |  Bob;32;Developer|
// +------------------+
// Read a CSV file with a custom delimiter; the default delimiter is ","
Dataset<Row> df2 = spark.read().option("delimiter", ";").csv(path);
df2.show();
// +-----+---+---------+
// |  _c0|_c1|      _c2|
// +-----+---+---------+
// | name|age|      job|
// |Jorge| 30|Developer|
// |  Bob| 32|Developer|
// +-----+---+---------+
// Read a CSV file with a custom delimiter and a header
Dataset<Row> df3 = spark.read().option("delimiter", ";").option("header", "true").csv(path);
df3.show();
// +-----+---+---------+
// | name|age|      job|
// +-----+---+---------+
// |Jorge| 30|Developer|
// |  Bob| 32|Developer|
// +-----+---+---------+
// You can also use options() to set multiple options at once
java.util.Map<String, String> optionsMap = new java.util.HashMap<String, String>();
optionsMap.put("delimiter", ";");
optionsMap.put("header", "true");
Dataset<Row> df4 = spark.read().options(optionsMap).csv(path);
// "output" is a folder containing multiple CSV files and a _SUCCESS file
df3.write().csv("output");
// Read all files in a folder; make sure only CSV files are present in the folder
String folderPath = "examples/src/main/resources";
Dataset<Row> df5 = spark.read().csv(folderPath);
df5.show();
// Wrong schema because non-CSV files are read
// +-----------+
// |        _c0|
// +-----------+
// | 238val_238|
// |   86val_86|
// | 311val_311|
// |   27val_27|
// | 165val_165|
// +-----------+

Find full example code at “examples/src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java” in the Spark repo.

# spark is from the previous example
sc = spark.sparkContext

# A CSV dataset is pointed to by path.
# The path can be either a single CSV file or a directory of CSV files
path = "examples/src/main/resources/people.csv"
df = spark.read.csv(path)
df.show()
# +------------------+
# |               _c0|
# +------------------+
# |      name;age;job|
# |Jorge;30;Developer|
# |  Bob;32;Developer|
# +------------------+
# Read a CSV file with a custom delimiter; the default delimiter is ","
df2 = spark.read.option("delimiter", ";").csv(path)
df2.show()
# +-----+---+---------+
# |  _c0|_c1|      _c2|
# +-----+---+---------+
# | name|age|      job|
# |Jorge| 30|Developer|
# |  Bob| 32|Developer|
# +-----+---+---------+
# Read a CSV file with a custom delimiter and a header
df3 = spark.read.option("delimiter", ";").option("header", True).csv(path)
df3.show()
# +-----+---+---------+
# | name|age|      job|
# +-----+---+---------+
# |Jorge| 30|Developer|
# |  Bob| 32|Developer|
# +-----+---+---------+
# You can also use options() to set multiple options at once
df4 = spark.read.options(delimiter=";", header=True).csv(path)
# "output" is a folder containing multiple CSV files and a _SUCCESS file
df3.write.csv("output")
# Read all files in a folder; make sure only CSV files are present in the folder
folderPath = "examples/src/main/resources"
df5 = spark.read.csv(folderPath)
df5.show()
# Wrong schema because non-CSV files are read
# +-----------+
# |        _c0|
# +-----------+
# | 238val_238|
# |   86val_86|
# | 311val_311|
# |   27val_27|
# | 165val_165|
# +-----------+

Find full example code at “examples/src/main/python/sql/datasource.py” in the Spark repo.

Data Source Option

Data source options of CSV can be set via the following (a short sketch follows this list):

  • the .option/.options methods of
    • DataFrameReader
    • DataFrameWriter
    • DataStreamReader
    • DataStreamWriter
  • the built-in functions below
    • from_csv
    • to_csv
    • schema_of_csv
  • OPTIONS clause at CREATE TABLE USING DATA_SOURCE
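
For illustration, here is a minimal Scala sketch of the last two mechanisms, the from_csv built-in function and the OPTIONS clause. The csvStrings DataFrame and the people table name are made up for this example; only the people.csv path comes from the examples above.

import spark.implicits._
import org.apache.spark.sql.functions.{col, from_csv}
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// from_csv parses a CSV-formatted string column; options are passed as a Map
// (csvStrings is an illustrative DataFrame with a single string column "value")
val csvStrings = Seq("Jorge;30;Developer", "Bob;32;Developer").toDF("value")
val schema = new StructType()
  .add("name", StringType)
  .add("age", IntegerType)
  .add("job", StringType)
val parsed = csvStrings.select(from_csv(col("value"), schema, Map("sep" -> ";")).as("person"))

// The OPTIONS clause of CREATE TABLE ... USING csv accepts the same option names
// ("people" is a hypothetical table name)
spark.sql(
  """CREATE TABLE people
    |USING csv
    |OPTIONS (path 'examples/src/main/resources/people.csv', sep ';', header 'true')""".stripMargin)
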
For each property, the default value and the scope (read, write, or read/write) are shown in parentheses.

• sep (default: ","; scope: read/write): Sets a separator for each field and value. This separator can be one or more characters.
• encoding (default: UTF-8; scope: read/write): For reading, decodes the CSV files by the given encoding type. For writing, specifies the encoding (charset) of the saved CSV files. CSV built-in functions ignore this option.
• quote (default: "; scope: read/write): Sets a single character used for escaping quoted values where the separator can be part of the value. For reading, if you would like to turn off quotations, set an empty string rather than null. For writing, if an empty string is set, it uses u0000 (the null character).
• quoteAll (default: false; scope: write): A flag indicating whether all values should always be enclosed in quotes. The default is to only escape values containing a quote character.
• escape (default: \; scope: read/write): Sets a single character used for escaping quotes inside an already quoted value.
• escapeQuotes (default: true; scope: write): A flag indicating whether values containing quotes should always be enclosed in quotes. The default is to escape all values containing a quote character.
• comment (default: none (disabled); scope: read): Sets a single character used for skipping lines beginning with this character. By default, it is disabled.
• header (default: false; scope: read/write): For reading, uses the first line as the names of columns. For writing, writes the names of columns as the first line. Note that if the given path is an RDD of Strings, this header option will remove all lines that match the header, if present. CSV built-in functions ignore this option.
• inferSchema (default: false; scope: read): Infers the input schema automatically from the data. It requires one extra pass over the data. CSV built-in functions ignore this option.
• preferDate (default: true; scope: read): During schema inference (inferSchema), attempts to infer string columns that contain dates as Date if the values satisfy the dateFormat option or the default date format. For columns that contain a mixture of dates and timestamps, tries to infer them as TimestampType if the timestamp format is not specified; otherwise infers them as StringType.
• enforceSchema (default: true; scope: read): If set to true, the specified or inferred schema is forcibly applied to data source files, and headers in CSV files are ignored. If set to false, the schema is validated against all headers in CSV files when the header option is set to true. Field names in the schema and column names in CSV headers are checked by their positions, taking spark.sql.caseSensitive into account. Though the default value is true, it is recommended to disable the enforceSchema option to avoid incorrect results. CSV built-in functions ignore this option.
• ignoreLeadingWhiteSpace (default: false (for reading), true (for writing); scope: read/write): A flag indicating whether leading whitespace in values being read/written should be skipped.
• ignoreTrailingWhiteSpace (default: false (for reading), true (for writing); scope: read/write): A flag indicating whether trailing whitespace in values being read/written should be skipped.
• nullValue (default: empty string; scope: read/write): Sets the string representation of a null value. Since 2.0.1, this nullValue param applies to all supported types including the string type.
• nanValue (default: NaN; scope: read): Sets the string representation of a non-number value.
• positiveInf (default: Inf; scope: read): Sets the string representation of a positive infinity value.
• negativeInf (default: -Inf; scope: read): Sets the string representation of a negative infinity value.
• dateFormat (default: yyyy-MM-dd; scope: read/write): Sets the string that indicates a date format. Custom date formats follow the formats at Datetime Patterns. This applies to the date type.
• timestampFormat (default: yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX]; scope: read/write): Sets the string that indicates a timestamp format. Custom date formats follow the formats at Datetime Patterns. This applies to the timestamp type.
• timestampNTZFormat (default: yyyy-MM-dd'T'HH:mm:ss[.SSS]; scope: read/write): Sets the string that indicates a timestamp-without-timezone format. Custom date formats follow the formats at Datetime Patterns. This applies to the timestamp-without-timezone type; note that zone-offset and time-zone components are not supported when writing or reading this data type.
• enableDateTimeParsingFallback (default: enabled if the time parser policy has legacy settings or if no custom date or timestamp pattern was provided; scope: read): Allows falling back to the backward-compatible (Spark 1.x and 2.0) behavior of parsing dates and timestamps if values do not match the set patterns.
• maxColumns (default: 20480; scope: read): Defines a hard limit on how many columns a record can have.
• maxCharsPerColumn (default: -1; scope: read): Defines the maximum number of characters allowed for any given value being read. By default, it is -1, meaning unlimited length.
• mode (default: PERMISSIVE; scope: read): Allows a mode for dealing with corrupt records during parsing. It supports the following case-insensitive modes. Note that Spark tries to parse only required columns in CSV under column pruning, so corrupt records can differ based on the required set of fields. This behavior can be controlled by spark.sql.csv.parser.columnPruning.enabled (enabled by default).
  • PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets malformed fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If a schema does not have the field, corrupt records are dropped during parsing. A record with fewer or more tokens than the schema is not a corrupted record to CSV. When it meets a record having fewer tokens than the length of the schema, it sets null for the extra fields. When a record has more tokens than the length of the schema, it drops the extra tokens.
  • DROPMALFORMED: ignores whole corrupted records. This mode is unsupported in the CSV built-in functions.
  • FAILFAST: throws an exception when it meets corrupted records.
• columnNameOfCorruptRecord (default: the value of the spark.sql.columnNameOfCorruptRecord configuration; scope: read): Allows renaming the new field that holds the malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord.
• multiLine (default: false; scope: read): Parses one record, which may span multiple lines, per file. CSV built-in functions ignore this option.
• charToEscapeQuoteEscaping (default: escape or \0; scope: read/write): Sets a single character used for escaping the escape for the quote character. The default value is the escape character when the escape and quote characters are different, \0 otherwise.
• samplingRatio (default: 1.0; scope: read): Defines the fraction of rows used for schema inferring. CSV built-in functions ignore this option.
• emptyValue (default: empty string (for reading), "" (for writing); scope: read/write): Sets the string representation of an empty value.
• locale (default: en-US; scope: read): Sets a locale as a language tag in IETF BCP 47 format. For instance, this is used while parsing dates and timestamps.
• lineSep (default: \r, \r\n and \n (for reading), \n (for writing); scope: read/write): Defines the line separator that should be used for parsing/writing. The maximum length is 1 character. CSV built-in functions ignore this option.
• unescapedQuoteHandling (default: STOP_AT_DELIMITER; scope: read): Defines how the CsvParser will handle values with unescaped quotes.
  • STOP_AT_CLOSING_QUOTE: If unescaped quotes are found in the input, accumulate the quote character and proceed parsing the value as a quoted value, until a closing quote is found.
  • BACK_TO_DELIMITER: If unescaped quotes are found in the input, consider the value as an unquoted value. This will make the parser accumulate all characters of the current parsed value until the delimiter is found. If no delimiter is found in the value, the parser will continue accumulating characters from the input until a delimiter or line ending is found.
  • STOP_AT_DELIMITER: If unescaped quotes are found in the input, consider the value as an unquoted value. This will make the parser accumulate all characters until the delimiter or a line ending is found in the input.
  • SKIP_VALUE: If unescaped quotes are found in the input, the content parsed for the given value will be skipped and the value set in nullValue will be produced instead.
  • RAISE_ERROR: If unescaped quotes are found in the input, a TextParsingException will be thrown.
• compression (default: none; scope: write): Compression codec to use when saving to file. This can be one of the known case-insensitive shortened names (none, bzip2, gzip, lz4, snappy and deflate). CSV built-in functions ignore this option.
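
As a rough illustrative Scala sketch (the input path, output folder, and column names below are hypothetical), several of these options can be combined on a single read or write:

import org.apache.spark.sql.types.{DateType, IntegerType, StringType, StructType}

// Keep malformed rows instead of dropping them: PERMISSIVE mode plus a string field,
// named via columnNameOfCorruptRecord, included in the user-defined schema
val schema = new StructType()
  .add("name", StringType)
  .add("birthday", DateType)
  .add("age", IntegerType)
  .add("_corrupt_record", StringType)

val people = spark.read
  .schema(schema)
  .option("header", "true")
  .option("dateFormat", "yyyy-MM-dd")
  .option("mode", "PERMISSIVE")
  .option("columnNameOfCorruptRecord", "_corrupt_record")
  .csv("data/people_with_errors.csv")  // hypothetical input path

// Write gzip-compressed CSV files with a header and an explicit null representation
people.drop("_corrupt_record")
  .write
  .option("header", "true")
  .option("nullValue", "NULL")
  .option("compression", "gzip")
  .csv("data/people_out")  // hypothetical output folder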

Other generic options can be found in Generic File Source Options.