Hive Tables

Spark SQL also supports reading and writing data stored in Apache Hive. However, since Hive has a large number of dependencies, these dependencies are not included in the default Spark distribution. If Hive dependencies can be found on the classpath, Spark will load them automatically. Note that these Hive dependencies must also be present on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries (SerDes) in order to access data stored in Hive.

Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) files in conf/.
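
With those files in place, Hive support is picked up when the session is created with enableHiveSupport(). As a quick sanity check, the short Scala sketch below (illustrative, assuming the Hive dependencies are on the classpath) reads back the catalog implementation and warehouse location that the session actually resolved:

  import org.apache.spark.sql.SparkSession

  // Sketch: confirm that Hive support and the configured warehouse are in effect.
  val spark = SparkSession.builder()
    .appName("Hive config check") // illustrative application name
    .enableHiveSupport()
    .getOrCreate()

  // Returns "hive" when Hive support is enabled, "in-memory" otherwise.
  println(spark.conf.get("spark.sql.catalogImplementation"))
  // The warehouse location resolved from spark.sql.warehouse.dir (or its default).
  println(spark.conf.get("spark.sql.warehouse.dir"))
  // Lists the databases visible through the metastore configured in hive-site.xml.
  spark.sql("SHOW DATABASES").show()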

When working with Hive, one must instantiate SparkSession with Hive support, including connectivity to a persistent Hive metastore, support for Hive serdes, and Hive user-defined functions. Users who do not have an existing Hive deployment can still enable Hive support. When not configured by hive-site.xml, the context automatically creates metastore_db in the current directory and creates a directory configured by spark.sql.warehouse.dir, which defaults to the directory spark-warehouse in the current directory where the Spark application is started. Note that the hive.metastore.warehouse.dir property in hive-site.xml has been deprecated since Spark 2.0.0. Instead, use spark.sql.warehouse.dir to specify the default location of databases in the warehouse. You may need to grant write privileges to the user who starts the Spark application.

  import java.io.File

  import org.apache.spark.sql.{Row, SaveMode, SparkSession}

  case class Record(key: Int, value: String)

  // warehouseLocation points to the default location for managed databases and tables
  val warehouseLocation = new File("spark-warehouse").getAbsolutePath

  val spark = SparkSession
    .builder()
    .appName("Spark Hive Example")
    .config("spark.sql.warehouse.dir", warehouseLocation)
    .enableHiveSupport()
    .getOrCreate()

  import spark.implicits._
  import spark.sql

  sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
  sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

  // Queries are expressed in HiveQL
  sql("SELECT * FROM src").show()
  // +---+-------+
  // |key|  value|
  // +---+-------+
  // |238|val_238|
  // | 86| val_86|
  // |311|val_311|
  // ...

  // Aggregation queries are also supported.
  sql("SELECT COUNT(*) FROM src").show()
  // +--------+
  // |count(1)|
  // +--------+
  // |     500|
  // +--------+

  // The results of SQL queries are themselves DataFrames and support all normal functions.
  val sqlDF = sql("SELECT key, value FROM src WHERE key < 10 ORDER BY key")

  // The items in DataFrames are of type Row, which allows you to access each column by ordinal.
  val stringsDS = sqlDF.map {
    case Row(key: Int, value: String) => s"Key: $key, Value: $value"
  }
  stringsDS.show()
  // +--------------------+
  // |               value|
  // +--------------------+
  // |Key: 0, Value: val_0|
  // |Key: 0, Value: val_0|
  // |Key: 0, Value: val_0|
  // ...

  // You can also use DataFrames to create temporary views within a SparkSession.
  val recordsDF = spark.createDataFrame((1 to 100).map(i => Record(i, s"val_$i")))
  recordsDF.createOrReplaceTempView("records")

  // Queries can then join DataFrame data with data stored in Hive.
  sql("SELECT * FROM records r JOIN src s ON r.key = s.key").show()
  // +---+------+---+------+
  // |key| value|key| value|
  // +---+------+---+------+
  // |  2| val_2|  2| val_2|
  // |  4| val_4|  4| val_4|
  // |  5| val_5|  5| val_5|
  // ...

  // Create a Hive managed Parquet table, with HQL syntax instead of the Spark SQL native syntax
  // `USING hive`
  sql("CREATE TABLE hive_records(key int, value string) STORED AS PARQUET")
  // Save DataFrame to the Hive managed table
  val df = spark.table("src")
  df.write.mode(SaveMode.Overwrite).saveAsTable("hive_records")
  // After insertion, the Hive managed table has data now
  sql("SELECT * FROM hive_records").show()
  // +---+-------+
  // |key|  value|
  // +---+-------+
  // |238|val_238|
  // | 86| val_86|
  // |311|val_311|
  // ...

  // Prepare a Parquet data directory
  val dataDir = "/tmp/parquet_data"
  spark.range(10).write.parquet(dataDir)
  // Create a Hive external Parquet table
  sql(s"CREATE EXTERNAL TABLE hive_bigints(id bigint) STORED AS PARQUET LOCATION '$dataDir'")
  // The Hive external table should already have data
  sql("SELECT * FROM hive_bigints").show()
  // +---+
  // | id|
  // +---+
  // |  0|
  // |  1|
  // |  2|
  // ... Order may vary, as spark processes the partitions in parallel.

  // Turn on flag for Hive Dynamic Partitioning
  spark.sqlContext.setConf("hive.exec.dynamic.partition", "true")
  spark.sqlContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
  // Create a Hive partitioned table using DataFrame API
  df.write.partitionBy("key").format("hive").saveAsTable("hive_part_tbl")
  // Partitioned column `key` will be moved to the end of the schema.
  sql("SELECT * FROM hive_part_tbl").show()
  // +-------+---+
  // |  value|key|
  // +-------+---+
  // |val_238|238|
  // | val_86| 86|
  // |val_311|311|
  // ...

  spark.stop()

Find full example code at “examples/src/main/scala/org/apache/spark/examples/sql/hive/SparkHiveExample.scala” in the Spark repo.

  import java.io.File;
  import java.io.Serializable;
  import java.util.ArrayList;
  import java.util.List;

  import org.apache.spark.api.java.function.MapFunction;
  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Encoders;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public static class Record implements Serializable {
    private int key;
    private String value;

    public int getKey() {
      return key;
    }

    public void setKey(int key) {
      this.key = key;
    }

    public String getValue() {
      return value;
    }

    public void setValue(String value) {
      this.value = value;
    }
  }

  // warehouseLocation points to the default location for managed databases and tables
  String warehouseLocation = new File("spark-warehouse").getAbsolutePath();
  SparkSession spark = SparkSession
    .builder()
    .appName("Java Spark Hive Example")
    .config("spark.sql.warehouse.dir", warehouseLocation)
    .enableHiveSupport()
    .getOrCreate();

  spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive");
  spark.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src");

  // Queries are expressed in HiveQL
  spark.sql("SELECT * FROM src").show();
  // +---+-------+
  // |key|  value|
  // +---+-------+
  // |238|val_238|
  // | 86| val_86|
  // |311|val_311|
  // ...

  // Aggregation queries are also supported.
  spark.sql("SELECT COUNT(*) FROM src").show();
  // +--------+
  // |count(1)|
  // +--------+
  // |     500|
  // +--------+

  // The results of SQL queries are themselves DataFrames and support all normal functions.
  Dataset<Row> sqlDF = spark.sql("SELECT key, value FROM src WHERE key < 10 ORDER BY key");

  // The items in DataFrames are of type Row, which lets you access each column by ordinal.
  Dataset<String> stringsDS = sqlDF.map(
      (MapFunction<Row, String>) row -> "Key: " + row.get(0) + ", Value: " + row.get(1),
      Encoders.STRING());
  stringsDS.show();
  // +--------------------+
  // |               value|
  // +--------------------+
  // |Key: 0, Value: val_0|
  // |Key: 0, Value: val_0|
  // |Key: 0, Value: val_0|
  // ...

  // You can also use DataFrames to create temporary views within a SparkSession.
  List<Record> records = new ArrayList<>();
  for (int key = 1; key < 100; key++) {
    Record record = new Record();
    record.setKey(key);
    record.setValue("val_" + key);
    records.add(record);
  }
  Dataset<Row> recordsDF = spark.createDataFrame(records, Record.class);
  recordsDF.createOrReplaceTempView("records");

  // Queries can then join DataFrame data with data stored in Hive.
  spark.sql("SELECT * FROM records r JOIN src s ON r.key = s.key").show();
  // +---+------+---+------+
  // |key| value|key| value|
  // +---+------+---+------+
  // |  2| val_2|  2| val_2|
  // |  2| val_2|  2| val_2|
  // |  4| val_4|  4| val_4|
  // ...

Find full example code at “examples/src/main/java/org/apache/spark/examples/sql/hive/JavaSparkHiveExample.java” in the Spark repo.

  from os.path import abspath

  from pyspark.sql import SparkSession
  from pyspark.sql import Row

  # warehouse_location points to the default location for managed databases and tables
  warehouse_location = abspath('spark-warehouse')

  spark = SparkSession \
      .builder \
      .appName("Python Spark SQL Hive integration example") \
      .config("spark.sql.warehouse.dir", warehouse_location) \
      .enableHiveSupport() \
      .getOrCreate()

  # spark is an existing SparkSession
  spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
  spark.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

  # Queries are expressed in HiveQL
  spark.sql("SELECT * FROM src").show()
  # +---+-------+
  # |key|  value|
  # +---+-------+
  # |238|val_238|
  # | 86| val_86|
  # |311|val_311|
  # ...

  # Aggregation queries are also supported.
  spark.sql("SELECT COUNT(*) FROM src").show()
  # +--------+
  # |count(1)|
  # +--------+
  # |     500|
  # +--------+

  # The results of SQL queries are themselves DataFrames and support all normal functions.
  sqlDF = spark.sql("SELECT key, value FROM src WHERE key < 10 ORDER BY key")

  # The items in DataFrames are of type Row, which allows you to access each column by ordinal.
  stringsDS = sqlDF.rdd.map(lambda row: "Key: %d, Value: %s" % (row.key, row.value))
  for record in stringsDS.collect():
      print(record)
  # Key: 0, Value: val_0
  # Key: 0, Value: val_0
  # Key: 0, Value: val_0
  # ...

  # You can also use DataFrames to create temporary views within a SparkSession.
  Record = Row("key", "value")
  recordsDF = spark.createDataFrame([Record(i, "val_" + str(i)) for i in range(1, 101)])
  recordsDF.createOrReplaceTempView("records")

  # Queries can then join DataFrame data with data stored in Hive.
  spark.sql("SELECT * FROM records r JOIN src s ON r.key = s.key").show()
  # +---+------+---+------+
  # |key| value|key| value|
  # +---+------+---+------+
  # |  2| val_2|  2| val_2|
  # |  4| val_4|  4| val_4|
  # |  5| val_5|  5| val_5|
  # ...

Find full example code at “examples/src/main/python/sql/hive.py” in the Spark repo.

When working with Hive, one must instantiate SparkSession with Hive support. This adds support for finding tables in the MetaStore and writing queries using HiveQL.

  # enableHiveSupport defaults to TRUE
  sparkR.session(enableHiveSupport = TRUE)
  sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
  sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

  # Queries can be expressed in HiveQL.
  results <- collect(sql("FROM src SELECT key, value"))

Find full example code at “examples/src/main/r/RSparkSQLExample.R” in the Spark repo.

Specifying storage format for Hive tables

When you create a Hive table, you need to define how this table should read/write data from/to the file system, i.e. the “input format” and “output format”. You also need to define how this table should deserialize the data to rows, or serialize rows to data, i.e. the “serde”. The following options can be used to specify the storage format (“serde”, “input format”, “output format”), e.g. CREATE TABLE src(id int) USING hive OPTIONS(fileFormat 'parquet'). By default, we will read the table files as plain text. Note that Hive storage handlers are not yet supported when creating a table; you can create a table using a storage handler on the Hive side and use Spark SQL to read it.

Property Name: fileFormat
Meaning: A fileFormat is kind of a package of storage format specifications, including “serde”, “input format” and “output format”. Currently we support 6 fileFormats: ‘sequencefile’, ‘rcfile’, ‘orc’, ‘parquet’, ‘textfile’ and ‘avro’.

Property Name: inputFormat, outputFormat
Meaning: These 2 options specify the name of a corresponding InputFormat and OutputFormat class as a string literal, e.g. org.apache.hadoop.hive.ql.io.orc.OrcInputFormat. These 2 options must appear as a pair, and they cannot be specified if the fileFormat option has already been specified.

Property Name: serde
Meaning: This option specifies the name of a serde class. When the fileFormat option is specified, do not specify this option if the given fileFormat already includes the serde information. Currently “sequencefile”, “textfile” and “rcfile” don’t include the serde information, so this option can be used with these 3 fileFormats.

Property Name: fieldDelim, escapeDelim, collectionDelim, mapkeyDelim, lineDelim
Meaning: These options can only be used with the “textfile” fileFormat. They define how to read delimited files into rows.

All other properties defined with OPTIONS will be regarded as Hive serde properties.
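
As a concrete illustration, the Scala sketch below (assuming a SparkSession named spark with Hive support enabled; the table names are made up for the example) passes the fileFormat shortcut and a delimiter option through OPTIONS:

  // Sketch: a Hive managed ORC table created via the fileFormat shortcut.
  spark.sql("""
    CREATE TABLE hive_orc_src(id INT, name STRING)
    USING hive
    OPTIONS(fileFormat 'orc')
  """)

  // Sketch: a delimited text table; fieldDelim is only valid with the 'textfile' fileFormat.
  spark.sql("""
    CREATE TABLE hive_csv_src(id INT, name STRING)
    USING hive
    OPTIONS(fileFormat 'textfile', fieldDelim ',')
  """)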

Interacting with Different Versions of Hive Metastore

One of the most important pieces of Spark SQL’s Hive support is interaction with the Hive metastore, which enables Spark SQL to access metadata of Hive tables. Starting from Spark 1.4.0, a single binary build of Spark SQL can be used to query different versions of Hive metastores, using the configuration described below. Note that, independent of the version of Hive being used to talk to the metastore, internally Spark SQL will compile against the built-in Hive and use those classes for internal execution (serdes, UDFs, UDAFs, etc.).

The following options can be used to configure the version of Hive that is used to retrieve metadata:

Property Name: spark.sql.hive.metastore.version
Default: 2.3.9
Meaning: Version of the Hive metastore. Available options are 0.12.0 through 2.3.9 and 3.0.0 through 3.1.3.
Since Version: 1.4.0

Property Name: spark.sql.hive.metastore.jars
Default: builtin
Meaning: Location of the jars that should be used to instantiate the HiveMetastoreClient. This property can be one of four options:
  1. builtin: Use Hive 2.3.9, which is bundled with the Spark assembly when -Phive is enabled. When this option is chosen, spark.sql.hive.metastore.version must be either 2.3.9 or not defined.
  2. maven: Use Hive jars of the specified version downloaded from Maven repositories. This configuration is not generally recommended for production deployments.
  3. path: Use Hive jars configured by spark.sql.hive.metastore.jars.path in comma-separated format. Both local and remote paths are supported. The provided jars should be the same version as spark.sql.hive.metastore.version.
  4. A classpath in the standard format for the JVM: This classpath must include all of Hive and its dependencies, including the correct version of Hadoop. The provided jars should be the same version as spark.sql.hive.metastore.version. These jars only need to be present on the driver, but if you are running in yarn cluster mode then you must ensure they are packaged with your application.
Since Version: 1.4.0

Property Name: spark.sql.hive.metastore.jars.path
Default: (empty)
Meaning: Comma-separated paths of the jars that are used to instantiate the HiveMetastoreClient. This configuration is useful only when spark.sql.hive.metastore.jars is set to path.
The paths can be in any of the following formats:
  1. file://path/to/jar/foo.jar
  2. hdfs://nameservice/path/to/jar/foo.jar
  3. /path/to/jar/ (a path without a URI scheme follows the URI schema of the fs.defaultFS configuration)
  4. [http/https/ftp]://path/to/jar/foo.jar
Note that 1, 2, and 3 support wildcards. For example:
  1. file://path/to/jar/*,file://path2/to/jar/*/*.jar
  2. hdfs://nameservice/path/to/jar/*,hdfs://nameservice2/path/to/jar/*/*.jar
Since Version: 3.1.0

Property Name: spark.sql.hive.metastore.sharedPrefixes
Default: com.mysql.jdbc, org.postgresql, com.microsoft.sqlserver, oracle.jdbc
Meaning: A comma-separated list of class prefixes that should be loaded using the classloader that is shared between Spark SQL and a specific version of Hive. An example of classes that should be shared is JDBC drivers that are needed to talk to the metastore. Other classes that need to be shared are those that interact with classes that are already shared. For example, custom appenders that are used by log4j.
Since Version: 1.4.0

Property Name: spark.sql.hive.metastore.barrierPrefixes
Default: (empty)
Meaning: A comma-separated list of class prefixes that should explicitly be reloaded for each version of Hive that Spark SQL is communicating with. For example, Hive UDFs that are declared in a prefix that typically would be shared (i.e. org.apache.spark.*).
Since Version: 1.4.0
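
As a hedged example, the Scala sketch below wires these options together to talk to a Hive 3.1.3 metastore using locally staged jars; the jar directory and application name are hypothetical and should be adapted to your deployment.

  import org.apache.spark.sql.SparkSession

  // Sketch: instantiate the metastore client from Hive 3.1.3 jars staged at a local path.
  // "path/to/hive-3.1.3/lib" is a placeholder following the formats listed above.
  val spark = SparkSession.builder()
    .appName("Hive 3.1.3 metastore example")
    .config("spark.sql.hive.metastore.version", "3.1.3")
    .config("spark.sql.hive.metastore.jars", "path")
    .config("spark.sql.hive.metastore.jars.path", "file://path/to/hive-3.1.3/lib/*.jar")
    .enableHiveSupport()
    .getOrCreate()

  // Metadata calls now go through the 3.1.3 metastore client, while query execution
  // still uses Spark's built-in Hive classes.
  spark.sql("SHOW DATABASES").show()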