Use Hive connector in Scala Shell

Flink Scala Shell is a convenient way to try Flink quickly. You can use Hive in the Scala Shell as well, instead of specifying Hive dependencies in a pom file, packaging your program, and submitting it via the flink run command. In order to use the Hive connector in the Scala Shell, you need to put the following Hive connector dependencies under the lib folder of the Flink distribution:

  • flink-connector-hive_{scala_version}-{flink.version}.jar
  • flink-hadoop-compatibility_{scala_version}-{flink.version}.jar
  • flink-shaded-hadoop-2-uber-{hadoop.version}-{flink-shaded.version}.jar
  • hive-exec-2.x.jar (for Hive 1.x, copy hive-exec-1.x.jar, hive-metastore-1.x.jar, libfb303-0.9.2.jar, and libthrift-0.9.2.jar instead)

Then you can use the Hive connector in the Scala Shell as follows:
  Scala-Flink> import org.apache.flink.table.catalog.hive.HiveCatalog
  Scala-Flink> val hiveCatalog = new HiveCatalog("hive", "default", "<Replace it with HIVE_CONF_DIR>", "2.3.4")
  Scala-Flink> btenv.registerCatalog("hive", hiveCatalog)
  Scala-Flink> btenv.useCatalog("hive")
  Scala-Flink> btenv.listTables
  Scala-Flink> btenv.sqlQuery("<sql query>").toDataSet[Row].print()
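For comparison, the packaged-program route mentioned at the start of this section would look roughly like the sketch below. This is a minimal sketch, assuming the batch Table API for Scala from the same Flink version; the object name HiveQueryExample, the conf path, and the table mytable are illustrative placeholders, not part of the original example.

  import org.apache.flink.api.scala._
  import org.apache.flink.table.api.scala._
  import org.apache.flink.table.catalog.hive.HiveCatalog
  import org.apache.flink.types.Row

  object HiveQueryExample {
    def main(args: Array[String]): Unit = {
      val env = ExecutionEnvironment.getExecutionEnvironment
      val tableEnv = BatchTableEnvironment.create(env)

      // Same catalog setup as the shell session above;
      // "/path/to/hive-conf" is a placeholder for your HIVE_CONF_DIR.
      val hiveCatalog = new HiveCatalog("hive", "default", "/path/to/hive-conf", "2.3.4")
      tableEnv.registerCatalog("hive", hiveCatalog)
      tableEnv.useCatalog("hive")

      // "mytable" is a hypothetical Hive table; replace with your own query.
      tableEnv.sqlQuery("SELECT * FROM mytable").toDataSet[Row].print()
    }
  }

If you package a program like this, the Hive dependencies listed above are declared in your pom file instead of being dropped into the lib folder, and the job is submitted with the flink run command.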