hawq check

Verifies and validates HAWQ platform settings.

Synopsis

  hawq check -f <hostfile_hawq_check> | (-h <hostname> | --host <hostname>)
    [--hadoop <hadoop_home> | --hadoop-home <hadoop_home>]
    [--config <config_file>]
    [--stdout | --zipout]
    [--kerberos]
    [--hdfs-ha]
    [--yarn]
    [--yarn-ha]

  hawq check --zipin <hawq_check_zipfile>

  hawq check --version

  hawq check -?

Description

The hawq check utility determines the platform on which you are running HAWQ and validates various platform-specific configuration settings, as well as HAWQ- and HDFS-specific configuration settings. To perform HAWQ configuration checks, make sure that HAWQ has already been started and that hawq config works. For HDFS checks, either set the $HADOOP_HOME environment variable or provide the full path to the Hadoop installation location using the --hadoop option.
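For example, before running HDFS checks you might export $HADOOP_HOME in your shell session. The installation path below is an assumption; substitute your site's actual Hadoop location:

```shell
# Assumed Hadoop install location; replace with your site's actual path.
export HADOOP_HOME=/usr/local/hadoop
echo "HADOOP_HOME is set to: $HADOOP_HOME"
```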

The hawq check utility can use a host file or a file previously created with the --zipout option to validate platform settings. If GPCHECK_ERROR is displayed, one or more validation checks failed. You can also use hawq check to gather and view platform settings on hosts without running validation checks. When running checks, hawq check compares your actual configuration settings with the expected values listed in a configuration file ($GPHOME/etc/hawq_check.cnf by default). You must modify the configuration values for "mount.points" and "diskusage.monitor.mounts" to reflect the actual mount points you want to check, as a comma-separated list. Otherwise, the utility checks only the root directory, which may not be helpful.

An example is shown below:

  [linux.mount]
  mount.points = /,/data1,/data2
  [linux.diskusage]
  diskusage.monitor.mounts = /,/data1,/data2

Arguments

-f

The name of a file that contains a list of hosts that hawq check uses to validate platform-specific settings. This file should contain one host name per line, covering all hosts in your HAWQ system (master, standby master, and segments).
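For example, a host file for a small cluster might look like the following. The host names below are illustrative placeholders; use the actual host names of your master, standby, and segment hosts:

  mdw
  smdw
  sdw1
  sdw2
  sdw3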

-h, --host

Specifies a single host on which platform-specific settings will be validated.

--zipin

Use this option to decompress and check an archive file created with the --zipout option. If you specify the --zipin option, hawq check performs validation tasks against the specified file.

Options

--config

The name of a configuration file to use instead of the default file $GPHOME/etc/hawq_check.cnf.

--hadoop, --hadoop-home

Use this option to specify the full path to your Hadoop installation location so that hawq check can validate HDFS settings. This option is not needed if the $HADOOP_HOME environment variable is set.

--stdout

Send collected host information from hawq check to standard output. No checks or validations are performed.

--zipout

Save all collected data to a compressed archive in the current working directory. hawq check automatically creates the archive and names it hawq_check_timestamp.tar.gz. No checks or validations are performed.

--kerberos

Use this option to check HDFS and YARN when running in Kerberos mode. This allows hawq check to validate HAWQ/HDFS/YARN settings with Kerberos enabled.

--hdfs-ha

Use this option to indicate that HDFS-HA mode is enabled, allowing hawq check to validate HDFS settings with HA mode enabled.

--yarn

If HAWQ is using YARN, enables YARN mode, allowing hawq check to validate the basic YARN settings.

--yarn-ha

Use this option to indicate that HAWQ is using YARN with high availability mode enabled, allowing hawq check to validate HAWQ-YARN settings with YARN HA enabled.

--version

Displays the version of this utility.

-? (help)

Displays the online help.

Examples

Verify and validate the HAWQ platform settings by entering a host file and specifying the full hadoop install path:

  $ hawq check -f hostfile_hawq_check --hadoop /usr/hdp/<version>/hadoop

Verify and validate the HAWQ platform settings with HDFS HA, YARN HA, and Kerberos enabled:

  $ hawq check -f hostfile_hawq_check --hadoop /usr/local/hadoop-<version> --hdfs-ha --yarn-ha --kerberos

Verify and validate the HAWQ platform settings with HDFS HA and Kerberos enabled:

  $ hawq check -f hostfile_hawq_check --hadoop /usr/hdp/<version>/hadoop --hdfs-ha --kerberos

Save HAWQ platform settings to an archive file, when the $HADOOP_HOME environment variable is set:

  $ hawq check -f hostfile_hawq_check --zipout

Verify and validate the HAWQ platform settings using an archive file created with the --zipout option:

  $ hawq check --zipin hawq_check_timestamp.tar.gz

View collected HAWQ platform settings:

  $ hawq check -f hostfile_hawq_check --hadoop /usr/local/hadoop-<version> --stdout

See Also

hawq checkperf