HAWQ Administrative Log Files

Log files contain messages and other information about your HAWQ deployment, including the database and its management utilities. HAWQ administrative log files reside in pre-defined or configured locations on the local file system of each HAWQ node. These log files are distinctly located, formatted, configured, and managed.

Every database instance in HAWQ (master, standby, and segments) runs a PostgreSQL database server with its own server log file. You generate log files when you invoke HAWQ management utilities directly, or indirectly via Ambari management operations. Additionally, other components in your HAWQ cluster (PXF, HDFS) generate log files of their own.

Configurable logging parameters may affect what, when, and where messages are logged. You configure HAWQ administrative logging options via HAWQ server configuration parameters or command line options.

Log files may be created and/or rotated at pre-defined or configured intervals. Note that administrative log files are not automatically truncated or deleted. The administrator must implement and periodically run scripts to clean up these log files.

HAWQ Database Server Log Files

Locating HAWQ Log Files

Each HAWQ master, standby, and segment database instance has its own server log file. Daily log files are created in the pg_log subdirectory of the master and segment data directory on the respective HAWQ node. You can obtain the master data directory location from the hawq_master_directory property value set in the $GPHOME/etc/hawq-site.xml configuration file. Similarly, you can obtain the segment data directory location from the hawq_segment_directory property value in hawq-site.xml.
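
For example, assuming $GPHOME points to your HAWQ installation directory, you can read these property values directly from hawq-site.xml. This is a minimal sketch; the data directory paths it reports will differ on your cluster:

  $ # Show the master and segment data directory properties from hawq-site.xml.
  $ grep -A 1 'hawq_master_directory' $GPHOME/etc/hawq-site.xml
  $ grep -A 1 'hawq_segment_directory' $GPHOME/etc/hawq-site.xml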

The naming convention for HAWQ database server log files is hawq-<date>_<time>.[csv|log]. For example, hawq-2017-01-02_061611.csv or hawq-2017-01-03_001704.log. The number and size of log files present for a given <date> is dependent upon the values of certain HAWQ server configuration parameters (discussed later in this document).

HAWQ Log Format

The HAWQ server log files are written in text or comma-separated values (CSV) format.

HAWQ log entries may include the following fields:

# | Field Name | Data Type | Description
--- | --- | --- | ---
1 | event_time | timestamp with time zone | The time that the log entry was written to the log
2 | user_name | varchar(100) | The database user name
3 | database_name | varchar(100) | The database name
4 | process_id | varchar(10) | The system process ID (prefixed with “p”)
5 | thread_id | varchar(50) | The thread count (prefixed with “th-”)
6 | remote_host | varchar(100) | The hostname/address of the client machine (if on master node). The hostname/address of the master (if on segment node).
7 | remote_port | varchar(10) | The segment or master port number
8 | session_start_time | timestamp with time zone | The time the session connection was opened
9 | transaction_id | int | The top-level transaction ID on the master; this ID is the parent of any subtransactions.
10 | gp_session_id | text | The session identifier number (prefixed with “con”)
11 | gp_command_count | text | The command number within a session (prefixed with “cmd”)
12 | gp_segment | text | The segment content identifier. The master always has a content ID of -1.
13 | slice_id | text | The slice ID (portion of the query plan being executed)
14 | distr_tranx_id | text | The distributed transaction identifier
15 | local_tranx_id | text | The local transaction identifier
16 | sub_tranx_id | text | The subtransaction identifier
17 | event_severity | varchar(10) | The event severity; values include: LOG, ERROR, FATAL, PANIC, DEBUG1, DEBUG2
18 | sql_state_code | varchar(10) | The SQL state code associated with the log message
19 | event_message | text | The log or error message text
20 | event_detail | text | The detail message text associated with an error or warning message
21 | event_hint | text | The hint message text associated with an error or warning message
22 | internal_query | text | The internally-generated query text
23 | internal_query_pos | int | The cursor index into the internally-generated query text
24 | event_context | text | The context in which this message is generated
25 | debug_query_string | text | User-supplied query string with full detail for debugging. This string can be modified for internal use.
26 | error_cursor_pos | int | The cursor index into the query string
27 | func_name | text | The function in which this message is generated
28 | file_name | text | The name of the source file in which the message originated
29 | file_line | int | The line number of the source file in which the message originated
30 | stack_trace | text | The stack trace text associated with this message

Note: Log entries may not include values for all log fields. For example, the slice_id field is populated only for log entries associated with a query worker process.

Example HAWQ server log file entries in a .csv file:

  2017-01-02 01:19:55.293050 PST,"gpadmin","testdb",p381034,th-1259067104,"[local]",,2017-01-02 01:19:21 PST,28083,con3595,cmd7,seg-1,,,x28083,sx1,"LOG","00000","ConnID 207. Acquired resource from resource manager, (256 MB, 0.062500 CORE) x 1.",,,,,,"INSERT INTO extblwstrings VALUES ('my string');",0,,"rmcomm_QD2RM.c",868,
  2017-01-02 06:16:06.383827 PST,,,p360256,th-1259067104,,,,0,,,seg-10000,,,,,"LOG","00000","database system is shut down",,,,,,,0,,"xlog.c",7882,

Example HAWQ server log file entries in a .log file:

  2017-01-03 00:17:04.848268 PST|||706424|startup||:-LOG: database system was shut down at 2017-01-03 00:16:58 PST
  2017-01-03 00:17:04.988778 PST|gpadmin|template1|706437|cmd1|x31410|:-LOG: statement: SELECT oid,rolname,rolsuper,rolresqueue FROM pg_authid
  2017-01-03 00:17:04.990520 PST|||706432|master resource manager||:-LOG: Resource manager successfully loaded role specifications.

Examining HAWQ Log Files

You will examine the HAWQ log files to obtain information about your HAWQ deployment, as well as to diagnose problems.

Identify log entries of related transactions using the transaction_id. You can identify related log entries of a particular query by the query’s session identifier (gp_session_id) and command identifier (gp_command_count).
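
For example, using the session and command identifiers from the CSV example shown earlier (con3595 and cmd7 are illustrative values, and the master pg_log path is an assumption for your deployment), you could locate all related master log entries with grep:

  $ # Find all master log entries for session con3595, command cmd7.
  $ grep ',con3595,cmd7,' /data/hawq/master/pg_log/hawq-2017-01-02_*.csv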

Alternatively, you can use the hawq_toolkit administrative schema to query HAWQ log files when the gp_log_format server configuration parameter value specifies csv format log files.

For example, the following hawq_toolkit query displays the time and message for all ERROR logs related to the database named testdb:

  =# SELECT logtime, logmessage FROM hawq_toolkit.hawq_log_master_concise
     WHERE logdatabase='testdb' AND logseverity='ERROR';

Searching HAWQ Log Files

Use the gplogfilter HAWQ utility to search through a HAWQ log file for entries matching specific criteria. By default, this utility searches through the HAWQ master log file in the default location.

For example, to display the entries logged to the master log file after 2:00pm on January 18, 2016:

  $ gplogfilter -b '2016-01-18 14:00'

Run gplogfilter using the hawq ssh utility to search through all segment log files simultaneously. For example, create a <seg_hosts> file that includes all segment hosts of interest, then invoke gplogfilter to display the last three lines of each segment log file on each segment host. (Note: enter the commands after the => prompt; do not include the => itself.):

  $ hawq ssh -f <seg_hosts>
  => source /usr/local/hawq/greenplum_path.sh
  => gplogfilter -n 3 /data/hawq/segment/pg_log/hawq*.csv

Configuring HAWQ Logging

You can configure the logging characteristics of HAWQ server operations using the HAWQ server configuration logging parameters. These configuration parameters work in conjunction with each other to determine what information is logged to the HAWQ server log file, and when.

HAWQ server logging-related configuration parameters include:

Server Configuration Parameter | Description
--- | ---
client_min_messages | Identify the log level for client messages
debug_pretty_print | Indent/format debug output to make it more readable
gp_log_format | Identify the format of the server log files
gp_max_csv_line_length | Set the maximum line length of a CSV-formatted log file
log_autostats | Log information about automatic statistics generation
log_connections | Log each client connection
log_disconnections | Log each client connection termination
log_dispatch_stats | Log information related to statement dispatch
log_duration | Log the duration of completed statements satisfying log_statement
log_error_verbosity | Govern how much detail is written to the log
log_hostname | Log the hostname of the connecting host
log_min_duration_statement | Configure the runtime duration at which a statement should be logged
log_min_error_statement | Write the SQL statement causing the error condition to the log
log_min_messages | Identify the log level for writing to the log file
log_statement | Control which SQL statements are logged
log_timezone | Set the timezone used in log file timestamps
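
For example, the following sketch raises the server log level cluster-wide with the hawq config utility and then reloads the configuration; the parameter value is illustrative, and you should verify the hawq config and hawq stop options against your HAWQ version:

  $ # Set the minimum server log level to DEBUG1 and reload the configuration.
  $ hawq config -c log_min_messages -v debug1
  $ hawq stop cluster -u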

Configuring Log Rotation

When log file rotation is enabled, you can control the rotation options with these HAWQ server configuration parameters:

Server Configuration Parameter | Description
--- | ---
log_rotation_age | Configure the lifetime of the log file
log_rotation_size | Configure the maximum size of the log file
log_truncate_on_rotation | Identify whether to truncate or append to the log file
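
For example, a hedged sketch that rotates server logs daily, caps each file at roughly 100 MB, and truncates existing files on rotation; the values are illustrative:

  $ # Rotate logs once per day or when a file reaches ~100 MB (size is in kilobytes).
  $ hawq config -c log_rotation_age -v 1d
  $ hawq config -c log_rotation_size -v 102400
  $ hawq config -c log_truncate_on_rotation -v on
  $ hawq stop cluster -u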

Configuring Query Logging

HAWQ exposes a set of server configuration parameters dealing exclusively with query parsing, optimization, and execution logging. These include:

Server Configuration Parameter | Description
--- | ---
debug_print_parse | Log the query parse tree
debug_print_plan | Log the query plan
debug_print_prelim_plan | Log the preliminary query plan
debug_print_rewritten | Log the query rewriter output
debug_print_slice_table | Log the query slice plan
log_executor_stats | Log query executor performance statistics
log_parser_stats | Log query parser performance statistics
log_planner_stats | Log the performance statistics of the legacy query optimizer (planner)
log_statement_stats | Log the total performance statistics of the query parser, planner, and executor
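
For example, you might enable plan logging for a single session before running a problem query; this sketch uses psql with an illustrative database and table name:

  $ # Log the query plan for one session only (testdb and my_table are illustrative).
  $ psql -d testdb -c "SET debug_print_plan = on; SELECT count(*) FROM my_table;"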

The Query Minidump Log Files section below identifies the configuration options and additional information related to generating query optimizer minidump log files.

Managing HAWQ Log Files

HAWQ log output tends to be voluminous, especially at higher debug levels. You should not need to save this information indefinitely. HAWQ administrators typically configure HAWQ to rotate the log files periodically so new log files are created.

Daily log files are created in the pg_log subdirectory of the master and each segment data directory. Although log files are rolled over daily, they are not automatically truncated or deleted. Administrators must implement and run scripts to periodically clean up old log files in the pg_log directory of the master, standby, and every segment instance.
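
For example, a cron-friendly sketch that removes master server log files older than a chosen retention period; the 14-day retention and the master data directory path are assumptions, and you would run equivalent commands on the standby and each segment host:

  $ # Remove master server logs older than 14 days.
  $ find /data/hawq/master/pg_log -name 'hawq*.csv' -mtime +14 -delete
  $ find /data/hawq/master/pg_log -name 'hawq*.log' -mtime +14 -delete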

Management Utility Log Files

You invoke HAWQ management utilities both from the command line and when you perform cluster management activities using the Ambari user interface. Regardless of source, these utilities log information about command execution status and results. The messages are logged to both stdout and a log file.

Each management utility, when invoked, creates or manages a daily log file. The log entries for a particular execution of a utility are appended to its daily log file each time that utility is run that day.

Note: Some management utilities are aliases for invoking one or more other management utilities. For example, hawq restart invokes hawq stop and hawq start. Logs are written to the daily log files associated with the two invoked utilities.

Locating Management Utility Log Files

The default location of HAWQ management utility log files is /home/gpadmin/hawqAdminLogs/. You may choose to specify an alternate log file directory.

The naming convention for HAWQ management utility log files is <cmdname>_<date>.log. For example, hawq_state_20170102.log or hawq_start_20161228.log.

Management Utility Log Format

HAWQ management utility log files are written in text format.

Management utility commands’ log entries are formed as follows:

  `<date>:<time>:<pid> <cmdname>:<host>:<user>-[<loglevel>]:-<message>`

Log entry fields are described in the table below.

Log Entry Field | Description
--- | ---
date | The date (month, day, year) the entry was logged
time | The time the entry was logged
pid | The process ID associated with the command invocation
cmdname | The name of the HAWQ management utility command
host | The name of the host on which the command was executed
user | The name of the user invoking the command
loglevel | The log level, one of DEBUG, INFO, WARN, or FATAL. The default logging level will log INFO and more severe messages.
message | The log message

Example management utility log entries:

  `20170102:06:16:06:451225 hawq_stop:hmast1:gpadmin-[INFO]:-Master host=hmast1`
  `20170102:06:16:15:451826 hawq_start:hmast1:gpadmin-[INFO]:-Start hawq with args: ['start', 'segment']`

Note: The log files for certain management utility commands, for example hawq init, diverge from these conventions.

Examining Management Utility Log Entries

You will examine the HAWQ management utility log files to obtain more detailed information about the execution of a command. Additionally, the most recent log file for a particular command provides date/time and status information about the last time you invoked that command.
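
For example, to review the most recent hawq state log file in the default log directory (a simple sketch):

  $ # Identify the newest hawq state daily log file and display its last few entries.
  $ ls -1t /home/gpadmin/hawqAdminLogs/hawq_state_*.log | head -1
  $ tail -n 20 "$(ls -1t /home/gpadmin/hawqAdminLogs/hawq_state_*.log | head -1)"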

Configuring Management Utility Logging

Most management utilities support options for verbose logging and identifying alternate log file directories.

Management utility logging-related configuration command options include:

Option | Description
--- | ---
-l, --logdir <dir> | Use alternate log file directory <dir> for this command invocation.
-v, --verbose | Include DEBUG log messages in the output. The default logging level will log INFO and more severe messages.

Invoke the management utility command with the --help option to determine support for changing the log directory and/or enabling verbose output.
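
For example, the following invocation sketches verbose output and an alternate log directory for the hawq state command; the directory is illustrative, and you should confirm option support with hawq state --help:

  $ # Run hawq state with DEBUG-level output, writing its log to /tmp/hawq_admin_logs.
  $ hawq state -v -l /tmp/hawq_admin_logs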

Managing Management Utility Log Files

Although management utility log files are rolled over daily, they are not automatically truncated or deleted. Administrators should implement and run scripts or programs to periodically clean up old management utility log files.

Query Minidump Log Files

You can configure the HAWQ GPORCA query optimizer to generate output minidump files describing the optimization context for a given query. (The information in a minidump file is not in a format that you can easily understand. Rather, you would generate and provide the minidump file to the HAWQ development team or support.)

GPORCA minidump files include the following query-related information:

  • Catalog objects including data types, tables, operators, and statistics required by GPORCA.
  • An internal representation (DXL) of the query.
  • An internal representation (DXL) of the query plan produced by GPORCA.
  • System configuration information passed to GPORCA, including server configuration parameters, cost and statistics configuration, and number of segments.
  • A stack trace of errors generated while optimizing the query.

Query minidump files are generated in a minidumps/ subdirectory in the HAWQ master data directory. The minidump file naming convention is Minidump_<date>_<time>_<gp_session_id>_<gp_command_count>.mdp, for example Minidump_20170102_213701_904_12.mdp.

Use the optimizer_minidump HAWQ server configuration parameter to configure minidump generation. The default setting generates minidump files only on error.
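
For example, to generate a minidump for every query optimized in a single session, rather than only on error, you might set the parameter at the session level. This is a sketch; the always value and the query are illustrative, and the parameter's accepted values should be verified for your HAWQ version:

  $ # Produce a minidump for each GPORCA-optimized query in this session.
  $ psql -d testdb -c "SET optimizer_minidump = always; SELECT count(*) FROM my_table;"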

PXF Log Files

PXF provides both service- and database-level logging. Refer to PXF Logging for specific PXF logging configuration and management information.

Ambari Log Files

Ambari log files may be useful in helping diagnose general cluster problems. The Ambari server log files are located in the /var/log/ambari-server/ directory. Ambari agent log files are located in /var/log/ambari-agent/. Refer to Reviewing Ambari Log Files for additional information.

Ranger Log Files

The HAWQ Ranger Plug-in Service log files may be useful in helping diagnose Ranger connectivity and authorization problems. You will find these log files in the $GPHOME/ranger/plugin-service/logs/ directory. In addition to HAWQ Ranger Plug-in service-related logs, this directory includes the log4j provider audit.log file. (Refer to Auditing Authorization Events for information on configuring HAWQ Ranger audit logging.)

Ranger log files are located in the /var/log/ranger/admin/ directory.

Hadoop Log Files

Hadoop log files are managed by your Hadoop distribution. If you built Hadoop from source, your log file locations and configuration may be governed by the build and/or install options you specified during the build process.