gplogfilter

Searches through HAWQ log files for specified entries.

Synopsis

  gplogfilter [<timestamp_options>] [<pattern_matching_options>]
      [<output_options>] [<input_options>]

  gplogfilter --help

  gplogfilter --version

where:

  <timestamp_options> =
      [-b <datetime> | --begin <datetime>]
      [-e <datetime> | --end <datetime>]
      [-d <time> | --duration <time>]

  <pattern_matching_options> =
      [-c i[gnore] | r[espect] | --case i[gnore] | r[espect]]
      [-C '<string>' | --columns '<string>']
      [-f '<string>' | --find '<string>']
      [-F '<string>' | --nofind '<string>']
      [-m <regex> | --match <regex>]
      [-M <regex> | --nomatch <regex>]
      [-t | --trouble]

  <output_options> =
      [-n <integer> | --tail <integer>]
      [-s <offset> [<limit>] | --slice <offset> [<limit>]]
      [-o <output_file> | --out <output_file>]
      [-z <0..9> | --zip <0..9>]
      [-a | --append]

  <input_options> =
      [<input_file> [-u | --unzip]]

Description

The gplogfilter utility can be used to search through a HAWQ log file for entries matching the specified criteria. To read from standard input, use a dash (-) as the input file name. Input files may be compressed using gzip. In an input file, a log entry is identified by its timestamp in YYYY-MM-DD [hh:mm[:ss]] format.
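For example (the file name here is illustrative), a compressed log can be decompressed and streamed to gplogfilter on standard input, using a dash as the input file name:

  $ gunzip -c hawq-log.csv.gz | gplogfilter -t -   # hawq-log.csv.gz is an illustrative file name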

You can also use gplogfilter to search through all segment log files at once by running it through the hawq ssh utility. For example, to display the last three lines of each segment log file:

  $ hawq ssh -f seg_hostfile_hawqssh
  => source /usr/local/hawq/greenplum_path.sh
  => gplogfilter -n 3 /data/hawq-install-path/segmentdd/pg_log/hawq*.csv

By default, the output of gplogfilter is sent to standard output. Use the -o option to send the output to a file or a directory. If you supply an output file name ending in .gz, the output file will be compressed by default using maximum compression. If the output destination is a directory, the output file is given the same name as the input file.
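For example, the following command (file names illustrative) writes all entries containing the string FATAL to a compressed output file; the .gz suffix triggers gzip compression at the maximum level:

  $ gplogfilter -f 'FATAL' -o fatal-entries.gz my-hawq-log.csv   # file names illustrative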

Options

<input_file>

The name of the input log file(s) to search through. To read from standard input, use a dash (-) as the input file name.

-u, --unzip

Uncompress the input file using gunzip. If the input file name ends in .gz, it will be uncompressed by default.
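For example, to force decompression of an input file whose name does not end in .gz (file name illustrative; -u follows the input file, as shown in the synopsis):

  $ gplogfilter -t my-hawq-log-archive -u   # my-hawq-log-archive is an illustrative file name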

-n, --tail <integer>

Limits the output to the last <integer> qualifying log entries found.

-s, --slice <offset> [<limit>]

From the list of qualifying log entries, returns the <limit> number of entries starting at the <offset> entry number, where an <offset> of zero (0) denotes the first entry in the result set and an <offset> of any number greater than zero counts back from the end of the result set.
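For example, reusing the log file path from the Examples section, the following returns the first ten qualifying entries:

  $ gplogfilter -s 0 10 "/data/hawq/master/pg_log/hawq-2016-09-01_134934.csv"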

-o, --out <output_file>

Writes the output to the specified file or directory location instead of STDOUT.

-z, --zip <0..9>

Compresses the output file to the specified compression level using gzip, where 0 is no compression and 9 is maximum compression. If you supply an output file name ending in .gz, the output file will be compressed by default using maximum compression.

-a, --append

If the output file already exists, appends to the file instead of overwriting it.
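For example (file names illustrative), the following filters a second log file and appends its qualifying entries, at maximum compression, to an existing compressed output file:

  $ gplogfilter -t -z 9 -a -o trouble.gz my-hawq-log-2.csv   # file names illustrative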

-c, --case i[gnore] | r[espect]

Matching of alphabetic characters is case-sensitive by default unless preceded by the --case=ignore option.
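For example, to match the string warning in any letter case (file name illustrative):

  $ gplogfilter --case=ignore -f 'warning' my-hawq-log.csv   # my-hawq-log.csv is an illustrative file name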

-C, --columns '<string>'

Selects specific columns from the log file. Specify the desired columns as a comma-delimited string of column numbers beginning with 1, where the second column from left is 2, the third is 3, and so on.
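For example, to output only the first and third columns of each qualifying entry (file name illustrative; which data a given column number holds depends on the HAWQ log format):

  $ gplogfilter -f 'con6' -C '1,3' my-hawq-log.csv   # column choices are illustrative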

-f, --find '<string>'

Finds the log entries containing the specified string.

-F, --nofind '<string>'

Rejects the log entries containing the specified string.

-m, --match <regex>

Finds log entries that match the specified Python regular expression. See http://docs.python.org/library/re.html for Python regular expression syntax.
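For example (pattern and file name illustrative), to find entries that reference any connection identifier of the form conN:

  $ gplogfilter -m 'con\d+' my-hawq-log.csv   # pattern and file name are illustrative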

-M, --nomatch <regex>

Rejects log entries that match the specified Python regular expression. See http://docs.python.org/library/re.html for Python regular expression syntax.

-t, --trouble

Finds only the log entries that have ERROR:, FATAL:, or PANIC: in the first line.

-b, --begin <datetime>

Specifies a starting date and time to begin searching for log entries in the format of YYYY-MM-DD [hh:mm[:ss]].

If a time is specified, the date and time must be enclosed in either single or double quotes. This example encloses the date and time in single quotes:

  $ gplogfilter -b '2016-02-13 14:23'

-e, --end <datetime>

Specifies an ending date and time to stop searching for log entries in the format of YYYY-MM-DD [hh:mm[:ss]].

If a time is specified, the date and time must be enclosed in either single or double quotes. This example encloses the date and time in single quotes:

  $ gplogfilter -e '2016-02-13 14:23'

-d, --duration <time>

Specifies a time duration to search for log entries in the format of [hh][:mm[:ss]]. If used without either the -b or -e option, the current time is used as the basis.
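For example (date and file name illustrative), to search a one-hour window beginning at the given start time:

  $ gplogfilter -b '2016-02-13 14:23' -d 1:00 my-hawq-log.csv   # date and file name are illustrative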

Other Options

--help

Displays the online help.

--version

Displays the version of this utility.

Examples

Display the last three error messages in the identified log file:

  $ gplogfilter -t -n 3 "/data/hawq/master/pg_log/hawq-2016-09-01_134934.csv"

Display the last five error messages in a date-specified log file:

  $ gplogfilter -t -n 5 "/data/hawq-file-path/hawq-yyyy-mm-dd*.csv"

Display all log messages in the date-specified log file timestamped in the last 10 minutes:

  $ gplogfilter -d :10 "/data/hawq-file-path/hawq-yyyy-mm-dd*.csv"

Display log messages in the identified log file containing the string |con6 cmd11|:

  $ gplogfilter -f '|con6 cmd11|' "/data/hawq/master/pg_log/hawq-2016-09-01_134934.csv"

Using hawq ssh, run gplogfilter on the segment hosts, search the segment log files for messages containing the string con6, and save the output to a file:

  $ hawq ssh -f /data/hawq-2.x/segmentdd/pg_hba.conf -e 'source /usr/local/hawq/greenplum_path.sh ;
    gplogfilter -f con6 /data/hawq-2.x/pg_log/hawq*.csv' > seglog.out

See Also

hawq ssh