The osquery daemon uses a default filesystem logger plugin. Like the config, output from the filesystem plugin is written as JSON. Results from the query schedule are written to /var/log/osquery/osqueryd.results.log.

There are two types of logs:

  • Status logs (INFO, WARNING, ERROR)
  • Query schedule results logs, including logs from snapshot queries

If you run osqueryd in verbose mode, then peek at /var/log/osquery/:

    $ ls -l /var/log/osquery/
    total 24
    lrwxr-xr-x 1 root wheel   77 Sep 30 17:37 osqueryd.INFO -> osqueryd.INFO.20140930
    -rw------- 1 root wheel 1226 Sep 30 17:37 osqueryd.INFO.20140930
    -rw------- 1 root wheel  388 Sep 30 17:37 osqueryd.results.log

On Windows this directory defaults to C:\Program Files\osquery\log.

Logger plugins

osquery includes logger plugins that support configurable logging to a variety of interfaces. The built-in logger plugins are filesystem (the default), tls, syslog (for POSIX), windows_event_log (for Windows), kinesis, firehose, and kafka_producer. Multiple logger plugins may be used simultaneously, effectively copying logs to each interface. To enable multiple loggers, set the --logger_plugin option to a comma-separated list (no spaces) of the requested plugins.

For information on configuring logger plugins, see logging/results flags. Developing new logger plugins is explored in the development docs. We recommend setting the logger plugin and logger settings via the osquery flagfile.
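As a concrete sketch, a flagfile enabling both the filesystem and tls loggers might look like the following; the TLS hostname and enroll secret path are placeholders for your own deployment:

```
--logger_plugin=filesystem,tls
--logger_path=/var/log/osquery
--tls_hostname=osquery.example.com
--enroll_secret_path=/etc/osquery/enroll_secret
```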

Status logs

Status logs are generated by the Glog logging framework. The default filesystem logger plugin writes these logs to disk the same way Glog would. Other logger plugins may intercept these status logs and forward them to an alternate destination, such as the system log.

As the above directory listing reveals, osqueryd.INFO is a symlink to the most recent execution's INFO log. The same is true for the WARNING, ERROR and FATAL logs. For more information on the format of Glog logs, please refer to the Glog documentation.

Note: The osqueryi shell only shows WARNING and ERROR status logs; the INFO logs are silenced for a better shell-like experience.

By default the osqueryd daemon sends INFO, WARNING, and ERROR logs to the configured logger plugins and to the process's stderr. You may configure this behavior using several flags documented in CLI flags.

  • To disable writing status logs to stderr, use --logger_stderr=false
  • To set the minimum status log severity (INFO=0) written to stderr, use --logger_min_stderr=0
  • To set the minimum status log severity written to stderr and logger plugins, use --logger_min_status=0

Note: In the LaunchDaemon, systemd, and initscript provided in the osquery packages, the minimum stderr reporting is limited to WARNING to help minify the content duplicated to syslog.
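Expressed as flagfile entries, a setup that keeps full INFO logging in the logger plugins while limiting stderr to WARNING and above could look like this (severity values: INFO=0, WARNING=1, ERROR=2):

```
# All status logs (INFO and above) go to the configured logger plugins
--logger_min_status=0
# Only WARNING (1) and above are written to stderr
--logger_min_stderr=1
```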

Results logs

Differential logs

The results of your scheduled queries are logged to the "results log". These are differential changes between the last (most recent) query execution and the current execution. Each log line is a JSON string that indicates what data has been added or removed by which query. The first time the query is executed (when there is no "last" run), the last run is treated as having null results, so the differential consists entirely of log lines with the "added" indication. There are two format options: single (also called event) and batched. For some queries, logging "removed" events does not make sense, for example:

    SELECT i.*, p.resident_size, p.user_time, p.system_time, t.minutes AS c
    FROM osquery_info i, processes p, time t
    WHERE p.pid = i.pid;

By adding an outer join of time and using time.minutes as a counter, this query will always log a single "added" and a single "removed" line. The purpose is to create a continuous monitor of osquery's performance. For these cases, add "removed": false to the scheduled query:

    {
      "schedule": {
        "osquery_monitor": {
          "query": "SELECT ... t.minutes AS c FROM time t WHERE ...",
          "interval": 60,
          "removed": false
        }
      }
    }

Snapshot logs

Snapshot logs are an alternate form of query result logging. A snapshot is an 'exact point in time' set of results, no differentials. If you always want a list of mounts, not the added and removed mounts, use a snapshot. In the mounts case, where differential results are seldom emitted (assuming hosts do not often mount and unmount), a complete snapshot will log after every query execution. This will be a lot of data amortized across your fleet.

Data snapshots may generate a large amount of output. For log collection safety, output is written to a dedicated sink. The filesystem logger plugin writes snapshot results to /var/log/osquery/osqueryd.snapshots.log.

To schedule a snapshot query, use:

    {
      "schedule": {
        "mounts": {
          "query": "SELECT * FROM mounts;",
          "interval": 3600,
          "snapshot": true
        }
      }
    }

Logging as a Kafka producer

Users can configure logs to be directly published to a Kafka topic.

Configuration

Three Kafka configuration parameters are exposed as options: a comma-delimited list of brokers with or without the port (9092 by default) [default value: localhost], a base topic [default value: ""], and acks, the number of acknowledgments the logger requires from the Kafka leader before considering the request complete [default: all; valid values: 0, 1, all]. See the official Kafka documentation for more details.

To publish queries to specific topics, add a kafka_topics field at the top level of osquery.conf (see example below). If a given query was not explicitly configured in kafka_topics, then the base topic will be used. If there is no base topic configured, then that query will not be logged. There is, however, a performance cost when unconfigured queries fall back to the base topic, so when using multiple topics it is advised to explicitly configure all scheduled queries in kafka_topics.

The configuration parameters are exposed via command line options and can be set in a JSON configuration file as shown in this example:

    {
      "options": {
        "logger_kafka_brokers": "some.example1.com:9092,some.example2.com:9092",
        "logger_kafka_topic": "base_topic",
        "logger_kafka_compression": "gzip",
        "logger_kafka_acks": "1"
      },
      "packs": {
        "system-snapshot": {
          "queries": {
            "some_query1": {
              "query": "select * from system_info",
              "snapshot": true,
              "interval": 60
            },
            "some_query2": {
              "query": "select * from md_devices",
              "snapshot": true,
              "interval": 60
            },
            "some_query3": {
              "query": "select * from md_drives",
              "snapshot": true,
              "interval": 60
            }
          }
        }
      },
      "kafka_topics": {
        "test1_topic": [
          "pack_system-snapshot_some_query1"
        ],
        "test2_topic": [
          "pack_system-snapshot_some_query2"
        ],
        "test3_topic": [
          "pack_system-snapshot_some_query3"
        ]
      }
    }

The Kafka client ID and message key used are a concatenation of the OS hostname and the binary name (argv[0]). Currently there can only be one topic passed into the configuration, so all logs will be published to the same topic.
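The concatenation described above can be sketched roughly as follows; the separator and exact formatting are illustrative assumptions, not osquery's literal implementation:

```python
import os
import socket
import sys

def kafka_client_id() -> str:
    """Compose a client ID from the OS hostname and the binary name
    (argv[0]), mirroring the scheme described above. The underscore
    separator is an illustrative assumption."""
    hostname = socket.gethostname()
    binary = os.path.basename(sys.argv[0])
    return f"{hostname}_{binary}"

print(kafka_client_id())
```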

Schedule results

Event format

Event is the default result format. Each log line represents a state change. This format works best for log aggregation systems like Logstash or Splunk.

Example output of SELECT name, path, pid FROM processes; (whitespace added for readability):

    {
      "action": "added",
      "columns": {
        "name": "osqueryd",
        "path": "/usr/local/bin/osqueryd",
        "pid": "97830"
      },
      "name": "processes",
      "hostname": "hostname.local",
      "calendarTime": "Tue Sep 30 17:37:30 2014",
      "unixTime": "1412123850",
      "epoch": "314159265",
      "counter": "1"
    }
    {
      "action": "removed",
      "columns": {
        "name": "osqueryd",
        "path": "/usr/local/bin/osqueryd",
        "pid": "97650"
      },
      "name": "processes",
      "hostname": "hostname.local",
      "calendarTime": "Tue Sep 30 17:37:30 2014",
      "unixTime": "1412123850",
      "epoch": "314159265",
      "counter": "1"
    }

This tells us that a binary called "osqueryd" was stopped and a new binary with the same name was started (note the different pids). The data is generated by keeping a cache of previous query results and only logging when the cache changes. If no new processes are started or stopped, the query won't log any results.
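The differential mechanism can be illustrated with a small sketch (not osquery's actual implementation): keep the previous result set, and on each run emit only the rows that appeared or disappeared.

```python
def diff_results(previous, current):
    """Return (added, removed) rows between two query result sets.

    Rows are dicts compared by value, analogous to how cached rows
    are compared. This is an illustrative sketch only.
    """
    prev_set = {tuple(sorted(r.items())) for r in previous}
    curr_set = {tuple(sorted(r.items())) for r in current}
    added = [dict(r) for r in curr_set - prev_set]
    removed = [dict(r) for r in prev_set - curr_set]
    return added, removed

# The osqueryd restart from the example above: same name, new pid.
last_run = [{"name": "osqueryd", "pid": "97650"}]
this_run = [{"name": "osqueryd", "pid": "97830"}]
added, removed = diff_results(last_run, this_run)
# If nothing changed between runs, both lists are empty and no
# result line is logged at all.
```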

Snapshot format

Snapshot queries attempt to mimic the differential event format, but instead of emitting "columns", the snapshot data is stored under "snapshot". The action is included as, you guessed it, "snapshot"!

Consider the following example:

    {
      "action": "snapshot",
      "snapshot": [
        {
          "parent": "0",
          "path": "/sbin/launchd",
          "pid": "1"
        },
        {
          "parent": "1",
          "path": "/usr/sbin/syslogd",
          "pid": "51"
        },
        {
          "parent": "1",
          "path": "/usr/libexec/UserEventAgent",
          "pid": "52"
        },
        {
          "parent": "1",
          "path": "/usr/libexec/kextd",
          "pid": "54"
        }
      ],
      "name": "process_snapshot",
      "hostIdentifier": "hostname.local",
      "calendarTime": "Mon May 2 22:27:32 2016 UTC",
      "unixTime": "1462228052",
      "epoch": "314159265",
      "counter": "1"
    }

Batch format

If a query identifies multiple state changes, the batched format will include all results in a single log line. If you're programmatically parsing lines and loading them into a backend datastore, this is probably the best solution.

To enable batch log lines, launch osqueryd with the --logger_event_type=false argument.

Example output of SELECT name, path, pid FROM processes; (whitespace added for readability):

    {
      "diffResults": {
        "added": [
          {
            "name": "osqueryd",
            "path": "/usr/local/bin/osqueryd",
            "pid": "97830"
          }
        ],
        "removed": [
          {
            "name": "osqueryd",
            "path": "/usr/local/bin/osqueryd",
            "pid": "97650"
          }
        ]
      },
      "name": "processes",
      "hostname": "hostname.local",
      "calendarTime": "Tue Sep 30 17:37:30 2014",
      "unixTime": "1412123850",
      "epoch": "314159265",
      "counter": "1"
    }
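If you receive batched lines but your pipeline expects the event format, the conversion is mechanical. The sketch below shows one way to do it; field handling is simplified for illustration:

```python
import json

def batch_to_events(line: str):
    """Split one batched result line into event-format records.

    Each row under diffResults.added / diffResults.removed becomes
    its own record with "action" and "columns" keys, carrying the
    shared metadata (name, hostname, timestamps, ...) along.
    Illustrative sketch only.
    """
    batch = json.loads(line)
    meta = {k: v for k, v in batch.items() if k != "diffResults"}
    events = []
    for action, rows in batch["diffResults"].items():
        for row in rows:
            events.append({"action": action, "columns": row, **meta})
    return events

batched = json.dumps({
    "diffResults": {
        "added": [{"name": "osqueryd", "pid": "97830"}],
        "removed": [{"name": "osqueryd", "pid": "97650"}],
    },
    "name": "processes",
    "hostname": "hostname.local",
})
for event in batch_to_events(batched):
    print(json.dumps(event, sort_keys=True))
```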

Most of the time the Event format is the most appropriate. The next section in the deployment guide describes log aggregation methods. The aggregation methods describe collecting, searching, and alerting on the results from a query schedule.

Schedule epoch

When differential logs were described above, we mentioned that after the initial execution of a scheduled query, only differential results are logged. While this is very efficient from a size-of-logs perspective, it introduces some challenges. To begin with, if the logs are stored in a log management system of some kind, it becomes difficult or impossible to identify which log results are from the initial run of the query and which are differentials against it. This can be problematic in some situations. For tables like users, which rarely change and therefore rarely generate differential results, one would have to search far into historical logs to find the last results returned by osquery; conversely, for tables like processes, which change frequently, one would have to apply a fair amount of logic replaying added and removed rows to reconstruct the current state of running processes.

To aid with this, osquery maintains an epoch marker along with each scheduled query execution and calculates differentials only if the epoch of the last run matches the current epoch. If it doesn't, the current execution of the query is treated as an initial run. You can set the epoch marker by starting osquery with the --schedule_epoch flag or by updating the schedule_epoch flag remotely from a TLS backend. The epoch is transmitted with each log result, so it is easy to identify which results belong to which execution of the scheduled query.
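The epoch check can be summarized in a few lines of pseudologic, sketched here in Python (an illustration of the behavior described above, not osquery's source):

```python
def should_run_as_initial(last_epoch, current_epoch, has_previous_results):
    """Decide whether a scheduled query execution is treated as an
    initial run (log everything as "added") or a differential run."""
    if not has_previous_results:
        return True  # first ever execution of this query
    return last_epoch != current_epoch  # epoch changed: start fresh

assert should_run_as_initial(None, 314159265, False) is True
assert should_run_as_initial(314159265, 314159265, True) is False
assert should_run_as_initial(314159265, 271828182, True) is True
```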

Schedule counter

When setting up alerts for differential log data, you might want to skip the records added by the initial run. The counter field can be used to identify whether the added records are all records from the initial query or whether they are new records. For initial query results, which include all records, counter will be "0". For each subsequent query execution, counter is incremented by 1. When the epoch changes, counter is reset back to "0".
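The counter behavior can be sketched as follows (an illustration of the semantics above, not osquery code):

```python
class ScheduleState:
    """Track the counter for one scheduled query: 0 on an initial
    run (all rows logged as "added"), incremented on each subsequent
    differential run, and reset to 0 when the epoch changes."""

    def __init__(self, epoch):
        self.epoch = epoch
        self.counter = 0  # the initial run is counter 0

    def next_counter(self, current_epoch):
        if current_epoch != self.epoch:
            # Epoch changed: treat this execution as a new initial run.
            self.epoch = current_epoch
            self.counter = 0
        else:
            self.counter += 1
        return self.counter

state = ScheduleState(epoch=314159265)
print(state.next_counter(314159265))  # → 1 (first differential run)
print(state.next_counter(314159265))  # → 2
print(state.next_counter(271828182))  # → 0 (epoch changed, reset)
```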

Unique host identification

If you need a way to uniquely identify hosts embedded into osqueryd's results log, then the --host_identifier flag is what you're looking for. By default, host_identifier is set to "hostname", and the host's hostname is used as the host identifier in results logs. If hostnames are not unique or consistent in your environment, you can launch osqueryd with --host_identifier=uuid.