osquery is designed to work with any environment's existing data infrastructure. Since the problem space of forwarding logs is so well developed, osquery does not implement log forwarding internally.

In short, the act of forwarding logs and analyzing logs is mostly left as an exercise for the reader. This page offers advice and some options for you to consider, but at the end of the day, you know your infrastructure best and you should make your decisions based on that knowledge.

Aggregating logs

When it comes to aggregating the logs that osqueryd generates, you have several options. If you use the filesystem logger plugin (which is the default), then you're responsible for shipping the logs off somewhere. There are many open source and commercial products which excel in this area. This section will explore a few of those options.
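For reference, the filesystem logger's behavior is controlled by a pair of osquery flags; a minimal flagfile entry might look like this (both values shown are the defaults):

    # /etc/osquery/osquery.flags
    --logger_plugin=filesystem
    --logger_path=/var/log/osquery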

Logstash

Logstash is an open source tool for collecting, parsing, indexing, and forwarding logs. It can ingest osquery logs with its file input plugin and then send the data to an aggregator via its extensive list of output plugins. A common datastore for Logstash output is Elasticsearch.

An example Logstash-to-Elasticsearch config may look like this:

    input {
      file {
        path => "/var/log/osquery/osqueryd.results.log"
        type => "osquery_json"
        codec => "json"
      }
    }

    filter {
      if [type] == "osquery_json" {
        date {
          match => [ "unixTime", "UNIX" ]
        }
      }
    }

    output {
      stdout {}
      elasticsearch {
        hosts => ["127.0.0.1:9200"]
      }
    }

This config sends the JSON-formatted results log to an Elasticsearch instance listening on 127.0.0.1, though the elasticsearch output can point at a node on any endpoint address.
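To verify that documents are arriving, you can query Elasticsearch directly. A quick sanity check, assuming the default logstash-* index naming used by the elasticsearch output:

    curl 'http://127.0.0.1:9200/logstash-*/_search?q=type:osquery_json&size=1&pretty'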

Splunk

If you use Splunk, you're probably already familiar with the Splunk Universal Forwarder. An example forwarder inputs.conf may look as follows:

    [monitor:///var/log/osquery/osqueryd.results.log]
    index = main
    sourcetype = osquery:results

    [monitor:///var/log/osquery/osqueryd.*INFO*]
    index = main
    sourcetype = osquery:info

    [monitor:///var/log/osquery/osqueryd.*ERROR*]
    index = main
    sourcetype = osquery:error

    [monitor:///var/log/osquery/osqueryd.*WARNING*]
    index = main
    sourcetype = osquery:warning
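On the parsing side, a props.conf stanza along the following lines tells Splunk to treat each result line as JSON and to take the event timestamp from osquery's unixTime field. This is a sketch; where the stanza belongs (forwarder or indexer) depends on your Splunk topology:

    [osquery:results]
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    TIMESTAMP_FIELDS = unixTime
    TIME_FORMAT = %s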

Fluentd

Fluentd is an open source data collector and log forwarder. It is highly extensible, with a large ecosystem of input and output plugins, and is a popular choice for routing osquery output to almost any backend.
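A minimal sketch of a Fluentd configuration that tails the results log and forwards it to a central aggregator (the aggregator host is hypothetical; this uses the bundled tail input and forward output plugins):

    # Tail osquery's results log, parsing each line as JSON
    <source>
      @type tail
      path /var/log/osquery/osqueryd.results.log
      pos_file /var/lib/fluentd/osquery.results.pos
      tag osquery.results
      <parse>
        @type json
      </parse>
    </source>

    # Forward all osquery events to the aggregator
    <match osquery.**>
      @type forward
      <server>
        host logs.example.com
        port 24224
      </server>
    </match>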

Rsyslog

rsyslog is a tried and tested UNIX log forwarding service. If you are deploying osqueryd in a production Linux environment where you do not have to worry about lossy network connections, this may be your best option.
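A sketch of an rsyslog configuration that reads the results log with the imfile input module and relays it over TCP (the central server name is hypothetical):

    # Watch osquery's results log with the file-input module
    module(load="imfile")
    input(type="imfile"
          File="/var/log/osquery/osqueryd.results.log"
          Tag="osquery-results"
          Severity="info"
          Facility="local3")

    # Relay everything on facility local3 to a central syslog server over TCP
    local3.* @@logs.example.com:514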

Analyzing logs

How you analyze logs depends heavily on how you aggregate them. osquery produces results logs in JSON format, so they are easy to parse and analyze on most modern log aggregation platforms.
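For reference, a single entry in the results log looks roughly like this (pretty-printed here for readability; on disk each result is a single line, and the exact fields vary with your configuration):

    {
      "name": "processes",
      "hostIdentifier": "hostname.local",
      "calendarTime": "Tue Sep 30 17:37:30 2017 UTC",
      "unixTime": "1506793050",
      "action": "added",
      "columns": {
        "name": "osqueryd",
        "path": "/usr/local/bin/osqueryd",
        "pid": "97830"
      }
    }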

Kibana

If you are forwarding logs with LogStash to ElasticSearch, then you probably want to perform your analytics using Kibana.

Logstash indexes logs into Elasticsearch using a default index format of logstash-YYYY.MM.DD. Kibana ships with a default Logstash dashboard and automatically extracts fields from all log lines, making them available for search.
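From Kibana's Dev Tools console (or any Elasticsearch client), a search against those extracted fields might look like the following; the query name shown is illustrative:

    GET logstash-*/_search
    {
      "query": {
        "match": { "name": "processes" }
      }
    }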

Splunk

Splunk will automatically extract the relevant fields from the JSON results, making them immediately available for search and analytics.


Rsyslog, Fluentd, Scribe, etc.

If you are using a log forwarder that imposes fewer requirements on where data is stored (unlike, say, Splunk forwarders, which require Splunk), then you have many options for interacting with osqueryd data. It is recommended that you use whatever log analytics platform you are comfortable with.

Many people are very comfortable with Logstash; if you already have an existing Logstash/Elasticsearch deployment, it is a great option. If your organization uses a different backend log management solution, osquery should tie into it with minimal effort.