Sunday, May 26, 2019

osquery - Part III - Queries and Packs

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide Fleet, and then finally onto more advanced integrations and analysis.  So far, we have already covered the following topics:

Part I - Local Agent Interaction:  http://securitysynapse.blogspot.com/2019/05/osquery-part-i-local-agent-interaction.html
Part II - Kolide Centralized Management:  http://securitysynapse.blogspot.com/2019/05/osquery-part-ii-kolide-centralized.html


Even though we now have a centralized management platform, reading the query output in the Kolide Fleet UI does not scale to hundreds of thousands of hosts -- thus we need to integrate with a big data analytics platform so we can stack and perform statistical analysis on the data.  In order to do that, we first need to cover Query Packs and the resulting logs.

What is a Query Pack?

The Kolide Fleet Web UI does an excellent job of succinctly describing query packs in the following manner:

"Osquery supports grouping of queries (called query packs) which run on a scheduled basis and log the results to a configurable destination.

Query Packs are useful for monitoring specific attributes of hosts over time and can be used for alerting and incident response investigations. By default, queries added to packs run every hour (interval = 3600s).

Queries can be run in two modes:

  1. Differential = Only record data that has changed.
  2. Snapshot = Record full query result each time.

Packs are distributed to specified targets. Targets may be individual hosts or groups of hosts called labels.

The results of queries run via query packs are stored in log files for your convenience. We recommend forwarding these logs to a log aggregation tool or other actionable tool for further analysis. These logs can be found in the following locations:

    Status Log: /path/to/status/logs
    Result Log: /path/to/result/logs"
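To make the two logging modes concrete, here is a short sketch of what the two result-log shapes look like and how to tell them apart. The records below are hand-built examples in the general shape osquery documents for result logs; exact field names can vary by osquery version, so treat them as illustrative rather than authoritative.

```python
import json

# Illustrative osquery result-log records (field names may vary by version).
# Differential mode emits one record per changed row, tagged "added"/"removed".
differential_line = json.dumps({
    "name": "pack/Users Pack/Users Query",
    "hostIdentifier": "host1",
    "action": "added",
    "columns": {"username": "alice", "uid": "1001"},
})

# Snapshot mode emits the full result set every run under a "snapshot" key.
snapshot_line = json.dumps({
    "name": "pack/Users Pack/Users Query",
    "hostIdentifier": "host1",
    "action": "snapshot",
    "snapshot": [
        {"username": "alice", "uid": "1001"},
        {"username": "bob", "uid": "1002"},
    ],
})

def log_mode(line: str) -> str:
    """Classify a result-log line as snapshot or differential."""
    record = json.loads(line)
    return "snapshot" if "snapshot" in record else "differential"

print(log_mode(differential_line))  # differential
print(log_mode(snapshot_line))      # snapshot
```

Differential logs stay small when hosts rarely change, while snapshot logs make stacking across hosts easier since every run carries the full picture.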


Creating Saved Queries

Packs sound like a great step toward big data integration, but first we need to create a saved query by doing the following (our example below queries users):

In the Kolide Web UI, click Query in the left-hand navigation > New Query
  • Query Title:  Users Query
  • SQL:  SELECT * FROM users
  • Description:  Query all users
  • Select Targets:  All Hosts
Click the Save button > Save as New

Figure 1:  Adding a new saved user query
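For a feel of what this query returns, below is a hand-built example of a single row as osquery might report it on a Linux host. osquery returns every column as a string, and the exact column set of the users table varies by platform and version, so the row is illustrative only.

```python
# Hand-built example of one row from "SELECT * FROM users" on a Linux host
# (osquery returns all values as strings; columns vary by platform/version).
row = {
    "uid": "1001",
    "gid": "1001",
    "username": "alice",
    "description": "Alice",
    "directory": "/home/alice",
    "shell": "/bin/bash",
}

# In practice you rarely need every column; projecting just the fields of
# interest (e.g. SELECT username, uid, shell FROM users) keeps logs small.
interesting = {k: row[k] for k in ("username", "uid", "shell")}
print(interesting)
```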

Creating Query Packs

Now that we have a saved query, let's schedule it using a Pack.

Click Packs in the left-hand navigation > New Pack

  • Query Pack Title:  Users Pack
  • Query Pack Description:  Query all users

Click the Save Query Pack button

Figure 2:  Creating a new users pack

In the next screen, on the far right-hand side, select the Users Query that we created earlier and fill in the fields to define the pack properties:
  • Interval:  60  (Just so we get some data to play with)
  • Platform:  All
  • Minimum version:  All
  • Logging:  Snapshot  (Just so we get some data to play with)

Figure 3:  Defining the User Pack properties
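For comparison, outside of Fleet the same schedule could be expressed as a stand-alone osquery pack file. The sketch below builds that JSON in Python; the general shape (a top-level "queries" map with "query", "interval", and "snapshot" keys) follows the osquery packs documentation, while the names and values are our lab choices.

```python
import json

# Roughly equivalent stand-alone osquery pack definition -- the Fleet UI
# generates this kind of configuration for us behind the scenes.
users_pack = {
    "queries": {
        "users_query": {
            "query": "SELECT * FROM users;",
            "interval": 60,        # seconds between runs (default is 3600)
            "snapshot": True,      # full results each run, not diffs
            "description": "Query all users",
        }
    }
}

print(json.dumps(users_pack, indent=2))
```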

Query Pack Output

With our current minimalist configuration (shown below in fleet.yaml), the pack's logs are written to disk by default here:
  • /tmp/osquery_result
  • /tmp/osquery_status


cat /opt/fleet/conf/fleet.yaml 

mysql:
  address: 127.0.0.1:3306
  database: kolide
  username: root
  password: toor
redis:
  address: 127.0.0.1:6379
server:
  cert: /opt/fleet/ssl/fleetserver-cert.crt
  key: /opt/fleet/ssl/fleetserver-cert.key
  address: 0.0.0.0:443
auth:
  jwt_key: strong_key
logging:
  json: true


If we wanted to send the logs to a larger drive, we could add the following to our fleet.yaml configuration (enabling log rotation retains up to 500 MB or 28 days of data):

filesystem:
  status_log_file: /path/to/drive/osquery/status.log
  result_log_file: /path/to/drive/osquery/result.log
  enable_log_rotation: true

(For our lab environment, we wrote it to:  /data/osquery/)
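Once results are landing on disk, each line of the result log is a standalone JSON object, which makes pre-processing before shipping to an analytics platform straightforward. A minimal sketch follows; the sample lines are hand-built stand-ins for real log lines, which carry additional fields such as timestamps.

```python
import collections
import io
import json

# Two hand-built snapshot result lines as they might appear in result.log.
sample_log = io.StringIO("\n".join([
    json.dumps({"name": "pack/Users Pack/Users Query",
                "hostIdentifier": "host1",
                "snapshot": [{"username": "alice"}, {"username": "bob"}]}),
    json.dumps({"name": "pack/Users Pack/Users Query",
                "hostIdentifier": "host2",
                "snapshot": [{"username": "root"}]}),
]))

# Count snapshot rows per host -- the kind of stacking a big data
# analytics platform would do for us at scale.
rows_per_host = collections.Counter()
for line in sample_log:
    record = json.loads(line)
    rows_per_host[record["hostIdentifier"]] += len(record.get("snapshot", []))

print(dict(rows_per_host))  # {'host1': 2, 'host2': 1}
```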

Just remember to restart the Kolide Fleet service using the following:

service fleet-service restart

For a full list of Fleet configuration options (such as sending to firehose, etc.):
https://github.com/kolide/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md


Conclusion

This article covered how to create saved queries, how to configure and schedule query packs to run on a regular basis, and how to send the resulting data to a specified file so we can pick up the results and forward them to a big data analytics platform.  In the next couple of articles, we will cover managing the Fleet server and more advanced integrations.

Saturday, May 25, 2019

osquery - Part II - Kolide Centralized Management

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide, and then finally onto more advanced integrations and analysis.  We already covered stand-alone local osquery interaction in Part I of this series:

http://securitysynapse.blogspot.com/2019/05/osquery-part-i-local-agent-interaction.html

However, we quickly noticed that it does not scale to hundreds of thousands of hosts -- thus we need a centralized management platform.  In this article, we will examine the freely available Kolide Fleet.

What is Kolide?

Kolide (https://kolide.com/) is a centralized osquery agent management platform.  As of this writing, there are two versions:  Cloud and the on-prem Fleet.  Kolide Cloud currently runs about $6 per endpoint.  However, for our needs, we will kick the tires on the on-prem Kolide Fleet (https://kolide.com/fleet), which is offered free of charge.


Kolide Fleet Dependencies and Installation

Kolide Fleet has a few significant dependencies:
  • *nix based operating system
  • MySQL version 5.7 (or greater) - used as Fleet's primary database
  • Redis - "ingest and queue results of distributed queries, cache data, etc."

Due to these dependencies, setup can be a little painful and time-consuming.  However, we found a pretty awesome Fleet installation script (https://github.com/deeso/fleet-deployment) from Adam Pridgen (https://www.linkedin.com/in/-dso-/) that works great for our lab environment running Ubuntu.

Installation

Follow these steps to get up and running quickly:

git clone https://github.com/deeso/fleet-deployment.git
cd fleet-deployment/fleet-server-install
cp passwords.example passwords.sh

** Using your favorite text editor (such as vim), update the MYSQL_PASS variable with the MySQL password and set JWT_KEY to a strong value:

vim passwords.sh

Now run the installer script:
bash install.sh

NOTE:  During the SSL certificate creation phase, you will be asked for a "Common Name" / server FQDN (see below) -- be sure to use the server name.

Ex: Common Name (e.g. server FQDN or YOUR name) []:<ENTER IT HERE>

This will matter later when you try to connect via fleetctl.  If you do not specify the server name, you will see the following error message upon a login attempt:

"error logging in: POST /api/v1/kolide/login: Post https://<hostname>:443/api/v1/kolide/login: x509: certificate is not valid for any names, but wanted to match localhost"


Check on the status of the service:

service fleet-service status

When complete, open a browser and navigate to https://localhost to complete the Kolide setup to specify the user, organization, and Kolide URL.

Figure 1:  Kolide Fleet Setup Complete


Joining an agent to Kolide Fleet

If you installed osquery as a stand-alone agent during the Part I article, feel free to uninstall it.  We now need to install osquery agents and have them connect to our Kolide server.

"To connect a host to Kolide Fleet, you have two general options. 

1)  You can install the osquery binaries on your hosts via the packages distributed at https://osquery.io/downloads 

- or -  

2)  You can use the Kolide osquery Launcher

The Launcher is a light wrapper that aims to make running and deploying osquery easier by adding a few features and minimizing the configuration interface. Some features of The Launcher are:

  • Secure autoupdates to the latest stable osqueryd
  • Remote communication via a strongly-typed, versioned, modern gRPC server API
  • a curated kolide_best_practices table which includes a set of standards for the modern enterprise"

Source:  https://github.com/kolide/fleet/blob/master/docs/infrastructure/adding-hosts-to-fleet.md


Using the Kolide osquery Launcher

For this article, we will use the Kolide osquery Launcher to connect a host to our Kolide Fleet server.  The launcher can be obtained as source or pre-compiled binaries from here:  https://github.com/kolide/launcher/releases

Then you will need to obtain the enrollment secret from the Kolide Fleet Server web UI by clicking on the "Add New Host" link.

Figure 2:  Obtain enrollment secret from the Kolide Fleet web UI

Once you have the launcher binary and enrollment secret, run something similar to the following (where 192.168.21.129 is your Kolide server):

launcher.exe --hostname=192.168.21.129:443 --root_directory=c:\programdata\osquery --enroll_secret=6Ua**snip**rUc --insecure

The host will check in and you will be able to run queries from Kolide Fleet.

Figure 3:  Host checked into Kolide Fleet

To run queries, use the left-hand navigation in the Kolide Fleet UI and click Query > New Query.  Type the SQL query you want to run (autocomplete is available), select the target(s), and click Run.  The output from the hosts will appear at the bottom of the screen.


Figure 4:  Running a query from Kolide Fleet


Conclusion

At this point, you should have the basic building blocks for deploying osquery agents and having them check into Kolide Fleet.  This centralized management is quite powerful, and being able to view (and export) data from multiple hosts is useful -- but viewing the results in this interface is a bit limiting, especially when processing results from thousands of hosts.  In the next couple of articles, we will examine fleet control and integration possibilities that will allow processing and stacking the data using a big data analytics platform.