Wednesday, May 29, 2019

osquery - Part V - Integration

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide Fleet, and then finally onto more advanced integrations and analysis.  We already covered the following topics:

Part I - Local Agent Interaction:  http://securitysynapse.blogspot.com/2019/05/osquery-part-i-local-agent-interaction.html
Part II - Kolide Centralized Management:  http://securitysynapse.blogspot.com/2019/05/osquery-part-ii-kolide-centralized.html
Part III - Queries and Packs:  http://securitysynapse.blogspot.com/2019/05/osquery-part-iii-queries-and-packs.html
Part IV - Fleet Control Using fleetctl - http://securitysynapse.blogspot.com/2019/05/osquery-part-iv-fleet-control-using-fleetctl.html


Even though we now have a centralized management platform, reading the query output in the Kolide Fleet UI does not scale to hundreds of thousands of hosts -- thus we need to integrate with a big data analytics platform so we can stack and perform statistical analysis on the data.  In this article, we will examine Kolide Fleet output + Splunk integration.  As a bonus, we are releasing a Kolide Fleet App for Splunk -- free of charge on Splunkbase.  The first version of the app can parse, normalize, and display the following information:

  • Overview information
  • Status Log
  • osquery_info query
  • programs query
  • process_open_sockets query
  • users query
Screenshots of the app are shown below.  The Splunk app is available here:  https://splunkbase.splunk.com/app/4518/

Figure 1:  Overview page

Figure 2:  Status Log

Figure 3:  osquery_info page


Expected Kolide Packs and Queries

The first version of the Kolide Fleet App for Splunk needs the pack names, query names, and output to conform to what is shown in the fleetctl get commands below. For this reason, we are sharing our exported packs and queries below.


Remember in Part IV of this series, we covered how to import the queries and packs using fleetctl:

fleetctl apply -f kolide_splunk_app.yaml 
[+] applied 4 queries
[+] applied 4 packs
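
If you prefer to build this file by hand, here is a minimal sketch of what kolide_splunk_app.yaml might look like, based on the Fleet config file format covered in Part IV.  The interval, snapshot setting, and label target shown here are assumptions -- adjust them to your environment -- and the remaining queries and packs follow the same pattern:

cat kolide_splunk_app.yaml

apiVersion: v1
kind: query
spec:
  description: Query all users
  name: users query
  query: SELECT * FROM users
---
apiVersion: v1
kind: pack
spec:
  name: users pack
  description: Query all users
  targets:
    labels:
      - All Hosts
  queries:
  - name: users query
    query: users query
    interval: 60
    snapshot: true
-- snip (repeat for the osquery_info, programs, and process_open_sockets queries and packs) --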


Pack and Query details

fleetctl get p
+---------------------------+----------+-------------------------------+
|           NAME            | PLATFORM |          DESCRIPTION          |
+---------------------------+----------+-------------------------------+
| users pack                |          | Query all users               |
+---------------------------+----------+-------------------------------+
| osquery_info pack         |          | Query the version of osquery  |
+---------------------------+----------+-------------------------------+
| process_open_sockets pack |          | Pack for process_open_sockets |
+---------------------------+----------+-------------------------------+
| programs pack             |          | pack for programs             |
+---------------------------+----------+-------------------------------+


fleetctl get q
+----------------------------+------------------------------+--------------------------------+
|            NAME            |         DESCRIPTION          |             QUERY              |
+----------------------------+------------------------------+--------------------------------+
| users query                | Query all users              | SELECT * FROM users            |
+----------------------------+------------------------------+--------------------------------+
| osquery_info query         | Query the version of osquery | SELECT * FROM osquery_info     |
+----------------------------+------------------------------+--------------------------------+
| process_open_sockets query | Query process_open_sockets   | SELECT DISTINCT proc.name,     |
|                            |                              | proc.path, proc.cmdline,       |
|                            |                              | pos.pid, pos.protocol,         |
|                            |                              | pos.local_address,             |
|                            |                              | pos.local_port,                |
|                            |                              | pos.remote_address,            |
|                            |                              | pos.remote_port FROM           |
|                            |                              | process_open_sockets AS pos    |
|                            |                              | JOIN processes AS proc ON      |
|                            |                              | pos.pid = proc.pid;            |
+----------------------------+------------------------------+--------------------------------+
| programs query             | query for programs           | SELECT * FROM programs         |
+----------------------------+------------------------------+--------------------------------+



Kolide Output

Once the packs and queries above are imported using the fleetctl apply command, applied to targets, and scheduled to run, we need to gather the output and send it to Splunk. You might remember from Part III of this series that we added a filesystem section to our fleet.yaml configuration file to send the results and status output to the following paths with log rotation enabled:

filesystem:
  status_log_file: /data/osquery/status.log
  result_log_file: /data/osquery/results.log
  enable_log_rotation: true


This sets us up perfectly to use a Splunk forwarder to send the data to Splunk. If not already completed, download and install the Splunk forwarder here:

https://www.splunk.com/en_us/download/universal-forwarder.html

Once installed, configure the forwarder to send data to your indexers.
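
If you are installing the forwarder from the command line, a minimal sketch of that configuration looks like the following (this assumes your indexer is listening for forwarded data on the default receiving port 9997 -- replace the hostname and port to match your environment):

/opt/splunkforwarder/bin/splunk start --accept-license
/opt/splunkforwarder/bin/splunk add forward-server <your-indexer>:9997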

Install the Splunk App and Create the Index

In order to prepare for the data's arrival, we now install the Splunk app and create an index for the osquery data:

1) Install the following Kolide Fleet App For Splunk: https://splunkbase.splunk.com/app/4518/
2) Create an index called osquery
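
Step 2 can be completed through the web UI (Settings > Indexes > New Index) or from the command line on the indexer.  A minimal sketch of the CLI approach is shown below -- it accepts the default storage paths, so tune retention and paths as needed for your environment:

/opt/splunk/bin/splunk add index osquery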

If you already have osquery data going to Splunk under a different index and sourcetype, all is not lost.  You can modify the eventtypes.conf file to account for it.

(Optional) Modify eventtypes.conf as Needed

If you already have Kolide set up and sending data to Splunk under different index and sourcetype names, that's not a problem.  As long as the data is being parsed correctly, you can modify eventtypes.conf within the app so that all of the dashboards still function with your index and sourcetype names.  Change index=osquery to match your index, and change sourcetype=osquery:results and sourcetype=osquery:status to match your sourcetypes.


cat eventtypes.conf


[osquery_index]
search = index=osquery

[osquery_status]
search = eventtype=osquery_index sourcetype=osquery:status

[osquery_results]
search = eventtype=osquery_index sourcetype=osquery:results



(Optional) Modify props.conf as Needed

Currently, the only two stanzas used in props.conf are osquery:results and osquery:status, shown below.  Feel free to rename the stanzas to match your sourcetypes if needed.  Only minimal parsing is performed:

cat props.conf

## Results log
[osquery:results]
TRUNCATE = 50000
KV_MODE = json
SHOULD_LINEMERGE = 0
category = osquery
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD = 10
TIME_FORMAT = %s
TIME_PREFIX = unixTime\"\:
EVAL-vendor_product = "osquery"
FIELDALIAS-user = decorations.username as user
FIELDALIAS-username = username as user
FIELDALIAS-dest = decorations.hostname as host


## Status log
[osquery:status]
KV_MODE = json
SHOULD_LINEMERGE = 0
category = osquery
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD = 10
TIME_FORMAT = %s
TIME_PREFIX = unixTime\"\:
EVAL-vendor_product = "osquery"
FIELDALIAS-user = decorations.username as user
FIELDALIAS-dest = host as dest
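
To confirm that Splunk is picking up your props.conf settings (especially if you renamed the stanzas), btool is handy.  A quick check might look like the following -- run it on the instance where the app is installed, and note that the paths assume a default /opt/splunk install:

/opt/splunk/bin/splunk btool check
/opt/splunk/bin/splunk btool props list osquery:results --debug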

Send logs to Splunk via Splunk forwarder (inputs.conf)

Once our app is installed on the search head, the Splunk forwarder is installed on the Kolide host, and Kolide is writing the status and results logs to disk, we need to let the forwarder know where to gather the logs. For this, we use the following inputs.conf file:

cat inputs.conf 
[monitor:///data/osquery/results.log]
index = osquery
sourcetype = osquery:results
disabled = 0

[monitor:///data/osquery/status.log]
index = osquery
sourcetype = osquery:status
disabled = 0
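
After dropping inputs.conf into place, restart the forwarder and confirm that the monitors and the forward-server are active.  A quick sanity check might look like this (paths assume a default universal forwarder install; the list commands will prompt for credentials):

/opt/splunkforwarder/bin/splunk restart
/opt/splunkforwarder/bin/splunk list monitor
/opt/splunkforwarder/bin/splunk list forward-server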

If all went as planned, you should see data populating in the Splunk app.  :-)

Conclusion

This article covered how to import the queries required to populate the current version of the Splunk app.  It then explained where the Kolide Fleet logs should appear and how to forward those logs to Splunk.  We covered installing the newly created Kolide Fleet App for Splunk and optionally configuring eventtypes.conf and/or props.conf for any deviation from the expected index or sourcetype.  At the end of this effort, you should have data flowing from Kolide Fleet to Splunk -- properly ingested, parsed, and displayed.  For any questions, please post in the comments section below.  Otherwise, stay tuned for additional integration efforts!

Props to the osquery TA for getting us started.


Bonus for the curious reader -- Splunk Magic

Normally, JSON is not the prettiest of data to table in Splunk.  However, we discovered a series of tricks that makes panel and dashboard development a little easier to scale.  In many cases, our searches end up looking something like this:

eventtype=osquery_results  name="pack/network_connection_listening/Windows_Process_Listening_Port" | dedup host, _time | spath output=data path=snapshot{} | mvexpand data | rename data as _raw | extract pairdelim="," kvdelim=":" | eval pname=mvindex(name,1) | table _time, host, pname, path, protocol, address, port 


It is a lot to digest all at once, so let's break it down (an applied example follows the breakdown):

  1. Find the data we want:  eventtype=osquery_results  name="pack/network_connection_listening/Windows_Process_Listening_Port"
  2. Get the latest result by host and time:  dedup host, _time
  3. Extract the snapshot array into a multivalue field named data:  spath output=data path=snapshot{}
  4. Expand the multivalue field into separate events:  mvexpand data
  5. Rename data as _raw since extract only works on _raw:  rename data as _raw
  6. Extract key/value pairs (regardless of key names):  extract pairdelim="," kvdelim=":"
  7. Avoid a conflict with the existing "name" field:  eval pname=mvindex(name,1)
  8. Table remaining values extracted:  table <extracted field names>
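
As an applied example, the same pattern adapted to the process_open_sockets pack defined earlier might look something like the search below.  Treat it as a sketch: the exact value of the name field depends on how your pack and query are named (osquery typically logs pack results as pack/<pack name>/<query name>), so verify it against your own results log first.

eventtype=osquery_results name="pack/process_open_sockets pack/process_open_sockets query" | dedup host, _time | spath output=data path=snapshot{} | mvexpand data | rename data as _raw | extract pairdelim="," kvdelim=":" | eval pname=mvindex(name,1) | table _time, host, pname, path, protocol, local_address, local_port, remote_address, remote_port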

Figure 4:  Pure joy of JSON data in Splunk

Figure 5:  Data is ready to table after our SPL trickery

Tuesday, May 28, 2019

osquery - Part IV - Fleet Control Using fleetctl

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide Fleet, and then finally onto more advanced integrations and analysis.  We already covered the following topics:

Part I - Local Agent Interaction:  http://securitysynapse.blogspot.com/2019/05/osquery-part-i-local-agent-interaction.html
Part II - Kolide Centralized Management:  http://securitysynapse.blogspot.com/2019/05/osquery-part-ii-kolide-centralized.html
Part III - Queries and Packs:  http://securitysynapse.blogspot.com/2019/05/osquery-part-iii-queries-and-packs.html


Now that we have a centralized osquery management platform using Kolide Fleet, we need to learn how to manage the manager.  This may not be difficult with a single Kolide Fleet instance, but keep in mind that you could also be running multiple load-balanced Kolide Fleet instances.  Plus, exporting and importing queries and packs is helpful if you want to share content with others or consume content from others.

Managing Fleets using fleetctl

Fortunately, there is a command-line tool called fleetctl that interacts with the Fleet API.  While it isn't perfect, it is at least a start.  This article will hopefully provide some additional tips on usage and limitations.


Official documentation can be found at the link below, but we will give you a primer with examples:  

https://github.com/kolide/fleet/blob/master/docs/cli/README.md


In order to use fleetctl, we first need to set it up (replace <hostname> with the hostname of your server):

Setup
fleetctl config set --address https://<hostname>:443

fleetctl login
Log in using the standard Fleet credentials.
Email: <my email address>
Password: 
[+] Fleet login successful and context configured!

If you see the following error message while attempting to log in, it is probably because you are using a self-signed cert without a common name:
"error logging in: POST /api/v1/kolide/login: Post https://<hostname>:443/api/v1/kolide/login: x509: certificate is not valid for any names, but wanted to match localhost"

Please see our previous article for details; however, the following should correct the issue for a self-signed cert:
fleetctl config set --rootca /opt/fleet/ssl/fleetserver-cert.crt


Otherwise, now that you are logged in, try something easy (this value should be familiar from the GUI):

Quick Test
fleetctl get enroll-secret
6U**********rUc


To see other options, use -h for help.

Help Menu
fleetctl -h

NAME:
   fleetctl - CLI for operating Kolide Fleet

USAGE:
   fleetctl [global options] command [command options] [arguments...]

VERSION:
   2.1.0

COMMANDS:
     apply    Apply files to declaratively manage osquery configurations
     delete   Specify files to declaratively batch delete osquery configurations
     setup    Setup a Kolide Fleet instance
     login    Login to Kolide Fleet
     logout   Logout of Kolide Fleet
     query    Run a live query
     get      Get/list resources
     config   Modify how and which Fleet server to connect to
     convert  Convert osquery packs into decomposed fleet configs
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --help, -h     show help
   --version, -v  print the version


Get Command Syntax (Queries, Packs, Labels, Host, Enroll-Secret)
fleetctl get
NAME:
   fleetctl get - Get/list resources

USAGE:
   fleetctl get command [command options] [arguments...]

COMMANDS:
     queries, query, q  List information about one or more queries
     packs, pack, p     List information about one or more packs
     labels, label, l   List information about one or more labels
     options            Retrieve the osquery configuration
     hosts, host, h     List information about one or more hosts
     enroll-secret      Retrieve the osquery enroll secret

OPTIONS:
   --help, -h  show help


Get A List of All Packs -- NOTE:  fleetctl does not currently provide an option to change the format
fleetctl get p
+---------------------------+----------+-------------------------------+
|           NAME            | PLATFORM |          DESCRIPTION          |
+---------------------------+----------+-------------------------------+
| users pack                |          | Query all users               |
+---------------------------+----------+-------------------------------+
| osquery_info pack         |          | Query the version of osquery  |
+---------------------------+----------+-------------------------------+
| process_open_sockets pack |          | Pack for process_open_sockets |
+---------------------------+----------+-------------------------------+
| programs pack             |          | pack for programs             |
+---------------------------+----------+-------------------------------+

(Note:  For a list of the queries, change the p to a q)


Get a Pack as Config File

For information about the config file format, please see this link:
https://github.com/kolide/fleet/blob/master/docs/cli/file-format.md

By specifying the exact query or pack name in the "get" command, we get properly formatted output, but we could not find a way to request all packs or all queries at once (e.g., with a wildcard).

fleetctl get p "users pack"

apiVersion: v1
kind: pack
spec:
  description: Query all users
  id: 1
  name: users pack
  queries:
  - description: ""
    interval: 60
    name: Users Query
    platform: ""
    query: users query
    removed: false
    snapshot: true
    version: ""
  targets:
    labels: null

(Note:  This also works for query names too)

Exporting All Queries and Packs

Unfortunately, we could not find a single command or option within fleetctl to export all queries and packs for re-import elsewhere.  The documentation states that fleetctl functions similarly to kubectl, but the -o yaml option does not appear to be implemented yet... so we had to perform some Linux trickery to get a list of all packs and queries in a non-tabled format.

To get a list of all of the packs, we used:

fleetctl get p | grep -v '+' | tail -n +2 | cut -f 2 -d '|' | sed 's/^.//' | sed -r '/^\s*$/d'

users pack                
osquery_info pack         
process_open_sockets pack 
programs pack           


To get a list of all of the queries, we just changed the first p to a q:

fleetctl get q | grep -v '+' | tail -n +2 | cut -f 2 -d '|' | sed 's/^.//' | sed -r '/^\s*$/d'

users query                
osquery_info query         
process_open_sockets query 
programs query           

We will try to expand on this when we get more time...
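
In the meantime, the list above can be fed straight back into fleetctl to dump each pack to its own config file.  Here is a rough sketch (read trims the padded whitespace from the table columns, and the query version is the same loop with both p's changed to q's):

fleetctl get p | grep -v '+' | tail -n +2 | cut -f 2 -d '|' | sed 's/^.//' | sed -r '/^\s*$/d' | while read -r name; do
    fleetctl get p "$name" > "pack_$(echo "$name" | tr ' ' '_').yaml"
done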


Import a Simple Query

If you happen to have a simple query such as the one below, you can import it using the fleetctl apply command:

cat querytoimport.yaml 

apiVersion: v1
kind: query
spec:
  description: Query the version of osquery
  name: osquery_info query
  query: SELECT * FROM osquery_info

fleetctl apply -f querytoimport.yaml
[+] applied 1 queries

This should now show up in the Web UI.
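
To double-check the import, you can read the query back out in config format -- the output should mirror what you applied (possibly with an added id field):

fleetctl get q "osquery_info query"

apiVersion: v1
kind: query
spec:
  description: Query the version of osquery
  name: osquery_info query
  query: SELECT * FROM osquery_info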

Importing Third Party Query Packs

Now that we know how to create a query pack, it does not mean that we need to create them all by ourselves.  There is already a significant amount of freely shared content that we can leverage, such as the osquery-attck packs used in the example below.
Note:  Some of this third-party content is provided as an osquery pack rather than a Fleet config, so it cannot be imported directly using fleetctl apply.  For these, use fleetctl convert first.  For example:

wget https://raw.githubusercontent.com/teoseller/osquery-attck/master/network_connection_listening.conf

fleetctl convert -f network_connection_listening.conf > network_connection_listening.yaml

fleetctl apply -f network_connection_listening.yaml 
[+] applied 4 queries
[+] applied 1 packs

 More on this later... but we wanted to introduce the concept here.

Conclusion

This article covered how to manage Kolide Fleet using a tool called fleetctl.  We covered how to set up fleetctl; how to get a list of packs, queries, labels, and hosts; and even how to import third-party query packs.  You may have noticed that each article lays the groundwork to build an integration which will really weaponize Kolide Fleet...  As such, in our next article we will use all of the information from the first four articles to create an integration with Splunk!  :-)

Sunday, May 26, 2019

osquery - Part III - Queries and Packs

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide Fleet, and then finally onto more advanced integrations and analysis.  So far, we have already covered the following topics:

Part I - Local Agent Interaction:  http://securitysynapse.blogspot.com/2019/05/osquery-part-i-local-agent-interaction.html
Part II - Kolide Centralized Management:  http://securitysynapse.blogspot.com/2019/05/osquery-part-ii-kolide-centralized.html


Even though we now have a centralized management platform, reading the query output in the Kolide Fleet UI does not scale to hundreds of thousands of hosts -- thus we need to integrate with a big data analytics platform so we can stack and perform statistical analysis on the data.  In order to do that, we first need to cover Query Packs and the resulting logs.

What is a Query Pack?

The Kolide Fleet Web UI does an excellent job succinctly describing the query packs in the following manner:

"Osquery supports grouping of queries (called query packs) which run on a scheduled basis and log the results to a configurable destination.

Query Packs are useful for monitoring specific attributes of hosts over time and can be used for alerting and incident response investigations. By default, queries added to packs run every hour (interval = 3600s).

Queries can be run in two modes:

  1. Differential = Only record data that has changed.
  2. Snapshot = Record full query result each time.

Packs are distributed to specified targets. Targets may be individual hosts or groups of hosts called labels.

The results of queries run via query packs are stored in log files for your convenience. We recommend forwarding these logs to a log aggregation tool or other actionable tool for further analysis. These logs can be found in the following locations:

    Status Log: /path/to/status/logs
    Result Log: /path/to/result/logs"


Creating Saved Queries

Packs sound like a great step toward big data integration, but first we need to create a saved query by doing the following (our example below queries users):

In the Kolide Web UI, click Query on the left hand navigation > New Query
  • Query Title:  Users Query
  • SQL:  SELECT * FROM users
  • Description:  Query all users
  • Select Targets:  All Hosts
Click the Save button > Save as New

Figure 1:  Adding a new saved user query

Creating Query Packs

Now that we have a saved query, let's schedule it using a Pack.

Click Packs on the left hand navigation > New Pack

  • Query Pack Title:  Users Pack
  • Query Pack Description:  Query all users

Click the Save Query Pack button

Figure 2:  Creating a new users pack

In the next screen, on the far right hand side, select the Users Query that we created earlier and fill in the fields to define the pack properties:
  • Interval:  60  (Just so we get some data to play with)
  • Platform:  All
  • Minimum version:  All
  • Logging:  Snapshot  (Just so we get some data to play with)

Figure 3:  Defining the User Pack properties

Query Pack Output

With our current minimalist configuration (shown below in fleet.yaml), the pack logs are written by default to the following locations on disk:
  • /tmp/osquery_result
  • /tmp/osquery_status


cat /opt/fleet/conf/fleet.yaml 

mysql:
  address: 127.0.0.1:3306
  database: kolide
  username: root
  password: toor
redis:
  address: 127.0.0.1:6379
server:
  cert: /opt/fleet/ssl/fleetserver-cert.crt
  key: /opt/fleet/ssl/fleetserver-cert.key
  address: 0.0.0.0:443
auth:
  jwt_key: strong_key
logging:
  json: true


If we wanted to send the logs to a larger drive, we could add the following to our fleet.yaml configuration (enabling log rotation keeps 500 MB or 28 days of data):

filesystem:
  status_log_file: /path/to/drive/osquery/status.log
  result_log_file: /path/to/drive/osquery/result.log
  enable_log_rotation: true

(For our lab environment, we wrote it to:  /data/osquery/)

Just remember to restart the Kolide Fleet service using the following:

service fleet-service restart

For a full list of Fleet configuration options (such as sending to firehose, etc.):
https://github.com/kolide/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md


Conclusion

This article covered how to create saved queries, configure and schedule query packs to run on a regular basis, and how to send this data to a specified file so we can pick up the results and send them to a big data analytics platform.  In the next couple of articles we will cover how to manage the fleet manager and advanced integrations.

Saturday, May 25, 2019

osquery - Part II - Kolide Centralized Management

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide, and then finally onto more advanced integrations and analysis.  We already covered stand-alone local osquery interaction in Part I of this series:

http://securitysynapse.blogspot.com/2019/05/osquery-part-i-local-agent-interaction.html

However, we quickly noticed that it does not scale to hundreds of thousands of hosts -- thus we need a centralized management platform.  In this article, we will examine the freely available Kolide Fleet.

What is Kolide?

Kolide (https://kolide.com/) is a centralized osquery agent management platform.  As of the writing of this article, there are two versions:  Cloud and on-prem Fleet.  Currently Kolide Cloud runs about $6 per endpoint.  However, for our needs, we will kick the tires with the on-prem Kolide Fleet (https://kolide.com/fleet) which is offered free of charge.


Kolide Fleet Dependencies and Installation

Kolide Fleet has a few significant dependencies:
  • *nix based operating system
  • MySQL version 5.7 (or greater) - used as Fleet's primary database
  • Redis - "ingest and queue results of distributed queries, cache data, etc."

Due to these dependencies, setup can be a little painful and time-consuming; however, we found a pretty awesome Fleet installation script (https://github.com/deeso/fleet-deployment) from Adam Pridgen (https://www.linkedin.com/in/-dso-/) that works great for our lab environment running Ubuntu.

Installation

Follow these steps to get up and running quickly:

git clone https://github.com/deeso/fleet-deployment.git
cd fleet-deployment/fleet-server-install
cp passwords.example passwords.sh

** Using your favorite text editor (such as vim), update the MYSQL_PASS variable with the MySQL password and the JWT_KEY variable with a strong key:

vim passwords.sh

Now run the installer script:
bash install.sh

NOTE:  During the SSL certificate creation phase, you will be asked for a "Common Name" / server FQDN (see below) -- be sure to use the server name. 

Ex: Common Name (e.g. server FQDN or YOUR name) []:<ENTER IT HERE>


This will matter later when you try to connect via fleetctl.  If you do not specify the server name, you will see the following error message when attempting to log in:

"error logging in: POST /api/v1/kolide/login: Post https://<hostname>:443/api/v1/kolide/login: x509: certificate is not valid for any names, but wanted to match localhost"


Check on the status of the service:

service fleet-service status

When complete, open a browser and navigate to https://localhost to complete the Kolide setup by specifying the user, organization, and Kolide URL.

Figure 1:  Kolide Fleet Setup Complete


Joining an agent to Kolide Fleet

If you installed osquery as a stand-alone agent during the Part I article, feel free to uninstall it.  We now need to install osquery agents and get them to connect to our Kolide Fleet server.

"To connect a host to Kolide Fleet, you have two general options. 

1)  You can install the osquery binaries on your hosts via the packages distributed at https://osquery.io/downloads 

- or -  

2)  You can use the Kolide osquery Launcher

The Launcher is a light wrapper that aims to make running and deploying osquery easier by adding a few features and minimizing the configuration interface. Some features of The Launcher are:

  • Secure autoupdates to the latest stable osqueryd
  • Remote communication via a strongly-typed, versioned, modern gRPC server API
  • a curated kolide_best_practices table which includes a curated set of standards for the modern enterprise"

Source:  https://github.com/kolide/fleet/blob/master/docs/infrastructure/adding-hosts-to-fleet.md


Using the Kolide osquery Launcher

For this article, we will use the Kolide osquery Launcher to connect a host to our Kolide Fleet server.  The launcher can be obtained as source or pre-compiled binaries from here:  https://github.com/kolide/launcher/releases

Then you will need to obtain the enrollment secret from the Kolide Fleet Server web UI by clicking on the "Add New Host" link.

Figure 2:  Obtain enrollment secret from the Kolide Fleet web UI

Once you have the launcher binary and enrollment secret, run something similar to the following (where 192.168.21.129 is your Kolide server):

launcher.exe --hostname=192.168.21.129:443 --root_directory=c:\programdata\osquery --enroll_secret=6Ua**snip**rUc --insecure
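
On a Linux host, the equivalent launcher invocation might look like the following -- the flags are the same, and the root directory shown here is just an assumption (use any writable directory you prefer):

./launcher --hostname=192.168.21.129:443 --root_directory=/var/kolide-launcher --enroll_secret=6Ua**snip**rUc --insecure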

The host will check in and you will be able to run queries from Kolide Fleet.

Figure 3:  Host checked into Kolide Fleet

To run queries, use the side navigation in the Kolide Fleet UI and click Query > New Query.  Type the SQL query you want to run (autocomplete is available), select the target(s), and click Run.  The output from the hosts will appear at the bottom of the screen.


Figure 4:  Running a query from Kolide Fleet


Conclusion

At this point you should have the basic building blocks for deploying osquery agents and having them check into Kolide Fleet.  This centralized management is quite powerful.  Being able to view (and export) the data from multiple hosts is also powerful, but viewing the results in this interface is a bit limiting--especially when processing results from thousands of hosts.  In the next couple of articles we will examine fleet control and integration possibilities that will allow processing and stacking the data using a big data analytics platform.


Friday, May 24, 2019

osquery - Part I - Local Agent Interaction

By Tony Lee and Matt Kemelhar

This series on osquery will take us on a journey from stand-alone agents, to managing multiple agents with Kolide, and then finally onto more advanced integrations, queries, and analysis.  Crawl, walk, run, right?  Ok, let's start crawling.

What is osquery?

osquery (https://osquery.io/) is an open source agent developed by Facebook that allows organizations to query endpoints running varying operating systems using the same SQL syntax. These queries can be used for security, compliance, or DevOps as event-based, user-driven, or scheduled information gathering. Once the user learns the SQL syntax and osquery schema, queries work the same (for the most part) across multiple operating systems [Windows, macOS, FreeBSD, Debian- and RPM-based Linux, etc.].

For example, listing processes on Windows can be accomplished natively using the tasklist command, while on Linux/Unix the same task uses the ps command.  In osquery, regardless of the operating system, it can be accomplished with select * from processes;  While this may seem more cumbersome at first, the advantage is a single query and normalized output across all supported operating systems.
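
To narrow the output, you can also ask for just the columns you care about using standard SQL.  The column names below come from the osquery processes table, and the WHERE clause is purely illustrative:

SELECT pid, name, path FROM processes;
SELECT pid, name, path FROM processes WHERE name LIKE 'osquery%';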

Installation

Installation is simple using one of the provided installers found here: 
https://osquery.io/downloads/official

There are installation instructions for each operating system in the docs section of the site:

For example, if you are looking for Windows installation instructions you would go here: 
https://osquery.readthedocs.io/en/stable/installation/install-windows/

For the majority of this article, it is simple: we will download the Windows .msi and double-click it.

Interaction

Once osquery is installed (in this example on Windows), you can check to make sure the default installation path was created and populated.  In Windows, it is:  C:\ProgramData\osquery


Then in a command prompt, check to see if the osqueryd agent is running using the following command:

C:\>sc.exe query osqueryd

SERVICE_NAME: osqueryd
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING
                                (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0

        WAIT_HINT          : 0x0


If it is not running, try using:

C:\>sc.exe start osqueryd


Once running, we should be able to start the local client (osqueryi.exe) and run some queries.  By default it is located in:  c:\programdata\osquery\osqueryi.exe.  Run this from the command line and you will receive a new osquery prompt.  Try the following to ensure that the agent and client are working properly:

osquery> select * from uptime;

+------+-------+---------+---------+---------------+
| days | hours | minutes | seconds | total_seconds |
+------+-------+---------+---------+---------------+
| 21   | 10    | 17      | 34      | 1851454       |
+------+-------+---------+---------+---------------+

Here are a few useful commands to remember:
.help = help menu
.tables = list all the possible tables to query
.summary = version and configuration
.mode = change the output mode:  csv, column, line, list, pretty (default) -- see the example after this list
.exit = leave the program
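
As a quick illustration of .mode, switching to csv before re-running the uptime query renders the same row as comma-separated values instead of the ASCII table (the exact rendering may vary slightly between osquery versions):

osquery> .mode csv
osquery> select * from uptime;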

Pro-tip:  The osqueryi client remembers command history so use the up and down arrows liberally.

Online Schema

We showed you a couple of queries so far, but how are you supposed to know what else exists?

1)  You can run .tables within the osqueryi client

2)  You can use the online schema (https://osquery.io/schema/) that contains every table, all columns, types, descriptions, and even displays the operating systems supported.


Figure 1:  The osquery schema - a great reference

Linux Example

For those with Linux, it is just as easy.  At the time of this writing here is the latest release:

Download:
wget https://pkg.osquery.io/deb/osquery_3.3.2_1.linux.amd64.deb


Install:
dpkg -i osquery_3.3.2_1.linux.amd64.deb


Usage:
root@ubuntu:~/osquery# osqueryi 
-- snip --
successfully completed!
Using a virtual database. Need help, type '.help'

osquery> select * from osquery_info;


Uninstall:
dpkg --remove osquery


Conclusion

Now that we understand the basics of osquery installation and local client usage, it should be very apparent that this will not scale to hundreds of thousands of hosts.  Thus, we need an osquery manager to make it enterprise ready.  However, we will leave this topic to the next article.

Thursday, May 9, 2019

Splunk Dashboard Tricks - Update Time Range for All Panels Using Splunk Timechart Selection

By Tony Lee


Have you ever wanted to update the time range for all of the panels in a dashboard using a timechart selection? (See screenshot below)


Figure 1:  Timechart selection to update earliest and latest variables

This feat is possible with a very small amount of code, but it is not the most intuitive process -- which makes it a perfect topic for a blog article.

At first we thought this would be a drilldown feature and spent many precious minutes in the GUI editor. However, our sharp colleague Arjun Mathew pointed out an obscure docs article that contained information regarding "selection". We then found this more concise article on Chart Controls:

https://docs.splunk.com/Documentation/Splunk/7.2.3/Viz/Chartcontrols


How it works

As mentioned before, we do not believe this is exposed through the GUI, so you will need to use the Simple XML editor.  We are updating the dashboard code (specifically the first timechart panel) that we provided in the 4740 account lockout article (http://securitysynapse.com/2018/08/troubleshooting-windows-account-lockout-part-i.html) to add this feature.

Inside of the <chart> tags, we will add the following:

        <selection>
          <set token="form.time.earliest">$start$</set>
          <set token="form.time.latest">$end$</set>
        </selection>

This sets the form.time.earliest and form.time.latest tokens in the dashboard in real time, which in turn controls all of the remaining panels in the 4740 dashboard -- a perfect use case for using a timechart selection to drive the sub-panels.
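
For context, here is a stripped-down sketch of how the pieces fit together in Simple XML.  It assumes the dashboard's time picker input uses the token name "time" (which is why setting form.time.earliest and form.time.latest updates it) and that the remaining panels simply reference $time.earliest$ and $time.latest$ in their searches; the placeholder search is not the actual 4740 query:

<form>
  <fieldset>
    <input type="time" token="time">
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>... your timechart search here ...</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <!-- Dragging a selection on this chart updates the time input for the whole dashboard -->
        <selection>
          <set token="form.time.earliest">$start$</set>
          <set token="form.time.latest">$end$</set>
        </selection>
      </chart>
    </panel>
  </row>
</form>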

Conclusion

We hope that by highlighting the selection tag it gets more use in creating a better user experience. For now, it is not exposed through the web UI editor; however, as its popularity grows, this may change.  Happy Splunking!