Thursday, July 25, 2019

Wyze Cameras - Keeping Honest Vendors Honest - Part II – Test Results

By Tony Lee

Quick Recap from part I in the series:
It can be difficult for a security nerd to inherently trust cloud vendors and products that do not keep all data on-premises—especially when it comes to home automation IoT devices such as cameras since they can record sensitive data. One such product with excellent reviews, ample capability, and a very reasonable price is the ever-popular Wyze Camera. So, I snatched up a Wyze Cam v2 triple pack that went on sale, but became concerned after reading reviews and even Reddit threads found here (https://www.reddit.com/r/wyzecam/comments/beq0sk/do_you_trust_wyze/) and here (https://www.reddit.com/r/wyzecam/comments/7cykgf/wyzecam_sending_data_to_servers_other_than_aws/) mentioning that the data was possibly going to China. Note: while no country is perfect, not all countries condone state-sponsored corporate espionage and mass general population data collection.  Just sayin...

One positive note at the time was that the manufacturer seemed to be chiming in on the Reddit threads, explaining that they had attempted a fix and needed someone to test again. So, in order to test the validity of the reviews and to help answer WyzeTao in the second thread:  “We are asking help from some Reddit forum helpers to help check.”, we needed to set up our own environment.  This blog series outlines both the setup involved and the results.
If you are setting this up yourself, you should refer to Part I - Setup here:   http://www.securitysynapse.com/2019/07/wyze-cameras-keeping-honest-vendors-honest-I.html

Spoiler:
Our camera arrived with firmware version 4.9.2.52 (Release date: October 22, 2018) and we upgraded to the latest version at the time, 4.9.3.64 (Release date: December 17, 2018). We found that the other reviewers were correct in that the data was going to China (and other countries) due to a content distributor that Wyze uses; however, after working with the very responsive manufacturer, Wyze corrected the issue for everyone. So a huge thanks goes to Tao and Martin at Wyze for their great handling of this responsible disclosure.  Now, please update your mobile app and camera to the latest versions (or newer) found below :-)

Corrected Versions:
Mobile app:  V2.4.24 (release date:  July 9th, 2019)
Wyze Cam v2 Firmware V4.9.4.108 (Release date: July 8, 2019)  <-- Update your camera firmware!



Traffic Analysis

After completing the setup in Part I of this series and opening Wireshark, it is now time to analyze the traffic. We mentioned previously that we set a display filter (ex:  ip.addr==192.168.8.214) to narrow in on only traffic to and from the Wyze Camera.

Figure 5:  Wyze Cam v2 traffic


As you can see, the Wyze Camera is making DNS requests for:

  • gm.iotcplatform.com
  • cm.iotcplatform.com


These FQDNs resolved to the following IP addresses:
gm.iotcplatform.com

  • 52.79.197.188
  • 50.7.98.242
  • 198.16.70.58


cm.iotcplatform.com

  • 120.24.59.150


Using MaxMind GeoIP2, these IPs are located in the following countries:


Figure 6:  GeoIP resolution


This leaves us with the following:
gm.iotcplatform.com

  • 52.79.197.188 - Incheon, South Korea - Amazon.com
  • 50.7.98.242 - Los Angeles, United States - FDCservers.net
  • 198.16.70.58 - Amsterdam, Netherlands - FDCservers.net 


cm.iotcplatform.com

  • 120.24.59.150 - China - Hangzhou Alibaba Advertising Co.,Ltd.


Traffic Sent

If you were wondering if actual camera traffic was sent through China (via 120.24.59.150), it was indeed.


Figure 7:  Traffic sent to China


That said, the data by default does not use RTSP and could not easily be interpreted. Per Wyze, “The contents are encrypted via AES 128-bit encryption to protect the security of the live stream and playback data. During the connection process, every device in the process has its own secret key and certification, so that we can validate their identity during handshake. Even if a hacker intercepts the data package, the data cannot be decrypted.”

Source:  https://support.wyzecam.com/hc/en-us/articles/360009314072-Security-Privacy- 


Working with Wyze

After reporting the issue to Wyze tech support, they were extremely professional and concerned that the previous patch did not work. They worked quickly to provide a solution and test firmware (test version 4.9.4.44) that appeared to fix the issue.

Instead of the previous firmware querying "gm.iotcplatform.com" and "cm.iotcplatform.com", the new firmware queries "us-master.iotcplatform.com".  Just to be thorough, we let it run a bit and monitored for other traffic and found the following:

api.wyzecam.com

  • 34.208.107.136
  • 35.161.164.220
  • 35.167.190.246


wyze-iot.s3-us-west-2.amazonaws.com

  • 52.218.160.49


a24rq1e5m4mtei-ats.iot.us-west-2.amazonaws.com

  • 35.160.15.131


us-master.iotcplatform.com

  • 50.19.254.134
  • 50.7.98.242

Figure 8:  New GeoIP results


Conclusion

The initial contact took a little while; however, over a one-month period of working with the vendor, they were able to correct the issue. The level of detail and follow-through was greatly appreciated. Wyze engineers took our concerns seriously and delivered an acceptable solution. Based on our interactions, they appear to be an honest and transparent company that is focused on doing right by their customers. That is just one more reason in my book for us to purchase more Wyze cameras.


Disclaimer:  We do not work for Wyze (or any of the vendors mentioned) and do not benefit from this article in any way. All cameras were purchased the same way anyone else would purchase them.  We do like their customer service, quality of the goods, and prices though.  :-)

Monday, July 22, 2019

Wyze Cameras - Keeping Honest Vendors Honest - Part I - Setup

By Tony Lee

Background:
It can be difficult for a security nerd to inherently trust cloud vendors and products that do not keep all data on-premises—especially when it comes to home security/automation IoT devices such as cameras since they can record sensitive data. One such product with excellent reviews, ample capability, and a very reasonable price is the ever-popular Wyze Camera. So, I snatched up a Wyze Cam v2 triple pack that went on sale, but became concerned after reading reviews and even Reddit threads found here (https://www.reddit.com/r/wyzecam/comments/beq0sk/do_you_trust_wyze/) and here (https://www.reddit.com/r/wyzecam/comments/7cykgf/wyzecam_sending_data_to_servers_other_than_aws/) mentioning that the data was possibly going to China. Note: while no country is perfect, not all countries condone state-sponsored corporate espionage and mass general population data collection.  Just sayin...

One positive note at the time was that the manufacturer seemed to be chiming in on the Reddit threads, explaining that they had attempted a fix and needed someone to test again. So, in order to test the validity of the reviews and to help answer WyzeTao in the second thread:  “We are asking help from some Reddit forum helpers to help check.”, we needed to set up our own environment.  This blog series outlines both the setup involved and the results.

Spoiler:
Since this is a two part series and we want readers to benefit from the latest security enhancements, we are providing a spoiler in the first article. Our camera arrived with firmware version 4.9.2.52 (Release date: October 22, 2018) and we upgraded to the latest version at the time, 4.9.3.64 (Release date: December 17, 2018). We found that the other reviewers were correct in that the data was going to China (and other countries) due to a content delivery network that Wyze uses; however, after working with the very responsive manufacturer to test and retest, Wyze corrected the issue for everyone. So a huge thanks goes to Tao and Martin at Wyze for listening to customer concerns and their great handling of responsible disclosures. Now, please update your mobile app and camera to the latest versions (or newer) found below :-)

Corrected Versions:
Mobile app:  V2.4.24 (release date:  July 9th, 2019)
Wyze Cam v2 Firmware V4.9.4.108 (Release date: July 8, 2019)  <-- Update your camera firmware too!

Figure 1:  The ever-popular (and pretty awesome) Wyze Cam 1080p HD Indoor Wireless Smart Home Camera

Test Environment

The hardware and software in our environment are a mixture of what we had on hand and what was required to compensate for the lack of existing features.  Also keep in mind that there are quite a few ways to test these devices; we are presenting just one of the solutions here.

Hardware:

  • Wyze Cam v2
  • eero Pro WiFi System (Set of 3 eero Pros) – 2nd Generation
  • GL-iNet AR750s
  • Standard laptop
  • USB Ethernet adapter


Software:

  • Windows 10 base OS
  • Kali Linux OS running in VMware Workstation with the USB Ethernet adapter connected as pass-through



Quick Note on Limitations of Mesh Routers (Including the Eero Pro WiFi System)

One potentially tricky scenario in monitoring wireless traffic on a mesh network is determining the AP to which the device connects and keeping it on that AP. To avoid that issue, ideally it should be simple to monitor the last-hop AP that connects to the Internet source (a cable modem in our case), but this is not always a provided feature. It certainly isn’t a feature in the Eero Pro. Don’t get us wrong, the Eero hardware and reliability make it one of the best mesh setups around, but the lack of advanced features is depressing—especially for the price tag (~$500) (https://www.amazon.com/eero-Home-WiFi-System-Beacon/dp/B071DWXLYL/). Maybe things will change after the semi-recent Amazon acquisition (https://www.theverge.com/2019/2/11/18220960/amazon-eero-acquisition-announced).  Fingers crossed!

Figure 2:  Typical Mesh network diagram (courtesy of Eero)


Work Around to Sniff Wireless Traffic

Since the Eero woefully lacks a way to route the traffic to a SPAN port, we purchased a GL.iNet GL-AR750S-Ext Gigabit Travel AC Router (https://www.amazon.com/GL-iNet-GL-AR750S-Ext-pre-Installed-Cloudflare-Included/dp/B07GBXMBQF) to do so.  The impressive stats on this compact device are as follows:


  • Dual band AC750 Wi-Fi: 433Mbps(5G) +300Mbps(2.4G)
  • QCA9563 SoC @775MHz
  • 128MB RAM, 16MB NOR Flash and 128 MB NAND Flash
  • Up to 128GB MicroSD slot
  • USB 2.0 port
  • Three Ethernet ports (1 WAN, 2 LAN)
  • Powered by Micro USB 5V/2A power supply
  • And best of all:  OpenWrt pre-installed


Configuration and Setup

Now that we know the hardware, let’s jump into it.

Wiring:
 1) Cable Modem --> Wireless router --> Wireless Mesh receiver --> Hardwire to WAN port of AR750s
 2) AR750s Switch port --> USB ethernet adapter (connected to Kali VM)

Figure 3:  Wiring and configuration

First configuration of the router:

  • Power up the router
  • Connect wirelessly using the supplied wireless SSID and default password: goodlife
  • Upon connecting to the web UI (ex: http://192.168.8.1) you will be required to set a password for the router admin


Figure 4:  Web UI that shows the Wyze Cam v2 target and the Kali host to send the SPAN data


Setting up a SPAN port:
 - PuTTY or SSH to the router (ex: 192.168.8.1) with proper credentials (ex:  root:<password set above>)
 - Run the following to set up a SPAN port:

Syntax to set up a SPAN port:
iptables -t mangle -A PREROUTING -j TEE --gateway <IP of Kali VM>
iptables -t mangle -A POSTROUTING -j TEE --gateway <IP of Kali VM>

Example (where our Kali VM IP is 192.168.8.217):
iptables -t mangle -A PREROUTING -j TEE --gateway 192.168.8.217
iptables -t mangle -A POSTROUTING -j TEE --gateway 192.168.8.217

NOTE:  If you get the following error:
iptables v1.6.2: unknown option "--gateway"

 - Run the following and then the iptables commands again
opkg update
opkg install iptables-mod-tee kmod-ipt-tee


Figure 5:  SSH into the AR750S and setting up the SPAN port to go to the Kali Linux host (192.168.8.217)



Sniff Traffic:

  • Open Wireshark and sniff on the same interface specified above and you should now see all traffic to and from the AR750S.
  • Pro-tip:  Use a filter in Wireshark to limit traffic to just the device you want to monitor (in our case it is the Wyze Camera)
    • Ex:  ip.addr==<IP ADDRESS>


Conclusion

Now that we have covered our hardware, software, setup, and how to create a SPAN port of the AR750S wireless traffic… we are ready to cover our findings.  But, we will save that for Part II of the series found here:  http://securitysynapse.blogspot.com/2019/07/wyze-cameras-keeping-honest-vendors-honest-II.html

Tuesday, July 9, 2019

Creating Network Device CLI Visibility in Splunk

By Tony Lee

Creating visibility is not always a popular topic--especially when that visibility can be used to hold folks accountable. But if you have ever had a network outage due to an incorrect command or change in configuration, it may save your bacon to know how and where to correct it. The same is especially true if a network administrator account is ever compromised.

This article will show how to create a running log of searchable commands executed on Cisco ASA and Juniper devices. As a bonus, we will provide the dashboard code at the end of the article.

Figure 1:  Network Change Management dashboard

Raw Logs

Let's take a moment to see how these logs typically look.

Cisco

2017-10-20T08:33:09+00:00 admin : %ASA-5-111010: User 'TONYLEE', running 'CLI' from IP 10.10.10.10, executed 'write terminal'

Juniper

2017-10-20T04:18:30+00:00 JuniperRTR mgd[91265]: UI_CMDLINE_READ_LINE: User 'TONYLEE', command 'show igmp interface'
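For readers who want to experiment with these events outside of Splunk, the two sample lines above can be pulled apart with a couple of regexes. This is just an illustrative sketch; the regexes and the parse_cli_event helper are our own, not taken from the Cisco or Juniper TAs:

```python
import re

# Hypothetical regexes (not from the Splunk TAs) that pull the interesting
# fields out of the two sample audit lines shown above.
CISCO_RE = re.compile(
    r"%ASA-5-111010: User '(?P<user>[^']+)', running '(?P<shell>[^']+)' "
    r"from IP (?P<src>[\d.]+), executed '(?P<command>[^']+)'"
)
JUNIPER_RE = re.compile(
    r"UI_CMDLINE_READ_LINE: User '(?P<user>[^']+)', command '(?P<command>[^']+)'"
)

def parse_cli_event(line):
    """Return a dict of fields from a Cisco ASA or Juniper CLI audit line, or None."""
    for rx in (CISCO_RE, JUNIPER_RE):
        m = rx.search(line)
        if m:
            return m.groupdict()
    return None

cisco = ("2017-10-20T08:33:09+00:00 admin : %ASA-5-111010: User 'TONYLEE', "
         "running 'CLI' from IP 10.10.10.10, executed 'write terminal'")
juniper = ("2017-10-20T04:18:30+00:00 JuniperRTR mgd[91265]: UI_CMDLINE_READ_LINE: "
           "User 'TONYLEE', command 'show igmp interface'")

print(parse_cli_event(cisco))    # user, shell, src, command
print(parse_cli_event(juniper))  # user, command
```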




Logic to Find CLI Commands

We provided the exact Cisco and Juniper packets you are seeking, but these will not be the only packets. This is the logic we used to find them:

Cisco

index=cisco-asa message_id=111010 command!="changeto*"

Juniper

index=juniper UI_CMDLINE_READ_LINE

Source:  https://www.cisco.com/en/US/docs/security/asa/asa80/system/message/logmsgs.html


Fields to Parse

There are some fields that are critical in terms of making this data useful, such as:

  • Event time
  • Device changed
  • Source of change
  • User
  • Command executed

Fortunately, Splunk should have a Cisco and Juniper TA to parse these events.  If not, respond here and we will help with the regex.


Searches

Now that we have the fields parsed, we need two searches to help us gain visibility into CLI commands:

Cisco:
index=cisco-asa message_id=111010 command!="changeto*" $wild$ | table _time, host, user, src, command | rename host AS CiscoHost, src as SourceIP


Juniper:
index=juniper UI_CMDLINE_READ_LINE $wild$ | table _time, host, user, command, _raw


Conclusion

Creating visibility is not always popular, but it sure is helpful when troubleshooting. We hope this article helped others save time. Let us know what you think by leaving a comment below. Happy Splunking!


Dashboard Code

The following dashboard assumes that the appropriate logs are being collected and sent to Splunk. Additionally, the dashboard code assumes an index of cisco-asa and an index of juniper. Feel free to adjust as necessary. Splunk dashboard code provided below:


<form>
  <label>Network Change Management</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="time" searchWhenChanged="false">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="wild" searchWhenChanged="false">
      <label>Wildcard Search</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Cisco ASA (message_id=111010 excluding changeto events)</title>
        <search>
          <query>index=cisco-asa message_id=111010 command!="changeto*" $wild$ | table _time, host, user, src, command | rename host AS CiscoHost, src as SourceIP</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Juniper (UI_CMDLINE_READ_LINE)</title>
        <search>
          <query>index=juniper UI_CMDLINE_READ_LINE $wild$ | table _time, host, user, command, _raw</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>



Monday, July 1, 2019

Quick and Flexible IOC Hunting in Splunk

By Tony Lee and Arjun Mathew

Imagine that you are battling a known threat actor.  You have gathered indicators of compromise (IOCs) from reversing malware as well as helpful contributions from the rest of the security community.  But how could those IOCs be tasked across your existing data quickly in order to track attacker movement in real-time?  Here is one possible solution:
  1. Use a lookup file
  2. Clever Splunk search
  3. Even more clever dashboard
This article will outline the process and even share an example dashboard (shown in the screenshot below).

Figure 1:  Known IOC Dashboard provided at the end of the article

Lookup File

We used the following process to create a lookup file and definition.  Create a file in Excel and save it as a CSV called known_iocs.csv (similar to the file below).

Figure 2:  CSV that we initially populated with our IOCs
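If you prefer to script the file instead of using Excel, a minimal sketch like the following produces an equivalent CSV. The column names (Hash, FileName, Domain, IP) match the ones referenced by the searches later in this article; the sample row is a placeholder:

```python
import csv

# Column names match those referenced by the searches later in this article;
# the example row below is a placeholder value, not a real IOC.
rows = [
    {"Hash": "d41d8cd98f00b204e9800998ecf8427e", "FileName": "evil.exe",
     "Domain": "bad.example.com", "IP": "203.0.113.10"},
]

with open("known_iocs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Hash", "FileName", "Domain", "IP"])
    writer.writeheader()
    writer.writerows(rows)
```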

Then within Splunk, navigate to the following to create the lookup and the definition:

Settings > Lookups > Lookup table files > Add new

  • Destination app:  Select the app
  • Upload a lookup file:  known_iocs.csv
  • Destination filename:  known_iocs.csv


Settings > Lookups > Lookup definitions > Add new

  • Destination app:  Select the app
  • Name:  known_iocs.csv
  • Type:  File-based
  • Lookup file:  known_iocs.csv


Now, here is the problem.  How do you scale this solution for a group effort to update a lookup table with IOCs?  It does not work well to pass the CSV around and then constantly upload.  Enter another graceful solution from Luke Murphey -- The Lookup File Editor Splunk App (https://splunkbase.splunk.com/app/1724/).

Figure 3:  Lookup File Editor App from Luke Murphey

Once the Lookup File Editor Splunk App is installed, navigate to it and search for your known_iocs.csv file.  Open it, right-click on the bottom line, and select "Insert a new row".  You can edit the lookup file right in Splunk.  Once it is saved, the correlation searches will automatically run with the new IOC data.


Figure 4:  Inserting a new line into our known_iocs.csv file

Clever Search

Now that we have a lookup table that has our IOCs in it and a convenient way to edit it, we just need a search that will apply the IOCs to our data.  The example below applies the IOCs to the cylance_protect index, but feel free to change the index name as needed.  Additionally, we show how to search just one column of the IOC data as well as multiple columns.


One type of IOC (Hash):

index=cylance_protect [|inputlookup known_iocs.csv | rename Hash as query | table query] | stats count


Two types of IOCs (Hash & FileName)

index=cylance_protect [|inputlookup known_iocs.csv | rename Hash as query | table query] OR [|inputlookup known_iocs.csv | rename FileName as query | table query] | stats count

Note the OR statement between the two inputlookups -- needed when querying multiple columns.
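To make that OR logic concrete, here is a rough Python illustration of what the two subsearches accomplish: an event counts as a hit if it contains any value from either IOC column. This only mirrors the idea; it is not how Splunk actually executes the search:

```python
# Illustration only: an event is a hit if it contains any value from
# either IOC column, mirroring the OR between the two inputlookup subsearches.
# The IOC values and events below are made-up examples.
iocs = {
    "Hash": ["d41d8cd98f00b204e9800998ecf8427e"],
    "FileName": ["evil.exe"],
}
events = [
    "host1 wrote evil.exe to C:\\Temp",
    "host2 hash=d41d8cd98f00b204e9800998ecf8427e quarantined",
    "host3 normal activity",
]

# Flatten every column into one list of search terms (the OR).
terms = [value for column in iocs.values() for value in column]
hits = [e for e in events if any(t in e for t in terms)]
print(len(hits))  # 2
```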


Figure 5:  What will be our top panels showing a count of the hits


Even More Clever Dashboard

Now that we have functional searches, we need a dashboard to monitor our different data feeds such as:
  • Proxy
  • Firewalls
  • DNS
  • Antivirus Hits
  • Email Protection
  • Windows Event Logs

You can see in the screenshot below that we use Single Value panels on the top row.  Each of these panels contains a dynamic drilldown to populate the panel below it with the contents of the Single Value panel when clicked.


Figure 6:  Dashboard displayed at the start of the article and in the Sample Dashboard section below

The drilldown for each Single Value panel sets a token which is essentially the search, but without the stats count (feel free to table the data as needed):

        <drilldown>
          <set token="alert">index=proxy $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | table _raw</set>
        </drilldown>



Then the bottom panel is just a search of the token set in the drilldown above.

        <search>
          <query>| search $alert$</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>

Conclusion

Using a clever combination of features that already exist within Splunk (for the most part), we were able to create a quick method to update an IOC list and apply it against existing data within Splunk. Simply monitor these dashboards and use them to track the attacker's activities in real-time.


Sample Dashboard

The sample dashboard below uses a number of indexes to search over different data feeds.  Just change these indexes to the ones you are interested in monitoring.


<form>
  <label>Known IOC Hits</label>
  <description>Threat Actor</description>
  <fieldset submitButton="true">
    <input type="time" searchWhenChanged="true" token="time">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" searchWhenChanged="true" token="wild">
      <label>Wildcard Search</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <title>Proxy</title>
        <search>
          <query>index=proxy $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | stats count</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="colorMode">none</option>
        <option name="drilldown">all</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="useColors">1</option>
        <drilldown>
          <set token="alert">index=proxy $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | table _raw</set>
        </drilldown>
      </single>
    </panel>
    <panel>
      <single>
        <title>Firewalls</title>
        <search>
          <query>index=firewalls $wild$ [|inputlookup known_iocs.csv | rename IP as query | table query] | stats count</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">all</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
        <drilldown>
          <set token="alert">index=firewalls $wild$ [|inputlookup known_iocs.csv | rename IP as query | table query] | table _raw</set>
        </drilldown>
      </single>
    </panel>
    <panel>
      <single>
        <title>DNS</title>
        <search>
          <query>index=dns $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | stats count</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">all</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="useColors">1</option>
        <drilldown>
          <set token="alert">index=dns $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | table _raw</set>
        </drilldown>
      </single>
    </panel>
    <panel>
      <single>
        <title>Antivirus Hits</title>
        <search>
          <query>index=av $wild$ [|inputlookup known_iocs.csv | rename Hash as query | table query] | stats count</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">all</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="useColors">1</option>
        <drilldown>
          <set token="alert">index=av $wild$ [|inputlookup known_iocs.csv | rename Hash as query | table query] | table _raw</set>
        </drilldown>
      </single>
    </panel>
    <panel>
      <single>
        <title>Email Protection</title>
        <search>
          <query>index=mail_protection $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | stats count</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">all</option>
        <option name="rangeColors">["0x65a637","0xd93f3c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="useColors">1</option>
        <drilldown>
          <set token="alert">index=mail_protection $wild$ [|inputlookup known_iocs.csv | rename Domain as query | table query] | table _raw</set>
        </drilldown>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Information Table (Click one of the numbers above to populate this table with Details)</title>
      <table>
        <search>
          <query>| search $alert$</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
      </table>
    </panel>
  </row>
</form>

Friday, June 21, 2019

Microsoft Bug? Shift + Delete does not generate 4660 Event

By Tony Lee

If you have ever searched for anything along the lines of:  "How do I discover who deleted a file", you will probably find a dozen articles or more telling you to check for Windows Event ID 4660.

Some examples of excellent search results and resources can be found below (lmgtfy):

Then the article will most likely mention that Event ID 4660 lacks the object name (son of a biscuit!) and that you will need to map the event using the handle ID to Event ID 4656 or Event ID 4663 (with Accesses=DELETE).  No problem, we've got this.  But what do you do if 4660 is not always created? This can happen!  (Dun... dun... duuuuunn.....)
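The handle ID mapping described above can be sketched in a few lines of Python. The field names and sample values here are simplified stand-ins for the parsed Windows event fields, not the exact names your log pipeline will produce:

```python
# Sketch of the handle ID join: recover the object name for each 4660
# ("An object was deleted.") from the matching 4663 (Accesses=DELETE).
# Field names and values below are simplified examples, not real logs.
ev_4663 = [
    {"HandleId": "0x1a4", "ObjectName": "C:\\Temp\\secret.txt", "Accesses": "DELETE"},
    {"HandleId": "0x2b0", "ObjectName": "C:\\Temp\\other.txt", "Accesses": "ReadData"},
]
ev_4660 = [
    {"HandleId": "0x1a4", "Message": "An object was deleted."},
]

# Index the 4663 DELETE events by handle so each 4660 can look up its object name.
by_handle = {e["HandleId"]: e for e in ev_4663 if e["Accesses"] == "DELETE"}
deleted = [by_handle[e["HandleId"]]["ObjectName"]
           for e in ev_4660 if e["HandleId"] in by_handle]
print(deleted)  # ['C:\\Temp\\secret.txt']
```

Of course, this join only works when the 4660 exists in the first place--which is exactly the problem described below.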

Discovery

There are obviously multiple ways to delete a file such as the following:
  • Delete key
  • Right Click > Delete
  • "del" from a command prompt
  • Shift + Delete
Every single method above generates a 4660 (and 4663) except the last one, Shift + Delete, which happens to be my personal favorite way to delete a file.  :-(  Delete it like you mean it...

Test Methodology

The discovery was frustrating and quite accidental. While deleting files, we noticed that no 4660 (or even 4663) logs were being created when we used Shift + Delete. In utter disbelief, we set up the following to prove our sanity:
  • Enable all necessary auditing (lots of articles on this)
  • Open Event Viewer > Windows Logs > Security > Filter Current Log > 4660 in the filter box
  • Create 4 text files in which you will delete using the methods above
  • Delete one file at a time and wait for Event Viewer to notify you of a new log
  • Notice that Shift + Delete DOES NOT GENERATE A 4660!

Figure 1:  Test methodology shown above with Event Viewer, filters, notifications, and four files to delete


Conclusion

We all know that Windows logging is horrible, but this one takes the cake. It just seems scary that holding down shift while pressing delete will omit the log whose entry starts with: "An object was deleted." One possible workaround is enabling the ever-noisy Event ID 4656 and filtering that down--which still has its own pitfalls. Anyway, we hope this article helped debunk the myth that using Event ID 4660 for detecting file deletes is reliable (regardless of the name of the log entry).


Sample Logs

Some sample logs from our friend Randy Franklin Smith:

https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4663#examples

https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4660#examples

https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4656#examples

Wednesday, June 19, 2019

Parsing and Displaying Windows Firewall data in Splunk

By Tony Lee

Have you ever wondered what it would be like to harness all of the Windows firewall data in your environment and crunch that activity using big data analytics?  What you find might shock you.

Fortunately for us Splunk heads, Andreas Roth has already completed most of the work to send the logs to Splunk and even parse them.  https://splunkbase.splunk.com/app/3300/#/details

Figure 1:  TA to collect and parse the logs already exists!

Big shout out to Andreas for the jump start.  However, we added some parsing to get the layer 4 transport protocol and then created a dashboard (shown below) that we are going to share here.

Figure 2:  Dashboard provided in this article

Prerequisites

There are some things you will need to do before we can make use of the Windows Firewall logs:
1)  Enable Windows Firewall Logging 
   Tip:  Use the link in the Log Location section below to enable and configure the firewall via GPO

2)  Forward the logs (written to disk) to Splunk via a Splunk UF, beats agent, etc.
   Tip:  This is made "easy" by installing the TA mentioned above on your forwarders

3)  Parse the logs
   Tip:  This is made easy by installing the TA mentioned above on your indexers

4)  Display the logs
   Tip:  This is made easy by using our dashboard code found at the end of this article

Log Location

The first thing we need to do is discover where those logs are located. After a bit of research, you will find that by default they should be located here:

%systemroot%\system32\LogFiles\Firewall\pfirewall.log

However, when I tried searching my machine for any sign of the logs, I discovered that they were turned off.  In fact, Windows does not log these to disk by default.

Figure 3:  Windows Firewall Logs off by default

To enable Windows Firewall logging, see the following article:  https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-firewall/configure-the-windows-firewall-log#to-configure-the-windows-firewall-log


Raw Log

What do these logs look like once they are written to disk? Well, they are short, sweet, and to the point.  See the example log below:

2018-07-03 14:19:55 DROP UDP 192.168.2.1 224.0.0.252 50859 5355 56 - - - - - - - RECEIVE

There are a lot of fields there, but Microsoft kindly places a header in the log file to indicate the field names:

date time action protocol src-ip dst-ip src-port dst-port size tcpflags tcpsyn tcpack tcpwin icmptype icmpcode info path
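Since the header names the fields in order, a quick sketch outside of Splunk is just a matter of zipping the header with a split log line (using the sample event from above):

```python
# Pair each field name from the pfirewall.log header with the corresponding
# token from the sample log line above.
header = ("date time action protocol src-ip dst-ip src-port dst-port "
          "size tcpflags tcpsyn tcpack tcpwin icmptype icmpcode info path").split()
line = "2018-07-03 14:19:55 DROP UDP 192.168.2.1 224.0.0.252 50859 5355 56 - - - - - - - RECEIVE"

event = dict(zip(header, line.split()))
print(event["action"], event["protocol"], event["dst-port"])  # DROP UDP 5355
```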

As mentioned before, if you installed the TA-winfw on your search head and universal forwarders, most of the parsing will be performed for you.  However, there was one field that did not seem to be parsed for us.  No biggie, we don't have to do the real heavy lifting.

Parsing

Using our example event from earlier, the transport field, which typically indicates whether the packet was UDP or TCP, did not seem to be parsed for us.  Instead, the protocol field just displayed "ip".

2018-07-03 14:19:55 DROP UDP 192.168.2.1 224.0.0.252 50859 5355 56 - - - - - - - RECEIVE


To correct this, we added the following regex generated from the Splunk field extractor for our sourcetype of winfw:

^(?:[^ \n]* ){3}(?P<transport>\w+)
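A quick way to sanity-check that regex outside of Splunk is with Python's re module, which accepts the same named-group syntax the field extractor produced:

```python
import re

# The field extractor regex from above: skip the first three
# space-separated tokens, then capture the fourth as "transport".
TRANSPORT_RE = re.compile(r"^(?:[^ \n]* ){3}(?P<transport>\w+)")

line = "2018-07-03 14:19:55 DROP UDP 192.168.2.1 224.0.0.252 50859 5355 56 - - - - - - - RECEIVE"
m = TRANSPORT_RE.match(line)
print(m.group("transport"))  # UDP
```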

Search String

After parsing out the "transport" field, we can now form our search string:

index=winfw | table _time, dvc, direction, action, transport, src_ip, src_port, dest_ip, dest_port

Taking this a step further, we created a dashboard, which is provided at the bottom of the article.

Conclusion

Using the dashboard code below, we bet you can find some interesting events in your network. Even if you don't find anything malicious, you will probably find a misconfiguration or two. Correcting these issues will not only improve host and network performance, but Splunk performance too.  Happy Splunking!

Great resource:
https://www.howtogeek.com/220204/how-to-track-firewall-activity-with-the-windows-firewall-log/

Dashboard Code

The following dashboard assumes that the appropriate logs are being collected and sent to Splunk. Additionally, the dashboard code assumes an index of winfw and a sourcetype of winfw. Feel free to adjust as necessary. Splunk dashboard code provided below:


<form>
  <label>Windows Firewall</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" searchWhenChanged="false" token="time">
      <label>Time Range</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" searchWhenChanged="false" token="wild">
      <label>Wildcard Search</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Event Count</title>
        <search>
          <query>| tstats count where index=winfw by host</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <chart>
        <title>Top Action</title>
        <search>
          <query>index=winfw $wild$ | table _time, action | top action</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="charting.chart">pie</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <table>
        <title>Top Source IP</title>
        <search>
          <query>index=winfw $wild$ | table _time, src_ip | top limit=0 src_ip</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Top Dest IP</title>
        <search>
          <query>index=winfw  $wild$ | table _time, dest_ip | top limit=0 dest_ip</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Top Dest Port</title>
        <search>
          <query>index=winfw $wild$ | table _time, dest_port | top limit=0 dest_port</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Details</title>
        <search>
          <query>index=winfw $wild$ | table _time, dvc, direction, action, transport, src_ip, src_port, dest_ip, dest_port</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

Monday, June 17, 2019

Parsing and Displaying Okta Data in Splunk - Part III - App Lookup Tool

By Tony Lee

If you are reading this page, chances are good that you have both Splunk and Okta. The good news is that there is a pre-built TA (https://splunkbase.splunk.com/app/2806/) to help with data ingest and parsing, plus an app (https://splunkbase.splunk.com/app/2821/) to help with visualizations. However, there is always room for improvement, thus we created and are sharing some additional lookup dashboards to make the data more actionable.

Figure 1:  At the time of this article, an Okta TA and App exists

The first two articles of this series covered two useful lookup tools:
1)  User Lookup - http://securitysynapse.blogspot.com/2019/06/parsing-and-displaying-okta-data-part-i-user-lookup.html
2)  Group Lookup - http://securitysynapse.blogspot.com/2019/06/parsing-and-displaying-okta-data-part-ii-group-lookup.html

In this third article, we will show how to create an app lookup tool (with group and user drilldown!) using the information contained within the Okta logs. Since Okta has quite a bit of user and group information, the existing data makes a useful Rolodex that is available to Splunk. This is especially useful to a SOC analyst who might be tracking down user or group access based on application name.


Figure 2:  App Lookup Tool created using Okta data!


Data Categorization

Okta data brought in via the TA is easily distinguishable via the source field.  For example:
  • okta:user
  • okta:event
  • okta:group
  • okta:app
Thus, for app data, we will use source=okta:app 


Raw Log

A sample app event is shown below (this should be close to what you see using the TA). Our event also contained two multi-value fields: assigned_users{}, which contains the user IDs for the users assigned to that app, and assigned_groups{}, which contains the group IDs for the groups assigned to that app:

[
  {
    "id": "0oa1gjh63g214q0Hq0g4",
    "name": "testorgone_customsaml20app_1",
    "label": "Custom Saml 2.0 App",
    "status": "ACTIVE",
    "lastUpdated": "2016-08-09T20:12:19.000Z",
    "created": "2016-08-09T20:12:19.000Z",
    "accessibility": {
      "selfService": false,
      "errorRedirectUrl": null,
      "loginRedirectUrl": null
    },
    "visibility": {
      "autoSubmitToolbar": false,
      "hide": {
        "iOS": false,
        "web": false
      },
      "appLinks": {
        "testorgone_customsaml20app_1_link": true
      }
    },
    "features": [],
    "signOnMode": "SAML_2_0",
    "credentials": {
      "userNameTemplate": {
        "template": "${fn:substringBefore(source.login, \"@\")}",
        "type": "BUILT_IN"
      },
      "signing": {}
    },
    "settings": {
      "app": {},
      "notifications": {
        "vpn": {
          "network": {
            "connection": "DISABLED"
          },
          "message": null,
          "helpUrl": null
        }
      },
      "signOn": {
        "defaultRelayState": "",
        "ssoAcsUrl": "https://{yourOktaDomain}",
        "idpIssuer": "http://www.okta.com/${org.externalKey}",
        "audience": "https://example.com/tenant/123",
        "recipient": "http://recipient.okta.com",
        "destination": "http://destination.okta.com",
        "subjectNameIdTemplate": "${user.userName}",
        "subjectNameIdFormat": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
        "responseSigned": true,
        "assertionSigned": true,
        "signatureAlgorithm": "RSA_SHA256",
        "digestAlgorithm": "SHA256",
        "honorForceAuthn": true,
        "authnContextClassRef": "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport",
        "spIssuer": null,
        "requestCompressed": false,
        "attributeStatements": []
      }
    },
    "_links": {
      "logo": [
        {
          "name": "medium",
          "href": "http://testorgone.okta.com/assets/img/logos/default.6770228fb0dab49a1695ef440a5279bb.png",
          "type": "image/png"
        }
      ],
      "appLinks": [
        {
          "name": "testorgone_customsaml20app_1_link",
          "href": "http://testorgone.okta.com/home/testorgone_customsaml20app_1/0oa1gjh63g214q0Hq0g4/aln1gofChJaerOVfY0g4",
          "type": "text/html"
        }
      ],
      "help": {
        "href": "http://testorgone-admin.okta.com/app/testorgone_customsaml20app_1/0oa1gjh63g214q0Hq0g4/setup/help/SAML_2_0/instructions",
        "type": "text/html"
      },
      "users": {
        "href": "http://testorgone.okta.com/api/v1/apps/0oa1gjh63g214q0Hq0g4/users"
      },
      "deactivate": {
        "href": "http://testorgone.okta.com/api/v1/apps/0oa1gjh63g214q0Hq0g4/lifecycle/deactivate"
      },
      "groups": {
        "href": "http://testorgone.okta.com/api/v1/apps/0oa1gjh63g214q0Hq0g4/groups"
      },
      "metadata": {
        "href": "http://testorgone.okta.com:/api/v1/apps/0oa1gjh63g214q0Hq0g4/sso/saml/metadata",
        "type": "application/xml"
      }
    }
  },
  {
    "id": "0oabkvBLDEKCNXBGYUAS",
    "name": "template_swa",
    "label": "Sample Plugin App",
    "status": "ACTIVE",
    "lastUpdated": "2013-09-11T17:58:54.000Z",
    "created": "2013-09-11T17:46:08.000Z",
    "accessibility": {
      "selfService": false,
      "errorRedirectUrl": null
    },
    "visibility": {
      "autoSubmitToolbar": false,
      "hide": {
        "iOS": false,
        "web": false
      },
      "appLinks": {
        "login": true
      }
    },
    "features": [],
    "signOnMode": "BROWSER_PLUGIN",
    "credentials": {
      "scheme": "EDIT_USERNAME_AND_PASSWORD",
      "userNameTemplate": {
        "template": "${source.login}",
        "type": "BUILT_IN"
      }
    },
    "settings": {
      "app": {
        "buttonField": "btn-login",
        "passwordField": "txtbox-password",
        "usernameField": "txtbox-username",
        "url": "https://example.com/login.html"
      }
    },
    "_links": {
      "logo": [
        {
          "href": "https:/example.okta.com/img/logos/logo_1.png",
          "name": "medium",
          "type": "image/png"
        }
      ],
      "users": {
        "href": "https://{yourOktaDomain}/api/v1/apps/0oabkvBLDEKCNXBGYUAS/users"
      },
      "groups": {
        "href": "https://{yourOktaDomain}/api/v1/apps/0oabkvBLDEKCNXBGYUAS/groups"
      },
      "self": {
        "href": "https://{yourOktaDomain}/api/v1/apps/0oabkvBLDEKCNXBGYUAS"
      },
      "deactivate": {
        "href": "https://{yourOktaDomain}/api/v1/apps/0oabkvBLDEKCNXBGYUAS/lifecycle/deactivate"
      }
    }
  }
]

Source:  https://developer.okta.com/docs/api/resources/apps#list-applications

Fields we need to parse

Fortunately, the available TA already parses the data for us, but the fields that we are most interested in for this lookup dashboard are the following:
  • dest
  • name
  • label
  • signOnMode
  • created
  • lastUpdated
  • status
  • assigned_users{}
  • assigned_groups{}
Feel free to modify the search and replace fields as needed.

Search String

A simple search string that produces the table we need is shown below. We deduplicated the results by app id since it is a unique field. We also added a count of the users (assigned_users) and groups (assigned_groups) assigned to each app. Now just add filters, such as the ones we provided in our dashboard code at the end of the article, and you are in business!

index=okta source=okta:app | dedup id | eval assigned_users=mvcount('assigned_users{}') | eval assigned_groups=mvcount('assigned_groups{}') | fillnull value=0 assigned_users, assigned_groups | table dest, name, label, signOnMode, assigned_groups, assigned_users, created, lastUpdated, status, assigned_users{}, assigned_groups{}
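To see what the dedup and mvcount logic boils down to, here is a rough Python equivalent run against hypothetical app events (the IDs and event shapes below are invented for illustration; real okta:app events carry many more fields):

```python
# Hypothetical Okta app events; the duplicate "app1" mimics repeated polling.
events = [
    {"id": "app1", "name": "example_app",
     "assigned_users{}": ["u1", "u2"], "assigned_groups{}": ["g1"]},
    {"id": "app1", "name": "example_app",
     "assigned_users{}": ["u1", "u2"], "assigned_groups{}": ["g1"]},
    {"id": "app2", "name": "other_app"},  # no assignments -> fillnull to 0
]

# dedup id: keep only the first event seen per app id.
deduped = {}
for event in events:
    deduped.setdefault(event["id"], event)

for event in deduped.values():
    # mvcount + fillnull: count multi-value entries, defaulting to 0.
    users = len(event.get("assigned_users{}", []))
    groups = len(event.get("assigned_groups{}", []))
    print(event["id"], event["name"], users, groups)
```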

The dashboard code we included below also contains a user and group drilldown to reveal the users and groups assigned to the selected app. Simply click the row of the application you are interested in and it will show you the users and groups assigned to that app. This interactive drilldown pulls the assigned_users{} and assigned_groups{} multi-value fields and performs a user and group lookup (source=okta:user and source=okta:group) as seen in the previous articles. Note that we also use a clever <fields> trick to hide the assigned_users{} and assigned_groups{} columns while still making that data usable in the drilldown.

Conclusion

Even though we had a Splunk TA and App to perform the parsing and help create visibility, we extended the usefulness of the data to build an app lookup tool with a user and group drilldown. We hope this article helps others gain additional insight into their user and group data via Okta logs. Happy Splunking!

Dashboard Code

The following dashboard assumes that the appropriate logs are being collected and sent to Splunk. Additionally, the dashboard code assumes an index of okta. Feel free to adjust as necessary. Splunk dashboard code provided below:


<form>
  <label>Okta App Lookup</label>
  <description>index=okta source=okta:app   (First try last 6 hours, then try a longer time range)</description>
  <fieldset autoRun="true" submitButton="true">
    <input type="time" token="time">
      <label>Time Range</label>
      <default>
        <earliest>-6h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="wild">
      <label>Wildcard Search</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="name">
      <label>Name (Exact match)</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="text" token="label">
      <label>Label (Exact Match)</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="status">
      <label>Status</label>
      <choice value="*">ALL</choice>
      <choice value="ACTIVE">ACTIVE</choice>
      <choice value="INACTIVE">INACTIVE</choice>
      <default>ACTIVE</default>
      <initialValue>ACTIVE</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>App Details</title>
        <search>
          <query>index=okta source=okta:app $wild$ name=$name$ label=$label$ status=$status$ | dedup id | eval assigned_users=mvcount('assigned_users{}') | eval assigned_groups=mvcount('assigned_groups{}') | fillnull value=0 assigned_users, assigned_groups | table dest, name, label, signOnMode, assigned_groups, assigned_users, created, lastUpdated, status, assigned_users{}, assigned_groups{}</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <fields>["dest","name","label","signOnMode","assigned_groups","assigned_users","created","lastUpdated","status"]</fields>
        <drilldown>
          <set token="users">$row.assigned_users{}$</set>
          <set token="groups">$row.assigned_groups{}$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Assigned Groups (Click a row above to fetch the groups assigned to the app)</title>
        <search>
          <query>| stats count as id | eval id=split("$groups$", ",") | mvexpand id | join type=left id [search index=okta source=okta:group id IN ($groups$)] | eval num_members=mvcount('members{}') | fillnull value=0 num_members | table dest, type, profile.groupScope, profile.windowsDomainQualifiedName, profile.name, profile.description, created, lastUpdated, lastMembershipUpdated, num_members</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rowNumbers">true</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Assigned Users (Click a row above to fetch the users assigned to the app)</title>
        <search>
          <query>| stats count as id | eval id=split("$users$", ",") | mvexpand id | join type=left id [search index=okta source=okta:user id IN ($users$) | table id, credentials.provider.type, profile.title, profile.firstName, profile.middleName, profile.lastName, profile.email, profile.primaryPhone, created, passwordChanged, lastLogin, status]</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rowNumbers">true</option>
      </table>
    </panel>
  </row>
</form>