
Tuesday, February 5, 2019

Splunk and ELK – Impartial Comparison Part I - Similarities


By Tony Lee

This series is not intended to start a “Big Data” holy war, but instead to offer some unbiased insight for those looking to implement Splunk, ELK, or even both platforms.  After all, both platforms are highly regarded for their ability to collect, parse, analyze, and display log data.  In fact, this first article in the series will show how the two competing technologies are similar in the following areas:
  • Purpose
  • Architecture
  • Cost

Caveat

Most articles on this subject seem to have some sort of agenda to push folks in one direction or another—so we will do our absolute best to keep it unbiased. We admit that we know Splunk better than we know the ELK stack, so we are banking on ELK (and even Splunk) colleagues and readers to help keep us honest. Lastly, our hope is to update this article as we learn or receive more information and the two products continue to mature.

Similar Purpose

Both Splunk and the ELK stack are designed to be highly efficient at log collection and search while allowing users to create visualizations and dashboards.  The similar goal and purpose of the two platforms naturally means that many of the concepts are also similar.  One minor annoyance is that the concepts are referred to by different names.  Thus, the table below should help those who are familiar with one platform map ideas and concepts to the other.


Splunk                              ELK Stack
Search Head                         Kibana
Indexer                             Elasticsearch
Forwarder                           Logstash
Universal Forwarder                 Beats (Filebeat, Metricbeat, Packetbeat, Winlogbeat, Auditbeat, Heartbeat, etc.)
Search Processing Language (SPL)    Lucene query syntax
Panel                               Panel
Index                               Index


Similar Architecture

In many ways, even the architectures of Splunk and ELK are very similar.  The diagram below highlights the key components along with the name of each component in both platforms.

Figure 1:  Architectural similarities

Cost

This is also an area where there are more similarities than most would imagine due to a misconception that ELK (with comparable features to Splunk) is free.  While the core components may be free, the extensions that make ELK an enterprise-scalable log collection platform are not free—and this is by design.  According to Shay Banon, Founder, CEO and Director of Elasticsearch:

“We are a business. And part of being a business is the belief that those businesses who can pay us, should. And those who cannot, should not be paying us. In return, our responsibility is to ensure that we continue to add features valuable to all our users and ensure a commercial relationship with us is beneficial to our customers. This is the balance required to be a healthy company.”

Elastic does this by identifying “high-value features and to offer them as commercial extensions to the core software. This model, sometimes called ‘open core’, is what culminated in our creation of X-Pack. To build and integrate features and capabilities that we maintain the Intellectual Property (IP) of and offer either on a subscription or a free basis. Maintaining this control of our IP has been what has allowed us to invest the vast majority of our engineering time and resources in continuing to improve our core, open source offerings.”


That said, which enterprise-critical features aren’t included in the open source or even basic free license?  The subscription comparison screenshot below shows that one extension not included for free is Security (formerly Shield).  This includes encrypted communications, Role-Based Access Control (RBAC), and even authentication.  Most would argue that an enterprise needs a login page and the ability to control who can edit vs. view searches, visualizations, and dashboards; thus, it is not a fair comparison to say that Splunk costs money while ELK is free.  There are alternatives to X-Pack, but we will leave those for another article since they are not officially developed and maintained as part of the ELK stack.

Figure 2:  Encryption, RBAC, and even authentication are not free

In terms of how much Splunk costs vs. ELK, there are also many arguments there--some of which include the cost of build time, maintenance, etc.  It mostly depends on your ability to negotiate with each vendor.

Conclusion

Splunk and the ELK stack are similar in many ways.  In fact, knowing one platform can help a security practitioner learn the other because many of the concepts are close enough to transfer.  The reduced learning curve is a huge advantage for those who need to convert from one platform to the other.  That said, there are differences; we will discuss those in the next article.  In the meantime, we hope that this article was useful for you, and we are open to feedback and corrections, so feel free to leave your comments below.  Please note that any inappropriate comments will not be posted. Thanks in advance.  😊

Wednesday, January 30, 2019

rsyslog fun - Basic Splunk Log Collection and Forwarding - Part II

By Tony Lee

Welcome to part II in our series covering how to use rsyslog to route and forward logs to Splunk. Please see Part I of the series (http://securitysynapse.blogspot.com/2019/01/rsyslog-fun-basic-splunk-log-collection-part-i.html) for the basics in opening ports, routing traffic by IP address or hostname, and monitoring files to send the data on to Splunk Indexers. As a reminder, choosing between rsyslog, syslog-ng, or other software is entirely up to the reader and may depend on their environment and approved/available software. We also realize that this is not the only option for log architecture or collection, but it may help those faced with this task—especially if rsyslog is the standard in their environment. That said, let's look at some more advanced scenarios concerning file permissions, routing logs via regex, and routing logs via ports. We will wrap up with some helpful hints on a possible method to synchronize the rsyslog and Splunk configuration files.

File Permissions

There are times when you may need to adjust the file permissions for the files that rsyslog writes to disk. For example, if you follow best practice and run the Splunk Universal Forwarder as a lower-privileged account, that account will need access to the log files.  Placing the following rsyslog.conf directives at the top of the configuration file will change the permissions on the directories and files created.  The following example creates directories with permissions of 755 and files with permissions of 644:



$umask 0000             # clear the process umask so the modes below apply exactly
$DirCreateMode 0755     # new directories: rwxr-xr-x
$FileCreateMode 0644    # new files: rw-r--r--
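To sanity-check these modes, you can reproduce the end result by hand in a scratch directory (the /tmp/rsyslog-modes path below is just an illustration; rsyslog itself applies the modes at file-creation time):

```shell
# Reproduce the modes rsyslog would apply: 0755 directories, 0644 files
umask 0000                                    # clear the umask, as $umask 0000 does
mkdir -m 0755 -p /tmp/rsyslog-modes/cisco     # directory created rwxr-xr-x
touch /tmp/rsyslog-modes/cisco/test.log
chmod 0644 /tmp/rsyslog-modes/cisco/test.log  # file readable by the Splunk UF account
stat -c '%a %n' /tmp/rsyslog-modes/cisco /tmp/rsyslog-modes/cisco/test.log
# prints: 755 /tmp/rsyslog-modes/cisco
#         644 /tmp/rsyslog-modes/cisco/test.log
```

A lower-privileged Splunk UF account can read, but not write, files created with these modes.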



Routing logs via Regex

Another more advanced rsyslog option is the ability to drop or route data at the event level via regex. For example, maybe you want to drop certain events--such as the connection Teardown messages generated by Cisco ASAs. Note: this rsyslog capability is useful since we are using Splunk Universal Forwarders in our example and not Splunk Heavy Forwarders.

Or maybe you have thousands of hosts and don't want to maintain a giant list of IP addresses in an if-statement. For example, maybe you want to route thousands of Cisco Meraki host packets to a particular file via a regex pattern.

Possibly even more challenging would be devices in a particular CIDR range that end in a specific octet.

These three examples are covered in the rsyslog.conf snippet below:



#Drop Cisco ASA Teardown packets
#("stop" replaces the deprecated "~" discard action)
:msg, contains, ": Teardown " stop

#Route Cisco Meraki hosts to a specific directory
#(assumes a ciscoMerakiFile template is defined elsewhere in the config)
if ($msg contains ' events type=') then ?ciscoMerakiFile
& stop

#ICS devices in 10.160.0.0/11 with a last octet of .150 (POSIX BRE)
:fromhost-ip, regex, "10\.\(1[6-8][0-9]\|19[0-1]\)\..*\.150" -?icsDevices
& stop
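Since rsyslog property filters use POSIX basic regular expressions (BRE) by default, the intent of the ICS pattern can be sanity-checked with grep, which also defaults to BRE (the pattern is written here with single-backslash escapes, and the sample IPs are made up):

```shell
# \( \) groups and \| alternation in BRE (alternation is a GNU grep extension)
re='10\.\(1[6-8][0-9]\|19[0-1]\)\..*\.150'
echo '10.172.4.150' | grep -c "$re"   # inside 10.160.0.0/11, ends in .150
echo '10.150.4.150' | grep -c "$re"   # second octet outside 160-191
```

The first command prints 1 (match) and the second prints 0 (no match).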



Routing logs via Port

We know we just showed you how to route events via regex; however, that can sometimes be inefficient--especially at high events per second. If you are really fortunate, the source sending the data can send to a different port. In that case, it may be worth routing data to different files based on port.  The example below uses ports 6517 and 6518.



#Dynamic template names
template(name="file6517" type="string" string="/rsyslog/port6517/%FROMHOST%/%$YEAR%-%$MONTH%-%$DAY%.log")

template(name="file6518" type="string" string="/rsyslog/port6518/%FROMHOST%/%$YEAR%-%$MONTH%-%$DAY%.log")

#Rulesets
ruleset(name="port6517"){
    action(type="omfile" dynafile="file6517")
}

ruleset(name="port6518"){
    action(type="omfile" dynafile="file6518")
}

input(type="imtcp" port="6517" ruleset="port6517")
input(type="imtcp" port="6518" ruleset="port6518")



Synchronizing Multiple Rsyslog Servers

Since our architecture in part I outlined using a load balancer and multiple rsyslog servers, we will eventually need a way to synchronize the configuration files across the multiple rsyslog servers.  The example below provides two bash shell scripts to perform just that task. The first one will synchronize the rsyslog configuration and the second will synchronize the Splunk configuration--both scripts restart the respective service. Note: This is not the only method available for synchronization, but it is one possible method. Remember to replace <other_server> with the actual IP or FQDN of that server.

On the rsyslog server that you make the changes on, create these two bash scripts and modify the <other_server> section. Once you make a change to the rsyslog or Splunk UF configuration, run the necessary script.

sync-rsyslog.sh



#!/bin/bash
# Push the local rsyslog config to the peer and restart its rsyslog service
scp /etc/rsyslog.conf <other_server>:/etc/rsyslog.conf
ssh <other_server> service rsyslog restart



sync-splunk.sh


#!/bin/bash
# Push the local Splunk UF inputs to the peer and restart its forwarder
scp /opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/local/inputs.conf <other_server>:/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/local/inputs.conf
ssh <other_server> /opt/splunkforwarder/bin/splunk restart



Conclusion

In this article, we outlined key advanced features within rsyslog that may not be immediately evident. Hopefully this article will save you some Googling time when trying to operationalize log collection and forwarding using rsyslog in your environment. After all, eventually you will probably need to deal with file permissions, routing logs via regex and/or port, and configuration synchronization. We hope you enjoyed the article and found it useful.  Feel free to post your favorite tips and tricks in the comments section below. Happy Splunking!


Basic Troubleshooting

So you did everything above and you are still not seeing data...  Walk through some of these steps:

  1. Ensure all network firewalls permit the traffic
  2. Ensure iptables allows traffic to the rsyslog server
  3. Run tcpdump on the source to ensure it is sending data
  4. Run tcpdump on the rsyslog server to ensure it is receiving data
  5. Verify permissions when writing the files to disk
  6. Attempt to telnet (or even web browse) to the rsyslog server port to see if anything is written to the directories


Sunday, January 27, 2019

rsyslog fun - Basic Splunk Log Collection and Forwarding - Part I


By Tony Lee

We found it a bit surprising that there are so few articles on how to use an rsyslog server to forward logs to Splunk. This provided the motivation to write this article and hopefully save others some Googling. Choosing between rsyslog, syslog-ng, or other software is entirely up to the reader and may depend on their environment and approved/available software. We realize that this is not the only option for log architecture or collection, but it may help those faced with this task—especially if rsyslog is the standard in their environment.

Warnings

Before we jump in, we wanted to remind you of three potential gotchas that may thwart your success and give you a troubleshooting migraine.
  1. Network firewalls – You may not own this, but make sure the network path is clear
  2. iptables – Complex rule sets can throw you for a loop
  3. SELinux – Believe it or not, SELinux, when locked down, can prevent the writing of the log files

If something is not working the way you expect it to work, it is most likely due to one of the three items mentioned above. It could be worth temporarily disabling them until you get everything working. Just don’t forget to go back and lock it down.

Note:  We will also be using Splunk Universal Forwarders (UF) in this article.  Universal Forwarders have very little pre-processing or filtering capabilities when compared to Heavy Forwarders.  If significant filtering is necessary, consider using a Splunk Heavy Forwarder in the same fashion as we are using the UFs below.

Architecture

Whether your Splunk instance is on-prem or in the cloud, you will most likely need syslog collectors and forwarders at some point. The architecture diagram below shows one potential configuration. The number of each component is configurable and dependent upon the volume of traffic.



Figure 1:  Architecture diagram illustrating traffic flow from data sources to the Index Cluster

Rsyslog configuration

Rsyslog is a flexible service, but in this case rsyslog’s primary role will be to:

  • Open the sockets to accept data from the sources
  • Properly route traffic to local temporary files that Splunk will forward on to the indexers

If you are fortunate enough to be able to route network traffic to different ports, you may be able to reduce the if-then logic shown below for routing the events to separate files. In this case, we were not able to open separate ports from the load balancer, thus we needed to do the routing on our end. In the next article we will cover more advanced routing to include regex and traffic coming in on different ports.

Note:  Modern rsyslog is designed to read extra config files that exist in the /etc/rsyslog.d/ directory. If that directory exists, place the following 15-splunk-rsyslog.conf file in that directory. Otherwise, the /etc/rsyslog.conf file is interpreted from top to bottom, so make a copy of your current config file (cp /etc/rsyslog.conf /etc/rsyslog.bak) and selectively add the following at the top of the new active rsyslog.conf file. This addition to the rsyslog configuration will do the following (assuming the day is 2018-06-01):
  • Open TCP and UDP 514
  • Write all data from 192.168.1.1 to:  /rsyslog/cisco/192.168.1.1/2018-06-01.log
  • Write all data from 192.168.1.2 to:  /rsyslog/cisco/192.168.1.2/2018-06-01.log
  • Write all data from 10.1.1.* to /rsyslog/pan/10.1.1.*/2018-06-01.log (where * is the last octet of the source IP)
  • Write all remaining data to /rsyslog/unclaimed/<host>/2018-06-01.log (where <host> is the source IP or hostname of the sender)
Note:  If the rsyslog server sees the hosts by their hostname instead of IP address, feel free to use $fromhost == '<hostname>' in the configuration file below.
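Since the templates below name each file %$YEAR%-%$MONTH%-%$DAY%.log, today's file path for the first Cisco host can be previewed from the shell:

```shell
# The template yields one file per host per day, e.g. 2018-06-01.log
printf '/rsyslog/cisco/192.168.1.1/%s.log\n' "$(date +%F)"
```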

/etc/rsyslog.d/15-splunk-rsyslog.conf


$ModLoad imtcp
$ModLoad imudp
$UDPServerRun 514
$InputTCPServerRun 514

# do this in FRONT of the local/regular rules

$template ciscoFile,"/rsyslog/cisco/%fromhost%/%$YEAR%-%$MONTH%-%$DAY%.log"
$template PANFile,"/rsyslog/pan/%fromhost%/%$YEAR%-%$MONTH%-%$DAY%.log"
$template unclaimedFile,"/rsyslog/unclaimed/%fromhost%/%$YEAR%-%$MONTH%-%$DAY%.log"

if ($fromhost-ip == '192.168.1.1' or $fromhost-ip == '192.168.1.2') then ?ciscoFile
& stop

if $fromhost-ip startswith '10.1.1' then ?PANFile
& stop

# catch-all for any host not matched above
*.* ?unclaimedFile
& stop

# local/regular rules, like
*.* /var/log/syslog.log



Note:  Rsyslog should create directories that don't already exist, but just in case it doesn't, you may need to create the directories and make them writable.  For example:


mkdir -p /rsyslog/cisco/
mkdir -p /rsyslog/pan/
mkdir -p /rsyslog/unclaimed/



Pro tip:  After making changes to the rsyslog config file, you can verify that there are no syntax errors BEFORE you restart the rsyslog daemon.  For a simple rsyslog config validation, try the following command:

rsyslogd -N 1

If there are no errors, then you should be good to restart the rsyslog service so your changes take effect:

service rsyslog restart

Log cleanup

The rsyslog servers in our setup are not intended to store the data permanently. They are intended to act as caching servers for temporary storage before shipping the logs off to the Splunk Indexers for proper long-term storage. Since disk space is not unlimited on these caching servers, we will need to implement log rotation and deletion so we do not fill up the hard disk. Our rsyslog config file already takes care of the log rotation by naming each file “%$YEAR%-%$MONTH%-%$DAY%.log”; however, we still need to clean up the files so they don’t sit there indefinitely. One possible solution is a daily cron job that removes files in the /rsyslog/ directory that are more than x days old (where x is defined by the organization). Once you have some files in the /rsyslog/ directory, try the following command to see what would potentially be deleted. The command below lists files in the rsyslog directory that were last modified more than one day ago.

find /rsyslog/ -type f -mtime +1 -exec ls -l "{}" \;

If you are happy with a two-day cache period, add it to a daily cron job (as shown below).  Otherwise feel free to play with the +1 until you are comfortable with what it will delete and use that for your cron job.
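To see exactly what -mtime +1 selects before wiring up the cron job, you can stage a scratch directory with a back-dated file (the /tmp/rsyslog-age path is illustrative):

```shell
# -mtime +1 matches files last modified more than one day ago
mkdir -p /tmp/rsyslog-age
touch -d '3 days ago' /tmp/rsyslog-age/old.log   # would be deleted
touch /tmp/rsyslog-age/new.log                   # still within the cache window
find /tmp/rsyslog-age -type f -mtime +1
# prints: /tmp/rsyslog-age/old.log
```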

/etc/cron.daily/logdelete.sh


find /rsyslog/ -type f -mtime +1 -delete



Splunk Universal Forwarder (UF) Configuration

Splunk Forwarders are very flexible in terms of data ingest. For example, they can create listening ports, monitor directories, run scripts, etc. In this case, since rsyslog is writing the information to a directory, we will use a Splunk UF to monitor those directories and send them to the appropriate indexes and label them with the appropriate sourcetypes.  See our example configuration below.

Note:  Make sure the indexes mentioned below exist prior to trying to send data there. These will need to be created within Splunk.  Also ensure that the UF is configured to forward data to indexers (out of the scope of this write up).

/opt/splunkforwarder/etc/apps/SplunkForwarder/local/inputs.conf 


[monitor:///rsyslog/cisco/]
whitelist = \.log$
host_segment=3
sourcetype = cisco:ios
index = cisco

[monitor:///rsyslog/pan/]
whitelist = \.log$
host_segment=3
sourcetype = pan:traffic
index = pan_logs

[monitor:///rsyslog/unclaimed/]
whitelist = \.log$
host_segment=3
sourcetype = syslog
index = lastchanceindex
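In the stanzas above, host_segment=3 tells the UF to use the third path segment as the host field, which is the sender's IP or hostname given the rsyslog directory layout. Counting segments by hand:

```shell
# Segment 3 of /rsyslog/cisco/192.168.1.1/... is the sending host
path='/rsyslog/cisco/192.168.1.1/2019-01-27.log'
echo "$path" | cut -d'/' -f4   # cut field 1 is the empty string before the leading /
# prints: 192.168.1.1
```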



Pro tip:  Remember to restart the Splunk UF after modifying files.  

/opt/splunkforwarder/bin/splunk restart

Conclusion

A simple Splunk search of index=cisco, index=pan_logs, or index=lastchanceindex should be able to confirm that you are now receiving data in Splunk. Keep monitoring the lastchanceindex to move hosts to where they need to go as they come on-line. Moving the hosts is accomplished by editing the rsyslog.conf file and possibly adding another monitor stanza within the Splunk UF config. This process can be challenging to create, but once it is going, it just needs a little care from time to time to make sure that all is well.  We hope you found this article helpful.  Happy Splunking!

Basic Troubleshooting

So you did everything above and you are still not seeing data...  Walk through some of these steps:

  1. Ensure all network firewalls permit the traffic
  2. Ensure iptables allows traffic to the rsyslog server
  3. Run tcpdump on the source to ensure it is sending data
  4. Run tcpdump on the rsyslog server to ensure it is receiving data
  5. Verify permissions when writing the files to disk
  6. Attempt to telnet (or even web browse) to the rsyslog server port to see if anything is written to the directories


Wednesday, December 20, 2017

Spelunking your Splunk – Part III (License Usage)

By Tony Lee

In our first article of the series, Spelunking your Splunk Part I (Exploring Your Data), we looked at a clever dashboard that can be used to quickly understand the indexes, sources, sourcetypes, and hosts in any Splunk environment.  In our second article of the series, Spelunking your Splunk – Part II (Disk Usage), we provided a dashboard that can be used to monitor data distribution across multiple indexers.  In this article, we will dive into understanding your license usage.

Finding and understanding license usage information

The easiest way to query your Splunk license information is to run the search below in the search bar:

index=_internal source=*license_usage.log type=Usage

This should return raw license usage data which includes:  index, host, source, sourcetype, and number of bytes, as shown in the screenshot below.

Figure 1:  License usage fields

If this search returns nothing, you may need to forward your _internal index to the search peers as described in the article below:

https://docs.splunk.com/Documentation/Splunk/7.0.0/Indexer/Forwardmasterdata

After figuring out the fields, you can get a little fancier and convert the bytes into GB and display that data over time as shown below.  Try this both as a statistics table and as a column chart.

index=_internal source=*license_usage.log type=Usage | timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS "Total GB Used"
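The round(sum(b)/1024/1024/1024,2) portion is simply a bytes-to-GB conversion rounded to two decimal places; the same math from the shell:

```shell
# 5 GiB expressed in bytes, converted back to GB with two decimal places
awk 'BEGIN { printf "%.2f\n", 5368709120/1024/1024/1024 }'
# prints: 5.00
```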

Now that you understand the basics, the sky is the limit.  You can display the license usage per index, source, sourcetype, host, etc.  Take a look at our dashboard at the end of this article and give it a try.


Figure 2:  One of our favorite dashboards for license usage

Conclusion

Splunk provides decent visibility into license usage via the Monitoring Console / DMC (Distributed Management Console), but we found this visual representation to be quite helpful for gaining additional insight.  We hope this helps you too.


Dashboard XML code

Below is the dashboard code needed to enumerate your license usage.  Feel free to modify the dashboard as needed:


<form>
  <label>License Usage</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" searchWhenChanged="true" token="time1">
      <label></label>
      <default>
        <earliest>-7d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <title>Daily License Usage by Index</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage  | rename idx AS index  | timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS "Total GB Used" by index</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.text">Date</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.text">License Usage</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.enabled">false</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisStart</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.axisLabelsY.majorUnit">10</option>
        <option name="charting.axisY.maximumNumber">60</option>
        <option name="charting.axisY.minimumNumber">0</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Total Daily License Usage</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage  | timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS "Total GB Used"</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.text">Date</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.text">GB</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisStart</option>
        <option name="charting.legend.placement">right</option>
        <option name="wrap">true</option>
        <option name="rowNumbers">false</option>
        <option name="dataOverlayMode">none</option>
        <option name="charting.axisLabelsY.majorUnit">25</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      </chart>
    </panel>
    <panel>
      <table>
        <title>Daily License Usage by Index Stats</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage earliest=-7d@d  | rename idx AS index  | timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS "Total GB Used" by index</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="wrap">true</option>
        <option name="rowNumbers">false</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="count">10</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>License Usage by Host</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage | stats sum(b) AS bytes by h | eval GB= round(bytes/1024/1024/1024,2) | fields h GB | rename h as host | sort -GB</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.enabled">false</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisStart</option>
        <option name="charting.legend.placement">right</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <title>License Usage by Sourcetype</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage | stats sum(b) AS bytes by st | eval GB= round(bytes/1024/1024/1024,2) | fields st GB | rename st as Sourcetype | sort -GB</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.enabled">false</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisStart</option>
        <option name="charting.legend.placement">right</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <title>License Usage by Source</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage | stats sum(b) AS bytes by s | eval GB= round(bytes/1024/1024/1024,2) | fields s GB | rename s as Source | sort -GB</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.axisY2.enabled">undefined</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>License Usage by Host Stats</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage | stats sum(b) AS bytes by h | eval GB= round(bytes/1024/1024/1024,2) | fields h GB | rename h as host | sort -GB</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="wrap">true</option>
        <option name="rowNumbers">false</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="count">10</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>License Usage by Sourcetype Stats</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage | stats sum(b) AS bytes by st | eval GB= round(bytes/1024/1024/1024,2) | fields st GB | rename st as Sourcetype | sort -GB</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="wrap">true</option>
        <option name="rowNumbers">false</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="count">10</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>License Usage by Source Stats</title>
        <search>
          <query>index=_internal source=*license_usage.log type=Usage | stats sum(b) AS bytes by s | eval GB= round(bytes/1024/1024/1024,2) | fields s GB | rename s as Source | sort -GB</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
        </search>
        <option name="wrap">true</option>
        <option name="rowNumbers">false</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="count">10</option>
      </table>
    </panel>
  </row>
</form>