Monday, February 15, 2016

Processing Mandiant Redline Files Using Splunk

By Tony Lee


Do you use Mandiant's Redline for performing host investigations?  Do you use Splunk for centralized log collection and monitoring?  How about using the two tools together?  The team behind the Splunk Forensic Investigator app is experimenting with ingesting Redline collections.  We have made good progress proving that it is possible to automate the ingestion of Redline collections and to use Splunk to carve and display data from multiple hosts at the same time.  However, we were wondering how many people would find this capability useful enough to see the work completed.  Check out the prototyping below and let us know if you would find this useful by leaving a comment below (account not necessary).

We have example output below:

System info displayed in Redline

System info displayed in Splunk

Driver modules displayed in Redline

Driver modules displayed in Splunk

Above and beyond replication

Recreating the Redline output is all well and good; however, keep in mind that ingesting the data into Splunk allows you to filter, search, and carve across multiple systems at the same time.  It also lets you apply Splunk's big-data crunching capabilities.  It is very simple to ask Splunk to run statistical analysis over large data sets to help surface anomalies across hosts, such as:
  • Drive letters/mappings that don't meet corporate standards
  • Logged in/on users that occur infrequently (such as service accounts)
  • Forgotten operating systems that may be weak points or exploited first within a network
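As a rough sketch of what such hunting might look like in Splunk, the searches below use hypothetical sourcetypes and field names (`redline:systeminfo`, `redline:users`, `drive_letter`, `username`) — the actual schema would depend on how the Redline collections are parsed at ingest:

```
sourcetype=redline:systeminfo drive_letter=*
| stats values(drive_letter) as drives by host
| search NOT drives="C"

sourcetype=redline:users
| rare limit=10 username
```

The first search lists hosts whose drive mappings deviate from an assumed corporate standard; the second surfaces the least frequently seen logged-on users (such as service accounts) across the fleet.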

Or when analyzing drivers on multiple hosts, an investigator could glance at a dashboard and determine any of the following and more:
  • Number of drivers per host
  • Largest driver
  • Smallest driver
  • Most common driver file name
  • Most common driver path
  • Least common driver file name
  • Least common driver path
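The driver statistics above could be sketched with a couple of searches like the following — again, the sourcetype and field names (`redline:drivers`, `size`, `filename`, `path`) are illustrative assumptions, not the app's confirmed schema:

```
sourcetype=redline:drivers
| stats count as driver_count, max(size) as largest_driver, min(size) as smallest_driver by host

sourcetype=redline:drivers
| top limit=5 filename, path

sourcetype=redline:drivers
| rare limit=5 filename, path
```

The `stats` search yields per-host counts and size extremes, while `top` and `rare` surface the most and least common driver file names and paths across all collections at once.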


These are just a few examples of the interesting data one might pull from analyzing many collections.  The possibilities are nearly endless.  Let us know what you think.  Thanks.


  1. Nice work. We would be interested in seeing where this goes.

  2. I don't see this in the Splunk app store. Is it currently available?

    1. Ah, sorry... The updates are not pushed to the Splunk app store yet. We are putting out feelers at the moment, but will put together instructions soon. Thanks!

  3. I do use Redline, but it is not efficient at scale, and scale is where Splunk analysis would be most useful. Have you found a way to get large numbers of Redline collections done quickly?

    1. That is a great observation. I ran the question by my colleagues and we came up with a multi-phased approach. Here are some thoughts:
      1) Initiating collection: Create the collector and launch the batch file via PsExec, PowerShell, or another remote-execution tool.
      2) Forwarding: Create a common share for the output. Monitor that directory with Splunk for ingestion and indexing.
      I think the Helper.bat file has a variable for outputdir. It may be possible to use this to direct the output elsewhere.

      This may need to be fleshed out and tested a bit, but we would be open to your thoughts. Thanks.
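      For the forwarding phase, a minimal `inputs.conf` stanza on the Splunk instance might look like the following — the share path, sourcetype, and index names are illustrative assumptions:

```
# Monitor the common output share for new Redline collections
[monitor://\\fileserver\redline_output]
sourcetype = redline
index = forensics
recursive = true
disabled = false
```

      With this in place, any collection a host drops onto the share would be picked up and indexed automatically.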

  4. This looks awesome! Any chance we could get a beta version?

    1. Thank you for the interest in Redline / MIR file processing. We just released version 1.1.7 of the app which supports this feature. Please check out the Details tab for instructions.

  5. This looks awesome! Any chance we could get a beta version?

    1. I hope to release a new version of the Forensic Investigator app by the end of the week which will have a ton of dashboards and should be able to handle both Redline and MIR output. Send me an email via the Help -> Send feedback functionality within the FI app and we can discuss the road map a bit. Thanks for the comment.
