If you have been using RSA NetWitness Packets for any length of time, you might have noticed that many large sessions max out at approximately 32 MB. Furthermore, there may be multiple 32 MB sessions between the same two hosts.
Beginning in 10.5, a new meta key called 'session.split' was added to track related follow-on sessions. While the decoder settings may cap a session at 32 MB (the default setting, which I don't recommend changing), network traffic is not bound by such constraints; it can be as large as it needs to be. All of that traffic is still captured, but previously there wasn't anything really tying the sessions together. With session.split, however, we can see that there is more network data to be found. In the 'List View' screenshot above, you can see the numbers on the far right of the session. You can right-click on that number and find the session fragments in a new tab.
If that view doesn't work for you, you can build your own custom view like the one below.
Recently, I was having a discussion with some colleagues about how to find outbound uploads greater than 1 GB in size, in order to identify some potential exfiltration use cases. One thing that came to mind was using the meta in 'session.split'. In a few short minutes, I had an application rule built using some of the content from the Hunting Pack (RSA Announces the Availability of the Hunting Pack in Live). Let's break it down.
First, we know that it would be outbound network traffic. Therefore, we could start our application rule with:
(medium = 1 && direction = 'outbound')
If you don't have directionality set up on your decoders, you could substitute "direction = 'outbound'" with "org.dst exists".
Next, we look at the new meta key from the Hunting Pack called 'analysis.session' (aka Session Characteristics). The purpose of this meta key is to tell the analyst things that were observed about the network session. In our case, we are looking for 'ratio high transmitted'.
The meta value 'ratio high transmitted' refers to a calculation comparing the transmitted bytes (requestpayload) to the received bytes (responsepayload) in a network session. It produces a ratio score of 0 - 100 showing which side sent more data: a score of 0 means more bytes were received than transmitted, while a score of 100 means more bytes were transmitted than received. Since we are looking for uploads, which typically transmit more data than they receive, we can add this meta to our app rule.
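Conceptually, the transmit ratio can be sketched as follows. This is only an illustration of the idea described above; the exact formula NetWitness uses to derive 'ratio high transmitted' is an assumption here.

```python
def transmit_ratio(request_payload: int, response_payload: int) -> int:
    """Illustrative 0-100 score: the share of a session's payload bytes
    that were transmitted (requestpayload) rather than received
    (responsepayload). Sketch only -- not the documented NetWitness formula.
    """
    total = request_payload + response_payload
    if total == 0:
        return 0
    return round(100 * request_payload / total)

# A large upload: far more transmitted than received
print(transmit_ratio(request_payload=30_000_000, response_payload=500_000))  # -> 98
```

A score near 100 is the kind of session the 'ratio high transmitted' characteristic flags.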
(medium = 1 && direction = 'outbound' && analysis.session = 'ratio high transmitted')
However, we aren't done yet. How do we tell if the transfer is around or over 1 GB? This is where session.split comes in. Since sessions are capped at 32 MB by the default decoder configuration, we can do some math to figure out how many sessions it would take to reach approximately 1 GB.
1024 MB / 32 MB = 32 sessions.
Since there could be retransmitted data or other anomalies in the traffic, let's give ourselves an approximate session count of 30. This means that if session.split reaches 30 (really 31 sessions, since the count starts from 0), we have a large session and may want to take a closer look.
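The arithmetic above can be sketched as a couple of lines of Python. The 32 MB cap and 1 GB target come from the text; the margin of 2 sessions is the judgment call described above.

```python
SESSION_CAP_MB = 32              # default decoder maximum session size
TARGET_GB = 1                    # upload size we want to detect

# How many 32 MB sessions make up roughly 1 GB of traffic
sessions_needed = (TARGET_GB * 1024) // SESSION_CAP_MB

# Shave a couple of sessions off to allow for retransmits and anomalies
split_threshold = sessions_needed - 2

print(sessions_needed, split_threshold)  # -> 32 30
```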
Therefore, our application rule looks like:
(medium = 1 && direction = 'outbound' && analysis.session = 'ratio high transmitted' && session.split >= 30)
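As a sanity check, the rule's logic can be restated as a simple predicate over a session's meta values. This is a hypothetical re-statement for clarity (app rules do not execute as Python on a decoder), using the same key names as the rule.

```python
def matches_large_outbound_transmit(meta: dict) -> bool:
    """Hypothetical Python restatement of the app rule's four conditions."""
    return (
        meta.get("medium") == 1
        and meta.get("direction") == "outbound"
        and "ratio high transmitted" in meta.get("analysis.session", [])
        and meta.get("session.split", 0) >= 30
    )

# A session fragment deep into a large outbound upload
session = {
    "medium": 1,
    "direction": "outbound",
    "analysis.session": ["ratio high transmitted"],
    "session.split": 31,
}
print(matches_large_outbound_transmit(session))  # -> True
```

Any one failing condition (inbound traffic, a low transmit ratio, or too few splits) keeps the session from being tagged.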
I called mine 'large_outbound_transmit', but you can call it whatever you like. This will tag any of the follow-on sessions that match the criteria we set in the app rule, starting at session.split 30. To find all the session fragments, go back into Investigation Events, select the List view, right-click on the session.split number (not the little icon to the left of it), and select 'Refocus' or 'Refocus New Tab'.
What's nice about this rule is that it works whether the content is encrypted or unencrypted. It is simply working against meta we've already collected. Now, I can tell if I have large network sessions leaving my network. If you regularly have large sessions, perhaps creating a filtering application rule or feed may help reduce some of that noise.