The NetWitness Suite provides a number of out-of-the-box tools to analyze your data. But there is also a capability hidden under the hood which, if implemented correctly, can be invaluable for identifying additional suspicious patterns: building a baseline and performing trend analysis on top of it.
This approach can help whenever a significant change in the rate of a given value could indicate a security issue. Of course, not all threats can be identified this way!
To perform any statistical analysis, numbers are an obvious requirement, and these have to be derived from the collected events first. The attached (unofficial) model, inspired by the new 10.6 Event Source Automatic Monitoring functionality, offers a solid way to count the number of occurrences without having to buffer all the events in memory for a long timeframe.
For each value of a given meta key, the number of occurrences is counted every minute and then aggregated every five minutes, every hour and every day to minimize the impact on ESA performance. Then, for each hour (and for each day), a baseline is created.
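To give an idea of how this kind of counting can be expressed in EPL (the actual statements are in the attached presentation), a minimal sketch could look like the following. The Event stream name and the user_dst meta key are assumptions used purely for illustration, and the syntax may need adjusting to your ESA version:

// Count occurrences of each user_dst value once per minute
@Name('MinuteCount')
insert into MinuteCount
select user_dst as metaValue, count(*) as cnt
from Event.win:time_batch(1 minute)
where user_dst is not null
group by user_dst;

// Roll the per-minute counts up into five-minute buckets
@Name('FiveMinuteCount')
insert into FiveMinuteCount
select metaValue, sum(cnt) as cnt
from MinuteCount.win:time_batch(5 minutes)
group by metaValue;

// Roll the five-minute counts up into hourly buckets
@Name('HourCount')
insert into HourCount
select metaValue, sum(cnt) as cnt
from FiveMinuteCount.win:time_batch(1 hour)
group by metaValue;

Because only the aggregated counters move from one stage to the next, the raw events never need to be retained in memory beyond the one-minute batch.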
Whenever there is a significant deviation in the rate of any meta value compared to its baseline, an alert is generated.
The duration of the learning phase, the magnitude of the deviation and the duration of the baseline are all configurable parameters.
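Again purely as an illustration, and not the actual logic shipped with the presentation, the baseline comparison could be sketched along these lines, with the deviation threshold exposed as an EPL variable. The named window, the seven-day retention and the factor of 3 are all assumptions:

// Alert when a value's hourly rate exceeds its baseline by this factor (illustrative default)
create variable double deviationFactor = 3.0;

// Keep a rolling seven days of hourly counts per value as the baseline
create window HourlyBaseline.win:time(7 days) as HourCount;
insert into HourlyBaseline select * from HourCount;

// Compare each new hourly count against the average of that value's baseline
@Name('RateDeviationAlert')
select hc.metaValue as metaValue,
       hc.cnt as currentCount,
       (select avg(cnt) from HourlyBaseline as b where b.metaValue = hc.metaValue) as baselineAvg
from HourCount as hc
where hc.cnt > deviationFactor *
      (select avg(cnt) from HourlyBaseline as b where b.metaValue = hc.metaValue);

A learning phase can be enforced on top of this by suppressing the alert statement until the baseline window has accumulated enough history.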
As an implementation best practice, do not use meta keys with too many unique values (e.g. ip.src), since they would generate too many false positives. Start by focusing on keys with a few, but significant, unique values.
All the details regarding the model, how it works and how to implement it can be found in the attached presentation, together with the full EPL code.
For a different but complementary approach, I'd suggest reading this excellent post by Nikolay Klender: https://community.rsa.com/thread/187264
Please note this is not official RSA-supported content, so use it at your own risk!