When working in a Security Operations Center, it is not uncommon to continuously adapt people, processes, and technologies to objectives that evolve over time, because business requirements are dynamic and the threat landscape out there is never the same. Every environment is somehow unique, and each organization has specific needs that are eventually reflected in the way the SOC operates and achieves its goals.
While adapting to new situations is inherent to human nature, each piece of technology instead embeds a logic that is not always easy to bend. This is why relying on products that allow a high degree of customization can become a key element in an organization's SOC strategy, leading to easier integration with the enterprise environment, increased quality of service, and ultimately a better Return on Investment.
Flexibility has always been a central element in Security Analytics, and easily adapting the platform to handle custom use cases is a key factor. But, you might say, prove it then!
Over the last few weeks, I have posted a few articles here about customizing the platform, intended to demonstrate how to get more value out of it or how to achieve complex use cases.
In my first post (available at https://community.emc.com/thread/198366) I shared some simple rules that promote a standard naming convention and approach for "tagging" inbound/outbound connections, as well as for naming our networks.
Understanding whether a connection is going into or out of our network is key to better focusing our investigations, running our reports, and configuring our alerts. Tagging our networks, on the other hand, helps determine which service is impacted, evaluate the risk, and prioritize our follow-up actions accordingly.
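To make the idea concrete, here is a minimal Python sketch of the classification logic those rules capture. The actual implementation lives in Security Analytics application rules; the address ranges, network names, and meta key names below are illustrative assumptions, not the platform's own:

```python
import ipaddress

# Internal address space (assumption: RFC 1918 ranges; adjust to your environment)
INTERNAL_NETS = [ipaddress.ip_network(n) for n in
                 ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

# Named networks used to "tag" traffic (hypothetical examples)
NAMED_NETS = {
    ipaddress.ip_network("10.1.0.0/16"): "datacenter",
    ipaddress.ip_network("10.2.0.0/16"): "user-lan",
    ipaddress.ip_network("192.168.100.0/24"): "dmz",
}

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def direction(src: str, dst: str) -> str:
    """Mimics a 'direction' meta value: inbound/outbound/lateral/external."""
    s, d = is_internal(src), is_internal(dst)
    if s and d:
        return "lateral"
    if s:
        return "outbound"
    if d:
        return "inbound"
    return "external"

def netname(ip: str) -> str:
    """Mimics a 'netname' meta value used to tag our networks."""
    addr = ipaddress.ip_address(ip)
    for net, name in NAMED_NETS.items():
        if addr in net:
            return name
    return "other"

print(direction("10.2.3.4", "8.8.8.8"))   # outbound
print(netname("192.168.100.7"))           # dmz
```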
In my second article (available at https://community.emc.com/thread/198832) I focused on how to enhance the log parsing mechanism by leveraging parsers commonly used to analyze a network stream, which are more flexible and powerful. I demonstrated a specific use case by providing a sample parser that generates a hash of the entire log message and stores it in a meta key. This is a common scenario, for example, when a compliance requirement mandates a per-event integrity check.
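The real parser runs inside the platform's parsing engine, but the underlying logic is simple enough to sketch in Python. SHA-256 and the "checksum" meta key name below are assumptions made for illustration; the post's sample parser defines its own:

```python
import hashlib

def log_integrity_meta(raw_log: str) -> dict:
    """Compute a hash of the entire raw log message and expose it as a
    meta key, so each event carries its own integrity checksum."""
    digest = hashlib.sha256(raw_log.encode("utf-8")).hexdigest()
    # 'checksum' is an illustrative meta key name, not the platform's
    return {"checksum": digest}

event = "Jan 12 10:01:02 fw01 DROP TCP 10.1.2.3:4444 -> 8.8.8.8:53"
print(log_integrity_meta(event))
```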
In my third post (available at https://community.emc.com/thread/198976) I discussed a simple but interesting scenario. The Event Stream Analysis (ESA) module, responsible in Security Analytics for correlating log and packet meta to identify potentially malicious or anomalous activities, is often the last link of the chain, handing its results off outside the platform (to the analyst, to a ticketing system, etc.). There are, however, many relevant use cases that can be accomplished by feeding this information back into Security Analytics: just to name a few, providing additional context during an investigation by making available all the alerts triggered by a specific user/endpoint, or implementing a sort of multi-step correlation scenario. Sample parsers are provided in that post as well.
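To illustrate why the feedback loop matters, here is a small, hypothetical Python sketch of the enrichment idea: alerts are indexed by the user/endpoint they concern, so a later lookup (or a second correlation step) can see everything already raised for that entity. In the platform this is done by re-ingesting the ESA output through a parser; the plain dictionary below merely stands in for the indexed meta store:

```python
from collections import defaultdict

# Alerts fed back into the platform, indexed by the entity they concern
# (a stand-in for meta indexed by user/endpoint, an assumption for clarity).
alerts_by_entity = defaultdict(list)

def ingest_alert(entity: str, alert_name: str) -> None:
    """Re-ingest an ESA alert so it becomes queryable context."""
    alerts_by_entity[entity].append(alert_name)

def context_for(entity: str) -> list:
    """All alerts previously triggered by a user/endpoint: the extra
    context an analyst sees during an investigation."""
    return alerts_by_entity[entity]

def multi_step_hit(entity: str, required: set) -> bool:
    """A naive multi-step correlation: fire only once an entity has
    accumulated every alert in the required set."""
    return required.issubset(alerts_by_entity[entity])

ingest_alert("host-10.1.2.3", "port-scan")
ingest_alert("host-10.1.2.3", "malware-beacon")
print(context_for("host-10.1.2.3"))
print(multi_step_hit("host-10.1.2.3", {"port-scan", "malware-beacon"}))
```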
In my last post (available at https://community.emc.com/thread/199143) I wanted to recall a capability already mentioned a few times in the discussions here but never emphasized enough: leveraging parsers to post-process the meta values created by a log parser in order to generate new pieces of meta. A typical example is splitting a URL identified by all the parsers into domain, TLD, directory, page, and extension. Applying that logic in every log parser that generates URLs may be possible, but it does not scale very well. A single parser can instead do the job easily and effectively.
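The decomposition itself is straightforward, which is exactly why centralizing it in a single post-processing parser pays off. Here is a Python sketch of the splitting logic; the meta key names are illustrative, and the naive TLD extraction below ignores multi-label suffixes such as co.uk:

```python
from urllib.parse import urlparse
import posixpath

def split_url(url: str) -> dict:
    """Decompose a URL meta value into domain, TLD, directory,
    page and extension, as one post-processing parser would."""
    parts = urlparse(url)
    domain = parts.hostname or ""
    tld = domain.rsplit(".", 1)[-1] if "." in domain else ""
    directory, page = posixpath.split(parts.path)
    extension = posixpath.splitext(page)[1].lstrip(".")
    return {"domain": domain, "tld": tld,
            "directory": directory, "page": page,
            "extension": extension}

print(split_url("http://www.example.com/downloads/tools/setup.exe"))
# {'domain': 'www.example.com', 'tld': 'com',
#  'directory': '/downloads/tools', 'page': 'setup.exe',
#  'extension': 'exe'}
```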
All of these examples are intended to show how a technology, when designed to be flexible, can easily adapt to specific situations, thereby supporting complex use cases and ad-hoc requirements.