2013-10-22 11:49 PM
I have reviewed the NetWitness documentation and haven't found a way to do this, so I will post it here for any alternate ideas. Please let me know if you have suggestions.
I would like to create a feed with multiple keys. For instance, say I want to create a feed that matches multiple meta values such as a combination of hostnames, filenames, and directory names and adds an alert to those sessions.
Is this possible to do with Netwitness feeds?
2013-10-23 04:49 PM
The feeds work on a single meta key, AFAIK. It sounds to me like you are trying to detect web URLs. IMHO, it would be very difficult to maintain a running list of malicious URLs.
Take urlquery.net, for instance. They have millions of unique malicious URLs, and relying on a single feed to detect malicious activity would be a burden on the system. I've had a lot of success creating rules that look for pattern matches on directory structures, filenames, action types, query lengths, contains statements, etc., and combining those together to detect a family of suspicious URLs.
That said, if you have NetWitness for Logs and you integrate with Websense or Blue Coat, you can use a correlation of those logs to help hunt for malicious URLs. But that will still only cover what those companies know to be bad. The method above can still find unknown URLs through a matching pattern of meta combinations.
See my other posts about testing some rules.
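As a rough standalone illustration of that combination approach (this is plain Python, not NetWitness rule syntax, and the patterns below are made-up examples, not real indicators), several weak signals can be checked together so that no single one has to be conclusive:

```python
import re
from urllib.parse import urlparse

# Hypothetical example patterns; real rules would be tuned to a malware family.
RANDOM_DIR = re.compile(r"^/(?:[A-Za-z0-9]{5,12}/){2,}$")   # /Zd9Xf/86U0xCAA/... style paths
SUSPECT_EXT = re.compile(r"\.(exe|scr|jar)$", re.IGNORECASE)  # executable-looking filenames

def suspicious(url, min_hits=2):
    """Count weak indicators and flag the URL when enough of them match."""
    parts = urlparse(url)
    directory, _, filename = parts.path.rpartition("/")
    hits = 0
    if RANDOM_DIR.match(directory + "/"):   # random-looking directory structure
        hits += 1
    if SUSPECT_EXT.search(filename):        # suspicious filename extension
        hits += 1
    if len(parts.query) > 100:              # unusually long query string
        hits += 1
    return hits >= min_hits

print(suspicious("http://bad.com/Zd9Xf/86U0xCAA/leTcVAAAA/payload.exe"))  # True
print(suspicious("http://example.com/index.html"))                        # False
```

The point is the structure, not the specific thresholds: each check on its own is noisy, but requiring a combination of them narrows the matches to a URL family.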
2013-10-28 12:21 PM
If you want to do a simple combination of multiple keys, you could create a parser that concatenates those values and writes the result to a new meta key. For instance,
alias.host = bad.com, directory = /malware/, filename = malware.exe
would become
full_uri = bad.com/malware/malware.exe
Then you can build a feed for full_uri.
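To illustrate the concatenation in standalone Python (`full_uri` here is a custom key name from the example above; an actual implementation would be a Lua parser writing meta via the nw API):

```python
def build_full_uri(meta):
    """Concatenate host, directory, and filename meta into one feed-matchable value."""
    host = meta["alias.host"].rstrip("/")
    directory = meta["directory"].strip("/")
    filename = meta["filename"]
    return "{}/{}/{}".format(host, directory, filename)

meta = {"alias.host": "bad.com", "directory": "/malware/", "filename": "malware.exe"}
print(build_full_uri(meta))  # bad.com/malware/malware.exe
```

A feed keyed on the combined value then effectively matches on all three meta keys at once.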
You might also want to do the opposite in some cases, for instance split the directory key. So instead of just
directory = /Zd9Xf/86U0xCAA/leTcVAAAA/
you have
directory = /Zd9Xf/86U0xCAA/leTcVAAAA/
directory = Zd9Xf
directory = 86U0xCAA
directory = leTcVAAAA
Here's a parser in Lua that achieves this:
-- Splits each directory meta value on '/' and registers the
-- individual path components as additional directory meta.
local dirsplitter = nw.createParser("directorysplitter",
    "Split directory into components")

dirsplitter:setKeys({
    nwlanguagekey.create('directory')
})

function dirsplitter:onDirectory(_, value)
    -- Only split values that contain a separator; the components
    -- we create contain no '/', so they will not be split again.
    if string.find(value, '/') then
        for dir in string.gmatch(value, "[^/]+") do
            nw.createMeta(self.keys['directory'], dir)
        end
    end
end

dirsplitter:setCallbacks({
    [nwlanguagekey.create('directory')] = dirsplitter.onDirectory
})
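For reference, the same split-on-'/' logic in standalone Python, run against the example value above (Lua's `gmatch("[^/]+")` corresponds to `re.findall` here):

```python
import re

def split_directory(value):
    """Return the non-empty path components, mirroring Lua's gmatch('[^/]+')."""
    return re.findall(r"[^/]+", value)

print(split_directory("/Zd9Xf/86U0xCAA/leTcVAAAA/"))
# ['Zd9Xf', '86U0xCAA', 'leTcVAAAA']
```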
2014-04-15 11:44 AM
Hi Fielder, I have a really big list of URLs that I need to search/match against Blue Coat. How can I use this list in a correlation rule? Is there another way to do this?
Thanks in advance.
2014-04-16 12:21 PM
I'm not sure about your list of URLs: does it come from Blue Coat, or do you want to match it against Blue Coat logs?
Either way, URLs map back to several different meta keys, and it helps to understand how SA sees URLs.
The hostname maps to alias.host.
Then there is an action: get, put, head, etc.
Next comes the directory, followed by the filename.
Finally, the query is everything after the first question mark in the URL.
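That decomposition can be sketched in standalone Python (just to illustrate the mapping; the dictionary keys are the SA meta keys described above, and the URL is a made-up example):

```python
from urllib.parse import urlparse

def url_to_meta(method, url):
    """Map an HTTP request onto the meta keys described above."""
    parts = urlparse(url)
    directory, _, filename = parts.path.rpartition("/")
    return {
        "alias.host": parts.hostname,
        "action": method.lower(),
        "directory": directory + "/",
        "filename": filename,
        "query": parts.query,  # everything after the first '?'
    }

print(url_to_meta("GET", "http://bad.com/malware/malware.exe?id=1&x=2"))
# {'alias.host': 'bad.com', 'action': 'get', 'directory': '/malware/',
#  'filename': 'malware.exe', 'query': 'id=1&x=2'}
```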
I also work with lists of known malicious URLs. What I do is pull out the malicious hostnames, and add those to a feed of known bad hostnames. If I see a pattern of malicious filenames, I pull out those specific filenames for another feed. Finally, if there are patterns of queries or directories, I typically use those patterns in an app rule. This way, if the pattern is still detected, but not to a known bad hostname, it is likely the same malware family but using different hosts. Make sense?
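So, for example, a hostname pulled from the list would go into a feed keyed on alias.host, while a directory or filename pattern might become an app rule condition along these lines (purely illustrative, using the example values from earlier in this thread; check the rule syntax reference for your SA version):

```
filename = 'malware.exe' && directory contains 'Zd9Xf'
```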