Following on from my last post, which focused on analysing web server logs ( https://community.rsa.com/community/products/netwitness/blog/2020/04/30/detecting-webshells-with-web-server-logs ), this time we are going to look at the network-based indicators from the ASD & NSA guide Detect and prevent web shell malware | Cyber.gov.au.
There are already some fantastic resources posted by my colleague from the IR team, Lee Kirkpatrick, and by the NetWitness product documentation team, which provide great detail on the different ways we can detect web shells using NetWitness for network visibility:
The focus of this post is taking the indicators published by the ASD & NSA in their guide, and showing how to use them in NetWitness.
Not all indicators are created equal, and this post should not be taken as an endorsement by this author or RSA of the effectiveness and fidelity of the indicators published by the ASD & NSA.
Now that's out of the way, let's take a look at the network indicators.
This is really focused on the URIs being accessed on your servers and the user agents being used to access those pages. An easy way to detect new user agents, or new files being accessed on your website (depending on how dynamic your content is), is to use the show_whats_new report action. The show_whats_new action filters the results of a query to show only values that did not appear in the database prior to the timeframe of your report. Here's an example from my lab: if I run a report to show all user agents seen in the last 6 hours, I get 20 user agents in my report:
Using show_whats_new in the THEN clause of the rule filters the results and shows me only 2 user agents (which makes sense, as my Chrome browser recently updated):
Obviously just because a user agent is new doesn’t automatically mean it is a web shell, as web browsers get updates all the time. But it is another method for highlighting anomalies and changes in your environment.
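If you want to build something similar, here is a minimal sketch of the kind of Reporting Engine rule described above. It assumes the default NetWitness mapping where the HTTP user agent is parsed into the client meta key; double-check the exact action syntax against the Reporting Engine rule syntax documentation for your version:

Select: client
Where: client exists
Then: show_whats_new()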
One of the common techniques we use in the IR team is to review the HTTP request methods used against a server: sessions that do not follow the pattern of normal user web browsing are a good indicator of web shells. Normal user-generated browsing consists of GET requests followed by POSTs. Sessions that have a POST action with no GET request and no referrer present are a good indicator, as Lee covers in his post mentioned above.
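As a starting point for that technique, an application rule condition along these lines (my own simplified sketch, not taken from the guide or from Lee's post) will surface HTTP sessions containing a POST with no referrer header. It assumes the default HTTP parser meta keys (action and referer) and will need tuning for your environment, since legitimate API and form submissions can also POST without a referrer:

service = 80 && action = 'post' && referer !exists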
As the ASD & NSA guide states itself, network signatures are an unreliable way to detect web shell traffic:
From the network perspective, signature-based detection of web shells is unreliable because web shell communications are frequently obfuscated or encrypted. Additionally, “hard-coded” values like variable names are easily modified to further evade detection. While unlikely to discover unknown web shells, signature-based network detection can help identify additional infections of a known web shell.
The guide nevertheless includes some Snort rules to detect network communication from common, unmodified web shells:
RSA NetWitness has always had the ability to use Snort rules on the Network Decoder, and that capability was recently enhanced in the 11.3 release, which added the ability to map the meta data generated by the Snort parser to the Unified Data Model. For the steps required to install and configure Snort rules on your Network Decoder, follow these guides:
Here’s the short version:
Copy the rules from the ASD & NSA guide into a file called webshells.rules, and place that file in the Snort rules directory on your Network Decoder (the guides linked above give the exact location for your version)
Mitigating-Web-Shells/network_signatures.snort.txt at master · nsacyber/Mitigating-Web-Shells · GitHub
In the Explore view for your Decoder, navigate to decoder > parsers > config and add Snort="udm=true" to the parsers.options field
Here we can see the Snort rules successfully loaded and available on the Network Decoder:
The ASD & NSA guide suggests monitoring the network for unexpected web servers, and provides a Snort signature that simply alerts when a node in the targeted subnet responds to an HTTP(S) request, by looking for TCP traffic sourced from port 80 or 443 on any host in a given subnet:
alert tcp 192.168.1.0/24 [443,80] -> any any (msg:"potential unexpected web server"; sid:4000921;)
Rather than updating this rule with the right subnet details for your environment (details that would then only be usable by this one rule), we can do this natively in NetWitness using the Traffic Flow parser and its associated traffic_flow_options file to label subnets and IP addresses. Doing the labelling in the traffic_flow_options file means the resulting meta can be used by other parsers, feeds, and app rules as well.
For more details on the Traffic Flow parser, go here: Traffic Flow Lua Parser
To configure your traffic_flow_options file, start with the subnet or IP addresses of known web servers and add them as a block in the INTERNAL section of the file, and label them “web servers”. When traffic is seen heading to those servers as a destination, the meta ‘web servers dst’ will be registered under the Network Name (netname) meta key.
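Once the updated options file has been pushed to your Decoders, a quick sanity check (using the example label above) is to query for the netname value the Traffic Flow parser should now be registering:

netname = 'web servers dst'

If sessions destined for your known web servers carry that value, the app rule below will behave as expected.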
Once the traffic_flow_options file is configured, we can translate the Snort rule from the guide into an app rule that detects any HTTP or HTTPS traffic, or any traffic destined to port 80 or 443, going to a system that has not been labelled as one of our web servers:
(service = 80,443 || tcp.dstport = 80,443) && netname != 'web servers dst'
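Deployed as an application rule on the Decoder with the Alert option enabled, matching sessions are tagged with the rule name under the alert meta key (assuming the default alert key; the rule name below is simply a placeholder I chose for this example), which gives you an easy pivot in Investigate:

alert = 'unexpected_web_server'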
That covers the network-based indicators included in the ASD & NSA guide. For more techniques to uncover web shell network traffic, check out the pages linked at the top of this blog, as well as the RSA IR Threat Hunting Guide for NetWitness:
Stay tuned for the next part, where we take a look at the endpoint-based indicators from the guide and see how to apply them using NetWitness Endpoint.
Happy Hunting!