Understanding how attackers may gain a foothold on your network is an important part of being an analyst. If attackers want to get into your environment, they typically will find a way. It is up to you to detect and respond to these threats as effectively and efficiently as possible. This blog post will demonstrate how a host became infected with PoshC2, and subsequently how the C&C (Command and Control) communication looks from the perspective of the defender.
The attacker crafts a malicious Microsoft Word document containing a macro with their payload. This document is sent to an individual in the organisation they want to attack, in the hope that the user will open the document and execute the macro within. The Word document attempts to trick the user into enabling macros with content like the following:
The user enables the content and sees nothing further, but in the background the malicious macro executes and the computer is now part of the PoshC2 framework:
From here, the attacker can start to execute commands, such as tasklist, to view all currently running processes:
The attacker may also choose to set up persistence by creating a local service:
Prior to performing threat hunting, the analyst needs to assume compromise and generate a hypothesis about what they are looking for. In this case, the analyst is going to focus on hunting for C2 traffic over HTTP. This hypothesis dictates where they will look for that traffic, and which meta keys will help them achieve the desired result. Refining the approach this way yields far better detection: when analysts have a path to walk down, and can exhaust all possible avenues of that path before taking another route, the data set is sifted through in a more methodical manner, with fewer distractions.
Understanding how HTTP works is vital to detecting malicious C2 over HTTP. To become familiar with it, analysts should analyse both HTTP traffic generated by malware and HTTP traffic generated by users; this allows the analyst to quickly determine what is out of place in a data set versus what seems normal. Blending in with regular network communications is a common strategy among malware authors, who want to appear as innocuous as possible. But by their very nature, Trojans are programmatic and structured, and when examined, it becomes clear the communications hold no business value.
Taking the information above into account, the analyst begins their investigation by focusing on the protocol of interest at this point in time: HTTP. This one simple query quickly removes a large amount of the data set and allows the analyst to place an analytical lens on just the protocol of interest. This is not to say that the analyst will never look at other protocols, but for this hunt, their focus is on HTTP:
Now the data set has been significantly reduced, but that reduction needs to continue. A great way of reducing the data set to interesting sessions is the Service Analysis meta key. This meta key contains metadata that pulls apart the protocol and details information about the session, which can help the analyst distinguish between user behavior and automated malicious behavior. The analyst opens the meta key and focuses on a few characteristics of the HTTP session that they think make the traffic more likely to be of interest:
Let's delve into these a little and find out why they were picked:
There are a variety of other drills the analyst could have performed, but for now this will suffice for the path they want to take, as they have reduced the data set to a more manageable amount. While perusing the other available metadata, the analyst observes an IP communicating directly with another IP, with requests for a diverse range of resources:
Opening the visualization to analyse the cadence of the communication, the analyst observes some beaconing-type behavior:
Reducing the time frame, the beaconing is easier to see and appears to occur roughly every 5 minutes:
Upon opening the Event Analysis view, the analyst can see the beacon pattern occurring roughly every 5 minutes. The analyst also observes low variance in the payload size; this is indicative of mechanical check-in behavior, which is exactly what the analyst was looking for:
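The cadence the analyst eyeballs here can also be scored programmatically. Below is a minimal sketch using hypothetical timestamps and payload sizes (not the actual capture): low relative jitter in both the inter-arrival times and the payload sizes suggests mechanical check-in traffic rather than human browsing.

```python
import statistics

def beacon_score(timestamps, payload_sizes):
    """Return (interval_jitter, size_jitter) for one src/dst pair.
    Values near zero mean near-constant timing and payload size,
    i.e. beacon-like behaviour."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    interval_jitter = statistics.pstdev(deltas) / statistics.mean(deltas)
    size_jitter = statistics.pstdev(payload_sizes) / statistics.mean(payload_sizes)
    return interval_jitter, size_jitter

# Hypothetical check-ins roughly every 300 seconds, near-identical sizes
ts = [0, 301, 599, 900, 1202, 1498]
sizes = [412, 410, 413, 411, 412, 410]
ij, sj = beacon_score(ts, sizes)
print(ij < 0.1 and sj < 0.1)  # True: low jitter on both axes
```

The thresholds are arbitrary here; in practice they would be tuned against baseline traffic for the environment.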
Now that the analyst has found some interesting sessions, they can reconstruct the raw payload to look for further anomalies of interest. Browsing through the sessions, the analyst sees that the requests do not return any data and are for random-sounding resources. This looks like some sort of check-in behavior:
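Those "random-sounding" resource names can be flagged with a quick heuristic: machine-generated strings tend to have higher Shannon entropy than dictionary words. A small sketch (the example strings are hypothetical, and any cut-off would need tuning per environment):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character of a string; random-looking strings
    score higher than natural-language words."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A dictionary-style resource name vs. a machine-generated one
print(shannon_entropy("login"))       # lower
print(shannon_entropy("kx9qzt2vw8"))  # higher
```

Entropy alone produces false positives (hashes and cache-busting strings are legitimate), so it is best used to rank sessions for review rather than to alert outright.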
The analyst comes back to the Events view to see if there are any larger sessions toward this IP, to get a better sense of whether any data is being sent back and forth. The analyst notices a few sessions that are larger than the others and decides to investigate them:
Reconstructing one of the larger sessions, the analyst can see a large chunk of base64 is being returned:
As well as POSTs with a suspicious base64-encoded Cookie header that does not conform to the RFC:
This seems to be the only type of data transferred between the two IPs, and it stands out as very suspicious. This should alert the analyst that this is most likely some form of C2 activity:
The base64 is encrypted, and therefore the analyst cannot simply decode it to find out what information is being transferred.
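One way to pivot on this pattern at scale is a heuristic that asks whether a Cookie value decodes cleanly as base64. A sketch (legitimate session tokens are often base64 too, so a hit should be treated as a prompt for review, not a verdict):

```python
import base64
import binascii

def looks_like_base64(value, min_len=16):
    """Heuristic: does a Cookie header value decode cleanly as
    base64? Short or oddly padded values are rejected outright."""
    if len(value) < min_len or len(value) % 4 != 0:
        return False
    try:
        base64.b64decode(value, validate=True)
        return True
    except binascii.Error:
        return False

# Hypothetical cookie value standing in for the captured one
sample = base64.b64encode(b"task output from implant").decode()
print(looks_like_base64(sample))  # True
```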
THOUGHT: Maybe there is another way for us to get the key to decode this? Keep reading on!
The analyst has now found some suspicious activity; the next stage is to track it and see if it is happening elsewhere. This can easily be done with an application rule. The analyst identifies criteria somewhat unique to this traffic using the Investigation view and converts them into an application rule; the following example would pick up on this activity and make it far easier for the analyst to track:
(service = 80) && (server = 'microsoft-httpapi/2.0') && (filename !exists) && (http.response = 'cachecontrol') && (resp.uniq = 'no-cache, no-store, must-revalidate') && query length 14-16
IMPORTANT NOTE: Before adding this application rule to the environment, it is important to note that the analyst thoroughly checked how many hits this logic would create in their environment before deploying. Application rules can work well in one environment, but can be very noisy in others.
It is also important to note that this application rule was generated specifically for this environment and the traffic that was seen; not all PoshC2 traffic would look this way. It is up to the analyst to create application rules that suit their environment. Note as well that the http.response and resp.uniq meta keys need to be enabled in the http_lua_options file, as they are not enabled by default.
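For offline triage of exported session metadata, the rule's logic can also be sketched as a Python predicate. The dictionary keys below simply mirror the rule's meta keys and are an assumption about the export format:

```python
def matches_poshc2_rule(session):
    """Python rendering of the application rule above, applied to
    one session's metadata exported as a dict."""
    q = session.get("query") or ""
    return (
        session.get("service") == 80
        and session.get("server") == "microsoft-httpapi/2.0"
        and not session.get("filename")              # filename !exists
        and session.get("http.response") == "cachecontrol"
        and session.get("resp.uniq") == "no-cache, no-store, must-revalidate"
        and 14 <= len(q) <= 16                       # query length 14-16
    )
```

As with the rule itself, this logic is specific to the traffic observed in this environment and would need adjusting elsewhere.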
The analyst creates the application rule, and pushes this to all available Decoders:
Upon doing so, the analyst sees the application rule creating metadata as expected, but also notices that there is another C2, and another host in their network infected by PoshC2:
This demonstrates the necessity of tracking activity on your network as and when it is found; it can uncover newly infected endpoints and allows you to track that activity easily.
From here, the analyst has multiple routes they could take:
The analyst, while performing their daily activity of perusing IOCs (Indicators of Compromise), BOCs (Behaviors of Compromise), and EOCs (Enablers of Compromise), observes a BOC that stands out: office application runs powershell:
Opening the Event Analysis view for this BOC, the analyst can better understand what document the user opened for this activity to occur. There are three events because the user opened the document three times, probably because they saw no content from the document after enabling the macros within it:
Opening the session itself, the analyst can see the whole raw payload of the PowerShell that was invoked from the Word document:
Running this through base64 decoding, the analyst can see that it is double base64 encoded, and that the PowerShell has also been deflated, adding a further layer of obfuscation:
Decoding the second set of base64 and inflating, the actual PowerShell that was executed can now be seen:
Perusing the PowerShell, the analyst observes that there is a decode function within. This function requires an IV and KEY to successfully decrypt. This could be useful to decrypt the information that we saw in the packet data:
The analyst calculates the IV from the key which, according to the PowerShell, is the key itself minus the first 15 bytes; they then convert this to hex for ease of use:
Now the analyst has the key and the IV, they can decrypt the information they previously saw in the packets. The analyst navigates back to the packets and finds a session that contains some base64:
Using the newly identified information retrieved via the endpoint tracking, the analyst can now start to decode the information and see exactly what commands and data was sent to the C2:
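Assuming the recovered key and IV, and the AES-CBC mode implied by the decode function in the PowerShell, the decryption step might look like the sketch below. The key/IV values are placeholders, PKCS7 padding is assumed, and the third-party cryptography package is used; the block round-trips a stand-in value so it is self-verifying:

```python
import base64
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_cbc_decrypt_b64(b64_blob, key, iv):
    """Base64-decode, then AES-CBC decrypt with the recovered key/IV;
    PKCS7 unpadding assumed from the decode function's behaviour."""
    ct = base64.b64decode(b64_blob)
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpad = padding.PKCS7(128).unpadder()
    return unpad.update(padded) + unpad.finalize()

key = b"0123456789abcdef0123456789abcdef"  # hypothetical 32-byte AES key
iv = key[:16]                              # hypothetical 16-byte IV

# Encrypt a stand-in "task" the same way to show the round trip
pad = padding.PKCS7(128).padder()
pt = pad.update(b"tasklist") + pad.finalize()
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
blob = base64.b64encode(enc.update(pt) + enc.finalize())

print(aes_cbc_decrypt_b64(blob, key, iv))  # b'tasklist'
```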
Some of which can be incredibly beneficial, such as the below, which lists all the URL's this C2 will use:
The analyst also wants to find out whether any other interesting activity took place on the endpoint. Perusing the BOC meta key, the analyst spots the metadata creates suspicious service running command shell:
The analyst opens the sessions in the Event Analysis view and can see that PowerShell spawned sc.exe to create a suspicious-looking service called CPUpdater:
This is the persistence mechanism chosen by the attacker. The analyst now has the full PowerShell command and can base64 decode it to confirm their assumptions:
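PowerShell's -EncodedCommand flag takes base64 over UTF-16LE text, so confirming the command takes two decode steps. A sketch with a stand-in command line (the actual captured command is not reproduced here):

```python
import base64

def decode_encoded_command(b64):
    """Recover the readable command line from a PowerShell
    -EncodedCommand argument: base64-decode, then UTF-16LE decode."""
    return base64.b64decode(b64).decode("utf-16-le")

# Hypothetical service-creation one-liner, round-tripped for illustration
cmd = 'sc.exe create CPUpdater binPath= "C:\\updater.exe"'
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode()
print(decode_encoded_command(encoded))
```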
Understanding the nuances between user-based behavior and mechanical behavior gives an advantage to the analyst performing threat hunting. If analysts understand what "normal" looks like within their environment, they can easily discern it from abnormal behaviors.
It is also important to note the advantage of having endpoint tracking data in this scenario. Without it, the original document with the malicious PowerShell may not have been recoverable, and therefore decrypting the information passed between the C2 and the endpoint would not have been possible; both tools heavily complement one another in creating the full analytical picture.