The Havoc command and control (C2) framework is a free and open-source (FOSS) post-exploitation toolkit created by C5pider that is publicly available via GitHub (https://github.com/HavocFramework/Havoc). Supporting documentation for installation and operation of the framework is maintained on a separate site (https://havocframework.com/). The Go (Golang) based toolkit boasts a sleek, modern UI as well as malleable options for both the listeners (server-side sockets) and the associated “demons” (payloads/clients). Most technical features are listed in the documentation (https://havocframework.com/docs/welcome).
A unique feature of Havoc is an extendable open API. Although not documented, the code is there to extend the framework and create custom clients in languages such as Python. This is a very powerful feature of Havoc as it allows the operator to build custom payloads without relying on the default functionality within the framework. This is potentially troublesome from a defender’s perspective as the artifacts that could be leveraged for investigation or detection are prone to change. As we continue to research this framework, we’ll provide an update in the future to this post regarding the API use and ramifications.
C2 refers to any technique used to communicate with systems that are ‘infected’ with malicious client-side software on a victim network (MITRE Command and Control, http://attack.mitre.org/tactics/TA0011/). While C2 is not new to our industry and existed well before MITRE started classifying techniques, it is worthwhile mapping various techniques back to the MITRE framework for further use downstream in the investigative cycle. C2 frameworks often include post-exploitation capabilities to further spread throughout a network. Common C2 activity generally consists of an external server listening for communication from infected systems. An infected system typically generates beacon traffic to maintain communication with the server. As defenders, we can identify patterns associated with C2-like behavior. The various data types we have at our disposal will alter how we approach our investigations. Ideally, we’re able to hunt all major telemetry (logs, endpoint data, network traffic) seamlessly across solutions, if not within a single UI. Each data type will provide different insight. From an NDR perspective, these patterns may look like somewhat sequential source ports associated with similar payload sizes and timestamps. Other artifacts such as certificate information (subject or common name), JA3/S (TLS fingerprinting, https://github.com/salesforce/ja3), or anomalous domains may also be used when looking for C2 traffic to help identify suspicious beaconing behavior. Modern frameworks such as Havoc also support ‘multiplayer’, where multiple operators may interact with any client communicating with the server.
A Havoc listener is the server-side socket that will wait for incoming beacon traffic and interact with the deployed client-side demons. Out-of-the-box, Havoc supports HTTPS, HTTP, and SMB as protocols for C2 communication. There is also an option for external (custom) listeners.
The listeners are malleable and allow the operator to customize various hosting/socket binding fields as well as fields such as the user-agent string, URIs, and headers. The latter fields are commonly used in both threat hunting and detection engineering efforts. While the user-agent string in the image above is populated by default, it is not reliable for hunting or detection efforts because it is malleable. The SMB listener is limited to customizing the named pipe, as observed in the following image.
The demon acts as a client-side malicious executable that connects back to the server-side listener over the configured protocol. The payload configuration options are seen below.
The payload options allow the operator to select a listener per payload. If multiple listeners are being used, an operator can keep the deployed demons organized via this functionality. The remaining options allow for granular configuration of the demon’s behavior once executed. Notably, the sleep technique and injection methods allow for the operator to keep their demons from falling victim to common static fingerprinting methodology as each demon has the potential of behaving differently upon execution. This increases the burden on detection engineers as multiple rule sets may be required to identify combinations of the various demon behaviors. Once the operator is satisfied with the configuration options, the demon can be generated and renamed.
Once a demon and listener are communicating, the framework provides a wide variety of post-exploitation features as well as the ability to extend its capabilities via custom scripts. The various command options can be used for reconnaissance, lateral movement, privilege escalation, data exfiltration, and more. A few examples are below.
On top of the post-exploitation commands available within the framework, operators are also provided a Python 3.11 shell within the UI to quickly write custom scripts and leverage them against infected hosts. This expands the operator's ability to further conduct reconnaissance, move laterally, or write custom exploits as needed. It may also serve as another detection angle for defenders, but testing is still ongoing and will be part of a future update to this article. A screenshot is available below.
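As a purely hypothetical illustration of the kind of standalone script an operator might paste into that embedded shell, the snippet below gathers basic host details using only the Python standard library; it assumes nothing about Havoc's internal scripting API.

# Hypothetical recon snippet for an embedded Python shell; standard library only,
# no Havoc-specific API is assumed.
import getpass
import os
import platform
import socket

host_info = {
    "hostname": socket.gethostname(),                         # machine name
    "user": getpass.getuser(),                                # current user context
    "os": platform.platform(),                                # OS and build string
    "domain": os.environ.get("USERDOMAIN", "n/a"),            # Windows domain, if present
    "ips": socket.gethostbyname_ex(socket.gethostname())[2],  # local IPv4 addresses
}

for key, value in host_info.items():
    print(f"{key}: {value}")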
A very nice perk of FOSS offensive security tools is that we do not need to decompile a malicious sample to review the code and learn from it (unless the code has been altered by the threat actor). This allows us to identify hardcoded artifacts that are not easily configurable by the operator. These artifacts could, of course, change if the underlying code is modified. For example, upon execution of the demon on a test workstation, we observed that common artifacts such as the SSL subject and CA name changed with each new demon deployed. In instances where a domain was not used for the listener, the IPv4 address of the listener was observed in the SSL meta.
When reviewing the source code, we can see that potential values for the SSL cert attributes are pulled from hard coded lists.
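For defenders working outside of NetWitness, a minimal sketch of the same matching logic is below; the prefix and suffix word lists mirror the values used in the application rule later in this section, and the sample certificate strings are placeholders.

# Sketch: flag TLS certificate subject/issuer organization strings assembled from the
# hard-coded word lists. The sample_certs input is purely illustrative.
PREFIXES = ("ACME", "Partners", "Tech", "Cloud", "Synergy", "Test", "Debug")
SUFFIXES = ("co", "llc", "inc", "corp", "ltd")

def looks_like_havoc_cert(org_name: str) -> bool:
    """Return True when the name starts with a known prefix and ends with a known suffix."""
    if not org_name:
        return False
    return org_name.startswith(PREFIXES) and org_name.lower().endswith(SUFFIXES)

sample_certs = ["Synergy llc", "Microsoft Corporation", "ACME corp"]  # placeholder data
for cert in sample_certs:
    print(cert, "->", "suspicious" if looks_like_havoc_cert(cert) else "ok")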
Identifying these hardcoded artifacts allows us to write a simple rule to make our threat hunting or detection efforts more efficient. For example, the use of the following syntax as an application rule within NetWitness will provide an easy-to-hunt tag as well as a potentially high-fidelity IOC based alert.
direction = 'outbound' && service = 443 && (alias.host exists || alias.ip exists) && analysis.service = 'ssl certificate self-signed' && ((ssl.ca = 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug','co','llc','inc','corp','ltd' || ((ssl.ca begins 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug') && (ssl.ca ends 'co','llc','inc','corp','ltd'))) || (ssl.subject = 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug','co','llc','inc','corp','ltd' || ((ssl.subject begins 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug') && (ssl.subject ends 'co','llc','inc','corp','ltd'))))
We use an application rule here because of the length of the syntax. The query could simply be copied, pasted, and run directly, but deploying it as an application rule allows us to simplify our query. When the application rule triggers, it populates the IOC meta key. A query to use for hunting purposes would then be:
ioc = 'Potential Havoc Cert'
Below are two screenshots that show the above. The longer query was deployed as an application rule, then a new demon was deployed to ensure that new data was used when testing. The first screenshot is the original query that was used to help identify the initial Havoc C2 session.
The second image below shows the simplified query that is available once the above query is deployed as an application rule.
Notice how both queries return the same result. The application rule matches the syntax criteria in real-time at the time of parsing and tags the traffic in the IOC meta key. By then looking for the name of the rule in the IOC key, we are presented with the same dataset more efficiently.
JA3 (client) and JA3s (server) hash values are also commonly used as IOC artifacts for both detection and hunting purposes. For more information on JA3/S please check out this article. Several Havoc demons were deployed throughout our research/testing, and we observed a fair amount of consistency in hash pairs as seen below.
JA3:
"a0e9f5d64349fb13191bc781f81f42e1", "eb8ae4a0c5efb65457acc43110d9e808", "d8d445743755e83f7dd2cefca7ecf071", "b04c4b3953982bb08128336e5a68f2a9", "5f8f0a80171ea54702f6d196188263d9"
JA3S:
"326de7c6719a77bb7ef65f6cac962193", "475c9302dc42b2751db9edcac3b74891"
A query such as the following could be used to investigate for Havoc traffic.
ja3 = 'a0e9f5d64349fb13191bc781f81f42e1','eb8ae4a0c5efb65457acc43110d9e808','d8d445743755e83f7dd2cefca7ecf071','b04c4b3953982bb08128336e5a68f2a9','5f8f0a80171ea54702f6d196188263d9' && ja3s = '326de7c6719a77bb7ef65f6cac962193','475c9302dc42b2751db9edcac3b74891'
The results of the above query are shown below.
From a static file analysis perspective, we also observed that artifacts are available within the file's strings. Regardless of how the demon was compiled, we found that “demon.x64.exe” always appears in the strings. Other artifacts included the launch argument (e.g., notepad) and the listener host (domain or IP).
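A minimal sketch of that check is below; it extracts printable ASCII strings from a sample and looks for the “demon.x64.exe” marker. The file path is a placeholder, and additional indicators can be appended as they are identified.

# Sketch: pull printable ASCII strings from a sample and look for the hardcoded
# "demon.x64.exe" marker. The file path is a placeholder.
import re

INDICATORS = (b"demon.x64.exe",)

def ascii_strings(data: bytes, min_len: int = 6):
    """Yield printable ASCII runs of at least min_len bytes (similar to the strings utility)."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group()

with open("suspicious_sample.exe", "rb") as handle:
    blob = handle.read()

for candidate in ascii_strings(blob):
    if any(marker in candidate for marker in INDICATORS):
        print(candidate.decode("ascii", errors="replace"))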
IOC artifacts are valuable only for as long as they remain valid. While the artifacts in the sections above are high-fidelity today, they are prone to change in the future. As a result, it is crucial that a threat hunter be able to identify behavioral artifacts as well. We follow a process of data carving when hunting for suspicious activity within an environment. Data carving refers to the practice of removing data (events, sessions, etc.) from the set at hand so that you are left with the desired results. In our case of hunting, we want to carve known-good traffic out of our data set so that we are left with unknown or suspicious traffic. We continue this process until we run out of questions to ask ourselves or have adequately created a timeline of events. The keys to success when hunting for C2 traffic (or any technique that requires behavioral analysis) are effective contextual tags within your solutions and a strong sense of situational awareness. We need to understand what is good so that we can find what is bad. Typical C2 hunting methodology applies to Havoc as well. While the sleep timer of a Havoc demon is malleable and supports jitter, the payload size remains relatively consistent. Jitter refers to a concept of randomness designed to thwart efforts to recognize a pattern within the beacon activity.
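To make that idea concrete, the sketch below measures how regular the beacon cadence and session sizes are for a single source/destination pair despite jitter. The session records are placeholder data (timestamp in seconds, total session bytes), and the thresholds are illustrative assumptions rather than tuned values.

# Sketch: quantify how consistent beacon timing and payload sizes are for one
# source/destination pair. The sessions list is placeholder data.
from statistics import mean, pstdev

sessions = [(0, 4100), (62, 4080), (118, 4120), (181, 4090), (244, 4110)]  # (seconds, bytes)

timestamps = [t for t, _ in sessions]
sizes = [size for _, size in sessions]
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]

# Low relative deviation in interval and size, sustained across many sessions, is
# beacon-like even when jitter keeps the intervals from being identical.
interval_cv = pstdev(intervals) / mean(intervals)
size_cv = pstdev(sizes) / mean(sizes)
print(f"interval variation: {interval_cv:.2%}, size variation: {size_cv:.2%}")
if interval_cv < 0.25 and size_cv < 0.10 and len(sessions) >= 5:
    print("pattern is consistent with sleep/beacon behavior; investigate further")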
While jitter does a good job at evading automated beacon detection, it is still susceptible to a trained human eye. A good place to start when looking for C2 traffic is with the tcp source port (tcp.srcport key in NetWitness). Take a look at the image below and notice how there is a large block of mostly sequential source ports with consistent event counts. A block of sequential ports often stands out against other traffic.
By themselves, sequential source ports are not a sign of malice. I typically investigate a couple of ports at random to see if there is another artifact (such as a source or destination IP address) that is consistent across the traffic. In my first carve with this example below, we can see there is a lot of known-good traffic, such as Microsoft product callouts in the associated domains. However, the ‘ctwospoons’ domain stands out from that traffic.
My second carve tends to leverage the freshly identified artifact (in this case, a domain) to see if there is any broader traffic of interest associated with it. As observed below, when we drill down on the domain only, we can see the sequential pattern in the tcp source port. This generally means that there is a process that is going to sleep and waking up to perform some function. This is consistent with both some normal traffic as well as C2 traffic.
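If you want to surface these runs programmatically before eyeballing them, a rough sketch is below; the flow tuples are placeholder records, and the run-length and gap thresholds are assumptions you would tune to your environment.

# Sketch: flag destinations that receive long runs of nearly sequential client source ports.
# The flows list is placeholder (source IP, source port, destination host) data.
from collections import defaultdict

flows = [
    ("10.0.0.5", 49812, "ctwospoons.example"),
    ("10.0.0.5", 49813, "ctwospoons.example"),
    ("10.0.0.5", 49815, "ctwospoons.example"),
    ("10.0.0.5", 49816, "ctwospoons.example"),
    ("10.0.0.5", 51002, "update.microsoft.com"),
]

ports_by_pair = defaultdict(list)
for src_ip, src_port, dest in flows:
    ports_by_pair[(src_ip, dest)].append(src_port)

for (src_ip, dest), ports in ports_by_pair.items():
    ports.sort()
    # Count consecutive ports that differ by 3 or less (allowing small gaps in the run).
    run = sum(1 for a, b in zip(ports, ports[1:]) if b - a <= 3)
    if run >= 3:
        print(f"{src_ip} -> {dest}: {run + 1} near-sequential source ports {ports}")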
At this point I generally switch my view to be more fitting for observing a pattern in chronological order.
HINT: In whatever solution you are using to hunt, it can be very valuable to have the ability to organize the data in a way that is meaningful to you. Personally, I like to organize data from left to right as follows to look for patterns of potential C2.
Protocol – IP Source – TCP Source Port – IP Destination – TCP Destination Port – Size – Domain(s)
An example is below.
In this example there are three columns of interest for identifying potential patterns of C2. These are collection time, tcp source port, and size. Due to the jitter function in Havoc, it will be more difficult to identify a pattern with time alone. If you are a visual person, it may be valuable to pull collection time out and graph it to look for patterns more easily. In NetWitness, this could also be done via the timeline feature as we can see there is some consistency in the times.
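A rough sketch of that visual approach is below; it plots the gaps between collection times so a jittered but still regular beacon stands out as bounded noise around a baseline. The collection times are placeholder values exported from the sessions under review.

# Sketch: plot the gaps between collection times to help spot a jittered beacon visually.
# The collection_times list is placeholder data (epoch seconds or relative seconds).
import matplotlib.pyplot as plt

collection_times = [0, 61, 119, 183, 240, 305, 362]  # placeholder data
gaps = [b - a for a, b in zip(collection_times, collection_times[1:])]

plt.plot(range(1, len(gaps) + 1), gaps, marker="o")
plt.xlabel("beacon number")
plt.ylabel("seconds since previous beacon")
plt.title("Inter-beacon timing")
plt.show()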
The next columns I personally review are tcp source port and size. We have already observed the pattern with sequential source ports. Moving on to size, we can see the session sizes are very consistent. In our research of Havoc, we found that beacon traffic was consistently between 3 and 6 KB. Meanwhile, when we interacted with the demon from the Havoc Team Server, the session sizes grew significantly. At this point, it is beneficial to review other attributes of the traffic that may help us determine whether we are looking at genuinely malicious beaconing activity. We can start by applying threat and business intelligence to our hunt. Some example questions to ask yourself may be:
Answering these questions can help determine what level of risk the observed behavior poses. For example, we may look at the traffic with more suspicion depending on what function it serves. We can apply our research above to the hunt by reviewing the sessions for IOC artifacts. Behavioral + IOC identification is generally very high-fidelity.
In NetWitness Platform version 12.3+, application rules are also capable of generating alerts. This can be easily achieved by simply selecting ‘create future alert’ from the query bar. This feature generates both a meta based alert to simplify future queries as well as an alert in the Respond module. Most of the fields in the prompt that follows are automatically filled out (but editable in case you need to make any changes).
The following image reflects an Incident that was created when the rule noted above triggered.
While reviewing the code for Havoc, we noticed that some commands for the demon were tagged with corresponding MITRE ATT&CK IDs. For red and blue teams, these tags are useful for conducting controlled tests of defenses that have been mapped back to MITRE. An example is seen below (sourced from https://github.com/HavocFramework/Havoc/blob/c393115fa1714748f368aff97e55da4aa81f5c56/client/Source/Havoc/Demon/Commands.cpp#L24).
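If you want to turn those tags into a coverage list, a small sketch is below; it assumes a local clone of the repository and simply extracts technique IDs from the command table referenced above.

# Sketch: extract MITRE technique IDs (e.g., T1059) tagged in Havoc's client command table
# so they can be mapped against existing detections. Assumes a local clone of the repo.
import re
from collections import Counter

path = "Havoc/client/Source/Havoc/Demon/Commands.cpp"
with open(path, encoding="utf-8", errors="replace") as handle:
    source = handle.read()

technique_ids = re.findall(r"T\d{4}(?:\.\d{3})?", source)
for technique, count in Counter(technique_ids).most_common():
    print(f"{technique}: {count} occurrence(s)")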
To recap, the following queries can be used to hunt for Havoc C2 traffic:
direction = 'outbound' && service = 443 && (alias.host exists || alias.ip exists) && analysis.service = 'ssl certificate self-signed' && ((ssl.ca = 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug','co','llc','inc','corp','ltd' || ((ssl.ca begins 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug') && (ssl.ca ends 'co','llc','inc','corp','ltd'))) || (ssl.subject = 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug','co','llc','inc','corp','ltd' || ((ssl.subject begins 'ACME','Partners','Tech','Cloud','Synergy','Test','Debug') && (ssl.subject ends 'co','llc','inc','corp','ltd'))))
ioc = 'Potential Havoc Cert' (if the above query is deployed as an application rule)
ja3 = 'a0e9f5d64349fb13191bc781f81f42e1','eb8ae4a0c5efb65457acc43110d9e808','d8d445743755e83f7dd2cefca7ecf071','b04c4b3953982bb08128336e5a68f2a9','5f8f0a80171ea54702f6d196188263d9' && ja3s = '326de7c6719a77bb7ef65f6cac962193','475c9302dc42b2751db9edcac3b74891'