The best way to test whether a door is locked is to try it. Similarly, one of the best ways to test your computer security is to try to breach its defences using known tools. These are referred to as ‘grey hat’ tools, and they are used legitimately by penetration testers and red teams to launch attacks with benign payloads against computers and networks. The majority of grey hat tools are licensed, open source software, published and freely available on community source control repositories such as GitHub and SourceForge.
Their features and functionality are largely indistinguishable from those of malware. Many grey hat tools are designed to obfuscate a given executable, shellcode, or scripting language payload for the purposes of evading detection by anti-virus software. Others provide library code containing common exploitation methods, keylogging, anti-debugging techniques, or code to detect the presence of a sandbox, virtual environment, or instruction emulation. Some simply facilitate communication with a command and control server by providing a framework for client-server communication with a given attack target.
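To make one of those capabilities concrete, the sketch below shows the kind of trivial sandbox/virtual-environment check such library code might bundle. It is a minimal illustration in Python; the thresholds and driver file names are assumptions chosen for clarity, not indicators taken from any particular tool.

```python
# Minimal, hypothetical sketch of an environment check a grey hat library
# might offer. Thresholds and artifact paths are illustrative assumptions.
import os
from pathlib import Path

# Guest driver files commonly left behind by popular hypervisors (illustrative list).
VM_ARTIFACTS = [
    Path("C:/Windows/System32/drivers/VBoxGuest.sys"),   # VirtualBox
    Path("C:/Windows/System32/drivers/vmhgfs.sys"),      # VMware
]

def looks_like_sandbox() -> bool:
    """Return True if the host shows signs of being a VM or analysis sandbox."""
    # Analysis sandboxes are often provisioned with very few CPU cores.
    if (os.cpu_count() or 0) < 2:
        return True
    # Presence of hypervisor guest drivers suggests a virtual machine.
    return any(p.exists() for p in VM_ARTIFACTS)

if __name__ == "__main__":
    print("Possible sandbox/VM" if looks_like_sandbox() else "Looks like bare metal")
```

The same handful of checks appears, in one form or another, in countless open source kits, which is precisely why defenders can learn to recognise them.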
Unlike malware, however, these grey hat tools are known quantities to defenders, as are the techniques they use to test defences. Because they are legitimate tools, their appearance on a network may not automatically trigger concern. With the increasing popularity of living-off-the-land style attacks, there is growing scrutiny of how and where these tools are used in the penetration testing process, to ensure they cannot be abused by a real attacker.
Unfortunately, there is no way to prevent anyone with nefarious intentions from using these legitimate kits to produce, deliver, or enable a malicious attack, and many attackers make the most of this. However, not all attackers abuse grey hat tools in the same way.
Grey hat tools and the cybercriminal pyramid
The cyberthreat landscape is a complex ecosystem, and the malware community resembles a kind of pyramid. At the top are the apex predators: the advanced persistent threats, or APTs, which are highly skilled, well resourced, and often nation-state funded. At the bottom are the vast numbers of ‘script kiddies’, unskilled attackers with few resources, out to make a quick buck by hiring or leveraging the tools of others. In the middle are the operators who have the skills needed to modify some, but not all, tools, and moderate resources with which to mount attacks. Grey hat tools are used and abused by all of them.
Entry-level and mediocre operators may use grey hat tools “out of the box,” unchanged. The advantage to a novice or poorly resourced attacker is that the tool abstracts away some of the more difficult aspects of the job, allowing them to pull off a more complex attack than they could otherwise manage.
However, more creative malware authors will often modify a given grey hat tool further, in an attempt to expand or customise its capabilities or make it harder for security software to detect. Multiple miner botnets, including Kingminer, Lemon Duck, and Wannaminer, among others, take this approach.
Spot the difference
There is some good news: when the more entry-level attackers decide to leverage such tools, it can actually make life easier for defenders. Because the tools are open source, defenders can see exactly what a specific tool is capable of, and there is often little variation in the techniques involved if the tool is used as-is. This makes it easier to focus protection on the attack a given tool can generate out of the box.
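As an illustration of focusing on out-of-the-box behaviour, a defender might codify a tool’s default artifacts, such as its default named-pipe name or HTTP User-Agent string, into a simple scan. The sketch below shows the idea; the indicator strings and file scan are hypothetical placeholders, not real indicators of compromise or any vendor’s detection logic.

```python
# Hypothetical sketch: scanning files for the default, out-of-the-box artifacts
# of an open source tool. The indicator strings are placeholders, not real IOCs.
from pathlib import Path

DEFAULT_ARTIFACTS = {
    b"ExampleTool_default_pipe",      # default named-pipe name (placeholder)
    b"ExampleTool/1.0 (compatible)",  # default HTTP User-Agent (placeholder)
}

def scan_file(path: Path) -> list[str]:
    """Return the default artifacts found in a file, if any."""
    data = path.read_bytes()
    return [ioc.decode() for ioc in DEFAULT_ARTIFACTS if ioc in data]

def scan_tree(root: Path) -> None:
    """Walk a directory tree and report files containing default artifacts."""
    for path in root.rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                print(f"{path}: matched {hits}")

if __name__ == "__main__":
    scan_tree(Path("."))
```

Such static matching only works against attackers who never touch the defaults, which is exactly why the more skilled operators modify the tools first.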
The fact remains that, for cybersecurity professionals, the major disadvantage of open source grey hat tools is that they are often used in legitimate scenarios, and it is not immediately obvious when they are being used for malicious purposes. One way to address this challenge is through application security-based detections, although this involves relying on end users, such as corporate IT teams, to do some of the work. Another way is to introduce a human-led threat hunting service. Known as Managed Detection and Response (MDR), this complements the advanced behaviour-based algorithms of security software with skilled humans who perform a nuanced analysis that takes into account the context of a tool’s behaviour. If you can identify an adversary based on their goals and the behaviours, tactics, and techniques seen while they try to achieve those goals, then the specific tool they use matters far less, if at all.
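To illustrate the kind of context a human analyst weighs, here is a hypothetical triage sketch that escalates a grey hat tool detection only when it falls outside an authorised testing window or appears on an unexpected host or account. The field names, rules, and example values are assumptions made for illustration; they are not part of any vendor’s product or process.

```python
# Hypothetical triage sketch: combining a tool detection with business context,
# the way a human-led threat hunt might. Field names and rules are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Detection:
    tool: str            # name of the grey hat framework detected
    host: str            # machine the tool was seen on
    user: str            # account that launched it
    seen_at: datetime

@dataclass
class PentestWindow:
    start: datetime
    end: datetime
    authorised_hosts: set
    authorised_users: set

def triage(d: Detection, window: Optional[PentestWindow]) -> str:
    """Return 'benign', 'investigate', or 'escalate' based on context."""
    if window is None:
        return "escalate"          # no approved testing is scheduled at all
    in_window = window.start <= d.seen_at <= window.end
    expected = d.host in window.authorised_hosts and d.user in window.authorised_users
    if in_window and expected:
        return "benign"            # matches the approved engagement
    if in_window:
        return "investigate"       # right time, but wrong host or account
    return "escalate"              # tool seen outside any approved window

if __name__ == "__main__":
    window = PentestWindow(
        start=datetime(2021, 6, 1, 9, 0),
        end=datetime(2021, 6, 4, 18, 0),
        authorised_hosts={"pentest-vm-01"},
        authorised_users={"redteam.svc"},
    )
    hit = Detection("ExampleC2Framework", "finance-laptop-17", "jsmith",
                    datetime(2021, 6, 2, 23, 30))
    print(triage(hit, window))     # prints "investigate": right time, wrong host
```

The point is not the specific rules, which any real service would tune far more carefully, but that the same tool detection can mean very different things depending on who ran it, where, and when.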