Is whitelisting a panacea?

Whitelisting is an important security control and is considered so effective against targeted attacks that the Australian DSD rated it one of the four most important controls. By allowing only approved processes and DLLs to load, whitelisting can significantly raise the bar for attackers. Additionally, if blocked executions are investigated, they can often provide early warning of an attack.
However, whitelisting does not prevent attacks; it just makes them harder. Organisations that may be targeted by more advanced actors should treat whitelisting as one component of a mature set of security controls that, in combination, provide much stronger holistic protection than whitelisting would alone.

A number of actors have been observed using strategies that bypass the whitelisting in place, and as whitelisting grows in popularity this will only become more common.

Whitelisting bypasses

Memory-resident exploits

Many attacks exploit web browsers or common document-editing software, such as Microsoft Office or Adobe Reader, in order to gain code execution inside the exploited process. At this point, more common attacks will use that code execution to download full-featured malware as a binary executable to run on the system. Whitelisting would prevent this, as the downloaded malware would not be allowed to run.

However, more advanced attackers will not download a binary directly and may remain fully memory-resident within the memory space of the exploited process. To avoid losing control when the targeted software exits, the memory-resident malware may migrate to the memory space of another approved process that is expected to remain running for as long as the system is powered on, such as explorer.exe.

Alternatively, a new approved process may be launched specifically to host the memory-resident malware. This technique is known as “process hollowing” and involves launching a process permitted by the whitelist and replacing its executable code in memory with malicious code. Recently, “Duqu 2” used this technique to make it appear that legitimate security software was running and active, whilst in reality rendering it inactive and simultaneously using it to host Duqu’s own malicious code.
Whilst these techniques may sound advanced, many common malware families make use of them, and even freely available penetration-testing frameworks like Metasploit have supported fully memory-resident operation for years.

Privilege escalation and kernel exploits

Whitelists are only strongly enforceable against users who do not have local administrative access to their systems because administrative access can be used to disable the protection, add exceptions or otherwise render it ineffective. An attacker who exploits an administrative user could make configuration changes such that any further malware they wanted to use would be permitted.
However, this technique is not limited to administrative users. Privilege escalation can be used as a second-stage attack to get around whitelisting. For example, our security consultants used a kernel exploit as part of a Chrome browser exploit at Pwn2Own to both gain remote code execution and escalate privileges, breaking out of the browser sandbox and gaining administrative-level access. At that point, whitelisting protection could be disabled.

Poorly configured whitelists

Whitelists are a good control, but not all whitelists are created equal. Each organisation will set up a whitelist that suits its working habits, and in MWR's experience there are often holes in the configuration that an attacker can exploit to gain persistence. Common flaws include:

• Validating only the program name – this can easily be circumvented by renaming
• No validation of DLL loads – common tools such as rundll32.exe can be used to load executable content as DLLs instead
• Writable paths – path rules are commonly used to allow execution, and if the permitted paths are writable the whitelist can be bypassed by writing malicious executable content to them

By identifying such a flaw, an attacker can compromise a system with whitelisting in place.
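
The writable-paths flaw in particular is easy to audit for. As a rough illustration, the short Python sketch below checks whether the directories named in a set of path rules are writable by the current user; the rules shown are hypothetical.

import os

def audit_path_rules(rule_paths):
    # Flag whitelist path rules that the current user could write to;
    # a writable allowed path lets an attacker drop a binary that the
    # whitelist will happily execute.
    # NOTE: os.access is only a rough check on Windows (it ignores
    # ACLs); a thorough audit would inspect the ACLs themselves.
    return [p for p in rule_paths
            if os.path.isdir(p) and os.access(p, os.W_OK)]

# Hypothetical path rules; in practice these would be exported from
# the whitelisting product's policy.
rules = [r"C:\Program Files", r"C:\Temp\Tools"]
for path in audit_path_rules(rules):
    print("[!] Writable allowed path:", path)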

Scripting and bytecode engines

Many whitelisting tools monitor native binary loads but do not appropriately account for other ways of executing arbitrary code. Chief among these are scripting and bytecode engines, present by default or commonly installed by enterprises, which can be used to achieve arbitrary code execution. Common examples include:

• Java
• PowerShell
• Office Macros
• VBScript
• Batch files
• InstallUtil bypass 

Our security consultants have successfully exploited environments with whitelisting in place using some of these technologies. Many whitelisting solutions cannot directly control them other than by fully disabling the entire scripting engine, which is often not viable when the technology is required for business applications.
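
Where these engines cannot be disabled, their use can at least be watched. The Python sketch below scans exported process-creation events for the engines listed above; the file name and the 'image'/'command_line' field names are assumptions about how the logs were exported (for example, Sysmon process-creation events converted to JSON lines).

import json

# Binaries commonly abused to execute code despite whitelisting.
SUSPECT_IMAGES = {
    "powershell.exe", "wscript.exe", "cscript.exe",
    "rundll32.exe", "installutil.exe", "java.exe",
}

def scan(log_path):
    # Yield process-creation events that involve a suspect engine.
    with open(log_path) as fh:
        for line in fh:
            event = json.loads(line)
            image = event.get("image", "").lower().rsplit("\\", 1)[-1]
            if image in SUSPECT_IMAGES:
                yield event

for event in scan("process_events.jsonl"):  # hypothetical export
    print(event["image"], "->", event.get("command_line", ""))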

Attacking hosts not subject to whitelisting

A simple bypass relies on the fact that not all hosts in an environment will be running whitelisting. Whilst whitelisting might have been rolled out to the majority of corporate desktops, servers, Linux desktops and OS X hosts may not be whitelisted so aggressively. By targeting these hosts, attackers can gain the foothold on the network they need. For example, by targeting a Linux-hosted web application or a marketing user with a MacBook, whitelisting solutions focused on Windows hosts may be avoided completely.
 
Similarly, not all user profiles are subject to whitelisting restrictions. Typically administrators, and sometimes developers, are able to run arbitrary executable code, so whitelisting can often be avoided by targeting these users. They also often have the highest privileges on the network, making them particularly attractive targets for an attacker anyway.

Mitigations

Secure configuration

Organisations should ensure that whitelisting is robustly and aggressively configured so that there are no obvious gaps an attacker can exploit to plant malicious code on a system. This includes ensuring whitelisting is present on all hosts that an attacker can reach, regardless of operating system.
This should include tight control of scripting and bytecode engines, so that they do not represent a generic way to bypass the whitelisting controls in place. In many cases most users will not require technologies like PowerShell, and configuration controls exist for restricting the execution of VBScript and Office macros. For technologies that are more problematic to control directly, such as Java, centralised logging of process execution can help detect malicious use.
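
One useful heuristic over such process logs is parent-child analysis: a Java application server has little reason to spawn a shell, so when java.exe appears as the parent of cmd.exe or powershell.exe it deserves a look. A minimal sketch, again assuming JSON-lines events with hypothetical 'image' and 'parent_image' fields:

import json

SHELLS = {"cmd.exe", "powershell.exe", "sh", "bash"}

def basename(path):
    return path.replace("\\", "/").rsplit("/", 1)[-1].lower()

def java_spawned_shells(log_path):
    # Yield events where a shell was launched by java.exe - often the
    # result of an exploited deserialisation or file-upload flaw.
    with open(log_path) as fh:
        for line in fh:
            event = json.loads(line)
            if (basename(event.get("parent_image", "")) == "java.exe"
                    and basename(event.get("image", "")) in SHELLS):
                yield event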

Other whitelisting policy violations should be logged and exported to SIEM infrastructure so that they can be investigated by security analysts, as they may be indicators of a compromised system or a failed attack.
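
With AppLocker, for instance, blocked EXE/DLL executions are written to a dedicated event channel (event ID 8004 in the "EXE and DLL" channel). A rough sketch of pulling those events with wevtutil and relaying them to a syslog-speaking SIEM follows; the SIEM address is hypothetical and the text parsing is deliberately crude.

import logging
import logging.handlers
import subprocess

SIEM_HOST = ("siem.example.org", 514)  # hypothetical collector

def recent_applocker_blocks(count=50):
    # Query the AppLocker EXE/DLL channel for recent block events.
    out = subprocess.run(
        ["wevtutil", "qe", "Microsoft-Windows-AppLocker/EXE and DLL",
         "/q:*[System[(EventID=8004)]]", "/c:%d" % count,
         "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True)
    return out.stdout

def forward_to_siem(text):
    log = logging.getLogger("applocker")
    log.addHandler(logging.handlers.SysLogHandler(address=SIEM_HOST))
    log.setLevel(logging.WARNING)
    # Crude split on wevtutil's "Event[n]:" record headers.
    for record in filter(None, text.split("Event[")):
        log.warning("AppLocker block: %s", record.strip()[:800])

forward_to_siem(recent_applocker_blocks())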

Network defences

The vast majority of malware depends on the network for command and control and data exfiltration. Robust network monitoring that can detect malicious channels is therefore important, especially in environments that may be targeted by malware above commodity level.

Network monitoring should benefit from, but not rely on, signatures and indicators of compromise (IOCs), as more advanced attackers can typically evade these with ease. A historical investigation capability is also important: in many cases organisations only learn about a compromise by a more advanced actor months or even years after the initial intrusion. The ability to investigate the historic activity of compromised hosts, to understand the full extent of the compromise, is crucial.
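
One signature-free technique that pays off here is beacon detection: command-and-control implants that call home on a fixed timer produce connection timestamps far more regular than human browsing. A small sketch over parsed flow records, where each record is a (src, dst, timestamp) tuple and the thresholds are illustrative:

import statistics
from collections import defaultdict

def find_beacons(flows, min_events=10, max_jitter=0.1):
    # Group flows by (src, dst) and flag pairs whose inter-arrival
    # times have very low relative deviation - i.e. a steady beat.
    times = defaultdict(list)
    for src, dst, ts in flows:
        times[(src, dst)].append(ts)
    for pair, ts_list in times.items():
        if len(ts_list) < min_events:
            continue
        ts_list.sort()
        gaps = [b - a for a, b in zip(ts_list, ts_list[1:])]
        mean = statistics.mean(gaps)
        if mean > 0 and statistics.pstdev(gaps) / mean < max_jitter:
            yield pair, mean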

Endpoint threat detection and response

Whitelisting is primarily an endpoint-focused preventative control, so it is also important that strong detection controls are in place on endpoints. Standard logging in Windows and Linux provides valuable information to investigators, but much more advanced logging and analysis is required to detect more advanced threats.
For example, detecting the use of process hollowing or thread injection to bypass whitelisting requires detailed process-execution logging and live memory analysis. Dedicated endpoint threat detection and response software will generally be required to achieve this.
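
To give a flavour of what such memory analysis involves, the Windows-only Python sketch below walks a process's address space with VirtualQueryEx and reports committed private memory marked read-write-execute - a classic artefact of injected code, since legitimate processes (JIT runtimes aside) rarely need it. This is a simplified illustration, not a substitute for a real EDR product.

import ctypes
import ctypes.wintypes as wt

MEM_COMMIT = 0x1000
MEM_PRIVATE = 0x20000
PAGE_EXECUTE_READWRITE = 0x40
PROCESS_QUERY_INFORMATION = 0x0400

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wt.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wt.DWORD),
                ("Protect", wt.DWORD),
                ("Type", wt.DWORD)]

def rwx_private_regions(pid):
    # Yield (base, size) for committed, private, RWX regions - memory
    # that is executable yet backed by no file on disk.
    k32 = ctypes.windll.kernel32
    k32.OpenProcess.restype = wt.HANDLE
    k32.VirtualQueryEx.argtypes = [wt.HANDLE, ctypes.c_void_p,
                                   ctypes.c_void_p, ctypes.c_size_t]
    k32.VirtualQueryEx.restype = ctypes.c_size_t
    handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise OSError("cannot open pid %d" % pid)
    mbi = MEMORY_BASIC_INFORMATION()
    addr = 0
    while k32.VirtualQueryEx(handle, ctypes.c_void_p(addr),
                             ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if (mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE
                and mbi.Protect == PAGE_EXECUTE_READWRITE):
            yield mbi.BaseAddress, mbi.RegionSize
        addr = (mbi.BaseAddress or 0) + mbi.RegionSize
    k32.CloseHandle(handle)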

Prevention alone is not enough – attack detection is required

Application whitelisting is a great preventative security control and is arguably the most effective first line of defence against initial endpoint compromise. However, no security control can protect against every attack and so it does not replace the need for good attack detection for when preventative controls fail.
A large enterprise without strong whitelisting controls is likely to have endpoints regularly compromised by generic malware and adware that anti-virus misses. In that noise, an advanced attack is likely to slip under the radar with ease.

Whitelisting can help reduce the noise of common malware infections: if a confirmed compromise is detected on a well-patched endpoint with strong application whitelisting in place, it is immediately cause for a stringent investigation, as it is much more likely to be the result of an advanced, targeted attack. Additionally, with less noise from common malware infections, far more resource is available for investigation.

The actions taken by a network intruder manifest themselves in many varied ways, and forensic evidence can be littered throughout the environment. Much of that evidence is contained in the log files of various systems.

Log files may be available that reveal every phase of an attack, from the initial delivery of a phishing email through to the exfiltration of sensitive data.

Consequently, log analysis comes with its own set of problems to be solved:

What sources of log data are available?
How best to aggregate the data?
How to query the data?

Potentially useful sources of data include email systems, DNS servers, DHCP servers, VPN logs, syslog data, routers, firewalls, servers and Windows event logs – all of which come in different formats, so they may not be easy to correlate. A good log analysis platform can overcome these problems and give intrusion analysts access to the data they need to do their job. Having aggregated this data in a suitable analysis system, it becomes possible to see the timeline of an attack as it develops – assuming you know what to look for.
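
The core of the aggregation problem is normalisation: give every source its own parser that emits a common (timestamp, source, summary) record, then merge and sort. A toy Python sketch, with entirely hypothetical file names and record formats:

import csv
from datetime import datetime

def parse_vpn(row):
    # e.g. "2015-06-01 03:12:44,jsmith,login,203.0.113.7"
    ts = datetime.strptime(row[0], "%Y-%m-%d %H:%M:%S")
    return ts, "vpn", "%s %s from %s" % (row[1], row[2], row[3])

def parse_dns(row):
    # e.g. "01/Jun/2015:03:12:50,workstation42,evil.example.net"
    ts = datetime.strptime(row[0], "%d/%b/%Y:%H:%M:%S")
    return ts, "dns", "%s looked up %s" % (row[1], row[2])

def timeline(sources):
    # Merge (path, parser) pairs into one time-ordered event list.
    events = []
    for path, parser in sources:
        with open(path) as fh:
            events.extend(parser(row) for row in csv.reader(fh))
    return sorted(events, key=lambda e: e[0])

for ts, source, summary in timeline([("vpn.csv", parse_vpn),
                                     ("dns.csv", parse_dns)]):
    print(ts.isoformat(), "[%s]" % source, summary)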

The best intrusion analysts know the mindset of the attacker and know his methods; this helps them hunt the attacker down. Knowledge of how the attacker will develop his compromise and pursue his objectives, and of the evidence this will leave in log files, is fundamental to uncovering new threats in your environment.

Attack detection is a historically ineffective field. We're all used to a signature-based anti-virus alert. We’re also used to ignoring an un-tuned IDS. These existing signature-based tools are great for detecting known attacks; when configured correctly, they should definitely form part of your arsenal.

However, countless demonstrations exist showing how easy it is to bypass systems that work in this way. If an attacker slightly modifies their attack, it no longer matches the signature and no alert is raised – so clearly something else is needed.

Newer, more effective approaches include heuristic and anomaly-based detection, which establish what is "normal" and alert on deviations. Assuming we define "normal" effectively, this is a brilliant next step to take. We are no longer relying on an attacker sending the exact payload we are expecting; now we are watching for them doing something we are not expecting. But this is not easy to do, and more importantly the discussion of attack detection tends to focus on tools and appliances. Frequently, when faced with difficult problems, organisations purchase a new tool or system to solve them. There are many reasons why businesses go down this route; one is that appliances are capital expenditure: they don't need an HR department, and if they don't work you can quietly put them in the bin without anyone noticing.

In addition, tools need to keep their false-positive rate low to be useful, so they may disregard real threats in favour of keeping that rate down – whereas experienced humans can quickly assess whether suspicious activity is worth an alert and further investigation.

Although tools can make life difficult for an attacker and can provide preventative value, as of 2015, the most advanced detection and prevention tools in the world are by themselves no match for an intelligent human with the right capabilities and resources at their fingertips.

Therefore the only effective way to detect and respond to attacks is by having the right people, with the right mindset.

But again, we should assume our protections aren’t bullet-proof; what if, somehow, an attacker obtains ‘legitimate’ access? If mail or remote access credentials are obtained, no amount of AV or IDS will detect attacker activity; they are now using the system for what it was designed to do. There is no anomaly in the network traffic or malware on the system.

In an advanced attack, it is often a primary goal for the attacker to obtain such legitimate access to stay under the radar.

When we are faced with detecting this attacker, we can examine logs for suspicious activity. Much like heuristic detection, we are looking for deviations from the norm. If one of our UK users logs in to the VPN at 3am from another country, it is not a confirmed attack, but it is worthy of further investigation.
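
That rule is simple enough to sketch. The Python below builds a per-user baseline of login hours and countries from a period believed to be clean, then scores new logins against it; the values are hypothetical, and a production system would want a richer notion of "normal".

from collections import defaultdict

def build_baseline(history):
    # history: iterable of (user, hour, country) from clean data.
    baseline = defaultdict(lambda: {"hours": set(), "countries": set()})
    for user, hour, country in history:
        baseline[user]["hours"].add(hour)
        baseline[user]["countries"].add(country)
    return baseline

def check(event, baseline):
    # Return a reason string if the login deviates, else None.
    user, hour, country = event
    profile = baseline.get(user)
    if profile is None:
        return "first login seen for user"
    reasons = []
    if hour not in profile["hours"]:
        reasons.append("unusual hour %02d:00" % hour)
    if country not in profile["countries"]:
        reasons.append("unusual country %s" % country)
    return ", ".join(reasons) or None

baseline = build_baseline([("alice", 9, "GB"), ("alice", 14, "GB")])
print(check(("alice", 3, "RU"), baseline))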

This approach to detection can be enhanced when multiple log sources are combined. What if the same user has logged in to the VPN from the UK during business hours? That event alone is expected behaviour and not worthy of an alert. If, however, we are also examining our port-authentication logs and can see that the same user's laptop is physically plugged in to the network connection on their desk, the VPN access becomes a potentially suspicious event.

Or it may not be – it depends on how your business operates. However, by exploring and simulating attack scenarios we can verify and extend our existing visibility and practise our response, so that if an attack does happen we can have increased confidence that it will be caught and dealt with.
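
The correlation itself is trivial once both sources are in one place. A sketch of the VPN-versus-wired-port check, assuming VPN events as (user, timestamp) tuples and port-authentication data as per-user lists of (start, end) wired sessions, all with hypothetical values:

def suspicious_vpn_logins(vpn_events, port_auth_sessions):
    # Flag VPN logins that fall inside an active wired session for
    # the same user - they cannot normally be in both places at once.
    for user, ts in vpn_events:
        for start, end in port_auth_sessions.get(user, []):
            if start <= ts <= end:
                yield user, ts
                break

sessions = {"jsmith": [(1000, 5000)]}        # wired session, epoch secs
vpn = [("jsmith", 3000), ("jsmith", 9000)]
print(list(suspicious_vpn_logins(vpn, sessions)))  # [('jsmith', 3000)]

Neither this rule nor the baseline above confirms an attack on its own; both simply put the right events in front of a human analyst – which is rather the point.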