Five Questions to Ask When You’ve Been Compromised

1. Which systems and data were affected?
2. How did they do it?
3. Who did this to us?
4. Is it really over?
5. Can it happen again?

Modern security analytics go far beyond looking at attack patterns. They are responsible for indexing, classifying, analyzing, reporting, alerting, augmenting with outside threat data and escalating suspect content to malware analysis tools. Regardless of how robust an organization’s defenses are, an intrusion is likely to happen. When it does, security teams need to answer five key questions (and the sooner, the better). Simply capturing sample packet data isn’t enough; you must be able to capture and analyze packets at wire speed. With more and more networks operating at high speeds, requirements of 2Gbps to 5Gbps are relatively common, and speeds of 10Gbps are not unheard of. Suspect data must then be moved out of line for deeper inspection. All of this must be automated and conducted at speeds that do not interfere with business operations and productivity.

Which Systems and Data Were Affected?

Answering the question of which systems were affected can be as easy as looking for command-and-control (C2) and exfiltration traffic between internal systems and the attacker’s servers. However, if attackers aggregate C2 and exfiltration data out through a few hosts, simply looking for connections to C2 servers may not find all compromised hosts. Traffic analysis can help, but detecting traffic generated by, or traveling to, compromised hosts is only a start. Some packet capture and analysis systems are deployed at the network perimeter and do not record communications between internal hosts. Monitoring and inspecting traffic between hosts is also important, given that C2 servers can be moved around inside an organization to avoid detection.
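
To make the relay problem concrete, here is a minimal sketch, under assumed data shapes: flow records reduced to source/destination pairs and a list of suspected C2 addresses (all values below are illustrative). Starting from hosts that talk to the known external C2 server, it repeatedly treats newly implicated hosts as possible internal relays until the set stops growing, which is exactly what perimeter-only capture cannot do.

```python
# Hypothetical sketch: find every internal host that communicated with a
# suspected C2 server, including hosts reached only via internal relays.
# Flow-record fields and addresses are illustrative assumptions.

def hosts_talking_to(flows, suspects):
    """Return the set of source hosts with any flow to a suspect host."""
    return {f["src"] for f in flows if f["dst"] in suspects}

flows = [
    {"src": "10.0.1.5",  "dst": "203.0.113.7"},   # internal -> external C2
    {"src": "10.0.2.9",  "dst": "10.0.1.5"},      # internal -> internal relay
    {"src": "10.0.3.14", "dst": "10.0.9.9"},      # benign internal traffic
]

c2 = {"203.0.113.7"}                  # known attacker infrastructure
compromised = hosts_talking_to(flows, c2)

# Treat compromised hosts as potential relays and expand the set until it
# stops growing (a transitive closure over the captured flows).
while True:
    wider = hosts_talking_to(flows, c2 | compromised)
    if wider <= compromised:
        break
    compromised |= wider

print(sorted(compromised))    # directly and indirectly implicated hosts
```

The second pass is the point: 10.0.2.9 never contacts the external C2 server, so a perimeter-only view would miss it entirely.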

After a vulnerability is made public, analyzing previously captured network traffic can also reveal which machines were exploited before the disclosure and are under the control of an attacker. Researchers recently revealed that this activity is being undertaken by nation-states.(1) If this technique works at international scale, it can certainly work for your organization.

EXPERT Advice: Monitoring and inspecting traffic between hosts is important, given that C2 servers can be moved around inside an organization to avoid detection.

How Did They Do It?

The question of how an attack was perpetrated is always significant. Consider a vulnerability such as Heartbleed. Suppose your servers were exposed to the Internet, unpatched, for a few days after Heartbleed was announced. Were they attacked? Do you know? Heartbleed didn’t trip IDS signatures or leave traces in application logs.

The only way to know whether you suffered an attack is by using network capture data. By looking inside the decrypted traffic, analysts can identify whether their systems were attacked with the Heartbleed exploit and, more importantly, whether the server’s encryption keys (or other private data) were revealed. Finding this data in the sea of traffic is easier than it sounds: exploits frequently make a lot of noise before they are successful; one researcher required more than 100,000 requests to compromise a server’s private key.(2)

The Heartbleed exploit is a key case where intelligent analysis is critical. Detecting it places several demands on the analysis system; for example:

  • You need a system that can locate network traffic flows that use SSL.
  • Ideally, the system should allow analysts to craft new analytics without having to wait for the developers to update protections. After all, no one cared about the DTLS Heartbeat field length (exploited by Heartbleed) until the vulnerability came to light.
  • Analysts who wish to analyze this traffic would have to create custom decoders for the packet data, which is a feasible option only if their tools support it.
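
As an illustration of what such a custom decoder might look like, here is a minimal sketch, not production detection logic. It assumes raw TLS records as byte strings and applies the core Heartbleed (CVE-2014-0160) heuristic: a heartbeat record whose declared payload length exceeds the bytes actually carried. The record layout follows RFC 6520 (heartbeat content type 24, minimum 16 bytes of padding); the sample records are fabricated.

```python
import struct

def heartbeat_looks_malicious(record):
    """Flag a TLS heartbeat record whose declared payload length exceeds
    what the record can actually contain -- the Heartbleed pattern.
    Assumed layout: type(1) version(2) length(2) | hb_type(1) payload_len(2) ...
    """
    if len(record) < 8 or record[0] != 24:   # 24 = TLS heartbeat content type
        return False
    record_len = struct.unpack(">H", record[3:5])[0]    # bytes after 5-byte header
    payload_len = struct.unpack(">H", record[6:8])[0]   # attacker-controlled field
    # A benign heartbeat fits payload + hb_type/length + >=16 bytes of padding.
    return payload_len > record_len - 3 - 16

# Benign: declares a 16-byte payload and actually ships it plus padding.
benign = (bytes([24, 3, 2]) + struct.pack(">H", 35)
          + bytes([1]) + struct.pack(">H", 16) + b"A" * 32)
# Malicious: 3-byte heartbeat body that claims a 16 KB payload.
evil = bytes([24, 3, 2]) + struct.pack(">H", 3) + bytes([1]) + struct.pack(">H", 16384)

print(heartbeat_looks_malicious(benign), heartbeat_looks_malicious(evil))
```

A real deployment would run a check like this over decrypted, reassembled TLS streams; the value of the sketch is that no vendor signature is needed, only a field-level decoder the analyst can write on day one.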

Take, for example, CVE-2014-1776, a zero-day exploit used in targeted attacks. The exploit used Vector Markup Language (VML) to abuse Internet Explorer’s mishandling of memory allocations. Because VML is not widely used, a common remediation suggestion was to disable VML at each endpoint in the network until patches were available.(3)

But would disabling VML break critical business processes? Analyzing network data can tell which (if any) hosts are using VML. Then, analysts can inspect the communications, look into the packets and determine whether they contain any of the exploit code.
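
The triage step described above can be sketched as follows. The VML namespace string is real; the traffic records, hosts and minor markers are illustrative assumptions, and a real pass would run over reconstructed HTTP bodies rather than a hard-coded list.

```python
# Sketch: scan reconstructed HTTP response bodies for VML markers to learn
# which hosts (if any) actually receive VML before disabling it everywhere.

VML_MARKERS = (
    b"urn:schemas-microsoft-com:vml",   # the VML XML namespace
    b"<v:shape",                        # common VML element (illustrative)
)

def uses_vml(body):
    lowered = body.lower()
    return any(marker in lowered for marker in VML_MARKERS)

responses = [
    ("10.0.4.2", b"<html><v:shape xmlns:v='urn:schemas-microsoft-com:vml'/></html>"),
    ("10.0.5.3", b"<html><p>plain page, no VML here</p></html>"),
]

vml_hosts = sorted(host for host, body in responses if uses_vml(body))
print(vml_hosts)
```

Hosts that never appear in `vml_hosts` can have VML disabled with confidence; the ones that do appear get a closer look at the packet contents for exploit code.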

Who Did This to Us?

Full network packet capture can be used to help differentiate an insider attack from an external attack. Either type of attack is likely to exfiltrate data over the network, but in the case of an insider attack, analysts would not observe any exploit traffic without employing full network packet capture. On the other hand, an external attacker is likely to send exploit traffic before exfiltrating data, so both inbound and outbound traffic, as well as traffic between sensitive internal systems, should be monitored.

Network capture should also include VPN traffic and other access from external business partners. In the infamous Target breach, the initial source of compromise was an external contractor with access to Target’s networks. Even with all ingress and egress traffic to the Internet covered, defenders still had to ask: did the compromise come from inside, or was it a business partner? Which one? Sensor placement matters when it comes to capturing network data. Figure 1 illustrates desirable locations for network traffic capture that are often ignored when choosing where to monitor.

The ability to reconstruct network sessions is absolutely critical. Imagine the case of a credential-harvesting spearphishing attempt. It is important to be able to answer two questions: How many users clicked on the link to visit the site? Of those users, how many submitted their credentials to the site? Network data can provide the answers.
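
Given reconstructed sessions, answering those two questions reduces to a simple query. The sketch below assumes each session has already been summarized as (user, HTTP method, URL); the phishing URL and users are hypothetical. A GET to the phishing page means the user clicked the link; a POST to it means credentials were likely submitted.

```python
# Illustrative sketch over assumed session summaries from full packet capture.
PHISH_URL = "http://examp1e-login.test/auth"   # hypothetical phishing page

sessions = [
    ("alice", "GET",  PHISH_URL),
    ("bob",   "GET",  PHISH_URL),
    ("bob",   "POST", PHISH_URL),                       # bob submitted a form
    ("carol", "GET",  "http://intranet.local/portal"),  # unrelated traffic
]

clicked   = {u for u, method, url in sessions if url == PHISH_URL and method == "GET"}
submitted = {u for u, method, url in sessions if url == PHISH_URL and method == "POST"}

print(len(clicked), len(submitted))   # users who clicked vs. users who submitted
```

The distinction drives the response: users in `clicked` but not `submitted` may only need a warning, while those in `submitted` need immediate credential resets.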

Malware authors often reuse code in their creations. This includes reuse of usernames, passwords, HTTP User Agents and other network-based artifacts. Once one of these artifacts is discovered, analysts can identify other network traffic that shares the same characteristics. Here too, protocol-level dissection is critical to establish the needed level of granularity.
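
Pivoting on a reused artifact can be as simple as the following sketch. The User-Agent string and metadata records are invented placeholders, not real indicators; in practice the artifact would come from malware analysis and the records from protocol-level dissection of captured traffic.

```python
# Sketch: sweep captured HTTP metadata for hosts sharing a known malware
# artifact (here, a hard-coded User-Agent). All values are illustrative.

KNOWN_ARTIFACTS = {"Mozilla/4.0 (compatible; MSIE 6.0; FakeBotUA)"}

http_metadata = [
    {"host": "10.0.1.5", "user_agent": "Mozilla/4.0 (compatible; MSIE 6.0; FakeBotUA)"},
    {"host": "10.0.2.9", "user_agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/120"},
    {"host": "10.0.7.1", "user_agent": "Mozilla/4.0 (compatible; MSIE 6.0; FakeBotUA)"},
]

matches = sorted({r["host"] for r in http_metadata if r["user_agent"] in KNOWN_ARTIFACTS})
print(matches)   # additional hosts implicated by the shared artifact
```

One artifact discovered on a single compromised host thus expands the investigation to every host exhibiting the same fingerprint.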

Properly scoping an attack can have huge financial impacts on the organization. Consider, for example, a credit card breach. Host logs show that the attackers definitely had access to the database containing cardholder data. However, based on host logging, it is unclear what they actually accessed in the short time before they were eradicated from the network. The attackers’ backdoor program used encryption, and network traffic was sent to the attackers. But what was stolen? Do you need to notify (and reissue cards) to a million customers, a thousand customers or none? Without network traffic data, it is impossible to answer the questions. With the network data and a good reverse engineering team, the answer is within reach.

Employees Versus External Attackers: A Lesson in Motivation

In this scenario, we use network data capture to determine whether the attacker is an insider or an outsider who has compromised credentials to use now and may return later to continue the attack.

Organizations need to provide remote access to their employees. But how do they distinguish legitimate VPN connections from malicious ones? The answer may lie in the activity that happens after the connection: Attackers are often interested in exfiltrating large amounts of data, whereas employees might access an intranet web portal to perform some task and then disconnect.

By examining patterns of usage, made possible with network traffic capture, analysts can be alerted to anomalous behavior and can use the captured data to drill down and see the data being sent out and other activities being performed under a compromised user credential.
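
One simple form of that anomaly alerting is a statistical outlier check on outbound volume per VPN session. This is a sketch under invented numbers: a baseline of typical sessions (a few MB out), a z-score cutoff of 3, and a session that exfiltrates gigabytes.

```python
import statistics

# Sketch: flag VPN sessions whose outbound byte count is far above the
# historical baseline. Thresholds, users and byte counts are illustrative.

def anomalous_sessions(baseline_bytes, sessions, z_cutoff=3.0):
    mean = statistics.mean(baseline_bytes)
    stdev = statistics.stdev(baseline_bytes)
    return [user for user, sent in sessions
            if (sent - mean) / stdev > z_cutoff]

# Typical employee sessions send a few MB out; one session sends ~4 GB.
baseline = [2_000_000, 3_500_000, 1_200_000, 2_800_000, 4_100_000]
today = [("alice", 2_400_000), ("mallory", 4_000_000_000)]

print(anomalous_sessions(baseline, today))
```

The alert itself proves nothing; it tells the analyst which captured sessions to drill into to see what was actually sent out under the credential.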

Is It Really Over?

When they first detect attackers, defenders often rush to eradicate the offenders from the network. This sounds like the correct approach, but attackers may change tools and tactics and continue their attack if they are not completely eradicated. This is what happened when Chinese attackers hit The New York Times(4) using new tools after being discovered by security researchers.(5)

When the compromise is first detected, defenders often remove the obvious infection in lieu of taking the wipe and reimage approach to remediation. But secondary backdoors on a compromised machine are devilishly difficult to locate. By instituting extra monitoring, defenders can identify resumed attacker communications with the machine—a clear indication that remediation has failed.

Can It Happen Again?

The short answer is, “Of course it can.” Just blocking access to untrusted domains neuters a huge percentage of spearphishing attacks. Even large enterprises communicate with a surprisingly small subset of the Internet on a regular basis. But trying to determine trustworthiness on your own, without access to reputation data, is a real challenge. Best practices dictate that reputation data be obtained from a reputable provider ingesting data from all over the globe.
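
Because the set of regularly contacted domains is small and stable, a never-seen-before check is a cheap first filter. The sketch below assumes a baseline set built from historical capture (domain names are placeholders); reputation data from a provider would then be applied to whatever survives this filter.

```python
# Sketch: flag requested domains absent from a trailing baseline window.
# Baseline and request lists are illustrative assumptions.

baseline_domains = {"intranet.local", "example.com", "cdn.example.net"}

def never_seen_before(requested, baseline):
    return sorted(set(requested) - baseline)

todays_requests = ["example.com", "evil-payroll-update.test", "intranet.local"]
print(never_seen_before(todays_requests, baseline_domains))
```

A freshly registered domain appearing here, combined with poor reputation data, is exactly the profile of a spearphishing landing page.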

The vulnerability originally used to compromise the network was likely patched in the remediation phase. But was the patch enough? Because attackers have already compromised machines, they know more about the environment and may be able to exploit vulnerabilities they would never have discovered through remote scanning. What if patches can’t be applied because of software constraints, such as a Java dependency in custom or third-party software? These situations call for enhanced monitoring to determine whether attackers are attempting to regain control of the organization’s systems. The best method for this enhanced monitoring, particularly at scale, is network traffic analysis.

Understanding normal network behaviors, traffic patterns and actions, and being able to detect what is not normal, are critical to determining whether attackers have returned to the network. A great example of anomaly detection (discussed earlier) is CVE 2014-1776. In the weeks following the announcement of that vulnerability, many organizations saw an increase in VML traffic. For organizations where VML is never used, this was a clear sign the attack was spreading, even as organizations tried to contain and mitigate the vulnerabilities.

Conclusion

Despite the best efforts of security personnel, compromises do occur. And when they do, it is essential that analysts be able to provide the answer to five important questions detailing what was compromised, how it was compromised, who did it, whether it is truly over and whether it can happen again.

Adopting a life-cycle approach to network defense is a good starting point from which to build defenses. Protection, detection, investigation and analysis, containment and mitigation, and remediation are keys to a healthy security environment. Ideally, many of these technologies and processes would flow together, wrapped in a life-cycle management and workflow remediation process to reduce risk and attack surfaces while promoting faster response and reducing false positives.

Full network traffic capture is an important capability for intelligent analysis and response. By combining network traffic capture with proxies at the perimeter, particularly those that decrypt SSL data, organizations can obtain answers to key questions about the compromise and support the detection, response and remediation stages of the life cycle.

EXPERT Advice: Critical preventive measures consist of using secure practices for hardening systems, closing the loop on vulnerabilities and remediation, and monitoring for misconfigurations or signs of abuse.

EXPERT Advice: Systems allowing the analyst to pinpoint relevant communications in a sea of data are key to maximizing detection efficiency.

EXPERT Advice: Be sure to size packet capture systems appropriately to handle your system load.

EXPERT Advice: Analysts need tools that provide context to what they are seeing. It’s critical to be able to reconstruct sessions at the protocol level.

EXPERT Advice: By examining network traffic, the organization can determine precisely whether hashes were stolen and make a knowledgeable decision about the need to change passwords.

EXPERT Advice: By using network packet data, defenders can be sure of detecting the attackers if they activate dormant backdoors.

EXPERT Advice: Attackers must use network communications to infect and spread. By analyzing packet data, analysts can identify the actual hosts targeted and infected by the attacker.

References

1 “Some People Want A Time Limit On The NSA’s ‘Zero-Day’ Exploits — Here’s Why That’s A Terrible Idea,” www.businessinsider.com/why-a-time-limit-on-zero-days-is-a-bad-idea-2014-7

2 “Answering the Critical Question: Can You Get Private SSL Keys Using Heartbleed?” http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed

3 “Microsoft Security Bulletin MS14-021 – Critical,” https://technet.microsoft.com/en-us/library/security/ms14-021.aspx

4 “New York Times attackers update tools and change tactics,” www.computing.co.uk/ctg/news/2288203/new-york-times-attackers-update-tools-and-change-tactics

5 “Chinese Army Unit Is Seen as Tied to Hacking Against U.S.” www.nytimes.com/2013/02/19/technology/chinas-army-is-seen-as-tied-to-hacking-against-us.html