Security


Beyond Heartbleed: 5 Basic Rules To Reduce Risk

Firms continue to migrate sensitive information into fewer web-based applications with homogenous environments, increasing the potential for damage.

When we talk about "the next Heartbleed," we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed was. To put some bounds on this, let's define a "supercritical" vulnerability, one that would have an impact similar to Heartbleed's, as one that meets all four of the following criteria (all of which Heartbleed does meet):

• Affects software that is in widespread use as an Internet-facing service that commonly handles sensitive data
• Is present in version(s) of that software representing a sizable percentage of the deployed base
• Can be exploited to reveal sensitive information (directly or indirectly) without logging in with a valid username and password
• Its method of exploitation is widely known

This definition narrows things down very quickly, simply because only two types of services can reasonably meet the first criterion: web servers and email. Over the past decade, many services that would previously have existed as client/server applications with their own protocols have been migrated to web-based applications, and over the past few years those web-based applications have begun a steady migration to a handful of cloud-based service providers, often running homogenous environments. We are putting more and more eggs in fewer and fewer baskets, which increases the potential for damage if one of those baskets is found to be vulnerable.

The only widespread service to buck this consolidation trend, at least in part, and remain standalone is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: the front ends have largely moved to web-based services such as Gmail, Yahoo Mail, and Hotmail, but those services still rely on the same underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle.

We also have to remember that there are other protocols and pieces of software that underpin the frontend web and email services. These include encryption technologies, like the OpenSSL software that was affected by Heartbleed, as well as the operating system software itself. Pretty much the worst thing imaginable (much worse than Heartbleed) would be an arbitrary code execution vulnerability in the TCP/IP stack (the chunk of the operating system software that controls all network communications) of a widely used version (or versions) of a common server operating system like Windows, Linux, or FreeBSD. This would let an attacker take complete control of any affected system that can be reached from the Internet, likely regardless of what services it is or is not running.

Defending against these sorts of vulnerabilities is a daunting task for the IT administrator, who is responsible for securing systems running software he has no control over while keeping those services available even in the face of potential zero-day vulnerabilities. A few basic rules can drastically reduce the risk:

Less software is better: Unfortunately, most default software installs come with every possible package and option installed and enabled. Every piece of software contains vulnerabilities, some just not found yet, and every piece of software and functionality that gets installed and enabled exposes more of these potential vulnerabilities (even security software such as OpenSSL, antivirus, and firewalls). Any unnecessary package that is disabled, removed, or never installed in the first place decreases the chance that a particular system is affected by the next big vulnerability.
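To make this auditable in practice, here is a minimal Python sketch that checks which common TCP ports a host actually answers on, so that unneeded services can be identified and shut off; the host address and port list are illustrative placeholders:

```python
# Minimal sketch: inventory which common TCP ports a host exposes,
# so unnecessary services can be identified and disabled.
import socket

HOST = "127.0.0.1"  # host to audit (placeholder)
COMMON_PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 5432, 8080]

for port in COMMON_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when something is listening on the port
        if s.connect_ex((HOST, port)) == 0:
            print(f"Port {port} is open -- is the service behind it really needed?")
```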

Reduce privileges: Administrators have a bad habit of running software under a privileged account such as root, a local administrator, or a domain administrator. Similarly, software developers and database administrators have a bad habit of using accounts with full read/write access to entire databases (or, in some cases, entire database servers containing multiple unrelated databases) for interaction between a web-based front end and a database backend. These practices are convenient because they eliminate the need to think about user access controls, but the consequence is that an attacker gains those same unlimited privileges if he manages to exploit a vulnerability. The better practice is to create dedicated accounts for each service and application with the bare minimum access required for the software to function. That may not stop an attacker, but it will limit the immediate damage he can cause and provide an opportunity to stop him.
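As an illustration of the principle, here is a minimal Python sketch (Unix-only, must start as root, and the "webapp" account name is a hypothetical placeholder) of a service that binds a privileged port and then permanently drops to a dedicated low-privilege account:

```python
# Minimal sketch: bind a privileged port as root, then permanently
# drop to a dedicated low-privilege account so that a compromise of
# the service does not yield root.
import os
import pwd
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 80))  # binding a port below 1024 needs root
sock.listen(5)

user = pwd.getpwnam("webapp")  # dedicated service account (placeholder)
os.setgid(user.pw_gid)  # drop group first, then user;
os.setuid(user.pw_uid)  # the reverse order would fail to shed root

# From here on, any exploited bug runs with "webapp" privileges only.
```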

Patch: For those services that do have to stay enabled, stay on top of patches. Often, vulnerabilities are responsibly disclosed only to software vendors, and patches are quietly released before the details of the vulnerability are publicly acknowledged. Administrators who proactively apply security patches will have already protected themselves, while those who haven't will be in a race with attackers to apply patches before exploits are developed and used.
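As a small worked example, this Python sketch reports which OpenSSL build the local Python interpreter links against and flags the Heartbleed-affected range (OpenSSL 1.0.1 through 1.0.1f, fixed in 1.0.1g); it is a version check only, not a substitute for a patching process:

```python
# Minimal sketch: flag an OpenSSL build in the Heartbleed-affected
# range (CVE-2014-0160: OpenSSL 1.0.1 through 1.0.1f).
import ssl

version = ssl.OPENSSL_VERSION  # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
print("Linked against:", version)

# Affected: plain 1.0.1 (note trailing space) and 1.0.1a through 1.0.1f
vulnerable = tuple(f"1.0.1{c}" for c in "abcdef") + ("1.0.1 ",)
if any(v in version for v in vulnerable):
    print("This OpenSSL build is in the Heartbleed-affected range -- patch it.")
else:
    print("Not in the known Heartbleed range (keep patching anyway).")
```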

Firewall: The definition of a "supercritical" vulnerability above includes "Internet-facing" for a reason: It is much easier to find a vulnerable system when it can be found by scanning across the open Internet. Services intended for internal use should be restricted to access from internal systems only, and for those internal services that require access from outside, consider a VPN to authenticate users before allowing direct access to the system. Firewalls are not an excuse to avoid patching, however, since many attacks are launched across internal networks from systems already compromised by malware.
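The same idea applies at the level of an individual service. Here is a minimal Python sketch contrasting a socket bound to an internal-only address with one exposed on every interface; the port number is an illustrative placeholder:

```python
# Minimal sketch: an internal-only service should bind to an internal
# address rather than all interfaces. "0.0.0.0" exposes the socket on
# every network the host is attached to; "127.0.0.1" (or an address on
# an internal subnet) keeps it off the public network entirely.
import socket

internal_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
internal_only.bind(("127.0.0.1", 8080))  # reachable from this host only
internal_only.listen(5)

# By contrast, this would listen on every interface:
# exposed = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# exposed.bind(("0.0.0.0", 8080))
```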

Defense in Depth: There will always be some services that must stay enabled and exposed to the Internet and that may be affected by a zero-day vulnerability for which no patch is yet available. We must then rely on our ability to detect and stop an attack before it succeeds. Hopefully, the combination of techniques described above (especially reduced privileges) will slow an attacker down enough that he becomes easier to detect. We can also increase the chances of detection by looking for the signs of an attack (with antivirus, intrusion detection/prevention systems, file integrity monitoring, behavior monitoring, and log correlation) while making it harder for him to exfiltrate sensitive data (by firewalling outbound connections, deploying data loss prevention technology, and proxying). Finally, we must always have a real live human ready to respond quickly to alerts and intervene to stop an active attack. Without that, all of this technology is useless, and it's just a matter of time before an attacker finds a way around it.
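As a toy illustration of the log correlation piece, this Python sketch counts failed logins per source address and raises an alert once a threshold is crossed; the log lines and threshold are invented for the example:

```python
# Minimal sketch of log correlation: count failed logins per source
# address and alert a human responder when a threshold is crossed.
from collections import Counter

LOG_LINES = [
    "2014-05-21 10:39:01 auth failed user=admin src=203.0.113.7",
    "2014-05-21 10:39:02 auth failed user=root src=203.0.113.7",
    "2014-05-21 10:39:03 auth ok user=alice src=198.51.100.4",
    "2014-05-21 10:39:04 auth failed user=admin src=203.0.113.7",
]
THRESHOLD = 3

failures = Counter(
    line.split("src=")[1] for line in LOG_LINES if "auth failed" in line
)
for src, count in failures.items():
    if count >= THRESHOLD:
        # In practice this would page an on-call responder, not print.
        print(f"ALERT: {count} failed logins from {src} -- investigate now")
```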

 

Christopher Camejo is an integral part of the Consulting leadership team for NTT Com Security, one of the largest security consulting organizations in the world. He directs NTT Com Security's assessment services including ethical hacking and compliance assessments.

Comments
Alison_Diana, 5/21/2014 | 10:39:04 AM
Human at Work
I like your final point: We must keep a live, well-trained person in charge of all these great technological tools because there's always someone trying to develop a new malicious program to work around our defenses. Without a live gatekeeper surveying and tracking these monitors and defenses, you're right: all those investments are moot.
David F. Carr, 5/21/2014 | 10:42:39 AM
Better security in financial services?
While people can slip up in any industry, financial services pros ought to give themselves some credit for doing a better job with information security than most other industries. Too bad it's never enough.
Lorna Garey, 5/21/2014 | 10:53:18 AM
Clarify ...
By this are you saying that SMTP is so vulnerable that we should not be using it as a corporate email protocol? "The only widespread service to buck this consolidation trend, at least in part, and remain standalone is corporate email, which still relies on the SMTP protocol for transport. Personal email accounts are in double jeopardy: the front ends have largely moved to web-based services such as Gmail, Yahoo Mail, and Hotmail, but those services still rely on the same underlying SMTP protocol for mail transport as corporate email systems, making them vulnerable to attack from either angle."

Pretty scary - weakness on both sides, data in the middle!
ccamejo, 6/3/2014 | 5:13:52 PM
Re: Clarify ...
Lorna, It's not that SMTP is necessarily more vulnerable than any other protocol; in fact, SMTP is fairly simple and has been around so long that most of the bugs should be worked out of it by now. We should be more worried about the mail server software that implements the SMTP protocol: that software is updated with new features much more frequently than the protocol itself, and those feature updates are where bugs tend to sneak in.

The real point is that a mail server (running SMTP) is one of the few services that is essentially required to be exposed to the Internet by every organization that wants to send and receive email (you're not going to be getting email from anyone outside your own organization without it). The widespread exploitation of a vulnerability in one of the commonly used mail servers would be potentially catastrophic due to the large number of organizations that would be vulnerable and exposed during the critical "zero day" window when no patch was yet available. I can't imagine many organizations would be willing to shut off their ability to exchange email with the outside world until a patch came out.

Contrast this with protocols like ODBC that are critical but aren't (or at least shouldn't be) accessible from the Internet and therefore wouldn't be easy to attack, or protocols like FTP that may be Internet-facing but could be shut off temporarily in case of a critical vulnerability without causing a major crisis.
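(To make the exposure concrete, a minimal Python sketch that reads the greeting banner any Internet-reachable mail server must present, which often reveals the software in use; the hostname is a placeholder:)

```python
# Minimal sketch: any mail server that accepts Internet email must
# answer on port 25, so its banner is readable by anyone.
import socket

with socket.create_connection(("mail.example.com", 25), timeout=5) as s:
    banner = s.recv(1024).decode(errors="replace")
    print("SMTP banner:", banner.strip())
```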

On a side note: people tend to underestimate the amount of data that can be gleaned from email. First, think about how many sensitive documents are sitting in the average "deleted items" folder, where they can still be retrieved and searched; then think about what typically happens when you click "I lost my password" on just about any website. The Mat Honan hack from a few years ago provides a great example of the potential consequences of such password reset shenanigans.