All The Security Tools In The World Won't Help If You Don't Do This

In my travels as a Security Architect for Check Point Software Technologies, I have seen many different customer environments. Granted, there is only so much I can see over the course of a couple of days, but I’ve seen enough to get a sense for where companies are generally at.

Based on the number of security incidents in the news lately, as well as the number of engagements Check Point’s Incident Response team is taking on, I’d say that companies generally have a long way to go in securing all the things. And I’ve got a good sense for why.

No, it’s not because customers don’t have Check Point’s latest security technologies, which independent third party NSS Labs continues to rate as “Recommended,” as you can see from Check Point’s results in the 2015 NSS Labs Breach Detection Systems test. You can have all the security products in the world and unlimited budget to buy them. They won’t do you a bit of good if you don’t do this one thing.

What is this thing? It’s something you can implement with the security products and staff you already have today. It’s hard work, no doubt, depending on the point you’re starting from, but it will pay dividends down the road, regardless of the evolving threat landscape.

I’m talking, of course, about segmentation. What do I mean by that? In response to Episode 126 of the Defensive Security Podcast, I crafted this tweet, which I think encapsulates the idea:

Segmentation is really about ensuring least privilege at all OSI layers.

By least privilege, I mean the minimum privilege or access required to do a particular task and no more. By OSI layers, of course, I am referring to the OSI model that we all know and love.

To see how to apply this, let’s evaluate two hosts, A and B, and the “privileges” that A needs to initiate a communication to B. You could potentially do this with security zones as well, but if you’re doing this exercise properly, you need to do it for items within the same security zone too.

Let’s assume A should never talk to B under any circumstances. The best way to prevent this from happening is to make sure there is no physical or wireless path through which that can happen, i.e. Layer 1 of the OSI model. And by that, I mean no direct or indirect connection, either using a physical medium like Ethernet, or a wireless medium like Bluetooth or 802.11. That isn’t to say a really determined bad guy can’t somehow bridge the gap another way, but this automatically makes getting data between A and B much more difficult.

Let’s assume there is some potential connectivity between A and B. In almost all environments, that’s a certainty. The next layer up is the Data Link Layer, Layer 2. If Hosts A and B are on different network segments, this layer may not apply so much. However, it can apply for hosts on the same subnet, particularly on the wireless side. If there is no reason two hosts on a given subnet should ever need to talk to each other, and there is a way to prevent it at the switch or wireless access point, do it.

Moving up the stack, we come to Layers 3 and 4, otherwise known as IP address and port. What functions does a user on Host A need to access on Host B? What TCP/UDP ports are required for that communication? Do you really need all ports when, say, TCP port 443 will do? Those needs should be enumerated and explicitly permitted, with access from all other hosts/ports denied.

Unlike at Layers 1 and 2, where the options for enforcing a policy are pretty limited, there are a few places this policy can be enforced:

  • A host-based firewall on Host A
  • A host-based firewall on Host B
  • A firewall located in the network path between Host A and Host B

This sort of policy can be implemented with nearly any firewall that’s been deployed in the last two decades. It can (and should) be enforced in multiple places as well.
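To make that concrete, here is a minimal sketch of what a least-privilege Layer 3/4 policy boils down to, written in Python purely for illustration. It is not any vendor’s rulebase syntax, and the hosts and ports are hypothetical: enumerate the flows you actually need, permit them explicitly, and deny everything else by default.

```python
# Conceptual sketch of a least-privilege Layer 3/4 policy:
# an explicit allow list plus a default deny. Hosts and ports are hypothetical.

ALLOWED_FLOWS = {
    # (source host, destination host, protocol, destination port)
    ("host-a", "host-b", "tcp", 443),   # A may reach B's HTTPS service
    ("host-a", "dns-1",  "udp", 53),    # A may resolve names via the DNS server
}

def is_permitted(src: str, dst: str, proto: str, dport: int) -> bool:
    """Permit only flows that were explicitly enumerated; deny everything else."""
    return (src, dst, proto, dport) in ALLOWED_FLOWS

print(is_permitted("host-a", "host-b", "tcp", 443))   # True  - enumerated
print(is_permitted("host-a", "host-b", "tcp", 3389))  # False - default deny
print(is_permitted("host-c", "host-b", "tcp", 443))   # False - not enumerated
```

Whether the enforcement point is a host-based firewall or one in the network path, the logic is the same: if a flow wasn’t enumerated, it doesn’t happen.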

In current generation firewalls, these sorts of access policies can be augmented by incorporating identities associated with given IPs, e.g. from Active Directory, RADIUS, IF-MAP, or other sources. As the user logs into these identity sources for other reasons, an IP/user/group mapping is created on the firewall. The firewall can use this information to provide further granularity on the permitted traffic. For example, if someone from marketing sitting on the user segment wants to access a finance server, they will be denied, whereas someone from finance on that same segment will have no issue. This can happen regardless of the specific IP and doesn’t even require any application-specific intelligence since identities are acquired out-of-band.
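Conceptually, the identity piece is just one more mapping layered on top of that. Here is a rough sketch along the same lines; in a real deployment the IP-to-user mapping is learned out-of-band from the identity sources mentioned above rather than hard-coded, and all of the names below are made up.

```python
# Rough sketch of identity-based access control layered on top of IP rules.
# In a real firewall the IP-to-user mapping is learned out-of-band (e.g. from
# Active Directory logon events); here it is hard-coded for illustration.

IP_TO_USER = {
    "10.1.1.25": ("alice", "finance"),
    "10.1.1.26": ("bob", "marketing"),
}

# Rules keyed on group rather than source IP. Destinations are hypothetical.
GROUP_RULES = {
    ("finance", "finance-server", 443): "allow",
}

def check_access(src_ip: str, dst: str, dport: int) -> str:
    user, group = IP_TO_USER.get(src_ip, (None, None))
    return GROUP_RULES.get((group, dst, dport), "deny")

print(check_access("10.1.1.25", "finance-server", 443))  # allow (finance user)
print(check_access("10.1.1.26", "finance-server", 443))  # deny  (marketing user)
```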

Once we get to Layer 7, things get a lot more complicated. What applications are we allowing Host A to use when talking to Host B? How can I differentiate between access to http://intranet.example.com and http://phonebook.example.com when these websites are hosted on the same server at the same IP? Or what happens when I don’t know where Host B actually is, because it’s in the cloud? This is where current so-called “next generation” firewalls can help, as they can easily allow access to things based on more than just an IP address.
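To see why an IP/port rule can’t make that distinction on its own, note that both of those hostnames can resolve to the very same address; the only thing that tells them apart on the wire is Layer 7 information such as the HTTP Host header (or, for HTTPS, the TLS SNI field). A quick illustration, with hostnames that are obviously placeholders:

```python
# Two hypothetical sites hosted on the same server resolve to the same IP,
# so a Layer 3/4 rule cannot tell them apart; the distinguishing information
# lives at Layer 7. Hostnames here are placeholders and may not resolve.
import socket

for name in ["intranet.example.com", "phonebook.example.com"]:
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror:
        print(name, "-> (does not resolve in this environment)")

# What actually distinguishes the two sites is carried inside the request
# itself, e.g. the HTTP Host header:
request = (
    "GET / HTTP/1.1\r\n"
    "Host: phonebook.example.com\r\n"
    "Connection: close\r\n\r\n"
)
print(request)
```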

One thing to be aware of with application-aware firewalls is that some traffic needs to be allowed in order to identify the application being used. If the packets allowed in happen to be malicious, that could be problematic. This means you still need a strict IP/port based policy in addition to one that is application specific. Even Palo Alto Networks, whose marketing has historically said IPs and ports don’t matter, begrudgingly admits this point when pressed:

We still recommend that you use the following security policy best-practices:

  • For applications that you are enabling, you should assign a specific port (default or custom).
  • For applications that you explicitly want to block, expand the policy to any port, to maximize the identification footprint.

Network-based application-aware firewalls are one way to control application usage. Another is to implement Application Whitelisting on end user machines. By limiting what applications get run, you reduce the risk that something your users find on the Internet suddenly decides to make friends with all your machines. This is exceptionally difficult to manage, which is why it isn’t seen in most environments.

For the applications that are allowed, there are controls on the server side as well, hopefully. Does the application have granular controls in place to limit the scope of what authenticated users are authorized to do? Are you actually using those? Are those critical applications being patched by their manufacturer for critical security bugs? Are those patches being deployed in a timely manner?

Active Directory is also an important thing to discuss. It has a pretty extensive permissions model. Are you actually using it? In other words, are your users–and administrators–configured with only the bare minimum permissions they need to perform their authorized functions? Are users using named accounts or generic accounts like “guest” or “administrator”?

There is one other type of segmentation control to be considered, and that’s around data that exists outside applications, such as Microsoft Office documents. Data can be protected with encryption, using technologies like full disk encryption, media encryption (USB sticks, etc), and document security (think DRM). You can prevent or track the movement of data in your enterprise (encrypted or otherwise) using a “port protection” product that restricts your ability to use USB drives, or an in-line network-based data loss prevention tool.

I could go on and on. These controls are not particularly groundbreaking. Most of them have been around for years, yet in company after company, I observe one or more of these controls not being used to their full potential. Why not comes down to three basic reasons:

  • Lack of knowledge about the controls
  • A desire for convenience over security (includes not wanting to manage said controls)
  • No budget

Going from a flat network to a segmented one is by no means easy. It will require a lot of planning and work to implement successfully. The good news is you can get started on the basics using the products you have today, i.e. without spending money. Over time, you can work your way up the OSI model, adding more sophisticated controls, like those new-fangled Breach Detection Systems that NSS Labs recently reviewed.

Even as the threat landscape and security tools evolve, a properly segmented environment built around least privilege principles will always be a good and necessary first line of defense.

Disclaimer: These are my own thoughts. If you’re interested in what Check Point has to say on the matter, see the Software-Defined Protection framework.

When talking with Check Point customers, a common request I hear is for the ability to “decrypt” SSH traffic, see inside of said traffic, and make security decisions based on what it finds, including blocking tunneling. In my last post, I explain how the SSH Inspection feature provided by a couple of Check Point’s competitors actually works and why you might want to think twice about using it. If you don’t believe my analysis, check out the following video, which shows how these SSH decryption features can be used against you:

And in case you’re thinking I’m just picking on Palo Alto Networks, Fortinet’s SSH decryption has the same flaw:

There is clearly a need to control what happens with SSH connections. When used by the general user population, SSH is similar to an anonymizer like Tor or UltraSurf, which hides the true intent of the encrypted traffic and serves no business purpose. How should you handle this requirement without compromising the security of SSH the way SSH Inspection does?

My general stance on allowing SSH still stands: SSH should only be permitted for specific individuals to specific hosts with a clear business need for this kind of access. That said, let’s assume you want to do more than just allow/block SSH access: you also want to prevent tunneling and log what your users do over SSH, without compromising the security of SSH in the process.

Many years ago, when I worked for Nokia, whenever I was sitting in the office and wanted to SSH out to the Internet, I had to go through an explicit SSH proxy server that operated on the following principle:

  1. You SSHed explicitly to the proxy
  2. You were prompted by the proxy for the desired destination IP and username to use on the remote server
  3. You logged into the remote SSH server

It was not possible to use port forwarding with this configuration. You also got a visual indication if the remote host key changed, as the proxy would keep track of it the same way a regular SSH client would. It still didn’t allow authentication with an RSA key, but it was an improvement.

Nokia’s SSH proxy was home grown, but you could do something similar with a jump server that authorized individuals use to SSH to other hosts, with the full ability to use RSA keys and verify a remote server’s host key. You can prevent port forwarding and the like by disabling tunneling in the SSH daemon on the jump server. You could also set up whatever type of logging is necessary on this jump server to meet your requirements. The servers that could be reached from this jump server would be allowed or blocked by your security gateway. Additional logging should be set up on the permitted servers as well.
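Assuming the jump server runs OpenSSH, the hardening largely comes down to a handful of standard sshd_config directives (AllowTcpForwarding, X11Forwarding, PermitTunnel, GatewayPorts, AllowAgentForwarding). The Python snippet below is a rough sketch of a check for those settings; the directive names are standard OpenSSH, but treat the script as illustrative rather than a complete audit tool (it ignores Match blocks and compiled-in defaults, for instance).

```python
# Illustrative check that an OpenSSH jump server has tunneling/forwarding
# disabled. The directives below are standard sshd_config options; the script
# itself is just a sketch, not a complete audit tool.

EXPECTED = {
    "allowtcpforwarding": "no",
    "x11forwarding": "no",
    "permittunnel": "no",
    "gatewayports": "no",
    "allowagentforwarding": "no",
}

def audit_sshd_config(path: str = "/etc/ssh/sshd_config") -> None:
    found = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2 and parts[0].lower() in EXPECTED:
                found[parts[0].lower()] = parts[1].strip().lower()

    for directive, wanted in EXPECTED.items():
        actual = found.get(directive, "(not set - check the compiled-in default)")
        status = "OK" if actual == wanted else "CHECK"
        print(f"{status:5s} {directive} = {actual} (want {wanted})")

if __name__ == "__main__":
    audit_sshd_config()
```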

If setting up a jump server is too much work, you still want to allow users to SSH to arbitrary hosts, and you’re just concerned that people might use SSH for things other than an interactive session (e.g. tunneling), one way to limit this is to implement QoS on all SSH connections. You would have to choose a limit that provides acceptable performance for interactive terminal sessions but makes tunneling anything else very slow. On a Check Point Security Gateway, you can achieve this in the Application Control rulebase using the “limit” Action.

At the end of the day, it’s important to understand what your actual requirements are with respect to controlling and logging SSH traffic so the right controls can be put into place. The methods described herein balance security and usability far better than the SSH Inspection features some security gateways provide.

Disclaimer: While I did get some link love from Check Point, whom I work for, the thoughts herein are my own.

Edited to add video embeds and link to Check Point blog post on 13 Aug 2015

SSH is a wonderful tool for accessing remote systems via a CLI. It is encrypted; if set up properly, I can verify I am talking to the correct server using mutual key exchange; and I can tunnel all kinds of stuff over it. If you’re so inclined, you can even use an SSH tunnel as a SOCKS proxy.

And therein lies the problem. SSH represents a potential way to bypass security controls, in much the same way as HTTPS. To mitigate this threat, security gateways can man-in-the-middle HTTPS and SSH to “see” inside the traffic and make further security decisions on it.

Fortinet has a feature called SSH Inspection that performs this man-in-the-middle on SSH. Palo Alto Networks calls their similar feature SSH Decryption. Throughout this post, I am going to refer to the general technology as SSH Inspection but my comments apply to both implementations.

Conceptually, SSH and HTTPS are man-in-the-middled in similar fashions even though the underlying protocols are very different. While SSH Inspection provides more visibility and control, there are some tradeoffs you should be aware of.

First, a brief explanation of what happens to web traffic when HTTPS is man-in-the-middled by a security gateway. For most web (looking) traffic, provided you can easily distribute a new Certificate Authority to client PCs, end users will be none the wiser that their HTTPS is being inspected unless they check the “lock” in their browser to see which Certificate Authority signed the key of the remote server.

The security gateway ensures the connection between it and the destination site hasn’t been tampered with by validating the server certificate the same way a regular web browser would. The security gateway can even be configured to disallow connections to sites where this validation cannot take place. This gives a high degree of confidence that even though the security gateway is inspecting the connection, you are ultimately speaking with the legitimate destination server.
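For reference, this validation is the same thing any TLS client library does; the short Python sketch below (with a placeholder hostname) shows the behavior the gateway emulates on its server-facing connection: if the certificate chain or hostname doesn’t check out, the handshake simply fails.

```python
# What "validating the server certificate" looks like from a client's point of
# view, using Python's ssl module. The gateway does the equivalent on the
# server-facing side of its man-in-the-middle. Hostname is a placeholder.
import socket
import ssl

hostname = "www.example.com"  # placeholder destination
context = ssl.create_default_context()  # verifies chain and hostname by default

try:
    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Certificate validated, issued to:",
                  dict(x[0] for x in cert["subject"]))
except ssl.SSLCertVerificationError as e:
    # A gateway configured to disallow sites that fail validation would block here.
    print("Validation failed:", e)
```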

There are problems with this approach. If a specific application/site utilizes Certificate Pinning, then the security gateway-generated SSL certificate will be rejected because it is signed by a different Certificate Authority. Likewise, when client certificates are used for authentication, the process will fail through a gateway performing SSL Inspection via man-in-the-middle techniques as there is no way for the security gateway to perform client authentication on behalf of the user.

The good news is that the applications and websites affected by these limitations are few and far between. Specific exceptions for these applications and sites can be included in the relevant SSL Inspection policy, and those applications will work as before.

When SSH traffic is similarly inspected man-in-the-middle style, the exact same issues come up for the exact same reasons, but with far greater impact. This is because, unlike HTTPS, where client authentication is rare, mutual authentication of client and server is the norm with SSH.

When SSH Inspection is in use, end users are limited to password-based authentication for SSH connections as it is not possible for the security gateway to send the end user’s SSH key. Furthermore, you as an end user have no easy way to verify you are connecting to the correct host as the security gateway presents its own generated SSH host key instead of that of the remote server.

Why is that an issue? SSL/TLS utilizes certificate authorities, meaning some third party vouches for the veracity of a given certificate presented. In the case of man-in-the-middle for SSL, the firewall is, in essence, vouching for the veracity of the certificate as it is signing the certificate, and we trust the firewall, right?

SSH does not use certificate authorities, meaning that if the connection is being man-in-the-middled, we have no idea who is man-in-the-middling it; we just know the SSH host key changed. This might be OK if the firewall actually verified the veracity of the SSH host key presented.
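For contrast, here is roughly what a normal SSH client does with host keys, sketched with the paramiko library (the host and username are placeholders): it compares the presented key against known_hosts and refuses to proceed when the key is unknown or has changed. With SSH Inspection in the path, the only key the client ever sees is the gateway’s, so this check no longer tells you anything about the real server.

```python
# Sketch of the host-key check a normal SSH client performs, using paramiko
# (assumed third-party library). Host and username are placeholders.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # read ~/.ssh/known_hosts
# Refuse to connect to hosts whose key is not already known.
client.set_missing_host_key_policy(paramiko.RejectPolicy())

try:
    client.connect("ssh.example.com", username="alice")
except paramiko.BadHostKeyException as e:
    # The server's key changed from what known_hosts recorded --
    # exactly what happens when something man-in-the-middles the connection.
    print("Host key mismatch:", e)
except paramiko.SSHException as e:
    # RejectPolicy raises for unknown hosts; other SSH-level failures land here too.
    print("SSH error:", e)
finally:
    client.close()
```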

From what I can tell, neither Palo Alto Networks nor Fortinet (two companies that have SSH Inspection features) provides a way to verify that a particular SSH host key hasn’t changed. As SSH does not make use of certificate authorities, there is no way to automatically verify that the key was changed legitimately. A human could do the verification if they could see the new host key coming from the remote SSH server. With SSH Inspection enabled, the user will never see that key, which leads to a potentially interesting attack vector:

And in case you’re thinking I’m just picking on Palo Alto Networks, Fortinet’s SSH decryption has the same flaw:

To summarize, when we man-in-the-middle SSH, we can only use password authentication and we lose the ability to verify we are connecting to the desired server. And what security benefits do we get from this degradation of SSH security? The ability to:

  • Prevent Port Forwarding (including “X11 Forwarding”)
  • Block the use of ssh “commands” or “shells”
  • Possibly prevent file transfers provided they are done in a way the security gateway knows about (and there are many, many ways to do this over the CLI)
  • Possibly log commands issued over SSH

Personally, I don’t see the value of this SSH Inspection feature, as it degrades SSH security unacceptably and provides little benefit above and beyond what you would get by blocking SSH entirely except for specific, trusted individuals accessing specific, known servers. If this is not feasible, I provide some suggestions for inspecting SSH connections securely in another blog post.

Disclaimer: I’m not sure what Check Point Software Technologies thinks about this issue as I didn’t ask. These thoughts are my own.

Edited to add video embed on 13 Aug 2015

From Palo Alto CEO: Beware the Internet of Things – and watch your car:

Meanwhile, corporate network security is already facing stiff challenges that have many experts saying that breaches are inevitable – something [Palo Alto Networks CEO Mark] McLaughlin isn’t willing to concede.

“It’s as if you and I would go home tonight and say to our families, ‘Somebody is going to break into the house, probably every night. They’re going to walk around, they may take stuff, take whatever they want, but they’re coming in any time they want to every day of the week, and there’s really nothing we can do about that, so we just have to be OK with that,’” he says. “Nobody’s OK with that. That’s sort of the equivalent.”

We can argue whether or not breaches are inevitable all day long. The real question is: when a breach happens, will you be ready?

People who espouse the “assume breach” mindset aren’t saying to be OK with it. What they’re saying is to make sure critical assets are protected and that when a breach occurs, the means are present to be alerted to it and contain it.

The reality is, many organizations aren’t adequately prepared for a breach and may not even know they’ve been breached.

What I regularly see in my customer engagements are flat networks beyond the perimeter and DMZ segments, with no security controls in place between internal segments. This reflects the mindset that security devices will block 100% of all attacks, not letting anything malicious slip through. That is a bit optimistic, and it does not account for attacks that might not even go through the gateway.

In an “assume breach” mindset, you would design your network a bit differently, isolating user workstations from servers hosting critical applications with an enforcement point in the middle applying a strict access policy. The application servers themselves might be further segmented behind enforcement points of their own to make sure applications don’t unnecessarily talk to one another. The end user machines would also have additional controls on them to provide protection when they are off network (as is often the case with laptops) and to protect data. All of this would be centrally managed, with appropriate logging, reporting, and alerting across the security infrastructure.

With all these enforcement points in place and a well-designed policy, breaches, if they ever happen, can be spotted and contained quickly. With the addition of other security controls, further granularity of access control can be achieved.

If you don’t even know what your critical assets are or who should have access to them, and you don’t have the ability to apply any sort of access policy or report on activities (allowed or otherwise), you’ve got far bigger issues than any single security tool can solve.

While it’s not always a focus area of my PhoneBoy Speaks podcast, I do occasionally cover Information Security topics on my podcast. It happens often enough that I decided to create a dedicated RSS feed just for these topics: https://phoneboy.com/ps/infosec.xml

If you subscribe to the regular PhoneBoy Speaks feed, you don’t need to subscribe to this one, as it is merely a subset of the episodes that appear on my regular feed. It’s for people who are interested only in the Information Security topics I cover and not some of the other stuff I prattle on about.