From “Exclusive: The OPM breach details you haven’t seen”:

According to the timeline, OPM officials did not know they had a problem until April 15, 2015, when the agency discovered “anomalous SSL traffic with [a] decryption tool” implemented in December 2014. OPM then notified DHS’ U.S. Computer Emergency Readiness Team, and a forensic investigation began.

The discovery of a threat to the background investigation data led to the finding two days later, on April 17, of a risk to the personnel records. US-CERT made the discovery by loading data on the April 15 incident to Einstein, the department’s intrusion-detection system. On April 23, US-CERT spotted signs of the Dec. 15 exfiltration in “historical netflow data,” and OPM decided that a major incident had occurred that required notifying Congress.

SSL/TLS traffic is pretty common. However, to most security tools, this traffic is opaque. Unlike SSH, which is impossible to inspect securely, SSL/TLS can be inspected inline safely in a way that, for the most part, maintains the end-to-end security of the communications.

For this SSL/TLS traffic to be inspected, inline security gateways must examine the data as clear text. Encrypted data sent by a client to a web server is:

  1. Intercepted by the security gateway and decrypted, effectively terminating the SSL/TLS connection initiated by the client.
  2. Inspected by the relevant security functions.
  3. Encrypted again and sent to the designated web server, initiating a new SSL/TLS connection in the process.

When the security gateway terminates the connection, a certificate must be presented to the client. If the web server is protected by the security gateway, the gateway can be configured with the private key of the website in question so it basically “pretends” to be the site. For a random site on the Internet, that’s obviously not feasible, so the gateway generates a certificate on the fly and signs it with a preconfigured certificate authority the client PCs have been configured to trust. This ensures at least the client-to-firewall connection hasn’t been compromised.
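
To make the on-the-fly certificate generation concrete, here is a minimal sketch using Python’s third-party cryptography package. The function name and the seven-day lifetime are my own illustrative choices, not how any particular vendor implements it:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def mint_leaf_cert(hostname, ca_cert, ca_key):
        """Mint a short-lived certificate for `hostname`, signed by the
        gateway's preconfigured CA, so clients that trust that CA accept it."""
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=7))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                           critical=False)
            .sign(ca_key, hashes.SHA256())
        )
        return key, cert  # served to the client in place of the real site's cert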

Once the connection is terminated on the security gateway, the traffic can be inspected or filtered just like regular plaintext traffic. The options available depend on your security gateway vendor. On current (R77.x) versions of Check Point, you have the following security features at your disposal (assuming you are licensed for them): Data Loss Prevention (DLP), Anti-Virus, Anti-Bot, Application Control, URL Filtering, Threat Emulation and Intrusion Prevention.

The firewall then initiates a connection as if it were the client to the server in question. The firewall validates that the remote server presents a valid certificate for the site. If the server uses a self-signed certificate or a certificate authority the firewall isn’t familiar with, you will have to add it to the configuration as valid. This step ensures the remote end of the connection is valid and trusted, something that SSH inspection is unable to do.
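
As a sketch of what that server-facing validation amounts to, this is roughly the same check expressed with Python’s standard ssl module (the function and file names here are illustrative, not any vendor’s API):

    import socket
    import ssl

    def connect_upstream(hostname, port=443, extra_ca_file=None):
        """Open a TLS connection to the real server, verifying its certificate
        chain and hostname the way a browser would. Self-signed or private-CA
        servers must be explicitly trusted via extra_ca_file, mirroring the
        'add it to the configuration as valid' step described above."""
        ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname check
        if extra_ca_file:
            ctx.load_verify_locations(cafile=extra_ca_file)
        sock = socket.create_connection((hostname, port))
        return ctx.wrap_socket(sock, server_hostname=hostname)  # raises on a bad cert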

There are a couple of issues with SSL decryption:

  1. There may be privacy and legal regulations on the use of this feature depending on the country in which you are located. Please review your local laws and regulations.
  2. If client certificates are needed for authentication, SSL decryption cannot be used. This makes sense, since the firewall won’t have these certificates.
  3. Applications or sites that use certificate pinning will fail, since the certificate authority signing their certificates will differ once SSL decryption is enabled.
  4. When you visit a site with an extended validation (EV) certificate and SSL decryption is happening, you will not see an EV cert. This is because it is not possible to generate an EV cert with a typical certificate authority key.
  5. You cannot do SSL decryption on a network the general public uses, because you need to be able to distribute your certificate authority’s certificate to your end users. One cannot get a publicly trusted root CA for this purpose, as certificate authorities do not allow this practice. Certificate authorities used for this purpose get revoked pretty quickly, as happened to a government agency in Turkey.
  6. Decrypting SSL/TLS traffic will have a performance impact. That said, it’s generally not an “all or nothing” thing. You can choose to be selective as to which traffic gets decrypted and which does not.

Bottom line: understanding what hides inside your SSL/TLS traffic is critical to finding and eliminating network-based threats. A number of security gateway vendors, including Check Point, offer this feature, and it should be leveraged where possible. More information about Check Point’s implementation of this feature can be found in sk65123 in SecureKnowledge.

In order to allow your Nintendo Wii-U to participate in multiplayer online games, you have to configure your router/firewall/whatever in one of three ways, per Nintendo:

  • Enable Universal Plug and Play (UPnP) on your router, which is widely known to be insecure.
  • Assign your Wii-U a static IP and use DMZ mode, which opens your firewall to everyone on the Internet.
  • Assign your Wii-U a static IP and forward all UDP ports (1-65535) to your Wii-U. Mapping a couple of ports isn’t unusual, but mapping this many is basically the same thing as DMZ mode, and thus no better.

None of these options are acceptable from a security point of view. Granted, I am reasonably security savvy and can mitigate the unnecessary risks involved with such a configuration. However, most consumers cannot and will not understand the risk they are undertaking.

While I’m not exactly fond of a CyberUL, if there were such a thing, this kind of configuration would clearly be non-compliant.

Of course, neither Microsoft nor Sony is much better, as they push DMZ and UPnP as their primary solutions, but at least they offer options that don’t require forwarding all ports like Nintendo does.

It’s the kind of thing I’m surprised I haven’t heard other security professionals rant about, honestly. It’s certainly going to make me think twice about buying any product from Nintendo in the future.

Edited to add on 10 Aug 2015: Links to why UPnP is bad, as well as explanations of why the other options Nintendo offers aren’t much better.

All The Security Tools In The World Won’t Help If You Don’t Do This

In my travels as a Security Architect for Check Point Software Technologies, I have seen many different customer environments. Granted, there is only so much I can see over the course of a couple of days, but I’ve seen enough to get a sense for where companies are generally at.

Based on the number of security incidents in the news lately, as well as the number of engagements Check Point’s Incident Response team is taking on, I’d say that companies generally have a long way to go in securing all the things. And I’ve got a good sense for why.

No, it’s not because customers don’t have Check Point’s latest security technologies, which the third-party tester NSS Labs continues to rate as “Recommended,” as you can see in Check Point’s results in the 2015 NSS Labs Breach Detection Systems test. You can have all the security products in the world and an unlimited budget to buy them. They won’t do you a bit of good if you don’t do this one thing.

What is this thing? It’s something you can implement with the security products and staff you already have today. It’s hard work, no doubt, depending on the point you’re starting from, but it will pay dividends down the road, regardless of the evolving threat landscape.

I’m talking, of course, about segmentation. What do I mean by that? In response to Episode 126 of the Defensive Security Podcast, I crafted this tweet, which I think encapsulates the idea:

Segmentation is really about ensuring least privilege at all OSI layers.

By least privilege, I mean the minimum privilege or access required to do a particular task and no more. By OSI layers, of course, I am referring to the OSI model that we all know and love.

To see how to apply this, let’s evaluate two hosts, A and B, and the “privileges” that A needs to initiate a communication to B. You could potentially do this with security zones as well, but if you’re doing this exercise properly, you also need to do it for items within the same security zone.

Let’s assume A should never talk to B under any circumstances. The best way to prevent this is to ensure there is no path between them at Layer 1 of the OSI model, physical or wireless. By that, I mean no direct or indirect connection, whether over a physical medium like Ethernet or a wireless medium like Bluetooth or 802.11. That isn’t to say a really determined bad guy can’t somehow bridge the gap another way, but this automatically makes getting data between A and B much more difficult.

Let’s assume there is some potential connectivity between A and B. In almost all environments, that’s a certainty. The next layer up is the Data Link Layer, Layer 2. If Hosts A and B are on different network segments, this layer may not apply so much. However, it can apply for hosts on the same subnet, particularly on the wireless side. If there is no reason two hosts on a given subnet should ever need to talk to each other, and there is a way to prevent it at the switch or wireless access point, do it.

Moving up the stack, we reach Layers 3 and 4: IP addresses and ports. What functions does a user on Host A need to access on Host B? What TCP/UDP ports are required for that communication? Do you really need all ports when, say, TCP port 443 will do? Those needs should be enumerated and explicitly permitted, with access from all other hosts/ports denied.

Unlike Layers 1 and 2, where the ability to enforce a policy is pretty limited, here there are a few places this policy can be enforced:

  • A host-based firewall on Host A
  • A host-based firewall on Host B
  • A firewall located in the network path between Host A and Host B

This sort of policy can be implemented with nearly any firewall that’s been deployed in the last two decades. It can (and should) be enforced in multiple places as well.
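
As a toy illustration of the principle (not any vendor’s rulebase format), a least-privilege Layer 3/4 policy reduces to an explicit allow list with a default deny. The addresses and ports below are made up:

    # Illustrative only: explicit allows, everything else denied.
    ALLOW = {
        # (src_ip, dst_ip, protocol, dst_port)
        ("10.1.1.10", "10.2.2.20", "tcp", 443),  # Host A -> Host B, HTTPS only
    }

    def is_permitted(src_ip, dst_ip, proto, dst_port):
        """Anything not explicitly enumerated is denied by default."""
        return (src_ip, dst_ip, proto, dst_port) in ALLOW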

In current generation firewalls, these sorts of access policies can be augmented by incorporating identities associated with given IPs, e.g. from Active Directory, RADIUS, IF-MAP, or other sources. As the user logs into these identity sources for other reasons, an IP/user/group mapping is created on the firewall. The firewall can use this information to provide further granularity on the permitted traffic. For example, if someone from marketing sitting on the user segment wants to access a finance server, they will be denied, whereas someone from finance on that same segment will have no issue. This can happen regardless of the specific IP and doesn’t even require any application-specific intelligence since identities are acquired out-of-band.
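
Conceptually, the identity layer just adds one more lookup before the rule match. A toy sketch, with made-up mappings:

    # The firewall's out-of-band IP -> group mapping, learned from identity
    # sources such as AD logon events. All names here are illustrative.
    IP_TO_GROUPS = {
        "10.1.5.23": {"marketing"},
        "10.1.5.24": {"finance"},
    }

    def may_reach_finance_server(src_ip):
        """Permit the finance server only to the finance group, regardless of
        which IP on the user segment the request comes from."""
        return "finance" in IP_TO_GROUPS.get(src_ip, set())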

Once we get to Layer 7, things get a lot more complicated. What applications are we allowing Host A to use to reach Host B? How can I differentiate access to http://intranet.example.com from access to http://phonebook.example.com when these websites are hosted on the same server at the same IP? Or what happens when I don’t know where Host B actually is, because it’s in the cloud? This is where current so-called “next generation” firewalls can help, as they can easily allow access to things based on more than just an IP address.
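
To see why Layer 3/4 alone can’t make that distinction, note that both sites share one IP and port; an application-aware device has to key off something like the HTTP Host header or TLS SNI instead. A bare-bones sketch of the idea:

    def virtual_host(http_request_head: bytes) -> str:
        """Pick out the HTTP Host header, since destination IP and port are
        identical for both sites."""
        for line in http_request_head.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                return line.split(b":", 1)[1].strip().decode()
        return "unknown"

    virtual_host(b"GET / HTTP/1.1\r\nHost: intranet.example.com\r\n\r\n")
    # -> 'intranet.example.com'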

One thing to be aware of with application-aware firewalls is that some traffic needs to be allowed in order to identify the application being used. If the packets allowed in happen to be malicious, that could be problematic. This means you still need a strict IP/port based policy in addition to one that is application specific. Even Palo Alto Networks, whose marketing has historically said IPs and ports don’t matter, begrudgingly admits this point when pressed:

We still recommend that you use the following security policy best-practices:

  • For applications that you are enabling, you should assign a specific port (default or custom).
  • For applications that you explicitly want to block, expand the policy to any port, to maximize the identification footprint.

Network-based application-aware firewalls are one way to control application usage. Another is to implement Application Whitelisting on end-user machines. By limiting what applications can run, you reduce the risk that something your users find on the Internet suddenly decides to make friends with all your machines. This is exceptionally difficult to manage, which is why it is rarely seen in most environments.

For the applications that are allowed, there are controls on the server side as well, hopefully. Does the application have granular controls in place to limit the scope of what authenticated users are authorized to do? Are you actually using those? Are those critical applications being patched by their manufacturer for critical security bugs? Are those patches being deployed in a timely manner?

Active Directory is also an important thing to discuss. It has a pretty extensive permissions model. Are you actually using it? In other words, are your users–and administrators–configured with only the bare minimum permissions they need to perform their authorized functions? Are users using named accounts or generic accounts like “guest” or “administrator”?

There is one other type of segmentation control to be considered, and that’s around data that exists outside applications, such as Microsoft Office documents. Data can be protected with encryption, using technologies like full disk encryption, media encryption (USB sticks, etc), and document security (think DRM). You can prevent or track the movement of data in your enterprise (encrypted or otherwise) using a “port protection” product that restricts your ability to use USB drives, or an in-line network-based data loss prevention tool.

I could go on and on. These controls are not particularly groundbreaking. Most of them have been around for years, yet in company after company, I observe one or more of these controls not being used to their full potential. Why they aren’t comes down to three basic reasons:

  • Lack of knowledge about the controls
  • A desire for convenience over security (includes not wanting to manage said controls)
  • No budget

Going from a flat network to a segmented one is by no means easy. It will require a lot of planning and work to implement successfully. The good news is you can get started on the basics using the products you have today, i.e. without spending money. Over time, you can work your way up the OSI model, adding more sophisticated controls, like those new-fangled Breach Detection Systems that NSS Labs recently reviewed.

Even as the threat landscape and security tools evolve, a properly segmented environment built around least privilege principles will always be a good and necessary first line of defense.

Disclaimer: These are my own thoughts. If you’re interested in what Check Point has to say on the matter, see the Software-Defined Protection framework.

When talking with Check Point customers, a common request I hear is for the ability to “decrypt” SSH traffic, see inside of said traffic, and make security decisions based on what it finds, including blocking tunneling. In my last post, I explain how the SSH Inspection feature provided by a couple of Check Point’s competitors actually works and why you might want to think twice about using it. If you don’t believe my analysis, check out the following video, which shows how these SSH decryption features can be used against you:

And in case you’re thinking I’m just picking on Palo Alto Networks, Fortinet’s SSH decryption has the same flaw:

There is clearly a need to control what happens with SSH connections. When used by the general user population, SSH is similar to anonymizers like Tor or UltraSurf, which hide the true intent of the encrypted traffic and serve no business purpose. How should you handle this requirement without compromising the security of SSH the way SSH Inspection does?

My general stance on allowing SSH still stands: SSH should only be permitted for specific individuals to specific hosts with a clear business need for this kind of access. That said, let’s assume you want to do more than just allow or block SSH access: you want to prevent tunneling and log what your users do over SSH, without compromising the security of SSH in the process.

Many years ago, when I worked for Nokia, whenever I was sitting in the office and wanted to SSH out to the Internet, I had to go through an explicit SSH proxy server that operated on the following principle:

  1. You SSHed explicitly to the proxy
  2. You were prompted by the proxy for the desired destination IP and username to use on the remote server
  3. You logged into the remote SSH server

It was not possible to use port forwarding in this configuration. You also got a visual indication if the remote host key changed, as the proxy kept track of host keys the same way a regular SSH client would. This still didn’t allow authentication with an RSA key, but it was still an improvement.

Nokia’s SSH proxy was home-grown, but you could do something similar with a jump server that authorized individuals use to SSH to other hosts, with full ability to use RSA keys and verify a remote server’s host key. You can prevent port forwarding and the like by disabling tunneling in the SSH daemon on the jump server, as shown below. You could also set up whatever type of logging is necessary on the jump server to meet your requirements. The servers reachable from the jump server would be allowed or blocked by your security gateway, and additional logging should be set up on the permitted servers as well.
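
For example, on an OpenSSH-based jump server, tunneling and forwarding can be shut off with standard sshd_config directives like the following (verify the exact options against your version’s man page):

    # /etc/ssh/sshd_config on the jump server
    AllowTcpForwarding no    # no -L/-R port forwards through the jump server
    X11Forwarding no         # no X11 tunneling
    PermitTunnel no          # no tun device forwarding
    GatewayPorts no          # no remote-side listeners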

If setting up a jump server is too much work, you still want to allow users to SSH to arbitrary hosts, and you’re just concerned that people might use SSH for things other than an interactive session (e.g. tunneling), one way to limit this is to apply QoS to all SSH connections. You would have to choose a limit that provides acceptable performance for interactive terminal sessions but makes tunneling anything else very slow. On a Check Point Security Gateway, you can achieve this in the Application Control rulebase using the “limit” Action.
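
The mechanism behind such a limit is essentially a token bucket. A rough sketch of the concept follows; the rates are arbitrary examples, and this is not how Check Point implements it:

    import time

    class TokenBucket:
        """Cap throughput so interactive typing is unaffected but bulk
        transfers and tunnels crawl. Rates here are arbitrary examples."""
        def __init__(self, rate_bytes_per_sec=8000, burst_bytes=16000):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            # Refill tokens at the configured rate, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False  # over the limit: queue or drop the packet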

At the end of the day, it’s important to understand what your actual requirements are with respect to controlling and logging SSH traffic so the right controls can be put into place. The methods described herein balance security and usability far better than the SSH Inspection features some security gateways provide.

Disclaimer: While I did get some link love from Check Point, whom I work for, the thoughts herein are my own.

Edited to add video embeds and link to Check Point blog post on 13 Aug 2015

SSH is a wonderful tool for accessing remote systems via a CLI. It is encrypted; if set up properly, I can verify I am talking to the correct server; and I can tunnel all kinds of stuff over it. If you’re so inclined, you can even use an SSH tunnel as a SOCKS proxy.
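
For the unfamiliar, the tunneling features in question are built into the standard OpenSSH client (hostnames below are placeholders):

    ssh -L 8080:intranet.example.com:80 user@gateway.example.com   # local port forward
    ssh -D 1080 user@server.example.com                            # SOCKS proxy on localhost:1080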

And therein lies the problem. SSH represents a potential way to bypass security controls, in much the same way as HTTPS. To mitigate this threat, security gateways can man-in-the-middle HTTPS and SSH to “see” inside the traffic and make further security decisions on it.

Fortinet has a feature called SSH Inspection that performs this man-in-the-middle on SSH. Palo Alto Networks calls their similar feature SSH Decryption. Throughout this post, I am going to refer to the general technology as SSH Inspection but my comments apply to both implementations.

Conceptually, SSH and HTTPS are man-in-the-middled in similar fashions even though the underlying protocols are very different. While SSH Inspection provides more visibility and control, there are some tradeoffs you should be aware of.

First, a brief explanation of what happens to web traffic when HTTPS is man-in-the-middled by a security gateway. For most web(-looking) traffic, provided you can easily distribute a new Certificate Authority to client PCs, end users will be none the wiser that their HTTPS is being inspected, unless they check the “lock” in their browser to see which Certificate Authority signed the certificate of the remote server.

The security gateway ensures the connection between it and the destination site hasn’t been tampered with by validating the server certificate the same way a regular web browser would. The security gateway can even be configured to disallow connections to sites where this validation cannot take place. This gives a high degree of confidence that even though the security gateway is inspecting the connection, you are ultimately speaking with the legitimate destination server.

There are problems with this approach. If a specific application or site utilizes Certificate Pinning, the security gateway-generated SSL certificate will be rejected because it is signed by a different Certificate Authority. Likewise, when client certificates are used for authentication, the process will fail through a gateway performing SSL Inspection via man-in-the-middle techniques, as there is no way for the security gateway to perform client authentication on behalf of the user.

The good news is that the applications and websites affected by these limitations are few and far between. Specific exceptions for these applications and sites can be included in the relevant SSL Inspection policy, and those applications will work as before.

When SSH traffic is similarly inspected man-in-the-middle style, the exact same issues come up for the exact same reasons, but with far greater impact. This is because, unlike HTTPS, where client authentication is rare, mutual authentication of client and server is the norm with SSH.

When SSH Inspection is in use, end users are limited to password-based authentication for SSH connections as it is not possible for the security gateway to send the end user’s SSH key. Furthermore, you as an end user have no easy way to verify you are connecting to the correct host as the security gateway presents its own generated SSH host key instead of that of the remote server.

Why is that an issue? SSL/TLS utilizes certificate authorities, meaning some third party vouches for the veracity of a given certificate presented. In the case of man-in-the-middle for SSL, the firewall is, in essence, vouching for the veracity of the certificate as it is signing the certificate, and we trust the firewall, right?

SSH does not use certificate authorities, meaning that if the connection is being man-in-the-middled, we have no idea who is man-in-the-middling the connection, we just know the SSH host key changed. This might be ok if the firewall actually verifies the veracity of the SSH host key presented.
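
For contrast, here is what an ordinary SSH client does with host keys, sketched with the Python paramiko library (paths and hostnames are placeholders): it remembers the key it has seen before and refuses to connect if that key changes. This is exactly the check an inspecting gateway would need to perform on the user’s behalf.

    import paramiko

    client = paramiko.SSHClient()
    client.load_host_keys("/home/user/.ssh/known_hosts")         # keys seen before
    client.set_missing_host_key_policy(paramiko.RejectPolicy())  # unknown host -> refuse
    # paramiko raises BadHostKeyException if the server's key no longer
    # matches the one recorded in known_hosts -- the very warning that
    # SSH Inspection hides from the user.
    client.connect("server.example.com", username="user")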

From what I can tell, neither Palo Alto Networks nor Fortinet (two companies that have SSH Inspection features) provides a way to verify that a particular SSH host key hasn’t changed. As SSH does not make use of certificate authorities, there is no way to automatically verify that a key was changed legitimately. A human could do the verification if they could see the new host key coming from the remote SSH server, but with SSH Inspection enabled, the user will never see that key, which leads to a potentially interesting attack vector:

And in case you’re thinking I’m just picking on Palo Alto Networks, Fortinet’s SSH decryption has the same flaw:

To summarize, when we man-in-the-middle SSH, we can only use password authentication and we lose the ability to verify we are connecting to the desired server. And what security benefits do we get from this degradation of SSH security? The ability to:

  • Prevent port forwarding (including X11 forwarding)
  • Block the use of ssh “commands” or “shells”
  • Possibly prevent file transfers provided they are done in a way the security gateway knows about (and there are many, many ways to do this over the CLI)
  • Possibly log commands issued over SSH

Personally, I don’t see the value of this SSH Inspection feature, as it degrades SSH security unacceptably and provides little benefit beyond what you would get by blocking SSH entirely except for specific, trusted individuals accessing specific, known servers. If that is not feasible, I provide some suggestions for inspecting SSH connections securely in another blog post.

Disclaimer: I’m not sure what Check Point Software Technologies thinks about this issue as I didn’t ask. These thoughts are my own.

Edited to add video embed on 13 Aug 2015