There have been a few videos produced that show various ways to bypass Palo Alto Networks firewalls. This is the latest, complete with a configuration file and a pastebin log from the Evader tool showing the various exploits that were triggered:

I don’t know enough to evaluate the claim made in this video that these flaws are fundamental to the architecture of the Palo Alto Networks gateways. I do know that Palo Alto Networks disputed this video privately, and that a follow-up response was recorded showing the same issues as before. If the video is factually incorrect, why hasn’t Palo Alto Networks posted a public, formal response via their website, YouTube, or social media? Make of their silence what you will, but when challenged on similar issues in the past, they denied them at first and recanted later.

I wonder: how do organizations that purchase this product decide it meets their needs? Are they doing a true evaluation, pitting a number of security tools against a set of objectively-defined criteria, or did a decision maker somewhere get wowed by the marketing and buy without a serious evaluation?

Based on many of the requests for proposals and proofs of concept I’ve been involved with, it seems to be the latter a lot more often than the former.

Check Point CEO Gil Shwed said during the Q3 2015 earnings call: “We should work harder to expose the difference between marketing hype and technology that actually works.”

The best way to protect yourself from the marketing hype is to understand what your actual security needs are, define objective evaluation criteria, and put the tools through their paces to see which one is best for you.

Disclaimer: I work for Check Point Software Technologies, which is a competitor of Palo Alto Networks. The views herein are my own.

It’s very easy to get discouraged in the information security business. Every piece of software, every software as a service we use is potentially vulnerable to security threats: some known, many likely not known. When these threats are exploited–it’s no longer a question of if–data and reputation loss are likely results. Even if you’ve secured the central repositories of this data, the client devices that access that data, perhaps even storing that data, have their own vulnerabilities and threats. When you sprinkle in configuration errors that are all too prevalent and permit more access to resources than absolutely required, it’s easy to come to the conclusion that the game is over, the jig is up, we’re compromised, and we’re done.

The worst of all this is: you most likely don’t even know what resources you have. Even when you do, you probably don’t have much say in who can access which resources and how. And when you try to bring this to the executives’ attention to get more resources to address the issues, they don’t see the value.

Over the last couple of years, I’ve been working with Check Point customers to understand their specific situations and come up with a long-term game plan. As part of that process, I try to find out what’s truly important at the business level. This means talking not to the technical people but to the business leaders. This helps provide some clarity on which of the thousands of potential security issues out there need the greatest focus.

It’s also important to enumerate what’s in the environment, starting with the critical assets. Where are they? Who accesses them? What security controls are in place to ensure only authorized persons can access those resources in a non-malicious way? A logical network diagram showing where everything is and understanding the various traffic flows is very helpful in figuring this out.

The presence of controls in the environment is one thing. Are they configured per the principle of least privilege? Are those controls logging? Are you actually reading those logs and/or using a proper Security Information and Event Management (SIEM) product to help contextualize what’s happening? Are you acting on the information these tools are giving you? If a serious breach does occur, do you have a plan in place?

I’m sure there are a lot more questions I could ask (and sometimes do, depending on the customer). However, there is only so much information I can gather over the course of two or three days. I then take this information and write a report with recommendations. These reports can be somewhat long, depending on the customer.

What I’ve also started doing, which I believe is more valuable, is summarizing all the relevant information in a spreadsheet. It’s designed to be executive friendly, showing the issues, relative risks (with color codes), recommendations, cost to improve, and so on. It’s by no means perfect, but the goal is to bring a bit of order to the chaos–showing a potential plan to move forward and a framework you can use to re-evaluate the situation in the future.

The question I ask of my fellow information security professionals: how are you helping your organization bring order to the chaos of Information Security? Are you just reacting to events as they occur–something that is unavoidable–or do you have a long-term strategy in place that you are actively implementing?

From Exclusive: The OPM breach details you haven’t seen:

According to the timeline, OPM officials did not know they had a problem until April 15, 2015, when the agency discovered “anomalous SSL traffic with [a] decryption tool” implemented in December 2014. OPM then notified DHS’ U.S. Computer Emergency Readiness Team, and a forensic investigation began.

The discovery of a threat to the background investigation data led to the finding two days later, on April 17, of a risk to the personnel records. US-CERT made the discovery by loading data on the April 15 incident to Einstein, the department’s intrusion-detection system. On April 23, US-CERT spotted signs of the Dec. 15 exfiltration in “historical netflow data,” and OPM decided that a major incident had occurred that required notifying Congress.

SSL/TLS traffic is pretty common. However, to most security tools, this traffic is opaque. Unlike SSH, which is impossible to inspect securely, SSL/TLS can be inspected inline safely in a way that, for the most part, maintains the end-to-end security of the communications.

For this SSL/TLS traffic to be inspected, inline security gateways must examine the data as clear text. Encrypted data sent by a client to a web server is:

  1. Intercepted by the security gateway and decrypted, effectively terminating the SSL/TLS connection initiated by the client.
  2. Inspected by the relevant security functions.
  3. Encrypted again and sent to the designated web server, initiating a new SSL/TLS connection in the process.

When the security gateway terminates the connection, a certificate must be presented to the client. If the web server in question is protected by the security gateway, it can be configured with the private key of the website in question so it basically “pretends” to be the site. For a random site on the Internet, that’s obviously not feasible, so the gateway generates a certificate on the fly and signs it with a preconfigured certificate authority the client PCs have been configured to trust. This ensures at least the client-to-firewall connection hasn’t been compromised.
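To make the “generate a certificate on the fly” step concrete, here is a minimal sketch using Python’s cryptography library. This is an illustration, not any vendor’s actual implementation; the hostname and the ca.crt/ca.key file names are placeholders for a pre-created inspection CA that your client PCs trust.

```python
# Minimal sketch of on-the-fly certificate generation for SSL/TLS inspection.
# Assumes a pre-created inspection CA (ca.crt / ca.key) that client PCs trust.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def forge_certificate(hostname, ca_cert, ca_key):
    """Generate a leaf certificate for `hostname`, signed by the inspection CA."""
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)                        # issued by the inspection CA
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))   # short-lived on purpose
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False
        )
        .sign(ca_key, hashes.SHA256())
    )
    return cert, leaf_key


# Example usage (file names are hypothetical):
with open("ca.crt", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("ca.key", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

cert, key = forge_certificate("www.example.com", ca_cert, ca_key)
print(cert.subject)
```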

Once the connection is terminated on the security gateway, the traffic can be inspected or filtered just like regular plaintext traffic. The options available will depend on the vendor you have for your security gateway. On current (R77.x) versions of Check Point, you have the following security features at your disposal (assuming you are licensed for them): Data Loss Prevention (DLP), Anti Virus, Anti-Bot, Application Control, URL Filtering, Threat Emulation and Intrusion Prevention.

The firewall then initiates a connection as if it were the client to the server in question. The firewall validates that the remote server presents a valid certificate for the site in question. If the server uses a self-signed certificate or some sort of certificate authority the firewall isn’t familiar with, you will have to add it to the configuration as trusted. This step ensures the remote end of the connection is valid and trusted–something that SSH inspection is unable to do.
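Putting those steps together, here is a toy sketch of the per-connection flow on an inspecting gateway: terminate the client’s TLS session with the forged certificate, inspect the plaintext, then open a new, validated TLS connection to the real server. Again, this is an illustration under stated assumptions, not how any particular product works; the file names and the looks_malicious() check are placeholders.

```python
# Toy sketch of an inspecting gateway's per-connection flow.
# Assumes a per-hostname forged certificate (e.g. from the previous sketch)
# and a CA bundle that includes any private CAs you choose to trust.
import socket
import ssl


def looks_malicious(data: bytes) -> bool:
    return b"EICAR" in data               # placeholder for real IPS/AV/DLP inspection


def handle_client(client_sock, hostname, port=443):
    # 1. Terminate the client's TLS session using the certificate forged for `hostname`.
    inbound_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    inbound_ctx.load_cert_chain("forged-www.example.com.pem")   # forged cert + key
    client_tls = inbound_ctx.wrap_socket(client_sock, server_side=True)

    # 2. Open a new TLS connection to the real server and validate its certificate
    #    against the CAs we trust (add self-signed/internal CAs to the bundle if needed).
    outbound_ctx = ssl.create_default_context(cafile="ca_bundle.pem")
    upstream_tls = outbound_ctx.wrap_socket(
        socket.create_connection((hostname, port)), server_hostname=hostname
    )

    # 3. Relay data between the two connections, inspecting the plaintext in between.
    request = client_tls.recv(65535)
    if looks_malicious(request):
        client_tls.close()
        return
    upstream_tls.sendall(request)
    client_tls.sendall(upstream_tls.recv(65535))
```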

There are several issues with SSL decryption to be aware of:

  1. There may be privacy and legal regulations on the use of this feature depending on the country in which you are located. Please review your local laws and regulations.
  2. If client certificates are needed for authentication, SSL decryption cannot be used. This makes sense, since the firewall won’t have these certificates.
  3. Applications or sites that use certificate pinning will fail, since the certificate authority for these sites will differ once SSL decryption is enabled.
  4. When you visit a site with an extended validation (EV) certificate and SSL decryption is happening, you will not see an EV cert. This is because it is not possible to generate an EV cert with a typical certificate authority key.
  5. You cannot do SSL decryption on a network the general public uses. This is because you need to be able to distribute a certificate authority key to your end users. One cannot get a publicly-trusted root CA key for this purpose as certificate authorities do not allow this practice. Certificate authorities used for this purpose get revoked pretty quickly, as happened to a government agency in Turkey.
  6. Decrypting SSL/TLS traffic will have a performance impact. That said, it’s generally not an “all or nothing” thing. You can choose to be selective as to which traffic gets decrypted and which does not.

Bottom line: understanding what hides inside your SSL/TLS traffic is critical to finding and eliminating network-based threats. A number of security gateway vendors, including Check Point, offer this feature, and it should be leveraged where possible. More information about Check Point’s implementation of this feature can be found in sk65123 in SecureKnowledge.

In order to allow your Nintendo Wii-U to participate in multiplayer online games, you have to configure your router/firewall/whatever in one of three ways per Nintendo:

  • Enable Universal Plug and Play (UPnP) on your router, which is widely known to be insecure.
  • Assign your Wii-U a static IP and use DMZ mode, which opens your console to everyone on the Internet.
  • Assign your Wii-U a static IP and forward all UDP ports (1-65535) to your Wii-U. Mapping a couple of ports isn’t unusual, but mapping this many ports is basically the same thing as DMZ mode, and thus no better.

None of these options are acceptable from a security point of view. Granted, I am reasonably security savvy and can mitigate the unnecessary risks involved with such a configuration. However, most consumers cannot and will not understand the risk they are undertaking.

While I’m not exactly fond of a CyberUL, if there was such a thing, this kind of configuration would clearly be non-compliant.

Of course, neither Microsoft nor Sony is much better, as they push DMZ mode and UPnP as primary solutions, but at least they offer solutions that don’t require forwarding all ports like Nintendo does.

It’s the kind of thing I’m surprised I haven’t heard other security professionals rant about, honestly. It’s certainly going to make me think twice about buying any product from Nintendo in the future.

Edited to add on 10 Aug 2015: Links to why UPnP is bad, as well as explanations of why the other options Nintendo offers aren’t much better.

All The Security Tools In The World Won't Help If You Don't Do This

In my travels as a Security Architect for Check Point Software Technologies, I have seen many different customer environments. Granted, there is only so much I can see over the course of a couple of days, but I’ve seen enough to get a sense for where companies are generally at.

Based on the number of security incidents in the news lately, as well as the number of engagements Check Point’s Incident Response team is taking on, I’d say that companies generally have a long way to go in securing all the things. And I’ve got a good sense for why.

No, it’s not because customers don’t have Check Point’s latest security technologies, which third-party tester NSS Labs continues to rate as “Recommended,” as you can see from Check Point’s results in the 2015 NSS Labs Breach Detection Systems test. You can have all the security products in the world and an unlimited budget to buy them. They won’t do you a bit of good if you don’t do this one thing.

What is this thing? It’s something you can implement with the security products and staff you already have today. It’s hard work, no doubt, depending on the point you’re starting from, but it will pay dividends down the road, regardless of the evolving threat landscape.

I’m talking, of course, about segmentation. What do I mean by that? In response to Episode 126 of the Defensive Security Podcast, I crafted this tweet, which I think encapsulates the idea:

Segmentation is really about ensuring least privilege at all OSI layers.

By least privilege, I mean the minimum privilege or access required to do a particular task and no more. By OSI layers, of course, I am referring to the OSI model that we all know and love.

To see how to apply this, let’s evaluate two hosts, A and B, and the “privileges” that A needs to initiate a communication to B. You could potentially do this with security zones as well, but if you’re doing this exercise properly, you also need to do it for items within the same security zone.

Let’s assume A should never talk to B under any circumstances. The best way to prevent this from happening is to make sure there is no physical or wireless path through which that can happen, i.e. Layer 1 of the OSI model. And by that, I mean no direct or indirect connection, either using a physical medium like Ethernet, or a wireless medium like Bluetooth or 802.11. That isn’t to say a really determined bad guy can’t somehow bridge the gap another way, but this automatically makes getting data between A and B much more difficult.

Let’s assume there is some potential connectivity between A and B. In almost all environments, that’s a certainty. The next layer up is the Data Link Layer, Layer 2. If Hosts A and B are on different network segments, this layer may not apply so much. However, it can apply for hosts on the same subnet, particularly on the wireless side. If there is no reason two hosts on a given subnet should ever need to talk to each other, and there is a way to prevent it at the switch or wireless access point, do it.

Moving up the stack, we reach Layers 3 and 4, otherwise known as IP addresses and ports. What functions does a user on Host A need to access on Host B? What TCP/UDP ports are required for that communication? Do you really need all ports when, say, TCP port 443 will do? Those needs should be enumerated and explicitly permitted, with access from all other hosts/ports denied, as in the sketch below.
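As a trivial illustration of “enumerate and explicitly permit, deny everything else,” here is what a Layer 3/4 rule base looks like expressed in Python. The networks and ports are made up; the point is the default-deny at the end.

```python
# Minimal sketch of a default-deny Layer 3/4 policy: only enumerated flows are allowed.
from ipaddress import ip_address, ip_network

# Explicitly permitted flows: (source network, destination network, protocol, dest port)
ALLOW_RULES = [
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.10/32"), "tcp", 443),  # users -> intranet HTTPS
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.53/32"), "udp", 53),   # users -> internal DNS
]


def is_allowed(src, dst, proto, dport):
    """Return True only if the flow matches an explicit allow rule (default deny)."""
    for src_net, dst_net, rule_proto, rule_port in ALLOW_RULES:
        if (ip_address(src) in src_net and ip_address(dst) in dst_net
                and proto == rule_proto and dport == rule_port):
            return True
    return False  # anything not explicitly permitted is denied


print(is_allowed("10.1.0.15", "10.2.0.10", "tcp", 443))  # True: enumerated
print(is_allowed("10.1.0.15", "10.2.0.10", "tcp", 22))   # False: not enumerated
```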

Unlike in Layers 1 and 2, where the ability to enforce a policy is pretty limited, there are a few places this sort of policy can be enforced:

  • A host-based firewall on Host A
  • A host-based firewall on Host B
  • A firewall located in the network path between Host A and Host B

This sort of policy can be implemented with nearly any firewall that’s been deployed in the last two decades. It can (and should) be enforced in multiple places as well.

In current generation firewalls, these sorts of access policies can be augmented by incorporating identities associated with given IPs, e.g. from Active Directory, RADIUS, IF-MAP, or other sources. As the user logs into these identity sources for other reasons, an IP/user/group mapping is created on the firewall. The firewall can use this information to provide further granularity on the permitted traffic. For example, if someone from marketing sitting on the user segment wants to access a finance server, they will be denied, whereas someone from finance on that same segment will have no issue. This can happen regardless of the specific IP and doesn’t even require any application-specific intelligence since identities are acquired out-of-band.
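Extending the earlier rule-base sketch, an identity-aware policy simply consults an out-of-band IP-to-user mapping before matching rules. The mappings, group names, and server name below are made up for illustration.

```python
# Sketch: identities acquired out-of-band (e.g. from AD logon events) mapped to IPs.
IP_TO_USER = {"10.1.0.15": "alice", "10.1.0.16": "bob"}   # learned dynamically, not configured
USER_TO_GROUP = {"alice": "finance", "bob": "marketing"}

# Rules can now reference groups instead of (or in addition to) IP addresses.
GROUP_RULES = [("finance", "finance-server", 443)]


def is_allowed_by_identity(src_ip, dst_name, dport):
    user = IP_TO_USER.get(src_ip)
    group = USER_TO_GROUP.get(user)
    return (group, dst_name, dport) in GROUP_RULES


print(is_allowed_by_identity("10.1.0.15", "finance-server", 443))  # True: alice is in finance
print(is_allowed_by_identity("10.1.0.16", "finance-server", 443))  # False: bob is in marketing
```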

Once we get to Layer 7, things get a lot more complicated. What applications are we allowing Host A to use to Host B? How can I differentiate access to http://intranet.example.com from access to http://phonebook.example.com when these websites are hosted on the same server at the same IP? Or what happens when I don’t know where Host B actually is, because it’s in the cloud? This is where current so-called “next generation” firewalls can help, as they can easily allow access to things based on more than just an IP address.
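To see why an IP/port rule cannot make that distinction, the sketch below keys the decision on the requested hostname (taken from the HTTP Host header or TLS SNI) rather than on the destination IP. The hostnames are the hypothetical ones from the paragraph above, and the shared IP is invented.

```python
# Sketch: two sites share one server IP; only the requested hostname tells them apart.
SHARED_IP = "10.2.0.20"

# Layer 7 policy keyed on hostname (e.g. from the HTTP Host header or TLS SNI).
HOSTNAME_RULES = {
    "intranet.example.com": "allow",
    "phonebook.example.com": "deny",
}


def layer7_decision(dst_ip, hostname):
    # An IP/port rule sees only SHARED_IP:80 for both sites and cannot differentiate them.
    return HOSTNAME_RULES.get(hostname, "deny")


print(layer7_decision(SHARED_IP, "intranet.example.com"))   # allow
print(layer7_decision(SHARED_IP, "phonebook.example.com"))  # deny
```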

One thing to be aware of with application-aware firewalls is that some traffic needs to be allowed in order to identify the application being used. If the packets allowed in happen to be malicious, that could be problematic. This means you still need a strict IP/port based policy in addition to one that is application specific. Even Palo Alto Networks, whose marketing has historically said IPs and ports don’t matter, begrudgingly admits this point when pressed:

We still recommend that you use the following security policy best-practices:

  • For applications that you are enabling, you should assign a specific port (default or custom).
  • For applications that you explicitly want to block, expand the policy to any port, to maximize the identification footprint.

Network-based application-aware firewalls are one way to control application usage. Another is to implement Application Whitelisting on end user machines. By limiting what applications get run, you reduce the risk that something your users find on the Internet suddenly decides to make friends with all your machines. This is exceptionally difficult to manage, which is why it is rarely seen in most environments.

For the applications that are allowed, there are controls on the server side as well, hopefully. Does the application have granular controls in place to limit the scope of what authenticated users are authorized to do? Are you actually using those? Are those critical applications being patched by their manufacturer for critical security bugs? Are those patches being deployed in a timely manner?

Active Directory is also an important thing to discuss. It has a pretty extensive permissions model. Are you actually using it? In other words, are your users–and administrators–configured with only the bare minimum permissions they need to perform their authorized functions? Are users using named accounts or generic accounts like “guest” or “administrator”?

There is one other type of segmentation control to be considered, and that’s around data that exists outside applications, such as Microsoft Office documents. Data can be protected with encryption, using technologies like full disk encryption, media encryption (USB sticks, etc), and document security (think DRM). You can prevent or track the movement of data in your enterprise (encrypted or otherwise) using a “port protection” product that restricts your ability to use USB drives, or an in-line network-based data loss prevention tool.

I could go on and on. These controls are not particularly groundbreaking. Most of them have been around for years, yet in company after company, I observe one or more of these controls not being used to their full potential. Why not usually comes down to three basic reasons:

  • Lack of knowledge about the controls
  • A desire for convenience over security (includes not wanting to manage said controls)
  • No budget

Going from a flat network to a segmented one is by no means easy. It will require a lot of planning and work to implement successfully. The good news is you can get started on the basics using the products you have today, i.e. without spending money. Over time, you can work your way up the OSI model, adding more sophisticated controls, like those new-fangled Breach Detection Systems that NSS Labs recently reviewed.

Even as the threat landscape and security tools evolve, a properly segmented environment built around least privilege principles will always be a good and necessary first line of defense.

Disclaimer: These are my own thoughts. If you’re interested in what Check Point has to say on the matter, see the Software-Defined Protection framework.