From Palo Alto Networks: Best Practices for Securing Your Network from Layer 4 and Layer 7 Evasions:

To monitor and protect your network from most Layer 4 and Layer 7 attacks, here are a few recommendations

While I’m all about Best Practices documentation, one wonders why some of the items in it are even necessary.

Block all unknown applications/traffic using security policy. Typically, the only applications that are classified as unknown traffic are internal or custom applications on your network, or potential threats.

Wouldn’t a better approach be to only permit the applications you want to allow rather than to block all unknown applications? Oh, wait, that’s not how that product was designed to work…

Create a zone protection profile that is configured to protect against packet-based attacks:

  • Remove TCP timestamps on SYN packets before the firewall forwards the packet […]
  • Drop malformed packets.
  • Drop mismatched and overlapping TCP segments.

A proper stateful firewall should do all of these things by default, shouldn’t it? Same with the half dozen other checks they say to enable via the CLI?

Doesn’t it seem strange to anyone else that you have to manually configure so many options to make the device more secure? What’s the likelihood customers have actually deployed their gateways in this fashion? Are they getting the performance they expect if they do so? My guess: probably not.
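To make the overlapping-segment item a bit more concrete: the evasion works because a middlebox and the end host can reassemble a TCP stream differently when two segments claim the same sequence range with conflicting bytes. Here is a minimal sketch, assuming Python with Scapy and purely hypothetical addresses and payloads; it skips the handshake entirely and only shows the overlap itself:

    # Two TCP segments claiming the same sequence range with different payloads
    # (hypothetical addresses; no handshake is performed, this only shows the overlap).
    from scapy.all import IP, TCP, Raw, send

    dst = "192.0.2.10"                 # hypothetical target
    sport, dport, seq = 40000, 80, 1000

    # Segment 1: the "innocent" copy of bytes seq..seq+27
    seg1 = (IP(dst=dst)
            / TCP(sport=sport, dport=dport, flags="PA", seq=seq)
            / Raw(load=b"GET /index.html HTTP/1.0\r\n\r\n"))

    # Segment 2: the same sequence range, re-sent with different content.
    # If the firewall keeps the first copy but the host keeps the second (or the
    # reverse), the inspected stream differs from what the server actually sees.
    seg2 = (IP(dst=dst)
            / TCP(sport=sport, dport=dport, flags="PA", seq=seq)
            / Raw(load=b"GET /evil1.cgi? HTTP/1.0\r\n\r\n"))

    send([seg1, seg2])                 # requires root/administrator privileges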

To add a little more fuel to the fire: Firestorm. This attack uses TCP SYN + Data packets to bypass certain firewalls. Per the article:

We disclosed the full details of the vulnerability to major vendors affected by the flaw. One of the vendors who replied, explained that they do not see this issue as a vulnerability because, by design, their firewall permits full TCP handshake in order to inspect the application type.

They said that once their state machine proceeded beyond the TCP handshake, they would recognize the application, matching a subsequent rule that applied to application traffic. The vendor added that if there was an application they did not recognize, they would treat the session as ‘unknown-TCP’ and, again, perform an additional security policy lookup to decide whether to allow or block the traffic.

Now that this information is out there, it is only a matter of time before someone develops a tool that can be used to bypass these security devices and, short of blocking all traffic, there’s nothing you can do to mitigate the threat “by design.” Oh wait, someone already has. (Note: There was a tool at https://github.com/stasvolf/SynTunnel but it has been removed)
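For what it’s worth, the packet shape the Firestorm write-up describes (a TCP SYN that already carries a data payload) is trivial to produce with any packet-crafting library. A minimal sketch, again assuming Python with Scapy and a hypothetical destination:

    # A TCP SYN carrying data before any handshake has completed (hypothetical address).
    # A gateway that defers classification until the handshake finishes may let the
    # payload riding on the SYN itself pass without application-level inspection.
    from scapy.all import IP, TCP, Raw, sr1

    pkt = (
        IP(dst="203.0.113.25")                    # hypothetical destination
        / TCP(sport=40001, dport=443, flags="S")  # SYN flag only
        / Raw(load=b"payload riding on the SYN")
    )

    reply = sr1(pkt, timeout=2)                   # send once, wait for a reply (root required)
    print(reply.summary() if reply else "no reply")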

It should be noted that Check Point Security Gateways are not susceptible to Firestorm.

Disclaimer: While I work for Check Point Software (a competitor to Palo Alto Networks), the above thoughts are my own.

There have been a few videos produced that show various ways to bypass Palo Alto Networks firewalls. This is the latest, complete with a configuration file and a pastebin log from the Evader tool showing the various exploits that were triggered:

I don’t know enough to evaluate the claim made in this video that these flaws are fundamental to the architecture of the Palo Alto Networks gateways. I do know that Palo Alto Networks disputed this video privately, and that a response to that dispute was recorded showing the same issues as before. If the video is factually incorrect, why hasn’t Palo Alto Networks posted a public, formal response via their website, YouTube, or social media? Make of the fact that they haven’t what you will, but when challenged on similar issues in the past, they first denied the problem and later recanted.

I wonder: how do organizations that purchase this product decide it meets their needs? Are they doing a true evaluation, pitting a number of security tools against a set of objectively defined criteria, or did a decision maker somewhere get wowed by the marketing and buy it without a serious evaluation?

Based on many of the requests for proposal and proofs of concept I’ve been involved with, it seems to be the latter far more often than the former.

Check Point CEO Gil Shwed said during the Q3 2015 earnings call: “We should work harder to expose the difference between marketing hype and technology that actually works.”

The best way to protect yourself from the marketing hype is to understand what your actual security needs are, define objective evaluation criteria, and put the tools through their paces to see which one is best for you.

Disclaimer: I work for Check Point Software Technologies, which is a competitor of Palo Alto Networks. The views herein are my own.

It’s very easy to get discouraged in the information security business. Every piece of software and every software-as-a-service we use is potentially vulnerable to security threats: some known, many likely not. When these threats are exploited (and it’s no longer a question of if), data and reputation loss are likely results. Even if you’ve secured the central repositories of this data, the client devices that access that data, and perhaps even store it, have their own vulnerabilities and threats. When you sprinkle in the configuration errors that are all too prevalent and permit more access to resources than absolutely required, it’s easy to conclude that the game is over, the jig is up, we’re compromised, and we’re done.

The worst of all this: you most likely don’t even know what resources you have. Even when you do, you probably don’t have much say in who can access which resources, or how. And when you try to bring this to the attention of the executives to get more resources to address the issues, they don’t see the value.

Over the last couple of years, I’ve been working with Check Point customers to understand their specific situations and come up with a long-term game plan. As part of that process, I try to find out what’s truly important at the business level. This means talking not to the technical people, but to the business leaders. That helps provide some clarity on which of the thousands of potential security issues out there need the greatest focus.

It’s also important to enumerate what’s in the environment, starting with the critical assets. Where are they? Who accesses them? What security controls are in place to ensure only authorized persons can access those resources in a non-malicious way? A logical network diagram showing where everything is and understanding the various traffic flows is very helpful in figuring this out.

The presence of controls in the environment is one thing. Are they configured per the principle of least privilege? Are those controls logging? Are you actually reading those logs and/or using a proper Security Information and Event Management (SIEM) product to help contextualize what’s happening? Are you acting on the information these tools are giving you? If a serious breach does occur, do you have a plan in place?

I’m sure there are a lot more questions I could ask (and sometimes do, depending on the customer). However, there is only so much information I can gather over the course of two or three days. I then take this information and write a report with recommendations. These reports can be somewhat long, depending on the customer.

What I’ve also started doing, which I believe is more valuable, is summarizing all the relevant information in a spreadsheet. It’s designed to be executive friendly, showing the issues, relative risks (with color codes), recommendations, cost to improve, and so on. It’s by no means perfect, but the goal is to bring a bit of order to the chaos–showing a potential plan to move forward and a framework you can use to re-evaluate the situation in the future.

The question I ask of my fellow information security professionals: how are you helping your organization bring order to the chaos of Information Security? Are you just reacting to events as they occur–something that is unavoidable–or do you have a long-term strategy in place that you are actively implementing?

From Exclusive: The OPM breach details you haven’t seen:

According to the timeline, OPM officials did not know they had a problem until April 15, 2015, when the agency discovered “anomalous SSL traffic with [a] decryption tool” implemented in December 2014. OPM then notified DHS’ U.S. Computer Emergency Readiness Team, and a forensic investigation began.

The discovery of a threat to the background investigation data led to the finding two days later, on April 17, of a risk to the personnel records. US-CERT made the discovery by loading data on the April 15 incident to Einstein, the department’s intrusion-detection system. On April 23, US-CERT spotted signs of the Dec. 15 exfiltration in “historical netflow data,” and OPM decided that a major incident had occurred that required notifying Congress.

SSL/TLS traffic is pretty common. However, to most security tools, this traffic is opaque. Unlike SSH, which is impossible to inspect securely, SSL/TLS can be inspected inline safely in a way that, for the most part, maintains the end-to-end security of the communications.

For this SSL/TLS traffic to be inspected, inline security gateways must examine the data as clear text. Encrypted data sent by a client to a web server is:

  1. Intercepted by the security gateway and decrypted, effectively terminating the SSL/TLS connection initiated by the client.
  2. Inspected by the relevant security functions.
  3. Encrypted again and sent to the designated web server, initiating a new SSL/TLS connection in the process.

When the security gateway terminates the connection, a certificate must be presented to the client. If the web server in question is protected by the security gateway, the gateway can be configured with the private key of the website in question so it basically “pretends” to be the site. For a random site on the Internet, that’s obviously not feasible, so the gateway generates a certificate on the fly and signs it with a preconfigured certificate authority the client PCs have been configured to trust. This ensures at least the client-to-firewall leg of the connection hasn’t been compromised.
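To make the “generate a certificate on the fly” step concrete, here is a rough sketch of the idea using Python’s cryptography library. This is not any vendor’s actual implementation; the function and variable names are mine, and it assumes you already have the inspection CA’s certificate and private key loaded (the CA your client PCs have been told to trust):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def mint_leaf_cert(hostname, ca_cert, ca_key):
        """Issue a short-lived certificate for `hostname`, signed by the inspection CA."""
        leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)              # issued by the CA the clients trust
            .public_key(leaf_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=7))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                           critical=False)
            .sign(ca_key, hashes.SHA256())
        )
        return cert, leaf_key

    # The gateway would present `cert` to the client and keep `leaf_key` to terminate
    # the client-side TLS session, one certificate per requested hostname.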

Once the connection is terminated on the security gateway, the traffic can be inspected or filtered just like regular plaintext traffic. The options available will depend on the vendor you have for your security gateway. On current (R77.x) versions of Check Point, you have the following security features at your disposal (assuming you are licensed for them): Data Loss Prevention (DLP), Anti Virus, Anti-Bot, Application Control, URL Filtering, Threat Emulation and Intrusion Prevention.

The firewall then initiates a connection to the server in question as if it were the client. The firewall validates that the remote server presents a valid certificate for the site in question. If the server uses a self-signed certificate or a certificate authority the firewall isn’t familiar with, you will have to add it to the configuration as trusted. This step ensures the remote end of the connection is valid and trusted, something that SSH inspection is unable to do.
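That firewall-to-server validation step is essentially what any well-behaved TLS client does. A small sketch of the idea in Python using the standard ssl module (hostname and CA file path are placeholders):

    import socket
    import ssl

    def check_server_cert(host, port=443, extra_ca_file=None):
        """Connect as a TLS client and verify the server's certificate chain and hostname."""
        ctx = ssl.create_default_context()   # system trust store, hostname checking enabled
        if extra_ca_file:
            # Where you would add a private or self-signed CA the gateway
            # "isn't familiar with", as described above.
            ctx.load_verify_locations(cafile=extra_ca_file)
        with socket.create_connection((host, port), timeout=5) as sock:
            # The handshake inside wrap_socket() fails if the chain or hostname is invalid.
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    # Example: check_server_cert("example.com")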

There are a couple of issues with SSL decryption:

  1. There may be privacy and legal regulations on the use of this feature depending on the country in which you are located. Please review your local laws and regulations.
  2. If client certificates are needed for authentication, SSL decryption cannot be used. This makes sense, since the firewall won’t have these certificates.
  3. Applications or sites that use certificate pinning will fail, since the certificate authority for these sites will differ once SSL decryption is enabled.
  4. When you visit a site with an extended validation (EV) certificate and SSL decryption is happening, you will not see an EV cert. This is because it is not possible to generate an EV cert with a typical certificate authority key.
  5. You cannot do SSL decryption on a network the general public uses. This is because you need to be able to distribute a certificate authority key to your end users. One cannot get a publicly-trusted root CA key for this purpose as certificate authorities do not allow this practice. Certificate authorities used for this purpose get revoked pretty quickly, as happened to a government agency in Turkey.
  6. Decrypting SSL/TLS traffic will have a performance impact. That said, it’s generally not an “all or nothing” thing. You can choose to be selective as to which traffic gets decrypted and which does not.

Bottom line: understanding what hides inside your SSL/TLS traffic is critical to finding and eliminating network-based threats. A number of security gateway vendors, including Check Point, offer this feature, and it should be leveraged where possible. More information about Check Point’s implementation of this feature can be found in sk65123 in SecureKnowledge.

In order to allow your Nintendo Wii-U to participate in multiplayer online games, you have to configure your router/firewall/whatever in one of three ways per Nintendo:

  • Enable Universal Plug and Play (UPnP) on your router, which is widely known to be insecure.
  • Assign your Wii-U a static IP and use DMZ mode, which opens your firewall to everyone on the Internet.
  • Assign your Wii-U a static IP and forward all UDP ports (1-65535) to your Wii-U. Mapping a couple of ports isn’t unusual, but mapping this many ports is basically the same thing as DMZ mode, so it’s no better.

None of these options are acceptable from a security point of view. Granted, I am reasonably security savvy and can mitigate the unnecessary risks involved with such a configuration. However, most consumers cannot and will not understand the risk they are undertaking.

While I’m not exactly fond of a CyberUL, if there were such a thing, this kind of configuration would clearly be non-compliant.

Of course, neither Microsoft nor Sony is much better, as they also push DMZ and UPnP as their primary solutions, but at least they offer options that don’t require forwarding all ports the way Nintendo does.

It’s the kind of thing I’m surprised I haven’t heard other security professionals rant about, honestly. It’s certainly going to make me think twice about buying any product from Nintendo in the future.

Edited to add on 10 Aug 2015: Links to why UPnP is bad, as well as explanations of why the other options Nintendo offers aren’t much better.