Why Can't I Choose What to SSL Inspect Based on Application?

SSL Decryption is a feature in current versions of the Check Point Security Gateway, as well as in competing products. I described the technology in a previous blog post entitled Why SSL Decryption Is Important.

All implementations of this feature have a configurable policy so you can decide what traffic to decrypt. Here is an example policy from a Check Point Security Gateway, which can use IP addresses or URL Filtering Categories:

SSL Inspection Policy

Some people would prefer to use applications (e.g. YouTube), but I just don’t see a way to do that without reducing the overall security posture. Maybe someone more clever than me can explain the flaws in my logic.

The way Check Point determines whether a given IP requires SSL inspection is to actually man in the middle the first connection to that IP (assuming the policy is configured appropriately and just the “site category” needs to be determined). In the first few packets of that MITM connection, we can determine conclusively what URL the end user is going to (or the app is using), put an IP and category entry in the local cache, and inspect the traffic on that connection. Even if a URL isn’t used, the certificate information is in the first few TCP data packets, which gives us something to assign a URL category to. If further connections to that IP should be SSL inspected, the firewall will do so per the policy.
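To make the ordering concrete, here is a minimal Python sketch of the probe-first behaviour described above. The cache structure, category names, and helper function are my own illustration, not Check Point’s implementation:

```python
# Hypothetical illustration of probe-first categorization; not Check Point code.

INSPECT_CATEGORIES = {"Media Streams", "Social Networking"}  # example policy
category_cache: dict[str, str] = {}  # dst IP -> category learned from the probe

def probe_and_categorize(dst_ip: str) -> str:
    """Stand-in for the real probe: MITM the first connection, read the SNI or
    certificate from the first few data packets, and map it to a category."""
    return "Media Streams"  # placeholder result for the sketch

def should_inspect(dst_ip: str) -> bool:
    category = category_cache.get(dst_ip)
    if category is None:
        # The first connection to this IP is itself inspected while we learn the category.
        category = probe_and_categorize(dst_ip)
        category_cache[dst_ip] = category
    # Later connections to the same IP are matched against the cached category.
    return category in INSPECT_CATEGORIES

print(should_inspect("203.0.113.10"))  # True on the first and all later connections
```

The key property is that no connection escapes inspection: the flow used to learn the category is already being man-in-the-middled while the verdict is made.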

Sometimes the “man in the middle” process can break specific applications (e.g. because they use Certificate Pinning). Or a URL isn’t being used. Or, worse, the SSL site in question requires Client Authentication, which breaks completely when you attempt to man in the middle the connection. This is why the latest (R77.30) release includes a mechanism called Probe Bypass, which can be enabled as described in sk104717.

Some applications cannot be identified using just the certificate. Google is a great example of this, as they use wildcard certificates across a number of their properties. Even Server Name Indication, which exists to remediate this issue, doesn’t work consistently across all browsers and servers. Thus we’re often left with only the certificate as presented.
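To make this concrete, here is a small Python sketch (standard library only; the host name is just an example) that pulls the names a server asserts in its certificate. A wildcard or shared certificate covering many properties is exactly why the certificate alone can’t distinguish the application:

```python
import socket
import ssl

def peer_cert_names(host: str, port: int = 443):
    """Return (common names, DNS SANs) asserted in a server's certificate.

    Illustrative only: a gateway sees this certificate passively during the
    handshake rather than completing its own connection as we do here.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    common_names = [v for rdn in cert.get("subject", ()) for k, v in rdn if k == "commonName"]
    dns_sans = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
    return common_names, dns_sans

# Wildcards and shared SANs mean several Google properties can present the
# same certificate, so the certificate alone can't tell the applications apart.
print(peer_cert_names("www.youtube.com"))
```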

Let’s assume we’re OK with not man-in-the-middling traffic until we’re certain it’s an application we want to SSL inspect. To identify applications beyond IPs and ports, you actually have to let some traffic pass through the firewall.

(Is a traditional IP/port related policy still relevant? Absolutely, despite what some Check Point competitors like to say in their marketing, which even they will admit if pressed on the issue.)

If we don’t man in the middle the first connection to an IP, and instead allow the application to be identified before deciding whether to SSL inspect, we run the risk of allowing encrypted traffic for an application we actually want to inspect. Malicious applications could easily exploit this behavior by pretending to be an unidentifiable application, so their connections would never be SSL inspected.
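For contrast, here is an equally hypothetical sketch of the identify-the-application-first ordering; by the time the verdict is in, the traffic that produced it has already passed uninspected:

```python
# Purely illustrative contrast to the probe-first sketch earlier; not vendor code.

INSPECT_APPS = {"YouTube"}

def identify_application(forwarded_packets: list[bytes]) -> str:
    """Stand-in for application identification: a few packets of the still
    encrypted flow must be forwarded before a verdict is possible."""
    return "YouTube" if forwarded_packets else "Unknown"

def should_inspect_after_identification(forwarded_packets: list[bytes]) -> bool:
    app = identify_application(forwarded_packets)
    # Even when this returns True, the packets used for identification have
    # already passed uninspected; a flow that deliberately looks "Unknown"
    # will never be inspected at all.
    return app in INSPECT_APPS

print(should_inspect_after_identification([b"encrypted client traffic"]))
```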


Disclaimer: This is my own thinking. My employer Check Point Software Technologies may have a different stance on this matter.

Edited to add reference to Probe Bypass on 15 Feb 2016 (some hours after I originally published)

Why I'm in Information Security, Apple Epoch Reboot Loop Edition

Undoubtedly you’ve heard by now of the bug that occurs when you manually set the date back on your 64-bit iOS device to 1/1/1970: the device gets locked in a reboot loop. If not, there’s an article on Ars Technica or this read on Reddit that explains the issue.

Would a reasonable person actually set the date on their phone back to Jan 1st 1970? In most situations, no, but they can be socially engineered into doing so, which I will admit happened to me when someone shared an image like this one. The good thing is I had the sense to do it on my secondary iOS device and had backups. I also know how to disassemble my phone to unplug the battery, which is the only reliable way I’ve seen to recover from the problem.

Ok, social engineering issues aside, this problem requires physical access to the device, right? Not so fast. iPhones check in with an NTP server to get the current system time. NTP is a UDP-based protocol and, as such, can easily be spoofed. A person who knows how NTP works might say that NTP won’t allow such large time jumps. Has anyone tested Apple’s implementation to ensure it doesn’t allow this? I don’t know, but it’s certainly a potential exploit vector.
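To see why a forged NTP reply is at least plausible as a vector, recall that NTP timestamps count seconds from 1900 while Unix time counts from 1970. A quick Python sketch of the conversion (my own illustration, not Apple’s client code):

```python
# NTP timestamps count seconds from 1 Jan 1900; Unix time counts from 1 Jan 1970.
NTP_UNIX_EPOCH_OFFSET = 2_208_988_800  # seconds between the two epochs

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP seconds value to Unix time."""
    return ntp_seconds - NTP_UNIX_EPOCH_OFFSET

# A forged (unauthenticated UDP) reply whose timestamp equals the epoch offset
# translates to Unix time 0, i.e. 1 Jan 1970, the value at the heart of the bug,
# if the client blindly accepts the jump.
print(ntp_to_unix(2_208_988_800))  # 0
```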

There should be no setting on any device that causes it to become a paperweight like this. Whether a reasonable user would change that setting or not is really irrelevant because a malicious actor can exploit facts like this to cause mischief.

When it comes to bugs like this, it’s never about whether a reasonable user could trigger it (they can be socially engineered into doing so); it’s about how a malicious actor can leverage the bug to cause unauthorized disclosure, alteration, or destruction of information or assets. The fact I can reason my way through this and enjoy doing so is why I’m still in Information Security today.

Disclaimer: I have no idea what my employer Check Point Software Technologies thinks about any of this. These thoughts are my own.

Security: At What Cost?

I was listening to the end of DtSR Episode 179 when a question was asked: would you (or someone you know) buy a “secure” router that cost $25 more? That wasn’t the exact question, but that was the gist.

The immediate question I thought in response was the following (which, of course, I tweeted with the #dtsr hashtag): “How much (more) security do you get for $25? How much (more) do you get for $250,000? And how can non-infosec folks evaluate that?”

The challenge, of course, is how you quantify security and the value that security provides. There is definitely no one-size-fits-all answer to this question. It comes down to quantifying the various risks in monetary terms. You know, in terms of single-loss expectancy or annualized loss expectancy.

This assumes you know what assets you’re protecting, have some understanding of the value of those assets, and have some clue about the likelihood of a loss and what impact that might have on the asset’s value. Many organizations I’ve talked to can’t articulate these things, and that’s a problem. Without them, you have no idea how much you should spend to protect those assets. You don’t want to spend $1000 to protect a $10 asset, but you might spend $10 to protect a $1000 asset.
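For the curious, the textbook arithmetic behind those terms looks like this; the figures below are entirely hypothetical:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: asset value times the fraction of that value lost in one incident."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, annual_rate_of_occurrence: float) -> float:
    """ALE: per-incident loss times the expected number of incidents per year."""
    return sle * annual_rate_of_occurrence

# Hypothetical numbers: a $1000 asset, half its value lost per incident,
# one incident expected every two years.
sle = single_loss_expectancy(1000, 0.5)       # $500 per incident
ale = annualized_loss_expectancy(sle, 0.5)    # $250 per year
print(sle, ale)
# Spending $10 a year to protect this asset is easy to justify; $1000 a year is not.
```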

And if you think information security professionals have a tough time figuring this stuff out, think about how everyone else approaches the same situation. Is there any wonder there is so much FUD in information security marketing?

Disclaimer: I do work for a vendor: Check Point Software Technologies. These thoughts are my own.

FUD and Cybersecurity Marketing

From Cockroaches Versus Unicorns: The Golden Age Of Cybersecurity Startups:

With increasing hacks, the CISO’s life has just become a lot messier. One CISO told me, “Between my HVAC vendor and my board of directors, I am stretched. And everyday I get a hundred LinkedIn requests from vendors. Their FUD approach to security sales is exhausting.”

More than 50 large security vendors exist, and the list is growing rapidly. More than 200 new security startups are funded each year, competing for the CISO mindshare and budget. And the sales pitches use FUD (fear, uncertainty, doubt) as a primary tactic:

A large part of the reason the various cybersecurity companies use fear, uncertainty, and doubt (FUD) in their marketing is that it still works. More specifically, it is because companies have no clue what “security” jobs need to be done. These companies are already afraid out of ignorance (willful or otherwise).

Various cybersecurity companies simply speak to this fear: “There’s lots of bad things out there and our widget will protect you from it.” Which is, of course, patently false. Even the best security products in the world are useless if they are not deployed as part of an overall strategy that includes people, policies, and processes working towards a common goal.

It’s not enough for cybersecurity vendors to market and sell widgets. We must do better and actually help our customers understand the real threats to their business, not just the ones that make the news. We must help them take steps to integrate security as part of their business process, enabling new capabilities that weren’t possible before without significant risk.

Disclaimer: While I hate the word cybersecurity, I do work for a vendor: Check Point Software Technologies. These thoughts are my own.

The Security Industry: Lead By Example

If the security industry itself can’t be bothered to fix security issues in a timely manner, how can we expect customers to apply the patches in a timely manner? Shouldn’t we be leading by example?

Generally speaking, Check Point (where I work) is pretty quick to respond to reported issues. In some cases, fixes have been out in hours. Some of Check Point’s competitors? Not so quick. Here are a couple recent examples.

FireEye’s Year Old Vulnerability

A couple people from Check Point reported an issue to FireEye on 24 July 2014. While the issue was reported on one of their products (their ‘EX’ product), the issue actually affected several of their products. A “fixed” version of code for the product we reported on was issued on 7 July 2015, 349 days after it was first reported. This is how the issue was reported in their Q4 2015 Security Vulnerability advisory:

FireEye Vulnerability

To be fair, the issue was fixed in some products in a much shorter time, but for one product, it was about 15 months.

The acknowledgement by FireEye is misleading. First of all, the employer of the reporters was not mentioned as it was for others in the advisory (it’s Check Point, as noted previously). Second of all, the actual issues reported are more serious than the somewhat misleading description FireEye chose to provide. Even with the description provided, I disagree with the “low” severity rating. Social engineering attacks, anyone?

Palo Alto Networks Evading For Three Years

I’ve largely covered the vulnerability in question in a previous post. The one update I can provide is that three years after the original SANS Institute paper highlighting the deficiencies was published, Palo Alto Networks finally issued an update to their Application and Threat Content that addresses the issues. Or at least they addressed the issues demonstrated in the 666 different ways to bypass Palo Alto Networks video, which are based on the same principles.

More Examples

Check Point has actually found and responsibly disclosed a number of vulnerabilities in a number of common services and software products, including those of competitors. A presentation has been posted on SlideShare providing the details.

Conclusion

Every product out there is going to have security vulnerabilities found in it, including and especially products designed to keep your environments secure. What separates the mature, market-leading vendors from the others is how they respond to such reports.