The Great Cloud Migration: Existential Threat or Opportunity?

As part of my day job at Check Point, I review customer security architectures and make recommendations with an eye towards securing the right things the right way. Generally, the customers I talk to have a pretty good idea of what they have, how it’s laid out, and what controls are in place. I can usually find gaps in their knowledge, as well as their controls, but at least there is some basic knowledge of their own environment.

Recently, I was asked if we could help a customer inventory their security equipment because they quite simply don’t know what they have. How can you protect the right things the right way when you don’t know what those things are and what tools you have to do it?

A typical part of my engagements is a Security CheckUp. To perform the Security CheckUp, a Check Point appliance is put onto a span port in the customer network so traffic can be passively analyzed. After some time, a report is generated (see a sample), which I review to get a sense of what is in the environment, what potential threats exist, and how effective certain security controls are.

The Security CheckUp won’t tell me, at least not directly, what controls exist in the environment, nor where they are. Other vendors make tools that can map out the network, but even those tools have their limits. For example, they can’t tell you about layer 2 equipment within the environment, about anything sitting on a span port (e.g. an IDS sensor), or about equipment that is powered off but racked somewhere.

Even with these tools, a fair amount of manual work is still required to turn that data into an accurate picture of what your environment is. In short, there is no “easy” button for this problem; it’s going to require real effort to track down what is where.

How can this happen? How can organizations become so unaware of their environment that they need someone to come in and tell them what they have and where it is? Then it hit me: anyone making use of “the cloud” is going to have this problem. Or they already do and don’t know it.

In the cloud, infrastructure and applications can come and go with the push of a button. Need another 10 webservers? Done. Need to burst to handle three times the traffic? No problem. Sure, you’ve got to have physical machines to run on, but racking and stacking that stuff is easy. The physical topology? Flat. The virtual topology? Changes every second.

If you’re not treating your “cloud” infrastructure in an automated fashion, you’re doing it wrong. You’re also doomed to repeat the mistakes you’re making today, and then some. While some of the same tools can be used in the cloud, they integrate a bit differently, and there are a number of additional considerations for the cloud that, quite frankly, are very different from those for physical networks.
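To make this concrete, here is a minimal sketch of what automated, continuous inventory can look like. It assumes an AWS environment and the boto3 SDK purely for illustration; the same idea applies to any provider’s API:

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

# Snapshot every EC2 instance in a region. In a dynamic cloud environment
# this picture goes stale almost immediately, so inventory has to be a
# scheduled, automated job rather than a one-off audit.
ec2 = boto3.client("ec2", region_name="us-east-1")
inventory = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            inventory.append({
                "id": instance["InstanceId"],
                "type": instance["InstanceType"],
                "state": instance["State"]["Name"],
                "private_ip": instance.get("PrivateIpAddress"),
                "tags": {t["Key"]: t["Value"] for t in instance.get("Tags", [])},
            })

for host in inventory:
    print(host)
```

Run on a schedule and diffed against the previous snapshot, even a script this simple tells you what appeared and disappeared since yesterday, which is more than many organizations can say about their physical networks.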

There was a time when security people were siloed off from other parts of the organization. Security was only brought in at the end to make it all work, and was often the scapegoat when it didn’t (or when things got hacked). If we’re going to stay one step ahead of the threats, this practice has to end. Security people have to be part of the conversation as applications and services are being conceived, or, in the case of software as a service, being migrated to. Likewise, security people have to figure out how to stop being a business impediment and start being a business enabler.

It also means that if you’re a security person not versed in the ways of infrastructure or software as a service, you need to bring your skills and knowledge up to scratch, and quickly, if you want to remain relevant.

One could look at all of this movement to cloud as a threat to your career. If everything can be automatically deployed, do we need IT or Information Security professionals anymore?

Absolutely, organizations will still need people who understand how it all connects together and how to secure it. In fact, I see this as a huge opportunity to improve security for all organizations. Because everything is fundamentally shifting, we have a chance to get this security thing right after decades of getting it wrong. This means, finally, being able to secure the right things the right way, regardless of where they may be.

It’s a huge opportunity, but seizing it will require all of us to acquire new skills, both technical and political. The question is: do you have the courage and the vision to take advantage of it?

Why Check Point Security Management Is Still The Gold Standard

In a recent customer meeting, one of my colleagues was telling a customer how the central management capabilities of Check Point Security Gateways are light years ahead of what any other vendor provides for their products, and have been for as long as I’ve been supporting Check Point products: 20 years now! This is not just my opinion, but that of third-party analyst firms as well as customers.

The question one might ask is: how come, after all these years, no one has been able to seriously challenge Check Point’s management capabilities? Sure, other vendors offer centralized management, but no one can operate at the same scale. A single Check Point Multi-Domain Management installation can manage hundreds or even thousands of gateways, while competitors struggle to manage tens or hundreds of gateways.

To me, the answer is very simple: it’s in Check Point’s DNA and it goes back to the very first versions of the product.

The first version of FireWall-1 I used was 2.0. At that point, it was managed using an OpenLook GUI on SunOS or Solaris. In version 2.1, a Windows-based client was added, which eventually became the standard way to manage the firewall policy. The GUI has seen numerous enhancements over the years, but I’m sure even people who haven’t been using Check Point as long as I have could find their way around those earliest versions.

The concept of separating the management from the firewall? It was there in version 2.0, though I’m told it first appeared in a 1.0 release (not the first one, but a subsequent one). Management at service-provider scale, a.k.a. Provider-1, was added in 1999, first as a separate management product, then integrated into the maintrain product in the R71 release.

As more features became integrated into the product, Policy Editor became SmartDashboard and the number of GUIs proliferated, each focusing on its own part of monitoring or managing Check Point products. Even SmartDashboard had a number of tabs for managing the different Software Blades.

Certain competitors to Check Point still like to highlight the various tabs in SmartDashboard as a weakness of the product: you have to allow something in the firewall, then in IPS, then in App Control, and so on, instead of having a single, unified policy. While it’s a fair criticism, you can’t just make a drastic change to how things are done when you have more than 100,000 customers. The approach chosen in the R7x releases was pragmatic and allowed customers to leverage the functionality in Software Blades without having to hand-tweak hundreds or even thousands of rules built around access control alone.

Prior to R80, perhaps one of the bigger management changes was the addition of SmartLog. I remember being asked to perform usability testing on this product before it was released. I was presented with a relatively spartan interface that reminded me of the Google homepage. I could quickly and easily find things that were possible to find in SmartView Tracker, of course, but would have taken far longer there, particularly when searching across many log files (SmartLog indexes and searches across all of them).

And now, after many, many months in development, the R80 release of management is finally available! There is now a single management UI (SmartConsole) that incorporates both policy and logging. Security policies are now unified across the blades (though for full functionality, this requires gateways running R80.10 and above). Even while managing older gateways, there are numerous usability enhancements that make it quicker and easier to manage your security infrastructure.

Automation, something that was not a strong point in earlier Check Point releases, is now a first-class citizen. A robust CLI and REST API are available, complete with support for concurrent administration (in read/write mode, even!) and granular permissions to control who can do what. There is also a web-based interface to access logs, events, and reports.

While R80 was formally released on 31 March 2016, numerous customers have been using it for weeks or months through the Early Availability program. I decided to wait until the formal release to migrate the management of my home firewalls to R80. The supported path involves a fresh installation and the standard migration tools to export/import the previous configuration. Rather than leverage that process, I decided to configure everything from scratch by hand.

Actually, that’s not entirely true. I used the CLI to create the vast majority of the network objects I needed for my relatively simple policies; the R80 CLI is far easier than using dbedit for the same tasks. Had I been feeling exceptionally clever, I could have done it using the REST-based API instead, which allows for all kinds of integrations with third-party tools.
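For the curious, here is a minimal sketch of what driving the R80 management API looks like, assuming Python and the requests library; the server address, credentials, and objects are placeholders:

```python
import requests

MGMT = "https://mgmt.example.com/web_api"  # placeholder management server

def api_call(command, payload, sid=None):
    """POST a command to the management API and return the parsed JSON."""
    headers = {"Content-Type": "application/json"}
    if sid:
        headers["X-chkp-sid"] = sid  # session ID returned by login
    resp = requests.post(MGMT + "/" + command, json=payload,
                         headers=headers, verify=False)  # lab box, self-signed cert
    resp.raise_for_status()
    return resp.json()

# Log in once; the returned session ID authenticates subsequent calls.
sid = api_call("login", {"user": "admin", "password": "secret"})["sid"]

# Create a few host objects, the same task I scripted via the CLI.
for name, ip in [("web1", "10.20.30.10"), ("web2", "10.20.30.11")]:
    api_call("add-host", {"name": name, "ip-address": ip}, sid)

# Changes are session-scoped until published; then end the session.
api_call("publish", {}, sid)
api_call("logout", {}, sid)
```

The same pattern extends to rules, layers, and publishing workflows, which is exactly what makes integration with third-party tools practical.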

There are a few things missing in R80 Management that I’m sure will be added over time. Some of the really cool features in R80 today, like nested policy layers or a fully unified policy, will require R80 gateways (which are not available yet).

One thing that will take some time to get used to is the lack of a built-in “Demo Mode.” I remember when you would invoke it by logging into the server *local (later given an official tickbox on the login screen), which would bring up a locally hosted database so you could see the GUI in action. This approach had limitations, namely that certain features requiring a live management server would not work. As R80 Management is definitely more than just a UI, a virtual machine with a populated configuration will be provided for this purpose in the near future.

If you haven’t kicked the tires on R80 Management yet, there’s no excuse now that it’s generally available. It shows the level of commitment Check Point continues to make to keeping ever-increasing complexity manageable.

The Importance of Responsible Disclosure

From Wikipedia:

Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details.

While there is a fair amount of debate over what is considered a reasonable period of time to allow this to happen, or even sometimes what constitutes a vulnerability, most people I know in the industry generally agree that responsible disclosure is a good thing overall.

The responsible disclosure process allows the software and services we rely on every day to become better and more resilient against malicious actors who regularly look to subvert these systems for their own gain. As an employee of Check Point, I see both sides of this debate: as a receiver of security vulnerability reports from the community and as a discloser of vulnerabilities to other organizations.

I’ve been directly involved with a couple of vulnerability disclosures related to Check Point products. While I can’t get into specifics, overall I believe issues are responded to quickly and appropriately.

To speak to the other side, Check Point does find and disclose vulnerabilities in third-party products as part of its ongoing security research, and there have been several recent examples.

Check Point’s research includes products that compete with Check Point in the marketplace. The latest example is a complete block bypass in Cisco Firepower. You can see a proof of concept video here:

As noted at the beginning of the video, this issue was disclosed to Cisco back in November 2015 and remediated today (30 March 2016), 134 days after the initial disclosure. Nothing was disclosed publicly by Check Point until this date. Check Point worked closely with Cisco PSIRT, who was cooperative and professional throughout the entire process.

While there may be some competitive benefit to researching rival products, it really speaks more to the fact that Check Point wants to see better security for everyone, not just those who happen to be Check Point customers. I think Mahatma Gandhi said it best:

“The best propaganda is not pamphleteering, but for each one of us to try to live the life we would have the world live.”

Check Point is leading the responsible disclosure debate by example here. It’s one of the things that makes me proud to work for Check Point.

A Macro-Sized Problem In The Enterprise

I remember when macro viruses in Microsoft Word were a thing more than a decade ago. They’re definitely back with a vengeance, carrying far more dangerous payloads than the macro viruses of old. A couple of recent examples are the Locky ransomware and PowerSniff.

Microsoft disables macros by default when you open documents in current versions of Microsoft Office. That said, Office makes it very easy to enable them, and malware-infested documents often include text instructing the user to do exactly that.

Despite how sketchy these documents look, people actually enable macros, and before you know it, their systems are infected with malware and you have a potential problem on your hands.

Is this really how it plays out? It depends on what other controls you have in your environment.

Check Point and Palo Alto Networks both claim to provide protections for these kinds of threats. However, not all protection is equal. From PowerSniff Malware Used in Macro-based Attacks:

Palo Alto Networks WildFire customers are protected against this threat, as all encountered files have been correctly flagged as malicious. Additionally, all C2 domains currently encountered have also been marked malicious. AutoFocus users can identify this malware using the PowerSniff tag.

Is it really protection? From Palo Alto’s documentation on WildFire with emphasis added by me:

The key benefits of the Palo Alto Networks WildFire feature are that it can discover zero-day malware in web traffic (HTTP/HTTPS), email protocols (SMTP, IMAP, and POP), and FTP traffic and can quickly generate signatures to protect against future infections from the malware it discovers. WildFire will automatically generate a signature based on the malware payload of the sample and tests it for accuracy and safety. Because malware evolves rapidly, the signatures that WildFire generates will address multiple variants of the malware. As WildFire detects new malware, it generates new signatures within 15-30 minutes.

Considering how easy it is to take a malicious file and change it so the file hash is not known, this is not a serious hurdle. More importantly, it means the proverbial Patient Zero who opens the file and enables macros is infected.
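To see why, consider this sketch (the “payload” here is a stand-in, not real malware):

```python
import hashlib

# A stand-in for a malicious file's bytes.
original = b"...pretend this is a malicious document..."

# Appending a single byte leaves most file formats perfectly usable,
# but produces a completely different, never-before-seen hash.
modified = original + b"\x00"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
```

Two digests with nothing in common, one trivial change. Signature generation that lags by even 15 minutes is always chasing the previous variant.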

Check Point takes this a step further. In addition to blocking files known to be bad by signature/file hash, files are emulated to see whether they are malicious, and they are actually held while being emulated. If a file is determined to be malicious, it is not delivered to the end user, and appropriate indicators are sent to ThreatCloud so others can benefit.

While the file is being emulated, Check Point can also provide a reconstructed file to the end user that contains no macros. This is what Check Point calls Threat Extraction. In fact, I experienced this today when I received an email from a customer containing a Word document. The email contained the following warning:

________________________________________
This email's attachments were cleaned of potential threats by Check Point Gateway.
Agenda Draft2.pdf : file(s) were successfully converted to PDF
Click here if the original attachments are required (justification needed). 
________________________________________

Instead of the original Word document, I was delivered a PDF that rendered the basic content of the file in a harmless form. No option to enable macros at all.

What if it turns out the file was genuine and I actually need the original? There’s a link so I can download it, after confirming I trust the source. If the file actually was infected and Check Point’s Threat Emulation detected that, I would not be able to download it.

Sure, Threat Emulation could deem a file clean that turns out not to be, but the chance of that is pretty low. Even so, you’ve added additional hurdles to prevent Patient Zero from getting infected in the first place. And if it does happen, it’s now a lot easier to track down who Patient Zero is.

Microsoft Office documents aren’t going anywhere, and neither are macro viruses. Make sure whatever solutions you deploy against these threats can block them inline immediately, even when no signature exists yet. For maximum protection, controls should be deployable both inline and on managed endpoints.

Disclaimer: As noted above, my employer Check Point Software Technologies has a dog in this hunt with their SandBlast Zero-Day Protection offering. These thoughts, however, are my own.

Who'll Stop The Evaders?

In the information security business, we are always trying to stay one step ahead of the threats out there. Sometimes the threats come from our own misconfigurations or lack of security controls; other times they come from pushing the boundaries of what is acceptable in a given protocol allowed through a security control, eliciting a specific reaction that lets a hacker achieve their objective (namely, getting a foothold in your environment).

The HTTP Evader test suite demonstrates techniques that can be used to bypass detection by various inline security tools. Specifically, it uses HTTP in unusual ways to obfuscate malicious code so it cannot be detected by a security gateway. The “malicious” code, in this case, is the EICAR test file, which any antivirus or antimalware scanner will detect. With a genuinely malicious payload, these same techniques could easily result in a malware-infected endpoint.
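To illustrate the general idea, here is my own sketch of one well-known evasion class (it is not code from the HTTP Evader suite): slicing a payload into degenerate one-byte HTTP chunks. A browser happily reassembles the stream, but a gateway that inspects it piecemeal may never see the contiguous signature:

```python
import socket

# The EICAR test string: harmless by design, but flagged by every AV engine.
EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def serve_once(port=8080):
    """Serve the EICAR file as one-byte HTTP chunks to a single client."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)  # read and discard the HTTP request
    conn.sendall(b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: application/octet-stream\r\n"
                 b"Transfer-Encoding: chunked\r\n\r\n")
    for byte in EICAR:
        # "1" is the hex chunk length: a single byte per chunk.
        conn.sendall(b"1\r\n" + bytes([byte]) + b"\r\n")
    conn.sendall(b"0\r\n\r\n")  # zero-length chunk ends the body
    conn.close()
    srv.close()

serve_once()
```

A gateway that properly normalizes chunked encoding before scanning catches this easily; test suites like HTTP Evader exist precisely to verify that it does.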

HTTP Evader should not be confused with the well-known Evader tool originally released by Stonesoft. Since that tool seems to have disappeared from the Internet, I made a copy of it available. Both tools operate on similar principles, though the focus of the Stonesoft/McAfee tool is far broader than HTTP traffic. Perhaps one tool gets confused for the other because both have been used to demonstrate bypasses in Palo Alto Networks gear (among others).

The Stonesoft tool uses techniques noted by the SANS Institute in a whitepaper called Beating the IPS. At the time this video was produced, Palo Alto Networks was vulnerable to these issues. As of Version 549 of their Application and Threat Content, a full three years after the SANS Institute paper was published, these issues are finally fixed.

The HTTP Evader tool is shown below. The video was posted in December 2015 and, to my knowledge, Palo Alto Networks gear is still vulnerable to the issues demonstrated:

If you’re a Check Point customer? You’re covered. Refer to the following SK articles for more details:

Keep in mind that both Evader tools merely demonstrate known techniques to evade detection by modern security tools. Surely new evasion techniques will be developed, which will hopefully cause security vendors to react and update their products accordingly. I think it’s safe to say Check Point will be at least one step ahead of Palo Alto Networks in keeping up with new developments in this area.

Disclaimer: My employer Check Point Software Technologies may or may not have differing views on this topic. These thoughts are my own.

Edited to add a link to the Stonesoft Evader tool on 5 March 2016.