Ye Olde PhoneBoy FireWall-1 FAQ is Back…In A Manner of Speaking

Many of you probably remember the Check Point FireWall-1 FAQ I ran for many years. Plenty of people have told me it was their “go-to” source of information on all things Check Point, well before Check Point had SecureKnowledge.

Well, I’m here to say: it’s back…in a manner of speaking.

More specifically, I am back doing the activity I was doing twenty years ago, namely trying to help the Check Point community make the best use of the stuff they bought and make the resulting information available to everyone.

The one difference? I’m doing it for Check Point now, as opposed to doing it as an independent effort. The name of the site? CheckMates–and no, it’s not a dating site.

This site was previously called Exchange Point and was launched around the time R80 was released, a little over a year ago. Back then it focused just on management, but it has since been expanded to cover all of the products that make up Check Point Infinity.

Even back then, I personally didn’t have all the answers. What I did do was make what I knew, and what others contributed, available to all. I had plenty of help from people in the community, including from people at Check Point.

In that regard, nothing’s changed. CheckMates will be a collaborative effort. Unlike in the past, Check Point’s role will be more prominent, especially since they are hosting the site and paying my salary.

At the end of the day, I want CheckMates to be like phoneboy.com was back in the day: to be the go-to resource for all things Check Point. It’s a tall order, and I know what’s there is not much now, but phoneboy.com wasn’t much back when I started, either.

To give you a taste of the discussions happening on CheckMates, I’ve put together a small sample of the threads from the past week.

How Long is Long Enough for a Password?

As much as we might want to see different authentication methods available, passwords aren’t going away anytime soon. This means a significant part of our security online comes down to choosing good passwords.

There are three basic rules for choosing good passwords:

  1. The more complex the better
  2. The longer the better
  3. Don’t use the same password on multiple sites

Some services, like Office 365, have been criticized for capping passwords at 16 characters. Some services allow even fewer characters than that.

If you actually do a little math, and choose the characters in your password randomly enough, perhaps using a tool like LastPass to generate and manage your passwords, even a 16-character password is more than strong enough to withstand a brute force attack!

To demonstrate that, I’m going to use the GRC Haystacks tool to show the search space required to find a given password. Yes, I know some in the security community pooh-pooh some of the contributions Steve Gibson has made to information security. The tool merely expresses the results of the math and is being used here for illustrative purposes.

A password can theoretically have four different types of characters:

  • Uppercase characters (26 possible options)
  • Lowercase characters (26 possible options)
  • Numbers (10 possible options)
  • Special characters (33 possible options)

This gives us a total of 95 possible values for a given character of a password. Note that this may vary from site to site as some sites might restrict the special character space. Some sites might even allow for emoji, which I am excluding since outside of smartphone platforms, these are not universally available.

Let’s assume we pick a 16-character password that leverages all four character types and is relatively random. The time required to exhaustively search this space with a tool like hashcat or John the Ripper? A much longer time than I can even conceive of!

[Image: 16 Character Complex Password]

What if I choose a 16-character password that is all lowercase, but random? Even if a lot of computing power is thrown at the password hash, we’re still looking at several years of computing time:

[Image: 16 Character Lowercase Password]

However, by adding a little bit of complexity, say, uppercase characters, the search space suddenly increases by orders of magnitude!

[Image: 16 Character Upper and Lowercase Password]

Even a 12-character complex password has a pretty large search space:

[Image: 12 Character Complex Password]
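
If you’d rather reproduce these figures than trust a screenshot, a few lines of Python get you in the same ballpark. The guess rate below (100 trillion guesses per second, roughly a “massive cracking array” scenario) is my own assumption, purely for illustration; real-world rates depend heavily on how the password was hashed.

```python
# Rough sketch of the search-space math shown above. The guess rate is an assumption.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
GUESSES_PER_SECOND = 100e12  # assumed "massive cracking array" rate

def search_space(alphabet_size: int, max_length: int) -> int:
    """Count every password of length 1..max_length, the way the Haystacks tool does."""
    return sum(alphabet_size ** k for k in range(1, max_length + 1))

scenarios = [
    ("16-char, all four character types (95)", 95, 16),
    ("16-char, lowercase only (26)", 26, 16),
    ("16-char, upper + lowercase (52)", 52, 16),
    ("12-char, all four character types (95)", 95, 12),
]

for label, alphabet, length in scenarios:
    space = search_space(alphabet, length)
    years = space / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{label}: {space:.2e} passwords, ~{years:,.0f} years to exhaust")
```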

All of this assumes you are choosing truly random characters for your password. If you’re using a well-known password manager, it’s probably random enough. Obviously, if you choose dictionary words for your password, or simple variations thereof, the odds of someone guessing your 16-character password are much higher.
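
As an aside, if you want to see what “random enough” looks like, here’s a minimal sketch using Python’s standard-library secrets module. A proper password manager is still the better tool for the job, but the principle is the same: every character is drawn independently from the full character set.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password from uppercase, lowercase, digits, and punctuation."""
    # 26 + 26 + 10 + 32 = 94 characters; the 95-character figure above also counts
    # the space, which many sites reject, so it is omitted here.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # 16 random characters; output differs every run
```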

Then again, how might someone perform a brute force attack on your password in the first place? If someone leaks the hashed passwords, an offline attack is certainly possible. It’s unlikely to happen via an online brute force attack, which tends to be detected and/or blocked and will most certainly take much longer.

And yes, the amount of time it takes to validate a password is a factor here. To illustrate this, let’s talk passcodes on phones. At least on Apple devices, if you enable the wipe feature, Apple will wipe the device after 10 failed passcode attempts. The phone only allows passcode entry via the screen and each attempt takes 80 milliseconds to process, as I discussed previously. After a few failed attempts, the phone will lock out additional attempts for a period of time. Which means, it’s not like someone can pick up your phone and a few seconds later, your phone is wiped.

With those constraints in place, how long and complex a passcode do you really need to keep your phone from being unlocked by someone other than you? Probably nowhere near as long as you think, so long as you avoid obvious and common ones. For the sake of argument, let’s look at an 8-digit passcode:

[Image: 8 Digit PIN]

To exhaustively search this space, assuming 80ms per guess and no other limiting factors, it would take about 103 days to try all possible combinations. Since there are other limiting factors as noted above, including the fact that the ability to automate passcode guessing is limited, it would take a bit longer. Of course, if the iPhone owner enabled the “erase after 10 failed attempts” option, all bets are off.
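
For the curious, the roughly 103-day figure falls out of a quick calculation: count every passcode of one to eight digits (the cumulative way the Haystacks tool tallies the search space) at the assumed 80 milliseconds per attempt, with no lockouts or wipes in the way.

```python
# Exhaustive search of all 1- to 8-digit passcodes at 80 ms per attempt.
MS_PER_GUESS = 80

total_passcodes = sum(10 ** k for k in range(1, 9))        # 111,111,110 combinations
total_days = total_passcodes * MS_PER_GUESS / 1000 / 86400
print(f"{total_passcodes:,} passcodes -> about {total_days:.0f} days")
# 111,111,110 passcodes -> about 103 days
```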

The bottom line is, when you actually look at the math, you don’t need quite as long a password as you think you do. Assuming the limit is at least 12 characters and all special characters are supported, you can make a password complex enough to sufficiently mitigate most brute force attacks. Even a 16-character password with just mixed-case letters has a pretty large search space, assuming your passwords have sufficient entropy.

Having said all that, I’m all for sites supporting longer passwords. Length lets people build higher-entropy passwords that are far easier to type, which can be good for people just learning good password hygiene. Also, if it helps people feel more secure to have a longer password, and adding support for longer passwords is trivial, why not support it?

Obviously, if there is a massive increase in available computing power anytime soon, some of these assumptions will have to be reexamined. That said, I suspect we’ll have bigger issues to deal with than just the security of our passwords.

Disclaimer: My employer, Check Point Software Technologies, didn’t offer an opinion on this issue. The above thoughts are my own.

Cloudflares with a Chance of Goatse

As I’m sure you’ve heard by now, Cloudflare had a case of CloudBleed, causing what amounts to a massive privacy violation for any site that happened to use them, at least if they used one of three specific features of Cloudflare: Email Obfuscation, Server-side Excludes, and Automatic HTTPS Rewrites. A potential list of compromised sites showed up, which may not be entirely accurate because plenty of sites use Cloudflare but may not necessarily use these features.

The advice that is given as a result of this bug?

Check your password managers and change all your passwords, especially those on these affected sites. Rotate API keys & secrets, and confirm you have 2-FA set up for important accounts. This might sound like fear-mongering, but the scope of this leak is truly massive, and due to the fact that all Cloudflare proxy customers were vulnerable to having data leaked, it’s better to be safe than sorry.

Theoretically, sites not in this list can also be affected (because an affected site could have made an API request to a non-affected one), so you should probably change all your important passwords.

Which is fine if, like me, you actually use a password manager (I recommend LastPass). However, it’s not entirely complete advice, as “HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data” were also leaked. Changing passwords won’t suddenly fix this disclosure issue, particularly if the sites in question do a poor job of invalidating cookies and tokens. Think that’s far-fetched? Think again.

Changing passwords also doesn’t fix applications that may have communicated on the backend to a Cloudflare-backed site (either on your behalf or otherwise). The potential scope of this issue is…scary.

That said, I can’t imagine everyone who ever used a given service over the last several months had their information disclosed. While this event increases the risk above zero, it’s not clear by how much for a given user. Also, the impact of disclosure of a login cookie/token for my bank or a service like Cloudflare is far different from that for a site like Techdirt, which out of an abundance of caution is forcing everyone to reset their password on the site.

I feel sorry for the average Internet user, who has seen umpteen of these notifications lately (just from Yahoo alone)! The advice of “change all your passwords” is quite simply untenable for the vast majority of Internet users. Even though I use a password manager as part of good password hygiene, I certainly don’t have time to visit all the sites in LastPass, much less change all my passwords manually!

And, as I noted earlier, changing your password won’t fully address the issue. Still, it’s probably as good a time as any to make sure your critical accounts are as protected as they can be. For me, that meant changing my Cloudflare password and API key as well as enabling multi-factor authentication. I’ve also changed the password for a few sites listed on the potential list of compromised sites. I will keep checking LastPass in case they decide to integrate this list of sites into their Security Challenge, which they’ve done in the past.

Even if you do none of this, my guess is that the vast majority of the users won’t be impacted by CloudBleed. At least I hope they won’t be.

Disclaimer: My employer, Check Point Software Technologies, didn’t offer an opinion on this issue. The above thoughts are my own.

Automation, Orchestration, and The Cloud

A while ago, I posted the following as a somewhat cryptic message on Twitter and LinkedIn:

To give this tweet a little more context, I’ll reference a previous post about the cloud, where I said the following:

In the cloud, infrastructure and applications can come and go with the push of a button. Need another 10 webservers? Done. Need to burst to handle three times the traffic? No problem. Sure, you’ve got to have physical machines to run on, but racking and stacking that stuff is easy. The physical topology? Flat. The virtual topology? Changes every second.

If you’re not treating your “cloud” infrastructure in an automated fashion, you’re doing it wrong. You’re also doomed to make the same mistakes and more that you’re making today. While some of the same tools can be used in the cloud, they integrate a bit differently. There are also a number of additional considerations that must be made for cloud—considerations that, quite frankly, are very different from physical networks.

I did get something wrong in the above statement, and it’s a mistake that a lot of people make. Instead of saying “automated fashion” I should have said “orchestrated fashion.” The reason is simple: while automation and orchestration are related, they are not the same thing.

Automation is something good sysadmins have been doing for 30+ years. Instead of doing a repetitive task over and over again, where you are bound to make mistakes, you build a script to do the hard work for you. If you’re clever, you might also create a system to execute that script on a number of systems. I actually worked on a system that did this back in the early 90s. Chef is a more modern version of the framework we built using shell scripts and rsh (this was pre-SSH).

The key thing about automation is you still have to know how to do whatever it is you’re trying to do and the order in which those commands should be run. You have to be able to handle all the various error conditions and the like as well.

Orchestration is a level above automation: it’s about intent. Automation is leveraged to bring that intent to life, but orchestration is less concerned with how the result is achieved, only that it is.
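
To make the distinction concrete, here’s a deliberately simplified sketch; the function names, thresholds, and placeholder steps are all mine, purely for illustration. The automation functions know how to perform each task; the orchestration loop only expresses intent (how many servers the current load calls for) and decides when to invoke them.

```python
import math

# --- Automation: knows *how* to perform a task, step by step -------------------
_server_counter = 0

def deploy_web_server() -> str:
    """Placeholder for the concrete steps; in real life these would be API calls
    to your cloud provider or configuration-management tooling."""
    global _server_counter
    _server_counter += 1
    print("provision VM -> install packages -> apply hardened config -> register with LB")
    return f"web-{_server_counter}"

def retire_web_server(name: str) -> None:
    print(f"deregister {name} from LB -> collect logs -> destroy VM")

# --- Orchestration: knows *what* the intent is, not how it is achieved ---------
REQUESTS_PER_SERVER = 500  # illustrative scaling target

def reconcile(servers: list, requests_per_second: float) -> list:
    """Compare observed load against the intent and call automation to close the gap."""
    needed = max(1, math.ceil(requests_per_second / REQUESTS_PER_SERVER))
    while len(servers) < needed:
        servers.append(deploy_web_server())
    while len(servers) > needed:
        retire_web_server(servers.pop())
    return servers

servers = reconcile([], requests_per_second=1600)      # intent: handle 1600 req/s -> 4 servers
servers = reconcile(servers, requests_per_second=400)  # load drops -> scale back to 1 server
```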

What Do Automation and Orchestration Have To Do With The Cloud?

Everything.

Automation and orchestration are what give the cloud its magical properties. Automation makes it possible to build an entire application stack, with the security you specify, in seconds; orchestration tells the system when and where to spin it up. Orchestration can also monitor the application stack for load and spin up more capacity as required, with all the necessary steps automated so it just happens.

Which means, if all you’re doing is taking your existing manually deployed applications and business processes and moving them onto AWS or Azure, all you’re doing is using someone else’s computer. You gain little to no benefit from the automation and orchestration frameworks built into AWS or Azure.

By the way, the same thing goes for your VMware, OpenStack, or other similar privately hosted environments. You might gain some benefit from the consolidation of hardware, but you will gain none of the agility and will have to adjust to growing complexity.

Embracing automation and orchestration in a cloud environment (public and private) does require relearning some tasks. Some of the fundamental assumptions that underlie networking are a bit different. Which means you may not be able to deploy things the same way as you did in the past, but that’s ok. Not every component of every traditionally deployed application is necessarily automation and orchestration friendly.

The good news is that automation and orchestration can improve security by ensuring everything is deployed in its most secure manner by default, which includes using the most up-to-date components. It can also deploy full next-generation threat prevention with Check Point vSEC or other, similar tools.

Patching? Upgrading? Who does that in the cloud? You just redeploy your apps with the new versions automatically. If it fails for some reason, you can easily put the old versions back in.

By the way, those Software as a Service applications you and everyone else uses? They’re built this way, all with automation and orchestration on the backend to make it “just work.” When your executives tell you to “move to the cloud,” this is what they really want: services that just work.

Many IT organizations have not delivered on this vision. This includes ones that have supposedly moved to the cloud. Because without automation and orchestration, the cloud is just another computer.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats, even in the cloud. The above thoughts are my own.

Which Comes First, the Ports or the Application ID?

Back when I started working with the Check Point product in 1996, things were much simpler. We still had plenty of IPv4 addresses, there weren’t a whole lot of users using the Internet, and applications were few and far between. To permit applications through a perimeter, it was generally considered best practice to open up the necessary TCP/UDP ports or use an application proxy.

For some applications, the act of opening up ports was complicated because the ports were determined dynamically. A good example of a classic protocol that does this is FTP. There were others, of course, but this was common back in the 1990s and is still used today. Check Point (and other vendors) had to have intelligence built into their product to account for FTP and a number of other protocols.

And then web-based applications became a thing. Now, if you’re allowing web traffic with no further classification, you might as well have an open firewall, because for all intents and purposes, it is. Even a single IP can host many different websites (some good, some not). And, of course, the content of a “good” website could also be “bad” at times. This created a clear need to control traffic based not only on ports and IPs, but on other elements as well.

Enter Palo Alto Networks, who in 2007 released the first version of their product built around applications rather than IPs and ports. To be clear, this wasn’t a new concept, as firewalls had been doing this in some capacity for years. However, Palo Alto’s approach resonated with customers, they gained market share, and other vendors started implementing similar technology.

The technology that Palo Alto Networks developed is called App-ID, and they explain it as follows in their App-ID Tech Brief:

App-ID uses multiple identification techniques to determine the exact identity of applications traversing your network – irrespective of port, protocol, evasive tactic, or encryption. Identifying the application is the very first task performed by App-ID, providing you with the knowledge and flexibility needed to safely enable applications and secure your organization.

Sounds magical. I can now build a security policy based on applications alone without regard to the ports they use? Or can I?

[Image: appid]

Even Palo Alto Networks’ own documentation says the very first check is based on IP and port, exactly the way every other vendor does it. You know why? Because that’s the only way to do it.

If I open a TCP connection to 192.0.2.1 port 80, the first packet sent is a TCP SYN. Here’s what I know from that:

  1. It’s likely a web-based connection. That said, anything can use port 80, so that’s only an assumption.
  2. It could be a connection to do a Google search, Gmail, Google Maps, Google Drive, or any other Google property. Or Office 365 apps. Or something else.
  3. I might be able to do a reverse lookup on the IP to see where it’s going, but that adds latency and provides no guarantee the lookup will show you anything that will help identify the app or website. Or tell you if the content being served up is actually safe.

Bottom line: more information is needed. A few more packets must be let through on the connection before we know exactly what it is.
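
To illustrate the point, here’s a toy sketch, entirely my own and nothing like how any vendor actually implements application identification: with only the SYN, the best you can do is guess from the port; once a few packets flow, something like an HTTP Host header or a TLS ClientHello gives you far more to go on.

```python
def classify_by_syn(dst_port: int) -> str:
    """All you can say from the first packet: a guess based on the destination port."""
    return {80: "probably HTTP", 443: "probably HTTPS"}.get(dst_port, "unknown")

def classify_by_payload(payload: bytes) -> str:
    """Once a few packets flow, the payload narrows things down considerably."""
    if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"PUT ")):
        # The Host header separates mail.google.com from docs.google.com, and so on.
        for line in payload.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                return "HTTP to " + line.split(b":", 1)[1].strip().decode()
        return "HTTP (no Host header)"
    if payload[:1] == b"\x16":
        return "TLS ClientHello (inspect the SNI to see which site it is)"
    return "something else entirely, despite using a web port"

print(classify_by_syn(80))  # "probably HTTP" -- and that's all you know so far
print(classify_by_payload(b"GET / HTTP/1.1\r\nHost: mail.google.com\r\n\r\n"))
```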

Let’s assume for a moment we take the position that we don’t care about ports at all, only applications, as I often hear Palo Alto Networks reps say. What can happen? First of all, you can do reconnaissance on anything beyond the firewall. If you do this rapidly, you’ll probably trigger the various protections in place to detect port scanning and similar activity, but it could easily be done in a “low and slow” manner that these detections probably won’t catch.

Even Palo Alto Networks has a concept of ports that ties in with applications. This is configured on a per-rule/service basis, as shown below:

[Image: application-default]

Per a post on their community, Application Default means:

Choosing this means that the selected applications are allowed or denied only on their default ports defined by Palo Alto Networks. This option is recommended for allow policies because it prevents applications from running on unusual ports and protocols, which if not intentional, can be a sign of undesired application behavior and usage.

Which is, of course, correct. Ports do matter, as they filter out a lot of undesirable traffic. Palo Alto Networks simply masks this fact by allowing you to build only application-centric policies rather than separate policies for ports and applications, the way Check Point currently does it.
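
A crude way to picture what application-default buys you; the table and names below are made up for illustration and are not either vendor’s actual implementation. A rule only matches when the identified application shows up on the port defined as its default, so, for example, SSH tunneled over port 80 falls through to whatever cleanup rule you have.

```python
# Hypothetical default-port table, purely for illustration.
DEFAULT_PORTS = {
    "web-browsing": {80},
    "ssl": {443},
    "ssh": {22},
    "dns": {53},
}

def matches_application_default(app: str, dst_port: int) -> bool:
    """Allow the identified application only on the port(s) defined as its default."""
    return dst_port in DEFAULT_PORTS.get(app, set())

print(matches_application_default("web-browsing", 80))  # True:  web traffic where you expect it
print(matches_application_default("ssh", 80))           # False: ssh hiding on a web port
```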

Is an application-centric policy better? It certainly means fewer policies to configure, which is one benefit of Palo Alto’s solution. Check Point will offer similar functionality in the R80.10 release, which, at this writing, is available as a Public Early Availability release.

Whitelist versus Blacklist

This whole post grew out of a discussion I started on LinkedIn when I posted this graphic, highlighting the number of applications supported by the different next-generation firewall vendors:

[Image: apps-supported-201610]

Various Palo Alto reps on the thread pointed out the number of applications supported didn’t matter as much because the way you should do it is to only allow specific applications and block the rest. Which, if you have a single policy for ports and applications, is a little easier to achieve. It is also possible to achieve in Check Point, but it does require some additional effort compared to Palo Alto.

Even with a whitelist approach where you permit only a small number of applications to pass, you have to be able to differentiate safe traffic from malicious traffic. As an example, specific anonymizers can appear to behave like innocuous web browsing. This is why Palo Alto Networks and others can also identify specific malicious applications to help differentiate.

It’s also why the number of applications a particular solution can identify matters greatly. As an example, I ran a Security Checkup at a Palo Alto Networks customer and saw the following applications:

[Image: checkup-anonymizers]

In this case, the Security Checkup appliance was positioned outside of a Palo Alto Networks gateway that was filtering traffic. The report from which this snapshot was taken was run in February 2016. You can see from the traffic volumes that Gom VPN and Betternet are being allowed, while some of the others, with only a trickle of traffic, are clearly being blocked. I checked Applipedia, and these anonymizers are still not supported (as of 11 December 2016, at least).

It’s also worth noting that a whitelist approach carries a bit more administrative overhead and only works when the applications you want to allow are defined. That’s one more reason being able to detect more applications is better, even with a whitelisting approach. And, as noted before, this whitelist strategy will be easier to implement in the Check Point R80.10 release.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats as well as the competition. The views above, however, are my own.