Cloudflares with a Chance of Goatse

As I’m sure you’ve heard by now, Cloudflare had a case of CloudBleed, causing what amounts to a massive privacy violation for any site that happened to use them, at least if they used one of three specific features of Cloudflare: Email Obfuscation, Server-side Excludes, and Automatic HTTPS Rewrites. A potential list of compromised sites showed up, which may not be entirely accurate because plenty of sites use Cloudflare but may not necessarily use these features.

The advice that is given as a result of this bug?

Check your password managers and change all your passwords, especially those on these affected sites. Rotate API keys & secrets, and confirm you have 2-FA set up for important accounts. This might sound like fear-mongering, but the scope of this leak is truly massive, and due to the fact that all Cloudflare proxy customers were vulnerable to having data leaked, it’s better to be safe than sorry.

Theoretically, sites not in this list can also be affected (because an affected site could have made an API request to a non-affected one), so you should probably change all your important passwords.

Which is fine if, like me, you actually use a password manager (I recommend LastPass). However, it’s not entirely complete advice, as “HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data” were leaked. Changing passwords won’t suddenly fix this disclosure issue, particularly if the sites in question do a poor job of invalidating cookies and tokens. Think that’s far-fetched? Think again.

Changing passwords also doesn’t fix applications that may have communicated on the backend to a Cloudflare-backed site (either on your behalf or otherwise). The potential scope of this issue is…scary.

That said, I can’t imagine everyone who ever used a given service over the last several months had their information disclosed. While this event increases the risk above zero, it’s not clear by how much for a given user. Also, the impact of disclosing a login cookie/token for my bank or a service like Cloudflare is far different than for a site like Techdirt, which out of an abundance of caution is forcing everyone to reset their password on the site.

I feel sorry for the average Internet user, who has seen umpteen of these notifications lately, several from Yahoo alone! The advice to “change all your passwords” is quite simply untenable for the vast majority of Internet users. Even though I use a password manager as part of good password hygiene, I certainly don’t have time to visit all the sites in LastPass, much less change all my passwords manually!

And, as I noted earlier, changing your password won’t fully address the issue. Still, it’s probably as good a time as any to make sure your critical accounts are as protected as they can be. For me, that meant changing my Cloudflare password and API key as well as enabling multi-factor authentication. I’ve also changed the password for a few sites listed on the potential list of compromised sites. I will keep checking LastPass in case they decide to integrate this list of sites into their Security Challenge, which they’ve done in the past.

Even if you do none of this, my guess is that the vast majority of the users won’t be impacted by CloudBleed. At least I hope they won’t be.

Disclaimer: My employer, Check Point Software Technologies, didn’t offer an opinion on this issue. The above thoughts are my own.

Automation, Orchestration, and The Cloud

A while ago, I posted the following as a somewhat cryptic message on Twitter and LinkedIn:

To give this tweet a little more context, I’ll reference a previous post about the cloud, where I said the following:

In the cloud, infrastructure and applications can come and go with the push of a button. Need another 10 webservers? Done. Need to burst to handle three times the traffic? No problem. Sure, you’ve got to have physical machines to run on, but racking and stacking that stuff is easy. The physical topology? Flat. The virtual topology? Changes every second.

If you’re not treating your “cloud” infrastructure in an automated fashion, you’re doing it wrong. You’re also doomed to repeat the mistakes you’re making today, and make new ones besides. While some of the same tools can be used in the cloud, they integrate a bit differently. There are also a number of additional considerations that must be made for the cloud—considerations that, quite frankly, are very different from physical networks.

I did get something wrong in the above statement, and it’s a mistake that a lot of people make. Instead of saying “automated fashion” I should have said “orchestrated fashion.” The reason is simple: while automation and orchestration are related, they are not the same thing.

Automation is something good sysadmins have been doing for 30+ years. Instead of doing a repetitive task over and over again, where you are bound to make mistakes, you build a script to do the hard work for you. If you’re clever, you might also create a system to execute that script on a number of systems. I actually worked on a system that did this back in the early 90s. Chef is a more modern version of the framework we built using shell scripts and rsh (this was pre-SSH).

The key thing about automation is you still have to know how to do whatever it is you’re trying to do and the order in which those commands should be run. You have to be able to handle all the various error conditions and the like as well.
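A minimal sketch of what I mean by automation, with hypothetical step names (this isn’t any real deployment tool): you still have to know every command, the order to run them in, and how to handle failures yourself.

```python
def deploy_webserver(run):
    """Run ordered deployment steps; stop at the first failure.

    `run` is any callable that executes a step and returns True on success.
    Returns the list of steps that completed.
    """
    steps = [
        "install-packages",   # order matters: packages before config
        "write-config",       # config before the service starts
        "start-service",
        "verify-health",
    ]
    completed = []
    for step in steps:
        if not run(step):     # error handling is entirely on you
            raise RuntimeError(f"step failed: {step}; completed so far: {completed}")
        completed.append(step)
    return completed
```

The script saves you the typing, but the knowledge of *how* and *in what order* still lives in your head (or in the script), which is exactly the limitation orchestration addresses.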

Orchestration is a level above automation: it’s about intent. Automation is leveraged to bring that intent to life, but orchestration is less concerned with how the result is achieved, only that it is.

What Do Automation and Orchestration Have To Do With The Cloud?


Automation and orchestration are what give the cloud its magical properties. Automation makes it possible to build an entire application stack, with the security you specify, in seconds; orchestration tells the system when and where to spin it up. Orchestration can also monitor the application stack for load and spin up more capacity as required, with all the necessary steps automated so it just happens.
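To make the division of labor concrete, here is an illustrative reconcile loop (not any real cloud API—the `spin_up`/`tear_down` callables and the capacity figure are stand-ins): orchestration states the intent, “enough servers for the current load,” and leans on automation to make reality match it.

```python
def reconcile(current_servers, requests_per_sec, spin_up, tear_down,
              capacity_per_server=100):
    """Converge the server count toward the intent; return the new count."""
    # Intent: at least one server, and enough to absorb the observed load.
    desired = max(1, -(-requests_per_sec // capacity_per_server))  # ceiling division
    while current_servers < desired:
        spin_up()       # automation handles *how* a server gets built
        current_servers += 1
    while current_servers > desired:
        tear_down()     # and how one gets retired
        current_servers -= 1
    return current_servers
```

Notice the orchestration layer never says how a server is installed or configured; it only compares desired state to actual state and invokes the automation that closes the gap.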

Which means that if all you’re doing is taking your existing manually deployed applications and business processes and moving them onto AWS or Azure, all you’re doing is using someone else’s computer. You are gaining little to no benefit from the automation and orchestration frameworks built into AWS or Azure.

By the way, the same goes for VMware, OpenStack, or other similar privately hosted environments. You might gain some benefit from the consolidation of hardware, but you will gain none of the agility and will have to adjust to growing complexity.

Embracing automation and orchestration in a cloud environment (public and private) does require relearning some tasks. Some of the fundamental assumptions that underlie networking are a bit different. Which means you may not be able to deploy things the same way as you did in the past, but that’s ok. Not every component of every traditionally deployed application is necessarily automation and orchestration friendly.

The good news is that automation and orchestration can improve security by ensuring everything is deployed in its most secure manner by default, which includes using the most up-to-date components. They can also deploy full next-generation threat prevention with Check Point vSEC or other similar tools.

Patching? Upgrading? Who does that in the cloud? You just redeploy your apps with the new versions automatically. If it fails for some reason, you can easily put the old versions back in.
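The redeploy-and-roll-back idea can be sketched in a few lines (the `deploy` and `healthy` callables are hypothetical stand-ins for whatever your pipeline actually does): stand up the new version, check it, and put the old version back if the check fails.

```python
def rolling_update(active_version, new_version, deploy, healthy):
    """Deploy `new_version`; return whichever version ends up serving traffic."""
    deploy(new_version)
    if healthy(new_version):
        return new_version
    deploy(active_version)   # redeploying the known-good version *is* the rollback
    return active_version
```

Because every deployment is built the same automated way, "rollback" is just another deployment, not a frantic manual un-patching exercise.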

By the way, those Software as a Service applications you and everyone else uses? They’re built this way, all with automation and orchestration on the backend to make it “just work.” When your executives tell you to “move to the cloud,” this is what they really want: services that just work.

Many IT organizations have not delivered on this vision. This includes ones that have supposedly moved to the cloud. Because without automation and orchestration, the cloud is just another computer.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats, even in the cloud. The above thoughts are my own.

Which Comes First, the Ports or the Application ID?

Back when I started working with the Check Point product in 1996, things were much simpler. We still had plenty of IPv4 addresses, there weren’t a whole lot of users using the Internet, and applications were few and far between. To permit applications through a perimeter, it was generally considered best practice to open up the necessary TCP/UDP ports or use an application proxy.

For some applications, the act of opening up ports was complicated because the ports were determined dynamically. A good example of a classic protocol that does this is FTP. There were others, of course, but this was common back in the 1990s and is still used today. Check Point (and other vendors) had to have intelligence built into their product to account for FTP and a number of other protocols.
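To see why that intelligence is needed, consider FTP’s passive mode: the data port is negotiated in-band on the control channel, so a firewall must parse replies like the 227 response below to know which port to open for the data connection. Here’s a minimal sketch of that parsing (illustrative, not any vendor’s actual implementation); the 227 reply format itself comes from the FTP specification.

```python
import re

# "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" encodes the server's
# data-channel IP as h1.h2.h3.h4 and the port as p1*256 + p2.
PASV_RE = re.compile(r"227 .*?\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)")

def parse_pasv(reply):
    """Extract (ip, port) from a '227 Entering Passive Mode' control reply."""
    m = PASV_RE.search(reply)
    if not m:
        raise ValueError("not a PASV reply")
    a, b, c, d, p_hi, p_lo = map(int, m.groups())
    return f"{a}.{b}.{c}.{d}", p_hi * 256 + p_lo
```

For example, `parse_pasv("227 Entering Passive Mode (192,168,1,2,19,137).")` yields `("192.168.1.2", 5001)`—a port the firewall could never have known about from a static rule.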

And then web-based applications became a thing. Now, if you’re allowing web traffic with no further classification, you might as well have an open firewall, because for all intents and purposes, it is. Even a single IP can host many different websites (some good, some not). And, of course, the content of a “good” website could also be “bad” at times. This created a clear need to control based not only on ports and IPs, but on other elements.

Enter Palo Alto Networks, who in 2007 released the first version of their product that is built around applications versus IP and ports. To be clear, this wasn’t a new concept as firewalls have been doing this in some capacity for years. However, Palo Alto’s approach resonated with customers, they gained market share, and other vendors started implementing similar technology.

The technology that Palo Alto Networks developed is called App-ID, and they explain it as follows in their App-ID Tech Brief:

App-ID uses multiple identification techniques to determine the exact identity of applications traversing your network – irrespective of port, protocol, evasive tactic, or encryption. Identifying the application is the very first task performed by App-ID, providing you with the knowledge and flexibility needed to safely enable applications and secure your organization.

Sounds magical. I can now build a security policy based on applications alone without regard to the ports they use? Or can I?


Even Palo Alto Networks’ own documentation says the very first check is based on IP and port, exactly the way every other vendor does it. You know why? Because that’s the only way to do it.

If I open a TCP connection to port 80, the first packet sent is a TCP SYN. Here’s what I know from that:

  1. It’s likely a web-based connection. That said, anything can use port 80, so that’s only an assumption.
  2. It could be a connection to do a Google search, Gmail, Google Maps, Google Drive, or any other Google property. Or Office 365 apps. Or something else.
  3. I might be able to do a reverse lookup on the IP to see where it’s going, but that adds latency and provides no guarantee the lookup will show you anything that will help identify the app or website. Or tell you if the content being served up is actually safe.

Bottom line: more information is needed. A few more packets must be let through on the connection before we know exactly what it is.
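A toy illustration of the point (this is not App-ID or any vendor’s engine, just a sketch): at SYN time all you have is IP and port, but once payload bytes arrive you can start telling protocols apart.

```python
def classify_first_bytes(payload):
    """Guess the protocol from the first payload bytes of a port-80 flow."""
    if payload[:2] == b"\x16\x03":
        return "tls"          # TLS handshake record (e.g. a ClientHello)
    for method in (b"GET ", b"POST ", b"HEAD ", b"PUT "):
        if payload.startswith(method):
            return "http"     # plaintext HTTP request line
    return "unknown"          # anything can ride on port 80
```

Before the first data packet, this function has nothing to work with—which is exactly why every engine, App-ID included, starts from IP and port and refines the verdict as packets flow.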

Let’s assume for a moment we take the position that we don’t care about ports at all, only applications, as I often hear Palo Alto Networks reps say. What can happen? First of all, you can do reconnaissance on anything beyond the firewall. If you do this rapidly, you’ll probably trigger the various protections in place to detect port scanning and similar activity, but it could easily be done in a “low and slow” manner that these detections probably won’t trigger.

Even Palo Alto Networks has a concept of ports that tie in with applications. This is configured on a per-rule/service basis, as shown below:


Per a post on their community, Application Default means:

Choosing this means that the selected applications are allowed or denied only on their default ports defined by Palo Alto Networks. This option is recommended for allow policies because it prevents applications from running on unusual ports and protocols, which if not intentional, can be a sign of undesired application behavior and usage.

Which is, of course, correct. Ports do matter, as they filter out a lot of undesirable traffic. Palo Alto Networks simply masks this fact by allowing you to build only application-centric policies, rather than separate policies for ports and applications the way Check Point currently does it.

Is an application-centric policy better? It certainly means fewer policies to configure, which is one benefit of Palo Alto’s solution. Check Point will offer similar functionality in the R80.10 release, which, at this writing, is available as a Public Early Availability release.

Whitelist versus Blacklist

This whole post spawned out of a discussion I started on LinkedIn when I posted this graphic, highlighting the number of applications supported by the different next generation firewall vendors:


Various Palo Alto reps on the thread pointed out the number of applications supported didn’t matter as much because the way you should do it is to only allow specific applications and block the rest. Which, if you have a single policy for ports and applications, is a little easier to achieve. It is also possible to achieve in Check Point, but it does require some additional effort compared to Palo Alto.

Even with a whitelist approach where you permit only a small number of applications to pass, you have to be able to differentiate safe traffic from malicious traffic. As an example, specific anonymizers can appear to behave like innocuous web browsing. This is why Palo Alto Networks and others can also identify specific malicious applications to help differentiate.

It’s also why the number of applications a particular solution can identify matters greatly. As an example, I ran a Security Checkup at a Palo Alto Networks customer and saw the following applications:


In this case, the Security Checkup appliance was positioned outside of a Palo Alto Networks gateway filtering traffic. The report from which this snapshot was taken ran in February 2016. You can see that Gom VPN and Betternet are clearly being allowed, based on their volume of traffic compared to some of the others, which are clearly being blocked given their limited traffic. I checked Applipedia and these anonymizers are still not supported (as of 11 December 2016, at least).

It’s also worth noting that a whitelist approach carries a bit more administrative overhead and only works when the applications you want to allow are defined. Being able to detect more applications is therefore better, even with a whitelisting approach. And, as noted before, this whitelist strategy will be easier to implement in the Check Point R80.10 release.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats as well as the competition. The views above, however, are my own.

Networks Without Borders

I’ve spent the better part of twenty years focusing on network security. That wasn’t what I started out to do in my life; I was just sort of there, and the industry grew up around me. I now see a day where network security is the exception rather than the rule.

Twenty years ago, people used a few apps, mostly hosted onsite, from a few wired locations. Most communications were not encrypted, to boot. This made it practical to use a perimeter security device to restrict who could go where and to monitor the flow of data.

These days, networks are abundant and fast. Users have multiple devices connecting across multiple networks, few of which go through any perimeter security device you control. Communications are plentiful, with an increasing percentage of them encrypted. The applications used are also plentiful and increasingly hosted in the cloud, i.e. on someone else’s infrastructure.

In new organizations where Software and/or Infrastructure as a Service is the norm, the traditional perimeter gateway serves almost no purpose. There’s nothing in the network to segment, and there’s little you can do in the network to protect it.

To be clear, the traditional perimeter is not going away anytime soon for many organizations. There’s far too much legacy infrastructure that still needs protecting, and a perimeter gateway may be your best bet there. However, if you’re only looking at security from a network perspective, you’re missing out on an increasingly large part of the picture. In the long run, visibility and security controls have to move closer to the endpoints: not only those the end user touches, which include traditional desktop/laptop and mobile devices, but also the servers they connect to.

For cloud infrastructure hosted in VMware, OpenStack, AWS, Azure, or similar, this can be done through the use of microsegmentation, but make sure you are able to inspect traffic beyond layers 3 and 4. The good news is that Software Defined Networking technologies make it easy to apply deep inspection only to the traffic that needs it, and not to all traffic. With the right solutions, security is enforced dynamically based on groups defined in the virtualization environment, without regard to IP addresses. Traditional physical network security controls can also make use of this information to make more intelligent enforcement decisions!

For Software as a Service offerings, you may need to utilize something like a cloud access security broker (CASB), a software tool or service that sits between an organization’s on-premises infrastructure and a cloud provider’s infrastructure. A CASB allows you to integrate familiar security controls with SaaS applications, extending visibility and enforcement of security policy beyond on-premises infrastructure.

On endpoints, it’s simply not enough to employ regular anti-virus anymore; you also need tools that can block zero-day threats, which usually enter a system through a web browser, email, or USB. Some vendors address this with highly instrumented approaches similar to Microsoft EMET or with on-endpoint virtualization, which of course adds load to endpoints that probably already have too many agents installed. Keeping the protection lightweight and effective is key.

Mobile devices have their own challenges. Mobile Device Management is a good start, but for true bring-your-own-device models, end users may object to the control it provides. It also does not address user/corporate data segmentation or mobile-focused malware. A threat prevention solution specifically for mobile threats is definitely required.

Ideally, of course, all of these solutions can be managed centrally, with events correlated across them. Some centralized identity framework that supports both on-premises and cloud-based applications will also be useful. Having identity correlated with your security events is even better.

It’s a challenge, but I feel like we finally have the technology to get this security thing right, or at least better than we’ve been able to do in the past. It’s going to take some effort to get there, along with supporting business processes and people, but I am hopeful organizations can and will get there.

Disclaimer: My employer, Check Point Software Technologies, does offer solutions to some of the above challenges. The views above, however, are my own.

Get Over Windows Defender Already, AV Vendors!

From That’s It. I’ve Had Enough!:

Users of Windows 10 have been complaining that the system is changing settings, uninstalling user-installed apps, and replacing them with standard Microsoft ones.

A similar thing’s been happening with security products.

When you upgrade to Windows 10, Microsoft automatically and without any warning deactivates all ‘incompatible’ security software and in its place installs… you guessed it – its own Defender antivirus. But what did it expect when independent developers were given all of one week before the release of the new version of the OS to make their software compatible? Even if software did manage to be compatible according to the initial check before the upgrade, weird things tended to happen and Defender would still take over.

And then the piece goes on to talk about how Microsoft is being anti-competitive and Kaspersky is going to take this up with official government bodies in the EU and Russia.

If we’re simply talking about Anti-Virus here, I don’t know that Kaspersky, or anyone else for that matter, is doing anything that much better than anyone else. The technology has inherent limits and, generally speaking, efficacy comes down to how quickly signatures are generated and deployed.

We know how effective AV is in general. It’s why Check Point and numerous other vendors, including Kaspersky, offer different solutions that address threats AV cannot by itself. This is where security software vendors should be focusing their efforts. Stop fighting with Microsoft over Windows Defender.

Disclaimer: My blog, my personal opinions. I’m sure you knew that.