Automation, Orchestration, and The Cloud

A while ago, I posted the following as a somewhat cryptic message on Twitter and LinkedIn:

To give this tweet a little more context, I’ll reference a previous post about the cloud, where I said the following:

In the cloud, infrastructure and applications can come and go with the push of a button. Need another 10 webservers? Done. Need to burst to handle three times the traffic? No problem. Sure, you’ve got to have physical machines to run on, but racking and stacking that stuff is easy. The physical topology? Flat. The virtual topology? Changes every second.

If you’re not treating your “cloud” infrastructure in an automated fashion, you’re doing it wrong. You’re also doomed to make the same mistakes and more that you’re making today. While some of the same tools can be used in the cloud, they integrate a bit differently. There are also a number of additional considerations that must be made for cloud—considerations that, quite frankly, are very different from physical networks.

I did get something wrong in the above statement, and it’s a mistake that a lot of people make. Instead of saying “automated fashion” I should have said “orchestrated fashion.” The reason is simple: while automation and orchestration are related, they are not the same thing.

Automation is something good sysadmins have been doing for 30+ years. Instead of doing a repetitive task over and over again, where you are bound to make mistakes, you build a script to do the hard work for you. If you’re clever, you might also build a framework to execute that script across a number of systems. I actually worked on a system that did this back in the early 90s. Chef is a more modern version of the framework we built using shell scripts and rsh (this was pre-SSH).
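To make that concrete, here’s a minimal sketch of that kind of automation in Python: run the same configuration commands, in order, on a list of hosts over SSH instead of typing them by hand. The host names and commands are placeholders, and it assumes you already have key-based SSH access to each machine:

    import subprocess

    # Hypothetical inventory and task list; in the early 90s this was shell scripts and rsh.
    HOSTS = ["web01.example.com", "web02.example.com", "web03.example.com"]
    COMMANDS = [
        "sudo apt-get update",
        "sudo apt-get install -y nginx",
        "sudo systemctl enable --now nginx",
    ]

    for host in HOSTS:
        for command in COMMANDS:
            # Run each step remotely, in order; stop on the first failure so we notice it.
            result = subprocess.run(["ssh", host, command])
            if result.returncode != 0:
                print(f"{host}: '{command}' failed with exit code {result.returncode}")
                break
        else:
            print(f"{host}: all commands completed")

Configuration management tools like Chef wrap this same pattern in inventories, idempotent resources, and far better error handling, but the basic job is unchanged.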

The key thing about automation is that you still have to know how to do whatever it is you’re trying to do and the order in which those commands should be run. You also have to be able to handle all the various error conditions along the way.

Orchestration is a level above automation: it’s about intent. Automation is leveraged to bring that intent to life, but orchestration is less concerned with how the result is achieved than with the fact that it is.

What Do Automation and Orchestration Have To Do With The Cloud?

Everything.

Automation and orchestration are what give the cloud its magical properties. Automation makes it possible to build an entire application stack, with the security you specify, in seconds; orchestration tells the system when and where to spin it up. Orchestration can also monitor the application stack for load and spin up more capacity as required, with all the necessary steps automated so it just happens.
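As a rough sketch of the distinction, here’s what an orchestration loop might look like in Python. The intent is declared up front (keep at least three web servers, add capacity under load), while the functions that actually build a server or report on the environment are hypothetical stubs standing in for the automation layer, not any particular cloud provider’s API:

    import random

    DESIRED_MIN = 3          # intent: always run at least three web servers
    SCALE_OUT_LOAD = 0.75    # intent: add capacity when average load exceeds 75%

    def provision_web_server():
        """Automation: build one fully configured web server (stubbed out here)."""
        print("provisioning a new web server...")

    def current_server_count():
        """Stub: ask the environment how many web servers exist right now."""
        return 2

    def average_load():
        """Stub: ask the monitoring system for the current average load (0.0 to 1.0)."""
        return random.random()

    def reconcile_once():
        # Orchestration doesn't care *how* a server gets built, only that the
        # observed state is brought in line with the declared intent.
        count = current_server_count()
        if count < DESIRED_MIN:
            for _ in range(DESIRED_MIN - count):
                provision_web_server()
        elif average_load() > SCALE_OUT_LOAD:
            provision_web_server()

    # A real orchestrator would run this on a schedule or in response to events.
    reconcile_once()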

Which means that if all you’re doing is taking your existing manually deployed applications and business processes and moving them onto AWS or Azure, all you’re doing is using someone else’s computer. You are gaining little to no benefit from the inherent automation and orchestration frameworks built into AWS or Azure.

By the way, the same thing goes for your VMware, OpenStack, or other similar privately hosted environments. You might gain some benefit from the consolidation of hardware, but you will gain none of the agility and will still have to contend with growing complexity.

Embracing automation and orchestration in a cloud environment (public and private) does require relearning some tasks. Some of the fundamental assumptions that underlie networking are a bit different. Which means you may not be able to deploy things the same way as you did in the past, but that’s ok. Not every component of every traditionally deployed application is necessarily automation and orchestration friendly.

The good news is that automation and orchestration can improve security by ensuring everything is deployed in its most secure manner by default, which includes using the most up-to-date components. They can also deploy full next generation threat prevention with Check Point vSEC or other similar tools.

Patching? Upgrading? Who does that in the cloud? You just redeploy your apps with the new versions automatically. If it fails for some reason, you can easily put the old versions back in.
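Here’s a minimal sketch of that redeploy-or-roll-back pattern in Python. All of the functions are hypothetical stubs for whatever your orchestration tooling actually does; the point is the flow, not the plumbing:

    def deploy(version):
        """Stub: stand up a fresh copy of the application stack at this version."""
        print(f"deploying {version}")

    def healthy(version):
        """Stub: run smoke tests against the newly deployed stack."""
        return False

    def switch_traffic_to(version):
        """Stub: point the load balancer at the given version."""
        print(f"traffic now flowing to {version}")

    def upgrade(old_version, new_version):
        # Instead of patching in place, deploy the new version alongside the old one.
        deploy(new_version)
        if healthy(new_version):
            switch_traffic_to(new_version)
        else:
            # "Rolling back" is just keeping traffic on the version that already works.
            print(f"{new_version} failed health checks; staying on {old_version}")
            switch_traffic_to(old_version)

    upgrade("app-1.0", "app-1.1")

Because nothing is ever patched in place, putting the old versions back is nothing more than pointing traffic at something that already exists and is known to work.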

By the way, those Software as a Service applications you and everyone else uses? They’re built this way, all with automation and orchestration on the backend to make it “just work.” When your executives tell you to “move to the cloud,” this is what they really want: services that just work.

Many IT organizations have not delivered on this vision. This includes ones that have supposedly moved to the cloud. Because without automation and orchestration, the cloud is just another computer.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats, even in the cloud. The above thoughts are my own.

Which Comes First, the Ports or the Application ID?

Back when I started working with the Check Point product in 1996, things were much simpler. We still had plenty of IPv4 addresses, there weren’t a whole lot of users using the Internet, and applications were few and far between. To permit applications through a perimeter, it was generally considered best practice to open up the necessary TCP/UDP ports or use an application proxy.

For some applications, the act of opening up ports was complicated because the ports were determined dynamically. A good example of a classic protocol that does this is FTP. There were others, of course, but this was common back in the 1990s and is still used today. Check Point (and other vendors) had to have intelligence built into their product to account for FTP and a number of other protocols.
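To make the FTP example concrete: on the control connection the client sends a PORT command that encodes the IP and port it will listen on for the data connection, and the firewall has to parse that command to know which port to open. Here’s a minimal sketch of just that parsing step in Python (the sample command and addresses are made up):

    def parse_ftp_port_command(line):
        """Extract the client's IP and data port from an FTP PORT command,
        e.g. 'PORT 192,0,2,10,19,137' -> ('192.0.2.10', 5001)."""
        values = line.strip().split(" ", 1)[1].split(",")
        ip = ".".join(values[:4])
        port = int(values[4]) * 256 + int(values[5])
        return ip, port

    ip, port = parse_ftp_port_command("PORT 192,0,2,10,19,137")
    print(ip, port)  # a stateful firewall would open a temporary pinhole to this port

A protocol-aware firewall can open that port only for the duration of the transfer; without that intelligence, you’d be stuck permanently opening a wide range of high ports.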

And then web-based applications became a thing. Now, if you’re allowing web traffic with no further classification, you might as well have an open firewall, because for all intents and purposes, it is. Even a single IP can host many different websites (some good, some not). And, of course, the content of a “good” website could also be “bad” at times. This created a clear need to control based not only on ports and IPs, but on other elements.

Enter Palo Alto Networks, who in 2007 released the first version of their product built around applications rather than IPs and ports. To be clear, this wasn’t a new concept, as firewalls had been doing this in some capacity for years. However, Palo Alto’s approach resonated with customers, they gained market share, and other vendors started implementing similar technology.

The technology that Palo Alto Networks developed is called App-ID, and they explain it as follows in their App-ID Tech Brief:

App-ID uses multiple identification techniques to determine the exact identity of applications traversing your network – irrespective of port, protocol, evasive tactic, or encryption. Identifying the application is the very first task performed by App-ID, providing you with the knowledge and flexibility needed to safely enable applications and secure your organization.

Sounds magical. I can now build a security policy based on applications alone without regard to the ports they use? Or can I?

[Image: App-ID identification flow from Palo Alto Networks’ documentation]

Even Palo Alto Networks’ own documentation says the very first check is based on IP and port, exactly the way every other vendor does it. You know why? Because that’s the only way to do it.

If I open a TCP connection to 192.0.2.1 port 80, the first packet sent is a TCP SYN. Here’s what I know from that:

  1. It’s likely a web-based connection. That said, anything can use port 80, so that’s only an assumption.
  2. It could be a connection to do a Google search, gmail, Google Maps, Google Drive, or any other Google property. Or Office 365 apps. Or something else.
  3. I might be able to do a reverse lookup on the IP to see where it’s going, but that adds latency and provides no guarantee the lookup will show you anything that will help identify the app or website. Or tell you if the content being served up is actually safe.

Bottom line: more information is needed. A few more packets must be let through on the connection before we know exactly what it is.
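Here’s a toy sketch in Python of why that is: with only the SYN you can record an address and a port, but you can’t name the application until some payload arrives, such as the Host header of an HTTP request. (Real products go far deeper, with TLS SNI, protocol decoders, and signatures; the function and logic below are simplified illustrations, not any vendor’s implementation.)

    def classify(dst_ip, dst_port, payload=b""):
        """Return the best guess possible with the data seen so far."""
        if not payload:
            # All a SYN gives us: an address and a port, i.e. an assumption.
            return f"unknown app, probably web (tcp/{dst_port} to {dst_ip})"
        if payload.startswith((b"GET ", b"POST ", b"HEAD ")):
            # Plain HTTP: the Host header finally tells us *which* website.
            for line in payload.split(b"\r\n"):
                if line.lower().startswith(b"host:"):
                    return "http to " + line.split(b":", 1)[1].strip().decode()
        return "need more packets to classify"

    print(classify("192.0.2.1", 80))
    print(classify("192.0.2.1", 80, b"GET / HTTP/1.1\r\nHost: maps.google.com\r\n\r\n"))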

Let’s assume for a moment that we take the position that we don’t care about ports at all, only applications, as I often hear Palo Alto Networks reps say. What can happen? First of all, you can do reconnaissance on anything beyond the firewall. If you do this rapidly, you’ll probably trigger the various protections in place to detect port scanning and similar activity, but it could easily be done in a “low and slow” manner that probably won’t trigger those detections.

Even Palo Alto Networks has a concept of ports that tie in with applications. This is configured on a per-rule/service basis, as shown below:

[Image: Palo Alto Networks rule with the Service column set to application-default]

Per a post on their community, Application Default means:

Choosing this means that the selected applications are allowed or denied only on their default ports defined by Palo Alto Networks. This option is recommended for allow policies because it prevents applications from running on unusual ports and protocols, which if not intentional, can be a sign of undesired application behavior and usage.

Which is, of course, correct. Ports do matter: filtering on them weeds out a lot of undesirable traffic. Palo Alto Networks simply masks this fact by having you build a single application-centric policy rather than a separate policy for ports and applications the way Check Point currently does it.
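Conceptually, application-default is just a port check layered on top of application identification. Here’s a rough sketch in Python; the default-port table and function are hypothetical stand-ins, not any vendor’s actual data or API:

    # Hypothetical default ports per application (the real list is vendor-maintained).
    DEFAULT_PORTS = {
        "web-browsing": {80},
        "ssl": {443},
        "dns": {53},
        "ssh": {22},
    }

    def allowed(app, dst_port, allowed_apps):
        """Allow the flow only if the app is whitelisted AND on its default port."""
        return app in allowed_apps and dst_port in DEFAULT_PORTS.get(app, set())

    print(allowed("web-browsing", 80, {"web-browsing", "ssl"}))    # True
    print(allowed("web-browsing", 8888, {"web-browsing", "ssl"}))  # False: wrong port
    print(allowed("ssh", 22, {"web-browsing", "ssl"}))             # False: app not allowed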

Is an application-centric policy better? It certainly means fewer policies to configure, which is one benefit of Palo Alto’s solution. Check Point will offer similar functionality in the R80.10 release, which, as of this writing, is available as a Public Early Availability release.

Whitelist versus Blacklist

This whole post grew out of a discussion I started on LinkedIn when I posted this graphic highlighting the number of applications supported by the different next generation firewall vendors:

[Image: chart comparing the number of applications supported by various next generation firewall vendors, October 2016]

Various Palo Alto reps on the thread argued that the number of applications supported doesn’t matter that much, because the right way to do it is to allow only specific applications and block the rest. Which, if you have a single policy for ports and applications, is a little easier to achieve. It is also possible to achieve in Check Point, but it does require some additional effort compared to Palo Alto.

Even with a whitelist approach where you permit only a small number of applications to pass, you have to be able to differentiate safe traffic from malicious traffic. As an example, certain anonymizers can appear to behave like innocuous web browsing. This is why Palo Alto Networks and others also identify specific malicious applications, to help make that distinction.

It’s also why the number of applications a particular solution can identify matters greatly. As an example, I ran a Security Checkup at a Palo Alto Networks customer and saw the following applications:

[Image: Security Checkup report excerpt showing anonymizer applications detected]

In this case, the Security Checkup appliance was positioned outside a Palo Alto Networks gateway that was filtering traffic. The report from which this snapshot was taken ran in February 2016. You can tell from the traffic volumes that Gom VPN and Betternet are clearly being allowed, while some of the others, with far less traffic, are clearly being blocked. I checked Applipedia, and these anonymizers are still not supported (as of 11th December 2016, at least).

It’s also worth noting that a whitelist approach carries a bit more administrative overhead and only works when the applications you want to allow are defined. That’s one more reason why being able to detect more applications is better, even if you employ whitelisting. And, as noted before, this whitelist strategy will be easier to implement in the Check Point R80.10 release.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats as well as the competition. The views above, however, are my own.

Networks Without Borders

I’ve spent the better part of twenty years focusing on network security. That wasn’t what I started out to do in my life; I was just sort of there, and the industry grew up around me. I now see a day coming where network security is the exception rather than the rule.

Twenty years ago, people were using a few apps, mostly hosted onsite, from a few wired locations. Most of the communications were not encrypted, to boot. This made it practical to use a perimeter security device to restrict who could go where and to monitor the flow of data.

These days, broadband networks are abundant. Users have multiple devices connecting across multiple networks, few of which pass through a perimeter security device you can control. Communications are plentiful, with an increasing percentage of them encrypted. The applications in use are also plentiful and increasingly hosted in the cloud, i.e. on someone else’s infrastructure.

In newer organizations built on Software and/or Infrastructure as a Service, the traditional perimeter gateway serves almost no purpose. There’s nothing in the network to segment and little you can do in the network to protect.

To be clear, the traditional perimeter is not going away anytime soon for many organizations. There’s far too much legacy infrastructure that still needs protecting, and for that a perimeter gateway may be your best bet. However, if you’re only looking at security from a network perspective, you’re missing an increasingly large part of the picture. In the long run, visibility and security controls have to move closer to the endpoints: not only the devices end users touch, which include traditional desktops, laptops, and mobile devices, but also the servers they connect to.

For cloud infrastructure hosted in VMware, OpenStack, AWS, Azure, or similar environments, this can be done through the use of microsegmentation, but make sure you are able to inspect traffic beyond layers 3 and 4. The good news is that Software Defined Networking technologies make it easy to apply deep inspection only to the traffic that needs it rather than to all traffic. With the right solutions, security is enforced dynamically based on groups defined in the virtualization environment, without regard to IP addresses. Traditional physical network security controls can also make use of this information to make more intelligent enforcement decisions!
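To illustrate the idea, here’s a small Python sketch of policy expressed against group membership instead of IP addresses. The tags, addresses, and policy entries are entirely made up; in practice the group-to-workload mapping would come from the virtualization or SDN platform:

    # Hypothetical: tags the virtualization layer assigns to workloads.
    # IPs can change every time an instance is redeployed; the tags stay meaningful.
    WORKLOAD_TAGS = {
        "10.0.1.15": {"web-tier"},
        "10.0.2.7":  {"app-tier"},
        "10.0.3.42": {"db-tier"},
    }

    # Policy is expressed in terms of groups, not addresses: (source, destination, port).
    POLICY = [
        ("web-tier", "app-tier", 8443),
        ("app-tier", "db-tier", 5432),
    ]

    def permitted(src_ip, dst_ip, dst_port):
        """Look up each workload's current tags, then match against group-based rules."""
        src_tags = WORKLOAD_TAGS.get(src_ip, set())
        dst_tags = WORKLOAD_TAGS.get(dst_ip, set())
        return any(src in src_tags and dst in dst_tags and port == dst_port
                   for src, dst, port in POLICY)

    print(permitted("10.0.1.15", "10.0.2.7", 8443))   # True: web-tier -> app-tier
    print(permitted("10.0.1.15", "10.0.3.42", 5432))  # False: web may not talk to db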

For Software as a Service offerings, you may need to utilize something like a cloud access security broker (CASB), a software tool or service that sits between an organization’s on-premises infrastructure and a cloud provider’s infrastructure. A CASB allows you to integrate familiar security controls with SaaS applications, extending visibility and enforcement of security policy beyond your on-premises infrastructure.

On endpoints, it’s simply not enough to run regular anti-virus anymore; you need tools that can block zero-day threats, which usually enter a system through a web browser, email, or USB. Some vendors address this with heavily instrumented approaches similar to Microsoft EMET or with on-endpoint virtualization, which of course adds load to endpoints that probably already have too many agents installed. Keeping the protection lightweight and effective is key.

Mobile devices have their own challenges. Mobile Device Management is a good start, but for true bring-your-own-device models, end users may object to the controls it imposes. It also does not address user/corporate data segmentation or mobile-focused malware. A threat prevention solution specifically for mobile threats is definitely required.

Ideally, of course, all of these solutions can be managed centrally, with events correlated across them. A centralized identity framework that supports both on-premises and cloud-based applications will also be useful. Having identity correlated with your security events is even better.

It’s a challenge, but I feel like we finally have the technology to get this security thing right, or at least better than we’ve been able to do in the past. It’s going to take some effort to get there, along with supporting business processes and people, but I am hopeful organizations can and will get there.

Disclaimer: My employer, Check Point Software Technologies, does offer solutions to some of the above challenges. The views above, however, are my own.

Get Over Windows Defender Already, AV Vendors!

From That’s It. I’ve Had Enough!:

Users of Windows 10 have been complaining that the system is changing settings, uninstalling user-installed apps, and replacing them with standard Microsoft ones.

A similar thing’s been happening with security products.

When you upgrade to Windows 10, Microsoft automatically and without any warning deactivates all ‘incompatible’ security software and in its place installs… you guessed it – its own Defender antivirus. But what did it expect when independent developers were given all of one week before the release of the new version of the OS to make their software compatible? Even if software did manage to be compatible according to the initial check before the upgrade, weird things tended to happen and Defender would still take over.

And then the piece goes on to talk about how Microsoft is being anti-competitive and Kaspersky is going to take this up with official government bodies in the EU and Russia.

If we’re simply talking about Anti-Virus here, I don’t know that Kaspersky, or anyone else for that matter, is doing anything that much better than anyone else. The technology has inherent limits and, generally speaking, efficacy comes down to how quickly signatures are generated and deployed.

We know how effective AV is in general. It’s why Check Point and numerous other vendors, including Kaspersky, offer different solutions that address threats AV cannot by itself. This is where security software vendors should be focusing their efforts. Stop fighting with Microsoft over Windows Defender.

Disclaimer: My blog, my personal opinions. I’m sure you knew that.

A Word About Competition in the Information Security Industry

The devices, networks, and social institutions we use today are only useful because, on the whole, most people largely trust them. If that trust gets eroded, people will not make use of them. It took me many years of working at Nokia to realize that regardless of what I do in life, I am always going to be looking for ways to improve security, with the ultimate goal of maintaining that trust.

As a company, Check Point firmly believes customers deserve the best security for their digital information. That, plus my long-time history with Check Point, was why I ultimately decided to go work for them when they acquired Nokia’s Security Appliance Business back in 2009. The talented, smart people I work with day in and day out, all working toward the same goal, are why I’m still here, even as a few of my friends, such as Kellman, have recently left.

That said, you may have noticed in my social media feeds that I’ve spent a little bit of time talking about Check Point’s competition. This is no accident as I see a lot of nonsense out there. I will admit to using my small platform to bring facts, understanding, and details to light, much as I did with my FireWall-1 FAQ back in the day.

To be clear, I think healthy competition is a good thing. It raises all boats, regardless of whose product you ultimately use. Despite our differences in approach, all of us competing in infosec share a common enemy: the malicious actors who attempt to penetrate and disrupt our customers’ networks. We would do better as an industry to remember that and work together toward defeating that common enemy.

Despite that common goal, everyone who works for a security vendor wants to succeed over the competition. As part of that competition, every vendor also puts out information that puts their offering in the best light, such as Check Point’s recent Facts vs. Hype campaign. Sometimes, that has the impact of throwing a bit of shade, perhaps 50 shades or so. This is all part of normal, healthy competition that happens in any industry.

With Palo Alto Networks, however, it’s clearly different. Nir Zuk, the co-founder of Palo Alto Networks, drives a car with the license plate CHKPKLR. This has been widely known since at least 2005, and a picture of said license plate was featured prominently at their recent Sales Kick Off:

[Image: Sales Kick Off slide featuring the CHKPKLR license plate]

The guy up on stage? Their CEO Mark McLaughlin, propagating the “Check Point Killer” message to the assembled masses.

Over the years, I’ve heard countless stories of how Nir Zuk would come in to talk to a customer and spend a significant amount of time talking about Check Point, to the point where he was thrown out of at least one customer meeting! Given how some customers feel about Check Point, I’m sure that tactic did help to drive some sales.

[Image: Sales Kick Off slide reading “Gil Shwed is not my friend”]

The guy on stage here? Palo Alto Networks CMO Rene Bonvanie.

It’s clear hatred of Check Point is institutionalized at Palo Alto Networks, and it comes straight from the top. It makes me question what business they are truly in. If paloaltonetworks.security doesn’t even resolve to their own website, it must not be the security business.

Disclaimer: My blog, my personal opinions. I’m sure you knew that.