Which Comes First, the Ports or the Application ID?

Back when I started working with the Check Point product in 1996, things were much simpler. We still had plenty of IPv4 addresses, relatively few people were using the Internet, and applications were few and far between. To permit applications through a perimeter, it was generally considered best practice to open up the necessary TCP/UDP ports or use an application proxy.

For some applications, opening up ports was complicated because the ports were determined dynamically. FTP is the classic example of a protocol that does this. There were others, of course, and the pattern was common back in the 1990s and is still in use today. Check Point (and other vendors) had to build intelligence into their products to account for FTP and a number of other protocols.
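To make this concrete, here’s a minimal sketch (my own illustration, not any vendor’s actual code) of the work a firewall has to do for passive-mode FTP: the data endpoint is negotiated inside the control channel, so the firewall must parse the server’s 227 reply to learn which port to open a pinhole for.

```python
import re

def pasv_endpoint(reply: str):
    """Extract the dynamically negotiated data endpoint from a 227 reply."""
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p_hi, p_lo = map(int, match.groups())
    # The data port is encoded as two bytes: high * 256 + low.
    return f"{h1}.{h2}.{h3}.{h4}", p_hi * 256 + p_lo

# A stateful firewall must parse this to open a pinhole for the data connection:
print(pasv_endpoint("227 Entering Passive Mode (192,0,2,1,200,21)"))
# ('192.0.2.1', 51221)
```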

And then web-based applications became a thing. Now, if you’re allowing web traffic with no further classification, you might as well have an open firewall, because for all intents and purposes, it is. Even a single IP can host many different websites (some good, some not). And, of course, the content of a “good” website could also be “bad” at times. This created a clear need to control traffic based not only on ports and IPs, but on other elements as well.

Enter Palo Alto Networks, who in 2007 released the first version of their product built around applications rather than IPs and ports. To be clear, this wasn’t a new concept, as firewalls had been doing it in some capacity for years. However, Palo Alto’s approach resonated with customers, they gained market share, and other vendors started implementing similar technology.

The technology Palo Alto Networks developed is called App-ID, and they explain it as follows in their App-ID Tech Brief:

App-ID uses multiple identification techniques to determine the exact identity of applications traversing your network – irrespective of port, protocol, evasive tactic, or encryption. Identifying the application is the very first task performed by App-ID, providing you with the knowledge and flexibility needed to safely enable applications and secure your organization.

Sounds magical. I can now build a security policy based on applications alone without regard to the ports they use? Or can I?

[Image: App-ID packet classification flow, from Palo Alto Networks documentation]

Even Palo Alto Networks’ own documentation says the very first check is based on IP and port, exactly the way every other vendor does it. You know why? Because that’s the only way to do it.

If I open a TCP connection to 192.0.2.1 port 80, the first packet sent is a TCP SYN. Here’s what I know from that:

  1. It’s likely a web-based connection. That said, anything can use port 80, so that’s only an assumption.
  2. It could be a connection for a Google search, Gmail, Google Maps, Google Drive, or any other Google property. Or Office 365 apps. Or something else.
  3. I might be able to do a reverse lookup on the IP to see where it’s going, but that adds latency, and there’s no guarantee the result will help identify the app or website, much less tell me whether the content being served is actually safe.

Bottom line: more information is needed. A few more packets must be let through on the connection before we know exactly what it is.
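To illustrate, here’s a toy sketch (my own assumptions, nothing like a real classification engine) of why the handshake alone can’t identify the application: the SYN carries no payload, so classification has to wait for the first data bytes.

```python
def classify(first_payload: bytes) -> str:
    """Guess the application from the first payload bytes of a flow."""
    if not first_payload:
        # The SYN/SYN-ACK/ACK handshake carries no payload, so at this
        # point the only "classification" available is a port-based guess.
        return "unknown (handshake only)"
    if first_payload.startswith((b"GET ", b"POST ", b"HEAD ", b"PUT ")):
        return "http"  # cleartext HTTP; the Host header may name the site
    if first_payload[:1] == b"\x16":
        return "tls"   # TLS ClientHello; the SNI may reveal the destination
    return "unknown"

print(classify(b""))                                   # handshake only
print(classify(b"GET / HTTP/1.1\r\nHost: gmail.com"))  # http
```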

Let’s assume for a moment we take the position that we don’t care about ports at all, only applications, as I often hear Palo Alto Networks reps say. What can happen? First of all, you can do reconnaissance on anything beyond the firewall. If you do this rapidly, you’ll probably trip the various protections in place to detect port scanning and similar activity, but it could easily be done in a “low and slow” manner that probably won’t trigger those detections.

Even Palo Alto Networks has a concept of ports that tie in with applications. This is configured on a per-rule/service basis, as shown below:

[Screenshot: the application-default service setting in a Palo Alto Networks security rule]

Per a post on their community, Application Default means:

Choosing this means that the selected applications are allowed or denied only on their default ports defined by Palo Alto Networks. This option is recommended for allow policies because it prevents applications from running on unusual ports and protocols, which if not intentional, can be a sign of undesired application behavior and usage.

Which is, of course, correct. Ports do matter, as constraining them filters out a lot of undesirable traffic. Palo Alto Networks simply masks this fact by having you build a single application-centric policy rather than separate policies for ports and applications, the way Check Point currently does it.
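As a rough sketch of what application-default enforcement boils down to, consider the following. The default-port table and app names here are illustrative assumptions, not Palo Alto Networks’ actual content database: an application is permitted only if it’s whitelisted and it shows up on a port its definition declares as the default.

```python
# Illustrative default-port table; not any vendor's actual data.
DEFAULT_PORTS = {
    "web-browsing": {80},
    "ssl": {443},
    "dns": {53},
    "ssh": {22},
}

def allowed(app: str, port: int, whitelist: set) -> bool:
    """Allow only whitelisted apps, and only on their default ports."""
    return app in whitelist and port in DEFAULT_PORTS.get(app, set())

print(allowed("ssh", 22, {"ssh"}))    # True
print(allowed("ssh", 8080, {"ssh"}))  # False: unusual port, possible evasion
```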

Is an application-centric policy better? It certainly means fewer policies to configure, which is one benefit of Palo Alto’s solution. Check Point will offer similar functionality in the R80.10 release, which, at this writing, is available as a Public Early Availability release.

Whitelist versus Blacklist

This whole post grew out of a discussion I started on LinkedIn when I posted this graphic, highlighting the number of applications supported by different next-generation firewall vendors:

[Chart: number of applications supported by next-generation firewall vendor, October 2016]

Various Palo Alto reps on the thread pointed out that the number of applications supported doesn’t matter as much, because the right approach is to allow only specific applications and block the rest. That is a little easier to achieve if you have a single policy for ports and applications. It is also possible in Check Point, but it requires some additional effort compared to Palo Alto.

Even with a whitelist approach, where you permit only a small number of applications to pass, you have to be able to differentiate safe traffic from malicious traffic. As an example, certain anonymizers can appear to behave like innocuous web browsing. This is why Palo Alto Networks and others also identify specific malicious applications, to help make that distinction.
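Here’s a toy sketch of that coverage problem (the signature set and fallback behavior are my own illustrative assumptions): if the engine has no signature for an anonymizer, its HTTP-shaped traffic may classify as generic web browsing and sail straight through an allow rule.

```python
# Hypothetical signature set; the point is what it's missing.
SIGNATURES = {"known-anonymizer": b"XYZVPN"}
WHITELIST = {"web-browsing", "ssl"}

def classify(payload: bytes) -> str:
    for app, pattern in SIGNATURES.items():
        if pattern in payload:
            return app
    # No signature matched: HTTP-shaped traffic falls back to a generic app.
    return "web-browsing"

# An anonymizer the engine has no signature for looks like web browsing
# and passes the whitelist:
print(classify(b"GET /tunnel HTTP/1.1\r\nHost: example.net") in WHITELIST)  # True
```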

It’s also why the number of applications a particular solution can identify matters greatly. As an example, I ran a Security Checkup at a Palo Alto Networks customer and saw the following applications:

[Report excerpt: anonymizer applications observed during the Security Checkup]

In this case, the Security Checkup appliance was positioned outside a Palo Alto Networks gateway that was filtering traffic. The report this snapshot was taken from ran in February 2016. Judging by the traffic volumes, Gom VPN and Betternet are clearly being allowed, while the others, with far less traffic, are clearly being blocked. I checked Applipedia, and these anonymizers are still not supported (as of 11 December 2016, at least).

Being able to detect more applications is clearly better, even if you employ a whitelisting approach, which carries a bit more administrative overhead and only works when the applications you want to allow are defined. Thus again, more is better. And, as noted before, this whitelist strategy will be easier to implement in the Check Point R80.10 release.

Disclaimer: My employer, Check Point Software Technologies, is always trying to stay one step ahead of the threats as well as the competition. The views above, however, are my own.

Networks Without Borders

I’ve spent the better part of twenty years focusing on network security. That wasn’t what I set out to do in life; I was just sort of there, and the industry grew up around me. I now see a day coming where network security is the exception rather than the rule.

Twenty years ago, people used a few apps, mostly hosted onsite, from a few wired locations. Most communications weren’t encrypted, to boot. This made it practical to use a perimeter security device to restrict who could go where and to monitor the flow of data.

These days, networks are abundant and broadband is ubiquitous. Users have multiple devices connecting across multiple networks, few of which pass through any perimeter security device you control. Communications are plentiful, with an increasing percentage of them encrypted. Applications are also plentiful and increasingly hosted in the cloud, i.e., on someone else’s infrastructure.

In new organizations built on Software and/or Infrastructure as a Service, the traditional perimeter gateway serves almost no purpose. There’s nothing in the network to segment, and there’s little you can do in the network to protect.

To be clear, the traditional perimeter is not going away anytime soon for many organizations. There’s far too much legacy infrastructure that still needs protecting, and a perimeter gateway may be your best bet for that. However, if you’re only looking at security from a network perspective, you’re missing an increasingly large part of the picture. In the long run, visibility and security controls have to move closer to the endpoints. Not only the ones end users touch, including traditional desktops, laptops, and mobile devices, but also the servers they connect to.

For cloud infrastructure hosted in VMware, OpenStack, AWS, Azure, or similar, this can be done through microsegmentation, but make sure you are able to inspect traffic beyond layers 3 and 4. The good news is that Software Defined Networking technologies make it easy to apply deep inspection only to the traffic that needs it, rather than to all traffic. With the right solutions, security is enforced dynamically based on groups defined in the virtualization environment, without regard to IP addresses. Traditional physical network security controls can also make use of this information to make more intelligent enforcement decisions!
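As a sketch of what group-based enforcement looks like (the group names, tiers, and lookup are all hypothetical), the policy is keyed off workload groups supplied by the virtualization platform, with deep inspection steered only to the flows that need it; IP addresses never appear in the rules.

```python
# Group membership as fed from the virtualization platform's inventory
# (VM name -> group tag); note that no IP addresses appear in the policy.
GROUP_OF = {
    "vm-web-01": "web-tier",
    "vm-app-07": "app-tier",
    "vm-db-03": "db-tier",
}

# (source group, destination group) -> action; deep (L7) inspection is
# applied only to the flows that need it.
POLICY = {
    ("web-tier", "app-tier"): "allow+inspect",
    ("app-tier", "db-tier"): "allow",
}

def decide(src_vm: str, dst_vm: str) -> str:
    key = (GROUP_OF.get(src_vm, "unknown"), GROUP_OF.get(dst_vm, "unknown"))
    return POLICY.get(key, "deny")  # anything unlisted is denied

print(decide("vm-web-01", "vm-app-07"))  # allow+inspect
print(decide("vm-web-01", "vm-db-03"))   # deny: web tier can't reach the DB
```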

For Software as a Service offerings, you may need something like a cloud access security broker (CASB), a software tool or service that sits between an organization’s on-premises infrastructure and a cloud provider’s infrastructure. A CASB allows you to integrate familiar security controls with SaaS applications, extending visibility and policy enforcement beyond on-premises infrastructure.

On endpoints, it’s simply not enough to employ regular anti-virus anymore; you also need tools that can block zero-day threats, which usually enter a system through a web browser, email, or USB. Some vendors address this with highly instrumented approaches similar to Microsoft EMET or with on-endpoint virtualization, which of course adds load to endpoints that probably already have too many agents installed. Keeping the protection lightweight and effective is key.

Mobile devices have their own challenges. Mobile Device Management is a good start, but for true bring-your-own-device models, end users may object to the controls it imposes. It also does not address user/corporate data segmentation or mobile-focused malware. A threat prevention solution specific to mobile threats is definitely required.

Ideally, of course, all of these solutions can be managed centrally, with events correlated across them. A centralized identity framework that supports both on-premises and cloud-based applications will also be useful. Having identity correlated with your security events is even better.

It’s a challenge, but I feel like we finally have the technology to get this security thing right, or at least better than we’ve managed in the past. It’s going to take some effort, along with supporting business processes and people, but I am hopeful organizations can and will get there.

Disclaimer: My employer, Check Point Software Technologies, does offer solutions to some of the above challenges. The views above, however, are my own.

Get Over Windows Defender Already, AV Vendors!

From That’s It. I’ve Had Enough!:

Users of Windows 10 have been complaining that the system is changing settings, uninstalling user-installed apps, and replacing them with standard Microsoft ones.

A similar thing’s been happening with security products.

When you upgrade to Windows 10, Microsoft automatically and without any warning deactivates all ‘incompatible’ security software and in its place installs… you guessed it – its own Defender antivirus. But what did it expect when independent developers were given all of one week before the release of the new version of the OS to make their software compatible? Even if software did manage to be compatible according to the initial check before the upgrade, weird things tended to happen and Defender would still take over.

And then the piece goes on to talk about how Microsoft is being anti-competitive and Kaspersky is going to take this up with official government bodies in the EU and Russia.

If we’re simply talking about anti-virus here, I don’t know that Kaspersky, or anyone else for that matter, is doing it that much better than the rest. The technology has inherent limits, and, generally speaking, efficacy comes down to how quickly signatures are generated and deployed.

We know how effective AV is in general. It’s why Check Point and numerous other vendors, including Kaspersky, offer additional solutions that address the threats AV cannot handle by itself. This is where security software vendors should be focusing their efforts. Stop fighting with Microsoft over Windows Defender.

Disclaimer: My blog, my personal opinions. I’m sure you knew that.

A Word About Competition in the Information Security Industry

The devices, networks, and social institutions we use today are only useful because, on the whole, most people largely trust them. If that trust erodes, people will stop using them. It took me many years of working at Nokia to realize that, regardless of what I do in life, I am always going to be looking for ways to improve security, with the ultimate goal of maintaining that trust.

As a company, Check Point firmly believes customers deserve the best security for their digital information. That, plus my long history with Check Point, is why I ultimately decided to work for them when they acquired Nokia’s Security Appliance Business back in 2009. The talented, smart people I work with day in and day out toward the same goal are why I’m still here, even as a few of my friends, like Kellman, have recently left.

That said, you may have noticed in my social media feeds that I’ve spent a little bit of time talking about Check Point’s competition. This is no accident as I see a lot of nonsense out there. I will admit to using my small platform to bring facts, understanding, and details to light, much as I did with my FireWall-1 FAQ back in the day.

To be clear, I think healthy competition is a good thing. It raises all boats, regardless of whose product you ultimately use. Despite our differences in approach, all infosec competitors have a common enemy: the malicious actors who attempt to penetrate and disrupt our customers’ networks. We would do better as an industry to remember that and work together toward defeating that common enemy.

Despite that common goal, everyone who works for a security vendor wants to succeed over the competition. As part of that competition, every vendor also puts out information that puts their offering in the best light, such as Check Point’s recent Facts vs. Hype campaign. Sometimes, that has the impact of throwing a bit of shade, perhaps 50 shades or so. This is all part of normal, healthy competition that happens in any industry.

With Palo Alto Networks, however, it’s clearly different. Nir Zuk, the co-founder of Palo Alto Networks, drives a car with the license plate CHKPKLR. This has been widely known since at least 2005, and a picture of said license plate was featured prominently at their recent Sales Kick Off:

[Photo: the CHKPKLR license plate shown on stage at the Sales Kick Off]

The guy up on stage? Their CEO Mark McLaughlin, propagating the “Check Point Killer” message to the assembled masses.

Over the years, I’ve heard countless stories of how Nir Zuk would come in to talk to a customer and spend a significant amount of time talking about Check Point, to the point where he was thrown out of at least one customer meeting! Given how some customers feel about Check Point, I’m sure that tactic did help to drive some sales.

[Photo: a slide reading “Gil Shwed is not my friend”]

The guy on stage here? Palo Alto Networks CMO Rene Bonvanie.

It’s clear that hatred of Check Point is institutionalized at Palo Alto Networks, and it comes straight from the top. It makes me question what business they are truly in. If paloaltonetworks.security doesn’t even resolve to their own website, it must not be the security business.

Disclaimer: My blog, my personal opinions. I’m sure you knew that.

Is Past (Security) Performance Indicative of Future Results?

It’s a phrase you will see in the fine print of any document related to past performance of a money manager, mutual fund, or managed financial account: “Past performance is not necessarily indicative of future results.” The same disclaimer could easily be applied to information security products and their ability to stop threats.

The most obvious technology this statement applies to: anti-virus. While it does a great job at what it was designed to do (block known, malicious files), it is limited in the kinds of malicious files it can identify. It can also be a source of additional vulnerabilities, such as the ones Google recently discovered in Symantec’s endpoint products. I suspect any widely deployed security technology will suffer a similar fate: either the technology itself is attacked, or the technology is rendered ineffective through innovation by the bad guys.

Where I think “past performance” is indicative for security products is in how quickly security issues discovered in the product are remediated. Because let’s face it: every security product will be vulnerable to some discovered issue at some point. What ultimately matters is how quickly you remediate those issues.

For a company that uses “Prevention is Non-Negotiable” as their marketing message, Palo Alto Networks is not so good at fixing security issues discovered in their products. Here’s the latest example from the PAN-OS 7.1.4 release notes:

[Screenshot: fixed CVE entry from the PAN-OS 7.1.4 release notes]

The National Vulnerability Database lists this as a high-severity issue. The time to issue a public patch? Nearly six months from the date of discovery. Based on the response times Check Point has seen when responsibly disclosing security vulnerabilities to them, that timeframe doesn’t seem all that surprising.

To be fair, it’s possible Palo Alto Networks did a risk assessment on these issues and determined the likelihood of exploitation was low enough that an urgent fix wasn’t needed. They may be right, but when you preach “Prevention is Non-Negotiable,” taking six months to fix a known security vulnerability in your product just looks bad. Actions, ultimately, speak louder than marketing.

Disclaimer: My employer, Check Point, believes in addressing issues like this quickly. These views, however, are my own.