CPX 2016 Chicago Post-Mortem

It’s been a few years since I’ve been able to attend Check Point Experience (CPX), the annual user conference held by Check Point in Europe and the US. This year, I attended the event in Chicago, which, according to Check Point Founder and CEO Gil Shwed, had close to 2,000 attendees. It’s definitely smaller than RSA Conference, a more general security industry trade show that I also attended earlier this year. It is a bit more intimate, though, which is a good thing.

Yes, I managed to get the registration folks to print PhoneBoy on my badge

CPX runs for two days and has a combination of general sessions and smaller, more focused sessions covering individual products and services. A handful of customers also schedule one-on-ones with Check Point executives. There is also an expo floor where vendors have booths demonstrating their products and services. Check Point had a few booths for its various product and service offerings, and yes, I had to stand at a couple of them to do my part for the cause.

While I always enjoy hearing what the Check Point executives have to say, the reality is that, as a Check Point employee, I had already heard a fair amount of it at our Sales Kick Off earlier this year. I was more interested in the other speakers at the general sessions, which, at CPX, included a congressman, a futurist, and customers.

The guy from the Missouri State Police Department provided the most distinctive perspective, I thought. Not so much about the threats, though it was interesting to hear a bit about the Anonymous attacks they suffered during the Ferguson incident a couple of years back. What stuck with me was the why, something we often don’t think a lot about. In this case, the “hacking” and “doxxing” Anonymous was doing in Ferguson had immediate, real-world consequences for the very people entrusted with protecting citizens.

Information security for the Missouri State Police Department is about protecting these fine folks who protect our safety. A bit of a different goal from a lot of other organizations.

One of my favorite sessions was from Daniel Burrus, a best-selling author and futurist who has been predicting the future for more than three decades. There was lots of talk about hard and soft trends and about using all that big data to look at the future instead of the past.

The Pathways to Innovation in the above slide were originally written in 1985. They haven’t changed. My favorite insight from his presentation? Rather than complaining about government regulations (a hard trend), look for the opportunities they present. They’re there.

And, of course, there’s Moti Sagey. I always enjoy his Sales Kick Off presentations about the competition and his CPX presentation did not disappoint. Even though he is part of Check Point’s marketing organization, his presentations are low on fluff and high on facts, with plenty of humor.

There were a few tracks of breakout sessions, which I have to admit I did not attend, both because I already knew a lot of the content and because I had to work some of the Check Point booths on our expo floor. The feedback I got from customers on the sessions was excellent: they provided a lot of great information on new and recently announced products.

Other Vendors at CPX 2016 Chicago

Over the 23-year history of Check Point, a lot of former employees have gone on to start their own companies or work for companies started by former Check Point employees. One of the newer entrants in this space that had a booth at our expo was a company called Fireglass, which basically takes all of the code that runs in a browser and runs it somewhere else, exposing only visuals to the end user. It can also use Check Point’s SandBlast technology to handle file downloads, reducing the risk of zero-day malware entering the end user’s workstation. It’s extremely clever.

Another company involving former Check Point employees is GuardiCore, which has a clever solution for figuring out the traffic flows inside your virtualized environment so you can perform appropriate microsegmentation with vSEC. It can also identify rogue traffic flows, which is also a useful feature. For a bit more, check out Micro-Segmentation, the right way. Also, Product Manager Lior Neudorfer snapped a photo with me:

Indeni is a tool that monitors security devices like Check Point to, in their words, “power smarter networks through machine learning and predictive analysis technology, enabling companies to focus on growth acceleration rather than network failures.” They gave out happy face shirts–who can be unhappy wearing a happy face shirt?–and let you shoot Nerf guns at their booth. Looks easier than it is.

One other notable company was Avanan. They are a cloud access security broker (CASB), which sits between an organization’s on-premises infrastructure and a cloud provider’s infrastructure. CASBs integrate familiar security controls with SaaS applications to extend visibility and enforcement of security policy beyond on-premises infrastructure. Avanan works with a number of security vendors, including Check Point. Specific to Check Point, they support Threat Emulation, Anti-Virus, and Data Loss Prevention with a growing list of SaaS applications such as Office 365, Google Enterprise, Box, and more.

There was a lot more to unpack from those two days in Chicago but that’s enough to give you a taste. If you use (or sell) Check Point products, I highly recommend attending next year to find out how Check Point and partners can keep you one step ahead of the threats.

Edited to add: I got Lior Neudorfer’s title wrong, he’s a Product Manager at Guardicore, not CEO. I also added a link to a Medium post about the product they showed me at CPX.

Disclaimer: If it’s not clear from the above, I work for Check Point. Hopefully it’s clear these are my opinions and Check Point’s official opinions may differ.

VirusTotal: Not a Replacement for Real Threat Prevention

From VirusTotal: Maintaining a healthy community:

VirusTotal was born 12 years ago as a collaborative service to promote the exchange of information and strengthen security on the internet. The initial idea was very basic: anyone could send a suspicious file and in return receive a report with multiple antivirus scanner results. In exchange, antivirus companies received new malware samples to improve protections for their users. The gears worked thanks to the collaboration of antivirus companies and the support of an amazing community. This is an ecosystem where everyone contributes, everyone benefits, and we work together to improve internet security.

VirusTotal is a great resource for users and security folks alike. It provides a quick way to validate how a number of antivirus engines view a particular file or URL (i.e., whether or not they believe it is malicious). Think of it as “peer review” for potential malware, or a “second opinion” to complement whatever antimalware solutions you are already using. Vendors who participate receive samples of files that aren’t detected as malicious for the purposes of improving their products.
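To make the “second opinion” idea concrete, here is a minimal sketch of querying VirusTotal’s public API (the v2 API, current as of this writing) for a file report by hash. The API key is a placeholder, and the hash shown is the well-known EICAR test file:

```python
import requests

API_URL = "https://www.virustotal.com/vtapi/v2/file/report"
API_KEY = "YOUR_API_KEY"  # placeholder; free keys come with a VirusTotal account

def file_report(file_hash):
    """Ask VirusTotal how participating engines view a file, by hash."""
    resp = requests.get(API_URL, params={"apikey": API_KEY, "resource": file_hash})
    resp.raise_for_status()
    report = resp.json()
    if report.get("response_code") != 1:
        return None  # VirusTotal has no report for this hash
    return report["positives"], report["total"]

# The EICAR test file: harmless, but flagged by nearly every engine.
result = file_report("275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f")
if result:
    positives, total = result
    print(f"{positives} of {total} engines flagged this file")
else:
    print("No report available for this hash")
```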

VirusTotal is not meant to be used to compare different antimalware solutions. The engines integrated into VirusTotal are not necessarily the same versions you might run on a desktop or at a network perimeter; the shipping products may take additional information into account when determining whether something is malicious, or may be configured differently from the “defaults” a vendor provides for a given engine. Also, signatures and engines change regularly, so even if a particular engine doesn’t detect something when you check, it may detect it later.

Something VirusTotal is most definitely not is a replacement for a proper antimalware solution. This is noted on the VirusTotal about page:

VirusTotal is not a substitute for any antivirus/security software installed in a PC, since it only scans individual files/URLs on demand. It does not offer permanent protection for users’ systems either. At VirusTotal we think of our service as a second opinion regarding the maliciousness of your files/URLs.

It appears some vendors were using results from VirusTotal to supplement their products’ detection rates without contributing their own AV engines to VirusTotal. The comments in this article suggest which vendors were doing this. VirusTotal has now expressly forbidden this behavior:

For this ecosystem to work, everyone who benefits from the community also needs to give back to the community, so we are introducing a few new policies to make sure that our community continues to work for years into the future. First, a revised default policy to prevent possible cases of abuse and increase the health of our ecosystem: all scanning companies will now be required to integrate their detection scanner in the public VT interface, in order to be eligible to receive antivirus results as part of their VirusTotal API services. Additionally, new scanners joining the community will need to prove a certification and/or independent reviews from security testers according to best practices of Anti-Malware Testing Standards Organization (AMTSO).

It’s yet another case of marketing hype not making you more secure. Companies that have deployed products from these vendors are most assuredly less safe than they were before, though I suppose they could switch to one of a number of VirusTotal alternatives easily enough.

Disclaimer: I am not aware of any relationship between VirusTotal and my employer Check Point Software Technologies. I’m also not aware of any relationship between my personal views that I’ve written here and Check Point’s views on this matter, either.

Prevention: The More Things Change, The More They Stay The Same

From Check Point vs. the world – firewall giant stays faithful to engineering roots:

The world’s largest dedicated security firm, Israel’s Check Point, still refuses to give an inch. Fashions wash over the industry on a never-ending hype cycle and yet the message handed out at the firm’s annual CPX 2016 developer and partner event in Nice this week was reassuringly old school - prevention is always better than cure and might cost you less in the long run.

A decade ago this would have been an inarguable orthodoxy and yet with younger US rivals such as FireEye, Fortinet and Palo Alto snapping at its heels pushing newer ideas angled more towards real-time detection and response, there is more explaining to do.

The “old school” message is still there because, fundamentally, the problems we face today haven’t changed all that much in the last 20 years. They are only on the rise because we are now more connected, with more kinds of devices connecting in more places than ever before. The underlying risks are still the same, but with more and more data on more and more devices in more and more places, the impact of a control failure (or a lack of controls) is far greater. The attacks and the attackers are more sophisticated, but the class of problems being exploited isn’t fundamentally different.

It’s not now, nor has it ever been, an either-or proposition when it comes to detection versus prevention. You must do both, and you must do both well, if you’re going to stay one step ahead. Even organizations that employ predominantly “fast detection and remediation” solutions still, for the most part, have traditional security controls in place. Clearly, a bank in Bangladesh could have used a little more of both.

While I wasn’t lucky enough to go to CPX in Nice, France, I will get a chance to go when it hits Chicago in a couple of weeks and hear for myself what Gil Shwed has to say. Will you be there?

The Great Cloud Migration: Existential Threat or Opportunity?

As part of my day job at Check Point, I review customer security architectures and make recommendations with an eye towards securing the right things the right way. Generally, the customers I talk to have a pretty good idea of what they have, how it’s laid out, and what controls are in place. I can usually find gaps in their knowledge, as well as their controls, but at least there is some basic knowledge of their own environment.

Recently, I was asked if we could help a customer inventory their security equipment because they quite simply don’t know what they have. How can you protect the right things the right way when you don’t know what those things are and what tools you have to do it?

One thing I typically do as part of my engagements is a Security CheckUp. To perform the Security CheckUp, a Check Point appliance is put onto a span port in the customer’s network so traffic can be passively analyzed. After some time, a report is generated (see a sample), which I review to get a sense of what is in the environment, what potential threats exist, and how effective certain security controls are.

The Security CheckUp won’t tell me, at least directly, what controls exist in the environment, nor will it tell me where in the environment they are. Other vendors make tools that can map out the network, but even those tools have their limits. For example, they can’t tell you about any layer 2 equipment within the environment, about anything that might be sitting on a span port (e.g., an IDS sensor), or about equipment that is racked somewhere but powered off.

Even with these tools, a fair amount of manual work is still required to turn that data into an accurate picture of your environment. In short, there is no “easy” button for this problem; it’s going to require real effort to track down what is where.

How can this happen? How can organizations become so unaware of what they have that they need someone to come in and tell them what they have and where it is? Then it hit me: anyone making use of “the cloud” is going to have this problem. Or they already do, and they don’t know it.

In the cloud, infrastructure and applications can come and go with the push of a button. Need another 10 webservers? Done. Need to burst to handle three times the traffic? No problem. Sure, you’ve got to have physical machines to run on, but racking and stacking that stuff is easy. The physical topology? Flat. The virtual topology? Changes every second.

If you’re not treating your “cloud” infrastructure in an automated fashion, you’re doing it wrong. You’re also doomed to repeat the mistakes you’re making today, and then some. While some of the same tools can be used in the cloud, they integrate a bit differently. There are also a number of additional considerations for the cloud, considerations that, quite frankly, are very different from those of physical networks.
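As a hypothetical illustration of what “automated” can mean here, the sketch below keeps “what do we have?” an answerable question. It assumes an AWS environment and the boto3 SDK; the Owner tag convention is my own invention for this example, not anything prescribed by a vendor:

```python
import boto3

def inventory_instances(region="us-east-1"):
    """List every EC2 instance in a region with its state, IP, and Owner tag.

    Untagged instances are exactly the kind of unknown infrastructure
    that makes "what do we have?" so hard to answer later.
    """
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                print(instance["InstanceId"],
                      instance["State"]["Name"],
                      instance.get("PrivateIpAddress", "-"),
                      tags.get("Owner", "UNTAGGED"))

if __name__ == "__main__":
    inventory_instances()
```

Run on a schedule, something this simple already tells you when infrastructure appears that nobody has claimed ownership of, which is the seed of the inventory problem described above.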

There was a time when security people were siloed off from other parts of the organization. Security only got brought in at the end to make it all work and was often the scapegoat when it didn’t work (or things got hacked). If we’re going to be one step ahead of the threats, this practice has to end. Security people have to be part of the conversation as applications and services are being conceived, or, in the case of software as a service, being migrated to. Likewise, security people have to figure out how to stop being a business impediment and start being a business enabler.

It also means that if you’re a security person not versed in the ways of infrastructure or software as a service and you want to remain relevant, you need to bring your skills and knowledge up to scratch, and quickly.

One could look at all of this movement to cloud as a threat to your career. If everything can be automatically deployed, do we need IT or Information Security professionals anymore?

Absolutely, organizations will still need people who understand how it all connects together and how to secure it. In fact, I see this as a huge opportunity to improve security for all organizations. Because everything is fundamentally shifting, we have a chance to get this security thing right after decades of getting it wrong. This means, finally, being able to secure the right things the right way, regardless of where they may be.

It’s a huge opportunity, but it’s going to require all of us to acquire new skills, both technical and political. The question is: do you have the courage and the vision to take advantage of it?

Why Check Point Security Management Is Still The Gold Standard

In a recent customer meeting, one of my colleagues was telling a customer how the central management capabilities of Check Point Security Gateways are light years ahead of what any other vendor provides for their products, and have been for as long as I’ve been supporting Check Point products: 20 years now! This is not just my opinion, but that of third-party analyst firms as well as customers.

The question one might ask: how is it that, after all these years, no one has been able to seriously challenge Check Point’s management capabilities? Sure, other vendors offer centralized management, but no one can operate at the same scale. A single Check Point Multi-Domain Management installation can potentially manage hundreds or even thousands of gateways, whereas competitors struggle to manage tens or hundreds of gateways.

To me, the answer is very simple: it’s in Check Point’s DNA and it goes back to the very first versions of the product.

The first version of FireWall-1 I used was 2.0. At that point, it was managed using an OpenLook GUI on SunOS or Solaris. In version 2.1, a Windows-based client was added, which eventually became the way to manage the firewall policy. This GUI has seen numerous enhancements over the years, but I’m sure even people who haven’t been using Check Point as long as I have could use those oldest GUIs.

The concept of separating the management from the firewall? It was in version 2.0, though I’m told it was in a 1.0 release (not the first one, but a subsequent one). Large scale management at service provider scale, a.k.a. Provider-1, was added in 1999, first as a separate management product, then integrated into the maintrain product in the R71 release.

As more features became integrated into the product, Policy Editor became SmartDashboard and the number of GUIs proliferated, each focusing on its own part of monitoring or managing Check Point products. Even SmartDashboard had a number of tabs for managing the different Software Blades.

Certain competitors to Check Point still like to highlight the various tabs in SmartDashboard as a weakness of the product: you have to allow something in the firewall, then in IPS, App Control, and so on, instead of using a single, unified policy. While it’s a fair criticism, you can’t just make a drastic change to how things are done when you have more than 100,000 customers. The methods chosen in the R7x releases were pragmatic and allowed customers to leverage the functionality in Software Blades without having to hand-tweak hundreds or even thousands of rules built around access control only.

Prior to R80, perhaps one of the bigger management changes was the addition of SmartLog. I remember when I was asked to perform usability testing on this product before it was released. I was presented with a relatively spartan interface that reminded me of the Google homepage. I was able to quickly and easily find things that would have been possible to find in SmartView Tracker, of course, but would have taken far longer, particularly if I had to search across many log files (SmartLog indexes and searches across all your log files).

And now, after many, many months in development, the R80 release of management is finally available! It’s now a single management UI (SmartConsole) that incorporates both policy and logging. Security policies are now unified across the blades (though full functionality requires gateways running R80.10 and above). Even while managing older gateways, there are numerous usability enhancements that make it quicker and easier to manage your security infrastructure.

Automation, something that was not a strong point in earlier Check Point releases, is now a first-class citizen. A robust CLI and REST API are available, complete with support for concurrent administration (in read/write mode, even!) and granular permissions to control who can do what. There is also a web-based interface to access logs, events, and reports.

While R80 was formally released on 31 March 2016, numerous customers have been using R80 for weeks and months through the Early Availability program. I decided to wait until the formal release to migrate the management of my home firewalls to R80, which involves a fresh installation and the use of the standard migration tools to export/import the previous configuration. Rather than leverage that process, I decided to hand-configure everything from scratch.

Actually, that’s not entirely true. I used the CLI to create the vast majority of the network objects I needed for my relatively simple policies. The R80 CLI is far easier than using dbedit to perform the same tasks. If I were feeling exceptionally clever, I could have also done it using the REST-based API, which allows for all kinds of integrations with third-party tools.
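For the curious, here is a rough sketch of what that kind of object creation looks like against the R80 management web API. The login, add-host, and publish commands are part of the R80 API; the server address, credentials, and host objects below are placeholders of my own:

```python
import requests

MGMT = "https://mgmt.example.com"  # placeholder management server address

def api_call(command, payload, sid=None):
    """POST a single command to the R80 management web API."""
    headers = {"Content-Type": "application/json"}
    if sid:
        headers["X-chkp-sid"] = sid  # session id returned by login
    resp = requests.post(f"{MGMT}/web_api/{command}", json=payload,
                         headers=headers, verify=False)  # home lab, self-signed cert
    resp.raise_for_status()
    return resp.json()

# Log in once and reuse the session id for every subsequent call.
sid = api_call("login", {"user": "admin", "password": "CHANGE_ME"})["sid"]

# Bulk-create host objects instead of clicking through the GUI.
for name, ip in [("web1", "10.0.0.10"), ("web2", "10.0.0.11")]:
    api_call("add-host", {"name": name, "ip-address": ip}, sid)

# Changes are staged per session until explicitly published.
api_call("publish", {}, sid)
api_call("logout", {}, sid)
```

The mgmt_cli tool exposes these same commands from a shell, which is the route I actually took for my home policies.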

There are a few things missing in R80 Management that I’m sure will be added over time. Some of the really cool features in R80 today like nested policy layers or a fully unified policy will require R80 gateways (which are not available yet).

One thing that will take some time to get used to is the lack of a built-in “Demo Mode.” I remember when you would invoke it by logging into the server *local (later given an official tickbox on the login screen), which would bring up a locally hosted database so you could see the GUI in action. This approach had some limitations; namely, certain features that required a live management server would not work. As R80 Management is definitely more than just a UI, a virtual machine with a populated configuration will be provided for this purpose in the near future.

If you haven’t kicked the tires on R80 Management yet, there’s no excuse now that it has been generally released. It shows the level of commitment Check Point continues to make to keep ever-increasing complexity easier to manage than ever.