From Stop fretting about mobile security, says Palo Alto Networks founder:

“What I often hear from customers is that 'users have a mobile and they have corporate email and they have Dropbox and I'm afraid they will upload a PDF via Dropbox to their personal account'. Well, what about your Windows users? They've been doing that for the last ten years! Nobody stopped them using Dropbox on their browser for the last ten years.”

So says Nir Zuk, founder and CTO of Palo Alto Networks.

And you know what: he's right. Not necessarily about Dropbox specifically, since Dropbox hasn't been around for ten years, but on the broader point: if you've given people access to a web browser in your organization, you've had little to no control over the “applications” they can run. Even ten years ago, you could run a lot of the “applications” organizations so desperately want to control today.

Of course we had URL filtering ten years ago, which can be used to control what people can access with a web browser. But it wasn't as widely used, and unless you were using explicit proxies, HTTPS was a pretty big blind spot. And, really, it's only a partial solution, since you might want to allow some parts of a web-based application and not others, and doing that solely based on URLs isn't always possible.

But I disagree that you have no control over what end users do on their PCs. Things like the “dead but not going anywhere anytime soon” Anti-Virus/Anti-Malware, firewalls, Application Whitelisting, Media Encryption and Port Protection, and a host of other tools, if properly deployed and monitored, give you some protection from the malicious things your users inadvertently pick up from the Internet.

And, of course, segmentation helps too: not putting your user machines and servers on the same network, and using a firewall to mediate and control access by user, application, service and, yes, Nir Zuk, ports.

In fact, once you remember that the browser has made you liable to these kinds of threats for a long time, mobile devices start to look like an attractive option. Zuk claims “mobile devices open up a lot of opportunities for being more secure than today because they do allow the opportunity to control movement of data between devices, and because of the way they're built, the operating system and the controls – especially in iOS 7 and hopefully soon in Android.”

He's absolutely right here. Mobile operating systems are built to be more secure from the ground up. However, you're assuming the device is not rooted or jailbroken, which removes many of the protections these operating systems have in place.

And then there's the data these devices can access and use. What are you doing to ensure data remains protected on these devices? Nir's right that there is an opportunity to do this better on mobile devices, but right now it's an “all or nothing” approach. VDI, Mobile Device Management and secure container technologies are all variations on this approach, and users are averse to all of them.

And then there's the whole lack of visibility into what's going on with the mobile device. At least with a PC you get some; on a mobile device? Not so much.

“You can have a firewall that denies all incoming traffic and bad things still come in,” [Zuk] points out, because Web apps and cloud services mean “the firewall doesn't control access into the network.” Even more bluntly, he's prone to suggesting that “I strongly recommend you take your firewall out and replace it with an Ethernet cable – it will improve the performance and improve the management. And no, I’m not joking.”

Again he's right insofar as replacing a firewall with an Ethernet cable will improve performance and improve management (if you consider removing something to manage an improvement).
However, this advice is utterly clueless as it ignores decades of evidence to the contrary, not to mention the fact Nir Zuk's company Palo Alto Networks sells firewalls.

You know when Windows XP dramatically improved security? In Service Pack 2 when the built-in firewall was enabled by default. Yes, the attacks moved up the stack as a result but a properly configured firewall–even one that only blocks on ports and IPs–is better than no firewall at all.

So what should you do about mobile device usage in your enterprise? It depends on your policy and on what your critical assets are. Should you “fret” about it? No more than anything else. Just realize mobile devices present unique challenges–and opportunities.

Back when I first got into IT and just started working with FireWall-1, Pointcast was a thing. For those who weren't around back in the mid to late 1990s, Pointcast had a very popular screensaver that displayed news and other information delivered periodically over the Internet to PCs. The problem was: it used an excessive amount of bandwidth on corporate networks, especially if more than a couple of people used it.

The result was, of course, corporations wanted to block access to Pointcast. The problem: how to do it. All we had in the mid 1990s was the traditional firewall which could control access based on IP and port. So we should be able to block the port or IPs it communicates with, right?

Pointcast used good old HTTP. Even back then, no one in their right mind would block HTTP. Of course, everything uses HTTP or HTTPS to communicate these days, and with a traditional firewall with the ability to control traffic only by IP or port, leaving HTTP or HTTPS wide open is tantamount to leaving the barn door open. 

Pointcast didn't exactly publish their list of servers, but users of the PhoneBoy FireWall-1 FAQ contributed a list of IPs plus a couple of other clever solutions to the problem, which I've made available after the break if you're curious.

Of course, with things like content delivery networks, Amazon Web Services, and a host of other ways to serve up an application to users that are available today, attempting to control access to these applications merely by port and IP address is crazy. 

Fortunately, there are a number of solutions to this problem. Check Point's solution is the Application Control Software Blade, which can allow or block access to an application regardless of the ports and destination IPs it uses, and can even limit the bandwidth these applications consume. New applications and changes to existing applications are delivered to the gateway periodically, so you can see that your users are using an application and, when it kills your bandwidth or worse, you can block it.

If only tools like App Control had been available back in the day, security admins could have spent more time on more important issues rather than figuring out how to block Pointcast and other applications, and I would have a few fewer FAQ entries on "how do I block X application."

There are a few ways to block access to Pointcast:

  1. Deny HTTP Access to Pointcast Servers
  2. Use the HTTP Security Server
  3. Create a Dummy Host in your DNS/WINS

Deny HTTP Access to Pointcast Servers

To deny HTTP requests to the Pointcast HTTP server, deny access to the following machines: through, inclusive. through, inclusive.

To minimize the number of network objects needed (since range objects aren't supported), create the objects as follows and put them into a group:

Create host 
Create network with subnet mask (include broadcast) 
Create network with subnet mask (include broadcast) 
Create network with subnet mask (include broadcast) 
Create host

Create host 
Create network with subnet mask (include broadcast) 
Create network with subnet mask (include broadcast) 
Create network with subnet mask (include broadcast) 
Create host

Deny HTTP traffic to these hosts.
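Building host and network objects to cover an address range is exactly the classic range-to-CIDR problem. As a sketch of the same exercise (using a hypothetical documentation range, since it isn't tied to Pointcast's actual servers), Python's `ipaddress` module can compute the minimal set of objects a range needs:

```python
# Covering an arbitrary IP range with the fewest CIDR blocks -- the same
# exercise as creating host/network objects for a firewall that lacks
# range objects. The range below is from the documentation space
# (192.0.2.0/24) and is purely illustrative.
import ipaddress

first = ipaddress.IPv4Address("192.0.2.10")
last = ipaddress.IPv4Address("192.0.2.25")

# summarize_address_range yields the minimal set of networks that cover
# every address from first to last, inclusive.
blocks = list(ipaddress.summarize_address_range(first, last))
for net in blocks:
    print(net)
```

For this 16-address range, four objects suffice (two /31s, a /30, and a /29), which mirrors the host-plus-networks-plus-host pattern in the instructions above.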

Using HTTP Security Server

Thanks to Daniel Blander for this idea:

Create a URI resource that filters the following URLs:


This roughly translates to creating a Wildcard URI Resource with the following parameters:

Service: http 
Action: all 
Host: * 
Path: /FIDO* 
Query: *

You will want to use this URI resource in a rule that denies access.
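To get a feel for what a Path of /FIDO* actually catches, shell-style wildcard matching is a reasonable stand-in (this illustrates the wildcard semantics only; it is not Check Point's actual matcher):

```python
# Illustrating what a wildcard Path of /FIDO* matches, using shell-style
# globbing as a stand-in for the URI resource's pattern matching.
from fnmatch import fnmatch

PATTERN = "/FIDO*"

# '*' matches any remainder of the path, including further '/' segments.
for path in ("/FIDO", "/FIDO/headlines.html", "/index.html"):
    print(path, fnmatch(path, PATTERN))
```

Anything whose path begins with /FIDO matches and would be denied by the rule; other requests to the same hosts pass through untouched.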

Create a Dummy Host in your DNS/WINS

Thanks to Mark Syroka for this idea.

Create an entry in your DNS or WINS for the hostname PCNPROXY. Your clients will try to access whatever host resolves to this name if it exists. If you wish to use the PointCast Caching Manager, which is designed to cache PointCast requests and is available for free from, your DNS/WINS entry would point to this machine. Otherwise, this entry can point to a non-existent machine or any machine that does not run a web server on port 80.

From Chris Hoff's (a.k.a. Beaker) NGFW = No Good For Workloads:

NGFW, as defined, is a campus and branch solution. Campus and Branch NGFW solves the “inside-out” problem — applying policy from a number of known/identified users on the “inside” to a potentially infinite number of applications and services “outside” the firewall, generally connected to the Internet. They function generally as forward proxies with various network insertion strategies.

If you look at the functionality Check Point and its various competitors provide, this is precisely what a large chunk of the "next generation" functionality is geared towards--protecting a number of known/identified users from the dangers they might encounter from a potentially infinite number of applications and services. There are differences in how the various security solutions perform this task, as well as how well they perform it, but that's the overall goal.

That is, as Beaker continues, very different from what a Data Center firewall needs to do:

Data Center NGFW is the inverse of the “inside-out” problem.  They solve the “outside-in” problem; applying policy from a potentially infinite number of unknown (or potentially unknown) users/clients on the “outside” to a nominally diminutive number of well-known applications and services “inside” the firewall that are exposed generally to the Internet.  They function generally as reverse proxies with various network insertion strategies.

In other words, we're not always sure who is coming in, but we know where they are going and (hopefully) which applications and services they are going to connect to.

What kinds of protection do you need in these scenarios? Usually very different. Can every next generation firewall provide just the right protection? 

First, let's take a step back and realize that the Data Center itself is very different from what it used to be a decade or two ago. Whereas we started with a number of servers hosting resources in one or two physical locations with users mostly in known physical locations, we now potentially have services, data, and users all over the place, with a mix of physical and virtual servers where traditional methods of segmentation and protection are not practical. 

The "core" of the enterprise network--where all the necessary resources ultimately connect together--is quickly becoming the Internet itself. How do you protect your resources in this reality?

We go back to one of the fundamental tenets of information security, our old friend segmentation. This means grouping together resources with like function and like information confidentiality levels, and placing an enforcement point at the ingress/egress point where you can enforce the appropriate access control policy. The goal for that enforcement point? Let the authorized stuff in and keep the unauthorized and bad stuff out.

Of course with virtualization, end user PCs, and mobile devices, the boundaries become more difficult to apply but with virtualized security solutions, integrated endpoint security on the end user PCs, trusted channels (VPNs), and secure containers on mobile devices, more is possible than you think. Check Point and other companies have various solutions for this. 

Once the network is segmented and enforcement points are in place, then you can decide what protections and policies should be applied. In some cases, like on User Segments, you want lots of protection as users could go anywhere on the Internet and unknowingly bring in some malware to run amok in your network or send company secrets to their Gmail account. For your data center? Maybe you just want to make sure authorized users can reach specific applications and you want to sanity check the traffic to make sure it's not malicious. Or maybe you just need a simple port-based firewall with low latency for a given app.

The idea of putting a firewall at the core of your network--especially a next generation one--is silly, as Beaker rightfully points out. Really, your core should be a transit network with enforcement points--those things we typically call firewalls--at the ingress point of the various network segments. This way, just the right policy and just the right protections can be applied without applying them to traffic that doesn't need them.

This is where I think Check Point's portfolio shines. In the Security Gateway space, the Software Blades architecture is flexible enough to allow you to be very granular about what protections are applied to a specific enforcement point, whether a physical gateway, or a virtual one either in a Check Point chassis (e.g. VSX) or in a VMware or Amazon Web Services environment. This means you can scan a random MS Word document from the Internet for malware on one gateway close to the users while not impeding the flow of traffic in and out of your Data Center that flows through a different Security Gateway. And yes, if you have a 5 microsecond transmission requirement, Check Point has a solution for that with the Security Acceleration Module in the 21000 series of appliances. 

Does an NGFW solve every problem? No, and anyone that tells you it will is flat out wrong. It's not always the right tool for the job, as Beaker points out:

Show me how a forward-proxy optimized [Campus & Branch] NGFW deals with a DDoS attack (assuming the pipe isn’t flooded in the first place.)  Show me how a forward-proxy optimized C&B NGFW deals with application level attacks manipulating business logic and webapp attack vectors across known-good or unknown inputs.

While an Enforcement Point needs to be hardened for DDoS--especially if it is exposed to the Internet--no Enforcement Point is going to completely mitigate a DDoS. There are a number of mitigation strategies that include on-premise DDoS-specific appliances as well as external services, which I know Check Point has advised customers to utilize in various scenarios as part of their Incident Response Services.

Likewise, business logic and webapp attack vectors are outside the wheelhouse of all NGFWs. You still need to properly secure your web applications, even with an NGFW in place. There are dedicated Web Application Firewalls for this purpose, and if you've properly segmented your network, you can make sure the relevant resources are protected by them.

At the end of the day, a Next Generation Firewall, whether it is from Check Point or someone else, is not a panacea. It can be a powerful tool, but like all tools, it needs to be applied properly as part of a comprehensive security strategy that begins with proper segmentation and a well-defined policy. From there, you can apply just the right protection to just the right resources.

Disclaimer: It should be obvious from my last post I work for Check Point, but this is my own opinion. 

Note: I've released a podcast of this article if you prefer. 

The 20 Year Anniversary of Check Point's founding has a special place in my heart. Mostly because it is how I personally made my career. How I got involved in Information Security. How I, unbeknownst to me at the time, helped a lot of people get into Information Security.

18 years ago, I had no idea what Information Security was. I was a systems administrator working for a contracting agency fresh out of college. I did some odd programming jobs which, quite frankly, I was never that great at, and eventually, an interesting contract: doing tech support for a company out of San Mateo, CA.

The product: Qualix HA, a high availability product for Sun Workstations based on a Veritas product. One of the products we also sold along with it and provided high availability for was a product called Check Point FireWall-1.

That contract turned into a full-time job and eventually, as the other people in the group kept getting hired out to do "professional services" or whatever, I had to learn FireWall-1 the hard way: by supporting customers calling for help without much of a backstop.

Back in those days, Check Point did all of their support out of Israel. SecureKnowledge didn't exist. They had a mailing list, which had a lot of questions asked on it, but not a lot of answers.

On a hidden page on the Qualix website, there was a FireWall-1 FAQ started by one of the developers at Qualix. I started writing entries on it. Eventually, I got permission from Qualix to take the content and put it on my own site.

Qualix became Fulltime Software and got bought by Legato Systems in 1999. Before that happened, I got a job at Nokia in their IP Routing Group--the guys who make the firewall appliances that ran Check Point's firewall. 

PhoneBoy's FireWall-1 FAQ existed for the better part of 8 years as a publicly available resource containing the knowledge I collected about the Check Point products from the mailing lists and my own work with the product as a technical support guy. Obviously a lot of that knowledge also migrated itself into Nokia's Knowledge Base, which I more or less maintained during my tenure there. It also made its way into two books that I published with Addison Wesley (now Pearson Education).

In parallel, I created a moderated mailing list on FireWall-1 in June of 2000, first called FireWall-1 Wizards, then renamed to FireWall-1 Gurus after the folks who own the Firewall Wizards trademark suggested I should change the name. The mailing list lasted for about 9 years.

Around 2003 or so, I started burning out. Technical Support is a difficult job to do long term in general, and I had done more than my share. I ended up moving on to other things inside Nokia's Enterprise Solutions group, or whatever it was called back then. In 2005, I agreed to let Barry Stiefel take the content on and copy it onto

I kinda thought I was done with Check Point stuff by then, but I was wrong. I kept working with Nokia's Knowledgebase for the Enterprise Solutions group, which had a lot of Check Point content in it. This meant, for me, reading, writing, and re-writing this content. I kept mentoring folks in the TAC when they had issues with Check Point or just general network troubleshooting. I kept supporting other products that were somewhat Information Security related (VPN and Remote Access products as well as Sourcefire on Nokia).

When the Check Point acquisition of Nokia's Security Appliance business was announced, I wasn't sure what to expect, either for the platform I had spent 10 years of my life supporting or for my own career. When it became clearer that I had a home at Check Point, I began looking a bit more closely at the Check Point products again.

What I discovered was that the product hadn't changed all that much. Sure, there was NGX, the rise of Secure Platform and Check Point's own appliance offerings, and many refinements along the way, but the fundamentals of the product were basically the same.

But change was happening: I could see it before I was officially part of Check Point as I was told about the new IPS Software Blade in R70. As I started visiting the Check Point headquarters in Tel Aviv, I got to hear in more detail from the people who develop the product. I got to see the changes up close and personal. App Control, URL Filtering, Anti-Bot, the new (and old) SMB products, DLP, appliances, Gaia, I got to see it all before it was released.

Also, Check Point made a couple of key acquisitions prior to Nokia's Security Appliance business: Pointsec, a well-known disk encryption solution, and Zone Labs, which made the ZoneAlarm desktop firewall product. Both ultimately became part of Check Point's Endpoint Security offering, along with the later-acquired Liquid Machines (for Document Security) and Dynasec (for Compliance solutions) in Check Point's overall product portfolio.

It's been a beautiful thing that I'm proud to say I've been a part of since nearly the beginning. And, of course, there is a lot more to come.

Let's face it: the threats to our networks have only gotten more complex, more dangerous. A lot of the fundamental issues in Information Security haven't changed, either. End Users still do unwise things. Companies don't invest enough time or money in doing the basics in security practices like segmentation, user education, changing default passwords, and a whole host of other practices.

The Information Security market has many players. Check Point plays in many spaces, with different competitors in different segments, yet continues to grow and innovate year over year. It also remains independent and focused on the goal of securing the Internet, amid a sea of acquisitions by larger, less security-focused companies.

Here's to another 20 years, Check Point. 

It's pretty obvious from looking through the number of 404s I'm seeing in Google's Webmaster tools that a lot of pages still link to old stuff I wrote about Check Point FireWall-1. I'm actually trying to "fix" these 404s now by resurrecting some of the old content.  Not updating it, of course, but at least making the links point to something semi-useful, if historical.

This is one of those articles, obviously not at its original URL, but the original URLs will point here. What amazes me about this particular article is that it's still relevant today, as NAT really hasn't fundamentally changed in the Check Point products for some time. The basic concepts are still the same, too, and implementation details aside, they are probably relevant for other security products as well.

Bottom line: NAT only works if the firewall is in the path of the communication. How do you know? Follow the bouncing packet, otherwise known as Troubleshooting 101. 

Hit the break to see this old FAQ in all its ASCII network mapped glory.

When implementing address translation, the unspoken assumption is the firewall will always be between the two machines talking to each other. For external machines accessing internal machines, this is a safe assumption. In the case where internal hosts are accessing internal hosts, this is not always the case.

Another important thing to note is that NAT rules are processed in the order they are listed. Once a packet matches a rule in the rulebase, subsequent rules are not processed for that packet. NAT rules are not applied cumulatively.

To demonstrate these principles in action, here is an example network:

Internet Segment (204.32.38.x)
-------------------------------- (
                         (        (
                             |                  |
       Segment A (10.1.x.x)  |                  | Segment B (10.3.x.x)
                             |                  | 

The firewall has two interfaces: and The router has three interfaces:,, and Each interface of the router has a class B netmask (

Let's assume an "Externally" accessible SMTP server is at and it has an external address of (via NAT). There is some other internal SMTP server ( that tries to access the "external" SMTP server via the external address. Assume the following NAT rules:

  Original Translated
No Source Destination Service Source Destination Service
1 Any Any Orig Orig
2 10.x.x.x Any Any Orig Orig

 tries to initiate a connection to. Routing will eventually take this packet to the firewall. The packet is accepted by the firewall's security policy and is then processed by NAT. The first rule that matches is rule 1, which translates the destination of the packet from to. The "source" of the packet is not changed (rule 1 says not to touch it). The packet will then be routed to, then

When sends its "reply," it will be sent to (the "source" of the connection attempt). The reply is routed to and then directly to is expecting replies from (who it thinks it tried to connect to), not, so they are dropped (as they should be). If a machine on 10.1.x.x were to access, the same thing would happen except the packet would travel one less hop.

What would happen if the rules were reversed (i.e. rule 2 was listed before rule 1)? When tries to access, the packet gets routed to the firewall and passes through the rulebase. NAT then would rewrite the source of the packet to be The destination of the packet would still be (i.e. it does not get translated), but gets routed out the internal interface (or at least it should if you've configured NAT correctly. ;-). The internet router sees this packet and routes it back to the firewall (it's an external address, after all). The packet would ping pong back and forth until the TTL expires.
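The two outcomes above fall straight out of first-match evaluation. Here is a toy sketch of that behavior (not Check Point's actual implementation; the rules and addresses are hypothetical, using documentation ranges to stand in for the elided example IPs):

```python
# First-match NAT evaluation, sketched. Addresses are hypothetical
# (documentation ranges), standing in for the example's internal SMTP
# server, its external address, and the firewall's hide address.
import ipaddress

def apply_nat(rules, src, dst):
    """Return (src, dst) after the first matching rule; later rules never run."""
    for rule in rules:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule["orig_src"])
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule["orig_dst"])):
            return rule.get("xlate_src", src), rule.get("xlate_dst", dst)
    return src, dst  # no match: the packet passes untranslated

static_rule = {"orig_src": "0.0.0.0/0", "orig_dst": "198.51.100.25/32",
               "xlate_dst": "10.3.0.25"}          # server's static translation
hide_rule = {"orig_src": "10.0.0.0/8", "orig_dst": "0.0.0.0/0",
             "xlate_src": "198.51.100.1"}         # hide rule for internal nets

# Static rule first: the destination is rewritten, the source left alone.
print(apply_nat([static_rule, hide_rule], "10.1.0.7", "198.51.100.25"))
# Rules reversed: the hide rule wins, the destination stays external,
# and the packet heads back out toward the Internet router.
print(apply_nat([hide_rule, static_rule], "10.1.0.7", "198.51.100.25"))
```

The first call rewrites only the destination (the hide rule is never consulted); the second rewrites only the source, leaving the external destination intact, which is what sets up the ping-pong described above.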

One reason why you might connect to the translated IP address is because your internal client's DNS server points to it. You can resolve this problem by implementing split-horizon DNS, i.e. different DNS servers for the internal and external networks. An internal DNS server reflects the internal IP address for a host and the external DNS server reflects the externally resolvable IP addresses for the host. Internal clients will use the internal DNS server. You can also put a host entry on the local system pointing at the internal address.
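The split-horizon idea reduces to answering with a different record depending on where the query comes from. A minimal sketch, with invented zone data and addresses:

```python
# Split-horizon resolution in miniature: internal clients get the
# internal address, everyone else gets the public one. The zone data
# and addresses here are hypothetical.
import ipaddress

INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]
ZONE = {"mail.example.com": {"internal": "10.3.0.25",
                             "external": "198.51.100.25"}}

def resolve(name, client_ip):
    """Answer from the internal or external view based on the client's source."""
    client = ipaddress.ip_address(client_ip)
    view = "internal" if any(client in n for n in INTERNAL_NETS) else "external"
    return ZONE[name][view]

print(resolve("mail.example.com", "10.1.0.7"))     # internal client
print(resolve("mail.example.com", "203.0.113.9"))  # external client
```

With the internal view in place, internal clients never learn the translated address, so their traffic goes straight to the server and the NAT problem never arises.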

Other than implementing split-horizon DNS, can you get around this problem? There are two methods you can use to get around this problem, which I have documented below.

Put Externally Accessible Hosts on a DMZ

This actually makes more sense from a security standpoint because you can provide more control over access if externally accessible hosts are all on their own segment. To create a DMZ, you would add a third interface to your firewall on a different logical subnet and move the accessible hosts to that subnet.

Internet Segment (204.32.38.x)
-------------------------------- (          DMZ segment (172.31.0.x)
                                   FireWall (
                         (        (
                             |                  |
       Segment A (10.1.x.x)  |                  | Segment B (10.3.x.x)
                             |                  | 

This puts the firewall between the client and the server, thus solving the NAT problem.

The Dual-NAT Trick

The success or failure of this trick depends on the OS you use for your firewall and may even depend on the environment. In most cases, it does not work: when a packet is received on one interface and is routed out that same interface, the OS's TCP/IP stack will instead issue an ICMP Redirect with the system's untranslated IP address. Depending on the circumstances, the connection may either never take place or take a long time to establish. FireWall-1 can't do anything about this. Even when the trick does work, you are effectively doubling the amount of traffic the connection generates and adding unnecessary load to the firewall. The best way to resolve this problem is simply not to use the translated IP address on the internal network.

To ensure that the firewall stays in the path between the two hosts, you will need to create a dual NAT rule. This rule looks at both the source and destination of the packet and translates both. Because the rules are processed in order, the dual NAT rule must come before both your "HIDE" rule and your SMTP server's translation rule, as below:

  Original Translated
No. Source Destination Service Source Destination Service
1 10.x.x.x Any Orig
2 Any Any Orig Orig
3 10.x.x.x Any Any Orig Orig

What will this do?

  • All traffic coming from 10.x.x.x that is destined for will get hidden behind (the internal IP address of the firewall) and have a destination of
  • All other traffic coming to will keep the original source and have a destination of
  • All other traffic coming from 10.x.x.x will get hidden behind (the external IP of the firewall) and keep the original destination.

The side effect of this is that for each connection to your "internal" SMTP server using the external IP address, you will see the network connection traverse your internal network twice:

  1. Once between the "server" and the FireWall
  2. Once between the firewall and the "client"

If you have done this and you still cannot access the host in question, use a packet sniffer to determine what is going on. In cases where it will not work, the firewall system will send an ICMP redirect to the client pointing it at the internal host using the untranslated address. Since the client is not expecting to see the host's real IP address, the connection will fail. In this case, you will need to disable ICMP redirects on your host system. The only system I know how to do this on is Solaris, and the command is as follows:

   /usr/sbin/ndd -set /dev/ip ip_send_redirects 0

On IPSO, this is done at a per-interface level. If VRRP is running on a particular interface, this is the default behaviour. If you are not running VRRP on a particular interface, issue the following command, assuming the interface you wish to enable it on is eth-s3p1c0 (add it to /var/etc/rc.local if you want this command to remain active after a reboot):

   ipsctl -w interface:eth-s3p1c0:family:inet:flags:icmp_no_rdir 1

On NT, you can disable ICMP Redirects with NT Service Pack 5 and later by adding or modifying the following registry entry:


This key should be a DWORD set to 0.

If you know how to do this on other platforms, please contact me so I can update the page.

Optionally, you can also block ICMP Redirects at the firewall:

No. Source Destination Service Action Track Install-On
1 Firewall any icmp-redirect drop   Source

Binding the NATted IP address to the Loopback Interface

The basic idea is to bind the translated IP address to the loopback interface of the server. On Unix machines, you use a command like

   ifconfig lo0:0 up

On NT, you will need to install the MS Loopback interface and add the IP address to it with a netmask of. If packets come into the system for the translated IP address (because, for instance, they did not come through the firewall), the system will respond to them. This method does require slightly more administration, since you must now also maintain the NAT on the individual servers.