I know a few articles have been written about this topic already elsewhere. That said, I sometimes will do a blog post so if I need to find something again, I know it will exist on my own blog.

For a while I had been using WhiteHat Aviator, a fork of Google Chrome with many privacy options enabled by default. At the time of this writing, however, it is five major versions behind mainline Chrome. While it has since gone open source, there don't appear to be any recent check-ins on the GitHub project, and bringing the existing browser up to date would likely be difficult, so I'm guessing we won't see an update unless someone undertakes a major effort.

With that in mind, I'm looking to configure Google Chrome as securely as possible. There is the Google Chrome Privacy Whitepaper and another independently maintained page on Privacy and Security Settings in Chrome. You could also use something like the Disconnect Search plugin. However, none of these addresses one feature I really enjoyed about Aviator: starting the browser in Incognito Mode. This does two important things:

  • Doesn't log website browsing history at all

  • Clears all cookies on browser exit

Google Chrome does not offer an easy way to do this, but it can be done in two places: the shortcut you use to start Google Chrome on your Windows machine, and the Windows registry.

First, the shortcut. Find it on your computer, right-click on it, and select Properties from the menu. Add a space and --incognito to the end of the Target field (note: it's a space, two dashes, then incognito) and click Apply.
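As a sketch, the Target field should end up looking something like this (the path below is the typical default install location on Windows; yours may differ):

```
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --incognito
```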

This will cause Chrome to start in Incognito Mode when launched from the shortcut. However, this will not cause web links you click on in other apps to also launch into the Incognito Mode. To do that, you need to edit the registry as follows:

  • Open Regedit (From the Start Menu, type regedit)

  • Find the following registry key: HKEY_CLASSES_ROOT\ChromeHTML\shell\open\command

  • Edit the registry string (double-click on "(Default)") and add --incognito to the value data between the closing quote after chrome.exe and the double hyphen, then click OK.
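If you prefer the command line, something like the following from an elevated Command Prompt should accomplish the same edit. The chrome.exe path here is an assumption (the typical default install location), so check your actual (Default) value first and mirror it, adding only the --incognito flag:

```
reg add "HKCR\ChromeHTML\shell\open\command" /ve /t REG_SZ /f ^
  /d "\"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe\" --incognito -- \"%1\""
```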

Congratulations, you're now permanently incognito in Chrome. Note that if you want to use some of your extensions in Chrome, you will need to manually enable them for Incognito Mode: click the hamburger menu in the upper right corner of the browser window (the three horizontal lines), select Settings, then click Extensions, and check the "Allow in Incognito" checkbox for each one.

While it's all well and good that I can configure this myself, I really wish Google offered a one-click option that enabled all of it by default.

To do something similar on the Mac, try the Google Chrome Incognito app. For Linux, it will likely be a distribution specific answer, but this should work for recent versions of Ubuntu.

I was listening to Episode 395 of the Security Weekly podcast when I heard one of the guys say that Information Security is often The Department of No. As in, no, you can't use that device. No, you can't use this new technology. Why? Because we can't secure it or control it. Because the risk of data loss is too great. Because NSA. Or whatever boogeyman The Department of No can come up with.

There are points on both sides of this debate. Anytime a new device connects to a network, regardless of what it is, there is always additional risk. If that device is controlled by the company, the risk is significantly less. If they don't control it, and there are inadequate security measures built into the network they connect to, it's another potential member of a botnet or worse. 

Despite all these risks, people want to use whatever device they want wherever they want and still get to and work with corporate data. Most users have zero clue about the potential information security implications of this, and even those that do--often those folks who sign paychecks--don't care because they want what they want, implications be damned.

So what's a poor information security professional to do? How can they become The Department of Yes while minimizing the associated risks these new technologies present?

First, we should ask: what is it we're trying to protect, anyway? Certainly an unmanaged device presents a risk and should be connected to an untrusted network, such as a guest WLAN. You have your network properly segmented to allow this, right?

Next, you should ask the question: what is it that device needs access to? Certainly not the entire internal network, but some subset of it. Common items include Email, some intranet sites, their home fileshare, and maybe Salesforce. 

Of course, the minute you provide access to that data to an unmanaged device, there's a risk for that data to be lost somehow, which should raise the hackles of any information security professional. The common reactions to this phenomenon are to not allow access to any data, to only allow access to very limited, harmless data, or to allow it only on condition of managing the device (i.e., with a Mobile Device Management solution). Users, on the whole, like none of these options.

Another option is available, and that's the concept of a secure container. This allows an unmanaged device to access sensitive data within a container that is secured and controlled by the business. The user can do what they need to do, the data only stays inside that container, that data is stored securely on the device, and the business can revoke access to that data at any time without managing the entire device. 

The challenge with any container-type solution, of course, is doing this without compromising the end user experience. The users still need to be able to access and potentially manipulate this data in an intuitive manner. If adding a container-type solution degrades the end user experience, end users won't accept it and will demand a less secure solution. 

There are a number of different solutions that operate on this principle. The ones I am most familiar with are the ones Check Point sells. Given I work there, that should be no shock to any of you. They include:

  • Mobile Access Blade, which provides a way for unmanaged desktop/laptop systems to access and manipulate corporate documents within a Secure Workspace. This workspace is encrypted and disappears from the local system when the end user disconnects from the gateway.
  • Capsule, which enables similar access on mobile devices, and also provides a clean pipe to mobile and desktop systems that are off your corporate premises by routing traffic to Check Point's cloud, where it is inspected by Check Point's Software Blades using the same policy you've already defined for your enterprise environment.

Regardless of whose solutions you choose, they allow you to be The Department of Yes. Yes, you can access that data with your chosen device without the enterprise losing control of the data. Everyone wins.

While I don't typically read End User License Agreements, I was pointed to the following phrase that exists in the End User License Agreement for Palo Alto Networks equipment:

1.3 License Restrictions: End user shall maintain Products in strict confidence and shall not: [...] (d) disclose, publish or otherwise make publicly available any benchmark, performance or comparison tests that End User runs (or has run on its behalf by a third party) on the Products;

I trust you can think through the implications of this restriction on your own. It certainly adds additional color to the recent public spat between NSS and Palo Alto Networks.

A lot of noise has been made recently about some nude celebrity photos that have made their way around the Internet, such as those of Jennifer Lawrence. Initially it was thought to have been a compromise of Apple's iCloud service, but really it was because Apple failed to properly rate-limit an API, which allowed rapid brute-force guessing of passwords. Apple claims it was a targeted attack on these specific users and tells users to enable two-factor authentication and use a strong password.

Getting someone's iCloud credentials, either by brute force or guessing the answers to security questions, is all well and good. It certainly explains how these photos were acquired and posted. It doesn't, however, adequately explain how supposedly deleted photos from a celebrity's iPhone were posted online.

Moti Sagey, whom I work with at Check Point, actually came up with the idea of how this might have happened. He discovered a tool called dr.fone that can "recover data from iOS devices, iCloud backup, and iTunes backup." Curious, Moti and I downloaded and installed the tool, attempting to recover data from our own iCloud backups. Lo and behold: there they were. Up to 3 iCloud backups for each device we own. 

A note about the way iCloud backups work: they only happen from your iOS device when the device is on WiFi and plugged in--usually at night. They can also be triggered manually, but let's face it: how often do regular people do that? Almost never.

While the majority of my devices had three consecutive days of backups, one in particular had quite a gap in its backup dates. Which, for many celebrity types, may be pretty common. Movie stars are like normal people in one respect: they rarely charge their phone while on WiFi outside of their own home. Given how busy some celebrities are, it doesn't seem inconceivable that they could go months without being at home, on their WiFi, and charging their iPhone at the same time.

What's in an iCloud backup, you ask? Quite a lot, as it turns out, including the camera roll:

A popular celebrity with easy to guess iCloud username and password plus a rarely backed up iPhone plus a tool like dr.fone equals access to potentially naughty photos the celebrity thought were deleted months ago. Voila!

Of course, this is just the tip of the iceberg of what criminals do in order to get illicit photos and videos from celebrities, but let's be honest: why go through the trouble of social engineering, installing illicit remote access tools, and stealing authentication keys when you can take the much easier route of brute-force guessing a weak password and/or easily answering their password reset questions?

People are blaming Apple here, when, last we checked, you had to opt in to iCloud backups. Even if you choose to opt in, you can choose what elements of your phone to back up. The fact that Apple keeps a few backups on hand is, in fact, a good thing. Good backups are just as important as choosing a strong password or using two-factor authentication when it's available. While there is some inherent risk in backing stuff up to the cloud, doing so is a greater good than never backing up your device, which is what would happen for most people without iCloud doing it for them.

Since it appears iCloud only keeps the last 3 device backups, ensuring that iCloud backs up your device regularly will also reduce the risk someone might "recover" an old nude photo you thought you deleted. Of course, if you want to reduce the risk even further, don't take the photos with your mobile phone, or at least don't do it with the default camera application. Use an ephemeral messaging app like Wickr or Telegram, at least until phones get an incognito mode similar to browsers.

Authorship Note: If Posthaven would let me mark more than one author, I would credit Moti Sagey as one. Especially since this was his idea, which I put a few refinements on.

Back when I worked in the Nokia TAC supporting Check Point Security Gateways, I actually ran a Nokia appliance as the headend for my network. One could call it "keeping my skills sharp for my daily work." For a variety of reasons, I discontinued this practice and instead opted for more traditional home routers that I could install DD-WRT on.

When I started working for Check Point, I alternated between either running a DD-WRT router or using a UTM-1 EDGE appliance. When Check Point announced the SG80 (and later the 1100 series appliance), I moved to using those. 

And while I probably could continue to use the 1100 series appliance for some time, I finally got my hands on a 2200 Appliance and decided that it should be the headend to my home Internet connection. I wanted to be able to use all the latest Check Point features and get my hands a bit dirtier than I have in a while. What better way to do that than with my family, which is very good at generating lots of real-world, Internet-bound traffic :)

DHCP and DNS Server on the Gateway

Every gateway I've used in the last several years had a DHCP server. In most cases, it also included a DNS server. Gaia does have a built-in DHCP server, and should you use it, make sure to configure your rulebase to permit the traffic correctly, as described in sk98839.

That said, it's not currently possible to make the Gaia DHCP server authoritative for a given subnet without manually hacking the dhcpd.conf file (see sk92436 and sk101525). I did this, which worked.
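For reference, the manual change described in those SKs boils down to marking the server authoritative in /etc/dhcpd.conf. A minimal sketch, with example addresses rather than my actual network:

```
# /etc/dhcpd.conf (fragment) -- example subnet, adjust to your network
authoritative;
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.1;
}
```

Keep in mind that Gaia may regenerate this file if you touch the DHCP settings in the WebUI or clish, so a manual edit like this can get clobbered.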

I still wanted a local DNS server, too, where I could define local DNS entries for items on my subnet. I could have easily used dnsmasq on one of the many DD-WRT routers I still have on my network as WiFi access points, but I wanted it on the Security Gateway, just like I had on my 1100, UTM-1 EDGE, and others. I poked around a bit on my Gaia R77.20 installation and discovered that it, too, had dnsmasq.

I want to be clear: dnsmasq is not formally supported on Gaia. That said, it appears to work quite nicely. The configuration file /etc/dnsmasq.conf is not managed through Gaia and, as far as I know, should survive upgrades and not be affected by anything you do in Gaia--except maybe enabling the DHCP server in Gaia, which you obviously don't want to do if you're going to use dnsmasq for this.

The dnsmasq in Gaia appears to take most of the common configuration options, so I set about configuring it as described in the dnsmasq documentation. Starting it up is pretty simple: a service dnsmasq start from Expert mode gets it going.
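As a sketch, a minimal /etc/dnsmasq.conf for this use case might look like the following; the domain, addresses, and hostnames here are illustrative, not my actual configuration:

```
# /etc/dnsmasq.conf (example)
domain=home.example.com                       # local DNS domain
expand-hosts                                  # append the domain to names from /etc/hosts
address=/nas.home.example.com/192.168.1.10    # static local DNS entry
server=8.8.8.8                                # upstream resolver for everything else
dhcp-range=192.168.1.100,192.168.1.200,12h    # only if dnsmasq is also your DHCP server
```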

But, of course, that only works until the next reboot. Now I could have hacked /etc/rc.local or something like that, but I know how to make Gaia's configuration system actually start this process for me, monitor it, and restart it if it dies.  

[Expert@hostname:0]# dbset process:dnsmasq t
[Expert@hostname:0]# dbset process:dnsmasq:path /usr/sbin
[Expert@hostname:0]# dbset process:dnsmasq:runlevel 3
[Expert@hostname:0]# dbset :save

I guess all those years of supporting IPSO paid off :) If you want to turn off dnsmasq in the future, issue the following commands from Expert mode:

[Expert@hostname:0]# dbset process:dnsmasq
[Expert@hostname:0]# dbset :save

One other thing you might want to do if you're running a local DNS server is have your Security Gateway actually use it. The problem is that dhclient (which obtains settings from a DHCP server) overwrites the Gaia configuration for the DNS domain and the DNS servers to use. You should be able to prevent this by changing the /etc/dhclient.conf configuration file so it no longer requests this information.

Specifically, it means changing this line:

request ntp-servers, subnet-mask, host-name, routers, domain-name-servers, domain-name;

to this:

request ntp-servers, subnet-mask, host-name, routers;

To be absolutely clear, Check Point does not support using dnsmasq for any purpose on Gaia. While I am using it successfully in R77.20, it could easily be removed in a future version.

DHCP and the Default Route on the Internet Interface

When I put the 2200 in my network, I noticed that Gaia was not accepting the default route from Comcast's DHCP server. To get my home network operational, I manually set the default route out my Internet-facing interface, which seemed to work for a time.

One persistent issue I was having, which I initially was unable to figure out, was that sites would sometimes take a long time to load. I wasn't quite sure why; it only happened occasionally and wasn't something I could easily track down. I ignored it for a couple of weeks, but last night the problem really came to a head.

I thought Comcast was having a problem. I even went so far as to call Comcast, and they said there was some signal problem on their end. The problem still persisted. The only thing "different" was that we had a visitor who brought a laptop and was playing League of Legends with my son.

I tried a number of things and couldn't figure out what was going on. I did start seeing some peculiar traffic on my external interface, and decided on a whim to check my ARP table. Rather than the 20-30 entries I'm used to seeing, I saw over 700 entries--for IPs on the Internet. That didn't look right. And sure enough, I was seeing ARP who-has requests for Internet IPs coming from my gateway.

Arping for every IP you access on the Internet is not terribly efficient. Comcast was, understandably, not responding to these ARP requests quickly, though they eventually did. Furthermore, due to the transient nature of ARP entries, they were timing out every minute or so, which meant that as long as I was actively passing traffic, connections to websites worked great. When I stopped and reconnected? It would take forever to connect.

Which brings me back to the original problem: getting Gaia to take a default route from a DHCP server. It turns out there's an option in Gaia called "kernel routes" that you need to enable to make this work, and enabling it is pretty straightforward. If I had searched SecureKnowledge earlier, I probably could have saved myself learning (and documenting) this lesson. It is documented in sk95869.
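From clish, the change is a one-liner; to the best of my recollection it looks like the following, but consult sk95869 for the authoritative syntax for your version:

```
hostname> set kernel-routes on
hostname> save config
```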

Now the Internet at home is working great. No delays in loading websites at all--at least none caused by configuration errors on my part :)

Dynamic IP Gateway Security Policy

One of the challenges of using a Dynamic IP Gateway is that you don't know your Security Gateway's external IP. Not knowing it can make it difficult to, say, create "port forwarding" rules or rules that allow traffic to the Security Gateway itself.

Turns out this is not an issue. When you mark a gateway object as having a Dynamic Address, two Dynamic Objects come into play: LocalMachine (which always resolves to your gateway's external IP address) and LocalMachine_All_Interfaces (which resolves to all of the IP addresses associated with your gateway). These dynamic objects are simply placeholders whose actual definitions are updated on the local gateway. For these two objects in particular, this happens periodically without any action on the administrator's part.
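You can see how these objects resolve on the gateway itself with the dynamic_objects utility from Expert mode; -l lists the currently defined objects and their address ranges (hostname below is a placeholder):

```
[Expert@hostname:0]# dynamic_objects -l
```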

Dynamic Objects have a couple of important limitations. Rules using Dynamic Objects must be installed on a specific installation target, i.e. you cannot simply leave the rule's Install On field set to the default Policy Targets; doing so will cause a policy compilation error. The second issue is that rules with Dynamic Objects disable SecureXL templates at that rule. This isn't a huge deal for me, as I'm coming nowhere near the capacity of my 2200 and most of my traffic is hitting rules with acceleration, but it is a consideration you must make when constructing your rulebase.

Another important limitation: Identity Awareness is not supported on Security Gateways that have a Dynamic IP. I do not believe this is an issue on the 600/1100 appliances, but it is for the 2200 and up. 

Final Thoughts

It's been a while since I posted something technical on my own blog about using Check Point. My role is such that I'm not always involved in the technical side of things like I was back when I wasn't quite Internet famous :)

In any case, it may be something I'll do again in the future. I'm sure that pieces of this article will also show up in SecureKnowledge--the supported parts of the above at least.

And, of course, these are my own thoughts and experiences. Corrections and feedback are welcome.