Detecting malicious code that enters your network is a challenging problem, one that traditional anti-virus and anti-malware tools can't keep up with. These tools rely on heuristics and static signatures to detect malicious code.

Tomer Teller, a security researcher and evangelist at Check Point, told me about an incident at the 29th annual Chaos Communication Congress where a crowd of security professionals was asked whether anyone actually used AV. Not a single person raised their hand.

How much value does AV provide? An article put out by Imperva was misquoted and misrepresented in the media. The main message? AV only catches about 5% of malware. The real story is a bit better, but it's still not all that rosy.

To help you understand why this problem is particularly tricky, consider how many ways you can write a "Hello, World!" program--often the first program you write when learning a programming language. It is so called because the program writes the phrase "Hello, World!" to the output device. It's usually a simple program, such as the following C-based example:

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

A more complex example (that I borrowed from here) might look something like this:

#include <stdio.h>
#define THIS printf(
#define IS "%s\n"
#define OBFUSCATION ,v);

double h[2];
int main(_, v) char *v; int _; {
    int a = 0;
    char f[32];
    h[2%2] = 2191444119706963415345639101882402617070952317017776099732075945943680039407307212501870429040900672146338833938303659439237740635160500855813030357492372682887858054616489605441589829740433065995076650229152079883597110973562880.000000;
    h[4%3] = 1867980801.569119;
    switch (_) {
        case 0: THIS IS OBFUSCATION break;
        default: main(0, (char *)h); break;
    }
}

Replace "print Hello, World" with "inject code into target host using the latest exploit" and you can begin to understand why it is so hard to detect malicious code by simple inspection.

That isn't to say that static analysis provides no value--it does. It catches the really obvious stuff. Unfortunately, it's not foolproof.

In a previous post, I asked if we could trust no one in Information Security. The reality is that, at some point, we have to trust. We have to trust that we have the right policy in place. We have to trust our people to do the right thing. We have to trust our tools will do their job.

Of course, we should not trust blindly. We need to verify that our tools are doing their job: keeping the bad stuff out and enforcing the policy we've specified. We need to verify that our people are doing the right thing. Our tools should be both enforcing the policy and educating users about what the policy is. And, of course, we need to evaluate the policy itself to ensure it is both effective and in line with business objectives.

If Information Security professionals spend all their time doubting everything, not only will they drive themselves crazy, but the real threats will get by.

Along these lines, the marketing folks at Check Point put together a video discussing trust and its very important role in Information Security.

It's funny: every time I read about yet another security vulnerability in Internet Explorer, such as the recent one involving Adobe Flash hosted on the Council on Foreign Relations website that performed a heap spray against Internet Explorer 8, I am reminded of Unsafe at Any Speed, Ralph Nader's 1965 book about the unsafe automobiles designed by the American auto industry. Thus, the phrase "Unsafe at Any Version" comes to mind when I think of Internet Explorer. Likewise, I tend to think the same thing about Adobe Acrobat, Adobe Flash, and Adobe Shockwave (a two-year-old vulnerability, anyone?).

Is it fair to say these products are unsafe at any version? While the evidence suggests that is probably true, I believe the security problems we see in these products are evidence of their success. Okay, maybe Internet Explorer was successful because it was illegally tied to Microsoft Windows, but I'm trying to remember the last time Internet Explorer, Adobe Flash, and Adobe Acrobat Reader were not considered "required items" for a PC.

Which is part of the problem with keeping these programs secure. There is a lot of legacy code in those apps, written well before secure coding practices became the norm. Internet Explorer has a fundamental flaw in being so tightly tied into the operating system. Rewriting code is no fun and, unless there is a significant business reason to do so, it doesn't happen.

Granted, Adobe did do this with Adobe Reader, but there are still a lot of older versions of Adobe Reader out there, just waiting to be compromised. Just like there are millions of people still running Windows XP and Internet Explorer 8, which Microsoft will eventually stop providing security patches for.

These applications aren't going anywhere anytime soon. Which means the bad guys are going to continue to find vulnerabilities in these applications for the foreseeable future. It certainly will keep us good guys busy for the foreseeable future, too.

Trust. It's something I'm sure many security professionals think about in various contexts. However, I don't think anyone can fully appreciate the level of trust that we exercise on a daily basis without really thinking about it.

Just think about getting packets from point A to point B. There is an insane number of things we simply trust without really thinking about them. This includes:

  1. The program running on point A to generate traffic: who created that program? Will that program do something you don't expect?
  2. The OS running on point A: is that program running through an OS where key calls are "compromised" in the same way that the recent Linux Rootkit was?
  3. The various processors in point A: are those processors calculating true? Will they have a flawed floating-point division unit like ye olde Pentium processors? Or did someone replace a processor in your device with one that purposefully does what you don't expect?
  4. The transmission medium of those packets: how secure is that medium? Who (or what) can read those packets off the wire, or the air as appropriate?
  5. The routers and switches along the way between point A and point B: they, too, are computers running code, are they not? Are they configured correctly? Will they route the packets along the path you expect? Could they be compromised as a result of bad design or malicious intent?
  6. Point B, that receives the traffic: does it believe what it is reading off the wire? How does it know Point A sent it? Will Point B process it correctly?

And so on. Trying to account for all these possibilities to ensure absolute security is next to impossible and will surely drive you crazy. That said, the thought exercise is important if you're trying to design a secure system. All of your assumptions about various elements of that system must be examined on a regular basis to ensure that you don't miss when something transitions from a largely theoretical threat to a very real one.

Soon, I'll share some ideas on what a "trust no one" network might look like. Does such a thing exist today? Is maintaining such a thing even practical?

If you've been in the IT industry long enough, you'll start seeing the same concepts "reinvented" every few years or so.

The current panacea is so-called Bring Your Own Device (BYOD)--the idea that end users can use their own technology devices in a corporate setting while having some level of access to corporate data. We went through this with laptops and personal computers over the years; now the devices in question are mobile phones and tablets.

Another acronym I've heard recently describes a state of IT that has existed for as long as I've been in it--Corporate Owned, Personally Enabled (COPE). Here, the idea is that a corporate-owned asset is used for an employee's personal needs. This has been the informal reality with corporate-owned PCs for the last couple of decades. Now we're starting to see it with mobile devices, either with or without the use of third-party tools.

The reality is that, regardless of whether companies adopt BYOD, COPE, something else, or nothing at all, employees are going to use personal devices to do work--and, likewise, use corporate devices for personal purposes. This has always been the case and always will be, regardless of any formal policies to the contrary.

From a security point of view, this creates some rather obvious issues. On corporate-owned devices, some sort of "device management" or "endpoint security" offering is installed, which users tolerate to varying degrees. (I happen to like Check Point's Endpoint Security offering, but I will admit I'm biased.) BYOD often won't work on these terms because users are asked to submit to "device management" or an "endpoint security" installation in order to use their own device on the corporate network.

But ask yourself: what is it that you're really trying to protect on that endpoint? Prevent malicious software? You have a properly segmented network, right? You have the technology to detect any malicious traffic from that segment, right? Good. That should take care of it.

But what if the software doesn't "phone home" or generate malicious traffic while on the corporate network, but instead collects data and then sends it out over the mobile operator's network? Modern mobile operating systems have sandboxes that prevent one app from reading data from another in the first place. Obviously, if the device is jailbroken or rooted, all bets are off.

And malicious apps, while not unheard of, are rare in the official app stores for iOS and Android. The same goes for privilege escalation-type attacks on iOS and Android: not impossible, but a lot harder to pull off, given that Android and (more so) iOS are pretty secure out of the box.

Really, there's only one thing to worry about on these devices: the corporate data. That data needs to be protected, which is generally pretty easy to do, assuming only a trusted application is able to access the data and the regular OS protections are in place (i.e., the device isn't rooted or jailbroken). And, of course, the data has to come on and off the device in a secure manner (e.g., with strong encryption or via a physical access mechanism).

Once you have the magic, trusted app (or suite of apps) to access, work with, and secure the small amounts of corporate data the device can work with, congratulations! You've now eliminated the headache of managing potentially unknown devices in the hands of users who will do everything they can to thwart your security controls anyway. If users want to work with corporate data, they can use the "trusted" apps to do it, which should have appropriate hooks back to corporate to validate whether you are able to even use the data and, if you or your device goes rogue, wipe the data from your device without wiping the entire device (which has personal data on it).

While I don't believe there are great solutions along these lines yet, this is the only kind of solution I believe makes any sense in the long term. People will be able to bring their own devices and access business data, while infosec can rest easy knowing business data is still accessed and stored safely.

It's a BYOD solution everyone can COPE with.