FireEye: Indemnification That's Basically Worthless

From FireEye’s CEO and the meaning of ‘basically’:

In an interview on CNBC’s “Mad Money” with Jim Cramer, FireEye CEO Dave DeWalt said a certification granted by the Department of Homeland Security under a law known as the SAFETY Act “allows companies who use our product to basically be indemnified against legal costs relative to being breached.”

Which, if you unpack this statement, turns out to be basically meaningless.

From the FAQ on the SAFETY Act maintained by the Department of Homeland Security, emphasis added:

[The] Act creates certain liability limitations for “claims arising out of, relating to, or resulting from an Act of Terrorism” where Qualified Anti-Terrorism Technologies have been deployed. The Act does not limit liability for harms caused by anti-terrorism technologies when no Act of Terrorism has occurred.

What is an Act of Terrorism? The FAQ about the SAFETY Act continues:

A: Pursuant to the SAFETY Act, an Act of Terrorism is:

ACT OF TERRORISM- (A) The term “act of terrorism” means any act that the Secretary determines meets the requirements under subparagraph (B), as such requirements are further defined and specified by the Secretary.

REQUIREMENTS- (B) An act meets the requirements of this subparagraph if the act-

(i) is unlawful;

(ii) causes harm to a person, property, or entity, in the United States, or in the case of a domestic United States air carrier or a United States-flag vessel (or a vessel based principally in the United States on which the United States income tax is paid and whose insurance coverage is subject to regulation in the United States), in or outside the United States; and

(iii) uses or attempts to use instrumentalities, weapons or other methods designed or intended to cause mass destruction, injury or other loss to citizens or institutions of the United States.

That’s actually a pretty broad definition of terrorism that I should probably explore in another forum. Suffice it to say, most breaches that affect most companies are not recognized as “Acts of Terrorism” under the SAFETY Act, which means there is likely no legal indemnification if and when a breach happens.

Even on the off chance legal indemnification applies, there are still plenty of other costs the SAFETY Act won’t cover. I’m sure FireEye will happily sell you the consulting needed to clean up after such a breach, and I’m pretty sure it won’t be free, either.

Personally, I’d rather prevent breaches from happening than rely on promises of indemnification if and when they do. But that’s just me.

Disclaimer: My employer Check Point Software Technologies competes with FireEye in the market. These thoughts are my own.

My Podcasts on Apple and the FBI Backdoor Requests

For those of you who don’t know, I produce a short but regular podcast called PhoneBoy Speaks. It is not exclusively focused on information security, though I do maintain an RSS feed for infosec-related episodes if that’s all you’re interested in.

On the last couple of episodes, I discussed the FBI’s requests for Apple to assist in breaking into a specific iPhone “because terrorism,” which I have already covered in written form. I don’t think I say anything different in these podcasts, but if you prefer audio to reading, here you go:

Disclaimer: I haven’t asked what my employer Check Point Software Technologies thinks about all this. These thoughts are my own.

Can Apple Actually Comply With The FBI Request To Allow Brute-Forcing PIN Codes?

For a moment, anyway, ignore the politics of whether or not Apple should comply with the FBI’s court-ordered request to assist in unlocking a particular iPhone relevant to a highly publicized terrorism case. Let’s talk about whether it’s actually possible, based on the information available and what I know is possible.

To summarize, the request is to provide “reasonable technical assistance” to achieve the following three things:

  1. Disable the auto-erase function (which, if enabled, wipes the device after too many incorrect passcode attempts)
  2. Allow submission of passcodes via something other than the touchscreen
  3. Disable the ever-escalating delays that Apple introduces when you enter incorrect passcodes

This would allow the FBI to effectively brute-force the PIN code on the device, assuming the owner of said device used a simple 4-digit passcode (very likely). A 6-digit passcode, which is the new default in iOS 9.2, might take a little longer. If it’s an actual strong password, the FBI might be waiting a long time to actually unlock the phone.
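To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python. The per-attempt time is purely my assumption (roughly the figure often cited for the device’s hardware key derivation); the real number depends on the phone, but the orders of magnitude are the point.

```python
# Rough worst-case brute-force estimate once the delays and auto-erase
# are out of the way. SECONDS_PER_ATTEMPT is an assumption, not a figure
# from Apple or the FBI.
SECONDS_PER_ATTEMPT = 0.08  # assumed per-guess cost

def worst_case(digits: int) -> str:
    attempts = 10 ** digits  # size of an all-numeric passcode space
    hours = attempts * SECONDS_PER_ATTEMPT / 3600
    return f"{digits}-digit passcode: {attempts:,} attempts, ~{hours:.1f} hours worst case"

for d in (4, 6):
    print(worst_case(d))
# 4-digit passcode: 10,000 attempts, ~0.2 hours worst case
# 6-digit passcode: 1,000,000 attempts, ~22.2 hours worst case
```

A real alphanumeric password blows these numbers up by many orders of magnitude, which is why the strength of the passcode matters far more than the tooling here.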

I don’t think there is any debate that Apple can write firmware that accomplishes the above tasks. The trick, of course, is getting the changed firmware onto the device. The only way I know of that can happen on a device where the passcode is not known is by using the Device Firmware Upgrade (DFU) mode.

Just for kicks, I put one of my iPhones into DFU mode and plugged it into one of my computers. I got the following warning:

Loading new firmware on the device will erase it, which rather defeats the purpose of what the FBI is trying to achieve. That is, of course, unless Apple has designed a way to prevent that from happening in certain cases. Dan Guido seems to think this is the case, and if you read Apple’s Legal Process Guidelines, they seem to suggest it is possible, at least in versions of iOS prior to 8.0:

For iOS devices running iOS versions earlier than iOS 8.0, upon receipt of a valid search warrant issued upon a showing of probable cause, Apple can extract certain categories of active data from passcode locked iOS devices. Specifically, the user generated active files on an iOS device that are contained in Apple’s native apps and for which the data is not encrypted using the passcode (“user generated active files”), can be extracted and provided to law enforcement on external media. Apple can perform this data extraction process on iOS devices running iOS 4 through iOS 7. Please note the only categories of user generated active files that can be provided to law enforcement, pursuant to a valid search warrant, are: SMS, iMessage, MMS, photos, videos, contacts, audio recording, and call history. Apple cannot provide: email, calendar entries, or any third-party app data.

It’s possible Apple didn’t need new firmware to obtain access to the data outlined above; it could simply interrogate the device in DFU mode and the device would give the information up. Even if a new firmware load is required to accomplish this task, it’s possible that, as part of improving device security by encrypting all the data, Apple also closed the loophole that allowed them to put a new version of iOS on the device in DFU mode without erasing it. As the device in question is reportedly running iOS 9.0, the entire argument over whether this is technically possible hinges on the answer to this question.

The only other method available to Apple: hacking the actual device in much the same way jailbreakers do. The very same thing John McAfee is offering to do for free using his army of hackers with 24-inch purple mohawks, 10-gauge ear piercings, and tattooed faces–people not likely to be in the employ of the FBI.

Edit 19 February 2016: I asked Dan Guido via a comment whether or not you could load new firmware via DFU mode. He said it could be loaded as a ramdisk, which would leave the user data alone. Whether this could actually accomplish what the FBI is asking for is still not known, but the fact that you can do this certainly makes it seem a lot more plausible.

Regardless of whether or not this is technically possible, Apple is right to challenge this court order. The implications go well beyond this one iPhone and will impact our digital rights globally for decades to come.

Disclaimer: I haven’t asked what my employer Check Point Software Technologies thinks about all this. These thoughts are my own.

Apple wrote a letter to customers regarding a request it had received from the United States Federal Bureau of Investigation to essentially “backdoor” an iPhone in its possession so the FBI can retrieve the encrypted data on it. I have reproduced the letter in its entirety below for posterity’s sake.

Data on devices such as the iPhone is protected with math. That’s all cryptography is, folks: math. Granted, it’s well beyond the stuff most people learn in school, but it’s all math in the end.

Aside from the specific math used in cryptography, what makes cryptography able to protect data is the encryption key, which itself is merely a large number. A fact, if you will. If that fact gets out, you can undo the encryption. This is why Apple and anyone else doing cryptography correctly go to great lengths to ensure the encryption key is kept private and the math used to encrypt the data is strong enough that the key cannot be derived from the encrypted data.
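To make the “a key is just a large number” point concrete, here’s a small sketch using Python’s cryptography package with AES-GCM. This is a generic example of symmetric encryption, not a description of how Apple’s implementation actually works:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 256-bit key is literally just a very large number.
key = AESGCM.generate_key(bit_length=256)
print(int.from_bytes(key, "big"))   # the "fact" that protects the data

aesgcm = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"my private conversations", None)

# Anyone who learns that number (the key) can undo the encryption...
print(aesgcm.decrypt(nonce, ciphertext, None))

# ...and without it, the ciphertext is just noise; if the math is done
# right, there is no way to work backwards from ciphertext to key.
```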

While it’s possible to create encryption schemes where some third party has (or can derive) the encryption keys, which many governments are now asking for, keep in mind those encryption keys are merely facts. Facts that, once they are out there, can be used by anyone (“good” or “bad”) and cannot easily be changed. These schemes put everyone’s data at risk. (This is also why biometric data used for authentication on its own is not so fantastic: it’s merely a fact that cannot be changed, but can be replicated.)

The worst part is the government probably already has all the information they need thanks to all the metadata they collect at the nation’s telecom providers today. Why aren’t they using that, or any number of other traditional methods of investigation, instead of asking Apple (and by extension other device manufacturers) to make their devices less secure?

Edit: Yes, I realize the FBI is asking Apple to disable the option that erases the device after 10 incorrect passcode attempts, not to backdoor the actual encryption (as noted in this Techdirt article). It’s effectively disabling a key mechanism that protects the encryption keys, which ultimately has the same effect as backdooring the encryption itself.

Disclaimer: I don’t know what my employer Check Point Software Technologies thinks about this. I didn’t ask. These are my own thoughts.


February 16, 2016

A Message to [Apple] Customers

The United States government has demanded that Apple take an unprecedented step which threatens the security of our customers. We oppose this order, which has implications far beyond the legal case at hand.

This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

The Need for Encryption

Smartphones, led by iPhone, have become an essential part of our lives. People use them to store an incredible amount of personal information, from our private conversations to our photos, our music, our notes, our calendars and contacts, our financial information and health data, even where we have been and where we are going.

All that information needs to be protected from hackers and criminals who want to access it, steal it, and use it without our knowledge or permission. Customers expect Apple and other technology companies to do everything in our power to protect their personal information, and at Apple we are deeply committed to safeguarding their data.

Compromising the security of our personal information can ultimately put our personal safety at risk. That is why encryption has become so important to all of us.

For many years, we have used encryption to protect our customers’ personal data because we believe it’s the only way to keep their information safe. We have even put that data out of our own reach, because we believe the contents of your iPhone are none of our business.

The San Bernardino Case

We were shocked and outraged by the deadly act of terrorism in San Bernardino last December. We mourn the loss of life and want justice for all those whose lives were affected. The FBI asked us for help in the days following the attack, and we have worked hard to support the government’s efforts to solve this horrible crime. We have no sympathy for terrorists.

When the FBI has requested data that’s in our possession, we have provided it. Apple complies with valid subpoenas and search warrants, as we have in the San Bernardino case. We have also made Apple engineers available to advise the FBI, and we’ve offered our best ideas on a number of investigative options at their disposal.

We have great respect for the professionals at the FBI, and we believe their intentions are good. Up to this point, we have done everything that is both within our power and within the law to help them. But now the U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone.

Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.

The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.

The Threat to Data Security

Some would argue that building a backdoor for just one iPhone is a simple, clean-cut solution. But it ignores both the basics of digital security and the significance of what the government is demanding in this case.

In today’s digital world, the “key” to an encrypted system is a piece of information that unlocks the data, and it is only as secure as the protections around it. Once the information is known, or a way to bypass the code is revealed, the encryption can be defeated by anyone with that knowledge.

The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks — from restaurants and banks to stores and homes. No reasonable person would find that acceptable.

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

A Dangerous Precedent

Rather than asking for legislative action through Congress, the FBI is proposing an unprecedented use of the All Writs Act of 1789 to justify an expansion of its authority.

The government would have us remove security features and add new capabilities to the operating system, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by “brute force,” trying thousands or millions of combinations with the speed of a modern computer.

The implications of the government’s demands are chilling. If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone’s device to capture their data. The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone’s microphone or camera without your knowledge.

Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

We are challenging the FBI’s demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

Tim Cook

Why Can't I Choose What to SSL Inspect Based on Application?

SSL Decryption is a feature in current versions of the Check Point Security Gateway, as well as in competing products. I wrote a description of the technology in a previous blog post entitled Why SSL Decryption Is Important.

All implementations of this feature have a configurable policy so you can decide what traffic to decrypt. Here is an example policy from a Check Point Security Gateway, which can use IP addresses or URL Filtering Categories:

SSL Inspection Policy

Some people would prefer to base that decision on applications (e.g. YouTube), but I just don’t see a way to do that without reducing the overall security posture. Maybe someone cleverer than I am can explain the flaws in my logic.

The way Check Point determines whether or not a given IP requires SSL inspection is to actually man-in-the-middle the first connection to that IP (assuming the policy is configured appropriately and just the “site category” needs to be determined). In the first few packets of that MITM connection, we can determine conclusively what URL the end user (or the app) is going to, put an IP-and-category entry in the local cache, and inspect the traffic on that connection. Even if a URL isn’t used, the certificate information is in the first few TCP data packets, which gives us something to assign a URL category to. If further connections to that IP should be SSL inspected, the firewall will do so per the policy.
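Here’s a simplified sketch of that probe-and-cache flow in Python. This is my own illustration of the logic described above; the names and structures are made up and are not Check Point’s actual implementation:

```python
# Toy sketch of the probe-and-cache decision flow described above.
INSPECT_CATEGORIES = {"social media", "web mail"}   # made-up policy
category_cache: dict[str, str] = {}                 # dest IP -> URL category

def categorize_first_connection(dest_ip: str) -> str:
    """Stand-in for the probe: MITM the first connection, read the URL or
    server certificate from the first few packets, and return a category."""
    return "social media"   # placeholder result for illustration

def should_inspect(dest_ip: str) -> bool:
    if dest_ip not in category_cache:
        # First connection to this IP: probe it (which itself gets inspected)
        # and remember the category for subsequent connections.
        category_cache[dest_ip] = categorize_first_connection(dest_ip)
        return True
    # Later connections: decide purely from the cached category and policy.
    return category_cache[dest_ip] in INSPECT_CATEGORIES

print(should_inspect("203.0.113.10"))   # True: first connection is probed
print(should_inspect("203.0.113.10"))   # True: cached category matches policy
```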

Sometimes the man-in-the-middle process can break specific applications (e.g. because they use Certificate Pinning). Or a URL isn’t being used. Or, worse, the SSL site in question requires Client Authentication, which will completely break when you attempt to man-in-the-middle the connection. This is why the latest (R77.30) release includes a mechanism called Probe Bypass, which can be enabled as described in sk104717.

Some applications cannot be identified using just the certificate. Google is a great example of this, as they use wildcard certificates across a number of their properties. Even Server Name Indication, which exists to remediate this issue, doesn’t work consistently across all browsers and servers. Thus we’re left with the original certificate as-is.
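You can see this for yourself by pulling the certificate a Google property presents and looking at the names on it. Here’s a small sketch using Python’s standard ssl module and the cryptography package; the exact names returned will vary over time:

```python
import ssl
from cryptography import x509

# Grab the certificate presented for a Google property and list the names
# on it. Many are shared wildcards, so the certificate alone can't tell
# you which application (YouTube, Search, Gmail, ...) is actually in use.
pem = ssl.get_server_certificate(("www.youtube.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print(cert.subject.rfc4514_string())
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
for name in san.value.get_values_for_type(x509.DNSName):
    print(name)   # e.g. *.google.com, *.youtube.com, *.gstatic.com, ...
```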

Let’s assume we’re ok with not man-in-the-middling traffic until we’re certain it’s an app we want to perform SSL inspection on. To identify applications beyond IPs and ports, you actually have to let some traffic pass through the firewall.

(Is a traditional IP/port related policy still relevant? Absolutely, despite what some Check Point competitors like to say in their marketing, which even they will admit if pressed on the issue.)

If we don’t man-in-the-middle the first connection to an IP, and instead allow the application to be identified before deciding to SSL inspect it, we run the risk of allowing encrypted traffic for an application we actually want to inspect. Malicious applications could easily exploit this behavior by pretending to be an unidentifiable application, so their connections would never be SSL inspected.


Disclaimer: This is my own thinking. My employer Check Point Software Technologies may have a different stance on this matter.

Edited to add reference to Probe Bypass on 15 Feb 2016 (some hours after I originally published)