Sunday 28 September 2014

How I fixed Shellshock on my OpenBSD Box

Well, I am on a "really old" OpenBSD, and I couldn't be bothered to update it right now. The fix was really arduous:

[Sun 14/09/28 10:52 BST][p0][x86_64/openbsd5.2/5.2][4.3.17]
zsh 1044 % sudo pkg_delete bash
bash-4.2.36: ok
Read shared items: ok

In reality, this is only possible because, as a sane operating system, OpenBSD has no dependencies on anything other than a POSIX-compliant sh, of which there are several implementations available.

To me, this is just another reason to avoid depending on specific shells and to target POSIX instead. When the shit hits the fan, you'll have somewhere to hide.

When I tried to do the same on my work (FreeBSD) box, I came up against several issues, the main one being that lots of packages specify bash as a dependency.

At some point, I'll write a blog post about the functions of that server, and how I've hardened it.

Friday 26 September 2014

Time for Regulation in the Software Industry?

Many pieces of software have a clause similar to:
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You may recognise that as a large portion of the BSD 2-Clause open source license. Not to pick on BSD-style licenses: section 15 of the GPL v3, section 11 of the GPL v2, section 7 of the Apache license, and substantial portions of the NCSA and MIT licenses also contain this clause. The wording is very similar from license to license. Let's look at how this plays out:

  • Linux: GPL
  • FreeBSD: BSD Style
  • OpenBSD: BSD Style
  • Bash: GPL
  • GCC: GPL
  • Clang/LLVM: NCSA
  • Apache HTTPd: Apache
  • NGinx: BSD
  • Postgres: "BSD-Like", Postgres License 
  • MySQL: GPL
  • Apache Struts: Apache
This represents a huge portion of internet-facing devices, including popular services such as Netflix, tumblr, Facebook, and even the London Stock Exchange. Many devices run Android, and lots of the routers separating home networks from the Internet are running variants of Linux.

This is not to exclude commercial software: even Microsoft puts these clauses in its licenses.

I've even come across behaviour in The Co-operative Bank's online banking platform and TSB's Verified by Visa platform (URL structure, exception-handling behaviour) that suggests each has a component which uses Apache Struts.

The basic meaning (and I am not a lawyer by any stretch) is that, as far as legally possible, the person or entity who produced the software is not Responsible for any Bad Stuff that happens as a result of using it.

So, the story so far:
  • Developers kindly write software, and entirely disclaim warranty and liability.
  • Organisations set up paid-for services off the back of this software, without verifying that the software is fit for purpose, since auditing software is expensive.
  • End users then entrust the organisations, via the un-audited software, with their money and personally identifying information (PII).
The end users -- the ones risking their bank deposits, their PII -- are, in many cases (banking has specific protections), the ones who are effectively expected to evaluate the software which they are unknowingly going to use. They are in no position to assess or even understand the risks that they are taking.

With cars, there are safety ratings, Euro NCAP and the like. Electrical goods must meet a minimum safety standard (in the UK, most people look for the Kitemark) before they can be brought to market. When these goods fail to meet this standard, the seller is held accountable, and may in turn hold suppliers accountable for failing to supply components which meet the specified requirements. There is a well-known, well-exercised tradition of consumer protection, especially in the UK.

On the other hand, should you provide a service which fails to deploy even basic security practices, you're likely to get, at most, a slap on the wrist from a toothless regulatory body (In the UK, this is the Information Commissioner's Office, ICO). For example, when Tesco failed to store passwords securely, the only outcome was that the ICO was "unimpressed". There was likely no measurable outcome for Tesco.

Banks, which are usually expected to be exemplary in this field, respond very poorly to research that would allow consumers to differentiate between the "secure banks" and the "insecure banks". This is the exact opposite of what needs to happen.

The lack of regulation, and the strangling off of information to consumers, is leading to companies transferring ever larger risks to their clients. Often, the clients have no option but to accept this risk. How many people have a Facebook account because it's the only way to arrange social gatherings (because everyone's on Facebook)? How many people carry and use EMV (in the UK, Chip n' PIN) debit or credit cards? Banks are blindly rolling out NFC (contactless payments) to customers, who have no choice in the matter, and who, in many situations, simply do not know that this is an unsafe proposition, and could never really know.

An illuminating example is eBay's recent antics. The short version of the story is that a feature for sellers (and hence a profit-making feature for eBay) has turned out to be a Bad Idea, exposing buyers to a very well-crafted credential-stealing attack. This has led to the exposure of many buyers' bank details to malicious third parties, who have then used those details to commit fraud.

In this situation, eBay is clearly gaining by providing a feature to the sellers, and by shifting the risk to the consumer. Further, because this is a profit-making hole, and closing it could break many sellers' pages (thus incurring a huge direct cost), eBay is spinning its wheels and doing nothing.

The consumers gain little, if anything, from this additional risk which they are taking on. Like a petrochemical company building plants in heavily populated areas, the local population bear the majority of the risk and do not share in the profits.

This is an unethical situation, and regulatory bodies would normally step in to ensure that the most vulnerable are suitably protected. This is not the case in software and IT services: the regulatory bodies are toothless, and often do not have the expertise to determine which products are safe and which are not.

For instance, both OpenSSL and NSS (both open source cryptographic libraries) are used by The Onion Router (TOR) and the TorBrowser (effectively Firefox with TOR baked in), which in turn are used by dissidents and whistleblowers the world over to protect their identities where their lives or livelihoods may be at risk.

Both OpenSSL and NSS have won FIPS-140 (a US federal standard) approval in the past. Yet we have had Heartbleed from OpenSSL, and recently signature forgeries in NSS. Clearly, these bodies don't actually audit the code they certify, and when things do go catastrophically wrong, the libraries in question retain their certifications.

For reference, the academic community has been concerned about the state of the OpenSSL codebase for some time. We've known that it was bad, we've shouted about it, and yet it retained its certified status.

Individual governments often influence this by procuring only high-assurance software and demanding that certain products meet stringent standards; failures of those products can therefore be financially damaging to the suppliers.

The UK government already has a Technology code of practice which government departments must use to evaluate IT suppliers' offerings. However, there are many fields over which the government has little remit, and it has no international remit at all. The US government has similar standards processes embodied in the Federal Information Processing Standards (FIPS, of which FIPS-140, mentioned above, is just one of many).

We also have existing standards processes, like the ISO 27000 series, which have a viral nature, in that the usage of an external service can only be considered if the organisation aiming for ISO 27000 certification can show that it has done due diligence on the supplier.

However, as mentioned above, these standards rarely mean anything, as they rely on the evaluation of products which very few people understand, and are hard to test. Products that shouldn't achieve certification do, as with OpenSSL.

Clearly, the current certification process is not deployed widely enough, and is not behaving as a certification process should, so we need something with more teeth. In the UK, we have the British Medical Association (BMA), which often takes its recommendations directly from the National Institute for Clinical Excellence (NICE). If a failure occurs, a health care provider (doctor, medical device supplier, etc.) will end up in court with the BMA present, and may lose their right to trade, as well as facing more serious consequences.

There is a similar situation in the UK for car manufacture, where cars have a minimum safety expectation, and if the manufacturer's product doesn't meet that expectation, the company is held accountable.

Another example is US cars being sold into China: many US cars don't meet Chinese emissions standards, and hence cannot be sold there.


We need something similar in software and services: an agreement that, like in other forms of international trade, vendors and service providers are beholden to local law.

We have existing legislation relating to the international provision of services. In many cases, this means that when a company (such as Google) violates EU competition law, it is threatened with fines. The laws are in place, but need to be stronger, both in terms of what constitutes a violation and in the measures that can be applied to the companies in question.

Currently, the internet is the wild west, where you can be shot, mugged and assaulted all at once, and it's your own fault for not carrying a gun and wearing body armour. However, the general public are simply not equipped to acquire suitable body armour or fire a gun, so we need some form of "police" force to protect them.

There will always be "bad guys", but we need reasonable protection from both the malicious and the incompetent. Anyone can set up an "encrypted chat program" or a "secure social media platform", but actually delivering on those promises when people's real, live PII is at stake is much harder, and should be regulated.

Acknowledgements

Many thanks to my other half, Persephone Hallow, for listening to my ranting on the subject, and for inspiring or outright suggesting about half the ideas in this post, as well as proofreading & reviewing it.

Sunday 14 September 2014

Achieving Low Defect Rates

Overview

Software defects, from null dereferences to out-of-bounds array accesses and concurrency errors, are a serious issue, especially in security-critical software. As such, minimising defects is often a stated goal of many projects.

I am currently writing a library (OTP-Java) which provides several one-time password systems to Java applications, and this is a brief run-down of some of the steps I am taking to try to ensure that the library is as defect-free as possible. The library is not even alpha as yet. It still has several failing tests, but hopefully will be "completed" relatively soon.

Specification

Many products go forward without a specification. In many cases this is not a "bad thing" per se, but it can make testing more difficult.

When surprising or unexpected behaviour is found, it should be classified as either a defect or simply a user without a complete understanding of the specification. With no specification, there can be no such classification. The best that can be done is to assess the behaviour and determine whether it is "wanted".

As an example, I have seen a system where one part expected a user to have access to some data, and forwarded them on to it. The system for retrieving the data had more stringent requirements, and threw a security exception. Without a clear, unambiguous specification, there's no way of telling which part of the system is in error, and hence, no immediate way to tell which part of the system should be corrected.

I would like to make it clear that I am not advocating that every system have a large, unambiguous specification. If the product is security or safety critical, I would argue that it is an absolute must, and many share this view. For most other systems, a specification is an additional burden that prevents a product from getting to market. If a failure of your system will sink your business or kill people, then a specification is wise. Otherwise, just listen to your users.

Defects

Given a specification, a defect is often defined simply as a deviation from that specification. Many defects are benign, and will not cause any issues in production. However, some subset of defects will lead to failures -- these are actual problems encountered by users: exception messages, lost data, incorrect data, data integrity violations and so on.

It is often seen to be most cost-effective to find and eliminate defects before deployment. Sometimes, especially in systems that do not have an unambiguous specification, this is extremely difficult to do, and in a sufficiently large system, this is often nearly impossible.

For a large enough system, it's likely that the system will interact with itself, causing emergent behaviour in the end product. These odd interactions are what make the product versatile, but also what make eliminating surprising behaviour nearly impossible, and it may even be undesired for certain products.

Static Analysis

Tools that can provide feedback on the system without running it are often invaluable. Safety critical systems are expected to go through a battery of these tools, and to have no warnings or errors.

I am using FindBugs on my OTP-Java project to try to eliminate any performance or security issues. I have found that it provides valuable feedback on my code, pointing out some potential issues.

There are also tools which will rate the cyclomatic complexity (CC) of any methods that I write. I believe that Cobertura will do this for me. This will be important, as a high CC is correlated with a high defect rate, and is also expected to make reading the code more difficult.

Testing

Randomised Testing

Fortunately, generating and verifying one-time passwords (OTPs) is a problem space with clear measures of success: for example, if I generate a valid OTP, I must be able to validate it. Similarly, if I generate an OTP and then modify it, the result should not be valid.

This lends itself to randomised testing, where random "secrets" can be produced, and used to generate OTPs. These can then be verified or modified at will.

Other properties can also be validated, such as that requesting a 6-digit OATH OTP actually does produce a 6-digit string, and that the output is composed entirely of digits.

For the OTP-Java project, I am using the Java implementation of QuickCheck, driven by JUnit.
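
As a rough illustration of the shape of these tests, here is a minimal sketch written as a plain JUnit test, with a loop standing in for the QuickCheck driver. The Hotp class and its generate/validate methods are hypothetical stand-ins, not the published OTP-Java API.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.security.SecureRandom;
import org.junit.Test;

public class OtpRoundTripTest {

    private final SecureRandom random = new SecureRandom();

    @Test
    public void generatedOtpsValidateAndTamperedOnesDoNot() {
        for (int i = 0; i < 1000; i++) {
            // Random 20-byte shared secret, as used for HOTP/TOTP keys.
            byte[] secret = new byte[20];
            random.nextBytes(secret);
            long counter = random.nextLong() & Long.MAX_VALUE;

            // Hypothetical stand-in for the library's API: a 6-digit HOTP.
            String otp = Hotp.generate(secret, counter, 6);
            assertTrue("freshly generated OTP must validate",
                    Hotp.validate(secret, counter, otp));

            // Changing a single digit must make the OTP invalid.
            String tampered = flipFirstDigit(otp);
            assertFalse("modified OTP must not validate",
                    Hotp.validate(secret, counter, tampered));
        }
    }

    private static String flipFirstDigit(String otp) {
        char digit = otp.charAt(0);
        char flipped = (char) ('0' + ((digit - '0' + 1) % 10));
        return flipped + otp.substring(1);
    }
}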

Unit Testing

In OTP-Java, I've augmented the randomised testing with some test vectors extracted from the relevant RFCs. These test vectors, along with the randomised tests, should provide further confidence that the code meets the specification.
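
For instance, RFC 4226 (Appendix D) publishes ten HOTP values for the ASCII secret "12345678901234567890" and counters 0 through 9. A vector test might look like the sketch below; the hotp helper is a straight transliteration of the RFC's reference algorithm so that the sketch is self-contained, whereas the real tests call into the library's API instead.

import static org.junit.Assert.assertEquals;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.junit.Test;

public class HotpRfc4226VectorsTest {

    // Shared secret and expected 6-digit values from RFC 4226, Appendix D.
    private static final byte[] SECRET =
            "12345678901234567890".getBytes(StandardCharsets.US_ASCII);

    private static final String[] EXPECTED = {
            "755224", "287082", "359152", "969429", "338314",
            "254676", "287922", "162583", "399871", "520489"
    };

    @Test
    public void matchesPublishedTestVectors() throws Exception {
        for (int counter = 0; counter < EXPECTED.length; counter++) {
            assertEquals(EXPECTED[counter], hotp(SECRET, counter));
        }
    }

    // Reference-style HOTP (HMAC-SHA1, 6 digits) used only to drive the vectors.
    private static String hotp(byte[] secret, long counter) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hmac = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());

        int offset = hmac[hmac.length - 1] & 0x0f;
        int binary = ((hmac[offset] & 0x7f) << 24)
                | ((hmac[offset + 1] & 0xff) << 16)
                | ((hmac[offset + 2] & 0xff) << 8)
                | (hmac[offset + 3] & 0xff);

        return String.format("%06d", binary % 1_000_000);
    }
}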

Usually, unit testing only involves testing a single component or class. However, with such a small library, and with such well-defined behaviour, it makes sense to test the behaviour of several parts of the system at the same time.

In my opinion, these tests are probably too big to be called unit tests, and too small to be called integration tests, so I've just lumped them together under "unit tests". Please don't quarrel over the naming. If there's an actual name for this style of testing, I'd like to know it.

Assertive Programming

I have tried to ensure that, as far as possible, the code's invariants are annotated using assertions.


That way, when an invariant is violated under testing, the programmer (me!) is notified as soon as possible. This should help with debugging, and will hopefully remove any doubt, when a test fails, as to whether it was a fluke (hardware, JVM failure, or other) or a genuine failure on my part.
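
By way of example, here is a sketch (not the library's actual code) of the dynamic-truncation step from RFC 4226 with its invariant written as a Java assert. The check costs nothing in production unless assertions are enabled with -ea, which they are under test.

final class DynamicTruncation {

    private DynamicTruncation() {}

    // Sketch of RFC 4226 dynamic truncation with the invariant made explicit.
    static int truncate(byte[] hmac) {
        int offset = hmac[hmac.length - 1] & 0x0f;

        // Invariant: the offset must select four whole bytes within the HMAC.
        assert offset >= 0 && offset + 4 <= hmac.length
                : "truncation offset out of range: " + offset;

        return ((hmac[offset] & 0x7f) << 24)
                | ((hmac[offset + 1] & 0xff) << 16)
                | ((hmac[offset + 2] & 0xff) << 8)
                | (hmac[offset + 3] & 0xff);
    }
}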

This has, so far, been of a lot of use in randomised testing: there have been test failures, and the assertions have shown exactly why.

It also helps users of the library. If my input validation is not good enough, and users exercise the library as part of their own testing, they will also, hopefully, be helped by the assertions, as they help explain the intent of the code.

Type System

While Java's type system does leave a lot to be desired, it is quite effective at communicating exactly what is required and may be returned from specific methods.

I have, unlike the reference implementation in the RFC (mmm, "String key", "String crypto", and so on), tried to use appropriate types in my code, requiring a SecureRandom instead of just byte[] or even Random, to convey the fact that this is a security-critical piece of code and that one shouldn't use "any old value", as has often happened with OpenSSL's API (see also: predictable IVs), leading to real vulnerabilities in real products.
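
As a sketch of what that looks like in practice (the class and method names here are illustrative, not the library's published API):

import java.security.SecureRandom;

public final class OtpSecrets {

    private OtpSecrets() {}

    // Requiring SecureRandom (rather than byte[] or java.util.Random) makes it
    // hard to accidentally feed in a predictable source of "randomness".
    public static byte[] newSharedSecret(SecureRandom random, int lengthBytes) {
        if (lengthBytes < 16) {
            throw new IllegalArgumentException("secret too short: " + lengthBytes);
        }
        byte[] secret = new byte[lengthBytes];
        random.nextBytes(secret);
        return secret;
    }
}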

Shielding the user from common mistakes by providing a "sane" or "obvious" API is as much my job as it is the final user's. The security of any product which relies on the library rests on the library's correct specification and implementation, as well as its correct use. Encouraging and supporting both is very important.

Code Coverage

Code coverage is often a yard-stick for how effective your testing is. Code coverage of 100% is rarely possible. For instance, if you use Mac.getInstance("HmacSHA1"), it's nearly impossible to trigger the NoSuchAlgorithmException.
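
A common way to handle that branch is to treat a missing mandatory algorithm as an environment failure; something along these lines (a sketch, not necessarily how OTP-Java does it):

import java.security.NoSuchAlgorithmException;
import javax.crypto.Mac;

final class Hmacs {

    private Hmacs() {}

    static Mac hmacSha1() {
        try {
            return Mac.getInstance("HmacSHA1");
        } catch (NoSuchAlgorithmException e) {
            // Every conforming JRE is required to provide HmacSHA1, so this
            // branch is effectively unreachable -- and uncoverable by tests.
            throw new AssertionError("HmacSHA1 missing from this JRE", e);
        }
    }
}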

However, many tools provide branch coverage as well as line coverage. Achieving high coverage can help your confidence, but when using complex decision cases (for example, if (a && b || !(c && (d || e)))), it's very hard to be sure that you've covered all the ways of "entering" a block.

Cyclomatic complexity (CC) should help here. As a rough guide, if you have a CC of 2, you should have at least 2 tests. Although this is still just a rule of thumb, it does help me feel more confident that, to a reasonable level, all eventualities are accounted for.
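
One mitigation is to split a compound condition into named predicates, so each method stays at a low CC and each sub-decision suggests its own tests. A sketch, with placeholder names and the same truth table as the example above:

final class BranchCoverageExample {

    private BranchCoverageExample() {}

    // The original compound decision: hard to be sure every way "into" the
    // block has been exercised.
    static boolean original(boolean a, boolean b, boolean c, boolean d, boolean e) {
        return a && b || !(c && (d || e));
    }

    // The same decision split into named pieces: each one is trivial to read,
    // has a low cyclomatic complexity, and suggests its own tests.
    static boolean refactored(boolean a, boolean b, boolean c, boolean d, boolean e) {
        boolean firstCondition = a && b;
        boolean secondCondition = !(c && (d || e));
        return firstCondition || secondCondition;
    }
}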

Conclusion

Many products don't have a specification, which can make reducing surprising behaviours difficult. Similarly, not all defects lead to failures.

However, even without a specification, some of the techniques listed in this post can be applied to try to lower defect rates. I've personally found that these increase my confidence when developing, but maybe that just increases my appetite for risk.

I am by no means saying that all of the above tools and techniques must be used. Similarly, I will also not say that the above techniques will ensure that your code is defect free. All you can do is try to ensure that your defect rate is lowered. For me, I feel that the techniques and tools in this post help me to achieve that goal.

Friday 5 September 2014

The Future of TLS

There exist many implementations of TLS. Unfortunately, alongside the proliferation of implementations, there is a proliferation of serious defects. These defects aren't limited to the design of the protocol; they are endemic: the cipher choices, the ways in which the primitives are combined, everything.

I want to lay out a roadmap to a situation where we can have an implementation of TLS that we can be confident in. Given the amount of commerce that is currently undertaken using TLS, and the fact that other, life-and-death (for some) projects pull in TLS libraries (don't roll your own!) and entrust them with that role, I'd say that this is a noble goal.

Let's imagine what would make for a "strong" TLS 1.3 & appropriate implementation.

We start with primitive cipher and MAC selection. We must ensure that well-studied ciphers are chosen: no export ciphers, no ciphers from The Past (3DES), and no ciphers with major known weaknesses (RC4). Key sizes must be at least 128 bits; if we're looking to defend against quantum computers, at least 256 bits. A similar line of reasoning should be applied to the asymmetric and authentication primitives.

Further, the specification should aim to be as small, simple, self-contained and obvious as possible to facilitate review. It should also shy away from recommendations that are known to be difficult for programmers to use without a high probability of mistakes, for instance, CBC mode.

Basic protocol selections should ensure that a known-good combination of primitives is used. For example, encrypt-then-MAC should be chosen: we head off any chosen-ciphertext attacks by rejecting forged or modified ciphertexts before even attempting to decrypt them. This should be a guiding principle: avoid doing anything with data that is not authenticated.
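
To make the principle concrete, here is a minimal Java sketch of verify-before-decrypt (encrypt-then-MAC with HMAC-SHA256 over the IV and ciphertext). It illustrates the ordering only; it is not how any particular TLS library structures its record layer, and the key and parameter handling is simplified.

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

final class EncryptThenMac {

    private EncryptThenMac() {}

    // Verify the MAC over the IV and ciphertext *before* touching the cipher.
    static byte[] decrypt(SecretKey encKey, SecretKey macKey,
                          byte[] iv, byte[] ciphertext, byte[] tag) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(iv);
        byte[] expected = mac.doFinal(ciphertext);

        // Constant-time comparison; forged or modified data is rejected here,
        // before any decryption is attempted.
        if (!MessageDigest.isEqual(expected, tag)) {
            throw new SecurityException("authentication failed; refusing to decrypt");
        }

        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, encKey, new IvParameterSpec(iv));
        return cipher.doFinal(ciphertext);
    }
}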

The mandatory cipher suite in TLS 1.2 is TLS_RSA_WITH_AES_128_CBC_SHA. While this can be a strong suite, it's not guaranteed: it's quite easy to balls up AES in CBC mode, to introduce padding oracles, and the like. I would argue that the default should be based around AES-GCM, as this provides authenticated encryption without so much as lifting a finger. Even the choice of MAC in the current default is looking wobbly. It's by no means gone, but people are migrating away from HMAC-SHA1 to better MAC constructions. I would recommend exploiting the parallelism of current-generation hardware by allowing a PMAC.
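
Deployments don't have to wait for a new specification to prefer AEAD suites. As an illustration (in Java, since that's the language used elsewhere on this blog), a JSSE client can restrict itself to forward-secret AES-GCM suites today; the class name here is just for the sketch.

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

final class GcmOnlyClient {

    private GcmOnlyClient() {}

    static SSLSocket connect(String host, int port) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default key/trust managers

        SSLSocketFactory factory = ctx.getSocketFactory();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);

        // Offer only AEAD (AES-GCM) suites with forward-secret key exchange.
        SSLParameters params = socket.getSSLParameters();
        params.setProtocols(new String[] { "TLSv1.2" });
        params.setCipherSuites(new String[] {
                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
                "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
        });
        socket.setSSLParameters(params);
        return socket;
    }
}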

There should also be allowances, and maybe even a preference towards well-studied ciphers that are designed to avoid side-channels, such as Salsa20 and ChaCha20.

There should be no equivalent of the current "null" ciphers. What a terrible idea.

I like that in the current RFC, the maximum datagram size is specified. That means that the memory usage per-connection can be bounded, and the potential impact of any denial of service attack better understood before deployment.

For the implementation, I am not fussy. Upgrading an existing implementation is completely fine by me. However, it should be formally verified. SPARK Ada may be a good choice here, but I have nothing against ACSL and C. The latter "could" be applied to many existing projects, and so may be more applicable. There are also more C programmers than there are Ada programmers.

Personally, I think SPARK Ada would be a fantastic choice, but the license scares me. Unfortunately, ACSL has its own host of problems, primarily that the proof annotations for ACSL are not "as good" as for SPARK, due to the much more relaxed language semantics of C.

Any implementation which is formally verified should be kept as short and readable as reasonably possible, to facilitate formal review. The task of any reviewers would be to determine if any side-channels existed, and if so, what remediation actions could be taken. Further, the reviewers should generally be checking that the right proofs have been proven. That is, the specification of the program is in-line with the "new" TLS specification.

Responsibilities should be shirked where it makes good sense. For instance, randomness shouldn't be "hoped for" by simply using the PID or the server boot time. Use the OS-provided randomness, and if performance is paramount, use a CSPRNG periodically reseeded from true randomness (on Linux, this is supposed to be provided by /dev/random).
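
In Java terms (purely as an illustration, since the hypothetical implementation above would more likely be Ada or C), this amounts to reaching for the platform CSPRNG and nothing cleverer; the class and method names here are invented for the sketch.

import java.security.SecureRandom;
import java.util.Random;

final class Randomness {

    private Randomness() {}

    // Backed by the platform CSPRNG (on Linux, ultimately the kernel's
    // entropy pool), not by anything we merely "hope" is random.
    private static final SecureRandom RANDOM = new SecureRandom();

    static byte[] sessionId(int lengthBytes) {
        byte[] id = new byte[lengthBytes];
        RANDOM.nextBytes(id);
        return id;
    }

    // What not to do: a seed built from start-up time (or the PID) is easy
    // for an attacker to guess, so every value derived from it is guessable.
    static Random dontDoThis() {
        return new Random(System.currentTimeMillis());
    }
}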

The implementation should, as far as possible, aim to meet the specification, as well as providing hard bounds for the time and memory usage of each operation. This should help with avoiding denial of service attacks.

The deployment would be toughest. One could suppose that a few extra cipher suites and a TLS 1.2 mode of operation could be added in to ease deployment, but this would seriously burden any reviewers and implementors. The existing libraries could implement the new TLS specification without too much hassle, or even go through their existing code-bases and try to move to a formally verified model, then add in the new version. Once a sufficient number of servers started accepting the new TLS, a movement in earnest to a formally verified implementation could begin (unless this route was taken from the start by an existing library).

Any suitable implementation that wants major uptake would ensure that it has a suitable license and gets itself out there. Current projects could advertise their use of formal methods to ensure the safety of their library over others to try to win market share.

We had the opportunity to break backwards compatibility once and fix many of these issues. We did not, and we've been severely bitten for it. We should really go for it now, before the next Heartbleed. Then we can go back to the fun stuff, rather than forever looking over our shoulders for the next OpenSSL CVE.

Tuesday 2 September 2014

Lovelocks: The Gap Between Intimacy and Security

Let me start by asking a question. One that has been thoroughly explored by others, but may not have occurred to you:

Why do you lock your doors when you leave your house?

We know that almost all locks can be defeated, either surreptitiously (for example, by picking or bumping the lock) or destructively (for example, by snapping the lock). What does this say about your choice to lock the door?

Many agree with me when I say that a lock on a private residence is a social contract. For those who are painfully aware of how weak locks are, they represent a firm but polite "Keep Out" notice. They are there to mark your private space.

Now, assume that you and your partner are in a long-distance relationship with one another, and that it's the 1950s. Love letters may have been common in that era, and you would likely have kept them in your house. I know that I did just this with previous partners: I kept their love letters in my house.

Imagine that someone breaks into your house and photographs or photocopies those letters, and posts their contents in an international newspaper. To learn of the intrusion, and that you had been attacked on such a fundamental level, would be devastating.

To my mind, that is a reasonable analogy for what has happened to those who have had their intimate photos taken and posted to 4chan. An attacker defeated clear security mechanisms to gain access to information that was obviously, deeply, private and personal to the victims.

These victims were not fools; they did the equivalent of locking their doors. Maybe they didn't buy bump-, snap-, drill- and pick-resistant locks for over £100 per entrance, but they still marked the door as an entrance to a private space, and marked that space as theirs.

It is right that the police should be treating this breach as an extremely serious assault on these women's (and they are all women at this time) personal lives, and therefore pursuing the perpetrators to the fullest extent allowable under law.

Claiming that the victims should have practiced better security in this case is blaming the victim. If I go to a bank and entrust my will to one of their safety deposit boxes (now a dying service), and it is stolen or altered in my absence, am I at fault? No: the bank should be investing in better locks -- that's what they're paid to do; and beyond that, people shouldn't be robbing banks. Bank robbers are not my fault, and iCloud hackers are not the fault of these women.

Further, it is just plain daft to claim that these women would be able to protect themselves in this world and maintain the intimate relationships that they wanted to. Security is very hard; we know this because barely a week passes in which the members of some service are not urged to change their passwords due to a security breach. And these breaches affect entities of all sizes, from major corporations to individuals. Even those agencies with budgets the size of a small country's GDP, whose remit is to protect the lives of many people, have serious breaches, as evidenced by the ongoing Snowden leaks.

Expecting anyone to be cognizant of the security implications and necessary precautions of sharing an intimate moment with a partner is to ask them to question their trust in that person and the openness of that moment. Security is the quintessential closing off of oneself from the world, of limiting trust and exerting control. Intimacy, even at long distance, is about opening oneself up to another person, sharing yourself with them, extending to them a deep trust, and allowing them a spectacular amount of control over yourself, physically and emotionally. To place these two worlds together is cognitive dissonance made manifest.

And yet, we use these flawed security devices to display and proclaim our love -- and hence intimacy -- with one another. Even as we put love locks on bridges, we know that they can be removed. We acknowledge their physical limitations even as we try to communicate our emotions with each other.


We are accepting of the persistent and somewhat insurmountable failings of physical security, and we do not blame the victim when their physical security is breached. Physical and digital security are, in many senses, good analogues of one another, yet we apply different standards to them. We need to start realising that digital security is also imperfect, and further, that it is not the victims' fault when that security fails.