Sunday, April 17, 2016

Inherent leakage-resiliency of LWE


Why we care about post-quantum cryptography should be clear by now (if it’s not, you’re in the right place anyway: just take a step back and read this splendidly packaged blog post). A mathematical notion that has had much success in achieving it is the Learning With Errors (LWE) problem (guess what, we've talked about that here too). Thanks to a body of mathematical evidence, we are quite confident about its hardness, and many cryptographic primitives whose security is based on it have arisen. That said, let’s take a step further and imagine a future (not that far off) in which all those primitives are standardised and deployed at many levels, including hardware security. Our experience with classical algorithms teaches us that we shouldn't be too confident about the security of an implementation given the security of the scheme on paper, and post-quantum cryptography is no exception. In fact, side-channel attacks are known to be a great threat to cryptographic implementations. The question this blog post wishes to partially answer is: how can we secure our (future) post-quantum implementations?
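Since LWE is the protagonist here, a toy sketch may help fix ideas. The following Python snippet generates LWE samples (a, b) with b = ⟨a, s⟩ + e mod q; the parameters are chosen for readability and are far too small for any real security:

```python
import random

# Toy LWE sample generator (illustrative only; parameters far too small
# for real security). An LWE sample is (a, b) with b = <a, s> + e mod q.
q = 97  # modulus
n = 8   # dimension of the secret

def sample_error():
    # small "noise", a crude stand-in for a narrow discrete Gaussian
    return random.randint(-2, 2)

def lwe_samples(s, m):
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        b = (sum(ai * si for ai, si in zip(a, s)) + sample_error()) % q
        samples.append((a, b))
    return samples

secret = [random.randrange(q) for _ in range(n)]
samples = lwe_samples(secret, 5)
# The LWE problem: recover `secret` (or distinguish samples from uniform)
# given many such (a, b) pairs. Without the error e this would be easy
# linear algebra; the noise is what makes the problem hard.
```

Without the error term the system is solvable by Gaussian elimination, which is why the noise distribution is the heart of the assumption.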

Playing cops and robbers is a way, of course: applying common techniques (e.g. masking, randomisation, …) against known side-channel attacks is possible just by fixing the design of specific schemes. However, this approach has at least a couple of drawbacks:

  1. a protection is not guaranteed to thwart future attacks (sometimes new attacks are specifically designed to circumvent a countermeasure). More generally, a countermeasure doesn't protect against anything outside what it was designed to protect against;
  2. usual countermeasures introduce overhead (in terms of whatever metric you prefer, e.g. area, time, ...). This results in the protected scheme being less efficient than its unprotected version.

Leakage-resilient cryptography is a possible alternative: a cryptographic scheme is, by design, provably secure even in the presence of some anticipated amount of leakage. Usually such proofs rely on a bound on the amount of leakage the scheme can tolerate. The main idea sounds like the following statement.

We build a cryptographic scheme which retains security even if a bounded amount of arbitrary information is leaked.

Full stop. There aren't many other assumptions: this fact gives leakage-resilient cryptography enough generality to capture many, if not all, side-channel attacks. In essence, we are assuming that the scheme remains secure when leakage occurs, no matter how that leakage is exploited by an attacker. This seems to address the first drawback of ad-hoc countermeasures, provided everything is implemented in the proper manner. Is there anything we can do about the second drawback?

If we keep thinking of securing a scheme, then answering that question is really difficult. Indeed a countermeasure, even the most general and provable one, introduces changes in the original design which necessarily result in an overhead, for the simple reason that it takes into account a more powerful adversary, namely one who is capable of measuring and exploiting leakage too. However, think of the case in which none of the anticipated leakage actually occurs. Then we end up with a less efficient implementation, the protected one, which doesn't provide any additional security because, without leakage, the original scheme was secure anyway. So, stop thinking of securing a scheme and start thinking of securing an assumption.

In their paper, Goldwasser, Kalai, Peikert and Vaikuntanathan show that LWE has the nice feature of being hard even in the presence of leakage. Does this sound like what we've just said? Essentially yes, but with one great difference: they are not building a leakage-resilient scheme, hence introducing some sort of overhead, but proving the leakage-resilience of an assumption. This makes a big difference, because the former implies that efficiency degrades as leakage-resilience grows, while the latter implies that security, not efficiency, degrades. In particular, their proof establishes a relation between the security level and the amount of leakage, in such a way that we don't need to anticipate a maximum tolerable amount of it. The straightforward consequence is that a scheme based on LWE is secure independently of whether leakage occurs or not, with the advantage that in the latter case we have not introduced any protection-related overhead into the design. As a result, they propose a symmetric-key encryption scheme that is secure even in the presence of leakage (with some caveats): the novelty lies in the fact that you won't find any parameter describing an amount of leakage (usually called $\lambda$ in the literature on leakage-resilient cryptography).

We omit any technicality here, as this is meant to be a general and accessible read. For further details on how the authors achieved this result, we refer to their paper. Instead, if you prefer to get a mathematical flavour without going into too many details, you may like this post from the Bristol cryptography group's blog. Stay tuned for more insights on post-quantum cryptography!

P.S.: the word "leakage" appears 19 times in this text. If you're tired of reading it, watch this.

Marco (Thanks to Simon and Matthias for some corrections!)

Friday, April 15, 2016

Mobile messenger security...could it be any worse?

Recently, a prominent court case has led a lot more people to notice that BBM (BlackBerry Messenger) apparently uses a symmetric cipher (3DES) with one key for everybody.
This seems to be old news; it is mentioned, for example, in this forum post from 2014, and indeed the investigation for that court case ran between 2010 and 2012. But it has now received a bit more attention.

Now I hope you agree that a globally shared symmetric key basically provides no secrecy at all. Indeed, even an article from 2010 states:
BlackBerry Messenger and PIN to PIN messages are NOT encrypted. They are scrambled using a global cryptographic key which EVERY BlackBerry in the world uses.
(emphasis in the original)
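To see why a global key gives no secrecy, consider this toy sketch (a hash-based XOR keystream as a stand-in for 3DES; not real crypto, and not BBM's actual construction): any party holding the shared key can decrypt every other party's traffic.

```python
import hashlib
from itertools import count

# Toy illustration of the "one global key" problem. Every device ships
# with the same key, so any device -- or anyone who extracts the key
# once -- can read all traffic. The cipher here is a hash-based XOR
# keystream, a deliberately crude stand-in for 3DES.
GLOBAL_KEY = b"same key baked into every handset"

def keystream(key, n):
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def scramble(key, msg):
    # XOR stream cipher: encryption and decryption are the same operation
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

ct = scramble(GLOBAL_KEY, b"meet at noon")  # Alice "encrypts"
eavesdropped = scramble(GLOBAL_KEY, ct)     # anyone with the key reads it
```

"Scrambled", as the quoted article puts it, is exactly the right word: the key adds no secrecy against anyone who owns a device.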

Drawing comparisons is difficult. E-mail is insecure, but often at least the connections are encrypted. SMS is hopefully encrypted with A5/X in GSM (including all of its security problems). Still, both are usually visible to the network operator or service provider.

So how can we do better?
For Android, iOS and now as a Chrome browser extension, there is Signal, which is also used by many of the ECRYPT fellows. The Signal protocol, formerly known as Axolotl, has also recently been adopted for the new encryption in WhatsApp. Signal is still to be preferred, however, since it is open source, has reproducible builds, and there is no Facebook behind it to correlate the metadata to. While neither Signal nor WhatsApp hides metadata, the encryption seems to be solid.

So let's say you get rid of your BlackBerry with BBM and get an Android phone with Signal instead. How's your security?
Unfortunately, Android has a systematic problem with security updates.
Consider this devastating graph. Even ignoring details about how the data was collected (which would probably bias the picture towards being too optimistic), it shows that almost all Android devices are susceptible to a known vulnerability. (For obvious reasons, unknown vulnerabilities are not included.)

There are some interesting projects trying to improve the situation, for example CopperheadOS or the Guardian Project (who have teamed up with f-droid in an attempt to provide a true open-source ecosystem). It does however look like a really good solution is still some time away.

But for end-to-end encryption Signal looks like an excellent choice.

Friday, April 8, 2016

Post-Quantum Cryptography: Why? What is it?

Why Post-Quantum Cryptography?

When I wrote my first blog entry here ("Ola from Gustavo"), I gave a short description of what will happen when someone presents the first quantum computer: most of the public-key cryptography in use today (ECC, RSA) will be dead; see [1] for more details. One common statement about quantum computers and cryptography is:
"Well, we don't have a quantum computer now and the expectation for one is around 15 years away, so why should I be worried now?"
I will give one good reason. Imagine that you use some encryption (RSA, ECC or whatever) for your messages, and someone starts storing those messages. Right now, this agency/person/computer is not able to read them. However, in 15 years or so, when a real quantum computer appears, it will be possible to break the encryption and read everything about your past. For this reason, we need to be prepared against this threat now; we shouldn't wait for the quantum computer before starting to protect ourselves. For this protection, we have good guardians of privacy who joined their powers and started the research field of Post-Quantum Cryptography (PQC).

In the figure (see above), we can see that PQC divides into four big areas: hash-based, code-based, multivariate-quadratic-equations-based and lattice-based cryptography. There are other areas, but these four have become the most important in the last 10 years. In the following, I will give a very brief explanation and examples of schemes from each area of PQC. There is a website about PQCrypto with initial recommendations and a list of publications in the area; to access it, click here.

HASH-BASED Cryptography

In hash-based cryptography, we find important signature schemes such as the Lamport-Diffie OTS, Merkle's hash tree, XMSS, SPHINCS and a lot of other relevant work. I decided to talk about one of the newest works in the area: SPHINCS.
SPHINCS was presented at EUROCRYPT 2015 under the title "SPHINCS: practical stateless hash-based signatures". To understand the idea behind this work, some previous knowledge about hash-based signatures is necessary, because the authors present two new ideas: the replacement of the leaf OTS (one-time signature) with a hash-based FTS (few-time signature), and Goldreich's construction as a hyper-tree construction with h layers of trees of height 1. For more details about the construction and implementation of SPHINCS, see here.
More material on hash-based cryptography can be found at pqcrypto hash-based and at this webpage of Andreas Hülsing.
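To give a flavour of the basic building block, here is a toy Lamport one-time signature in Python (illustrative only: a key pair must sign exactly one message, and no realistic parameter choices are implied). The public key commits to hash images of secret values; signing reveals one preimage per message bit.

```python
import hashlib, os

def H(x):
    return hashlib.sha256(x).digest()

N = 256  # we sign the 256-bit SHA-256 digest of the message

def keygen():
    # two random secrets per bit position; the public key is their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(N)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]

def sign(sk, msg):
    # reveal the secret corresponding to each digest bit (one-time only!)
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
```

Schemes like Merkle trees, XMSS and SPHINCS are, at heart, machinery for turning many such one-time (or few-time) keys into one long-lived public key.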

CODE-BASED Cryptography

The classic name in code-based cryptography is McEliece's hidden-Goppa-code public-key encryption system from 1978. However, there is newer work in the area, such as that of Misoczki, Tillich, Sendrier, and Barreto using quasi-cyclic moderate-density parity-check (QC-MDPC) codes in the McEliece encryption system; its principal contribution to the area is much smaller key sizes.
Other relevant work, this time on fast implementations, came from Bernstein, Chou, and Schwabe under the name "McBits: Fast constant-time code-based cryptography". Its main contribution is a constant-time implementation of code-based cryptography, which rules out timing attacks.
The literature on code-based cryptography is very extensive and can be explored via this link.
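The constant-time idea behind implementations like McBits can be illustrated with a simple comparison function (a generic sketch, not McBits code): a naive comparison returns as soon as two bytes differ, so its running time leaks where the first mismatch is, while a constant-time version always scans the whole input.

```python
import hmac

def leaky_equal(a, b):
    # exits early on the first mismatch: running time depends on the data
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a, b):
    # always processes every byte; timing is independent of the contents
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# Python's standard library offers the same guarantee:
assert constant_time_equal(b"tag", b"tag") == hmac.compare_digest(b"tag", b"tag")
```

McBits applies this discipline (no secret-dependent branches or memory accesses) throughout the decoding algorithm, which is far more involved than a byte comparison, but the principle is the same.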

MQ-BASED Cryptography

Multivariate cryptography schemes are based on the difficulty of solving systems of non-linear equations over finite fields. Finding a solution to such systems is, in general, an NP-hard problem. One of many interesting examples is Patarin's Hidden Field Equations, generalising a proposal by Matsumoto and Imai (1988). The original work by Patarin can be found at: Hidden Field Equations (HFE) and Isomorphisms of Polynomials (IP): two new Families of Asymmetric Algorithms.
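To make the underlying problem concrete, here is a toy MQ system over GF(2) in Python (the system and its size are made up for illustration; real schemes use dozens to hundreds of variables, where the exhaustive search below is hopeless):

```python
from itertools import product

n = 4  # number of variables; tiny on purpose

# Each polynomial is a list of monomials over GF(2):
# () is the constant 1, (i,) is x_i, (i, j) is x_i * x_j.
system = [
    [(0, 1), (2,), (1, 3), ()],
    [(0,), (1, 2), (3,)],
    [(0, 3), (2, 3), (1,)],
    [(0, 2), (1,), ()],
]

def evaluate(x):
    out = []
    for poly in system:
        v = 0
        for mono in poly:
            term = 1
            for i in mono:
                term &= x[i]  # multiplication in GF(2) is AND
            v ^= term         # addition in GF(2) is XOR
        out.append(v)
    return out

target = evaluate((1, 0, 1, 1))
# "Breaking" the system: find all x with evaluate(x) == target.
# Exhaustive search over 2^n assignments only works because n is tiny.
solutions = [x for x in product((0, 1), repeat=n) if evaluate(x) == target]
```

In an MQ scheme like HFE, such a system is the public key; the trapdoor is a hidden structure that lets the legitimate party invert it without the exponential search.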

If we want to see what has happened recently in the MQ area, the slides from the PQCrypto winter school are available online at PQCrypto. The slides by Jintai Ding give a very nice overview of the state of the art in Multivariate Public Key Cryptography (MPKC): an introduction to MQ with a small example showing how it works, followed by the principal cryptosystems in MPKC.
The slides are available at this link, and more material about MQ-based cryptography can be found at the PQCrypto project.

LATTICE-BASED Cryptography

The lattice area has received a lot of attention in the past years: new cryptosystems, new attacks, and new implementations. Everything is based on hard problems in lattices, such as the shortest vector problem (SVP), the closest vector problem (CVP), their variants on ideal lattices, and others. There is a lattice challenge asking to find a short vector in a lattice (and in an ideal lattice) of a given dimension; more details can be found at Lattice Challenge.
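A toy SVP instance gives a feel for the problem (the basis below is made up for illustration): given a skewed basis, find the shortest nonzero integer combination of the basis vectors. In two dimensions brute force works; in the hundreds of dimensions used by real schemes, it is utterly infeasible, and that gap is what the hardness assumptions rest on.

```python
from itertools import product

# A deliberately skewed ("bad") basis for a 2-D lattice. The lattice it
# generates contains a much shorter vector than either basis vector.
basis = [(201, 37), (1648, 297)]

def shortest_vector(basis, bound=50):
    # enumerate all small integer combinations c1*b1 + c2*b2
    best, best_norm = None, float("inf")
    for c1, c2 in product(range(-bound, bound + 1), repeat=2):
        if (c1, c2) == (0, 0):
            continue  # SVP asks for the shortest NONZERO vector
        v = (c1 * basis[0][0] + c2 * basis[1][0],
             c1 * basis[0][1] + c2 * basis[1][1])
        norm = v[0] ** 2 + v[1] ** 2  # squared Euclidean length
        if norm < best_norm:
            best, best_norm = v, norm
    return best

v = shortest_vector(basis)
# v turns out to be +/-(1, 32): far shorter than either basis vector.
```

Lattice reduction algorithms (Gauss/Lagrange in 2-D, LLL and BKZ in general) find such short vectors far more cleverly, but their quality degrades rapidly with the dimension.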

In the area of public-key cryptography, there are several schemes, and most are based on the SVP and on ideal lattices. That is the case for the schemes described by Peikert and by Stehlé et al.

Post-quantum Cryptography and ECRYPT-NET

One of the interests of the ECRYPT-NET project is PQC, and we have four fellows working in this area. In lattice-based cryptography, we have Marco Martinoli, Michele Minelli and Henry de Valence. In the quantum algorithms area, I have started working myself.

I started to read about all of the areas described above for two reasons. Firstly, I needed to understand the basics of each area in order to decide where I want to work. Secondly, after all the reading, I decided to work on quantum algorithms and on attacking each area using them. In the following months, I will start writing about quantum algorithms and their impact on cryptography.

[1] Bernstein, Daniel J., Johannes Buchmann, and Erik Dahmen, eds. Post-quantum cryptography. Springer Science & Business Media, 2009.