ECRYPT-EU
A blog for the H2020 ECRYPT projects ECRYPT.NET and ECRYPT.CSA
Monday, October 17, 2016
Supersingular isogeny Diffie-Hellman
Most post-quantum cryptography is based on lattices, codes, multivariate quadratics or hashes. See Gustavo's post for more.
A fifth category seems to be slowly establishing itself: isogeny-based crypto.
These schemes are based on the difficulty of finding an isogeny between two supersingular elliptic curves. Isogenies are specific rational maps between elliptic curves that are also group homomorphisms with respect to the group of points on the curve.
The original proposal [Stolbunov et al., 2006] was to use the problem of finding isogenies between ordinary elliptic curves, but this system was shown to be breakable with a quantum computer [Childs et al., 2010]. It was subsequently proposed to use supersingular elliptic curves instead [De Feo et al., 2011].
SIDH currently has significantly worse performance than lattice-based key-exchange schemes but offers much smaller key sizes. Compared to Frodo it is 15 times slower, but the key size is only one twentieth. Compared to NewHope it is over 100 times slower at less than one third of the key size. This can be relevant in scenarios like IoT, where cryptographic computations require orders of magnitude less energy than actually transmitting data via the radio.
Although finding isogenies between curves is difficult, Vélu's formulas allow calculating an isogeny with a given finite subgroup as its kernel. All such isogenies are identical up to isomorphism.
Now, starting from a public curve \(E\) that is a system parameter, we have the two parties, Alice and Bob, generate isogenies with kernels \(\langle r_a \rangle, \langle r_b\rangle\) respectively. For now, let \(r_a, r_b\) be arbitrary generators of subgroups. This gives us two isogenies
$$ \phi_a: E \rightarrow E_a\\
\phi_b: E \rightarrow E_b.$$
Now we would like to exchange $E_a, E_b$ between the two parties and somehow derive a common $E_{ab}$ using the kernels we used. Unfortunately, $r_a$ does not even lie on $E_b$, so we have a problem.
The solution proposed by De Feo et al. is to use 4 more points $P_a, P_b, Q_a, Q_b$ on $E$ as public parameters, two for each party. This allows constructing
$$r_a = m_aP_a + n_aQ_a\\
r_b = m_bP_b + n_bQ_b$$ using random integers $m_a, n_a, m_b, n_b$ appropriate for the order.
Now, after calculating the isogenies $\phi_a, \phi_b$ the parties not only exchange the curves $E_a, E_b$ but also $\phi_a(P_b), \phi_a(Q_b)$ and $\phi_b(P_a), \phi_b(Q_a)$.
Looking at Alice's side as an example, she can now calculate
$$m_a\phi_b(P_a)+n_a\phi_b(Q_a) = \phi_b(m_aP_a + n_aQ_a) = \phi_b(r_a)$$ and Bob can perform the analogous computation. Constructing another isogeny from $\langle \phi_b(r_a) \rangle$ and $\langle \phi_a(r_b) \rangle$ respectively gives Alice and Bob two curves $E_{ba}, E_{ab}$ which are isomorphic, so their $j$-invariant can be used as a common secret.
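The key fact used here, that an isogeny is a group homomorphism, can be checked on a toy example. The sketch below uses plain short-Weierstrass arithmetic over a tiny made-up prime field (nothing to do with the supersingular curves of actual SIDH) and the trivial isogeny $\phi = [7]$, i.e. scalar multiplication by 7:

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p (made-up parameters,
# NOT an SIDH curve), used only to check the homomorphism property of isogenies.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = O (also covers doubling a 2-torsion point)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# brute-force some points on the toy curve
pts = [(x, y) for x in range(p) for y in range(p)
       if (y * y - x ** 3 - a * x - b) % p == 0]
P, Q = pts[0], pts[5]

phi = lambda R: mul(7, R)   # [7] is a (trivial) isogeny E -> E
m, n = 11, 23
assert phi(add(mul(m, P), mul(n, Q))) == add(mul(m, phi(P)), mul(n, phi(Q)))
```

Real SIDH of course computes $\phi$ from its kernel via Vélu's formulas, but the linearity $\phi_b(m_aP_a + n_aQ_a) = m_a\phi_b(P_a) + n_a\phi_b(Q_a)$ exploited above is exactly the identity Alice uses.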
I will leave you with this wonderfully formula-laden image from the De Feo et al. paper showing the protocol.
Monday, October 10, 2016
Quantum computation, algorithms and some walks... pt. 1
Well, computers aren't made with cats inside. Computers use bits and registers to work. So, how is it possible to compute with a quantum computer? How is it possible to represent bits? How is it possible to take advantage of the superposition?
Quantum Notation
First, let's learn about the Dirac notation, which is commonly used in quantum physics and in most of the literature about quantum computers. Since Blogger doesn't allow me to create mathematical equations, I will get some help from the physciense blog and pick some images from there. The Dirac notation can look a little different from source to source; if you want to read more about it, I selected some nice lecture notes in the following links: Lecture 1 and Lecture 2.
So, the bra-ket notation is just vectors, and we are going to use it to represent our qubit (yes, that is the name we give to the bit of a quantum computer). In classical computers we represent a bit with 0 and 1, but in quantum computers it is a little bit different. We can represent the states as follows:
As the image shows, a qubit is 0, 1 or a superposition of 0 and 1. However, if we measure, i.e. observe, the qubit, we lose the superposition. In other words, our state collapses and we cannot take advantage of the superposition anymore. In the same way that classical computers have gates, the quantum computer also has gates. One very famous gate is the Hadamard gate, which has the property of putting a qubit into a superposition state. We can see the action of this gate in the following image:
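To complement the image, the action of the Hadamard gate is small enough to simulate directly. In this toy sketch (plain Python, no quantum library) a qubit is a 2-dimensional vector of amplitudes and the gate is a matrix:

```python
import math

# The Hadamard gate as a 2x2 matrix acting on amplitude vectors.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    # plain matrix-vector product
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

ket0 = [1.0, 0.0]        # |0>
plus = apply(H, ket0)    # H|0> = (|0> + |1>)/sqrt(2): the superposition state
back = apply(H, plus)    # H is its own inverse, so we recover |0>
```

Applying H twice brings the qubit back to |0>, which is the self-inverse property we will rely on in the algorithm below.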
Quantum Algorithms
Now we know what a qubit is and how we can operate on it. We can move to the next step and create some algorithms to solve problems. The most common and very well-known example comes from Deutsch and Jozsa. It is known as the Deutsch-Jozsa problem and consists of:
 Input: f: {0,1}^n to {0,1}, either constant or balanced
 Output: 0 iff the function f is constant
 Constraints: f is a black box
If we solve this problem with a quantum computer, we are going to make exactly 1 query. The function f will be implemented as a black box in the quantum computer and it will be:
After this, we can see that we put our qubit into a superposition state. Now we go to our function and call our black box. The result can be seen as:
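Putting the pieces together, the whole Deutsch-Jozsa circuit can be simulated classically (the function and variable names below are mine, a toy sketch only). The phase oracle multiplies the amplitude of each basis state $|x\rangle$ by $(-1)^{f(x)}$, and after the final Hadamard layer the amplitude of $|0\ldots0\rangle$ is $\pm 1$ for a constant $f$ and $0$ for a balanced one, so a single measurement distinguishes the two cases with one oracle call:

```python
import math

def deutsch_jozsa(f, n):
    """Classical simulation of the Deutsch-Jozsa circuit on n qubits."""
    N = 2 ** n
    # First Hadamard layer on |0...0>: uniform superposition over all x
    amp = [1 / math.sqrt(N)] * N
    # Phase oracle: |x> -> (-1)^f(x) |x>  (the one and only query to f)
    amp = [(-1) ** f(x) * a for x, a in enumerate(amp)]
    # After the second Hadamard layer, the amplitude of |0...0> is
    # sum(amp) / sqrt(N): +-1 if f is constant, 0 if f is balanced.
    a0 = sum(amp) / math.sqrt(N)
    return "constant" if abs(a0) > 0.5 else "balanced"
```

Classically one would need up to $2^{n-1}+1$ queries in the worst case; the simulation above queries $f$ on all inputs only because it simulates the superposition explicitly, while the quantum circuit makes a single query.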
Tuesday, September 6, 2016
Requirements and security goals for the Cloud
But first ... I'd like to mention that the ECRYPT-NET Summer School on cryptography for the cloud in Leuven is coming up soon.
This blog post can be seen as an intro to cryptology in the cloud, so there are some topics we will surely hear about in depth at the event. I'm looking forward to a good discourse during the scheduled talks and to the discussions afterwards.
Part I: Requirements and security goals for the Cloud
The cloud here is loosely defined as computing and storage resources outsourced to servers located off-site that are accessible on demand via the Internet.
Overview of cloud computing services to be secured (Image by Wikimedia user Sam Johnston) 
Requirements and the corresponding cryptologic countermeasures are defined for various use-cases (such as those depicted in the image) and briefly explained. It is important to realize that this needs to be done without deliberately weakening (or "backdooring") solutions, as is sometimes suggested. To ensure democratic use of the developed technologies it is vital to see that the difference between a "front door" and a "back door" is merely one's viewpoint. Intentionally implementing two entrances makes an attacker happy, not the legitimate user.
Transparent services, meaning the use of the cloud as if it were on-site, should ideally be built upon a backend with verifiable, clear, possibly standardized cryptologic concepts and open code for public scrutiny.
To capture real-world cloud settings one has to view them from different perspectives: first that of a private entity (user), then that of an organization (such as a company), and lastly the global perspective of, say, a government.
An interesting starting point for the threats we have to consider is the Cloud Security Alliance's 2016 report on cloud security, naming twelve critical issues ranked by severity of real-world impact.
Data breaches are followed directly by weak identity, credential and access management and insecure Application Programming Interfaces (APIs) in this report; these are issues that can be addressed by assessing requirements and tailoring cryptographic solutions from the beginning, and by deploying state-of-the-art implementations instead of sticking to legacy code.
Roughly four categories of requirements for the usages of the cloud can be distinguished:
 Computations in the Cloud
 Sharing Data in the Cloud
 Information Retrieval from the Cloud
 Privacy Preservation in the Cloud
The following briefly explained concepts tackle concrete use-cases for end-users, companies and e-Government tasks alike:
 Order-Preserving Encryption (OPE) allows efficient range queries, but does it diminish security too much?
 Format-Preserving Encryption (FPE) offers in-place encryption in legacy databases.
 Data De-Duplication (DDD) enhances servers' backend performance.
 Secret Sharing Schemes (SSS) solve backup and availability issues by distributing encrypted parts of the whole.
 Malleable Signature Schemes (MSS) provide flexible authentication for documents and derived data.
 Private Information Retrieval (PIR) allows privately accessing elements in a database.
 Provable Data Possession (PDP) ensures that the cloud is storing the (uncorrupted) files.
 MultiParty Computation (MPC) allows secure cooperation across the Internet.
 Verifiable Computation (VC) builds trust in results of a delegated computation.
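To give a flavour of how simple some of these primitives are at their core, here is a minimal sketch of Shamir's scheme, the classic instance of the Secret Sharing Schemes (SSS) mentioned above. It is a toy implementation (tiny field choice, no authenticated or verifiable shares), not production code:

```python
import random

PRIME = 2 ** 61 - 1  # toy field; in practice the field depends on the secret size

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any $k$ of the $n$ shares determine the degree-$(k-1)$ polynomial and hence the secret, while $k-1$ shares reveal nothing, which is exactly the availability/backup property listed above.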
Let's hope this trend continues and ultimately leads to reliable, standardized primitives for realworld applications.
Stay tuned for Part II: Requirements and security goals for the IoT...
Tuesday, August 23, 2016
CRYPTO 2016 – BACKDOORS, BIG KEYS AND REVERSE FIREWALLS ON COMPROMISED SYSTEMS
Message Transmission with Reverse Firewalls— Secure Communication on Corrupted Machines
Big-Key Symmetric Encryption: Resisting Key Exfiltration
Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results
[DGA+15] Yevgeniy Dodis, Chaya Ganesh, Alexander Golovnev, Ari Juels, and Thomas Ristenpart. A formal treatment of backdoored pseudorandom generators. In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101–126, Sofia, Bulgaria, April 26–30, 2015. Springer, Heidelberg, Germany.
Saturday, July 30, 2016
ArcticCrypt
Nordenskiöld glacier 
The program was a good mixture of invited talks and talks from researchers who had submitted a paper. The topics ranged from symmetric cryptography to fully homomorphic encryption, as well as digital signatures and side-channel attacks. The full program can be found here: http://arcticcrypt.b.uib.no/program/.
One of the most interesting talks on Monday was given by Eik List: POEx: A Beyond-Birthday-Bound-Secure On-Line Cipher. In his talk he presented POEx, which achieves beyond-birthday-bound security with one call to a tweakable block cipher and one call to a 2n-bit universal hash function per message block. He then showed a security proof and gave possible instantiations.
In the night from Monday to Tuesday there were two fascinating talks at midnight, during the midnight sun. The first was given by Ron Rivest on Symmetric Encryption based on Keyrings and Error Correction. The second talk was by Adi Shamir: How Can Drunk Cryptographers Locate Polar Bears.
Adi Shamir explaining how drunk cryptographers can locate polar bears. 
On Tuesday, Joan Daemen gave his invited talk on Generic security of full-state keyed duplex. In his talk he briefly explained sponge constructions and how they can be used for authenticated encryption. Afterwards, he explained how to achieve beyond-birthday-bound security using sponges. In the end, he showed a new core for sponges, the (full-state) keyed duplex construction.
On Wednesday, a full day of sightseeing was planned: we went on a boat trip to the "ghost town" Pyramiden. We started our boat trip in Longyearbyen, where we saw some minke whales. The captain of the ship told us that he had also seen a blue whale a few days earlier. After a while we approached the bird cliffs where many seagulls and puffins were nesting. Birds are very important for the ecosystem of Svalbard, as they carry life from the water to the land. From there we continued our journey to the Nordenskiöld glacier, a huge glacier with blueish shining ice. After a whiskey with glacier ice, we continued to our final destination. The "ghost town" Pyramiden was a Russian settlement and coal-mining community; it was closed in 1998 and has been a tourist attraction since 2007.



In the more realistic multi-user setting, the attacker gets all users' public keys and can choose which one to attack. In her paper, she first analysed the BLS (Boneh-Lynn-Shacham) signature scheme's security in a manner similar to what was done for Schnorr: is key-prefixing necessary to maintain unforgeability of signatures in a multi-user setting? Next, she analysed the multi-user security of the aggregate signature scheme BGLS (Boneh-Gentry-Lynn-Shacham). She proposed a security notion in a multi-user setting analogous to the one for normal (non-aggregate) signatures, then analysed BGLS's security in this model.
On Friday, Gregor Leander presented Structural Attacks on Block Ciphers, introducing invariant subspace attacks as well as an improved technique called nonlinear invariant attacks.



Wednesday, July 13, 2016
Crypto events in Île-de-France
The sunny weather and the general feeling of holiday did not stop crypto-enthusiasts around Paris from meeting and discussing the advancements in this topic. On the one hand, the Paris Crypto Day brought together people based in the Paris area who work on different aspects of cryptography. The last such meeting was organized by ENS on 30.06.2016 and was fortunate to have Anne Canteaut (INRIA Paris), Leo Reyzin (BU), Victor Shoup (NYU) and Rafael Pass (Cornell) speaking about their research. On the other hand, on July 5-7 Paris hosted a workshop organized within the HEAT (Homomorphic Encryption Applications and Technology) programme. It was held at Université Pierre et Marie Curie (a.k.a. Paris 6) and was composed of six invited talks given by well-known researchers within the homomorphic encryption community and ten "regular" talks given by younger researchers and students.
Paris Crypto Day
The first presentation was given by Anne Canteaut on Algebraic Distinguishers against Symmetric Primitives. The talk presented a unified view of the notion of cube distinguishers and the more recently introduced division property. These attacks build on Knudsen's higher-order differential attacks, which exploit properties of the polynomial representation of the cipher. The presentation was much appreciated by both symmetric and asymmetric cryptographers.
Victor Shoup gave a talk about hash proof systems^{1} and their applications, in which he reviewed definitions, constructions and applications. Hash proof systems can be seen as a family of keyed hash functions $H_{sk}$ associated to a language $L$ defined over a domain $D$. The secret hashing key $sk$ is used to compute a hash value for every input $x \in D$. Magically, there is a second way to compute the same hash value: it uses a projection key $pk$ (derived from $sk$) and also a witness $w$ for $x \in L$. The original definition of hash proof systems requires that the projection key does not depend on the word $x$; the later notion of smooth projective hash functions relaxes this. Smooth projective hash functions have found applications, among others, in password-authenticated key exchange.
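To make the "two ways to hash" idea concrete, here is a toy hash proof system for the DDH language $L = \{(g_1^w, g_2^w)\}$, in the spirit of Cramer-Shoup; the tiny parameters below are mine and purely illustrative:

```python
import random

# Tiny DDH-style group: the order-q subgroup of Z_p^*, with p = 2q + 1.
p, q = 23, 11
g1, g2 = 4, 9  # two generators of the order-11 subgroup

# Secret hashing key sk = (x1, x2) and public projection key pk.
x1, x2 = random.randrange(q), random.randrange(q)
pk = pow(g1, x1, p) * pow(g2, x2, p) % p

def secret_hash(u1, u2):
    """With sk one can hash ANY word (u1, u2) of the domain."""
    return pow(u1, x1, p) * pow(u2, x2, p) % p

def projected_hash(w):
    """With a witness w for (g1^w, g2^w) being in L, pk alone suffices."""
    return pow(pk, w, p)

# Both ways agree on words of the language L = {(g1^w, g2^w)}.
w = random.randrange(1, q)
u1, u2 = pow(g1, w, p), pow(g2, w, p)
assert secret_hash(u1, u2) == projected_hash(w)
```

The agreement follows from $u_1^{x_1} u_2^{x_2} = (g_1^{x_1} g_2^{x_2})^w = pk^w$; on words outside $L$ the secret hash is unpredictable given only $pk$, which is the smoothness property the talk referred to.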
Leo Reyzin from Boston University (joint work with Joel Alwen, Jeremiah Blocki, and Krzysztof Pietrzak) presented an analysis of SCrypt (originally introduced by Colin Percival in 2009 for Tarsnap), a tool whose potential applications include the realization of time-lock puzzles from memory-hard problems. The starting point for their work was the key derivation function in SCrypt. As stated by Leo during the talk, SCrypt is defined as the result of $n$ steps, where each step consists of selecting one of two previously computed values (the selection depends on the values themselves) and hashing them. It is conjectured that this function is memory-hard.
The new result shows that in the Parallel Random Oracle Model, SCrypt is maximally memory-hard. One metric used is the product of time and memory used during the execution of SCrypt, for which the authors show the bound must be $\Theta(n^2)$. Interestingly, for a non-constant amount of memory used during the computation (a scenario that simulates real applications), a more accurate metric, defined by the sum of memory usage over time, is again proven to be bounded by $\Theta(n^2)$, and this holds even if the adversary is allowed to make an unbounded number of parallel random oracle queries at each step.
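The step structure Leo described, pick one of the previously computed values depending on the data itself and hash, can be sketched as follows. This is a toy illustration of the data-dependent access pattern only, not Percival's actual SCrypt:

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def toy_chain(seed, n):
    """Toy data-dependent hash chain in the spirit of SCrypt (not the real KDF)."""
    vals = [h(seed)]
    for i in range(1, n):
        # which earlier value gets mixed in depends on the data computed so far,
        # so the access pattern cannot be predicted without doing the work
        j = int.from_bytes(vals[-1][:4], "big") % i
        vals.append(h(vals[-1], vals[j]))
    return vals[-1]
```

The naive evaluation above keeps all $n$ values in memory and runs in $n$ steps; the result discussed in the talk says that any strategy trading memory for recomputation still pays $\Theta(n^2)$ in the time-memory product.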
The last speaker was Rafael Pass, from Cornell, who gave a gripping talk about the Analysis of the Blockchain Protocol in Asynchronous Networks. During his talk, Rafael defined the notions of consistency and liveness in asynchronous networks. He then explained his result proving that the blockchain consensus mechanism satisfies strong forms of consistency and liveness in an asynchronous network with adversarial delays that are a priori bounded.
HEAT Workshop
The workshop was really interesting because, besides new theoretical advances in the field, many talks were about the practical side of FHE: how to set the parameters, concrete results in cryptanalysis, libraries and real-world applications. The part about lattice-reduction techniques was especially interesting.
In particular, Antoine Joux gave a talk named "The prehistory of lattice-based cryptanalysis" where he reviewed some lattice reduction algorithms (Gauss's algorithm for two dimensions and LLL for higher dimensions) and gave some cryptanalytic results, e.g. Shamir's attack against the knapsack problem and the low-density attack against the Merkle-Hellman knapsack. Basically, lattice reduction aims at finding a "good" basis, made of short and almost orthogonal vectors, from a "bad" one, made of long and non-orthogonal vectors. With a good basis, problems like SVP or CVP become easy and it is possible to break cryptosystems based on these problems. There are algorithms that do this (like the famous LLL), but the conclusion was that lattice-based cryptography remains secure as long as lattices are big enough: all the lattice-reduction algorithms work well only if the dimension is not too high. In higher dimensions many problems appear and lattice reduction remains hard.
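Gauss's two-dimensional reduction algorithm that the talk started from fits in a few lines; a minimal sketch:

```python
def gauss_reduce(u, v):
    """Gauss/Lagrange reduction of a 2-dimensional lattice basis."""
    norm2 = lambda a: a[0] * a[0] + a[1] * a[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # subtract the best integer multiple of the shorter vector from the longer
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v  # reduced: short and nearly orthogonal
        u, v = v, u
```

On the "bad" basis $(1,0), (1000,1)$ it recovers the orthogonal basis $(1,0), (0,1)$. LLL can be seen as a generalization of this swap-and-reduce loop to higher dimensions, where, as noted above, reduction quality degrades as the dimension grows.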
Another interesting talk on this topic was "An overview of lattice reduction algorithms" by Damien Stehlé, who pointed out that lattice reduction has two main goals: besides the predictable one of cryptanalysing lattice-based cryptosystems (such as NTRU and all those based on SIS and LWE), it is useful for cryptanalysing other cryptosystems as well, like variants of RSA. He then presented the two main algorithms in this field, i.e. BKZ and LLL, and outlined their differences, like the global strategy used by BKZ versus the local one used by LLL. He also introduced faster LLL^{2}, an improvement of the LLL algorithm which is the subject of one of his most recent works. In the conclusions he mentioned some open problems, and finding a "quantum acceleration" is certainly one of the most interesting ones. In fact, as far as we know, lattice problems are not easier for quantum computers, and this is the reason why they are considered the most promising candidate for post-quantum cryptography.
If you are into coding, this may be interesting: Shi Bai gave a short talk about FPLLL, an implementation of floating-point LLL and BKZ reduction algorithms created by Damien Stehlé. It is a C++ library (also available in Python under the name FPyLLL) which is also used by the popular Sage. Its goal, as stated by the authors, is to provide benchmarks for lattice reduction algorithms and, more generally, lattice reduction for everyone. More details can be found at https://github.com/fplll/fplll and contributions are welcome!
Besides lattice reduction algorithms, another interesting talk was given by Florian Bourse, who presented a recent work^{3} about circuit privacy for FHE. The main result is that it is possible to homomorphically evaluate branching programs over GSW ciphertexts without revealing anything about the computation, i.e. the branching program, except for the result and a bound on the circuit's size, by adding just a small amount of noise at each step of the computation. This means that the "price" to pay is quite low, especially compared to other techniques based on bootstrapping. Also, this method does not rely on not-so-well-understood assumptions like circular security and only assumes the hardness of LWE with polynomial modulus-to-noise ratio.
References
^{1. Cramer R, Shoup V. Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption. In Advances in Cryptology – EUROCRYPT 2002, vol. 2332, LNCS. Springer: New York, NY, 2002; 45–64.↩}
^{2. Arnold Neumaier and Damien Stehlé. Faster LLLtype reduction of lattice bases. ISSAC 2016.↩}
^{3. Florian Bourse, Rafael Del Pino, Michele Minelli and Hoeteck Wee. FHE circuit privacy almost for free. CRYPTO 2016, to appear.↩}
This blog post has been collaboratively written by Michele and Razvan.
Tuesday, July 12, 2016
The Subset-Sum Problem
Historical Remarks
This old problem was first studied in 1897, the same year the first airborne mission to completely reach the geographical north pole (NP) started (and ended...), and it was one of the first problems proven to be NP-complete: worst-case instances are computationally intractable. Subset-Sum was proved to be NP-complete by reducing '3-SAT' to the 'Graph Coloring Problem', which was reduced to 'Exact Cover', which was reduced to Knapsack and close variants thereof. These rigorous reduction proofs were carried out during the early 1970s, and the Subset-Sum problem features on Karp's somewhat famous list of 21 NP-complete problems, all infeasible to solve with current computers and algorithms and thus a possible basis for cryptographic primitives. In the following table one can see how the expected time/space requirements of algorithms solving (1) in hard cases evolved as the techniques were refined by modern research:
Expected time and space requirements of algorithms solving (1) in average hard instances. 
Let us review two classical techniques that led to remarkable speedups:
Technique 1 - Meet in the Middle
Schroeppel-Shamir: Combining disjoint subproblems of smaller weight. 
Algorithms based on the birthday paradox construct expected collisions in the second component of the subproblems in the lists $L_1, L_2$, forcing any $x \in L_0$ to fulfill (1). The difficulty is to estimate the list size needed to observe the existence of one solution with high probability. It is desirable to ensure that terminating the algorithm with a nonempty $L_0$ (i.e. $|L_0| \geq 1$, meaning a solution was found) is more likely than the chance to see a polar bear towards the northeast, or to meet one in the middle of Svalbard, Norway.
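For comparison with the table above, the plain meet-in-the-middle idea (without the Schroeppel-Shamir refinement) is easy to implement: split the $n$ numbers into two halves, enumerate all $2^{n/2}$ partial sums of each half, and look for a pair that meets at the target. A toy sketch:

```python
from itertools import combinations

def subset_sum_mitm(nums, target):
    """Meet-in-the-middle subset-sum: ~2^(n/2) time and memory instead of 2^n."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]
    # enumerate all subset sums of the left half, remembering one subset per sum
    left_sums = {}
    for r in range(len(left) + 1):
        for idx in combinations(range(len(left)), r):
            left_sums.setdefault(sum(left[i] for i in idx), idx)
    # for each subset of the right half, look for the complementary left sum
    for r in range(len(right) + 1):
        for idx in combinations(range(len(right)), r):
            rest = target - sum(right[i] for i in idx)
            if rest in left_sums:  # the two halves "meet in the middle"
                return [left[i] for i in left_sums[rest]] + [right[i] for i in idx]
    return None
```

Trading the $2^n$ brute-force time for $2^{n/2}$ time at the cost of $2^{n/2}$ memory is exactly the first row of improvements in the table; Schroeppel-Shamir then reduces the memory further by generating the half-sums lazily from four quarter-lists.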
Technique 2 - Enlarge Number Set
BCJ11: Adding length-$n$ subsolutions increases the number-set. 
The number-set used by the authors was $\{-1,0,1\}$, a $-1$ indicating a summand appearing on both sides of Equation (1).
After constructing sufficiently many subproblems and their respective partial solutions, a collision can be expected; the combination then forms a solution for the given instance.
Applications
The cryptanalytic methods for structurally approaching the Subset-Sum problem are valuable algorithmic meta-techniques, also applicable to other NP-complete problems like lattice- or code-based problems.
Credits: http://fav.me/d3a1n08 
PS: The bad image quality is due to Blogger not letting me include vector graphics like .pdf or .eps, nor render them directly from LaTeX code... :(