Provably Secure Cryptographic Constructions

Modern cryptography has virtually no provably secure constructions. Starting from the first Diffie–Hellman key agreement protocol (Diffie & Hellman, 1976) and the first public key cryptosystem, RSA (Rivest et al., 1978), not a single public key cryptographic protocol has been proven secure. Note, however, that there exist secure secret key protocols, e.g., the one-time pad scheme (Shannon, 1949; Vernam, 1926); they can even achieve information-theoretic security, but only if the secret key carries at least as much information as the message.


1. Introduction

1.1 Cryptography: treading uncertain paths
An unconditional proof of security for a public key protocol would indeed be hard to find, since it would necessarily imply that P ≠ NP. Consider, for instance, a one-way function, i.e., a function that is easy to compute but hard to invert. One-way functions are basic cryptographic primitives; if there are no one-way functions, there is no public key cryptography. The usual cryptographic definition requires that a one-way function can be computed in polynomial time. Therefore, if we are given a preimage y ∈ f^{-1}(x), we can, by definition, verify in polynomial time that f(y) = x, so the inversion problem is actually in NP. This means that in order to prove that a function is one-way, we would have to prove that P ≠ NP, a rather daring feat to accomplish. A similar argument can be made for cryptosystems and other cryptographic primitives; for example, the definition of a trapdoor function (Goldreich, 2001) explicitly requires an inversion witness to exist.
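The NP argument above can be illustrated with a toy sketch. The function, the names, and the use of factoring below are our own illustration, not a construction from this chapter: the point is only that verifying a claimed preimage amounts to one forward evaluation, while inversion without extra information is a search.

```python
# Toy illustration (not a secure construction): integer multiplication is
# easy to compute, while inverting it amounts to factoring.  The NP argument:
# given a claimed preimage, verification is just one forward evaluation.

def f(p, q):
    # Easy direction: computable in polynomial time.
    return p * q

def verify_preimage(x, preimage):
    # Polynomial-time NP-style check: recompute f and compare.
    p, q = preimage
    return f(p, q) == x

def invert_by_search(x):
    # Naive inversion: brute-force search for a nontrivial factorization.
    for p in range(2, x):
        if x % p == 0:
            return (p, x // p)
    return None

x = f(101, 103)
pre = invert_by_search(x)                            # expensive in general
assert pre is not None and verify_preimage(x, pre)   # cheap to check
```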
But the situation is worse: there are also no conditional proofs that might establish a connection between natural structural assumptions (such as P ≠ NP or BPP ≠ NP) and cryptographic security. Recent developments in lattice-based cryptosystems relate cryptographic security to worst-case complexity, but they deal with problems unlikely to be NP-complete (Ajtai & Dwork, 1997; Dwork, 1997; Regev, 2005; 2006).
An excellent summary of the state of our knowledge regarding these matters was given by Impagliazzo (1995); although this paper is now more than 15 years old, we have not advanced much on these basic questions. Impagliazzo describes five possible worlds; we live in exactly one of them but do not know which one. He shows, in particular, that it may happen that NP problems are hard even on average, but cryptography does not exist (Pessiland), or that one-way functions exist but public key cryptosystems do not (Minicrypt). Another angle that might yield an approach to cryptography relates to complete cryptographic primitives. In regular complexity theory, much can be learned about complexity classes by studying their complete representatives; for instance, one can study any of the numerous well-defined combinatorial NP-complete problems, and any insight, such as a fast algorithm for solving one of them, is likely to be easily transferable to all other problems in the class NP. In cryptography, however, the situation is worse. There exist known complete cryptographic constructions, both one-way functions (Kojevnikov & Nikolenko, 2008; 2009; Levin, 1986) and public key cryptosystems (Grigoriev et al., 2009; Harnik et al., 2005). However, they are still mostly useless in that they are not really combinatorial (their hardness relies on enumerating Turing machines) and they do not let us relate cryptographic security to key assumptions of classical complexity theory. In short, it seems that modern cryptography still has a very long way to go to provably secure constructions.

Asymptotics and hard bounds
Moreover, the asymptotic nature of cryptographic definitions (and definitions of complexity theory in general) does not let us say anything about how hard it is to break a given cryptographic protocol for keys of a certain fixed length. And this is exactly what cryptography means in practice. For real life, it makes little sense to say that something is asymptotically hard. Such a result may (and does) provide some intuition that an adversary will not be able to solve the problem, but no real guarantees are made: why is RSA secure for 2048-bit numbers? Why can nobody come up with a device that breaks into all credit cards that use the same protocol with keys of the same length? There are no theoretical obstacles here. In essence, asymptotic complexity is not something one really wants to get out of cryptographic constructions. Ultimately, I do not care whether my credit card's protocol can or cannot be broken in the limit; I would be very happy if breaking my specific issue of credit cards required constant time, but this constant were larger than the size of the known Universe.
The proper computational model for proving this kind of property is general circuit complexity (see Section 2). This is the only computational model that can deal with specific bounds for specific key lengths; for comparison, different implementations of Turing machines may differ by as much as a quadratic factor. Basic results in classical circuit complexity came in the 1980s and earlier, many of them provided by Soviet mathematicians (Blum, 1984; Khrapchenko, 1971; Lupanov, 1965; Markov, 1964; Nechiporuk, 1966; Paul, 1977; Razborov, 1985; 1990; Sholomov, 1969; Stockmeyer, 1977; 1987; Subbotovskaya, 1961; 1963; Yablonskii, 1957). Over the last two decades, efforts in circuit complexity have mostly shifted towards results related to circuits with bounded depth and/or a restricted set of functions computed in a node (Ajtai, 1983; Cai, 1989; Furst et al., 1984; Håstad, 1987; Immerman, 1987; Razborov, 1987; 1995; Smolensky, 1987; Yao, 1985; 1990). However, we need classical results for cryptographic purposes, because the bounds we want to prove in cryptography should hold in the most general basis B_{2,1}. It would be a very bold move to advertise a credit card as "secure against adversaries who cannot use circuits of depth more than 3".

Feebly secure cryptographic primitives
We cannot, at present, hope to prove security either in the "hard" sense of circuit complexity or in the sense of classical cryptographic definitions (Goldreich, 2001; 2004; Goldwasser & Bellare, 2001). However, if we are unable to prove a superpolynomial gap between the complexities of honest parties and adversaries, maybe we can prove at least some gap? Alain Hiltgen (1992) managed to present a function that is twice (2 − o(1) times) harder to invert than to compute. His example is a linear function over GF(2) with a matrix that has few non-zero entries while the inverse matrix has many non-zero entries; the complexity gap follows by a simple argument of Lamagna and Savage (Lamagna & Savage, 1973; Savage, 1976): every bit of the output depends non-idly on many variables, and all these bits correspond to different functions, hence a lower bound on the complexity of computing them all together (see Section 3.2). The model of computation here is the most general one: the number of gates in a Boolean circuit that uses arbitrary binary Boolean gates. We have already noted that little more could be expected for this model at present. For example, the best known lower bound for the general circuit complexity of a specific Boolean function is 3n − o(n) (Blum, 1984), even though a simple counting argument proves that there exist plenty of Boolean functions with circuit complexity at least 2^n/n (Wegener, 1987). In this chapter, we briefly recount feebly one-way functions but primarily deal with another feebly secure cryptographic primitive: namely, we present constructions of feebly trapdoor functions. Of course, in order to obtain the result, we have to prove a lower bound on the circuit complexity of a certain function. To do so, we use the gate elimination technique, which dates back to the 1970s and has been used in proving virtually every single known bound in general circuit complexity (Blum, 1984; Paul, 1977; Stockmeyer, 1977). New methods would be of great interest; alas, there has been little progress in general circuit complexity since Blum's 3n − o(n) result. A much simpler proof has recently been presented by Demenkov & Kulikov (2011), but no improvement has been found yet.
We begin with linear constructions; in the linear case, we can actually nail gate elimination down to several well-defined techniques, which we present in Section 3.3. These techniques let us present linear feebly trapdoor functions; the linear part of this chapter is based mostly on (Davydow & Nikolenko, 2011; Hirsch & Nikolenko, 2008; 2009). For the nonlinear case, we make use of a specific nonlinear feebly one-way function presented in (Hirsch et al., 2011; Melanich, 2009).

Boolean circuits
Boolean circuits (see, e.g., Wegener, 1987) represent one of the few computational models that allow for proving specific rather than asymptotic lower bounds on complexity. In this model, a function's complexity is defined as the minimal size of a circuit computing this function. Circuits consist of gates, and gates can implement various Boolean functions.
We denote by B_{n,m} the set of all 2^{m·2^n} functions f : B^n → B^m, where B = {0, 1} is the field with two elements.

Definition 1. Let Ω be a set of Boolean functions f : B^m → B (m may differ for different f). Then an Ω-circuit is a directed acyclic labeled graph with vertices of two kinds:
• vertices of indegree 0 (vertices that no edges enter), labeled by one of the variables x_1, ..., x_n;
• vertices labeled by a function f ∈ Ω, with indegree equal to the arity of f.

Vertices of the first kind are called inputs or input variables; vertices of the second kind, gates. The size of a circuit is the number of gates in it.
We usually speak of outputs of a circuit and draw them in pictures, but in principle, every gate of an Ω-circuit computes some Boolean function and can be considered as an output of the circuit. The circuit complexity C_Ω(f) of a function f : B^n → B^m is defined as the minimal size of an Ω-circuit that computes f, i.e., a circuit with m gates that compute the results of applying f to the input bits.
In order to get rid of unary gates, we will assume that a gate computes both its corresponding function and its negation (the same applies to the inputs, too). Our model of computation is given by Boolean circuits with arbitrary binary gates (this is known as general circuit complexity); in other words, each gate of a circuit is labeled by one of the 16 Boolean functions from B_{2,1}. Several simple examples of such circuits are shown in Fig. 1.
In what follows, we denote by C(f) the circuit complexity of f in the basis B_{2,1} that consists of all binary Boolean functions. We assume that each gate in such a circuit depends on both inputs, i.e., there are no gates marked by constants or by the unary functions Id and ¬. This can be assumed without loss of generality, because such gates are easy to exclude from a nontrivial circuit without any increase in its size.
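As a minimal illustration of this model (the representation below is our own sketch, not the chapter's notation), a general circuit can be written as a list of gates, each applying an arbitrary binary Boolean function to two previously computed values; the size of the circuit is simply the number of gates.

```python
# Minimal sketch of general circuit complexity: a circuit over the basis
# B_{2,1} as a list of gates (op, a, b), where op is any Boolean function
# B x B -> B and a, b index earlier nodes (the n inputs come first).

def evaluate(circuit, inputs):
    values = list(inputs)
    for op, a, b in circuit:
        values.append(op(values[a], values[b]))
    return values  # every gate computes some Boolean function of the inputs

# Example: x1 XOR x2 XOR x3 computed by a circuit of size 2 (two XOR gates).
xor = lambda u, v: u ^ v
parity3 = [(xor, 0, 1),   # node 3 = x1 xor x2
           (xor, 3, 2)]   # node 4 = (x1 xor x2) xor x3

for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            assert evaluate(parity3, (x1, x2, x3))[-1] == x1 ^ x2 ^ x3
assert len(parity3) == 2  # the size of the circuit
```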

Feebly secure one-way functions
We want the size of circuits breaking our family of trapdoor functions to be larger than the size of circuits that perform encoding. Following Hiltgen (1992; 1994; 1998), for every injective function f_n ∈ B_{n,m} we can define its measure of one-wayness as M_F(f_n) = C(f_n^{-1}) / C(f_n). The problem now becomes to find sequences of functions f = {f_n}_{n=1}^∞ with a large asymptotic constant lim inf_{n→∞} M_F(f_n), which Hiltgen calls the order of one-wayness of f. Hiltgen (1992; 1994; 1998) presented several constructions of feebly secure one-way functions. To give a flavour of his results, we recall a sample one-way function: a linear function f : B^n → B^n (we assume for simplicity that n is even) given by a sparse matrix over GF(2). Straightforward computations show that f is invertible, and its inverse is given by a much denser matrix. It remains to invoke Proposition 6 (see below) to show that f^{-1} requires at least ⌊3n/2⌋ − 1 gates to compute, while f can obviously be computed in n + 1 gates. Fig. 2 shows a circuit that computes f in n + 1 gates; Fig. 3, one of the optimal circuits for f^{-1}. Therefore, f is a feebly one-way function with order of security 3/2. For this particular function, inversion becomes strictly harder than evaluation at n = 7 (eight gates to compute, nine to invert).
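Hiltgen's actual matrix was lost in extraction above, but the sparse-forward/dense-inverse phenomenon behind such examples is easy to demonstrate. The stand-in below is our own illustration, not Hiltgen's matrix, and it is not itself feebly one-way (its inverse can also be computed in n − 1 gates via prefix XORs); it only shows a matrix with about 2n nonzero entries whose inverse over GF(2) has about n²/2.

```python
# Illustrative stand-in (not Hiltgen's matrix): the bidiagonal matrix B with
# y_i = x_{i-1} xor x_i is sparse, while its inverse over GF(2) is the dense
# lower-triangular all-ones matrix (x_i = y_1 xor ... xor y_i).

def gf2_inverse(A):
    # Invert a 0/1 matrix over GF(2) by Gauss-Jordan elimination.
    n = len(A)
    M = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col])
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

n = 8
# Bidiagonal matrix: row i has ones in columns i-1 and i (about 2n entries).
B = [[1 if j in (i - 1, i) else 0 for j in range(n)] for i in range(n)]
Binv = gf2_inverse(B)
# The inverse is lower-triangular all-ones (about n^2/2 entries).
assert all(Binv[i][j] == (1 if j <= i else 0)
           for i in range(n) for j in range(n))
assert sum(map(sum, B)) == 2 * n - 1
assert sum(map(sum, Binv)) == n * (n + 1) // 2
```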

Feebly trapdoor candidates
In the context of feebly secure primitives, we have to give a more detailed definition of a trapdoor function than the regular cryptographic one (Goldreich, 2001): since we are interested in constants here, we must pay attention to all the details. The following definition does not say anything about the complexity and hardness of inversion; it merely sets up the dimensions.
Fig. 3. Hiltgen's feebly one-way function of order 3/2: a circuit for f^{-1}.

Definition 2. For given functions pi, ti, m, c : N → N, a feebly trapdoor candidate is a sequence of triples of circuits {(Seed_n, Eval_n, Inv_n)}_{n=1}^∞, where:
• Seed_n : B^n → B^{pi(n)+ti(n)},
• Eval_n : B^{pi(n)} × B^{m(n)} → B^{c(n)},
• Inv_n : B^{ti(n)} × B^{c(n)} → B^{m(n)},
such that for every security parameter n, every seed s ∈ B^n, and every input m ∈ B^{m(n)},
Inv_n(Seed_{n,2}(s), Eval_n(Seed_{n,1}(s), m)) = m,
where Seed_{n,1}(s) and Seed_{n,2}(s) are the first pi(n) bits ("public information") and the last ti(n) bits ("trapdoor information") of Seed_n(s), respectively.
Informally speaking, n is the security parameter (the length of the random seed), m(n) is the length of the input to the function, c(n) is the length of the function's output, and pi(n) and ti(n) are lengths of the public and trapdoor information, respectively.We call these functions "candidates" because Definition 2 does not imply any security, it merely sets up the dimensions and provides correct inversion.In our constructions, m(n)=c(n) and pi(n)=ti(n).
To find how secure a function is, one needs to know the size of the minimal circuit that inverts the function without knowing the trapdoor information. In addition to the worst-case complexity C(f), we will use a stronger notion: C_α(f), the minimal size of a circuit that correctly computes f on at least an α fraction of its inputs. For a trapdoor function to be secure, circuits that break the function should be larger than the circuits computing it. In fact, in what follows we prove a stronger result: no circuit of a certain size can break our candidate for any random seed s; that is, for every seed s, every such adversary fails, and we can even require that every such adversary fails with probability at least 1/4.

Definition 5. We say that a feebly trapdoor candidate {(Seed_n, Eval_n, Inv_n)}_{n=1}^∞ has order of security k with probability α if every sequence of circuits {Adv_n}_{n=1}^∞ that, for every seed s, inverts Eval_n on at least an α fraction of inputs satisfies lim inf_{n→∞} min{ size(Adv_n)/size(Seed_n), size(Adv_n)/size(Eval_n), size(Adv_n)/size(Inv_n) } ≥ k. We say that a feebly trapdoor candidate has order of security k if it has order of security k with probability α = 3/4.
Let us first give a few simple examples. If there is no trapdoor information at all, that is, ti(n) = 0, then every feebly trapdoor candidate {(Seed_n, Eval_n, Inv_n)}_{n=1}^∞ has order of security 1, since the sequence of circuits {Inv_n}_{n=1}^∞ itself successfully inverts it. If {(Seed_n, Eval_n, Inv_n)}_{n=1}^∞ implements a trapdoor function in the usual cryptographic sense, then k = ∞. Moreover, k = ∞ even if the bounds on the size of the adversary are merely superlinear, e.g., if every adversary requires Ω(n log n) gates. Our definitions are not designed to distinguish between these (very different) cases because, unfortunately, any nonlinear lower bound on the general circuit complexity of a specific function appears to be far beyond the current state of knowledge.
One could also consider key generation as a separate process and omit its complexity from the definition of the order of security.However, we prove our results for the definition stated above as it makes them stronger.
In closing, let us note explicitly that we are talking about one-time security. An adversary can amortize his circuit complexity over inverting a feebly trapdoor candidate a second time for the same seed, for example, by computing the trapdoor information and reusing it. Thus, in our setting, one has to pick a new seed for every input.

Classical gate elimination
In this section, we first briefly cover classical gate elimination and then introduce a few new ideas related to gate elimination that have recently been presented by Davydow & Nikolenko (2011). Gate elimination is the primary (and, to be honest, virtually the only) technique we have for proving lower bounds in general circuit complexity; so far, it has been used for every single such bound (Blum, 1984; Paul, 1977; Stockmeyer, 1977; Wegener, 1987). The basic idea of the method lies in the following inductive argument. Consider a function f and a circuit C of minimal size that computes it. Now substitute some value c for some variable x, thus obtaining a circuit for the function f|_{x=c}. The original circuit C can now be simplified, because the gates that had this variable as an input become either unary (recall that negations can be embedded into subsequent gates) or constant (in which case we can even proceed to eliminate subsequent gates). After figuring out how many gates can be eliminated in every step, one proceeds by induction as long as it is possible to find a suitable variable that eliminates enough gates. Evidently, the total number of eliminated gates is a lower bound on the complexity of f. Usually, the important case is when a gate is nonlinear, such as an AND or an OR gate: in that case, it is always possible to choose a value for an input of such a gate so that the gate becomes constant and, therefore, its immediate descendants can also be eliminated. However, this kind of reasoning also works for linear functions, and in Section 3.3 we distill it into two relatively simple ideas.
To give the reader a flavour of classical gate elimination, we briefly recall the proof of the 2n − 3 lower bound for the functions f^{(n)}_{3,c}(x_1, ..., x_n) = 1 if and only if x_1 + ... + x_n ≡ c (mod 3). This proof can be found in many sources, including (Wegener, 1987). Note that every function f^{(n)}_{3,c} has the following property: for every pair of variables x_j and x_k, the four possible assignments of values to x_j and x_k produce at least three different restrictions of f^{(n)}_{3,c}; this is easy to see, since different assignments of x_j and x_k give three different values of x_j + x_k, resulting in restrictions f^{(n−2)}_{3,c'} with three different constants c'. Now consider the topmost gate in some topological order on the optimal circuit computing f^{(n)}_{3,c}. Since it is topmost, two variables, say x_j and x_k, enter this gate. At least one of these variables must enter at least one other gate, because otherwise f^{(n)}_{3,c} would depend only on x_j ⊕ x_k and not on x_j and x_k separately, giving rise to only two different restrictions among the four. Therefore, there exists a variable that enters at least two gates, and by setting this variable to a constant we eliminate at least two gates from the circuit. It remains to note that setting a variable to a constant transforms f^{(n)}_{3,c} into some f^{(n−1)}_{3,c'}, and we can invoke the induction hypothesis.
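The definition of f^{(n)}_{3,c} is garbled in the extracted text above; assuming the standard choice, the mod-3 counting function, the key property that every pair of variables yields at least three distinct restrictions can be checked mechanically. The sketch below is our own.

```python
# Brute-force check (our own sketch; assumes f_{3,c}(x) = 1 iff
# x_1 + ... + x_n = c (mod 3)): for every pair of variables, the four
# assignments to that pair yield at least three distinct restrictions.
from itertools import product

def f3(c, xs):
    return int(sum(xs) % 3 == c)

def restriction(c, n, j, k, vj, vk):
    # Truth table of f3 with x_j = vj and x_k = vk fixed (assumes j < k).
    table = []
    for rest in product((0, 1), repeat=n - 2):
        xs = list(rest)
        xs.insert(j, vj)
        xs.insert(k, vk)
        table.append(f3(c, xs))
    return tuple(table)

n = 5
for c in range(3):
    for j in range(n):
        for k in range(j + 1, n):
            tables = {restriction(c, n, j, k, vj, vk)
                      for vj in (0, 1) for vk in (0, 1)}
            assert len(tables) >= 3   # at least three distinct restrictions
```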

Gate elimination for feebly secure one-way functions
The following very simple argument is due to Lamagna and Savage (Lamagna & Savage, 1973; Savage, 1976); it actually suffices for all of Hiltgen's linear examples.
Proposition 6. 1. Suppose that f : B^n → B depends non-idly on each of its n variables, that is, for every i there exist values a_1, ..., a_{i−1}, a_{i+1}, ..., a_n ∈ B such that f(a_1, ..., a_{i−1}, 0, a_{i+1}, ..., a_n) ≠ f(a_1, ..., a_{i−1}, 1, a_{i+1}, ..., a_n). Then C(f) ≥ n − 1.
2. Let f = (f^{(1)}, ..., f^{(m)}) : B^n → B^m, where f^{(k)} is the k-th component of f. If the m component functions f^{(i)} are pairwise different and each of them satisfies C(f^{(i)}) ≥ c, then C(f) ≥ c + m − 1.

Proof. 1. Consider a minimal circuit of size s computing f. Since f depends (here and in what follows we say "depends" meaning "depends nontrivially") on all n of its variables, each input gate must have at least one outgoing edge. Since the circuit is minimal, each of the other gates, except possibly the output, must also have at least one outgoing edge. Therefore, the circuit has at least s + n − 1 edges. On the other hand, a circuit with s binary gates cannot have more than 2s edges. Therefore, 2s ≥ s + n − 1, i.e., s ≥ n − 1.
2. Consider a circuit computing f. Note that it has at least c − 1 gates that do not compute any function of circuit complexity c or more (namely, the first c − 1 gates in some topological order). To compute any component function f^{(i)}, we have to add at least one more gate, and we have to add at least one gate for each of the m components, since every new gate adds only one new function. Thus, we get the necessary bound of c + m − 1 gates.
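Part 1 of this bound is tight for parity and can be verified exhaustively in the smallest interesting case. The search code below is our own sketch: no circuit of two arbitrary binary gates computes x_1 ⊕ x_2 ⊕ x_3 ⊕ x_4, while three XOR gates suffice, so the general circuit complexity here is exactly n − 1 = 3.

```python
# Exhaustive verification of Proposition 6, part 1, in a small case (our own
# sketch): parity of n = 4 variables needs at least n - 1 = 3 arbitrary
# binary gates, and 3 gates suffice.  Truth tables are 16-bit masks, one bit
# per assignment of the four input variables.

N = 4
ASSIGN = range(1 << N)
INPUTS = [sum(((a >> i) & 1) << a for a in ASSIGN) for i in range(N)]
PARITY = sum((bin(a).count("1") & 1) << a for a in ASSIGN)

def apply_gate(op, ta, tb):
    # op is one of the 16 binary Boolean functions, encoded in 4 bits.
    out = 0
    for a in ASSIGN:
        bit = (op >> (2 * ((ta >> a) & 1) + ((tb >> a) & 1))) & 1
        out |= bit << a
    return out

def reachable(tables, gates_left):
    # Can some circuit extend `tables` by `gates_left` gates to get PARITY?
    if PARITY in tables:
        return True
    if gates_left == 0:
        return False
    for ta in tables:
        for tb in tables:
            for op in range(16):
                if reachable(tables + [apply_gate(op, ta, tb)], gates_left - 1):
                    return True
    return False

XOR = 0b0110
g = apply_gate(XOR, INPUTS[0], INPUTS[1])
g = apply_gate(XOR, g, INPUTS[2])
g = apply_gate(XOR, g, INPUTS[3])
assert g == PARITY                      # three gates suffice
assert not reachable(INPUTS, 2)         # two gates do not
```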
Hiltgen computed the minimal complexity of one bit of the preimage (e.g., since each row of A^{-1} has at least n/2 nonzero entries, the minimal complexity of each component of A^{-1}y is at least n/2 − 1) and thus produced lower bounds on the complexity of inverting the function. Besides, in cryptography it is generally desirable to prove not only worst-case bounds but also that an adversary is unable to invert the function on a substantial fraction of inputs. In Hiltgen's works, this fact followed from a very simple observation (which was not even explicitly stated).
Lemma 7. Consider the function f = x_1 ⊕ ... ⊕ x_n. Any function g that depends on only m < n of these variables differs from f on exactly half of the inputs.

Proof. Since m < n, there exists an index j ∈ {1, ..., n} such that g does not depend on x_j. This means that for every assignment of values to the other variables, whatever the value of g is, f coincides with g for one of the values of x_j and differs from g for the other. Hence f differs from g on precisely 1/2 of the inputs.
This argument suffices for Hiltgen's feeble one-wayness result for the square matrix A^{-1}: first we apply the first part of Proposition 6 and see that every output has complexity at least n/2 − 1, and then the second part of Proposition 6 yields the necessary bound of 3n/2 − 1. Moreover, if a circuit has fewer than the necessary number of gates, one of its outputs inevitably depends on fewer than the necessary number of input variables, which, by Lemma 7, gives the necessary error rate of 1/2.
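Lemma 7 can be confirmed by brute force in a small case. In the sketch below (our own), we enumerate, for concreteness, all functions that ignore the last variable; every such function disagrees with parity on exactly half of the inputs.

```python
# Brute-force confirmation of Lemma 7 for n = 4 (our own sketch): every
# function g that ignores at least one variable (here, x4) differs from
# x1 + x2 + x3 + x4 (mod 2) on exactly half of the 16 inputs.
from itertools import product

n = 4
inputs = list(product((0, 1), repeat=n))
parity = [sum(x) % 2 for x in inputs]

for code in range(1 << (1 << (n - 1))):      # all 256 functions of x1,x2,x3
    g = [(code >> sum(b << i for i, b in enumerate(x[:n - 1]))) & 1
         for x in inputs]
    disagreements = sum(p != v for p, v in zip(parity, g))
    assert disagreements == (1 << n) // 2    # exactly half of all inputs
```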

Gate elimination for linear functions
In this section, we deal with gate elimination for linear functions. We do not know how to prove that one cannot, in general, produce a smaller circuit for a linear function by using nonlinear gates, but it is evident that we cannot assume any gates to be nonlinear in this setting. Thus, gate elimination distills into two very simple ideas. Idea 1 is trivial and has been noted many times before, while Idea 2 will let us devise feebly secure constructions in Section 4.

Since we are dealing with linear functions, we will, for convenience, state our results in terms of matrices over F_2; the circuit complexity C(A) (respectively, C_α(A)) of a matrix A is the circuit complexity of the corresponding linear function. By A_{−i} we denote the matrix A without its i-th column; note that if A corresponds to f, then A_{−i} corresponds to f|_{x_i=0}. If a matrix A has a zero column A_i, the corresponding function does not depend on the input x_i; in what follows, we always assume that functions depend nontrivially on all their inputs, so the matrices have no zero columns; we call such matrices nontrivial.
Idea 1. Suppose that for n steps, there is at least one gate to eliminate. Then C(f) ≥ n.

Theorem 8. Fix a real number α ∈ [0, 1]. Suppose that P = {P_n}_{n=1}^∞ is a series of predicates defined on matrices over F_2 with the following properties:
• if P_1(A) holds then A is nontrivial;
• if P_n(A) holds then P_m(A) holds for every 1 ≤ m ≤ n;
• if P_n(A) holds then, for every index i, P_{n−1}(A_{−i}) holds.

Then, for every matrix A such that P_n(A) holds, C_α(A) ≥ n.
Proof. The proof goes by straightforward induction on the index of P_n; the first property of P provides the base, and the other properties take care of the induction step. For the induction step, consider the first gate of an optimal circuit C implementing A. By the monotonicity property of P and the induction base, the circuit is nontrivial, so there is a first gate. Consider a variable x_i entering that gate. Note that if C computes f on a fraction α of its inputs, then for some c, C|_{x_i=c} computes f|_{x_i=c} on a fraction α of its inputs. If we substitute this value for this variable, we get a circuit C|_{x_i=c} that has at most size(C) − 1 gates and implements A_{−i} on at least an α fraction of inputs.
Note that the first statement of Proposition 6 is a special case of Theorem 8 with P_n(A) = "A has a row with at least n + 1 ones". We also derive another corollary.

Corollary 9. If A is a matrix of rank n, and each column of A has at least two ones, then C(A) ≥ n − 2.

Proof. Take P_n(A) = "rank(A) ≥ n + 2 and each column of A has at least 2 ones".
Idea 2. Suppose that for n steps, there exists an input in the circuit with two outgoing edges and, moreover, in m of these cases both of these edges go to gates (rather than to a gate and an output). Then C(f) ≥ n + m.

Theorem 10. We call a nonzero entry of a matrix unique if it is the only nonzero entry in its row. Fix a real number α ∈ [0, 1]. Suppose that P = {P_n}_{n=1}^∞ is a series of predicates defined on matrices over F_2 with the following properties:
• if P_1(A) holds then A is nontrivial;
• if P_n(A) holds then P_m(A) holds for every 1 ≤ m ≤ n;
• if P_n(A) holds then, for every index i, if the i-th column has no unique entries then P_{n−2}(A_{−i}) holds, and otherwise P_{n−1}(A_{−i}) holds.

Then, for every matrix A such that P_n(A) holds, C_α(A) ≥ n.
Theorem 10 and Corollary 11 generalize several results that have been proven independently.
For example, here is the "master lemma" of the original paper on feebly trapdoor functions.
Corollary 12 ((Hirsch & Nikolenko, 2009, Lemma 5)). Let t, u ≥ 1. Assume that χ is a linear function with matrix A over F_2. Assume also that all columns of A are different, every row of A has at least u nonzero entries, and after removing any t columns of A, the matrix still has at least one row containing at least two nonzero entries. Then C(χ) ≥ u + t and, moreover, C_{3/4}(χ) ≥ u + t.
Proof. Take P_n(A) = "after removing any n columns of A, it still has at least one nonzero row", Q_0(A) = "true", and Q_m(A) = "every row of A has at least m + 1 ones" for m > 0. Then P_{t+1}(A) and Q_{u−1}(A) hold, and P and Q satisfy the conditions of Corollary 11, which gives the desired bound. Note that in this case Q_m for m > 0 cannot hold for a matrix in which some row has only a single one, so in the gate elimination proof, two gates are eliminated in each of the first u − 1 steps, and then one gate is eliminated in each of the following t − u + 2 steps.
We also derive another, even stronger corollary that will be important for new feebly secure constructions.
Corollary 13. Let t ≥ u ≥ 2. Assume that A is a u × t matrix with different columns, and each column of A has at least two nonzero entries (ones). Then C(A) ≥ 2t − u and, moreover, C_{3/4}(A) ≥ 2t − u.

Proof. Take P_n(A) = "twice the number of nonzero columns of A less the number of nonzero rows of A is at least n". Then P_{2t−u}(A) holds, and the P_n satisfy the conditions of Theorem 10.

Naturally, we could prove Corollaries 9 and 13 directly. We have chosen the path of generalization for two reasons: first, to make Theorem 14 more precise and more general, and second, to show the limits of gate elimination for linear functions. As we have already mentioned, for linear functions we cannot count on nonlinear gates that would eliminate their descendants. In Theorems 8 and 10, we have considered the two basic cases: when only one edge goes out of a variable and when there are two outgoing edges (going either to two gates or to a gate and an output). It appears that we can hardly expect anything more from classical gate elimination in the linear case.

Extension to block diagonal matrices
We finish this section with an extension of these results to block diagonal matrices. In general, we cannot prove that the direct sum of several functions has circuit complexity equal to the sum of the circuit complexities of these functions; counterexamples are known as "mass production" (Wegener, 1987). However, for linear functions and gate elimination in the flavour of Theorems 8 and 10, we can. The following theorem generalizes Lemma 6 of (Hirsch & Nikolenko, 2009).
Theorem 14. Suppose that a linear function χ is given by a block diagonal matrix A = diag(A_1, ..., A_k), and suppose that for each block A_j there is a family of predicates P^{(j)} satisfying the conditions of Theorem 10 such that P^{(j)}_{n_j}(A_j) holds. Then C_α(χ) ≥ n_1 + ... + n_k.

Proof. We invoke Theorem 10 with the predicate composed of the original predicates: P_n(A) = "there exist indices n_1', ..., n_k' with n_1' + ... + n_k' = n such that P^{(j)}_{n_j'}(A_j) holds for every j". It is now straightforward to check that P = {P_n}_{n=1}^∞ satisfies the conditions of Theorem 10 (since every deleted column affects only one block), and the block diagonal matrix satisfies P_{n_1+...+n_k}.

4. Feebly secure trapdoor functions

4.1 Idea of the construction
In this section, we present two constructions of feebly secure trapdoor functions: a linear one and a nonlinear one. Both have the same rather peculiar structure. It turns out that when we directly construct a feebly secure trapdoor candidate such that an adversary has to spend more time inverting it than honest participants do, we are not able to make encoding (i.e., function evaluation) faster than inversion. In fact, evaluation takes more time than even an adversary requires to invert our candidates.
To achieve a feebly secure trapdoor function, we add another block as a direct sum to that candidate. This block represents a feebly secure one-way function, one of the constructions presented by Hiltgen (1992; 1994; 1998). In this block, honest inversion and breaking are exactly the same, since there is no secret key at all; nevertheless, both are harder than evaluating the function. Thus, in the resulting block diagonal construction, breaking remains harder than honest inversion, but both gain in complexity over function evaluation. This idea was first presented by Hirsch & Nikolenko (2009) and has been used since in every feebly secure trapdoor function.

Linear feebly secure trapdoor functions
This section is based on (Davydow & Nikolenko, 2011). Let us first introduce some notation. By U_n we denote the upper triangular n × n matrix with ones on and above the main diagonal; it is the inverse of the upper bidiagonal matrix with ones on the main diagonal and the superdiagonal. Note that U_n^2 is an upper triangular matrix with zeros and ones chequered above the main diagonal. We will often use matrices composed of smaller matrices as blocks; for instance, (U_n U_n) is a matrix of size n × 2n composed of two upper triangular blocks.
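The displayed matrices were lost in extraction; assuming U_n is the all-ones upper triangular matrix (an assumption consistent with the surviving remark about U_n^2), both claims can be checked mechanically over GF(2):

```python
# Sanity check (our own sketch) of the claims about U_n over GF(2): the
# inverse of the upper bidiagonal matrix (ones on the diagonal and the
# superdiagonal) is the all-ones upper triangular matrix U_n, and U_n^2 has
# zeros and ones chequered above the main diagonal.

def gf2_matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

n = 6
bidiag = [[1 if j in (i, i + 1) else 0 for j in range(n)] for i in range(n)]
U = [[1 if j >= i else 0 for j in range(n)] for i in range(n)]

identity = [[int(i == j) for j in range(n)] for i in range(n)]
assert gf2_matmul(bidiag, U) == identity   # U_n inverts the bidiagonal matrix

U2 = gf2_matmul(U, U)
# Entry (i, j) of U_n^2 counts indices i <= k <= j, i.e. (j - i + 1) mod 2.
assert all(U2[i][j] == ((j - i + 1) % 2 if j >= i else 0)
           for i in range(n) for j in range(n))
```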
An adversary is now supposed to invert Eval_n without knowing the trapdoor information. As the feebly one-way block A, we take one of Hiltgen's functions with order of security 2 − ε, which have been constructed for every ε > 0 (Hiltgen, 1992); we take the matrix of this function to have order λn, where λ will be chosen below. Lemma 15 and Theorem 14 now yield the corresponding complexity bounds, and the resulting order of security reaches its maximum for λ = 1/(1 − ε); this maximum equals (5 − 4ε)/(4 − ε), which tends to 5/4 as ε → 0. Thus, we have proven the following theorem.

Theorem 16. For every ε > 0, there exists a linear feebly secure trapdoor function with seed length pi(n) = ti(n) = n, input and output length c(n) = m(n) = 2n, and order of security 5/4 − ε.
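The surviving closed form can be sanity-checked numerically (a trivial sketch of ours; the full expression for the order of security as a function of λ was lost in extraction):

```python
# Numeric check of the surviving formula: the order of security
# (5 - 4e)/(4 - e) is below 5/4 for e > 0 and tends to 5/4 as e -> 0.

def order(eps):
    return (5 - 4 * eps) / (4 - eps)

assert order(0) == 1.25
assert all(order(e) < 1.25 for e in (0.5, 0.1, 0.01))
assert abs(order(1e-9) - 1.25) < 1e-8
```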

Nonlinear feebly secure trapdoor functions
Over the previous two sections, we have discussed linear feebly secure one-way functions. However, a nonlinear approach can yield better constants. This section is based on (Hirsch et al., 2011; Melanich, 2009).
Our nonlinear feebly trapdoor constructions are based on a feebly one-way function obtained by uniting Hiltgen's linear feebly one-way function with the first computationally asymmetric function of four variables (Massey, 1996). Consider a sequence of functions {f_n}_{n=1}^∞ given by relations of the form y_j = f_j(x_1, ..., x_n). To obtain f_n^{-1}, we sum up all rows except the last one; further, substituting y_n for x_n, we find x_2 and x_{n−1}, and the other x_k can be expressed via x_{n−1} in turn, which gives the inverse function.

Lemma 17. The family of functions {f_n}_{n=1}^∞ is feebly one-way of order 2.
Proof. It is easy to see that f_n can be computed in n + 1 gates. Each component function of f_n^{-1}, except for the last one, depends non-trivially on all n variables, and all component functions are different. Therefore, to compute f_n^{-1} we need at least (n − 1) + (n − 2) = 2n − 3 gates (since f_n is invertible, Proposition 6 is applicable to f_n and f_n^{-1}). On the other hand, f_n cannot be computed faster than in n − 1 gates because all component functions of f_n are different, and only one of them is trivial (depends on only one variable). At the same time, f_n^{-1} can be computed in 2n − 2 gates: one computes (y_1 ⊕ ... ⊕ y_{n−1})y_n in n − 1 gates and spends one gate to compute each component function except the last one. We get 2n − 2, which is exactly what we need.
For the proof of the following theorem, we refer to (Hirsch et al., 2011; Melanich, 2009).
We can now apply the same direct sum idea to this nonlinear feebly one-way function. The direct sum consists of two blocks. First, for f as above, we have: In this construction, evaluation is no easier than inversion without the trapdoor.
For the second block we have Eval_n(m) = f(m), Inv_n(c) = f^{-1}(c), Adv_n(c) = f^{-1}(c). (18) Again, as above, it is not a trapdoor function at all because inversion is implemented with no regard for the trapdoor. For a message m of length |m| = n, the evaluation circuit has n + 1 gates, while inversion, by Theorem 18, can be performed only by circuits with at least 2n − 4 gates. Thus, in this construction evaluation is easy and inversion is hard, both for an honest participant of the protocol and for an adversary.
We can now unite these two trapdoor candidates and get the following construction: Key_n(s) = (f_n(s), s). The proofs of lower bounds on these constructions are rather involved; we refer to (Hirsch et al., 2011; Melanich, 2009) for detailed proofs and simply give the results here. The resulting expression for the order of security is maximized for α = 2, and the optimal value of the order of security is 7/5. We summarize this in the following theorem.
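The optimization over α can be checked numerically from the asymptotic bounds of Lemma 19. This is a sketch under the assumption that the order of security is the least of the adversary-to-honest-circuit size ratios, with lower-order terms dropped:

```python
def order(alpha):
    # Leading-term ratios from Lemma 19: the adversary needs about
    # (3 + 2a)n gates, evaluation takes about (3 + a)n gates, and
    # inversion about (1 + 2a)n gates (key generation, about n gates,
    # never gives the minimum, so it is omitted).
    adv = 3 + 2 * alpha
    return min(adv / (3 + alpha), adv / (1 + 2 * alpha))

grid = [i / 100 for i in range(100, 401)]  # alpha in [1, 4]
best = max(grid, key=order)
assert best == 2.0
assert abs(order(best) - 7 / 5) < 1e-12
```

The maximum sits where the two ratios cross: (3 + 2α)/(3 + α) increases in α while (3 + 2α)/(1 + 2α) decreases, and they meet at α = 2 with common value 7/5.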
Theorem 20. There exists a nonlinear feebly trapdoor function with seed length pi(n) = ti(n) = n, input and output length c(n) = m(n) = 3n, and order of security 7/5.

Conclusion
In this chapter, we have discussed recent developments in the field of feebly secure cryptographic primitives. While these primitives can hardly be put to any practical use at present, they are still important from the theoretical point of view. As sad as it sounds, this is actually the frontier of provable, mathematically sound results on security; we do not know how to prove anything stronger.
Further work in this direction is twofold. One can further develop the notions of feebly secure primitives. Constants in the orders of security can probably be improved; perhaps other primitives (key agreement protocols, zero-knowledge proofs, etc.) can find their feebly secure counterparts. This work can widen the scope of feebly secure methods, but the real breakthrough can only come from one place.
It becomes clear that cryptographic needs call for further advances in general circuit complexity. General circuit complexity has not seen a breakthrough since the 1980s; nonconstructive lower bounds are easy to prove by counting, but constructive lower bounds remain elusive. The best bound we know is Blum's lower bound of 3n − o(n), proven in 1984.
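The counting argument mentioned above can be made concrete with a crude (far from tight) bound on the number of circuits; the bound below is our own back-of-the-envelope version, not one from the text:

```python
def circuit_count_bound(n, s):
    # Crude upper bound on the number of circuits with s binary gates
    # over n inputs: each gate chooses one of the 16 binary Boolean
    # operations and two predecessors among the n inputs and s gates.
    return (16 * (n + s) ** 2) ** s

n = 16
num_functions = 2 ** (2 ** n)  # all Boolean functions of 16 variables

# Find the first size s at which the circuit count could possibly
# cover all functions; every smaller size provably misses some function.
s = 1
while circuit_count_bound(n, s) < num_functions:
    s += 1
# Counting thus shows nonconstructively that some 16-variable function
# needs thousands of gates, while the best explicit bound is only
# about 3n, as the text notes.
assert 2000 < s < 3000
```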
At present, we do not know how to rise to this challenge; none of the known methods seem to work, so a general breakthrough is required for nonlinear lower bounds on circuit complexity.
The importance of such a breakthrough can hardly be overstated; in this chapter, we have seen only one possible use of circuit lower bounds.

Fig. 2. Hiltgen's feebly one-way function of order 3/2: a circuit for f.

A size-s circuit that breaks a feebly trapdoor candidate C = {Seed_n, Eval_n, Inv_n} on seed length n in the sense of Definition 4 provides a counterexample for the statement C_α(Inv_n) > s.
Definition 3. We denote by C_α(f) the minimal size of a circuit that correctly computes a function f ∈ B_{n,m} on more than an α fraction of its inputs (of length n). Obviously, C_α(f) ≤ C(f) for all f and 0 ≤ α ≤ 1.
Definition 4. A circuit N breaks a feebly trapdoor candidate C = {Seed_n, Eval_n, Inv_n} on seed length n with probability α if, for uniformly chosen seeds s ∈ B_n and inputs m ∈ B_{m(n)}, Pr_{(s,m)∈U} [ N(Seed_{n,1}(s), Eval_n(Seed_{n,1}(s), m)) = m ] > α. (8)
Lemma 19. The following upper and lower bounds hold for the components of our nonlinear trapdoor construction: C(Key_n) ≤ n + 1, C(Eval_n) ≤ 2n − 2 + n + αn + 1 = 3n + αn − 1, C(Inv_n) ≤ n + 2αn − 2, C_{3/4}(Adv_n) ≥ 3n + 2αn − 8. To maximize the order of security of this trapdoor function (Definition 5), we have to find the α for which the ratio of these bounds is maximal.
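Definition 4 can be read operationally as a sampling experiment. The sketch below estimates the break probability by Monte Carlo; the toy Seed, Eval, and adversary used at the end are hypothetical stand-ins of our own, not constructions from the chapter:

```python
import random

def estimate_break_probability(seed_pub, eval_fn, adversary,
                               n, m_len, trials=10_000):
    # Monte Carlo estimate of Definition 4: the probability, over
    # uniform seeds s and messages m, that the adversary recovers m
    # from the public key pi = Seed_{n,1}(s) and the ciphertext.
    hits = 0
    for _ in range(trials):
        s = [random.randint(0, 1) for _ in range(n)]
        m = [random.randint(0, 1) for _ in range(m_len)]
        pi = seed_pub(s)        # public part Seed_{n,1}(s)
        c = eval_fn(pi, m)      # ciphertext Eval_n(pi, m)
        if adversary(pi, c) == m:
            hits += 1
    return hits / trials

# Toy illustration: "encryption" XORs the message with the public
# seed, so an adversary that sees pi inverts it perfectly and breaks
# this (deliberately worthless) candidate with probability 1.
xor = lambda a, b: [x ^ y for x, y in zip(a, b)]
p = estimate_break_probability(
    seed_pub=lambda s: s,
    eval_fn=xor,
    adversary=xor,
    n=8, m_len=8,
)
assert p == 1.0
```

A candidate is broken in the sense of Definition 4 exactly when this estimated probability exceeds α for some circuit of the stated size.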