
The Security of Cryptosystems Based on Error-Correcting Codes

By Ahmed Drissi

Submitted: July 3rd, 2020. Reviewed: August 27th, 2020. Published: August 18th, 2021.

DOI: 10.5772/intechopen.93782


Abstract

Quantum computers are distinguished by their enormous storage capacity and relatively high computing speed. Among the cryptosystems of the future, the best known and most studied that are expected to resist this kind of computer are cryptosystems based on error-correcting codes. The use of problems inspired by the theory of error-correcting codes in the design of cryptographic systems offers an alternative to cryptosystems based on number theory, as well as solutions to their vulnerabilities. Their security rests on the problem of decoding a random code, which is NP-complete. In this chapter, we discuss the cryptographic properties of error-correcting codes, as well as the security of cryptosystems based on code theory.

Keywords

  • McEliece cipher
  • hash function
  • syndrome decoding
  • correcting codes
  • random code

1. Introduction

As in all asymmetric cryptographic systems, the idea is to base security on the difficulty of inverting a one-way trapdoor function. The theory of error-correcting codes contains well-structured problems that are difficult to solve and more or less suitable for use in cryptography. The first to have the idea of using error-correcting codes for cryptographic purposes was McEliece in 1978, who proposed an asymmetric encryption algorithm. In 1986, Niederreiter proposed another cryptographic system equivalent to that of McEliece [1]. The two systems of McEliece and Niederreiter have equivalent security against a passive attack; however, they do not against an active attack [2]. In the following section, we give an overview of the theory of error-correcting codes. In the third section, we deal only with the basic systems based on this theory. The last section is devoted to the discussion of security parameters and the most well-known attacks. In what follows we use the following notation.

$F_{2^m}$: a finite field of $2^m$ elements.

$K[x]$: the ring of polynomials in one indeterminate with coefficients in $K$.

$K[x]/(P)$: the quotient ring of $K[x]$ modulo $P$.

$K^*$: the set $K$ deprived of the element 0.

$\deg Q(x)$: the degree of the polynomial $Q(x)$.

$F_2^m$: the set of vectors of length $m$ with components 0 and 1.

$F^n$: the Cartesian product of the set $F$ with itself $n$ times.

$\lfloor x \rfloor$: the integer part of $x$.

$A^t$: the transpose of the matrix $A$.

$I_k$: the identity matrix of order $k$.

gcd: greatest common divisor.

$C_n^t$: the number of combinations of $t$ elements among $n$ elements.


2. Error-correcting codes

2.1 Finite fields

Finite fields are the basis of many error-correcting codes and cryptographic systems, so it is essential to recall the theory of finite fields in order to understand how linear codes work. In this section we present some properties of finite fields and a method of representing them for later use. We are interested in constructing the finite fields $F_{2^m}$ and in computing in these fields. Finite fields are generally constructed from primitive polynomials [3].

Definitions

The minimal polynomial of an element $\beta$ over a finite field $F$ is the monic polynomial with coefficients in $F$ of smallest degree that has $\beta$ as a root.

Proposition

  1. The ring $K[x]/(P)$ is a field if and only if the polynomial $P(x)$ is irreducible over the field $K$.

  2. If $P(x)$ is irreducible of degree $m$ and $K$ is a finite field of $q$ elements, then $K[x]/(P)$ is a field of $q^m$ elements.

This proposition gives us a way to build a finite field: take a polynomial $P$ irreducible over a field $K$ and form the quotient $K[x]/(P)$.

Theorem (the primitive element)

If $K$ is a finite field of order $q$, then the multiplicative group $K^*$ is cyclic, generated by an element $\alpha$, and we write $K^* = \{\alpha^i : i = 1, \dots, q-1\}$. Any generator of this group is called a primitive element of $K$.

Definition (primitive polynomial)

We say that a polynomial $P \in F_2[x]$ of degree $m$ is primitive if it is the minimal polynomial of a generator of $F_{2^m}^*$.

Lemma

Let $F_2[x]_m = \{Q(x) \in F_2[x] : \deg Q(x) \le m-1\}$, let $P(x) \in F_2[x]$ be primitive of degree $m$, and let $\alpha$ be a root of $P(x)$. Then we have: $F_2^m \simeq F_2[x]_m \simeq F_2[x]/(P(x)) \simeq F_{2^m} = \{0, 1, \alpha, \dots, \alpha^{2^m - 2}\}$.

It follows from this lemma that we can represent the nonzero elements of the finite field $F_{2^m}$ by nonzero vectors of $F_2^m$, and that the $\alpha^i$ have the representatives $x^i \bmod P(x)$, so that $\alpha^i = x^i \bmod P(x)$. In what follows we denote by $\alpha$ a primitive element of $F_{2^m}$.
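To make this construction concrete, here is a minimal sketch in Python, assuming the primitive polynomial $P(x) = x^4 + x + 1$ for $F_{2^4}$ (a choice made only for illustration). Elements are represented as integers whose bits are the coefficients of $x^0, x^1, \dots$, and the powers of $\alpha$ are checked to enumerate all nonzero elements.

```python
M = 4
P = 0b10011  # P(x) = x^4 + x + 1, assumed primitive (true for this polynomial)

def gf_mult(a: int, b: int) -> int:
    """Multiply two elements of F_{2^m} represented as bit vectors, reducing modulo P(x)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in characteristic 2 is XOR
        b >>= 1
        a <<= 1
        if a & (1 << M):         # degree reached m: reduce modulo P(x)
            a ^= P
    return result

alpha = 0b10                     # alpha is the class of x, a root of P(x)
powers, elem = [], 1
for _ in range(2**M - 1):
    powers.append(elem)
    elem = gf_mult(elem, alpha)

# Since P is primitive, the powers of alpha enumerate all 2^m - 1 nonzero elements.
assert sorted(powers) == list(range(1, 2**M))
print([format(p, "04b") for p in powers])
```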

2.2 Principle of error-correcting codes

In order to transmit a message, it must be encoded; this consists in temporarily giving it a certain form. The encoding mode depends on the means of transmission, which can be disturbed by noise, hence the need for a coding that allows the receiver to recover the initial message even if it has been altered. Such coding is called channel coding. The principle of error-correcting codes is to add to the message to be transmitted additional information, called redundant or control information, so that transmission errors can be detected and corrected. This operation is called encoding and its result is a code word; to each message is therefore associated a code word of length greater than that of the message.

The code is the set of code words thus obtained. We assume that all messages are words of the same length $k > 0$, written using an alphabet $F$ of $q$ elements. Each message $(x_0, x_1, \dots, x_{k-1})$ is an element of the set $F^k$ (the message space); we thus have $q^k$ possible messages. We assume that all the code words have the same length $n > k$. Encoding $m$ messages of length $k$ ($m \le q^k$) consists in choosing an integer $n > k$ and associating with each message of $F^k$ a word of $F^n$ (injectively). The encoding introduces a redundancy equal to $n - k$. Decoding consists, on receiving a word $x$ of $F^n$, in determining whether $x$ is a code word and, if not, correcting it thanks to the redundancy. This is done using the Hamming distance.

Definition (Hamming distance)

Let $x = (x_0, x_1, \dots, x_{n-1})$ and $y = (y_0, y_1, \dots, y_{n-1})$ be elements of $F^n$. We call the Hamming distance between the words $x$ and $y$, denoted $d_H(x, y) = d(x, y)$, the number of indices $i \in \{0, 1, \dots, n-1\}$ such that $x_i \ne y_i$. We call the Hamming weight of a word $x$ the number of nonzero components of $x$, denoted $w(x) = d(x, 0)$.
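These two definitions translate directly into code; the following small Python helpers are a sketch for words given as sequences of symbols over any alphabet.

```python
def hamming_distance(x, y):
    """d(x, y): number of indices i such that x_i != y_i."""
    assert len(x) == len(y)
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def hamming_weight(x):
    """w(x) = d(x, 0): number of nonzero components of x."""
    return sum(1 for xi in x if xi != 0)

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 1, 1, 0, 0, 0, 1]
print(hamming_distance(x, y))  # 2
print(hamming_weight(x))       # 4
```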

Definitions

We call the minimum distance of a code $C$ the integer $d = \min\{d(m, m') : m \in C,\ m' \in C,\ m \ne m'\}$. We call the weight of a word $x$ of the code $C$ the integer $w(x) = d(x, 0)$.

Proposition (correction capacity)

Let $C$ be a code of minimum distance $d$, and $x \in F^n$ a received message affected by $r$ errors, with $r \ge 1$.

  1. If $2r < d$, that is, $r \le \lfloor \frac{d-1}{2} \rfloor$, the code $C$ corrects the $r$ errors.

  2. If $\lfloor \frac{d-1}{2} \rfloor < r \le \lfloor \frac{d}{2} \rfloor$, the code $C$ detects the existence of the $r$ errors but cannot always correct them.

  3. If $\lfloor \frac{d}{2} \rfloor < r \le d - 1$, the code $C$ detects the existence of errors but risks making an erroneous correction.

The integer $t = \lfloor \frac{d-1}{2} \rfloor$ is called the correction capacity of the code; we also say that $C$ is a $t$-corrector code.

Proof

Let $m$ be the code word transmitted and $x$ the message received, affected by $r$ errors; then $d(m, x) = r$.

  1. We show that the code word $m$ is the only code word such that $d(m, x) \le r$.

Otherwise there exists $m' \in C$ such that $d(m', x) \le r$; we then have $d(m, m') \le d(m, x) + d(x, m') \le 2r < d$, hence $m = m'$.

  2. There is no code word $m' \in C$ such that $d(x, m') < d(m, x) = r$, but the code word $m$ is not necessarily the only one satisfying $d(m, x) = r$. Indeed, let $m$ and $m'$ be two code words differing in exactly $2r$ positions; if the received message $x$ agrees with $m'$ on $r$ of these positions and with $m$ on the other $r$, we have $d(x, m) = d(x, m') = r$.

  3. We know that there is an error because $x \notin C$, but there may be a code word $m' \in C$ such that $d(m', x) < d(m, x) = r$.

The most widely used codes are linear codes, which we discuss in the next section.

2.3 Linear codes

Definitions

A linear code $C$ of length $n$ and dimension $k$ over the finite field $F_q$ is a vector subspace of $F_q^n$ of dimension $k$. We denote it $[n, k, d]_q$, with $d$ its minimum distance.

Linear codes are codes in which each code word $y$ is obtained by a linear transformation of the components of the initial (information) word $x$.

A linear code is characterized by its generator matrix $G$: we have $C = \{y = xG : x \in F_q^k\}$.

Let $H$ be an $(n-k) \times n$ matrix with coefficients in $F_q$. $H$ is called the parity check matrix of $C$ if "$x \in C \Leftrightarrow Hx^t = 0$".

$F_q^k$: the message space.

The systematic code

The matrix $G$ defines a bijection $F_q^k \to C$, $x \mapsto xG$, by which the $q^k$ messages of length $k$ are represented by code words of length $n$.

The generator matrix $G$ of a code $C$ is not unique; $G$ can be transformed into $G' = (I_k \mid A)$, with $I_k$ the identity matrix of order $k$ and $A$ a matrix of $k$ rows and $n - k$ columns.

$G$ and $G'$ generate the same subspace $C$; $G'$ is called the canonical generator matrix, and if the generator matrix of a code is of the form $G = (I_k \mid A)$, the code is said to be systematic.

Theorem

Let $C$ be an $[n, k]_q$ linear code.

  1. If $G$ is a generator matrix of $C$ and $H$ a parity check matrix of $C$, then $GH^t = 0$.

  2. If $G$ is a $k \times n$ matrix of rank $k$ and $H$ is an $(n-k) \times n$ matrix of rank $n-k$ such that $GH^t = 0$, then we have:

$H$ is a parity check matrix of $C$ if and only if $G$ is a generator matrix of $C$.

Proof

  1. We know that $Hx^t = 0$ for all $x \in C$; in particular we have $G_iH^t = 0$ for all $i = 1, \dots, k$, where $G_i$ is the $i$-th row of $G$. It follows that $GH^t = 0$.

  2. ($\Rightarrow$) Since $GH^t = 0$, we have $G_iH^t = 0$ for all $i = 1, \dots, k$. And since $H$ is a parity check matrix of $C$, the $G_i$ belong to $C$. As $\mathrm{rank}(G) = k$, the rows $\{G_i\}_{i=1,\dots,k}$ form a basis of $C$. It follows that $G$ is a generator matrix of $C$.

($\Leftarrow$) We have $y \in C$ if and only if there exists $x \in F_q^k$ such that $y = xG$. Then $y \in C$ if and only if $yH^t = xGH^t = 0$. Hence $H$ is a parity check matrix of $C$.

In the case of systematic code, we have the following corollary.

Corollary

Let $C$ be an $[n, k]_q$ linear code.

  1. If $G = (I_k \mid A)$ is a canonical generator matrix of $C$, then $H = (-A^t \mid I_{n-k})$ is a parity check matrix of $C$.

  2. If $H = (B \mid I_{n-k})$ is a parity check matrix of $C$, then $G = (I_k \mid -B^t)$ is a generator matrix of $C$.

Proof

By applying the preceding theorem

  1. we have $GH^t = (I_k \mid A)(-A^t \mid I_{n-k})^t = -A + A = 0$; so if $G$ is a generator matrix of $C$, then $H$ is a parity check matrix of $C$.

  2. we have $GH^t = (I_k \mid -B^t)(B \mid I_{n-k})^t = B^t - B^t = 0$; so if $H$ is a parity check matrix of $C$, then $G$ is a generator matrix of $C$.
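Over $F_2$ the signs disappear ($-A^t = A^t$), and the corollary can be checked directly. The sketch below, using numpy, builds $H = (A^t \mid I_{n-k})$ from the systematic generator matrix $G = (I_k \mid A)$ of the $[7, 4]$ Hamming code (the code reused in the example of Section 3) and verifies that $GH^t = 0$.

```python
import numpy as np

# The A part of the systematic generator matrix G = (I_k | A) of the [7, 4] Hamming code.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [1, 1, 1]])
k, r = A.shape
G = np.hstack([np.eye(k, dtype=int), A])      # G = (I_k | A)
H = np.hstack([A.T, np.eye(r, dtype=int)])    # H = (A^t | I_{n-k}), since -A^t = A^t over F_2

# Every row of G (hence every code word) has a zero syndrome: G H^t = 0 over F_2.
print((G @ H.T) % 2)
```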

Encoding and decoding

Encoding is performed by applying the generator matrix. Decoding consists in applying the parity check matrix to the received word: if the result is 0, the word is valid; otherwise, we look for the errors and correct them. $Hx^t$ is called the syndrome. Suppose the word $x$ is sent through a noisy channel and the word received is $y$; then the error vector is $e = y - x$.

Given $y$, the decoder must decide which code word $x$ has been transmitted (i.e., which error vector occurred). For a vector $u$ and a code $C$, we call a coset of $C$ the set $u + C = \{u + c : c \in C\}$. A representative of minimum weight of a coset of $C$ is called a leader of this coset.

Theorem

Let $C$ be an $[n, k, d]_q$ linear code. Then:

  1. $u$ and $v$ are in the same coset of $C$ if and only if $u - v \in C$.

  2. Any vector of $F_q^n$ is in a coset of $C$.

  3. Two cosets are either disjoint or identical.

Proof

  1. If $u, v \in x + C$, then there exist $y, z \in C$ such that $u = x + y$ and $v = x + z$, so $u - v = y - z \in C$, because $C$ is a vector subspace of $F_q^n$.

Conversely, if $u - v \in C$, there exists $x \in C$ such that $u - v = x$; then $u = v + x \in v + C$, and we have $v = v + 0 \in v + C$.

  2. Let $a \in F_q^n$; since $0 \in C$, we have $a = a + 0 \in a + C$.

  3. Suppose that $(a + C) \cap (b + C) \ne \emptyset$. Then there exists $v \in F_q^n$ belonging to both $a + C$ and $b + C$, so there exist $x, y \in C$ such that $v = a + x = b + y$, hence $b = a + x - y$ and $a = b + y - x$. For any $b + c \in b + C$ we have $b + c = a + (x - y + c) \in a + C$ (so $b + C \subseteq a + C$); for any $a + c \in a + C$ we have $a + c = b + (y - x + c) \in b + C$ (so $a + C \subseteq b + C$); hence $a + C = b + C$.

Principle

We construct the standard array of $C$, which is an array of $q^{n-k}$ rows and $q^k$ columns. It contains all the vectors of $F_q^n$; its first row consists of the words of $C$, with the vector 0 on the left. The other rows represent the cosets $u_i + C$, with the coset leader $u_i$ on the left. The procedure is as follows:

  1. We list the words of $C$, starting with 0, on the first row.

  2. We choose a vector $u_1$ of minimum weight that does not belong to the first row and we list on the second row the elements of $u_1 + C$, entering the coset leader $u_1$ below 0 and, below each element $x \in C$, the element $u_1 + x$.

  3. We choose $u_2$ in the same way and repeat the same operation.

  4. We iterate this process until all the cosets are listed and every vector of $F_q^n$ appears exactly once.

When the word $y$ is received, we look for its position in the standard array. The decoder then decides that the error vector $e$ corresponds to the coset leader located in the first column of the row of $y$, and decodes $y$ as $x = y - e$, that is, as the code word of the first row in the same column as $y$.

Remark

The standard array provides nearest-neighbor decoding. Note that this process is too slow and too memory-expensive for large codes; in practice each code family has, by its structure, a dedicated decoding algorithm.
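The following brute-force sketch follows the four steps above for a toy binary $[5, 2]$ code (an illustrative choice); it is only practical for very small codes, which is exactly the limitation pointed out in the remark.

```python
from itertools import product

G = [(1, 0, 1, 1, 0), (0, 1, 0, 1, 1)]       # generator matrix of a toy [5, 2] binary code
n, k = 5, 2

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

# Step 1: list the code words, starting with 0.
codewords = []
for msg in product((0, 1), repeat=k):
    word = (0,) * n
    for bit, row in zip(msg, G):
        if bit:
            word = add(word, row)
    codewords.append(word)

# Steps 2-4: repeatedly pick a minimum-weight vector not yet listed as a coset leader.
remaining = set(product((0, 1), repeat=n)) - set(codewords)
standard_array = [codewords]
while remaining:
    leader = min(remaining, key=sum)
    coset = [add(leader, c) for c in codewords]
    standard_array.append(coset)
    remaining -= set(coset)

def decode(y):
    """Nearest-neighbour decoding: add the coset leader of y's row (subtraction over F_2)."""
    for row in standard_array:
        if y in row:
            return add(y, row[0])            # row[0] is the coset leader
    return None

print(decode((1, 0, 1, 1, 1)))               # (1, 0, 1, 1, 0)
```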

2.4 The hamming code

A Hamming code with redundancy $r \ge 2$ is a linear code $[2^r - 1,\ 2^r - 1 - r]_2$ whose parity check matrix $H$ is a matrix of $r$ rows and $2^r - 1$ columns corresponding to the set of all nonzero vectors of $F_2^r$.

Theorem

The minimum distance of the Hamming $[2^r - 1,\ 2^r - 1 - r]_2$ code is $d = 3$ (it therefore corrects a single error).

Proof

This code does not contain any element of weight 1 or 2; otherwise a column of $H$ would be zero or two columns of $H$ would be identical.

There exists $x \in C$ such that $w(x) = 3$: indeed, if the columns of $H$ are the binary representations of $1, 2, 3, \dots$, the first three columns are $(0, \dots, 0, 0, 1)^t$, $(0, \dots, 0, 1, 0)^t$ and $(0, \dots, 0, 1, 1)^t$, whose sum is zero; the vector $x = (1, 1, 1, 0, \dots, 0)$ therefore has weight $w(x) = 3$ and belongs to $C$ because $Hx^t = 0$.

Decoding

The syndrome of a vector $x$ of which only the $j$-th component is nonzero is none other than the transpose of the $j$-th column of $H$. If the columns of $H$ are ordered in increasing order as binary numbers, the $j$-th column corresponds to the binary representation of $j$, hence the following decoding algorithm:

Let $y$ be a received message; we compute $Hy^t$. If $Hy^t = 0$, then $y$ corresponds to the message transmitted. If $Hy^t \ne 0$ and assuming there is only one error, $Hy^t$ directly gives the position of the error, written in binary in the form $b_{r-1} \dots b_1 b_0$. We can then correct $y = (y_1, \dots, y_n)$ as $x = y + e_j$ for $j = \sum_{i=0}^{r-1} b_i 2^i$, where $e_j$ is the vector of which only the $j$-th coordinate is nonzero.
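A sketch of this decoding rule for $r = 3$ (the $[7, 4]$ Hamming code): the columns of $H$ are taken as the binary representations of $1, \dots, 7$, so the syndrome read as a binary number is directly the position of the error.

```python
import numpy as np

r = 3
n = 2**r - 1
# Column j of H (j = 1..7) is the binary representation of j, most significant bit first.
H = np.array([[(j >> (r - 1 - i)) & 1 for j in range(1, n + 1)] for i in range(r)])

def decode(y):
    """Correct at most one error in the received word y of length 7."""
    s = (H @ y) % 2                              # syndrome H y^t
    j = int("".join(str(b) for b in s), 2)       # syndrome read as a binary number
    if j:                                        # nonzero syndrome: flip the bit at position j
        y = y.copy()
        y[j - 1] ^= 1
    return y

received = np.zeros(n, dtype=int)                # the all-zero code word ...
received[4] = 1                                  # ... hit by a single error at position 5
print(decode(received))                          # recovers the all-zero code word
```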

2.5 The Reed-Solomon codes

Let $n = q - 1$ with $q = 2^m$, and let $F_q[x]_k$ be the set of polynomials of degree strictly less than $k$ over $F_{2^m}$. Let us build a code of length $n$ and dimension $k$. Let $L = (\alpha_1, \alpha_2, \dots, \alpha_n)$ be a vector formed of the distinct elements of $F_{2^m}^* = \{\alpha^i : i = 1, \dots, n\}$, with $\alpha$ a primitive element of $F_{2^m}$. Each word of the code is the evaluation of a polynomial $f \in F_q[x]_k$ on $L$; we thus have a code of length $n$ and dimension $k$, with generator matrix

$$G = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \vdots & \vdots & & \vdots \\ \alpha_1^{k-1} & \alpha_2^{k-1} & \cdots & \alpha_n^{k-1} \end{pmatrix}.$$

By its structure, this code has a minimum distance of at least $n - k + 1$, because two distinct polynomials of degree less than $k$ cannot agree on more than $k - 1$ positions. This distance is exactly $n - k + 1$, since the evaluation of a polynomial of the form $\prod_{i=1}^{k-1}(x - \alpha_i)$ has weight $n - k + 1$. So we have a code over $F_{2^m}$ of the form $[n, k, n-k+1]_q$, which can have both a good transmission rate and a good correction capacity.
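A sketch of this construction, assuming $F_{2^4}$ built with the primitive polynomial $x^4 + x + 1$ and an arbitrary dimension $k = 5$ (both illustrative choices): the generator matrix has rows $(\alpha_j^i)_j$, and a code word is the evaluation of a polynomial of degree less than $k$ on the support $L$.

```python
M, P = 4, 0b10011                              # F_{2^4} with P(x) = x^4 + x + 1

def gf_mult(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= P
    return result

def gf_pow(a, e):
    result = 1
    for _ in range(e):
        result = gf_mult(result, a)
    return result

n = 2**M - 1                                   # n = q - 1 = 15
k = 5                                          # dimension (illustrative choice)
alpha = 0b10                                   # a primitive element
L = [gf_pow(alpha, i) for i in range(n)]       # support: all nonzero field elements

# G[i][j] = alpha_j^i: the Vandermonde-type generator matrix of the text.
G = [[gf_pow(a, i) for a in L] for i in range(k)]

# A code word is the evaluation of a polynomial f of degree < k on L.
f = [1, 0, 3, 0, 7]                            # coefficients f_0, ..., f_{k-1} (arbitrary message)
codeword = [0] * n
for i, coeff in enumerate(f):
    for j in range(n):
        codeword[j] ^= gf_mult(coeff, G[i][j])
print(codeword)
```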

Remark

Reed-Solomon codes are a special case of a slightly more general class, called generalized Reed-Solomon ($GRS$) codes, whose definition is as follows.

Definition

Let $v = (v_1, v_2, \dots, v_n)$ be a vector of length $n$ over $F_{2^m}^*$ and $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)$ a vector of length $n$ over $F_{2^m}$, with the $\alpha_i$ pairwise distinct.

The set of codes with a generator matrix $G$ of the form
$$G = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \\ v_1\alpha_1 & v_2\alpha_2 & \cdots & v_n\alpha_n \\ \vdots & \vdots & & \vdots \\ v_1\alpha_1^{k-1} & v_2\alpha_2^{k-1} & \cdots & v_n\alpha_n^{k-1} \end{pmatrix}$$
is called the family of generalized Reed-Solomon codes.

2.6 The classical Goppa codes

Definition

Let $L = (\alpha_1, \alpha_2, \dots, \alpha_n)$ be a sequence of $n$ distinct elements of $F_{2^m}$ and $g(z) \in F_{2^m}[z]$ a monic polynomial of degree $r$, irreducible in $F_{2^m}[z]$. The irreducible binary Goppa code with support $L$ (generator vector) and generator polynomial $g$, denoted $\Gamma(L, g)$, is the set of words $a = (a_1, \dots, a_n) \in F_2^n$ such that one of the following equivalent characterizations is satisfied:

  1. $R_a(z) = \sum_{i=1}^{n}\frac{a_i}{z - \alpha_i} = 0 \bmod g(z)$.

  2. $Ha^t = 0$, with the parity check matrix
$$H = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \vdots & \vdots & & \vdots \\ \alpha_1^{r-1} & \alpha_2^{r-1} & \cdots & \alpha_n^{r-1} \end{pmatrix}\begin{pmatrix} g(\alpha_1)^{-1} & & \\ & \ddots & \\ & & g(\alpha_n)^{-1} \end{pmatrix}.$$

  3. $g(z)$ divides $\frac{d\sigma_a(z)}{dz}$, with $\sigma_a(z) = \prod_{i=1}^{n}(z - \alpha_i)^{a_i}$ the locator polynomial.

The construction of a Goppa code:

Goppa’s code is a linear code on the field F2, its construction requires the use of an extension F2m. Each element of the matrix His then broken down into melements of F2placed in columns, using a projection of F2min F2m; we go from a size matrix r×non F2mto a matrix of size rm×non F2so it is a length code n=Land dimension k=nmrand has a minimum distance at least equal to d=r+1. Indeed the parity check matrix His written as the product of a Vandermonde matrix and an invertible matrix therefore all under a square matrix r×rof His invertible, then there are no code words with a weight less than or equal to r.

The decoding of a Goppa code:

Several techniques exist for decoding Goppa codes, but they all work on the same principle. Let $c' = c + e$ with $w(e) < r/2$. We start by computing the syndrome $R_{c'}(z)$ over $F_{2^m}$; from this syndrome we write a key equation, and we finish the decoding by solving the key equation to find $e$.

If $R_a(z) = 0$, the word belongs to the code.

The key equation

Let $\sigma_e(z) = \prod_{i=1}^{n}(z - \alpha_i)^{e_i}$, of degree $< r/2$. We introduce the polynomial $w_e(z) = \sigma_e(z)R_e(z) \bmod g(z)$, called the evaluator polynomial:

$$\sigma_e(z)R_e(z) = \sum_{i=1}^{n}\frac{e_i}{z - \alpha_i}\prod_{j=1}^{n}(z - \alpha_j)^{e_j} \bmod g(z) = \sum_{i=1}^{n}e_i\prod_{\substack{j=1 \\ j \ne i}}^{n}(z - \alpha_j)^{e_j} \bmod g(z).$$

We can solve the key equation in two different ways: the Berlekamp-Massey algorithm or the extended Euclidean algorithm. The latter has the advantage of being easier to present. Indeed, we seek $w_e$ and $\sigma_e$ of degree $< r/2$ such that $w_e(z) = \sigma_e(z)R_e(z) \bmod g(z) = \sigma_e(z)R_e(z) + k(z)g(z)$. If we compute the $\gcd$ of $g$ and $R_e$ with the extended Euclidean algorithm, we obtain at each step polynomials $u_i, v_i, r_i$ satisfying $R_eu_i + gv_i = r_i$. As the steps proceed, the degree of $u_i$ increases while the degree of $r_i$ decreases. There is therefore a step $i_0$ at which, if we stop the algorithm, we find a solution of the key equation: $\sigma_e = u_{i_0}$ and $w_e = r_{i_0}$, up to a scalar coefficient.

3. Encryption/decryption systems

3.1 The basic system (McEliece)

We start by generating an $[n, k, d]_q$ linear code from a well-chosen family, with generator matrix $G$. We then mask this matrix to make it indistinguishable from a random matrix: we need a permutation matrix $P$ of size $n \times n$ (having a single 1 in each row and each column and 0 elsewhere) and an invertible scrambling matrix $S$ of size $k \times k$. The public key is $G' = SGP$, which is intended to be indistinguishable from a random matrix (the definition of a random matrix follows from the definition of a random code introduced in Section 4). Knowledge of $S$, $P$ and $G$ reveals the structure of the design code and thus provides the decoding algorithm.

3.1.1 The algorithms of the McEliece system

We cite the component algorithms of the McEliece cryptosystem [4].

The generation of keys

Input

A family of $[n, k, d]_q$ linear codes chosen for the design.

Procedure

Choose a generator matrix $G$, in systematic form, of the design code.

Choose an invertible matrix $S$ of size $k \times k$ with coefficients in $F_q$.

Choose a permutation matrix $P$ of size $n \times n$.

Compute $G' = SGP$.

Output

The public key $G'$.

The private key $(S, G, P)$.

Encryption of the plaintext

Input

The public key $G'$.

The plaintext $x \in F_q^k$.

Procedure

Choose a vector $e \in F_q^n$ (an error) of weight less than or equal to the correction capacity of the design code.

Compute $y = xG' + e$.

Output: the cipher text $y$.

Decryption of the cipher text

Input: the cipher text $y$, the private key $(S, G, P)$.

Procedure

Compute $u = yP^{-1}$.

Compute $x' = f_G(u)$, with $f_G$ the decoding algorithm of the design code whose generator matrix is $G$.

Compute $x = x'S^{-1}$.

Output: the plaintext $x$.

Remark

The use of a binary Goppa code as the secret key was proposed by McEliece in the original version of the system, with the following parameters: $m = 10$, $n = 2^m = 1024$, $r = 50$, $k = n - mr = 524$. So far this choice appears to remain secure, but it is little used in practice because the size of its public key is very large.

Example

We use the $[7, 4]$ Hamming code with generator matrix
$$G = \begin{pmatrix} 1&0&0&0&1&0&1 \\ 0&1&0&0&0&1&1 \\ 0&0&1&0&1&1&0 \\ 0&0&0&1&1&1&1 \end{pmatrix}$$
and parity check matrix
$$H = \begin{pmatrix} 1&0&1&1&1&0&0 \\ 0&1&1&1&0&1&0 \\ 1&1&0&1&0&0&1 \end{pmatrix}.$$

The generation of keys

Let the private key be $(S, G, P)$, with

$$S = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&1 \\ 1&0&1&1 \\ 0&0&0&1 \end{pmatrix} = S^{-1}, \qquad P = \begin{pmatrix} 0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0 \\ 0&0&1&0&0&0&0 \\ 1&0&0&0&0&0&0 \\ 0&0&0&0&0&0&1 \\ 0&0&0&0&1&0&0 \\ 0&0&0&0&0&1&0 \end{pmatrix}, \qquad P^{-1} = P^t.$$

The public key:
$$G' = SGP = \begin{pmatrix} 0&1&0&0&0&1&1 \\ 1&0&0&1&0&0&1 \\ 1&1&1&0&0&0&1 \\ 1&0&0&0&1&1&1 \end{pmatrix}.$$

Encryption

Let the plaintext be $x = (0,1,1,0)$ and the error vector $e = (0,0,1,0,0,0,0)$.

The cipher text is $y = xG' + e = (0,1,1,1,0,0,0) + (0,0,1,0,0,0,0) = (0,1,0,1,0,0,0)$.

Decryption

We decipher the received text $y = (0,1,0,1,0,0,0)$. We have $y = xG' + e = xSGP + e$, so $yP^{-1} = xSG + eP^{-1} = (1,1,0,0,0,0,0) = y'$. The syndrome $Hy'^t$ equals the third column of $H$, so the error is in the third position, hence $u = (1,1,1,0,0,0,0) = xSG$. Since $G$ is the generator matrix of a systematic code, $xS = (1,1,1,0)$, and then $x = (1,1,1,0)S^{-1} = (0,1,1,0)$, which is the plaintext sought.
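The toy example above can be replayed with a few lines of numpy, as a sketch (arithmetic over $F_2$; the single error is located by matching the syndrome against the columns of $H$, and $S$ happens to be its own inverse in this example):

```python
import numpy as np

G = np.array([[1,0,0,0,1,0,1],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[1,0,1,1,1,0,0],
              [0,1,1,1,0,1,0],
              [1,1,0,1,0,0,1]])
S = np.array([[1,0,0,0],
              [0,1,0,1],
              [1,0,1,1],
              [0,0,0,1]])
P = np.eye(7, dtype=int)[[1,3,2,0,6,4,5]]        # permutation matrix: row i is e_{pi(i)}

G_pub = (S @ G @ P) % 2                          # public key G' = SGP

x = np.array([0,1,1,0])                          # plaintext
e = np.array([0,0,1,0,0,0,0])                    # error of weight 1
y = (x @ G_pub + e) % 2                          # cipher text

u = (y @ P.T) % 2                                # P^{-1} = P^t for a permutation matrix
s = (H @ u) % 2                                  # syndrome in the Hamming code
for j in range(7):                               # the syndrome equals the erroneous column of H
    if s.any() and np.array_equal(s, H[:, j]):
        u[j] ^= 1                                # correct the single error
xS = u[:4]                                       # G is systematic: first k coordinates give xS
assert np.array_equal((S @ S) % 2, np.eye(4, dtype=int))   # here S is an involution, S^{-1} = S
x_rec = (xS @ S) % 2
print(y, x_rec)                                  # [0 1 0 1 0 0 0] [0 1 1 0]
```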

3.2 The Niederreiter variant

Let $C$ be a linear $t$-corrector code of length $n$ and dimension $k$, and let $H$ be a parity check matrix of $C$ of size $(n-k) \times n$. We randomly choose an invertible matrix $S$ and a permutation matrix $P$, and we compute $H' = SHP$. The public key is $H'$ and the private key is $(S, H, P)$, together with the knowledge of a syndrome decoding algorithm for $C$. Let $x$ be a plaintext of length $n$ and weight $t$; we compute the cipher text $y = H'x^t$. The recipient, who knows the secret key, can compute $S^{-1}y = HPx^t$. Using the syndrome decoding algorithm of $C$, he can find $Px^t$, and by applying $P^{-1}$ the plaintext $x$ is recovered.

The algorithms of the Niederreiter cryptosystem [5]

The generation of keys

Input

An $[n, k, d]_q$ linear code chosen for the design, for which a syndrome decoding algorithm is known.

Procedure

Choose a parity check matrix $H$ of the design code.

Choose an invertible matrix $S$ of size $(n-k) \times (n-k)$ with coefficients in $F_q$.

Choose a permutation matrix $P$ of size $n \times n$.

Compute $H' = SHP$.

Output

The public key $H'$.

The private key $(S, H, P)$.

Encryption

Input

The public key $H'$.

The plaintext $x \in F_q^n$, of weight less than or equal to the correction capacity of the design code.

Procedure

Compute $y = H'x^t$.

Output

The cipher text $y$.

Decryption

Input

The private key $(S, H, P)$.

The cipher text $y$.

Procedure

Compute $y' = S^{-1}y$.

Compute $x' = f_H(y')$, with $f_H$ the syndrome decoding algorithm of the code whose parity check matrix is $H$.

Recover $x$ from $x'$ by applying the inverse permutation: $x^t = P^{-1}x'^t$.

Output

The plaintext x.
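A sketch of the Niederreiter variant on the same $[7, 4]$ Hamming code, with an illustrative scrambler $S$ and permutation $P$ (assumptions made for the example): the plaintext is a word of weight at most 1 and the cipher text is its syndrome under $H' = SHP$.

```python
import numpy as np

H = np.array([[1,0,1,1,1,0,0],
              [0,1,1,1,0,1,0],
              [1,1,0,1,0,0,1]])
S = np.array([[1,1,0],
              [0,1,0],
              [1,0,1]])                          # invertible 3 x 3 scrambler (illustrative)
S_inv = np.array([[1,1,0],
                  [0,1,0],
                  [1,1,1]])                      # its inverse over F_2 (checked below)
assert np.array_equal((S @ S_inv) % 2, np.eye(3, dtype=int))
P = np.eye(7, dtype=int)[[2,0,3,1,6,5,4]]        # permutation matrix (illustrative)

H_pub = (S @ H @ P) % 2                          # public key H' = SHP

x = np.array([0,0,0,0,1,0,0])                    # plaintext of weight 1
y = (H_pub @ x) % 2                              # cipher text: the syndrome H' x^t

s = (S_inv @ y) % 2                              # = H P x^t, the syndrome of the permuted plaintext
z = np.zeros(7, dtype=int)
for j in range(7):                               # weight-1 syndrome decoding in the Hamming code
    if np.array_equal(s, H[:, j]):
        z[j] = 1
x_rec = (P.T @ z) % 2                            # undo the permutation: x^t = P^{-1} z^t = P^t z^t
print(y, x_rec)                                  # recovers x
```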

Remark

Reed-Solomon codes were originally proposed by Niederreiter as a family of codes that could be used in his cryptosystem. In 1992, Sidelnikov and Shestakov showed that this cryptosystem is easy to attack [2].

4. The security of cryptosystems based on correcting codes

The security of cryptosystems based on error-correcting codes rests on the problem of distinguishing the (hidden) design code from a random code. We first give the following definitions:

  • Code equivalence

Two codes are said to be equivalent if their generator matrices (respectively, their parity check matrices) are deduced from each other by a permutation of columns.

  • Random code

A random code is a linear code whose generator matrix has $k$ linearly independent rows generated at random (equivalently, whose parity check matrix has $n - k$ linearly independent rows generated at random).

The main security parameter of a McEliece cryptosystem and its variants is therefore the structure of the code family chosen for the design, for which it should be difficult to find an equivalent code. Since the robustness of such a system lies in the difficulty of decoding and in the hidden structure of the design code, an attacker can attempt to break the system by two methods: a decoding attack or a structural attack. The resistance of the system to these two attack methods depends on the family of codes chosen for the design; the choice of the code family is the essential point in the design of the cryptosystem.

4.1 Decoding attack

The attacker directly attempts to decode the cipher text in the code $C$ (with generator matrix $G'$ or parity check matrix $H'$ given by the public key); the principle is to decode the intercepted cipher text relative to the public code using general-purpose decoding algorithms. We state two decoding problems for a random code:

Problem 1

Given a random binary matrix $G$ of size $k \times n$, generating a code $C$ of dimension $k$, a random word $x \in F_2^n$ and a positive integer $t$, determine whether there is an error word $e \in F_2^n$ such that $w(e) \le t$ and $x + e \in C$.

Problem 2

Given a random binary parity check matrix $H$ of size $(n-k) \times n$ of a code $C$ of dimension $k$, a random vector $s \in F_2^{n-k}$ and a positive integer $t$, determine whether there is a word $x \in F_2^n$ such that $w(x) \le t$ and $Hx^t = s$.

Decoding in a random code underlies the following attacks:

  • Information set decoding algorithm

The principle is based on two steps: the selection of an information set and the search for a low-weight word. Several variants propose to optimize one or the other of these two steps.

Definition

Let $C$ be a linear code of length $n$ with generator matrix $G$. An information set $I$ is a subset of $\{1, 2, \dots, n\}$ of size $k$ such that $G_I$, the $k \times k$ matrix formed from the columns of $G$ indexed by the elements of $I$, is invertible.

Remark

The matrix $(G_I \mid G_J)$, with $I \cup J = \{1, 2, \dots, n\}$, is equivalent to $G$.

Algorithm

Input

$G$: a generator matrix of a code $C$.

$t$: a positive integer.

$y$: a word of $F_2^n$ such that $d(y, C) \le t$.

Output

The pair $(x, e)$ such that $y = xG + e$ with $w(e) \le t$.

Procedure

Randomly draw an information set $I$ of the code $C$ (and let $J$ be such that $I \cup J = \{1, 2, \dots, n\}$).

Compute $R = G_I^{-1}G_J$.

Write $y = (y_I \mid y_J)$.

Compute $e_J = y_J - y_IR$.

Repeat the previous operations until an $e_J$ with $w(e_J) \le t$ is found.

Return $e = (0 \mid e_J)$.

Determine the word $x$ such that $y - e = xG$.

Proof

We have $y = xG + e$ and $y = (y_I \mid y_J) = x(G_I \mid G_J) + (e_I \mid e_J)$. Hence $e_I = y_I - xG_I$ and $e_J = y_J - xG_J$.

If the information set $I$ does not contain an error position, then $e_I = 0$ and, since $G_I$ is invertible, we obtain $x = y_IG_I^{-1}$ and $e_J = y_J - y_IG_I^{-1}G_J$. Then $e = (0 \mid y_J - y_IG_I^{-1}G_J)$ is the solution sought.

Remark

There are $C_n^k$ ways to choose the $k = |I|$ positions in $\{1, 2, \dots, n\}$ ($|I|$ is the cardinality of $I$), and $C_{n-t}^k$ ways to choose these $k$ positions among the $n - t$ positions where $e_i = 0$. So the probability of obtaining an information set $I$ with $e_I = 0$ is $p = \frac{C_{n-t}^k}{C_n^k}$, and the average number of iterations is $\frac{1}{p}$.

Example

Let us try to attack the following system by this method. We have
$$G = \begin{pmatrix} 1&1&1&0&0&1&0&1 \\ 0&1&0&1&1&1&1&0 \end{pmatrix}.$$

The cipher text is $y = (1,0,1,0,1,0,1,1)$ and $t = 1$.

We look for $(m, e)$ such that $mG + e = y$.

Let $I = \{1, 5\} \subset \{1, 2, \dots, 8\}$; then $G_I = \begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix} = G_I^{-1}$ and $G_J = \begin{pmatrix} 1&1&0&1&0&1 \\ 1&0&1&1&1&0 \end{pmatrix}$.

$y_I = (1,1)$ and $y_J = (0,1,0,0,1,1)$. Then $e_J = y_J - y_IG_I^{-1}G_J = (0,0,1,0,0,0)$, and it follows that $(e_I \mid e_J) = (0,0,0,0,1,0,0,0)$.

$(y_I \mid y_J) + (e_I \mid e_J) = (1,1,0,1,0,0,1,1) + (0,0,0,0,1,0,0,0) = (1,1,0,1,1,0,1,1) = m(G_I \mid G_J) = m\begin{pmatrix} 1&0&1&1&0&1&0&1 \\ 0&1&1&0&1&1&1&0 \end{pmatrix},$

hence $m = (1,1)$.
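The loop below is a sketch of this information set decoding procedure applied to the toy instance of the example ($k = 2$, $n = 8$, $t = 1$); the information sets are enumerated rather than drawn at random, and $G_I$ is inverted over $F_2$ by Gaussian elimination.

```python
import itertools
import numpy as np

def gf2_inv(M):
    """Inverse of a square binary matrix over F_2 (None if singular)."""
    k = M.shape[0]
    A = np.concatenate([M % 2, np.eye(k, dtype=int)], axis=1)
    for col in range(k):
        pivot = next((r for r in range(col, k) if A[r, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, k:]

G = np.array([[1,1,1,0,0,1,0,1],
              [0,1,0,1,1,1,1,0]])
y = np.array([1,0,1,0,1,0,1,1])
t = 1
k, n = G.shape

for I in itertools.combinations(range(n), k):    # in practice, I is drawn at random
    I = list(I)
    J = [j for j in range(n) if j not in I]
    GI_inv = gf2_inv(G[:, I])
    if GI_inv is None:                           # not an information set
        continue
    eJ = (y[J] + y[I] @ GI_inv @ G[:, J]) % 2    # e_J = y_J - y_I G_I^{-1} G_J over F_2
    if eJ.sum() <= t:                            # success: e_I = 0 by assumption
        x = (y[I] @ GI_inv) % 2
        print("I =", I, " x =", x, " e_J =", eJ) # recovers x = (1, 1)
        break
```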

  • Decoding by the birthday paradox

Consider an instance of Problem 2, with a parity check matrix $H$ of size $r \times n$, a syndrome $s$ and a weight $t$. If the weight $t$ is even, let us separate the columns of $H$ into two sets of the same size, $H_1$ and $H_2$, such that $H = (H_1 \mid H_2)$.

Let us build $L_1 = \{H_1e_1^t : e_1$ of length $n/2$ and weight $t/2\}$ and $L_2 = \{s + H_2e_2^t : e_2$ of length $n/2$ and weight $t/2\}$. The common elements of $L_1$ and $L_2$ are such that $H_1e_1^t = s + H_2e_2^t$, that is to say, $(e_1 \mid e_2)$ is a solution of Problem 2.

The probability that one of the solutions splits its weight equally between the two halves of the parity check matrix is $p = \frac{(C_{n/2}^{t/2})^2}{C_n^t}$; to solve Problem 2, one must repeat these operations about $\frac{1}{p}$ times, on different permutations of the public code.

  • The recovery of a plaintext encrypted twice by the same McEliece system

This is an active attack that applies only to the McEliece encryption system (because it is not deterministic) and does not apply to the Niederreiter system. Suppose the plaintext $x$ is encrypted in two different ways. We will have $y_1 = xG + e_1$ and $y_2 = xG + e_2$, where $e_1$ and $e_2$ are two distinct error vectors of weight $t$. We obtain the word $y_1 + y_2 = e_1 + e_2$, whose weight is less than or equal to $2t$. Once an attacker has detected that the two cipher texts $y_1$ and $y_2$ correspond to the same plaintext, this information reduces the number of iterations of the information set decoding algorithm. Message resending is detected by observing the weights: if the two plaintexts are identical, the weight of the sum of the two cipher texts remains less than or equal to $2t$ in general ($t$ being the correction capacity).

Algorithm

Input

$G$: the public key, of size $k \times n$.

Two words $y_1$ and $y_2$ such that $y_1 = xG + e_1$ and $y_2 = xG + e_2$, where $e_1$ and $e_2$ are two distinct error vectors of weight $t$.

Output

The plaintext $x$.

Procedure

Compute $y_1 + y_2$.

Randomly draw an information set $I \subset \{1, 2, \dots, n\}$ among the zero positions of $y_1 + y_2$.

Compute $e_J = y_J - y_IG_I^{-1}G_J$, where $y_1 = (y_I \mid y_J)$ and $I \cup J = \{1, 2, \dots, n\}$.

Repeat the previous operations until the weight of $e$ is at most $t$.

Return $x = y_IG_I^{-1}$.

Example

Let us try to attack by this method the system of the previous example.

Consider a plaintext encrypted in two ways, where the public key is
$$G = \begin{pmatrix} 1&1&1&0&0&1&0&1 \\ 0&1&0&1&1&1&1&0 \end{pmatrix},$$
$$y_1 = mG + e_1 = (1,1)G + (0,0,0,1,0,0,0,0) = (1,0,1,0,1,0,1,1),$$
$$y_2 = mG + e_2 = (1,1)G + (0,0,1,0,0,0,0,0) = (1,0,0,1,1,0,1,1),$$
$$y_1 + y_2 = (0,0,1,1,0,0,0,0).$$

Draw an information set among the zero positions of $y_1 + y_2$: let $I = \{7, 8\}$.

$G_I = \begin{pmatrix} 0&1 \\ 1&0 \end{pmatrix} = G_I^{-1}$, $G_J = \begin{pmatrix} 1&1&1&0&0&1 \\ 0&1&0&1&1&1 \end{pmatrix}$; $y_1 = (1,0,1,0,1,0,1,1)$, $y_I = (1,1)$, $y_J = (1,0,1,0,1,0)$.

$e_J = y_J - y_IG_I^{-1}G_J = (0,0,0,1,0,0)$.
$(y_I \mid y_J) + (e_I \mid e_J) = (1,1,1,0,1,0,1,0) + (0,0,0,0,0,1,0,0) = (1,1,1,0,1,1,1,0) = m\begin{pmatrix} 0&1&1&1&1&0&0&1 \\ 1&0&0&1&0&1&1&1 \end{pmatrix}.$

So we extract $m = (1,1)$.

4.2 Structural attack

The attacker tries to find a decomposition of the key $G' = S_1G_1P_1$, which allows him to develop his own decoding algorithm. Succeeding in a structural attack generally amounts to finding a code equivalent to the public code for which a decoding algorithm is known. This attack depends exclusively on the structure of the key space used. We describe here a successful attack on a McEliece system with a generalized Reed-Solomon code as the design code.

  • The attack of Sidelnikov and Shestakov

Sidelnikov and Shestakov showed [6] that generalized Reed-Solomon codes are so structured that one can recover a decoder of the public code in polynomial time. The systematic form of the generator matrix of a $GRS$ code can be obtained from the following proposition:

Proposition

Let
$$G = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \\ v_1\alpha_1 & v_2\alpha_2 & \cdots & v_n\alpha_n \\ \vdots & \vdots & & \vdots \\ v_1\alpha_1^{k-1} & v_2\alpha_2^{k-1} & \cdots & v_n\alpha_n^{k-1} \end{pmatrix}$$
be a generator matrix of a generalized Reed-Solomon code over $F_{q^m}$. Then there is an invertible $k \times k$ matrix $S$ with coefficients in $F_{q^m}$ and a matrix $R = (R_{ij})_{i=1,\dots,k;\ j=k+1,\dots,n}$ such that $(I \mid R) = SG$ and $R_{ij} = \frac{v_j}{v_i}\prod_{\substack{s=1 \\ s \ne i}}^{k}\frac{\alpha_j - \alpha_s}{\alpha_i - \alpha_s}$.

Proof

For $i = 1, 2, \dots, k$ we define the interpolation polynomial $f_i(x) = \prod_{\substack{s=1 \\ s \ne i}}^{k}\frac{x - \alpha_s}{\alpha_i - \alpha_s} = \sum_{j=1}^{k}f_{ij}x^{j-1}$, of degree $k-1$, such that $f_i(\alpha_i) = 1$ and $f_i(\alpha_j) = 0$ for $j = 1, 2, \dots, k$ and $j \ne i$. We set $S = \left(\frac{f_{ij}}{v_i}\right)_{i=1,\dots,k;\ j=1,\dots,k}$.

The $i$-th row of the matrix product $SG$ is $\left(\frac{f_i(\alpha_1)v_1}{v_i}, \frac{f_i(\alpha_2)v_2}{v_i}, \dots, \frac{f_i(\alpha_n)v_n}{v_i}\right)$.

By construction of the polynomials $f_i$, the first $k$ columns of the matrix $SG$ form the identity matrix; therefore $S$ is invertible and $SG = (I \mid R)$, where $R = (R_{ij})$ and $R_{ij} = \frac{f_i(\alpha_j)v_j}{v_i}$.

Corollary

Let $I$ be the identity matrix of order $k$ and $R = (R_{ij})_{i=1,\dots,k;\ j=k+1,\dots,n}$, where $R_{ij} = \frac{v_j}{v_i}\prod_{\substack{s=1 \\ s \ne i}}^{k}\frac{\alpha_j - \alpha_s}{\alpha_i - \alpha_s}$. Then the matrix $(I \mid R)$ is the systematic-form generator matrix of the $GRS$ code with generator matrix
$$G = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \\ v_1\alpha_1 & v_2\alpha_2 & \cdots & v_n\alpha_n \\ \vdots & \vdots & & \vdots \\ v_1\alpha_1^{k-1} & v_2\alpha_2^{k-1} & \cdots & v_n\alpha_n^{k-1} \end{pmatrix}.$$

Proof

This follows from the definition of the generalized Reed-Solomon code and the preceding proposition.

Algorithm

Input

A family of generalized Reed-Solomon codes of length $n$ and dimension $k$ constituting the key space.

The public key $G$.

Results

The matrix $\tilde{G} = (v_j\alpha_j^i)_{i=0,\dots,k-1;\ j=1,\dots,n}$ and an invertible matrix $S$ of size $k \times k$ such that $G = S\tilde{G}$.

Procedure

Put the matrix $G$ in the form $(I \mid R)$ by Gaussian elimination.

Determine the matrix $\tilde{G} = (v_j\alpha_j^i)_{i=0,\dots,k-1;\ j=1,\dots,n}$ such that $\alpha_1, \dots, \alpha_n$ and $v_1, \dots, v_n$ satisfy the equations $R_{ij} = \frac{v_j}{v_i}\prod_{\substack{s=1 \\ s \ne i}}^{k}\frac{\alpha_j - \alpha_s}{\alpha_i - \alpha_s}$.

Determine the matrix $S$ such that $G = S\tilde{G}$.

5. Conclusion

In conclusion, the security of cryptosystems based on error-correcting codes is strongly linked to the family of codes used in the design of the system. The cryptosystem based on generalized Reed-Solomon codes was broken by Sidelnikov and Shestakov in 1992. The version of McEliece using Goppa codes has been studied for more than 40 years and still appears secure from a cryptographic point of view, but it is little used in practice because the size of its public key is much larger than that of systems from other fields (RSA, for example), hence the importance of finding ways to reduce the size of the public key. In the end, the McEliece system based on Goppa codes remains a preferred candidate as a post-quantum cryptosystem. We have not covered in this chapter other cryptographic applications of error-correcting codes, such as hash functions [3, 7, 8, 9, 10, 11], pseudo-random generators, and identification protocols.

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
