The Security of Cryptosystems Based on Error-Correcting Codes

Quantum computers are distinguished by their enormous storage capacity and relatively high computing speed. Among the cryptosystems of the future, the best known and most studied candidates expected to resist attacks by such computers are the cryptosystems based on error-correcting codes. Using problems inspired by the theory of error-correcting codes in the design of cryptographic systems offers an alternative to cryptosystems based on number theory, as well as solutions to their vulnerabilities. Their security rests on the problem of decoding a random code, which is NP-complete. In this chapter, we discuss the cryptographic properties of error-correcting codes, as well as the security of cryptosystems based on coding theory.


Introduction
Like all asymmetric cryptographic systems, the idea is to base security on the difficulty of inverting a one-way trapdoor function. The theory of error-correcting codes contains well-structured, hard problems that are more or less suitable for use in cryptography. The first to use error-correcting codes for cryptographic purposes was McEliece, who proposed an asymmetric encryption algorithm in 1978. In 1986, Niederreiter proposed another cryptographic system equivalent to that of McEliece [1]. The two systems of McEliece and Niederreiter offer equivalent security against a passive attack; however, they do not against an active attack [2]. In the next section, we give an overview of the theory of error-correcting codes. In the third section, we deal only with the basic systems based on this theory. The last section is devoted to a discussion of security parameters and the best-known attacks. In what follows we write:

F_{2^m}: the finite field with 2^m elements.
K[x]: the ring of polynomials in one indeterminate over K.
K[x]/(P): the quotient ring of K[x] modulo P.
K*: the set K deprived of the element 0.
d°Q(x): the degree of the polynomial Q(x).
F_2^m: the set of vectors of length m with components 0 and 1.
F^n: the Cartesian product of n copies of the set F.
[x]: the integer part of x.
A^t: the transpose of the matrix A.
I_k: the identity matrix of order k.
gcd: the greatest common divisor.
C_n^t: the number of combinations of t elements among n.

Error-correcting codes

Finite fields
Finite fields are the basis of many error-correcting codes and cryptographic systems, so it is essential to recall their theory in order to understand how linear codes work. In this section we present some properties of finite fields and a method of representing them for later use. We are interested in constructing the finite fields F_{2^m} and in computing over them. Finite fields are generally constructed from primitive polynomials [3].
Definition The minimal polynomial of an element β over a finite field F is the monic polynomial with coefficients in F of smallest degree whose value at β is zero.
Proposition 1 The ring K[x]/(P) is a field if and only if the polynomial P(x) is irreducible over the field K.

If P(x) is irreducible of degree m and K is a finite field of q elements, then K[x]/(P) is a field of q^m elements.

This proposition gives us a way to build a finite field: take a polynomial P irreducible over a field K and form the quotient K[x]/(P).
Theorem (the primitive element) If K is a finite field of order q, then the multiplicative group K* is cyclic, generated by an element α called a primitive element of K, and we write K* = {α^i : i = 1, …, q − 1}. Any generator of this group is called a primitive element of K.
Definition (primitive polynomial) We say that a polynomial P ∈ F_2[x] of degree m is primitive if it is the minimal polynomial of a generator of F*_{2^m}.
It follows that we can represent the nonzero elements of a finite field F_{2^m} by the nonzero vectors of F_2^m, the element α^i having x^i mod P(x) as representative. In what follows we denote by α a primitive element of F_{2^m}.
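As a concrete illustration, here is a minimal Python sketch of this construction; the field F_8 and the primitive polynomial x^3 + x + 1 (bits 0b1011) are chosen for the example, and each field element is an integer whose bits are the coefficients of its representative modulo P(x):

```python
def gf_mul(a, b, prim, m):
    """Multiply two elements of F_{2^m}, reducing modulo the primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add a copy of a for this bit of b
        b >>= 1
        a <<= 1
        if a & (1 << m):    # degree reached m: reduce modulo prim
            a ^= prim
    return r

PRIM, M = 0b1011, 3         # P(x) = x^3 + x + 1, primitive over F_2
alpha = 0b010               # the class of x, a primitive element

# The powers alpha^0, ..., alpha^(2^m - 2) enumerate all nonzero elements,
# as the primitive-element theorem states.
powers, e = [], 1
for _ in range(2**M - 1):
    powers.append(e)
    e = gf_mul(e, alpha, PRIM, M)
```

Running this, `powers` contains each of the 7 nonzero elements of F_8 exactly once.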

Principle of error-correcting codes
In order to transmit a message, it must be coded, that is, temporarily given a certain form. The coding mode depends on the means of transmission, which can be disturbed by noise; hence the need for a coding that allows the receiver to recover the initial message even if it has been altered. Such coding is called channel coding.
The principle of error-correcting codes is to add to the message to be transmitted additional information, called redundant or control information, so that transmission errors can be detected and corrected. This operation is called encoding and its result is a codeword; each message is therefore associated with a codeword of length greater than that of the message.
The code is the set of codewords thus obtained. We assume that all messages are words of the same length k > 0, written using an alphabet F of q elements. Each message (x_0, x_1, …, x_{k−1}) is an element of the set F^k (the message space); we then have q^k possible messages. We assume that all the codewords have the same length n > k. Encoding m messages of length k (m ≤ q^k) consists in choosing an integer n > k and associating with each message of F^k a word of F^n (injectively). The encoding introduces a redundancy equal to n − k. Decoding consists, on receiving a word x of F^n, in determining whether x is a codeword and, if not, correcting it thanks to the redundancy. This is done using the Hamming distance.
Definition (Hamming distance) Let x = (x_0, x_1, …, x_{n−1}) ≔ x_0x_1…x_{n−1} and y = (y_0, y_1, …, y_{n−1}) ≔ y_0y_1…y_{n−1} in F^n. The Hamming distance between the words x and y, written d_H(x, y) = d(x, y), is the number of indices i ∈ {0, 1, 2, …, n − 1} such that x_i ≠ y_i. The Hamming weight of a word x, written w(x), is the number of nonzero components of x.
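The two definitions can be sketched directly in Python:

```python
def hamming_distance(x, y):
    """Number of indices i with x_i != y_i."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def weight(x):
    """Hamming weight w(x) = d(x, 0): number of nonzero components."""
    return sum(c != 0 for c in x)
```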

Definitions
We call the minimum distance of a code C the integer d = min{d(x, y) : x, y ∈ C, x ≠ y}. We call the weight of a word x of the code C the integer w(x) = d(x, 0).
Proposition (correction capacity) Let C be a code of minimum distance d, and x ∈ F^n a received message affected by r errors, with r ≥ 1.
1. If 2r < d, that is, r ≤ [(d − 1)/2], the code C corrects the r errors.
2. If 2r = d, the code C detects the existence of the r errors but cannot always correct them.
3. If [d/2] < r ≤ d − 1, the code C detects the existence of errors but risks making an erroneous correction.
The integer t = [(d − 1)/2] is called the correction capacity of the code; we also say that C is a t-corrector code.
Proof Let m be the transmitted codeword and x the received message affected by r errors; then d(m, x) = r.
1. If 2r < d, the balls of radius [(d − 1)/2] centered on the codewords are disjoint, so m is the unique codeword at distance at most r from x.
3. We know there is an error because x ∉ C, but there may be a codeword m′ ∈ C, m′ ≠ m, such that d(m′, x) < d(m, x) = r.
The most used codes are the linear codes which we discuss in the next part.

Definitions
A linear code C of length n and dimension k over the finite field F_q is a k-dimensional vector subspace of F_q^n. We denote it [n, k, d]_q, with d its minimum distance. Linear codes are codes in which each codeword y is obtained by a linear transformation of the components of the initial word (the information) x.
A linear code is characterized by its generator matrix G, a k × n matrix whose rows form a basis of C. Let H be an (n − k) × n matrix with coefficients in F_q. H is called a parity-check matrix of C if x ∈ C ⇔ Hx^t = 0.
F_q^k is the message space. The matrix G defines a bijective map F_q^k → C, x ↦ xG, by which the q^k messages of length k are represented by codewords of length n.
The generator matrix G of a code C is not unique; G can be transformed into G′ = (I_k | A), with I_k the identity matrix of order k and A a matrix of k rows and n − k columns.
G and G′ generate the same subspace C; G′ is called the canonical generator matrix, and if the generator matrix of a code is of the form G = (I_k | A), the code is said to be systematic.
Theorem Let C be an [n, k]_q linear code.
1. If G is a generator matrix of C and H a parity-check matrix of C, then GH^t = 0.
2. If G is a k × n matrix of rank k and H an (n − k) × n matrix of rank n − k such that GH^t = 0, then H is a parity-check matrix of C if and only if G is a generator matrix of C.
Proof (⇐) Suppose H is a parity-check matrix of C. Since GH^t = 0, the rows G_i of G belong to C; as rg(G) = k, the rows {G_i, i = 1…k} form a basis of C. It follows that G is a generator matrix of C.
(⇒) We have y ∈ C if and only if there exists x ∈ F_q^k such that y = xG. Then y ∈ C if and only if yH^t = xGH^t = 0. Hence H is a parity-check matrix of C.
In the case of a systematic code, we have the following corollary.
Corollary Let C be an [n, k]_q linear code.
1. If G = (I_k | A) is a canonical generator matrix of C, then H = (−A^t | I_{n−k}) is a parity-check matrix of C.
2. If H = (B | I_{n−k}) is a parity-check matrix of C, then G = (I_k | −B^t) is a generator matrix of C.

Proof
By applying the preceding theorem.

Encoding and decoding
Encoding is obtained by applying the generator matrix. Decoding consists in applying the parity-check matrix to the received message: if the result is 0, the message is valid; otherwise we look for the errors and correct them. Hx^t is called the syndrome. Suppose the word x is sent through a noisy channel and the word received is y; the error vector is then e = y − x.
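The corollary can be checked numerically. The sketch below assumes the systematic generator matrix of the [7, 4] Hamming code as a running example (over F_2 we have −A = A):

```python
def parity_check_from_systematic(G):
    """Given a binary systematic generator matrix G = (I_k | A),
    return H = (A^t | I_{n-k})."""
    k, n = len(G), len(G[0])
    r = n - k
    A = [row[k:] for row in G]
    return [[A[i][j] for i in range(k)] + [1 if c == j else 0 for c in range(r)]
            for j in range(r)]

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = parity_check_from_systematic(G)

# Every row of G is orthogonal to every row of H: G H^t = 0.
ok = all(sum(g[i] * h[i] for i in range(7)) % 2 == 0 for g in G for h in H)
```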
Given y, the decoder must decide which codeword x has been transmitted (that is, which error vector occurred). For a vector u and a code C, we call the coset of u the set u + C = {u + c : c ∈ C}. A representative of minimum weight of a coset of C is called a leader of this coset.
Theorem Let C be an [n, k, d]_q linear code. Then:
1. u and v are in the same coset of C if and only if u − v ∈ C.
2. Every vector of F_q^n is in some coset of C.
3. Two cosets of C are either disjoint or identical.

Principle
We construct the standard array of C, a matrix of q^{n−k} rows and q^k columns. It contains all the vectors of F_q^n; its first row consists of the words of C, with the vector 0 on the left. The other rows represent the cosets u_i + C, with the coset leader u_i on the left. The procedure is as follows: 1. We list the words of C, starting with 0, on the first row.
2. We choose a vector u 1 of minimum weight that does not belong to the first line and we list in the second line the elements u 1 þ C, by entering below 0 the class leader u 1 and below each element x ∈ C the element u 1 þ x.
3. We choose u 2 in the same way and we repeat the same operation.
4. We iterate this process until all the cosets are listed and every vector of F_q^n appears exactly once.
When the word y is received, we look up its position in the standard array. The decoder then decides that the error vector e is the coset leader located in the first column of the row of y, and decodes y as x = y − e, the codeword of the first row in the same column as y. Remark The standard array provides nearest-neighbor decoding. Note that this process is too slow and too memory-hungry for large codes. In practice each code has, by its structure, its own decoding algorithm.
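In practice the standard array is compressed into a table from syndrome to coset leader. A sketch for a 1-error-correcting code, assuming the systematic [7, 4] Hamming parity-check matrix as an example:

```python
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(y):
    return tuple(sum(h[i] * y[i] for i in range(len(y))) % 2 for h in H)

# Coset leaders of weight <= 1: the zero vector plus each unit vector.
leaders = {syndrome([0] * 7): [0] * 7}
for j in range(7):
    e = [0] * 7
    e[j] = 1
    leaders[syndrome(e)] = e

def decode(y):
    """Nearest-neighbor decoding: subtract the coset leader of y's coset."""
    e = leaders[syndrome(y)]
    return [a ^ b for a, b in zip(y, e)]
```

For a t-corrector code, the same table would hold one leader per coset, indexed by syndrome, instead of the full q^{n−k} × q^k array.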

The Hamming code
A Hamming code with redundancy r ≥ 2 is a [2^r − 1, 2^r − 1 − r]_2 linear code whose parity-check matrix H is a matrix of r rows and 2^r − 1 columns that are exactly the set of all nonzero vectors of F_2^r. Theorem The minimum distance of the [2^r − 1, 2^r − 1 − r]_2 Hamming code is d = 3 (it therefore corrects a single error).
Proof This code contains no element of weight 1 or 2; otherwise a column of H would be zero, or two columns of H would be identical.
There exists x ∈ C such that w(x) = 3: indeed, by definition of the parity-check matrix H, we can find three columns of H whose sum is zero, and the corresponding word of weight 3 is a codeword. The syndrome of a vector x of which only the jth component is nonzero is none other than the transpose of the jth column of H. If the columns of H are ordered in increasing order as binary numbers, the jth column corresponds to the binary writing of j; hence the following decoding algorithm. Let y be a received message; we calculate Hy^t. If Hy^t = 0, then y corresponds to the transmitted message. If Hy^t ≠ 0 and assuming there is only one error, Hy^t directly gives the position of the error, written in binary in the form ⋯b_3b_2b_1b_0. We can then correct y = y_1⋯y_n as x = y + e_j for j = Σ_i b_i 2^i, with e_j the vector of which only the jth coordinate is nonzero.
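The decoding shortcut above can be sketched as follows, with the columns of H ordered as the binary numbers 1 to 7 (r = 3):

```python
# Row b holds bit b (most significant first) of the column index j = 1..7,
# so column j is the binary writing of j.
H = [[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)]

def decode(y):
    """Single-error decoding: the syndrome, read as a binary number,
    is the (1-indexed) position of the error."""
    s = [sum(H[r][i] * y[i] for i in range(7)) % 2 for r in range(3)]
    pos = s[0] * 4 + s[1] * 2 + s[2]
    if pos:                    # nonzero syndrome: flip the erroneous bit
        y = y[:]
        y[pos - 1] ^= 1
    return y
```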

The Reed-Solomon codes
Let F_q[x]_{<k} be the set of polynomials of degree strictly less than k over F_q, q = 2^m, and let L = (α_1, …, α_n) be n distinct elements of F_q. Let us build a code of length n and dimension k.
Each word of the code is the evaluation (f(α_1), …, f(α_n)) of a polynomial f of F_q[x]_{<k} on L; we then have a code of length n and dimension k whose generator matrix has as rows the evaluations of 1, x, …, x^{k−1}. By its structure, this code has a minimum distance of at least n − k + 1, because two distinct polynomials of degree less than k cannot agree in more than k − 1 positions. This distance is exactly n − k + 1, since the evaluation of a polynomial of the form ∏_{i=1}^{k−1}(x − α_i) has weight n − k + 1. So we have a code over F_{2^m} of the form [n, k, n − k + 1]_q, which can have both a good transmission rate and a good correction capacity.
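A small numerical illustration of the evaluation construction; it assumes the prime field F_11 instead of F_{2^m}, purely to keep the arithmetic plain, with n = 10 evaluation points and dimension k = 4:

```python
from itertools import product

p, n, k = 11, 10, 4
points = list(range(1, n + 1))          # n distinct elements of F_11

def encode(msg):
    """Evaluate the polynomial whose coefficients are msg at every point."""
    return [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p
            for a in points]

# Minimum distance: two distinct polynomials of degree < k agree on at most
# k - 1 points, so every nonzero codeword has weight >= n - k + 1.
d = min(sum(c != 0 for c in encode(m))
        for m in product(range(p), repeat=k) if any(m))
```

The exhaustive search confirms d = n − k + 1 = 7, the Singleton bound met with equality.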
Remark Reed-Solomon codes are a special case of a slightly more general class, the generalized Reed-Solomon (GRS) codes, defined as follows. Definition Let (v_1, v_2, …, v_n) be a vector of length n over F*_{2^m} and (α_1, α_2, …, α_n) a vector of length n over F*_{2^m}, with the α_i pairwise distinct. The set of codes with generator matrix G of the form G = (v_jα_j^i), i = 0, …, k − 1, j = 1, …, n, is called the family of generalized Reed-Solomon codes.

The classical Goppa codes
Definition Let L = (α_1, α_2, …, α_n) be a sequence of n distinct elements of F_{2^m} and g(z) ∈ F_{2^m}[z] a monic polynomial of degree r, irreducible over F_{2^m}. The irreducible binary Goppa code with support L (generator vector) and generator polynomial g, denoted Γ(L, g), is the set of words a = (a_1, …, a_n) ∈ F_2^n such that R_a(z) = Σ_{i=1}^n a_i/(z − α_i) ≡ 0 mod g(z).
The construction of a Goppa code: the Goppa code is a linear code over the field F_2, but its construction requires an extension F_{2^m}. Each element of the parity-check matrix H is broken down into m elements of F_2 placed in columns, using a projection of F_{2^m} onto F_2^m; we thus pass from a matrix of size r × n over F_{2^m} to a matrix of size rm × n over F_2. It is therefore a code of length n = |L| and dimension k = n − mr, with a minimum distance at least equal to d = r + 1. Indeed, the parity-check matrix H can be written as the product of a Vandermonde matrix and an invertible matrix, so every r × r square submatrix of H is invertible; hence there is no codeword of weight less than or equal to r.
The decoding of a Goppa code: several techniques exist to decode Goppa codes, but they all work on the same principle. Let c′ = c + e with w(e) < r/2. We start by calculating the syndrome R_{c′}(z) over F_{2^m}; from this syndrome we write a key equation, and we finish the decoding by solving the key equation to find e.
If R_{c′}(z) = 0, the word belongs to the code. The key equation: let σ_e(z) = ∏_{i=1}^n (z − α_i)^{e_i}, the locator polynomial, of degree < r/2. We introduce the polynomial w_e(z) = σ_e(z)R_e(z) mod g(z), called the evaluator polynomial.
We can solve the key equation in two different ways: Berlekamp-Massey's algorithm or the extended Euclidean algorithm. The latter has the advantage of being easier to present. Indeed, we seek w_e and σ_e of degree < r/2 such that w_e(z) = σ_e(z)R_e(z) mod g(z), that is, w_e(z) = σ_e(z)R_e(z) + k(z)g(z). If we compute the gcd of (g, R_e) with the extended Euclidean algorithm, we obtain at each step polynomials u_i, v_i, r_i satisfying R_e u_i + g v_i = r_i. At each step the degrees of u_i and v_i grow while the degree of r_i decreases; there is therefore a step i_0 at which, if we stop the algorithm, we find a solution of the key equation: σ_e = u_{i_0} and w_e = r_{i_0}, up to a scalar coefficient.
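The extended Euclidean step can be sketched as below. We track u_i with R·u_i + g·v_i = r_i and stop once deg(r_i) falls below a bound; polynomials are coefficient lists (lowest degree first). The coefficient field F_7 and the sample inputs are illustrative assumptions, not a full Goppa decoder:

```python
p = 7

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def sub(a, b):
    m = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
                 for i in range(m)])

def mul(a, b):
    if not a or not b:
        return []
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    return r

def divmod_poly(a, b):
    a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], p - 2, p)        # inverse of the leading coefficient
    while a and len(a) >= len(b):
        d, c = len(a) - len(b), a[-1] * inv % p
        q[d] = c
        for i, x in enumerate(b):     # subtract c * b * z^d
            a[i + d] = (a[i + d] - c * x) % p
        trim(a)
    return trim(q), a

def key_equation(R, g, bound):
    """Extended Euclid on (g, R): return (u, r) with R*u = r (mod g)
    and deg(r) < bound, as in the decoding stop condition."""
    r0, r1, u0, u1 = g, R, [], [1]
    while r1 and len(r1) - 1 >= bound:
        q, rem = divmod_poly(r0, r1)
        r0, r1 = r1, rem
        u0, u1 = u1, sub(u0, mul(q, u1))
    return u1, r1
```

With g of degree 4 and a syndrome-like R of degree 3, stopping at bound 2 yields a pair (u, r) satisfying the congruence, which is the shape of (σ_e, w_e).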

The basic system (McEliece)
We start by generating a linear [n, k, d]_q code of a well-chosen family, with its generator matrix G. We then scramble this matrix to make it indistinguishable from a random matrix: we need a permutation matrix P of size n × n (having a single 1 in each row and each column and 0 elsewhere) and an invertible matrix S of size k × k (the scrambler). The public key is G′ = SGP, which is indistinguishable from a random matrix (the definition of a random matrix comes from the definition of a random code, introduced in section four). The knowledge of S, P and G allows us to recover the structure of the design code and provides us with its decoding algorithm.

The algorithms of the McEliece system
We cite the component algorithms of the McEliece cryptosystem [4].
Key generation
Input: a family of linear [n, k, d]_q codes chosen for the design.
Procedure: choose a generator matrix G, in systematic form, of the design code; choose an invertible matrix S of size k × k with coefficients in F_q; choose a permutation matrix P of size n × n; compute G′ = SGP.
Output: the public key G′; the private key (S, G, P).
Encryption
Input: the public key G′; the plaintext x ∈ F_q^k.
Procedure: choose a vector e ∈ F_q^n (an error) of weight less than or equal to the correction capacity of the design code; calculate y = xG′ + e.
Output: the ciphertext y.
Decryption
Input: the ciphertext y; the private key (S, G, P).
Procedure: calculate yP^{−1} = xSG + eP^{−1}; apply f_G, the decoding algorithm of the design code whose generator matrix is G, to recover the codeword xSG and hence xS; calculate x = (xS)S^{−1}.
The use of a binary Goppa code as the secret key was proposed by McEliece in the original version, where he took the following parameters: m = 10, n = 2^m = 1024, r = 50, k = n − mr = 524. So far this choice seems perfectly safe, but it is not used much in practice because the size of its public key is very large.
Example We use the Hamming code with its generator matrix G, a private key (S, G, P) and the public key G′ = SGP. Since G is a generator matrix in systematic form, the first k components of the decoded word give xS, and multiplying by S^{−1} yields the plaintext sought.
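An end-to-end toy sketch of these algorithms, assuming the [7, 4] Hamming code (t = 1) as the design code; this choice is purely illustrative and offers no security. The matrix S below is an assumption chosen to be its own inverse over F_2, to keep the sketch short:

```python
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]            # systematic Hamming [7, 4] generator
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]            # a parity-check matrix: G H^t = 0
S = [[1, 1, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]                     # invertible over F_2, with S*S = I
perm = [3, 0, 6, 1, 5, 2, 4]           # the permutation matrix P, as a column map
inv_perm = [0] * 7
for j, i in enumerate(perm):
    inv_perm[i] = j

def vecmat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) % 2
            for j in range(len(M[0]))]

Gpub = [[row[i] for i in perm] for row in (vecmat(s, G) for s in S)]  # G' = SGP

def encrypt(x, e):
    """y = xG' + e; in practice e is drawn at random with weight <= t."""
    return [a ^ b for a, b in zip(vecmat(x, Gpub), e)]

def hamming_correct(c):
    """Match the syndrome against the columns of H to fix a single error."""
    s = tuple(sum(h[i] * c[i] for i in range(7)) % 2 for h in H)
    if any(s):
        j = [tuple(h[i] for h in H) for i in range(7)].index(s)
        c = c[:]
        c[j] ^= 1
    return c

def decrypt(y):
    c = hamming_correct([y[i] for i in inv_perm])  # yP^-1 = xSG + eP^-1
    return vecmat(c[:4], S)      # first 4 bits give xS; x = (xS)S^-1 = (xS)S
```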

The Niederreiter variant
Let C be a linear t-corrector code of length n and dimension k, and let H be a parity-check matrix of C of size (n − k) × n. We randomly choose an invertible matrix S and a permutation matrix P, and we calculate H′ = SHP. The public key is H′, and the private key (S, H, P), together with the knowledge of a syndrome decoding algorithm for C. Let x be a plaintext of length n and weight t; we calculate the ciphertext y = H′x^t. The recipient receives y; knowing the secret key, he can calculate S^{−1}y = HPx^t. Using the syndrome decoding algorithm of C, he can find Px^t, and applying P^{−1}, the plaintext x is found.
The algorithms of the Niederreiter cryptosystem [5]:
Key generation
Input: a linear [n, k, d]_q code chosen for the design, of which we know a syndrome decoding algorithm.
Procedure: choose a parity-check matrix H of the design code, an invertible matrix S and a permutation matrix P; compute H′ = SHP.
Output: the public key H′; the private key (S, H, P). Encryption and decryption then proceed as described above on a plaintext x of weight t.
Remark Reed-Solomon codes were originally proposed by Niederreiter as a family of codes that could be used in his cryptosystem. In 1992 Sidelnikov and Shestakov showed that it is easy to attack this cryptosystem [2].
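A matching toy sketch of the Niederreiter mechanics, again assuming the [7, 4] Hamming code; with t = 1 the plaintext space (weight-1 words) is tiny, so this is only an illustration. S is again assumed to be an involution:

```python
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]            # Hamming [7, 4] parity-check matrix
S = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]                        # invertible over F_2, with S*S = I
perm = [2, 5, 0, 6, 3, 1, 4]           # the permutation matrix P
inv_perm = [0] * 7
for j, i in enumerate(perm):
    inv_perm[i] = j

def matvec(M, x):
    return [sum(row[i] * x[i] for i in range(len(x))) % 2 for row in M]

SH = [[sum(S[r][i] * H[i][c] for i in range(3)) % 2 for c in range(7)]
      for r in range(3)]
Hpub = [[row[i] for i in perm] for row in SH]       # H' = SHP

def encrypt(x):
    """The ciphertext is the scrambled syndrome H'x^t (3 bits here)."""
    return matvec(Hpub, x)

def decrypt(y):
    s = matvec(S, y)                   # S^-1 = S here: recover H(Px^t)
    cols = [tuple(h[i] for h in H) for i in range(7)]
    z = [0] * 7
    z[cols.index(tuple(s))] = 1        # weight-1 syndrome decoding in C
    return [z[i] for i in perm]        # undo the permutation
```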

The security of cryptosystems based on correcting codes
The security of cryptosystems based on error-correcting codes rests on the problem of distinguishing the (hidden) design code from a random code. We first give the following definitions: • Code equivalence Two codes are said to be equivalent if their generator (respectively parity-check) matrices are deduced from each other by a permutation of columns.
• Random code A random code is a linear code whose k linearly independent rows of the generator matrix (or n − k linearly independent rows of the parity-check matrix) have been generated at random.
The main parameter for securing a McEliece cryptosystem and its variants is then the structure of the code family chosen for the design, for which it should be difficult to find an equivalent code. Since the robustness of such a system lies in the difficulty of decoding and in the hidden structure of the design code, an attacker can attempt to attack the system by two methods: a decoding attack and a structural attack. The resistance of the system to these two attack methods depends on the family of codes chosen for the design; the choice of the code family is the essential point in the design of the cryptosystem.

Decoding attack
The attacker directly attempts to decode the ciphertext in the code C (of generator matrix G or public parity-check matrix H); the principle consists in decoding the intercepted ciphertext relative to the public code using general decoding algorithms. We cite two decoding problems in a random code:
Problem 1 Given G, a random binary matrix of size k × n generating a code C of dimension k, x a random word of F_2^n and t a positive integer, find whether there is an error word e of F_2^n such that w(e) ≤ t and x + e ∈ C.
Problem 2 Given H, a random binary parity-check matrix of size (n − k) × n of a code C of dimension k, s a random vector of F_2^{n−k} and t a positive integer, find whether there is a word x of F_2^n such that w(x) ≤ t and Hx^t = s.
Decoding in a random code is behind the following attacks:
• Information-set decoding
The principle rests on two steps: the selection of an information set and the search for a low-weight word. There are several variants which propose to optimize one or the other of these two steps.
Definition Let C be a linear code of generator matrix G and length n. An information set I is a subset of {1, 2, …, n} such that the k × k matrix G_I, formed of the columns of G labeled by the elements of I, is invertible.
Remark The matrix (G_I | G_J), with I ∪ J = {1, 2, …, n}, is equivalent to G.
Algorithm
Input: G, a generator matrix of a code C; t, a positive integer; y, a word of F_2^n such that d(y, C) ≤ t.
Output: the couple (x, e) such that y = xG + e with w(e) ≤ t.
Procedure: randomly draw an information set I of the code C (and let J be its complement); calculate e_J = y_J − y_I G_I^{−1}G_J.
Repeat the previous operations until you find e_J such that w(e_J) ≤ t. Return e = (0 | e_J). Determine the word x such that y − e = xG.
Proof We have y = xG + e and y = (y_I | y_J) = x(G_I | G_J) + (e_I | e_J). Hence e_I = y_I − xG_I and e_J = y_J − xG_J.
If the information set I does not contain an error position (e_I = 0), then, since G_I is invertible, we obtain x = y_I G_I^{−1} and e_J = y_J − y_I G_I^{−1}G_J. Then e = (0 | y_J − y_I G_I^{−1}G_J) is the solution sought.
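The loop above can be sketched as follows, run against the toy [7, 4] Hamming generator matrix (an assumption for illustration; real attacks face far larger parameters, where the success probability per draw is tiny):

```python
import random

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def inv_gf2(M):
    """Invert a square binary matrix by Gauss-Jordan; None if singular."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c]), None)
        if piv is None:
            return None
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [x ^ y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

def isd(G, y, t, trials=1000):
    n, k = len(G[0]), len(G)
    for _ in range(trials):
        I = random.sample(range(n), k)            # candidate information set
        GI_inv = inv_gf2([[G[r][c] for c in I] for r in range(k)])
        if GI_inv is None:
            continue                              # G_I not invertible: redraw
        yI = [y[c] for c in I]
        x = [sum(yI[i] * GI_inv[i][j] for i in range(k)) % 2 for j in range(k)]
        c_hat = [sum(x[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        e = [a ^ b for a, b in zip(y, c_hat)]
        if sum(e) <= t:                           # the draw avoided the errors
            return x, e
    return None
```

Any accepted pair (x, e) with w(e) ≤ t is the unique decoding within the correction capacity, so the answer is correct whenever the loop succeeds.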

Remark
We have C_n^k possibilities to choose the k = |I| positions among {1, 2, …, n} (|I| being the cardinality of I), and C_{n−t}^k possibilities to choose k positions among the n − t positions where e_i = 0. So the probability of getting an information set I with e_I = 0 is p = C_{n−t}^k / C_n^k.
Example The ciphertext is y = (10101011) and t = 1; we look for (m, e) such that mG + e = y.
• Decoding by the birthday paradox
Consider an instance of problem 2, with a parity-check matrix H of size r × n, a syndrome s and a weight t. If the weight t is even, let us separate the columns of H into two sets of the same size, H = (H_1 | H_2). Let us build the lists L_1 = {H_1e_1^t : e_1 of length n/2 and weight t/2} and L_2 = {s + H_2e_2^t : e_2 of length n/2 and weight t/2}. Common elements of L_1 and L_2 are such that H_1e_1^t = s + H_2e_2^t, that is to say, (e_1 | e_2) is a solution of problem 2. The probability that one of the solutions splits into two equal parts across the two halves of the parity-check matrix is p = (C_{n/2}^{t/2})² / C_n^t.
• The recovery of a plaintext encrypted twice by the same McEliece system
This is an active attack that only applies to the McEliece encryption system (because it is not deterministic) and does not apply to the Niederreiter system. Suppose the plaintext x is encrypted in two different ways; we will have y_1 = xG + e_1 and y_2 = xG + e_2, where e_1 and e_2 are two distinct error vectors of weight t. We get the word y_1 − y_2 = e_1 − e_2, whose weight is less than or equal to 2t. Once an attacker has detected that the two ciphertexts y_1 and y_2 correspond to the same plaintext, this information reduces the number of iterations of the information-set decoding algorithm. Message resending is detected by observing the weight of the sum of the two ciphertexts: if the two plaintexts are identical, the weight of the sum of the two ciphertexts remains less than 2t in general (t being the correction capacity).
Algorithm
Input: G, the public key, of size k × n; two words y_1 and y_2 such that y_1 = xG + e_1 and y_2 = xG + e_2, where e_1 and e_2 are two distinct error vectors of weight t.
Output: the plaintext x.
Procedure: calculate y_1 − y_2.
Randomly draw an information set I ⊂ {1, 2, …, n} among the positions labeling the zero coordinates of y_1 − y_2.
Calculate e_J = y_J − y_I G_I^{−1}G_J. Repeat the previous operations until the weight of e is ≤ t. Return x = y_I G_I^{−1}.
Example Let us try to attack by this method the system of the previous example, with a plaintext encrypted in two ways under the same public key. Draw an information set that labels the zero positions of y_1 + y_2, say I = {7, 8}.
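The key observation behind this attack can be demonstrated numerically; the codeword and the two weight-1 error vectors below are illustrative assumptions:

```python
c = [1, 0, 1, 1, 0, 1, 0]              # xG' for some fixed plaintext x
e1 = [0, 0, 1, 0, 0, 0, 0]
e2 = [0, 0, 0, 0, 0, 1, 0]             # two distinct weight-1 errors

y1 = [a ^ b for a, b in zip(c, e1)]
y2 = [a ^ b for a, b in zip(c, e2)]
diff = [a ^ b for a, b in zip(y1, y2)]  # = e1 + e2: the plaintext cancels out

# weight(diff) <= 2t reveals the resend; outside the support of diff, no
# position carries an error (unless e1 and e2 collide there), so drawing
# the information set among these positions succeeds much faster.
safe = [i for i, bit in enumerate(diff) if bit == 0]
```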

Structural attack
The attacker tries to find a decomposition of the key G′ = S_1G_1P_1, which allows him to build his own decoding algorithm. Succeeding in a structural attack generally amounts to finding a code equivalent to the public code for which we know a decoding algorithm. This attack depends exclusively on the structure of the space of the keys used. We quote here a successful attack on a McEliece system with a Reed-Solomon code as the design code.

• The attack of Sidelnikov and Shestakov
Sidelnikov and Shestakov showed [6] that generalized Reed-Solomon codes are so structured that one can find a decoder of the public code in polynomial time. The systematic form of the generator matrix of a GRS code can be obtained from the following proposition.
Proposition Let G = (v_jα_j^i) be a generator matrix of a generalized Reed-Solomon code. Then there is an invertible k × k matrix S such that the ith row of the product SG is (f_i(α_1)v_1/v_i, …, f_i(α_n)v_n/v_i), where the f_i are the interpolation polynomials f_i(x) = ∏_{s ≤ k, s ≠ i}(x − α_s)/(α_i − α_s). By construction of the polynomials f_i, the first k columns of the matrix SG form the identity matrix; therefore S is invertible and SG = (I | R), where R = (R_ij) with R_ij = f_i(α_j)v_j/v_i for j = k + 1, …, n.
Corollary Let I be the identity matrix of order k and R = (R_ij), i = 1…k, as above. Then the matrix (I | R) is the generator matrix in systematic form of the generalized Reed-Solomon code.

Proof
This can be deduced from the definition of the generalized Reed-Solomon code and the preceding proposition.
Algorithm
Input: a family of generalized Reed-Solomon codes of length n and dimension k constituting the key space; the public key G′.
Output: the matrix G = (v_jα_j^i), i = 0, …, k − 1, j = 1, …, n, and an invertible matrix S of size k × k such that G′ = SG.

Procedure
Put the matrix G′ in the form (I | R) by Gaussian elimination.
Determine the matrix G = (v_jα_j^i), i = 0, …, k − 1, j = 1, …, n, such that α_1, …, α_n and v_1, …, v_n satisfy the relations imposed by R.
Determine the matrix S such that G′ = SG.

Conclusion
In conclusion, the security of cryptosystems based on error-correcting codes is strongly linked to the family of codes used in the design of the system. The cryptosystem based on the Reed-Solomon code was broken by Sidelnikov and Shestakov in 1992. The version of McEliece using Goppa codes has been studied for 40 years and seems perfectly secure from a cryptographic point of view; but it is little used in practice because the size of its public key is much larger than that of systems from other fields (RSA, for example), hence the importance of finding a way to reduce the size of the public key. In the end, the McEliece system based on Goppa codes remains a preferred candidate as a post-quantum cryptosystem. We have not covered in this chapter other cryptographic applications of error-correcting codes, including hash functions [3, 7-11], pseudo-random generators, identification protocols, etc.

Author details
Ahmed Drissi National School for Applied Sciences, ENSA, Abdelmalek Essaadi University, Tangier, Morocco *Address all correspondence to: idrissi2006@yahoo.fr © 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.