Open access peer-reviewed chapter - ONLINE FIRST

Confidentiality and Integrity for IoT/Mobile Networks

By Tri Ngo Minh

Submitted: May 20th 2019. Reviewed: June 12th 2019. Published: October 23rd 2019.

DOI: 10.5772/intechopen.88011


Abstract

This chapter discusses how to ensure confidentiality and integrity for data flow in IoT applications. While confidentiality can be addressed by access control, cryptography, or information flow analysis, integrity is still an open challenge. This chapter proposes to use error-correcting codes to guarantee integrity, i.e., to maintain and assure the errorless state of data. Besides errors, many communication channels also cause erasures, i.e., the receiver cannot decide which symbol the received waveform represents. The chapter proposes a method that can correct both errors and erasures together. Our method is efficient in reducing memory storage as well as decoding complexity.

Keywords

  • confidentiality
  • integrity
  • information flow
  • erasure
  • separating matrix
  • covering design

1. Introduction

It is estimated that the Internet of Things (IoT) will generate billions of dollars in profit for industries over the next two decades. Many organizations have started to develop and implement their own IoT strategies. IoT-enabled devices generate and transmit so much data that security should be a top concern. IoT users require communication technologies to guarantee both efficiency and security. This chapter discusses how to guarantee two main properties of security, i.e., confidentiality and integrity, for IoT applications.

1.1 Confidentiality

Securing the data manipulated by information systems has been a challenge in the past few years. Several methods to limit the information disclosure have been proposed, such as access control and cryptography. These are useful approaches, i.e., they can prevent confidential information from being read or modified by unauthorized users. However, they still have a fundamental limitation, i.e., they do not regulate the information propagation after it has been released. For example, access control prevents unauthorized file access, but is insufficient to control how the data is used afterwards. Similarly, cryptography provides a shield to exchange information privately across a nonsecure channel, but no guarantee about the confidentiality of private data is given after it is decrypted. Thus, neither access control nor encryption provides a complete solution to protect confidentiality for information systems.

To ensure confidentiality for an information system, e.g., an IoT system, it is necessary to show that the system as a whole enforces a confidentiality policy, i.e., by analysing how information flows within the system. The analysis must show that information controlled by a confidentiality policy cannot flow to a place where that policy is violated. Thus, the confidentiality policy we wish to enforce is an information flow policy, and the mechanism that enforces it is an information flow analysis.

Information flow analysis is a technique that has recently become an active research topic. In general, the approach of information flow security is based on the notion of interference [1]. Informally, interference exists inside a system when private data affect public data, e.g., an attacker might guess private data from observing public data. Noninterference, i.e., the absence of interference, is often used to prove that an information system is secure.

Noninterference is required for applications where the users need their private data strictly protected. However, many practical IoT applications might leak minor information. Such systems include password checkers, cryptographic operations, etc. For instance, when an attacker tries to guess the password: even when the attacker makes a wrong guess, secret information has been leaked, i.e., it reveals information about what the real password is not. Similarly, there is a flow of information from the plain-text to the cipher-text, since the cipher-text depends on the plain-text. These applications are rejected by the definition of noninterference.

However, insecurity arises only when the interference exceeds a specific threshold, i.e., a permitted amount of interference. If the interference in the system is small enough, e.g., below a threshold given by a specific security policy, the system is considered to be secure. The security analysis that determines how much information flows from the high level, i.e., secret data, to the low level, i.e., public output, is known as quantitative information flow. It is concerned with measuring the leakage of information in order to decide whether the leakage is tolerable.

Qualitative information flow analysis, i.e., noninterference, aims to determine whether a program leaks private information or not. Thus, these absolute security properties always reject a program if it leaks any information. Quantitative information flow analysis offers a more general security policy, since it gives a method to tolerate a minor leakage, i.e., by computing how much information has been leaked and comparing this with a threshold. By adjusting the threshold, the security policy can be applied to different applications; in particular, if the threshold is 0, the quantitative policy reduces to the qualitative one. The idea of quantitative information flow analysis has been discussed in detail in [2], one of our papers; readers can refer to it for more information.

1.2 Integrity

Integrity means maintaining and assuring the accuracy and completeness of data. However, during wireless transmission in IoT applications, messages can become erroneous for many reasons, e.g., attenuation, distortion, or the addition of noise. An error means the receiver cannot correctly decode the signal to obtain the right symbol. In order to protect data against errors, channel coding, i.e., error-correcting codes, is required. Error-correcting codes ensure proper performance of IoT systems. They ensure the integrity of communication links in the presence of noise, distortion, and attenuation [3, 4, 5, 6]. The use of a parity bit as an error-detecting mechanism is one of the simplest and most well-known schemes used in digital communication. Data is partitioned into blocks. To each block, an additional bit is appended to make the number of bits which are 1 in the block, including the appended bit, an even number. If a single bit error occurs within the block, the number of 1's becomes odd. Hence, this allows for the detection of single errors [7, 8].
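As an illustration, the even-parity scheme just described can be sketched in a few lines of Python (a minimal sketch; the function names are our own):

```python
def add_parity(block):
    """Append a bit that makes the total number of 1s in the block even."""
    return block + [sum(block) % 2]

def check_parity(block):
    """True iff the block, including its parity bit, has an even number of 1s."""
    return sum(block) % 2 == 0

data = [1, 0, 1, 1]           # three 1s, so the appended parity bit is 1
sent = add_parity(data)        # [1, 0, 1, 1, 1]
received = sent.copy()
received[2] ^= 1               # a single bit error flips the overall parity
print(check_parity(sent), check_parity(received))   # True False
```

Note that a double bit error restores even parity, which is why this scheme detects only an odd number of errors.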

Error-correcting codes are often applied in telecommunications. Many early applications of coding were developed for deep-space and satellite communication systems. For example, signals from satellites and spacecraft are sent back to earth. The channel for such transmission is space and the earth's atmosphere. These communication systems not only have limitations on their transmitted power, but also introduce errors, due to solar activity and atmospheric conditions, into weak signals. Error-correcting codes are an excellent method to guarantee the integrity of these communication links. With the application of error-correcting codes, most of the data sent could be correctly decoded here on earth. As examples, a binary (32,6,16) Reed-Muller code was used during the Mariner and Viking missions to Mars around 1970, and a convolutional code was used on the Pioneer 10 and 11 missions to Jupiter and Saturn in 1972. The (24,12,8) Golay code was used in the Voyager 1 and Voyager 2 spacecraft transmitting color pictures of Jupiter and Saturn in 1979 and 1980. When Voyager 2 went on to Uranus and Neptune, the code was switched to a concatenated Reed-Solomon code for its substantially more powerful error-correcting capability.

Block and convolutional codes are also applied in the global system for mobile communications (GSM), the most popular digital cellular mobile communication system. Reed-Solomon and Viterbi codes have been used for nearly 20 years for the delivery of digital satellite TV. Low-density parity-check (LDPC) codes are now used in many recent high-speed communication standards, such as digital video broadcasting-S2 (DVB-S2), WiMAX, and 10GBase-T Ethernet [9].

Most error-correcting codes, in general, are designed to correct or detect errors. However, many channels cause erasures, i.e., the demodulator cannot decide whether the received waveform represents bit 0 or 1, in addition to errors. Basically, decoding over such channels can be done by first deleting the erased symbols and then decoding the resulting vector with respect to the punctured code, i.e., the code in which all erased positions have been removed. For any given linear code and any given maximum number of correctable erasures, Abdel-Ghaffar and Weber [7] introduced a parity-check matrix yielding parity-check equations that do not check any of the erased symbols and which are sufficient to characterize the punctured code. This allows for the separation of erasures from errors to facilitate decoding. However, these parity-check matrices have many redundant rows. To reduce decoding complexity, parity-check matrices with a small number of rows are preferred. This chapter proposes a method that can build such a matrix with a smaller number of rows.

Organization of the chapter: The rest of this chapter is organized as follows. Section 2 introduces the main ideas of error-correcting codes, errors and erasures. Section 3 presents methods to construct a parity-check matrix that can correct both errors and erasures. Section 4 discusses a general solution for the covering design, which is used in the proposal. Finally, Section 5 concludes the chapter.

2. Codes, errors and erasures

2.1 Linear block codes

Let C be an (n, k, d) linear block code. This means that C is a k-dimensional subspace of the n-dimensional vector space. The set of codewords of C can be defined as the null space of the row space of an r × n parity-check matrix H = [h_{i,j}] of rank n − k. Since a vector x is a codeword of C iff xH^T = 0, where the superscript T denotes the transpose, we can derive r parity-check equations (PCEs), as follows:

PCE_i : ∑_{j=1}^{n} h_{i,j} x_j = 0, for i = 1, 2, …, r. (E1)

An equation PCE_i(x) is said to check x in position j iff h_{i,j} ≠ 0.
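As a small illustration of these parity-check equations, the following Python sketch evaluates them over GF(2); the 3 × 6 matrix is hypothetical and chosen only for this example:

```python
# A hypothetical 3x6 binary parity-check matrix (illustrative only):
# row i defines PCE_i: sum_j h[i][j] * x_j = 0 (mod 2).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def is_codeword(x, H):
    """x is a codeword iff every parity-check equation evaluates to 0 mod 2."""
    return all(sum(h * s for h, s in zip(row, x)) % 2 == 0 for row in H)

def checks_position(H, i, j):
    """PCE_i is said to check position j iff h_{i,j} != 0 (0-based here)."""
    return H[i][j] != 0

print(is_codeword([1, 0, 0, 1, 0, 1], H))   # True: all three equations hold
print(is_codeword([1, 0, 0, 0, 0, 1], H))   # False: the first equation fails
```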

2.2 Erasures

Sometimes, at the receiver, the demodulator cannot decide which symbol the received waveform represents. In this case, we declare the received symbol an erasure. When the received vector contains erasures instead of errors, iterative decoding can be used [8].

Here, we summarize the iterative decoding procedure using an example of the (7,4,3) binary Hamming code, whose parity-check matrix yields three parity-check equations, referred to as Equations A, B, and C.

Since a vector x = (x1, x2, x3, x4, x5, x6, x7) is a codeword iff xH^T = 0, every codeword has to satisfy these three parity-check equations.

Assume that the received vector is (*, *, 0, 1, 0, *, 0), where an erased symbol is denoted by *. Equation A checks x1, x3, x4, and x5. If exactly one of these four symbols is erased, it can be retrieved from this equation. Thus, x1 = 1, since x3 = 0, x4 = 1, and x5 = 0. Similarly, we can derive x2 = 1 and x6 = 0 from Equations B and C. Therefore, iterative decoding decides that the transmitted codeword is 1101000.

Iterative decoding is successful iff the erasures do not fill the positions of a nonempty stopping set. A stopping set is a set of positions such that no parity-check equation checks exactly one symbol in these positions. The performance of iterative decoding techniques for correcting erasures depends on the sizes of the stopping sets associated with the parity-check matrix representing the code. A parity-check matrix with redundant rows can benefit the decoding performance, i.e., reduce the sizes of the stopping sets, while increasing the decoding complexity. More information on stopping sets can be found in [2, 10, 11].
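The iterative procedure can be sketched as follows. The (7,4,3) Hamming parity-check matrix used here, whose columns are the binary representations of 1 to 7, is a standard choice and not necessarily the matrix of the example above:

```python
# Hypothetical parity-check matrix of the (7,4,3) Hamming code; its
# columns are the binary representations of 1..7.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def iterative_erasure_decode(received, H):
    """received holds 0/1 symbols and None for erasures. Repeatedly find a
    parity-check equation that checks exactly one erased symbol and solve
    it; stop when all erasures are filled or no such equation exists."""
    r = list(received)
    progress = True
    while progress and None in r:
        progress = False
        for row in H:
            erased = [j for j, s in enumerate(r) if s is None and row[j]]
            if len(erased) == 1:
                j = erased[0]
                # x_j must make the equation sum to 0 mod 2
                r[j] = sum(row[k] * r[k] for k in range(len(r))
                           if r[k] is not None) % 2
                progress = True
    return r

# Erase two symbols of the codeword 1110000 and recover them.
print(iterative_erasure_decode([None, None, 1, 0, 0, 0, 0], H))
```

If the loop stops with erasures remaining, they cover a stopping set, as discussed above.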

2.3 Separation of errors from erasures

In this part, we discuss how to handle errors together with erasures. In this case, we can apply an algorithm using trials in which the erasures are replaced by 0 or 1, and the resulting vector is decoded by a decoder capable of correcting errors. For a binary code, two trials are sufficient [8, 12].

For example, if C is a binary (n, k) code with Hamming distance d = 2t_e + t_f + 1, then C can correct t_e errors and t_f erasures. In the presence of no erasures, C is able to correct up to t_e + ⌊t_f/2⌋ errors. Let r be a received vector having at most t_e errors and at most t_f erasures. Suppose the decoder constructs two vectors r_0 and r_1, where r_i is obtained by filling all erasure positions in r with the symbol i, i = 0, 1. Since C is binary, in either r_0 or r_1, at least half of the erasure locations hold the right symbols. Hence, either r_0 or r_1 has distance at most t_e + ⌊t_f/2⌋ from the transmitted codeword. Thus, any standard error-correction technique can be applied. If the decoder decodes both r_0 and r_1 to codewords, and these codewords are the same, then this is the transmitted codeword. If they are different, then there is one, and only one, vector requiring at most t_e changes in nonerasure positions to become the right codeword. More information on this algorithm can be found in [6].
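A minimal sketch of this two-trial algorithm, using a toy binary (5,1,5) repetition code and brute-force nearest-codeword decoding (both illustrative choices, not codes used elsewhere in the chapter):

```python
# Toy binary (5,1,5) repetition code: d = 5 = 2*t_e + t_f + 1, so up to
# t_e = 1 error and t_f = 2 erasures are correctable together.
CODE = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_codeword(v):
    return min(CODE, key=lambda c: hamming(c, v))

def decode_errors_erasures(r):
    """Two trials: fill every erasure (None) with 0, then with 1, decode
    both filled vectors, and keep the codeword that needs the fewest
    changes in the non-erased positions of r."""
    trials = [nearest_codeword(tuple(fill if s is None else s for s in r))
              for fill in (0, 1)]
    def dist_outside_erasures(c):
        return sum(c[j] != r[j] for j in range(len(r)) if r[j] is not None)
    return min(trials, key=dist_outside_erasures)

# One error (third position) plus two erasures; 11111 was transmitted.
print(decode_errors_erasures((1, None, 0, 1, None)))   # (1, 1, 1, 1, 1)
```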

Abdel-Ghaffar and Weber proposed another way of decoding over such channels [7]. First, all erasures are deleted from the received vector. Errors in the resulting vector are then corrected based on the punctured code, i.e., the code whose codewords consist of the symbols in the positions which are not erased. After all errors have been corrected, the erasures are recovered by iterative decoding.

The decoder can compute a parity-check matrix for the punctured code after receiving the vector. However, this leads to a time delay which is unacceptable, especially in IoT applications. To reduce the time delay, we can store the parity-check matrices of all punctured codes corresponding to all erasure patterns. The drawback of this solution is the requirement of huge memory storage at the decoder.

Abdel-Ghaffar and Weber proposed using a separating matrix with redundant rows, providing enough parity-check equations which do not check any of the erased symbols and which are sufficient to form a parity-check matrix for the punctured code obtained by deleting all erasures [7]. Having these parity-check equations not check any of the erased symbols leads to the concept of separation of errors from erasures.

The basic concept of this decoding technique can be illustrated by the following example. We consider an (8,4,4) binary extended Hamming code with the parity-check matrix H shown in Figure 1.

Figure 1.

A parity check matrix for the code C.

A normal parity-check matrix has only four rows, such as the first four rows of this separating matrix. Allowing redundant rows simplifies the decoding of erasures in addition to errors. Assume that we receive a vector r = 0*011000, with one erasure, denoted by *, in the second position. Applying the decoding technique mentioned above, we first delete the erasure; the resulting vector is r' = 0011000. This vector r' can be considered a codeword of the (7,4,3) punctured code. In H, the first, second, and sixth rows have zeros in the second position. It means that the three corresponding parity-check equations do not check the erased symbol. Based on these three rows, we can form a parity-check matrix H' for the punctured code, as shown in Figure 2.

Figure 2.

A parity check matrix for the punctured code.

Using H', r' is decoded into 0011010. Putting back the erasure, we get 0*011010. The third row of H, which checks the erased symbol, can be used to recover the erasure. Thus, the decoded codeword corresponding to r is 01011010.

A normal parity-check matrix cannot be used for decoding both errors and erasures together in this way. Decoding is feasible when we pay the price of storing a parity-check matrix with more rows than a normal one. In order to reduce the memory storage as well as the decoding complexity, a parity-check matrix with a small number of rows is preferred.
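The row-selection step at the heart of this technique can be sketched generically in Python; the redundant seven-row matrix below, built from all nonzero GF(2) combinations of three Hamming parity checks, is a hypothetical illustration, not the matrix of Figure 1:

```python
def separate_rows(H, erased):
    """Keep the rows of H that are zero in every erased position; their
    restriction to the non-erased positions gives parity-check equations
    for the punctured code, so errors can be corrected separately."""
    keep = [j for j in range(len(H[0])) if j not in erased]
    return [[row[j] for j in keep] for row in H
            if all(row[j] == 0 for j in erased)]

# A redundant 7-row matrix: all nonzero GF(2) combinations of the three
# parity checks of a (7,4,3) Hamming code (hypothetical illustration).
base = [[0, 0, 0, 1, 1, 1, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [1, 0, 1, 0, 1, 0, 1]]
H = [[sum(base[i][j] for i in range(3) if mask >> i & 1) % 2
      for j in range(7)] for mask in range(1, 8)]

# Rows not checking an erasure in the second position (index 1):
print(len(separate_rows(H, {1})))   # 3
```

In the chapter's example, the analogous selection keeps the first, second, and sixth rows of the matrix in Figure 1.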

Given any linear code and any given maximum number of correctable erasures, Abdel-Ghaffar and Weber introduced separating matrices yielding parity-check equations that do not check any of the erased symbols and which are sufficient to characterize all punctured codes corresponding to this maximum number of erasures [7]. This allows for the separation of erasures from errors to facilitate decoding. However, their proposal yields separating matrices which typically have many redundant rows. The following part of this chapter discusses an improved method, applying a covering design, to construct such separating matrices with a smaller number of rows.

3. How to build an l-separating matrix

3.1 Set separation

Let H = [h_{i,j}] of rank n − k be an r × n parity-check matrix of C, r ≥ n − k. Let S be a subset of {1, 2, …, n} and T be a subset of {1, 2, …, r}, and define H[T, S] = [h_{i,j}], with i ∈ T and j ∈ S, to be the |T| × |S| submatrix of H. For the code C of length n, define C_{S̄} = {c_{S̄} : c ∈ C} to be the punctured code consisting of all codewords of C in which the symbols in the positions indexed by S are deleted, where S̄ = {1, 2, …, n}\S. Clearly, C_{S̄} is a linear code over GF(q) of length n′ = |S̄|, dimension k′ ≤ k, and Hamming distance d′ ≥ d − |S|. Let S̃ = {i : 1 ≤ i ≤ r, h_{i,j} = 0 for all j ∈ S}, and define H_S = H[S̃, S̄].

Definition 1 [7]: A parity-check matrix H separates S ⊆ {1, 2, …, n} iff H_S is a parity-check matrix of C_{S̄}.

Theorem 1 [7]: A parity-check matrix H of an (n, k, d) linear code C separates a set S of size |S| ≤ d − 1 iff H_S has rank n − k − |S|.

Definition 2 [7]: If H separates all sets S of size l, for a fixed l ≤ min{d, n − k} − 1, it is called l-separating.

If H is an l-separating parity-check matrix of the code C, then, based on H, we can construct a parity-check matrix for any code punctured in up to a fixed number l of symbols. H has two features:

  • H can separate erasures from errors, since H has enough parity-check equations that do not check any erased symbols and that are sufficient to characterize the punctured code. It means that errors in the punctured code, which is formed by deleting the erased symbols, can be corrected using a submatrix of H.

  • In case l ≤ min{d, n − k} − 1, H has no stopping set of size l or less. For any pattern of l or fewer erasures, not only are there enough parity-check equations that do not check any of the erased symbols and that characterize the punctured code, but there is also a parity-check equation that checks exactly one of the erased symbols. It means that after all errors have been corrected, the erasures can be recovered by the iterative decoding procedure.

3.2 Separating matrix

Let H be a full-rank parity-check matrix, and let S_i ⊆ {1, 2, …, n}, for i = 1, 2, …, C(n, l), be the distinct subsets of {1, 2, …, n} of size l. For each i, the submatrix of H consisting of the columns indexed by S_i trivially has rank l, since l ≤ min{d, n − k} − 1 ≤ d − 1 and any d − 1 columns of H are linearly independent. By elementary row operations on H, we can obtain an (n − k) × n matrix H_i, for each i = 1, 2, …, C(n, l), of rank n − k, such that its last n − k − l rows have zeros in the positions indexed by S_i (Figure 3).

Figure 3.

Independent-row separation.

Let H_I be the matrix whose rows are the union of the sets of the last n − k − l rows of the matrices H_i, for i = 1, 2, …, C(n, l). H_I is an l-separating matrix of the code C, and it has at most C(n, l)·(n − k − l) rows [7] (Figure 4).

Figure 4.

An l-separating matrix.

3.3 A more efficient separating matrix

In this section, we propose a method that constructs an l-separating matrix with a smaller number of rows. This method implements the idea of a covering design [13, 14]. Basically, given 1 ≤ t ≤ u ≤ v, a (v, u, t) covering design is a collection of u-element subsets of V = {1, 2, …, v}, called blocks, such that each t-element subset of V is contained in at least one block, e.g., {1, 2} is contained in {1, 2, 3}.

For our specific situation, consider an (n, b, l) covering design. Let B = {B_i} be a set of b-element subsets, 1 ≤ l ≤ b ≤ min{d, n − k} − 1, such that every l-element subset S_i is contained in at least one member of B. Assign to each S_i, i = 1, 2, …, C(n, l), an element B_j of B such that S_i is contained in B_j. The submatrix of H consisting of the columns indexed by B_j has rank b. For any B_j, by elementary row operations on H, we can obtain an (n − k) × n matrix of rank n − k such that its last n − k − b rows have zeros in the positions indexed by B_j. After arranging columns, we obtain a matrix H_j^(1) with the following format (Step 1), Figure 5.

Figure 5.

Row separation—Step 1.

Consider the set S_i assigned to B_j. By further elementary row operations, H_j^(1) can be changed into a matrix such that rows l + 1, l + 2, …, b have zeros in the positions indexed by S_i, and rows b + 1, b + 2, …, n − k have zeros in the positions indexed by B_j. After column arrangement, we obtain a matrix with the following format (Step 2), Figure 6.

Figure 6.

Row separation—Step 2.

Following this method, if S_i and S_i′ belong to the same B_j, the last n − k − b rows in H_i and H_i′ are the same. It follows that the matrix whose rows are the union of the last n − k − b rows of H_j, for j = 1, 2, …, |B|, and the rows l + 1, l + 2, …, b of H_i, for i = 1, 2, …, C(n, l), is an l-separating parity-check matrix of C. Let B(n, b, l) denote the minimum size of B, i.e., B(n, b, l) = min |B|. This matrix has at most (n − k − b)·B(n, b, l) + C(n, l)·(b − l) rows. Since B(n, b, l) is smaller than C(n, l), this upper bound on the number of rows is strictly smaller than the bound C(n, l)·(n − k − l) of Section 3.2; in the case b = l, the two constructions coincide. For a given l, we can choose an appropriate b to achieve the best result (Figure 7).

Figure 7.

A more efficient l-separating matrix.
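To compare the two constructions numerically, the bounds of Sections 3.2 and 3.3 can be evaluated for a small example. The parameters below (the (8,4,4) code with l = 2 and b = 3) are an illustrative choice, and |B| is bounded here by the Approach 1 covering of Section 4 rather than by the true minimum B(n, b, l):

```python
from math import comb

# Parameters of the chapter's (8,4,4) extended Hamming example, with an
# illustrative choice of l = 2 erasures and covering blocks of size b = 3.
n, k, d = 8, 4, 4
l, b = 2, 3

# Section 3.2: one matrix per l-subset, at most C(n,l)*(n-k-l) rows.
rows_plain = comb(n, l) * (n - k - l)

# Section 3.3: |B| bounded by the Approach 1 covering of Section 4,
# which uses C(n-b+l, l) blocks (not the true minimum B(n, b, l)).
B_size = comb(n - b + l, l)
rows_covering = (n - k - b) * B_size + comb(n, l) * (b - l)

print(rows_plain, rows_covering)   # 56 49
```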

4. Covering design

Consider a (v, u, t) covering design, where 1 ≤ t ≤ u ≤ v.

Example 1: Let v = 8, u = 3, and t = 2. There are C(8, 2) = 28 subsets of 2 elements and C(8, 3) = 56 subsets of 3 elements of V = {1, 2, …, 8}. However, we need at most 21 subsets of 3 elements, i.e., {{1,2,3}, {1,2,4}, {1,2,5}, {1,2,6}, {1,2,7}, {1,2,8}, {1,3,4}, {1,3,5}, {1,3,6}, {1,3,7}, {1,3,8}, {1,4,5}, {1,4,6}, {1,4,7}, {1,4,8}, {1,5,6}, {1,5,7}, {1,5,8}, {1,6,7}, {1,6,8}, {1,7,8}}, to form all 28 subsets of 2 elements. For example, based on the subset {1,2,3}, we can form {1,2}, {2,3}, and {1,3}. Using the 21 subsets of size 3 listed above, we can construct all 28 subsets of size 2.
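Example 1 can be verified mechanically; this sketch rebuilds the 21 triples and checks that every 2-subset is covered:

```python
from itertools import combinations

V = range(1, 9)
# The 21 triples of Example 1: {1} together with every pair of {2,...,8}.
blocks = [{1, a, b} for a, b in combinations(range(2, 9), 2)]

# Every 2-subset of V must be contained in at least one block.
covered = all(any(set(pair) <= blk for blk in blocks)
              for pair in combinations(V, 2))
print(len(blocks), covered)   # 21 True
```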

The covering design problem has been investigated for many years. However, no general optimal solution is known for all triples (v, u, t). In this section, we propose a covering design valid for all triples (v, u, t). This design is not optimal, but it gives a general solution to the problem.

4.1 Approach 1

First, we show that, with at most C(v − u + t, t) subsets of size u, we can form all C(v, t) subsets of size t.

  1. Take the first u − t elements, i.e., {1, 2, …, u − t}, out of V = {1, 2, …, v}.

  2. The rest of the set is {u − t + 1, u − t + 2, …, v − 1, v}. Based on these elements, form all subsets of size t. The number of such subsets is C(v − u + t, t).

  3. Put the first u − t elements into each subset of size t to obtain subsets of size u. With these C(v − u + t, t) subsets of size u, it is easy to see that we can form all C(v, t) subsets of size t.
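The three steps of Approach 1 can be sketched directly; the helper names are our own, and the triples (8, 3, 2) and (9, 5, 4) from the examples are used as checks:

```python
from itertools import combinations
from math import comb

def covering_approach1(v, u, t):
    """Approach 1: the prefix {1,...,u-t} joined with every t-subset of
    the remaining elements gives C(v-u+t, t) blocks of size u."""
    prefix = set(range(1, u - t + 1))
    rest = range(u - t + 1, v + 1)
    return [prefix | set(tail) for tail in combinations(rest, t)]

def covers_all(blocks, v, t):
    """Every t-subset of {1,...,v} must lie inside at least one block."""
    return all(any(set(s) <= b for b in blocks)
               for s in combinations(range(1, v + 1), t))

for (v, u, t) in [(8, 3, 2), (9, 5, 4)]:
    blocks = covering_approach1(v, u, t)
    print(len(blocks) == comb(v - u + t, t), covers_all(blocks, v, t))
```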

4.2 Approach 2

By modifying Approach 1, we show that some u-element subsets can be merged to reduce |B|.

  1. Take the first u − t elements, i.e., {1, 2, …, u − t}, out of V = {1, 2, …, v}.

  2. The rest of the set is {u − t + 1, u − t + 2, …, v − 1, v}. Based on these elements, form all subsets of size t and arrange them into columns based on the following rules:

    • Elements in each subset are arranged in ascending order, e.g., {1,2,3}.

    • Subsets are arranged into columns. Subsets are in one column iff their first t − 1 elements are the same (except for the special column mentioned below). Hence, subsets in one column differ from each other only in the last element. The subset with the smaller last element is listed above.

    • Special column: In case t ≥ 2, we arrange all subsets containing both elements v − 1 and v in one column and name it the special column. It is easy to see that there are C(v − u + t − 2, t − 2) subsets in this column.

  3. Put the first u − t elements into each subset of size t to obtain subsets of size u.

  4. If the number of subsets in the longest column is greater than or equal to three and the special column exists, we can merge the last two subsets, which contain either v − 1 or v, in each column (except the special column) into one, i.e., the merged set, by this rule:

    • Take the union of the two subsets, i.e., the size of the union subset is u + 1.

    • Eliminate the first element of the union subset, i.e., its size is now u.

We can merge the last two subsets in each column (except the special column) because: (a) the first u − 1 elements of the last two subsets also appear in another subset in the columns; thus, any subset of size t formed using these u − 1 elements can be formed from another subset in the columns; and (b) any subset of size t containing {1, v − 1} or {1, v} can be formed from subsets in the special column.

Example 2: Let v = 9, u = 5, and t = 4. Following the first three steps of Approach 2, we get:

First, take {1} out of the set. Following Step 2, form all subsets of size 4 and arrange them into columns. Put {1} back into each subset of size 4 to obtain subsets of size 5. Subsets shown in boxes are those which can be merged (Figure 8).

Figure 8.

The first three steps of Approach 2.

The merging step (Step 4): For example, consider the first column. The two subsets in the box are {1,2,3,4,8} and {1,2,3,4,9}. First, take the union of the two, i.e., {1,2,3,4,8,9}, and then eliminate the first element; this results in {2,3,4,8,9}.

Therefore, Step 4 of Approach 2 gives the following result. It is easy to see that any subset of size 4 can be formed from the subsets in Figure 9.

Figure 9.

The last step of Approach 2.

The number of subsets saved by merging is equal to the number of columns in which a merge is performed, i.e., C(v − u + t − 2, t − 1). Therefore, with at most C(v − u + t, t) − C(v − u + t − 2, t − 1) subsets of size u, we can form all C(v, t) subsets of size t.
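As a check on Steps 1–4 and on this count, Approach 2 can be sketched for the parameters of Example 2. The column grouping and merge rule below are one possible reading of the construction, restricted to the case u − t = 1 used in the example:

```python
from itertools import combinations
from math import comb

def covering_approach2(v, u, t):
    """Steps 1-4 of Approach 2 (one possible reading, for u - t = 1):
    group the t-subsets of the rest into columns by their first t-1
    elements, set aside the special column (subsets containing both v-1
    and v), and merge the last two subsets of every other column."""
    prefix = tuple(range(1, u - t + 1))
    tails = list(combinations(range(u - t + 1, v + 1), t))
    special = [tl for tl in tails if v - 1 in tl and v in tl]
    columns = {}
    for tl in tails:
        if tl not in special:
            columns.setdefault(tl[:-1], []).append(tl)
    blocks = [set(prefix + tl) for tl in special]
    for col in columns.values():
        if len(col) >= 2 and col[-1][-1] == v and col[-2][-1] == v - 1:
            merged = set(prefix + col[-2]) | set(col[-1])  # size u + 1
            merged.discard(min(merged))                    # back to size u
            blocks.append(merged)
            col = col[:-2]
        blocks.extend(set(prefix + tl) for tl in col)
    return blocks

blocks = covering_approach2(9, 5, 4)
print(len(blocks), comb(8, 4) - comb(6, 3))   # 50 50
```

Every 4-subset of {1, …, 9} is indeed contained in one of the 50 blocks, matching the count C(8, 4) − C(6, 3) = 70 − 20 = 50 for Example 2.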

5. Conclusions

This chapter discusses how to ensure confidentiality and integrity for data in IoT systems. The chapter focuses more on integrity, which can be ensured via the implementation of error-correcting codes. Separating parity-check matrices are useful for decoding over channels causing both errors and erasures. We propose a way to build a separating parity-check matrix with a smaller set of rows. This method reduces both decoding complexity and memory storage. Besides, we also present a covering design. This design is not optimal, but it gives a general solution for all triples (v, u, t).

Acknowledgments

The author is supported by The University of Danang—University of Science and Technology through grants T2019-02-13 and T2019-02-14.


© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

