Abstract
Due to the current level of telecommunications development, fifth-generation (5G) communication systems are expected to provide higher data rates, lower latency, and improved scalability. To ensure the security and reliability of data traffic generated from wireless sources, 5G networks must be designed to support security protocols and reliable communication applications. The coding and processing operations used when transmitting both binary and non-binary data over nonstandard communication channels are described. A subclass of linear binary codes is considered that are simultaneously Varshamov-Tenengol'ts codes and are used for channels with insertions and deletions of symbols. The use of these codes is compared with Hidden Markov Model (HMM)-based systems for detecting intrusions in networks using flow watermarking, both of which provide a high true positive rate. The principles of using Bose-Chaudhuri-Hocquenghem (BCH) codes, non-binary Reed-Solomon codes, and turbo codes, as well as concatenated code structures, to ensure noise immunity when reproducing information in Helper-Data Systems are considered. Examples of biometric systems organized on the basis of these codes, operating according to the Fuzzy Commitment Scheme (FCS) and providing FRR < 1% for authentication, are given.
Keywords
- linear codes
- Varshamov-Tenengol'ts codes
- non-binary turbo codes
- Reed-Solomon codes
- concatenated codes
- flow watermarking
- biometric system
1. Introduction
Engineers and researchers around the world have been using various error correction codes (ECCs) for almost a century to enable communication and combat noise in information channels. In addition to communication, ECCs have found many other uses, including watermarking and intrusion detection, cryptography, and information security. Digital watermarking is the process of embedding a digital code into some public data. Today, this technology is widely used not only in multimedia processing but also in network traffic monitoring. In this case, the input patterns, which are easily identified when the watermarked flows cross an observation point, allow the creation of a mechanism to scan the network for harmful activity. This procedure finds applications both in securing network connections and in detecting intrusions into them.
On the other hand, when providing secure access to any data, it becomes necessary to verify the user by analyzing their password, which requires reliable password storage. To solve this problem, biometric methods of organizing secure access to the system are widely used, as they reduce the risks of storing passwords, which have long been a weak point in security systems. This chapter will discuss some types of ECC and how they can be used to help ensure the security and reliability of information.
In recent years, the technique of applying the ECC has been undergoing changes due to the use of machine learning (ML) methods and, in particular, deep learning (DL). A good review of the recent advancements in DL-based communication was made by Qin et al. [1], where the authors described the use of this technique for channel modeling, modulation recognition, and improvement of decoding methods. In recent papers, the authors have considered in more detail the DL methods for decoding known codes [2] and, moreover, for constructing an ECC based on intelligent methods [3]. Despite the increasing use of the ML technique for ECC, it is important to understand both the principles of describing known ECC based on algebraic constructions that lead to elegant decoding algorithms and their application in non-standard communication channels.
The rest of the chapter is organized as follows. First, we present the basic encoding-decoding principles of the binary and non-binary ECC used against substitution errors and symbol insertions and deletions in Section 2. Then we discuss the flow watermarking techniques for intrusion detection in Section 3. In Section 4, we describe the use of various ECC types in biometric systems (BSs) for solving the problem of authentication, and we present our conclusion in Section 5.
2. Error-correcting codes
2.1 Linear codes
At the present stage of the ECC theory and technology development, more and more complex code structures attract our attention. Although coding algorithms are becoming more complex and require powerful computing resources, in recent years, researchers have increasingly turned to known codes and the mathematical descriptions developed for them. Such codes, for example, are
There are many good tutorials about error-correcting codes (for example, see [4, 5]), so only the necessary definitions are used in the entire chapter. We define a
We start with the description of linear codes. A linear
Generally, a binary code
According to this principle, the decoder selects a codeword to minimize the Hamming distance of the matched codeword relative to the received codeword
The decoder performs the following steps: the syndrome calculation of the codeword
determination of the most likely error vector
The standard array for a binary (
For linear codes, it is important that the number of syndromes, 2
If we take a linear (6,3,3)-code
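The three decoding steps above can be sketched in Python. The sketch below uses the standard binary (7,4) Hamming code (an illustrative choice, not the text's example); its parity-check matrix has the binary representation of j as column j, so every nonzero syndrome directly identifies a single-bit error.

```python
# Parity-check matrix of the (7,4) Hamming code; column j is j in binary.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(word):
    # Step 1: s = H * r^T over GF(2).
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

# Step 2: table of most likely error vectors (coset leaders) per syndrome;
# for the Hamming code each nonzero syndrome maps to one single-bit error.
table = {(0, 0, 0): [0] * 7}
for i in range(7):
    e = [0] * 7
    e[i] = 1
    table[syndrome(e)] = e

def decode(received):
    # Step 3: subtract (XOR over GF(2)) the error vector from the received word.
    e = table[syndrome(received)]
    return [r ^ b for r, b in zip(received, e)]

codeword = [1, 0, 1, 1, 0, 1, 0]     # a valid codeword: its syndrome is zero
corrupted = codeword[:]
corrupted[4] ^= 1                    # one substitution error
assert decode(corrupted) == codeword
```

The syndrome table plays the role of the standard array's coset leaders: one lookup replaces a search over all codewords.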
2.2 Cyclic codes
Binary
In such a polynomial representation, the presence or absence of the formal variable
Cyclic codes have the property that all code polynomials
Sometimes, to find a generating polynomial
Then, a parity-check matrix for a cyclic code is given by using as rows the binary vectors associated with the first
It should be noted that the principles of representation and encoding and decoding of polynomial codes are based on the concepts of both simple and extended finite fields, calculations in which can be found in a number of textbooks [5]. Below, we will only briefly use the basic concepts.
Representatives of more powerful correction codes are the Bose-Chaudhuri-Hocquenghem (BCH) codes, which provide a suitable selection of block lengths, code rates, and correcting capacity. BCH codes are cyclic codes that are constructed by specifying the roots of their generator polynomials, i.e., a BCH code of
Here, LCM is the least common multiple. Thus, we have a code with a length of
For example, consider GF(2^4),
We get a double-error-correcting binary BCH (15,7,5) code.
The main idea of decoding binary BCH codes is to use the elements of
2.3 Reed-Solomon codes
Reed-Solomon codes are multiple-error-correcting non-binary codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. There are two main representations of Reed-Solomon codes: the original representation and the BCH-based representation, which is the most common because BCH-based decoders are more efficient than those for the original representation. In the first case, if
It follows from Eq. (3) that the minimum distance of RS (
The decoding algorithms of RS codes are similar to those of binary BCH codes. As shown above, setting the primitive powers of the root as evaluation points makes the original Reed-Solomon code cyclic. Reed-Solomon codes in the BCH representation are always cyclic because BCH codes are cyclic. In this regard, they are characterized by the same decoding methods as cyclic codes. In order to choose the correct algorithm that meets the requirements of the system, it is necessary to understand its purpose, which is determined by the RS decoder operation. There are the cyclic decoding evaluation algorithm, the Peterson-Gorenstein-Zierler (PGZ) algorithm, the Berlekamp-Massey (BM) algorithm, the Sugiyama algorithm with and without erasures, and list decoding algorithms.
Reed-Solomon codes can correct not only errors but also erasures, i.e., so-called "lost" symbols. If
In 1997, Sudan introduced an algorithm that allows the correction of errors beyond half the minimum distance of the code. This algorithm produces a list of codewords (it is a list decoding algorithm) and is based on interpolation and factorization of polynomials over
The algebraic decoding methods described above are generally hard decision decoding (HDD) methods, which means that for each symbol a hard decision is made about its value. However, the decoder may also use information about the reliability of a symbol (for example, the demodulator's confidence in the correctness of the symbol), which makes it possible to build soft decision decoders (SDDs). The advent of turbo codes, which use iterative soft decision decoding techniques to achieve error correction efficiency, has spurred interest in applying SDD to conventional algebraic codes.
2.4 Turbo codes
Turbo codes involve the concatenation of two recursive systematic convolutional (RSC) codes connected serially or in parallel, with an interleaver between them. Due to space limitations in this section, we omit the description of convolutional codes. The iterative decoding of constituent codes starts individually, either serially or in parallel, based on inputs derived from the channel and typically some a priori information. Information from each data symbol propagates through the overall code structure in time. The optimal decoding algorithm for each component code in terms of minimizing the probability of error given independent inputs is the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm [8], realizing the maximum a posteriori (MAP) criterion decoding. The resulting symbol probabilities are then used to find the log-likelihood ratio (LLR) for
2.5 VT codes
Often, to describe and compare codes, a channel model is used in which information is transmitted. However, in the presence of noise in the channel, symbols may be received with errors. This type of error is sometimes called a substitution error. The influence of interference in communication channels also causes synchronization errors associated with the insertion of additional symbols or the deletion of transmitted symbols, which are sometimes called "indels." Therefore, there is a strong reason to develop codes that not only correct substitution errors but also deal with "indels."
One of the first codes to deal with synchronization errors caused by symbol deletion were the Varshamov-Tenengol'ts (VT) codes. Below, we briefly consider this construction.
Given a parameter
These codes are single-error-correcting codes and optimal for
For example, after calculation
Considering the simplicity of calculating the parameters of VT codes, we would like to have a linear encoder for efficient mapping of binary message sequences into codewords. For binary VT codes, such an encoder was proposed by Abdel-Ghaffar and Ferreira [12]. They constructed a systematic encoder that maps
Now we can introduce the “parity” bits denoted by
In an example, see [12] of code for
However, VT0(
Using this algorithm will allow constructing a subcode that has at least
Representing
Thus,
Recently, these codes have again attracted interest, as evidenced by the publication [15], where an encoding method was proposed for a non-binary systematic VT code.
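As a minimal illustration, the following sketch assumes the standard definition of the binary VT code VT_a(n): all length-n words whose weighted checksum Σ i·x_i is congruent to a modulo n + 1. Single-deletion correction is done here by brute-force reinsertion rather than by an efficient decoding algorithm; the single-deletion-correcting property guarantees that exactly one codeword is reachable.

```python
from itertools import product

def vt_checksum(x):
    """Weighted checksum: sum of i * x_i over 1-indexed positions."""
    return sum(i * b for i, b in enumerate(x, start=1))

def vt_code(n, a=0):
    """All length-n binary words with checksum congruent to a mod (n + 1)."""
    return [x for x in product([0, 1], repeat=n)
            if vt_checksum(x) % (n + 1) == a]

def correct_deletion(y, n, a=0):
    """Recover the unique VT_a(n) codeword from y with one symbol deleted.
    Brute-force reinsertion; the VT property guarantees a unique answer."""
    candidates = {x for pos in range(n) for bit in (0, 1)
                  for x in [tuple(y[:pos]) + (bit,) + tuple(y[pos:])]
                  if vt_checksum(x) % (n + 1) == a}
    assert len(candidates) == 1
    return candidates.pop()

code = vt_code(8)                # the codewords of VT_0(8)
c = code[5]
y = c[:3] + c[4:]                # delete the symbol at position 4
assert correct_deletion(y, 8) == c
print(len(code))
```

The uniqueness assertion is exactly the single-deletion-correcting property: no two codewords of VT_a(n) share a common subsequence of length n - 1.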
3. Use of error-correcting coding in flow watermarking
3.1 HMM-based model for watermark embedding and extraction
The watermark embedding algorithm aims to detect any changes in the marked data or violations of its integrity. Content integrity is checked during the verification process. In this section, we discuss the application of ECC for watermark embedding in the context of traffic analysis (TA), used for such purposes as diagnostic monitoring, resource management, and intrusion detection. Intrusion detection systems attempt to detect intrusions by analyzing the network traffic with the use of watermark tracing [16]. If the embedded watermark is both reliable and unique, it is possible to analyze the watermarked return traffic and trace it back at intermediate nodes. This TA approach is referred to as "flow watermarking" (FW).
To prevent an attacker from observing and analyzing the delayed packets and then eliminating the embedded watermarks, the developed FW schemes have to be "invisible" in the network. An example of a stepping-stone detection scenario with FW is depicted in Figure 1, where an Attacker attacks a Victim while hiding their identity. Fortunately, FW can be applied to trace back the attack source.
FW is often implemented on the basis of
The presence of contiguous packet merging leads to a telecommunication channel with deletion and/or substitution errors, and the appearance of jitter-induced bursting or splitting of packets also causes symbol insertions, which requires the appropriate choice of coding for reliable transmission of watermarks. Figure 2 demonstrates these phenomena. It follows from it that four packets 0, 1, 2, and 3 are sent, three packets 0, 2, and 3 are received, packet 1 is lost, and new packets 4 and 5 are added.
Most FW technologies use a carrier that modulates the transfer of watermark data. Gong et al. [18] embedded quantization index modulation (QIM) watermarks into IPDs and added a layer of ECC to handle watermark desynchronization and substitution errors. The authors developed a Hidden Markov Model (HMM) for a channel with dependent deletion and substitution errors, using a maximum likelihood decoding (MLD) algorithm paired with a forward-backward algorithm for the calculation of the posterior probabilities [5]. The schematic of the proposed system is shown in Figure 3.
This scheme uses an Encoder and Decoder to process the incoming watermark sequences in order to obtain the codewords
Due to the network artifacts described above, additional transformations must be performed in the encoder to improve noise immunity. For example, see [19], a
The
The whole scheme uses a secret key
The key used for security plays a supporting role in dealing with IDS channel errors during decoding. For example, see [18], if
Next, we will consider in more detail the principle of QIM for which modulation and demodulation are carried out by means of QIM Embedder and QIM Extractor, respectively (see Figure 3).
To embed the watermark, the IPD flow is modified so that each IPD is converted to an interval according to the even/odd multiplier of the quantization interval ∆/2, depending on the value of the 0/1 bit. Formally, this can be represented as:
Since packets can only be delayed by the QIM Embedder, it is possible to define the
The following QIM demodulation threshold function is used to recover the embedded bit
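A hedged sketch of QIM embedding and threshold demodulation along these lines is given below. Only the even/odd rule for multiples of ∆/2 and the delay-only constraint come from the text; the step DELTA and the IPD values are illustrative assumptions.

```python
import math

DELTA = 20.0  # quantization step in ms; an illustrative value, not from the text

def qim_embed(ipd, bit):
    """Delay the packet so the IPD becomes an even (bit 0) or odd (bit 1)
    multiple of DELTA/2; only delays are allowed, so we round upward."""
    q = math.ceil(ipd / (DELTA / 2))
    if q % 2 != bit:
        q += 1
    return q * (DELTA / 2)

def qim_extract(ipd):
    """Threshold demodulation: the parity of the nearest multiple of DELTA/2."""
    return round(ipd / (DELTA / 2)) % 2

ipds = [34.7, 18.2, 51.9, 26.3]      # illustrative inter-packet delays (ms)
bits = [1, 0, 1, 1]                  # watermark bits to embed
marked = [qim_embed(d, b) for d, b in zip(ipds, bits)]
assert all(m >= d for m, d in zip(marked, ipds))   # packets are only delayed
assert [qim_extract(m) for m in marked] == bits    # watermark recovered
```

Rounding upward keeps the embedder causal: a packet can be held back but never sent earlier than it arrived.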
Consider the example in Figure 2. Here, the first two IPDs
In general
Obviously, in the absence of packet loss or split, the watermark bit is inverted if the IPD jitter exceeds ∆/4. The jitter can be described by i.i.d. Laplace distributed with zero mean. Then the jitter substitution error probability can be estimated as:
where
The authors in [17, 18] used the concept of
After calculating these probabilities for all bits of the watermark sequence, the presence of a watermark in flow is determined based on the correlation value of the resulting sequence and the original one. For those interested in the details of mathematical calculations, one can refer to the original publications of the authors mentioned above.
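The detection step can be illustrated with a hedged simulation: Laplace jitter flips a watermark bit when it exceeds ∆/4, and the flow is declared watermarked when the normalized correlation between the recovered and original sequences exceeds a threshold. DELTA, SIGMA, and the flip condition here are illustrative assumptions consistent with the discussion above, not the authors' exact formulas.

```python
import math
import random

random.seed(1)
DELTA, SIGMA = 80.0, 10.0          # illustrative values, not from the text
b = SIGMA / math.sqrt(2)           # Laplace scale giving standard deviation SIGMA

def laplace_jitter():
    # The difference of two i.i.d. exponentials is Laplace(0, b).
    return random.expovariate(1 / b) - random.expovariate(1 / b)

n = 10_000
watermark = [random.randint(0, 1) for _ in range(n)]
# A bit is inverted when the absolute jitter exceeds DELTA/4.
received = [w ^ (abs(laplace_jitter()) > DELTA / 4) for w in watermark]

# Normalized correlation between the recovered and the original sequence; a
# flow is declared watermarked when it exceeds a detection threshold.
corr = sum(1 if r == w else -1 for r, w in zip(received, watermark)) / n
p_sub = math.exp(-DELTA / (4 * b))  # Laplace tail estimate P(|jitter| > DELTA/4)
print(round(corr, 3), round(1 - 2 * p_sub, 3))
```

The empirical correlation tracks the analytic value 1 - 2*p_sub, which is why the substitution probability directly determines how far the detection threshold can be raised.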
3.2 Use of VT codes in FW
An alternative IPD-FW scheme for embedding watermarks, based on binary VT codes that are subcodes of linear codes and exploiting QIM, has been proposed in [14]. The scheme uses linear codes of length 6 and 8 bits with an attached marker and optional matrix interleaving to deal with bursty errors.
Next, the generated sequence
The Decoder detects markers in the
To solve the problem, it is proposed to use hybrid decoding with error correction, where the choice of one of two algorithms depends on the number of errors in each received codeword
For example, suppose that a sequence
It was found in [21] that there is a VT code that coincides with a linear (8,2,5) error-correcting code [5], consisting of four codewords and forming a subcode of VT0(8). However, to perform independent decoding of the codewords from a linear VT subcode placed in a continuous bitstream, the boundaries of the codewords must be known. We can implement their independent decoding by organizing the codeword set of the linear subcode and using matrix interleaving.
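The matrix interleaving mentioned here can be sketched as follows: codewords are written into a matrix row by row and transmitted column by column, so a burst of channel errors is spread across different codewords. The 3 × 8 dimensions are illustrative.

```python
def interleave(bits, rows, cols):
    """Write row by row into a rows x cols matrix, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column by column, read row by row."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

seq = list(range(24))              # three 8-symbol codewords, labeled 0..23
sent = interleave(seq, 3, 8)
for i in (10, 11, 12):             # a burst of 3 consecutive channel errors
    sent[i] = 'X'
back = deinterleave(sent, 3, 8)
# After deinterleaving, each of the three codewords contains only one error,
# which a single-error-correcting code can then repair independently.
assert sum(1 for i in range(3) if 'X' in back[i * 8:(i + 1) * 8]) == 3
```

With 3 rows, any burst of length up to 3 lands in distinct codewords, which is exactly the property needed against bursty insertion and deletion artifacts.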
The proposed scheme consists of several layers. However, to simplify its work, we describe it based on the scheme in Figure 3. As before, we assume that a watermark sequence
In fact,
Two FW methods have been modeled: the first is based on HMM, and the second uses VT codes with markers. In the first method, the length of the sparse sequence for FW was 10 bits, and in the second, it was 9 bits, considering the VT0(6) code with a 3-bit marker. About 5000 packets were generated, with network jitter modeled by a Laplace distribution with zero mean and a standard deviation of 10 ms. In the synthetic channel, substitution errors followed sequentially after deletion errors, and symbol insertions were studied separately. The detection threshold was chosen to keep the false positive rate (FPR) below 1% for all deletion probabilities. The true positive rates (TPRs) in watermark detection for the two schemes with respect to different deletion probabilities are presented in Table 1.
| Method | 1% | 2% | 3% | 10% | 20% |
|---|---|---|---|---|---|
| HMM-based | 1.000 | 1.000 | 1.000 | 0.994 | — |
| VT code | 0.999 | 0.999 | 0.999 | 0.995 | 0.666 |

Table 1. TPR values for varying deletion probabilities.
It follows from Table 1 that the less complex VT coding gives virtually the same performance as the HMM-based scheme. The TPR value drops to 66% only when the packet loss reaches 20%, which is rare in a network environment. Methods using interleaving and the (8,2,5) code showed better results [21] for channels with bursty insertion errors.
4. Application of error-correcting codes in biometrics
In recent years, there has been increasing interest in cryptographic approaches using biometric measurements. For these purposes, many physical methods are used: from taking fingerprints of a person to the dynamics of his gait. The uniqueness of these characteristics allows them to be used for both identification and authentication. However, for the verification organization, it is required to perform the recognition procedure. A special
In this section, we will focus on the processing of biometric features of a person’s face. Face recognition is very flexible and can be performed from a distance. These systems can be classified as follows [23]: image-based matching (whole face), feature-based face recognition, and video-based matching. The accuracy of the user’s biometric data recognition is high. However, the security and privacy of user data may be compromised. In this case, the concept of
The idea of a reversible template was proposed by Ratha et al. [24]. It includes five main features: tautology, irreversibility, accuracy, diversity, and revocability.
There are several approaches to the creation of a biometric system (BS), which are based either on direct generation of a secret key from biometrics or on binding a key to biometric data. The widespread implementation of BS solutions is constrained by the fuzziness of biometric data. This problem can be alleviated by applying error correction codes. Below, we will consider several BSs based on different methods for obtaining biometric features and various code structures using the so-called
4.1 Biometric system based on facial HOG features
The use of ECC is due to the spread of biometric measurement values, which can be regarded as noise added to the received signal. Taking into account the signal processing procedures for registration and verification, the generalized scheme can be represented as shown in Figure 4.
Let us consider the operation of the BS in accordance with [26] with the only difference that instead of local information from convolutions with Gabor kernels, the histogram of oriented gradients (HOG) is used as features. In addition, more powerful BCH codes are used to suppress noise due to fuzzy biometric data [27].
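Before detailing the pipeline, the key-binding idea of the FCS [25] that underlies such schemes can be sketched as follows. A 5-fold repetition code stands in for the much stronger BCH codes used in the text, and all lengths and names are illustrative.

```python
import hashlib
import secrets

def rep_encode(key_bits, r=5):
    """Stand-in ECC: r-fold repetition (the text uses far stronger BCH codes)."""
    return [b for b in key_bits for _ in range(r)]

def rep_decode(bits, r=5):
    """Majority vote inside each group of r bits."""
    return [int(sum(bits[i * r:(i + 1) * r]) > r // 2)
            for i in range(len(bits) // r)]

def enroll(biometric_bits, key_bits):
    c = rep_encode(key_bits)
    helper = [x ^ b for x, b in zip(biometric_bits, c)]   # W = X xor C
    return helper, hashlib.sha256(bytes(key_bits)).hexdigest()

def verify(helper, key_hash, probe_bits):
    c_noisy = [w ^ y for w, y in zip(helper, probe_bits)]  # C' = W xor Y
    key = rep_decode(c_noisy)                              # ECC removes the noise
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash

key = [secrets.randbelow(2) for _ in range(16)]
x = [secrets.randbelow(2) for _ in range(80)]   # binarized enrollment template
helper, h = enroll(x, key)
probe = x[:]
probe[3] ^= 1
probe[40] ^= 1                                  # two "fuzzy" bit errors
assert verify(helper, h, probe)                 # genuine user is accepted
```

Only the helper data W and the key hash are stored, so neither the biometric template nor the key appears in the database in the clear; the ECC absorbs the intra-user variability of the probe.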
The principle of the scheme operation is as follows. The Preprocessor receives the set of images of the user’s face as input, scales them, and converts color images into gray scale ones. Next, HOG features are extracted from the images in the form of real
The Preprocessor calculates mean
In addition, the reliability function
Based on the calculated parameters (Eq. (13)), according to
At the same time, a codeword
When implementing the user verification, one or more images are sent to the Preprocessor, where they are converted into a sequence of real numbers
The OpenFace tool [28] was used to obtain the HOG characteristics of user images containing 12 × 12 blocks of 31 histograms and written into a row vector
The calculated values of the false acceptance rate (FAR) and false rejection rate (FRR) were FRR = 0 and FAR = 3.5%, demonstrating good performance of the BCH codes used and allowing the choice of the lengths of secret keys
4.2 Application of turbo codes in biometric systems
Recently, the use of non-binary turbo codes with modulation in a biometric system has been studied [29]. These codes were generated from non-binary convolutional component codes combined with a random interleaver. Then phase-shift keying (8-PSK) was used. During processing, the polynomials with a coding matrix
At the authentication stage, the resulting codeword
Preliminary experimental estimates of FRR yielded a value of FRR ∼ 0.1%, which is several times better than the previous scheme and known results for turbo codes [31].
A further increase in the effectiveness of BS is possible by increasing the inter-class differences in biometric characteristics, which prompted the use of neural networks (NNs) in this area.
In the NN-based system below, we have applied the stacked autoencoder (SAE) structure and the concatenated ECC using RS codes.
4.3 Smiling face biometric authentication system
In the following BS [6], we consider the use of a stacked autoencoder (SAE) to extract features from a sequence of video frames of a user smiling face in order to authenticate him and provide the access to digital services. In contrast to the generally accepted application of binary ECC in BS, we used the non-binary Reed-Solomon codes concatenated with the binary linear ones. The use of these codes, taking into account the dimension of the symbols, leads to an increase in the length of their bit representation. On the other hand, in order to neutralize “biometric noise” and correct errors, it is necessary to increase the ECC redundancy, which reduces the encoding efficiency.
The biometric templates are created according to the FCS [25] based on equidistant quantization of the real data at the output of the SAE for further processing and encoding by concatenated RS codes. The BS uses concatenated ECC based on non-binary RS codes and binary linear codes with both the hard decision decoding (HDD) technique and soft decision decoding (SDD) obtained from symbol reliability. The operating principle of the proposed BS will be described based on the generalized authentication scheme shown in Figure 4.
The Preprocessor block performs such operations as video data capturing, face and smile detection, the smile frames extraction, image transformation and normalization, and features extraction using the SAE pretrained at the registration stage. The biometric data samples obtained from the SAE output layer form the concatenated supervector Y = {
At the Enrollment stage, the biometric samples obtained at the SAE output using HD1 and HD2 are then bound to the secret Key
A series of experiments was carried out with the SAE to obtain good compact biometric features. To reduce the time spent on these experiments, subsets of 40 subjects reproducing a user smile were randomly selected from the entire UvA-NEMO database [32]. The results of unsupervised learning and subsequent supervised tuning of the SAE with parameters 127/63 are shown as histograms in Figure 7, demonstrating a significant widening of the inter-class distributions relative to each other.
The expected values of FAR and FRR were estimated based on the block error probability of decoding for the uncorrectable error patterns accepted by BS. In the evaluation, we conducted simulation experiments for different error-correcting code structures. The results are placed in Table 2 [6].
| Inner code | Outer code | FRR, % | Key, bit/frames × dimension | Efficiency |
|---|---|---|---|---|
| RS (63,15) | Linear (6,3,1) | 1.0 | 90/2 × 63 (180/4 × 63) | 0.119 |
| RS (63,15) | REP (3,1,1) | 0.5 | 90/3 × 63 (180/3 × 63) | 0.0079 |
| RS (31,9) | REP (3,1,1) | 0.5 | 90/6 × 31 | 0.0968 |
| RS (63,21) | — | 0.7 | 126/1 × 63 | 0.33 |
| RS (31,17) | — | 0.3 | 170/2 × 31 | 0.5 |
| RS (31,17) | REP (3,1,1) | <0.1 | 170/6 × 31 | 0.1828 |
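The Efficiency column of Table 2 can be reproduced as the ratio of secret-key bits to total codeword bits (frames × n symbols × m bits per symbol, with m inferred from the RS field sizes: m = 6 for length-63 codes and m = 5 for length-31 codes). This interpretation is an assumption, not a formula from the text, and it matches most rows up to rounding.

```python
# (configuration, key bits, frames, RS length n, bits per symbol m) -- Table 2
configs = [
    ("RS(63,15) + Linear(6,3,1)", 90, 2, 63, 6),
    ("RS(31,9) + REP(3,1,1)", 90, 6, 31, 5),
    ("RS(63,21)", 126, 1, 63, 6),
    ("RS(31,17)", 170, 2, 31, 5),
    ("RS(31,17) + REP(3,1,1)", 170, 6, 31, 5),
]
for name, key_bits, frames, n, m in configs:
    eff = key_bits / (frames * n * m)   # key bits over transmitted bits
    print(f"{name}: {eff:.4f}")
```

For example, RS(63,21) gives 126/(1 × 63 × 6) ≈ 0.333 and RS(31,9) with REP(3,1,1) gives 90/(6 × 31 × 5) ≈ 0.0968, in agreement with the corresponding table entries.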
From Table 2, it follows that reducing the RS code length makes it possible to increase the performance of the BS in terms of the FRR parameter. Simulation experiments have shown the possibility of achieving the FRR less than 1% for key lengths of 90–170 bits and demonstrated a more efficient use of RS codes compared to the previous scheme and the results from [33] for face template protection.
For all the schemes studied, privacy leakage was assessed. The calculated mutual information between the input and output data was significantly less than the entropy of the ECC codewords, which confirms that the user's biometric data cannot be compromised.
5. Conclusion
Despite the fact that the development of error-correcting codes was aimed at application in communication systems, their use is also relevant in security systems, where it is required to neutralize the noise added to the data from the environment. In this chapter, the main code structures that have found application in the flow watermarking for network intrusion detection, as well as in biometric authentication systems, have been considered.
The watermarking environment model is treated as a channel with substitution, insertion, and deletion errors. Two main code constructions were considered: the first based on an HMM with a synchronizing key sequence added to sparse data, and the second based on modified error-correcting VT codes with a marker attachment. Statistical and computational experiments have shown the same performance of these schemes in terms of watermark detection (TPR ≈ 1 at FPR < 1%), with a simpler implementation of the second scheme, which is slightly inferior in coding rate to the first. At the same time, the considered FW schemes remain invisible and sufficiently resistant to network artifacts as long as their relative values do not exceed 20%.
In addition, two types of face biometric authentication systems, based on HOG structures and on latent autoencoder data, were considered. The fuzziness of the HOG data was compensated by using binary BCH codes (511,58) and (511,28), which made it possible to obtain an FRR parameter value of 3.5%. The use of non-binary turbo codes of rate 1/3 with octal data modulation made it possible to improve performance up to FRR = 1% with real helper data. The use of concatenated RS codes together with linear binary codes showed the possibility of increasing efficiency and achieving FRR values of less than 1%. Moreover, it has been shown that a further decrease in the FRR parameter is possible, first, by increasing the redundancy of the concatenated ECC and, second, by using the additional information from helper data when employing errors-and-erasures decoding (EED) for RS codes.
Thus, the transition to efficient non-binary code structures and real-valued ECC is a promising area of research in the field of watermarking and biometrics.
Acknowledgments
The authors express their gratitude to the publisher for financial support.
References
- 1. Qin Z, Ye H, Li GY, Juang B-HF. Deep learning in physical layer communications. IEEE Wireless Communications. 2019;26(2):93-99. DOI: 10.1109/MWC.2019.180060
- 2. Nachmani E, Marciano E, Lugosch L, Gross WJ, Burshtein D, Be'ery Y. Deep learning methods for improved decoding of linear codes. IEEE Journal of Selected Topics in Signal Processing. 2018;12(1):119-131. DOI: 10.1109/JSTSP.2017.2788405
- 3. Huang L, Zhang H, Li R, Ge Y, Wang J. AI coding: Learning to construct error correction codes. IEEE Transactions on Communications. 2020;68(1):26-39. DOI: 10.1109/TCOMM.2019.2951403
- 4. Morelos-Zaragoza RH. The Art of Error Correcting Coding. 2nd ed. Chichester, West Sussex, England: Wiley; 2006. p. 278. DOI: 10.1002/0470035706
- 5. Sklar B. Digital Communications: Fundamentals and Applications. 2nd ed. Upper Saddle River, NJ: Prentice-Hall PTR; 2001. p. 1079
- 6. Assanovich B, Kosarava K. Authentication system based on biometric data of smiling face from stacked autoencoder and concatenated Reed-Solomon codes. In: Proceedings of the 15th International Conference on Pattern Recognition and Information Processing (PRIP'2021). CCIS, vol. 1562. Cham: Springer; 2021. pp. 205-219
- 7. Guruswami V, Rudra A, Sudan M. Essential Coding Theory. University at Buffalo; 15 Mar 2019. p. 473. eBook (Creative Commons Licensed, 2022)
- 8. Carrasco RA, Johnston M. Non-Binary Error Control Coding for Wireless Communication and Data Storage. Chichester, West Sussex, United Kingdom: Wiley; 2008. p. 322
- 9. Varshamov RP, Tenengol'ts GM. Correction code for single asymmetric errors. Avtomat. Telemekh. 1965;26(2):286-290
- 10. Tenengol'ts GM. Class of codes correcting bit loss and errors in the preceding bit. Avtomat. Telemekh. 1976;37(5):797-802
- 11. Levenshtein VI. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics-Doklady. 1966;10(8):707-710
- 12. Abdel-Ghaffar KAS, Ferreira HC. Systematic encoding of the Varshamov-Tenengolts codes and the Constantin-Rao codes. IEEE Transactions on Information Theory. 1998;44:340-345
- 13. Abdel-Ghaffar KAS, Ferreira HC, Cheng L. Correcting deletions using linear and cyclic codes. IEEE Transactions on Information Theory. 2010;56(10):5223-5234
- 14. Assanovich B, Puech W, Tkachenko I. Use of linear error-correcting subcodes in flow watermarking for channels with substitution and deletion errors. In: Proceedings of the 14th IFIP TC 6/TC 11 International Conference on Communications and Multimedia Security (CMS). Magdeburg, Germany; 2013. pp. 105-112
- 15. Abroshan M, Venkataramanan R, Guillen i Fabregas A. Efficient systematic encoding of non-binary VT codes. In: 2018 IEEE International Symposium on Information Theory (ISIT). Vail, CO, USA: IEEE; 17-22 Jun 2018. pp. 91-95. DOI: 10.1109/ISIT.2018.8437715
- 16. Wang X, Reeves DS, Wu SF, Yuill J. Sleepy watermark tracing: An active network-based intrusion response framework. In: Proceedings of the IFIP TC11 16th Annual Working Conference on Information Security (IFIP/SEC). Paris, France; 2001. pp. 369-384
- 17. Gong X, Rodrigues M, Kiyavash N. Invisible flow watermarks for channels with dependent substitution and deletion errors. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Kyoto, Japan; 2012. pp. 1773-1776
- 18. Gong X, Rodrigues M, Kiyavash N. Invisible flow watermarks for channels with dependent substitution, deletion, and bursty insertion errors. IEEE Transactions on Information Forensics and Security. 2013;8(11):1850-1859
- 19. Davey MC, MacKay DJC. Reliable communication over channels with insertions, deletions, and substitutions. IEEE Transactions on Information Theory. 2001;47(2):687-698
- 20. Yazdani R, Ardakani M. Reliable communication over non-binary insertion/deletion channels. IEEE Transactions on Communications. 2012;60(12):3597-3608
- 21. Assanovich B, Terre VA, Penaranda-Foix FL. Watermarking pattern recognition in channels with substitution and bursty insertion and deletion errors. In: Proceedings of Pattern Recognition and Information Processing. Minsk; 2016. pp. 185-189
- 22. Cheng L, Swart TG, Ferreira HC, Abdel-Ghaffar KAS. Codes for correcting three or more adjacent deletions or insertions. In: IEEE International Symposium on Information Theory (ISIT). Honolulu, USA; Jul 2014. pp. 1246-1250
- 23. Tran QN, Turnbull BP, Hu J. Biometrics and privacy-preservation: How do they evolve? IEEE Open Journal of the Computer Society. 2021;2:179-191. DOI: 10.1109/OJCS.2021.3068385
- 24. Ratha NK, Connell JH, Bolle RM. Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal. 2001;40:614-634
- 25. Juels A, Wattenberg M. A fuzzy commitment scheme. In: Proceedings of the 6th ACM Conference on Computer and Communications Security. Singapore; 2-4 Nov 1999. pp. 28-36. DOI: 10.1145/319709.319714
- 26. Kevenaar TAM, Schrijen GJ, van der Veen M, Akkermans AHM, Zuo F. Face recognition with renewable and privacy preserving binary templates. In: Fourth IEEE Workshop on Automatic Identification Advanced Technologies (AutoID'05). Buffalo, NY, USA; 2005. pp. 21-26. DOI: 10.1109/AUTOID.2005.24
- 27. Assanovich B, Veretilo Y. Biometric database based on HOG structures and BCH codes. In: Proceedings of Information Technology and Systems (ITS2017). Minsk; 2017. pp. 286-287
- 28. Baltrušaitis T, Robinson P, Morency L-P. OpenFace: An open source facial behavior analysis toolkit. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). Lake Placid, NY, USA; 2016. pp. 1-10. DOI: 10.1109/WACV.2016.7477553
- 29. Assanovich B, Veretilo Y. Fuzzy secure sketch biometric scheme based on non-binary turbo codes. In: Proceedings of Information Technology and Systems (ITS2018). Minsk; 2018. pp. 186-187
- 30. Assanovich B. Application of turbo codes for data transmission in UWB using PSK modulated complex wavelets. In: Signal Processing Workshop (SPW). 2020. pp. 40-43. DOI: 10.23919/SPW49079.2020.9259136
- 31. Maiorana E, Blasi D, Campisi P. Biometric template protection using turbo codes and modulation constellations. IEEE WIFS. 2012. pp. 25-30
- 32. Dibeklioğlu H, Salah AA, Gevers T. Recognition of genuine smiles. IEEE Transactions on Multimedia. 2015;17(3):279-294
- 33. Chen L et al. Face template protection using deep LDPC codes learning. IET Biometrics. 2019;8:190-197