Open access peer-reviewed chapter

Decision Fusion for Large-Scale Sensor Networks with Nonideal Channels

Written By

Yiwei Liao, Xiaojing Shen, Junfeng Wang and Yunmin Zhu

Reviewed: 26 June 2022 Published: 01 September 2022

DOI: 10.5772/intechopen.106075

From the Edited Volume

Functional Calculus - Recent Advances and Development

Edited by Hammad Khalil


Abstract

Interest in the Internet of Things (IoT) and artificial intelligence, which often involve large numbers of sensors, has been growing rapidly; this chapter therefore investigates the decision fusion problem for large-scale sensor networks. Because transmission channel interference is unavoidable, we consider sensor networks with nonideal channels that are prone to errors. When the fusion rule is fixed, we present a necessary condition for the optimal sensor rules that minimize the Monte Carlo cost function. For the K-out-of-L fusion rule, which is chosen very often in practice, we analytically derive the optimal sensor rules. For general fusion rules, a Monte Carlo Gauss-Seidel optimization algorithm is developed to search for the optimal sensor rules. The complexity of the new algorithm is of the order of $O(LN)$, compared with $O(LN^L)$ for the previous algorithm based on Riemann sum approximation, where $L$ is the number of sensors and $N$ is the number of samples. Thus, the proposed method allows us to design the decision fusion rule for large-scale sensor networks. Moreover, the algorithm is generalized to simultaneously search for the optimal sensor rules and the optimal fusion rule. Finally, numerical examples show the effectiveness of the new algorithms for large-scale sensor networks with nonideal channels.

Keywords

  • decision fusion
  • multisensor detection
  • nonideal channels
  • Monte Carlo method
  • importance sampling

1. Introduction

Distributed detection has been an active research area in the past decades [1, 2, 3, 4, 5, 6, 7]. It involves the design of decision rules for the sensors and the fusion rule [8]. Early work on distributed detection mainly focused on conditionally independent sensor observations, such as [2, 4, 9, 10], and the resulting optimal sensor decision rules, as well as the fusion rule, were likelihood ratio tests (LRTs). Details on distributed detection with conditionally independent sensor observations can be seen in [1, 6, 7] and references therein.

In this chapter, we focus on conditionally dependent observations in sensor networks. In [5], the computational difficulty of obtaining the optimal sensor rules was shown by a rigorous mathematical approach. Some early progress was made on the derivation of sensor rules for the dependent observation case such as in [11, 12, 13, 14, 15]. More recently, a hierarchical conditional independence model was provided that was applicable to some specific classes of multisensor detection problems with dependent observations [16]. Copula-based distributed decision fusion methods have been proposed to deal with dependent observations in sensor networks, such as [17, 18, 19] and references therein. Given a fusion rule, Monte Carlo methods were proposed to reduce the computational complexity of deriving sensor decision rules with ideal channels in [20, 21], and the optimal sensor rules were obtained analytically for the K-out-of-L fusion rule in [20].

Some works on the derivation of optimal fusion rules can be seen in [15, 22, 23, 24]. For some specific parallel network decision systems, a unified fusion rule was presented in [15]. Some further results on the problem are available in [25, 26]. In [27], the authors provided methods that search for the sensor rules and the fusion rule simultaneously by combining the methods of [2] and [15] in order to attain near-optimal system performance.

The works discussed thus far assumed the availability of ideal channels in sensor networks. However, channel errors between the sensors and the fusion center are omnipresent in practical multisensor detection networks, and, therefore, studies on multisensor detection in the presence of nonideal channels have attracted some recent interest, such as in [8, 28, 29, 30, 31, 32, 33]. Under the Neyman-Pearson criterion, the design of sensor rules in the presence of nonideal channels was addressed in [32]. The parallel fusion structure was extended by incorporating the fading channel layer and two alternative fusion schemes were presented based on fixed sensor rules in [28]. It was shown that the optimal sensor decision rule that minimizes the error probability at the fusion center is equivalent to a local LRT for independent sensor observations in [29]. Under Neyman-Pearson and Bayesian criteria, the work was generalized to dependent and noisy channels, respectively, in [8]. In [31], the authors considered the optimal sensor rules with channel errors via Riemann sum approximation under a given fusion rule for general dependent sensor observations. Although the method based on the Riemann sum approximation has been developed for dependent observations with channel errors, it is too computationally expensive to be of practical use in large-scale sensor networks.

In this chapter, a Monte Carlo importance sampling method is provided to reduce the computational complexity of multisensor detection fusion with channel errors. Based on the strong law of large numbers, the Bayesian cost function is approximated by its empirical average through the Monte Carlo importance sampling method. The main contributions of this chapter are listed below:

  1. When the fusion rule is fixed, we derive a necessary condition for the optimal sensor rules that minimize the approximated Bayesian cost function. A Monte Carlo Gauss-Seidel optimization algorithm is developed and it is shown to be finitely convergent. The complexity of the new algorithm is shown to be of the order of $O(LN)$, compared with $O(LN^L)$ for the previous algorithm based on the Riemann sum approximation.

  2. When the fusion rule is the K-out-of-L rule, we prove that there exists an analytical form for the optimal sensor rules in the presence of nonideal channels. Thus, the proposed method allows us to design decision rules for large-scale sensor networks.

  3. The Monte Carlo Gauss-Seidel optimization algorithm is extended to simultaneously search for the optimal sensor rules and the optimal fusion rule.

Numerical examples show the effectiveness of the new algorithms for large-scale sensor networks with dependent observations and channel errors.

The rest of this chapter is organized as follows: In Section 2, the parallel binary Bayesian detection network with channel errors is formulated and the Monte Carlo cost function is introduced. In Section 3, the necessary condition for the optimal sensor rules is presented. For the K-out-of-L fusion rule, the analytical form for the optimal sensor rules is provided. In Section 4, the Monte Carlo Gauss-Seidel iterative algorithm and its convergence analysis are presented. The extension to simultaneously search for the optimal sensor rules and the optimal fusion rule is described in Section 5. Simulation results are provided in Section 6. Conclusions are contained in Section 7.


2. Preliminaries

2.1 Problem formulation

The $L$-sensor parallel Bayesian detection network structure with two hypotheses $H_0$ and $H_1$ in the presence of nonideal channels is considered (see Figure 1). Assume that $y_1, y_2, \ldots, y_L$ are sensor observations and the $j$th sensor compresses the $n_j$-dimensional vector observation $y_j$ to one bit: $I_j(y_j): \mathbb{R}^{n_j} \to \{0, 1\}$, $j = 1, \ldots, L$. For notational convenience, $n_j = 1$ in the following description. The $L$ sensors transmit the compressed data to the fusion center and the fusion center makes the decision between $H_0$ and $H_1$. Since external interference and internal errors may occur, the channels are not reliable and the fusion center may not correctly receive the symbol $I_j$ sent by the $j$th sensor. Let $I_j^0$ denote the bit received by the fusion center, for $j = 1, 2, \ldots, L$. Generally speaking, $I_j^0$ may not be equal to $I_j$. The definition and assumptions on channel errors (see, e.g., [29, 31]) are summarized below:

Figure 1.

The L-sensor parallel binary Bayesian detection network structure in the presence of nonideal channels.

Definition 1: The channel errors between the $j$th sensor and the fusion center are described as $P_j^{ce1} = P(I_j^0 = 0 \mid I_j = 1)$ and $P_j^{ce0} = P(I_j^0 = 1 \mid I_j = 0)$ for $j = 1, 2, \ldots, L$, where $P_j^{ce1}$ is the probability of channel error when the $j$th sensor sends 1 but the fusion center receives 0, and $P_j^{ce0}$ is the probability of channel error when the $j$th sensor sends 0 but the fusion center receives 1.

Assumption 1: The probabilities of channel error are statistically independent of the hypotheses, namely $P(I_j^0 \mid I_j, H_\nu) = P(I_j^0 \mid I_j)$, $\nu = 0, 1$.

Remark 1: Assumption 1 is due to the hierarchical structure based on the Markov property (see [29]).

Assumption 2: The channels that connect the sensors to the fusion center are independent, i.e., $P(I_1^0, I_2^0, \ldots, I_L^0 \mid I_1, I_2, \ldots, I_L) = \prod_{j=1}^{L} P(I_j^0 \mid I_j)$.

We consider the parallel binary Bayesian detection network with nonideal channels that is built on the above definition and assumptions. The final decision is made by the fusion center based on the received binary bits $(I_1^0, I_2^0, \ldots, I_L^0)$ from the $L$ sensors. From the definition of a general Bayesian cost function given in [25], the $L$-sensor binary Bayesian cost function with channel errors at the fusion center can be written as follows:

$$C(I_1^0(y_1), \ldots, I_L^0(y_L); F_0) = C_{00} P_0 P(F_0 = 0 \mid H_0) + C_{01} P_1 P(F_0 = 0 \mid H_1) + C_{10} P_0 P(F_0 = 1 \mid H_0) + C_{11} P_1 P(F_0 = 1 \mid H_1) \quad (1)$$
$$= c + a P(F_0 = 0 \mid H_1) - b P(F_0 = 0 \mid H_0), \quad (2)$$

where $C_{\alpha\beta}$, $\alpha, \beta = 0, 1$, are suitable cost coefficients, $P_0$ and $P_1$ are the prior probabilities for the hypotheses $H_0$ and $H_1$, respectively, $F_0$ is the fusion rule, and $P(F_0 = \mu \mid H_\nu)$, $\mu, \nu = 0, 1$, denotes the conditional probability of the event that the fusion center decides in favor of hypothesis $H_\mu$ when the true hypothesis is $H_\nu$. The cost function (1) is simplified to (2) by defining $c = C_{10} P_0 + C_{11} P_1$, $a = P_1 (C_{01} - C_{11})$, $b = P_0 (C_{10} - C_{00})$. $F_0$ is actually a function of the disjoint set of all possible binary messages $(I_1^0, I_2^0, \ldots, I_L^0)$. The received decisions are divided into two sets, denoted as $H_0^0$ and $H_1^0$, which are given by

$$H_0^0 = \left\{ (u_1^0, u_2^0, \ldots, u_L^0) : F_0(I_1^0, I_2^0, \ldots, I_L^0) = 0,\ I_j^0 = u_j^0,\ u_j^0 = 0/1,\ j = 1, \ldots, L \right\};$$
$$H_1^0 = \left\{ (u_1^0, u_2^0, \ldots, u_L^0) : F_0(I_1^0, I_2^0, \ldots, I_L^0) = 1,\ I_j^0 = u_j^0,\ u_j^0 = 0/1,\ j = 1, \ldots, L \right\}.$$

Obviously, $H^0 = \{ (u_1^0, u_2^0, \ldots, u_L^0) : I_j^0 = u_j^0,\ u_j^0 = 0/1,\ j = 1, \ldots, L \} = H_0^0 \cup H_1^0$. For any binary decisions $(I_1^0, I_2^0, \ldots, I_L^0)$ received by the fusion center, the original sensor decision bits before transmission are $(I_1, I_2, \ldots, I_L)$, and they constitute the set $H = \{ (u_1, u_2, \ldots, u_L) : I_j = u_j,\ u_j = 0/1,\ j = 1, \ldots, L \}$. Therefore, based on the law of total probability, the conditional probability formula, and Assumption 1:

$$P(F_0 = 0 \mid H_\nu) = \sum_{s^0 \in H_0^0} P(D^0 \mid H_\nu) = \sum_{s^0 \in H_0^0} \sum_{s \in H} P(D^0 \mid D) P(D \mid H_\nu), \quad (3)$$

where $D^0 = (I_1^0, \ldots, I_L^0)$, $s^0 = (s_1^0, \ldots, s_L^0)$, $I_j^0 = s_j^0$, and $s_j^0 = 0/1$ is a specific value of $I_j^0$; in the same way, $D = (I_1, \ldots, I_L)$, $s = (s_1, \ldots, s_L)$, $I_j = s_j$, and $s_j = 0/1$ is a specific value of $I_j$. Strictly speaking, we should use $P(D^0 = s^0 \mid H_\nu)$ to represent $P(D^0 \mid H_\nu)$; we use the latter for notational simplicity, and similarly for $P(D \mid H_\nu)$. Based on Assumption 2:

$$P(D^0 \mid D) = \prod_{j=1}^{L} P(I_j^0 \mid I_j), \quad (4)$$

where for any $1 \le j \le L$,

$$P(I_j^0 \mid I_j) = (1 - P_j^{ce0})(1 - I_j^0)(1 - I_j) + P_j^{ce0} I_j^0 (1 - I_j) + (1 - P_j^{ce1}) I_j^0 I_j + P_j^{ce1} (1 - I_j^0) I_j. \quad (5)$$

Thus, the cost function (2) becomes

$$C(I_1^0(y_1), \ldots, I_L^0(y_L); F_0) = c + \sum_{s^0 \in H_0^0} \sum_{s \in H} P(D^0 \mid D) \left[ a P(D \mid H_1) - b P(D \mid H_0) \right] \quad (6)$$
$$\triangleq C(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}), \quad (7)$$

where $P^{ce0} = (P_1^{ce0}, \ldots, P_L^{ce0})$ and $P^{ce1} = (P_1^{ce1}, \ldots, P_L^{ce1})$. Hence, the cost function now becomes a function of the sensor rules $I_1, \ldots, I_L$, the probabilities of channel errors $P^{ce0}, P^{ce1}$, and the fusion rule $F_0$. The goal of this chapter is to optimize the sensor rules and the fusion rule so as to minimize the cost function with known probabilities of channel errors.
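As a concrete sketch, the per-bit transition probability (5) and the product form (4) can be coded directly. The function names below are illustrative, not from the chapter:

```python
from math import prod

def p_bit(i0: int, i: int, pce0: float, pce1: float) -> float:
    """Channel transition probability P(I_j^0 = i0 | I_j = i) from Eq. (5)."""
    return ((1 - pce0) * (1 - i0) * (1 - i) + pce0 * i0 * (1 - i)
            + (1 - pce1) * i0 * i + pce1 * (1 - i0) * i)

def p_vector(s0, s, pce0, pce1) -> float:
    """P(D^0 = s0 | D = s) for independent channels, Eq. (4)."""
    return prod(p_bit(a, b, e0, e1) for a, b, e0, e1 in zip(s0, s, pce0, pce1))
```

For example, with $P_j^{ce0} = 0.1$, flipping a transmitted 0 into a received 1 has probability 0.1, and a two-channel word is just the product of the per-bit terms.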

We rewrite $a P(D \mid H_1) - b P(D \mid H_0)$ as follows:

$$a P(D \mid H_1) - b P(D \mid H_0) = \int_{\Omega_s} \left[ a p(y_1, \ldots, y_L \mid H_1) - b p(y_1, \ldots, y_L \mid H_0) \right] dy_1 \cdots dy_L = \int I_{\Omega_s} \left[ a p(y_1, \ldots, y_L \mid H_1) - b p(y_1, \ldots, y_L \mid H_0) \right] dy_1 \cdots dy_L, \quad (8)$$

where $\Omega_s = \{ (y_1, \ldots, y_L) : I_1(y_1) = s_1, \ldots, I_L(y_L) = s_L \}$, $I_{\Omega_s}$ is an indicator function on $\Omega_s$, and the region of integration in (8) is the full space. Assume that $p(y_1, y_2, \ldots, y_L \mid H_\nu)$, $\nu = 0, 1$ (or $p(y \mid H_\nu)$), are known conditional joint probability density functions. If not, we can learn the joint probability density functions from training data using copula functions (see, e.g., [17]). Noting that $I_1(y_1), \ldots, I_L(y_L)$ are indicator functions and $s_j = 0/1$, $j = 1, \ldots, L$,

$$I_{\Omega_s} = I\{ (y_1, \ldots, y_L) : I_1(y_1) = s_1, \ldots, I_L(y_L) = s_L \} = I\{ y_1 : I_1(y_1) = s_1 \} \cdots I\{ y_L : I_L(y_L) = s_L \} = \left[ (1 - I_1)(1 - s_1) + I_1 s_1 \right] \cdots \left[ (1 - I_L)(1 - s_L) + I_L s_L \right]. \quad (9)$$

For simplicity, denote $Q_j(I_j) = (1 - I_j)(1 - s_j) + I_j s_j$. Substituting (8) into (6),

$$C(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}) = c + \int \sum_{s^0 \in H_0^0} \sum_{s \in H} P(D^0 \mid D) Q_1(I_1) \cdots Q_L(I_L) \left[ a p(y \mid H_1) - b p(y \mid H_0) \right] dy = c + \int P_{H_0^0} \hat{L}(y) \, dy, \quad (10)$$

where $P_{H_0^0} = \sum_{s^0 \in H_0^0} \sum_{s \in H} P(D^0 \mid D) Q_1(I_1) \cdots Q_L(I_L)$ and $\hat{L}(y) = a p(y \mid H_1) - b p(y \mid H_0)$. Note that from the definition of $H_0^0$, $H_1^0$, $H^0$, and $H$, we have

$$P_{H_0^0} = \sum_{s^0 \in H^0} \left[ 1 - F_0(D^0) \right] \sum_{s \in H} P(D^0 \mid D) Q_1(I_1) \cdots Q_L(I_L) = \sum_{k'=1}^{2^L} \sum_{k=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] P(s^{k'} \mid s^k) \prod_{j=1}^{L} \left[ (1 - I_j(y_j))(1 - s_j^k) + I_j(y_j) s_j^k \right], \quad (11)$$

where $s^{k'}$ is an element of $H^0$ and $s^k$ is an element of $H$. $F_0(D^0) = 0/1$ is used in the first equality. The second equality holds since there are $2^L$ elements in both $H$ and $H^0$.
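In the collapsed form used later in the proof of Lemma 1 (the inner sum over $s \in H$ reduces to the single transmitted word $s = (I_1(y_1), \ldots, I_L(y_L))$), $P_{H_0^0}$ can be computed by enumerating the $2^L$ received words. A minimal sketch, with illustrative names and the fusion rule passed as a callable on received words:

```python
import itertools
from math import prod

def p_bit(i0, i, pce0, pce1):
    # Channel transition probability P(I_j^0 = i0 | I_j = i), Eq. (5).
    return ((1 - pce0) * (1 - i0) * (1 - i) + pce0 * i0 * (1 - i)
            + (1 - pce1) * i0 * i + pce1 * (1 - i0) * i)

def p_h00(bits, fuse, pce0, pce1):
    """P_{H_0^0}: probability that the fusion center decides H_0, given the
    transmitted sensor bits I_1(y_1), ..., I_L(y_L)."""
    L = len(bits)
    total = 0.0
    for s0 in itertools.product((0, 1), repeat=L):  # received words s^{k'}
        if fuse(s0):                                # only words with F_0 = 0 count
            continue
        total += prod(p_bit(s0[j], bits[j], pce0[j], pce1[j]) for j in range(L))
    return total
```

For instance, with the OR rule (`fuse = any`) and both sensors sending 1, the fusion center decides $H_0$ only if both bits flip, i.e., with probability $P_1^{ce1} P_2^{ce1}$.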

2.2 Monte Carlo cost function

An essential difficulty of the Bayesian cost function (10) is the high-dimensional integration required when dealing with large-scale sensor networks. Monte Carlo importance sampling is an attractive method for this problem. In this subsection, we approximate the Bayesian cost function (10) by the Monte Carlo importance sampling method (see, e.g., [34, 35]). According to (10),

$$C(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}) = c + \int \frac{P_{H_0^0}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}) \hat{L}(y)}{g(y)} g(y) \, dy \quad (12)$$
$$= E_g\!\left[ \frac{P_{H_0^0}(Y) \hat{L}(Y)}{g(Y)} \right] + c, \quad (13)$$

where $y = (y_1, y_2, \ldots, y_L)$ and $g(y)$ is a given importance sampling density such that (12) is well-defined (i.e., $g(y) > 0$). In (13), the expectation is taken with respect to the importance sampling density $g$. Consequently, assume that $N$ samples $Y_1, \ldots, Y_N$ are generated from the density $g$, that is, $Y \sim g(y)$, where $Y_i = (Y_i^1, Y_i^2, \ldots, Y_i^L)$. Then

$$C(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}) \approx c + \frac{1}{N} \sum_{i=1}^{N} \frac{P_{H_0^0}(Y_i^1, Y_i^2, \ldots, Y_i^L) \hat{L}(Y_i)}{g(Y_i)} \quad (14)$$
$$\triangleq C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N). \quad (15)$$

Based on the strong law of large numbers, the expectation (13) can be approximated by the empirical average (14). We denote (14), namely the Monte Carlo cost function, by $C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N)$. The optimal importance sampling density is $g(y_1, y_2, \ldots, y_L) \propto |P_{H_0^0} \hat{L}(y_1, y_2, \ldots, y_L)|$ (see, e.g., [34, 35]).

The initial goal is to minimize the Bayesian cost function (10). Instead, we can minimize the Monte Carlo cost function (15) by selecting a set of optimal sensor rules $I_1(y_1), I_2(y_2), \ldots, I_L(y_L)$ and an optimal fusion rule $F_0$. In this manner, the high-dimensional integration problem is converted to one with a single-summation objective function, even for large-scale sensor networks. Thus, for dependent observations with channel errors, the computational complexity is reduced significantly by the Monte Carlo importance sampling method. In the following sections, we assume that the samples drawn from the importance sampling density are fixed, so that $C_{MC}(I_1, \ldots, I_L; F_0; P^{ce0}, P^{ce1}, N)$ does not have any randomness, since only deterministic decision rules are considered in this chapter.
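The passage from (12) to (14) is ordinary importance sampling: an integral is rewritten as an expectation under $g$ and then replaced by an empirical average. A generic sketch (names illustrative; the sanity check integrates a standard normal density over the real line, which should give approximately 1):

```python
import math
import random

def importance_estimate(h, g_pdf, g_sample, n=50_000, seed=0):
    """Estimate the integral of h(y) by (1/N) * sum h(Y_i)/g(Y_i), Y_i ~ g,
    mirroring the step from Eq. (12) to Eq. (14)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = g_sample(rng)
        total += h(y) / g_pdf(y)
    return total / n

# Sanity check: integrating the N(0,1) density should give 1.
target = lambda y: math.exp(-y * y / 2) / math.sqrt(2 * math.pi)
proposal_pdf = lambda y: math.exp(-y * y / 8) / math.sqrt(8 * math.pi)  # N(0, 2^2)
proposal_draw = lambda rng: rng.gauss(0.0, 2.0)
```

The heavier-tailed proposal keeps the weights $h/g$ bounded, which controls the variance of the estimate; the same consideration motivates the chapter's mixture-Gaussian choice of $g$ in Section 6.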


3. A necessary condition for the optimal sensor rules

In this section, when the fusion rule is fixed, we derive a necessary condition for the optimal sensor rules that minimize the Monte Carlo cost function. First, we need some equivalent transformations of $P_{H_0^0}$; based on these transformations, the necessary condition can be obtained. In addition, an analytical result is obtained when the fusion rule is the K-out-of-L rule.

3.1 Necessary condition

First, we need some equivalent transformations of $P_{H_0^0}$.

Lemma 1: $P_{H_0^0}$ can be rewritten as follows:

$$P_{H_0^0} = \left( 1 - I_j(y_j) \right) P_j^1\!\left( I_1(y_1), \ldots, I_{j-1}(y_{j-1}), I_{j+1}(y_{j+1}), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1} \right) + P_j^2\!\left( I_1(y_1), \ldots, I_{j-1}(y_{j-1}), I_{j+1}(y_{j+1}), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1} \right), \quad (16)$$

where for $j = 1, 2, \ldots, L$,

$$P_j^1 \triangleq \sum_{k'=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] \left( 1 - P_j^{ce0} - P_j^{ce1} \right) \left( 1 - 2 s_j^{k'} \right) P_m^j, \quad (17)$$
$$P_j^2 \triangleq \sum_{k'=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] \left[ s_j^{k'} + P_j^{ce1} \left( 1 - 2 s_j^{k'} \right) \right] P_m^j, \quad (18)$$
$$P_m^j \triangleq \prod_{m=1, m \neq j}^{L} \left[ (1 - P_m^{ce0})(1 - s_m^{k'})(1 - I_m) + P_m^{ce0} s_m^{k'} (1 - I_m) + (1 - P_m^{ce1}) s_m^{k'} I_m + P_m^{ce1} (1 - s_m^{k'}) I_m \right]. \quad (19)$$

Proof: If $s_m^k = I_m(y_m)$ for all $m = 1, \ldots, L$, then the product term $\prod_{m=1}^{L} [(1 - I_m(y_m))(1 - s_m^k) + I_m(y_m) s_m^k] = 1$ in $P_{H_0^0}$; otherwise, it is 0. Thus, $P_{H_0^0}$ can be rewritten as $P_{H_0^0} = \sum_{k'=1}^{2^L} [1 - F_0(s^{k'})] P(s^{k'} \mid I_1, I_2, \ldots, I_L)$, where the terms that equal zero are omitted and $P(s^{k'} \mid I_1, I_2, \ldots, I_L) = \prod_{j=1}^{L} P(s_j^{k'} \mid I_j)$. Recalling the conditional probability formula (5), we rewrite $P(s_j^{k'} \mid I_j)$ as $P(s_j^{k'} \mid I_j) = (1 - I_j)(1 - P_j^{ce0} - P_j^{ce1})(1 - 2 s_j^{k'}) + s_j^{k'} + P_j^{ce1}(1 - 2 s_j^{k'})$. Based on these transformations, $P_{H_0^0}$ can be decomposed as (16).

Remark 2: Note that $P_j^1$ and $P_j^2$ are both independent of $I_j(y_j)$ for $j = 1, \ldots, L$. They can also be applied in the Riemann sum approximation (see, e.g., [31]). Compared with [36], the sum of $2^L$ terms over $s^k$ is eliminated, which greatly reduces the computational time. In addition, the expression for $P_j^1$ given in (17) is a key equation in the results that follow.

Substituting the transformation (16) into (15), we obtain

$$C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N) = c + \frac{1}{N} \sum_{i=1}^{N} \Big[ \left( 1 - I_j(Y_i^j) \right) P_j^1\!\left( I_1(Y_i^1), \ldots, I_{j-1}(Y_i^{j-1}), I_{j+1}(Y_i^{j+1}), \ldots, I_L(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) + P_j^2\!\left( I_1(Y_i^1), \ldots, I_{j-1}(Y_i^{j-1}), I_{j+1}(Y_i^{j+1}), \ldots, I_L(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \Big] \frac{\hat{L}(Y_i)}{g(Y_i)}, \quad (20)$$

where $Y_i = (Y_i^1, Y_i^2, \ldots, Y_i^L)$. According to (20), the necessary condition for the optimal sensor rules that minimize $C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N)$ is stated in the following lemma:

Lemma 2: Let $(I_1(y_1), \ldots, I_L(y_L))$ be the set of optimal sensor rules, i.e., they minimize $C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N)$ in the parallel Bayesian detection network. Then they must satisfy the following equations:

$$I_1(Y_i^1) = I\!\left[ P_1^1\!\left( I_2(Y_i^2), I_3(Y_i^3), \ldots, I_L(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right], \quad (21)$$
$$I_2(Y_i^2) = I\!\left[ P_2^1\!\left( I_1(Y_i^1), I_3(Y_i^3), \ldots, I_L(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right], \quad (22)$$
$$\vdots$$
$$I_L(Y_i^L) = I\!\left[ P_L^1\!\left( I_1(Y_i^1), I_2(Y_i^2), \ldots, I_{L-1}(Y_i^{L-1}); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right], \quad (23)$$

where the $P_j^1$ are defined by (17) and $I[\cdot]$ is an indicator function defined as follows:

$$I[x] = \begin{cases} 1, & \text{if } x \ge 0; \\ 0, & \text{if } x < 0. \end{cases} \quad (24)$$

Proof: Note that both $P_j^1$ and $P_j^2$ are independent of $I_j(y_j)$ for $j = 1, \ldots, L$. If $I_1(Y_i^1)$ minimizes the Monte Carlo cost function for given $I_2(Y_i^2), \ldots, I_L(Y_i^L)$, we only need to minimize the corresponding term of the summation in (20), that is, $(1 - I_1(Y_i^1)) P_1^1(I_2(Y_i^2), I_3(Y_i^3), \ldots, I_L(Y_i^L); F_0; P^{ce0}, P^{ce1}) \hat{L}(Y_i) / g(Y_i)$. Since the value of $I_1(Y_i^1)$ is 0 or 1 and $g(y)$ is well-defined (that is, $g(Y_i) > 0$), $I_1(Y_i^1)$ should equal 1 when $P_1^1(I_2(Y_i^2), I_3(Y_i^3), \ldots, I_L(Y_i^L); F_0; P^{ce0}, P^{ce1}) \hat{L}(Y_i) \ge 0$ for $i = 1, \ldots, N$, and 0 otherwise. Therefore, we obtain (21) by the definition of $I[x]$ in (24). Similarly, we obtain (22) and (23) by minimizing (20).
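The per-sample argument in the proof can be phrased as a one-line minimization: the cost contribution $(1 - I)\,G$ over $I \in \{0, 1\}$ is minimized by the indicator of (24), with ties at $G = 0$ broken toward 1. A tiny sketch (name illustrative):

```python
def best_bit(G: float) -> int:
    """Per-sample minimizer of (1 - I)*G over I in {0, 1}; ties at G = 0 are
    broken toward 1, matching the indicator convention of Eq. (24)."""
    return 1 if G >= 0 else 0
```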

3.2 An analytical result for the K-out-of-L rule

When the fusion rule is a K-out-of-L rule, we can obtain an analytical result in the presence of nonideal channels. It is described as follows:

Theorem 1.1: If the fusion rule is a K-out-of-L rule and the probabilities of channel errors are less than 0.5 (i.e., $0 < P_j^{ce0} < 0.5$, $0 < P_j^{ce1} < 0.5$) for each channel, the optimal sensor rules are $I_j(Y_i^j) = I[\hat{L}(Y_i)]$ for $i = 1, \ldots, N$ and $j = 1, \ldots, L$.

Proof: From Lemma 1, we know

$$P_j^1 = \sum_{k'=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] P_m^j \left( 1 - P_j^{ce0} - P_j^{ce1} \right) \left( 1 - 2 s_j^{k'} \right) = \left( 1 - P_j^{ce0} - P_j^{ce1} \right) \sum_{k'=1}^{2^{L-1}} \left\{ \left[ 1 - F_0(s^{k'}) \big|_{s_j^{k'}=0} \right] - \left[ 1 - F_0(s^{k'}) \big|_{s_j^{k'}=1} \right] \right\} P_m^j = \left( 1 - P_j^{ce0} - P_j^{ce1} \right) \sum_{k'=1}^{2^{L-1}} \left[ F_0(s^{k'}) \big|_{s_j^{k'}=1} - F_0(s^{k'}) \big|_{s_j^{k'}=0} \right] P_m^j. \quad (25)$$

Since $0 < P_j^{ce0} < 0.5$ and $0 < P_j^{ce1} < 0.5$, we have $1 - P_j^{ce0} - P_j^{ce1} > 0$. Obviously, $P_m^j > 0$ holds from its definition. If $F_0(s^{k'})|_{s_j^{k'}=1} - F_0(s^{k'})|_{s_j^{k'}=0} \ge 0$ for every $s^{k'}$, then $P_j^1 \ge 0$ can be derived. When the fusion rule is a K-out-of-L rule, $F_0(s^{k'}) = I[\sum_{j=1}^{L} s_j^{k'} - K]$. Thus,

$$F_0(s^{k'}) \big|_{s_j^{k'}=1} = I\!\left[ \sum_{m=1, m \neq j}^{L} s_m^{k'} + 1 - K \right], \qquad F_0(s^{k'}) \big|_{s_j^{k'}=0} = I\!\left[ \sum_{m=1, m \neq j}^{L} s_m^{k'} + 0 - K \right].$$

If $\sum_{m=1, m \neq j}^{L} s_m^{k'} + 0 - K \ge 0$, then $\sum_{m=1, m \neq j}^{L} s_m^{k'} + 1 - K \ge 0$, so $F_0(s^{k'})|_{s_j^{k'}=1} - F_0(s^{k'})|_{s_j^{k'}=0} = 0$. If $\sum_{m=1, m \neq j}^{L} s_m^{k'} + 0 - K < 0$, then $F_0(s^{k'})|_{s_j^{k'}=1} - F_0(s^{k'})|_{s_j^{k'}=0} \ge 0$. In a word, $F_0(s^{k'})|_{s_j^{k'}=1} - F_0(s^{k'})|_{s_j^{k'}=0} \ge 0$ always holds, and thus $P_j^1 \ge 0$. Moreover, it is easy to find values $s_m^{k'}$, $m \neq j$, such that $\sum_{m=1, m \neq j}^{L} s_m^{k'} + 0 = K - 1$ and $\sum_{m=1, m \neq j}^{L} s_m^{k'} + 1 = K$, so there must exist a term with $F_0(s^{k'})|_{s_j^{k'}=1} - F_0(s^{k'})|_{s_j^{k'}=0} > 0$. Therefore, $P_j^1 > 0$. Recalling the necessary condition for the optimal sensor rules, that is, $I_j(Y_i^j) = I[P_j^1 \hat{L}(Y_i)]$, the proof is completed.

Remark 3: The K-out-of-L rule counts the number of sensors that vote in favor of $H_1$ and compares it with a given threshold $K$ [37]. It is also referred to as the counting rule or voting rule and is widely used in practical decision fusion [38, 39]. It encompasses a general class of fusion rules such as the AND, OR, and Majority Boolean fusion rules [40]. The assumption that the probabilities of channel errors are less than 0.5 is based on practical considerations: if the probabilities of channel errors are greater than or equal to 0.5, the channel is totally unreliable and the performance is no better than a random decision. Obviously, the analytical solution is very efficient for tackling large-scale sensor networks with dependent observations and channel errors.
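As a sketch, the counting rule and the analytical sensor rule of Theorem 1.1 are one-liners. Names are illustrative; `y_likelihoods` packs the pair $(p(y \mid H_1), p(y \mid H_0))$:

```python
def k_out_of_l(k: int):
    """K-out-of-L (counting) fusion rule: decide H_1 iff at least k received
    bits equal 1."""
    return lambda bits: 1 if sum(bits) >= k else 0

def optimal_sensor_bit(y_likelihoods, a: float, b: float) -> int:
    """Theorem 1.1: under a K-out-of-L rule with channel error probabilities
    below 0.5, each sensor's optimal rule reduces to I[L_hat(y)] with
    L_hat(y) = a*p(y|H1) - b*p(y|H0)."""
    p1, p0 = y_likelihoods
    return 1 if a * p1 - b * p0 >= 0 else 0
```

Note that the analytical rule does not depend on the channel error probabilities themselves, only on their being below 0.5, which is what makes it so cheap for large $L$.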

Advertisement

4. Monte Carlo Gauss-Seidel iterative algorithm and its convergence

For general fusion rules that do not have the form of a K-out-of-L rule, an efficient algorithm can be obtained that is inspired by Lemma 2. Next, we present a Monte Carlo Gauss-Seidel iterative algorithm and derive its convergence, when the fusion rule is fixed.

4.1 Monte Carlo Gauss-Seidel iterative algorithm

Based on the fixed-point type necessary condition given in Lemma 2, the Monte Carlo Gauss-Seidel iterative algorithm is presented in Algorithm 1.

Algorithm 1: Optimization of the sensor rules.

Given the fusion rule F0:

  • Step 1: Generate $N$ samples $Y_1, \ldots, Y_N \sim g(y)$, where $g(y)$ is an importance sampling density and $Y_i = (Y_i^1, Y_i^2, \ldots, Y_i^L)$.

  • Step 2: Initialize the $L$ sensor rules: for $j = 1, 2, \ldots, L$ and $i = 1, \ldots, N$,

    $$I_j^{(0)}(Y_i^j) = 0/1. \quad (26)$$

  • Step 3: Iteratively search for the $L$ sensor rules until the termination criterion in Step 4 is satisfied. The $(n+1)$th iteration is given as follows: for $i = 1, \ldots, N$,

    $$I_1^{(n+1)}(Y_i^1) = I\!\left[ P_1^1\!\left( I_2^{(n)}(Y_i^2), I_3^{(n)}(Y_i^3), \ldots, I_L^{(n)}(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right], \quad (27)$$
    $$I_2^{(n+1)}(Y_i^2) = I\!\left[ P_2^1\!\left( I_1^{(n+1)}(Y_i^1), I_3^{(n)}(Y_i^3), \ldots, I_L^{(n)}(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right], \quad (28)$$
    $$\vdots$$
    $$I_L^{(n+1)}(Y_i^L) = I\!\left[ P_L^1\!\left( I_1^{(n+1)}(Y_i^1), \ldots, I_{L-1}^{(n+1)}(Y_i^{L-1}); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right]. \quad (29)$$

  • Step 4: For $i = 1, \ldots, N$, the termination criterion of the iteration process is

    $$I_1^{(n+1)}(Y_i^1) = I_1^{(n)}(Y_i^1), \quad I_2^{(n+1)}(Y_i^2) = I_2^{(n)}(Y_i^2), \quad \ldots, \quad I_L^{(n+1)}(Y_i^L) = I_L^{(n)}(Y_i^L). \quad (30)$$

Remark 4: Once we obtain $I_1(Y_i^1)$ for $i = 1, \ldots, N$, we can compress any $y_1$ by defining $I_1(y_1) = I_1(Y_i^1)$ if $\|y_1 - Y_i^1\| \le \|y_1 - Y_{i'}^1\|$ for all $i' \neq i$. In the same way, we can compress $y_j$ for $j = 2, \ldots, L$. In effect, the method finds the nearest neighbor of $y_j$ among the samples for $j = 1, \ldots, L$ and uses the corresponding compression rule. Moreover, we can use the k-nearest-neighbor (kNN) rule to compress $y_j$ (see more in [41]).
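A minimal sketch of this one-nearest-neighbor extension for scalar observations (names illustrative):

```python
def extend_rule(y: float, samples, bits) -> int:
    """Remark 4: extend the sample-based rule to any observation y by copying
    the 0/1 decision of the closest sample Y_i^j."""
    i = min(range(len(samples)), key=lambda k: abs(y - samples[k]))
    return bits[i]
```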

Remark 5: The main computational burden of Algorithm 1 lies in (27)-(29). If we let the numbers of discretized points $N_1 = N_2 = \cdots = N_L = N$ in [31], then the quantities $P_j^1 \hat{L}(Y_i)$, $j = 1, \ldots, L$, $i = 1, \ldots, N$, are computed $L N^L$ times per iteration in [31], but only $L N$ times in Algorithm 1. Thus, the computational complexity of Algorithm 1, i.e., $O(LN)$, is much less than that of [31], i.e., $O(LN^L)$, and Algorithm 1 is more efficient for tackling large-scale sensor networks with dependent observations and channel errors.
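Algorithm 1 can be sketched as follows. This is an illustrative implementation, not the authors' code: $P_j^1$ of (17) is evaluated by brute-force enumeration over the $2^L$ received words (practical only for small $L$), `lhat[i]` stands for $\hat{L}(Y_i)$, and the positive factor $1/g(Y_i)$ is omitted from the update since it does not affect the sign test in (27)-(29):

```python
import itertools
from math import prod

def p_bit(i0, i, e0, e1):
    # Channel transition probability P(I_j^0 = i0 | I_j = i), Eq. (5).
    return ((1 - e0) * (1 - i0) * (1 - i) + e0 * i0 * (1 - i)
            + (1 - e1) * i0 * i + e1 * (1 - i0) * i)

def p_j1(j, bits, fuse, pce0, pce1):
    # P_j^1 of Eq. (17): sum over received words s' with F_0(s') = 0, with the
    # other sensors' channels conditioned on their current decisions.
    L = len(bits)
    total = 0.0
    for s in itertools.product((0, 1), repeat=L):
        if fuse(s):
            continue  # only words with F_0(s) = 0 contribute
        pm = prod(p_bit(s[m], bits[m], pce0[m], pce1[m])
                  for m in range(L) if m != j)
        total += (1 - pce0[j] - pce1[j]) * (1 - 2 * s[j]) * pm
    return total

def gauss_seidel(lhat, fuse, pce0, pce1, max_iter=100):
    """Algorithm 1 sketch: lhat[i] = L_hat(Y_i) at the i-th Monte Carlo sample;
    returns the 0/1 decision of each sensor at each sample."""
    N, L = len(lhat), len(pce0)
    I = [[0] * L for _ in range(N)]          # Step 2: initial sensor rules
    for _ in range(max_iter):                # Step 3: Gauss-Seidel sweeps
        changed = False
        for j in range(L):
            for i in range(N):
                new = 1 if p_j1(j, I[i], fuse, pce0, pce1) * lhat[i] >= 0 else 0
                if new != I[i][j]:
                    I[i][j], changed = new, True
        if not changed:                      # Step 4: termination criterion
            return I
    return I
```

For a K-out-of-L rule with small error probabilities, the output agrees with Theorem 1.1: every sensor simply thresholds $\hat{L}(Y_i)$ at zero.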

4.2 Convergence of the iterative algorithm

Now, we show that Algorithm 1 must converge to a stationary point and the algorithm cannot oscillate infinitely, that is, it terminates after a finite number of iterations.

Lemma 3: Given the fusion rule $F_0$, for any initial values $I_1^{(0)}, \ldots, I_L^{(0)}$ in (26), $C_{MC}(I_1^{(n)}, \ldots, I_j^{(n)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N)$ must converge to a stationary point after a finite number of iterations.

Proof: For $j = 1, \ldots, L$, the value of $C_{MC}$ in (20) during the $(n+1)$th iteration can be written as

$$C_{MC}(I_1^{(n+1)}, \ldots, I_j^{(n+1)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N) = \frac{1}{N} \sum_{i=1}^{N} \Big[ \left( 1 - I_j^{(n+1)}(Y_i^j) \right) P_j^1\!\left( I_1^{(n+1)}(Y_i^1), \ldots, I_{j-1}^{(n+1)}(Y_i^{j-1}), I_{j+1}^{(n)}(Y_i^{j+1}), \ldots, I_L^{(n)}(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) + P_j^2\!\left( I_1^{(n+1)}(Y_i^1), \ldots, I_{j-1}^{(n+1)}(Y_i^{j-1}), I_{j+1}^{(n)}(Y_i^{j+1}), \ldots, I_L^{(n)}(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \Big] \frac{\hat{L}(Y_i)}{g(Y_i)} + c. \quad (31)$$

Similarly, we denote the iterated quantity $P_j^1 \hat{L}$ in (27)-(29) during the $(n+1)$th iteration by

$$G_j^i \triangleq P_j^1\!\left( I_1^{(n+1)}(Y_i^1), \ldots, I_{j-1}^{(n+1)}(Y_i^{j-1}), I_{j+1}^{(n)}(Y_i^{j+1}), \ldots, I_L^{(n)}(Y_i^L); F_0; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i), \quad (32)$$

for $i = 1, \ldots, N$ and $j = 1, \ldots, L$. Plugging $G_j^i$ into (31), we obtain

$$C_{MC}(I_1^{(n+1)}, \ldots, I_j^{(n+1)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N) = \frac{1}{N} \sum_{i=1}^{N} \frac{1 - I_j^{(n+1)}(Y_i^j)}{g(Y_i)} G_j^i + C_j, \quad (33)$$

where $C_j = c + \frac{1}{N} \sum_{i=1}^{N} P_j^2(I_1^{(n+1)}(Y_i^1), \ldots, I_{j-1}^{(n+1)}(Y_i^{j-1}), I_{j+1}^{(n)}(Y_i^{j+1}), \ldots, I_L^{(n)}(Y_i^L); F_0; P^{ce0}, P^{ce1}) \hat{L}(Y_i) / g(Y_i)$ is independent of $I_j^{(n)}$ and $I_j^{(n+1)}$. Splitting $1 - I_j^{(n+1)}(Y_i^j)$ into two terms, we obtain

$$C_{MC}(I_1^{(n+1)}, \ldots, I_j^{(n+1)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N) = \frac{1}{N} \sum_{i=1}^{N} \frac{1 - I_j^{(n)}(Y_i^j) + I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j)}{g(Y_i)} G_j^i + C_j = \frac{1}{N} \sum_{i=1}^{N} \frac{1 - I_j^{(n)}(Y_i^j)}{g(Y_i)} G_j^i + C_j + \frac{1}{N} \sum_{i=1}^{N} \frac{I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j)}{g(Y_i)} G_j^i = C_{MC}(I_1^{(n+1)}, \ldots, I_{j-1}^{(n+1)}, I_j^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N) + D_j^{(n+1)}, \quad (34)$$

where

$$D_j^{(n+1)} = \frac{1}{N} \sum_{i=1}^{N} \frac{I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j)}{g(Y_i)} G_j^i. \quad (35)$$

Note that (27)-(29) imply that $I_j^{(n+1)}(Y_i^j) = 0$ if and only if $G_j^i < 0$, and $I_j^{(n+1)}(Y_i^j) = 1$ if and only if $G_j^i \ge 0$, for $i = 1, \ldots, N$, $j = 1, \ldots, L$. This means

$$\left[ I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j) \right] G_j^i \le 0. \quad (36)$$

Thus, for all $i, j$,

$$\left[ I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j) \right] G_j^i / g(Y_i) \le 0, \quad (37)$$

where the inequality holds since $g$ is well-defined (i.e., $g > 0$). Combining (35) and (37) yields $D_j^{(n+1)} \le 0$. Thus, for $j \le L$,

$$C_{MC}(I_1^{(n+1)}, \ldots, I_j^{(n+1)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N) \le C_{MC}(I_1^{(n+1)}, \ldots, I_{j-1}^{(n+1)}, I_j^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N). \quad (38)$$

Furthermore,

$$C_{MC}(I_1^{(n+1)}, I_2^{(n+1)}, \ldots, I_L^{(n+1)}; F_0; P^{ce0}, P^{ce1}, N) \le C_{MC}(I_1^{(n)}, I_2^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N). \quad (39)$$

This means that $C_{MC}$ is nonincreasing. Note that $C_{MC}(I_1^{(n)}, I_2^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N)$ can take only finitely many values, since each $I_j^{(n)}(Y_i^j) \in \{0, 1\}$. We conclude that it must converge to a stationary point after a finite number of iterations.

Theorem 1.2: Given the fusion rule $F_0$, the sensor rules $I_1^{(n)}, I_2^{(n)}, \ldots, I_L^{(n)}$ are finitely convergent, i.e., Algorithm 1 converges after a finite number of iterations.

Proof: By Lemma 3, $C_{MC}$ must attain a stationary point after a finite number of iterations. This means that the value of $C_{MC}$ cannot change after the $n$th iteration, that is,

$$C_{MC}(I_1^{(n+1)}, \ldots, I_j^{(n+1)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N) = C_{MC}(I_1^{(n+1)}, \ldots, I_{j-1}^{(n+1)}, I_j^{(n)}, \ldots, I_L^{(n)}; F_0; P^{ce0}, P^{ce1}, N). \quad (40)$$

Using (34) and (40), we derive that $D_j^{(n+1)} = 0$. Combining (35)-(37), we know

$$\left[ I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j) \right] G_j^i = 0, \quad \text{for } i = 1, \ldots, N, \quad (41)$$

which implies that either $I_j^{(n)}(Y_i^j) - I_j^{(n+1)}(Y_i^j) = 0$, i.e., $I_j^{(n)}(Y_i^j) = I_j^{(n+1)}(Y_i^j)$, or $G_j^i = 0$, i.e., $I_j^{(n+1)}(Y_i^j) = 1$ and $I_j^{(n)}(Y_i^j) = 0$. It follows that when $C_{MC}$ converges to a stationary point, $I_j^{(n+1)}(Y_i^j)$ is either invariant or changes from 0 to 1; hence each $I_j^{(n+1)}(Y_i^j)$ can change at most a finite number of times. Therefore, $I_1^{(n)}, I_2^{(n)}, \ldots, I_L^{(n)}$ are finitely convergent.


5. Extension for simultaneous search for the optimal sensor rules and fusion rule

In this section, we extend the Monte Carlo method to search for the optimal sensor rules and the optimal fusion rule simultaneously. First, the necessary condition is generalized to cover both the sensor rules and the fusion rule. Then, we describe a generalized Monte Carlo Gauss-Seidel iterative algorithm and state its convergence.

5.1 A necessary condition for the optimal sensor rules and the optimal fusion rule

Note that (15) can be rewritten as follows:

$$C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N) = c + \frac{1}{N} \sum_{i=1}^{N} \sum_{k'=1}^{2^L} \sum_{k=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] P(s^{k'} \mid s^k) P(s^k \mid I(Y_i)) \frac{\hat{L}(Y_i)}{g(Y_i)} = c + \frac{1}{N} \sum_{k'=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] \sum_{i=1}^{N} \sum_{k=1}^{2^L} P(s^{k'} \mid s^k) P(s^k \mid I(Y_i)) \frac{\hat{L}(Y_i)}{g(Y_i)}, \quad (42)$$

where $P(s^k \mid I(Y_i)) = \prod_{j=1}^{L} \left[ s_j^k I_j(Y_i^j) + (1 - s_j^k)(1 - I_j(Y_i^j)) \right]$ and $I(Y_i) = (I_1(Y_i^1), I_2(Y_i^2), \ldots, I_L(Y_i^L))$. Since $P(s^k \mid I(Y_i)) = 1$ if and only if $I_j(Y_i^j) = s_j^k$ for all $j = 1, \ldots, L$, (42) can be simplified as follows:

$$C_{MC}(I_1(y_1), \ldots, I_L(y_L); F_0; P^{ce0}, P^{ce1}, N) = c + \frac{1}{N} \sum_{k'=1}^{2^L} \left[ 1 - F_0(s^{k'}) \right] \sum_{i=1}^{N} P(s^{k'} \mid I_1(Y_i^1), \ldots, I_L(Y_i^L)) \frac{\hat{L}(Y_i)}{g(Y_i)}, \quad (43)$$

where the terms with $P(s^k \mid I(Y_i)) = 0$ are eliminated.

Remark 6: According to (20) and (43), the necessary condition for the optimal sensor rules is similar to Lemma 2, and the necessary condition for the optimal fusion rule is given by

$$F_0(s^{k'}) = I\!\left[ \sum_{i=1}^{N} P(s^{k'} \mid I_1(Y_i^1), \ldots, I_L(Y_i^L)) \frac{\hat{L}(Y_i)}{g(Y_i)} \right] \quad (44)$$

for $k' = 1, \ldots, 2^L$. The proofs are similar to that of Lemma 2.
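Condition (44) can be sketched as a direct computation over the $2^L$ received words. An illustrative implementation, with `I[i]` the list of sensor decisions at sample $i$, `lhat[i]` $= \hat{L}(Y_i)$, and `g[i]` $= g(Y_i)$:

```python
import itertools
from math import prod

def p_bit(i0, i, e0, e1):
    # Channel transition probability P(I_j^0 = i0 | I_j = i), Eq. (5).
    return ((1 - e0) * (1 - i0) * (1 - i) + e0 * i0 * (1 - i)
            + (1 - e1) * i0 * i + e1 * (1 - i0) * i)

def fusion_update(I, lhat, g, pce0, pce1):
    """Necessary condition (44): for each received word s', decide H_1 iff the
    weighted sum of P(s' | I(Y_i)) * L_hat(Y_i)/g(Y_i) is nonnegative."""
    L = len(pce0)
    F0 = {}
    for s in itertools.product((0, 1), repeat=L):
        score = sum(
            prod(p_bit(s[j], I[i][j], pce0[j], pce1[j]) for j in range(L))
            * lhat[i] / g[i]
            for i in range(len(I)))
        F0[s] = 1 if score >= 0 else 0
    return F0
```

With an error-free channel and one sensor, the update simply assigns each received bit the sign of the likelihood evidence accumulated at the samples where that bit was sent.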

5.2 Generalized Monte Carlo Gauss-Seidel iterative algorithm

Based on the fixed-point type necessary condition, the generalized Monte Carlo Gauss-Seidel iterative algorithm is presented in Algorithm 2.

Remark 7: For any initial values $I_1^{(0)}, \ldots, I_L^{(0)}, F_0^{(0)}$, the Monte Carlo cost function $C_{MC}(I_1^{(n)}, \ldots, I_j^{(n)}, I_{j+1}^{(n)}, \ldots, I_L^{(n)}; F_0^{(n)}; P^{ce0}, P^{ce1}, N)$ must converge to a stationary point, and Algorithm 2 terminates after a finite number of iterations. The proofs are similar to those of Lemma 3 and Theorem 1.2.

Algorithm 2: Simultaneous optimization of the sensor rules and the fusion rule.

  • Step 1: Generate $N$ samples $Y_1, \ldots, Y_N \sim g(y)$, where $g(y)$ is an importance sampling density and $Y_i = (Y_i^1, Y_i^2, \ldots, Y_i^L)$.

  • Step 2: Initialize the $L$ sensor rules and the fusion rule, respectively: for $j = 1, 2, \ldots, L$, $i = 1, \ldots, N$, and $k' = 1, \ldots, 2^L$,

    $$I_j^{(0)}(Y_i^j) = 0/1, \qquad F_0^{(0)}(s^{k'}) = 0/1.$$

  • Step 3: Iteratively search for the $L$ sensor rules and the fusion rule until the termination criterion in Step 4 is satisfied. The $(n+1)$th iteration is given as follows: for $i = 1, \ldots, N$ and $k' = 1, \ldots, 2^L$,

    $$I_1^{(n+1)}(Y_i^1) = I\!\left[ P_1^1\!\left( I_2^{(n)}(Y_i^2), I_3^{(n)}(Y_i^3), \ldots, I_L^{(n)}(Y_i^L); F_0^{(n)}; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right],$$
    $$I_2^{(n+1)}(Y_i^2) = I\!\left[ P_2^1\!\left( I_1^{(n+1)}(Y_i^1), I_3^{(n)}(Y_i^3), \ldots, I_L^{(n)}(Y_i^L); F_0^{(n)}; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right],$$
    $$\vdots$$
    $$I_L^{(n+1)}(Y_i^L) = I\!\left[ P_L^1\!\left( I_1^{(n+1)}(Y_i^1), I_2^{(n+1)}(Y_i^2), \ldots, I_{L-1}^{(n+1)}(Y_i^{L-1}); F_0^{(n)}; P^{ce0}, P^{ce1} \right) \hat{L}(Y_i) \right],$$
    $$F_0^{(n+1)}(s^{k'}) = I\!\left[ \sum_{i=1}^{N} P(s^{k'} \mid I_1^{(n+1)}(Y_i^1), I_2^{(n+1)}(Y_i^2), \ldots, I_L^{(n+1)}(Y_i^L)) \frac{\hat{L}(Y_i)}{g(Y_i)} \right].$$

  • Step 4: For $i = 1, \ldots, N$ and $k' = 1, 2, \ldots, 2^L$, the termination criterion of the iteration process is

    $$I_1^{(n+1)}(Y_i^1) = I_1^{(n)}(Y_i^1), \quad \ldots, \quad I_L^{(n+1)}(Y_i^L) = I_L^{(n)}(Y_i^L); \qquad F_0^{(n+1)}(s^{k'}) = F_0^{(n)}(s^{k'}).$$


6. Numerical examples

In this section, in order to evaluate the performance of Algorithms 1 and 2, we present some examples with a Gaussian signal s observed in the presence of Gaussian sensor noises.

The random signal s and observation noises v1, v2, …, vL are as follows:

$$H_0: y_j = v_j; \qquad H_1: y_j = s + v_j, \quad \text{for } j = 1, \ldots, L, \quad (45)$$

where $v_1, v_2, \ldots, v_L, s$ are all mutually independent and

$$v_j \sim N(0, 0.6), \qquad s \sim N(1, 0.4), \quad \text{for } j = 1, \ldots, L.$$

Thus, given H0 and H1, the two conditional probability density functions are

$$p(y_1, y_2, \ldots, y_L \mid H_0) \sim N(\mu_0, \Sigma_0), \qquad p(y_1, y_2, \ldots, y_L \mid H_1) \sim N(\mu_1, \Sigma_1),$$

where $\mu_0, \mu_1, \Sigma_0, \Sigma_1$ are easily obtained from the relationship between $s$ and $v_1, v_2, \ldots, v_L$.
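The parameters follow directly from the model: under $H_0$ the observations are independent noises, while under $H_1$ the common signal $s$ adds a rank-one term $\mathrm{var}(s) \cdot \mathbf{1}\mathbf{1}^T$ to the covariance. A sketch with the chapter's values as defaults (function name illustrative):

```python
import numpy as np

def model_params(L, var_v=0.6, mean_s=1.0, var_s=0.4):
    """Conditional means/covariances implied by y_j = v_j (H0) and
    y_j = s + v_j (H1), with independent v_j ~ N(0, var_v) and s ~ N(mean_s, var_s)."""
    mu0 = np.zeros(L)
    sigma0 = var_v * np.eye(L)                            # H0: noise only
    mu1 = mean_s * np.ones(L)
    sigma1 = var_v * np.eye(L) + var_s * np.ones((L, L))  # shared s couples sensors
    return mu0, sigma0, mu1, sigma1
```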

Assume that each sensor is required to transmit a bit through a channel with error probabilities $P_j^{ce0} = P_j^{ce1} = p$, where $p = 0.05, 0.15, 0.3$, for $j = 1, 2, \ldots, L$. In the cost function (2), let the cost coefficients be $C_{00} = C_{11} = 0$ and $C_{10} = C_{01} = 1$. Receiver operating characteristic (ROC) curves are used to evaluate the performance of the algorithms. $P_f$ and $P_d$ denote the probability of false alarm and the probability of detection, respectively.

6.1 Two-sensor network

We compare the Monte Carlo Gauss-Seidel iterative algorithm with the centralized algorithm and with the iterative algorithm based on the Riemann sum approximation in [31] by using ROC curves.

In this case, we have $\mu_0 = (0, 0)^T$, $\mu_1 = (1, 1)^T$, $\Sigma_0 = \begin{pmatrix} 0.6 & 0 \\ 0 & 0.6 \end{pmatrix}$, and $\Sigma_1 = \begin{pmatrix} 1 & 0.4 \\ 0.4 & 1 \end{pmatrix}$. Several discrete values of $a$ and $b$ are used to plot the ROC curves. Recall from Section 2.2 that the optimal importance sampling density satisfies $g(y) \propto |P_{H_0^0}(y) \hat{L}(y)|$ with $\hat{L}(y) = a p(y \mid H_1) - b p(y \mid H_0)$; its form is similar to a Gaussian mixture. Therefore, the importance sampling density $g(y)$ is chosen to be a Gaussian mixture distribution. The effects of choosing different $g(y)$ on the performance of the Monte Carlo method were shown in [21] via numerical examples. For Algorithm 1, we take $N = 200$ samples from the density $g(y)$. For the Riemann sum approximation iterative algorithm in [31], we take a discretized step size $\Delta = 0.09$ with $y_j \in [-8, 10]$, i.e., $N_1 = N_2 = N = 200$. The ROC curves for three important fusion rules, AND, OR, and XOR, with $p = 0.05, 0.15, 0.3$ are plotted in Figure 2. We compare the computational time of the two algorithms with $p = 0.15$ in Figure 3. Note that the analytical solution is used for the AND rule and the OR rule; since the XOR rule is not a K-out-of-L rule, we use Algorithm 1 to search for the sensor rules.

Figure 2.

Two-sensor ROC curves with the probability of channel errors p=0.05,0.15,0.3.

Figure 3.

Two-sensor computational time as N increases with the probability of channel errors p=0.15.

Some observations in Figures 2 and 3 are presented as follows:

  • Given the fusion rule, the two points $(0, 0)$ and $(1, 1)$ may not be the starting or ending points of the ROC curves, which differs from the ideal-channel case. In addition, the larger the probability of channel errors, the farther the ROC curves are from $(0, 0)$ or $(1, 1)$. A possible reason is that, in the presence of channel errors, the detection probability is not equal to 0 or 1 even when the false alarm probability is 0 or 1.

  • From Figure 2, when the probability of channel transmission errors increases, the decision fusion performance of different methods using the optimal sensor rules decreases.

  • It can be seen in Figure 2 that the ROC curves of the new Monte Carlo approach are very close to those of the previous algorithm based on the Riemann sum approximation. However, from Figure 3, the computational time of the Monte Carlo importance sampling approximation is much less than that of the Riemann sum approximation for the three different fusion rules. It also implies that the new method can be used to deal with large-scale sensor networks.

  • Note that the computational time of the AND rule and the OR rule is less than that of the XOR rule for the Monte Carlo importance sampling approximation from Figure 3. The reason is that the AND rule and the OR rule belong to the K-out-of-L rules. The analytical form is used for the AND rule and the OR rule, therefore, the corresponding computation time is relatively lower.

6.2 Ten-sensor network

We consider a larger sensor network with 10 sensors, which cannot be dealt with by the previous decision fusion algorithm based on the Riemann sum approximation due to its heavy computation requirements. For different probabilities of channel errors, the ROC curves of the AND rule, the OR rule, the 4-out-of-10 rule, the 6-out-of-10 rule, and Algorithm 2 are plotted in Figure 4.

Figure 4.

Ten-sensor ROC curves with the probability of channel errors p=0.05,0.15,0.3.

Some observations in Figure 4 are presented as follows:

  • The ROC curves for the ten-sensor network exhibit similar behavior as those for the two-sensor network.

  • Given the probability of channel errors, the AND rule performs better than the other rules for small false alarm probabilities, and the OR rule performs better than the other rules for large false alarm probabilities. The reason may be that both are extreme cases of the K-out-of-L fusion rules. In the intermediate range, the 4-out-of-10 rule and the 6-out-of-10 rule perform better than the AND rule and the OR rule, respectively.

  • Except for the centralized detection algorithm, Figure 4 shows that the ROC curves generated by Algorithm 2 achieve almost the best performance for the different probabilities of channel errors. This implies that simultaneously searching for the sensor rules and the fusion rule provides better decision fusion performance.

6.3 One-hundred-sensor network

We consider a large-scale network with one hundred sensors. The parameter settings are similar to those in Section 6.2. The ROC curves of the 20-out-of-100 rule, the 40-out-of-100 rule, the 50-out-of-100 rule, the 60-out-of-100 rule, and the 80-out-of-100 rule are plotted in Figure 5.

Figure 5.

One-hundred-sensor ROC curves with the probability of channel errors p = 0.05, 0.15, 0.3.

From Figure 5, it can be seen that the dramatically lower computational requirement of our method enables us to handle a large sensor network consisting of one hundred sensors. This is because we have shown that analytical forms of the optimal sensor rules exist for the K-out-of-L rule. In addition, the decision fusion performance of the different methods improves as the number of sensors grows.
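To see why the complexity gap matters at this scale, compare the operation counts implied by the O(LN) and O(LN^L) complexity orders (a back-of-the-envelope sketch; N = 1000 samples is an arbitrary illustrative value, not the chapter's setting):

```python
L, N = 100, 1000   # 100 sensors, 1000 Monte Carlo samples

# O(LN): the Monte Carlo Gauss-Seidel iteration touches L * N terms.
monte_carlo_ops = L * N                   # 100,000 basic evaluations

# O(LN^L): a Riemann sum over an L-dimensional grid with N points per axis.
riemann_ops_digits = len(str(L * N**L))   # the count has over 300 digits
```

The O(LN^L) count is astronomically large for L = 100, which is why the Riemann-sum-based algorithm cannot be run on this network at all.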


7. Conclusion

By employing the Monte Carlo importance sampling technique, decision fusion algorithms have been provided for large-scale sensor networks with dependent observations and channel errors. The Bayesian cost function is approximated by the Monte Carlo cost function. The necessary conditions for the optimal sensor rules and the optimal fusion rule that minimize the Monte Carlo cost function have been obtained. Computationally efficient Monte Carlo Gauss-Seidel iterative algorithms have been proposed to search for the optimal sensor rules and the optimal fusion rule. These algorithms have been shown to converge after a finite number of iterations. The computational complexity of the new algorithm (i.e., O(LN)) is much less than that of the previous algorithm based on the Riemann sum approximation (i.e., O(LN^L)). For the K-out-of-L rule, an analytical solution has been presented for the optimal sensor rules. Simulations have demonstrated the effectiveness of Algorithms 1 and 2. Future work will include decision fusion algorithms under the Monte Carlo framework for other networks, such as tandem networks, tree networks, and sensor networks under Byzantine attack.
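The importance-sampling approximation underlying the Monte Carlo cost function can be illustrated on a generic rare-event expectation (a self-contained sketch, not the chapter's algorithm; the distributions and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def importance_estimate(h, f, g, sampler, n):
    """E_f[h(X)] ~ (1/n) * sum_i h(x_i) f(x_i)/g(x_i), with x_i drawn from g."""
    x = sampler(n)
    return float(np.mean(h(x) * f(x) / g(x)))

# Estimate the tail probability P(X > 2) under f = N(0,1) by sampling from
# the proposal g = N(2,1), which concentrates samples in the region of interest.
est = importance_estimate(
    h=lambda x: (x > 2.0).astype(float),
    f=lambda x: normal_pdf(x),
    g=lambda x: normal_pdf(x, mu=2.0),
    sampler=lambda n: rng.normal(2.0, 1.0, n),
    n=100_000,
)
# the exact value is about 0.02275; a plain Monte Carlo estimate with the same
# n would see only ~2,275 tail hits, while here every sample contributes
```

In the chapter's setting, the expectation being approximated is the Bayesian cost, and a well-chosen importance density keeps the Monte Carlo cost function accurate with a moderate number of samples N.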


Acknowledgments

This work was supported in part by the Sichuan Youth Science and Technology Innovation Team under Grants 2022JDTD0014 and 2021JDJQ0036.

References

  1. Blum RS, Kassam SA, Poor HV. Distributed detection with multiple sensors: Part II-Advanced topics. Proceedings of the IEEE. 1997;85(1):64-79
  2. Chair Z, Varshney PK. Optimal data fusion in multiple sensor detection systems. IEEE Transactions on Aerospace and Electronic Systems. 1986;22(1):98-101
  3. Li C, Li G, Varshney PK. Distributed detection of sparse stochastic signals with 1-bit data in tree-structured sensor networks. IEEE Transactions on Signal Processing. 2020;68:2963-2976
  4. Tenney RR, Sandell NR. Detection with distributed sensors. IEEE Transactions on Aerospace and Electronic Systems. 1981;17(4):501-510
  5. Tsitsiklis J, Athans M. On the complexity of decentralized decision making and detection problems. IEEE Transactions on Automatic Control. 1985;30(5):440-446
  6. Varshney PK. Distributed Detection and Data Fusion. New York: Springer-Verlag; 1997
  7. Viswanathan R, Varshney PK. Distributed detection with multiple sensors: Part I-Fundamentals. Proceedings of the IEEE. 1997;85(1):54-63
  8. Chen H, Chen B, Varshney PK. Further results on the optimality of the likelihood-ratio test for local sensor decision rules in the presence of nonideal channels. IEEE Transactions on Information Theory. 2009;55(2):828-832
  9. Hoballah IY, Varshney PK. Distributed Bayesian signal detection. IEEE Transactions on Information Theory. 1989;35(5):995-1000
  10. Reibman AR, Nolte LW. Optimal detection and performance of distributed sensor systems. IEEE Transactions on Aerospace and Electronic Systems. 1987;23(1):24-30
  11. Blum RS, Kassam SA. Optimum distributed detection of weak signals in dependent sensors. IEEE Transactions on Information Theory. 1992;38(3):1066-1079
  12. Tang Z-B, Pattipati KR, Kleinman DL. An algorithm for determining the decision thresholds in a distributed detection problem. IEEE Transactions on Systems, Man, and Cybernetics. 1991;21(1):231-237
  13. Tang Z-B, Pattipati KR, Kleinman DL. A distributed m-ary hypothesis testing problem with correlated observations. IEEE Transactions on Automatic Control. 1992;37(7):1042-1046
  14. Willett PK, Swaszek PF, Blum RS. The good, bad and ugly: Distributed detection of a known signal in dependent Gaussian noise. IEEE Transactions on Signal Processing. 2000;48(12):3266-3279
  15. Zhu Y, Blum RS, Luo Z-Q, Wong KM. Unexpected properties and optimum-distributed sensor detectors for dependent observation cases. IEEE Transactions on Automatic Control. 2000;45(1):62-72
  16. Chen H, Chen B, Varshney PK. A new framework for distributed detection with conditionally dependent observations. IEEE Transactions on Signal Processing. 2012;60(3):1409-1419
  17. Ozdemir O, Allen T, Choi S, Wimalajeewa T, Varshney PK. Copula based classifier fusion under statistical dependence. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017
  18. Sundaresan A, Varshney PK, Rao NSV. Copula-based fusion of correlated decisions. IEEE Transactions on Aerospace and Electronic Systems. 2011;47(1):454-471
  19. Zhang S, Khanduri P, Varshney PK. Distributed sequential detection: Dependent observations and imperfect communication. IEEE Transactions on Signal Processing. 2020;68:830-842
  20. Liao Y, Shen X, Rao H. Analytic sensor rules for optimal distributed decision given K-out-of-L fusion rule under Monte Carlo approximation. IEEE Transactions on Automatic Control. 2020;65(12):5488-5495
  21. Rao H, Shen X, Zhu Y, Pan J. Distributed detection fusion via Monte Carlo importance sampling. In: 35th Chinese Control Conference. Chengdu, China: IEEE; 2016. pp. 4830-4835
  22. Drakopoulos E, Lee C-C. Optimum multisensor fusion of correlated local decisions. IEEE Transactions on Aerospace and Electronic Systems. 1991;27(4):593-606
  23. Kam M, Zhu Q, Gray SW. Optimal data fusion of correlated local decisions in multiple sensor detection systems. IEEE Transactions on Aerospace and Electronic Systems. 1992;28(3):916-920
  24. Quan Z, Ma WK, Cui S, Sayed AH. Optimal linear fusion for distributed detection via semidefinite programming. IEEE Transactions on Signal Processing. 2010;58(4):2431-2436
  25. Zhu Y, Li X. Unified fusion rules for multisensor multihypothesis network decision systems. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans. 2003;33(4):502-513
  26. Zhu Y, Zhou J, Shen X, Song E, Luo Y. Networked Multisensor Decision and Estimation Fusion: Based on Advanced Mathematical Methods. Boca Raton: CRC Press; 2012
  27. Shen X, Zhu Y, He L, You Z. A near-optimal iterative algorithm via alternately optimizing sensor and fusion rules in distributed decision systems. IEEE Transactions on Aerospace and Electronic Systems. 2011;47(4):2514-2529
  28. Chen B, Jiang R, Kasetkasem T, Varshney PK. Channel aware decision fusion in wireless sensor networks. IEEE Transactions on Signal Processing. 2004;52(12):3454-3458
  29. Chen B, Willett PK. On the optimality of the likelihood-ratio test for local sensor decision rules in the presence of nonideal channels. IEEE Transactions on Information Theory. 2005;51(2):693-699
  30. Niu R, Chen B, Varshney PK. Fusion of decisions transmitted over Rayleigh fading channels in wireless sensor networks. IEEE Transactions on Signal Processing. 2006;54(3):1018-1027
  31. Ren Q, Zhu Y, Shen X, Song E. Optimal sensor rules and unified fusion rules for multisensor multi-hypothesis network decision systems with channel errors. Automatica. 2009;45(7):1694-1702
  32. Thomopoulos SCA, Zhang L. Distributed decision fusion in the presence of networking delays and channel errors. Information Sciences. 1992;66(1-2):91-118
  33. Yilmaz Y, Moustakides GV, Wang X. Channel-aware decentralized detection via level-triggered sampling. IEEE Transactions on Signal Processing. 2012;61(2):300-315
  34. Liu JS. Monte Carlo Strategies in Scientific Computing. New York: Springer Science & Business Media; 2008
  35. Robert CP, Casella G. Monte Carlo Statistical Methods. 2nd ed. New York: Springer; 2004
  36. Liao Y, Shen X, Zhu Y. Distributed detection fusion with nonideal channels under Monte Carlo framework. In: 20th International Conference on Information Fusion (Fusion). Xi'an, China: IEEE; 2017. pp. 1-8
  37. Vergara L. On the equivalence between likelihood ratio tests and counting rules in distributed detection with correlated sensors. Signal Processing. 2007;87(7):1808-1815
  38. Abdelhakim M, Lightfoot LE, Ren J, Li T. Distributed detection in mobile access wireless sensor networks under Byzantine attacks. IEEE Transactions on Parallel and Distributed Systems. 2014;25(4):950-959
  39. Viswanathan R, Aalo V. On counting rules in distributed detection. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1989;37(5):772-775
  40. Chaudhari S, Lundén J, Koivunen V, Poor HV. BEP walls for cooperative sensing in cognitive radios using K-out-of-N fusion rules. Signal Processing. 2013;93(7):1900-1908
  41. Friedman J, Hastie T, Tibshirani R. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York: Springer; 2009

Notes

  • In the rest of the chapter, the term “sensor rules” refers to the “decision rules at the sensors.”
