Wireless sensor networks (WSNs) have attracted a lot of recent research interest due to their applicability in security, monitoring, disaster relief and environmental applications. WSNs consist of a number of low-cost sensors scattered in a geographical area of interest and connected by a wireless RF interface. Sensors gather information about the monitored area and send this information to gateway nodes. The radio on board these sensor nodes has limited range and allows the node to transmit over short distances. In most deployment scenarios, it is not possible for each node to communicate directly to the sink and hence, the model of communication is to transmit over short distances to other peers in the direction of the sink nodes.
In order to keep their cost low, the sensors are equipped with limited energy and computational resources. The energy supply is typically in the form of a battery, and once the battery is exhausted, the sensor is considered to be dead. The nodes also have limited memory and processing capabilities. Hence, harnessing the potential of these networks involves tackling a myriad of different issues, from algorithms for network operation, programming models, architecture and hardware to more traditional networking issues. For a more detailed survey of the various computational research aspects of Wireless Sensor Networks, see the survey papers [2, 13, 24, 37, 39], the more recent books [23, 28], or a special issue of the CACM.
This section focuses on the algorithmic aspects of Wireless Sensor Networks. Specifically, we look at the problem of covering a set of targets or an area for the longest duration possible. The next section provides a more detailed discussion of the problem and a formal statement for it. It is worth mentioning that there is an abundance of algorithmic research related to WSNs. A lot of it focuses on traditional distributed computing issues like localization, fault tolerance and robustness. This naturally raises the interesting question of how different WSNs are as a computational model from more traditional distributed computing environments or even ad-hoc networks. This question has been explored briefly in [?].
2. Coverage problems
Many intended applications of Wireless Sensor Networks involve having the network monitor a region or a set of targets. To ensure that the area or targets of interest can be covered, sensors are usually deployed in large numbers by randomly dropping them in this region. Deployment is usually done by flying an aircraft over the region and air dropping the sensors. Since the cost of deployment far exceeds the cost of individual sensors, many more sensors are dropped than needed to minimally cover the region. This leads to a very dense network and gives rise to an overlap in the monitoring regions of individual sensors.
A simplistic approach to meeting the coverage objective would be to turn on all sensors after deployment. But this needlessly reduces the lifetime of the network, since the overlap between monitoring regions implies that not all sensors need to be on at the same time. It can also lead to a very lossy network, with frequent collisions at the medium access control (MAC) layer due to the density of nodes. In order to extend the lifetime of a sensor network while maintaining coverage, a minimal subset of the deployed sensors is kept active while the other sensors sleep. Through some form of scheduling, this active subset changes over time until there are no more such subsets available to satisfy the coverage goal. In using such a scheme to extend the lifetime, the problem is twofold. First, we need to select these minimal subsets of sensors. Then there is the problem of scheduling them, wherein we need to determine how long to use a given set and which set to use next. For an arbitrarily large network, there is an exponential number of possible subsets, making the problem intractable; it has been shown to be NP-complete in [6, 20].
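The twofold problem above can be sketched on a toy instance. The sketch below (sensor names, coverage sets and batteries are all hypothetical) first enumerates the minimal cover sets and then schedules them with a deliberately naive rule:

```python
from itertools import combinations

# Hypothetical toy instance: which targets each sensor can sense, and
# how many rounds of battery each sensor has.
coverage = {"s1": {"t1", "t2"}, "s2": {"t2"}, "s3": {"t1"}}
battery = {"s1": 2, "s2": 2, "s3": 2}
targets = {"t1", "t2"}

def minimal_covers(coverage, targets):
    """Enumerate minimal sensor subsets that cover every target."""
    sensors = sorted(coverage)
    covers = []
    for r in range(1, len(sensors) + 1):
        for subset in combinations(sensors, r):
            if set().union(*(coverage[s] for s in subset)) >= targets:
                # Keep only minimal subsets: skip supersets of known covers.
                if not any(set(c) <= set(subset) for c in covers):
                    covers.append(subset)
    return covers

def greedy_schedule(coverage, battery, targets):
    """Repeatedly pick the first feasible cover and run it for one round."""
    b = dict(battery)
    schedule = []
    while True:
        live = {s: coverage[s] for s in coverage if b[s] > 0}
        feasible = minimal_covers(live, targets)
        if not feasible:
            return schedule
        cover = feasible[0]      # naive choice; choosing more cleverly
        schedule.append(cover)   # is the subject of this chapter
        for s in cover:
            b[s] -= 1
```

The naive rule happens to do well here, but in general always picking the first feasible cover can starve other covers that share its sensors, which is exactly the dependency between covers studied later in this chapter.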
Centralized solutions like those in [6, 41] assume that the entire network structure is known at one node (typically the gateway node), which then computes the schedule for the network using linear programming based algorithms. Like any centralized scheme, they suffer from poor scalability, a single point of failure and a lack of robustness. The latter is particularly relevant in the context of sensor networks, since sensor nodes are deployed in hostile environments and are prone to frequent failures.
Existing distributed solutions in [4, 5, 42] work by having a sensor exchange information with its neighbors (limited to k hops). These algorithms use information like the targets covered and the battery available at each sensor to greedily decide which sensors remain on. Distributed algorithms are organized into rounds, so that the set of active sensors is periodically reshuffled at the beginning of each round. The problem with these algorithms is that they use simple greedy criteria to decide which sensors become active in each round and thus do not efficiently take the problem structure into account.
3. Problem statement
The lifetime problem can be stated as follows. Given a monitored region, a set of sensors and a set of targets, find a monitoring schedule for these sensors such that
the total time of the schedule is maximized,
all targets are constantly monitored, and
no sensor is in the schedule for longer than its initial battery allows.
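Treating each possible minimal cover C_1, ..., C_p as a variable, this statement can be written as a packing linear program of the kind used by several of the centralized approaches surveyed later (the notation t_j, b(s) is ours):

```latex
\begin{aligned}
\text{maximize}\quad & \sum_{j=1}^{p} t_j \\
\text{subject to}\quad & \sum_{j \,:\, s \in C_j} t_j \;\le\; b(s) \qquad \text{for every sensor } s,\\
& t_j \ge 0 \qquad \text{for } j = 1,\dots,p,
\end{aligned}
```

where t_j is the time for which cover C_j is kept active and b(s) is the initial battery of sensor s; the objective value is the total schedule length.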
A related problem is that of monitoring an area of interest. In general, the area and target coverage problems have been shown to be equivalent. [3, 10, 41] provide ways to map an area to a set of points (targets). In the work presented in the remainder of this dissertation, we focus on the target coverage problem with the implicit understanding that the algorithms and techniques presented can be translated to the area coverage problem by mapping the area to a set of points (virtual targets) with an appropriate granularity.
There are also several other variations of this basic problem. For example, one variation [?] requires only a certain percentage of all targets to be covered. The fault-tolerant k-coverage version of this problem requires each target to be covered by at least k sensors [27, 47]. Also, the basic problem has been modified to include sensors that have adjustable sensing ranges, non-uniform sensing shapes and other heterogeneous sensor network models.
4. Related work
In this section, we briefly survey existing approaches to maximizing the lifetime of sensor networks while meeting certain coverage objectives. [?] gives a more detailed survey of the various coverage problems and the scheduling mechanisms they use. [?] also surveys the coverage problem along with other algorithmic problems relevant to sensor networks. We end this section by focusing on two algorithms, LBP [?] and DEEPS [?], since we use them for comparisons against our algorithms.
A key application of wireless sensor networks is the collection of data for reporting. There are two types of data reporting scenarios: event-driven and on-demand [?]. Event-driven reporting occurs when one or more sensor nodes detect an event and report it to the sink. In on-demand reporting, the sink initiates a query and the nodes respond with data to this query.
Coverage problems essentially state how well a network observes its physical space. As pointed out in [?], coverage is a measure of the quality of service (QoS) for the WSN. The goal is to have each point of interest monitored by at least one sensor at all times. In some applications, it may be a requirement to have more than one sensor monitor a target for achieving fault tolerance. Typically, nodes are randomly deployed in the region of interest because exact sensor placement is infeasible. This means that more sensors are deployed than needed, to compensate for the lack of exact positioning and to improve fault tolerance in harsh environments. The question of placing an optimal number of sensors in a deterministic deployment has been looked at in [17, 26, 34]. However, in this dissertation we focus on networks with a very dense deployment of sensors, so that there is significant overlap in the targets each sensor monitors. This overlap will be exploited to schedule sensors into a low-power sleep state so as to improve the lifetime of these networks. Note that this definition of the network lifetime is different from some other definitions which measure it in terms of the number of operations the network can perform [?].
The reason for wanting to schedule sensors into the sense-sleep cycles discussed in Section 1 stems from the fact that sensor nodes have four states: transmit, receive, idle and sleep. As shown in [?] for the WINS Rockwell sensor, the transmit, receive and idle states all consume much more power than the sleep state; hence, it is desirable for a sensor to enter the sleep state to conserve its energy. The goal of sensor scheduling algorithms is to select the activity state of each sensor so as to allow the network as a whole to monitor its points of interest for as long as possible. For a more detailed look at power consumption models for ad-hoc and sensor networks, we refer the reader to [18, 19, 25]. We now look at coverage problems in more detail.
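As a rough illustration of why the sleep state matters, consider the sketch below. The milliwatt figures are made up for illustration and are not the measured WINS Rockwell values, though they follow the same ordering (transmit > receive > idle >> sleep):

```python
# Illustrative (not measured) power draws in milliwatts for the four
# radio states of a sensor node.
power_mw = {"transmit": 700.0, "receive": 360.0, "idle": 340.0, "sleep": 60.0}

def lifetime_hours(battery_mwh, state):
    """Hours a node lasts if it remains in a single state the whole time."""
    return battery_mwh / power_mw[state]

# A node parked in the sleep state lasts several times longer than a
# node left idle, even though neither is doing useful work.
ratio = lifetime_hours(1000.0, "sleep") / lifetime_hours(1000.0, "idle")
```

Even an idle radio burns energy at nearly the receive rate, which is why scheduling nodes into sleep, rather than merely not transmitting, is the lever that extends network lifetime.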
The maximum lifetime coverage problem has been shown to be NP-complete in [1, 6]. Initial approaches to the problem in [1, 6, 41] considered the problem of finding the maximum number of disjoint cover sets of sensors. This allowed each cover to be used independently of others. However, [3, 8] and others showed that using non-disjoint covers allows the lifetime to be extended further and this approach has been adopted since.
Broadly speaking, the existing work in this category can be classified into two classes: centralized algorithms and distributed algorithms. For centralized approaches, the assumption is that a single node (usually the base station) has access to the entire network information and can use it to compute a schedule that is then uploaded to individual nodes. Distributed algorithms work on the premise that a sensor can exchange information with its neighbors within a fixed number of hops and use this to make scheduling decisions. We now look at the individual algorithms in both these areas.
The centralized algorithms surveyed below rely on the following techniques: greedy selection of the maximum uncovered area (Abrams, Goel), integer linear programming, mixed integer programming, and a packing LP solved with the Garg-Könemann algorithm.
A common approach taken with centralized algorithms is to formulate the problem as an optimization problem and use linear programming (LP) to solve it [3, 6, 16, 33]. In [?], the authors develop a most-constrained least-constraining heuristic and demonstrate its effectiveness on a variety of simulated scenarios. The main idea of this heuristic is to minimize the coverage of sparsely covered areas within one cover. Such areas are identified using the notion of the critical element, defined as the element which is a member of the smallest number of sets. Their heuristic favors sets that cover a high number of uncovered elements, that cover more sparsely covered elements, that do not cover the area redundantly, and that redundantly cover only those elements that do not belong to sparsely covered areas. [?] is a follow-up work by the same authors in which they formulate the area coverage problem as an integer LP and relax it to obtain a solution. They also present several ILP-based formulations and strategies to reduce overall energy consumption while maintaining guaranteed sensor coverage levels. Additionally, their work demonstrates the practicality and effectiveness of these formulations on a variety of examples and provides comparisons with several alternative strategies. They also show that the ILP-based technique can scale to large and dense networks with hundreds of sensor nodes.
In order to solve the target coverage problem, [?] considers the disjoint cover set approach; modeling the solution as a mixed integer program shows an improvement over [?]. The authors define the disjoint set covers (DSC) problem and prove its NP-completeness. They also prove that any polynomial-time approximation algorithm for the DSC problem has a lower bound of 2. They first transform DSC into a maximum-flow problem (MFP), which is then formulated as a mixed integer program (MIP). Based on the solution of the MIP, the authors design a heuristic to compute the number of covers. They evaluate its performance by simulation against the most-constrained least-constraining heuristic proposed in [?] and find that their heuristic yields a larger number of covers (a longer lifetime) at the cost of a greater running time.
[?] formulates a packing LP for the coverage problem. Using the Garg-Könemann approximation algorithm [?], they provide an approximation of the problem. They also present an efficient data structure to represent the monitored area with a bounded number of points guaranteeing full coverage, which is superior to the previously used approach based on grid points in [?]. They also present distributed algorithms that trade off between monitoring and power consumption, but these are improved upon by the authors in LBP and DEEPS.
A similar problem is solved by us for sensors with adjustable ranges in [?]. We present a linear programming based formulation that also uses the Garg-Könemann approximation algorithm [?]. The main difference is the introduction of an adjustable range model that allows sensors to vary their sensing and communication ranges smoothly. This was the first model that allows sensors to vary their range to any value up to a maximum. The model is an accurate representation of physical sensors and allows significant power savings over the discretely varying adjustable model.
A different algorithm that works with disjoint sets is given in [?]. Disjoint cover sets are constructed using a graph coloring based algorithm that has area coverage lapses of about 5%. The goal of their heuristic is to achieve energy savings by organizing the network into a maximum number of disjoint dominating sets that are activated successively. The heuristic to compute the disjoint dominating sets is based on graph coloring. Simulation studies are carried out for networks of large sizes.
|Algorithm||Coverage||Disjoint||Selection criterion|
|Slijepcevic [?]||Area||Yes||Greedy: max uncovered fields|
|Tian [?]||Area||No||Geometric calculation of sponsored area|
|PEAS [?]||Area||No||Probing-based determination of sponsored area|
|CCP [?]||Area||No||Random timers to evaluate coverage requirements|
|OGDC [?]||Area||No||Random back-off node volunteering|
|Lu [?]||Area||No||Highest overall denomination sensor picks|
|Abrams [?]||Area||Yes||Randomized, greedy picks max uncovered area|
|Cardei et al. [?]||Target||No||Sensor with highest contribution to bottleneck|
|LBP [?]||Target||No||Targets are covered by higher-energy nodes|
|DEEPS [?]||Target||No||Minimize energy consumption for bottleneck target|
[?] also gives a centralized greedy algorithm that picks sensors based on the largest uncovered area. The authors design three approximation algorithms for a variation of the SET K-COVER problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. The first algorithm is randomized and partitions the sensors to within a fraction of the optimum. The other two are a distributed greedy algorithm and a centralized greedy algorithm. Approximation ratios are presented for each of these algorithms.
[?] also deals with the target coverage problem. Like similar algorithms, it extends the sensor network lifetime by organizing the sensors into a maximal number of set covers that are activated successively, but it allows non-disjoint set covers. The authors model the solution as the maximum set covers problem and design two heuristics that efficiently compute the sets, using linear programming and a greedy approach. The greedy algorithm selects a critical target at each step, namely the least-covered target; for the greedy selection step, the sensor with the greatest contribution to the critical target is selected.
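The greedy selection just described can be sketched as follows. The tie-breaking by remaining battery and the data layout are our own assumptions, not part of the original heuristic:

```python
def greedy_cover(coverage, battery, targets):
    """Build one non-disjoint cover by repeatedly serving the critical
    (least-covered) target with the sensor contributing most to it."""
    alive = {s for s in coverage if battery[s] > 0}
    cover, uncovered = set(), set(targets)
    while uncovered:
        # Critical target: covered by the fewest live sensors
        # (ties broken by name for determinism).
        crit = min(uncovered,
                   key=lambda t: (sum(1 for s in alive if t in coverage[s]), t))
        candidates = [s for s in alive - cover if crit in coverage[s]]
        if not candidates:
            return None  # the critical target cannot be covered at all
        # Greatest contribution: covers the most still-uncovered targets,
        # with remaining battery as an assumed tie-breaker.
        pick = max(candidates,
                   key=lambda s: (len(coverage[s] & uncovered), battery[s]))
        cover.add(pick)
        uncovered -= coverage[pick]
    return cover
```

Each call yields one cover; the surrounding heuristic would activate it for a while, deduct battery, and call the routine again.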
The distributed algorithms in the literature can be further classified into greedy, randomized and other techniques. The greedy algorithms [1, 4, 5, 8, 29, 41] all share the common property of picking the set of active sensors greedily based on some criterion. [?] considers the area coverage problem and introduces the notion of a field as the set of points that are covered by the same set of sensors. The basic approach is to first pick the sensor that covers the largest number of previously uncovered fields and to then avoid including more than one sensor that covers a sparsely covered field. [?] builds on this work and presents three algorithms that solve variations of the set k-cover problem. The greedy heuristic they propose works by selecting the sensor that covers the largest uncovered area. [?] defines the sensing denomination (SD) of a sensor as its contribution, i.e., the area left uncovered when the sensor is removed. The authors assume that each sensor can probabilistically detect a nearby event, and build a probabilistic model of network coverage by considering the data correlation among neighboring sensors. The greater the contribution of a sensor to the network coverage, the higher the sensor's SD. Based on the location information of neighboring sensors, each sensor can calculate its SD value in a distributed manner. Sensors with a higher sensing denomination have a higher probability of remaining active.
[?] gives a distributed algorithm based on using the faces of the graph. If all the faces that a sensor covers are covered by other sensors with higher battery that are in an active or deciding state, then the sensor can switch off (sleep). This work has been extended to target coverage in the load balancing protocol (LBP).
Some distributed algorithms use randomized techniques. Both OGDC [?] and CCP [?] deal with the problem of integrating coverage and connectivity. They show that if the communication range is at least twice the sensing range, a covered network is also connected. OGDC uses a random back-off for each node to make nodes volunteer to be the start node. It addresses the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode. The authors investigate the relationship between coverage and connectivity and derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. OGDC achieves similar coverage with up to a 50% improvement in the lifetime of the network. A drawback of OGDC is that it requires each node to know its own location.
In [?] the authors combine computational geometry with graph-theoretic techniques. They use Voronoi diagrams with graph search to design a polynomial-time worst- and average-case algorithm for coverage calculation with homogeneous isotropic sensors. They also analyze and experiment with using these techniques as heuristics to improve coverage.
[?] sets a random timer for each node, following which a node evaluates its current state based on the coverage provided by its neighbors. The authors present a Coverage Configuration Protocol (CCP) that can provide the different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications. They also integrate CCP with SPAN [11, 12] to provide both coverage and connectivity guarantees. [?] also presents a randomized algorithm that assigns a sensor to a cover chosen uniformly at random.
A different approach is taken in PEAS [42, 45], a distributed algorithm with a probing-based off-duty rule. PEAS is localized and has a high resilience to node failures and topology changes. Every sensor broadcasts a probe (PRB) packet with a probing range; any working node that hears this probe packet responds. If a sensor receives at least one reply, it can go to sleep. The range can be chosen based on several criteria. Note that this algorithm does not preserve coverage over the original area. The results for PEAS showed an increase in the network lifetime in linear proportion to the number of deployed nodes. In [?] the authors give a distributed and localized algorithm in which every sensor has an off-duty eligibility rule, and they give an algorithm for a node to compute its sponsored area. To prevent the occurrence of blind points caused by two sensors switching off at the same time, a random back-off is used. They show improved performance over PEAS.
To our knowledge, [?] was the first work to consider the k-coverage problem. [?] also addresses the k-coverage problem from the perspective of choosing enough sensors to ensure coverage; the authors consider different deployments with sensors given a probability of being active and obtain bounds for deployment. [?] solves the problem of picking minimum-size connected k-covers. The authors state this as an optimization problem and design a centralized approximation algorithm that delivers a near-optimal solution. They also present a communication-efficient localized distributed algorithm for this problem.
Now, we look at the two protocols that we compare our heuristics against. The load balancing protocol (LBP) [?] is a simple 1-hop protocol which works by attempting to balance the load between sensors. Sensors can be in one of three states: sense/on, sleep/off, or vulnerable/undecided. Initially all sensors are vulnerable and broadcast their battery levels along with information on which targets they cover. Based on this, a sensor decides to switch to the off state if its targets are covered by a higher-energy sensor in either the on or vulnerable state. On the other hand, it remains on if it is the sole sensor covering a target. This is an extension of the work in [?]. LBP is simplistic and attempts to share the load evenly between sensors instead of balancing the energy of the sensors covering a specific target.
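LBP's per-sensor decision rule can be sketched as follows. The state encoding and function signature are our own; the real protocol reaches this decision through broadcast message exchange rather than shared dictionaries:

```python
def lbp_decide(me, states, energy, covers):
    """One LBP rule evaluation for sensor `me` (a sketch, not the full
    protocol): returns the state the sensor should move to.

    states: sensor -> 'on' | 'off' | 'vulnerable'
    energy: sensor -> remaining battery
    covers: sensor -> set of targets it senses
    """
    for t in covers[me]:
        others = [s for s in covers
                  if s != me and t in covers[s]
                  and states[s] in ("on", "vulnerable")]
        if not others:
            return "on"          # sole coverer of t: must stay on
        if not any(energy[s] > energy[me] for s in others):
            return "vulnerable"  # no higher-energy sensor covers t yet
    return "off"                 # every target has a higher-energy coverer
```

Note how the rule looks only at per-target comparisons with 1-hop neighbors, which is what keeps LBP simple but also what prevents it from balancing energy around any specific target.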
The other protocol we consider is DEEPS [?]. The maximum duration that a target can be covered, known as the life of the target, is the sum of the batteries of all its nearby sensors that can cover it. The main intuition behind DEEPS is to try to minimize the energy consumption rate around the targets with smaller lives. A sensor thus has several targets with varying lives. A target is defined as a sink if it is the shortest-life target for at least one sensor covering that target; otherwise, it is a hill. To guard against leaving a target uncovered during a shuffle, each target is assigned an in-charge sensor. For each sink, its in-charge sensor is the one with the largest battery for which this is the shortest-life target. For a hill target, its in-charge is the neighboring sensor whose shortest-life target has the longest life. An in-charge sensor does not switch off unless its targets are covered by someone else. Apart from this, the rules are identical to those in the LBP protocol. DEEPS relies on two-hop information to make these decisions.
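The life of a target and the sink/hill classification used by DEEPS can be sketched as follows (the data layout is a hypothetical simplification; the protocol computes these from two-hop neighborhood information):

```python
def target_life(t, covers, battery):
    """Life of target t: total battery of all sensors that can cover it."""
    return sum(battery[s] for s in covers if t in covers[s])

def classify(covers, battery):
    """DEEPS-style labels: a target is a 'sink' if it is the shortest-life
    target of at least one sensor covering it; otherwise it is a 'hill'."""
    targets = set().union(*covers.values())
    life = {t: target_life(t, covers, battery) for t in targets}
    sinks = set()
    for s, ts in covers.items():
        if ts:
            # The shortest-life target of sensor s (name breaks ties).
            sinks.add(min(ts, key=lambda t: (life[t], t)))
    return {t: ("sink" if t in sinks else "hill") for t in targets}
```

Sinks are the bottlenecks the protocol protects: each gets an in-charge sensor that refuses to sleep until the target is otherwise covered.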
5. The lifetime dependency graph model
In this section, we introduce the Lifetime Dependency (LD) Graph as a model for the maximum lifetime coverage problem defined in Section 3. This model is a key contribution of this dissertation since the heuristics and algorithms that follow in subsequent sections rely heavily on the LD Graph.
Recall from Section 3 that given a sensor network and a set of static targets, the maximum lifetime sensor scheduling problem is to select a subset of sensors that covers all targets and then periodically shuffle the members of this subset so as to maximize the total time for which the network can cover all targets.
Since these sensors are powered by batteries, energy is a key constraint for these networks. Once the battery has been exhausted, the sensor is considered to be dead. The lifetime of the network is defined as the amount of time that the network can satisfy its coverage objective, i.e., the amount of time that the network can cover its area or targets of interest. Having all the sensors remain “on” would ensure coverage, but this would also significantly reduce the lifetime of the network as the nodes would discharge quickly. A standard approach to maximizing the lifetime is to make use of the overlap in the sensing regions of individual sensors caused by the high density of deployment. Hence, only a subset of all sensors need to be in the “on” or “sense” state, while the other sensors can enter a low-power “sleep” or “off” state. The members of this active set, also known as a cover set, are then periodically updated so as to keep the network alive for a longer duration. In using such a scheduling scheme, there are two problems that need to be addressed. First, we need to determine how long to use a given cover set, and then we need to decide which set to use next. This problem has been shown to be NP-complete [1, 6].
A key problem here is that since a sensor can be part of multiple covers, these covers have an impact on each other: using one cover set reduces the lifetime of every other set that shares sensors with it. Greedy choices ignore this dependency, since none of the heuristics in the literature study the reduction in the lifetime of other cover sets caused by using a sensor that is a member of several such sets. The earlier disjoint formulations mentioned in the previous section avoided this problem entirely by disallowing shared sensors.
We capture this dependency between covers by introducing the concept of a local Lifetime Dependency (LD) Graph. This consists of the cover sets as nodes, with any two nodes connected if the corresponding covers intersect. The graph is an example of an intersection graph, since it represents the sensors common to different cover sets. By looking at the graph locally (fixed 1-2 hop neighbors), we are able to construct all the local covers for the local targets and then model their dependencies. Based on these dependencies, a sensor can then prioritize its covers and negotiate these with its neighbors. We also present some simple heuristics based on the graph. The material presented in this section was published in [?].
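In code, the LD graph over a list of candidate covers is just an intersection graph. The sketch below represents covers as plain Python sets of sensor ids and labels each edge with the shared sensors:

```python
from itertools import combinations

def ld_graph(covers):
    """Build the lifetime dependency graph: nodes are covers (indices into
    the list) and an edge joins two covers that share at least one sensor.
    The edge maps to the set of shared sensors."""
    edges = {}
    for i, j in combinations(range(len(covers)), 2):
        shared = set(covers[i]) & set(covers[j])
        if shared:
            edges[(i, j)] = shared
    return edges

def degree(i, edges):
    """Degree of cover i in the LD graph."""
    return sum(1 for e in edges if i in e)
```

The degree computed here is exactly the quantity the heuristics in the following sections prioritize on: a low-degree cover disturbs few other covers when it is used.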
6. Symbols and definitions
Let us begin with a few basic conventions and definitions that will be used in the rest of this dissertation. Individual chapters will introduce additional notation as and when necessary. The notation presented here applies to the basic LD graph model and will be utilized for all the chapters that follow.
We will use s1, s2, etc., to represent sensors; t1, t2, etc., to represent targets; and C1, C2, etc., to denote covers.
Let us assume we have n sensors and m targets, both stationary.
Consider the sensor network shown in Figure 1 with its sensors and targets. We will employ the following definitions, illustrated using this network.
Cover: A set of sensors C is a cover for the targets in a set T if
(i) for each target in T there is at least one sensor in C which can sense it, and
(ii) C is minimal, i.e., no proper subset of C still covers all the targets in T. Non-minimal covers, i.e., covers that properly contain another cover, need to be avoided. The network of Figure 1 admits several such minimal covers for its targets.
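Minimality as defined above is easy to check mechanically. In the sketch below, `coverage` maps each sensor to the targets it senses (a hypothetical layout):

```python
def covers_all(C, coverage, targets):
    """True if the sensor set C covers every target."""
    return set().union(*(coverage[s] for s in C)) >= set(targets)

def is_minimal_cover(C, coverage, targets):
    """C is a minimal cover if it covers every target and no proper
    subset of it still does."""
    if not covers_all(C, coverage, targets):
        return False
    return all(not covers_all(C - {s}, coverage, targets) for s in C)
```

Dropping any single sensor from a minimal cover must break coverage; this single-sensor test suffices because coverage is monotone under adding sensors.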
An optimal lifetime schedule of length 6 for this network is a sequence of tuples, where each tuple is a cover for the entire network followed by the duration for which it is used.
7. Lifetime dependency (LD) graph
Let the local lifetime dependency graph be G = (V, E), where the nodes in V denote the local covers and edges in E exist between those pairs of nodes whose corresponding covers share one or more common sensors. For simplicity of reference, we will not distinguish between a cover and the node representing it, or between an edge joining two intersecting covers and the intersection set itself. Each sensor constructs its local LD graph considering its one- or two-hop neighbors and the corresponding targets. Figure 2 shows the local lifetime dependency graph of a sensor in the example network of Figure 1, considering its one-hop neighbors and its targets.
In the LD graph, we will use the following two definitions: the weight of an edge, i.e., the remaining battery of the sensors shared by the two covers it joins, and the degree of a cover, i.e., the number of edges incident on it.
In Figure 2, the two local covers for the targets of the sensor have a sensor in common, and the edge between the two covers is this shared sensor. That sensor's battery of 3 is therefore an upper bound on the lifetime of the two covers collectively. It just so happens that the individual lifetimes of these covers are each 1 due to their bottleneck sensors; therefore, a tighter upper bound on their total life is 2. In general, given two covers C1 and C2, a tight upper bound on their combined life is the minimum of the weight of the edge between them and the sum of their individual lifetimes.
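This bound translates into a two-line function. The sketch below assumes batteries are measured in rounds of operation and that a cover's lifetime is its bottleneck (minimum) battery:

```python
def cover_life(C, battery):
    """Lifetime of a single cover: its bottleneck (minimum) battery."""
    return min(battery[s] for s in C)

def pair_life_bound(C1, C2, battery):
    """Upper bound on the combined time two intersecting covers can be
    used: the shared sensors' bottleneck battery, tightened by the sum
    of the covers' individual lifetimes."""
    shared = set(C1) & set(C2)
    shared_bound = min(battery[s] for s in shared)
    return min(shared_bound, cover_life(C1, battery) + cover_life(C2, battery))
```

With the numbers from the example (a shared sensor with battery 3, bottleneck sensors with battery 1 in each cover), the bound evaluates to 2, matching the text.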
8. The basic algorithm
For the purpose of this explanation, and without loss of generality, let us assume that the covers are constructed over one-hop neighbors. The algorithm consists of two phases. During the initial setup phase, each sensor calculates and prioritizes its covers. Then, for each reshuffle round of predetermined duration, each sensor decides its on/off status at the beginning, and those chosen to be on remain on for the rest of the round.
Initial setup: Each sensor s communicates with each of its neighbors, exchanging their mutual locations, battery levels and the targets they cover. Then s finds all the local covers using the sensors in its neighborhood N(s,1) for the target set being considered; the latter can be solely the targets of s, or could also include the targets of each neighbor. It then constructs the local LD graph over those covers and calculates the degree of each cover in the graph.
The “priority function” of a cover is based on its degree (the lower the better). Ties among covers with the same degree are broken first by preferring (i) those with longer lifetimes, then (ii) those which have fewer remaining sensors to be turned on, and finally (iii) by choosing the cover containing the smallest sensor id. A cover which has a sensor turned off becomes infeasible and falls out of contention. Also, a cover whose lifetime falls below the duration of a round is taken out of contention, unless it is the only cover remaining.
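The priority function translates directly into a sort key (the field names below are illustrative, not part of the algorithm's specification):

```python
def priority_key(c):
    """Sort key for the priority function: lower degree first, then longer
    lifetime, then fewer sensors still to turn on, then the smallest
    sensor id contained in the cover."""
    return (c["degree"], -c["lifetime"], c["remaining"], min(c["sensors"]))

# Three candidate covers; the first two tie on degree, lifetime and
# remaining sensors, so the smallest contained sensor id decides.
covers = [
    {"sensors": {2, 5}, "degree": 1, "lifetime": 4, "remaining": 1},
    {"sensors": {1, 3}, "degree": 1, "lifetime": 4, "remaining": 1},
    {"sensors": {4},    "degree": 2, "lifetime": 9, "remaining": 0},
]
best = min(covers, key=priority_key)
```

Encoding the rule as a lexicographic tuple makes the tie-breaking order explicit and lets every sensor rank its covers with a single sort.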
Reshuffle rounds: The automaton in Figure 3 captures the algorithm for this phase. A sensor starts with its highest-priority cover as its most desirable configuration for its neighborhood. If successful, the end result is that all the sensors in this cover are switched on, while the others can sleep. Otherwise, it transitions to the next-best priority cover, and so on, until some cover gets satisfied. The transitions are as follows.
Continue with the best cover: A sensor continues with its current best cover C if a neighbor outside C goes off (thus not impacting the chances of ultimately satisfying C) or if a neighbor in C becomes on (thus improving the chances for C).
To on/sense status: If all the sensors in cover C except s itself have become on, s switches itself on, satisfying the cover C for its neighborhood, and sends its on-status to its neighbors.
To off/sleep status: If all the sensors in cover C become on, thus satisfying C, and s itself is not in C, then s switches itself off and sends its off-status to its neighbors.
Transition to the next best cover: Sensor s transitions to the next-best priority cover if (i) its current cover C becomes infeasible because a neighboring sensor has turned off, or (ii) the priority of C is now lower because a sensor has turned on, causing another cover with the same degree and lifetime as C to have fewer sensors remaining to be turned on.
The transitions from subsequent covers are analogous to those from the first, with the possibility of even going back to an earlier cover.
Correctness: We sketch a proof that this algorithm ensures that, in each reshuffle round, all the targets are covered, and that the algorithm itself terminates, enabling each sensor to decide and reach on/off status.
For contradiction, let us assume that in a given round a target t remains uncovered. This implies that either this target has no neighboring sensor within sensing range, and thus the network itself is dead, or else all the neighboring sensors which could have covered t have turned off. In the latter case, each sensor s whose target set contains t has made the transition from its current best cover to off status. However, s only does that if some cover C covers all its targets and s is not in C. The last such sensor to have turned off ensures that its cover C is satisfied, which implies that all the targets in it, including t, are covered, a contradiction. Next, for contradiction, let us assume that the algorithm does not terminate. This implies that there exists at least one sensor s which is unable to decide, i.e., to make a transition to either on or off status. There are three possibilities: (i) all the covers of s have become infeasible, or
(ii) s is continually transitioning to the next best cover and none of them is getting satisfied, or
(iii) s is stuck at a cover.
For case (i), for each cover at s, at least one of its sensors has turned off. But the set of targets considered by sensor s is no larger than T(s,1), and s itself can cover these targets. So if all the cover sets at s are infeasible, there exists a target t' in T(s,1) that none of the cover sets at s is able to cover, which implies that some sensor covering t' is off; otherwise that sensor would have formed part of a feasible cover at s covering t' (given that s constructs all possible covers). This leads to the contradiction, as before turning off, that sensor ensures that t' is covered.
For case (ii), each transition implies that a neighboring sensor has decided its on/off status, thereby making some of the covers at s infeasible and increasingly satisfying portions of some other covers, thus reducing the choices among the finite number of its covers. Eventually, when the last neighbor decides, s will be able to decide as well, becoming on if any target in T(s,1) is still uncovered, and going off otherwise.
For case (iii), the possibility that all sensors are stuck at their best initial covers is conventionally broken by the sensor with the least id in its current best cover proactively becoming on, even though the cover may not be completely satisfied. This is similar to the start-up problem faced by other distributed algorithms such as DEEPS, with similar deadlock-breaking solutions. At a later stage, if s is stuck at a cover, it means that either all its neighbors have decided or one or more neighbors are themselves stuck. In the former case, there exists a cover at s which will be satisfied by s becoming on (case i). The latter case is again resolved by the start-up deadlock-breaking rule, with either s or a stuck neighbor proactively becoming on.
Message and time complexities: Let us assume that each sensor constructs the covers over its one-hop neighbors to cover its targets in T(s,1) only. Let Δ be the maximum number of neighbors a sensor can communicate with. The communication complexity of the initial setup phase is O(Δ), assuming that there is a constant number of neighboring targets that each sensor can sense. Also, for each reshuffle round, a sensor receives at most Δ status messages and sends out one. Assuming Δ is a constant in practice implies that the message complexity is also a constant. Let the maximum number of targets a sensor considers be m, a constant. The maximum number of covers constructed by a sensor s during its setup phase is O(Δ^m), as each sensor in N(s,1) can potentially cover all of the targets considered. Hence the time complexity of the setup phase is O(Δ^2m) to construct the LD graph over all pairs of covers and calculate the priorities. For example, if m = 2, the time complexity of the setup phase would be O(Δ^4). The reshuffle rounds transition through potentially all the covers, hence their time complexity is O(Δ^m).
9. Variants of the basic algorithm
We briefly discussed some of the properties of the LD graph earlier. For example, an edge e connecting two covers C and C' yields an upper bound on the cumulative lifetime of both covers. However, if the weight w(e), which equals the battery life of the weakest shared sensor, is larger than the sum of the lifetimes of C and C', then the edge no longer constrains the usage of C and C'. Therefore, even though C and C' are connected, they do not influence each other's lifetimes. This leads to our first variant algorithm.
Variant 1: Redefine the edge weight w(e) as follows: if w(e) is larger than the sum of the lifetimes of the two covers it connects, then w(e) = 0; else w(e) is unchanged.
Thus, when calculating the degree of a cover, this edge would not be counted when it is not constraining, thus elevating the cover's priority. Next, the basic framework exploits the degree of a cover to heuristically estimate how much it impacts other covers, and the overall intent is to minimize this impact. Therefore, we sum the edge weights emanating from a cover to obtain its degree. However, if a cover C is connected to two covers C1 and C2 such that both C1 and C2 have the same bottleneck sensor b, then b is depleted by burning either C1 or C2. That is, in a sense, only one of C1 and C2 can really be burned completely, and then the other is rendered unusable because b is completely depleted. Therefore, for all practical purposes, C1 and C2 can be collectively seen as one cover. As such, the two edges connecting C to C1 and C2 can be thought of as one as well. This yields our second variant algorithm.
Variant 2: Redefine the degree of a cover C in the LD graph as follows. Let cover C be connected to a set of covers N(C) in the graph. If there are two covers C1 and C2 in N(C) sharing a bottleneck sensor, then if w(C, C1) is at least w(C, C2), remove C2 from N(C); else remove C1. With this reduced set of neighboring covers, the degree of cover C is the sum of the weights of the remaining edges.
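The variant-2 degree can be sketched as follows, under assumed representations (covers as frozensets of sensor ids, a battery map, bottleneck = minimum-battery sensor with ids breaking ties); when two neighboring covers share a bottleneck, only the heavier of the two edges is counted:

```python
# Illustrative sketch of the variant-2 reduced degree; the exact
# tie-breaking in the chapter's algorithm may differ.

def bottleneck(cover, battery):
    """Minimum-battery sensor of a cover (sensor ids break ties)."""
    return min(cover, key=lambda s: (battery[s], s))

def edge_weight(c1, c2, battery):
    """Lifetime of the weakest sensor shared by two covers, 0 if disjoint."""
    shared = c1 & c2
    return min(battery[s] for s in shared) if shared else 0

def variant2_degree(cover, neighbors, battery):
    """Sum edge weights, counting covers that share a bottleneck
    sensor only once (keep the neighbor with the larger edge weight)."""
    best = {}  # bottleneck sensor -> max edge weight to `cover`
    for c in neighbors:
        w = edge_weight(cover, c, battery)
        if w == 0:
            continue
        b = bottleneck(c, battery)
        best[b] = max(best.get(b, 0), w)
    return sum(best.values())
```

With two neighbors sharing the same bottleneck sensor, the degree counts only one of the two edges, as the variant prescribes.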
In the basic algorithm, each sensor constructs cover sets using its one-hop neighbors to cover its direct targets. However, with the same message overhead and slightly increased time complexity, a sensor can also consider its neighbors' targets. This enables it to explore the constraint space of its neighbors as well.
Variant 3: In this variant, each sensor constructs the LD graph over its one-hop neighbors N(s,1) and the targets in T(s,2).
Variant 4: In the basic two-hop algorithm, each sensor constructs the LD graph over its two-hop neighbors N(s,2) and the targets in T(s,1). In this variant, each sensor constructs the LD graph over N(s,2) and the targets in T(s,2).
10. Taming the exponential state space of the maximum lifetime sensor cover problem
If we consider the LD graph, it is quickly obvious that even creating this graph can take exponential time, since there are up to 2^n cover sets to consider, where n is the number of sensors. However, the target coverage problem has a useful property: if the local targets for every sensor are covered, then globally all targets are also covered. In our earlier work [15, 35], we make use of this property to look at the LD graph locally (over fixed one- or two-hop neighborhoods), construct all the local covers for the local targets, and then model their dependencies. Based on these dependencies, a sensor can then prioritize its covers and negotiate these with its neighbors. Simple heuristics based on properties of this graph showed a 10-15% improvement over comparable algorithms in the literature. Later work built on this by examining how an optimal sequence would pick covers in the LD graph and designing heuristics that behave in a similar fashion. Though the proposed heuristics are efficient in practice, the running time is a function of the number of neighbors and the number of local targets. Both of these are relatively small for most networks, but theoretically the running time is exponential in the number of targets and sensors.
A key issue that remains unresolved is how to deal with this exponential space of cover sets. In this chapter, we present a reduction of this exponential space to a linear one, based on grouping cover sets into equivalence classes. Given a set S and an equivalence relation on S, the equivalence class of an element a is the subset of all elements of S which are equivalent to a; we denote it by [a]. In the context of the problem being studied, S is the set of all sensor covers and, for any single cover C, [C] represents all the covers which are equivalent to C under the relation. Our approach stems from the understanding that, of the possibly exponential number of sensor covers, several covers are very similar, being only minor variations of each other. In Section 11, we present the definition of the relation, based on a grouping that considers cover sets equivalent if their lifetime is bounded by the same sensor. We then show the use of this relation to collapse the exponential LD graph into an Equivalence Class (EC) graph with a linear number of nodes. This theoretical insight allows us to design a sampling scheme that selects a subset of all local covers based on their equivalence-class properties and presents this as input to our simple LD-graph degree-based heuristic. Simulation results show that class-based sampling cuts the running time of these heuristics by nearly half, while resulting in a less than 10% loss in quality.
11. Dealing with the exponential space
In this section, we present our approach to dealing with the exponential solution space of possible cover sets. The next section utilizes these ideas to develop heuristics for maximizing the lifetime of the network. Even though the total number of cover sets for the network may be exponential in the number of sensors, for any given cover set there are several other sets that are very similar to it. We begin by defining this notion of similarity as an equivalence relation.
Definition 1: Let R be an equivalence relation defined on the set of all sensor covers such that C1 R C2 if and only if C1 and C2 share the same bottleneck sensor.
Theorem: R is an equivalence relation.
Proof: R is reflexive, since any cover trivially shares its bottleneck sensor with itself. R is symmetric, since if C1 R C2 then C2 R C1, as both covers share the same bottleneck sensor. Finally, R is transitive, since if C1 shares its bottleneck sensor with C2 and C2 shares its bottleneck sensor with C3, then clearly C1 and C3 have the same bottleneck sensor in common. Therefore, R is an equivalence relation.
Every equivalence relation defined on a set specifies how to partition the set into subsets such that every element of the larger set is in exactly one of the subsets. Elements that are related to each other are, by definition, in the same subset. Each such subset is called an equivalence class. Hence, the relation R partitions the set of all possible sensor covers into a number of disjoint equivalence classes.
Notation: Henceforth, we represent the equivalence class of covers sharing a bottleneck sensor s by [s]. Note that this is a slight abuse of notation, since s is not a member of this class but is instead the property that is common to all members of the class. Hence, [s] can be read as the equivalence class of all covers having sensor s as their bottleneck sensor.
We now define what we would call the Equivalence Class (EC) Graph. Each node of this graph represents an equivalence class. Just as the LD graph models the dependency between sensor covers, the EC Graph models the dependency between classes of covers.
Definition 2: Equivalence Class (EC) Graph. The Equivalence Class graph is the graph whose vertex set is the set of all equivalence classes defined by R, and in which two classes [s1] and [s2] are joined by an edge for every pair of covers, one from each class, that share some sensor in common. Hence, the graph is a multi-edge graph.
The cardinality of the vertex set of the Equivalence Class Graph is at most n, the number of sensors. This follows from the observation that, for any network of n sensors, there can be at most one equivalence class corresponding to each sensor, since every cover has exactly one of its sensors as its bottleneck (in case two or more sensors have the same battery and are the bottleneck, sensor ids can be used to break ties).
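The partition into classes and the collapse into the EC multigraph can be sketched as follows, again assuming covers as frozensets of sensor ids and a battery map (illustrative code, not the chapter's implementation):

```python
from collections import defaultdict
from itertools import combinations

def bottleneck(cover, battery):
    """Bottleneck sensor of a cover: minimum battery, ids break ties."""
    return min(cover, key=lambda s: (battery[s], s))

def equivalence_classes(covers, battery):
    """Partition covers into classes [s]: all covers with bottleneck s.
    At most one class per sensor, hence at most n classes."""
    classes = defaultdict(list)
    for c in covers:
        classes[bottleneck(c, battery)].append(c)
    return dict(classes)

def ec_graph(covers, battery):
    """Collapse the LD graph into the EC multigraph: every LD edge
    (two covers sharing a sensor) becomes an edge between the two
    covers' classes, keeping the shared bottleneck's lifetime."""
    nodes = set(equivalence_classes(covers, battery))
    edges = []  # (class, class, weight) triples; multi-edges allowed
    for c1, c2 in combinations(covers, 2):
        shared = c1 & c2
        if shared:  # an LD edge exists iff the covers share a sensor
            w = min(battery[s] for s in shared)
            edges.append((bottleneck(c1, battery),
                          bottleneck(c2, battery), w))
    return nodes, edges
```

On a toy input of three covers over three sensors, the three LD-graph nodes collapse into two class nodes while all three LD edges are retained as multi-edges.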
To better understand these definitions, let us consider an example. Consider the sensor network shown in Figure 4. The network comprises seven sensors and two targets. Observe that one of the targets is the bottleneck target for the network, since it is the least covered (8 units of total coverage compared to 33 for the other). Also note that only one sensor can cover both targets.
For the given network, the set of all possible minimal sensor covers is as follows.
For each individual cover in this set, the bottleneck sensor is the sensor shown in bold face.
Figure 5 shows the Lifetime Dependency graph for these covers. As defined, an edge exists between any two covers that share at least one sensor, and the weight of this edge is given by the lifetime of the shared sensor with the smallest battery (the bottleneck). For example, an edge exists between two of the covers because they share a common sensor, and the weight of that edge equals the shared sensor's battery.
To obtain the EC Graph from this LD Graph, we add a node to represent the equivalence class of each sensor that is the bottleneck for some cover. For the above example, given all the sensor covers in the set, there are three sensors that are each the bottleneck for one or more covers. Hence, the EC Graph is a three-node graph. Figure 6 shows the complete EC Graph for these covers. There is a node corresponding to the equivalence class of each of the three sensors, and for each cover in a class we retain edges to the class corresponding to the bottleneck sensor of the cover on which the edge terminated in the LD graph. Hence, we have three edges between two of the class nodes.
It is key to realize that the EC graph is essentially an encapsulation of the LD Graph that can have at most n nodes. This view is presented in Figure 7, where we show the LD Graph embedded into the EC Graph. Each rectangular box shows the nodes of the LD graph that are in the same equivalence class. This figure also illustrates our next theorem.
Theorem: For sensor covers in the same equivalence class, the induced subgraph on the LD Graph is a clique.
Proof: By definition, all sensor covers in a class [s] share the sensor s as their bottleneck. Hence every pair of covers in the class shares at least the sensor s, so an LD edge exists between every pair, and the induced subgraph on these nodes is a complete graph.
Also, a subtle distinction has been made between inter-class edges and intra-class edges in going from the LD graph to the EC graph.
12. Sampling based on the equivalence class graph
The previous section defined the concepts behind reducing the exponential space of covers in the LD Graph to the linear space of the EC Graph. In this section, we build on these concepts to discuss techniques for generating a limited number of covers for the LD Graph. Specifically, our goal is to improve the running-time performance of the distributed algorithms we presented in [15, 35]. As presented, the EC Graph is not immediately useful, since it still requires the exponential LD graph to be populated before it can be constructed. However, realizing that the exponential space of cover sets can be expressed in this linear space of equivalence classes, we can generate only a subset of the set of all covers.
Recall that even though the number of global sensor covers is exponential in the number of sensors, our heuristics presented in [15, 35] work by constructing local covers. After exchanging one- or two-hop coverage information with neighboring sensors, a sensor can exhaustively construct all possible local covers. A local cover here is a sensor cover that covers all the local targets. The number of local covers is also exponential, but is determined by the maximum degree of the graph and the number of local targets, typically much smaller values than the total number of sensors or targets. The heuristics then construct the LD graph over these local covers. The choice of which cover to use is determined by properties of the LD graph, such as the degree of each cover.
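Exhaustive local-cover construction amounts to choosing one covering sensor per local target. A sketch, assuming a map from each local target to the sensors that can sense it (minimality filtering is omitted for brevity):

```python
from itertools import product

def local_covers(coverers_by_target):
    """Enumerate all local covers: one covering sensor per local target.

    `coverers_by_target` maps each local target to the sensors able to
    sense it. The number of raw choices is exponential in the number of
    local targets (bounded by max-degree ** num-targets); deduplication
    keeps each distinct sensor set only once.
    """
    targets = list(coverers_by_target)
    covers = set()
    for choice in product(*(coverers_by_target[t] for t in targets)):
        covers.add(frozenset(choice))
    return covers
```

For two targets covered by sensors {1, 2} and {2, 3} respectively, this yields four distinct covers, including the single-sensor cover {2}.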
By making use of the idea of related covers in the same equivalence class, our goal is to use our existing heuristics from [15, 35], but modified to run over a subset of the local covers as opposed to all local covers. This should give considerable speedup, and if the subset is selected carefully, it may result in only a slight reduction of the overall lifetime. We present such a local cover sampling scheme in Section 12.1 and then present the modified basic algorithm of [15, 35] operating on this sample in Section 8. Finally, we evaluate the effectiveness of sampling in Section 13.
12.1. Local bottleneck target based generation of local cover sets
Understanding the underlying equivalence class structure, we now present one possible way of generating a subset of the local cover sets. Our approach is centered around the bottleneck target. For any target t, the total amount of time this target can be monitored by any schedule is bounded by the total battery of the sensors that can sense it: life(t) <= sum of b(s) over all s in S(t), where S(t) is the set of sensors that can sense t and b(s) is the battery lifetime of sensor s (assuming unit sensing cost).
Clearly, there is one such target with the smallest value, and it is hence a bottleneck for the entire network. Without global information it is not possible for any sensor to determine whether the global bottleneck is a target in its vicinity. However, for any sensor s, there is a least-covered target in T(s,1) that is the local bottleneck. A key thing to realize is that the global bottleneck target is also the local bottleneck target for the sensors in its neighborhood. Hence, if every sensor optimizes for its local bottleneck target, then one of these local optimizations also optimizes for the global bottleneck target. We use t* to denote this local bottleneck target. Let B be the set of sensors that can cover this local bottleneck target; that is, B = {s : t* is in T(s,1)}.
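Assuming unit sensing cost (so a sensor with battery b can watch a target for b time units), the coverage bound and the local bottleneck pick can be sketched as follows; the input maps and numbers are illustrative, not the values from Figure 4:

```python
def coverage_bound(target, coverers, battery):
    """Upper bound on the time `target` can be monitored: the summed
    battery of the sensors able to sense it (unit sensing cost)."""
    return sum(battery[s] for s in coverers[target])

def local_bottleneck(local_targets, coverers, battery):
    """Least-covered local target; smallest bound wins, name breaks ties."""
    return min(local_targets,
               key=lambda t: (coverage_bound(t, coverers, battery), t))
```

A sensor runs `local_bottleneck` over T(s,1) only; the global bottleneck target simply emerges as the local bottleneck of the sensors around it.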
Implementation: This understanding of bottleneck targets, along with our definition of equivalence classes, now gives us a simple means to generate local covers. Since no coverage schedule can do better than the total amount of time that the global bottleneck can be covered, instead of trying to generate all local covers, what we really need are covers in the equivalence classes corresponding to each sensor in B, such that each class can be completely exhausted. Also, to select only covers that conserve the batteries of the sensors in B, we want to ensure that the covers we generate are disjoint in B. In terms of equivalence classes, for any two classes [s1] and [s2] such that s1 and s2 are both in B, we want to generate cover sets that are in these classes but do not include both s1 and s2.
To generate such cover sets, we can start by picking only one sensor s in B. This ensures that the local bottleneck target is covered. For each remaining target in the one- or two-hop neighborhood being considered, we then randomly pick a covering sensor, giving preference to sensors not in B. Note that this does not necessarily create a sensor cover in the class [s], since any one of our randomly picked sensors could be the bottleneck of the cover generated. However, replacing that sensor with another randomly picked sensor that covers the same target ensures that we finish with a cover in [s]. Such a selection essentially ensures that we burn the entire battery of sensor s through different covers, while trying to avoid using other sensors in B. This process is then repeated for every sensor in B. Hence, instead of generating all local covers, we only generate a small sample (a constant number) of covers corresponding to the equivalence class of each sensor covering the bottleneck target, plus some related randomly picked covers. We already showed that there can be at most n equivalence classes for the network; thus the sampled graph has O(n) nodes. If we consider the maximum number of sensors covering any target to be a constant for the network, sampling only takes cumulative O(n) time, since we do this for n sensors, each of which has a constant maximum number of targets to cover, which are in turn covered by a constant number of sensors (as per our assumption). Even if this assumption is removed, in the worst case all sensors could be covering the same target, making the time complexity O(n^2). Next, we run our basic degree-based heuristic on this sampled LD graph.
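One way to read the generation procedure as code, under the same assumptions (unit sensing cost, a target-to-coverers map, covers as frozensets). The bottleneck-repair step is simplified here to a resample-and-check loop, so this is a sketch rather than the chapter's exact procedure:

```python
import random

def bottleneck(cover, battery):
    """Minimum-battery sensor of a cover (ids break ties)."""
    return min(cover, key=lambda s: (battery[s], s))

def sample_covers(local_targets, coverers, battery, anchor, tries=50):
    """Try to draw covers whose bottleneck is `anchor`, a sensor
    assumed to cover the local bottleneck target. Other coverers of
    that target are avoided when an alternative exists, keeping the
    generated covers disjoint in B."""
    # Local bottleneck target: smallest summed battery of its coverers.
    t_star = min(local_targets,
                 key=lambda t: sum(battery[s] for s in coverers[t]))
    B = set(coverers[t_star])
    covers = set()
    for _ in range(tries):
        cover = {anchor}
        for t in local_targets:
            if cover & set(coverers[t]):
                continue  # target already covered
            # Prefer sensors outside B to conserve the bottleneck coverers.
            outside = [s for s in coverers[t] if s not in B]
            cover.add(random.choice(outside or coverers[t]))
        cover = frozenset(cover)
        if bottleneck(cover, battery) == anchor:
            covers.add(cover)  # keep only covers in the class [anchor]
    return covers
```

Repeating this for every sensor in B yields a constant-size sample per class, over which the degree-based heuristic then runs.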
13. Performance evaluation
In this section, we evaluate the performance of the proposed sampling scheme against our degree-based heuristics. By not constructing all local covers, and instead constructing a few covers for key equivalence classes, we should achieve considerable speedup. But the effectiveness of sampling can only be evaluated by analyzing the tradeoff between faster running time and possibly reduced performance. The objective of our simulations was to study this tradeoff. For completeness, we create both one-hop and two-hop versions of our sampling heuristic and also compare its performance to two other algorithms in the literature, the 1-hop algorithm LBP and the 2-hop algorithm DEEPS.
In order to compare the equivalence-class-based sampling against our previous degree-based heuristics, LBP, and DEEPS, we use the same experimental setup and parameters as in our earlier work. We carry out all the simulations in C++. For the simulation environment, a static wireless network of sensors and targets scattered randomly in the area of interest is considered. We conduct the simulation with a fixed number of randomly deployed targets, and vary the number of sensors between 40 and 120 with an increment of 20; each sensor has a fixed sensing range. The communication range of each sensor is assumed to be twice the sensing range [44, 46]. For these simulations, we use the linear energy model, wherein the power required to sense a target at distance d is proportional to d. We also experimented with the quadratic energy model (power proportional to d^2). The results showed similar trends to those obtained for the linear model.
Figure 8 shows the network lifetime for the different algorithms. As can be seen from the figure, the sampling heuristic is only 7-9% worse than the degree-based heuristic. Sampling also outperforms the 1-hop LBP algorithm by about 10%. It is interesting to observe that for smaller network sizes, sampling is actually much closer to the degree-based heuristics in performance.
Now that we have seen that sampling compares well with the degree-based heuristic, the question that remains is: how much faster is the sampling algorithm? Figure 9 compares head-to-head the running time of the degree-based heuristic (potentially exponential in the number of local targets) and the linear-time sampling algorithm. As can be seen from the figure, the running time of the sampling algorithm is about half that of the degree-based heuristic.
Finally, we individually compare the 1-hop (Table 3) and 2-hop (Table 4) sampling heuristics with comparable algorithms. For the 1-hop algorithms, we also include a randomized-sampling algorithm that makes completely random picks for each target, without considering properties of the equivalence classes. The intention is to ensure that the performance of our sampling heuristic can be attributed to the selection algorithm. For the 2-hop versions of the proposed sampling heuristic, the target set of each sensor is expanded to T(s,2) and the neighbor set is expanded to all 2-hop neighbors, N(s,2). Covers are then constructed over these sets using the same process as before. As can be seen from the tables, both the 1-hop and 2-hop versions are under 10% worse than the comparable degree-based heuristics. Also, the 2-hop sampling slightly outperforms DEEPS, with a 5% improvement in network lifetime.
Despite a lot of recent research effort, creating real-world deployable sensor networks remains a difficult task. A key bottleneck is the limited battery life of sensor motes; hence, energy conservation at every layer of the network stack is critical. Creating realistic theoretical models for problems in this domain that take this into account remains a challenge. Our work addresses energy efficiency at only one point in the network stack. However, a holistic approach to energy-efficient design should not only account for energy concerns in each layer of the network stack, for problems like routing and medium access, but also consider cross-layer issues and interactions.
In this chapter, we presented models and heuristics to address the coverage problem in Wireless Sensor Networks. Our work points to the potential of lifetime dependency graphs, while also highlighting the shortcomings of applying standard distributed algorithms to this problem. In order to successfully bridge the gap between the theory and practice of wireless sensor networks, there is a clear need for algorithms designed with the unique constraints of these networks in mind. The improvements in network lifetime obtained by our approach, using the dependency graph and the heuristics that stem from it, serve to underscore this point.
The author would like to thank his wife Kelli for her support while working on this chapter.