Review of Optimization Problems in Wireless Sensor Networks



Introduction
Wireless Sensor Networks (WSNs) are an interesting field of research because of their numerous applications and the possibility of integrating them into more complex network systems. The difficulties encountered in WSN design usually relate either to their stringent constraints, which include energy, bandwidth, memory and computational capabilities, or to the requirements of the particular application. As WSN design problems become more and more challenging, advances in the areas of Operations Research (OR) and Optimization are becoming increasingly useful in addressing them.
This study is concerned with topics relating to network design (including coverage, topology and power control, the medium access mechanism and the duty cycle) and to routing in WSN. The optimization problems encountered in these areas are affected simultaneously by different parameters pertaining to the physical, Medium Access Control (MAC), routing and application layers of the protocol stack. The goal of this study is to identify a number of different network problems, and for each of these network problems to examine the underlying optimization problem. In each case we begin by presenting the basic version of the network problem and extend it by introducing new constraints. These constraints result mainly from technological advances and from additional requirements present in WSN applications. For all the network problems discussed here a wide range of algorithms and protocols are to be found in the literature. We cite only some of these, since we are concerned more with the network optimization problem itself, together with its different versions, than with a state of the art of methods for solving it. Moreover, the cited methods have originated in a variety of disciplines, with approaches ranging from the deterministic to the opportunistic, including computational geometry, linear, nonlinear and dynamic programming, metaheuristics and heuristics, game theory, and so on. We go on to discuss the complexity inherent in different optimization problems, in order to give some hints to WSN designers facing new but similar scenarios. We also try to highlight distributed solutions and the information required to implement these schemes.
For each topic the general presentation scheme is as follows: i) present the network problem, ii) identify the relevant optimization problem, iii) discuss the theoretical complexity of the optimization problem, and iv) describe some representative solution methods, including distributed methods. The relations between the two areas of WSN network design and OR have been discussed in some other works (Li, 2008; Nieberg, 2006; Ren et al., 2006; Suomela, 2009). In (Li, 2008; Nieberg, 2006; Suomela, 2009) the goal is to relate a network problem to its corresponding optimization problems and to discuss related questions in the OR literature that might feature in a solution. For example, Suomela (2009) focuses on data gathering and scheduling problems in WSN. He identifies the respective optimization problems and presents some nice properties that a graph should have (e.g. bipartite, unique identifiers, planar, spanner, etc.) to facilitate the design of distributed algorithms for these optimization problems. Ren et al. (2006) present a survey highlighting certain methodologies from operational research and the corresponding network problems that they can solve. In particular they relate graph theory and network flow problems to routing problems in WSN, fuzzy logic to clustering, and game theory to the problem of bandwidth allocation. Following on from these works we attempt to enlarge the spectrum of the network problems addressed, and for each network problem we highlight the optimization problem together with some effective methods proposed in the literature. Furthermore, at the end of the study we report a discussion on open issues. This chapter is organized as follows. The second section introduces certain methods from OR which are used to solve problems in WSN. The goal is to familiarize the reader with the terminology and methods that are encountered in the OR domain and referred to in the remainder of the study.
In the third section we discuss several problems of WSN design, most of which must be addressed in the setup phase of the network. The fourth section is concerned with routing problems. We report a classification of the most commonly used models and focus on how each of them is useful in addressing routing problems in WSN. The final section identifies some open issues in WSN and gives concluding remarks.

Operations research methodology used in WSN design
This section aims to introduce the reader to OR terminology and some representative solution methods from OR that are already used in WSN design. An Optimization Problem (OP) in OR is composed of two main parts. One is the objective/cost function to be maximized/minimized, and the second is concerned with the associated constraints that determine the feasibility domain. A solution of the OP is feasible if it satisfies all the constraints. From a computational complexity point of view, an OP is said to be polynomial if there exists a polynomial-time algorithm for solving it; otherwise it falls into the class of NP-hard problems. The methods used to solve an OP can be classified into two groups: exact methods and heuristic methods.
1. Exact methods seek a global optimal solution (if it exists) for the problem. The most familiar techniques among the exact methods commonly used for OPs in WSN are Linear, Nonlinear and Dynamic Programming. A general linear programming (LP) formulation is as follows:

max cx subject to Ax ≤ b, x ≥ 0 (1)

where x is the vector of decision variables, c the cost vector, and Ax ≤ b the set of linear constraints.

2. Heuristic methods are an important class of solution methods for practical optimization problems in WSN exhibiting high computational complexity. These approaches are intended to quickly provide near-optimal solutions to difficult optimization problems that cannot be solved exactly. Their advantages include easy implementation, rapidly-obtained solutions and robustness to variations in problem characteristics. However, in most cases they cannot guarantee the quality of the solution produced. Heuristic methods include local improvement methods that perform searches within the neighborhood of a feasible solution to the problem, and improve/construct the solution step by step by taking the best local decision at each step. The main danger here is getting trapped at a local optimum, and to overcome this danger these methods may be combined with random approaches, multi-start approaches, and so on. Similarly, metaheuristics are very general approaches used to guide other methods or procedures towards achieving reasonable solutions. Metaheuristics aim at reducing the search space and avoiding local optima. Among the best-known metaheuristics, many of them nature-inspired, are Tabu Search (Glover, 1989), Evolutionary/Genetic algorithms (EA/GA) (Holland, 1975), Memetic algorithms (Moscato, 1999), Ant Colony Optimization (ACO) (Dorigo et al., 1996), and Particle Swarm Optimization (PSO) (Kennedy & Eberhart, 1995). Tabu Search starts with one feasible solution and constructs its neighborhood out of members that are obtained by permuting the elements of the feasible solution. The objective function is next calculated for each member of the neighborhood and the best one is selected.
The process is then repeated but with the newly selected member as the starting point. An important element in this algorithm is loop-avoidance, meaning that it must not return to a solution that has already been processed, and for this reason all the forbidden movements are saved in a tabu list. In evolutionary or genetic algorithms the solutions of the problem are called individuals. A relatively small set of individuals, selected from the enormous search space of the optimization problem, is chosen to form the population. The population evolves during the iterations in a certain order known as generations. Genetic operators such as mutation and crossover are applied to produce better individuals. Their performance is evaluated based on a fitness/cost function. The algorithm stops when the solution is close to the optimum, or when a specific number of generations has been reached. Memetic algorithms combine GA with a local search. These algorithms follow the logic of a GA, but before applying genetic operators, every individual carries out a local search with the aim of improving its fitness. In ant colony optimization, an ant starts from a random node in the graph and selects the next node based on Equation (4).
P_ij = (τ_ij^α · η_ij^β) / Σ_{l ∈ List} (τ_il^α · η_il^β) (4)

where P_ij is the probability of choosing node j when the current node is i, τ_ij is the pheromone value of edge (i, j), η_ij the heuristic value, List contains all possible nodes accessible by the ant, and α, β are constants whose values depend on the nature of the problem. In order to use an ACO algorithm for an OP, it is important to define the pheromone and heuristic values meaningfully. When an ant passes through a node/edge, it deposits a pheromone value τ_ij on it. This value has to be proportional to the quality of the solution, and it will help to attract other ants from the colony. The intention is that all the ants end up following the same trail, which hopefully represents the optimal solution. In order to avoid local optima this algorithm contains a process known as evaporation, which periodically reduces the pheromone value deposited on a trail. The PSO algorithm imitates the flocking of birds. It initializes a number of agents (birds) and attaches two parameters, position and velocity, to each of them. At each iteration the algorithm evaluates the positions of the agents and determines the subsequent positions, while accelerating their movement toward "better" solutions.
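To make the ACO transition rule concrete, the Python sketch below computes the Equation (4) probabilities from pheromone and heuristic tables and applies the periodic evaporation step. The function names (`transition_probabilities`, `evaporate`) and the dictionary encoding of τ and η are illustrative assumptions, not taken from any cited work.

```python
def transition_probabilities(i, candidates, tau, eta, alpha=1.0, beta=2.0):
    """Equation (4): P_ij proportional to tau_ij^alpha * eta_ij^beta,
    normalized over the candidate nodes still open to the ant."""
    weights = {j: tau[(i, j)] ** alpha * eta[(i, j)] ** beta for j in candidates}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def evaporate(tau, rho=0.1):
    """Evaporation step: shrink every pheromone value by a factor (1 - rho)
    so that old, unreinforced trails gradually lose their attractiveness."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)
```

An ant would sample its next node from the returned distribution; evaporation is applied once per iteration over the whole pheromone table.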

Network design issues
WSN design has to address a number of challenging factors. These include node deployment and coverage, connectivity and fault tolerance. The overall aim is always to lower costs, reduce the power consumption of the wireless environment and ensure a reliable network. Node deployment is the first essential stage in WSN design, and it strongly impacts the performance of the network as regards accurate event detection and efficient communication.
Once the nodes are deployed, the problems of network organization become crucial, with topology and power control problems on one side, and medium access and scheduling strategies on the other. Solving these problems is an integral part of the design of a viable, energy-efficient network.

Optimal sensor deployment and coverage
WSN applications have particular requirements to satisfy, and one common to all of them is coverage. The problem of maximizing the coverage of a given monitoring area has received a lot of attention in the literature. In this subsection we focus on three main problems related to this topic. First we discuss the problem of the minimum number of sensors required to cover a given area and guarantee network connectivity. The second problem is finding the best locations for a finite number of sensor nodes when seeking to satisfy the requirement of event detection. The third problem is identifying the regions that are not covered by sensors, assuming that the deployment is known.
The WSN deployment (or layout) problem is concerned with minimizing the number of deployed sensor nodes while ensuring the full coverage and connectivity of the monitoring area. As presented in (Efrat et al., 2005), the problem is a version of the Art Gallery problem, which is known to be NP-hard. The Art Gallery problem involves placing the smallest number of guards in an area such that every point in it can be surveyed by the guards. In this work Efrat et al. (2005) also show that the problem of deciding whether k sensors are sufficient to survey a region such that every point within the region is covered by three sensors is NP-hard. They propose an approximation algorithm based on geometry calculations for solving the problem.
However, most of the algorithms proposed for the layout problem derive from metaheuristic and heuristic methods. The work of Rotar et al. (2009) uses a new algorithm known as the Guided Hyper-plane Evolutionary Algorithm (GHEA). GHEA behaves essentially like a multi-objective evolutionary algorithm manipulating a population of individuals. Whereas in evolutionary algorithms the individuals are evaluated according to a fitness function, the novelty of GHEA lies in its evaluation of the population based on a hyperplane, consisting of points in the space which have better performance than any individual within the current population. Fidanova et al. (2010) propose an ant colony algorithm for this problem. As previously mentioned, ACO algorithms emulate the behavior of a real ant colony, where the greater the number of ants following a trail, the more attractive the trail becomes. In this case the area is modeled as a grid and all the points on the grid (or nodes) represent the search space. To apply the ant algorithm to the layout problem, it is necessary to calculate the pheromone and heuristic values of Equation (4) every time an ant passes through a node. The heuristic value attempts to reflect the best candidate node for the future movement of the ant (the new sensor placement) based on local information such as the number of grid points the new candidate covers, whether the candidate is reachable within a given distance (determined by the sensor transmission range), and whether this position has already been selected by another ant. The pheromone, on the other hand, is initialized with a small value (e.g. the inverse of the number of ants) and in subsequent iterations its value is updated according to the best solution value of the previous iteration.
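As a much simpler baseline than GHEA or the ACO of Fidanova et al., the sketch below applies a classic greedy set-cover heuristic to the same grid model: repeatedly place a sensor at the grid point that covers the most still-uncovered points. The circular coverage model and function names are illustrative assumptions.

```python
def covered_points(center, points, radius):
    """Grid points within the (assumed circular) sensing radius of a sensor."""
    cx, cy = center
    r2 = radius * radius
    return {p for p in points if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r2}

def greedy_layout(points, radius):
    """Repeatedly place a sensor at the grid point covering the most
    still-uncovered points, until the whole grid is covered."""
    uncovered = set(points)
    placed = []
    while uncovered:
        best = max(points, key=lambda p: len(covered_points(p, uncovered, radius)))
        gain = covered_points(best, uncovered, radius)
        if not gain:          # nothing left reachable: give up
            break
        placed.append(best)
        uncovered -= gain
    return placed
```

Greedy set cover carries a logarithmic approximation guarantee, which is why it is a common yardstick against which metaheuristic layouts are compared.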
In terms of the quality of service, attempts are made to find areas of lower observability from sensor nodes and to detect breach regions. The problem known as the Sensor Location Problem (SLP), formulated by Cavalier et al. (2007), can be stated as follows: given a planar region, a given number of sensor nodes need to be positioned so that the probability of detecting an event in this region is maximized. The probability of non-detection at a point is the product of the individual sensors' non-detection probabilities, each expressed as a function of the distance between the sensor and the point where an event may take place, and the objective is to minimize the maximum of this product over the region. In this formulation the problem is a difficult nonlinear nonconvex programming problem. Cavalier et al. (2007) propose a heuristic algorithm that uses Voronoi polygons to estimate the probability of non-detection and to determine a search direction. The heuristic begins with an initial solution of m sensor locations (x_1, x_2, ..., x_m), on the basis of which the Voronoi diagram is constructed (see Fig. 1(a) and (b)). The construction of the Voronoi diagram must also take into account the area of the region. For every node the algorithm determines the point in its Voronoi polygon with the highest probability that an event will not be detected, and defines these points as the new node locations. The process is repeated until no further improvement is possible. We note that a similar problem is encountered by the wireless communication community in GSM networks and in content-distribution networks (CDNs). In GSM networks the problem is to find an optimal deployment of base stations within a region so that it provides the maximum possible coverage. In CDNs the problem is to determine the locations of proxies where popular streams can be cached.
This problem turns out to be the classical weighted p-center location problem, where the objective is to locate p identical facilities that minimize the maximum weighted distance between clients and their corresponding (closest) facilities, assuming that each client is served by the closest facility (Averbakh & Berman, 1997). The p-center problem is slightly different from SLP (note that for the SLP problem the clients correspond to events and the facilities correspond to sensors). A p-center solution gives an assignment, because each demand is assigned to a facility, while in SLP the event point (demand) can be visible to more than one sensor node (facility).
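For small instances the weighted p-center objective can be evaluated, and even solved by brute force, which helps make the contrast with SLP concrete. The function names and the use of a finite candidate-site set are illustrative assumptions; the general problem is NP-hard, so the exhaustive search is only viable on tiny inputs.

```python
import math
from itertools import combinations

def p_center_cost(facilities, clients, weights):
    """Weighted p-center objective: the largest weighted distance from any
    client to its closest open facility."""
    return max(w * min(math.dist(c, f) for f in facilities)
               for c, w in zip(clients, weights))

def brute_force_p_center(candidates, clients, weights, p):
    """Try every p-subset of candidate sites and keep the cheapest one;
    only viable for tiny instances, since the general problem is NP-hard."""
    return min(combinations(candidates, p),
               key=lambda fac: p_center_cost(fac, clients, weights))
```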
Once the sensors are deployed, coverage describes how well the sensors observe their target area or certain moving targets within this target area. In this context we need to know the path, known as the maximal breach path, that maximizes the minimum distance between any point on the path and its nearest sensor node. In other words, this path connects the two endpoints while remaining as far away as possible from the sensor nodes. It was shown in (Duttagupta et al., 2008) that this problem is NP-hard. Most works in the literature propose methods relying on computational geometry and graph theory. Meguerdichian et al. (2001) suggest constructing the Voronoi diagram for the set of nodes in order to compute the maximal breach path. The edges of the Voronoi diagram contain the points of the space which are locally farthest from the given set of sensors. These edges are weighted according to their distance from the nearest sensor. In this graph, the maximal breach path is a path maximizing the minimum weight of its edges. A breadth-first-search (BFS) algorithm is then applied to find the maximal breach path. The Voronoi diagram and the maximal breach path are depicted in Fig. 1.
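The core of this search can be sketched as a maximin ("widest path") computation on the weighted graph built from the Voronoi edges. The variant below uses a Dijkstra-like priority-queue search instead of the BFS of the original paper; the adjacency-list encoding and function name are assumptions for illustration.

```python
import heapq

def maximal_breach_value(graph, source, target):
    """Maximin ('widest path') search: the largest b such that some
    source-target path uses only edges of weight >= b.  graph maps a node
    to a list of (neighbor, weight) pairs; in the breach-path setting the
    weight of an edge is its distance to the nearest sensor."""
    best = {source: float('inf')}
    heap = [(-best[source], source)]          # max-heap via negated keys
    while heap:
        neg, node = heapq.heappop(heap)
        breach = -neg
        if node == target:
            return breach                     # first pop of target is optimal
        if breach < best.get(node, float('-inf')):
            continue                          # stale heap entry
        for nbr, w in graph.get(node, ()):
            cand = min(breach, w)             # bottleneck along this path
            if cand > best.get(nbr, float('-inf')):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return float('-inf')                      # target unreachable
```

Recording the predecessor of each node when `best` is updated would recover the breach path itself, not just its value.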


Topology control
Node deployment can give rise to dense networks where sensors can have multiple potential neighboring nodes in common. This situation may lead to congestion and energy waste. To overcome this problem, topology control techniques are used to reduce the initial topology by choosing a subset of nodes having some property. Here the problem is finding a strongly connected subset of nodes that covers the rest of the nodes, so as to guarantee the connectivity of the whole network. This subset will be the backbone of the network, and every node excluded from it must have at least one edge in common with a node belonging to the subset.
There are a number of advantages in obtaining a backbone topology, since for instance it may i) reduce network traffic by performing data aggregation and in-network processing, ii) avoid packet collisions as only the backbone nodes will forward packets to the sink while improving network throughput, and iii) make it possible to turn off the non-backbone nodes to save energy. This subsection will discuss the optimization problem for constructing the reduced topology, and the special case in which the lossy links are taken into account.
The problem is modeled as a widely-known mathematical problem called the Connected Dominating Set (CDS). A Dominating Set of a graph G(V, E), with node set V and edge set E, is a subset of nodes D ⊂ V such that every node that does not belong to D has at least one link in common with a node in D. In the special case in which these nodes have to be connected, the set is called the Connected Dominating Set (CDS). For many applications the smallest dominating set is sought, which brings us to the problem of finding the Minimum Connected Dominating Set (MCDS). The nodes in a CDS are called dominators, while other nodes are called dominatees. The MCDS problem is known to be NP-hard, being closely related by reductions to the vertex cover problem, the maximum independent set problem and the maximum clique problem. Yuanyuan et al. (2006) propose a two-phase method for obtaining a CDS. In the first phase a Maximal Independent Set (MIS) is constructed: an Independent Set (IS) S is a subset of nodes no two of which are adjacent, and the MIS is a maximal IS, which means that it is not possible to include more nodes in S.
In the second phase, the goal is to build a CDS using nodes that do not belong to the MIS. These nodes are selected in a greedy manner: at each step, the non-MIS node with the highest weight (the weight depends on the remaining energy and the degree of the node) becomes part of the CDS, as depicted in Fig. 2. Unfortunately, a CDS only preserves 1-connectivity and it is therefore very vulnerable. When fault tolerance against node failures is taken into account, the problem becomes the kmCDS (k-Connected m-Dominating Set) problem. The requirement of k-connectivity guarantees that between any pair of dominators there exist at least k different paths, and the m-domination guarantees that each dominatee is connected with m dominators. Wu & Li (2008) propose a distributed algorithm for this problem with time complexity O((m + ∆) · Diam), where ∆ is the maximum node degree and Diam is the diameter of the network (the length of the longest shortest path between any pair of nodes in the graph). Li (2008) assumes that the MCDS nodes are aligned according to a strip-based deployment pattern, as in Fig. 3, where the nodes are deployed in straight lines. The difference with a grid pattern is that the odd lines are horizontally shifted by a given distance in relation to the even lines. This pattern is shown to be a near-optimal solution of MCDS for an infinite network in terms of space. Because a WSN is a finite network, the spacing parameter in this pattern, and consequently the number of nodes, needs to be adapted. The optimization problem aims to minimize the number of nodes in the strip-based pattern such that the coverage areas (defined by the node transmission range) of three neighboring nodes in the pattern intersect each other (see Fig. 3). The solution of this problem gives the positions of the CDS nodes, and makes implementation in a real scenario easier.
Assuming a given finite area with the sensor nodes uniformly deployed in it, for every position determined by the algorithm the closest sensor in the network is selected to belong to the CDS.

Fig. 3. A strip-based pattern (redrawn from (Li, 2008)).

The distributed approach requires that nodes exchange certain information, such as the distance from the ideal positions and the number of neighbors that they cover, in order to make a decision regarding membership of the CDS. Nonetheless, the problem of finding the MCDS becomes more complex for dynamic or mobile networks, and this question is still open.
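Returning to the two-phase MIS-then-connect construction described earlier, a simplified, unweighted sketch (not the exact weighted algorithm of Yuanyuan et al.) might look as follows. The set-based adjacency encoding and the greedy tie-breaking rules are illustrative choices.

```python
def maximal_independent_set(adj):
    """Phase one: greedy MIS.  Scan nodes (highest degree first) and keep
    a node only if none of its neighbors has already been kept."""
    mis, blocked = set(), set()
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v] | {v}
    return mis

def connect_mis(adj, mis):
    """Phase two: greedily add non-MIS connectors until the dominator set
    induces a connected subgraph (checked with a depth-first search)."""
    def connected(nodes):
        if not nodes:
            return True
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] & nodes)
        return seen == nodes

    cds = set(mis)
    # prefer connectors adjacent to many current dominators
    for v in sorted((u for u in adj if u not in mis),
                    key=lambda u: -len(adj[u] & cds)):
        if connected(cds):
            break
        cds.add(v)
    return cds
```

By construction the MIS already dominates every node, so phase two only has to restore connectivity among the dominators.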
Up to now we have taken "neighbors" to refer to those nodes that are reachable if a node transmits with a given power. In (Liu et al., 2010; Ma et al., 2008) the authors also take into account the existence of lossy links. A lossy link has an additional parameter representing the probability of a successful transmission over the link. Topology control algorithms that consider these links are known as opportunistic algorithms. The problem considered in (Ma et al., 2008) is to minimize the number of hops between a node in the network and the sink while guaranteeing that the path utility (a metric reflecting the expected number of packet transmissions required to successfully deliver a packet) falls within a given interval. The distributed approach requires that a node knows the utility values and the IDs of its 2-hop neighbors, on the basis of which it decides whether or not to act as a relay node. Liu et al. (2010), on the other hand, demonstrate that the problem of finding a subnetwork of the original network (the subnetwork has to contain all the nodes but only a subset of the links of the original network) which minimizes the overall energy consumption and guarantees that the reachability coefficient (RC) for every node-sink pair exceeds a particular threshold is NP-hard. The RC is a coefficient that indicates the probability of a node being able to reach another node in the network, while the respective threshold is imposed by the application requirements. When calculating the RC for two nodes that are connected by a path, the RC is taken to be the mean of the RC values of the links that constitute the path. The key idea of their solution is that link-disjoint trees can be constructed, the union of which gives the subnetwork. A node decides to join in the construction of some tree if its RC is less than the particular threshold.

Power control
Unlike the topology control problem, which seeks to minimize the size of the network backbone while assuming uniform and constant transmission power, the power control problem (also referred to as the Range Assignment (RA) problem or Strong Minimum Energy Topology (SMET)) aims to set each node's transmission power at an appropriate level. The goal here is to reduce energy consumption while preserving connectivity in the network. Different methods proposed in the literature for solving this problem are discussed in this subsection. We present some extended versions which add new constraints to this problem with respect to i) throughput, ii) traffic and iii) reliability.
SMET has been shown by Cheng et al. (2003) to be an NP-hard optimization problem. To tackle the problem they propose two heuristics: Minimum Spanning Tree (MST), where power is assigned to nodes such that they can reach the farthest children in the MST, and Incremental Power (IP). In the IP heuristic the power of the node is allocated in a greedy manner. The heuristic begins with an empty set of nodes, to which it then adds a node chosen randomly from the network. This node adjusts its power to reach its closest neighbor. Further, each member of the set tries to increase its power to include another node, but the only member to succeed will be the one that expends the least energy in achieving this end.
The algorithm stops when all the nodes are included in the set. Since transmitting with the same power can lead to energy waste, some methods based on computational geometry, such as Relative Neighbor Graph (RNG) (Wan et al., 2001), Gabriel Graph (GG) (Ke et al., 2009), Yao graph or Voronoi Diagram have been put forward to determine the "best neighborhood". In these methods two nodes can be neighbors if there are no other nodes in the zone of intersection. The main difference between them is the way that they define this intersection zone. Fig. 4 shows how the intersection zone is constructed in RNG and GG. The idea behind computational geometry implementations is that the energy cost of transmitting directly to some nodes would be less than the cost of using any other relaying scheme to reach them, and so it is worthwhile to use certain methods to discover a node's best neighbors. In many cases the node can reduce its energy so as to be able to reach only its best neighbors. Many algorithms proposed to construct these graphs are centralized, but there also exist distributed versions (Li et al., 2002). A memetic algorithm is proposed by Konstantinidis et al. (2007) to solve the SMET problem. In reality, the difficulty of applying this kind of algorithm is modeling the problem according to the algorithm's logic, and deciding for example how to define a chromosome, how to implement crossover, how to handle population diversity, etc. The solution to the SMET problem takes the form of an array of positive integers, in which the elements of the array correspond to the power levels assigned to each node, and the respective indexes correspond to the node ID. In Fig. 5 we have 5 sensor nodes which are transmitting with a given power. From this scenario an array of 5 elements is constructed which contains the power values ordered by node ID. 
This array represents an individual of the population. The objective of the SMET problem is given by the fitness function, defined as the sum of the powers assigned to the nodes. The first phase of the algorithm proceeds by initializing a random population. It then applies a local search to check the feasibility of the solutions, modifies them in order to obtain feasible ones, and improves the solutions by reducing the assigned power where possible. In the second phase a genetic algorithm is applied, which involves the crossover of the selected individuals and mutation for maintaining population diversity. Finally the best individuals from each generation are retained. The procedure is repeated until the solution cannot be further improved.
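The encoding and fitness evaluation underlying such a memetic approach can be sketched as follows, under the simplifying assumption that a node's power value acts directly as a transmission radius; an infeasible (not strongly connected) assignment is penalized with an infinite fitness. All names are illustrative, not from the cited work.

```python
def induced_links(positions, powers):
    """Directed link i -> j exists when node i's assigned power, taken here
    as a transmission radius (a simplifying assumption), reaches node j."""
    n = len(positions)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        xi, yi = positions[i]
        for j in range(n):
            if i != j:
                xj, yj = positions[j]
                if powers[i] >= ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5:
                    adj[i].add(j)
    return adj

def strongly_connected(adj):
    """Every node reaches every other: check forward and reverse
    reachability from node 0."""
    def reach(start, graph):
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(graph[v])
        return seen
    nodes = set(adj)
    rev = {v: {u for u in adj if v in adj[u]} for v in adj}
    return reach(0, adj) == nodes and reach(0, rev) == nodes

def fitness(positions, powers):
    """Sum of assigned powers, with an infinite penalty when the induced
    topology is not strongly connected (an infeasible individual)."""
    if not strongly_connected(induced_links(positions, powers)):
        return float('inf')
    return sum(powers)
```

A local-search step in the memetic algorithm would then try to lower individual entries of `powers` while `fitness` remains finite.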
Lately, this problem has been extended to take some other important parameters into account. The problem of maximizing the throughput using topology control is discussed in (Tao et al., 2010). Assuming that the WSN is represented by an RNG (or a GG), their algorithm adjusts the intersection zone between two neighboring nodes in the respective graph (the intersection zone between two neighbors is depicted in Fig. 4) such that the throughput is maximized. They show that if the area of the intersection zone between two neighboring nodes varies within a given interval, the network preserves its connectivity and energy-efficiency properties. Their solution is based on mathematical analysis, and an equation is derived whose solution guarantees the maximal throughput; the equation takes as inputs the node density and the expected throughput of the network. Gogu et al. (2010), on the other hand, discuss the problem of transmission range assignment and optimal deployment to reduce the energy consumption while taking node traffic into account. The solution is based on dynamic programming methods and it gives the optimal number of sensor nodes and their transmission ranges for a linear network operating under different traffic scenarios. This work also includes an extension to the multihop network case with aggregation (Fig. 6). Hence for a given random deployment of sensors (the blue points in the figure), the algorithm calculates the number of nodes that will be in charge of aggregating and relaying the data towards the base station (the red points), their locations, and the respective transmission ranges. Valli & Dananjayan (2008) discuss the problem of topology control to maximize network reliability, measured by the bit error rate (BER). They model the problem as a game where each node in the network represents a player. Based on some local information a node calculates a utility function which depends on the link's BER.
In every iteration, each node tries to optimize this function non-cooperatively until the system reaches a Nash equilibrium. Another approach is adopted by (Yang & Cai, 2008) to deal with QoS requirements.
Residual energy, end-to-end delay and link loss ratio are the QoS parameters considered. The question is how to allocate power values to the nodes such that energy consumption is minimized, the network is connected, and the above QoS requirements are met. The proposed solution is a distributed heuristic based on the minimum spanning tree (MST), where the link metric is a function of delay and packet loss ratio. Once this tree is constructed, each node adjusts its power just enough to be able to reach its parent.

Medium access strategies
In this subsection we look at a node's strategies for accessing the medium. These strategies govern the coordination between the nodes in the network so that they can access the medium and perform successful transmissions. Most of the work on medium access strategies in WSN falls under two main approaches: scheduled access and random (contention-based) access (Ye & Heidemann, 2003). TDMA (Time Division Multiple Access) is one of the common mechanisms falling under the scheduled approach, whereas CSMA (Carrier Sense Multiple Access) and its derivatives are the most commonly-used methods based on channel contention. Other solutions, more common in cellular networks but also used by the WSN community, are FDMA (Frequency Division Multiple Access) and CDMA (Code Division Multiple Access). The TDMA, FDMA and CDMA mechanisms are employed in WSN to ensure collision-free medium access. In this subsection we present the basic problem related to each of them. We then describe three extended versions of TDMA relating to i) connectivity, ii) traffic and iii) delay. For FDMA the extended constraint is throughput. Regarding CDMA, we discuss the problems related to the joint use of CDMA with TDMA or FDMA.
Under the scheduled approach the basic problem is to obtain a slot allocation for all nodes in the network using the smallest possible number of slots, such that k-hop neighbor nodes (where k is a positive integer, usually equal to 2) are not allocated the same time slot. The corresponding optimization problem is the graph coloring problem, which aims to minimize the number of colors used to color the nodes such that no two adjacent nodes use the same color. This problem is addressed in several works that have put forward a number of distributed algorithms (Al-Khdour & Baroudi, 2010; Gandham et al., 2005; Kawano & Miyazaki, 2009; Sridharan & Krishnamachari, 2004). In (Sridharan & Krishnamachari, 2004) slot allocation follows the logic of a breadth-first search, where the first node to allocate a slot is the root of the tree (the sink). Once a node is selected, it continues the operation of slot allocation based on the information from its neighbors. Gandham et al. (2005) discuss the edge-coloring problem, where two edges incident on the same node cannot be assigned the same time slot. They propose a greedy heuristic whose first step colors the edges and whose second step maps the colors to time slots. The second step uses edge orientation to avoid the hidden and exposed terminal problems. A simple example is shown in Fig. 7. The process begins with node 6 (the node with the largest ID), which picks a color from a set of colors and broadcasts this information to its neighbors. On reception of this information node 5 picks a different color, and so on. This process continues until all the nodes have colored their edges. Then, edge orientation is applied to the edges with the same color. So, for instance, in Fig. 7(c) let us imagine the case where node 4 transmits to 6.
Because of node 4's transmission, the level of interference may be sufficiently high to corrupt the transmission on the link (2, 3). The same problem is reexamined by (Al-Khdour & Baroudi, 2010), under the assumption that nodes can communicate on different frequencies. Nowadays radio chips support multichannel transceivers, which can help to reduce the number of required time slots in a TDMA frame. The distributed heuristic algorithm proposed in this work is based on solving the TDMA problem in a tree structure. The base station collects the information from its children to calculate how many slots are needed (e.g. 3 slots are required in Fig. 8(a)). Next, every parent allocates a time slot to its children (Fig. 8(b)). Each branch of the tree uses a different channel (the frequencies can be repeated in space), whereas the nodes in one branch transmit in different slots. In other versions of the problem the scheduling solution must satisfy certain requirements such as connectivity, data rates and delay. Kedad et al. (2010) formulate the problem as follows: construct a frame with the minimum number of time slots such that at each time slot the activated links are not in conflict and form a strongly-connected graph. The second constraint ensures that a node will be able to send a data packet to any other node in the network through the activated links. Two links are in conflict if they have the same transmitting or receiving node, or if the transmitting node of one link is the receiving node of the other. They show that this problem is NP-hard and propose two approximation algorithms. In (Ergen & Varaiya, 2010) the problem is to find a feasible slot allocation with minimum frame length, taking into account the quantity of data that a node needs to transmit. Notice that a link can be scheduled more than once in a time frame to satisfy the node data rates, which is the main difference with respect to the basic version. Wang et al.
(2007) formulate a multi-objective optimization problem. The question is to find a time slot allocation that satisfies i) the data delivery delay and ii) the node energy constraint. Here, not only are the transmitting and receiving energies taken into account, but also the energy consumed in switching between sleep and active modes. The two selected objectives contradict each other, since the energy objective seeks to maximize the number of nodes that are turned off, which in turn increases the delay. The trade-off between energy and delay is solved using the particle swarm optimization approach. This example gives a meaningful illustration of interdependence between problems coming from different layers. We have here a scheduling problem combined with a routing one in the sense that the latter one is responsible for the delay.
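The basic slot-allocation problem described at the start of this subsection can be illustrated with a simple greedy 2-hop coloring. The sketch below is purely illustrative (it is none of the cited algorithms; the breadth-first ordering from the sink only loosely follows the idea of Sridharan & Krishnamachari): each node takes the smallest slot not already used within two hops of it.

```python
# Illustrative greedy 2-hop slot allocation (graph coloring heuristic).
# Nodes are visited in breadth-first order from the sink; each node takes
# the smallest slot unused by its 1- and 2-hop neighbors.
from collections import deque

def assign_slots(adj, sink):
    """adj: dict node -> set of neighbor nodes; returns dict node -> slot."""
    slot = {}
    order, seen = deque([sink]), {sink}
    while order:
        u = order.popleft()
        # Slots already taken within one hop of u.
        taken = {slot[v] for v in adj[u] if v in slot}
        # Slots already taken within two hops of u.
        taken |= {slot[w] for v in adj[u] for w in adj[v]
                  if w != u and w in slot}
        s = 0
        while s in taken:
            s += 1
        slot[u] = s
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                order.append(v)
    return slot
```

A frame then needs as many slots as the largest assigned value plus one; fewer colors means a shorter frame, which is exactly the objective of the optimization problem.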
While TDMA-based approaches schedule transmissions sequentially over time, FDMA-based approaches permit multiple concurrent transmissions between neighboring nodes by allocating different channels/frequencies to them. Sensors in the network can thus tune their operating frequency over different channels to avoid interference and packet collisions. Among the advantages of FDMA are improved network throughput and packet transfer delay. In FDMA the problem can also be modeled as a graph-coloring problem, given that no two adjacent nodes are allowed to use the same channel. Yu et al. (2010) show that the problem of assigning channels so that interference is minimized is NP-hard. They model the problem as a game where every node is a player and interference is the objective to be minimized. Their algorithm assumes that routing is based on a tree structure. In each iteration an intermediate node selects its own channel so as to cause the least possible interference to its neighboring nodes. The interference is calculated using local data that include the number of interfering parents in the different branches existing in the neighborhood, their respective numbers of children, and whether or not these children are leaves within the tree structure. Notice that the neighbors of a given node can belong to different branches and have different roles, as parents or leaves. Based on an empirical study, the authors of another work find it more appropriate for a WSN to communicate using a single channel, but they suggest harnessing channel diversity by spreading the frequencies in space. They therefore propose a node-disjoint tree structure where every branch (subtree) communicates via a given channel. The objective here is to divide the network into multiple disjoint subtrees such that the interference between them is minimized. They show that the problem is NP-hard and propose a greedy heuristic.
CDMA spreads the baseband signal using different Pseudo Noise (PN) codes to enable multiple concurrent transmissions. In WSN, a PN code may be implemented as an attribute in the packet header (nodes simply need to check whether the code in an incoming packet matches their own set of codes) in order to reduce the complexity of modulation and decoding in comparison to CDMA implementations in other technologies. Optimization problems relating to code allocation in CDMA are slightly different from those relating to time or channel allocation. For instance, in CDMA two neighboring nodes may share the same code, provided that only one uses it for transmitting while the other uses it for receiving. The optimization problem may then require that no two adjacent directed links have the same code. The difference between WSN and other wireless CDMA networks is not really to be found in the problem of code allocation, but in the CDMA concept itself. CDMA codes are not completely orthogonal, and the high density of sensors makes interference between concurrent transmissions a very serious problem: high interference prevents receiver nodes from 'understanding' the signal addressed to them. In the literature the pure CDMA problem is addressed together with the channel and slot allocation problems. The problem of channel and code allocation to reduce interference is discussed in (Liu et al., 2006). Their distributed solution is a heuristic which first solves the problem of channel allocation and subsequently that of code allocation. When CDMA is combined with scheduling, Chen et al. (2006) look for a feasible schedule for all the nodes in the network, together with their respective PN codes, such that there is no interference (or the interference falls below a given threshold) in any time slot and the total energy consumption is minimized.

Duty cycle
The node duty cycle is determined by its activity and sleep periods. During the sleep periods the sensor nodes do not consume energy, and so short activity periods mean energy savings. However, this has to be scheduled, because nodes can communicate with each other only during the activity periods. The set of active nodes in the network at a given moment must satisfy certain requirements, the most important being connectivity and coverage. In the first paragraph below we discuss the problem of node scheduling with a connectivity constraint. In the second paragraph coverage is taken into account and two additional constraints are introduced, namely i) life dependency between sensors and ii) connectivity. (Nieberg, 2006) models the node duty cycle with a connectivity constraint as the MCDS problem. He also proposes a distributed algorithm for finding this set of nodes. Here it is assumed that the network is very dense and nodes are close to each other such that a large number of nodes can become passive while the remaining nodes continue to ensure a connected structure. The active nodes correspond precisely to the CDS. According to the algorithm some nodes will have a special role: those nodes that form a Maximal Independent Set perform the role of anchors, and nodes used to connect anchor nodes perform the role of bridges. Nieberg (2006) shows that the set of anchor and bridge nodes forms the CDS. The initialization phase has self-organizing properties. Each node will try to get an active time slot according to the TDMA scheme. Then, any other node that enters into the network needs to decide locally whether or not it will be active (either as a bridge or as an anchor). The decision is based on the information provided by the neighbor nodes. This information includes the neighbor node ID, a list of all time slots showing the slots occupied by them and their respective neighbors, their role as an active node, and some synchronization information. 
When a node observes that there are fewer than two anchor nodes in its neighborhood for a given time slot, it becomes an anchor; otherwise it checks for the existence of bridge nodes. If it finds that every pair of anchor nodes is connected by bridges, it becomes passive.
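The local role decision just described can be sketched as follows. This is our own illustrative simplification (the function name and inputs are assumptions): the real scheme also uses slot occupancy and synchronization information exchanged with neighbors.

```python
# Illustrative sketch of the local anchor/bridge/passive decision rule
# (a simplification of the scheme described above; names are ours).
def decide_role(anchor_neighbors, bridged_pairs):
    """anchor_neighbors: list of anchor node ids heard in this slot;
    bridged_pairs: set of anchor-id pairs already connected by a bridge.
    Returns 'anchor', 'bridge' or 'passive'."""
    # Fewer than two anchors nearby: the node itself must become an anchor.
    if len(anchor_neighbors) < 2:
        return 'anchor'
    # Every pair of neighboring anchors must be connected by some bridge;
    # if one pair is not, this node serves as the bridge between them.
    for i, a in enumerate(anchor_neighbors):
        for b in anchor_neighbors[i + 1:]:
            if (a, b) not in bridged_pairs and (b, a) not in bridged_pairs:
                return 'bridge'
    # All anchors are bridged: the node can safely go passive.
    return 'passive'
```

The union of nodes answering 'anchor' (a maximal independent set) and 'bridge' forms the connected dominating set of active nodes.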
The node duty cycle is also related to coverage requirements. Because monitoring is one of the main objectives of a WSN, the active nodes have to guarantee that a set of given targets will be monitored throughout the lifetime of the WSN. The problem is to group the nodes such that i) each group (known as a cover) is able to cover the targets and ii) the groups form disjoint sets of nodes, in order to maximize the WSN lifetime. Usually a redundant sensor network is considered in this case. This question has elements of both a coverage problem (targets that need to be covered) and a scheduling problem: only the nodes belonging to a cover are activated, while the others are put to sleep, and the covers are activated sequentially. Cardei & Du (2005) have shown that this problem is NP-hard. In (Rossi et al., 2010) the problem is modeled as a linear program whose aim is to maximize the sum of the different covers' lifetimes, the constraint being that the total duration of a node's activity periods does not exceed the lifetime of its battery. The problem is solved using the column generation method. Aioffi et al. (2007) model the problem as the weighted set cover problem (WSCP). Given n sets (S 1 , S 2 , ..., S n ) formed from elements of a universal set denoted US, together with their associated activation costs, WSCP seeks a subset of these sets whose union corresponds to US and whose total activation cost is minimized. The set of targets in the network problem is modeled as US, each set S i represents the set of targets covered by sensor i, and the cost of S i is the inverse of the energy of sensor i. The problem is solved off-line and the results are fed into the sink. When the mobile sink gathers data from the nodes, it also indicates to them whether they will be active in the following period.
Column generation is particularly suitable here because the number of possible covers is exponential while the number of constraints is very small, and the method can achieve faster convergence. (Dhawan & Prasad, 2009) remove the constraint of disjoint covers. If a node is included in two or more cover sets, its energy capacity will influence the lifetime of all these sets. They therefore propose a solution based on the construction of a life dependency (LD) graph, in which covers are represented by vertices, linked by an edge if they share sensors. The LD graph is introduced in order to identify the covers having the least impact on the other covers. Their distributed approach adds a communication cost between neighboring nodes, which need to exchange information such as their remaining energy and the region (area or targets) they can cover. A further cost is added, corresponding to the processing of this information and to making a decision. Every sensor thus needs to construct an LD graph based on its local information and to identify the cover with the smallest impact in order to be part of it. Finally, there is also a communication cost corresponding to the negotiation phase, where nodes attempt to reach a stable solution. In (Cardei & Cardei, 2008; Zou & Chakrabarty, 2005) the same problem is discussed with an additional constraint: each cover set is required to be connected with the base station. In (Cardei & Cardei, 2008) the problem is formulated as an Integer Linear Program. It is first solved centrally using ILOG CPLEX, and then via a distributed approach. In the distributed case each node needs to know not only its own coordinates but also those of the targets and the base station. The initialization phase has a considerable communication cost resulting from exchanging the list of targets covered by two-hop neighbors, the status of every node, and the synchronization message.
This initialization phase includes the creation of the cover sets, while the subsequent phase finds the relaying nodes for connecting the cover with the base station (one node in the cover constructs a spanning tree that includes the target set and the BS).
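The weighted set cover model described above admits the classic greedy heuristic: repeatedly pick the sensor with the best cost per newly covered target. The sketch below is illustrative only (it is not the off-line method of the cited works); as described above, the cost of a sensor is taken as the inverse of its energy.

```python
# Greedy heuristic for the weighted set cover model (illustrative sketch).
# Cost of sensor i is 1/energy_i; we repeatedly pick the sensor with the
# lowest cost per newly covered target.
def greedy_cover(targets, covers, energy):
    """targets: set of targets; covers: dict sensor -> set of targets it sees;
    energy: dict sensor -> remaining energy. Returns the chosen sensors."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best, best_ratio = None, float('inf')
        for s, ts in covers.items():
            gain = len(ts & uncovered)   # newly covered targets
            if gain == 0 or s in chosen:
                continue
            ratio = (1.0 / energy[s]) / gain   # cost per new target
            if ratio < best_ratio:
                best, best_ratio = s, ratio
        if best is None:
            raise ValueError("targets cannot be covered")
        chosen.append(best)
        uncovered -= covers[best]
    return chosen
```

The greedy choice favors high-energy sensors that cover many still-uncovered targets, which mirrors the lifetime-maximization intent of the cost definition.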

Routing
Data transmission in WSNs, also referred to as the routing problem, is one of the most widely studied problems in WSN. In contrast to the previous sections, we focus here on the main proposed models and give some analysis of their use. The models and methods used for solving routing problems in WSN can be roughly divided into two main groups. The first group includes shortest path and spanning tree models, while the second group is centered around flow models and comprises a range of different minimum cost/maximum multicommodity flow models. While abundant work relating to such problems exists for wired networks, new challenges have appeared for wireless networks, and especially for WSNs. The nature of some of these problems can change quite radically when they are placed in a WSN context and new requirements are introduced. These requirements include the sensors' energy constraints, the interference caused by the broadcast nature of transmissions over wireless links, as well as data compression, aggregation and processing constraints. For instance, in traditional formulations of the network flow problem, link capacity is a strong constraint, while in WSN this constraint is frequently supplanted by the node energy constraint. Another important difference between the two paradigms is the inclusion of dynamic topology models and the need for distributed solutions in wireless sensor networks.

Shortest Path and Spanning Tree based models
Shortest Path Tree (SPT) and Minimum Spanning Tree (MST) remain widely used models for routing design, even in WSNs. The goal of an SPT is to find a path of minimum cost from a specified source node to another specified sink node, assuming that each edge has an associated cost, while the MST is the tree structure that minimizes the sum of edge costs; both problems are polynomial. The difference between a shortest path tree and a minimum spanning tree is shown in Fig. 9. In the WSN context the edge cost usually represents the power that would be consumed by the transmitting node when sending a packet to the node at the opposite end of the edge. Distributed routing algorithms based on Dijkstra's, Bellman-Ford's or Chandy-Misra's distributed algorithms can thus be employed (Rodoplu & Meng, 1999; Yilmaz & Erciyes, 2010). One of the disadvantages of the SPT is the unbalanced load between the sensors, and the disparity in the energy consumed by them, that such methods can lead to. To overcome this problem, different strategies are proposed. In (Yilmaz & Erciyes, 2010) every node can regenerate a path when a fault occurs or available energy is depleted. Other works consider the edge cost to be a combination of several metrics such as residual energy, buffer size, or the number of neighboring nodes. Going further, WSN brings new constraints which may modify the nature of the problem. For instance, many applications of WSN require that intermediate or relay nodes aggregate the data, while the criterion used is minimizing energy consumption. For (Cristescu et al., 2006) the joint problem of data aggregation and routing is NP-hard, and their heuristic combines an MST with an SPT. When the aggregation coefficient is high, the amount of traffic increases only slightly from the sources to the sink, and an MST is a good compromise. However, when the aggregation coefficient is low, routes need to be found that minimize the number of hops, and therefore an SPT should be constructed.
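The distinction illustrated in Fig. 9 can be sketched by growing both trees from the same root: Dijkstra's algorithm (SPT) and Prim's algorithm (MST) differ only in the key used to pick the next node, total path cost versus single edge cost. This is an illustrative sketch, not any of the cited distributed implementations.

```python
# Dijkstra (shortest path tree) vs. Prim (minimum spanning tree) grown
# from the same root -- the two algorithms differ only in the priority key.
import heapq

def spt_and_mst(adj, root):
    """adj: dict u -> dict v -> edge cost. Returns (spt_parent, mst_parent),
    each a dict mapping every node to its parent in the tree."""
    def grow(use_path_cost):
        parent, dist = {root: None}, {root: 0}
        heap, done = [(0, root, None)], set()
        while heap:
            d, u, p = heapq.heappop(heap)
            if u in done:
                continue                 # stale (lazily deleted) entry
            done.add(u)
            parent[u] = p
            for v, w in adj[u].items():
                # SPT keys on total path cost, MST on single edge cost.
                key = d + w if use_path_cost else w
                if v not in done and key < dist.get(v, float('inf')):
                    dist[v] = key
                    heapq.heappush(heap, (key, v, u))
        return parent
    return grow(True), grow(False)
```

On a triangle with costs 0-1: 2, 0-2: 2, 1-2: 1, the SPT from node 0 attaches node 2 directly to the root (path cost 2), while the MST attaches it via node 1 (edge cost 1), showing that the two trees need not coincide.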
Minimizing the total energy consumption is, however, not enough, since some nodes deplete their energy faster than others and may cause network partition. To balance the energy consumption, one strategy is to minimize the maximum energy consumption over the nodes. This problem has been modeled by (Gagarin et al., 2009) as the minimum degree spanning tree (MDST) problem, which is NP-hard. Variations of this problem are encountered in the literature (Erciyes et al., 2008; Huang et al., 2006). A joint routing and data aggregation problem is also discussed in (Karaki et al., 2009) for a two-tier network, and heuristic algorithms such as genetic and greedy algorithms are proposed. From a distributed perspective, adapted versions of Prim's and Kruskal's algorithms have been proposed in (Attarde et al., 2010). In the distributed versions of SPT a node need only communicate to its neighbors information concerning the cost of its links. Each node then decides to communicate with the node that provides the minimal cost towards the base station. An ACK mechanism is needed to signal the end of the process. It may be remarked here that almost all the models cited above lead to single-path routing schemes. They have the great advantage of being simple from an implementation point of view, while their main drawback is the difficulty of embracing additional requirements, energy consumption in particular. We now present some flow-based models that can capture such requirements in a suitable way.

Flow-based models
The need to include energy/capacity constraints leads naturally to the use of flow models. In WSN in particular, routing problems are formulated as MultiCommodity Flow Problems (MCFPs). A commodity is a source-destination pair, and we are faced with an MCFP whenever several commodities share the network resources. In an MCFP the commodities have different sources and/or destinations, but they are bound together insofar as they share the same link capacities. Regarding commodities, a WSN gives rise to either single-sink or multi-sink models; in the case of single-sink models all commodities have the same extremity, namely the base station. In subsection 4.2.1 below we discuss some basic versions of flow models used for routing path calculation in WSN. Then, in subsection 4.2.2, some further extended routing problems are presented.

Conventional flow models in WSN
A standard flow problem in WSN (regardless of whether it is a multicommodity flow problem) includes two types of constraints, namely the flow conservation constraint and the energy constraint.
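Equations (5) and (6) themselves are not reproduced here; a plausible reconstruction, based on the variable definitions that follow, is:

```latex
% Flow conservation (5): received plus generated data equals transmitted data
\sum_{j \in N_i} x_{ji}(t) + y_i(t) = \sum_{j \in N_i} x_{ij}(t),
  \qquad \forall i \in N,\ \forall t \le T
% Energy constraint (6): transmission energy over the network lifetime
% cannot exceed the initial battery capacity
\sum_{t \le T} \sum_{j \in N_i} e_{ij}\, x_{ij}(t) \le E_i,
  \qquad \forall i \in N
```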
where t (respectively T) is a time instance (respectively the network lifetime), N the set of sensors, N i the set of neighboring nodes of i, x ij the flow over the edge ij (that is to say the data transmitted over this link), y i the data generated by node i, e ij the energy consumed in transmitting a unit flow and E i the initial energy of the sensor. The flow conservation constraint, Equation (5), shows that the total amount of flow that a sensor receives plus the amount of data that it generates is equal to the amount of information that it transmits. The second constraint given in Equation (6) is the capacity constraint, which is related to energy. This constraint implies that the energy consumed by a sensor for transmitting the flow throughout the lifetime of the network must be less than its initial energy. In standard network flow problems this constraint is usually related to link capacity.
One of the first works to formulate this problem in terms of Integer Linear Programming is (Chang & Tassiulas, 2004). The flow is represented here by the number of packets, and the transmission energy is calculated based on the distance between the nodes (hence assuming a power control mechanism). The optimal solution of this problem gives an upper bound for network lifetime. While the problem of lifetime or flow maximization under these constraints can be solved in polynomial time for continuous values of the flow x, the integer version is shown to be strongly NP-hard in Bodlaender et al. (2010). The distributed version of this problem is discussed in (Madan & Lall, 2006), where a subgradient algorithm is used to solve the problem. At each iteration the algorithm estimates the gradient value of the objective function at a given point and determines the next point to be considered, until the optimum is reached. The distributed implementation of this algorithm requires that every node keep track of two variables, namely the flow rate of every outgoing link and the network lifetime. These variables are updated during each iteration based on their previous values, and the subgradient function values (also a function of flow rates and network lifetime) are calculated according to the information received from neighbor nodes. Subgradient methods are also used by Rabbat & Nowak (2004) as convenient tools for designing distributed approaches in sensor networks. Another characteristic of WSNs is the data aggregation performed by nodes. This can easily be taken into account by slightly modifying the flow conservation constraint. For instance, in Cheng et al. (2009) each node sends the maximum of the data it receives and the data it generates, as in Equation (7).
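Equation (7) is not reproduced here; given the description above, a plausible form of the aggregation-modified conservation constraint is:

```latex
% Plausible form of (7): with aggregation, node i forwards the maximum of
% what it receives and what it generates
\sum_{j \in N_i} x_{ij}(t) = \max\!\Big( \sum_{j \in N_i} x_{ji}(t),\; y_i(t) \Big),
  \qquad \forall i \in N
```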
The routing problem with data aggregation for lifetime maximization has been formulated by Xue et al. (2005) as a concurrent multicommodity flow problem. Here the flow constraint implies that the amount of flow commodities transmitted from a sensor node cannot be less than the sensor's data. They propose a polynomial time approximation scheme, strongly inspired by the Garg-Konemann algorithm. In outline, their algorithm is as follows: construct the shortest path between every source and the sink, initialize a unit cost for every node, push the maximum possible flow along the path for every commodity, update the energy cost of every node, and repeat the process.
As regards routing paths, routing schemes can use several paths (multipath routing) or a single path (single-path routing). Although requiring routing via a single path would appear preferable for WSN, adding such a constraint to the mathematical model gives rise to NP-hard problems. Worth citing here are two approaches proposed for WSN that attempt to circumvent the computational burden of such models while remaining simple to implement. The first approach computes a solution involving multiple paths, but uses only a single path at a time. Hou et al. (2004) propose an algorithm that solves the problem in two phases. In the first phase a solution is found for the multipath routing problem, so that every node knows the set of relaying nodes and the respective amount of information to send to each of them. In the second phase each node, according to some local rule, selects one of its relaying nodes and transmits to it the whole amount of information to be sent in this round. The second approach, in contrast to the flat routing just described (where routing takes place directly from the sensors to the BS), may be seen as hierarchical routing, in that it decomposes the data transmission into two levels and thus converges to a cluster-based scheme. Each cluster head (CH) receives the data from the nodes of its cluster and from other CHs, and transmits this data to another CH in the direction of the BS. Bari et al. (2008) consider a two-tier heterogeneous network containing powerful relay nodes which form a connected network that can relay data to the BS. They formulate the optimization problem as follows: knowing the positions of the sensors and the relay (CH) nodes, how should the network be clustered in order to maximize its lifetime? A sensor is not obliged to transmit directly to its CH, and sensors may have different amounts of flow to transmit. The problem is formulated as a max-min LP.
Because the decision variables can take only binary values (1 if the sensor belongs to a given cluster and 0 otherwise) and the flow rate variable corresponds to a number of bits, we are dealing with an ILP problem. The heuristics presented for this problem are centralized. Other centralized techniques for solving the clustering problem in WSN are based on Fuzzy Logic (FL) (Anno et al., 2007; Ran et al., 2010), while Mehrjoo et al. (2011) propose genetic algorithms.

Enhanced flow-based models
Advances in technology and the broad range of WSN applications have given rise to new QoS requirements and made routing a more complex matter. Interference, delay and reliability requirements may all impose additional constraints and lead to more elaborate and challenging versions of routing problems. These issues are the focus of this subsection.
Radio interference has a significant impact on the performance of WSN as it affects the functioning of both MAC and routing protocols, and directly affects the transmission capacity of links. In contrast to traditional networks where the capacity of links is determined by physical parameters only, in wireless communications radio interference strongly affects the transmission capacity of links that are located close to one another. The models we have cited above assume that the quantity of information generated is sufficiently low, or the channel capacity sufficiently high, for transmission capacity not to be an issue. But this assumption clearly does not always hold, and capacity constraints over links are sometimes unavoidable. It should be noted that IEEE 802.15.4 defines data rates of 20, 40, or 250 Kb/s for the physical layers. Channel capacity may therefore represent a strong constraint where huge amounts of data need to be transmitted, or when many sources have to transmit simultaneously. Interference needs to be taken into account because of the high bit error rates that it may cause. The capacity of wireless channels is calculated from the Shannon-Hartley formula given in Equation (8).
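Equation (8) is the standard Shannon-Hartley formula:

```latex
% Shannon-Hartley channel capacity, Equation (8)
C = B \log_2 \left( 1 + \frac{S}{N} \right)
```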
where C is the channel capacity (in bits per second), B the channel bandwidth (Hz) and S/N the signal-to-noise ratio.
From the point of view of computational complexity, including this constraint in the model makes the problem NP-hard, as shown in (Jain et al., 2003). More precisely, they show that the problem of finding a maximal flow for a source-destination pair under the interference constraint is equivalent to the Maximum Independent Set problem in a graph, and is therefore NP-hard. Krishnamachari & Ordonez (2003) add the link capacity constraint to the basic version of the flow problem, with the goal of maximizing the throughput or minimizing the overall energy consumption. To ensure that the solution does not generate scenarios in which the traffic load is unfair to some nodes in the network, the flow transmitted by a node has to be less than a given fraction of the total flow generated by the network. Patel et al. (2006) add the following two constraints to the basic version of the routing problem: (i) a link capacity constraint, where the rate (the number of packets per unit time) on each link has to be smaller than its capacity, and (ii) a node capacity constraint, where the number of packets that a node can process in a unit of time has to be smaller than its given capacity. The proposed algorithm is centralized and aims to find a maximum flow with the smallest possible energy cost. It is a combination of maximum flow (getting as much flow as possible from the source to the sink) and shortest path (traveling from the source to the sink with minimum cost). The problem addressed in (Xu et al., 2008) has the same structure as that found in Patel et al. (2006), but the objective is utility maximization, where utility is a nonlinear concave function of the transmission rate. The problem is solved using the Lagrangian method, which decomposes the problem into a number of sub-problems via Lagrange multipliers and solves each of them separately.
In these problems it is assumed either that the bandwidth B is shared between the nodes over different channels, or that the nodes use the whole bandwidth but are already scheduled so as to avoid interference.
There are two possible ways of modeling a successful transmission in the presence of interference: i) the physical model, which requires that the Signal-to-Interference and Noise Ratio (SINR) given in Equation (10) exceeds a certain threshold; ii) the protocol model, under which no two neighboring nodes may transmit at the same time.
Routing under the physical interference model is more complex. Wang et al. (2011) discuss a link scheduling problem in which flow capacities are satisfied and the time taken for scheduling is minimized. In this case the channel capacity varies over time with the SINR, and its integral gives the service provided by the channel, as expressed in Equation (9).
C_ij(t) = ∫_0^t B log2(1 + SINR_ij(τ)) dτ   (9)
where C_ij(t) is the channel service of link (i, j) during time t, and B is the channel bandwidth.
SINR_ij = (ω_ij P_i) / (N_a + Σ_{k≠i} ω_kj P_k)   (10)
where SINR_ij is the SINR parameter for the link (i, j), ω_ij the gain of the fading channel for the link (i, j), P_i the transmission power of node i, ω_kj P_k the interference of the other links over the link (i, j), and N_a the floor noise, which is a constant. The channel service calculated in each time slot is used as a parameter to bound the link data rate. The problem is solved off-line using the column generation method.
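Equations (10) and (9) translate directly into code. The sketch below uses hypothetical powers, gains and noise values; the per-slot service is the discrete-time counterpart of the integral in Equation (9):

```python
import math

def sinr(i, j, power, gain, noise_floor):
    """SINR of link (i, j) as in Equation (10): received signal power
    over floor noise plus the interference from every other transmitter k."""
    signal = gain[i][j] * power[i]
    interference = sum(gain[k][j] * power[k] for k in power if k != i)
    return signal / (noise_floor + interference)

def channel_service(i, j, power, gain, noise_floor, bandwidth_hz, slot_s):
    """Channel service accumulated over one time slot, i.e. the
    discrete-time version of the integral in Equation (9)."""
    return slot_s * bandwidth_hz * math.log2(
        1.0 + sinr(i, j, power, gain, noise_floor))

# Two transmitters (0 and 1), both heard at receiver 2 (hypothetical values).
power = {0: 1.0, 1: 0.5}                 # transmission powers P_i
gain = {0: {2: 0.8}, 1: {2: 0.2}}        # fading gains w_ij
noise = 1e-3                             # floor noise N_a
link_sinr = sinr(0, 2, power, gain, noise)   # 0.8 / (0.001 + 0.1)
```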
Interference can be modeled more easily under the protocol model. Wang et al. (2008) study the routing problem in the presence of interference by scheduling the nodes in accordance with the TDMA approach. The constraint added for interference requires that the number of times a link is scheduled, plus the number of times that all the links in its interference zone are scheduled in the time frame, be smaller than the frame size, as in Equation (11).
N(e) + Σ_{e′ ∈ I(e)} N(e′) ≤ S,  for every edge e   (11)
where N(e) is the number of times that the edge e is scheduled in the time frame, I(e) is the subset of links of the original graph that can be influenced by transmissions on e, and S is the number of time slots in the frame.
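Equation (11) amounts to a simple feasibility check on a candidate schedule. A sketch, with hypothetical link names, slot counts and interference zones:

```python
def schedule_feasible(schedule_count, interference, frame_size):
    """Check Equation (11): for every edge e, the slots used by e plus
    those used by all edges in its interference zone I(e) must fit
    within the frame of S slots."""
    for e, n_e in schedule_count.items():
        load = n_e + sum(schedule_count.get(f, 0)
                         for f in interference.get(e, ()))
        if load > frame_size:
            return False
    return True

# Toy 3-link example: links a and b interfere with each other, c is isolated.
counts = {"a": 3, "b": 4, "c": 8}
zones = {"a": ["b"], "b": ["a"], "c": []}
schedule_feasible(counts, zones, 8)   # True: 3+4 <= 8 and 8 <= 8
```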
We shall now focus on how WSNs take certain QoS requirements and their associated metrics into consideration. We begin with a discussion of QoS metrics and the computational complexity that they introduce. Different metrics have different composition rules. Metrics such as delay, delay jitter and cost are additive: an additive metric obeys the additive rule, meaning that the path metric is equal to the sum of the metrics of the links that compose the relevant path. A multiplicative metric obeys the multiplicative rule, meaning that the path metric is equal to the product of the link metrics for all the links that compose the relevant path; metrics like reliability (the probability that the transmission was successful) are thus multiplicative. Finally, concave metrics obey the concave rule, meaning that the path metric is equal to the minimum (or maximum) link metric over all the links that compose the relevant path; bandwidth is an example of a concave metric. Fig. 10 illustrates the concept of multicommodity flows in a graph and QoS multipath routing with two metrics.
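The three composition rules can be sketched directly (the per-link delay, reliability and bandwidth values below are hypothetical):

```python
from functools import reduce

def path_delay(link_delays):        # additive: sum over the path
    return sum(link_delays)

def path_reliability(link_probs):   # multiplicative: product over the path
    return reduce(lambda a, b: a * b, link_probs, 1.0)

def path_bandwidth(link_bws):       # concave: bottleneck (minimum) link
    return min(link_bws)

# One path of three links: (delay in ms, success probability, bandwidth in kb/s).
links = [(2.0, 0.99, 250), (3.0, 0.95, 40), (1.0, 0.98, 250)]
delays, probs, bws = zip(*links)
path_delay(delays)        # 6.0  (sum)
path_reliability(probs)   # ~0.92 (product)
path_bandwidth(bws)       # 40   (minimum)
```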
In (Wang & Crowcroft, 1996) it is shown that the problem of finding a path which satisfies N additive metrics and/or K multiplicative metrics (where N and K are positive integers) is NP-hard, while it becomes polynomial when one metric is concave and the other additive or multiplicative. Most works dealing with QoS routing in WSN are concerned either with finding (disjoint) paths to guarantee network resilience (a fault-tolerant network), or with finding a minimal number of paths such that QoS requirements are met. We recall that the problem of finding k disjoint paths (edge- or vertex-disjoint) such that the total cost of the paths is minimized has been shown in (Li et al., 1992) to be NP-hard, even for k = 2 in directed graphs. Heuristics therefore provide practical approaches for solving these kinds of problems. (Okdem & Karaboga, 2009) report an approach combining ACO with a tabu search. Each source node wishing to transmit data toward the BS has to launch n ants (n corresponds to the number of data packets that the source transmits). The ants' movement is based on a probabilistic decision rule in which the heuristic value represents an estimate of the residual energy. After all the ants have completed their journey (from source to destination), each ant k deposits a quantity of pheromone equal to the inverse of the total number of nodes included in the path. This task is performed by sending ant k back to its source node along the arrival path. In this type of ACO each receiver node has to maintain a tabu list with the identities of the ants that it has encountered, enabling it to decide whether to accept the upcoming packet of ant k. Routing the information efficiently so as to guarantee the delay and reliability constraints is discussed in Saleem et al. (2010), who propose a multi-agent approach based on ant colony optimization (ACO). The movement of the ants is guided by the probabilistic decision rule of Equation (4).
The pheromone value corresponds to the end-to-end delay. The two heuristic evaluation parameters of every edge are determined by the residual energy at the extremity of the edge and its packet receive rate (PRR).
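A minimal sketch of such a probabilistic next-hop choice is given below. The α and β exponents, the neighbor names and their pheromone and residual-energy values are hypothetical; this illustrates the generic ACO decision rule, not the exact rule of Equation (4):

```python
import random

def choose_next_hop(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """ACO-style probabilistic decision rule: neighbor j is chosen with
    probability proportional to pheromone[j]**alpha * heuristic[j]**beta."""
    weights = {j: (pheromone[j] ** alpha) * (heuristic[j] ** beta)
               for j in pheromone}
    r = rng.random() * sum(weights.values())
    for j, w in weights.items():
        r -= w
        if r <= 0:
            return j
    return j  # guard against floating-point round-off

# Pheromone trails and residual-energy estimates for two candidate hops.
tau = {"b": 0.5, "c": 1.5}
eta = {"b": 0.9, "c": 0.3}
next_hop = choose_next_hop(tau, eta)   # "b" with probability 0.75
```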
In (Bagula & Mazandu, 2008) the QoS routing problem is concerned with delay and reliability criteria. The goal is to find the smallest set of disjoint paths between a source and a destination such that both criteria are satisfied and energy consumption is minimized. Delay is a stringent metric, meaning that if the delay is not respected on any path of the set, the packet is dropped. In contrast, the reliability of every source-destination connection obeys the multiplicative composition rule, so the more paths in the set, the more reliable the set will be. The problems of finding the path which minimizes the energy or the delay, or maximizes the reliability, taken separately, are solvable in polynomial time, but the problem considered in its entirety is not.
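The multiplicative rule makes the benefit of an additional disjoint path easy to quantify: the set fails only if every path fails. A sketch with hypothetical per-link success probabilities:

```python
def set_reliability(path_link_probs):
    """Reliability of a set of disjoint paths: each path succeeds with
    the product of its link success probabilities (multiplicative rule);
    the whole set fails only if every path fails."""
    fail_all = 1.0
    for links in path_link_probs:
        path_ok = 1.0
        for p in links:
            path_ok *= p
        fail_all *= (1.0 - path_ok)
    return 1.0 - fail_all

one_path = set_reliability([[0.9, 0.9]])              # 0.81
two_paths = set_reliability([[0.9, 0.9], [0.8, 0.9]])
# adding a second disjoint path raises reliability: 1 - (0.19)(0.28) = 0.9468
```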

Open issues and concluding remarks
There are several issues in WSN which are still open or which have not been sufficiently addressed.
• Dynamicity is one of the most noticeable characteristics of WSN and also one of the biggest challenges. The term covers such phenomena as node failure, link fluctuations, node attacks and mobile nodes. Many studies in routing, coverage, scheduling or topology control have attempted to find solutions where these events occur, but including them in optimization problem models remains a challenge.
• We consider that scalability is an important issue which is frequently neglected when solution methods are proposed. Eventual changes in network dimensioning may require the problem to be solved again, or may substantially increase the computation time.
We observe this particularly in relation to issues related to multi-sink/multicommodity design and network cross layer design.
• With respect to coverage problems, there are several potential directions that have not been fully explored. These include solving the deployment problem in the presence of obstacles, taking into account restrictions on node placement, and 3D deployments. In routing and topology control, cooperative decision-making strategies and opportunistic approaches also need to be modeled and examined in optimization problems, since in both areas some of the problems discussed here have been successfully addressed through opportunistic approaches. But few theoretical works have been undertaken in relation to this paradigm. Many questions remain open. For instance, in what scenarios should an opportunistic approach be favored over other approaches? How close is an opportunistic solution likely to be to the optimal solution? Routing in opportunistic networks adopts a people-centric approach to model the network semantics (Verdone & Fabri, 2010). This routing group is classified as sociability-based routing and has been modeled in (Yoneki et al., 2007) on the basis of human behavior characteristics. They propose a Socio-Aware Overlay (multi-point event dissemination using an overlay constructed from closeness-centrality nodes in communities) for publish/subscribe communication. It is not clear whether these strategies might be appropriate for WSN.
• Another crucial issue is the gap that still exists between theoretical studies and practical implementations in WSN. Some theoretical studies have already presented models for cross-layer design, together with corresponding solutions, but many of them remain centralized and require off-line computation. We remark that in some mathematical formulations the variables are considered continuous, despite the discrete nature of the corresponding quantities, such as transmission power levels and flows. On the other hand, algorithms or protocols implemented in real hardware or tested in simulations do not address cross-layer design: they aim at distributed, on-line computation and handle mostly simplified problems. Moreover, in these works the analyses that might yield an optimal solution are neglected, and it is difficult to grasp the problem complexity and to know whether there is room for further improvement. Combining these two approaches is far from straightforward and calls for substantial work. We see as a primary concern in this context the development of optimization tools and dedicated software to bridge the gap between optimization methods and their practical implementation in WSN.
• Finally, we consider that uncertainty has received very little attention until now. Nonetheless, uncertainty is an important characteristic inherent in the nature of WSNs, related to different aspects such as event detection, sensor location and data delivery. Some attempts to model these situations use probabilities associated with these different kinds of events. The main difficulties in taking the uncertainty of WSNs into account are twofold. First, measuring the distribution of events is not an easy task and is both environment- and application-dependent. Secondly, despite recent advances in robust optimization, tackling probabilistic optimization problems is not for the faint-hearted.
To conclude, wireless sensor networks represent an attractive research area due to several factors, such as the resource-constrained nature of sensor nodes, interference, data aggregation, the power consumption model and the wide range of both commercial and military applications that this technology offers. Successful network design and deployment require understanding and modeling several problems related to these factors, which ultimately determine the available range and data rate of a WSN, as well as its cost and battery lifetime. This study, intended for researchers and graduate students in computer science and in fields related to operations research, information technology and applied mathematics, has therefore highlighted a number of representative network problems in WSN and focused on their respective optimization problems.