
Wireless Real-Time Monitoring System for the Implementation of Intelligent Control in Subways

Written By

Alessandro Carbonari, Massimo Vaccarini, Mikko Valta and Maddalena Nurchis

Submitted: 12 November 2015 Reviewed: 23 February 2016 Published: 08 June 2016

DOI: 10.5772/62679

From the Edited Volume

Real-time Systems

Edited by Kuodi Jian


Abstract

This chapter looks into the technical features of state-of-the-art wireless sensor networks for environmental monitoring. Technology advances in low-power and wireless devices have made the deployment of such networks more and more affordable. In addition, wireless sensor networks have become more flexible and adaptable to a wide range of situations. Hence, a framework for their correct implementation will be provided. Then, one specific application concerning real-time environmental monitoring in support of a model-based predictive control system installed in a metro station will be described. In these applications, filtering, re-sampling, and post-processing functions must be developed in order to convert raw data into a dataset arranged in the right format, so that it can inform the algorithms of the control system about the current state of the domain under control. Finally, the whole architecture of the model-based predictive control system and its final performance will be reported.

Keywords

  • Environmental monitoring
  • Real-time control
  • Wireless sensor networks
  • Performance evaluation
  • Network design

1. Introduction

One of the most critical components of an advanced real-time control system is its real-time monitoring network, which is in charge of measuring the actual state of the controlled domain. The features and requirements of the network depend on the type of control that is implemented and on its architecture. In this chapter, we will focus on the importance of wireless sensor networks (WSNs) for environmental monitoring targeted at the real-time control of complex buildings. We will then show what role they play in the implementation of a model-based predictive control (MPC) system, which is one of the most advanced approaches presently considered.

To this purpose, the basic characteristics of MPC will be clarified and some of the most popular applications will be described in Section 2. Section 3 will report on the technical features of typical WSNs for environmental monitoring and will sum up their background. The purpose of Section 4 is to analyze the whole architecture of real-time environmental monitoring systems; hence, it will discuss the typical constraints, requirements, technical issues, and selection criteria that are helpful for designing and installing such systems in harsh environments. For the sake of clarity, this chapter will refer to the real case of an MPC system applied to the “Passeig de Gràcia” (PdG) subway station in Barcelona, which was the topic of an EU-funded 3-year research project. In that same section, all the issues connected with real-time data management will be faced, including pre-processing (i.e., filtering and re-sampling) and post-processing functions (i.e., more complex procedures that convert pre-processed data into a format interoperable with the controller unit of the MPC system). Finally, the most important information about the MPC system integrated in the PdG station and the benefits obtained thanks to the MPC will be discussed in Section 5. Conclusions and references close the chapter.


2. Model-based predictive control systems

2.1. State-of-the-art and applications

Efforts to reduce the energy consumption of public buildings and spaces have recently been receiving increasing attention. Energy saving defines a clear objective: to minimize energy consumption while maintaining acceptable comfort levels in the presence of uncertainties. Thus, uncertainty must be explicitly considered by including adaptation capabilities that adjust the control strategy to changing conditions. Comfort and health-and-safety (H&S) requirements define operative constraints that can be either hard or soft: this implies that constraints must be explicitly considered in the control strategy. These constraints can be satisfied only if a suitable and robust sensor network is deployed in the controlled spaces.

Among the hard control approaches, MPC [1–3] is one of the most promising techniques for HVAC systems because of its ability to integrate disturbance rejection, constraint handling, dynamic control, and energy conservation strategies into the controller formulation. In fact, for complex constrained multi-variable control problems, MPC has become the accepted standard in the process industries [4].

MPC computes the optimal control policy by minimizing a proper cost function subject to constraints on input, output, or state variables. Its success is largely due to its unique ability to optimally control both linear and nonlinear MIMO processes using a predictive model, while explicitly considering constraints in its formulation. Explicit consideration of uncertainty is discussed in a number of contributions [5–8] and can be achieved using stochastic models and by minimizing the probability of constraint violation. For these reasons, MPC has been successfully implemented in various studies on buildings over the last years. A review of representative studies about MPC can be found in [9], which also exploits the Building Controls Virtual Test Bed (BCVTB) for running a building energy simulation with real-time Building Energy Management System (BEMS) data as inputs.

As far as subway applications are concerned, metro operators suffer from the high energy consumption of their facilities. While the lighting, ventilation, and vertical transport systems are crucial for the safety and comfort of passengers, they account for most of the non-traction energy required in underground stations. Hence, as shown in [10], the intelligent control of these subsystems can significantly reduce their energy consumption without impacting passenger comfort or safety, or requiring expensive refurbishment of existing equipment.

Unlike buildings above ground, underground spaces exhibit quite stable environmental conditions (temperature and humidity) and, usually, there is no need for heating in winter but just for cooling in summer. Since no air-conditioning plant is usually installed in underground spaces, air change becomes a key requirement that must be guaranteed by means of mechanical ventilation compensating for the lack of natural ventilation. Therefore, in this domain, both the natural and the forced air flows are relevant and produce thermal effects that cannot be neglected. Nevertheless, it has to be remarked that a specific standard about air change in underground spaces has not been drawn up yet.

Underground stations are also characterized by a large number of output variables (temperature, air exchange, CO2 and PM10 concentrations on the platform, and energy consumption of actuators) but by a small number of input variables (usually just one fan): this reduces controllability and the possibility of simplifying the control task by decomposition into coupled sub-problems. Decomposition into simpler (possibly coupled) sub-problems is in fact feasible only when different output variables are mainly affected by different control inputs; since the control input here is usually just the speed of the station fan, which affects many comfort and air-quality outputs, there is no chance for decomposition. Therefore, the problem is handled with a cost function formed by a weighted sum of conflicting objectives subject to appropriate constraints, and the control task is faced as an optimization problem.

This severe controllability issue means that a whole-building control strategy, such as MPC, is much more effective.

Model predictive control, in fact, may be used to enhance BEMSs so that they improve their control performance, getting close to optimal behavior [11].

2.2. The control architecture

The overall control scheme is depicted in Figure 1. The energy manager is in charge of enabling or disabling controller functionalities. A supervisory subsystem is in charge of checking the correct operation of each subsystem, alerting the energy manager in the case of failures, faults, or constraint violations. It is also in charge of detecting unreliable sensory data and of switching to a safe control policy (the original one) when needed. A set of software proxies, one for each sensor, external data source, or actuator, acts as middleware, thus making information available to and from the control system independently of the specific hardware or external service adopted. A monitoring subsystem (see Section 4) collects information about the station status (via the sensor network detailed in Section 3) and predictions about disturbance factors that affect the dynamic behavior, such as external uncontrolled fans (i.e., fans not driven by the controller), people, trains, and weather.

Figure 1.

Typical architecture of an MPC.

The disturbances have to be measured and, if possible, predicted by a model that provides the future behavior of non-controllable factors. For trains and external fans, the station operator can always provide schedules that may reasonably be used as a measure or prediction. Any slight variation from the schedule can be addressed by the disturbance rejection capability of the feedback control scheme (Figure 1). When available, these schedules or the status of trains and external fans can be accessed in real time through the SCADA system already installed in the station, thus improving the reliability of the measures of these disturbances. A large-scale simulation study performed in [12] showed a potential for energy savings and/or improvements in the thermal indoor environment when using weather forecasts in a predictive control strategy compared to a simple rule-based control, despite the uncertainty in the weather forecasts. Therefore, weather forecasts and local weather stations are used to improve control performance. As shown in [13], a large part of the energy savings potential can be captured by also taking into account instantaneous occupancy information.

This information is then processed and made available for use by the controller subsystem (Section 5.1). The prediction model (described in Section 5.2), fed with all the available information, is used to carry out a scenario analysis and to select the control policy that ensures the best predicted performance in terms of energy consumption and comfort. The corresponding optimal solution is applied to the station as the control action, and the process is repeated at each control step.

In this framework, as for any control system, the sensor network that will be described in the next section is of paramount importance, since it determines the ability of the system to reject disturbances and unpredictable events.


3. Environmental monitoring networks

3.1. Typical architecture of a WSN-based environmental monitoring system

In the last decades, we have witnessed a significant evolution of environmental monitoring systems. This wide category embraces a considerable range of application scenarios, from public surveillance to industrial automation. The first monitoring systems were mainly based on high-precision sensors; hence, their cost and deployment complexity are typically higher, and their efficiency lower, than most current solutions. Nowadays, the remarkable advances in low-power devices and wireless technologies have driven an evolution toward wireless sensor network (WSN)-based monitoring approaches. Typically, a set of sensor nodes is deployed over an area, each of them including a set of sensing units as well as one or more wireless interfaces. In many cases, off-the-shelf sensor nodes are used, providing a trade-off between the cost and complexity of network deployment and maintenance, and the accuracy of the measurements.

Although each application scenario imposes its own requirements, there exist some common rules when deploying a wireless sensor network for monitoring purposes. A set of sensor nodes is deployed over the area of interest, and one or more entities are added for collecting the data. Usually, there is at least one central sink gathering sensor measurements, either periodically or after a “long” monitoring time interval, which could be defined, for instance, as one entire day or one month, depending on the time requirements of the application. A real-time monitoring application normally imposes stricter time requirements (e.g., collect measurements every 2 min). In many practical deployments, sensor nodes are divided into subsets (e.g., based on location) and a hierarchical architecture is employed. In addition, it is very common to also deploy a set of gateways, each collecting measurements from a subset of the sensor nodes, so as to improve the scalability and reliability of the system. A gateway is normally a device that is not limited in power and storage, but in some cases it could be one of the low-power devices also taking the role of collection point.

From the communication point of view, there has been a great deal of effort in defining new standards, algorithms, and protocols for WSN connectivity, strongly motivated by the high potential of using those networks in several different application scenarios. Hence, nowadays a considerable number of solutions are employed in real deployments to provide robust connectivity. Although many technologies and mechanisms exist, a complete description of all of them is out of the scope of this chapter; hence, we briefly mention those most related to our area of interest.

The physical and media access control layers are most commonly implemented according to the IEEE 802.15.4 standard, which is designed for low-power and low-data-rate devices using short-range transmission. The maximum data rate is 250 kbit/s, and the range is typically a few tens of meters. As the standard is intended for low data rates, the packet size is limited to 127 bytes. The standard allows three frequency bands (868, 915, and 2450 MHz); commercially available devices typically work in the 2.4 GHz band. IEEE 802.15.4 defines three topologies that may be used by the upper-layer protocols: star, mesh, and cluster tree. Typically, the radio listening and transmitting power consumptions are on the order of a few tens of milliwatts, and the idle and sleep modes consume significantly less.

Many communication mechanisms are built on top of the 802.15.4 standard. 6LoWPAN (an IETF working group) defines an adaptation layer that allows IPv6 traffic to be transmitted through 802.15.4 networks, applying packet restructuring to cope with the very limited size of 802.15.4 packets. One of the most widely used approaches is the ZigBee stack, which defines various layers on top of 802.15.4 and specifies three possible roles for nodes: the coordinator and the router are fully functional devices, while an end device cannot route packets. On the other hand, WiFi is typically not used by sensor devices, due to its high power demand, but it is often used for connecting gateway(s) to the Internet, or to a central server receiving all the collected data.

In the next section, we briefly discuss some of the existing solutions for WSNs used in the context of environmental monitoring, highlighting the most common approaches and technologies employed, whenever disclosed.

3.2. State-of-the-art of WSN-based environmental monitoring systems

The class of WSNs deployed for environmental monitoring purposes is relatively wide, thus including a large set of different technologies, environments, architectures, and applications. Let us summarize them based on a simple taxonomy, first distinguishing between outdoor and indoor deployments.

In the former case, it is possible to identify two main sub-classes, depending on where the sensors are placed: under the ground or over a surface. When the signal is transmitted over the air, propagation is significantly more favorable and the communication distance significantly wider, allowing longer-range transmissions also compared to the indoor case, since fewer obstacles are typically present. The most typical use case is the monitoring of a field or farm, aimed at obtaining environmental measures, counting the animals within an area, or detecting any moving object, e.g., intruders. In urban areas, the most common objectives are surveillance and traffic or road-surface monitoring. In [14], the authors describe a WSN for monitoring a potato crop: more than 100 TinyOS-based sensing nodes sensed the environment and sent data to one gateway, connected to a central server through Wi-Fi. Barrenetxea et al. [15] report a similar deployment of 16 nodes on a rock glacier in Switzerland, where GPRS was used instead of Wi-Fi. In collaboration with biologists, the authors of [16] deployed an IEEE 802.15.4-based WSN for capturing the habitat of foxes, whereas [17] captures the distribution of seabirds on an island through a network of 150 TinyOS-based sensors organized in one single-hop and one multi-hop network, each connected to a central server through its own gateway.

If the sensors are placed below the ground surface, the goal is monitoring the status of the ground, either to complement the monitoring above the ground (e.g., for intelligent irrigation) or for more specific objectives, such as landslide monitoring and earthquake prediction. A fundamental issue is the modeling of the underground communication channel, in order to properly design the network and avoid burying and unburying sensors many times, which is complicated and time-consuming [18]. The architecture is usually less structured and based on the characteristics of the soil, typically obtained in a pre-deployment phase, with one or more gateways above the ground surface [19].

Concerning indoor scenarios, we can further split this class into two sub-categories: subways (i.e., underground transportation systems) and buildings (e.g., offices, museums). Both of them may experience a significant impact of human presence, as well as of carried wireless-equipped devices, on signal propagation [20]. Usually, buildings hosting offices, museums, or malls are regularly structured, and sensors are installed for surveillance in the offices of a company or inside other public places such as museums. In [21], a 16-node WSN is deployed over the three floors of a medieval tower, in order to monitor and detect potentially dangerous vibrations. One sink collecting all data was placed at the top floor, where access to the external Wi-Fi network was available. Peralta et al. [22] describe a preliminary testbed and lessons learnt from a deployment in a museum in Portugal.

On the other hand, the subway environment has proven to be harsher, mainly due to high-speed trains crossing the area of interest and to the building structure, which can seriously affect signal propagation and the quality of a link between two adjacent nodes because of tunnels, stairs, and the substantial use of metal and concrete materials [23]. The typical motivations for deploying a WSN for underground transportation system monitoring are safety and energy saving [24]. Several interesting results and practical guidelines are reported in [25], which describes two deployments in two underground railway stations, one in Prague and one in London. Before the actual deployment, accurate measurements were performed in order to model the propagation of the signal within the tunnels of interest and properly plan the network design. The evaluation suggested the centre of the tunnel as the best place for the receiver and transmitter antennas, which turned out to be impractical in the real deployment, confirming the gap between ideal installation design and feasibility in real settings.


4. Wireless real-time environmental monitoring systems

4.1. Requirements, constraints, and challenges

One of the most peculiar issues of a WSN is the lifetime of the system. Typically, environmental monitoring needs a network that is stable and reliable for months or years, whereas battery-powered sensor nodes imply significantly shorter time ranges. Moreover, several causes can lead to node failure or reboot, producing unexpected network behavior and loss of data, and all these events potentially require access to the (failed) node for re-configuring or re-charging it. However, the cost and the potentially long distances make it unfeasible to operate on the network continuously, implying the need for self-configuration and self-maintenance features. In addition, several mechanisms can be employed not only for energy harvesting but also for reducing the energy consumption associated with sensing and/or transmission: MAC duty cycling, data aggregation, and data prediction [24, 26].

One of the main challenges of a wireless network is communication reliability, as the wireless channel is intrinsically unstable and requires a maximum distance and certain physical conditions to be satisfied in order to guarantee data delivery (e.g., obstacles among nodes might undermine connectivity). On the other hand, wireless links provide the flexibility to easily and gradually deploy a network, reducing the need and cost for maintenance and facilitating incremental deployment (e.g., through self-organizing and self-healing properties).

Real-time monitoring implies that the measurements taken from the sensors have to be delivered within a short time interval. Although the duration depends on the specific application, data must be delivered quickly; thus, an off-line data gathering method is not suitable. For all the above reasons, the system must be robust against failures and reliable in terms of connectivity. When deploying a network for real-time monitoring, a certain level of redundancy is usually desired, so that some sensor nodes can mitigate the effect of the failure or loss of others, not only for sensing the environment but also to guarantee the forwarding of other nodes' measurements to the central sink [25].

The wide range of possible application scenarios makes it unfeasible to identify a widely recognized “best” strategy, although a set of general rules can be inferred. The location of sensor nodes must ensure coverage of the entire area under monitoring, from both a sensing and a connectivity point of view. Regarding the former, the actual coverage depends on the application and the monitoring area, where some sub-areas might be excluded and others might require a higher sensor density (e.g., tunnels). As for the latter, it is essential to guarantee that there are no isolated nodes at any time during the monitoring period, and that all of them are able to reach the data collection entity. In practical terms, (at least) one routing path to the sink and/or gateway must be available and known at each sensor node. Several implications can be deduced: (1) each node must be located so as to be within the transmission range of at least one other node of the network; (2) routing path(s) must be acquired by each sensor and dynamically updated whenever needed (e.g., after a failure); (3) if duty cycling is performed, it must be designed so as to guarantee that the awake periods of nodes belonging to a routing path overlap enough to allow proper transmission/reception. In addition, each specific (category of) environment imposes physical constraints on where nodes can be located, on the propagation of the signal, and on the network design.
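
As a toy illustration of implication (1), a connectivity check can be run on a candidate placement before installation. The following Python sketch (node names and links are invented and do not represent the PdG layout) flags nodes with no multi-hop path to the gateway:

```python
# Sketch: verifying design rule (1) on a candidate node placement.
# Node names and radio links below are illustrative assumptions.
from collections import deque

def reachable_nodes(links, sink):
    """Breadth-first search over the connectivity graph: returns the set
    of nodes that have at least one multi-hop path to the sink."""
    seen, queue = {sink}, deque([sink])
    while queue:
        node = queue.popleft()
        for neighbour in links.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# links[n] = nodes within transmission range of n (assumed symmetric)
links = {
    "gw": ["s1", "s2"],
    "s1": ["gw", "s2", "s3"],
    "s2": ["gw", "s1"],
    "s3": ["s1"],
    "s4": [],               # isolated: violates design rule (1)
}
isolated = set(links) - reachable_nodes(links, "gw")
print("nodes without a path to the gateway:", isolated)  # {'s4'}
```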

To conclude this discussion, let us focus on the constraints and challenges typically related to subway environments and on the real deployment under study. Evaluations of several real deployments have demonstrated the special care needed before the actual installation [25], highlighting the importance of performing accurate measurements of the signal power under different conditions, as the structure of underground transportation systems typically exacerbates some issues, such as multi-path fading.

4.2. The case study: design choices and criteria

In this section, we describe a real-world use case of an environmental monitoring system deployed in the context of the EU-funded SEAM4US (Sustainable Energy mAnageMent for Underground Stations) project, aimed at energy management optimization. The main role of the sensor network is to track, in real time, the environmental and energy measures of interest in support of an intelligent control system (Section 2). The WSN was installed in the “Passeig de Gràcia” (PdG) subway station in Barcelona, one of the most crowded in the city, connecting three important railway lines and serving on average more than fifty trains per hour.

The monitoring platform is in charge of gathering data from the sensors, re-arranging and post-processing them into a database, and forwarding clean, time-aligned measurements to the intelligent control unit, which is in charge of applying optimal control policies for energy saving. The PdG metro station is a harsh environment for wireless sensor placement, operation, and communication: many station areas are located far from each other, many obstacles can severely affect signal propagation, and the scenario is highly dynamic due to passengers and trains crossing the station every day. The station area is sprawling, so connecting all nodes directly to the gateway was neither feasible nor efficient. For that reason, the network architecture was designed to provide multi-hop paths from all nodes to the gateway.

Several sensor nodes were installed inside the metro station, each of them sensing a specific set of environmental aspects under investigation. One of the most important design choices was to deploy a wireless network, as the need for wiring all the nodes can easily limit the installation options. For similar reasons, most of the sensor nodes are battery powered, whereas all the sensor gateways have a power supply, in order to guarantee continuous operation. The position of the sensors was decided according to the specific requirements for modeling and controlling the system. Due to the limitations mentioned above, it is often necessary to find a trade-off among the requirements of monitoring, wireless communication, and the building structure.

The transmitted values should typically be estimated as the average of a set of measurements; thus, sensors should typically be installed in several locations scattered around the station. A calibration process was then applied to estimate to what extent measurements were affected by their sub-optimal locations and, when feasible, correction factors were applied.

Routing plays a key role in ensuring correct data delivery. To allow multi-hop communication and adapt to variable conditions, we implemented a dynamic routing protocol to achieve flexibility and quicker setup. The protocol periodically exchanges a very restricted amount of information, piggybacked on other control packets, so that routing information is updated frequently while the amount of additional control traffic remains very low. The actual ad-hoc routing procedure, involving more control packets to be exchanged, is used only in case of missing information. The routing algorithm takes both link quality and hop count into account, in order to better capture the quality of each available path, as sketched below.
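
The chapter does not publish the exact metric, so the following Python sketch only illustrates the idea of a combined path cost: Dijkstra's algorithm over ETX-like link costs (expected transmissions; higher means a worse link), with hop count as a tie-breaker. All link costs are invented:

```python
# Sketch of a path metric combining link quality and hop count.
import heapq

def best_path(link_cost, source, gateway):
    """Dijkstra over (quality cost, hop count): primarily minimizes the
    sum of ETX-like per-link costs, using hop count as a tie-breaker."""
    best = {source: (0.0, 0)}
    heap = [(0.0, 0, source, [source])]
    while heap:
        cost, hops, node, path = heapq.heappop(heap)
        if node == gateway:
            return cost, hops, path
        for nxt, c in link_cost.get(node, {}).items():
            cand = (cost + c, hops + 1)
            if cand < best.get(nxt, (float("inf"), 0)):
                best[nxt] = cand
                heapq.heappush(heap, (cand[0], cand[1], nxt, path + [nxt]))
    return None

link_cost = {                     # illustrative ETX-like costs
    "s3": {"s1": 1.2, "s2": 3.0},
    "s1": {"gw": 1.1},
    "s2": {"gw": 1.0},
}
print(best_path(link_cost, "s3", "gw"))  # prefers s3 -> s1 -> gw
```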

Each sensor acquires its settings from the gateway after reboots and error conditions, and it stores some previous measurements. This approach significantly helped to avoid data loss in case of unexpected events, such as a sudden worsening of the signal quality in the station. Concerning energy consumption, using cheap batteries instead of more expensive energy harvesters allowed us to achieve a battery lifetime that was sufficient for the project requirements.

Data reliability was strengthened by implementing a mechanism that periodically verifies the data received at the gateway server and requests re-transmission in case of missing values, as sketched below. In practice, real-time monitoring is stricter in terms of delay requirements; hence, the mechanism parameters must be tuned according to the specific requirements. During the modeling and testing time, the design of the monitoring network turned out to be redundant, both in terms of measurements collected and in terms of adjacent nodes available for routing, due to some changes in the requirements. Clearly, in this kind of harsh environment, redundancy is often needed or desired, as it contributes to increasing the reliability of the system.
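
A minimal sketch of such a gateway-side check follows; the sequence-number bookkeeping and the `request_retransmission` call are hypothetical, since the actual SEAM4US protocol details are not given here:

```python
# Sketch: periodically scan received sequence numbers per sensor and
# request re-transmission of the missing ones (illustrative only).
def missing_sequences(received, window_start, window_end):
    """Return the sequence numbers not yet received in the check window."""
    return sorted(set(range(window_start, window_end + 1)) - set(received))

received = {12: [100, 101, 103, 105]}   # sensor id -> seq numbers seen
for sensor_id, seqs in received.items():
    gaps = missing_sequences(seqs, 100, 105)
    if gaps:
        # request_retransmission(sensor_id, gaps)  # hypothetical call
        print(f"sensor {sensor_id}: re-request {gaps}")  # [102, 104]
```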

4.3. Implementation and system performance

The system consists of the central SEAM4US server, various gateway servers, gateway nodes, and sensor nodes. The gateway servers are Linux servers (FitPCs) connected to the gateway nodes. The gateway and sensor nodes are implemented on the Econotag processing and communication board by Redwire LLC, with an additional 32 kHz oscillator and one of the four sensor boards specifically designed to fulfill the requirements of the project. Table 1 lists all of them, reporting the specific environmental aspects measured by every board. In addition, Figure 2 illustrates how the WSN is installed within the station, also showing that the network is divided into four sub-networks from the communication point of view (SN2, SN3, SN4, SN5), whereas SN1 includes only the weather station.

| Type | Probes | Sensor IDs |
| --- | --- | --- |
| Sensor Board 1 | Air pressure, air temperature, surface temperature, high-speed anemometer | 1, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 57 |
| Sensor Board 2 | Air pressure, relative humidity, CO2, PM10 | 26, 35 |
| Sensor Board 3 | Air temperature, surface temperature, low-speed anemometer, differential pressure | 9, 56 |
| Weather Station | Solar radiation, CO2, PM10 | 55 |

Table 1.

List of sensors deployed within PdG station.

Figure 2.

WSN deployment within PdG station.

We installed Contiki OS as the operating system of the nodes and made various additions to it, such as the routing and control protocols, to make it more suitable to our needs. Concerning power saving, we implemented an energy-efficient MAC protocol that combines R-MAC [27] and ContikiMAC [28], leading to a cross-layer mechanism that allows nodes to stay in sleep mode most of the time, according to the application's data sampling and delay requirements [29]. In this way, we are able to achieve battery replacement periods of many years, as shown by the estimation in Figure 3, which was satisfactory for this use case.

Figure 3.

Battery replacement period plotted over network transmission interval.

The requirement in terms of data delivery can be summarized by specifying the maximum allowed packet loss (PL) as 20%. The system was able to satisfy this requirement, as the average packet loss over the entire network during the evaluation period was 13%. An interesting finding is that, in this kind of environment, sub-networks can show significant differences in performance compared to other, more homogeneous scenarios. To illustrate this aspect, Table 2 reports the values of some performance indicators observed during a period of one week, both as the average over the entire network and as the average for each sub-network: the PL, the number of hops to the gateway (NH), and the number of available routing paths to the gateway (NP).

| Average | PL | NH | NP |
| --- | --- | --- | --- |
| TOT | 13% | 2.04 | 17.34 |
| SN2 | 7% | 2.19 | 16.8 |
| SN3 | 24% | 2.3 | 4.25 |
| SN4 | 5% | 1.9 | 31 |
| SN5 | 35% | 1.77 | 1.6 |

Table 2.

Performance indicators observed during one week.

The total average is not the average of the values below it, since each sub-network has a different number of nodes. Sub-network SN4 is the one that covers the tunnel, where on average one train passes every minute, meaning that it is located in a very dynamic environment, mainly due to passengers and trains crossing it. As can be seen from Table 2, SN4 is the sub-network with the lowest PL and the highest number of available paths. Indeed, we could observe that the number and position of the sensor nodes in SN4 were sufficient to ensure many alternative multi-hop paths for each node, despite the challenging conditions. On the other hand, SN5 has the highest PL, mainly because each node has very limited alternative paths to reach the gateway, and renovation areas surround or partially cover this area, limiting communication and worsening the channel conditions. Hence, we can say that in harsh environments it is important that every node has some alternative paths to reach the central server, as this redundancy contributes to reducing the PL.

4.4. Architecture of the monitoring sub-system

A fundamental issue in the implementation of MPC is the interface between the model used to drive the control logic and the data gathered by means of the sensor network. Models used in the MPC control loop are based on a somewhat idealized representation of the environment: clean data, perfect time alignment, direct measures of all the necessary physical quantities, etc. Of course, this is not the case in real systems [30]. Therefore, specific modules must be developed to recover from the sensor network a data flow that is suitable for feeding the model predictions: the monitoring subsystem (Figure 4).

Figure 4.

Raw data pre-processing, filtering and resampling.

The main task of the monitoring subsystem is to act as an interface between the model used to drive the control logic and the data gathered by means of the WSN described in Section 3. The control model accepts as inputs synchronized clean data, i.e., complete records at regular time intervals. However, this is never the case for raw data sent by a WSN. For that reason, the monitoring subsystem is made up of a set of units developed to recover the data flow from the WSN and convert it into a form suitable for feeding the control model computations. Summarizing, three main steps are accomplished by this component:

  • filtering in order to reduce the aliasing and the noise of raw data;

  • re-sampling to perform time alignment;

  • post-processing (i.e., unit conversion, calibration, estimation of indirect measurements).

In this sub-section, filtering and re-sampling will be described, whereas post-processing will be the object of the next subsection.

Raw data are asynchronously acquired by the sensor network with sampling frequency fs (subject to some drift due to network latency); therefore, they have to be aligned in time before entering the controller. This task is carried out by a centralized re-sampling process that, at a fixed rate fr, captures the updated value of each sensor and stores it. In order to avoid aliasing, according to Shannon's theorem [31], this process requires the input data to be low-pass filtered with a cut-off frequency not greater than half the re-sampling frequency (fc ≤ fr/2).
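
A minimal sketch of this zero-order-hold re-sampling step follows; the data structures are illustrative, not the actual SEAM4US implementation:

```python
# Sketch: at every tick of the fixed rate fr, capture the latest filtered
# value of each sensor, producing one time-aligned record.
def resample(last_values, t):
    """Capture the most recent value of every sensor at re-sampling time t."""
    return {"t": t, **{sid: v for sid, (v, _) in last_values.items()}}

# last_values[sensor] = (latest filtered value, timestamp of that value)
last_values = {"T_platform": (24.6, 1712.0), "CO2": (512.0, 1698.5)}
record = resample(last_values, t=1800.0)   # fr = 1/600 Hz -> every 600 s
print(record)  # {'t': 1800.0, 'T_platform': 24.6, 'CO2': 512.0}
```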

Filtering is used to smooth data; however, it introduces a delay that, when too long, could make the information useless for control purposes. The delay introduced by the filter depends on the filter order, type, and cutoff frequency (i.e., the frequency at −3 dB of attenuation) with respect to the sampling frequency fs. An IIR filter was selected as the best compromise among complexity, selectivity, and phase shift. Once the cutoff frequency is given, the filter parameter can be computed as

$a = e^{-2\pi f_c / f_s}$  (1)

and a filter recursive form for implementation is

$y_n = (1 - a)\, x_n + a\, y_{n-1}$  (2)
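
Eqs (1) and (2) translate directly into a one-pole low-pass filter. The following Python sketch uses the chapter's frequencies (fs = 1/60 Hz and fc = 1/1200 Hz, introduced below); the sample values are invented:

```python
# Direct implementation of Eqs (1) and (2): a first-order IIR low-pass
# filter with coefficient a = exp(-2*pi*fc/fs).
import math

def make_lowpass(fc, fs):
    a = math.exp(-2 * math.pi * fc / fs)       # Eq. (1)
    state = {"y": None}
    def step(x):
        y_prev = x if state["y"] is None else state["y"]
        state["y"] = (1 - a) * x + a * y_prev  # Eq. (2)
        return state["y"]
    return step

lp = make_lowpass(fc=1/1200, fs=1/60)
samples = [20.0, 20.4, 26.0, 20.3, 20.1]       # a spike from a passing train
print([round(lp(x), 2) for x in samples])      # spike is smoothed out
```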

A sampling frequency fs must be established in order to avoid, or limit, aliasing noise in the digital signal. Again, according to Shannon's theorem, this is done based on the spectrum occupancy band of the continuous signal to be sampled. In order to determine the spectrum occupancy of each sensor type, a data collection campaign was performed in the real station. Data were acquired for a whole week at a high sampling rate in order to obtain an oversampled dataset. The sampling interval was 1 min for temperatures, wind speed, and wind direction, and 10 s for air speed and pollutant concentrations. The sampled data were then re-sampled and aligned in time every 10 s, and a Welch mean-square spectrum was estimated and analyzed. Defining B−20dB as the frequency at which the power attenuation between the amplitude of the lower harmonics in the spectrum and the amplitude of the spectral components beyond B−20dB itself is at least −20 dB, its value can be determined graphically from the spectrum plots.

Shannon's theorem states that, given a signal with occupancy band B, in order to keep all the original information in the sampled signal and to avoid aliasing, the sampling frequency must be fs > 2B. In our case, the sensors are much more reactive than the system dynamics, and much of the original information contains noise that can be removed by post-process filtering with cut-off frequency fc ≪ fs. Thus, a small amount of aliasing can be tolerated if it falls in the part of the spectrum that is cut by the post-process low-pass filter; that is, when fc < B, a less restrictive relation can be applied: fs > B + fc. In other words, it is necessary to have fs > fsL, where:

$f_s^L = \begin{cases} B + f_c, & \text{when } f_c < B \\ 2B, & \text{otherwise} \end{cases}$  (3)

The cut-off frequency of the post-process low-pass filter fc and the re-sampling frequency fr are chosen so that reasonable values of the raw sampling frequency fs and of the residual aliasing are achieved. The results reported in Table 3 are achieved with fr = 1/600 Hz and fc = fr/2 = 1/1200 Hz. The filter cutoff frequency is chosen as large as possible (equal to its upper bound) in order to limit the consequent phase delay and hence make the controller more reactive.

| Sensor | Occupancy band B−20dB | Aliasing | fs min | δs max |
| --- | --- | --- | --- | --- |
| Temperature | 0.3 mHz | 1% | 0.6 mHz | 1667 s |
| Wind speed | 3.0 mHz | 1% | 3.8 mHz | 261 s |
| Wind direction | 6.0 mHz | 1% | 6.8 mHz | 146 s |
| Air speed | 10.0 mHz | 6% | 10.8 mHz | 92 s |
| CO2 concentration | 0.7 mHz | 1% | 1.4 mHz | 714 s |
| PM10 concentration | 2.0 mHz | 1% | 2.8 mHz | 353 s |

Table 3.

Sampling frequencies and sampling intervals for each sensor type.

Based on the spectrum analysis, a sampling interval δs ≐ 1/fs = 60 s is selected as the final value for all the sensors involved in the control. In this way, the sampling interval is large enough to limit the network traffic and the storage requirements, but it is also small enough to avoid significant aliasing in the acquired information.
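
As a cross-check, applying Eq. (3) with fc = 1/1200 Hz reproduces the minimum sampling frequencies of Table 3. A short Python sketch (band values taken from Table 3):

```python
# Sketch applying Eq. (3): minimum raw sampling frequency per sensor type,
# given its occupancy band B and the post-process cut-off fc = 1/1200 Hz.
fc = 1 / 1200                     # Hz

def f_s_min(B, fc):
    """Eq. (3): fs must exceed B + fc when fc < B, else 2B."""
    return B + fc if fc < B else 2 * B

bands = {"temperature": 0.3e-3, "wind speed": 3.0e-3, "CO2": 0.7e-3}
for sensor, B in bands.items():
    fs = f_s_min(B, fc)
    # e.g. temperature: 0.6 mHz and 1667 s, matching Table 3
    print(f"{sensor}: fs_min = {fs*1e3:.1f} mHz, max interval = {1/fs:.0f} s")
```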

In order to exploit the disturbance rejection capability of the closed loop, the control interval, that is, the interval used for updating the control action, is selected as the fastest synchronous data update rate available, that is, the re-sampling rate fr = 1/600 Hz (δr ≐ 1/fr = 600 s).

Finally, the prediction interval, that is, the step used for updating predictions, is selected as the slowest available prediction update rate, which is the weather forecast update rate δw = 1 h. This allows prediction horizons of hours to be used without introducing an excessive computational burden and with better prediction accuracy (lower propagated uncertainty). Since, as will be shown in Section 5.3, satisfactory control results and energy savings were obtained with a prediction horizon of one step (p = 1), a prediction horizon of 1 h is finally adopted.

From now on, the post-processing functions are intended to be applied to filtered and re-sampled values.

4.5. Post-processing functions

Post-processing functions were used to convert the data processed as described in Section 4.4 into a format compliant with the controller unit of the MPC system. In the case of the PdG station, the most important indoor environmental parameters, which were sent to the controller in order to estimate the current state of the controlled domain at each iteration of the MPC, were: air change rates, air temperature values, and dust (PM10) concentration.

4.5.1. Air change rates

Thanks to the layout of the PdG station, which is made up of a series of corridors, if the air flow rates through the corridors leading to PdG's platform are estimated, their balance gives back the total air change rate across the platform, which is of interest because the platform is the most crowded room in the station and is polluted by the passage of trains. As a consequence, an accurate evaluation of the air speed flowing through the corridors is of the utmost importance. This measure was collected by means of the “high-speed anemometer” reported in Section 4, whose records are filtered and re-sampled in real time. However, air speed is just correlated to the air flow rate, whose value is still unknown and must be estimated. In addition, it must be estimated by means of a straightforward algorithm, because it must run in real time. Among the factors that make this conversion task quite cumbersome, we cite the sheltering of the sensor by a pipe, with a net at the pipe's ends, and its location in one of the room's top corners. So, preliminary numerical and experimental surveys were carried out in order to find an accurate conversion procedure. More specifically, numerical simulations supported by experimental evidence showed that obstacles that may be found in corridors (e.g., people) affect the air speed field only locally in any cross section of a corridor, and do not change the overall balance estimated in the case of an unobstructed cross section [32]. In other words, any air speed value on the middle cross section of a corridor could be correlated with the average air speed across the corridor. These values were then multiplied by the cross section's area in order to estimate the overall air flow rate across the corridor. However, the disturbance caused by the presence of the sensor's sheltering was corrected by means of a calibration process, which adjusted the real-time measurements of the high-speed anemometers (Q′) through a relation including a y-offset (q) and a scale factor (m):

$Q = m\, Q' + q$  (4)

The two coefficients were estimated from a set of on-field measurements performed in the PdG station by means of hand-held instruments. The dataset was then split: the first 75% was used for estimating the coefficients q and m, and the remaining 25% was used for validation purposes. Technically, the estimation was based on an ordinary least squares (OLS) algorithm [33]. Six calibration curves were worked out, one for each of the six sensors placed in corridors. Each curve was based on two sets of measurements taken for about 15 min at two different times of the day. The calibrated measurements of air speed were then multiplied by the cross section, so as to work out the air flow rates. The validity of this procedure was demonstrated by the authors in a previous research paper [32].
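
A sketch of this calibration procedure in Python: an OLS fit of m and q on the first 75% of paired readings, validated on the rest. The synthetic data below simply re-use the coefficients reported for node no. 18 further on and are not the real field measurements:

```python
# Sketch of the Eq. (4) calibration: fit offset q and scale m by OLS on
# the first 75% of paired readings, validate on the remaining 25%.
import numpy as np

def fit_calibration(q_sensor, q_benchmark, split=0.75):
    n = int(len(q_sensor) * split)
    A = np.column_stack([q_sensor[:n], np.ones(n)])   # [Q', 1] design matrix
    (m, q), *_ = np.linalg.lstsq(A, q_benchmark[:n], rcond=None)
    residual = q_benchmark[n:] - (m * q_sensor[n:] + q)  # validation error
    return m, q, float(np.sqrt(np.mean(residual ** 2)))

rng = np.random.default_rng(0)
q_true = rng.uniform(0.2, 1.5, 200)                   # benchmark speed, m/s
q_raw = (q_true - 0.1510) / 1.2719 + rng.normal(0, 0.02, 200)
m, q, rmse = fit_calibration(q_raw, q_true)
print(f"m = {m:.4f}, q = {q:.4f} m/s, validation RMSE = {rmse:.3f}")
```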

Figures 5 and 6 provide an example of the inputs and outcomes of this calibration process. Figure 5 compares the plot of the data logged by the hand-held instrument (i.e., the benchmark) and the data plotted by the installed Seam4us sensor: the benchmark presents higher peaks than the Seam4us sensor, and its average value is slightly higher than the other series. Those plots are the result of filtering and re-sampling, as reported in Section 4.4. Then, the y-offset (0.1510 m/s) and scale factor (1.2719) of the calibration curve were estimated by means of OLS analysis, as in Eq. (4). Similarly, the calibration curves for all the other corridors were worked out. In the case of node no. 18, Figure 6 shows that the calibration brought the two curves almost to superimpose. At this juncture, the air flow rates through the corridors are known and ready to be combined with one another in order to estimate the outdoor air supplied to the platform (PL3), that is, the air change per hour from outdoor air (ACO):

$M_{ACO}^{PL3} = Q_{as} + Q_{CNl} + \big(Q_{CNe} - Q_{SLb}\big)^{+}$  (5)

where Qas is the air supplied by the mechanical ventilation system, QCNl is the air flow rate entering across the corridor called CNl, and the last term computes the difference between the air flow rate flowing through corridor CNe and that through another corridor called SLb. The difference was due to the evidence that only the air flow coming from CNe that was not directed toward SLb entered the platform PL3. The plus superscript indicates that this contribution is taken into account only when the balance is positive.
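
Eq. (5) reduces to a one-line balance with a positive-part operator; in the following sketch, the function name and the flow values are illustrative:

```python
# Sketch of the platform air-change balance of Eq. (5). max(..., 0.0)
# implements the plus superscript: the CNe-SLb contribution counts only
# when it is positive. Flow values are illustrative (m^3/s).
def outdoor_air_platform(Q_as, Q_CNl, Q_CNe, Q_SLb):
    """M_ACO for platform PL3 per Eq. (5)."""
    return Q_as + Q_CNl + max(Q_CNe - Q_SLb, 0.0)

print(outdoor_air_platform(Q_as=8.0, Q_CNl=2.5, Q_CNe=3.0, Q_SLb=4.0))  # 10.5
```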

Figure 5.

Raw data at node no. 18.

Figure 6.

Transformed data compared with the dataset used for verification.

Another finding was that trains in tunnels affect the air flow rates flowing across the tunnels. For that reason, the air flow rates estimated in tunnels by means of the sensors' measurements were reduced by a factor included in the post-processing functions, which was computed as the result of an overall air flow balance in the station. Although no contribution to the platform's daily ventilation came from the two PdG tunnels, because they always worked in extraction mode, the reduction of the air flow rates due to the presence of trains was estimated for the purpose of general knowledge. The overall air flow balance in the station around the platform can be written as:

$-Q_{as} = Q_{CNl} + Q_{CNq} + 2\,Q_{CNop} + \alpha\, Q_{tun}$  (6)

where Qas is the same as mentioned above, Qtun is the air flow rate of the fans extracting air from the tunnels, and CNl, CNq, and CNop are corridors. The overall air flow balance led to the coefficient α = 0.7, which is the reduction due to the presence of trains in the tunnels.

4.5.2. Air temperature

Similarly to what was described in Section 4.5.1, the air temperature plots provided by the Seam4us WSN were compared with those measured by accurate hand-held instruments. We checked two types of deviation: a y-offset between the respective average values of the two plots, and peaks of the Seam4us network lower than the peaks of the hand-held instruments. The latter was probably due to the packaging inside which the Seam4us sensors were sheltered, but it was deemed not relevant, because the intelligent control module needed only average temperatures over time steps of 1 h, whereas the peaks were due just to trains passing by, whose effects lasted for a few minutes. The former was corrected by comparing the two average measures from the two plots and applying a y-offset factor to every Seam4us sensor. So, one specific y-offset conversion was estimated for each Seam4us temperature sensor.

4.5.3. Dust concentration

The raw measurements provided by the PM10 sensors were the number of particles counted within 0.283 l of air volume (the unit of measure is pcs/0.283 l). The purpose of the post-processing function was twofold: firstly, to extend the particle count over the whole spectrum, since the sensor was able to sense only particles larger than 0.5 μm; secondly, to express the PM10 concentration in the standard unit of measure, that is, μg/m3. To this end, a post-processing function made of six steps was set up. Given that raw data (rawPM) were measured as pcs/0.283 l, the first step turned them into measures n in pcs/m3 through the factor k = 3534, hence n = k·rawPM. The second step assessed the ratio of particles, out of their total number, that was not considered in the raw measures (because these were limited to particles above 0.5 μm). An on-site survey was performed with a hand-held instrument (a “Fluke particle counter”), and the distribution in Table 4 was obtained. As a result, if the sum of the particles measured by the WSN between 0.5 and 10 μm is taken as 100%, this number must be increased by 79.3% to also include the particles between 0.3 and 0.5 μm.

| Range of diameters [μm] | Central value of diameters dsj [μm] | Ratio Dj [%] |
| --- | --- | --- |
| 0.3–0.5 | 0.4 | 79.3 |
| 0.5–1.0 | 0.75 | 62.2 |
| 1.0–2.0 | 1.5 | 19.1 |
| 2.0–5.0 | 3.5 | 17.7 |
| 5.0–10.0 | 7.5 | 0.88 |
| >10.0 | 12.5 | 0.13 |

Table 4.

Distribution of particles found in PdG station.

Given the distribution of particles in Table 4, in the third step the number of particles per size class was written as nj = n·Dj, where nj is the number of particles whose central diameter is dsj, n is the raw measure of particles, and Dj is the ratio. Steps 4–6 were determined according to the literature and used to convert the number of particles into a concentration. In particular, step no. 4 computes the volume occupied by the nj particles [34]:

$\mathrm{vol}_j = \frac{\pi}{6}\, d_{sj}^3\, n_j$  (7)

As a fifth step, the concentration mj [μg/m3] and the total PM10 concentration are computed as [35]:

$C_{PM10} = \sum_j m_j, \qquad m_j = CF\, \rho_{eff}\, \mathrm{vol}_j$  (8)

where the coefficient CF = 1 in our case, and an overall value of ρeff was assessed experimentally according to the relationship [36]:

$\rho_{eff} = \frac{PM_{10}}{\sum_{j=1}^{5} \mathrm{vol}_j\, F_{PM10,j}}$  (9)

The coefficients F are provided by the literature [36], while PM10 was measured in the station at the same time as the counting in Table 4 was done. It came out that PM10 = 320 μg/m3; all the other values being known, ρeff = 3.15·10^12 μg/m3. By combining all the steps described above, conversions of the kind depicted in Figures 7 and 8 were performed automatically by the Seam4us system, in real time during monitoring.
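
The six-step chain can be condensed into a short routine. The following Python sketch uses the ratios of Table 4 and ρeff = 3.15·10^12 μg/m3; the raw reading is invented, and the routine is only a plausible reading of steps 1–6, not the project code:

```python
# Sketch of the six-step PM10 post-processing chain.
import math

K = 3534                      # step 1: pcs/0.283 l -> pcs/m^3
RHO_EFF = 3.15e12             # ug/m^3, assessed experimentally via Eq. (9)
CF = 1.0
# (central diameter dsj [m], ratio Dj) per size class, from Table 4
CLASSES = [
    (0.4e-6, 0.793),          # 0.3-0.5 um: step 2, extension below 0.5 um
    (0.75e-6, 0.622),         # 0.5-1.0 um
    (1.5e-6, 0.191),          # 1.0-2.0 um
    (3.5e-6, 0.177),          # 2.0-5.0 um
    (7.5e-6, 0.0088),         # 5.0-10.0 um
]

def pm10_concentration(raw_pm):
    n = K * raw_pm                                # step 1: pcs/m^3
    total = 0.0
    for dsj, Dj in CLASSES:
        nj = n * Dj                               # step 3: count per class
        volj = math.pi / 6 * dsj ** 3 * nj        # step 4: Eq. (7)
        total += CF * RHO_EFF * volj              # steps 5-6: Eq. (8)
    return total                                  # ug/m^3

print(f"{pm10_concentration(raw_pm=250):.1f} ug/m^3")
```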

Figure 7.

Raw data collected about PM10.

Figure 8.

PM10 post-processed and converted into concentration.


5. Implementation of the MPC

5.1. MPC installed in the case study

In the considered station (PdG-Line3), air exchange and thermal comfort are achieved by two fans located in the station and two fans situated in the middle of the two tunnels. The original control policy, hereafter referred to as the baseline (BSL), requires the station fans to inject air into the station in the daytime, and the tunnel fans to extract air from the platform in the daytime (between 07.00 and 22.00) and inject air at night (between 22.00 and 07.00 of the next day), when the station fans are switched off. All the fans are driven by an inverter according to a day/night schedule and a seasonal schedule set by the station operator as follows (the sign of the frequency input represents the air flow direction: positive when entering the platform):

  • Winter (January, February, March) and Autumn (November, December) modes: tunnel fans at −25 Hz in the daytime and +25 Hz at night, station fans at +25 Hz only in the daytime;

  • Spring (April, May, June) and Summer (July, August, September, October) modes: tunnel fans at −50 Hz in the daytime and +25 Hz at night, station fans at +50 Hz only in the daytime.

Since the tunnel fans serve different contiguous stations, their control must be implemented at a higher level in order to coordinate multiple stations on the same line. On the contrary, the station fans can be controlled locally (even if they must be driven in parallel in order to avoid falling into stall conditions) and are driven by the MPC agent. Therefore, the ventilation controller has to manage power consumption and indoor comfort by acting on only one actuator, that is, the driving frequency of the two station fans. MPC control is active only when the fans are active in the baseline; therefore, the MPC is ON between 07.00 and 22.00 (daytime).

While the standard approach adopted by the station operator is to drive the devices solely based on time schedules, with MPC the problem is tackled in a dynamic way: the information coming from the monitoring network and from the models is used to decide on the optimal ventilation control to be applied at any given time.

Weather conditions and forecasts are obtained by means of the online web service wunderground.com©. A software proxy is in charge of querying this service periodically, thus providing weather conditions and forecasts. The quality of the current weather conditions is improved by using a local weather station connected to the wireless sensor network. A CCTV-based crowd density estimator (see [37, 38]) is used here to detect people, since it is the main source of data for modeling passenger behavior, and it is based exclusively on the video streams of the CCTV surveillance system (which usually exists in a station).

Based on what was stated in Section 2.2 and is shown in Figure 9, at each re-sampling instant (i.e., every 1/fr seconds), the MPC algorithm collects information about the station by taking re-sampled data from the monitoring subsystem, evaluates the best control action by using hourly predictions over a predefined prediction horizon p, and applies it to the station. Data acquisition is carried out by waiting a maximum defined time-out interval after each re-sampling instant: when the time-out is reached, the MPC proceeds to compute the control action corresponding to that interval and applies it via the actuator proxy. The generation of the candidate control policy is strongly related to the adopted search technique. In the case study, since the controlled actuators share the same input values, discretized from 0 to 50 Hz with a resolution of 1 Hz {0, 1, …, 49, 50}, and the prediction horizon is set to p = 1, an exhaustive case generation is used as a simple solution for determining the optimal control action to be applied, as sketched below.
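
A sketch of this exhaustive one-step search follows; `predict_state` and `cost_J` stand in for the predictive models and the cost function of Eq. (10), and the toy lambdas below are invented only to make the example runnable:

```python
# Sketch of the exhaustive policy search: with one actuator discretized to
# {0, 1, ..., 50} Hz and horizon p = 1, every candidate frequency is
# simulated and the one with the lowest predicted cost J is applied.
def optimal_frequency(state, predict_state, cost_J):
    best_f, best_cost = None, float("inf")
    for f in range(0, 51):                    # 0..50 Hz, 1 Hz resolution
        predicted = predict_state(state, f)   # one-step (p = 1) prediction
        j = cost_J(predicted, f)
        if j < best_cost:
            best_f, best_cost = f, j
    return best_f

# toy stand-ins: energy cost grows with f, discomfort falls with f
f_opt = optimal_frequency(
    state={},
    predict_state=lambda s, f: {"T": 27 - 0.05 * f},
    cost_J=lambda p, f: 0.002 * f**2 + (p["T"] - 25) ** 2,
)
print(f_opt)  # trades energy against comfort; 22 Hz for these toy weights
```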

Figure 9.

Timing of the control system.

For the sake of generality, parameters and variables are normalized here with respect to the maximum value of the related term in the cost function. Therefore, all the physical variables identified with a tilde in their raw version will be represented in the MPC problem formulation by their normalized versions (without tilde).

The total absorbed electric power has to be minimized while keeping the thermal comfort and air-quality parameters as close as possible to the desired values (soft constraints). Moreover, the thermal comfort and air-quality parameters must be kept strictly inside the following bound constraints:

  • Air exchange level on the platform greater than a minimum threshold: $M(t) > M^{LO}(t)$

  • Difference between inside and outside CO2 levels lower than a maximum level: $C_{CO2}(t) - C_{CO2}^{O}(t) < \Delta C_{CO2}^{U}$

  • Particulate level lower than a maximum threshold: $C_{PM10}(t) < C_{PM10}^{U}$

  • Temperature lower than a maximum threshold: $T(t) < T^{U}$

The sum of squares of the total absorbed electric power of the tunnel fans PT and station fans PS has to be minimized. The temperature on the platform T should be as close as possible to the outside temperature TO and to the desired value T´, and it must be lower than the upper bound TU. The air change rate with outdoor air M should be as large as possible, and it must be greater than the lower bound MLO, which also takes the current occupancy into account. The pollutant concentrations on the platform, CCO2 and CPM10, should be minimized; moreover, CCO2 must be lower than the outdoor concentration CCO2O plus an allowed increase ΔCCO2U, and CPM10 must be lower than the upper bound CPM10U. In order to achieve these multiple conflicting objectives over the fixed prediction horizon p, the single objectives are combined into a global objective by arbitrary weighting factors αPT, αPS, αΔT, αT, αM, αCO2, αPM10, αΔF. Therefore, the following cost function is defined and evaluated:

$J(t) = \sum_{k=1}^{p} \Big[ \alpha_{PT} P_T(t+k)^2 + \alpha_{PS} P_S(t+k)^2 + \alpha_{\Delta T} \big(T^O(t+k) - T(t+k)\big)^2 + \alpha_{T} \big(\acute{T} - T(t+k)\big)^2 + \alpha_{M} \big(1 - M(t+k)\big)^2 + \alpha_{CO2}\, C_{CO2}(t+k)^2 + \alpha_{PM10}\, C_{PM10}(t+k)^2 + \alpha_{\Delta F} \big(F(t+k-1) - F(t+k-2)\big)^2 \Big]$  (10)

The last term of the cost function has been added to account for the amplitude of the change in the frequency F that drives the actuators. It is a stability objective that can be useful to smooth the control moves. Denoting by the superscripts L and U the lower and upper bounds, respectively, the previous minimization is subject to the following comfort constraints ∀ t ∈ Z:

$M(t) > M^{LO}(t), \quad C_{CO2}(t) - C_{CO2}^{O}(t) < \Delta C_{CO2}^{U}, \quad C_{PM10}(t) < C_{PM10}^{U}, \quad T(t) < T^{U}$  (11)

and to the following operative constraints ∀ t ∈ Z, which consider the physical limits of the actuators:

$F^{L} < F(t) < F^{U}$  (12)

Once the bounds are derived from regulations or operative limits and the set point T´ is fixed by comfort requirements, the remaining degrees of freedom for tuning the controller are the weights α of the different cost terms and the prediction horizon p. At each control step t, the MPC problem consists in finding the optimal control sequence that minimizes the cost function (10) under constraints (11) and (12). The presence of constraints makes the optimization problem non-trivial; however, as reported in the literature [39–41], barrier functions can be used to transform the constraints into objectives. In constrained optimization, a barrier function is a continuous function whose value increases to infinity as the boundary of the feasible region is approached: it is used as a penalizing term for violations of the constraints.

The logarithmic barrier function is used here to implement a generic upper-bound constraint x ≤ xU, thus transforming the original constrained optimization problem into an equivalent unconstrained one, as sketched below. Then, by exploiting the predictive models, all the possible scenarios are evaluated and the best one is selected for controlling the fan.
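
A minimal sketch of a logarithmic barrier for an upper-bound constraint (the weight `mu` is a hypothetical tuning parameter, not a value from the project):

```python
# Sketch: logarithmic barrier turning the constraint x <= xU into a cost
# term that is finite inside the feasible region and grows to infinity
# as x approaches the bound.
import math

def log_barrier_upper(x, x_upper, mu=1.0):
    """Penalty enforcing x <= x_upper; infinite beyond the bound."""
    if x >= x_upper:
        return float("inf")
    return -mu * math.log(x_upper - x)

for x in (0.5, 0.9, 0.99):
    print(f"x = {x}: barrier = {log_barrier_upper(x, 1.0):.2f}")
# 0.69, 2.30, 4.61: the penalty diverges near the bound x_upper = 1.0
```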

Further details about the control approach can be found in [12].

5.2. The predictive models

Two dynamic Bayesian networks (DBN) were embedded in the MPC scheme in order to act as the control models [42]. The main benefits deriving from their use are: an established formalism for dealing with uncertain and incomplete information; a graphical representation that explicitly states causal relations between variables; causal assumptions that limit the amount of data needed to define joint probability distributions; and the possibility of "belief updating", that is, conditional probability tables can be updated and refined when new evidence becomes available.

All the afore-listed features closely match the needs that typically arise in contexts monitored in real time. For instance, data coming from sensors are always affected by uncertainty; even more important, while monitoring is in progress, new knowledge becomes available and should be used to refine the parameters of the nonlinear probability distribution functions embedded in the DBNs. Hence, the choice of DBNs fits particularly well in this case, where simpler models (e.g., auto-regressive) cannot work due to the complexity of the domain. The word "dynamic" means that these networks are able to make inferences about some state variables of a system at time "t + 1" once the state of the same system at time "t" is known, thanks to the data provided by the WSN environmental monitoring system.
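A toy sketch of one-step DBN inference and belief updating is shown below, assuming a single discrete state variable with three levels. This is illustrative only; the networks described in [43] involve many interrelated variables:

```python
import numpy as np

# Transition model P(state at t+1 | state at t): rows index the state at t.
transition = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.3, 0.6],
])

belief_t = np.array([0.2, 0.5, 0.3])   # belief over the state at time t
belief_t1 = belief_t @ transition      # predicted belief at time t+1

# "Belief updating": when a new WSN measurement arrives, weight each
# state by the likelihood of the observation and renormalize.
likelihood = np.array([0.1, 0.6, 0.3])
posterior = belief_t1 * likelihood
posterior /= posterior.sum()
print(posterior)
```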

A detailed description of the two DBNs embedded in the whole control architecture is out of the scope of this contribution, although it is provided in another book chapter [43]. However, it is worth recalling what the two DBNs were responsible for:

  1. The first DBN estimated the indoor air temperature in the station halls and on the platform; the temperature values at selected nodes are estimated from the temperature values recorded at various places in the station and outdoors, the number of trains passing per hour, and the occupation density during previous hours.

  2. The second DBN estimated the air flow rates in the station and the rate of air changes on the platform, starting from the temperature values forecast by the first DBN, the expected mechanical air supply, outdoor forecasts, and the state of some variables of the station in previous hours, namely the airflow rates in some corridors and in the tunnels, the outdoor air supplied by the mechanical air supply system, and the number of trains per hour.

Both networks were initially trained on a set of data generated by means of a lumped-parameter simulation model; their parameters were then gradually improved by means of datasets made up of measurements from the WSN.

5.3. Performances of MPC

The described MPC strategy was successfully applied to control the forced ventilation of the PdG-Line3 station for 5 months (from August to December 2014). Some relevant measurements collected by the monitoring system, representing filtered, re-sampled, and post-processed quantities, are depicted in Figures 10 and 11.

Figure 10.

Performances measured during a working day (left) and a weekend (right) in October.

Figure 11.

Performances measured during the whole month of October.

Figure 10 refers to the system operating in summer mode during a working day (when the station opens at 05:00 and closes at 24:00) and during a weekend (when the station remains open around the clock). Similarly, Figure 11 refers to the whole month of October. Note that from October 18th to 21st there is an interruption due to maintenance in the station, during which the fans were switched off manually. The comfort levels in the station worsened significantly in that period, which demonstrates the effectiveness of the control.

Note that, during working days, irrespective of summer or winter mode, the higher number of people and trains passing through the station makes the indoor climate less comfortable and the saving margins become smaller, albeit still relevant with respect to the weekend. The comfort indexes all remain acceptable: the temperature is kept stable at acceptable levels, while the pollutants comply with the constraints.

Permanently installed energy meters were used for monitoring energy consumption before and after the installation of the SEAM4US system. The total energy saving with respect to the baseline is defined as $S \doteq E_{BSL} - E_{MPC}$, that is, the difference between the energy consumption obtained with the baseline strategy $E_{BSL}$ and the energy consumption achieved with the MPC strategy $E_{MPC}$. The percent energy saving $S_{\%}$ is the same value $S$ divided by the baseline energy consumption $E_{BSL}$.

The energy savings computed from the direct and continuous measurements taken during the 4 months of full operation are reported in Table 5. These periods are representative of the different operating conditions occurring in the PdG-Line3 station and yield a global saving of about 33% without significantly affecting passenger comfort.

Month        S (kWh)   S%
September    2304      30
October      3859      48
November     240       11
December     297       14
Total        6701      33

Table 5.

Monthly and total energy saving produced by the MPC.
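As an illustrative cross-check, the definitions above can be applied to the September row of Table 5 (the reconstructed values are approximate, since $S_{\%}$ is rounded to the nearest percent):

$$
E_{BSL} = \frac{S}{S_{\%}} = \frac{2304\ \mathrm{kWh}}{0.30} \approx 7680\ \mathrm{kWh}, \qquad E_{MPC} = E_{BSL} - S \approx 5376\ \mathrm{kWh}
$$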


6. Conclusions and lessons learnt

The experimental results and lessons learnt reported in this chapter confirm the importance of the pre-evaluation of the environment and of a careful pre-deployment design for a wireless real-time environmental monitoring system. Several technologies and protocols are available to build the wireless system, supporting many tasks, from periodical sensing to recovery from PL at the gateway. However, the selection of those to be used is highly dependent on the scenario and must be a trade-off between requirements and limitations. An essential aspect is accessing the nodes remotely and implementing the network with self-configuration and recovery-after-failure capabilities, so as to reduce human intervention. However, we also showed that knowing the complete architecture of the building, and the details of the operational conditions, is of utmost importance for identifying the source of unexpected problems or sudden changes of operation (e.g., an unusual event such as a city festival). From the data-forwarding point of view, our results confirm the challenging signal propagation, especially in tunnels at rush hours, and highlight the importance of installing redundant nodes for both sensing and connectivity purposes. Ensuring redundant paths from a node to the gateway proved to be of paramount importance for keeping data loss below a reasonable value.

Once WSNs are in place and real-time monitoring is running, several control applications can be developed on top of them. However, the controller needs filtered, re-sampled, and processed datasets with no missing values; hence, post-processing must be implemented for this purpose. In the specific case described in this chapter, real-time monitoring was used in support of MPC and was tested in a subway station in Barcelona. Not only did the system prove able to comply with the air quality and comfort requirements set for these places, but it also determined energy savings as high as 33%. In conclusion, considering the affordability and wide availability of monitoring technologies nowadays, much research should be devoted to the development of intelligent control logics that can determine very high energy savings, even when installed in existing buildings.

References

  1. Mayne D.Q., Rawlings J.B., Rao C.V., Scokaert P.O.M. Constrained model predictive control: Stability and optimality. Automatica. 2000;36(6):789–814.
  2. Mayne D.Q. Model predictive control: Recent developments and future promise. Automatica. 2014;50(12):2967–2986.
  3. Maciejowski J.M. Predictive control with constraints. Prentice Hall, England; 2002.
  4. Borrelli F. Constrained optimal control of linear and hybrid systems. Springer, Heidelberg; 2003.
  5. Zhang X., Grammatico S., Margellos K., Goulart P.J., Lygeros J. Randomized nonlinear MPC for uncertain control-affine systems with bounded closed-loop constraint violations. Proceedings of: IFAC World Congress, Cape Town, South Africa; August 2014.
  6. Zhang X., Schildbach G., Sturzenegger D., Morari M. Scenario-based MPC for energy-efficient building climate control under weather and occupancy uncertainty. Proceedings of: European Control Conference, Zurich, Switzerland; July 2013. pp. 1029–1034.
  7. Oldewurtel F., Parisio A., Jones C.N., Gyalistras D., Gwerder M., Stauch V., Lehmann B., Morari M. Use of model predictive control and weather forecasts for energy efficient building climate control. Energy Build. 2012;45:15–27.
  8. Mesbah A., Streif S., Findeisen R., Braatz R.D. Stochastic nonlinear model predictive control with probabilistic constraints. Proceedings of: American Control Conference (ACC). IEEE; 2014. pp. 2413–2419.
  9. Kwak Y., Huh J.H., Jang C. Development of a model predictive control framework through real-time building energy management system data. Appl. Energy. 2015;155:1–13.
  10. Casals M., Gangolells M., Forcada N., Macarulla M., Giretti A. A breakdown of energy consumption in an underground station. Energy Build. 2014;78:89–97.
  11. Henze G., Kalz D. Experimental analysis of model-based predictive optimal control for active and passive thermal storage inventory. HVAC&R Res. 2005;11(2):189–213.
  12. Vaccarini M., Giretti A., Tolve L.C., Casals M. Model predictive energy control of ventilation for underground stations. Energy Build. 2016;116:326–340.
  13. Oldewurtel F., Sturzenegger D., Morari M. Importance of occupancy information for building climate control. Appl. Energy. 2013;101:521–532.
  14. Langendoen K., Baggio A., Visser O. Murphy loves potatoes: Experiences from a pilot sensor network deployment in precision agriculture. Proceedings of: 20th IEEE International Parallel & Distributed Processing Symposium. IEEE; 2006.
  15. Barrenetxea G., et al. SensorScope: Out-of-the-box environmental monitoring. Proceedings of: International Conference on Information Processing in Sensor Networks (IPSN'08). IEEE; 2008.
  16. Hakala I., et al. Evaluation of environmental wireless sensor network: Case Foxhouse. Int. J. Adv. Netw. Serv. 2010;3(1 & 2), special issue.
  17. Szewczyk R., et al. An analysis of a large scale habitat monitoring application. Proceedings of: 2nd International Conference on Embedded Networked Sensor Systems. ACM; 2004.
  18. Silva A., Vuran M. Development of a testbed for wireless underground sensor networks. EURASIP J. Wirel. Commun. Netw. 2010; Article ID 620307, 14 pages. doi:10.1155/2010/620307.
  19. Cardell-Oliver R., et al. Field testing a wireless sensor network for reactive environmental monitoring [soil moisture measurement]. Proceedings of: Intelligent Sensors, Sensor Networks and Information Processing Conference. IEEE; 2004.
  20. Turner J.S.C., et al. The study of human movement effect on signal strength for indoor WSN deployment. Proceedings of: IEEE Conference on Wireless Sensor (ICWISE). IEEE; 2013.
  21. Ceriotti M., et al. Monitoring heritage buildings with wireless sensor networks: The Torre Aquila deployment. Proceedings of: 2009 International Conference on Information Processing in Sensor Networks. IEEE Computer Society; 2009.
  22. Peralta L.M.R., de Brito L.M.P.L. An integrating platform for environmental monitoring in museums based on wireless sensor networks. Int. J. Adv. Netw. Serv. 2010;3(1 & 2).
  23. Li X., et al. A feasibility study of the measuring accuracy and capability of wireless sensor networks in tunnel monitoring. Front. Struct. Civil Eng. 2012;6(2):111–120.
  24. Nurchis M., Valta M., Vaccarini M., Carbonari A. A wireless system for real-time environmental and energy monitoring of a metro station: Lessons learnt from a three-year research project. Proceedings of: 32nd ISARC Symposium; 15–18 June 2015.
  25. Bennett P.J., et al. Wireless sensor networks for underground railway applications: Case studies in Prague and London. Smart Struct. Syst. 2010;6(5–6):619–639.
  26. Martins G., Oechsner S., Bellalta B. A centralized mechanism to make predictions based on data from multiple WSNs. In: Multiple Access Communications. Lecture Notes in Computer Science, vol. 9305; 2015. pp. 19–32.
  27. Koskela P., Valta M., Frantti T. Energy efficient MAC for wireless sensor networks. Sens. Transducers. 2010;121(10):133.
  28. Dunkels A. The ContikiMAC radio duty cycling protocol. SICS Tech. Rep. T2011:13; 2011.
  29. Hiltunen J., Valta M., Ylisaukko-oja A., Nurchis M. Design, implementation and experimental results of a wireless sensor network for underground metro station. Int. J. Comput. Sci. Commun. Netw. 2014;4(3):58–66.
  30. Žáčeková E., Váňa Z., Cigler J. Towards the real-life implementation of MPC for an office building: Identification issues. Appl. Energy. 2014;135:53–62.
  31. Shannon C.E. A mathematical theory of communication. ACM SIGMOBILE Mobile Comput. Commun. Rev. 2001;5(1):3–55.
  32. Di Perna C., Carbonari A., Ansuini R., Casals M. Empirical approach for real-time estimation of air flow rates in a subway station. Tunn. Undergr. Space Technol. 2014;42:25–39.
  33. Wonnacott T.H., Wonnacott R.J. Introductory statistics. 5th ed. John Wiley and Sons; New York; 1990. 705 p.
  34. Hinds W.C. Aerosol technology. John Wiley and Sons; New York; 1999.
  35. Kulkarni P., Baron P.A., Willeke K. Aerosol measurement: Principles, techniques and applications. 3rd ed. John Wiley & Sons; New York; 2011.
  36. De Carlo P.F., Slowik J.G., Worsnop D.R., Davidovits P., Jimenez J.L. Particle morphology and density characterization by combined mobility and aerodynamic diameter measurements. Part 1: Theory. Aerosol Sci. Technol. 2004;38(12):1185–1205.
  37. Rahmalan H., Nixon M.S., Carter J.N. On crowd density estimation for surveillance. 2006.
  38. Chow T.W.S., Yam J.Y.F., Cho S.Y. Fast training algorithm for feedforward neural networks: Application to crowd estimation at underground stations. Artif. Intell. Eng. 1999;13(3):301–307.
  39. Nash S.G., Polyak N.R., Sofer A. A numerical comparison of barrier and modified barrier methods for large-scale bound-constrained optimization. In: Large Scale Optimization. Springer; 1994. pp. 319–338.
  40. Hauser J., Saccon A. A barrier function method for the optimization of trajectory functionals with constraints. Proceedings of: 45th IEEE Conference on Decision and Control. IEEE; 2006. pp. 864–869.
  41. Wright S.J., Nocedal J. Numerical optimization. 2nd ed. Springer-Verlag; New York; 1999.
  42. Korb K.B., Nicholson A.E. Bayesian artificial intelligence. 2nd ed. Chapman & Hall, London; 2011.
  43. Carbonari A., Vaccarini M., Giretti A. Bayesian networks for supporting model based predictive control of smart buildings. In: Nezhad M.S.F., editor. Dynamic Programming and Bayesian Inference, Concepts and Applications. InTech; 2014. pp. 3–37.
