Open access peer-reviewed chapter

Improving Quality of Life: Home Care for Chronically Ill and Elderly People

Written By

Natalia López Celani, Sergio Ponce, Olga Lucía Quintero and Francisco Vargas-Bonilla

Submitted: 04 May 2017 Reviewed: 13 June 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.70113

From the Edited Volume

Caregiving and Home Care

Edited by Mukadder Mollaoglu

Abstract

In this chapter, we propose a system especially created for elderly or chronically ill people who have special needs and little familiarity with technology. The system combines home monitoring of physiological and emotional states through a set of wearable sensors, user-controlled (automated) home devices, and a central control that integrates the data, in order to provide a safe and friendly environment suited to the limited capabilities of the users. The main objective is easy, low-cost automation of a room or house to provide a friendly environment that enhances the psychological condition of immobilized users. In addition, the complete interaction of the components provides an overview of the physical and emotional state of the user, building a behavior pattern that can be supervised by the caregiving staff. This approach allows the integration of physiological signals with the patient’s environmental and social context to obtain a complete picture of the emotional state.

Keywords

  • home care
  • domotics
  • health monitoring
  • emotion recognition
  • elderly caregiving

1. Introduction

The statistics and surveys of the World Health Organization (WHO) [1] reveal an increase in life expectancy (400 million people over 60 years old are expected by 2050), longer treatments for chronic diseases such as cancer, diabetes, and cardiovascular conditions, and a growing prevalence of Alzheimer’s disease and dementia, which lead to rising health costs and a social problem for personal care and hospitalization. According to United Nations projections, people aged 65 and older will account for 15.7% of the total population in 2030 [2].

A recent survey in Europe has shown the socioeconomic impact of this problem: chronic illness and aging affect 8 out of 10 people aged over 65 in Europe and consume 70–80% of health care budgets, both because of the increasing number of patients and because of a focus on acute disease that manages resources inefficiently for this case. From this perspective, chronicity is a major societal challenge that demands strategies to enhance quality of life and prevent unnecessary hospitalizations. A similar situation has been reported in the United States, where the older population reached 14.9% in 2015 [3] and the population aged 80 and over is projected to more than triple between 2015 and 2050.

In the Latin American region, the situation is no different. According to the Pan American Health Organization’s (PAHO) report on Innovative Care for Chronic Conditions [4], 76% of deaths (4 million people annually) are due to chronic conditions and only 1 in 10 people receives adequate treatment. All countries in the region show an accelerated increase in the aging population, due to longer life expectancy, but the adverse side is the risk of chronic diseases and the increase in public health costs, which are already insufficient for current needs.

Beyond legislative and educational responses to this problem, it is necessary to change the structure of health systems and the paradigm of caring for the elderly and the chronically ill, promoting home care and family support in a controlled environment. This initiative has to be accompanied by the educational, management, and budget changes that will be essential over the next decades.

Technology will be a key instrument for a new concept of home care and continuous monitoring. New, smaller devices and communication possibilities enable a more humane model of assisted living as an alternative to institutionalization, with great potential to reduce the burden of chronic diseases by making better use of this knowledge. A multidisciplinary approach is needed, because an integral scheme must consider the physiological, emotional, social, and environmental conditions that promote the well-being of and support for the patients.

Men and women in their older ages suffer increased fragility, incapacity, chronic illness, and dependency. Several attempts at remote health monitoring can be found in the literature [5–7]. For example, Stikic et al. [8] present effective and unobtrusive long-term monitoring and recognition of activities of daily living based on the combination of data from two different types of sensors: radio frequency identification (RFID) tag readers and accelerometers. A system based on wireless body sensors and smartphones was presented in [9] to offer remote monitoring and alert family and ambulance services if an emergency is detected. As established by the authors, iCare is not only a real-time health monitoring system for the elderly but also a living assistant.

Regarding home automation and control, in the work of Ghazal and Al-Khatib [10], the authors propose the use of a single controller to command home appliances and sensors by using XBee transceivers. The remote controller has command buttons, alert LEDs, and an LCD for showing messages. A single master board toggles the ON/OFF switches of the appliances by means of relays.

The work of Akanbi and Oladeji [11] uses voice commands to control multiple home electrical or electronic appliances of up to 5 V with an Arduino Uno microcontroller. The design transmits voice commands through a wireless microphone and a graphical interface. A systematic review of smart-home technologies to assist older adults can be found in [7], which concludes that smart-home technologies are readily accepted by older adults and their family members but must be improved to enhance safety and privacy. That work highlights future challenges in aspects such as social and emotional well-being as well as the monitoring of outdoor interests and activities.

The system proposed here differs significantly from these approaches because it was created especially for people with special needs and little familiarity with technology. The system combines home monitoring of physiological and emotional states through a set of wearable sensors, user-controlled (automated) home devices, and adapted human-computer interfaces (HCI), in order to provide a safe and friendly environment suited to the limited capabilities of the users. The main objective is easy, low-cost automation of a room or house to provide a friendly environment that enhances the psychological condition of immobilized or vulnerable users, improves their safety, and allows them to live at home longer. Moreover, the complete interaction of the components provides an overview of the physical and emotional state of the user, building a behavior pattern that can be supervised by the caregiving staff.

The emotional content is included here as one of the main concerns for elderly and disabled people. We propose the use of emotion recognition algorithms applied to voiced speech in order to track the integrated emotional state of the patient and use the signals as indicators of certain behaviors. The authors have worked for years to develop and offer the most accurate algorithms possible, and here we propose a data fusion of the best approaches. Moreover, since Parkinson’s disease (PD) can be detected from voice analysis, our proposal also allows constant monitoring and early detection of this condition. In addition, an important feature of our proposal is a methodology for detecting movements of interest, i.e., fall detection. Consequently, we present a couple of methodologies that can be easily understood and implemented in an in-house application.

There is a wide variety of commercial domotic systems, mostly focused on lighting automation, blinds, air conditioning, and other predefined tasks, driven by ever-increasing needs for home comfort, user connectivity, energy saving, and security. Phone remote access or voice commands are the most common user interfaces, which are not always suitable for the elderly. House automation for elderly people has different needs and objectives, attempting to require minimal user intervention while maximizing physical and emotional support.

With this aim, we present an integral assistive home care system (IAHCS), specially designed for elderly or chronically ill people who present mobility difficulties and reduced decision-making capacity. The main objective is to provide a technological aid that balances promoting autonomy with protecting a vulnerable adult from harm. In parallel, care for the elderly and people with chronic diseases has also improved, emphasizing the need for preventive medicine and monitoring of these patients in their homes.

The approach proposed in this chapter comprises a wearable user device, a domotic system core installed on a personal computer (PC), and an iconographic software application (SICAA) that allows the patient to interact with the environment and peripheral devices. In addition, a Wi-Fi-enabled microphone records the user’s voice and sends it to the processing unit in order to recognize emotional states. This scheme is integrated in a control block, described in Figure 1.

Figure 1.

Overall scheme of the integral assistive home care system. The user block comprises wearable sensors for health and voice acquisition. The home automation block includes the relay control board and home appliances. Human-computer interfaces connect the user with the central control. All communications are wireless.

2. Human-computer interface (HCI)

Regarding the HCI, the major requirements are noninvasiveness, low cost, robustness, and adaptability for a group of users with similar capacities. For many years, we have developed HCIs that were tested as control inputs with acceptable results, especially for the needs of disabled people, such as adapted keypads, head-mouse controllers, vision-based interfaces using hand or head position detection, and electromyogram- and electrooculogram-based interfaces, among others [12–16].

For this system, a conventional mouse can be used, as well as tablets, Wii controllers, or more sophisticated controls. Voice commands are also an option, and are preferred because they allow the emotion recognition processing explained below.

In any case, the control input is configurable and versatile according to the user’s abilities, enabled by the modularity of the system.

3. Wearable monitoring

Wearable sensors are steadily becoming the most prevalent personal devices, playing a crucial role in the monitoring of chronic patients [17, 18]. Biological variables such as heart rate and oxygen saturation are being used for self-monitoring and preventing health conditions such as hypertension and stress [19–21]. A wearable system should meet certain design conditions, such as data storage, wireless connection, power supply, portability, and versatility, among others. We propose a platform of wearable sensors whose core is a master module that handles data acquisition, synchronization, and wireless transmission [22]. This module is connected to sensors, or slaves, that acquire biological signals and process them to minimize the amount of data to be transmitted. The connection between slaves and master is intended to be properly wired into an article of clothing or made via Bluetooth. Active sensors are also selected to minimize battery consumption and store data selectively. It is important to note that the communication protocol must be the same regardless of the nature of the acquired signal.

The data transmission from the master is carried out through Wi-Fi using the TCP/IP protocol to a personal computer, in order to process the data and extract descriptive features for health monitoring that will be used in the supervised control. The master board is shown in Figure 2; its main core is the CC3200 microcontroller unit from Texas Instruments®.
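
As a minimal sketch of the PC side of this link, the snippet below listens for a TCP connection from the master and unpacks fixed-size packets. The port number and the packet layout (sensor identifier, timestamp, value) are assumptions for illustration; the chapter only specifies Wi-Fi transmission over TCP/IP.

```python
import socket
import struct

HOST, PORT = "0.0.0.0", 5000          # hypothetical listening address and port
PACKET = struct.Struct("<BIf")        # assumed layout: sensor id, timestamp (ms), value

def serve():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()     # wait for the master module to connect
        print("master connected from", addr)
        with conn:
            buffer = b""
            while True:
                chunk = conn.recv(1024)
                if not chunk:
                    break
                buffer += chunk
                # Consume complete packets; keep any trailing partial bytes.
                while len(buffer) >= PACKET.size:
                    sensor_id, t_ms, value = PACKET.unpack_from(buffer)
                    buffer = buffer[PACKET.size:]
                    print(f"sensor {sensor_id} @ {t_ms} ms -> {value:.2f}")

if __name__ == "__main__":
    serve()
```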

Figure 2.

Wearable modules. Top: master module. Bottom: slave modules for heart rate/SpO2, IMU, temperature, and microphone. Note the size relative to a wristwatch.

The sensor modules, or slave units, consist of a base platform whose principal component is a microcontroller unit (MCU) with ARM architecture (Advanced RISC Machine) that allows the addition of different analog circuits according to the biological variable to be measured. This module meets specific requirements of power consumption, inputs/outputs, and radio-frequency transmission. Thanks to its ultra-low power consumption, it can operate for long periods of time, even with smaller batteries.

The biological variables acquired (each with its own specific acquisition and preprocessing module) are:

  • Temperature module: It registers two temperature channels. Body temperature is sensed, which is especially relevant for chronic illness, along with ambient temperature, to prevent hypothermia or excessive perspiration (this control operates at a higher level than the domotic air conditioning chosen by the user). This module is based on the LMT70 sensor, which is ultra-small (0.88 by 0.88 mm) with 0.1°C accuracy; the temperature is estimated with a first-order approximation in the range of interest, from 30 to 45°C.

  • Heart rate and pulse oximetry module: This module is of crucial importance in the monitoring of cardiac and respiratory patients. It is also useful for detecting sedentarism in elderly and obese users. The module is based on the absorption of red light (660 nm) by deoxygenated hemoglobin and infrared light (940 nm) by oxygenated hemoglobin. The sensors must be placed directly on the skin, and a single circuit (based on the AFE4403) detects the relation between the absorbed light and the peaks related to the cardiac pulse. The output is a signal for SpO2 and heart rate.

  • Kinetics measurement module: This module has an inertial measurement unit (IMU), based on the MPU6050, that contains an accelerometer and a gyroscope. The information is processed to avoid singularities, and both sources are merged through a complementary filter (a brief sketch follows this list). The output is processed to provide information related to the user’s activity level, and the algorithms explained in the next section are implemented for fall detection and alarms.

  • Personal microphone module: To transmit the human voice to the central PC, a commercial wireless Wi-Fi microphone is used and connected to the control unit. This module provides the connection with the emotion processing unit.
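
The sketch below, referenced from the kinetics module above, shows a basic complementary filter that fuses an accelerometer tilt estimate with an integrated gyroscope rate. This is the general technique mentioned for the IMU; the blending coefficient, axis convention, and 25 Hz rate are illustrative assumptions rather than the module's actual parameters.

```python
import math

ALPHA = 0.98      # weight of the integrated gyroscope estimate (assumed)
DT = 1.0 / 25.0   # sampling period; 25 Hz matches the rate used in the field tests

def complementary_filter(samples, angle0=0.0):
    """samples: iterable of (ax, ay, az, gx) tuples, accelerations in g, gyro in deg/s.
    Returns the fused tilt-angle estimate (degrees) for each sample."""
    angle = angle0
    fused = []
    for ax, ay, az, gx in samples:
        # Tilt from the accelerometer alone: noisy but free of drift.
        acc_angle = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
        # Gyroscope integration (smooth but drifting), corrected by the accelerometer.
        angle = ALPHA * (angle + gx * DT) + (1.0 - ALPHA) * acc_angle
        fused.append(angle)
    return fused

# Example: a stationary, slightly tilted sensor; the estimate converges
# toward the accelerometer tilt (about 9.8 degrees here).
print(complementary_filter([(0.0, 0.17, 0.98, 0.0)] * 5))
```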

The main software should be modular to suit the sensors chosen for each application, which gives versatility to the system, with modules functioning in an interchangeable way.

4. Fall detection algorithm

Using the data provided by the IMU sensor, we also address the problem of falls in independent elderly people. In Ref. [23], the authors demonstrated that the consequences of a fall could be minimized by a portable automatic detection system that sends an alarm right after an event. A dataset (SisFall) was created with 38 participants who performed 19 activities of daily living and simulated 15 types of falls. This responded to the need for a fall detection dataset with a large number of activities (determined after a literature review and our own survey), well-documented acquisition conditions (the authors recorded videos of each activity), and data from an embedded device fixed to the body (other publicly available datasets were recorded with smartphones). The SisFall dataset consists of 34 activities [falls and activities of daily living (ADL)] performed by 38 participants (15 of them over 60 years old). All files were made publicly available to the scientific community. The dataset was tested with the most widely used features for fall detection. This work demonstrated that a simple fourth-order Butterworth filter with a cutoff frequency of 5 Hz is enough to detect falls without loss of information. Additionally, it was found that dynamic features based on statistical moments are the most accurate for classifying between falls and ADL. However, it was also found that training algorithms with young people is insufficient to obtain acceptable accuracy with the target population [24].

Preliminary tests with feature extraction algorithms commonly used in the literature to discriminate between falls and activities of daily living reached up to 96% accuracy. They were implemented with a low computational cost, threshold-based classifier that can operate in real-time embedded systems. An individual activity analysis with each feature extraction algorithm demonstrated that some of them are complementary to each other. This analysis was used as a starting point to develop nonlinear discrimination metrics that improved the accuracy up to 99%. It is important to note that most false positives are due to high-acceleration periodic activities and could be detected solely based on their period.
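
A hedged sketch of this pipeline is given below: a fourth-order, 5 Hz low-pass Butterworth filter followed by a sliding standard-deviation-magnitude feature (in the spirit of the C2 feature shown in Figure 3) compared against a threshold. The window length and threshold value are placeholders, not the tuned values from the cited work.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0          # SisFall sampling frequency in Hz
WINDOW = 0.5        # feature window in seconds (assumed)
THRESHOLD = 0.25    # decision threshold in g (placeholder, not the tuned value)

def std_magnitude_fall_detector(ax, ay, az):
    """ax, ay, az: 1-D acceleration arrays in g. Returns True if a fall is flagged."""
    b, a = butter(4, 5.0 / (FS / 2.0), btype="low")       # 4th order, 5 Hz cutoff
    ax, ay, az = (filtfilt(b, a, np.asarray(s)) for s in (ax, ay, az))
    n, step = int(WINDOW * FS), int(WINDOW * FS) // 2
    for i in range(0, len(ax) - n + 1, step):
        # Standard-deviation magnitude over the window (C2-style feature).
        c2 = np.sqrt(np.std(ax[i:i + n]) ** 2 +
                     np.std(ay[i:i + n]) ** 2 +
                     np.std(az[i:i + n]) ** 2)
        if c2 > THRESHOLD:
            return True
    return False

# Example: quiet signal with a brief high-variance burst resembling an impact.
t = np.arange(0, 4, 1 / FS)
ax = 0.02 * np.random.randn(t.size)
ay = 0.02 * np.random.randn(t.size)
az = 1 + 0.02 * np.random.randn(t.size)
az[400:440] += np.linspace(0, -2.5, 40)
print(std_magnitude_fall_detector(ax, ay, az))            # expected: True
```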

An energy-based fall detection algorithm was also implemented. The authors used a static feature together with an energy-based feature. This algorithm was tested on data from a smartphone and from the embedded device, with acceptable results [23].

Moreover, the most important finding was that the combination of different features provides higher discrimination capability than the individual ones. This result led to a second conclusion: a threshold-based classifier is enough to achieve accuracy levels of up to 99%. The importance of this final finding lies in the low complexity (and consequently low energy consumption) that threshold-based classifiers require (Figure 3).

Figure 3.

Example of acquisition and processing of IMU signals during jogging and fall detection. The raw data are initially filtered. Then, features are computed, and if the thresholds (horizontal lines) are crossed, a fall is detected. Left: threshold algorithm, where parameter C2 is the standard deviation magnitude. Right: energy-based detection (figure from Ref. [23], with permission).

It was found that most of the errors of the threshold-based algorithms were concentrated in a few individual activities, such as periodic ADL with high energy, namely walking, jogging, or going up or down stairs. Consequently, a novel methodology was developed for detecting and characterizing walking and jogging based on non-peak-based acceleration features. It was demonstrated that the kurtosis of wavelet coefficients provides a measure to correctly identify these activities. However, the authors found it more stable to obtain the period of the acceleration signal using its autocorrelation, as sketched below. A posteriori statistical analysis demonstrated that the period provides statistically significant differences between walking and jogging. This methodology proved sensitive enough to provide a “quality of the activity” measure: the authors were able to determine online the regularity of the activity when the subject walked or jogged. This result could also be useful in sports, allowing a person to maintain a regular jogging rhythm for long periods of time.
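
The sketch below illustrates the autocorrelation-based period estimate mentioned above: the dominant lag of the zero-mean acceleration magnitude is searched within a plausible range of stride durations. The lag range is an assumption; the 25 Hz sampling rate matches the field tests reported later.

```python
import numpy as np

FS = 25.0   # Hz, the sampling frequency reported for the field tests

def activity_period(acc_magnitude):
    """Return the dominant period (seconds) of a zero-mean acceleration magnitude."""
    x = np.asarray(acc_magnitude) - np.mean(acc_magnitude)
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided autocorrelation
    lo, hi = int(0.25 * FS), int(2.0 * FS)              # assumed plausible stride lags
    lag = lo + np.argmax(ac[lo:hi])                     # strongest repetition in range
    return lag / FS

# Example: a synthetic, regular 1.1 s stride superimposed on gravity.
t = np.arange(0, 10, 1 / FS)
mag = 1.0 + 0.3 * np.sin(2 * np.pi * t / 1.1)
print(round(activity_period(mag), 2))                   # close to 1.1 s
```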

To guarantee that the developed methodology can be implemented on the IMU slave sensor without affecting its computational capability (and the consequent battery consumption), the authors propose an algorithm based on a Kalman filter, with a preprocessing stage based on a fourth-order Butterworth filter, a nonlinear feature based on two commonly used feature extraction characteristics, and a threshold-based classifier. This algorithm was implemented in the module’s microcontroller and validated by simulating the same activities of the dataset acquired in this work, along with a pilot test in real conditions with elderly adults. Both tests presented an error rate below 1%.

The algorithm was tested with the wearable sensor in full-day tests with the target population (two females and one male, all over 60 years old). The volunteers were asked to do what they usually do, including traveling by train and bus, exercising, cooking, and cleaning. With a sampling frequency of 25 Hz (lower than in most works in the literature), more than 17 continuous hours of acquisition were obtained (we recorded online, which increased consumption) with encouraging results, with just a couple of false positives due to knocks on the device during cooking.

5. Home automation

The domotic SICAA software and control hardware were designed to perform some automatic tasks, such as air conditioning, basic lighting, ambient music, and alarms (related to health and emotional monitoring). Through an iconographic software application (Figure 4), it is possible to access these and other functions: house control (comprising blinds, lights, orthopedic bed, air conditioner, television, and intercom); medication alarm; carer communication (nurse call, voice synthesizer); and computer access (internet, chat, games, text processors). The software must meet several requirements, given the users’ limited experience with technology or reduced capabilities. An intuitive interface, big buttons, and vibrant colors were chosen. The software has a main panel (Figure 4) with date, time, a control menu showing the output states, and icons representing the TV control, home control, internet access, voice synthesizer, nurse call, bed control, medication reminder, and diary. The menu icons are easy to interpret and relatively large on the screen, considering vision problems or essential tremor. Each main icon gives access to another submenu or screen for specific tasks, such as ambient or home control, TV, and so on. All system configurations are accessible only by the programmer or authorized personnel, in order to prevent system failures. The software is available in Spanish, English, and Portuguese [25].

Figure 4.

Main panel of the SICAA system and submenus with functions for communication and home control. The icons, colors, and sizes were designed especially for people with little contact with technology.

A custom-made electronic board was designed to control peripheral devices, including relays, analog and digital inputs, isolation stages, and an ATmega8-16PU microcontroller running at 1 MHz. A generic IR transducer was used for TV and air conditioning control, to prevent incompatibilities with previously installed devices, but the door lock and intercom have to be adapted for electric opening.
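
The chapter does not describe the board's command protocol, so the following is a purely hypothetical sketch of how a host application such as SICAA could toggle one relay over a serial link using pyserial; the port name, baud rate, and one-byte command format are invented for illustration only.

```python
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600     # assumed connection settings

def set_relay(index, on):
    """Send one assumed command byte: bits 1-4 = relay index, bit 0 = state."""
    command = bytes([((index & 0x0F) << 1) | (1 if on else 0)])
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        link.write(command)

# Example (requires the board to be connected on the assumed port):
# set_relay(3, True)    # turn on the appliance wired to relay 3
# set_relay(3, False)
```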

The Aid Call is a luminous and audible alarm, interrupted only by hardware, assuring the presence of the caregiver. Other devices, such as the orthopedic bed and lights, only need a minimal electrical connection to be controlled by the SICAA, and further devices can be added without affecting the functioning. In the main menu, a control panel shows the activation state of the appliances. As can be seen, this is a simple and low-cost solution to enhance autonomy and improve the relationship of elderly or ill users with family or caregivers. The home care system is intended to foster more satisfied and autonomous behaviors and habits in elderly people, as well as better care.

6. Voice speech emotion recognition

As stated in the introduction, the emotional status of the patient is one of the milestones of our work. Even though the physiological signals are constantly monitored in order to detect any physiological deterioration, we have demonstrated that emotion can be recognized from voice and have developed specialized algorithms for this kind of application.

Biomedical signal processing allows the identification of certain paralinguistic parameters of a speaker, such as pathologies, nationality, and gender, among others [26–28]. Interest in emotion recognition from speech has increased in the last decade because emotion recognition can improve the quality of services and the quality of life of people. It is possible to make predictions by using a supervised learning scheme, whose final aim is determining the health status of individuals [29]. Through a set of features, it is possible to identify and characterize states of people that are as common as they are unpredictable, such as emotional states.

The health status of a person has drawn the attention of researchers from several different branches of knowledge, such as psychology, cognitive sciences, economics, bioengineering, and medicine [31, 32]. In Ref. [33], we developed an algorithm to recognize certain regions of the emotional plane, specialized in sadness, and in Ref. [34], an algorithm for Parkinson’s disease detection and monitoring was proposed. Another practical use is the monitoring of telephone interactions, e.g., in call centers [35].

Here, we present a general methodological framework, widely used in our developed algorithms, for the understanding of the reader. In particular, voiced-speech signals are initially processed using multiple techniques, e.g., filtering and the Fourier transform, which reveal important information inside the signals [36–38]. Then, relevant features should be selected in order to find patterns that meaningfully divide the data in terms of the classes of interest, e.g., healthy person/sick person. The identified features are used to train machine learning algorithms that automatically classify new data, in this case voiced-speech signals, into the previously defined classes [39]. In traditional methodologies, spectrogram time-frequency representations have been used for identifying subtle cues related to utterance acoustics, providing the means to identify the speaker and his or her emotional [40, 48] or health status [41, 42].

Several works have used conventional features such as Mel frequency cepstral coefficients, Renyi entropy, Fisher rate, local Hu moments, and vocal source parameters, among others, in order to find patterns inside voice signals [30]. Additionally, these methods use classifiers with different schemes, among them K-nearest neighbor, Gaussian mixture models, hidden Markov models, multilayer perceptrons, and support vector machines [43–45]. One of the main problems in emotion recognition from speech is finding suitable features to represent the phenomenon. In Refs. [46, 47], new features based on the energy content of wavelet-based time-frequency (TF) representations were proposed to model emotional speech. Three TF representations were considered: (1) the continuous wavelet transform, (2) the bionic wavelet transform, and (3) the synchrosqueezed wavelet transform.

A simple methodology can be stated as signal acquisition, signal preprocessing, feature extraction, and classification. It can be summarized in the following steps; an illustrative sketch follows the list:

  1. Windowing: Since the speech signal is quasi-stationary, a windowing process is necessary. A Blackman window is used, with 60% overlap and a window size of 16,384 samples, close to 1 second.

  2. Wavelet decomposition: The wavelet decomposition is implemented using the stationary wavelet transform (SWT). The SWT can be considered as a filter bank, with a low-pass and a high-pass filter, that decomposes the input signal into two groups of subsignals: approximation and detail signals [36]. Each filter bank is related to a mother wavelet; in this work Daubechies wavelets are used, particularly db1, db6, db8, and db10. Each segment taken from the signal in the windowing process is decomposed into five levels with each type of wavelet; to analyze this signal, a matrix is constructed for each type of wavelet and for each window size, with the five detail coefficients and the fifth approximation coefficient of the decomposed signal.

  3. Statistical features: Each subsignal obtained by the SWT is analyzed using 12 features, some with a temporal approach (root mean square, absolute mean value, mean, variance, standard deviation, wavelength, standard wavelength, and kurtosis) and others with a spectral approach (zero crossings, mean frequency, median frequency, and maximum power spectral value).

  4. Feature selection: In Ref. [49], a new set of features based on nonlinear dynamics measures obtained from the wavelet packet transform was proposed for the automatic recognition of “fear-type” emotions in speech. The experiments were carried out using three different databases with a Gaussian mixture model for classification. The results indicate that the proposed approach is promising for modeling “fear-type” emotions in speech. On the other hand, classification of sets with a high dimensionality of descriptive features is computationally demanding [50]. Subsets of features may be associated with noise or with information unrelated to the health status of the speaker. Feature selection is implemented with a recursive elimination method that iteratively eliminates the feature that contributes least to the discrimination between the two states, using a classifier to assign weights. This process is carried out until the target number of features is obtained.

  5. Classification: A simple multilayer perceptron (MLP) can be trained on the new set of features obtained by the feature selection method. The final MLP structure, called patternnet, must be configured as follows: the output layer has as many neurons as the states to be determined, and preferably only one hidden layer should be considered to avoid gradient backpropagation problems. The hidden layer can be tested with multiple numbers of neurons, e.g., 1, 2, 5, 10, 50, and 100, to find the best configuration fulfilling the conditions of the problem. The suggested transfer function is the hyperbolic tangent sigmoid, trained using Bayesian regularization [9]. Other classifiers such as Gaussian mixture models (GMM) have also been used recently [43]: the classification is performed using GMM supervectors. Different classification problems are addressed, including high vs. low arousal, positive vs. negative valence, and multiple emotions. The results indicate that the proposed features are useful to classify high vs. low arousal emotions and that the features derived from the synchrosqueezed wavelet transform are more suitable than the other two approaches to model emotional speech.
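
The following sketch, assuming Python with NumPy, PyWavelets, SciPy, and scikit-learn, illustrates steps 1–3 (Blackman windowing, SWT decomposition with one Daubechies wavelet, and a subset of the statistical features) and then steps 4–5 on placeholder data. Recursive feature elimination with a linear SVM and scikit-learn's MLPClassifier (L2 regularized) stand in for the original selection scheme and MATLAB's patternnet with Bayesian regularization; the data and several parameters are illustrative assumptions only.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

WIN = 16384                     # window size from step 1 (about 1 s of audio)
HOP = int(WIN * 0.4)            # 60% overlap between consecutive windows

def windows(signal):
    """Step 1: Blackman-windowed frames with 60% overlap."""
    w = np.blackman(WIN)
    for start in range(0, len(signal) - WIN + 1, HOP):
        yield signal[start:start + WIN] * w

def frame_features(frame, wavelet="db6", level=5):
    """Steps 2-3: SWT subsignals and a subset of the 12 statistical features."""
    coeffs = pywt.swt(frame, wavelet, level=level)        # [(cA5, cD5), ..., (cA1, cD1)]
    subsignals = [coeffs[0][0]] + [d for _, d in coeffs]  # 5th approximation + 5 details
    feats = []
    for s in subsignals:
        feats += [np.sqrt(np.mean(s ** 2)),               # root mean square
                  np.mean(np.abs(s)),                     # absolute mean value
                  np.var(s), np.std(s), kurtosis(s),
                  np.mean(np.diff(np.sign(s)) != 0)]      # zero-crossing rate
    return feats

# Feature matrix from a placeholder "speech" signal (6 subsignals x 6 features each).
rng = np.random.default_rng(0)
demo = np.array([frame_features(f) for f in windows(rng.standard_normal(3 * WIN))])
print(demo.shape)

# Steps 4-5 on placeholder data: recursive feature elimination with a linear SVM,
# then an MLP with a single tanh hidden layer.
X = rng.standard_normal((40, demo.shape[1]))              # placeholder features
y = rng.integers(0, 2, 40)                                # two emotional states
model = make_pipeline(
    RFE(LinearSVC(dual=False), n_features_to_select=10),
    MLPClassifier(hidden_layer_sizes=(10,), activation="tanh", max_iter=2000),
)
model.fit(X, y)
print(model.predict(X[:3]))
```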

In our approach, emotion recognition is focused on detecting sadness, joy, and fear, because they are the most critical emotional situations for the system’s users. The choice of algorithm is based on computational cost, since the data must be classified and stored to construct an emotional pattern of behavior. Over the days, family or medical staff can study this pattern to rethink visits or treatment support for the patient. In addition, sadness and fear are connected to the SICAA block, activating the nurse call when fear is detected, and selected music and a family alarm when sadness is detected. This simple connection provides a sensation of safety for patients and also for the family in daily living.
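
As a minimal illustration of this coupling, the sketch below maps recognized emotion labels to the SICAA actions described above (fear triggers the nurse call; sadness triggers selected music and a family alert) and logs every label to build the behavior pattern. The function names and the CSV log are placeholders, not the actual SICAA interface.

```python
from datetime import datetime

def log_emotion(emotion, path="emotion_log.csv"):
    """Append the recognized label with a timestamp to build the behavior pattern."""
    with open(path, "a") as log:
        log.write(f"{datetime.now().isoformat()},{emotion}\n")

def dispatch_emotion(emotion):
    """Map a recognized emotion to the SICAA actions described in the text."""
    actions = {
        "fear": ["nurse_call"],
        "sadness": ["play_selected_music", "family_alert"],
    }
    for action in actions.get(emotion, []):
        print(f"SICAA action requested: {action}")   # placeholder for the real interface
    log_emotion(emotion)

dispatch_emotion("sadness")
```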

On the other hand, the use of portable devices for the assessment of Parkinson’s disease (PD) patients at home from voice analysis is feasible from the technical point of view; however, it could be relatively expensive either for the patients or for the health system. Information and communication technologies (ICT) make it possible to consider telemonitoring of PD patients using different communication tools already existing on the internet. Several aspects of such new technologies and tools have to be studied to analyze the feasibility of using them in real scenarios. For instance, there exist different communication systems that can be used for the remote evaluation of speech, e.g., the mobile communications network, the internet, and the landline, among others. All of these technologies compress the audio signals in order to transmit them through the communication channel. The compression rates depend on the technology and on the bandwidth available in the network.

In Ref. [34], a method was developed to discriminate between speakers with Parkinson’s disease (PD) and healthy controls (HC). It consists of the systematic segmentation of voiced and unvoiced speech frames, with each kind of frame characterized independently: for voiced segments, noise, perturbation, and cepstral features are considered, while unvoiced segments are characterized with Bark band energies and cepstral features. According to the results, the codecs evaluated do not significantly affect the accuracy of the system, indicating that this methodology could be used for the telemonitoring of PD patients through the internet or through the mobile communications network.
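
A hedged sketch of the kind of processing involved is shown below: frames are split into voiced and unvoiced groups with a simple energy/zero-crossing-rate heuristic (an approximation of the systematic segmentation used in Ref. [34]), and MFCCs are computed per group as a stand-in for the full noise, perturbation, and Bark-band feature sets; librosa is assumed and the file path is hypothetical.

```python
import numpy as np
import librosa

def voiced_unvoiced_mfcc(path, frame=1024, hop=512):
    """Split frames into voiced/unvoiced groups and return MFCCs for each group."""
    y, sr = librosa.load(path, sr=16000)
    rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=frame, hop_length=hop)
    n = min(rms.size, zcr.size, mfcc.shape[1])
    # Heuristic label: voiced frames show high energy and a low zero-crossing rate.
    voiced = (rms[:n] > np.median(rms[:n])) & (zcr[:n] < np.median(zcr[:n]))
    return mfcc[:, :n][:, voiced], mfcc[:, :n][:, ~voiced]

# Example (assumes a local recording at this hypothetical path):
# voiced_feats, unvoiced_feats = voiced_unvoiced_mfcc("patient_reading.wav")
```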

7. Conclusions

We have described the use of new technologies applied to health monitoring and assistance for elderly and chronically ill people, focused on home care and independence for a better quality of life. Aging does not have to be a social limitation or a cause of hospitalization. In recent years, technology has brought major advances in elderly care, but access is still limited. In Latin America, most health systems finance home hospitalization, but only in specific cases and not for long periods. Our proposal follows exactly the opposite paradigm, encouraging the use of technologies in order to extend home care possibilities and family well-being as much as possible. The main challenge in this area is the correct identification of users’ needs, because acceptance of the technological aid depends completely on it.

The proposed system has three main parts. The first, the wearable block, meets the requirements of performance, autonomy, size, and low cost; its electronic parts are available in any country and are in constant evolution.

The acquisition of real falls demonstrated that an algorithm tuned with young adults does not perform well on falls of elderly people. So, even though the proposed methodology seemed to solve this issue, it is necessary to increase the number of falls from elderly people to have a representative sample. However, given the impossibility of performing simulated falls with elderly people (the risk of accident is too high), and with an average of one fall per year, it is unrealistic to expect to acquire reliable data on real falls. This therefore remains an open issue that must be solved in the near future.

Regarding emotion recognition, health status, and similar algorithms based on voiced speech, we demonstrated that wavelet-based algorithms with a proper feature selection procedure can easily be trained and implemented to be merged with other devices within this integrated framework. We can also state that algorithm performance can increase under the data fusion schemes that have been widely developed and implemented so far. Data fusion schemes will also include the physiological signals from the patient, such as temperature, heart rate, and pulse oximetry. Alerts are included in the transmission in order to ask for help in case of accidents or a “non-normal state.”

The second part, home automation, enables limited users to control their household appliances. This was achieved by using low-cost control interfaces and a customized board that interacts with the central control block. The target user population of the system has shown great adaptability, and the system shows potential for increasing user independence.

The third block, the central control, allows the integration of physiological and emotional measures, constructing a behavior pattern for the user and detecting risk or emergency situations. Alarms can be activated by predefined levels of biological variables, by emotional risk (detected fear or sadness), or by the user himself.

A further test with a control group and an experimental group is needed in order to quantify the satisfaction and acceptance of this system by the users. Tests are also needed in a controlled environment with emotion inducement or with physiological and environmental disturbances to validate the development and obtain data for a multimodal analysis of the signals. However, building a behavioral pattern must be customized; this requires a previous collection of data and a dynamic model of adjustment and learning for each patient.

As we have mentioned, this is a special group, and several considerations are necessary to definitively determine the usability of our approach. It is also important to note that these technologies can help to improve communication, independence, and self-esteem, but they cannot replace the role of caregivers and medical staff.

Acknowledgments

Authors acknowledge their colleagues and students for the collaboration and support during these years of work.

Natalia Lopez wants to thank Max Valentinuzzi, Paola Bustamante and Elisa Pérez.

Sergio Ponce wants to acknowledge David Piccinini, Nicolás Andino, Sofía Avetta, Alexis Sparapani, Martín Roberti, and Camilo García for the incomparable effort to deal with electronics.

Lucia Quintero deeply recognizes the contributions of Damian Campo, Manuela Bastidas, Alejandro Uribe and Alejandro Gómez for following carefully the insights for research.

Francisco Vargas wishes to thank his colleagues Rafael Orozco and José David López, and his students Angela Sucerquia, Nicanor García, and Juan Camilo Vásquez.

References

  1. 1. World Health Organization. World Report on Ageing and Health. World Health Organization. Geneva, Switzerland; 2015. ISBN: 9789241565042
  2. 2. Tang YP, Wang W, Fu YZ. Elder health status monitoring through analysis of activity. Chinese Journal of Computer Engineering and Applications. 2006;43(3):211-213
  3. 3. He W, Goodkind D, Kowal P. U.S. Census Bureau, International Population Reports, P95/16-1, An Aging World: 2015. Washington, DC: U.S. Government Publishing Office; 2016
  4. 4. Pan American Health Organization. Innovative Care for Chronic Conditions: Organizing and Delivering High Quality Care for Chronic Noncommunicable Diseases in the Americas. Washington, DC: PAHO, 2013
  5. 5. Najafi B, Aminian K, Paraschiv-Ionescu A, Loew F, Bula CJ, Robert P. Ambulatory system for human motion analysis using a kinematic sensor: Monitoring of daily physical activity in the elderly. IEEE Transactions on Biomedical Engineering. 2003;50(6):711-723
  6. 6. Xiang Y, Tang Y-P, Ma B-Q, Yan H-C, Jiang J, Tian X-Y. Remote safety monitoring for elderly persons based on Omni-Vision analysis. PLoS One. 2015;10(5):e0124068. DOI: 10.1371/journal. pone.0124068
  7. 7. Morris ME, Adair B, Miller K, Ozanne E, Hansen R, et al. Smart-Home technologies to assist older people to live well at home. Aging Science. 2013;1:101. DOI: 10.4172/jasc.1000101
  8. 8. Stikic M, Huynh T, Van Laerhoven K, Schiele B. ADL recognition based on the combination of RFID and accelerometer sensing. In: Pervasive Computing Technologies for Healthcare, 2008. Second International Conference on Pervasive Health 2008; 30 Jan–1 Feb 2008
  9. 9. Lv Z, Xia F, Wu G, Yao L, Chen Z. iCare: A mobile health monitoring system for the elderly. In: IEEE Conferences: 2010 IEEE/ACM International Conference on Green Computing and Communications (GreenCom) & International Conference on Cyber, Physical and Social Computing (CPSCom). Hangzhou, China, October 30–November 1, 2010
  10. 10. Ghazal B, Al-Khatib K. Smart home automation system for elderly, and handicapped people using XBee. International Journal of Smart Home. April 2015;9(4):203-210. DOI: 10.14257/ijsh.2015.9.4.21
  11. 11. Akanbi CO, Oladeji DO. Design of a voice based intelligent prototype model for automatic control of multiple home appliances. Transactions on Machine Learning and Artificial Intelligence. 2016;4(2):23-30
  12. 12. López NM, Orosco EC, Perez E, Bajinay S, Zanetti R, Valentinuzzi ME. Hybrid Human-Machine interface to mouse control for severely disabled people. International Journal of Engineering and Innovative Technology. 2015;4(11):164-171. ISSN 2277-3754
  13. 13. Perez E, López N, Orosco E, Soria C, Mut V, Freire-Bastos T. Robust human machine interface based on head movements applied to assistive robotics. The Scientific World Journal. 2013;2013:11 pages. DOI: 10.1155/2013/589636. Article ID 589636
  14. 14. Perez E, Soria C, López NM, Nasisi O, Mut V. Vision Based Interface applied to assistive robots. International Journal of Advanced Robotic Systems: Computer Vision and Ambient Intelligence. 2013;10(116):1-10. DOI: 10.5772/53996. Print ISSN 1729-8806; Online ISSN 1729-8814
  15. 15. Orosco EC, López NM, di Sciascio F. Bispectrum-based features classification for myoelectric control. Biomedical Signal Processing and Control. 2013;8(2):153-168. ISSN: 1746-8094. Available from: http://dx.doi.org/10.1016/j.bspc.2012.08.008
  16. 16. López NM, di Sciascio F, Soria CM, Valentinuzzi ME. Robust EMG sensing system based on data fusion for myoelectric control of a robotic arm. BioMedical Engineering OnLine. 2009;8(5). DOI: 10.1186/1475-925X-8-5, ISSN 1475-925X.
  17. 17. Teng X-F, Zhang Y-T, Poon CCY, Bonato P. Wearable medical systems for p-Health. IEEE Reviews in Biomedical Engineering. 2008;1:62-74
  18. 18. Bonato P. Wearable sensors and systems. From enabling technology to clinical applications. IEEE Engineering in Medicine and Biology Magazine. 2010;29:25-36
  19. 19. Anzaldo D. Wearable sport technology – Market landscape and compute SoC trends. In: IEEE Conference Publications: International SoC Design Conference. 2-5 Nov. Gyungju, South Korea. 2015. pp. 217-218. DOI: 10.1109/ISOCC.2015.7401796
  20. 20. Chen B, Patel S, Buckley T, Rednic R, McClure D, Shih L, Tarsy D, Welsh M, Bonato P. A web-based system for home monitoring of patients with Parkinson’s disease using wearable sensors. IEEE Transactions on Biomedical Engineering. 2011;58(3):831-836
  21. 21. Asada HH, Shaltis P, Reisner A, Rhee S, Hutchinson RC. Mobile monitoring with wearable photoplethysmographic biosensors. IEEE Engineering in Medicine and Biology Magazine. 2003;22:28-40
  22. 22. Piccinini D, Andino N, Ponce S, Roberti M, López N. Wearable system for acquisition and monitoring of biological signals. Journal of Physics: Conference Series. 2016;705:012009. ISSN: 1742-6596. Available from: http://iopscience.iop.org/1742-6596/705/1/012009
  23. 23. Vega LAS. Methodology for Detecting Movements of Interest in Elderly People. Colombia: Universidad de Antioquia UDEA Facultad de Ingeniería Medellín; 2016
  24. 24. Sucerquia A, López JD, Vargas-Bonilla JF. SisFall: A fall and movement dataset. Sensors. 2017;17(1):198. DOI: 10.3390/s17010198
  25. 25. López N, Piccinini D, Ponce S, Perez E, Roberti M. From hospital to home care. Creating a domotic environment for elderly and disabled people. IEEE Pulse, IEEE. 2016;7(3):38-41. DOI: 10.1109/MPUL.2016.2539105. ISSN 2154-2287
  26. 26. Kumari M, Ali I. A new gender detection algorithm considering the Non-stationarity of speech signal. In: International Conference on Communication, Control and Intelligent Systems CCIS. IEEE Conferences: 18-20 Nov. 2016 Mathura, India Vol. 2. 2016.
  27. 27. Martinez C, Rufiner H. Acoustic analysis of speech for detection of laryngeal pathologies. In: Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Cat. No. 00CH37143). Vol. 3. 2000. pp. 2369-2372 Chicago, IL, USA.
  28. 28. Bouselmi G, Fohr D, Illina I, Haton JP. Discriminative phoneme sequences extraction for non-native speaker’s origin classification. International Symposium on Signal Processing and Its Applications, Vol. 9, 2007; 12-15 Feb 2007; Sharjah, United Arab Emirates: IEEE; 2007
  29. 29. Orozco-Arroyave JR, Belalcazar-Bolaños EA, Arias-Londoño JD, Vargas-Bonilla JF, Skodda S, Rusz J, Daqrouq K, Honig F, Noth E. Characterization methods for the detection of multiple voice disorders: Neurological, functional, and laryngeal diseases. IEEE Journal of Biomedical and Health Informatics. 2015;19(6):1820-1828
  30. 30. Sierra-Sosa D, Bastidas M, Ortiz PD, Quintero O. Double fourier analysis for emotion identification in voiced speech. Journal of Physics: Conference Series. Oct 2016;705:012035
  31. 31. Cipriano PF, Bowles K, Dailey M, Dykes P, Lamb G, Naylor M. The importance of health information technology in care coordination and transitional care. Nursing Outlook. 2013;61(6):475-489. Available from: http://dx.doi.org/10.1016/j.outlook.2013.10.005
  32. 32. Shin D-H, Biocca F. Health experience model of personal informatics: The case of a quantified self. Computers in Human Behavior. 2017;69:62-74. Available from: http://linkinghub.elsevier.com/retrieve/pii/S0747563216308391
  33. 33. Bustamante PA, Lopez Celani NM, Perez ME, Montoya OLQ. Recognition and regionalization of emotions in the arousal-valence plane. In: Conference of the Proceeding IEEE Engineering in Medicine and Biology Society. 2015:6042-6045
  34. 34. Orozco-Arroyave JR, García N, Vargas-Bonilla JF, Nöth E. Automatic detection of Parkinson’s disease from compressed speech recordings. In: Král P, Matoušek V, editors. Text, Speech, and Dialogue. Vol. 9302. Lecture Notes in Computer Science. Cham: Springer. ISSN: 0302-9743 DOI: 10.1007/978-3-319-24033-6_10
  35. 35. Vásquez-Correa JC, Orozco-Arroyave JR, Arias-Londoño JD, Vargas-Bonilla JF, Nöth E. Non-linear Dynamics Characterization from Wavelet Packet Transform for Automatic Recognition of Emotional Speech. Smart Innovation, Systems and Technologies. Springer Publishing Company, Inc.; 2016. Berlin, Germany pp. 199-207. ISBN: 2190-3018
  36. 36. Campo D, Quintero OL, Bastidas M. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech. Journal of Physics: Conference Series. 2016;705(1):12034. Available from: http://stacks.iop.org/1742-6596/705/i=1/a=012034
  37. 37. Chen L, Mao X, Xue Y, Cheng LL. Speech emotion recognition: Features and classification models. Digital Signal Processing: A Review Journal. 2012;22(6):1154-1160. Available from: http://dx.doi.org/10.1016/j.dsp.2012.05.007
  38. 38. Sun Y, Wen G, Wang J. Weighted spectral features based on local Hu moments for speech emotion recognition. Biomedical Signal Processing and Control. 2015;18:80-90. Available from: http://dx.doi.org/10.1016/j.bspc.2014.10.008
  39. 39. He L, Lech M, Maddage NC, Allen NB. Study of empirical mode decomposition and spectral analysis for stress and emotion classification in natural speech. Biomedical Signal Processing and Control. 2011;6(2):139-146. Available from: http://dx.doi.org/10.1016/j.bspc.2010.11.001
  40. 40. Vayrynen E, Toivanen J, Seppanen T. Classification of emotion in spoken Finnish using vowel-length segments: Increasing reliability with a fusion technique. Speech Communication. 2011;53(3):269-282
  41. 41. Dang A, Likhar N, Alok U. Importance of economic evaluation in health care: An indian perspective. Value in Health Regional Issues. 2016;9(6):78-83. Available from: http://dx.doi.org/10.1016/j.vhri.2015.11.005
  42. 42. Shepley MM, Watson A, Pitts F, Garrity A, Spelman E, Fronsman A, Kelkar J. Mental and behavioral health settings: Importance; effectiveness of environmental qualities; features as perceived by staff. Journal of Environmental Psychology. 2017;50:37-50. Available from: http://linkinghub.elsevier.com/retrieve/pii/S0272494417300051
  43. 43. Lanjewar RB, Mathurkar S, Patel N. Implementation and comparison of speech emotion recognition system using gaussian mixture model (GMM) and K-Nearest neighbor (K-NN) techniques. Procedia Computer Science. 2015;49:50-57
  44. 44. Albornoz EM, Milone DH, Rufiner HL. Spoken emotion recognition using hierarchical classifiers. Computer Speech & Language. 2011;25(3):556-570
  45. 45. Truong KP, Van Leeuwen DA, De Jong FMG. Speech-based recognition of self-reported and observed emotion in a dimensional space. Speech Communication. 2012;54(9):1049-1063. Available from: http://dx.doi.org/10.1016/j.specom.2012.04.006
  46. 46. Gomez A, Quintero L, Lopez M, Castro J, Villa L. Emotion recognition in single-channel EEG signals using Stationary Wavelet Transform. IFMBE Proceedings. 2016
  47. 47. Gomez A, Quintero L, Lopez M, Castro J, Villa L. An approach to emotion recognition in single-channel EEG signals using Discrete Wavelet Transform. In: The 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Vol. 6. Florida, USA; August 17-20, 2016
  48. 48. Gomez A, Quintero L, Lopez N, Castro J. An approach to emotion recognition in single-channel EEG signals: A mother child interaction. Journal of Physics: Conference Series. 2016;705(1):12051. Available from: http://stacks.iop.org/1742-6596/705/i=1/a=012051.
  49. 49. Vasquez-Correa JC, Arias-Vergara T, Orozco-Arroyave JR, Vargas-Bonilla JF. Evaluation of wavelet measures on automatic detection of emotion in noisy and telephony speech signals. In: 48th IEEE International Carnahan Conference on Security Technology (ICCST)
  50. 50. Vasquez-Correa JC, Arias-Vergara T, Orozco-Arroyave JR, Vargas-Bonilla JF, Noeth E. Wavelet-based time-frequency representations for automatic recognition of emotions from speech. Itg-Fachbericht. 2016;267:235-239. ISSN: 0932-6022 ed: fasc
