Open access

Technology, Science, and Culture: A Global Vision

Written By

Sergio Picazo-Vela and Luis Ricardo Hernández

Submitted: 10 December 2018 Published: 20 February 2019

DOI: 10.5772/intechopen.83691

From the Proceedings

Technology, Science and Culture - A Global Vision

Edited by Sergio Picazo-Vela and Luis Ricardo Hernández


Proceedings

Universidad de las Américas Puebla

November 6, 2018

Proceedings of the Workshop: Technology, Science, and Culture: A Global Vision 2018

Editors

Sergio Picazo-Vela

Luis Ricardo Hernández

Knowledge Area Co-editors

Ileana Azor Hernández

Nelly Ramírez Corona

Roberto Rosas Romero

Erwin Josuan Pérez Cortés

Andrés Alfonso Peña Olarte

The aim of the Workshop: Technology, Science, and Culture: A Global Vision is to create a discussion forum on research related to the fields of Water Science, Food Science, Intelligent Systems, Molecular Biomedicine, and Creation and Theories of Culture. The workshop is intended to discuss research on current problems, relevant methodologies, and future research streams and to create an environment for the exchange of ideas and collaboration among participants.

This first edition of the workshop was held on November 6, 2018, at Universidad de las Américas Puebla. The program included four keynote lectures and a poster session with nine posters presenting selected research by doctoral students. At the end of the workshop, a best poster award was presented.

Keynote speakers are researchers with recognized trajectories, who have published in leading academic and scientific journals. In this edition, the invited speakers were: Dr. Horacio Bach, Dr. Andreas Linninger, Dr. Miguel Ángel Rico-Ramírez, and Dr. Theodore Gerard Lynn.

During the keynote sessions, Dr. Horacio Bach discussed the problem of multidrug-resistant bacteria and the lack of R&D on new antibiotics in pharmaceutical companies. Dr. Andreas Linninger focused on mathematical modeling: he proposed a definition, explained applications in chemistry and biochemistry, and emphasized the benefits of viewing math as a language for scientific inquiry and math education. Dr. Rico-Ramírez explained the importance of measuring and forecasting precipitation and discussed the latest advances in precipitation measurement and forecasting with weather radar. Finally, Dr. Theodore Lynn explained the importance of Intelligent Systems in the Internet of Everything, describing the building blocks of Intelligent Systems and research opportunities.

In this first edition of the workshop, the best poster award went to Omar López Rincón, a doctoral student in Intelligent Systems, who presented his work “A 3D Spatial Visualization of Measures in Music Compositions.”

The number and impact of water-related natural disasters have increased since the middle of the last century. As a result of increased climate variability and the effects of global warming, hydrometeorological risk has grown and spread, while the resilience of societies is, in many cases, inadequate. Floods and droughts, particularly in a changing climate, require greater understanding to generate better forecasts and to manage these phenomena properly. Mexico, like other countries around the world, and of course in the Latin America and Caribbean region, suffers from both weather extremes, so the study of these phenomena is important in the Mexican context.

The UNESCO Chair on Hydrometeorological Risks, hosted at the Universidad de las Américas Puebla, is devoted to the analysis, measurement, modeling, and management of extreme hydrometeorological events in the context of a more urbanized world, climate change, and vulnerable regions. The Chair focuses on the development of basic and applied research for the design of adaptation and mitigation measures, as well as on the dissemination of information and the training of decision makers and the public. In its activities, the Chair maintains a gender focus, aiming to reduce the vulnerability of women to hydrometeorological disasters.

The Chair acts in the following fields:

  1. Hydrometeorological risks and climate change.

  2. Modeling and forecasting of hydrometeorological risks.

  3. Integrated management of hydrometeorological risks.

  4. Gender and hydrometeorological risks.

A detailed description of the UNESCO Chair on Hydrometeorological Risks, its members, and its publications is available on its website: https://www.udlap.mx/catedraunesco/

The Chair publishes a quarterly newsletter, in Spanish and English, which can be consulted at https://www.udlap.mx/catedraunesco/newsletters.aspx


Contents

A New Era without Antibiotics

Horacio Bach

Mathematical Modeling: The Art of Translating between Minds and Machines and How to Teach It

Andreas A. Linninger

Advances in the Measurement and Forecasting of Precipitation with Weather Radar for Flood Risk Management

Miguel A. Rico-Ramirez

Toward the Intelligent Internet of Everything: Observations on Multidisciplinary Challenges in Intelligent Systems Research

Theo Lynn, Pierangelo Rosati and Patricia Takado Endo

Hydrological Modeling in the Rio Conchos Basin Using Satellite Information

Paul Hernández-Romero, Carlos Patiño-Gómez, Benito Corona-Vázquez and Polioptro Martínez-Austria

Modeling of the Controlled Release of Essential Oils Encapsulated by Emulsification

Mónica Dávila-Rodríguez, Aurelio López-Malo, Nelly Ramírez-Corona and María Teresa Jiménez-Munguía

Extraction, Composition, and Antibacterial Effect of Allspice (Pimenta dioica) Essential Oil Applied in Vapor Phase

Ana Cecilia Lorenzo-Leal, Enrique Palou and Aurelio López-Malo

Stability of the Antimicrobial Activity of Lactobacillus plantarum NRRL B-4496 Supernatants during Storage against Staphylococcus aureus ATCC 29413

Daniela Arrioja Bretón, Emma Mani-López and Aurelio López-Malo

Music as a Medium of Encounter of Otherness in Animated Cinema

Luis Daniel Martínez Álvarez

Revolutionary Veganism

Victor Fonseca López

Aerodynamic Coefficient Calculation of a Sphere Using Incompressible Computational Fluid Dynamics Method

Carlos Duran-Hernandez, Rene Ledesma-Alonso, Gibran Etcheverry and Rogelio Perez-Santiago

Comparison of Dispersion Measures for a Territory Design Problem

María Gabriela Sandoval Esquivel, Roger Z. Ríos-Mercado and Juan Díaz

A 3D Spatial Visualization of Measures in Music Compositions

Omar Lopez-Rincon and Oleg Starostenko


A New Era without Antibiotics

Horacio Bach

Abstract

The appearance of multidrug-resistant bacteria is challenging the research community to find new antimicrobial agents. The problem is exacerbated by the lack of new antibiotics and by the uncontrolled use of antibiotics in human health and animal husbandry. All these factors have contributed to the development of more resistant pathogenic bacteria, which is alarming health systems. In this chapter, the problems related to the lack of R&D on new antibiotics in pharmaceutical companies, as well as the misuse of antibiotics, are discussed. In addition, new avenues of research in the development of new antimicrobial entities are examined.

Keywords: antibiotics, multidrug-resistant bacteria, antibiotic misuse, bacteriocins, bacteriophages, antibacterial peptides, nanoparticles

1. Brief overview of bacteria

Bacteria are ubiquitous unicellular organisms able to adapt to environmental changes very quickly. The doubling time of bacterial cells varies between 20 and 60 min.

To visualize bacteria, a microscope is required. However, because of their transparency, they are difficult to see unless a stain is used. In 1884, the Danish bacteriologist Hans C. Gram published a technique by which bacterial cells can be divided into two groups according to their color after staining [1]. Based on the stain they retain, bacteria are classified as Gram-positive (purple) or Gram-negative (pink). This separation reflects the ability of Gram-positive bacteria to retain the dye crystal violet, which depends on their cell wall composition (Figure 1).

Figure 1.

Cell wall comparison according to Gram staining [2].

Not all bacteria fall into these two groups. For instance, mycobacterial species do not respond to Gram staining because their lipid-rich cell wall resists the stains. For these species, the Ziehl-Neelsen (acid-fast) stain was developed, which renders them bright red. The classification of bacteria into the Gram-positive and Gram-negative groups is important for understanding the activity of antibiotics, which will be described later. In this regard, antibiotics can specifically target Gram-positive bacteria, Gram-negative bacteria, or both; in the latter case, they are defined as broad-spectrum antibiotics.

2. Brief overview of viruses

Viruses are ubiquitous infective agents composed of genetic material generally protected by a proteinaceous coat. They can be visualized only with an electron microscope. Viruses are obligate parasites that require a live cell to multiply. They cannot proliferate outside a cell because they depend on the host machinery to replicate their genetic material and to produce their proteins.

Generally, viruses infect by introducing their genetic material into the host cell. The viral genetic material then hijacks the host systems, and the host begins to produce the viral proteins as well as the viral genetic material. At the end of the process, the viruses either remain inside the host cell or rupture it and disseminate.

To use the host machinery, the viral genetic material codes for a few specific proteins able to interact with host proteins. Although only a small number of viral proteins are produced by the host, they have a high affinity for the host proteins. This is why viruses are highly specific to their hosts and only rarely infect different species.

3. Antibiotics

Antibiotics are molecules able to inhibit the growth of bacteria. In nature, antibiotics are produced as secondary metabolites by specific groups of bacteria and fungi. Secondary metabolites are compounds not involved in essential metabolic reactions of the cell; if the genes responsible for their production are deleted from the bacterial DNA, the bacteria can still proliferate. Instead, antibiotics appear to be produced to compete for nutritional sources by inhibiting or stopping the development of other bacterial competitors.

Penicillin, the first antibiotic, was discovered in 1928 by Alexander Fleming and began to be used to combat infections in 1942. Since then, new antibiotics have been approved, but the rate of approval has decreased over the last decades. The reasons for this decline are discussed below.

When discussing the development of new antibiotics, the bacterial target should be taken into consideration. Many metabolic pathways and enzymes in bacteria are highly conserved across living organisms. Such pathways and enzymes are not useful as targets because an antibiotic directed at them would inflict similar damage on human cells. Thus, antibiotics should be directed at bacterial targets (e.g., a protein or a biosynthetic pathway) that have no counterpart in humans. Examples of antibiotics targeting bacteria and mechanisms of resistance are depicted in Figure 2.

Figure 2.

Mechanisms of bacterial resistance to selected antibiotics. (A) Antibiotic mechanisms. (B) Mechanisms of bacterial resistance [2].

3.1 Bacterial variation and development of resistance

Bacteria multiply by binary fission, which means that the parental cell divides into two daughter cells. Each daughter is a clone, a genetically identical offspring generated by vegetative multiplication. As mentioned before, bacteria multiply exponentially with a generation time of 20–60 min, depending on the species. Thus, even if a bacterial culture originates from a single cell, prolonged growth will generate variation as a result of spontaneous mutations. If we calculate the number of mutations (at a rate of 10⁻¹⁰ mutations per nucleotide base per division) in the genome of the bacterium Staphylococcus aureus, which contains 2.8 million nucleotide base pairs, an astonishing number of roughly 300 mutations will be produced in the population within a period of 10 h [3]. By comparison, the human genome accumulates approximately 60 mutations over a period of 20–25 years [4].
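
The short sketch below is a back-of-the-envelope check of this estimate (my own illustration; the mutation rate and genome size are the figures quoted above, while the 30-min doubling time and single starting cell are assumptions within the stated range):

```python
# Rough check of the ~300 mutations quoted from [3].
# Assumptions (not from the source): growth starts from a single cell,
# a 30-min generation time, and one full genome replication per division.

mutation_rate = 1e-10        # mutations per base pair per division
genome_size = 2.8e6          # base pairs in the S. aureus genome
generation_time_h = 0.5      # assumed doubling time (within the 20-60 min range)
growth_period_h = 10         # duration considered in the text

generations = int(growth_period_h / generation_time_h)   # 20 generations
# Each division produces one new cell, so starting from one cell the total
# number of divisions after n generations is 2**n - 1.
total_divisions = 2**generations - 1

expected_mutations = total_divisions * genome_size * mutation_rate
print(f"~{expected_mutations:.0f} mutations in the population after {growth_period_h} h")
# -> roughly 300, in line with the figure quoted in the text
```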

The term drug resistance refers to acquired changes in the bacterial genome directed against an antibiotic. These genomic changes persist even when the drug is removed from the environment and are inherited by the descendants of the bacterial clone. Such changes can be driven either by a change in the sequence of a protein (the target of the antibiotic) or by the acquisition of a foreign piece of DNA that brings new genetic material into the cell.

Treating bacteria with antibiotics produces a selection pressure on the bacterial cells. Based on the information provided above, it is reasonable to expect that spontaneous mutations will appear that confer on a specific cell an advantage over the rest of the population. Such a resistant strain can multiply in the presence of the antibiotic because it has developed an adaptive mechanism to cope with the antibiotic's killing activity.

This problem is aggravated when bacteria develop resistance to several antibiotics, and different terms are used depending on the extent of resistance. Multidrug-resistant (MDR) bacteria are resistant to at least one antibiotic in three or more antimicrobial categories; extensively drug-resistant (XDR) bacteria are resistant to at least one antibiotic in all but two or fewer antimicrobial categories; and pandrug-resistant (PDR) bacteria are resistant to all antimicrobial categories [5].

To acquire resistance to an antibiotic, bacteria must develop a mechanism to neutralize it. Bacteria have developed different mechanisms to cope with the presence of antibiotics, which can be generalized as follows: (1) destruction of the antibiotic (enzymatic alteration of the antibiotic molecule by phosphorylation, adenylation, or acetylation), (2) changes in the antibiotic target (mutational alterations in the sequence of the protein targeted by the antibiotic), and (3) reduced permeability to the antibiotic (efflux pumps that expel the internalized antibiotic) [6].

Once a single bacterial cell acquires a mutation that provides an advantage for survival in the presence of the antibiotic, the genetic material conferring this resistance can be transferred to other bacterial cells by small autonomous pieces of DNA, termed plasmids, that are not integrated into the bacterial genome and exist as independent entities. These autonomous pieces are multiplied by the bacteria and transferred to the progeny (vertical transfer) or to other species (horizontal transfer) during a process called conjugation [7]. Moreover, bacteria continually exchange plasmids, and these pieces of DNA may contain resistance genes that are passed to new bacteria. Interestingly, plasmids can move to new bacteria even in the absence of an antibiotic, which suggests that resistance can be disseminated in the bacterial population without the presence of an antibiotic agent.

3.2 Multidrug-resistant bacteria

The number of deaths related to infections is alarming in both the Gram-positive and Gram-negative groups. For example, the death toll related to the Gram-positive Staphylococcus aureus and Enterococcus species represents a great threat [8]. According to published studies, resistant strains of Staphylococcus aureus, such as MRSA, kill more Americans than HIV infection, Parkinson’s disease, and homicide combined [9]. On the other hand, Gram-negative species causing serious infections include Klebsiella pneumoniae, Pseudomonas aeruginosa, and Acinetobacter species [8].

Antibiotic-resistant infections impose a high economic burden on health-care systems. The main problems occur in hospitals as a result of the crowding of vulnerable patients. This issue is aggravated by the invasive procedures performed in these facilities and by the excessive use of antibiotics to safeguard the lives of critical patients. For instance, a study published in the U.S. in 2002 revealed that approximately 2 million people developed hospital-acquired infections per year, causing 99,000 deaths as the result of antibacterial-resistant pathogens [10]. Infections caused in hospitals by antibacterial-resistant pathogens extend the hospitalization of patients, with a subsequent increase in the cost of hospital days that depends on the type of infection [11].

3.3 The role of pharmaceutical companies

The role of pharmaceutical companies is to develop drugs to prevent or cure illness. Positive outcomes depend on the continuous introduction of new medicines, which has contributed to an increase in life expectancy to 78 years.

The drug discovery process is complex and involves investments of billions of dollars with high risk. Therefore, pharmaceutical companies must carefully assess the profitability of their products before deciding in which drug to invest.

The process of introducing a new drug to the market comprises four main phases: (1) drug discovery (3–4 years), (2) drug development (clinical phases, 5–6 years), (3) FDA filing and review (2–3 years), and (4) manufacturing and marketing. Pharmaceutical companies thus need to weigh a long drug discovery process against a patent that will expire at some point and a potential drug recall or withdrawal from the market. In total, the process from drug discovery to marketing may span 12–15 years. In the case of new antibiotics, the situation is aggravated by the appearance of bacterial resistance, which reduces the profitability of the antibiotic in the short term. Moreover, when new antibiotics are released onto the market, they are often used as a last resort because clinicians prefer to reserve them for complex infections. This prolongs the shelf life of the antibiotic while reducing its profitability for the company.

When a pharmaceutical company chooses a specific drug, the transition between the phases carries high risk because regulatory agencies monitor that each stage is safe for human consumption even before the drug enters clinical trials. For example, the selection of a candidate involves the screening of thousands of compounds, which, as a result of toxicity, efficacy, or safety issues, may not proceed to the next step. The cost of this investment is recovered only if the candidate drug successfully passes all the phases. As an illustration, 38% of drugs fail in phase I (safety/blood levels), 60% of the remaining fail in phase II (basic efficacy), 40% of the remaining candidates fail in phase III (large, expensive efficacy trials), and 23% fail to be approved by the FDA [12]. Taken together, the number of medicines approved for new treatments has consistently dropped from approximately 35 to 20 new drugs per year over the last decade [13].
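
To make the cumulative effect of these attrition rates concrete, the sketch below (my own illustration, using only the percentages quoted above from [12]) chains them into the fraction of candidates entering phase I that eventually reach approval:

```python
# Composite attrition: what fraction of candidates entering phase I is approved?

fail_rates = {
    "phase I (safety)": 0.38,
    "phase II (basic efficacy)": 0.60,
    "phase III (large efficacy trials)": 0.40,
    "FDA review": 0.23,
}

surviving = 1.0
for stage, fail in fail_rates.items():
    surviving *= (1.0 - fail)
    print(f"after {stage}: {surviving:.1%} of the original candidates remain")

# -> roughly 11% of candidates entering phase I are eventually approved,
#    illustrating why the up-front investment is so risky.
```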

Based on all these considerations, it is reasonable to deduce that pharmaceutical companies are more interested in developing new medicines for the treatment of chronic diseases than in developing antibiotics. Patients treated for chronic diseases will consume the drugs for long periods (years) or even for life, whereas antibiotics are prescribed for a few days and then stopped. Taking all these concerns together, only three pharmaceutical companies in the world continue to develop new antibiotics [14]. Other contributors to the development of new antibiotics, such as academia, have been affected by funding restrictions [15].

Over the last two decades, regulatory agencies such as the FDA have changed the way antibiotic clinical trials are executed [16]. For example, the use of a placebo in antibiotic clinical trials is now considered unethical; instead, trials address the noninferiority of new antibiotics compared with existing drugs. These regulations increase the cost of the trials because larger populations are required, with a concomitant reduction in profitability [16]. Taken together, changes in the regulations should be pursued to accelerate the approval of new antibiotics [17]. Such changes could, for example, reduce clinical trials to smaller populations, which would lower the cost of a trial and accelerate its completion.

3.4 Patent cliff

Pharmaceutical companies face an additional problem related to patent expiration. Considering that a specific drug is patented early, during the discovery phase, companies have a period of approximately 10–12 years after FDA approval and product launch to recover the investment made during the different phases. Once the patent expires, the company faces what is known as a “patent cliff” [18].

A patent cliff means that the company loses the exclusive right to market a specific drug, and the drug becomes “generic.” A generic drug is sold at a considerably lower price than the original equivalent. Thus, sales and revenues for that specific drug plummet, with price drops of up to 70% within a short period after patent expiration.

3.5 Misuse of antibiotics

Antibiotics, without doubt, have had a positive impact on human health. Deadly bacterial infections that were once untreatable became treatable and ceased to be a main cause of death.

History teaches us that penicillin resistance in Staphylococcus aureus appeared in 1945, only 3 years after the onset of its commercialization [19]. Today, Staphylococcus aureus has become completely resistant to penicillin and related derivatives [20]. For the same bacterial species, a rapid increase (from 5 to 80%) in resistance to ciprofloxacin, which had been expected to remain effective because its mechanism was novel and unknown in nature, was observed within 1 year of antibiotic use [21].

Physicians routinely prescribe antibiotics to treat infections whose origin has not yet been identified. For example, treating viral infections with antibiotics has no benefit for the patient; instead, it increases antibiotic resistance among the bacteria present in the patient’s microbiome, and increased resistance in the normal flora will neutralize antibiotic activity in future infections. For example, after examining a patient with ear pain, a family doctor concludes that the ear is infected. There is a 25% probability that the infection is caused by bacteria and a 75% probability that it is viral. Although it may cause discomfort for a few days, the infection will resolve without treatment if its origin is viral. To determine the origin of the infection, a culture test should be performed, which may take a couple of days and costs more than an antibiotic prescription. Thus, the patient often prefers the antibiotics, knowing there is only a 25% probability that they will help, and will press the physician for a prescription, which keeps the patient happy. Multiplied across thousands of doctor visits per year, this behavior drives the development of resistant, eventually untreatable bacteria.

Overuse of antibiotics by physicians also occurs when surgeons administer antibiotics to patients facing surgery as a means of prophylaxis, to prevent infections during and after the procedure [22].

Another aspect of antibiotic overuse is observed in the livestock industry, which uses large quantities of antibiotics not only to prevent infections [23, 24] but also to promote growth [25]. Infections can cause high mortality very quickly and considerably reduce the number of animals, especially in intensive husbandry (e.g., turkey, chicken, and fish farming). These antibiotics reach the environment, where they create an ideal niche for the development of resistance in the microbiome. Thus, the misuse of antibiotics in these industries puts pressure on bacteria to acquire resistance. For example, the presence of resistant bacteria has been demonstrated in meat consumers [26]. This phenomenon follows a sequence of events that starts with antibiotic overuse on farms. The overuse depletes susceptible bacteria and favors the appearance of antibiotic-resistant bacteria, which are transmitted to humans through the food supply. Studies have shown that approximately 90% of the antibiotics given to animals are excreted in urine and stool, which are subsequently used as fertilizers, altering the environmental microbiome [26].

Another growing problem is related to antibacterial products used for cleaning or hygienic purposes. Their release into the environment alters the composition of indigenous bacterial populations, which in turn has a direct effect on the development of a proper immune system in humans.

In order to tackle antibiotic overuse, national or provincial programs should be established to:

  1. educate not only health professionals but also the society to reduce this burden, including behavioral interventions;

  2. develop a fast test to evaluate whether an infection is caused by bacteria or viruses;

  3. restrict or limit the excessive use of antibiotics by providing education programs to farmers.

An examination of FDA approvals during the period 1998–2003 revealed that the approval of new antibiotics had declined by 56% over the preceding 20 years [23]. Surprisingly, only 7 of a total of 225 new drugs approved in that period were antibiotics [9], and only 2 of those antibiotics had a new mechanism of action [27]. This low number is insufficient to meet the growing need of our society to cope with infections.

4. Alternatives to antibiotics

The decrease in the number of new antibiotics introduced to the market over the last decades, together with the appearance of resistance, has fueled the investigation of alternative sources of antimicrobial agents.

These new research avenues include bacteriocins, antimicrobial peptides, bacteriophages, and nanoparticles.

4.1 Bacteriocins

Bacteriocins are short or long sequences of amino acids with antibacterial activity produced by lactic acid bacteria. Their sequences are heterogeneous, and they are classified according to their molecular weight [28]. For example, some consist of short peptide sequences (19–37 amino acids), whereas others can reach molecular weights of up to 90,000 Da.

Bacteriocins are considered to possess antibacterial activity against a broad spectrum of bacteria, making them nonspecific; they are regarded as safe and natural antimicrobial agents because they have been consumed in dairy products since ancient times [29]. In other words, bacteriocins are produced by bacteria considered beneficial to humans.

Bacteriocins are produced by lactic acid bacteria in the intestine, probably to gain access to nutrients in a highly competitive environment in which trillions of bacteria from different species strive to survive. However, bacteriocins are not exclusive to the lactic acid bacteria group. Other bacterial strains, such as Fusobacterium mortiferum and Enterococcus faecium isolated from chicken, have also been shown to produce bacteriocins with in vitro antibacterial activities [30, 31].

Bacteriocins are grouped into different classes, of which lantibiotics and thiopeptides are the most extensively studied [32]. For example, lantibiotics are very effective in controlling, in vitro and in vivo, Gram-positive infections caused by Staphylococcus aureus, Staphylococcus epidermidis, Streptococcus pneumoniae, and Streptococcus pyogenes [33, 34, 35, 36].

On the other hand, thiopeptides have shown extraordinary results as antimicrobial agents, but their applications have been restricted because of water-solubility issues [37, 38]. However, analogs of these thiopeptides have been generated and successfully applied to control infections caused by Clostridium difficile, Salmonella enterica, and Staphylococcus aureus in rodent models [39, 40, 41].

Although bacteriocins can be delivered as bacteriocin-producing bacteria, their activity in the intestinal tract should be monitored. In the case of bacteriocin treatment in chicken, low-molecular-weight bacteriocins have been shown to be active in the intestinal environment. For instance, curvacin secreted by Lactobacillus curvatus inhibited the growth of the pathogens Escherichia coli and Listeria innocua in the digestive tract [42]. Experiments performed to determine the degradation of the bacteriocin in the digestive tract revealed that it was degraded in the last portion of the small intestine (ileum) [42].

The bacteriocin nisin, produced by Lactococcus lactis, changed the fermentation parameters in an artificial rumen model [43]. These changes are probably attributable to changes in the rumen microbiome caused by the bacteriocin administration.

4.1.1 Mechanism of action of bacteriocins

Studies have reported that bacteriocins target different pathways. For example, lantibiotics and other bacteriocins bind lipid II, an intermediate in peptidoglycan biosynthesis [44, 45, 46]. Moreover, upon binding lipid II, lantibiotics enable the formation of pores in the bacterial cell membrane, leading to an imbalance of the membrane potential and resulting in cell death [44, 45].

Pore formation appears to be a mechanism shared by different types of bacteriocins. Their activity depends on binding to specific receptors on the bacterial membrane. For instance, some bacteriocins recognize the cell envelope-associated mannose phosphotransferase system (Man-PTS), whereas others recognize siderophore receptors (e.g., FepA, CirA, or Fiu) [47, 48].

Other mechanisms of action of bacteriocins, such as interference with gene expression and protein biosynthesis, have been proposed. Examples include interference with DNA (e.g., inhibition of supercoiling mediated by gyrase), RNA (e.g., blocking mRNA synthesis and binding to the 50S ribosomal subunit), and protein synthesis (e.g., modification of amino acids and binding to the elongation factor Tu) [49, 50, 51, 52, 53].

4.1.2 Mechanisms of resistance to bacteriocins

The appearance of resistance, which may develop as a result of changes in membrane composition or structure, is always a concern. In this regard, resistance to nisin has been reported in specific strains of Clostridium and Listeria [54, 55, 56]. Moreover, exposure of Salmonella enterica and Streptococcus bovis to the bacteriocins microcin-24 and nisin, respectively, showed that the resistant cells were also resistant to other antibiotics [57, 58].

Resistance mechanisms have mainly been identified for bacteriocins targeting the cell envelope. In this regard, studies have shown that a decrease in the receptor of bacteriocins targeting lipid II conferred resistance in Staphylococcus aureus [46], as did regulation of the ABC transporter in Listeria monocytogenes [59]. In addition, mutations in genes encoding the RNA polymerase subunit and the gyrase have been observed [60, 61, 62].

In conclusion, resistance to bacteriocins has already been reported, and potential solutions should be considered to reduce the appearance of such resistance. These include derivatization of the original molecule to synthesize new molecules that may still bind the receptors while reducing their recognition by the bacteria [63]. Alternatively, the use of a cocktail of bacteriocins in combination with other antibacterial agents should also be evaluated.

4.1.3 Bacteriocin delivery

One attractive system for the delivery of bacteriocins is the use of Lactobacillus strains. For example, growth inhibition of the pathogens Listeria monocytogenes and enterohemorrhagic Escherichia coli in a mouse model has been reported using Lactobacillus casei str. LAFTI L26 [64, 65]. Interestingly, bacteriocins were used successfully to control oral pathogens with an engineered Streptococcus mutans strain that produces the bacteriocin mutacin 1140. This bacteriocin was able to control plaque formation [66], and the engineered strain was retained in the oral microbiome for 14 years after application [67].

4.2 Antimicrobial peptides

Small antimicrobial peptides are probably produced by every organism to cope with bacterial invasion. Antimicrobial peptides are short peptides with a molecular mass of 1000–5000 Da. Analysis of their sequences revealed that they interact with the negatively charged bacterial membranes through their net positive charge [68]. Further analysis of antibacterial peptides revealed that a hydrophobic region in their sequences is required to bind to the bacterial membrane, along with a conformational change that allows them to intercalate into the membrane.

Structural analysis of the peptides showed that they may adopt different 3D conformations, such as helices, sheets, or loops [69]. The structure of the peptides is very important because redesigning their secondary structures may increase their antibacterial activity or their stability, making them more resistant to the activity of proteases [70, 71, 72, 73].

4.2.1 Mechanism of action of antibacterial peptides

It appears that the main mechanism of action of antibacterial peptides is permeabilization; therefore, they depend on interaction with the cell membrane. This interaction begins electrostatically, when the cationic peptide binds the negatively charged outer bacterial envelope. The negative charge results from phosphate groups in the lipopolysaccharides of Gram-negative bacteria or from lipoteichoic acids on the surface of Gram-positive bacteria. Once the electrostatic interaction occurs, hydrophobic interactions allow the peptides to insert into the outer membrane structure of Gram-negative strains. Translocation may then occur by a mechanism that is not fully understood, which may involve the formation of a transient channel, dissolution of the membrane, or translocation across the membrane [74].

Antibacterial peptides act on different targets, including inhibition of nucleic acid and protein synthesis, enzymatic activity, and cell wall synthesis [75]. For example, buforin II (isolated from a frog) penetrates the bacterial membrane and binds both DNA and RNA molecules in the cytoplasm of Escherichia coli [76]. Likewise, other peptides inhibit DNA and RNA synthesis without destabilizing the bacterial membrane [77, 78, 79], as well as protein synthesis [78, 79]. Other reported mechanisms include inhibition of enzymatic activity, for example, pyrrhocidin inhibits the ATPase activity of the heat shock protein DnaK (required for correct protein folding), and inhibition of the transglycosylation of lipid II for peptidoglycan synthesis [80, 81, 82].

4.2.2 Mechanisms of resistance to antibacterial peptides

As with bacteriocins, the development of resistance against antimicrobial peptides has been shown. Studies have demonstrated that certain genes can confer increased resistance to antimicrobial peptides, such as the gene rcp in Legionella pneumophila [83, 84]. Other resistance mechanisms have not yet been elucidated, nor is it known whether this resistance is transferred between bacteria.

Antimicrobial peptides, such as gallinacins, have been isolated from chicken leukocytes and have shown antimicrobial activity against Listeria monocytogenes, Escherichia coli, and the yeast Candida albicans [85]. Other antimicrobial peptides have been isolated from turkey and show activity against Staphylococcus aureus and Escherichia coli [86].

The use of antimicrobial peptides faces stability issues. Because of their proteinaceous nature, they are subject to degradation by proteolytic enzymes, which are highly abundant in the body. Although antimicrobial peptides are also produced by the immune system, these endogenous peptides are less vulnerable because they act very close to their site of production. Thus, any therapeutic use of these antimicrobials should address the proteolysis issue, perhaps by designing more resistant peptides, including chemical modifications, or by encapsulating them for protection or slow release. Other delivery alternatives have been proposed, such as production in genetically modified plants, which can be used as animal feed [87].

4.3 Bacteriophages

Bacteriophages or phages are viruses that infect and multiply in bacteria. As mentioned earlier, viruses infecting cells can be released into the environment by a process of bacterial cell destruction or lysis.

Phages are attractive for therapy because they interact specifically with only a particular strain of bacteria. The interaction of phages with their hosts is based on the recognition of specific binding sites, leaving strains without these receptors unaffected. On the other hand, this host specificity may represent a challenge for phage therapy. For example, lytic phages able to infect all Salmonella serovars (the same species but with differences in surface antigens) have not yet been discovered.

To overcome this problem, a mixture of phages is necessary to cover the most common infections caused by the same pathogen. For example, studies have reported that a cocktail of lytic phages was effective in controlling Salmonella isolates in chicken and on fresh-cut fruits [88, 89, 90, 91].

Early trials of phage therapy were not encouraging, and interest in this therapy declined in the past. However, with the appearance of antibiotic resistance in bacteria, its potential use has reemerged.

Therapeutic phages also face issues related to their interaction with the target bacteria. The introduction of the viral genetic material can cause undesired changes in the bacterial strain. For example, some phages may integrate into the bacterial chromosome, which may introduce new characteristics or modify the expression of host genes. These characteristics may include effects on the secretion of bacterial virulence factors, such as toxins, or antibiotic-resistance genes [92, 93, 94, 95]. Taken together, it is desirable that phages enter a lytic cycle and destroy their bacterial host rather than become incorporated into the bacterial chromosome. Cell lysis is therefore preferred in phage therapy because destruction of the host reduces the chances of viral integration into the bacterial chromosome.

It seems that future phage therapy will focus principally on the digestive and respiratory tracts, with little possibility of use as a systemic therapy. In blood, phages are exposed to circulating antibodies, which clear them from the circulation. In the digestive tract, however, phages are subjected to adverse factors such as pH changes, which might reduce their antimicrobial activity. For example, the load of Salmonella enteritidis was reduced on contaminated melons, but not on apple slices with a pH of 4.2 [89].

Safety concerns have also been raised about the production of phages for phage therapy. Phages must be produced in live microorganisms, so their production is limited to their pathogen hosts. In this regard, phages can carry genetic material from the host, in this case the pathogen, and transmit it to other bacteria. This scenario does not appear to be a frequent event, but it would be desirable to produce the phages in a nonvirulent host to reduce this likelihood. In some applications, the use of the enzyme responsible for lysis of the host may suffice to control the pathogen [96], but this may be limited to topical applications or mucosal infections, because the enzyme has little chance of surviving passage through the digestive tract.

Although the disadvantages of phage therapy are discussed above, it is still considered a natural alternative for controlling infections in humans [97, 98]. Its use is supported by studies showing protective effects in different animal models. For example, intramuscular injection of phages protected mice infected with Escherichia coli O18:K1:H7, and a reduction in an enteropathogenic Escherichia coli strain was measured in the digestive tract of infected calves, piglets, and lambs treated with phage therapy [99, 100]. Similar studies showed the effectiveness of phage therapy in mice infected with vancomycin-resistant Enterococcus faecium [101].

An alternative approach based on genetically engineered phages that deliver genetic material into bacteria has been reported. It uses lysogenic (nonlytic) phages to deliver genetic material encoding proteins with bactericidal activity, such as toxins [102].

4.4 Nanoparticles

The use of nanoparticles (NPs) to control bacterial diseases has shown promising results. Over the last decade, NPs synthesized mainly from Ag, Au, Zn, and Cu have been tested as potential antibacterial agents. AgNPs are the most studied NPs, probably because of the long use of Ag in medicine, already described in the ancient literature by Hippocrates of Kos (c. 460–c. 370 BC). Because of the enormous number of papers published on AgNPs as antibacterial agents, this section focuses only on these NPs.

NPs range in size between 1 and 100 nm and have physicochemical characteristics different from those of the bulk material. One of these characteristics is their large surface area relative to their volume, which makes them very reactive.

During AgNP synthesis, the Ag ion (Ag⁺) is reduced to Ag⁰ using chemical reductants. However, in recent years, a more environmentally friendly technology using plant extracts has been proposed to diminish the toxicity problems linked to classical chemical synthesis [103, 104].

Physical characterization of AgNPs revealed that shape and size are important parameters with a profound effect on their antibacterial activity. For example, maximal activity was achieved when the AgNPs were <40 nm in size, and the highest activity was measured for elongated or spherical shapes [2, 105, 106, 107].

4.4.1 Activity mechanisms of AgNPs

The antibacterial activity of AgNPs appears to be based on different mechanisms. It is not completely clear whether AgNPs are internalized by the bacterial cell directly or whether the membrane ruptures as a result of their activity, allowing their internalization [2, 108]. Many studies indicate that adsorption of the NPs onto the extracellular portion of the bacteria is the main mechanism of toxicity [105]. As a result of the adsorption, the cell wall depolarizes and the cell becomes more permeable, leading to cell death [109, 110]. Other studies have reported that AgNPs aggregate on the bacterial cell wall, causing disruption of the cell envelope [105, 111, 112], with interactions with different functional groups, such as carboxyl, amino, and phosphate groups, leading to Ag precipitation [113].

Another mechanism of bacterial toxicity is the generation of reactive oxygen species (ROS) by the AgNPs. ROS (free radicals, superoxides, and peroxides) are generated in any cell as a result of metabolic reactions (Figure 3); however, cells have different systems to cope with the toxicity of these ROS. The production of ROS, either intracellular or extracellular, may lead to membrane disruption [114], including lipid peroxidation [115].

Figure 3.

Production of ROS and their activity on AgNPs [2].

Other toxicity mechanisms are related to the inhibition of bacterial respiration [116, 117, 118] and to protein and thiol binding [109, 114, 119]. It is noteworthy that the amino acid cysteine has a high affinity for Ag⁺, plays an important role in the proper folding of proteins, and is involved in the catalytic activity of many enzymes. Thus, AgNPs target a diversity of enzymes at once, with detrimental effects on the bacterial cell [119, 120]. A model of AgNP toxicity in Escherichia coli is depicted in Figure 4.

Figure 4.

A model showing the toxicity of AgNPs in Escherichia coli. (A) Disruption and disintegration of the membrane/cell wall. (B) AgNPs access the periplasmic space gaining entrance to the cytosol where they interact with (C) DNA, and (E) ribosomes (protein synthesis impaired), generating (F) ROS and (G) binding to cysteines in proteins [2].

5. Conclusions

The continued misuse of antibiotics, as well as other factors, has accelerated the appearance of bacteria showing multidrug resistance. The problem is aggravated by the lack of new antibiotics introduced by pharmaceutical companies. Both situations have placed humanity in a dangerous position: in the short term, we will need to cope with a lack of antibiotics to combat disease. To overcome this problem, the development of new antibacterial agents has begun. It is of great importance that everyone in our society take responsibility for reducing the burden of disease: regulatory agencies by accelerating approval processes, governmental agencies by providing incentives for pharmaceutical companies to continue developing new antibacterial agents, agricultural extension services by educating farmers in the wise use of antibiotics, and everyone in society by being aware of the misuse of antibiotics.

Author details

Horacio Bach

Department of Medicine, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada

*Address all correspondence to: hbach@mail.ubc.ca


Mathematical Modeling: The Art of Translating between Minds and Machines and How to Teach It

Andreas A. Linninger

Abstract

In this chapter, I will argue that the proposition that “math is a language” has beneficial implications for the way we conduct scientific inquiry and math education. I posit that model-based discovery and learning are acts of “replacing a theory-less domain of facts by another for which a theory is known.” Data collected over 30 years of mathematical modeling research demonstrate that the creative process of model formulation is much more taxing than the solution of mathematical artifacts with scientific computing methods. Approaching mathematics as an acquired language also has profound benefits for engineering math education. Evidence from 20 years of teaching engineering students further reveals a shockingly short-lived retention of math literacy skills. A novel course pedagogy that treats ‘math as a foreign language’ was able to improve long-term learning outcomes at the undergraduate and graduate levels. The chapter closes with an outlook on existing scientific frontiers in neuroscience that may be overcome by more math-eloquent scientists and engineers.

Keywords: mathematical modeling, ontologies, mathematical education, math literacy, scientific computing

1. The art of mathematical modeling

Mathematical models appear everywhere in science. Models express our understanding of the natural world even beyond the frontiers of science. For example, the worldview of the space/time continuum of Einstein’s theory of relativity is not merely a study subject in theoretical physics but is part of popular knowledge. What exactly constitutes a mathematical model? Can the art of generating a mathematical model of the physical world be learned or taught? Wikipedia offers the following on mathematical modeling: “A mathematical model is a description of a system using mathematical concepts and language.” The Wiki page proceeds to characterize mathematical terms such as types of equations, but offers no information on the role of language in modeling or on how this language should be employed. Despite its significance, mathematical modeling is not a readily defined term. A rare exception is Aris’ book entitled Mathematical Modeling Techniques [1]. This book, entirely devoted to the art of mathematical modeling, defines a model as a set of equations corresponding to a physical, biological, or economic prototype. Aris also cites the logician Tarski, who sees models as realizations in which all valid sentences of a theory are satisfied. I prefer a more general definition: mathematical modeling is the replacement of a theory-less domain by another for which a theory is known.

The replacement of a theory-less domain by another for which the theory is known has hardly ever been demonstrated more successfully than in the revolutionary discoveries of Alan Turing, whose model of mathematical operations established the logical basis of modern computing. Turing realized that the algebraic operations of our decimal system, additions and multiplications in the common natural number system, can be replaced with AND and OR logic operators within a binary number system. Accordingly, natural and real numbers could be digitized. More importantly, expensive tasks such as evaluating long lists of summations or multiplications could be executed by clocking thousands of AND and OR operations on a simple electronic machine, which later became known as a computer.

His contribution fits the definition of modeling as the replacement of decimal operations in the common number system with extremely simple and fast binary AND/OR logic operations; this breakthrough achievement rang in the digital age. His seminal work, rooted in an ingenious mathematical model that combined well-known facts of mathematical logic with electronic principles, heralded a new era of computing.
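
As a small, self-contained illustration of this idea (my own sketch, not taken from the chapter), the following snippet adds two natural numbers using only bitwise logic gates; it uses XOR in addition to the AND/OR operators mentioned above, and XOR itself can be composed from AND, OR, and NOT:

```python
# A ripple-carry adder built from logic gates: decimal addition replaced
# by operations on binary digits.

def full_adder(a: int, b: int, carry: int) -> tuple[int, int]:
    """Add three bits using only logic operations; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry                        # XOR gates
    carry_out = (a & b) | (carry & (a ^ b))  # AND and OR gates
    return s, carry_out

def add_binary(x: int, y: int, width: int = 8) -> int:
    """Add two natural numbers bit by bit, as a logic circuit would."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_binary(19, 23))   # -> 42, computed purely with logic operations
```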

Discoveries like Turing’s spring from inspiration and creativity; thus, mathematical modeling is an art. But is there a formal way to generate modeling art? The sciences offer a useful template for model-based discovery and learning. I propose an analogy with the feedback circuit depicted in Figure 1. The learning cycle begins with a problem formulation, conceived in the mind of the modeler, that describes properties and possible transformations of a novel physical prototype, such as a chemical process, whose extent and critical parameters may not yet be fully known or understood. Problem formulation implies the transformation of domain-specific states and transitions into suitable mathematical relations. When the physical domain, its state descriptions, and its transitions are adequately mapped into a suitable mathematical surrogate, the algorithmic machinery is invoked to make mathematical predictions about the system states, X, typically using digital computers. Mathematical analysis involves the solution of mathematical equations, typically the numerical solution of algebraic equations, nonlinear models, optimization, or system dynamics using well-known rules in the mathematical domain.

Figure 1.

Schematic of the model-based learning paradigm.

But model-based learning does not stop here: the properties of the mathematical surrogate, X, are not the object of study. Instead, it is critical to interpret (or back-transform) the predictions in terms of the physical states in the problem domain. In engineering, this means inferences about the states of matter such as pressure, temperature, and concentration (P, T, and C). Since the range and deliberate manipulation of the original prototype space are not fully known, or may be only vaguely known, it is often necessary to sharpen the original mathematical problem formulation or its assumptions. The incorporation of feedback about the original domain gained from the mathematical analysis constitutes the essence of model-based learning and discovery. Let us also emphasize that the model-based learning paradigm proposed here requires frequent translations between the prototype domain and the mathematics. The need to translate frequently between different reference systems characterizes mathematical modeling as a linguistic activity. Mathematical modeling strongly relies on math literacy, which has implications for how we should teach mathematics to engineering students. This point will be discussed further at the end of this section. When feedback is omitted and mathematical predictions are taken at face value about the actual system, a gap between reality and its mathematical surrogate may open. This undesirable phenomenon is called model mismatch. Model mismatch is the most severe problem affecting mathematical models; it is often unavoidable in the early stages of a study but needs to be gradually mitigated by repeated reformulation, testing, and interpretation feedback cycles. Learning requires frequent model adjustments and reformulations.
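
To make the formulate-solve-interpret-reformulate cycle concrete, here is a deliberately simple toy example (my own illustration, not from the chapter): a first-order decay model whose rate constant is repeatedly adjusted until the mismatch against a handful of synthetic observations becomes small.

```python
# Toy model-based learning loop: formulate, solve, interpret, reformulate.
import math

# "Physical" observations (t, C) pairs; synthetic data for illustration only.
observations = [(0.0, 1.00), (1.0, 0.61), (2.0, 0.37), (3.0, 0.22)]

def solve_model(k: float, t: float, c0: float = 1.0) -> float:
    """Mathematical analysis step: evaluate the surrogate C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def mismatch(k: float) -> float:
    """Interpretation step: compare model predictions with the observations."""
    return sum((solve_model(k, t) - c) ** 2 for t, c in observations)

k, step = 0.1, 0.01           # initial formulation (assumed starting guess)
for _ in range(200):          # reformulation loop driven by feedback
    if mismatch(k + step) < mismatch(k):
        k += step
    elif mismatch(k - step) < mismatch(k):
        k -= step
    else:
        step /= 2             # refine the adjustment as the mismatch shrinks

print(f"identified rate constant k ≈ {k:.2f}")  # close to 0.5 for this data
```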

2. The cost of problem formulation with mathematical models

Unfortunately, model formulation is a rather strenuous task performed in the creative mind of a modeler. The graphs from Aris’ book in Figure 2 illustrate profound modeling insights about reactive systems. These wonderful illustrations were not created spontaneously but were conceived through intensive contemplation. From a practical point of view, it is legitimate to ask how taxing the art of model generation is and whether industry can afford to pay for the labor of modeling artists. Let me offer an empirical estimate of the cost of mathematical model generation using examples from my past 20 years in chemical engineering research. The mathematical problem formulation for my PhD thesis on coal gasification took 2 years; the resulting nonlinear equations were encoded in Fortran. Today, this task might be accomplished a few months faster using modern modeling languages such as gPROMS or GAMS [2, 3]. My second example concerns the chemical process flowsheets typically analyzed in a chemical engineering senior design course. Development and analysis of a process flowsheet may take a student team operating the Aspen flowsheet software at the beginner’s level about 45 man-hours, or 3 hours per week for 15 weeks. An expert user may be able to set up the flowsheet in only 10 hours. The third example concerns the development of a computational fluid dynamics (CFD) model, of which we will see more later in the study of the brain. Using commercial CFD software such as FLUENT [4], it may take a few days or weeks to set up a routine flow problem such as laminar flow in a cylindrical domain. A complex problem like subject-specific blood flow prediction, for example in an aneurysm, may require 1–2 years of a PhD student’s time. In biological systems, some CFD problems require even more than 2 years. The chart in Figure 3 shows that the cost of model formulation falls in the range of a few hundred thousand dollars for junior-level engineers; it may reach or exceed half a million dollars if the modeling problem requires a senior expert or a scientist. In contrast, the computational time for merely solving the mathematical model was calculated to amount to only $500 of CPU time (about 1 week of CPU time). Accordingly, the cost of formulating models is much higher than the expense of solving the mathematical equations. The expense of mathematical model-based learning stems mainly from the effort of problem formulation; a much smaller fraction is attributable to the solution on the computer. Even in a computationally intense scenario (1 week of CPU time), computation may account for only about 10% of the total cost. In more complex modeling situations, the cost lies mainly in the formulation, interpretation, and testing of the model. It is therefore desirable to accelerate model generation and thus reduce its cost.
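
The rough calculation below makes the same point numerically; the labor figures are my own assumptions, and only the roughly $500 CPU cost is quoted from the text:

```python
# Rough comparison of formulation labor versus computation cost.

labor_rate_usd_per_h = 75     # assumed loaded cost of a junior modeling engineer
formulation_months = 12       # assumed: one year of formulation effort
hours_per_month = 160

formulation_cost = formulation_months * hours_per_month * labor_rate_usd_per_h
cpu_cost = 500                # ~1 week of CPU time, as quoted in the text

total = formulation_cost + cpu_cost
print(f"formulation: ${formulation_cost:,} ({formulation_cost / total:.1%} of total)")
print(f"computation: ${cpu_cost:,} ({cpu_cost / total:.1%} of total)")
# -> formulation absorbs well over 90% of the cost, consistent with Figure 3
```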

Figure 2.

Collage of Aris’ figures about reactive systems, assembled from Aris’ book entitled Mathematical Modeling Techniques [1].

Figure 3.

Overview of the cost of modeling. Problem formulation absorbs more than 90% of the cost for the development of mathematical models.

3. Intelligent systems for mathematical modeling

My former advisor, George Stephanopoulos at MIT, was one of the first chemical engineers to systematically consider whether machines could help formulate mathematical models. He called such machines intelligent systems, for which he developed several modeling languages, including MODEL.LA, a mathematical language to formulate process engineering models [5, 6]. He introduced a series of papers on formal modeling frameworks, intelligent systems in process engineering, and agent-based approaches for mathematical modeling [7, 8]. I had the pleasure of collaborating with a team of colleagues on one of his projects, a machine-assisted modeling approach entitled BatchDesignKit (BDK). BDK is a software architecture designed to interactively generate mathematical models [9, 10, 11, 12, 13, 14, 15]. It is composed of a batch sheet, which accepts formalized natural-language input in a chemist’s dialect to define operational tasks and to allow the scheduling and sequencing of parallel and sequential tasks as they are commonly used in the chemical recipes of batch pharmaceutical plants. The example in Figure 4 shows a typical batch sheet listing operational tasks: “Charge a reactor with an amount of water and charge the same reactor with a chemical, then heat the system.” As these batch sheets receive natural-language commands from a chemist or chemical engineer, the equations describing the chemical operations are synthesized and solved inside the computer, leading to new or purified process streams that evolve in the virtual computer reality as they would in the real chemistry lab. As the user interacts with BDK, the new batch recipe is refined in the virtual lab. The input of a chemical recipe in BDK is essentially identical to written instructions from a chemist’s lab notebook. To better inform users about the evolution of mathematical models for the synthesis of pharmaceutical compounds, BDK also generates a flowsheet featuring a virtual representation of all processing units, such as reactors or distillation columns, of material mixtures, compounds, and phases, and of processing streams. Together, the batch sheet with its natural-language input and the flowsheet with its virtual representation of operational equipment and material streams constitute a virtual laboratory in which experiments can be conducted. Figures 4 and 5 show an example of a batch sheet and a flowsheet, as well as the names of the operations that formed the natural BDK input language, which constituted the vocabulary of a chemist’s language. The intelligent BDK system was a model generation framework that used natural-language input, synthesized equations, created a virtual representation of materials, and eventually solved these equations to predict the physicochemical state changes resulting from material transformations, phase separations, and reactions. The idea of using formal algorithms to support mathematical model generation has received attention in the community and continues to be an interesting research task in systems science.
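
For illustration only, the snippet below encodes the three-step recipe quoted above as data; the task names, units, and amounts are hypothetical and do not reproduce BDK’s actual input syntax:

```python
# A hypothetical batch sheet in the spirit of the quoted example:
# charge water, charge a chemical, then heat the system.

batch_sheet = [
    {"task": "charge", "unit": "R1", "material": "water",       "amount_kg": 100.0},
    {"task": "charge", "unit": "R1", "material": "acetic acid", "amount_kg": 12.5},
    {"task": "heat",   "unit": "R1", "target_temperature_C": 80.0, "duration_min": 30},
]

# A model-generation layer would translate each task into the corresponding
# balance equations (mass, energy) and solve them to track the virtual
# contents of reactor R1 after every step.
for step, task in enumerate(batch_sheet, start=1):
    print(f"step {step}: {task}")
```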

Figure 4.

BDK batch sheet and flowsheet. Chemists can define batch process operations using natural language input in a chemist’s dialect.

Figure 5.

List of operational tasks in the BDK batch sheet for natural language input of chemical recipes (right). Flowsheet of batch operations and stream table for a batch recipe.

4. The practicality of computer-aided mathematical model generation

I will now turn to the question of whether model generation is practical. Numerous examples from my work in brain research provide evidence that formal generation of mathematical models has an attractive place in knowledge discovery. I will highlight oxygen exchange and blood flow in the aging brain to show how mathematical models can serve as an instrument of knowledge discovery. The mathematical modeling paradigm we propose for the brain is based on model generation from medical images, and we term it anatomical model generation. I demonstrate this with an example of the generation of a mathematical model of the mouse brain as well as the synthesis of vascular trees in humans. The guiding principle of generating mathematical models, shown in Figure 6, is the conversion of medical images into mathematical model representations. The schematic outlines this process. For instance, in this case study of the mouse brain, two-photon microscopy was used to acquire the anatomy of the brain cells and blood vessels in a large section of the primary somatosensory cortex of the mouse. My lab then used image segmentation to create vectorized image data and build an inventory of all the blood vessels and cells, capturing their precise location, diameter, and connectivity. These vectorized image data were encoded into a network graph using adjacency matrices to store the connections of nodes with arcs and property vectors for Cartesian coordinates, diameters, and sizes. Based on this image-derived domain representation, we have generated an anatomically concise network topology of the primary somatosensory cortex of the mouse. Figure 8 shows examples of four different sample sections, which are digital representations of the same cortical region of the brain of four different animals. Once we have generated the topological representation, in this case the anatomy of the cerebral cortex, we can automatically generate the equations from this network. The network representation of the mouse brain enables the computer to perform the task of synthesizing transport equations automatically using a set of rules.
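The following minimal sketch, with made-up coordinates and diameters, illustrates how such a vectorized vessel inventory can be encoded as a graph with an adjacency matrix for connectivity and property arrays for geometry. It is only an illustration of the data structure, not the actual processing pipeline.

```python
# Illustrative encoding of a vectorized vascular network as a graph:
# an adjacency matrix for connectivity plus property arrays for geometry.
import numpy as np

# Nodes are vessel junctions/endpoints with Cartesian coordinates (micrometers).
coordinates = np.array([[0.0, 0.0, 0.0],
                        [50.0, 0.0, 0.0],
                        [80.0, 30.0, 0.0],
                        [80.0, -30.0, 0.0]])

# Arcs are vessel segments: (node_i, node_j, diameter_in_micrometers).
segments = [(0, 1, 12.0), (1, 2, 8.0), (1, 3, 6.0)]

n = len(coordinates)
adjacency = np.zeros((n, n), dtype=int)
diameter = np.zeros((n, n))
for i, j, d in segments:
    adjacency[i, j] = adjacency[j, i] = 1      # symmetric connectivity
    diameter[i, j] = diameter[j, i] = d        # segment property stored on the arc

print(adjacency)
```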

Figure 6 depicts the phenomenological description of the model generation methodology. Biphasic blood flow equations for mass conservation as well as a simplified momentum equation can be generated automatically for each node of the vascular network. Blood flow in the microcirculation does not behave like a single fluid; at the microlevel, blood plasma and red blood cells act as a biphasic suspension. It is also well known that red blood cells in the microcirculation do not distribute uniformly between the branches of the microcirculation but tend to concentrate in vessels with higher flow and larger diameter. Therefore, we need a biphasic representation of blood flow, which was implemented with a very simple drift flux model of biphasic blood flow [16] depicted in Figure 6. This kinematic model is equivalent to a mixture with species of different volatilities; red blood cells partition into the thinner or thicker branch of a bifurcation as a function of their relative kinematic affinity (volatility), described by an empirical drift flux parameter, m. This simple hematocrit split rule allows us to predict the uneven distribution of red blood cells; the descriptive equations of the biphasic drift flux model are again automatically synthesized. Additional modeling details instantiate equations for oxygen transport to brain tissues, expressed by molar flux balances for red blood cells and oxygen unbinding from hemoglobin into plasma according to dissociation kinetics.
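As an illustration of the idea of a hematocrit split rule, the sketch below biases red blood cell flux toward the faster daughter branch with an empirical exponent m. The functional form and parameter value are assumptions chosen for demonstration and are not the published drift flux model.

```python
# Hedged sketch of a simple hematocrit split rule at a bifurcation. The power-law
# bias toward the faster branch (drift parameter m) is an illustrative assumption.
def split_rbc_flux(parent_rbc_flux: float, q1: float, q2: float, m: float = 1.5):
    """Divide the parent's red-blood-cell flux between two daughter branches
    carrying bulk flows q1 and q2; m > 1 biases RBCs toward the faster branch."""
    w1, w2 = q1 ** m, q2 ** m
    f1 = w1 / (w1 + w2)
    return parent_rbc_flux * f1, parent_rbc_flux * (1.0 - f1)

rbc1, rbc2 = split_rbc_flux(parent_rbc_flux=1.0, q1=0.7, q2=0.3)
print(rbc1, rbc2)   # the faster branch receives a disproportionate RBC share
```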

Figure 6.

Information flow for anatomical model generation from medical images. Two-photon image acquisition of the mouse brain in an open cranial window (left), vectorization of multiscale data (middle), and network representation of vectorized data using graphs (right).

The entire set of these complex model equations fits on a single piece of paper, as shown in Figure 7. The anatomical modeling approach enables the automatic generation of system equations from information encoded in the network topology. Once these network equations are automatically generated, their numerical solution on the computer yields predictions of microcirculatory blood flow patterns and oxygen extraction at an unprecedented level of detail, as shown in Figure 8 for four samples of the primary somatosensory cortex. The diagram shows with microscopic detail the distribution of red blood cells, blood oxygen saturation, the uneven distribution of hematocrit, and the patterns of blood pressure for any capillary and its surrounding tissue in the mouse cortex. The computer-generated mathematical model allows analysis at an unprecedented scale, down to the detail of individual cells or capillaries. The model predicted that blood pressure is not uniform; there are large deviations of hemodynamic states along different paths traversing the microcirculatory network. Previously, it was believed that the capillary bed exhibits representative average conditions of pressure or oxygen saturation as a function of the level in the anatomical vessel hierarchy. Our research shows that the hematocrit distribution, the hydrostatic pressure, and red blood cell saturation all exhibit large variability as a function of the different pathways red blood cells can take to traverse the network. These findings are nonintuitive and have been revealed with the help of the anatomical mathematical model. Key findings include that hemodynamic states in the microcirculation are not uniform, that the tissue is nevertheless evenly oxygenated, and that the pressure drop occurs mainly in the capillary bed [17]. None of these findings were predicted by prior models that did not offer the fine-grained level of anatomical detail shown in this research.
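A minimal sketch of the equation synthesis and solution step might look as follows: a conductance matrix is assembled from the segment list, boundary pressures are imposed, and the resulting linear mass-conservation system is solved for the nodal pressures. The network, conductances, and boundary values are invented for illustration.

```python
# Assemble a conductance (Laplacian-like) matrix from the segment list and solve
# the linear mass-conservation system for nodal pressures.
import numpy as np

segments = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.0)]   # (node_i, node_j, conductance)
n = 4
A = np.zeros((n, n))
for i, j, g in segments:
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g

b = np.zeros(n)
# Dirichlet boundary conditions: fix the inlet and outlet pressures (mmHg).
for node, p in [(0, 60.0), (2, 20.0), (3, 20.0)]:
    A[node, :] = 0.0
    A[node, node] = 1.0
    b[node] = p

pressures = np.linalg.solve(A, b)
print(pressures)    # the interior node takes an intermediate pressure
```

The same assembly rules extend naturally to the biphasic and oxygen transport balances; that is precisely what is meant by synthesizing the equations automatically from the network topology.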

Figure 7.

Overview of the equation generation mechanism for cortical blood flow and oxygen exchange mechanism.

Figure 8.

Predictions of hemodynamic states in microcirculation of mouse, blood pressure, hematocrit, and red blood cell (RBC) oxygen saturation.

5. Growing circulatory trees to explore cortical oxygen transport in the human brain

I attempt to address how oxygen is supplied to the human brain. In humans, it is not possible to access microcirculation data through open cranial windows as was shown for rodents; rather, a noninvasive approach was needed. My lab successfully deployed a model generation methodology to overcome this limitation. In humans, we used a modified constrained constructive optimization algorithm [18], originally developed by Wolfgang Schreiner for the synthesis of coronary arterial networks [19]. Schreiner randomly added segments to a main coronary arterial tree and determined the optimal segment location in the tree hierarchy, its coordinates, and the segment diameters by minimizing the vascular tree volume subject to flow constraints. Remarkably, when sequentially repeating the process of random segment addition followed by deterministic optimization, a tree emerges whose topology resembles natural vasculature. This discovery suggests that in nature, vascular trees grow in a manner that perfuses capillaries evenly, while at the same time, the segment diameters as well as the locations of bifurcations are chosen so that the total required blood filling volume is at a minimum. We have modified this original algorithm to generate vascular structures for very complicated organs such as the brain. Our modified algorithm is versatile and is capable of delineating vascular structures in quite complicated domains. The example in Figure 9 shows the initials of my laboratory (lppd, laboratory for product and process design) literally painted in blood. Each letter constitutes a physiological vascular tree that discharges the exact same amount of flow through its terminal nodes (capillaries). We have successfully used vascular synthesis to generate cerebrovascular models for rodents as well as for humans that are virtually indistinguishable from real vascular structures. Specifically, we made a computer-generated anatomical model of the human microcirculation. Figure 10 depicts a comparison between the synthetic vascular structure and a real sample. Using an artificially generated human cortical structure, we were able to predict oxygen exchange in humans at a length scale that has not been acquired experimentally. These results allowed us to predict blood flow and oxygen exchange in a large section of the somatosensory cortex for a 3 × 3 × 3 millimeter section [18].
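The sketch below conveys only the core loop of the constrained constructive optimization idea, namely random terminal addition followed by a deterministic choice of the cheapest attachment. It omits the radius rebalancing and perfusion constraints of the real algorithm, and all numbers are placeholders.

```python
# Highly simplified sketch of the CCO loop: repeatedly add a random terminal
# point and attach it where the added cost (here, segment length as a stand-in
# for tree volume) is smallest. Real CCO also rebalances radii and enforces
# equal terminal perfusion; this only illustrates the structure of the loop.
import numpy as np

rng = np.random.default_rng(0)
nodes = [np.array([0.0, 0.0])]          # root of the tree
edges = []                               # (parent_index, child_index)

def added_length(candidate, parent):
    return np.linalg.norm(candidate - parent)

for _ in range(20):
    candidate = rng.uniform(0.0, 1.0, size=2)          # random terminal location
    parent_idx = min(range(len(nodes)),
                     key=lambda k: added_length(candidate, nodes[k]))
    nodes.append(candidate)
    edges.append((parent_idx, len(nodes) - 1))

print(f"{len(nodes)} nodes, {len(edges)} segments")
```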

Figure 9.

Demonstration of vascular growth in a complex domain. The example shows synthetic blood vessel networks delineating the initials of the laboratory for product and process design (lppd). Each letter constitutes a physiological vascular tree that discharges the exact same amount of flow through its terminal nodes (capillaries).

Figure 10.

Depiction of synthetic and in vivo cortical architectures. A synthetic network of a 3 × 3 × 3 mm section of the microcirculatory network in humans (left). Rendering of a blood flow simulation performed on a murine somatosensory cortical section (1 × 1 × 1 mm, see [17]), acquired by two-photon microscopy in the Kleinfeld laboratory [22]. The morphologies of the synthetic and real trees show a striking similarity.

These two examples show how model generation can create mathematical representations of complex biological domains to make them amenable to mathematical analysis. Specifically, these models allow nonintuitive inferences about the cerebral circulation. The first conclusion concerned the uneven distribution of hemodynamic states in the microcirculation and the role that the network plays in ensuring even oxygenation. The second example of vascular synthesis enabled predictions of oxygen exchange in humans, where currently there is no imaging modality capable of penetrating the human brain at the level of individual capillaries. Having demonstrated the practical role of model generation and automatic formulation of process models for the normal brain, I now ask the question: is model generation significant?

6. The significance of mathematical modeling as an instrument of scientific discovery in mental disorders

Here, let me demonstrate the potential value of modeling for knowledge discovery in the aging brain. Brain diseases create a worldwide health problem. For instance, the total cost of brain disorders in the European Union amounted to almost 800 billion euros in 2010; thus, the average cost per capita was about 5500 euros. Nature Medicine painted a similarly expensive picture for the United States [20]. Alzheimer’s disease is the most common cause of dementia. Mental disorders are at the top of the list of the most costly health conditions in the United States. According to the Information Technology and Innovation Foundation (ITIF), $1.5 trillion is spent annually on mental disorders, a cost to the economy of about 8.8% of GDP [21]. These diseases also have a personal face. Parkinson’s disease and Alzheimer’s disease affect popular personalities as well as loved ones. Due to the severity of the crisis, the ITIF policy recommendations include expanding funding for NIH/NINDS and encouraging pharmaceutical companies to invest in more research.

But how can progress be made to address diseases of the brain, especially in an aging population? Here, I present preliminary data from research on mathematical models to better understand metabolic changes that affect the aging brain. Figure 11 shows microcirculatory changes that affect cerebral perfusion. In this preliminary research, we can see that capillary blockage caused by lymphocytes or an increase in tortuosity is capable of causing subtle changes in blood microperfusion and also enhances intracranial resistance. Even though these results are still at a preliminary stage, mathematical models at the microcirculatory level offer a unique perspective capable of answering questions at the macroscopic scale that are very difficult to access experimentally. There are numerous additional questions concerning brain pathologies that mathematical models can effectively address. The example in Figure 12 depicts the computation of hemodynamic risk factors in the human arterial tree. We envision a system in which virtual vascular intervention planning can be conducted on the computer. Physicians will have an interface for loading patient-specific images onto a computer system, enabling immersive visualization of the diseased vascular territory and potentially improving diagnosis. For example, the location and extent of arterial or venous stenoses can be detected and rigorously analyzed. In order to quantify the risk to the patient stemming from such a disorder of the cerebral vasculature, the endovascular interventionist will have access to computer models with subject-specific vascular tree representations that are automatically generated from medical images. Based on subject-specific data, computational network representations are generated, and the equations of momentum and mass transfer are automatically built and solved. This computer-assisted analysis will give physicians access to detailed 3D CFD simulation results, including velocity profiles and streamlines as well as hemodynamic risk factors such as the average wall-shear stress or the relative residence time (RRT). Currently, only engineers and scientists can perform these expensive simulations, limiting the benefit to patients. But improving access to these rigorous computational results would inform physicians about the possible risk that an endovascular pathology such as a stenosis poses to downstream blood vessels or possible redistributions of cerebral blood flow. Same-day, or even better, real-time CFD results may also influence the physician’s decision for intervention planning, such as the need to place a flow-diverting stent. Before placing the stent in the real patient, the physician may choose to assess the effect on the stenosed vessel by performing a virtual stent procedure using the subject-specific virtual representation on the computer to predict the expected blood flow changes. Figure 13 shows the post-treatment simulation; the virtual angioplasty would be able to remove the problematic wall-shear stress and RRT values. This result would encourage the physician to proceed with this intervention. Post-treatment, it is also possible to compare the actual flow measurements with the computational predictions in order to inform and refine the computational model for future interventions. By integrating computational methods into virtual endovascular planning, we expect to advance clinical practice and improve outcomes for patients in the future.
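As an illustration of how such risk factors can be evaluated, the sketch below computes the time-averaged wall-shear stress, the oscillatory shear index, and one commonly used definition of the relative residence time from a made-up wall-shear stress time series; it is not actual simulation output.

```python
# Sketch of two common hemodynamic risk indicators at one wall location:
# time-averaged wall-shear stress (TAWSS), oscillatory shear index (OSI), and
# relative residence time, RRT = 1 / ((1 - 2*OSI) * TAWSS), a definition commonly
# used in the CFD literature. The WSS samples below are made-up values.
import numpy as np

wss_vectors = np.array([[1.2, 0.1], [0.8, -0.3], [1.0, 0.2], [0.9, 0.0]])  # Pa

tawss = np.mean(np.linalg.norm(wss_vectors, axis=1))
mean_vector_magnitude = np.linalg.norm(np.mean(wss_vectors, axis=0))
osi = 0.5 * (1.0 - mean_vector_magnitude / tawss)
rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)

print(f"TAWSS = {tawss:.3f} Pa, OSI = {osi:.3f}, RRT = {rrt:.3f} 1/Pa")
```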

Figure 11.

Preliminary simulation results for the aging brain. In a simulated blockage of the capillaries (5, 20, and 20% occlusion), color codes show a drastic reduction in blood flow.

Figure 12.

Computer model of blood flow in the entire arterial tree of a human subject.

Figure 13.

Virtual intervention planning. Physicians will be able to plan interventions using subject-specific blood flow simulations.

A second area of significance for future intelligent modeling environments concerns the rational design of intrathecal drug delivery methods into the central nervous system. A prototype interface is depicted in Figure 14. Currently, the fate of drugs infused into the central nervous system is difficult to predict, so new drugs need to undergo trial-and-error testing in animals. We are working on a three-dimensional virtual reality tool that will enable physicians to perform virtual infusion experiments with drug pumps to decide between continuous infusion and bolus injection for the purpose of achieving the desired biodistribution of drugs in the central nervous system. Innovative treatments with gene vectors or antisense oligonucleotide (ASO) therapies designed to treat patients with brain diseases have never been used clinically before. For these situations, the use of mathematical models can help optimize drug dosing and anticipate risks in silico before animal or human experimentation occurs. Together with experimental assessments, these virtual methods may improve the capability to introduce new drugs with lower risk to the patient, as well as shorten development times for bringing drugs to market.

Figure 14.

Virtual drug injection. Physicians can optimally plan intrathecal drug injection procedures on a virtual patient.

It is therefore possible to conclude that modeling can dramatically accelerate discovery about complex systems, as we have shown for the aging brain and the rational design of drug delivery methods. Formulation of a process model is rather expensive, meaning that machine generation is both lucrative in terms of cost savings and effective because it allows a faster feedback/inference cycle. There is an intellectual demand for systems engineering research to generate these models.

7. Implication for teaching students the art of mathematical modeling

In this last section, I would like to apply conclusions from the discussion of mathematical modeling in the sciences to engineering education. Throughout this essay, mathematical modeling has been characterized as an effort of replacing a theory-less domain of facts by another in which all theories are known. This definition explains why pure mathematical property exploration—the solution of equations—does not necessarily lead to insights about the prototype. The proposed model-based learning paradigm delineated a continuing feedback cycle of sharpening the problem formulation, solution, and interpretation of results. Accordingly, the rigorous solution of mathematical properties is only a subtask, not the essence, of mathematical modeling, which requires translation between the physical prototype and mathematical relations and between computational predictions and actual process system states. It is key that the interpretation of mathematical results (predictions) informs knowledge of the behavior of the original study system. The repeated translations pose a linguistic, more than a merely logical, challenge. We therefore suggest that problem formulation of process models is similar to a communication and composition task. The realization of the linguistic nature of mathematical modeling has implications for how it ought to be taught.

Mathematical modeling involves frequent translation between the physical and mathematical languages. The view that mathematical modeling is a form of translation and composition between languages gives indications of how modeling can effectively be learned and taught. First, let us appreciate that languages require a grammar and a syntax. In the world of mathematics, these are the mathematical properties that need to be studied before any serious composition can commence. In this aspect, students are often at a loss, not because they fail to comprehend the logic of mathematics, but because they fail to parse its terminology. Even if the logic is clear, we do not comprehend wisdom written in a foreign tongue. It requires good reading practice before students are able to compose in this language.

I have implemented the “math-as-a-foreign-language” pedagogy in several course offerings over the past 10 years, for instance, a course in biological systems analysis. Accordingly, we have reading exercises to make sure that students are familiar with the words of the mathematical syntax. Grammatical rules are introduced as the properties of linear and nonlinear systems. All assignments are given as a natural language memo, which forces students to translate instructions in natural language into mathematical expressions. There are translation exercises in which students practice converting physical prototypes into quantitative expressions. This involves the choice of mathematical entities (scalars, vectors, and matrices) and a suitable mapping onto biological properties. Temperatures are an example of scalar fields; velocities form vector fields. For the description of the state of stress, a tensor field is needed. The characterization of an anisotropic porous medium requires a diffusion tensor field. We also have composition exercises in which multiphysical phenomena are transcribed from the physical world of reactors with connecting processing streams into networks of mathematical relationships using property vectors and connectivity matrices. Finally, students are tasked with validation steps in which mathematical predictions are interpreted in terms of physical model behavior.

This course has been running for the last 10 years with high success in terms of student retention rates. Typically, our students are more confident about mathematical modeling than they were before the course or the sequence of math courses that preceded it. The biggest obstacle to learning mathematics was removed by recognizing that mathematics is a language.


Acknowledgements

An earlier version of this text was delivered as a presentation at the 2040 Visions of Process Systems Engineering—A Symposium on the Occasion of George Stephanopoulos’s 70th Birthday and Retirement from MIT, June 2, 2017.

Author details

Andreas A. Linninger

University of Illinois, Chicago, United States

*Address all correspondence to:


Advances in the Measurement and Forecasting of Precipitation with Weather Radar for Flood Risk Management

Miguel A. Rico-Ramirez

Abstract

Precipitation is the main driver of the hydrological cycle, and therefore the measurement and forecasting of precipitation are key elements in hydrological and meteorological applications such as rainfall-runoff modeling, precipitation forecasting, flood forecasting, flood risk management, and hydrological and climate studies. Flooding is one of the most damaging natural hazards in the world. It has vast impacts, including loss of life, damage to property and goods, and negative health, social, and economic impacts. Reliable and accurate meteorological and hydrological forecasting is therefore a major priority to minimize such impacts. Significant progress has been made in improving the forecasting of extreme rainfall events for flood prediction in large rural catchments. However, accurate, reliable, and timely flood forecasting in urban areas is a challenging task that is now crucial for the reduction of hazard and the preservation of life and property. This chapter discusses some of the latest advances in the measurement and forecasting of precipitation with weather radars, including applications for river catchments and small urban areas.

Keywords: weather radar, rainfall, flooding, nowcasting, hydrological forecasting, polarimetric radar

1. Introduction

Flooding is an important hydrometeorological hazard that affects local populations around the world and has significant consequences for the socioeconomic development of the affected region. Flash floods are produced by very heavy localized precipitation affecting urban areas and producing vast human and economic impacts. Climate change can further increase the frequency and intensity of floods, and so it is important to develop measures to manage flood risk. Structural measures, such as dams, river embankments, channels to divert flood water, storage tanks, etc., can help to reduce flood risk, but they can be very expensive to build. Nonstructural measures such as early flood forecasting and warning systems can help to forecast floods several hours ahead, allowing a timely emergency response to take place and benefiting the local population during a major flooding event. The economic benefit of early flood warnings in Europe was estimated to be around 400 euros per 1 euro invested [1], and so flood forecasting systems are a fundamental part of flood risk management. In the UK, the National Flood Forecasting System (NFFS) provides hydrological forecasts for all the catchments across England and Wales [2]. These flood forecasts are only possible with the help of suitable models (e.g., hydrological models and inundation models) and reliable rainfall forecasts (based on radar rainfall, numerical weather prediction models, or a combination of both) to produce timely flood warnings. So, hydrological models and rainfall forecasts are essential parts of flood forecasting systems.

The hydrological cycle is controlled by different processes such as precipitation, evapotranspiration, groundwater fluxes, and changes in catchment water storage and soil moisture. Understanding the impact of changes in the hydrological cycle due to climate change, urbanization, land use, etc. is an active area of research. Hydrological models make it possible to simulate the hydrological processes in a catchment. Hydrological models are broadly classified into lumped, semidistributed, and distributed models. Lumped models consider the catchment as a single unit, and catchment-averaged values (forcing inputs and model parameters) are used to model the hydrological processes in the catchment. In contrast, distributed models can describe spatial variability within the catchment by using distributed measurements (e.g., rainfall, land use, soil characteristics, terrain elevation, etc.). Semidistributed models take into account some of the spatial variability within the catchment by dividing the catchment into subcatchments and treating each subcatchment as a lumped model. The choice of model is highly dependent on the task. For instance, distributed models can be useful to study the effects of land use change on the hydrological response of the catchment. However, increasing model complexity does not always guarantee better hydrological simulations [3]. Urbanization has a strong influence on the hydrological response of the catchment by increasing runoff rates and decreasing infiltration due to the presence of impervious surfaces, whereas changes in infiltration and land use can affect evapotranspiration [4]. In fact, urban hydrological processes such as infiltration, evaporation, and storm drainage vary at small spatial and temporal scales, and therefore, the water losses due to these processes need to be accounted for when computing the amount of rainfall that becomes runoff [5]. There are a number of models available in the literature to model the hydrological processes in river catchments and urban areas, each with its own complexity, data requirements, and mathematical formulations to estimate the rainfall-runoff processes. These models require calibration of the model parameters to ensure the simulated runoff is close to the observations for a number of storms (or a calibration period) that is representative of the climatology of the study area.
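As a minimal illustration of the lumped modeling concept, the sketch below treats the entire catchment as a single linear reservoir; the rainfall series and recession constant are invented for demonstration and do not correspond to any particular catchment.

```python
# Minimal lumped rainfall-runoff sketch: the catchment is a single linear
# reservoir, so storage S obeys dS/dt = P - Q with outflow Q = S / k.
rainfall = [0.0, 5.0, 12.0, 3.0, 0.0, 0.0]   # catchment-average rainfall, mm per step
k = 3.0                                       # reservoir recession constant (time steps)

storage, runoff = 0.0, []
for p in rainfall:
    storage += p            # add rainfall to catchment storage
    q = storage / k         # linear-reservoir outflow
    storage -= q
    runoff.append(q)

print([round(q, 2) for q in runoff])
```

A distributed model would, in essence, repeat such a water balance on every grid cell or subcatchment and route the flows between them.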

Precipitation is one of the key drivers of the hydrological cycle, and so any errors in the measurement of precipitation have important implications when modeling hydrological processes in river catchments or urban areas. Rainfall can be measured by different instruments such as rain gauges, disdrometers, microwave links, weather radars, and active/passive satellite sensors. Both rain gauges and disdrometers provide point measurements and are therefore unable to measure the spatial distribution of precipitation. Typical rain gauge instruments are tipping bucket rain gauges (TBRs) and weighing rain gauges (WGs). The measurements of both instruments are affected by systematic and random errors [6, 7, 8]. Typical errors in TBR measurements include gauge malfunctioning, blockages, wetting and evaporation losses, delays in rain delivery, underestimation during high rain rates, condensation errors, wind effects, and timing errors [9]. WGs are subject to systematic delays in producing an accurate weight measurement of the precipitation collected in the bucket [7], and the measurements can be affected by evaporation. Disdrometers do not measure rainfall rates directly but the drop size distribution (the number of raindrops of different sizes), which can be related to the precipitation rate. Disdrometers are often used to validate radar rainfall and satellite observations and require very little maintenance in comparison with rain gauges. Satellite rainfall measurement is improving, and the latest Global Precipitation Measurement (GPM) mission will help to improve our understanding of the water and energy cycles across the globe and our capability to forecast extreme rainfall events. The GPM core observatory includes active and passive instruments such as the dual-frequency precipitation radar (DPR), infrared (IR) sensors, and the GPM microwave imager (GMI), which can provide the three-dimensional structure of storms [10]. GPM provides rainfall measurements from space with a spatial resolution of 0.1° (approximately 10 km) and a temporal resolution of 3 h, from 65° south to 65° north in latitude. Satellite precipitation is particularly important in places where no other ground precipitation observations are available. For instance, the measurement of precipitation over the oceans is an active area of research, and the early detection of hurricanes, tropical cyclones, and large precipitation systems allows meteorologists to forecast these large-scale events several days in advance. Microwave links (MLs) measure the signal attenuation due to rain from commercial communication MLs (e.g., from mobile telephone networks), and the precipitation rate along the link can be estimated from the measured attenuation in rain [11]. Although this technique is very promising in urban areas, due to both the lack of rain gauge stations and the large number of MLs available, it is not straightforward to get access to ML data from mobile network operators.

Weather radars, on the other hand, provide distributed rainfall measurements with good spatial and temporal resolutions over a larger area. For instance, the operational C-band weather radar network in the UK, consisting of 15 radars, produces rainfall measurements at 1 km resolution every 5 min over the UK (see Figure 1). Mobile polarimetric X-band weather radars can produce rainfall measurements at even higher spatial and temporal resolutions (e.g., 250 m/1 min), which makes them suitable for urban flash flooding applications [12]. Radar technology was developed during World War II to detect enemy aircraft at long distances. Early radar systems used long wavelengths that required huge antennas to operate, but the development of the magnetron allowed radar systems to use shorter wavelengths, typically in the microwave frequency range, resulting in more compact systems that could be installed on aircraft [13]. At the time, radar operators realized that radar systems were sensitive enough to detect precipitation, and so there was a huge potential for radar in weather forecasting. Nowadays, weather radars are used by meteorological services around the world to estimate precipitation over large regions at high spatial and temporal resolutions for hydrological and meteorological purposes. Weather radar measurements can be used to produce short-term precipitation forecasts up to several hours ahead (typically 3–6 h of forecasting lead time) for real-time flood forecasting and warning. Weather radar measurements can also be used with other atmospheric observations to improve the initial conditions of numerical weather prediction models through data assimilation to advance weather forecasting. The following section briefly describes how radar operates and the latest advances in weather radar technology.

Figure 1.

Real-time radar rainfall mosaic over the UK.

2. How does weather radar work?

A weather radar typically sends a high-power signal in the microwave frequency range (S-band at 3 GHz, C-band at 5 GHz, and X-band at 10 GHz), and if precipitation particles lie along the path of the radar beam, then a small percentage of the energy is reflected back to the radar antenna. This reflected power is related to a measurement known as the radar reflectivity (Z), which is commonly used to estimate the rainfall rate. If the average diameters (D) of the precipitation particles are small compared to the radar wavelength (λ), then the Rayleigh scattering approximation applies (i.e., D ≪ λ) and the radar reflectivity can be expressed as a function of the sixth moment of the drop size distribution N(D), that is, Z = ∫ D^6 N(D) dD. The rainfall rate, on the other hand, is a function of the 3.67th moment of the drop size distribution (DSD), that is, R ∝ ∫ D^3.67 N(D) dD, so Z is more sensitive to large drops than R. This produces a source of uncertainty because both Z and R depend to a different extent on the DSD, which can continuously change between storms and even during the same storm. The radar reflectivity (Z) can be related to the rainfall rate (R) by using a nonlinear Z-R equation of the form Z = aR^b, where a and b are parameters that depend on the DSD. The parameters can be obtained empirically by establishing a climatological Z-R relationship or by simulating Z and R over a wide range of DSDs. However, updrafts and downdrafts can cause the Z-R relationship to vary from the one obtained in still air. The Z-R relationship is critically dependent on the calibration of the radar system, and Z is subject to attenuation due to precipitation at frequencies higher than 3 GHz. In the US, the relationship Z = 300R^1.4 is often used due to the convective nature of the precipitation, whereas in the UK, the equation Z = 200R^1.6 is more suitable for stratiform precipitation. However, there are many different equations quoted in the literature, often very specific to the type of precipitation or the climatology of the area.
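A short sketch of applying such a Z-R relation is given below; it converts reflectivity in dBZ to linear units and inverts Z = aR^b using the stratiform parameters quoted above, purely as an illustration.

```python
# Convert measured reflectivity to rainfall rate with a Z-R relation.
# Reflectivity is usually reported in dBZ, so it is first converted to linear
# units (mm^6 m^-3) and then inverted through Z = a * R**b.
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    z_linear = 10.0 ** (dbz / 10.0)       # dBZ -> mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)    # invert Z = a * R**b, R in mm/h

for dbz in (20.0, 35.0, 50.0):
    print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 2), "mm/h")
```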

Radar rainfall can be affected by different error sources. Weather radars do not measure rainfall directly but the power reflected from precipitation particles, which gives a measure of reflectivity, which in turn can be used to estimate the rainfall rate. In general, the quality of radar rainfall decreases with range (distance from the radar location) because the radar sampling volume increases with range and the radar beam height may be several kilometers above the ground at long ranges. As a result, the precipitation particles intercepted by the radar sampling volume might be rain, melting snow, snow, ice, etc., or a combination of these. This variability affects the reflectivity measurements, and the estimated precipitation may not be representative of the rainfall rate at the ground. The variation of the vertical profile of reflectivity (VPR) is due to factors such as growth or evaporation of precipitation, melting of precipitation particles, the thermodynamic phase of precipitation (rain, snow, hail, etc.), and wind effects, which can cause considerable errors in radar rainfall measurements. For instance, interception of the radar beam with melting snow particles can cause overestimation of precipitation by up to a factor of 5. Typical errors in radar rainfall include radar calibration, radar signal attenuation due to rain, radar measurements contaminated with nonmeteorological echoes (e.g., echoes due to ground or sea, buildings, ships, airplanes, birds, insects, wind farms, etc.), variation in the reflectivity-rainfall (Z-R) equation, variations of the vertical profile of reflectivity, extrapolation of reflectivity measured aloft to the ground, wind drift effects, the radar beam overshooting shallow precipitation, radar beam blockage and occultation, etc. Substantial work has been carried out to correct these errors in radar rainfall measurements, and the literature is too large to summarize here. Perhaps the most significant progress has been the development of dual-polarization radar (or polarimetric radar), which has demonstrated significant improvements in terms of both data quality and accuracy of rainfall estimation [14], and this is why many countries have upgraded (or replaced) their operational weather radar networks with polarimetric capability.

3. The development of polarimetric weather radar

Operational weather radars can be broadly classified into single-polarization (SP) and dual-polarization (DP) weather radars. SP radars measure the reflectivity (Z) only, and if the radar has Doppler capability, they also measure the radial velocities (V) of precipitation particles. DP radars, on the other hand, simultaneously (or alternately) transmit vertically and horizontally polarized electromagnetic waves and receive the polarized backscattered signals. DP radars can measure additional variables such as the reflectivities at horizontal and vertical polarizations (Zh and Zv, respectively), the differential reflectivity (Zdr), the linear depolarization ratio (LDR), the correlation coefficient (ρhv), and the differential phase (Φdp) (or its derivative, the specific differential phase Kdp), which provide more information about the nature of the precipitation event. DP radars are sensitive to the size, shape, orientation, and thermodynamic phase of the precipitation particles. This allows them to distinguish rain, snow, hail, melting snow, ice particles, etc., which can help to improve not only the precipitation measurement but also the identification of extreme precipitation events such as hail and snow storms.

DP weather radar can contribute not only to improving precipitation estimation but also to improving our understanding of the microphysics of precipitation. DP radars measure the reflectivities at horizontal and vertical polarizations (Zh and Zv, respectively), which can be used to compute the differential reflectivity (Zdr), a measure of the size of the raindrops. In fact, Zdr was originally proposed to improve the estimation of precipitation because large raindrops falling to the ground are distorted into oblate spheroids by aerodynamic forces, with their maximal dimensions, on average, horizontally oriented [15]. As a result, the backscattering cross section of large raindrops is larger for a horizontally polarized wave than for a vertically polarized wave (i.e., Zh > Zv). The differential reflectivity Zdr measures this difference: typical small raindrops have Zdr values of about 0 dB, whereas large raindrops can have values of 3–4 dB depending on the radar frequency. Seliga and Bringi [16] showed that the mean volumetric diameter of raindrops can be related to the value of Zdr, and therefore, Zdr is a measure of the mean particle shape, where large raindrops are associated with large values of Zdr. LDR provides a measure of depolarization by the precipitation particles. When nonspherical particles fall with their major axis at an angle to the axis of polarization, a small percentage of the transmitted energy is depolarized and yields a cross-polar return [17]. Depolarization is also a measure of the canting angle of the raindrops. Similar to Zdr, the response of LDR is strongly tied to the dielectric constant of the precipitation particles, and so LDR is larger for melting snow than for rain or snow. In fact, LDR can be used to classify rain, snow, or melting snow. ρhv is the correlation between successive estimates of the horizontal and vertical reflectivities, and it gives a measure of the variety of shapes of the precipitation particles present in the volume illuminated by the radar beam [17]. Measurements in rain reveal an average value of 0.98 or higher, whereas that for melting snow is considerably lower.
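The differential reflectivity itself is simple to compute; the sketch below evaluates Zdr = 10·log10(Zh/Zv) for illustrative linear reflectivity values.

```python
# Differential reflectivity from horizontal and vertical reflectivities.
# Zdr is conventionally defined in decibels as Zdr = 10*log10(Zh/Zv); values near
# 0 dB suggest nearly spherical (small) drops, a few dB indicates large, oblate drops.
import math

def zdr_db(zh_linear: float, zv_linear: float) -> float:
    """Zh and Zv are linear reflectivities (mm^6 m^-3); returns Zdr in dB."""
    return 10.0 * math.log10(zh_linear / zv_linear)

print(zdr_db(1000.0, 1000.0))   # ~0 dB: small, nearly spherical drops
print(zdr_db(2000.0, 1000.0))   # ~3 dB: large, oblate drops
```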

4. Advances with polarimetric radar

4.1 Identification of nonmeteorological echoes

A weather radar usually scans at low elevation angles to obtain rainfall measurements closer to the ground for hydrological purposes, but echoes from high ground or buildings can be misinterpreted as heavy precipitation. Although these echoes can be easily identified by using ground clutter maps, occasionally the radar beam is bent towards the earth’s surface due to changes in the vertical temperature and humidity distributions, producing ground echoes due to anomalous propagation (AP), whose location is unpredictable. DP radar has enabled accurate classification of clutter and AP echoes through the use of fuzzy logic or Bayes classifiers (see Figure 2) [18]. In fact, the fluctuating characteristics of the radar echoes, such as the spatial variability (e.g., texture or standard deviation) of Zdr and Φdp, are good indicators of ground clutter and AP echoes. These features have been exploited to identify nonmeteorological echoes in radar observations, and recently, the same principle has been applied to identify echoes due to wind farms [19].
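The texture idea can be illustrated with a short sketch: a local standard deviation is computed over a sliding window of the Zdr field, and bins with high texture are flagged as possible clutter or AP. The window size and threshold are placeholder assumptions rather than values from any operational algorithm.

```python
# Texture-based flagging of nonmeteorological echoes: meteorological echoes have
# spatially smooth Zdr and differential-phase fields, whereas clutter is noisy,
# so a high local standard deviation is a clutter indicator.
import numpy as np

def texture(field: np.ndarray, window: int = 3) -> np.ndarray:
    """Local standard deviation of a 2D radar field in a sliding window."""
    pad = window // 2
    padded = np.pad(field, pad, mode="edge")
    out = np.empty_like(field)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].std()
    return out

zdr = np.random.default_rng(1).normal(0.5, 0.1, size=(20, 20))          # smooth rain
zdr[5:8, 5:8] += np.random.default_rng(2).normal(0.0, 3.0, size=(3, 3)) # noisy patch

clutter_mask = texture(zdr) > 1.0    # placeholder threshold in dB
print(clutter_mask.sum(), "range bins flagged as possible clutter/AP")
```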

Figure 2.

Identification of nonmeteorological echoes in radar rainfall measurements [18].

4.2 Improvements in attenuation correction

Attenuation (A) is the loss of signal power due to absorption and scattering of electromagnetic waves along the propagation path. Attenuation is considered negligible at S-band (3 GHz) frequencies, but not at C-band (5 GHz) or X-band (10 GHz) frequencies. Attenuation can cause significant underestimation of precipitation if no correction is performed. DP radar has enabled the development of robust algorithms for attenuation correction of the reflectivity through the use of differential phase measurements (Φdp and Kdp), which are immune to rain attenuation. Figure 3 shows an example of rain attenuation caused by an extreme storm, which produced a large attenuation of reflectivity (see the circled region in the left figure) that can be recognized by the large phase shifts in Φdp (see the circled region in the right figure). The figure shows a circled region where the reflectivity values decreased (in some areas by more than 20 dBZ) due to the signal attenuation produced by the large heavy rainfall cell shown in red.
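A minimal sketch of the simplest differential-phase-based correction is shown below; it adds back an attenuation estimate proportional to Φdp, with a coefficient that is only an assumed, order-of-magnitude C-band value (operational algorithms typically estimate this coefficient adaptively).

```python
# Simplest Phidp-based attenuation correction: because the two-way path-integrated
# attenuation is roughly proportional to the measured differential phase, the
# reflectivity can be corrected as Z_corr = Z + alpha * Phidp. The value of alpha
# below is an assumed, illustrative C-band figure.
import numpy as np

def correct_attenuation(z_dbz: np.ndarray, phidp_deg: np.ndarray, alpha: float = 0.08):
    """Add back the attenuation estimated from differential phase (dB per degree)."""
    return z_dbz + alpha * phidp_deg

z = np.array([45.0, 38.0, 25.0, 18.0])          # measured reflectivity along a ray (dBZ)
phidp = np.array([0.0, 40.0, 90.0, 120.0])      # measured differential phase (degrees)
print(correct_attenuation(z, phidp))
```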

Figure 3.

Reflectivity and differential phase measured by a C-band radar during an extreme rain event [20].

4.3 Classification of precipitation particles

DP weather radars are sensitive to the size, shape, orientation, and thermodynamic phase of the precipitation particles, and hence, they have enabled the identification of rain, snow, melting snow, ice, and hail. The identification of precipitation particles (also known as hydrometeors) can help to improve our understanding of the microphysics of precipitation [21], and it can also benefit the weather forecasting research community by improving the initial conditions of numerical weather prediction models to produce better forecasts. The classification of precipitation particles can be achieved using fuzzy logic classifiers. This is because DP radar measurements share some similarities for different types of precipitation particles. For instance, heavy rain is associated with large values of reflectivity and differential reflectivity (due to large oblate raindrops), whereas hail is associated with large values of reflectivity and small values of differential reflectivity (due to its spherical shape). Similarly, ρhv and LDR are sensitive to melting snow. Figure 4 shows an example of hydrometeor classification using DP weather radar.
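The fuzzy logic idea can be sketched with trapezoidal membership functions over Z and Zdr, as below; the membership ranges are rough, illustrative numbers, and operational classifiers use more variables (ρhv, Kdp, temperature) and carefully tuned functions.

```python
# Toy fuzzy-logic hydrometeor classifier: each class gets trapezoidal membership
# functions over reflectivity (Z) and differential reflectivity (Zdr); the class
# with the highest aggregated membership wins. Ranges are illustrative only.
def trapezoid(x, a, b, c, d):
    """Membership rising from a to b, flat to c, falling to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

classes = {
    # class: ((Z range in dBZ), (Zdr range in dB))
    "light rain": ((5, 15, 35, 40),  (-0.5, 0.0, 1.0, 1.5)),
    "heavy rain": ((38, 45, 55, 60), (1.0, 2.0, 4.0, 5.0)),
    "hail":       ((50, 55, 70, 75), (-1.0, -0.5, 0.5, 1.0)),
}

def classify(z_dbz, zdr_db):
    scores = {name: min(trapezoid(z_dbz, *zr), trapezoid(zdr_db, *dr))
              for name, (zr, dr) in classes.items()}
    return max(scores, key=scores.get), scores

print(classify(47.0, 2.5))   # large Z and large Zdr -> heavy rain
print(classify(58.0, 0.0))   # very large Z, near-zero Zdr -> hail
```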

Figure 4.

Classification of precipitation particles [22].

4.4 Improvements in rainfall estimation

DP radars measure the differential reflectivity Zdr, which is related to the size of raindrops and, when combined with Z, has the potential to improve the measurement of precipitation. Kdp is almost linearly related to the liquid water content, and it also provides the possibility of better estimates of rainfall rates (R) in heavy precipitation [14]. Therefore, in addition to the Z-R algorithm, other algorithms of the form R = f(Z, Zdr) and R = f(Kdp) have been proposed to estimate precipitation with radar. The R-Kdp algorithm is useful in heavy precipitation, and it has the advantage that Kdp is immune to attenuation. The parameters of these algorithms can be computed using measurements of drop size distributions. Therefore, given the advantages of each rainfall algorithm in different rainfall conditions, a composite algorithm was developed that uses the Z-R equation in light rain, the algorithm R = f(Z, Zdr) in moderate rain, and the algorithm R = f(Kdp) in heavy rain [23]. Composite algorithms have been shown to provide more accurate rainfall rates (see Figure 5). Figure 5 demonstrates how the total rainfall measured by a DP weather radar (shown in colour) matches the rain gauge observations on the ground (shown with the numbers).
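A simple sketch of such a composite estimator is given below; the switching thresholds and power-law coefficients are placeholder assumptions for illustration and do not reproduce the published algorithm [23].

```python
# Composite rainfall estimator in the spirit described above: Z-R in light rain,
# R(Z, Zdr) in moderate rain, and R(Kdp) in heavy rain. Thresholds and coefficients
# are placeholder assumptions, not operational parameters.
def rain_rate_composite(z_dbz, zdr_db, kdp_deg_km):
    z_lin = 10.0 ** (z_dbz / 10.0)
    if z_dbz < 35.0:                         # light rain: plain Z-R
        return (z_lin / 200.0) ** (1.0 / 1.6)
    elif z_dbz < 50.0:                       # moderate rain: R(Z, Zdr)
        return 0.0067 * z_lin ** 0.93 * 10.0 ** (-0.343 * zdr_db)
    else:                                    # heavy rain: R(Kdp)
        return 40.5 * kdp_deg_km ** 0.85

print(round(rain_rate_composite(30.0, 0.5, 0.2), 2))   # light rain branch
print(round(rain_rate_composite(45.0, 1.8, 1.0), 2))   # moderate rain branch
print(round(rain_rate_composite(55.0, 2.5, 3.0), 2))   # heavy rain branch
```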

Figure 5.

Total rain accumulation between July 19 and July 24, 2007 using a composite rain rate algorithm. The numbers on the figure are the rain accumulations measured by a network of rain gauges [23].

5. Precipitation and hydrological forecasting

Some approaches to reducing the errors in radar rainfall are based on merging radar rainfall with rain gauge measurements in order to combine the benefits of both instruments, namely the accuracy of point observations from rain gauges and the better representation of the spatial distribution of precipitation from radar [24, 25]. Figure 6 shows a rainfall product that blends radar rainfall with point rain gauge observations using a geostatistical technique known as kriging with external drift (KED). The figure also shows the use of this merged rainfall product to predict a flood in Kingston upon Hull, located in the north of England (from [26]). The results show that the simulated flooded areas produced by the inundation model agree with the flooded areas reported by the Environment Agency and Hull City Council, and they demonstrate the enormous potential of weather radar for real-time flood forecasting.

Figure 6.

Merged radar-rain gauge rainfall product (top) used to simulate a flood (bottom) in Hull, England [26].

Although the above simulation was performed using measured radar and rain gauge data, it highlights the potential of weather radar to predict flood inundation in real time. In fact, precipitation forecasts can be produced either by numerical weather prediction (NWP) models or by using a sequence of radar rainfall scans. NWP models have better performance over longer timescales as they dynamically resolve the large-scale atmospheric processes. Radar-based precipitation forecasting is often known as precipitation nowcasting. Nowcasting models are based on the extrapolation of radar rainfall scans to track the motion of precipitation cells, with a forecasting lead time of a few hours. Radar-based precipitation nowcasting performs better than NWP forecasts for the first few hours of the forecast, but NWP forecasts perform better at longer forecasting lead times. However, radar-based precipitation nowcasting can be very useful for flash flood forecasting in urban areas (e.g., [27]) or hydrological forecasting for river catchments (e.g., [2]). Figure 7 shows river flow forecasts obtained in the Upper Medway catchment using a combination of radar nowcasts and NWP forecasts coupled with a hydrological model of the catchment. The results show that the forecasts are able to predict the peak flow several hours in advance. The blue area shows the uncertainty in the predictions. These types of forecasts can benefit the local population and emergency services before and during a major flooding event, and they can help to manage flood risk.
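The extrapolation principle behind nowcasting can be sketched very simply: given a motion vector estimated, for example, by cross-correlating successive scans, the latest rainfall field is advected forward in time. The field, motion vector, and grid in the sketch below are illustrative, and real nowcasting systems also account for growth and decay and blend with NWP at longer lead times.

```python
# Toy radar nowcast by extrapolation: advect the latest rainfall field forward
# using an estimated storm-motion vector (here as integer grid shifts).
import numpy as np

rain = np.zeros((10, 10))
rain[2:4, 2:4] = 8.0                      # a small rain cell (mm/h) on a 1-km grid

motion = (1, 2)                           # estimated displacement in grid cells per step (rows, columns)

def nowcast(field, motion, steps):
    """Shift the rain field by the motion vector for the given number of steps."""
    dy, dx = motion
    return np.roll(np.roll(field, dy * steps, axis=0), dx * steps, axis=1)

forecast = nowcast(rain, motion, steps=2)   # e.g., two 30-min steps ahead
print(np.argwhere(forecast > 0))
```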

Figure 7.

Hydrological forecasts in the Upper Medway catchment in the UK using radar nowcasts [28]. The forecasts were initialized at 14:15 (left) and 16:15 (right).

6. Concluding comments

Weather radar can measure precipitation over large areas in real time and has the advantage of providing rainfall measurements with high spatial and temporal resolutions. Although radar rainfall measurements can be affected by different error sources, there are many algorithms in the literature to control the quality of radar rainfall. Polarimetric weather radars bring several benefits, including improvements in radar data quality, identification of hydrometeors, attenuation correction, and radar rainfall estimation. Weather radars have demonstrated a huge potential for real-time flood forecasting applications.

Author details

Miguel A. Rico-Ramirez

Department of Civil Engineering, University of Bristol, Bristol, United Kingdom

*Address all correspondence to: m.a.rico-ramirez@bristol.ac.uk


Toward the Intelligent Internet of Everything: Observations on Multidisciplinary Challenges in Intelligent Systems Research

Theo Lynn, Pierangelo Rosati and Patricia Takado Endo

Abstract

For over 50 years, commentators have sought to envision a wired or networked society whose social structures and activities, to a greater or lesser extent, are organized around digital information networks that connect people, processes, things, data, and networks. This phenomenon is increasingly called the Internet of Everything. Complexity is a significant concern with the Internet of Everything due to both the volume of heterogeneous entities and the nature of how such entities are related to each other and the wider environment in which they operate. Without intelligence, the Internet of Everything may not reach its full potential, hampered by predefined rules ill-suited to a changing and dynamic physical world. More recently, intelligent systems have emerged that can perceive and respond to the physical and social world around them with a greater degree of autonomy; these systems make things smart. However, such intelligent systems and smart things present both interesting and significant multilevel computational and societal research challenges, not least representing and making sense of a dynamic physical world. This chapter will introduce the Internet of Everything, present the building blocks of Intelligent Systems, and discuss some of the opportunities and challenges for multidisciplinary research in this emerging area as it relates to the Internet of Everything.

Keywords: intelligent systems, IOT, Internet of Things, Internet of Everything, cognitive architectures, privacy

1. Introduction

In their 1973 article, “The Network City,” Craven and Wellman conceptualized cities as a multitude of social networks comprising systems of interaction, systems of resource allocation, and systems of integration and coordination. While at roughly the same time the first Ethernet was invented and the first VoIP phone call was made, information technology is notable by its absence in “The Network City.” Within 5 years, numerous researchers envisioned a “network nation” and “wired society” driven by advances in communications technology [1, 2]. Over the last four decades, the opportunities and challenges of a society mediated by technology have been a major focus of academia, policymakers, and industry, expanding with each new generation of information and communication technology [3, 4]. More recently, the emergence of the so-called third ICT platform, characterized by the ubiquity, convergence, and interdependence of social media, mobile, cloud, big data, and sensor technologies, is transforming how society operates and interacts [5]. Today, the networked society is increasingly a society whose social structures and activities, to a greater or lesser extent, are organized around digital information networks that connect people, processes, things, data, and social networks. This convergence of the virtual (cyber), physical, and human worlds is commonly referred to as the “Internet of Everything”.

The focus by academia, industry, and policymakers on the Internet of Everything is not mere altruism. Estimates of the value of the Internet of Everything to the public and private sectors by 2022 exceed $4.6 trillion and $14.4 trillion, respectively [6, 7]. Improvements to asset utilization and employee productivity, supply chain and logistics, and customer experience, as well as accelerated innovation, are just some of the cited contributions from connecting a relatively small fraction of the 1.4 trillion things and billions of people that we can connect today. Realizing the Internet of Everything requires overcoming numerous technical challenges, not least complexity. The Internet of Things, the next logical step in the evolution toward the Internet of Everything, alone comprises an extremely large number of entities with different storage, computing, networking, and reasoning capabilities and profiles [8]. These entities may operate and interact autonomously in vastly different, dynamic, and uncertain environmental conditions where time may or may not be of the essence. The heterogeneity and scale of the Internet of Everything, the uncertainty and dynamism of the environments in which people and things operate and interact, and the criticality of information and data requirements demand novel approaches to managing complexity, not least in deciding where decisions should be made—locally at the edge (by the thing), centrally, or somewhere in between—if they can be made locally at all. Recently, intelligent systems have emerged that can perceive and respond to the physical and social world around them with a greater degree of autonomy; these systems make things smart. However, such intelligent systems and smart things present both interesting and significant multilevel computational and societal research challenges, not least representing and making sense of a dynamic physical world.

The remainder of the chapter is organized as follows: Section II introduces intelligent systems, a conceptual framework, and design principles for general information system architecture. Section III provides an overview of intelligent methods and paradigms used for analyzing data and supporting decision-making in intelligent systems. Section IV discusses the Intelligent Internet of Everything and some of the opportunities and challenges for such a concept, after which the chapter concludes.

2. Intelligent systems

In computer science, intelligent systems research has its roots in the natural sciences and the study of natural systems, specifically how intelligent behavior occurs. Since the 1950s, computer scientists have sought to understand the nature of intelligence by constructing artifacts that exhibit the same breadth and depth of cognition as humans and other biological entities, such as ant colonies, swarms, etc. [9]. The search for artificial intelligence (AI) is a search for systems that think like humans, think rationally, act like humans, and act rationally [10]. In this respect, AI and intelligent systems are often used synonymously. Both involve agents whose behavior is informed by inputs from their environment and who take actions that maximize their probability of achieving a goal [11]. However, some commentators suggest that intelligent systems are simply what it says on the tin: systems with some degree of intelligence. In this respect, some relatively simple intelligent systems may not be perceived to be AI. This is, most likely, a reflection of the so-called AI effect, that is, once an AI can successfully solve a problem, it is no longer considered part of AI [12, 13]. Kotseruba and Tsotsos [14] suggest that instead of looking for a particular definition of intelligence, it may be more practical to evaluate systems against the set of competencies and behaviors demonstrated by the system.

2.1 Defining intelligent systems

Like many topics, intelligent systems have both wide and narrow definitions. As the AI domain has evolved, it has fragmented into a variety of subfields, which may be neither useful for those approaching intelligent systems for the first time nor necessarily informative for a general definition. Nonetheless, a starting point is needed. For the purposes of this chapter, we define intelligent systems as systems with the ability to:

“…act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system’s ultimate goal.” ([15], p. 8).

2.2 Toward a general intelligent system architecture

Langley [16] notes that the fragmentation of the AI domain has led to the proliferation of three primary architectural paradigms—multiagent systems, blackboard systems, and cognitive architectures. Table 1 summarizes the main characteristics of these architectural paradigms.

Architectural paradigm: Definition
Multiagent systems
  • Distinct modules for different facets of an intelligent system

  • Modules communicate directly with each other

  • Architecture specifies inputs and outputs of each module and protocols for communication

  • Architecture places no constraints on how each component operates

Blackboard systems
  • Distinct modules for different facets of an intelligent system

  • No direct communication between modules

  • Modules read and alter a shared memory of beliefs, goals, and short-term structures

Cognitive architectures
  • Short-term and long-term memories that store the agent’s beliefs, goals, and knowledge

  • Representation and organization of structures embedded in memories

  • Functional processes that operate on structures including performance and learning mechanisms

  • A programming language to construct knowledge-based systems that embody the architecture’s assumptions

Table 1.

Architectural paradigms in AI [16].

For the purposes of this chapter, we focus on Albus and Meystel’s [15] reference architecture for intelligent systems, known as the real-time control system (RCS) architecture. RCS has evolved from a robot control schema to one for intelligent system design and has continued to expand while maintaining the validity of its core design principles. In its current iteration, 4D/RCS, it is most correctly a cognitive architecture, but as a conceptual architecture for intelligent systems, it accommodates multiple paradigms and approaches to intelligent system design including Dickmanns 4-D approach, behaviorist architectures, and others [17]. Again, while some literature differentiates between intelligent systems and cognitive architectures based on their capacity to evolve through development and use of knowledge to perform new tasks [18], Kotseruba and Tsotsos [14] note others are not as prescriptive.

RCS comprises four functional elements, or basic types, of processing module (behavior generation, sensory perception, world modeling, and value judgment), supported by a knowledge database module as presented in Table 2.

Concept | Definitions
Behavior generation | The planning and control of action designed to achieve behavioral goals
Agent | A set of computational elements that plan and control the execution of jobs, correcting for errors and perturbations along the way
Sensory perception | The transformation of data from sensors into meaningful and useful representations of the world
World modeling | A process that performs four principal functions:

  1. Uses sensory input to construct, update, and maintain a knowledge database

  2. Answers queries from behavior generation regarding the state of the world

  3. Simulates results of possible future plans

  4. Generates sensory expectations based on knowledge in the knowledge database

Value judgment | A process comprising:

  1. The computation of cost, risk, and benefit of actions and plans

  2. The estimation of the importance and value of objects, events, and situations

  3. The assessment of the reliability of information

  4. The calculation of reward or punishment resulting from perceived states and events

Knowledge database | A set of data structures filled with the static and dynamic information that provides a best estimate of the state of the world and of the processes and relationships that affect events in the world
The knowledge database contains (i) state variables, (ii) entity frames, (iii) event frames, (iv) rules and equations, (v) images, (vi) maps, and (vii) task knowledge
The knowledge database has both long-term (static or slowly varying) and short-term (dynamic) memory
Entities-of-attention are entities that have either been specified by the current task or are particularly noteworthy entities observed in current memory input
Node | A part of a control system that processes sensory information, maintains a world model, computes values, and generates behavior

Table 2.

Selected definitions for intelligent systems [15].

Together, these modules are implemented in a single architecture with a communication system conveying messages between the various modules and the database module. The level of intelligence in the system is determined by (i) the computational power of the system, (ii) the sophistication of its algorithms for behavior generation, sensory perception, value judgment, world modeling, and global communication, (iii) the information and values the system has stored in its memory, and (iv) the sophistication of the system's processes of functioning [15].

For Albus and Meystel, internal and external complexity is managed through hierarchical layering and focused attention, respectively. Recognizing that intelligent systems, in themselves, are extremely complex, they assume a hierarchical control system where higher level nodes are more strategic with longer time horizons and as such less concerned with detail, whereas lower level nodes have a narrower, more operational focus, shorter time horizon, and a greater emphasis on detail. In this way, sensors at the lowest level of the hierarchy process data locally and relatively regularly, while data from the aggregate sensor network are transmitted up the hierarchy for processing over longer time horizons and on a more global basis. Similarly, the concept of focused attention recognizes that in an uncertain and dynamic environment, limitations in the computation capacity of a given node require focus only on “what is important and ignoring what is irrelevant” ([19], p. 16).
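The following sketch illustrates, in a simplified and purely hypothetical form, how the four RCS processing modules, a knowledge database, and hierarchical layering might fit together in code; it is not the 4D/RCS reference implementation, and all names, thresholds, and the toy obstacle scenario are assumptions made for illustration.

```python
# Illustrative sketch (not the 4D/RCS reference implementation): one RCS-style node
# wires together the four processing modules and a knowledge database; nodes can be
# stacked hierarchically, with higher levels planning over longer horizons.
class KnowledgeDatabase:
    def __init__(self):
        self.state = {}                        # state variables, entity/event frames, maps, ...

class Node:
    def __init__(self, name, horizon_s, child=None):
        self.name, self.horizon_s, self.child = name, horizon_s, child
        self.kdb = KnowledgeDatabase()

    def sensory_perception(self, raw):         # sensors -> meaningful representation
        return {"obstacle_distance_m": raw.get("range_m", float("inf"))}

    def world_modeling(self, percept):         # update best estimate of the world state
        self.kdb.state.update(percept)
        return self.kdb.state

    def value_judgment(self, plan):            # cost/risk/benefit of a candidate plan
        return -1.0 if plan == "stop" else self.kdb.state.get("obstacle_distance_m", 0) - 1.0

    def behavior_generation(self, world):      # choose a plan, delegate detail to the child node
        plan = "advance" if self.value_judgment("advance") > self.value_judgment("stop") else "stop"
        return self.child.step({"range_m": world.get("obstacle_distance_m")}) if self.child else plan

    def step(self, raw_sensor_data):
        return self.behavior_generation(self.world_modeling(self.sensory_perception(raw_sensor_data)))

# Two-level hierarchy: a strategic node (long horizon) over an operational node (short horizon).
controller = Node("mission", horizon_s=600, child=Node("servo", horizon_s=0.1))
print(controller.step({"range_m": 3.2}))       # -> "advance"
```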

The RCS reference architecture has been designed to integrate concepts from not only other areas of computer science but also control theory, operations research, amongst others [20]. As technology has evolved, so have expectations for what can be achieved in and with intelligent systems. Two related paradigms that have gained significant traction in intelligent systems design in the last two decades are self-organization and self-management, both of which are seen as solutions for managing extreme complexity. De Wolf and Holvoet [21] define self-organization as “a dynamical and adaptive process where systems acquire and maintain structure themselves, without external control.” They summarize the essential characteristics of self-organizing systems as:

  1. Increase in order—an increase in order (or statistical complexity), through organization, is required from some form of semiorganized or random initial conditions to promote a specific function.

  2. Autonomy—this implies the absence of external control or interference from outside the boundaries of the system.

  3. Adaptability or robustness with respect to changes—a self-organizing system must be capable of maintaining its organization autonomously in the presence of changes in its environment. It may generate different tasks but maintain the behavioral characteristics of its constituent parts.

  4. Dynamical—self-organization is a process from dynamism toward order.

While self-organization has its roots in emergence, they differ in how robustness is achieved. Interest in emergence in computing can be traced back to Turing [22] who noted that “global order arises from local interactions.” Emergence can be defined as follows:

“A system exhibits emergence when there are coherent emergents at the macro-level that dynamically arise from the interactions between the parts at the micro-level. Such emergents are novel with regards to the individual parts of the system.” ([21], p. 3).

The essence of the difference between emergence and self-organization is directional. Emergence arises from micro to macro, whereas self-organization is determined from macro to micro. While these seem like discrete paradigms, in reality they can be used together.

The related paradigm of self-management has its roots in autonomic computing and IBM’s conceptualization of autonomic computing as “computing systems that can manage themselves given high-level objectives from administrators” [23]. Kephart and Chess [23] further elaborated the essence of autonomic computing systems through four aspects of self-management—self-configuration, self-optimization, self-healing, and self-protection—underpinned by the use of control or feedback loops that collect details from the system and act accordingly, anticipating system requirements and resolving problems with minimal human intervention [24]. In this way, these design principles are consistent with Albus and Meystel [15]. Table 3 summarizes the definitions of the four aspects of self-management.

Concept | Definitions
Self-configuration | Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly.
Self-optimization | Components and systems continually seek opportunities to improve their own performance and efficiency.
Self-protection | The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent system-wide failures.
Self-healing | The system automatically detects, diagnoses, and repairs localized software and hardware problems.

Table 3.

Self-management aspects of autonomic computing (adapted from [23]).
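As a rough illustration of the feedback loops described above, the sketch below implements a toy self-management cycle (monitor, analyze against a high-level policy, plan and execute a remediation). It is a minimal sketch in the spirit of autonomic computing; the metric names, thresholds, and remediation actions are hypothetical and are not taken from [23] or [24].

```python
# Illustrative sketch of a simple self-optimizing/self-healing feedback loop:
# monitor metrics, analyze against policy, plan and execute a remediation with
# minimal human intervention. Names and thresholds are hypothetical.
import random

POLICY = {"max_latency_ms": 200, "max_error_rate": 0.05}   # high-level objectives

def monitor():
    return {"latency_ms": random.uniform(50, 400), "error_rate": random.uniform(0.0, 0.1)}

def analyze(metrics):
    issues = []
    if metrics["latency_ms"] > POLICY["max_latency_ms"]:
        issues.append("scale_out")          # self-optimization
    if metrics["error_rate"] > POLICY["max_error_rate"]:
        issues.append("restart_replica")    # self-healing
    return issues

def plan_and_execute(issues):
    for action in issues:
        print(f"executing remediation: {action}")

for _ in range(3):                          # the control loop would run continuously in practice
    plan_and_execute(analyze(monitor()))
```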

Taking into account the concepts of emergence and autonomic computing, Pfeifer et al. [25] set out a series of design principles for intelligent systems, albeit in the wider context of robotics. These principles, summarized in Table 4 below, pertain to both the design procedure and the design of agents, and in themselves represent significant research.

Concept | Definitions
Design procedure principles
Synthetic methodology | Understanding by building models and abstracting general principles from the constructed model
Emergence | Systems should be designed for emergence (for increased robustness and adaptivity)
Diversity-compliance | Solving the trade-off between exploiting the givens and generating diversity in interesting ways
Time perspectives | Three perspectives are required: (i) state-oriented (“here and now”), (ii) ontogenetic (“learning and development”), and (iii) phylogenetic (“evolutionary”)
Frame of reference | Three aspects must be distinguished: (i) perspective (agent, observer, or designer), (ii) behavior versus mechanisms (behavior should be the result of system-environment interaction), and (iii) complexity of agent-environment interaction
Agent design principles
Three constituents | Definitions of the ecological niche (environment), the tasks, and the agent itself are always required
Complete agent | Embodied, autonomous, self-sufficient, and situated agents are of interest
Parallel, loosely coupled processes | Intelligence is emergent from parallel, asynchronous, partly autonomous processes, largely coupled through interaction with the environment
Sensor-motor coordination | Behavior is sensor-motor coordinated with respect to a target; self-generated sensory stimulation
Cheap design | Exploitation of the niche and of interaction; parsimony
Redundancy | Partial overlap of functionality in different subsystems based on different physical processes
Ecological balance | Balance in the complexity of sensory, motor, and neural systems; task distribution between morphology, materials, and control
Value | Driving forces; developmental mechanisms; self-organization

Table 4.

Design principles for intelligent systems [25].

It would be wrong to limit the discussion of intelligent systems architecture to RCS and the paradigms and design principles above; however, a more comprehensive review is outside the scope of this chapter. In their recent survey of over 40 years of research related to cognitive architectures, Kotseruba and Tsotsos [14] identify 84 architectures, including 49 that are still actively developed. These architectures can be, and have been, classified using a wide range of taxonomies, including the type of representation and information processing they implement, as well as the capabilities, properties, and evaluation criteria for the various architectures as reported in publications [14].

3. Intelligent methods

By design, intelligent systems typically acquire and must model and analyze big data, data characterized by its scale and complexity in volume, velocity, variety, variability, and veracity [26]. In addition, and in particular in the context of the Internet of Everything, they must deal with the behavior of unpredictable entities, in this case people, in uncertain and dynamic environments. Depending on the criticality of a decision, for example, in healthcare monitoring scenarios, intelligent systems may need to make trade-offs between the deliberation time, the cost, and the quality of results [27]. While traditional mathematical approaches to modeling and analysis were able to arrive at appropriate decisions when used in relatively simple intelligent systems, it was at a cost, both in terms of time and computation. Sometimes approximate reasoning is good enough and indeed preferable.

From the early 1980s onwards, researchers sought to formalize the cognitive processes of humans and particularly human tolerance for imprecision and uncertainty, so-called soft computing or computational intelligence (CI) [28]. Such techniques are now used widely, independently, and together in intelligent systems to support better and quicker decision-making. Some of the most common categories of intelligent methods include expert systems, artificial neural networks, fuzzy set systems, rough set theory, and evolutionary computing [29, 30]. These are briefly summarized in Table 5. It is important to note that these methods or techniques may be implemented discretely or together. Indeed, hybrid intelligent systems are an increasing feature of intelligent systems research.

Concept | Definition
Expert systems | Expert systems are a branch of artificial intelligence aiming to simulate the knowledge acquisition process and reasoning of domain experts in order to complete complex tasks [31]. They typically comprise a knowledge base, an inference engine, and a user interface
Artificial neural networks | Artificial neural networks (ANNs) are computing systems that aim to replicate the structure and function of human neurons. An ANN typically consists of a number of interconnected processors, the artificial neurons, which receive inputs similar to the electrochemical impulses that biological neurons receive from other neurons and send similar outputs to other neurons [32]
Fuzzy set systems | Fuzzy systems represent an extension of traditional expert system techniques as they leverage the mathematical theory of fuzzy sets in order to deal with typical real-world uncertainty and simulate normal (and less precise) human reasoning [32]
Rough set theory | Computing systems based on rough set theory implement a purely mathematical approach to deal with uncertainty and imperfect knowledge. The key advantage of this type of system is that it does not need any prior or further information about the data [29]. Rough set theory is mostly used for enabling automated classifiers and for attribute selection [33]
Evolutionary computing | Evolutionary computing algorithms, also known as genetic algorithms, are designed to deal with randomness and nonlinear, multidimensional functions [29]. Genetic algorithms tend to provide satisfactory results for very complex tasks; therefore, they represent a valuable alternative when suboptimal results are acceptable [30]

Table 5.

Summary of selected intelligent methods.
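To ground one of these methods, the sketch below shows a deliberately tiny fuzzy system (triangular membership functions, three rules, weighted-average defuzzification) that maps an imprecise temperature reading to a fan speed. The membership ranges, rules, and output values are hypothetical choices for illustration and are not taken from the cited works.

```python
# Illustrative sketch of one intelligent method from Table 5: a tiny fuzzy system that
# maps an imprecise room temperature to a fan speed. Membership functions, rules, and
# the weighted-average defuzzification are hypothetical choices.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    cold = triangular(temp_c, 0, 10, 20)
    warm = triangular(temp_c, 15, 25, 35)
    hot  = triangular(temp_c, 30, 40, 50)
    # Rules: cold -> speed 10%, warm -> 50%, hot -> 90%; defuzzify by weighted average.
    weights, outputs = [cold, warm, hot], [10, 50, 90]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(round(fan_speed(28.0), 1))   # graded answer between "warm" and "hot"
```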

As discussed earlier, not all intelligent systems are equally advanced; they vary in terms of their reasoning and their ability to adapt and evolve. Kotseruba and Tsotsos [14] identify three paradigms in cognitive architectures: symbolic (cognitivist), emergent (connectionist), and hybrid. In the symbolic paradigm, concepts are represented using symbols that can be manipulated using predefined instructions and then implemented as if-then rules applied to the symbols representing the facts known about the world. Symbolic paradigms are particularly useful for planning and reasoning; they are more intuitive, easier to understand, and as a result, more common. However, the symbolic paradigm can have difficulty with perceptual processing and adapting to dynamic and uncertain environments [14]. In contrast, the less common emergent approach deals with changing environments through massively parallel models but, as a result, loses transparency. These emergent approaches include many of the so-called intelligent methods presented in Table 5. Unsurprisingly, therefore, hybrid architectures are increasingly the most common [14].

Exploring the pros, cons, and state of the art of this plethora of techniques is beyond the scope of this chapter. However, while deep learning and artificial intelligence research are producing impressive results on a case-by-case basis, there are a number of bottlenecks, not least the human element. Intelligent systems require, if not demand, a knowledge database comprising human domain and task knowledge to make sense of this data and ensure meaningful insights are generated. Not only is access to domain experts a bottleneck, but there is also a dearth of data scientists to capture this knowledge and to design and implement models based on it.

4. Toward an intelligent Internet of Everything

The architecture for a future Internet of Everything has yet to be determined. Indeed, the very concept of the Internet of Everything lacks definition. Yet, despite this, there is what one might term an “irrational exuberance” about the benefits and value it might generate. We propose an extended definition of the Internet of Everything that includes intelligence at its core. Thus, we define an Intelligent Internet of Everything as a system of systems that connect people, processes, things, data, and social networks and through intelligent systems proactively create new value for individuals, organizations, and society as a whole. It assumes a vision of ambient intelligence (see Figure 1) where we live, work and interact in, and with a digitally infused environment that “proactively, but sensibly, supports people in their daily lives” ([34], p. 15). Achieving this vision presents significant opportunities and challenges, not least advances in ubiquitous sensing, cognitive architectures, adaptive infrastructure, and privacy by design.

Figure 1.

The ambient-intelligence vision from an artificial intelligence perspective [34].

4.1 Ubiquitous sensing

An Intelligent Internet of Everything built on intelligent systems assumes not only ubiquitous computing but a ubiquitous sensing network. Ubiquitous sensing provides rich multimodal sensing abilities to the ubiquitous computing infrastructure without increasing the burden on the users [35]. The last two decades have seen a rapid advance in the sophistication of sensors including intelligent sensors, in vivo sensors, and sensors that can increasingly mimic the biological senses. Like intelligent systems, intelligent sensors can take actions based on data from the environment in which they operate rather than merely measuring a signal [36].

The development of the next generation of sensors is closely linked to innovation in nanomaterials and nanotechnology methods, which both enable advanced functionalities to be embedded into sensors and drive down costs, thereby increasing the adoption and proliferation of sensors [36]. Similarly, nanotechnology is contributing to the development of electrochemical, optical, and magnetic resonance biosensors that are transforming our understanding of biology and treatment of disease including novel drug discovery for this purpose [37, 38]. As well as in vivo sensors, increasingly sophisticated sensors are providing multimodal sensing to not only allow machines to see and hear but also touch [39], taste [40], and smell [41] like humans. Advances in computer processing, networking, and miniaturization are introducing a new generation of bio-inspired mobile, wearable, and embedded sensors. These bio-inspired sensors are enabling machines to detect, alert, and prevent harmful events such as chemical threats in an unobtrusive fashion previously the domain of science fiction [42].

Despite the progress in developing increasingly novel and sophisticated sensors, Paulovich et al. [36] highlight the need for greater research into new materials and detection methods for wearable and implantable devices, and specifically into device engineering for both producing and commercializing low-cost, robust sensing units. As in all interdisciplinary research areas, the success of sensing and biosensing requires not only a greater understanding of the technologies involved but also of the underlying biophysical mechanisms, activities, and events inspiring sensor, and specifically biosensor, development. Furthermore, before intelligent systems can be relied upon, particularly in high-criticality situations or in vivo, concerns relating to predictability, reproducibility, and sensor drift and calibration, amongst others, must be addressed over a sustained period [36].

4.2 Cognitive architectures

Ambient intelligence and the wider Intelligent Internet of Everything assume that once our sensing network captures data, we will have the systems in place to analyze the various sensing inputs to interpret, analyze, plan, act, and otherwise interact with other systems and humans [34]. Despite significant advancements, the state of the art in cognitive architectures remains at a relatively early stage when measured against the 3000+ cognitive abilities of humans [43]. Both Ramos et al. [34] and Kotseruba and Tsotsos [14] emphasize the need for advancements in approaches to model human cognition and increase the range of supported cognitive abilities. To this end, Kotseruba and Tsotsos [14] identify eight areas for further research based on their survey of extant research on cognitive architectures:

  • Adequate experimental validation—thorough testing in more diverse, challenging, and realistic environments, real-world situations, more elaborate scenarios, and diverse tasks.

  • Realistic perception—advancements in active vision, localization and tracking, performance under noise and uncertainty, and the use of context information to improve detection and localization.

  • Human-like learning—more robust and flexible learning mechanisms, knowledge transfer, and accumulation of knowledge without affecting prior learning.

  • Natural communications—advancements in verbal communications including knowledge bases for generating dialogs, robustness, detection of emotional response and intentions, and personalized responses as well as performing and detecting other nonverbal human communications.

  • Autobiographic memory—episodic memory and lifelong memory.

  • Computational performance—computational efficiency in embodied and nonembodied architectures, and time and space complexity. See the discussion of infrastructure in Section 4.3 below.

  • Comparative evaluation of cognitive architectures—combination of (i) objective and extensive evaluation procedures and (ii) theoretical analysis, software testing techniques, benchmarking, subjective evaluation, and challenges, to every aspect of the cognitive architecture and probing multiple abilities.

  • Reproducibility of results—fuller technical detail and greater access to software and data.

4.3 Infrastructure

Conventional cloud architectures allocate functionality against a very different set of design principles than the emerging Intelligent Internet of Everything. Decisions are made within the boundaries of a cloud and its infrastructure. The emergence of the Internet of Everything requires the allocation and/or optimization of resources and activities along a Cloud-to-Things (C2T) continuum, accommodating computation at the edge as well as intermediary data management and processing capabilities, commonly known as fog computing. Latency requirements, network bandwidth constraints, device profiles, uninterrupted service provision with intermittent connectivity to the Cloud, cyber-physical devices, and security are just some of the design and research challenges that intelligent systems and the Internet of Everything will face [44]. Given the scale, device (thing) heterogeneity, and dynamism of the Internet of Everything, existing approaches for resource provisioning and remediation are inadequate. As this complexity increases, the need for new approaches will become increasingly acute.

The cloud is an essential component of future large-scale intelligent systems and the Internet of Everything. However, it requires fundamental changes to how such systems and their underlying infrastructure are designed and managed. Cloud computing data centers today largely leverage homogeneous hardware and software platforms to support cost-effective high-density scale-out strategies. The advantages of this approach include uniformity in system development, programming practices, and overall system capability, resulting in cost benefits to the cloud service provider [45]. There are significant disadvantages too, not least energy inefficiencies and suboptimal performance for specific use cases. Intelligent systems require significant data management support both for memory-storage and real-time processing [14, 36]. While the cloud has near-infinite low-cost storage, intelligent systems will also require high-throughput communications. Similarly, existing commodity processors may not be suitable for the information processing tasks and techniques used in the cloud. Specialist coprocessor architectures with relatively favorable computation/power consumption ratios are emerging that are better equipped to deal with intelligent methods and techniques, including GPUs, many integrated cores (MICs), and data flow engines (DFEs); however, their use in a cloud context remains a specialist and niche market.

As in the wider domain of intelligent systems, concepts such as emergence and self-principles are being explored as methods to address complexity in the underlying networks and infrastructure that supports today’s Internet and the future Internet of Everything. For example, Östberg et al. [46] seek to specifically address the optimization of network and cloud resources in Internet of Things scenarios through more intelligent instantiation of services close to the end users that require them across the C2T continuum. Given the dynamism and nature of the Internet of Everything and to address greater variability in Quality of Service (QoS) levels, future infrastructure will need to be able to pre-emptively take reconfiguration and remediation actions in a fully autonomic fashion. They argue that this can be achieved through the concerted activation of (i) the prediction of the evolution of workload and application performance, (ii) the simulation of different deployments across the C2T continuum, (iii) the optimization of the deployment given the output of historic and real-time analysis, and (iv) the relocation of services and application components to achieve the required QoS. Similarly, Xiong et al. [47] propose a novel architecture to manage heterogeneous resources (including new processor architectures) and to improve service delivery in clouds based on a loosely coupled, hierarchical, self-adapting management model, deployed across multiple layers. This project is noteworthy in the context of intelligent systems as it specifically includes mechanisms to identify and provision appropriate resources (including specialist processors) to support the process parallelism associated with high performance services such as those required by intelligent systems.
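To illustrate the kind of autonomic placement logic described above, the sketch below walks through a toy predict-simulate-optimize-relocate cycle for positioning a service along the C2T continuum against a latency target. It is a minimal sketch, not the RECAP implementation; the site latencies, capacities, service name, and forecasting rule are all hypothetical.

```python
# Illustrative sketch (hypothetical names and figures, not the RECAP implementation) of a
# predict-simulate-optimize-relocate cycle for placing a service along the
# Cloud-to-Things continuum so that it meets a latency QoS target.
SITES = {"edge": 10, "fog": 40, "cloud": 120}        # round-trip latency in ms (assumed)
CAPACITY = {"edge": 2, "fog": 20, "cloud": 10_000}   # available instances (assumed)

def predict_workload(history):
    return sum(history[-3:]) / 3                      # naive short-term forecast

def simulate(site, instances_needed):
    """Feasible if the site has capacity; the 'cost' of a deployment is its latency."""
    return SITES[site] if CAPACITY[site] >= instances_needed else float("inf")

def optimize(instances_needed, qos_latency_ms):
    feasible = {s: simulate(s, instances_needed) for s in SITES}
    best = min(feasible, key=feasible.get)
    return best if feasible[best] <= qos_latency_ms else None

def relocate(service, current_site, target_site):
    if target_site and target_site != current_site:
        print(f"relocating {service}: {current_site} -> {target_site}")
        return target_site
    return current_site

demand_history = [1, 2, 6]                            # instances requested per interval
site = "cloud"
site = relocate("video-analytics", site, optimize(round(predict_workload(demand_history)), 50))
```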

4.4 Trust, privacy, and security

Trust is commonly defined as “the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” ([48], p. 395). Trust decisions are typically made based on an assessment of (i) ability, (ii) benevolence, and (iii) integrity [49]. These have been transposed to the IT domain as (i) functionality and performance (the capability of a technology), (ii) helpfulness (access and support when using a technology), and (iii) reliability and predictability (consistency and dependability) [50, 51]. There is a large established literature that suggests that trust, and the lack thereof, is a major driver, and barrier, of information and communication technology adoption. This is particularly the case at the early stages of adoption and interaction with a new technology or service and is further complicated by environmental uncertainties as well as the risks posed by the potential of the malfunctioning of a product or service and the acts of malicious third parties [52–55]. Intelligent systems and the Internet of Everything are not immune from these risks.

As well as the technological logistics of data management in intelligent systems, it is foreseeable that end users will face similar concerns to cloud computing relating to the location, integrity, portability, security, and privacy of their data both as used by intelligent systems and the wider Internet of Everything. Indeed the underlying technologies we have discussed—ubiquitous sensing, cognitive architectures, and self-organizing, self-managing infrastructure—are all likely to exacerbate data privacy risks and concerns rather than ameliorate them.

Ubiquitous sensing can lead to the capture and storage of data indiscriminately and indeed with the permission of consumers [56]. Consumers may accept increased surveillance and intrusions into their privacy for a variety of reasons, not least the benefits outweighing the costs and perceptions of digital inevitability and transformation [57]. Some commentators have argued that consumers may indeed be technologically unconscious and simply unaware of what happens to their data and the implications for their privacy, identity, and well-being [58]. Indeed in the context of ambient intelligence data on third parties may be captured without their knowledge or express permission at all. There is a significant trade-off between data privacy and data utility and consumer acceptance of sensing/surveillance and their exploitation through sensing/surveillance [57, 59]. Many commentators have suggested that ubiquitous sensing/surveillance can erode reasonable expectations of privacy for society as a whole and result in mental health issues, pernicious or the perception of pernicious surveillance by others including organizations and government as well as changing how we think about accountability and identity [56].

Cognitive architectures similarly present issues around trust, privacy, and security. Following on from McKnight et al. [50] and Söllner et al. [51], Kotseruba and Tsotsos [14] suggest a need for greater and more robust evaluation, validation, and reproducibility of the performance, reliability, and predictability of cognitive architectures before they can be trusted. The reasoning and information processing techniques used in intelligent systems are based on human knowledge and, as a result, incorporate human biases, including those relating to race, gender, and age [60]. Specific techniques based on emergent paradigms present other issues relating to transparency and, as a result, assurance and accountability.

Trust issues relating to infrastructure, and particularly the cloud, are well documented. Governance issues, data loss and leakage, and shared technology vulnerabilities are amongst the top threats to cloud computing identified by independent international organizations [61]. These are likely to present themselves in the context of the Intelligent Internet of Everything and raise complex ethical, legal, and regulatory questions. Who owns data and metadata generated by the interaction of the various entities in the Internet of Everything? How will data integrity and availability be managed? What is acceptable use of Internet of Everything data? Who will regulate it, if at all? These issues will be exacerbated by an extremely complex, and most likely cross-border, chain of service provision. Similarly, securing the Internet of Everything will likely be a significant challenge in itself. The envisioned proliferation of hyper-connected entities is already introducing, and will continue to introduce, a wide range of new security risks and challenges for individuals but also for organizations, where connected end-points can be used as attack vectors for distributed denial-of-service attacks. Together, all these factors will dilute assurance, accountability, and transparency with regard to data privacy, protection, and security unless addressed.

5. Discussion and conclusions

The Internet of Everything is at an early stage of conceptualization. Research suggests that it will generate significant value for the public and private sectors and, as a result, for society as a whole. Due to the scale of the Internet of Everything vision and its proposed centrality in people's lives, it introduces levels of complexity orders of magnitude greater than today's and beyond human management capabilities to process, analyze, and interpret data in a timely, reliable, and accurate manner.

Intelligent systems both create value for consumers, organizations, and society and offer a potential solution for managing complexity in the Internet of Everything. In the Intelligent Internet of Everything, intelligence is distributed throughout the Internet of Everything at smart end-devices, fog nodes, and in the cloud, and depending on the criticality of a given decision, deliberation occurs at the most appropriate point. An Intelligent Internet of Everything is not without significant technical and societal challenges. Such a system of systems requires a high degree of standards-based interoperability that currently does not exist, as well as advancements in behavior generation, sensory perception, world modeling, value judgment, and knowledge databases. It also raises questions of trust, privacy, security, and regulation that cannot and should not be ignored.


Acknowledgements

The research work described in this chapter was partly supported by the Irish Centre for Cloud Computing and Commerce, an Irish National Technology Centre funded by Enterprise Ireland and the Irish Industrial Development Authority and by the European Union’s Horizon 2020 Research and Innovation Programme through RECAP (http://www.recap-project.eu) under Grant Agreement Number 732667.

Author details

Theo Lynn*, Pierangelo Rosati and Patricia Takado Endo

Irish Institute of Digital Business, Dublin City University, Ireland

*Address all correspondence to: theo.lynn@dcu.ie


Hydrological Modeling in the Rio Conchos Basin Using Satellite Information

Paul Hernández-Romero, Carlos Patiño-Gómez, Benito Corona-Vázquez and Polioptro Martínez-Austria

Abstract

Hydrologic modeling is a useful tool for integrated water management in watersheds. Its construction generally requires precipitation information and different physical parameters of the watershed. In Mexico, the availability of official rainfall information that is reliable and sufficient is a challenge today. In some areas, there is the right amount of information but of poor quality, while other locations have few climatological stations. However, using the tools of the Industrial Revolution 4.0, we have satellite information that can supply additional records that are not available in the network of climatological stations. Nevertheless, it is necessary to evaluate the quality and usefulness of satellite information, for which an intercomparison exercise between sources of information is useful and sometimes necessary. An alternative for obtaining updated satellite records is the CLIMATESERV database of the GLOBALSERV. The objective of this work is to present the analysis and the results obtained from the hydrological simulation corresponding to the year 1981, considering as the case study area the upper Río Conchos basin. For the generation of rainfall time series, the Rapid Extractor of Climatological Information database, version III (ERIC III 3.2, by its Spanish acronym), and the CLIMATESERV database were used.

Keywords: hydrological modeling, Rio Conchos basin, satellite climate information, HEC-HMS, CLIMATESERV

1. Introduction

In the region where the Río Conchos basin is located, the pressure on water resources (total volume of water under concession/volume of renewable water) has risen 27 percentage points in the last 14 years, increasing from 50% in 2003 to 77% in 2017 [1, 2]. This pressure is associated with population growth, the expansion of irrigated land, growing urbanization, water-body pollution, aquifer overexploitation, and climate change.

Taking into consideration the above, it becomes clear that current water management in the region is not responding to the expectations of use for human consumption, the environment, and international responsibilities.

Therefore, for future decision-making about water management in the region, appropriate tools and methodologies are needed. The modeling of the rain-runoff process, which can be supported by a relational data model, is a very useful tool for integrated water management in hydrological watersheds. Such a model aims to determine the amount of water available and to analyze in detail the rain-runoff process in the watershed and, thus, to establish the availability in the tributaries and support decision-making on the distribution of water resources, the proper implementation of integrated water management, and compliance with international treaties in the region. The construction of a hydrological model generally requires precipitation information and different physical parameters of the watersheds. In Mexico, the availability of official rainfall information that is reliable and sufficient is a challenge today, because there are no updated records in some regions. In some areas, there is the right amount of information but of poor quality, while other locations have few climatological stations.

On the other hand, using the tools of the Industrial Revolution 4.0, we now have satellite precipitation information that can reinforce and even compensate for the lack of information available in the network of climatological stations. However, it is necessary to evaluate the quality and usefulness of satellite information, for which an intercomparison exercise between sources of information is very useful and sometimes necessary. An alternative for obtaining updated satellite records is the CLIMATESERV database of the GLOBALSERV. This database is made up of different satellite data sources and terrestrial source records.

The objective of this work is to present the analysis and the results obtained from the simulation of the rain-runoff process corresponding to the year 1981 using the HEC-HMS software, considering as the case study area the Río Conchos-P. de la Colina subbasin, located within the Río Conchos basin.

2. Methodology

2.1 Data collection

2.1.1 Geographic information

The geographic information of the basin was obtained at scales of 1:50,000 and 1:250,000, which are the scales commonly managed by official agencies in Mexico. This information is available in files with the .shp extension (Shapefile) and in raster format for digital elevation models (DEM), which are referenced to the Lambert Conformal Conic projection coordinate system (CCL ITRF 1992), based on the International Terrestrial Reference Frame 1992 datum (D_ITRF_1992). A very complete relational data model was created, including the most relevant base geographic and historical information compiled for the Río Bravo/Grande basin.

2.1.2 Climatological information

For the generation of rainfall time series, two sources were considered: one of them was the database named ERIC III, which contains the meteorological information recorded by terrestrial climatological stations. The second source used was the CLIMATESERV database. The ERIC III database contains official information recorded by climatological stations of the National Water Commission (CONAGUA, by its acronym in Spanish). The average daily precipitation was calculated with the help of the program ArcGIS 10.4, using the Thiessen polygon method. On the other hand, CLIMATESERV contains information from multiple satellite data sources and terrestrial observations, which are combined to create historical rainfall series. Figure 1 shows the average daily precipitation of the year 1981 from the two databases mentioned, which served as the main input parameter for the hydrological model. It is worth mentioning that the difference in millimeters of rainfall between the time series from the two databases is 11.15 mm, equivalent to a discrepancy of 1.63%.

Figure 1.

Precipitation time series of the ERIC III and CLIMATESERV of the year 1981.
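As a rough illustration of the intercomparison reported above, the sketch below computes the annual totals of two daily precipitation series and their percentage discrepancy. It is a minimal sketch assuming the ERIC III and CLIMATESERV daily series are already loaded as arrays; the values shown are placeholders, not the actual data behind Figure 1.

```python
# Illustrative sketch: compare the annual rainfall totals of two daily precipitation
# series (station-based vs. satellite-based). The arrays are placeholders only.
import numpy as np

eric_iii    = np.array([0.0, 3.2, 0.0, 12.5, 1.1])   # mm/day (placeholder)
climateserv = np.array([0.0, 2.9, 0.4, 11.8, 1.3])   # mm/day (placeholder)

total_station, total_satellite = eric_iii.sum(), climateserv.sum()
difference_mm = abs(total_station - total_satellite)
discrepancy_pct = 100 * difference_mm / total_station

print(f"difference = {difference_mm:.2f} mm ({discrepancy_pct:.2f}% of the station total)")
```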

2.1.3 Hydrometric information

Hydrometric information was obtained from the National Surface Water Data Bank (BANDAS, by its acronym in Spanish) of the CONAGUA. In addition, naturalized flow information from the Texas Commission on Environmental Quality (TCEQ) was used [3]. The study area contains only one hydrometric station, 24400-Llanitos, but it is located in the upper part of the basin, so its information could not be used. Thus, it was determined to use the hydrometric station 24077-Colina and the hydro-climatological station 8055-La Boquilla, which are located at the outlet of the watershed. Figure 2 shows the location of the hydrometric stations in the study basin, and Figure 3 shows the average daily runoff of the year 1981 at the stations mentioned, which was used for calibrating the hydrological model.

Figure 2.

Location of hydrometric stations.

Figure 3.

Average daily runoff of year 1981.

2.1.4 Physiographic information

The area of the Río Conchos-P. de la Colina watershed is 20,814 km², which was obtained with the help of the ArcGIS software, taking as the base layer the .shp file of hydrological subregions from the CONAGUA. The time of concentration (Tc) was calculated using the Kirpich equation, which relates the length (Li) and the slope (Si) of the main channel. The Taylor-Schwarz method was used for the slope calculation. According to the U.S. Department of Agriculture [4], the relationship between the Tc and the peak time (Tp) is Tp = 0.6Tc. The curve number (CN) was calculated with the Soil Conservation Service (SCS) method, which is defined by the hydrological soil groups (HSG), soil treatment, type of coverage, and the antecedent moisture conditions (AMCs).
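As a rough illustration of this step, the sketch below applies the commonly cited metric form of the Kirpich equation and the Tp = 0.6Tc relation mentioned above. The channel length and slope values are placeholders, not the parameters actually used for the subbasin, and the metric Kirpich coefficients are an assumption about the form of the equation the authors applied.

```python
# Illustrative sketch: time of concentration via a common metric form of the Kirpich
# equation (Tc in minutes, channel length L in meters, dimensionless slope S) and the
# Tp = 0.6*Tc relation cited in the chapter. L and S values are placeholders.
def kirpich_tc_minutes(length_m: float, slope: float) -> float:
    return 0.0195 * length_m**0.77 * slope**-0.385

L_main_channel_m = 185_000.0     # placeholder main-channel length
S_taylor_schwarz = 0.004         # placeholder slope from the Taylor-Schwarz method

tc = kirpich_tc_minutes(L_main_channel_m, S_taylor_schwarz)
tp = 0.6 * tc                    # peak time relation per the USDA reference [4]
print(f"Tc = {tc/60:.1f} h, Tp = {tp/60:.1f} h")
```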

2.2 Construction of the HEC-HMS model

After gathering the necessary information to enter into the hydrologic model, the model was built to simulate the rain-runoff process of the Río Conchos-P. de la Colina subbasin with the HEC-HMS software, version 4.2.1, using the SCS unit hydrograph transformation method (Figure 4). The data were input into the software in an organized manner taking into consideration its main components: basin model, meteorological model, control specifications, and time-series data [5]. The simulation of the hydrological model used a period of 365 days, from January 1 to December 31, 1981, with a daily interval. The time series from the different databases were captured manually, specifying the increments in millimeters (mm).

Figure 4.

Scheme of the HEC-HMS hydrological model of the Río Conchos-Presa de la Colina subbasin.

3. Results, calibration, and statistical evaluation

3.1 Results

Figure 5 shows the output hydrograph of the hydrological simulations with the two sources of precipitation information mentioned above.

Figure 5.

Output hydrographs of hydrological model HEC-HMS: CONAGUA (red) vs. CLIMATESERV (blue).

3.2 Calibration

The calibration of the model involved a quantitative assessment of the hydrological response of the subbasin. This process was done by comparing the observed hydrograph with the simulated hydrograph. This is essential for the evaluation of the model, to compare the distribution and variations of the data [6]. Figure 6 shows the result of this comparison.

Figure 6.

Hydrographs simulated vs. hydrographs observed.

3.3 Statistical evaluation

The performance of the model was assessed with several statistical measures, such as the coefficient of determination, the correlation coefficient, the RMSE-observations standard deviation ratio (RSR), and the Nash-Sutcliffe efficiency (NSE). Table 1 shows the results of the statistical evaluation of the simulations versus the observed historical information.

Results | Observed | Simulated CONAGUA | Simulated CLIMATESERV
Qmax | 1,135.29 | 1,055.40 | 1,145.80
Date of Qmax | 09/10/1981 | 11/10/1981 | 11/10/1981
Error, Qmax |  | 7.04% | −0.93%
Mean | 67.92 | 104.28 | 107.71
Stan. Dev. | 138.13 | 196.13 | 203.57
r |  | 0.81 | 0.82
r² |  | 0.66 | 0.67
RSR |  | 0.54 | 0.59
NSE |  | 0.71 | 0.65

Table 1.

Summary of statistical results of the evaluation of the model.
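For reference, the sketch below computes the statistics reported in Table 1 using their standard formulas (Pearson r, r², RSR as RMSE divided by the standard deviation of the observations, and NSE), consistent with the Moriasi et al. guidelines cited later. The flow series are placeholders, not the 1981 data.

```python
# Illustrative sketch of the goodness-of-fit statistics in Table 1, computed with their
# standard formulas. The observed/simulated series below are placeholders only.
import numpy as np

obs = np.array([10.0, 40.0, 250.0, 900.0, 120.0])   # observed daily flow (placeholder)
sim = np.array([12.0, 55.0, 230.0, 870.0, 150.0])   # simulated daily flow (placeholder)

r = np.corrcoef(obs, sim)[0, 1]                      # Pearson correlation coefficient
sse = np.sum((obs - sim) ** 2)
rsr = np.sqrt(sse) / np.sqrt(np.sum((obs - obs.mean()) ** 2))   # RMSE / std. dev. of obs
nse = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)               # Nash-Sutcliffe efficiency

print(f"r = {r:.2f}, r2 = {r**2:.2f}, RSR = {rsr:.2f}, NSE = {nse:.2f}")
```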

4. Discussion and conclusions

The hydrological model created using the HEC-HMS software simulated the rain-runoff process of the Río Conchos-P. de la Colina subbasin very well. The precipitation information and the different physical parameters of the watershed came from different sources. Regarding the precipitation input parameter, time series were generated from two sources of information: the first with official information from climatological stations reported by the CONAGUA, and the second with information from the CLIMATESERV database of the GLOBALSERV, which combines multiple satellite data sources and terrestrial observations. The quality and usefulness of the satellite precipitation information were evaluated by means of an intercomparison exercise between the time series coming from the two sources, obtaining a difference of 1.63% between them.

Based on this analysis, the result is a hydrological simulation model of the subbasin for the year 1981. A benchmark analysis of the results and a statistical evaluation of the model were carried out. The analysis indicated good behavior of the model with respect to the observed historical information, since the fit of the distribution and variation of the output flow was good. According to Moriasi et al. [7], the statistical evaluation of the model indicated a good performance of the CLIMATESERV database with respect to the information registered in situ. Based on these results, it can be established that satellite records are a good alternative to reinforce or compensate for the lack of information available in the network of climatological stations in Mexico.


Acknowledgements

We acknowledge the Universidad de las Americas Puebla and the UNESCO-UDLAP chair in hydrometeorological risk for all the support and facilities provided for the achievement of this work.

Author details

Paul Hernández-Romero, Carlos Patiño-Gómez*, Benito Corona-Vázquez and Polioptro Martínez-Austria

Department of Civil and Environmental Engineering, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, México

*Address all correspondence to: carlos.patino@udlap.mx

Modeling of the Controlled Release of Essential Oils Encapsulated by Emulsification

Mónica Dávila-Rodríguez, Aurelio López-Malo, Nelly Ramírez-Corona and María Teresa Jiménez-Munguía

Abstract

Essential oils (EOs) encapsulated by different techniques have demonstrated antimicrobial activity against bacteria compared to nonencapsulated oils. The study of EO release is important in order to predict the time in which the oil will be available to exhibit its antimicrobial activity to inhibit or inactivate specific microorganisms. The objective of this study was to analyze and model the release of cinnamon essential oil (CEO) and rosemary essential oil (REO) encapsulated by emulsification using high-frequency ultrasound. A concentration of 10 μL/mL was obtained for CEO after 60 min in aqueous solution under stirring, while for REO, a concentration of 60 μL/mL was obtained after 360 min. The CEO microemulsion droplet size was smaller than that of the REO microemulsion, thus enhancing its release into the aqueous solution. The release profiles were fitted to different kinetic models; a first-order kinetic model described the CEO microemulsion release, while a second-order kinetic model was used for the REO microemulsion release. The EOs encapsulated by high-frequency ultrasound can be applied as water-soluble natural antimicrobial additives in different food systems with long-term release periods.

Keywords: natural antimicrobials, release kinetics, encapsulation, high-frequency ultrasound

1. Introduction

Recent studies report that cinnamon essential oil (CEO) and rosemary essential oil (REO) have demonstrated antimicrobial activity against microorganisms due to their main components, which are cinnamaldehyde and eucalyptol, respectively [1, 2].

The encapsulation of EOs has been shown to provide stability to lipidic and volatile compounds, maintaining their antimicrobial activity. Encapsulation by emulsification has recently been studied to protect lipidic compounds at the micro- and nanoscale [3, 4]. The compounds encapsulated in microemulsions are very stable during storage and can be released under specific conditions and dosages [5]. The controlled release of the encapsulated compounds can be influenced by different factors, including environmental conditions, the form and shape of the particles, particle size, and the solubility and diffusivity of the bioactive compound [6, 7]. The release of microencapsulated EOs may determine their effectiveness depending on the selected application; consequently, the development of mathematical models able to describe the release of these compounds constitutes a very useful tool for predicting their behavior [6].

The objective of this study was to evaluate the release of CEO and REO encapsulated in microemulsions prepared by high-frequency ultrasound, setting a target concentration to be released, according to their antimicrobial activity against bacteria related to common illness by food consumption.

2. Methodology

2.1 Materials

The REO was purchased at Hersol (Mexico) and the CEO at TECNAAL (Mexico). For the preparation of the oil-in-water (O/W) microemulsions, inulin (Fructagave PR95, Mexico) was used as stabilizer and Tween 80 (Sigma-Aldrich, USA) as emulsifying agent.

2.2 Microemulsion formation

The dispersed phase of CEO microemulsion was prepared with 10% (w/w) of CEO, while for REO microemulsion, it was prepared with 30% (w/w) of REO. The microemulsions were prepared using an ultrasound homogenizer (Cole Parmer, CP 505, USA). The continuous phase of the O/W CEO or REO microemulsions was formulated with 3% of Tween 80 and 5% of inulin.

2.3 Particle size

Particle size distributions of the microemulsions’ droplets were measured using a particle analyzer by laser diffraction (Bluewave, Microtrac, USA).

2.4 Release of encapsulated CEO and REO

The amount of microemulsion dispersed in water was defined according to values reported in other studies [8, 9] for the minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) of CEO and REO against bacteria; maximum concentrations of 15 and 80 μL/mL, respectively, were set in this study. The microemulsions were added to water, maintaining the system agitated at 60 rpm and 25°C. The quantification of the EOs' release was made by dissolving 1 mL of sample in 9 mL of n-hexane and determining the absorbance in a spectrophotometer at 310 nm for the CEO and 225 nm for the REO.

2.5 Mathematical modeling for release profiles

The concentration data of the released EOs at different times were recorded and fitted to kinetic models by minimizing the squared error. Eq. (1) was applied to describe a first-order kinetic model. A kinetic with an initial zero slope and a maximum slope at the inflection point (typical S-shaped response) was used to fit a second-order dynamic (Eq. (2)).

Ct = Cmax (1 − e^(−k1·t))   (1)
Ct = Cmax [1 − (k2·t + 1) e^(−k2·t)]   (2)
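As a rough illustration of this fitting step, the sketch below implements Eqs. (1) and (2) and fits the first-order model to a release curve by least squares with scipy.optimize.curve_fit. The time and concentration points are placeholders, not the measured CEO or REO data.

```python
# Illustrative sketch: fitting release data to Eqs. (1) and (2) by least squares.
# The time/concentration points are placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_max, k1):                     # Eq. (1)
    return c_max * (1.0 - np.exp(-k1 * t))

def second_order(t, c_max, k2):                    # Eq. (2): zero initial slope, S-shaped
    return c_max * (1.0 - (k2 * t + 1.0) * np.exp(-k2 * t))

t = np.array([0, 30, 60, 120, 240, 360, 600], dtype=float)   # min (placeholder)
c = np.array([0.0, 4.0, 9.5, 13.0, 14.5, 14.8, 15.0])        # uL/mL (placeholder)

params, _ = curve_fit(first_order, t, c, p0=[15.0, 0.01])
c_max_fit, k1_fit = params
print(f"Cmax = {c_max_fit:.2f} uL/mL, k1 = {k1_fit:.4f} 1/min")
```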

3. Results

According to the release curves (Figure 1), the CEO release showed a constant rate in the first 90 min, followed by a plateau from 90 to 360 min. On the other hand, the REO release presented a delay of 60 min, after which the release rate was constant until 420 min, decreasing thereafter until reaching a plateau at 600 min.

Figure 1.

(a) Release of CEO microemulsion (■) and (b) REO microemulsion (•), both in water at 60 rpm and 25°C. Continuous lines represent mathematical model estimation.

The antimicrobial activity of EOs depends on their availability, in certain food, to inhibit spoilage microorganisms [8]. In Figure 1, it can be observed that the encapsulated CEO release of 10 μL/mL was obtained after 60 min, and for REO release, the concentration of 60 μL/mL was obtained after 360 min.

Tomičić et al. [9] evaluated the antimicrobial activity of different EOs against Listeria monocytogenes. The authors reported that the MIC value for CEO was 2.56 μL EO/mL and for REO was 20.48 μL EO/mL. In addition, Kaskatepe et al. [8] reported MIC values of commercial and natural CEO between 0.39 and 12.5 μL EO/mL against Escherichia coli. For this study, these values were taken as a reference for the EO release. In accordance with the release curve of the CEO microemulsion (Figure 1), the MICs for both bacteria would be available in 30 min, whereas the MBC for L. monocytogenes would be released in 90 min and for E. coli in 60 min. Meanwhile, for the REO microemulsion, the MICs would be released after 120 min for both bacteria. Finally, the MBC for L. monocytogenes and E. coli would be available after 210 and 420 min, respectively.

Different factors can influence the rate at which the EOs are released: first, the amount of EO in the microemulsion, and second, the lipid droplet size in the microemulsion. The droplet sizes (D50) determined for the CEO and REO microemulsions were 1.99 and 3.45 μm, respectively. Since the CEO microemulsion droplet size was smaller than that of the REO microemulsion, its release into the aqueous solution was enhanced. As shown in Figure 1, the release profile for the CEO shows a monotonic response with a rapid rise at the beginning of the process; after a lapse of time, the CEO concentration tends to achieve a stationary value, a behavior that was described by a first-order kinetic model. For the REO, a change in the slope of the release curve, describing an S-shaped response, can be observed. This behavior was fitted to a second-order dynamic model, as discussed by Cardoso-Ugarte et al. [10].

For the CEO release, k1 was determined as 0.0166, while for the REO release, the parameter k2 was calculated as 0.00708; both parameters have units of min⁻¹ and are directly related to the release rate. The fits of the proposed models are shown in Figure 1; the correlation coefficient (R²) between the experimental data and the model estimations ranged from 0.974 to 0.993.

4. Conclusions

The encapsulated EOs in microemulsions by high-frequency ultrasound can be effective as natural antimicrobials in aqueous systems for long periods of time. In this study, it was demonstrated that the formulation of the microemulsions affects the release behavior in aqueous solutions, particularly, the EOs concentration and the droplet size of the oily dispersed phase.

Acknowledgements

The author Dávila-Rodríguez acknowledges Universidad de las Américas Puebla for the scholarship received for her PhD in Food Science studies as well as for the economic support for the development of this research.

Author details

Mónica Dávila-Rodríguez, Aurelio López-Malo, Nelly Ramírez-Corona and María Teresa Jiménez-Munguía*

Chemical and Food Engineering Department, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, México

*Address all correspondence to: mariat.jimenez@udlap.mx

Extraction, Composition, and Antibacterial Effect of Allspice (Pimenta dioica) Essential Oil Applied in Vapor Phase

Ana Cecilia Lorenzo-Leal, Enrique Palou and Aurelio López-Malo

Abstract

The aim of this study was to extract and evaluate the composition and antibacterial effect of allspice (Pimenta dioica) essential oil applied in vapor phase against Salmonella Typhimurium, Listeria monocytogenes, and Pseudomonas fluorescens at selected levels of pH and temperature. Microwave-assisted extraction (MAE) was tested at different conditions, and it was found that the best extraction conditions were ground allspice, an allspice:water ratio of 1:20, 90 min of soaking before extraction, 800 W for 30 min, and 600 W for 20 min. Antibacterial activity was determined through the minimal inhibitory concentration (MIC) of the EO in vapor phase in culture media. Allspice essential oil (AEO) was more effective against L. monocytogenes, regardless of the pH or temperature level, than against S. Typhimurium and P. fluorescens. Allspice essential oil was able to inhibit the growth of the three bacteria tested, and it was found that both the incubation temperature and pH are factors that could influence the inhibitory effect of the EO tested in this study.

Keywords: microwave extraction, antibacterial effect, allspice essential oil, vapor phase

1. Introduction

The use of microwaves is an alternative for extracting essential oils (EOs), and it can be used to assist a common method known as hydrodistillation. This is achieved by adapting a distillation apparatus to a microwave oven or by using specialized equipment such as the NEOS System (Milestone, Shelton, CT, USA) [1]. MAE is a method that uses microwave radiation as a heating source for a mixture of solvent and sample. This type of heating is instantaneous and occurs inside the sample, so the extraction is usually a very fast process [2].

Essential oils are substances extracted from different parts of aromatic plants (flowers, seeds, leaves, herbs, fruits, roots, and rhizomes, among others) and have antiviral, antibacterial, antifungal, and insecticidal properties [3]. EOs contain different components (mono- and sesquiterpenes), of which the main ones represent 85–95% of the total volume, and the others are known as minor components [4, 5].

An alternative source of EOs is the one extracted from the dried fruit of allspice. This spice has been used in meat, fish, soups, sauces, cakes, cosmetics, and drugs due to its antimicrobial and antioxidant properties [6, 7]. AEO has been used as an antimicrobial against different microbial strains, since it has been proven that the volatiles present in the EO are capable of inhibiting the growth of different bacteria and fungi [6, 8, 9, 10]. One of the limitations of the AEO is that when applied in liquid phase (directly), it generates a significant impact on the sensory attributes of food, because of its strong aroma [11]. Unlike the liquid phase, the application in vapor phase, which is achieved through the formation of atmospheres with volatilized essential oil compounds, requires lower concentrations, for its use as an antimicrobial. Therefore, the application in vapor phase could be a solution to the intense aroma characteristic of the AEO [12, 13, 14].

There are few studies on the use of allspice essential oil as an antimicrobial and its application in vapor phase. Therefore, the aim of this chapter is to extract and evaluate the composition and antibacterial effect of allspice essential oil applied in vapor phase against Listeria monocytogenes, Salmonella Typhimurium, and Pseudomonas fluorescens at selected levels of pH and temperature.

2. Methodology

2.1 Bacterial strains

Bacterial strains (Salmonella enterica serovar Typhimurium ATCC 14028, Listeria monocytogenes Scott A and Pseudomonas fluorescens) were obtained from the Food Microbiology Laboratory of Universidad de las Americas Puebla (UDLAP, Mexico, Puebla). The strains were maintained on Trypticase Soy Agar (TSA; Difco, BD, Sparks, MD) slants at 5°C.

Bacteria (L. monocytogenes, S. Typhimurium, and P. fluorescens) were inoculated into 10 mL of Trypticase Soy Broth (TSB; Difco, BD, Sparks, MD) and incubated at 35°C for 24 h. Then, the culture inoculum cell concentration was adjusted to 10⁷ CFU/mL for subsequent use in culture media [15, 16].

2.2 Plant materials

Allspice (Pimenta dioica) dry fruits were obtained from Condimentos Naturales Tres Villas S.A. de C.V. Puebla.

2.3 Extraction method

Allspice essential oil was obtained by means of MAE, using a NEOS System (Milestone, Shelton, CT, USA), according to the following methodology: the dried allspice sample was ground with a blender (NutriBullet, Magic Bullet, USA), sieved through a number 20 mesh (850 μm), placed in a glass beaker with 2 L of distilled water, and left to soak for 1.5 h. Then, the sample was introduced into the NEOS System under the following conditions: 800 W (100% of the equipment's power) for 30 min and 600 W (67% of the equipment's power) for another 30 min, at 400 rpm. The recovered AEO was placed in hermetically sealed amber containers and stored at 5°C [7, 16, 17].

2.4 Essential oil yield

The yield of AEO was calculated by means of the following equation:

%R = (V / M) × 100   (1)

where V is the final volume of essential oil, M is the initial mass of ground allspice dry fruit, and 100 is the factor used to express the result as a percentage (v/w) [18].
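As a brief illustration, the following sketch evaluates Eq. (1); the numbers are illustrative and not measurements from this study.

```python
# Minimal sketch of Eq. (1): essential-oil yield as volume of EO per mass of
# ground allspice, expressed as a percentage (v/w). Values are illustrative only.
def eo_yield_percent(volume_ml: float, mass_g: float) -> float:
    """%R = (V / M) * 100, with V in mL of EO and M in g of dry fruit."""
    return volume_ml / mass_g * 100.0

# Example: 2.0 mL of oil recovered from 100 g of ground allspice -> 2.0 %
print(eo_yield_percent(2.0, 100.0))
```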

2.5 Gas chromatography/mass spectrometry (GC/MS) analysis

Allspice EO was analyzed by gas chromatography using a 6850 Series Network (Agilent Technologies, Santa Clara, CA) with a mass selective detector (5975C VL) and a triple-axis detector (Agilent Technologies). Component separation was accomplished with an HP-5MS (5% phenyl—95% polydimethylsiloxane) capillary column (30 m by 0.35 mm, 0.25 μm film thickness). The carrier gas was helium in constant flow mode at 1.5 mL/min. The column temperature started at 60°C for 10 min, increasing every 5 min until reaching 240°C, where it was maintained for 50 min. The injector temperature was 240°C. Retention indices were calculated using a homologous series of n-alkanes C8–C18 (Sigma, St. Louis, MO). Compounds were identified by comparing their retention indices with those of the US NIST (National Institute of Standards and Technology) Library and by using the Shimadzu retention index (RI) isothermal equation [16, 19].

RI = 100 × [(log(trs) − log(trn)) / (log(trn+1) − log(trn)) + n]   (2)

where trs is the retention time of the target component, trn is the retention time of the alkane eluting immediately before the target component, trn+1 is the retention time of the alkane eluting immediately after it, and n is the number of carbons of the alkane corresponding to trn.
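The following is a minimal sketch of Eq. (2) in Python; the retention times and bracketing alkanes are hypothetical examples, not values measured in this work.

```python
# Minimal sketch of the isothermal (Kovats-type) retention index of Eq. (2).
import math

def retention_index(tr_s: float, tr_n: float, tr_n1: float, n: int) -> float:
    """RI = 100 * [(log tr_s - log tr_n) / (log tr_n+1 - log tr_n) + n],
    where tr_n and tr_n1 are the retention times of the alkanes eluting before
    and after the target compound, and n is the carbon number of the former."""
    return 100.0 * ((math.log10(tr_s) - math.log10(tr_n))
                    / (math.log10(tr_n1) - math.log10(tr_n)) + n)

# Hypothetical example: compound eluting between the C13 (n = 13) and C14 alkanes
print(round(retention_index(tr_s=22.4, tr_n=20.1, tr_n1=24.3, n=13)))  # ~1357
```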

2.6 Vapor phase antibacterial activity in vitro

2.6.1 Culture medium

Trypticase soy agar was prepared by adjusting its pH value (to 6.0 or 6.5) with hydrochloric acid (Meyer S.A. de C.V., Mexico City, Mexico), using a previously calibrated pH 10 potentiometer (Conductronic S.A. de C.V., Mexico City, Mexico). Then, the sterilized (15 min at 121°C) TSA was poured into sterile Petri dishes and allowed to solidify. Subsequently, the culture media were inoculated using an Autoplate 4000 spiral plater (Spiral Biotech, Norwood, MA), applying 50 μL of the inoculum of each bacterium [20].

2.7 Inverted Petri dish method

The antibacterial activity was evaluated through the minimum inhibitory concentration (MIC), which refers to the minimum concentration necessary to inhibit the visible growth of the studied strains [21], using the inverted Petri dish technique. This method consists of placing a sterile paper disc (Whatman No. 1, diameter 55 mm), impregnated with a known volume of AEO (which varied from 5 to 2000 μL), on the Petri dish lid. The culture medium was then immediately inverted on top of the lid, sealed with Parafilm®, and incubated as follows: (1) 35°C for 24 hours, (2) 25°C for 48 hours, (3) 15°C for 8 days, or (4) 10°C for 9 days [22, 23, 24]. These incubation conditions were selected from previous experiments that corroborated that the tested bacteria can grow at the studied temperatures after those incubation times. The obtained MICs were expressed as mL of EO per L of air. A Q-Count counter and software (Spiral Biotech, Norwood, MA) were used to quantify colony-forming units (CFU/mL) when growth was observed. Tests were performed in triplicate.
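As an illustration of how a vapor-phase MIC can be expressed as mL of EO per L of air, the following sketch divides the oil volume placed on the disc by the headspace volume of the inverted Petri dish; the 0.09 L headspace and the 50 μL example are hypothetical placeholders, not the actual dish volume or MIC values of this study.

```python
# Minimal sketch: express a vapor-phase MIC as mL of EO per liter of air.
def vapor_phase_mic(eo_volume_ul: float, headspace_l: float = 0.09) -> float:
    """Convert the EO volume on the paper disc (uL) to mL of EO per L of air.
    The default headspace volume is a hypothetical placeholder."""
    return (eo_volume_ul / 1000.0) / headspace_l  # uL -> mL, then per L of air

# Example: 50 uL of allspice EO in a ~0.09 L headspace -> ~0.56 mL/L air
print(round(vapor_phase_mic(50.0), 2))
```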

3. Results

3.1 Extraction and allspice essential oil yield

To obtain the best extraction yield of the AEO, different conditions were tested: soaking times and volumes, microwave power, extraction times, and allspice dry fruit size. These results are presented in Table 1.

Allspice dry fruit size | Allspice:water ratio | Soaking time (min) | Power (W)—time (min) | Yield (%)
Whole | 1:5 | No soak | 600—40 | 0.5
Whole | 1:5 | 90 | 600—40 | 0.6
Whole | 1:10 | 90 | 600—40 | 0.6
Whole | 1:20 | 90 | 800—30 & 600—20* | 0.6
Ground (20 mesh) | 1:5 | No soak | 300—60 | 0.6
Ground (20 mesh) | 1:5 | 90 | 300—60 | 0.8
Ground (20 mesh) | 1:5 | 90 | 600—60 | 0.9
Ground (20 mesh) | 1:10 | 90 | 300—60 | 0.8
Ground (20 mesh) | 1:10 | 90 | 600—60 | 0.9
Ground (20 mesh) | 1:20 | 90 | 600—60 | 1.1
Ground (20 mesh) | 1:20 | 90 | 800—6 & 600—54* | 1.3
Ground (20 mesh) | 1:20 | 90 | 800—30 & 600—20* | 2.0

Table 1.

Conditions tested for the extraction of allspice essential oil and obtained yields.

*Time and power combinations were used consecutively in the experiment.


All experiments were carried out with an agitation of 400 rpm.

Table 1 shows that whole allspice, with an allspice:water ratio of 1:5, without soaking, extracted at 600 W for 40 min, yielded 0.5%. It was also observed that, when these conditions changed, the yield only increased up to 0.6%. When the allspice particle size decreased, the yield was higher, as also reported by Jiang et al. [7] and Chen et al. [17]. When the ground sample was used with an allspice:water ratio of 1:5, without soaking in water, at 600 W for 60 min, the yield obtained was the same as with whole allspice (0.6%). However, increasing the soaking time, the allspice:water ratio, and the microwave power increased the yield considerably. When the extraction was made with an allspice:water ratio of 1:20, a soaking time of 90 min, and a combination of powers and extraction times applied consecutively (800 W for 30 min and 600 W for 20 min), the yield increased to 2%. Therefore, the best extraction conditions determined in this study were ground allspice, an allspice:water ratio of 1:20, 90 min of soaking, and 800 W for 30 min followed by 600 W for 20 min.

On the other hand, Jiang et al. [7] also used MAE to obtain EO from allspice, with an extraction time of 63 min at 1000 W, obtaining a yield of 3.25%. The yield and extraction time were greater than those obtained in this work, probably because of the higher power used. Chen et al. [17] tested different conditions to extract essential oil from black pepper and observed that when the pepper was ground and soaked in water for 90 min, the amount of oil obtained was greater than when it was not soaked or was soaked for only 30 or 60 min; this was also observed in the present work, since the extraction tests made with whole raw material produced lower yields.

3.2 Chemical composition

The main components identified by GC-MS in the tested allspice EO and their calculated retention indices are reported in Table 2. Attokaran [25] mentions that allspice EO may contain 80–87% eugenol, 4–8% β-caryophyllene, and 0.2–0.5% β-phellandrene, which coincides with the findings of the present work, especially for eugenol.

Compound | Percentage in total allspice EO (%) | Retention index
Eugenol | 89.55 | 1356
α-Terpineol | 2.04 | 1457
Caryophyllene oxide | 1.48 | 1582
1,8-Cineole | 1.06 | 1026
α-Cadinol | 0.86 | 1652

Table 2.

Main components of allspice essential oil determined by gas chromatography-mass spectrometry.

3.3 Antibacterial activity

The antibacterial activity of AEO against L. monocytogenes, S. Typhimurium, and P. fluorescens was tested using the inverted Petri dish technique, and the results obtained are presented in Figure 1. The figure shows that the AEO had antibacterial effects (at different concentrations) against the three studied bacteria in the different conditions of pH and temperature.

Figure 1.

Minimum inhibitory concentrations (MICs) of allspice essential oil against (a) L. monocytogenes, (b) S. Typhimurium, and (c) P. fluorescens.

Allspice EO was more effective against L. monocytogenes, regardless of the pH (6.0 or 6.5) or the evaluated temperature (10, 15, 25, or 35°C), than against S. Typhimurium and P. fluorescens. Du et al. [6] also tested the antimicrobial activity of AEO in vapor phase against L. monocytogenes, Escherichia coli O157:H7, and Salmonella enterica and found L. monocytogenes to be less resistant to the EO vapors than E. coli or Salmonella, which coincides with our findings. These results could be related to the fact that Gram-positive bacteria (L. monocytogenes) may be more susceptible to EOs than Gram-negative bacteria (S. Typhimurium or P. fluorescens), as reported by various authors [5, 26, 27].

On the other hand, the different pH levels did not greatly affect the antimicrobial effect of the AEO, but the temperature conditions did have an impact, especially when the temperature increased to 25 and 35°C, as shown in Figure 1. Microorganisms have an optimum pH and temperature for growth, and when these conditions move in either direction (lower or higher), microbial growth can be delayed. Furthermore, when microorganisms are exposed to several factors at once, such as temperature, pH, or antimicrobials such as EOs, the interaction among these factors can affect microbial growth. The results of these interactions give an idea of what microbial growth would look like in food systems; however, food systems must be tested to confirm the response of the microorganisms of interest. In this study, the pH had a lower impact than the temperature, and in some cases the MICs increased as the pH increased. In summary, when the temperature and pH increased, in most cases, the MICs were higher.

4. Conclusions

Allspice essential oil was able to inhibit the growth of L. monocytogenes, S. Typhimurium, and P. fluorescens, and both the incubation temperature and the pH were found to be factors that could influence the inhibitory effect of the EO tested in this study. It was also observed that lower concentrations of the AEO were required to inhibit the growth of L. monocytogenes than of S. Typhimurium and P. fluorescens, which was expected, since Gram-negative bacteria (Salmonella or P. fluorescens) are more resistant to EOs than Gram-positive bacteria (L. monocytogenes).

Author details

Ana Cecilia Lorenzo-Leal, Enrique Palou and Aurelio López-Malo*

Departamento de Ingeniería Química y Alimentos, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, México

*Address all correspondence to: aurelio.lopezm@udlap.mx

Stability of the Antimicrobial Activity of Lactobacillus plantarum NRRL B-4496 Supernatants during Storage against Staphylococcus aureus ATCC 29413

Daniela Arrioja Bretón, Emma Mani-López and Aurelio López-Malo

Abstract

Staphylococcus aureus is a pathogenic microorganism that causes gastrointestinal diseases due to the production of enterotoxins. The study of lactic acid bacteria such as Lactobacillus plantarum has generated interest due to its ability to generate secondary metabolites against pathogenic microorganisms. In this work, the stability of the antimicrobial activity of the L. plantarum supernatants was evaluated by means of a well diffusion test. The supernatants were stored at 25 ± 1.0°C for a period of 20 weeks. A significant difference (P < 0.05) was observed in the antimicrobial activity of L. plantarum supernatants between time 0 and after 20 weeks of storage, although the ability to inhibit Staphylococcus aureus was still observed during the storage time.

Keywords: lactic acid bacteria, storage, antimicrobial activity, stability

1. Introduction

Staphylococcus aureus is a spherical, nonsporulating, nonmotile, facultatively anaerobic, Gram-positive, and catalase-positive bacterium. It belongs to the normal microbiota and is found on the skin and mucous membranes of mammals and birds. This bacterium can be disseminated in the environment of its hosts and survives for long periods in these areas [1].

The determination of S. aureus in food products is done in order to establish its potential to cause food poisoning and demonstrate contamination after being processed. This microorganism produces enterotoxins formed in foods under broad conditions of pH, water activity, and redox potential [2].

On the other hand, the use of some of the genera of lactic acid bacteria (LAB), such as Lactobacillus, Leuconostoc, Lactococcus, as alternatives in the biopreservation of foods is due to their ability to produce secondary metabolites such as bacteriocins, organic acids, hydrogen peroxide, among others. Several researchers highlight species of the genus Lactobacillus as the main antagonists of pathogenic microorganisms and food microbial spoilers. It has been observed that some species of Lactobacillus produce a variety of antimicrobial compounds that differ in their inhibitory spectrum, mode of action, structure and biochemical properties. Lactobacillus plantarum has an antagonistic effect on Gram-positive organisms and, in some cases, on Gram-negative organisms, such as Listeria monocytogenes, Staphylococcus aureus, Streptococcus, Salmonella, and Pseudomonas [3].

The objective of this work was to evaluate the stability of the antimicrobial activity of Lactobacillus plantarum NRRL B-4496 supernatants against Staphylococcus aureus ATCC 29413 during 20 weeks of storage at 25 ± 1.0°C.

2. Methodology

2.1 Bacterial strains

The L. plantarum NRRL B-4496 used in this study and the indicator strain S. aureus ATCC 29413 were obtained from the Food Microbiology Laboratory strain collection of the Universidad de las Americas Puebla (Puebla, Mexico). L. plantarum NRRL B-4496 was maintained on MRS Agar (de Man, Rogosa and Sharpe) (Difco™ BD, Sparks, Maryland) and Staphylococcus aureus ATCC 29413 on Trypticase soy agar (TSA, Bioxon® BD, Edo. de Mexico, Mexico), at 5 ± 1.0°C.

2.2 Culture conditions

The cultures were prepared by inoculating L. plantarum NRRL B-4496 into 30 mL of MRS broth (Difco™ BD, Sparks, Maryland) and incubated at 35 ± 1.0°C for 48 h under anaerobic conditions, whereas S. aureus ATCC 29413 was inoculated into 10 mL of Trypticase soy broth (Bioxon® BD, Edo. de Mexico, Mexico) and incubated at 35 ± 1.0°C for 24 h.

2.3 Preparation of cell-free supernatant

The cell-free supernatant was collected by centrifugation at 12000× g for 10 min (Marathon 21 K/R, Fisher Scientific, Germany) and filtered through a 0.45-μm Millipore membrane filter, and the supernatants were concentrated 10-fold by vacuum evaporation on a Buchi R-210/215 rotary evaporator (Buchi, Flawil, Switzerland) at 70 ± 1.0°C and 25 cm Hg.

2.4 Antimicrobial activity

The antimicrobial activity was evaluated by the agar-well diffusion method [4], performed in duplicate, which consists of spreading 0.1 mL of the indicator bacterium culture (10⁵ CFU/mL) on previously solidified TSA plates; four wells (8 mm diameter) were then punched in each plate. Three of the wells were filled with 100 μL of the concentrated L. plantarum supernatant; the fourth well was filled with 100 μL of concentrated MRS broth as a negative control. The plates were incubated at 37 ± 1.0°C for 24 h. Bacterial growth was observed, and the diameter of the inhibition zones (mm) around the wells, including the well, was measured in triplicate with a digital Vernier caliper (Metax, Mexico).

2.5 Storage stability

To evaluate the stability of L. plantarum supernatant through time, supernatant was stored at 25 ± 1.0°C. The antimicrobial activity with the agar-well diffusion method (previously described) was determined every 7 days for 5 months.

2.6 Statistical analysis

Statistical software Minitab (v.17, LEAD Technologies Inc., USA) was used to perform an analysis of variance (ANOVA) with the 95% level of confidence.
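For readers who prefer an open-source alternative to the Minitab workflow described above, the following is a minimal sketch of a one-way ANOVA on inhibition-zone diameters across storage weeks using SciPy; the diameters are hypothetical placeholders, not the measured data of this study.

```python
# Minimal sketch: one-way ANOVA on (hypothetical) inhibition-zone diameters.
from scipy import stats

# Inhibition zone diameters (mm), triplicate measurements per storage week
week_0 = [20.1, 20.0, 20.1]
week_10 = [18.3, 18.0, 18.5]
week_20 = [16.2, 16.3, 16.2]

f_stat, p_value = stats.f_oneway(week_0, week_10, week_20)
alpha = 0.05  # 95% confidence level, as used in the study
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant: {p_value < alpha}")
```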

3. Results

The inhibition zone generated by the supernatant of L. plantarum against S. aureus during storage at 25 ± 1.0°C was 20.05 mm at time zero and 16.24 mm after 20 weeks of storage. As can be observed in Figure 1, there is a significant difference (P < 0.05) in the antimicrobial activity of L. plantarum supernatants between time 0 and 20 weeks of storage, although the ability to inhibit S. aureus was still observed throughout the storage time. The observed antimicrobial activity can be attributed to the presence of secondary metabolites such as organic acids, hydrogen peroxide, carbon dioxide, diacetyl, and pyroglutamic acid, among others, which should be corroborated in further experiments.

Figure 1.

Plot of intervals of inhibition zones (mm) vs. storage time (week).

Other researchers, such as Anas et al. [5] and Kareem et al. [6], observed that the supernatants of L. plantarum are able to inhibit the growth of Gram-positive pathogenic microorganisms such as S. aureus ATCC 25923 as well as Gram-negative bacteria.

4. Conclusions

The supernatants of L. plantarum are a good alternative for the food industry as biopreservatives, since they are able to inhibit S. aureus over a period of 20 weeks when stored at 25 ± 1.0°C; although there is a significant difference between the antimicrobial activity at time 0 and at week 20, the antimicrobial activity remains effective.

Author details

Daniela Arrioja Bretón*, Emma Mani-López and Aurelio López-Malo

Departamento de Ingeniería Química y Alimentos, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, México

*Address all correspondence to: daniela.arriojabn@udlap.mx

Music as a Medium of Encounter of Otherness in Animated Cinema

Luis Daniel Martínez Álvarez

Abstract

The present investigation seeks to analyze the problem of access to knowledge about the other through the study of music as a representative medium of alterity in a selection of animated short films. The original contribution of the dissertation lies in generating a sound-based approach to the phenomenon of otherness, through a programmatic analysis of music within a selection of animated films with nonvococentric characteristics. The research starts from an approach different from that of conceptual language, which names and appropriates the other, making it lose its specificity and idiosyncrasy as the alternate. In this sense, music emerges as an opportunity to meet the notions of otherness, because it does not appropriate the other, as it lacks the ability to name; on the contrary, music requires the other in order to be heard and performed, and, it may be said, music is a phenomenon that always tends to manifest itself from otherness. The visual and sound narrative analysis of the cinematic works will be based on the theoretical framework of authors such as Jacques Derrida, Emmanuel Levinas, and Michel Chion, as a starting point to define the concepts of otherness, foreignness, alterity, totality, infinity, music, and leitmotiv.

Keywords: animation, films, leitmotiv, music, narrative, otherness

1. Introduction

Who is the other? This is perhaps the fundamental question to answer when we refer to everything that is outside of my own person, essence, or limits; or perhaps otherness is an inaccessible philosophical problem, because every methodological and epistemic approach to the other is always conditioned by the subjectivity of the self. "The other … He invented a face. Behind it he lived, died, and rose again many times. His face today has the wrinkles of that face. His wrinkles have no face" [1]. As Octavio Paz's brilliant poetry presents it, the phenomenon of the other is mediated by a mask, by a face alien to his ego, which, in the manner of an oxymoron, makes a more intimate approach to his exteriority all the more impossible for us. The mask is shown as a phenomenon that we can observe, study, and analyze; nevertheless, the face of the other remains hidden in an unintelligible space. Paz's prose makes us wonder: how can we access and know the other? "He invented a face," however, does not specify who invented that mask. Did the other put it on of his own will, or was it invented and imposed on his behalf? Derrida mentions, when speaking of the act of the invention of the other, that every act of invention focuses on the search for a rhetorical element or construct, but fundamentally presupposes certain notions of illegality and rupture with what he calls the implicit contract, establishing a distortion of the peaceful order of things ([2], p. 1).

In this same way, the other bursts into the naturalness or normality of the order of the self, presenting a mask that is imposed rhetorically from my own existence in order to understand its nature from my ego. On the other hand, Levinas recognizes the primordial role of the face, because the face allows us a first encounter with the human. It focuses on the recognition of a being similar to the self in each of its features, that is, it allows us to recognize another similar to my individuality [3]. However, it also exhibits the other from its own singularity, in the notion of a face that is different and unique in its essence. "Totality and infinity describes the epiphany of the face as a disenchantment of the world. But the face as a face is the nakedness -and the stripping- 'of the poor, of the widow, of the orphan, of the foreigner,' and its expression indicates 'you will not kill'" [3]. Alterity is shown to us as a phenomenon that seems to rest on an oxymoronic principle: the face allows us both to recognize ourselves in a simile and to distinguish the other as alien to myself, with its own nature, limits, and singularities. This apparently contradictory condition is what raises the complexity of knowing the other; however, it is possible to consider different perspectives and methods to try to understand the uniqueness of the external. It is fundamental to emphasize that the construction of the vision of the face is not limited merely to the sphere of the pictorial; it clearly presents a representation in the function of an effigy, but the notion of the face is also present as an allegory of everything that arises from an exteriority exhibited in any sense and form, from which we will rescue primarily the expression and representation of the sonorous.

Access to knowledge of the other is considered a complex and paradoxical relationship between the self and the alternate. The notion of otherness is outlined from exteriority and infinity, that is, from the notion of the foreigner, the stranger, the unknown, and the alien to my totality. The encounter with the other requires a process of naming that can end up constructing it and delimiting it under my subjectivity, causing it to lose all its specific and individual qualities as another. That is why more abstract and programmatic codes, such as music, emerge as an opportunity to access knowledge about the alternate. In the particular case of music, the voice emerges as the immaterial face of the other, as Levinas affirms with the role of the face as a form of encounter with the human and with the outside; music is presented as an exteriority toward the infinite, that is, it is shown as the encounter with the face of otherness from an immaterial and ideal sound abstraction. "The face stops the totalization" [3]: the appropriation of the other finds a brake when encountering the face, because the face poses a first encounter with the external; in addition, the face presents the approach to an unrepeatable individual with its own unique traits.

The present research project focuses on conducting a study on the constitution of the various representations of the other, from the immateriality of sound in a series of animated short films. Notions like the enemy, the foreigner, and infinity will be addressed throughout a narrative analysis of a selection of nonvococentric films, where the role of music and image acquires a fundamental role in the construction of the sonorous effigy of the other.

2. Methodology

The research method is qualitative in nature and is structured mainly in two stages: an exploratory phase and a propositive phase. This procedure follows the structure of the grounded theory method, in which it is not possible to have preconceptions about the object of study until the data have been collected and interpreted in an initial, iterative manner. The first stage is designed to carry out an extensive bibliographic and audiovisual review of the phenomenon of alterity, from the critical framework of authors such as Emmanuel Levinas, Jacques Derrida, and Michel Chion. The case studies selected in this stage of the investigation rest on three fundamental criteria. The first refers to the short film format: thanks to its short length, it constitutes an exercise in symbolic synthesis and makes it possible to access more case studies. The second criterion refers to the animated film as a product, since there is still a large and fertile field for critical research on the role of music in animated cinema [4]. The third criterion is the selection of nonvococentric film narratives, because in this type of story, music acquires a more active role within the cinematic narrative.

3. Results

The present dissertation allowed a critical analysis of the diverse representations of the other. Within the selection of short films, the alternate is presented through the visualization and listening of racial, social, cultural, and historical subjectivities: the Mexican as seen by the American foreigner, the African American during the first decades of the twentieth century in the United States, the figure of the poilus and the enemy in the trenches of the Battle of Verdun during World War I, and, finally, the face of the other's negation in the figure of the Jew in Auschwitz. The research opens the possibility of generating spaces for reflection on the notions of otherness from an ethereal, abstract, diverse, and programmatic perspective, even in the spaces of conflict most inaccessible to the encounter with the other.

4. Conclusions

Music is established as a sonorous nonplace [5] where the other cannot only express himself but, fundamentally, be heard. The musical code opens the way to look for new forms of approaching and knowing the other, that is, noncategorizable forms that refuse the specification of the alternate. Music is presented as a code with various shortcomings with respect to conceptual language, but it is precisely within its conceptual and absolute insufficiencies that we can understand the infinite qualities of the other; musical themes have that quality which Derrida calls différance ([6], p. 133), because in each of their notes there will always be the possibility of exceeding their own representation, and therefore the ability to know the other in all its infinity. In this way, the sound phenomenon opens new patterns of encounter and reflection with the other, but always with the recognition that there is no single way to access the other, because of its paradoxical and complex idiosyncrasy.

Author details

Luis Daniel Martínez Álvarez

Arts and Humanities Department, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, Mexico

*Address all correspondence to: van_buho@hotmail.com

Revolutionary Veganism

Victor Fonseca López

Abstract

This dissertation analyzes the development of the vegan movement in Puebla and its potential to trigger political and social changes in the context of contemporary neoliberal capitalism. Through critical ethnographic research, the study questions the emergence of a so-called Vegan Revolution, examining the formation of a globalized social movement and community networks with specific habits and values, the different perceptions of veganism, and the affective bodily interactions among its members. In dialog with posthegemony theory (Beasley-Murray), it proposes the notion of a bio- and micropolitical transformation (Deleuze and Guattari) rather than a more traditional concept of revolution to account for the processes in which a multitude of bodies, beyond diverse ideologies, meet, share, and affect each other in daily life, shaping models of society and forms of community that are alternatives to the cultural and political programs of neoliberalism and that, paradoxically enough, constitute a truly revolutionary quality.

Keywords: veganism, posthegemony, critical ethnography, Puebla

1. Introduction

Coinciding with the arrival of the twenty-first century, the entry of veganism into the cultural mainstream has been announced—mainly on the Internet—with a "Revolution" label. For some critics, veganism tries to settle some debts regarding the exploitation of animals that remained pending in the agenda of the countercultural movement of the 1960s and 1970s. Just as the hippies and the beatniks rose against the Vietnam War and social inequalities, modern hipsters and vegans demand change, denouncing ecological deterioration and the mistreatment of animals. Headlines such as "The 'boom' of veganism and vegetarian diet reveal the new ecological awareness of the world" associate vegan practices with a transformation of consciousness that is expressed more in a lifestyle than in traditional revolutionary actions.

Even so, the "revolution" tag remains (it is sticky). Social and political revolution traditionally refers to the exertion of violence aimed at overthrowing, or at least radically changing, a certain order or system of oppression. Historically, since the eighteenth century, the term has been linked to nationalist processes, in which peoples or nations rise up in arms to defeat a government whose dictatorship or tyranny must be dismissed. In his Communist Manifesto, Marx wrote that the class struggle "each time ended either in a revolutionary re-constitution of society at large, or in the common ruin of the contending classes" [1]. But in the twenty-first century, political and social conditions have reduced the ideological conflict to its minimum expression. However, the decay of ideology does not mean that it has completely disappeared, nor has the restlessness or longing for revolution. Instead, a shift is taking place from ideological discourse to pulsing affects, which posthegemony theory tries to acknowledge.

Nowadays, if we can speak of a vegan revolution, it does not necessarily have to come from an ideological substratum (which does not imply the absence of ideology). Ideology becomes the "color" or "flavor" that the movement acquires, the covering for a base that is primarily habitual and affective. Therefore, current revolutions, deprived of an effective representation system, are rather posthegemonic or, as I want to propose in the case of vegans, inserted in a biopolitical frame. If we substitute the Marxist equation with posthegemonic terms, we will find that the revolutionary subject ceases to be the "proletariat," the "people," or any kind of unity and becomes the "multitude."

According to Hardt and Negri, the traits of community and diversity are central to understand the plurality of vegans as a multitude, since “the multitude cannot be reduced to a single unit and does not submit to the rule of one” (330) [2], or as Beasley-Murray puts it, “posthegemony means the change from a persuasive rhetoric to a regime in which what counts are the effects produced and orchestrated by affective investment in the social, if by affect we mean the order of bodies rather than the order of meaning” (120) [3]. I agree with Beasley-Murray that the power of ideology has declined, but I question his assertion that “nobody is too persuaded by ideologies that once seemed fundamental to ensure social order” (ix). I believe that the ideological function should not be completely ruled out, since the vegan movement, in some of its manifestations, revives a fundamentalist tendency toward ideology, although it may be promoted from the grassroots and not by the State. It is precisely the relationship between culture and politics, the tension between the ideological and the corporal, and the role of domination and social consensus through the concepts of habit, affection, and multitude that Beasley-Murray manages, which may explain the vegan endeavor for a revolution.

2. Methodology

The community of vegans constitutes a multitude of bodies whose diversity and mobility make it difficult to locate their contours and borders. Veganism is a conglomerate of worlds, with different economic, social, and territorial contexts. To approach it, I carried out a three-year multi-situated ethnography of the community of vegans who live in the metropolitan area of Puebla, including the analysis of their social networks. I selected the Facebook group “Vegans and Vegetarians of Puebla,” a virtual community representative of local veganism where most of its members live in the cities of Puebla, Cholula, and Atlixco. The case of Puebla is relevant for the following reasons: (1) it is an average city in Mexico that, not being the capital of the country, serves to exemplify the case of many other Latin American cities with similar situations. (2) Rapid urban and economic development has led Puebla to become an aspiring global city. (3) The rich cultural life of Puebla has favored the proliferation of groups with global tendencies with different philosophies and projects like the vegan movement.

I conceive the corpus of study based on a rhizomatic model or multiple networks, inspired by Deleuze and Guattari. The center or starting point is the multisituated critical ethnography of the Facebook group to follow from there its ramifications that arise both virtually (online) and face-to-face (offline). Research has been structured and reoriented by the various paths that the movement undertakes, recording its evolution and manifestations, the spaces it appropriates, the practices it inaugurates, and the connections it establishes. My ethnography pursues a logic of encounters and affects which become evident in “critical events,” like fairs or tianguis days in which the vegans congregate, as well as their interactions on the Internet.

3. Results

I propose that vegan activists are fed by a belief system that embraces the ideals of the revolution, but this revolution is simply the counterpart of the modern project of civilization. For this reason, from a posthegemonic perspective, it cannot be conceived as an authentic revolution: it does not break with the same social and epistemic categories of modernity. On the other side, merchants, who are not interested in a traditional, leftist revolution, are those who, through their habits and affections, carry out more effective or “revolutionary” transformation processes.

To qualify as a revolution in the traditional Marxist sense, the vegan movement would have to have a greater hierarchical organization (with leaders and militants), a more formal and constant type of membership, a political agenda with defined goals, and a more complex communication system. But perhaps most importantly, it would require objectives and actions aimed at mobilizing a social uprising—ready to assume the use of violence—against an imperial or state regime. The relatively small number of members of the vegan movement, their nonviolent strategies, and their nongovernmental political positioning distance them from the singularities of the revolution. The anti-speciesism vegan movements want a radical change mainly in ethical and consumer habits, and they do not seek to overthrow an economic or political system.

In this way, the vegan revolution may not qualify as a revolution for two main reasons: (1) the changes it intends to establish are situated in the same linear historicism. Vegan ideology considers the movement to be an engine of progress in a linear vision of history. For some activists, the “Vegan revolution” makes sense as a project of evolution and civilization that follows the colonialist logic. For others, it is the continuation of the utopian struggle of the 1960s counterculture that still believes in the overthrow of capitalism. But the shift to a vegan diet does not alter the economic system, only exchanges one element of the equation for another, leaving the capitalist mode of production untouched, and (2) they do not abandon the paradigm of modernity. Following Deleuze, “modern societies have replaced inoperative codes by a univocal overcodification, and lost territorialities by a specific territorialization” [4]. Hence, the implantation of veganism resorts to overcoding and segmentation by means of binary separation (vegans-non-vegans). The binary conflict is part of the modern paradigm, in which one party tries to defeat the other, in order to impose itself on the top of civilization. Since its inception, veganism has placed its participants in a tension with its otherness. For instance, pro-animal organizations act by defining binary subjectivities, “we” and “they,” or “defenders,” those who support charity and protection of animal rights, and “enemies,” supermarket chains and distributors of meat products and its final consumers, carnivores themselves.

In its posthegemonic version, veganism calls into question universal, anthropocentric humanism, rational capacity, the Truth, History, and other grand narratives. It does not believe in pre-constituted objects or subjects, pre-existing unitary actors, or final purposes, only "contingent foundations." There is no certainty of being. Instead of thinking rationally, in binary form (the opposites "nature-culture"), it becomes "relational," looking for connections. In fact, posthegemonic veganism is not interested in the concept of revolution (unless it functions as a commercial slogan). The multitude of vegans does not have a defined project of society. For the multitude, a revolution would be understood as alternating flows that combine to form currents of invention. The actual transformation carried out by affective and habitual means is not called revolution. Revolution implies an ideological tradition, while veganism does not fit the parameters of communism or other theoretical constructions. It is not a class struggle; rather, it is the biopolitical change of the multitude toward other ways of seeing the world, an altermodernity. This approach consists not in attacking the ruling powers but in reshaping production, distribution, and consumption processes. Ethnographic results show that vegan groups do not have a center; they work globally, having subsidiaries or counterparts in several cities of the world—small cells that replicate global patterns with local tonalities. It is not about changing cities but networks of bodies. Veganism as immaterial work can only be conducted in common, inventing new independent networks of cooperation. That said, it is preferable to refer to veganism not as a revolution (at most a biopolitical one), but as a movement toward political emancipation through collective action. It is an abolitionist liberation movement, but what sets veganism apart is that what is at stake is life itself, not only human life but that of all kinds of animal species.

4. Conclusions

The vegan collective has woven its own networks, both virtual (mainly on Facebook) and face-to-face. It is these networks that give strength to and sustain the economy of veganism: its ability to appropriate spaces, its mobility and flexibility to spread, and its parameters of inclusion and exclusion, making vegan groups appear within the cartography of biopower. When Puebla's vegan sellers and distributors organize tianguis and fairs, they are demarcating their own topologies, moving through territories of the city and appropriating spaces in which to interact affectively. The bodies attracted to vegan events revolve around environments that arouse emotions and originate resonances of different intensities. The combination of these elements produces a pattern or assemblage. The broader and more solid the networks of this assemblage are, the greater the vegan collective's share of power and social influence will be. And so the "Vegan Revolution," an ideological burden—a discursive strategy that refers to an unattainable utopia—would give way to a serious social transformation, always in process.

Author details

Victor Fonseca López

School of Arts and Humanities, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, Mexico

*Address all correspondence to: fonsvic@yahoo.com

Aerodynamic Coefficient Calculation of a Sphere Using Incompressible Computational Fluid Dynamics Method

Carlos Duran-Hernandez, Rene Ledesma-Alonso, Gibran Etcheverry and Rogelio Perez-Santiago

Abstract

This chapter provides essential information regarding the calculation of the main forces acting on a body moving through a fluid. Drag and lift coefficients are analyzed with finite-element method (FEM) software using an incompressible computational fluid dynamics (ICFD) simulation. A sphere is analyzed and simulated as a static object immersed in a fluid. The results are validated by comparing them with experimental results found in the literature. The adequacy of the simulations is demonstrated by their close approximation to the experimental results.

Keywords: computational fluid dynamics, ICFD, finite-element method, drag force, drag coefficient, lift force, lift coefficient

1. Introduction

When an object is immersed in a channel, a chamber, a wind tunnel, or even within our bodies, for instance, when a blood cell or clot moves through the heart or along the smallest arteries in the body, a fluid-structure interaction phenomenon occurs between the object and the surrounding fluid. This moving or static object (i.e., a cylinder, a sphere, a plate, a clot stuck in a blood vessel) will experience a drag force in the direction of the fluid due to pressure and shear stress forces [1]. The drag force produced is a consequence of the velocity gradient of the fluid relative to the object, and it depends on the drag coefficient and the geometry of the object.

Hydrodynamic forces are related to the viscosity and inertia of the fluid. Close to the surface of the object, momentum is transferred through a layer in which viscosity plays an important role in determining the width of the layer and the way the velocity changes within it. This viscous zone is called the boundary layer [2]. Far from the solid boundary, the viscous effect is less important and the inertia of the fluid particles decides the fate of the flow. The comparison between the viscous and inertial forces acting on a fluid is given by the Reynolds number (Re), a dimensionless quantity.

Inertial forces are represented by the density (ρ) times the characteristic velocity of the fluid (V) times a length scale (L) according to the surface over which the fluid is moving. The viscous part is characterized by the dynamic viscosity of the fluid (μ) [2]. Thereby, Re can be expressed using Eq. (1):

Re = Inertial forces / Viscous forces = ρV²L² / (μVL) = ρVL / μ   (1)

For low Re values (below 10²), viscous forces dominate the fluid motion; the fluid follows a well-defined straight path, moving in parallel layers, a behavior called laminar flow. For high Re values (above 10⁵), the fluid follows an irregular motion across the section of the channel or tube in which it is moving. This motion is known as turbulent flow, and when it occurs, the viscous forces are so small that they can be neglected. The stage between these ranges is called the transition regime, which typically occurs around Re ≈ 3000. As can be seen in Figure 1, the laminar regime follows a linear behavior, and as it approaches the transition regime and then the turbulent region, the behavior becomes nonlinear.

Figure 1.

Characteristic drag curve acting in a sphere for different Reynolds number ranges. Transition of the boundary layer to turbulent region is represented by the dip at the last part.

The relative motion of the fluid produces a hydrodynamic force in the direction parallel to the flow, also known as the drag force. According to the dimensionless analysis in [3], the drag force is represented by a drag coefficient CD, another dimensionless number, which is directly related to the Reynolds number. Depending on the Re observed for the object, a specific drag coefficient will occur. Hence, not only the velocity is involved, but also the drag force FD produced on the object, its frontal area A, and the density of the fluid ρ. This number is used to model all dependencies on the shape, rotation, and flow conditions of the object [4], and it can be expressed using Eq. (2):

CD = FD / (0.5 ρv²A)   (2)
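As a brief illustration of Eq. (2), the following sketch computes a drag coefficient from a drag force, using the air properties and sphere frontal area reported later in the methodology; the drag force value itself is a hypothetical placeholder, not a simulation result.

```python
# Minimal sketch of Eq. (2): drag coefficient from a (hypothetical) drag force.
def drag_coefficient(f_d: float, rho: float, v: float, area: float) -> float:
    """C_D = F_D / (0.5 * rho * v^2 * A)."""
    return f_d / (0.5 * rho * v * v * area)

rho = 1.229          # air density (kg/m^3)
v = 0.704            # free-stream velocity (m/s), corresponding to Re ~ 1000
area = 3.14159e-4    # sphere frontal area (m^2)
f_d = 4.8e-5         # hypothetical drag force (N)

print(f"C_D = {drag_coefficient(f_d, rho, v, area):.2f}")  # ~0.50
```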

These coefficients are obtained in a simulation through the use of ICFD analysis. Nevertheless, these cases represent a simple calculation because the object interacting with the fluid is always stationary and only behaves as an obstacle for the fluid. However, when the object also presents a motion caused by the fluid (a nonstationary object), a different approach is needed and an FSI simulation is required. An example of such problems is presented in [5]. For the aim of this work, the interest is focused only on stationary objects and how they affect the surrounding fluid.

Section 2 of this chapter presents the process to obtain these forces and coefficients for a static sphere in an ICFD simulation, a comparison between the two solution approaches available in the software, and the configuration of the simulations in finite-element (FE) software. The results obtained for the ICFD simulation of the sphere and a validation test to select the correct mesh size are presented in Section 3. Future work and the conclusions of the chapter are given in Sections 4 and 5, respectively.

2. Methodology

In order to compare the simulations with experimental results, it is necessary to have the drag curve of a sphere. According to what is established in [6], the principle of similarity reduces the many variables involved (velocity, density, dynamic viscosity, and a dimension of the body, e.g., the diameter of the sphere) to a single variable, the Reynolds number. For different diameters, velocities, and fluids, a characteristic drag curve can be obtained using Re; it is shown in Figure 1.

Once the curve is obtained, an ICFD analysis is conducted on a sphere immersed in a constant flow to obtain drag and lift coefficients over diverse Re ranges. The simulation is configured using the LS-DYNA software, and the boundary conditions of the fluid domain (see Figure 2) are stated as follows: an inlet wall representing the entrance of the fluid, an outlet wall where the fluid leaves and the pressure reaches zero, and nonslip condition walls where the fluid has zero velocity relative to the boundary. For the ICFD analysis, the object is also configured with a nonslip condition, and it behaves as an obstacle for the fluid.

Figure 2.

Boundary conditions of the ICFD problem.

The test is conducted for Reynolds numbers of 1, 10, 10², 10³, 10⁴, 10⁵, 10⁶, and 10⁷, which are the values present in Figure 1. The fluid used is air, with density ρ = 1.229 kg/m³ and dynamic viscosity μ = 1.73 × 10⁻⁵ Pa·s. The sphere has a frontal area (perpendicular to the flow) of 3.14159 × 10⁻⁴ m² (where A = πd²/4). The velocities are obtained using Eq. (1), where L is the diameter d of the sphere.
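As an illustration of how the inlet velocities can be derived from Eq. (1) for each target Reynolds number, the following sketch uses the air properties stated above and a sphere diameter obtained from the frontal area (d = √(4A/π) ≈ 0.02 m); it is a helper calculation, not part of the LS-DYNA setup itself.

```python
# Minimal sketch: inlet velocity V = Re * mu / (rho * d) for each target Re.
import math

rho = 1.229        # air density (kg/m^3)
mu = 1.73e-5       # air dynamic viscosity (Pa s)
area = 3.14159e-4  # sphere frontal area (m^2)
d = math.sqrt(4.0 * area / math.pi)  # sphere diameter, ~0.02 m

for re in [1, 10, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7]:
    v = re * mu / (rho * d)
    print(f"Re = {re:>10.0f}  ->  V = {v:.4g} m/s")
```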

2.1 Calculation process

LS-DYNA has two different approaches to solve a problem: shared memory parallel (SMP) processing and massively parallel processing (MPP) [7]. SMP processing distributes the model-solving process over multiple processors on the same computer. MPP allows running the problem over a cluster of machines or using multiple processors on a single computer, and it is fundamental if an implicit analysis is required.

Figure 3 shows an example of SMP vs. MPP. The model used is a 3D wind tunnel with an object placed at the center as an obstacle. For SMP, a mesh size of 0.008 is configured; for MPP, a finer mesh size of 0.0055 is used. Four different random velocities are chosen for the simulation, and the numbers of CPUs used for SMP and MPP are 4 and 8, respectively. The results demonstrate that MPP finished the calculations in less time than SMP in all cases, even though MPP uses a finer mesh than SMP. For this reason, MPP is the method selected for further simulations in order to reduce computational time. The resulting values obtained with the two methods are almost identical.

Figure 3.

Comparing two different setups. It is observable that using MPP takes less computational time than SMP even with a finer mesh. Scalability also helps to reduce time.

3. Results

3.1 Validation test

Before obtaining the coefficients for the different Reynolds numbers, a mesh study is necessary to select the most adequate size, balancing computational time and accuracy of results. A low-quality mesh yields inaccurate results; hence, different configurations are tested for comparison (see Figure 4). The lift coefficient of a sphere is expected to be CL ≈ 0. An initial mesh size (configuration "X") is used to compare drag and lift coefficients for Re = 1000; however, this configuration is not suitable because the resulting CL is far from the expected value. This configuration was obtained by using the default parameters of the software and then varying them in order to reach the expected CD. Even though the CD obtained with configuration "X" is close to the desirable value, CL is far from expected, and the simulation takes almost 3 h to finish. For this reason, five other options are configured for Re = 1000, a running time of 3 s, and a constant velocity of 0.704 m/s, in search of the desirable values of CD and CL.

Figure 4.

Configuring different mesh sizes for the sphere.

The different mesh sizes used for this Re should give CD ≈ 0.5 and CL ≈ 0. Configurations "A" and "E" have, respectively, the coarsest mesh with the lowest number of elements and the finest mesh with the highest number of elements. According to the results shown in Table 1, configuration "X" yields the highest computational time. From configurations "A" to "C," the computational time is considerably lower than for "E" and "X," but CD is far from expected. Configuration "D" has a close CD but a higher CL than the previous variations, whereas configuration "E" has the results closest to the expected CD ≈ 0.5 and CL ≈ 0 and a computational time similar to that of "X." For these reasons, "E" is the configuration selected for the next test.

Configuration | No. of elements | Drag coefficient (CD) | Lift coefficient (CL) | Running time (min; 16 CPUs with MPP)
A | 19,100 | 7.43E-01 | 8.43E-02 | 4.9
B | 36,762 | 6.79E-01 | 7.55E-02 | 13.5
C | 39,892 | 3.74E-01 | 2.79E-02 | 28
D | 65,446 | 5.54E-01 | 1.20E-01 | 35
E | 78,392 | 4.27E-01 | 2.43E-02 | 150
X | 70,730 | 5.67E-01 | 1.61E-01 | 162

Table 1.

Validation test for the sphere using different mesh sizes. The simulations were configured using a Re=1000 and a total running time of 3 seconds.


3.2 Sphere drag simulation

Once the mesh is selected, the simulation is configured using the parameters listed in Section 2. The results observed during the tests demonstrated that the Navier-Stokes formulation used by LS-DYNA is not suitable for ICFD problems where turbulent conditions are present (Re > 10⁵). Therefore, the simulations are only trusted for 1 ≤ Re ≤ 100,000. The results obtained for the drag and lift coefficients using configuration "E" are shown in Figures 5 and 6.

Figure 5.

The curve obtained during simulations is presented in red. The turbulent part is not accurate as expected due to the Navier-Stokes equation used by the software.

Figure 6.

Lift coefficients of the sphere are expected to be approximately zero. All results are between 0 and 0.05.

It is observed that the higher the Reynolds number, the lower the drag and lift coefficients. The software is capable of managing the laminar zones of these cases without any problem.

4. Future work

The aim of this work is to understand how an object interacts with a fluid when it is only present as an obstacle. Future investigations focus on configuring objects as nonstationary, i.e., as a fluid-structure interaction (FSI) problem; the objective is to understand how other geometries affect the fluid when they have their own motion. Simulating artificial valves is the main interest of future approaches, especially those shown in [5]. Future work includes not only an FSI simulation of a cardiac valve but also a parametric study in which the characteristic velocity and frequency of the fluid and the elastic modulus of the material are varied to create different situations. This will allow us to build a data matrix aimed at determining possible control parameters. Once these factors are settled, a modification of the electric analogy of the cardiac cycle represented by the Windkessel model [8, 9, 10] will be conducted in order to replicate the behavior of a valve.

5. Conclusions

The aim of this research is focused only on low-velocity fields; hence, the inaccuracies observed in the turbulent region can be neglected because they occur only for high-velocity fields.

Using the MPP approach is imperative for these kinds of simulations, where an implicit analysis is needed. Parallelizing the workload helps to decrease the computational time by almost 80% compared to the SMP approach.

Conducting a validation test is essential in order to select a proper mesh quality (resolution), which directly impacts the accuracy of the results.

ICFD analysis is a useful tool for understanding hydrodynamic problems, providing the user with information additional to that obtained with the experimental particle image velocimetry method. The main advantage is the possibility of changing the geometry in an easier and faster manner, saving a large amount of time.

Author details

Carlos Duran-Hernandez1*, Rene Ledesma-Alonso2*, Gibran Etcheverry1* and Rogelio Perez-Santiago2

1 Department of Computing, Electronics and Mechatronics, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, Mexico

2 Department of Industrial and Mechanical Engineering, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, Mexico

*Address all correspondence to: jose.duranhz@udlap.mx; rene.ledesma@udlap.mx and gibran.etcheverry@udlap.mx

Comparison of Dispersion Measures for a Territory Design Problem

María Gabriela Sandoval Esquivel, Roger Z. Ríos-Mercado and Juan Díaz

Abstract

Territory design consists of dividing a geographic area into territories according to certain planning criteria. In most applications, it is desired that the resulting territories are balanced, connected, and compact. We analyze different ways of measuring dispersion, each cast in a particular mixed-integer linear programming model. One is based on p-centers and the other on p-medians. The experimental work includes a comparison between these two models in terms of robustness.

Keywords: territory design, p-center, p-median, dispersion measures, integer programming

1. Introduction

Territory design deals with the discrete assignment of basic units (BUs) (such as zip-code areas, blocks, etc.) into clusters with restrictions defined by planning criteria. The need for a territory design plan is present in several social planning contexts such as political districting and commercial territory design. The motivation to divide a geographical area is related to the fact that smaller areas are easier to manage. In practice, a territory design requires a lot of time and effort and, in most cases, needs to be performed recurrently as the environment evolves and new needs develop. Thus, it is desired to have an automatic or computational method to support this decision-making process.

Mathematical formulations of territory design problems have been developed since 1965. As noted in Kalcsics et al. [1], most of the research related to territory design problems is tied to specific applications and thus adopts specific planning criteria accordingly. However, three types of requirements can be identified in most territory design applications: balance, connectivity, and compactness. Balance refers to having territories of the same size, where size is defined in terms of an activity measure related to the problem application. For example, the size of a territory may be defined as the number of inhabitants in an area or the amount of workload involved. Connectivity constraints require all the BUs of a territory to be connected. Compactness is another spatial characteristic, rather vaguely defined as having the BUs of a territory as close together as possible.

Compactness is crucial in most applications of territory design because it leads to shorter (less expensive) routes when distributing products or visiting the BUs later. Having territories that are as independent as possible is the aim of most applications of territory design. Despite the importance of the compactness requirement, there is no consensus in the literature regarding the best practice for measuring it.

Compactness measures used in territory design models can be categorized into two main classes: one that considers the geometric shapes of territories and another that is concerned with the distances between BUs within a territory. The first type of measure compares spatial characteristics of the shape of territories with regular geometric shapes such as circles or convex polyhedra. Distance-based measures, on the other hand, use diverse dispersion measures to describe how close together the BUs within a territory are. Bear in mind that maximizing compactness is equivalent to minimizing dispersion. These compactness measures are often used as objective functions in integer programming minimization problems.

1.1 Geometric measures of compactness

The compactness measure used in Kalcsics et al. [1] and Butsch [2] is based on the geometrical shape of districts. Both divisional methods take into consideration the convex hulls of districts to decide the placement of the bisecting division in each iteration. Thus, the solutions obtained by their methodology are inherently compact.

1.2 Distance-based measures of compactness

This type of measure describes how close together the BUs within a territory are. Diverse measures of dispersion are used for this purpose, since maximizing compactness is equivalent to minimizing dispersion. For instance, dispersion can be measured by the maximum distance between two nodes in a district, as in Ríos-Mercado and Salazar-Acosta [3] and in Gliesch et al. [4]. If district centers are defined in a model, they are useful in dispersion metrics that aggregate the distances from the BUs to their assigned territory center. A first way of aggregating distances is a simple sum [5, 6, 7] or a weighted sum [1], with weights corresponding to activity measures. A second method of aggregation is a metric called the moment of inertia, which is basically the sum of the squared distances [8].
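To make these distance-based measures concrete, the sketch below (illustrative only; the coordinates and the territory center are made up) computes three common dispersion values for a single territory: the diameter (maximum pairwise distance), the sum of distances to the center, and the moment of inertia.

```python
# Illustrative dispersion measures for a single territory of basic units (BUs).
# Coordinates are made-up 2D points; in practice the distances come from the instance data.
import math
from itertools import combinations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dispersion_measures(bus, center):
    diameter = max(dist(a, b) for a, b in combinations(bus, 2))   # max pairwise distance
    sum_to_center = sum(dist(b, center) for b in bus)             # p-median-style aggregation
    inertia = sum(dist(b, center) ** 2 for b in bus)              # moment of inertia
    return diameter, sum_to_center, inertia

if __name__ == "__main__":
    territory = [(0, 0), (1, 2), (2, 1), (3, 3)]
    center = (1, 2)                                               # assumed territory center
    print(dispersion_measures(territory, center))
```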

In this short chapter, we present a comparison between two distance-based measures of dispersion that involve the definition of a territory center. These metrics are the p-center measure and the p-median measure. In the following section, we describe the models and the methodology used for this comparison. Next, we present some empirical results and finally the conclusions.

2. Mathematical models

The aim of this chapter is to compare the performance of the p-center and the p-median metrics for dispersion when used as the objective function of models for a territory design problem. The models used for this analysis were introduced by Ríos-Mercado and Fernández [9] and by Salazar-Aguilar et al. [5]. We consider a version of these models without connectivity constraints.

Below we show the mathematical formulation of the p-center-based model (PC). Let V be the set of BUs that represent the geographical area of study. In addition, let A be the set of activity measures that are considered for the balance constraints of the practical problem.

For all the BUs in V, the following parameters are defined:

  • dij: Euclidean distance between the BUs i and j.

  • wia: value of activity measure a in A that corresponds to BU i.

In addition, let xij be the binary decision variable that indicates whether BU i belongs to the territory centered at BU j (xij = 1) or not (xij = 0). Clearly, xii = 1 indicates that BU i is a territory center.

Objective function (3) minimizes the p-center dispersion measure, defined as the maximum distance between any basic unit in a territory and its center. Equation (4) imposes the predefined number of territory centers p. Constraints (5) ensure the assignment of each BU to exactly one territory. The balance constraints are defined by (6) and (7), in which the lower bounds (LBa) and upper bounds (UBa) on the resulting district sizes are established for each activity measure a as follows:

$$LB_a = (1 - \tau)\,\mu_a, \qquad \mu_a = \frac{1}{p}\sum_{i \in V} w_{ia} \tag{1}$$
$$UB_a = (1 + \tau)\,\mu_a \tag{2}$$

where τ is a tolerance parameter, set to 0.05 in this case, and μa denotes the average (target) territory size for activity measure a.

$$\min z = \max_{i,j \in V} \; d_{ij}\, x_{ij} \tag{3}$$

Subject to:

$$\sum_{i \in V} x_{ii} = p \tag{4}$$
$$\sum_{j \in V} x_{ij} = 1, \quad \forall\, i \in V \tag{5}$$
$$\sum_{i \in V} w_{ia}\, x_{ij} \ge (1 - \tau)\,\mu_a\, x_{jj}, \quad \forall\, j \in V,\ a \in A \tag{6}$$
$$\sum_{i \in V} w_{ia}\, x_{ij} \le (1 + \tau)\,\mu_a\, x_{jj}, \quad \forall\, j \in V,\ a \in A \tag{7}$$
$$x_{ij} \in \{0, 1\}, \quad \forall\, i, j \in V \tag{8}$$

The only difference between this model and the p-median-based model (PM) is the objective function which is defined in (9). The p-median-based objective is to minimize the total distance between each BU in a territory and its center.

$$\min z = \sum_{i,j \in V} d_{ij}\, x_{ij} \tag{9}$$
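The authors solved both models with the CPLEX solver. Purely as an illustration, the following Python sketch encodes the PC model (3)-(8) with the open-source PuLP/CBC toolchain, linearizing the min-max objective (3) with an auxiliary variable z; replacing the objective with Eq. (9) yields the PM model. The instance data, the single activity measure, and the looser tolerance τ = 0.2 (used so that the integer-weight toy instance remains feasible, whereas the chapter uses τ = 0.05) are assumptions for this example.

```python
# Illustrative PuLP/CBC encoding of the p-center model (3)-(8); the original study used CPLEX.
# Replacing the objective with Eq. (9) yields the p-median model.
import math
import pulp

# Made-up instance: coordinates and a single activity measure per BU (the model allows a set A).
coords = [(0, 0), (1, 3), (4, 1), (5, 5), (2, 2), (6, 0)]
w = [3, 2, 4, 1, 2, 3]                         # activity measure values
V = range(len(coords))
p, tau = 2, 0.2                                # looser tolerance than 0.05, for feasibility here
d = {(i, j): math.dist(coords[i], coords[j]) for i in V for j in V}
mu = sum(w) / p                                # average territory size, Eq. (1)

prob = pulp.LpProblem("p_center", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, j) for i in V for j in V], cat="Binary")
z = pulp.LpVariable("z", lowBound=0)

prob += z                                                   # objective (3), linearized below
for i in V:
    for j in V:
        prob += z >= d[i, j] * x[i, j]                      # z bounds every assigned distance
prob += pulp.lpSum(x[j, j] for j in V) == p                 # (4): exactly p centers
for i in V:
    prob += pulp.lpSum(x[i, j] for j in V) == 1             # (5): unique assignment
for j in V:
    prob += pulp.lpSum(w[i] * x[i, j] for i in V) >= (1 - tau) * mu * x[j, j]   # (6)
    prob += pulp.lpSum(w[i] * x[i, j] for i in V) <= (1 + tau) * mu * x[j, j]   # (7)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("dispersion:", pulp.value(z))
print("centers:", [j for j in V if x[j, j].value() > 0.5])
```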

3. Experimental work

This section describes the methodology implemented to compare both dispersion measures. We tested 100 artificially generated instances of different sizes, based on real-world data from a commercial territory design problem. For each instance, we obtained the optimal solutions for both the PC and the PM models using the CPLEX solver. For each instance, the optimal solution (the values of the decision variables xij) of the p-center problem, ZC*, was evaluated with the objective function of the p-median problem. The resulting value PM(ZC*) was compared with the optimal value of the p-median problem, PM(ZM*), as follows:

$$RD_{PM} = \frac{PM(Z_C^*) - PM(Z_M^*)}{PM(Z_M^*)} \tag{10}$$

In contrast, for each instance, the optimal solution of the p-median problem, ZM*, was evaluated with the objective function of the p-center problem, resulting in PC(ZM*). Accordingly, this value was compared with the optimal value of the p-center problem, PC(ZC*), as follows:

$$RD_{PC} = \frac{PC(Z_M^*) - PC(Z_C^*)}{PC(Z_C^*)} \tag{11}$$

We call these measures the “relative differences” for each dispersion measure (RDPM for the p-median and RDPC for the p-center measure). Relative differences describe how far the solution of one model is from the optimum of the other model under each dispersion measure.
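A minimal sketch of the cross-evaluation in Eqs. (10) and (11) is given below, assuming the four objective values have already been obtained from the solver; the numeric values in the example call are made up.

```python
# Cross-evaluation of optimal solutions under both dispersion measures, Eqs. (10)-(11).
def relative_differences(pm_of_zc, pm_of_zm, pc_of_zm, pc_of_zc):
    """pm_of_zc = PM objective evaluated at the PC-optimal solution, and so on."""
    rd_pm = (pm_of_zc - pm_of_zm) / pm_of_zm     # Eq. (10)
    rd_pc = (pc_of_zm - pc_of_zc) / pc_of_zc     # Eq. (11)
    return rd_pm, rd_pc

# Example with made-up objective values: here the PC solution is nearly PM-optimal.
print(relative_differences(pm_of_zc=105.0, pm_of_zm=100.0, pc_of_zm=14.0, pc_of_zc=10.0))
```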

The instances tested had two activity measures and five districts to be formed and ranged in sizes of 60, 80, 100, 120, and 150 BUs. We tested 20 instances of each size. The results are shown in the following figure (Figure 1). The horizontal axis shows the test instance numbers. Test instances are numbered with respect to their sizes. That is, instance numbers 1–20 correspond to the test instances with 60 BUs, instance numbers 21–40 correspond to the test instances with 80 BUs, and so on. In green, we show RDPM, and in blue, we show RDPC.

Figure 1.

Relative difference in dispersion metrics.

As can be seen from the figure, the relative differences RDPM (green) are generally better (closer to zero) than RDPC (blue); only in five instances were the RDPC (blue) values very close to zero. This means that ZC*, the optimal solution of the PC model, is generally close to optimal for the PM model as well. Therefore, by using ZC*, we are better off under both models than by using ZM*. We conclude that model PC is more robust than model PM.

4. Conclusion

In this chapter, we have shown a comparison of two models for a territory design problem with different dispersion measures: the p-center and the p-median. Both models were tested with 100 artificially generated instances, and the optimal solutions obtained were evaluated with the dispersion measure of the other model. The optimal solution of the PM model, ZM*, was evaluated with the objective function of the PC model, and its value was compared with the optimal value of the PC model through the measure RDPC. The same was done for the optimal solution of the PC model, ZC*, and the results were compared through RDPM. The relative differences RDPM were lower than RDPC for most instances. A relative difference closer to zero means that the assignment of BUs to territory centers obtained from the optimal solution of one model gives a dispersion value very close to that of the optimal solution of the other model. These results show that the PC model is more robust than the PM model. Future work will consider the comparison of models with other dispersion measures that are not center-based, since in practice the definition of a territory center has no practical meaning.

Acknowledgements

The research of the first author was supported by a scholarship for doctoral studies from UDLAP.

Author details

María Gabriela Sandoval Esquivel1*, Roger Z. Ríos-Mercado2 and Juan Díaz1

1 Department of Computers, Electronics and Mechatronics, Universidad de las Américas Puebla, San Andrés Cholula, Puebla, Mexico

2 Graduate Program in Systems Engineering, Universidad Autónoma de Nuevo León, San Nicolás de los Garza, Nuevo León, Mexico

*Address all correspondence to: maria.sandovalel@udlap.mx

A 3D Spatial Visualization of Measures in Music Compositions

Omar Lopez-Rincon and Oleg Starostenko

Abstract

Most works on music visualization are intended for composition and the development of arrangements, but very few are designed for data analysis. This chapter proposes a generalized model, based on a numeric analysis of compositions, capable of projecting high-dimensional data into a 3D space to represent pieces of music. A multiresolution extraction with vectors and Euclidean distances is used to analyze the compositions and visualize their main features. The relationships between the measures of the compositions are projected. This also provides the ability to use the resulting distances as an objective function to measure the performance of a music composition system, by comparing the extracted features, as a profile of the music piece, between the original batch and the generated pieces at different dimensional levels. With these metrics, there is a numeric evaluation that could be minimized for algorithmic music generation.

Keywords: music analysis, arts and humanities, music computing, high-dimensional visualization

1. Introduction

Algorithmic composition is not new, because tonal Western music composition follows rules, structures, and guidelines that make it algorithmic. Before the beginning of this computational research area, methods of algorithmic composition had already been proposed; one of the most common attempts to replicate the ability of music composition was the “Mozart’s dice” technique. Another proposed method was the “species counterpoint” system for counterpoint composition [1]. These methods share the same problem with the new computational methods: after more than 50 years, there is still no metric to evaluate the resulting compositions or the system performance. The most common way to obtain a numeric result is to compare the compositions with the references contained in the example music batch.

Nowadays, neural networks are the main research direction for replicating the music composition ability or for extracting human creativity features [2]; the lack of metrics remains the main problem, both for determining the loss function used to optimize the networks and, more fundamentally, for knowing what needs to be measured [3].

A structured rule system, known as a knowledge database, is often avoided due to the time-consuming effort of extracting and structuring the general rules for composing in a given style or genre. Another problem of this rule-based composition approach is that it may only create a specific type of composition with the extracted guidelines [4]. These are the main reasons why most researchers try not to use a rule-based system.

The model can extract features of a composition in MIDI format, selected by the user as the reference for the system. These characteristics then help to build a composition profile, measured with the Euclidean distance in a self-similarity matrix, and by comparison with each of the references in the higher dimension, they are projected into a lower one. The model searches for specific features in the composition, such as harmonies in 4/4 measures.

When a composition is being made, the composer has a single goal: to express something. The user has this goal in mind when selecting the reference pieces, so that the system can extract features and then replicate those characteristics in new music compositions. This is also useful when the selected references are few in number, which is a constraint when using deep learning methods, since they require large datasets [5]. The analysis of the harmony or melody contour has been studied to find the features of music pieces [6], but very few tools have been developed to study them as numeric values. Other feature-extraction methods are Markov models and N-grams [7, 8], but their main goal is generation, and they tend to produce repetitive results.

The rest of this chapter is organized as follows. In Section 2, the proposed model for feature extraction of the reference piece of music is detailed. In Section 3, results obtained by testing the proposal are presented. In Section 4, conclusions are described.

2. Methodology

The MIDI file format encodes events or messages that describe notes (tones) at specific moments called ticks, the smallest unit of time, along with other event descriptors such as instruments, volume, effects, etc. Each MIDI note can have a value between 0 and 127. We need to identify the collections of events in which the note_on and note_off events are described, so we can transform them into a list of notes with channel, note, volume, and duration properties. A typical list of events inside a MIDI file structure is shown in Figure 1.

Figure 1.

MIDI events description (left) and conversion into note events (right).
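As an illustration of this conversion, the sketch below uses the third-party mido library (an assumption; the chapter does not name a specific MIDI toolkit) to pair note_on and note_off events into note records with channel, pitch, volume, start tick, and duration.

```python
# Sketch of converting MIDI note_on/note_off events into note records,
# using the third-party 'mido' library (an assumed toolkit, not named in the chapter).
import mido

def midi_to_notes(path):
    mid = mido.MidiFile(path)
    notes, open_notes = [], {}                 # open_notes: (channel, pitch) -> (start, velocity)
    for track in mid.tracks:
        tick = 0
        for msg in track:
            tick += msg.time                   # msg.time is a delta time in ticks
            if msg.type == "note_on" and msg.velocity > 0:
                open_notes[(msg.channel, msg.note)] = (tick, msg.velocity)
            elif msg.type in ("note_off", "note_on"):   # note_on with velocity 0 ends a note
                start = open_notes.pop((msg.channel, msg.note), None)
                if start is not None:
                    notes.append({"channel": msg.channel, "note": msg.note,
                                  "volume": start[1], "start": start[0],
                                  "duration": tick - start[0]})
    return notes, mid.ticks_per_beat           # ticks_per_beat = pulses per quarter note

# notes, tpq = midi_to_notes("song.mid")
```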

We then take the last tick across all the sequences of events to obtain the longest length L in pulses. With this information, we can create a matrix with one row per instrument and with as many columns as the last tick. In the above example, we obtained the last tick and the number of channels for the different instruments: the matrix has a width of 36,864 (last tick) and 4 rows (one for each instrument). The MIDI file also specifies the length of a quarter note in pulses; in this example, the quarter note length is 1024, so multiplying it by four gives 4096, which is the length of each measure (ml) in each instrument row of the file, as in Eq. (1), where the length L has the value of 36,864 in this example.

$$S = \underbrace{\begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix}}_{n \times L} = \underbrace{\begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix}}_{4 \times 36{,}864} \tag{1}$$

After creating the matrix, we fill each row with the values of the notes for the durations specified in the MIDI file. In the example file, the first note is 67; it starts at pulse 3072 and ends at pulse 4096, so we fill those 1024 pulses with the value of the note (67). Since we know that each measure has an ml of 4096 pulses, we can compute the total number of measures in the file for each instrument as in Eq. (2):

$$\text{total measures} = \frac{L}{ml} = \frac{36{,}864}{4096} = 9 \tag{2}$$
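The following sketch shows how the matrix of Eq. (1) can be filled and the measure count of Eq. (2) computed, assuming the note records produced above, a 4/4 signature, and a one-to-one mapping between MIDI channels and matrix rows (an assumption made for illustration).

```python
# Sketch: fill an (instruments x pulses) matrix from note records and count 4/4 measures.
# Assumes the note records produced by midi_to_notes() above and numpy.
import numpy as np

def build_piano_roll(notes, ticks_per_beat, n_instruments, length):
    ml = 4 * ticks_per_beat                    # measure length in pulses (4/4 assumed)
    S = np.zeros((n_instruments, length))      # Eq. (1): n x L matrix of zeros
    for note in notes:
        row = note["channel"]                  # one row per channel (assumed mapping)
        S[row, note["start"]:note["start"] + note["duration"]] = note["note"]
    total_measures = length // ml              # Eq. (2): L / ml
    return S, total_measures

# Example from the chapter: L = 36864 and a 1024-pulse quarter note give 9 measures.
# S, n_measures = build_piano_roll(notes, 1024, 4, 36864)   # n_measures == 9
```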

The piano-roll representation of this file looks like Figure 2, where each color represents a different instrument, the left-most moment is zero and the right-most is 36,864, the lowest row represents the note with value 0 and the highest one the note 127, and middle C is represented by the key with value 60.

Figure 2.

Piano roll of an MIDI file.

The same information can be represented as the standard music-sheet notation of the same MIDI file, as in Figure 3.

Figure 3.

Music sheet of the song New-Age from the artist Marlon Roudette.

The same information can also be represented as a continuous signal, as shown in Figure 4.

Figure 4.

Excerpt from the song P.S. I Love You; the red signal represents the guitar, meaning all those notes are played at the same time.

Now that we have each instrument as a function f_i(p), where p indexes the pulses of the i-th signal, we can obtain the first derivative of each function numerically by subtracting the value at the previous pulse (Eq. (3)).

$$f'_i(p) = f_i(p) - f_i(p-1) \tag{3}$$

where p stands for the pulse of signal i (channel/instrument). The resulting differences are shown in Figure 5.

Figure 5.

Graph of the MIDI file signal differences.
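In code, the differencing of Eq. (3) can be done in a single step with numpy (illustrative; prepending a zero to keep the original length L is a convenience, not part of the chapter).

```python
# First differences of each instrument signal, Eq. (3): f'_i(p) = f_i(p) - f_i(p-1).
# Prepending a zero keeps the signal length equal to L (a convenience choice).
import numpy as np

dS = np.diff(S, axis=1, prepend=0.0)   # S is the (n_instruments x L) matrix built above
```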

The next step is to extract four representative numbers from each measure m_{i,l} of each signal. Each measure (4096 values in this example) is divided into four quarters q_{i,l,k}, each of size s = 1024 pulses. The value with the largest absolute change within each quarter is selected to represent that quarter, forming a vector of dimension four, as in Eq. (4).

$$v_{i,l} = \left( q_{i,l,1},\; q_{i,l,2},\; q_{i,l,3},\; q_{i,l,4} \right) \tag{4}$$

where i is the signal number and l the measure number; each value q_{i,l,k} of the vector is selected as in Eq. (5):

$$q_{i,l,k} = \operatorname*{arg\,max}_{x \,\in\, f'_i\left[\, l\,ml + (k-1)s,\;\, l\,ml + ks \,\right]} \left| x \right| \tag{5}$$

where ml is the measure length in pulses, s is the quarter size in pulses, and k is the index of the desired vector dimension. Now that we have the vectors of every measure of every signal, we can compute, for each signal, a distance matrix of size n × n by comparing the measures with the Euclidean distance, as in Eq. (6).

$$D_i = \begin{bmatrix} d(m_{i,1}, m_{i,1}) & \cdots & d(m_{i,1}, m_{i,n}) \\ \vdots & \ddots & \vdots \\ d(m_{i,n}, m_{i,1}) & \cdots & d(m_{i,n}, m_{i,n}) \end{bmatrix} \tag{6}$$

where d(m_{i,j}, m_{i,l}) stands for the Euclidean distance between the feature vectors of measures j and l of signal i, and n is the number of measures in the signal. Note that we experiment only with 4/4 time signatures; for other time signatures, several adjustments would have to be made.
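The sketch below implements Eqs. (4)-(6) under the stated 4/4 assumption: for each measure, the entry with the largest absolute change in each quarter is kept, and the resulting four-dimensional vectors are compared with the Euclidean distance. The helper names are illustrative.

```python
# Sketch of Eqs. (4)-(6): per-measure 4D feature vectors and the self-similarity matrix.
import numpy as np

def measure_vectors(signal, ml):
    """signal: 1D differenced signal f'_i; ml: measure length in pulses (4/4 assumed)."""
    s = ml // 4                                         # quarter size in pulses
    n_measures = len(signal) // ml
    vectors = np.zeros((n_measures, 4))
    for l in range(n_measures):
        for k in range(4):
            quarter = signal[l * ml + k * s : l * ml + (k + 1) * s]
            vectors[l, k] = quarter[np.argmax(np.abs(quarter))]   # Eq. (5): largest |change|
    return vectors                                      # row l is v_{i,l} of Eq. (4)

def self_similarity(vectors):
    """Euclidean distance between every pair of measure vectors, Eq. (6)."""
    diff = vectors[:, None, :] - vectors[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# D_i = self_similarity(measure_vectors(dS[i], ml=4096))
```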

3. Results

Self-similarity matrices have been used in waveform analysis to highlight similarity over time within songs, as a form of time series analysis. In Figure 6, we show the self-similarity matrices of the measures obtained by applying the method to a MIDI file. Each measure’s feature vector is compared with every other measure in the file using the Euclidean distance. The diagonal of each matrix corresponds to the distance of a measure to itself. Since the matrix is symmetric, the complete information is contained on either side of the diagonal. The similarity result is normalized to enhance the contrast in the visualization of the rendered image. This normalization helps to increase the distances between clusters, resulting in a better representation.

Figure 6.

Example of an image of self-similarity matrices of vectors extracted from an MIDI file.

We use the similarity matrix to project the information into a 3D environment. The resulting distances are used as a guide to place the vectors in the lower-dimensional space. In Table 1, we show an example of four vectors of dimension four and the resulting self-similarity matrix, which is used to represent them in 3D and 2D projections.

Vector components (one column per vector):

Component | A | B | C | D
1 | 0.45669 | 0.43307 | 0.43307 | 0.37795
2 | −0.05512 | −0.05512 | −0.05512 | −0.05512
3 | −0.09449 | −0.09449 | −0.05512 | −0.07087
4 | −0.05512 | 0.05512 | −0.05512 | −0.05512

Self-similarity (Euclidean distance) matrix:

 | A | B | C | D
A | 0 | 0.11274202 | 0.04591189 | 0.0822064
B | 0.11274202 | 0 | 0.11705919 | 0.12549493
C | 0.04591189 | 0.11705919 | 0 | 0.05732606
D | 0.0822064 | 0.12549493 | 0.05732606 | 0

Table 1.

Example vectors (left) and their self-similarity matrix (right) given by the Euclidean distance.
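The distance block of Table 1 can be reproduced directly from the four example vectors, as in the short check below; for instance, d(A, C) ≈ 0.0459.

```python
# Reproducing the right-hand block of Table 1 from the four example vectors A-D.
import numpy as np

vectors = np.array([
    [0.45669, -0.05512, -0.09449, -0.05512],   # A
    [0.43307, -0.05512, -0.09449,  0.05512],   # B
    [0.43307, -0.05512, -0.05512, -0.05512],   # C
    [0.37795, -0.05512, -0.07087, -0.05512],   # D
])
diff = vectors[:, None, :] - vectors[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
print(np.round(D, 8))       # e.g. D[0, 2] ~ 0.04591189, matching Table 1
```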

To reduce the dimensionality, we loop over all the vectors and compare their distances with the corresponding entries of the self-similarity matrix. In the lower dimension, we take V̂_i, compute its distance to V̂_j, and compare it with the equivalent distance in the self-similarity matrix of the higher dimension, as in Eq. (7):

$$\left| d\!\left(\hat{V}_i, \hat{V}_j\right) - D_{ij} \right| > t \tag{7}$$

where V̂_i and V̂_j are the vectors in the lower dimension and D_{ij} are the distances in the self-similarity matrix. If the difference exceeds a threshold t, we adjust the position of the vector with Eq. (8), using a given increment α, until the distances are close to the ones in the self-similarity matrix.

$$\hat{V}_i = \alpha\, \hat{V}_i + (1 - \alpha)\, \hat{V}_j \tag{8}$$
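A minimal sketch of this adjustment loop, following Eqs. (7) and (8), is given below. The random initialization, the threshold t, the step α, and the push-apart branch used when two projected vectors end up too close are illustrative assumptions beyond what the chapter states.

```python
# Sketch of the low-dimensional projection adjustment, Eqs. (7)-(8).
# Random initial positions; t, alpha, and the push-apart branch are illustrative choices.
import numpy as np

def project(D, dim=3, t=1e-3, alpha=0.95, iterations=500, seed=0):
    """D: high-dimensional self-similarity (distance) matrix, shape (n, n)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    V = rng.random((n, dim))
    for _ in range(iterations):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = np.linalg.norm(V[i] - V[j])
                if abs(d - D[i, j]) > t:                       # Eq. (7): deviation above threshold
                    if d > D[i, j]:                            # too far apart: pull V_i toward V_j
                        V[i] = alpha * V[i] + (1 - alpha) * V[j]       # Eq. (8)
                    else:                                      # too close: push V_i away (added heuristic)
                        V[i] = V[i] + (1 - alpha) * (V[i] - V[j])
    return V

# V3 = project(D, dim=3)   # 3D positions whose pairwise distances approximate D
```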

In Figure 7, we present the projection of the four vectors in 3D space (left) and 2D space (right). Vector B is painted in blue to ease the visualization of the correspondence between the projections in both dimensions.

Figure 7.

Vectors of four dimension 3D visualization (left) and 2D visualization of same vectors (right).

In the 3D and 2D projections of the vectors from the table, we can see a close correspondence between both dimensions, which preserves the relative distances between objects. This method is a fast tool to visualize the spatial distribution and, even without forming clusters, lets us observe the behavior of the vectors. We can use the same principle to visualize an entire file with all the measures of its self-similarity matrix, as seen in Figure 8.

Figure 8.

3D visualization of the projections from the vectors in 4D.

4. Conclusion

This work is related to music composition; one of the steps needed to complete such a task is a map of the structure of a musical piece. This tool aims at finding similarities with numeric precision and thus obtaining a metric that could be used as an objective function in an optimization task. The method is also a new tool for the fast visualization of the measures of a MIDI file, even with high dimensionality, and it could help discover hidden information inside a music piece.

Author details

Omar Lopez-Rincon and Oleg Starostenko*

Department of Computing, Electronics and Mechatronics, Universidad de las Americas Puebla, San Andrés Cholula, Puebla, Mexico

*Address all correspondence to: oleg.starostenko@udlap.mx

References

  1. 1. Gram HC. Über die isolierte Färbung der Schizomyceten in Schnitt- und Trockenpräparaten. Fortschritte der Medizin. 1884;2:185-189
  2. 2. Slavin YN, Asnis J, Häfeli UO, Bach H. Metal nanoparticles: Understanding the mechanisms behind antibacterial activity. Journal of Nanobiotechnology. 2017;15(1):65
  3. 3. Pray L. Antibiotic resistance, mutation rates and MRSA. Nature Education. 2008;1(1):30
  4. 4. Kondrashov A. Genetics: The rate of human mutation. Nature. 2012;488(7412):467-468
  5. 5. Magiorakos A-P, Srinivasan A, Carey RB, Carmeli Y, Falagas ME, Giske CG, et al. Multidrug-resistant, extensively drug-resistant and pandrug-resistant bacteria: An international expert proposal for interim standard definitions for acquired resistance. Clinical Microbiology and Infection. 2012;18(3):268-281
  6. 6. Nikaido H. Multidrug resistance in bacteria. Annual Review of Biochemistry. 2009;78:119-146
  7. 7. Read AF, Woods RJ. Antibiotic resistance management. Evolution, Medicine, and Public Health. 2014;2014(1):147
  8. 8. Rossolini GM, Arena F, Pecile P, Pollini S. Update on the antibiotic resistance crisis. Current Opinion in Pharmacology. 2014;18:56-60
  9. 9. Gross M. Antibiotics in crisis. Current Biology. 2013;23(24):R1063-R1065
  10. 10. Klevens RM, Edwards JR, Richards CL, Horan TC, Gaynes RP, Pollock DA, et al. Estimating health care-associated infections and deaths in U.S. hospitals, 2002. Public Health Reports. 2007;122(2):160-166
  11. 11. Peleg AY, Hooper DC. Hospital-acquired infections due to Gram-negative bacteria. The New England Journal of Medicine. 2010;362(19):1804-1813
  12. 12. Kola I, Landis J. Can the pharmaceutical industry reduce attrition rates? Nature Reviews. Drug Discovery. 2004;3(8):711-716
  13. 13. Mullard A. 2010 FDA drug approvals. Nature Reviews. Drug Discovery. 2011;10:82-85
  14. 14. Infectious Diseases Society of America. The 10 ב20 initiative: Pursuing a global commitment to develop 10 new antibacterial drugs by 2020. Clinical Infectious Diseases. 2010;50(8):1081-1083
  15. 15. Piddock LJ. The crisis of no new antibiotics—What is the way forward? The Lancet Infectious Diseases. 2012;12(3):249-253
  16. 16. Wright GD. Something old, something new: Revisiting natural products in antibiotic drug discovery. Canadian Journal of Microbiology. 2014;60(3):147-154
  17. 17. Gould IM, Bal AM. New antibiotic agents in the pipeline and how they can help overcome microbial resistance. Virulence. 2013;4(2):185-191
  18. 18. Song CH, Han J-W. Patent cliff and strategic switch: Exploring strategic design possibilities in the pharmaceutical industry. Springerplus. 2016;5(1):692
  19. 19. Spink WW, Ferris V. Quantitative action of penicillin inhibitor from penicillin-resistant strains of staphylococci. Science. 1945;102(2644):221-223
  20. 20. Lyon BR, Skurray R. Antimicrobial resistance of Staphylococcus aureus: Genetic basis. Microbiological Reviews. 1987;51(1):88-134
  21. 21. Blumberg HM, Rimland D, Carroll DJ, Terry, Wachsmuth IK. Rapid development of ciprofloxacin resistance in methicillin-susceptible and -resistant Staphylococcus aureus. The Journal of Infectious Diseases. 1991;163(6):1279-1285
  22. 22. Wenzel RP. Preoperative antibiotic prophylaxis. The New England Journal of Medicine. 1992;326(5):337-339
  23. 23. Spellberg B, Gilbert DN. The future of antibiotics and resistance: A tribute to a career of leadership by John Bartlett. Clinical Infectious Diseases. 2014;59(suppl_2):S71-S75
  24. 24. Bartlett JG, Gilbert DN, Spellberg B. Seven ways to preserve the miracle of antibiotics. Clinical Infectious Diseases. 2013;56(10):1445-1450
  25. 25. Michael CA, Dominey-Howes D, Labbate M. The antimicrobial resistance crisis: Causes, consequences, and management. Frontiers in Public Health. 2014;2(145):1-8
  26. 26. Wright GD. Antibiotic resistance in the environment: A link to the clinic? Current Opinion in Microbiology. 2010;13(5):589-594
  27. 27. Ventola CL. The antibiotic resistance crisis. P T. 2015;40(4):277-283
  28. 28. Klaenhammer TR. Genetics of bacteriocins produced by lactic acid bacteria. FEMS Microbiology Reviews. 1993;12(1-3):39-85
  29. 29. Cleveland J, Montville TJ, Nes IF, Chikindas ML. Bacteriocins: Safe, natural antimicrobials for food preservation. International Journal of Food Microbiology. 2001;71(1):1-20
  30. 30. Audisio MC, Oliver G, Apella MC. Protective effect of Enterococcus faecium J96, a potential probiotic strain, on chicks infected with Salmonella pullorum. Journal of Food Protection. 2000;63(10):1333-1337
  31. 31. Portrait V, Cottenceau G, Pons AM. A Fusobacterium mortiferum strain produces a bacteriocin-like substance(s) inhibiting Salmonella enteritidis. Letters in Applied Microbiology. 2000;31(2):115-117
  32. 32. Cotter PD, Ross RP, Hill C. Bacteriocins—A viable alternative to antibiotics? Nature Reviews. Microbiology. 2013;11(2):95-105
  33. 33. Goldstein BP, Wei J, Greenberg K, Novick R. Activity of nisin against Streptococcus pneumoniae, in vitro, and in a mouse infection model. The Journal of Antimicrobial Chemotherapy. 1998;42(2):277-278
  34. 34. Fontana MBC, de Bastos Mdo CF, Brandelli A. Bacteriocins Pep5 and epidermin inhibit Staphylococcus epidermidis adhesion to catheters. Current Microbiology. 2006;52(5):350-353
  35. 35. Kwaadsteniet MD, Doeschate KT, Dicks LMT. Nisin F in the treatment of respiratory tract infections caused by Staphylococcus aureus. Letters in Applied Microbiology. 2009;48(1):65-70
  36. 36. Mota-Meira M, Morency H, Lavoie MC. In vivo activity of mutacin B-Ny266. The Journal of Antimicrobial Chemotherapy. 2005;56(5):869-871
  37. 37. Haste NM, Thienphrapa W, Tran DN, Loesgen S, Sun P, Nam S-J, et al. Activity of the thiopeptide antibiotic nosiheptide against contemporary strains of methicillin-resistant Staphylococcus aureus. The Journal of Antibiotics. 2012;65(12):593-598
  38. 38. Singh SB, Occi J, Jayasuriya H, Herath K, Motyl M, Dorso K, et al. Antibacterial evaluations of thiazomycin. The Journal of Antibiotics. 2007;60(9):565-571
  39. 39. Trzasko A, Leeds JA, Praestgaard J, LaMarche MJ, McKenney D. Efficacy of LFF571 in a hamster model of Clostridium difficile infection. Antimicrobial Agents and Chemotherapy. 2012;56(8):4459-4462
  40. 40. Xu L, Farthing AK, Dropinski JF, Meinke PT, McCallum C, Leavitt PS, et al. Nocathiacin analogs: Synthesis and antibacterial activity of novel water-soluble amides. Bioorganic & Medicinal Chemistry Letters. 2009;19(13):3531-3535
  41. 41. Lopez FE, Vincent PA, Zenoff AM, Salomón RA, Farías RN. Efficacy of microcin J25 in biomatrices and in a mouse model of Salmonella infection. The Journal of Antimicrobial Chemotherapy. 2007;59(4):676-680
  42. 42. Gänzle MG, Hertel C, van der Vossen JMBM, Hammes WP. Effect of bacteriocin-producing lactobacilli on the survival of Escherichia coli and Listeria in a dynamic model of the stomach and the small intestine. International Journal of Food Microbiology. 1999;48(1):21-35
  43. 43. Jalc D, Lauková A. Effect of nisin and monensin on rumen fermentation in the artificial rumen. Berliner und Münchener Tierärztliche Wochenschrift. 2002;115(1-2):6-10
  44. 44. Bierbaum G, Sahl H-G. Lantibiotics: Mode of action, biosynthesis and bioengineering. Current Pharmaceutical Biotechnology. 2009;10(1):2-18
  45. 45. Martin NI, Breukink E. Expanding role of lipid II as a target for lantibiotics. Future Microbiology. 2007;2(5):513-525
  46. 46. Piper C, Draper LA, Cotter PD, Ross RP, Hill C. A comparison of the activities of lacticin 3147 and nisin against drug-resistant Staphylococcus aureus and Enterococcus species. The Journal of Antimicrobial Chemotherapy. 2009;64(3):546-551
  47. 47. Destoumieux-Garzón D, Peduzzi J, Thomas X, Djediat C, Rebuffat S. Parasitism of iron-siderophore receptors of Escherichia coli by the siderophore-peptide microcin E492m and its unmodified counterpart. Biometals. 2006;19(2):181-191
  48. 48. Diep DB, Skaugen M, Salehian Z, Holo H, Nes IF. Common mechanisms of target cell recognition and immunity for class II bacteriocins. PNAS. 2007;104(7):2384-2389
  49. 49. Bagley MC, Dale JW, Merritt EA, Xiong X. Thiopeptide antibiotics. Chemical Reviews. 2005;105(2):685-714
  50. 50. Kobayashi Y, Ichioka M, Hirose T, Nagai K, Matsumoto A, Matsui H, et al. Bottromycin derivatives: Efficient chemical modifications of the ester moiety and evaluation of anti-MRSA and anti-VRE activities. Bioorganic & Medicinal Chemistry Letters. 2010;20(20):6116-6120
  51. 51. Metlitskaya A, Kazakov T, Kommer A, Pavlova O, Praetorius-Ibba M, et al. Aspartyl-tRNA synthetase is the target of peptide nucleotide antibiotic microcin C. The Journal of Biological Chemistry. 2006;281(26):18033-18042
  52. 52. Novikova M, Metlitskaya A, Datsenko K, Kazakov T, Kazakov A, Wanner B, et al. The Escherichia coli Yej transporter is required for the uptake of translation inhibitor microcin C. Journal of Bacteriology. 2007;189(22):8361-8365
  53. 53. Parks WM, Bottrill AR, Pierrat OA, Durrant MC, Maxwell A. The action of the bacterial toxin, microcin B17, on DNA gyrase. Biochimie. 2007;89(4):500-507
  54. 54. Crandall AD, Montville TJ. Nisin resistance in Listeria monocytogenes ATCC 700302 is a complex phenotype. Applied and Environmental Microbiology. 1998;64(1):231-237
  55. 55. Mazzotta AS, Crandall AD, Montville TJ. Nisin resistance in Clostridium botulinum spores and vegetative cells. Applied and Environmental Microbiology. 1997;63(7):2654-2659
  56. 56. Ming X, Daeschel MA. Nisin resistance of foodborne bacteria and the specific resistance responses of Listeria monocytogenes Scott A. Journal of Food Protection. 1993;56(11):944-948
  57. 57. Carlson SA, Frana TS, Griffith RW. Antibiotic resistance in Salmonella enterica serovar typhimurium exposed to microcin-producing Escherichia coli. Applied and Environmental Microbiology. 2001;67(8):3763-3766
  58. 58. Mantovani HC, Russell JB. Nisin resistance of Streptococcus bovis. Applied and Environmental Microbiology. 2001;67(2):808-813
  59. 59. Collins B, Curtis N, Cotter PD, Hill C, Ross RP. The ABC transporter AnrAB contributes to the innate resistance of Listeria monocytogenes to nisin, bacitracin, and various β-lactam antibiotic. Antimicrobial Agents and Chemotherapy. 2010;54(10):4416-4423
  60. 60. Baumann S, Schoof S, Bolten M, Haering C, Takagi M, Shin-ya K, et al. Molecular determinants of microbial resistance to thiopeptide antibiotics. Journal of the American Chemical Society. 2010;132(20):6973-6981
  61. 61. Yuzenkova J, Delgado M, Nechaev S, Savalia D, Epshtein V, Artsimovitch I, et al. Mutations of bacterial RNA polymerase leading to resistance to microcin J25. The Journal of Biological Chemistry. 2002;277(52):50867-50875
  62. 62. del Castillo FJ, del Castillo I, Moreno F. Construction and characterization of mutations at codon 751 of the Escherichia coli gyrB gene that confer resistance to the antimicrobial peptide microcin B17 and alter the activity of DNA gyrase. Journal of Bacteriology. 2001;183(6):2137-2140
  63. 63. Rink R, Arkema-Meter A, Baudoin I, Post E, Kuipers A, Nelemans SA, et al. To protect peptide pharmaceuticals against peptidases. Journal of Pharmacological and Toxicological Methods. 2010;61(2):210-218
  64. 64. Su P, Henriksson A, Mitchell H. Survival and retention of the probiotic Lactobacillus casei LAFTI® L26 in the gastrointestinal tract of the mouse. Letters in Applied Microbiology. 2007;44(2):120-125
  65. 65. Su P, Henriksson A, Mitchell H. Prebiotics enhance survival and prolong the retention period of specific probiotic inocula in an in vivo murine model. Journal of Applied Microbiology. 2007;103(6):2392-2400
  66. 66. Hillman JD, Mo J, McDonell E, Cvitkovitch D, Hillman CH. Modification of an effector strain for replacement therapy of dental caries to enable clinical safety trials. Journal of Applied Microbiology. 2007;102(5):1209-1219
  67. 67. Hillman JD. Genetically modified Streptococcus mutans for the prevention of dental caries. Antonie Van Leeuwenhoek. 2002;82(1-4):361-366
  68. 68. Hancock REW, Rozek A. Role of membranes in the activities of antimicrobial cationic peptides. FEMS Microbiology Letters. 2002;206(2):143-149
  69. 69. Hancock RE. Peptide antibiotics. Lancet. 1997;349(9049):418-422
  70. 70. Fu H, Björstad Å, Dahlgren C, Bylund J. A bactericidal cecropin-A peptide with a stabilized α-helical structure possess an increased killing capacity but no proinflammatory activity. Inflammation. 2004;28(6):337-343
  71. 71. Houston ME, Kondejewski LH, Karunaratne DN, Gough M, Fidai S, Hodges RS, et al. Influence of preformed α-helix and α-helix induction on the activity of cationic antimicrobial peptides. The Journal of Peptide Research. 1998;52(2):81-88
  72. 72. Rozek A, Powers J-PS, Friedrich CL, Hancock REW. Structure-based design of an indolicidin peptide analogue with increased protease stability. Biochemistry. 2003;42(48):14130-14138
  73. 73. Uteng M, Hauge HH, Markwick PRL, Fimland G, Mantzilas D, Nissen-Meyer J, et al. Three-dimensional structure in lipid micelles of the pediocin-like antimicrobial peptide sakacin P and a sakacin P variant that is structurally stabilized by an inserted C-terminal disulfide bridge. Biochemistry. 2003;42(39):11417-11426
  74. 74. Jenssen H, Hamill P, Hancock REW. Peptide antimicrobial agents. Clinical Microbiology Reviews. 2006;19(3):491-511
  75. 75. Brogden KA. Antimicrobial peptides: Pore formers or metabolic inhibitors in bacteria? Nature Reviews. Microbiology. 2005;3(3):238-250
  76. 76. Park CB, Kim HS, Kim SC. Mechanism of action of the antimicrobial peptide buforin II: Buforin II kills microorganisms by penetrating the cell membrane and inhibiting cellular functions. Biochemical and Biophysical Research Communications. 1998;244(1):253-257
  77. 77. Lehrer RI, Barton A, Daher KA, Harwig SS, Ganz T, Selsted ME. Interaction of human defensins with Escherichia coli. Mechanism of bactericidal activity. The Journal of Clinical Investigation. 1989;84(2):553-561
  78. 78. Patrzykat A, Friedrich CL, Zhang L, Mendoza V, Hancock REW. Sublethal concentrations of pleurocidin-derived antimicrobial peptides inhibit macromolecular synthesis in Escherichia coli. Antimicrobial Agents and Chemotherapy. 2002;46(3):605-614
  79. 79. Subbalakshmi C, Sitaram N. Mechanism of antimicrobial action of indolicidin. FEMS Microbiology Letters. 1998;160(1):91-96
  80. 80. Brötz H, Bierbaum G, Reynolds PE, Sahl H-G. The lantibiotic mersacidin inhibits peptidoglycan biosynthesis at the level of transglycosylation. European Journal of Biochemistry. 1997;246(1):193-199
  81. 81. Kragol G, Lovas S, Varadi G, Condie BA, Hoffmann R, Otvos L. The antibacterial peptide pyrrhocoricin inhibits the ATPase actions of DnaK and prevents chaperone-assisted protein folding. Biochemistry. 2001;40(10):3016-3026
  82. 82. Otvos Laszlo OI, Rogers ME, Consolvo PJ, Condie BA, Lovas S, et al. Interaction between heat shock proteins and antimicrobial peptides. Biochemistry. 2000;39(46):14150-14159
  83. 83. Peschel A, Vincent Collins L. Staphylococcal resistance to antimicrobial peptides of mammalian and bacterial origin. Peptides. 2001;22(10):1651-1659
  84. 84. Robey M, O’Connell W, Cianciotto NP. Identification of Legionella pneumophila rcp, a pagP-like gene that confers resistance to cationic antimicrobial peptides and promotes intracellular infection. Infection and Immunity. 2001;69(7):4276-4286
  85. 85. Harwig SSL, Swiderek KM, Kokryakov VN, Tan L, Lee TD, Panyutich EA, et al. Gallinacins: Cysteine-rich antimicrobial peptides of chicken leukocytes. FEBS Letters. 1994;342(3):281-285
  86. 86. Evans EW, Beach FG, Moore KM, Jackwood MW, Glisson JR, Harmon BG. Antimicrobial activity of chicken and turkey heterophil peptides CHP1, CHP2, THP1, and THP3. Veterinary Microbiology. 1995;47(3):295-303
  87. 87. Morassutti C, Amicis FD, Skerlavaj B, Zanetti M, Marchetti S. Production of a recombinant antimicrobial peptide in transgenic plants using a modified VMA intein expression system. FEBS Letters. 2002;519(1-3):141-146
  88. 88. Goode D, Allen VM, Barrow PA. Reduction of experimental Salmonella and Campylobacter contamination of chicken skin by application of lytic bacteriophages. Applied and Environmental Microbiology. 2003;69(8):5032-5036
  89. 89. Leverentz B, Conway WS, Alavidze Z, Janisiewicz WJ, Fuchs Y, Camp MJ, et al. Examination of bacteriophage as a biocontrol method for Salmonella on fresh-cut fruit: A model study. Journal of Food Protection. 2001;64(8):1116-1121
  90. 90. Atterbury RJ, Bergen MAPV, Ortiz F, Lovell MA, Harris JA, Boer AD, et al. Bacteriophage therapy to reduce Salmonella colonization of broiler chickens. Applied and Environmental Microbiology. 2007;73(14):4543-4549
  91. 91. Higgins JP, Higgins SE, Guenther KL, Huff W, Donoghue AM, Donoghue DJ, et al. Use of a specific bacteriophage treatment to reduce Salmonella in poultry products. Poultry Science. 2005;84(7):1141-1145
  92. 92. Schicklmaier P, Schmieger H. Frequency of generalized transducing phages in natural isolates of the Salmonella typhimurium complex. Applied and Environmental Microbiology. 1995;61(4):1637-1640
  93. 93. Schmieger H, Schicklmaier P. Transduction of multiple drug resistance of Salmonella enterica serovar typhimurium DT104. FEMS Microbiology Letters. 1999;170(1):251-256
  94. 94. Figueroa-Bossi N, Bossi L. Inducible prophages contribute to Salmonella virulence in mice. Molecular Microbiology. 1999;33(1):167-176
  95. 95. Penadés JR, Chen J, Quiles-Puchalt N, Carpena N, Novick RP. Bacteriophage-mediated spread of bacterial virulence genes. Current Opinion in Microbiology. 2015;23:171-178
  96. 96. Loeffler JM, Nelson D, Fischetti VA. Rapid killing of Streptococcus pneumoniae with a bacteriophage cell wall hydrolase. Science. 2001;294(5549):2170-2172
  97. 97. Sulakvelidze A, Alavidze Z, Morris JG. Bacteriophage therapy. Antimicrobial Agents and Chemotherapy. 2001;45(3):649-659
  98. 98. Summers WC. Bacteriophage therapy. Annual Review of Microbiology. 2001;55(1):437-451
  99. 99. Smith HW, Huggins MB. Successful treatment of experimental Escherichia coli infections in mice using phage: Its general superiority over antibiotics. Microbiology. 1982;128(2):307-318
  100. 100. Smith HW, Huggins MB. Effectiveness of phages in treating experimental Escherichia coli diarrhoea in calves, piglets and lambs. Microbiology. 1983;129(8):2659-2675
  101. 101. Biswas B, Adhya S, Washart P, Paul B, Trostel AN, Powell B, et al. Bacteriophage therapy rescues mice bacteremic from a clinical isolate of vancomycin-resistant Enterococcus faecium. Infection and Immunity. 2002;70(1):204-210
  102. 102. Westwater C, Kasman LM, Schofield DA, Werner PA, Dolan JW, Schmidt MG, et al. Use of genetically engineered phage to deliver antimicrobial agents to bacteria: An alternative therapy for treatment of bacterial infections. Antimicrobial Agents and Chemotherapy. 2003;47(4):1301-1307
  103. 103. Kharissova OV, Dias HVR, Kharisov BI, Pérez BO, Pérez VMJ. The greener synthesis of nanoparticles. Trends in Biotechnology. 2013;31(4):240-248
  104. 104. Raveendran P, Fu J, Wallen SL. Completely “green” synthesis and stabilization of metal nanoparticles. Journal of the American Chemical Society. 2003;125(46):13940-13941
  105. 105. Simon-Deckers A, Loo S, Mayne-L’hermite M, Herlin-Boime N, Menguy N, Reynaud C, et al. Size-, composition- and shape-dependent toxicological impact of metal oxide nanoparticles and carbon nanotubes toward bacteria. Environmental Science & Technology. 2009;43(21):8423-8429
  106. 106. Martinez-Gutierrez F, Olive PL, Banuelos A, Orrantia E, Nino N, Sanchez EM, et al. Synthesis, characterization, and evaluation of antimicrobial and cytotoxic effect of silver and titanium nanoparticles. Nanomedicine. 2010;6(5):681-688
  107. 107. Pérez-Díaz MA, Boegli L, James G, Velasquillo C, Sánchez-Sánchez R, Martínez-Martínez R-E, et al. Silver nanoparticles with antimicrobial activities against Streptococcus mutans and their cytotoxic effect. Materials Science & Engineering. C, Materials for Biological Applications. 2015;55:360-366
  108. 108. McQuillan JS, Infante HG, Stokes E, Shaw AM. Silver nanoparticle enhanced silver ion stress response in Escherichia coli K12. Nanotoxicology. 2012;6(8):857-866
  109. 109. Mukha IP, Eremenko AM, Smirnova NP, Mikhienkova AI, Korchak GI, Gorchev VF, et al. Antimicrobial activity of stable silver nanoparticles of a certain size. Applied Biochemistry and Microbiology. 2013;49(2):199-206
  110. 110. Ramalingam B, Parandhaman T, Das SK. Antibacterial effects of biosynthesized silver nanoparticles on surface ultrastructure and nanomechanical properties of Gram-negative bacteria viz. Escherichia coli and Pseudomonas aeruginosa. ACS Applied Materials & Interfaces. 2016;8(7):4963-4976
  111. 111. Tamayo LA, Zapata PA, Vejar ND, Azócar MI, Gulppi MA, Zhou X, et al. Release of silver and copper nanoparticles from polyethylene nanocomposites and their penetration into Listeria monocytogenes. Materials Science and Engineering: C. 2014;40:24-31
  112. 112. Stoimenov PK, Klinger RL, Marchin GL, Klabunde KJ. Metal oxide nanoparticles as bactericidal agents. Langmuir. 2002;18(17):6679-6686
  113. 113. El Badawy AM, Silva RG, Morris B, Scheckel KG, Suidan MT, Tolaymat TM. Surface charge-dependent toxicity of silver nanoparticles. Environmental Science & Technology. 2011;45(1):283-287
  114. 114. Wang L, He H, Yu Y, Sun L, Liu S, Zhang C, et al. Morphology-dependent bactericidal activities of Ag/CeO2 catalysts against Escherichia coli. Journal of Inorganic Biochemistry. 2014;135:45-53
  115. 115. Kim JS, Kuk E, Yu KN, Kim J-H, Park SJ, Lee HJ, et al. Antimicrobial effects of silver nanoparticles. Nanomedicine: Nanotechnology, Biology and Medicine. 2007;3(1):95-101
  116. 116. Choi O, Hu Z. Size dependent and reactive oxygen species related nanosilver toxicity to nitrifying bacteria. Environmental Science & Technology. 2008;42(12):4583-4588
  117. 117. Morones JR, Elechiguerra JL, Camacho A, Holt K, Kouri JB, Ramírez JT, et al. The bactericidal effect of silver nanoparticles. Nanotechnology. 2005;16(10):2346
  118. 118. Bragg PD, Rainnie DJ. The effect of silver ions on the respiratory chain of Escherichia coli. Canadian Journal of Microbiology. 1974;20(6):883-889
  119. 119. Wigginton NS, de Titta A, Piccapietra F, Dobias J, Nesatyy VJ, Suter MJF, et al. Binding of silver nanoparticles to bacterial proteins depends on surface modifications and inhibits enzymatic activity. Environmental Science & Technology. 2010;44(6):2163-2168
  120. 120. Soni D, Bafana A, Gandhi D, Sivanesan S, Pandey RA. Stress response of Pseudomonas species to silver nanoparticles at the molecular level. Environmental Toxicology and Chemistry. 2014;33(9):2126-2132
  121. 121. Aris R. Mathematical Modelling Techniques. New York: Dover Publications; 1994
  122. 122. Process Systems Enterprise, gPROMS. 1997-2018. Available from: www.psenterprise.com/gproms
  123. 123. GAMS Development Corporation. General Algebraic Modeling System (GAMS) Release 24.2.1. Washington, DC, USA; 2013
  124. 124. Ansys, Fluent. Available from: https://www.ansys.com/about-ansys. [Accessed: 2018]
  125. 125. Stephanopoulos G, Henning G, Leone H. MODEL.LA. A modeling language for process engineering—I. The formal framework. Computers and Chemical Engineering. 1990a;14(8):813-846
  126. 126. Stephanopoulos G, Henning G, Leone H. MODEL.LA. A modeling language for process engineering— II. Multifaceted modeling of processing systems. Computers and Chemical Engineering. 1990b;14(8):847-869
  127. 127. Han C, Douglas JM, Stephanopoulos G. Agent-based approach to a design support system for the synthesis of continuous chemical processes. Computers and Chemical Engineering. 1995;19S:S63-S69
  128. 128. Stephanopoulos G, Han C. Intelligent systems in process engineering: A review. Computers and Chemical Engineering. 1996;20(617):143-191
  129. 129. Linninger A, Stephanopoulos G. Computer-aided waste management of pharmaceutical wastes. In: Paper 23a, AIChE Meeting; 25-29 February 1996; New Orleans, LA
  130. 130. Linninger A, Ali SA, Stephanopoulos G. Knowledge-based validation and waste management of batch pharmaceutical process designs. In: Symposium on Computer Aided Process Engineering-6 (ESCAPE); 26-29 May 1996; Rhodes, Greece
  131. 131. Linninger A, Salomone E, Ali S, Stephanopoulos E, Stephanopoulos G. Pollution prevention for production systems of energetic materials. Waste Management. 1997;17(2/3):165-173
  132. 132. Linninger A, Stephanopoulos G. A natural language approach for the design of batch operating procedures. Informatica. 1998;22(4):423-434
  133. 133. Linninger A, Chakraborty A. Pharmaceutical waste management under uncertainty. Computers and Chemical Engineering. 2001;25:675-681
  134. 134. Linninger A, Chakraborty A, Colberg RD. Planning of waste reduction strategies under uncertainty. Computers and Chemical Engineering. 2000a;24:1043-1048
  135. 135. Linninger A, Chowdhry S, Bahl V, Krendl H, Pinger H. A systems approach to mathematical modeling of industrial processes. Computers and Chemical Engineering. 2000b;24:591-598
  136. 136. Gould I, Linninger A. Hematocrit distribution and tissue oxygenation in large microcirculatory networks. Microcirculation. 2015;22:1-18
  137. 137. Gould I, Tsai P, Kleinfeld D, Linninger A. The capillary bed offers the largest hemodynamic resistance to the cortical blood supply. Journal of Cerebral Blood Flow and Metabolism. 2017;37(1):52-68
  138. 138. Linninger A, Gould I, Marinnan T, Hsu CY, Chojecki M, Alaraj A. Cerebral microcirculation and oxygen tension in the human secondary cortex. Annals of Biomedical Engineering. 2013;41:2264-2284
  139. 139. Karch R, Neumann F, Neumann M, Schreiner W. A three-dimensional model for arterial tree representation, generated by constrained constructive optimization. Computers in Biology and Medicine. 1999;29:19-38
  140. 140. Mount C, Downton C. Alzheimer disease: Progress or profit? Nature Medicine. 2006;12:780-784
  141. 141. Roehrig C. Mental disorders top the list of the most costly conditions in the United States: $201 Billion. Health Affairs. 2016;35(6):1130-1135
  142. 142. Blinder P, Tsai PS, Kaufhold JP, Knutsen PM, Suhl H, Kleinfeld D. The cortical angiome: An interconnected vascular network with noncolumnar patterns of blood flow. Nature Neuroscience. 2013;16:889-897
  143. 143. Pappenberger F, Cloke HL, Parker DJ, Wetterhall F, Richardson DS, Thielen J. The monetary benefit of early flood warnings in Europe. Environmental Science & Policy. 2015;51:278-291
  144. 144. Price et al. Operational use of a grid-based model for flood forecasting. Proceedings of the Institution of Civil Engineers: Water Management. 2012;165(2):65-77
  145. 145. Zhuo L, Dai Q, Han D. Meta-analysis of flow modeling performances—To build a matching system between catchment complexity and model types. Hydrological Processes. 2015;29(11):2463-2477
  146. 146. Fletcher TD, Andrieu H, Hamel P. Understanding, management and modelling of urban hydrology and its consequences for receiving waters: A state of the art. Advances in Water Resources. 2013;51:261-279
  147. 147. Cristiano E, van de Giesen N. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas—A review. Hydrology and Earth System Sciences. 2017;21(7):3859-3878
  148. 148. Ciach G. Local random errors in tipping-bucket rain gauge measurements. Journal of Atmospheric and Oceanic Technology. 2003;20:752-759
  149. 149. Colli M, Lanza LG, La Barbera P. Performance of a weighing rain gauge under laboratory simulated time-varying reference rainfall rates. Atmospheric Research. 2013;131:3-12
  150. 150. Habib E, Krajewski W, Kruger A. Sampling errors of tipping-bucket rain gauge measurements. Journal of Hydrologic Engineering. 2001;6:159-166
  151. 151. Upton GJG, Rahimi AR. On-line detection of errors in tipping-bucket raingauges. Journal of Hydrology. 2003;278(1-4):197-212
  152. 152. Hou AY, Kakar RK, Neeck S, Azarbarzin AA, Kummerow CD, Kojima M, et al. The global precipitation measurement mission. Bulletin of the American Meteorological Society. 2014;95:701-722
  153. 153. Leijnse H, Uijlenhoet R, Stricker J. Rainfall measurement using radio links from cellular communication networks. Water Resources Research. 2007;43:W03201
  154. 154. Chen H, Chandrasekar V. The quantitative precipitation estimation system for Dallas-Fort Worth (DFW) urban remote sensing network. Journal of Hydrology. 2015;531:259-271
  155. 155. Fabry F. Radar Meteorology, Principles and Practice. Cambridge, United Kingdom: Cambridge University Press; 2015. 256pp
  156. 156. Bringi VN, Chandrasekar V. Polarimetric Doppler Weather Radar, Principles and Applications. New York: Cambridge University Press; 2001. 637pp
  157. 157. Pruppacher HR, Beard KV. A wind tunnel investigation of the internal circulation and shape of water drops falling at terminal velocity in air. Quarterly Journal of the Royal Meteorological Society. 1970;96(408):247-256
  158. 158. Seliga TA, Bringi VN. Potential use of radar differential reflectivity measurements at orthogonal polarizations for measuring precipitation. Journal of Applied Meteorology. 1976;15:69-76
  159. 159. Illingworth A. Improved precipitation rates and data quality by using polarimetric measurements. In: Weather radar, Principles and Advanced Applications. Berlin, Heidelberg: Springer; 2004. pp. 130-166
  160. 160. Rico-Ramirez MA, Cluckie ID. Classification of ground clutter and anomalous propagation using dual-polarization weather radar. IEEE Transactions on Geoscience and Remote Sensing. 2008;46:1892-1904
  161. 161. Hall W, Rico-Ramirez MA, Krämer S. Offshore wind turbine clutter characteristics and identification in operational C-band weather radar measurements. Quarterly Journal of the Royal Meteorological Society. 2017;143(703):720-730
  162. 162. Rico-Ramirez MA. Adaptive attenuation correction techniques for C-band polarimetric weather radars. IEEE Transactions on Geoscience and Remote Sensing. 2012;50(12):5061-5071
  163. 163. Vivekanandan J, Zrnic DS, Ellis SM, Oye R, Ryzhkov AV, Straka J. Cloud microphysics retrieval using S-band dual-polarization radar measurements. Bulletin of the American Meteorological Society. 1999;80(3):381-388
  164. 164. Rico-Ramirez MA, Cluckie ID, Han D. Correction of the bright band using dual-polarisation radar. Atmospheric Science Letters. 2005;6(1):40-46
  165. 165. Bringi VN, Rico-Ramirez MA, Thurai M. Rainfall estimation with an operational polarimetric C-band radar in the United Kingdom: Comparison with a gauge network and error analysis. Journal of Hydrometeorology. 2011;12(5):935-954
  166. 166. Delrieu G, Wijbrans A, Boudevillain B, Faure D, Bonnifait L, Kirstetter PE. Geostatistical radar-raingauge merging: A novel method for the quantification of rain estimation accuracy. Advances in Water Resources. 2014;71:110-124
  167. 167. Sideris IV, Gabella M, Erdin R, Germann U. Real-time radar-rain-gauge merging using spatio-temporal co-kriging with external drift in the alpine terrain of Switzerland. Quarterly Journal of the Royal Meteorological Society. 2014;140(680):1097-1111
  168. 168. Courty LG, Rico-Ramirez MÁ, Pedrozo-Acuña A. The significance of the spatial variability of rainfall on the numerical simulation of urban floods. Water. 2018;10(2):207
  169. 169. Liguori S, Rico-Ramirez MA, Schellart ANA, Saul A. Using probabilistic radar rainfall nowcasts and NWP forecasts for flow prediction in urban catchments. Atmospheric Research. 2011;103:80-95
  170. 170. Liguori S, Rico-Ramirez MA. A practical approach to the assessment of probabilistic flow predictions. Hydrological Processes. 2013;27(1):18-32
  171. 171. Hiltz SR, Turoff M. The Network Nation: Human Communication Via Computer. 1978
  172. 172. Martin J. The Wired Society. NJ: Prentice-Hall; 1978
  173. 173. Castells M. The Rise of the Network Society. Oxford, UK: Blackwell; 1996
  174. 174. Dijk JV. The Network Society. Social Aspects of New Media. London: Sage Publications; 2006
  175. 175. IDC. The 3rd Platform: Enabling Digital Transformation; 2013
  176. 176. Cisco. Internet of Everything: A $4.6 Trillion Public-Sector Opportunity; 2013a
  177. 177. Cisco. Embracing the Internet of Everything to Capture Your Share of $14.4 Trillion; 2013b
178. Perera C, Liu CH, Jayawardena S, Chen M. Context-aware computing in the Internet of Things: A survey on Internet of Things from industrial market perspective. arXiv preprint arXiv:1502.00164; 2015
179. Langley P, Laird JE. Artificial Intelligence and Intelligent Systems. Palo Alto, CA, USA: American Association for Artificial Intelligence; 2006
180. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. Englewood Cliffs: Prentice-Hall; 1995
181. Poole DL, Mackworth AK, Goebel R. Computational Intelligence: A Logical Approach. Vol. 1. New York: Oxford University Press; 1998
182. Hofstadter DR. Gödel, Escher, Bach. New York: Vintage Books; 1980
183. McCorduck P. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, MA: AK Peters; 2004
184. Kotseruba I, Tsotsos JK. A review of 40 years of cognitive architecture research: Core cognitive abilities and practical applications. arXiv preprint arXiv:1610.08602; 2016
185. Albus JS, Meystel A. Behavior Generation in Intelligent Systems, NIST No. 6083; 1997
186. Langley P. Cognitive architectures and general intelligent systems. AI Magazine. 2006;27(2):33
187. Albus JS, Barbera AJ. RCS: A cognitive architecture for intelligent multi-agent systems. IFAC Proceedings Volumes. 2004;37(8):1-11
188. Laird JE. The Soar Cognitive Architecture. Cambridge, MA, USA: MIT Press; 2012
189. Albus JS. The engineering of mind. Information Sciences. 1999;117(1-2):1-18
190. Albus JS. RCS: A reference model architecture for intelligent systems. In: Working Notes: AAAI Spring Symposium on Lessons Learned for Implemented Software Architectures for Physical Agents; 1995. pp. 1-6
191. De Wolf T, Holvoet T. Emergence versus self-organisation: Different concepts but promising when combined. In: International Workshop on Engineering Self-Organising Applications; Berlin, Heidelberg: Springer; 2004. pp. 1-15
192. Turing AM. The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society B: Biological Sciences. 1952;237(641):37-72
193. Kephart JO, Chess DM. The vision of autonomic computing. Computer. 2003;36(1):41-50
194. IBM. An Architectural Blueprint for Autonomic Computing. IBM; 2005
195. Pfeifer R, Iida F, Bongard J. New robotics: Design principles for intelligent systems. Artificial Life. 2005;11(1-2):99-120
196. Sivarajah U, Kamal MM, Irani Z, Weerakkody V. Critical analysis of Big Data challenges and analytical methods. Journal of Business Research. 2017;70:263-286
197. Santos GL, Endo PT, da Silva Lisboa MFF, da Silva LGF, Sadok D, Kelner J, et al. Analyzing the availability and performance of an e-health system integrated with edge, fog and cloud infrastructures. Journal of Cloud Computing. 2018;7(1):16
198. Zadeh LA. Fuzzy logic, neural networks, and soft computing. In: Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A Zadeh. River Edge, NJ, USA: World Scientific Publishing Co., Inc.; 1996. pp. 775-782
199. Das TK. Intelligent techniques in decision making: A survey. Indian Journal of Science and Technology. 2016;9(12):1-6
200. Juuso EK. Integration of intelligent systems in development of smart adaptive systems. International Journal of Approximate Reasoning. 2004;35(3):307-337
201. Sahin S, Tolun MR, Hassanpour R. Hybrid expert systems: A survey of current approaches and applications. Expert Systems with Applications. 2012;39(4):4609-4617
202. Liao SH. Expert system methodologies and applications—A decade review from 1995 to 2004. Expert Systems with Applications. 2005;28(1):93-103
203. Thangavel K, Pethalakshmi A. Dimensionality reduction based on rough set theory: A review. Applied Soft Computing. 2009;9(1):1-12
204. Ramos C, Augusto JC, Shapiro D. Ambient intelligence—The next step for artificial intelligence. IEEE Intelligent Systems. 2008;23(2):15-18
205. Essa IA. Ubiquitous sensing for smart and aware environments. IEEE Personal Communications. 2000;7(5):47-49
206. Paulovich FV, De Oliveira MCF, Oliveira ON Jr. A future with ubiquitous sensing and intelligent systems. ACS Sensors. 2018;3(8):1433-1438
207. Cooper MA. Optical biosensors in drug discovery. Nature Reviews Drug Discovery. 2002;1(7):515
208. Eckert MA, Vu PQ, Zhang K, Kang D, Ali MM, Xu C, et al. Novel molecular and nanosensors for in vivo sensing. Theranostics. 2013;3(8):583
209. Yang T, Xie D, Li Z, Zhu H. Recent advances in wearable tactile sensors: Materials, sensing mechanisms, and device performance. Materials Science & Engineering R: Reports. 2017;115:1-37
210. Ha D, Sun Q, Su K, Wan H, Li H, Xu N, et al. Recent achievements in electronic tongue and bioelectronic tongue as taste sensors. Sensors and Actuators B: Chemical. 2015;207:1136-1146
211. Wasilewski T, Gębicki J, Kamysz W. Bioelectronic nose: Current status and perspectives. Biosensors and Bioelectronics. 2017;87:480-494
212. Sempionatto JR, Mishra RK, Martín A, Tang G, Nakagawa T, Lu X, et al. Wearable ring-based sensing platform for detecting chemical threats. ACS Sensors. 2017;2:1531-1538
213. Carroll JB. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge, United Kingdom: Cambridge University Press; 1993
214. Chiang M, Zhang T. Fog and IoT: An overview of research opportunities. IEEE Internet of Things Journal. 2016;3(6):854-864
215. Lynn T. Addressing the complexity of HPC in the cloud: Emergence, self-organisation, self-management, and the separation of concerns. In: Lynn T, Morrison J, Kenny D, editors. Heterogeneity, High Performance Computing, Self-Organization and the Cloud. Cham, Switzerland: Palgrave Macmillan; 2018. pp. 1-30
216. Östberg PO, Byrne J, Casari P, Eardley P, Anta AF, Forsman J, et al. Reliable capacity provisioning for distributed cloud/edge/fog computing applications. In: IEEE European Conference on Networks and Communications (EuCNC); 2017. pp. 1-6
217. Xiong H, Dong D, Filelis-Papadopoulos C, Castañé GG, Lynn T, Marinescu DC, et al. CloudLightning: A self-organized self-managed heterogeneous cloud. In: IEEE Federated Conference on Computer Science and Information Systems (FedCSIS); 2017. pp. 749-758
218. Rousseau DM, Sitkin SB, Burt RS, Camerer C. Not so different after all: A cross-discipline view of trust. Academy of Management Review. 1998;23:393-404
219. Mayer RC, Davis JH, Schoorman FD. An integrative model of organizational trust. Academy of Management Review. 1995;20:709-734
220. McKnight DH, Carter M, Thatcher JB, Clay PF. Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems. 2011;2:12
221. Söllner M, Pavlou P, Leimeister JM. Understanding trust in IT artifacts—A new conceptual approach. In: Academy of Management Proceedings, Florida, January 2013; 2013. p. 11412
222. Beldad A, De Jong M, Steehouder M. How shall I trust the faceless and the intangible? A literature review on the antecedents on municipal websites. Government Information Quarterly. 2010;27:238-244
223. Boyd J. The rhetorical construction of trust online. Communication Theory. 2003;13:392-410
224. Pavlou PA. Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce. 2003;7:101-134
225. Wang YD, Emurian HH. An overview of online trust: Concepts, elements, and implications. Computers in Human Behavior. 2005;21:105-125
226. O’Hara K, Tuffield MM, Shadbolt N. Lifelogging: Privacy and empowerment with memories for life. Identity in the Information Society. 2008;1(1):155-172
227. Singh S, Lyon D. Surveilling consumers: The social consequences of data processing on Amazon.com. In: Belk RW, Llamas R, editors. The Routledge Companion to Digital Consumption. Vol. 2013. Florence, KY: Routledge; 2013. pp. 319-332
228. Thrift N. Remembering the technological unconscious by foregrounding knowledges of position. Environment and Planning D: Society and Space. 2004;22(1):175-190
229. Acquisti A. Identity management, privacy, and price discrimination. IEEE Security and Privacy. 2008;6(2):46-50
230. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science. 2017;356(6334):183-186
231. Lynn T, Van Der Werff L, Hunt G, Healy P. Development of a cloud trust label: A Delphi approach. The Journal of Computer Information Systems. 2016;56(3):185-193
232. CONAGUA. Estadística del agua en México. México, D.F.: Comisión Nacional del Agua; 2003
233. CONAGUA. Estadística del Agua en México. Cd. México: Comisión Nacional del Agua; 2017
234. R. J. Brandes Company. Naturalized Streamflow Data. Austin, Texas: Texas Commission on Environmental Quality; 2003
235. USDA. Hydrology National Engineering Handbook: Time of Concentration. Washington D.C.: Natural Resources Conservation Service; 2010
236. US Army Corps of Engineers. Hydrologic Modeling System, HEC-HMS. Quick Start Guide Version 3.5. Davis, CA, USA: Institute for Water Resources, Hydrological Engineering Center; 2010
237. Legates DR, McCabe GJ. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resources Research. 1999;35(1):233-241
238. Moriasi DN, Arnold JG, Van Liew MW, Bingner RL, Harmel RD, Veith TL. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. Transactions of the ASABE. 2007;50(3):885-900
239. Fisher K, Phillips C. Potential antimicrobial uses of essential oils in food: Is citrus the answer? Trends in Food Science and Technology. 2008;19(3):156-164. DOI: 10.1016/j.tifs.2007.11.006
240. Sokovic M, Glamoclija J, Marin P, Brkic D, Griensven L. Antibacterial effects of the essential oils of commonly consumed medicinal herbs using an in vitro model. Molecules. 2010;15:7532-7546. DOI: 10.3390/molecules15117532
241. Char C, Cisternas L, Pérez F, Guerrero S. Effect of the emulsification on the antimicrobial activity of carvacrol. Journal of Food Science. 2015;1:1-6. DOI: 10.1080/19476337.2015.1079558
242. Zhang S, Zhang M, Fang Z, Liu Y. Preparation and characterization of blended cloves/cinnamon essential oil nanoemulsions. Food Science and Technology. 2017;75:316-322. DOI: 10.1016/j.lwt.2016.08.046
243. Garti N, Benichou A. Double emulsions for controlled-release applications: Progress and trends. In: Sjoblom J, editor. Encyclopedic Handbook of Emulsion Technology. USA: CRC Press; 2001. pp. 409-442
244. Bezerra FM, Carmona OG, Carmona CG, Lis MJ, de Moraes FF. Controlled release of microencapsulated citronella essential oil on cotton and polyester matrices. Cellulose. 2016;23(2):1459-1470. DOI: 10.1007/s10570-016-0882-5
245. Pays K, Giermanska-Kahn J, Pouligny B, Bibette J, Leal-Calderon F. Double emulsions: How does release occur? Journal of Controlled Release. 2002;79(1-3):193-205. DOI: 10.1016/s0168-3659(01)00535-1
246. Kaskatepe B, Kiymaci ME, Simsek D, Erol HB, Erdem SA. Comparison of the contents and antimicrobial activities of commercial and natural cinnamon oils. Indian Journal of Pharmaceutical Sciences. 2016;78(4):541-546. DOI: 10.4172/pharmaceutical-sciences.1000150
247. Tomičić R, Čabarkapa I, Varga A, Tomičić Z. Antimicrobial activity of essential oils against Listeria monocytogenes. Food and Feed Research. 2018;45(1):37-44. DOI: 10.5937/ffr1801037t
248. Cardoso-Ugarte GA, Ramírez-Corona N, López-Malo A, Palou E, San Martín-González MF, Jimenez-Munguia MT. Modeling phase separation and droplet size of W/O emulsions with oregano essential oil as a function of its formulation and homogenization conditions. Journal of Dispersion Science and Technology. 2018;39(7):1065-1073. DOI: 10.1080/01932691.2017.1382370
249. Peredo-Luna HA, Palou-García E, López-Malo A. Aceites esenciales: métodos de extracción. Temas Selectos de Ingeniería de Alimentos. 2009;3(1):24-32
250. Preedy VR. Essential Oils in Food Preservation, Flavor and Safety. USA: Elsevier; 2016. p. 895
251. Burt S. Essential oils: Their antibacterial properties and potential applications in foods. International Journal of Food Microbiology. 2004;94:223-243
252. Adelakun OE, Oyelade OJ, Olanipekun B. Use of essential oils in food preservation. In: Preedy V, editor. Essential Oils in Food Preservation, Flavor and Safety. USA: Elsevier; 2016. p. 71
253. Soković M, Glamočlija J, Marin PM, Brkić D, van Griensven LJLD. Antibacterial effects of the essential oils of commonly consumed medicinal herbs using an in vitro model. Molecules. 2010;15:7532-7546
254. Du WX, Olsen CW, Avena-Bustillos RJ, McHugh TH, Levin CE, Friedman M. Antibacterial effects of allspice, garlic, and oregano essential oils in tomato films determined by overlay and vapor-phase methods. Journal of Food Science. 2009;74(7):390-397
255. Jiang ZT, Feng X, Li R, Wang Y. Composition comparison of essential oils extracted by classical hydrodistillation and microwave-assisted hydrodistillation from Pimenta dioica. Journal of Essential Oil Bearing Plants. 2015;16(1):45-50
256. Oussalah M, Caillet S, Saucier L, Lacroix M. Inhibitory effects of selected plant essential oils on the growth of four pathogenic bacteria: E. coli O157:H7, Salmonella Typhimurium, Staphylococcus aureus and Listeria monocytogenes. Food Control. 2007;18:414-420
257. Vazquez-Cahuich DA, Espinosa Moreno J, Centurion Hidalgo D, Velazquez Martinez JR, Borges-Argaez R, Caceres Farfan M. Antimicrobial activity and chemical composition of the essential oils of Malvaviscus arboreus Cav., Pimenta dioica (L.) Merr., Byrsonima crassifolia (L.) Kunth and Psidium guajava L. Tropical and Subtropical Agroecosystems. 2013;16:505-513
258. Zabka M, Pavela R, Slezakova L. Antifungal effect of Pimenta dioica essential oil against dangerous pathogenic and toxinogenic fungi. Industrial Crops and Products. 2009;30:250-253
259. Charles DJ. Allspice. In: Antioxidant Properties of Spices, Herbs and Other Sources. Iowa, USA: Springer Science and Business Media; 2012
260. Hadjilouka A, Polychronopoulou M, Paramithiotis S, Tzamalis P, Drosinos EH. Effect of lemongrass essential oil vapors on microbial dynamics and Listeria monocytogenes survival on rocket and melon stored under different packaging conditions and temperatures. Microorganisms. 2015;3:535-550
261. Olivares-Cruz MA, López-Malo A. Potencial antimicrobiano de mezclas que incluyen aceites esenciales o sus componentes en fase vapor. Temas Selectos de Ingeniería de Alimentos. 2013;7(1):78-86
262. Reyes-Jurado F, Palou E, López-Malo A. Vapores de aceites esenciales: alternativa de antimicrobianos naturales. Temas Selectos de Ingeniería de Alimentos. 2012;6(1):29-39
263. Catherine AA, Deepika H, Negi PS. Antibacterial activity of eugenol and peppermint oil in model food systems. Journal of Essential Oil Research. 2012;24:481-486
264. Reyes-Jurado F, López-Malo A, Palou E. Antimicrobial activity of individual and combined essential oils against foodborne pathogenic bacteria. Journal of Food Protection. 2016;79:309-315
265. Chen W, Wang F, Hu Y, Li C. Optimization of simultaneous distillation extraction of the black pepper. Advanced Materials Research. 2012;396-398:1454-1457
266. Quert Álvarez R, Miranda Martínez M, Leyva Córdova B, García Corrales H, Gelabert Ayón F. Rendimiento de aceite esencial en Pinus caribaea Morelet según el secado al sol y a la sombra. Revista Cubana de Farmacia. 2001;35(1):47-50
267. Shimadzu Excellence in Science. Analytical and Measuring Instruments [Online]. 2007. Available from: http://www.shimadzu.com/an/retention_index.html [Accessed: September 27, 2018]
268. Claxton LD, Stewart Houk V, Warren S. Methods for the spiral Salmonella mutagenicity assay including specialized applications. Mutation Research. 2001;488:241-257
269. López-Malo A, Palou E, Parish ME, Davidson PM. Methods for activity assay and evaluation of results. In: Davidson PM, Sofos JN, Branen AL, editors. Antimicrobials in Foods. New York: CRC; 2015. pp. 659-680
270. Kim E, Oh CS, Koh SH, Seok Kim H, Kang KS, Park PS, et al. Antifungal activities after vaporization of ajowan (Trachyspermum ammi) and allspice (Pimenta dioica) essential oils and blends of their constituents against three Aspergillus species. Journal of Essential Oil Research. 2016;28:252-259
271. Krisch J, Tserennadmid R, Vágvölgyi C. Activity of essential oils in vapor phase against bread spoilage fungi. Acta Biologica Szegediensis. 2013;57:9-12
272. Miladi H, Slama RB, Mili D, Zouari S, Bakhrouf A, Ammar E. Essential oil of Thymus vulgaris L. and Rosmarinus officinalis L.: Gas chromatography-mass spectrometry analysis, cytotoxicity and antioxidant properties and antibacterial activities against foodborne pathogens. Natural Science. 2013;5:729-739
273. Attokaran M. Allspice (Pimenta dioica). In: Natural Food Flavours and Colorants. USA: Blackwell Publishing Ltd. and Institute of Food Technologists; 2011. p. 53
274. Han JH, Patel D, Kim JE, Min SC. Retardation of Listeria monocytogenes growth in mozzarella cheese using antimicrobial sachets containing rosemary oil and thyme oil. Journal of Food Science. 2014;79:2272-2278
275. Techathuvanana C, Reyes F, David JRD, Davidson PM. Efficacy of commercial natural antimicrobials alone and in combinations against pathogenic and spoilage microorganisms. Journal of Food Protection. 2014;77:269-275
276. Hennekinne JA, De Buyser ML, Dragacci S. Staphylococcus aureus and its food poisoning toxins: Characterization and outbreak investigation. FEMS Microbiology Reviews. 2012;36(4):815-836
277. International Commission on Microbiological Specifications for Foods (ICMSF). Microorganismos de los Alimentos: Características de los Patógenos Microbianos. España: Acribia; 1998. pp. 349-385
278. Suárez H, Francisco AD, Beirão LH. Influence of bacteriocins produced by Lactobacillus plantarum LPBM10 on shelf life of cachama hybrid fillets Piaractus brachypomus x Colossoma macropomum vacuum packaged. Vitae. 2008;15(1):32-40
279. Tagg JR, McGiven AR. Assay system for bacteriocins. Applied Microbiology. 1971;21:943
280. Anas M, Eddine HJ, Mebrouk K. Antimicrobial activity of Lactobacillus species isolated from Algerian raw goat’s milk against Staphylococcus aureus. World Journal of Dairy & Food Sciences. 2008;3:39-49
281. Kareem KY, Ling FH, Chwen LT, Foong OM, Asmara SA. Inhibitory activity of postbiotic produced by strains of Lactobacillus plantarum using reconstituted media supplemented with inulin. Gut Pathogens. 2014;6:23
282. Paz O. The Collected Poems of Octavio Paz 1957-1987. New York: New Direction Books; 1991
283. Derrida J. Psyche. The Inventions of the Other. Vol. 1. California: Stanford University Press; 2007
284. Levinas E, Guillot D. Totalidad e Infinito. Ensayo sobre la exterioridad. Salamanca: Ediciones Sígueme; 2002
285. Meier A. El cortometraje: el arte de narrar, emocionar y significar. México: Editorial Casa abierta al tiempo; 2013
286. Auge M. Los “no lugares” espacios del anonimato. Una antropología de la Sobremodernidad. Barcelona: Editores Gedisa; 2000
287. Derrida J. De la Gramatología. Mexico: Siglo Veintiuno Editores; 1986
288. Marx K, Engels F. The Communist Manifesto. Oxford: Oxford University Press; 1992
289. Hardt M, Negri A. Multitude. War and Democracy in the Age of Empire. New York: Penguin Books; 2004
290. Beasley-Murray J. Posthegemony. Political Theory and Latin America. Minneapolis: University of Minnesota Press; 2010
291. Deleuze G, Guattari F. Mil Mesetas. Capitalismo y esquizofrenia. Valencia: Pre-Textos; 2002
292. Kundu P, Cohen L. Fluid Mechanics. 3rd ed. Calif: Academic; 1990
293. NASA GRC. Reynolds Number [Online]. 2016. Available from: https://www.grc.nasa.gov/www/k-12/airplane/reynolds.html [Accessed: March 10, 2018]
294. Barenblatt GI. Scaling, Self-Similarity, and Intermediate Asymptotics: Dimensional Analysis and Intermediate Asymptotics. Vol. 14. Cambridge, United Kingdom: Cambridge University Press; 1996
295. NASA GRC. The Drag Coefficient [Online]. 2016. Available from: https://www.grc.nasa.gov/www/k-12/airplane/dragco.html [Accessed: March 11, 2018]
296. Ledesma-Alonso R, Guzmán J, Zenit R. Experimental study of a model valve with flexible leaflets in a pulsatile flow. Journal of Fluid Mechanics. 2014;739:338-362
297. Schlichting H, Gersten K. Boundary-Layer Theory. New York, USA: Springer; 2016
298. Sharcnet. LS-DYNA Parallel Processing Capabilities [Online]. 2016. Available from: https://www.sharcnet.ca/Software/Ansys/17.0/en-us/help/ans_lsd/Hlp_L_solumem.html [Accessed: April 6, 2018]
299. Peña Pérez N. Windkessel modeling of the human arterial system [B.S. thesis]; 2016
300. Catanho M, Sinha M, Vijayan V. Model of Aortic Blood Flow Using the Windkessel Effect. San Diego: University of California, San Diego; 2012
301. Westerhof N, Lankhaar J-W, Westerhof BE. The arterial windkessel. Medical & Biological Engineering & Computing. 2009;47(2):131-141
302. Kalcsics J, Nickel S, Schröder M. Towards a unified territorial design approach—Applications, algorithms and GIS integration. TOP. 2005;13(1):1-56
303. Butsch A. Districting Problems—New Geometrically Motivated Approaches [doctoral dissertation]. Karlsruhe, Germany: Karlsruhe Institut für Technologie; 2016
304. Ríos-Mercado RZ, Salazar-Acosta JC. A GRASP with strategic oscillation for a commercial territory design problem with a routing budget constraint. In: Batyrshin I, Sidorov G, editors. Advances in Soft Computing: Proceedings of the 10th Mexican International Conference on Artificial Intelligence (MICAI 2011), Part II, Lecture Notes in Artificial Intelligence. Vol. 7095. Heidelberg, Germany: Springer; 2011. pp. 307-318
305. Gliesch A, Ritt M, Moreira MC. A multistart alternating tabu search for commercial districting. In: European Conference on Evolutionary Computation in Combinatorial Optimization. Cham: Springer; 2018. pp. 158-173
306. Salazar-Aguilar MA, Ríos-Mercado RZ, Cabrera-Ríos M. New models for commercial territory design. Networks and Spatial Economics. 2011;11(3):487-507
307. Salazar-Aguilar MA, Ríos-Mercado RZ, González-Velarde JL, Molina J. Multiobjective scatter search for a commercial territory design problem. Annals of Operations Research. 2012;199(1):343-360
308. Shirabe T. Districting modeling with exact contiguity constraints. Environment and Planning B: Planning and Design. 2009;36(6):1053-1066
309. Ahuja N, Bender M, Sanders P, Schulz C, Wagner A. Incorporating road networks into territory design. In: Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2015). Article No. 4; New York, USA: ACM; 2015
310. Ríos-Mercado RZ, Fernández E. A reactive GRASP for a commercial territory design problem with multiple balancing requirements. Computers and Operations Research. 2009;36(3):755-776
311. Fernández JD, Vico F. AI methods in algorithmic composition: A comprehensive survey. Journal of Artificial Intelligence Research. 2013;48:513-582
312. Briot JP, Hadjeres G, Pachet F. Deep learning techniques for music generation—A survey. arXiv preprint arXiv:1709.01620; 2017
313. Briot JP, Pachet F. Music generation by deep learning—Challenges and directions. arXiv preprint arXiv:1712.04371; 2018. Available from: https://arxiv.org/pdf/1712.04371.pdf
314. Colombo F, Gerstner W. A general model of music composition. arXiv preprint arXiv:1802.05162; 2018
315. Jaques N, Gu S, Turner RE, Eck D. Tuning Recurrent Neural Networks with Reinforcement Learning. 2017
316. Prince JB. Contributions of pitch contour, tonality, rhythm, and meter to melodic similarity. Journal of Experimental Psychology: Human Perception and Performance. 2014;40(6):2319
317. Pachet F, Roy P. Markov constraints: Steerable generation of Markov sequences. Constraints. 2011;16(2):148-172
318. Chordia P, Sastry A, Şentürk S. Predictive tabla modelling using variable-length Markov and hidden Markov models. Journal of New Music Research. 2011;40(2):105-118
