Proceedings of the Workshop: Technology, Science, and Culture: A Global Vision 2018
Luis Ricardo Hernández
Knowledge Area Co-editors
Ileana Azor Hernández
Nelly Ramírez Corona
Roberto Rosas Romero
Erwin Josuan Pérez Cortés
Andrés Alfonso Peña Olarte
The aim of the Workshop: Technology, Science, and Culture: A Global Vision is to create a discussion forum on research related to the fields of Water Science, Food Science, Intelligent Systems, Molecular Biomedicine, and Creation and Theories of Culture. The workshop is intended to discuss research on current problems, relevant methodologies, and future research streams and to create an environment for the exchange of ideas and collaboration among participants.
This first edition of the workshop was held on November 6, 2018, at Universidad de las Americas Puebla. In this edition, there were four keynote talks and nine posters presented in a poster session showcasing selected research by doctoral students. At the end of the workshop, an award was given for the best poster.
The keynote speakers are researchers with established track records who have published in leading academic and scientific journals. In this edition, the invited speakers were Dr. Horacio Bach, Dr. Andreas Linninger, Dr. Miguel Ángel Rico-Ramírez, and Dr. Theodore Gerard Lynn.
In his keynote, Dr. Horacio Bach discussed the problem of multidrug-resistant bacteria and the lack of R&D on new antibiotics in pharmaceutical companies. Dr. Andreas Linninger focused on mathematical modeling: he proposed a definition, described applications in chemistry and biochemistry, and emphasized the benefits of viewing math as a language for scientific inquiry and math education. Dr. Rico-Ramírez explained the importance of measuring and forecasting precipitation and discussed the latest advances in precipitation measurement and forecasting with weather radars. Finally, Dr. Theodore Lynn explained the importance of Intelligent Systems in the Internet of Everything, describing the building blocks of Intelligent Systems and the associated research opportunities.
In this first edition of the workshop, the best poster award went to Omar López Rincón, a doctoral student in Intelligent Systems, who presented his work “A 3D Spatial Visualization of Measures in Music Compositions.”
The number and impact of water-related natural disasters have increased since the middle of the last century. As a result of greater climate variability and the effects of global warming, hydrometeorological hazards have intensified and spread, while the resilience of societies is, in many cases, inadequate; consequently, the overall risk has grown. Floods and droughts, particularly in a changing climate, require a deeper understanding in order to generate better forecasts and to manage these phenomena properly. Mexico, like other countries in the world, and of course in the Latin America and Caribbean region, suffers from both weather extremes, so the study of these phenomena is especially important in the Mexican context.
The UNESCO Chair on Hydrometeorological Risks, held at the Universidad de las Américas Puebla, is devoted to the analysis, measurement, modeling, and management of extreme hydrometeorological events in the context of a more urbanized world, climate change, and vulnerable regions. The Chair focuses on the development of basic and applied research for the design of adaptation and mitigation measures, as well as on the dissemination of information and the training of decision makers and the public. In its activities, the Chair maintains a gender focus, aiming to reduce the vulnerability of women to hydrometeorological disasters.
The Chair acts in the following fields:
Hydrometeorological risks and climate change.
Modeling and forecasting of hydrometeorological risks.
Integrated management of hydrometeorological risks.
Gender and hydrometeorological risks.
A detailed description of the UNESCO Chair on Hydrometeorological Risks, its members, and its publications is available on its website: https://www.udlap.mx/catedraunesco/
The Chair publishes a quarterly Newsletter, in Spanish and English, that can be consulted at https://www.udlap.mx/catedraunesco/newsletters.aspx
A New Era without Antibiotics
Mathematical Modeling: The Art of Translating between Minds and Machines and How to Teach It
Advances in the Measurement and Forecasting of Precipitation with Weather Radar for Flood Risk Management
Toward the Intelligent Internet of Everything: Observations on Multidisciplinary Challenges in Intelligent Systems Research
Hydrological Modeling in the Rio Conchos Basin Using Satellite Information
Modeling of the Controlled Release of Essential Oils Encapsulated by Emulsification
Extraction, Composition, and Antibacterial Effect of Allspice (Pimenta dioica) Essential Oil Applied in Vapor Phase
Stability of the Antimicrobial Activity of
Music as a Medium of Encounter of Otherness in Animated Cinema
Revolutionary Veganism
Aerodynamic Coefficient Calculation of a Sphere Using Incompressible Computational Fluid Dynamics Method
Comparison of Dispersion Measures for a Territory Design Problem
A 3D Spatial Visualization of Measures in Music Compositions
The appearance of multidrug-resistant bacteria is challenging the research community to find new antimicrobial agents. The problem is exacerbated by the lack of new antibiotics and the uncontrolled use of antibiotics in human and animal health. These factors have contributed to the development of more resistant pathogenic bacteria, which is alarming health systems. In this chapter, the problems related to the lack of R&D on new antibiotics in pharmaceutical companies, as well as the misuse of antibiotics, will be discussed. In addition, new avenues of research in the development of new antimicrobial entities will also be examined.
Bacteria are ubiquitous unicellular organisms able to adapt to environmental changes very quickly. The doubling time of bacterial cells varies between 20 and 60 min.
To visualize bacteria, a microscope is required. However, because of their transparency, their visualization is difficult unless a stain is used. In 1884, the Danish bacteriologist Hans C. Gram published a technique by which bacterial cells can be divided into two groups according to their color after staining. Based on the retained stain, bacteria are classified as Gram-positive (purple) and Gram-negative (pink). This separation reflects the ability of Gram-positive bacteria to retain the dye crystal violet, which depends on their cell wall composition (Figure 1).
Not all bacteria fall into these two groups. For instance, mycobacterial species do not respond to Gram staining because their lipid-rich cell wall resists the stains; the Ziehl-Neelsen, or acid-fast, stain was developed to visualize these organisms.
Viruses are ubiquitous infectious agents consisting of genetic material generally protected by a proteinaceous coat; they can be visualized only with an electron microscope. Viruses are obligate parasites that require a live cell to multiply: they cannot proliferate outside the cell because they depend on the host machinery to replicate their genetic material and to produce their proteins.
Generally, viruses infect by introducing their genetic material into the host cell. The viral genetic material then hijacks the host systems, and the host begins to produce the viral proteins as well as the viral genetic material. At the end of the process, the viruses either remain inside the host cell or rupture it and disseminate.
In order to use the host machinery, the viral genetic material codes for a few specific proteins able to interact with host proteins. Although only a small number of viral proteins are produced by the host, they have a high affinity for host proteins. This is why viruses are highly specific to their hosts, and only rarely can a virus infect different species.
Antibiotics are molecules able to inhibit the growth of bacteria. In nature, antibiotics are produced as secondary metabolites by specific groups of bacteria and fungi. Being secondary metabolites means that they are not involved in essential metabolic reactions of the cell: if the genes responsible for their production are deleted from the bacterial DNA, the bacteria can still proliferate. Instead, antibiotics appear to be produced as a way of competing for nutritional sources by inhibiting or halting the growth of other bacterial competitors.
Penicillin, the first antibiotic, was discovered in 1928 by Alexander Fleming and came into use against infections in 1942. New antibiotics have been approved since then, but at a steadily decreasing rate over recent decades. The reasons for this decline are discussed below.
When discussing the development of new antibiotics, the bacterial target must be taken into consideration. Many metabolic pathways and enzymes in bacteria are highly conserved across living organisms; such pathways and enzymes are not useful as targets because an antibiotic directed at them would inflict similar damage on human cells. Antibiotics should therefore be directed at bacterial targets (e.g., a protein or a biosynthetic pathway) that have no counterpart in humans. Examples of antibiotics targeting bacteria and mechanisms of resistance are depicted in Figure 2.
Bacteria multiply by binary fission: the parental cell divides into two daughter cells, each a clone, that is, a genetically identical offspring generated by vegetative multiplication. As mentioned before, bacteria multiply exponentially and very fast, with a generation time between 20 and 60 min depending on the species. Thus, even though a bacterial culture may originate from a single cell, prolonged growth generates genetic changes through spontaneous mutations as part of an adaptive process. If we calculate the number of mutations (at a rate of about 10^-10 mutations per nucleotide base) across the bacterial genome and the billions of cells in a culture, it becomes clear that mutant cells are always present in a large population.
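To get a feel for the scale involved, a back-of-the-envelope estimate can be run; the mutation rate comes from the text, while the genome size and culture size below are typical illustrative values rather than figures from this chapter.

```python
# Rough estimate of spontaneous mutants in a bacterial culture.
# Assumed values (illustrative, not taken from the chapter): genome size
# and final population size; the per-base mutation rate is from the text.
mutation_rate = 1e-10   # mutations per nucleotide base per division
genome_size   = 5e6     # base pairs in a typical bacterial genome (assumed)
population    = 1e9     # cells in ~1 mL of a dense overnight culture (assumed)

# Expected new mutations per genome per division
mutations_per_division = mutation_rate * genome_size        # ~5e-4

# Roughly one division per final cell, so the expected number of cells
# carrying at least one new mutation is on the order of:
expected_mutants = mutations_per_division * population       # ~5e5

print(f"~{mutations_per_division:.1e} mutations per genome per division")
print(f"~{expected_mutants:.0f} mutant cells expected in the culture")
```

Even with a very low per-base rate, hundreds of thousands of mutant cells are expected in a routine culture, which is why resistant variants can be assumed to pre-exist before any antibiotic is applied.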
The term drug resistance refers to acquired changes in the bacterial genome that act against an antibiotic. These genomic changes persist even when the drug is removed from the environment, and they are inherited by the descendants of the bacterial clone. Such changes can be driven either by a change in the sequence of a protein (the target of the antibiotic) or by the uptake of a foreign piece of DNA that brings new genetic material into the cell.
Treating bacteria with antibiotics exerts selection pressure on the bacterial cells. Based on the information above, it is reasonable to expect that spontaneous mutations will appear that confer on a specific cell an advantage over the rest of the population. This new resistant strain can multiply in the presence of the antibiotic because it has developed an adaptive mechanism to cope with the antibiotic's killing activity.
This problem is aggravated when bacteria develop resistance to several antibiotics. In fact, different terms are used depending on the extent of resistance. For example, multidrug-resistant (MDR) bacteria are resistant to at least one antibiotic in three or more antimicrobial categories; extensively drug-resistant (XDR) bacteria are resistant to at least one antibiotic in all but two or fewer antimicrobial categories; and pandrug-resistant (PDR) bacteria are resistant to all antimicrobial categories.
To acquire resistance to an antibiotic, bacteria must develop a mechanism to neutralize it. Bacteria have developed different mechanisms to cope with the presence of antibiotics, which can be grouped as follows: (1) destruction of the antibiotic (enzymatic alteration of the antibiotic molecule by phosphorylation, adenylation, or acetylation), (2) changes in the antibiotic target (mutational alterations in the sequence of the protein targeted by the antibiotic), and (3) reduced intracellular accumulation of the antibiotic (e.g., efflux pumps that expel the internalized antibiotic).
Once a single bacterial cell acquires a mutation that provides an advantage for survival in the presence of the antibiotic, the genetic material conferring this resistance can be transferred to other bacterial cells by small autonomous pieces of DNA, termed plasmids, which are not integrated into the bacterial genome and exist as independent entities. These autonomous pieces are replicated by the bacteria and transferred to the progeny (vertical transfer) or can be transferred to other species (horizontal transfer) during a process called conjugation. Moreover, bacteria can continually exchange plasmids, and these pieces of DNA may contain resistance genes that will be passed to new bacteria. Interestingly, plasmids can move to new bacteria even in the absence of an antibiotic, suggesting that resistance can be disseminated in the bacterial population without the presence of an antibiotic agent.
The number of deaths related to infections is alarming in both the Gram-positive and Gram-negative groups. For example, the death toll related to the Gram-positive
Health-care systems cope with antibiotic-resistant infections at a high economic cost. The main problems occur in hospitals as a result of the crowding of vulnerable patients. The issue is aggravated by the invasive procedures performed in these facilities and by the excessive use of antibiotics to safeguard the lives of critical patients. For instance, a study published in the U.S. in 2002 revealed that approximately 2 million people developed hospital-acquired infections per year, causing 99,000 deaths as the result of antibacterial-resistant pathogens. Infections caused in hospitals by antibacterial-resistant pathogens extend patients' hospitalization, with a corresponding increase in the cost of hospital days, depending on the type of infection.
The role of pharmaceutical companies is to develop drugs that prevent or cure illness. Positive outcomes depend on a continuous introduction of new medicines, which has contributed to an increase in life expectancy to 78 years.
The drug discovery process is complex, involves investments of billions of dollars, and carries high risk. Therefore, pharmaceutical companies must carefully assess the profitability of their products before deciding which drug to invest in.
The process of introducing a new drug to the market comprises mainly four phases: (1) drug discovery (3–4 years), (2) drug development (clinical phases, 5–6 years), (3) FDA filing and review (2–3 years), and (4) manufacturing and marketing. Pharmaceutical companies thus face a long drug discovery process tied to a patent that will expire at some point, plus the risk of a drug recall or withdrawal from the market. In conclusion, the whole process from drug discovery to marketing may last 12–15 years. In the case of new antibiotics, the situation is aggravated by the appearance of bacterial resistance, which reduces the profitability of the antibiotic in the short term. Moreover, when new antibiotics are released onto the market, they are often used only as a last resort because clinicians prefer to reserve them for complex infections. This practice extends the shelf life of the antibiotic while reducing the company's profitability.
When a pharmaceutical company chooses a specific drug, the transition between phases is high risk because regulatory agencies verify that each stage is safe for human use even before the drug enters clinical trials. For example, the selection of a candidate involves screening thousands of compounds, which may not proceed to the next step because of toxicity, efficacy, or safety issues. The investment is recovered only if the candidate drug successfully passes all the phases. As an illustration, 38% of drugs fail in phase I (safety/blood levels), 60% of the remainder fail in phase II (basic efficacy), 40% of the remaining candidates fail in phase III (large, expensive efficacy trials), and 23% fail to be approved by the FDA. Taken together, the number of medicines approved for new treatments has steadily dropped from approximately 35 to 20 new drugs per year over the last decade.
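Chaining the attrition figures quoted above shows how few candidates survive the whole pipeline; the short calculation below is only arithmetic on those percentages and assumes the stages act independently and sequentially.

```python
# Cumulative survival of a drug candidate through the quoted attrition rates.
# Failure rates are those cited in the text; sequential independence is assumed.
failure_rates = [
    ("Phase I (safety/blood levels)", 0.38),
    ("Phase II (basic efficacy)",     0.60),
    ("Phase III (large efficacy)",    0.40),
    ("FDA approval",                  0.23),
]

surviving = 1.0
for stage, rate in failure_rates:
    surviving *= (1.0 - rate)
    print(f"After {stage}: {surviving:.1%} of candidates remain")
# Roughly 11% of candidates entering Phase I ultimately reach the market.
```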
Based on all these considerations, it is reasonable to conclude that pharmaceutical companies have more interest in developing new medicines for chronic diseases than in developing antibiotics. Patients treated for chronic diseases consume the drugs for long periods (years), or even for life, whereas antibiotics are prescribed for only a few days and then stopped. Taking all these concerns together, only three pharmaceutical companies in the world continue to develop new antibiotics. Other contributors to the development of new antibiotics, such as academia, have been affected by funding restrictions.
Over the last two decades, regulatory agencies such as the FDA have changed the way antibiotic clinical trials are executed. For example, the use of placebo in antibiotic clinical trials is now considered unethical; instead, trials address the noninferiority of new antibiotics compared to existing drugs. These regulations increase the cost of the trials because larger populations are required, with a concomitant reduction in profitability. Taken together, changes in the regulations should be pursued to accelerate the approval of new antibiotics. Such changes could, for example, allow clinical trials with smaller populations, which would reduce trial costs and accelerate their completion.
Pharmaceutical companies face an additional problem related to patent expiration. Considering that a drug is patented early, during the discovery phase, after FDA approval and product launch companies have only approximately 10–12 years to recover the investment made during the different phases. Once the patent expires, the company faces what is known as a “patent cliff.”
The patent cliff means that the company loses the exclusive marketing of a specific drug and the drug becomes “generic.” A generic drug is sold at a considerably lower price than the original equivalent. Thus, sales and revenues for that drug plummet, with prices falling by up to 70% within a short period after patent expiration.
Antibiotics, without doubt, have had a positive impact on human health. Deadly bacterial infections that were once untreatable became treatable and ceased to be a main cause of death.
History teaches us that penicillin resistance in
Physicians routinely prescribe antibiotics to treat infections whose origin has not yet been identified. For example, treating viral infections with antibiotics provides no benefit to the patient but instead increases antibiotic resistance among other bacteria present in the patient's microbiome. Increasing resistance in the patient's normal flora will neutralize the activity of those antibiotics in future infections. Consider, for example, a patient with ear pain whose family doctor concludes that the ear is infected. There is roughly a 25% probability that the infection is bacterial and a 75% probability that it is viral. Although it may cause discomfort for a few days, the infection will resolve without treatment if its origin is viral. Determining the origin of the infection requires a culture test, which may take a couple of days and costs more than an antibiotic prescription. The patient therefore often prefers to purchase antibiotics, knowing there is only a 25% probability that they are needed, and presses the physician for a prescription, which keeps the patient happy. Multiplied by thousands of doctor visits per year, this pattern drives the development of bacteria that will be untreatable in the future.
Overuse of antibiotics by physicians also occurs when surgeons administer antibiotics prophylactically to patients facing surgery, to prevent infections during and after the procedure.
Another aspect of antibiotic overuse is observed in the livestock industry, which uses large quantities of antibiotics not only to prevent infections [23, 24] but also to promote growth. Infections in livestock can cause massive mortality very quickly and considerably reduce the number of animals, especially in intensive husbandry (e.g., turkey, chicken, and fish farming). These antibiotics reach the environment, where they create an ideal niche for the development of resistance in the microbiome. Thus, the misuse of antibiotics in these industries places pressure on bacteria to acquire resistance. For example, the presence of resistant bacteria in meat consumers has been demonstrated. This phenomenon follows a sequence of events that starts with antibiotic overuse on farms: the overuse depletes susceptible bacteria and favors the appearance of antibiotic-resistant bacteria, which are transmitted to humans through the food supply. Studies have demonstrated that approximately 90% of the antibiotics given to animals are excreted in urine and stool, which are subsequently used as fertilizers, altering the environmental microbiome.
Another growing problem is related to antibacterial products used for cleaning or hygienic purposes. Their release into the environment alters the composition of indigenous bacterial populations, which has a direct effect on the development of a proper immune system in humans.
In order to tackle the antibiotic overuse, national or provincial programs should be established to:
educate not only health professionals but also the society to reduce this burden, including behavioral interventions;
develop a fast test to evaluate whether an infection is caused by bacteria or viruses;
restrict or limit the excessive use of antibiotics by providing education programs to farmers.
An examination of FDA approvals during the period 1998–2003 revealed that the approval of new antibiotics has declined by 56% over the past 20 years. Surprisingly, only 7 of a total of 225 new drugs approved in that period were antibiotics, and only 2 antibiotics had a new mechanism of action. This low number is insufficient to meet our society's growing need to cope with infections.
The decreasing introduction of new antibiotics to the market over the last decades, together with the appearance of resistance, has fueled the investigation of alternative sources of antimicrobial agents.
The new research avenues include bacteriocins, phages, and nanoparticles.
Bacteriocins are short or long sequences of amino acids with antibacterial activity produced by lactic acid bacteria. Their sequences are heterogeneous, and they are classified according to their molecular weight. For example, some consist of short peptide sequences (19–37 amino acids), while others can reach molecular weights of up to 90,000 Da.
Bacteriocins are considered to possess antibacterial activity against a broad spectrum of bacteria, making them nonspecific, and they are regarded as safe, natural antimicrobial agents because they have been consumed in dairy products since ancient times. In other words, bacteria considered beneficial to humans produce bacteriocins.
Bacteriocins are produced by lactic acid bacteria in the intestine, probably to gain access to nutrients in a highly competitive environment where trillions of bacterial cells from different species strive to survive. However, bacteriocins are not exclusive to the lactic acid bacteria; other bacterial strains have been shown to produce bacteriocins as well, such as
Bacteriocins are grouped into different classes, of which lantibiotics and thiopeptides are the most extensively studied. For example, lantibiotics are very effective at controlling Gram-positive infections
On the other hand, thiopeptides have shown extraordinary results as antimicrobial agents, but their applications have been restricted because of water solubility issues [37, 38]. However, analogs of these thiopeptides have been generated with successful applications to control infections of
Although bacteriocins can be delivered as bacteriocin-producing bacteria, their activity in the intestinal tract should be monitored. In the case of bacteriocin treatment in chickens, it has been shown that low-molecular-weight bacteriocins are active in the intestinal environment. For instance, the secretion of curvacin produced by
The bacteriocin nisin produced by
Studies have reported that bacteriocins target different pathways. For example, lantibiotics and other bacteriocins bind lipid II, an intermediate in peptidoglycan biosynthesis [44, 45, 46]. Moreover, upon binding lipid II, lantibiotics form pores in the bacterial cell membrane, leading to an imbalance of the membrane potential and resulting in cell death [44, 45].
Pore formation appears to be a mechanism shared by different types of bacteriocins. Their activity depends on binding to specific receptors on the bacterial membrane. For instance, some bacteriocins recognize the cell envelope-associated mannose phosphotransferase system (Man-PTS), whereas others recognize siderophore receptors (e.g., FepA, CirA, or Fiu) [47, 48].
Other mechanisms of action of bacteriocins such as the interference in gene expression and protein biosynthesis have been proposed. Examples include interference with DNA (e.g., inhibition of supercoiling mediated by gyrase), RNA (e.g., blocking mRNA synthesis and binding to the 50S ribosomal subunit), and protein synthesis (e.g., modification of amino acids and binding to the elongation factor Tu) [49, 50, 51, 52, 53].
The appearance of resistance is always a concern; it may develop as a result of changes in membrane composition or structure. In this regard, resistance to nisin has been reported in specific strains of
Resistance mechanisms have been mainly identified with bacteriocins targeting the cell envelope. In this regard, studies have shown that a decrease in the receptor of the bacteriocin targeting the lipid II conferred resistance to
In conclusion, resistance to bacteriocins has already been reported, and potential solutions should be considered to reduce its appearance. These include derivatization of the original molecule to synthesize new variants that still bind their receptors but are less easily recognized by resistant bacteria. Alternatively, the use of cocktails of bacteriocins in combination with other antibacterial agents should also be evaluated.
One attractive system for the delivery of bacteriocins is the use of
Small antimicrobial peptides are probably produced by every organism to cope with bacterial invasion. Antimicrobial peptides are short peptides with a molecular mass of 1000–5000 Da. Analysis of their sequences revealed that, owing to their net positive charge, they interact with negatively charged bacterial membranes. Further analysis revealed that a hydrophobic stretch in their sequence is required to bind the bacterial membrane, together with a conformational change that allows them to intercalate into the membrane.
Structural analysis of the peptides showed that they may adopt different 3D conformations such as helices, sheets, or loops. The structure of the peptides is very important because redesigning their secondary structures may increase their antibacterial activity or their stability, making them more resistant to the activity of proteases [70, 71, 72, 73].
It appears that the main mechanism of action of antibacterial peptides is membrane permeabilization; they therefore depend on interaction with the cell membrane. This interaction begins electrostatically, when the cationic peptide binds the negatively charged outer bacterial envelope. The negative charge of the cell surface results from the phosphate groups of lipopolysaccharides in Gram-negative bacteria and from the lipoteichoic acids on the surface of Gram-positive bacteria. Once the electrostatic interaction occurs, hydrophobic interactions allow insertion of the peptides into the outer membrane structure of Gram-negative strains. Translocation may then occur by a mechanism that is not fully understood, which may involve the formation of a transient channel, dissolution of the membrane, or direct translocation across the membrane.
Antibacterial peptides act on different targets, including the inhibition of nucleic acid and protein synthesis, enzymatic activity, and cell wall synthesis. For example, buforin II (isolated from a frog) crosses the bacterial membrane by penetration, binding both DNA and RNA molecules in the cytoplasm of
As with bacteriocins, the development of resistance against antimicrobial peptides has been reported; studies have shown that certain genes can confer increased resistance to antimicrobial peptides, such as the gene
Antimicrobial peptides, such as gallinacins, have been isolated from chicken leukocytes and showed antimicrobial activity against
The use of antimicrobial peptides faces stability issues. Because of their proteinaceous nature, they are subject to degradation by the proteolytic enzymes that are highly abundant in the body. Although antimicrobial peptides are also produced by the immune system, those endogenous peptides do not face this vulnerability because they act very close to their production site. Thus, any therapeutic use of these antimicrobials should address the proteolysis issue, perhaps by designing more resistant peptides (including chemical modification), by encapsulating them for protection, or by developing slow-release systems. Other delivery alternatives have been proposed, such as their production in genetically modified plants that can be used as animal feed.
Bacteriophages or phages are viruses that infect and multiply in bacteria. As mentioned earlier, viruses infecting cells can be released into the environment by a process of bacterial cell destruction or lysis.
Phages are attractive for therapy because of their specificity: they interact with only a specific strain of bacteria. The interaction of phages with their hosts is based on the recognition of specific binding sites, so strains lacking these receptors remain unaffected. On the other hand, this host specificity may pose a challenge for phage therapy. For example, lytic phages able to infect all
To overcome this problem, a mixture of phages will be necessary to cover the most common infections caused by the same pathogen. For example, studies have reported that the use of a cocktail of lytic phages was effective to control
Early trials of phage therapy were not encouraging, and interest in this therapy declined in the past. However, with the appearance of antibiotic resistance in bacteria, its potential use has reemerged.
Therapeutic phages also face issues related to their interaction with the target bacteria. The introduction of viral genetic material can cause undesired changes in the bacterial strain. For example, some phages may integrate into the bacterial chromosome, which may introduce new characteristics or modify the expression of host genes. These characteristics may include effects on the secretion of bacterial virulence factors, such as toxins, or antibiotic-resistance genes [92, 93, 94, 95]. Taken together, it is desirable that phages enter a lytic cycle and destroy their bacterial host rather than become incorporated into the bacterial chromosome; cell lysis is preferred in phage therapy because destruction of the host reduces the chances of viral integration into the bacterial chromosome.
It seems that future phage therapy will focus principally on the digestive and respiratory tracts, with little possibility of use as a systemic therapy. In the blood, phages are exposed to circulating antibodies, which clear them from circulation. In the digestive tract, however, phages are subjected to adverse factors such as pH changes, which may alter their antimicrobial activity. For example, the load of
Safety concerns have also been raised about the production of phages for phage therapy. Phages must be produced in live microorganisms, so their production is limited to their pathogenic hosts. Phages can therefore carry genetic material from the host, in this case the pathogen, and transmit it to other bacteria. This scenario does not appear to be a frequent event, but it would be desirable to produce the phages in a nonvirulent host to reduce this likelihood. In some applications, use of the enzyme responsible for the lysis of the host may suffice to control the pathogen, but this may be limited to topical applications or mucosal infections, since the enzyme has little chance of surviving passage through the digestive tract.
Although the disadvantages of phage therapy have been discussed above, it is still considered a natural alternative for controlling infections in humans [97, 98]. Its use is supported by studies that showed protective effects in different animal models. For example, intramuscular injection of phages protected mice infected with
An alternative approach, based on a genetically engineered phage that delivers genetic material into the bacteria, has been reported. The approach uses a lysogenic (nonlytic) phage to deliver genetic material encoding proteins with bactericidal activity, such as toxins.
The use of nanoparticles (NPs) to control bacterial diseases has shown promising results. Over the last decade, NPs synthesized mainly from Ag, Au, Zn, and Cu have been tested as potential antibacterial agents. AgNPs are the most studied NPs, probably because of the long use of silver in medicine, already described in the ancient literature by Hippocrates of Kos (c. 460–c. 370 BC). Because of the enormous number of papers published on AgNPs as antibacterial agents, this section will focus only on these NPs.
NPs range in size between 1 and 100 nm and have different physicochemical characteristics compared with the bulk material. One of these characteristics is a large surface area relative to their volume, which makes them very reactive.
During AgNP synthesis, the silver ion (Ag+) is reduced to Ag0 using chemical reductants. However, in recent years, a more environmentally friendly technology using plant extracts has been proposed to diminish the toxicity problems linked to classical chemical synthesis [103, 104].
Physical characterization of AgNPs revealed that shape and size are important parameters with a profound effect on their antibacterial activity. For example, maximal activity was achieved when the size of the AgNPs was <40 nm, and the highest activity was measured for elongated or spherical shapes [2, 105, 106, 107].
The antibacterial activity of AgNPs appears to be based on different mechanisms. It is not completely clear whether AgNPs internalize into the bacterial cell or whether, as a result of their activity, the membrane ruptures and allows their internalization [2, 108]. Many studies indicate that adsorption of the NPs onto the extracellular portion of the bacteria is the main mechanism of toxicity. As a result of this adsorption, the cell wall depolarizes and the cell becomes more permeable, leading to cell death [109, 110]. Other studies have reported that AgNPs aggregate on the bacterial cell wall, causing disruption of the cell envelope [105, 111, 112] and interacting with different functional groups, such as carboxyl, amino, and phosphate groups, leading to Ag precipitation.
Another mechanism of bacterial toxicity is the generation of reactive oxygen species (ROS) by the AgNPs. ROS (free radicals, superoxides, and peroxides) are generated in any cell as a result of metabolic reactions (Figure 3); however, cells have different systems to cope with ROS toxicity. The production of ROS, either intracellular or extracellular, may lead to membrane disruption, including lipid peroxidation.
Other toxicity mechanisms are related to the inhibition of bacterial respiration [116, 117, 118] and to protein and thiol binding [109, 114, 119]. It is noteworthy that the amino acid cysteine has a high affinity for Ag+, plays an important role in the proper folding of proteins, and is involved in the catalytic activity of many enzymes. AgNPs therefore target a diversity of enzymes at once, with detrimental effects on the bacterial cell [119, 120]. A model of AgNP toxicity in
The continued misuse of antibiotics, together with other factors, has accelerated the appearance of bacteria showing multidrug resistance. The problem is aggravated by the lack of new antibiotics introduced by pharmaceutical companies. Both situations have placed humanity in a dangerous position: in the short term, we will have to cope with a shortage of antibiotics to combat disease. To overcome this problem, the development of new antibacterial agents has begun. It is of great importance that everyone in our society take responsibility for reducing the burden of disease: regulatory agencies by accelerating approval processes, governmental agencies by providing incentives for pharmaceutical companies to continue developing new antibacterial agents, agricultural extension services by educating farmers on the wise use of antibiotics, and everyone in society by being aware of the misuse of antibiotics.
Department of Medicine, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
*Address all correspondence to: firstname.lastname@example.org
In this chapter, I will argue that the proposition that “math is a language” has beneficial implications for the way we conduct scientific inquiry and math education. I posit that model-based discovery and learning are acts of “replacing a theory-less domain of facts by another for which a theory is known.” Data collected over 30 years of mathematical modeling research demonstrate that the creative process of model
Mathematical models appear everywhere in science. Models express our understanding of the natural world even beyond the frontiers of science. For example, our worldview of the space/time continuum in Einstein's theory of relativity is not merely a study subject in theoretical physics but is present in popular knowledge. What exactly constitutes a mathematical model? Can the art of generating a mathematical model of the physical world be learned or taught? Wikipedia offers the following on mathematical modeling: “A mathematical model is a description of a system using mathematical concepts and language.” The Wiki page proceeds to characterize mathematical terms such as types of equations, but has no information on the role of language in modeling, or how this language would be employed. Despite its significance, mathematical modeling is not a term readily defined. A rare exception is Aris' book entitled Mathematical Modeling Techniques. This book, entirely devoted to the art of mathematical modeling, offers a definition as a set of equations corresponding to a physical, biological, or economic prototype. Aris also cites the logician Tarski, who sees a model as a realization in which all valid sentences of a theory are satisfied. I prefer a more general definition: mathematical modeling is the replacement of a theory-less domain by another for which theory is known.
The replacement of a theory-less domain by another for which the theory is known has hardly ever been more successfully demonstrated than in the revolutionary discoveries of Alan Turing, whose model of mathematical operations established the logical basis of modern computing. Turing realized that the algebraic operations of our decimal system, additions and multiplications in our common natural number system, can be replaced with AND or OR logic operators within a binary number system. Accordingly, natural and real numbers could be digitized. More importantly, expensive tasks such as evaluating long lists of summation or multiplication operations could be executed by clocking thousands of AND and OR operations on a simple electronic machine, which later became known as a
His contribution fits the definition of modeling as the replacement of decimal operations in the common number system with extremely simple and fast binary AND/OR logic operations; this breakthrough achievement rang in the digital age. His seminal work, rooted in an ingenious mathematical model that combined well-known facts of mathematical logic with electronic principles, heralded a new era of computing.
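As a toy illustration of this replacement, ordinary addition can be carried out purely with bitwise logic operations; the sketch below is an editorial example of the idea, not Turing's own construction.

```python
def add_with_logic(a: int, b: int) -> int:
    """Add two non-negative integers using only AND, XOR, and shifts.

    XOR gives the bitwise sum ignoring carries; AND finds the carry bits,
    which are shifted left and fed back until no carries remain.
    """
    while b != 0:
        carry = a & b        # positions where both bits are 1 produce a carry
        a = a ^ b            # sum of bits without the carries
        b = carry << 1       # carries move one position to the left
    return a

print(add_with_logic(1234, 5678))  # 6912, the same result as decimal addition
```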
Discoveries like Turing's spring from inspiration and creativity; thus, mathematical modeling is an art. But is there a formal way to generate modeling art? The sciences offer a useful template for model-based discovery and learning; I propose an analogy with the feedback circuit depicted in Figure 1. The learning cycle begins with a problem formulation conceived in the mind of the modeler that describes the properties and possible transformations of a novel physical prototype, such as a chemical process, whose extent and critical parameters may not yet be fully known or understood. Problem formulation implies the transformation of domain-specific states and transitions into suitable mathematical relations. When the physical domain, its state descriptions, and its transitions are adequately mapped into a suitable mathematical surrogate, the algorithmic machinery is invoked to make mathematical predictions about the system states, X, typically using digital computers. Mathematical analysis involves the solution of the model equations: the numerical solution of algebraic equations, nonlinear models, optimization, or system dynamics using well-known rules in the mathematical domain.
But model-based learning does not stop here: the properties of the mathematical surrogate, X, are not the real object of study. Instead, it is critical to interpret (or back-transform) the predictions in terms of the physical states of the problem domain. In engineering, this means inferences about the states of matter such as pressure, temperature, and concentration (P, T, and C). Since the range and deliberate manipulation of the original prototype space are not fully known, or may be only vaguely known, it is often necessary to sharpen the original mathematical problem formulation or its assumptions. The incorporation of this feedback about the original domain, realized through the mathematical analysis, constitutes the essence of model-based learning and discovery. Let us also emphasize that the model-based learning paradigm proposed here requires frequent translation between the prototype domain and the mathematics. The need to translate frequently between different reference systems characterizes mathematical modeling as a linguistic activity. Mathematical modeling relies strongly on math literacy, which has implications for how we should teach mathematics to engineering students; this point will be discussed further at the end of this section. When feedback is omitted and mathematical predictions are taken at face value about the actual system, a gap between reality and its mathematical surrogate may open. This undesirable phenomenon is called model mismatch. Model mismatch is the most severe problem affecting mathematical models and is often unavoidable in the early stages of a study, but it needs to be gradually mitigated by repeated cycles of reformulation, testing, and interpretation feedback. Learning requires frequent model adjustments and reformulations.
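One way to picture this feedback cycle is as a loop of formulate, solve, interpret, and compare. The skeleton below is purely schematic; all function and method names are hypothetical placeholders, not an actual modeling framework.

```python
# Schematic of the model-based learning cycle described above.
# `formulate`, `solve`, and `interpret` are hypothetical callables supplied
# by the modeler; predictions are assumed to expose a `distance_to` method
# measuring the mismatch against observations.
def model_based_learning(observations, formulate, solve, interpret,
                         tolerance=1e-3, max_cycles=20):
    model = formulate(observations)                 # map prototype -> mathematical surrogate
    for _ in range(max_cycles):
        x = solve(model)                            # predictions in the mathematical domain
        prediction = interpret(x, model)            # back-transform to physical states (P, T, C)
        mismatch = prediction.distance_to(observations)
        if mismatch < tolerance:                    # model mismatch acceptably small
            return model
        # reformulate: sharpen assumptions using the observed mismatch
        model = formulate(observations, previous=model, error=mismatch)
    return model                                    # best model after max_cycles refinements
```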
Unfortunately, model formulation is a rather strenuous task performed in the creative mind of a modeler. The graphs from Aris' book in Figure 2 illustrate profound modeling insights about reactive systems. These wonderful illustrations were not created spontaneously but were conceived through intensive contemplation. From a practical point of view, it is legitimate to ask how taxing the art of model generation is, and whether industry can afford to pay for the labor of modeling artists. Let me offer an empirical estimate of the cost of mathematical model generation using examples from my past 20 years in chemical engineering research. The mathematical problem formulation for my PhD thesis on coal gasification took 2 years; the resulting nonlinear equations were encoded in Fortran. Today, this task might be accomplished a few months faster using modern modeling languages such as gPROMS or GAMS [2, 3]. My second example concerns the chemical process flowsheets typically analyzed in a chemical engineering senior design course. Development and analysis of a process flowsheet may take a student team operating the Aspen FlowSheet Software at the beginner's level about 45 man-hours, or 3 hours per week for 15 weeks; an expert user may be able to set up the flowsheet in only 10 hours. The third example concerns the development of a computational fluid dynamics (CFD) model, of which we will see more later in the study of the brain. Using commercial CFD software such as FLUENT, it may take a few days or weeks to set up a routine flow problem such as laminar flow in a cylindrical domain. A complex problem, such as subject-specific blood flow predictions in an aneurysm, may require 1–2 years of a PhD student's time; in biological systems, some CFD problems require even more than 2 years. The chart in Figure 3 shows that the cost of model formulation falls in the range of a few hundred thousand dollars for junior-level engineers; it may reach or exceed half a million dollars if the modeling problem requires a senior expert or a scientist. In contrast, the computational time for merely solving the mathematical model was calculated to amount to only about $500 of CPU time (one week of CPU time). Accordingly, the cost of formulating models is much higher than the expense of solving the mathematical equations. The expense of mathematical model-based learning stems mainly from the effort of problem formulation; a much smaller fraction is attributable to the solution on the computer. Even in a computationally intense scenario (one week of CPU time), computational cost may amount to only about 10% of the total. In more complex modeling situations, the cost lies essentially in formulating, interpreting, and testing the model. It is therefore desirable to accelerate model generation and thus reduce its cost.
My former advisor, George Stephanopoulos at MIT, was one of the first chemical engineers to systematically consider whether machines could help formulate mathematical models. He called such machines intelligent systems, for which he developed several modeling languages including MODEL.LA, a mathematical language for formulating process engineering models [5, 6]. He introduced a series of papers on formal modeling frameworks, intelligent systems in process engineering, and agent-based approaches for mathematical modeling [7, 8]. I had the pleasure of collaborating with a team of colleagues on one of his projects, a machine-assisted modeling approach entitled BatchDesignKit (BDK). BDK is a software architecture designed to interactively generate mathematical models [9, 10, 11, 12, 13, 14, 15]. It is composed of a batch sheet which features the formalized natural language input
I will now turn to the question of whether model generation is practical. Numerous examples from my work in brain research provide evidence that the formal generation of mathematical models has an attractive place in knowledge discovery. I will highlight oxygen exchange and blood flow in the aging brain to show how mathematical models can serve as an instrument of knowledge discovery. The mathematical modeling paradigm we propose for the brain is based on model generation from medical images, and we term it anatomical model generation. I demonstrate this with an example of the generation of a mathematical model for the mouse brain as well as the synthesis of vascular trees in humans. The guiding principle of generating mathematical models, shown in Figure 5, is the conversion of medical images into mathematical model representations; the schematic outlines this process. For instance, in this case study of the mouse brain, two-photon microscopy was used to acquire the anatomy of the brain cells and blood vessels in a large section of the primary somatosensory cortex of the mouse. My lab then used image segmentation to create vectorized image data, building an inventory of all the blood vessels and cells and capturing their precise location, diameter, and connectivity. These vectorized image data were encoded into a network graph using adjacency matrices to store the connections of nodes by arcs, together with property vectors for Cartesian coordinates, diameters, and sizes. Based on this image-derived domain representation, we generated an anatomically precise network topology of the primary somatosensory cortex of the mouse. Figure 4 shows examples of four different sample sections, which are digital representations of the same cortical region of the brain in four different animals. Once we have generated the topological representation, in this case the anatomy of the cerebral cortex, we can automatically generate the equations from this network. The network representation of the mouse brain enables the computer to perform the task of synthesizing transport equations automatically using a set of rules.
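A minimal sketch of such an image-derived network representation follows; the class and field names are illustrative choices, not the data structures actually used in this work.

```python
import numpy as np

# Minimal vascular-network container: nodes carry coordinates, arcs carry
# geometry, and an adjacency matrix records connectivity.
class VascularNetwork:
    def __init__(self):
        self.coords = []   # node index -> (x, y, z) in micrometers
        self.arcs = []     # (node_i, node_j, diameter_um, length_um)

    def add_node(self, x, y, z):
        self.coords.append((x, y, z))
        return len(self.coords) - 1

    def add_vessel(self, i, j, diameter_um):
        length = float(np.linalg.norm(np.subtract(self.coords[i], self.coords[j])))
        self.arcs.append((i, j, diameter_um, length))

    def adjacency_matrix(self):
        n = len(self.coords)
        A = np.zeros((n, n), dtype=int)
        for i, j, *_ in self.arcs:
            A[i, j] = A[j, i] = 1
        return A

# Tiny example: three nodes joined by two vessel segments
net = VascularNetwork()
a = net.add_node(0, 0, 0)
b = net.add_node(50, 0, 0)
c = net.add_node(50, 40, 0)
net.add_vessel(a, b, diameter_um=8.0)
net.add_vessel(b, c, diameter_um=6.0)
print(net.adjacency_matrix())
```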
Figure 6 depicts the phenomenological description of the model generation methodology. Biphasic blood flow equations for mass conservation, as well as a simplified momentum equation, can be generated automatically for each node of the vascular network. Blood flow in the microcirculation does not behave like a single fluid; at the microlevel, blood plasma and red blood cells act as a biphasic suspension. It is also very well known that red blood cells in the microcirculation do not distribute uniformly between the branches of the microcirculation but tend to concentrate in vessels with higher flow and larger diameter. Therefore, we need a biphasic representation of blood flow, which was implemented with a very simple drift flux model of biphasic blood flow depicted in Figure 6. This kinematic model is analogous to a mixture of species with different volatilities: red blood cells tend to divide into the thinner or thicker branch of a bifurcation as a function of their relative kinematic affinity (volatility), described by an empirical drift flux parameter, m. This simple hematocrit split rule allows us to predict the uneven distribution of red blood cells; the descriptive equations of the biphasic drift flux model are again automatically synthesized. Additional modeling details instantiate equations for oxygen transport to brain tissues, expressed by molar flux balances for red blood cells and oxygen unbinding from hemoglobin into plasma according to dissociation kinetics.
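To make the split rule concrete, the fragment below sketches one plausible kinematic formulation: the red-blood-cell flux entering a bifurcation is divided between the daughter branches according to their flow fractions, biased by a drift exponent m. This is an illustrative assumption, not necessarily the exact empirical law used in this work.

```python
def split_hematocrit(rbc_flux_parent, q_daughter1, q_daughter2, m=1.0):
    """Divide parent red-blood-cell flux between two daughter branches.

    The fraction of RBC flux entering branch 1 is modeled as a power-law
    bias of its blood-flow fraction, with exponent m (m = 1 gives a
    proportional split; m > 1 concentrates cells in the higher-flow branch).
    """
    f1 = q_daughter1 / (q_daughter1 + q_daughter2)      # blood-flow fraction to branch 1
    rbc_fraction_1 = f1**m / (f1**m + (1.0 - f1)**m)    # biased RBC split
    return rbc_flux_parent * rbc_fraction_1, rbc_flux_parent * (1.0 - rbc_fraction_1)

# Example: 60% of the blood flow goes to branch 1, drift exponent m = 2
to_1, to_2 = split_hematocrit(1.0, q_daughter1=0.6, q_daughter2=0.4, m=2.0)
print(round(to_1, 3), round(to_2, 3))   # branch 1 receives ~69% of the cells
```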
The entire set of these complex model equations fits on a single piece of paper, as shown in Figure 7. The anatomical modeling approach enables the automatic generation of system equations from information encoded in the network topology. Once these network equations are automatically generated, their numerical solution on the computer yields predictions of microcirculatory blood flow patterns and oxygen extraction at an unprecedented level of detail, as shown in Figure 8 for four samples of the primary somatosensory cortex. The diagram shows, with microscopic detail, the distribution of red blood cells, blood oxygen saturation, the uneven distribution of hematocrit, and the patterns of blood pressure for any capillary and its surrounding tissue in the mouse cortex. The computer-generated mathematical model allows analysis at an unprecedented scale, down to the detail of individual cells or capillaries. The model predicted that blood pressure is not uniform; there are large deviations of hemodynamic states along different paths traversing the microcirculatory network. Previously, it was believed that the capillary bed exhibits representative average conditions of pressure or oxygen saturation as a function of level in the anatomical vessel hierarchy. Our research shows that the hematocrit distribution, the hydrostatic pressure, and red blood cell saturation all exhibit large variability as a function of the different pathways red blood cells can take to traverse the network. These findings are nonintuitive and were revealed with the help of the anatomical mathematical model. Key findings include that hemodynamic states in the microcirculation are not uniform, that the tissue is nevertheless evenly oxygenated, and that the pressure drop occurs mainly in the capillary bed. None of these findings were predicted by prior models, which did not offer the fine-grained level of anatomical detail shown in this research.
I now attempt to address how oxygen is supplied to the human brain. In humans, it is not possible to access microcirculation data through open cranial windows as was shown for rodents; rather, a noninvasive approach was needed. My lab successfully deployed a model generation methodology to overcome this limitation. In humans, we used a modified constrained constructive optimization algorithm, originally developed by Wolfgang Schreiner for the synthesis of coronary arterial networks. Schreiner randomly added segments to a main coronary arterial tree and determined the optimal segment location in the tree hierarchy, its coordinates, and the segment diameters by minimizing the vascular tree volume subject to flow conditions. Remarkably, when the process of random segment addition followed by deterministic optimization is repeated sequentially, a tree emerges whose topology resembles natural vasculature. This discovery suggests that in nature, vascular trees grow in a manner that perfuses capillaries evenly, while the segment diameters and the locations of bifurcations are chosen so that the total required blood filling volume is at a minimum. We have modified this original algorithm to generate vascular structures for very complicated organs such as the brain. Our modified algorithm is versatile and is capable of delineating vascular structures in quite complicated domains. The example in Figure 9 shows the initials of my laboratory (lppd, the laboratory for product and process design) literally painted in blood: each letter constitutes a physiological vascular tree that discharges exactly the same amount of flow through its terminal nodes (capillaries). We have successfully used vascular synthesis to generate cerebrovascular models for rodents as well as for humans that are virtually indistinguishable from real vascular structures. Specifically, we made a computer-generated anatomical model of the human microcirculation. Figure 10 depicts a comparison between the synthetic vasculature and a real sample. Using an artificially generated human cortical structure, we were able to predict oxygen exchange in humans at a length scale that has not been acquired experimentally. These results allowed us to predict blood flow and oxygen exchange in a large, 3 × 3 × 3 millimeter section of the somatosensory cortex.
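In outline, the growth procedure alternates random terminal addition with deterministic optimization. The sketch below captures only this control flow; the geometric optimization and the hemodynamic constraints are stubbed out as hypothetical callables, so it is a schematic of the idea rather than the algorithm actually used.

```python
import random

def grow_vascular_tree(tree, n_new_terminals, candidate_sites,
                       optimize_bifurcation, satisfies_flow_constraints):
    """Schematic constrained constructive optimization (CCO) loop.

    `tree` is the caller's structure for the growing vasculature and must
    expose an iterable `tree.segments`. `optimize_bifurcation(tree, segment,
    site)` and `satisfies_flow_constraints(candidate)` stand in for the real
    geometric and hemodynamic computations (minimizing total tree volume
    subject to equal terminal flows); candidates are assumed to expose
    `.volume` and `.tree`.
    """
    for _ in range(n_new_terminals):
        site = random.choice(candidate_sites)        # random new terminal location
        best = None
        for segment in tree.segments:                # try joining each existing segment
            candidate = optimize_bifurcation(tree, segment, site)
            if satisfies_flow_constraints(candidate) and (
                    best is None or candidate.volume < best.volume):
                best = candidate                     # keep the lowest-volume legal join
        if best is not None:
            tree = best.tree                         # accept the optimized connection
    return tree
```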
These two examples show how model generation can create mathematical representations of complex biological domains to make them amenable to mathematical analysis. Specifically, these models allow nonintuitive inferences about cerebral circulation. The first conclusion concerned the uneven distribution of hemodynamic states in the microcirculation and the role that the network plays in ensuring even oxygenation. The second example, vascular synthesis, enabled predictions of oxygen exchange in humans, where currently there is no imaging modality capable of penetrating into the human brain at the level of individual capillaries. Having demonstrated the practical role of model generation and the automatic formulation of process models for the normal brain, I now ask whether model generation is also significant for the diseased brain.
Here, let me demonstrate the potential value of modeling for knowledge discovery in the aging brain. Brain diseases constitute a worldwide health problem. For instance, the total cost of brain disorders in the European Union amounted to almost 800 billion euros in 2010, an average cost per capita of about 5500 euros. Nature Medicine reported a similarly expensive picture for the United States. Alzheimer's disease is the most common cause of dementia, and mental disorders are at the top of the list of the most costly health conditions in the United States. According to the Information Technology and Innovation Foundation (ITIF), $1.5 trillion is spent annually on mental disorders, a cost to the economy of about 8.8% of GDP. These diseases also have a personal face: Parkinson's disease and Alzheimer's disease affect popular personalities as well as loved ones. Due to the severity of the crisis, the ITIF policy recommendations include expanding funding for NIH/NINDS and encouraging pharmaceutical companies to invest in more research.
But how can progress be made to address diseases of the brain, especially in an aging population? Here, I present preliminary data from research on mathematical models aimed at better understanding metabolic changes that affect the aging brain. Figure 11 shows microcirculatory changes that affect cerebral perfusion. In this preliminary research, we can see that capillary blockage caused by lymphocytes, or an increase in tortuosity, can cause subtle changes in blood microperfusion and also increases intracranial resistance (a standard hydraulic relation illustrating this effect is given below). Even though these results are still at a preliminary stage, mathematical models at the microcirculatory level offer a unique perspective capable of answering questions at the macroscopic scale that are very difficult to access experimentally. There are numerous additional questions concerning brain pathologies that mathematical models can effectively address. The example in Figure 12 depicts the computation of hemodynamic risk factors in the human arterial tree. We envision a system in which such risk factors can be computed routinely from patient-specific data to support clinical decisions.
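As a point of reference (a standard hydraulic relation, not a result from this chapter), the Poiseuille resistance of a vessel segment is R = 8μL/(πr⁴), where μ is the blood viscosity, L the effective path length, and r the vessel radius. A tortuous capillary therefore raises resistance roughly in proportion to its extra path length, while a partial blockage that halves the effective radius raises the resistance of that segment roughly sixteenfold, which is consistent with the perfusion changes described above.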
A second area of significance for future intelligent modeling environments concerns the rational design of intrathecal drug delivery methods into the central nervous system. A prototype interface is depicted in Figure 14. Currently, the fate of drugs infused into the central nervous system is difficult to predict, so new drugs need to undergo trial-and-error testing in animals. We are working on a three-dimensional virtual reality tool that will enable physicians to perform virtual infusion experiments with drug pumps and to decide between continuous infusion and bolus injection for the purpose of achieving the desired biodistribution of drugs in the central nervous system. Innovative treatments with gene vectors or antisense oligonucleotide (ASO) therapies designed for patients with brain diseases have never been used clinically before. For these situations, mathematical models can help optimize drug dosing and anticipate risks.
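As a rough indication of what such virtual infusion experiments compute, the toy model below (not the laboratory's virtual reality tool; all parameters are invented) advances a one-dimensional advection-diffusion equation for the concentration of an injected bolus, a deliberately reduced stand-in for drug transport in the cerebrospinal fluid.

```python
# Toy sketch (not the lab's tool): explicit finite-difference solution of a
# 1D advection-diffusion equation as a stand-in for drug transport after an
# intrathecal bolus. All parameters are illustrative placeholders.
import numpy as np

n, dx, dt = 200, 0.5e-2, 0.1          # grid points, spacing (m), time step (s)
D, v = 1e-5, 1e-4                     # diffusivity (m^2/s), bulk CSF velocity (m/s)
c = np.zeros(n)
c[n // 2] = 1.0                       # bolus injected at the middle of the domain

for _ in range(5000):                 # march forward in time
    diffusion = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    advection = -v * (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
    c += dt * (diffusion + advection)
    c[0] = c[-1] = 0.0                # simple absorbing ends of the domain

print("peak concentration after the simulated interval:", c.max())
```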
It is therefore possible to conclude that modeling can dramatically accelerate discovery about complex systems, as shown here for the aging brain and the rational design of drug delivery methods. Formulating a process model by hand is rather expensive, so machine generation is attractive both because of the cost savings and because it allows a faster feedback/inference cycle. There is an intellectual demand for systems engineering research to generate these models.
In this last section, I would like to apply conclusions from the discussion of mathematical modeling in the sciences to engineering education. Throughout this essay, mathematical modeling has been characterized as an effort to replace a theory-less domain of facts with another in which all theories are known. This definition explains why pure exploration of mathematical properties, that is, the solution of equations, does not necessarily lead to insights about the prototype. The proposed model-based learning approach delineated a continuing feedback cycle of sharpening the problem formulation, solving it, and interpreting the results. Accordingly, the rigorous solution of mathematical properties is only a subtask, not the essence, of mathematical modeling, which requires translation between the physical prototype and mathematical relations and between computational predictions and actual process system states. It is key that the interpretation of mathematical results (predictions) inform knowledge about the behavior of the original study system. These repeated translations pose a linguistic rather than a merely logical challenge. We therefore suggest that the problem formulation of process models resembles a communication and composition task. The realization that mathematical modeling is linguistic in nature has implications for how it ought to be taught.
Mathematical modeling involves frequent translation between the physical and mathematical languages. The view that mathematical modeling is a form of translation and composition between languages gives indications of how modeling can effectively be learned and taught. First, let us appreciate that a language requires a grammar and a syntax. In the world of mathematics, these are the mathematical properties that need to be studied before any serious composition can commence. In this respect, students are often at a loss, not because they fail to comprehend the logic of mathematics, but because they fail to parse its terminology. Even if the logic is clear, we do not comprehend wisdom written in a foreign tongue. It takes good reading practice before students are able to compose in this language.
I have implemented this "math-as-a-foreign-language" pedagogy in several course offerings over the past 10 years, for instance, in a course on biological systems analysis. Accordingly, we have reading exercises to make sure that students are familiar with the vocabulary of mathematical syntax. Grammatical rules are introduced as the properties of linear and nonlinear systems. All assignments are given as natural-language memos, which forces students to translate instructions in natural language into mathematical expressions. There are translation exercises in which students practice converting physical prototypes into quantitative expressions. This involves the choice of mathematical entities (scalars, vectors, and matrices) and a suitable mapping onto biological properties. Temperatures are an example of scalar fields; velocities form vector fields. For the description of the state of stress, a tensor field is needed, and the characterization of an anisotropic porous medium requires a diffusion tensor field. We also have composition exercises in which multiphysical phenomena are transcribed from the physical world of reactors with connecting processing streams into networks of mathematical relationships using property vectors and connectivity matrices (a small illustration is sketched below). Finally, students are tasked with validation steps in which mathematical predictions are interpreted in terms of physical model behavior.
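A small illustration of such a composition exercise, under assumptions of my own choosing rather than taken from the actual course material, is to encode a three-unit flowsheet as a connectivity matrix and a property vector so that the steady-state balances become a single matrix-vector statement.

```python
# Small illustration (not the actual course assignment): a flowsheet of three
# units mapped onto a connectivity matrix and a property vector, so that
# steady-state balances become one matrix-vector statement.
import numpy as np

# Stream connectivity: entry (i, j) = +1 if stream j enters unit i,
# -1 if it leaves unit i, and 0 otherwise.
#             s0  s1  s2  s3
A = np.array([[+1, -1,  0,  0],    # unit 0: feed stream in, stream 1 out
              [ 0, +1, -1,  0],    # unit 1
              [ 0,  0, +1, -1]])   # unit 2: stream 3 is the product

molar_flows = np.array([10.0, 10.0, 10.0, 10.0])   # property vector (mol/s)

# At steady state without reaction, every unit's net accumulation must vanish.
residual = A @ molar_flows
print("balance residuals per unit:", residual)      # all zeros here
```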
This course has been running for the last 10 years with high success in terms of student retention rates. Typically, our students are more confident about mathematical modeling after the course than they were before it or after the preceding sequence of math courses. The biggest obstacle to learning mathematics was removed by recognizing that mathematics is a language.
An earlier version of this text was delivered as a presentation at 2040 Visions of Process Systems Engineering—A Symposium on the Occasion of George Stephanopoulos's 70th Birthday and Retirement from MIT, June 2, 2017.
Andreas A. Linninger
University of Illinois, Chicago, United States
Precipitation is the main driver of the hydrological cycle, and therefore the measurement and forecasting of precipitation are key elements in hydrological and meteorological applications such as rainfall-runoff modeling, precipitation forecasting, flood forecasting, flood risk management, and hydrological and climate studies. Flooding is one of the most damaging natural hazards in the world. It has vast impacts, including loss of life, damage to property and goods, and negative health, social, and economic impacts. Reliable and accurate meteorological and hydrological forecasting is therefore a major priority to minimize such impacts. Significant progress has been made in improving the forecasting of extreme rainfall events for flood prediction in large rural catchments. However, accurate, reliable, and timely flood forecasting in urban areas is a challenging task that is now crucial for the reduction of hazard and the preservation of life and property. This chapter discusses some of the latest advances in the measurement and forecasting of precipitation with weather radars, including applications for river catchments and small urban areas.
Flooding is an important hydrometeorological hazard that affects local populations and has significant consequences for the socioeconomic development of the affected regions. Flash floods are produced by very heavy localized precipitation and, when they strike urban areas, cause vast human and economic impacts. Climate change can further increase the frequency and intensity of floods, so it is important to develop measures to manage flood risk. Structural measures, such as dams, river embankments, channels to divert flood water, and storage tanks, can help to reduce flood risk, but they can be very expensive to build. Nonstructural measures, such as early flood forecasting and warning systems, can help to forecast floods several hours ahead, allowing a timely emergency response to take place and benefiting the local population during a major flooding event. The economic benefit of early flood warnings in Europe was estimated to be around 400 euros per euro invested, and so flood forecasting systems are a fundamental part of flood risk management. In the UK, the National Flood Forecasting System (NFFS) provides hydrological forecasts for all the catchments across England and Wales. These flood forecasts are only possible with the help of suitable models (e.g., hydrological and inundation models) and reliable rainfall forecasts (based on radar rainfall, numerical weather prediction models, or a combination of both). Hydrological models and rainfall forecasts are therefore essential parts of flood forecasting systems.
The hydrological cycle is controlled by different processes such as precipitation, evapotranspiration, groundwater fluxes, and changes in catchment water storage and soil moisture. Understanding the impact of changes in the hydrological cycle due to climate change, urbanization, land use, etc. is an active area of research. Hydrological models make it possible to simulate the hydrological processes in a catchment. They are broadly classified into lumped, semidistributed, and distributed models. Lumped models consider the catchment as a single unit, and catchment-averaged values (forcing inputs and model parameters) are used to model the hydrological processes in the catchment. In contrast, distributed models can describe spatial variability within the catchment by using distributed measurements (e.g., rainfall, land use, soil characteristics, terrain elevation, etc.). Semidistributed models take into account some of the spatial variability within the catchment by dividing the catchment into subcatchments and treating each subcatchment as a lumped model. The choice of model is highly dependent on the task. For instance, distributed models can be useful to study the effects of land use change on the hydrological response of the catchment; however, increasing model complexity does not always guarantee better hydrological simulations. Urbanization has a strong influence on the hydrological response of the catchment by increasing runoff rates and decreasing infiltration due to the presence of impervious surfaces, whereas changes in infiltration and land use can affect evapotranspiration. In fact, urban hydrological processes such as infiltration, evaporation, and storm drainage vary at small spatial and temporal scales, and therefore the water losses due to these processes need to be accounted for when computing the amount of rainfall that becomes runoff. There are a number of models available in the literature to model the hydrological processes in river catchments and urban areas, each with its own complexity, data requirements, and mathematical formulation for estimating the rainfall-runoff processes. These models require calibration of the model parameters to ensure the simulated runoff is close to the observations for a number of storms (or a calibration period) representative of the climatology of the study area.
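For illustration, a minimal lumped rainfall-runoff model can be written as a single linear reservoir whose storage drains in proportion to its content; the sketch below is a deliberately simple stand-in for the operational models referenced in this chapter, and the recession constant would normally be calibrated against observed flows.

```python
# Minimal lumped rainfall-runoff sketch (a single linear reservoir, not one of
# the operational models referenced in the text): storage S drains as Q = S/k,
# where the recession constant k would be calibrated against observations.
def linear_reservoir(rainfall_mm, k_hours=10.0, dt_hours=1.0, s0=0.0):
    """Return simulated runoff (mm per time step) for a rainfall series."""
    storage, runoff = s0, []
    for p in rainfall_mm:
        storage += p * dt_hours                 # rainfall adds to storage
        q = storage / k_hours * dt_hours        # outflow proportional to storage
        storage -= q
        runoff.append(q)
    return runoff

# Illustrative storm: 5 mm/h for three hours followed by a dry spell.
print(linear_reservoir([5, 5, 5, 0, 0, 0, 0, 0]))
```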
Precipitation is one of the key drivers of the hydrological cycle, and so any errors in the measurement of precipitation have important implications when modeling hydrological processes in river catchments or urban areas. Rainfall can be measured by different instruments such as rain gauges, disdrometers, microwave links, weather radars, and active/passive satellite sensors. Both rain gauges and disdrometers provide point measurements and are therefore unable to measure the spatial distribution of precipitation. Typical rain gauge instruments are tipping bucket rain gauges (TBRs) and weighing rain gauges (WGs). The measurements of both instruments are affected by systematic and random errors [6, 7, 8]. Typical errors in TBR measurements include gauge malfunctioning, blockages, wetting and evaporation losses, delays in rain delivery, underestimation during high rain rates, condensation errors, wind effects, and timing errors. WGs are subject to systematic delays in producing an accurate weight measurement of the precipitation collected in the bucket, and the measurements can be affected by evaporation. Disdrometers do not measure rainfall rates directly but rather the drop size distribution (the number of raindrops of different sizes), which can be related to precipitation rates. Disdrometers are often used to validate radar rainfall and satellite observations and require very little maintenance in comparison with rain gauges. Satellite rainfall measurement is improving, and the latest Global Precipitation Measurement (GPM) mission will help to improve our understanding of the water and energy cycles across the globe and our capabilities to forecast extreme rainfall events. The GPM core observatory includes active and passive instruments such as the dual-frequency phased-array precipitation radar (DPR), infrared (IR) sensors, and the GPM microwave imager (GMI), which can provide the three-dimensional structure of storms. GPM provides rainfall measurements from space with a spatial resolution of 0.1° (approximately 10 km) every 3 h, from 65° south to 65° north in latitude. Satellite precipitation is particularly important in places where no other ground precipitation observations are available. For instance, the measurement of precipitation over the oceans is an active area of research, and the early detection of hurricanes, tropical cyclones, and large precipitation systems allows meteorologists to forecast these large-scale events several days in advance. Microwave links (MLs) measure the signal attenuation due to rain along commercial communication links (e.g., from mobile telephone networks), and the precipitation rates along the link can be estimated from the measured attenuation in rain. Although this technique is very promising in urban areas, due to both the lack of rain gauge stations and the large number of MLs available, it is not straightforward to get access to ML data from mobile network operators.
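To illustrate the microwave link retrieval, the specific attenuation k (in dB/km) is commonly related to the rain rate R (in mm/h) by a power law, k = aR^b, which can be inverted for R; the coefficients in the sketch below are placeholders rather than operational values for any particular link frequency.

```python
# Illustrative only: specific attenuation along a microwave link is commonly
# related to rain rate by a power law, k = a * R**b, so R can be retrieved by
# inverting it. The coefficients below are placeholders, not operational values.
def rain_rate_from_attenuation(k_db_per_km, a=0.12, b=1.1):
    """Invert k = a * R**b to estimate the rain rate R (mm/h)."""
    return (k_db_per_km / a) ** (1.0 / b)

print(rain_rate_from_attenuation(2.0))   # example attenuation of 2 dB/km
```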
Weather radars, on the other hand, provide distributed rainfall measurements with good spatial and temporal resolutions over a larger area. For instance, the operational C-band weather radar network in the UK, consisting of 15 radars, produces rainfall measurements at 1 km resolution every 5 min over the whole of the UK (see Figure 1). Mobile polarimetric X-band weather radars can produce rainfall measurements at even higher spatial and temporal resolutions (e.g., 250 m every 1 min), which makes them suitable for urban flash flooding applications. Radar technology was developed during World War II to detect enemy aircraft at long distances. Early radar systems used long wavelengths that required huge antennas to operate, but the development of the magnetron allowed radar systems to use shorter wavelengths, typically in the microwave frequency range, resulting in more compact systems that could be installed on aircraft. At the time, radar operators realized that radar systems were sensitive enough to detect precipitation, and so there was huge potential for radar in weather forecasting. Nowadays, weather radars are used by meteorological services around the world to estimate precipitation over large regions at high spatial and temporal resolutions for hydrological and meteorological purposes. Weather radar measurements can be used to produce short-term precipitation forecasts up to several hours ahead (typically 3–6 h of forecasting lead time) for real-time flood forecasting and warning. They can also be combined with other atmospheric observations to improve the initial conditions of numerical weather prediction models through data assimilation and thus advance weather forecasting. The following section briefly describes how radar operates and the latest advances in weather radar technology.
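Radar rainfall estimation ultimately hinges on converting the measured reflectivity (introduced in the next paragraph) into a rain rate. The sketch below uses the widely cited Marshall-Palmer relation Z = 200 R^1.6 purely as an example; operational networks tune these coefficients for local rainfall regimes.

```python
# Example of converting radar reflectivity to rain rate with the widely used
# Marshall-Palmer relation Z = 200 * R**1.6; operational networks tune these
# coefficients, so treat them as illustrative rather than definitive.
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert reflectivity in dBZ to rain rate in mm/h via Z = a * R**b."""
    z_linear = 10.0 ** (dbz / 10.0)          # dBZ -> linear units (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 1), "mm/h")
```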
A weather radar typically sends a high-power signal in the microwave frequency range (S-band at 3 GHz, C-band at 5 GHz, and X-band at 10 GHz), and if precipitation particles lie along the path of the radar beam, then a small percentage of energy is reflected back to the radar antenna. This reflected power is related to a measurement known as the radar reflectivity (