Technology, Science and Culture - A Global Vision, Volume II

By Sergio Picazo-Vela and Luis Ricardo Hernández

Reviewed: October 11th, 2019. Published: January 13th, 2020

DOI: 10.5772/intechopen.90099

Viral Structures in Nanomedicine

Rafael Vázquez-Duhalt

Abstract

Nanotechnology has made progress in the creation of new materials with potential applications in the biomedical field. The potential uses of viral particles in nanomedicine include drug delivery, medical imaging, biosensors, and enzyme replacement therapies. Virus capsids are used as protein cages, scaffolds, and templates for the production of bionanostructured materials, in which organic and inorganic molecules can be incorporated in a precise and controlled fashion. Potential applications of virus particles include the following: (i) gene therapy, in which the virus particle is used as a carrier to deliver a therapeutic gene to target cells; (ii) drug delivery, to ensure that pharmaceuticals enter the body and reach the tissue where they are needed; (iii) imaging, in which virus-like particles are coupled with an imaging agent and ultrasound, magnetic resonance, or even traditional X-rays are then used to visualize the inside of the targeted organ; (iv) biosensors, for the detection of diverse analytes or the measurement of physicochemical conditions in the target tissue; and (v) enzyme replacement therapies, for which virus-like particles (VLPs) have recently been proposed as carriers of enzymatic activity. In this work, these potential biomedical applications of virus-like particles are reviewed and discussed.

Keywords: bionanotechnology, nanotechnology, nanomedicine, virus-like particles, virus, capsids

1. Introduction

Nanotechnology involves the study, design, and production of materials at the nanometric scale, from 1 to 100 nm in at least one of their dimensions. At this scale, materials show singular physicochemical properties, including electrical, optical, and mechanical ones, which originate from the dramatic increase in the surface-area-to-volume ratio as well as from electron confinement in a reduced space [1].
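
To illustrate this scaling argument (an added example, not part of the original text), consider a spherical particle of radius r: its surface-area-to-volume ratio grows as the particle shrinks,

```latex
\frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}
```

so reducing the radius from 1 μm to 10 nm increases the surface-to-volume ratio by a factor of 100, which is why surface effects dominate at the nanoscale.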

Bionanotechnology, or nanobiotechnology, is the branch of nanotechnology focused on the research and development of new nanometric-scale materials based on biomolecules, such as proteins, nucleic acids, and carbohydrates, for specific uses. Among these natural materials are the pseudoviral particles, or virus-like nanoparticles (VLPs), which have gained attention due to their potential applications in the biomedical field, or nanomedicine [2]. Although viral nanoparticles have long been used as vaccines [3], new biomedical applications have been proposed, and they are reviewed and discussed here.

VLPs, unlike viruses, do not contain the natural viral genetic material, and thus they are noninfectious and unable to replicate. These viral nanoparticles can be used as basic templates for the design and production of nanostructured materials (Figure 1). The most important properties of VLPs are the following [4, 5]:

  1. They are highly ordered structures at the nanometric scale and are able to self-assemble.

  2. They are monodisperse in size with a homogeneous composition.

  3. They come in a diversity of sizes and shapes (icosahedra, tubes, helices, etc.) with different stabilities to pH, temperature, and ionic strength.

  4. They have a high specific surface area with a large number and diversity of reactive sites that allow the conjugation of a wide range of ligands.

  5. They are hollow structures that could be used to encapsulate molecules for diverse applications.

  6. Some of them show cell internalization capacity and are biocompatible and biodegradable.

Figure 1.

Structure of diverse viruses studied in bionanotechnology for medical applications. The icosahedral capsid images were obtained from VIPERdb and the bacteriophage M13 from [6].

VLPs offer three interfaces that can be chemically or genetically manipulated: the external surface exposed to the solvent, the internal surface, and the interface between the protein subunits (Figure 2). The different surfaces can be used to precisely tune new functions for different applications [7].

Figure 2.

Scheme of the three interfaces at which virus-like nanoparticles can be modified for the addition of new functions (figure modified from [7]).

2. Medical applications of VLPs

2.1 Gene therapy

VLPs have been widely studied for use in gene therapy because of their natural capacity to transport nucleic acids and to integrate these genes into the host genome. The nanoparticles mainly come from mammalian viruses, since these have the intrinsic capacity to internalize into human or animal cells [8].

The adenovirus family is the most widely used for producing VLPs for gene therapy. Adenoviruses are icosahedral capsids containing double-stranded DNA. So far, 24% of clinical trials for gene therapy use this kind of virus. These viral vectors reach a load capacity of up to 35 kilobases of nonviral DNA, which allows the incorporation of large transgenes as well as regulatory elements [9].

VLPs show several advantages: they are easily produced at high viral titers; they transduce both growing and quiescent cells with high efficiency; and they offer great genome stability, low levels of viral genome integration, and good biological characterization [10].

The surface of VLPs can be chemically modified with polymers such as polyethylene glycol (PEG) and poly-N-(2-hydroxypropyl) methacrylamide (poly-HPMA) with the aim of decreasing their immunogenicity and avoiding fast elimination. The chemical modification of the VLP surface can also be used to attach ligands for specific cell receptors so that the particles are internalized in targeted tissues [11].

2.2 Drug delivery

In addition to transporting genes to targeted cells for gene therapy, VLPs are able to deliver small therapeutic molecules to specific tissues. The goal is to deliver the drug only to the tissues where it is needed, which increases treatment efficiency and, importantly, reduces the dose and thus the side effects. The encapsulated drug is not systemically present in the body, is protected against degradation, and shows increased biocompatibility [12].

There is abundant information on drug encapsulation inside VLPs and on drug binding to the capsid surface, especially for chemotherapy [13, 14, 15]. In an interesting work by Bar et al. [16], a filamentous phage was chemically modified with the chemotherapeutic compound hygromycin or modified with doxorubicin via genetic engineering to treat cancer. The strategy for the phage modification with doxorubicin included genetically engineering the N-terminus of the major coat protein (p8) to introduce a peptide susceptible to degradation by cathepsin B, which was then conjugated with doxorubicin. To direct the VLPs to the targeted tissues, they were functionalized with three different IgG antibodies that recognize specific receptors on tumor cells. This functionalized nanocarrier containing the chemotherapeutic agent was effectively recognized and internalized by the tumor cells, liberating the drug through hydrolysis of the peptide by lysosomal cathepsin B and inducing cell death. The multivalent VLP was able to deliver 3500 molecules of the drug per capsid, significantly enhancing the inhibitory effect on the tumor cells.

Photosensitizers are molecules of biomedical interest for targeted photodynamic therapy (TPD). These molecules are excited by light at specific wavelengths and produce reactive oxygen species (ROS) that are able to kill tumor cells [17].

Nanoparticles from the Cowpea chlorotic mottle virus (CCMV) were doubly functionalized to kill bacteria by TPD [18]. The VLPs were covalently modified with a ruthenium complex as a photosensitizer (Figure 3) and directed through antibodies to the pathogenic, biofilm-forming bacterium Staphylococcus aureus. The functionalized VLPs were then irradiated with light at 470 nm, producing reactive oxygen species.

Figure 3.

The cysteines introduced by site-directed mutagenesis on the surface of the VLP from CCMV allow the chemical conjugation of the photosensitizer (modified from [18]).

Enzymes could also be used as therapeutic agents; however, their potential has not been fully exploited because of limitations that include susceptibility to biodegradation by proteases and the lack of specificity for targeting and internalization by specific cells. These disadvantages can be overcome by encapsulating the enzymes inside VLPs, as discussed later.

The enzyme cytosine deaminase from yeast was encapsulated inside VLPs from simian virus 40 (SV40), which belongs to the polyomavirus family. This enzyme is able to transform the prodrug 5-fluorocytosine into the active drug 5-fluorouracil, which induces cell death. Monkey CV-1 cells treated with the VLPs containing cytosine deaminase became sensitive to 5-fluorocytosine due to its transformation into 5-fluorouracil. This is just one example of the potential application of VLPs as carriers of enzymatic activity targeted to a specific tissue [19].

2.3 Medical imaging

Diverse compounds are widely used for medical (in vivo) imaging for diagnosis and the evaluation of therapeutic treatments. VLPs have been studied in this field because they can be functionalized, inside and outside, with multiple contrast molecules such as fluorophores (including fluorescent proteins), quantum dots, and certain metals. In addition, functionalized VLPs can be directed to, and accumulated in, specific targeted tissues. Both characteristics enhance the specificity and biocompatibility of medical imaging techniques [20]. The immobilization of fluorophores on VLPs allows high loading in a site-specific fashion, preventing aggregation and reducing the quenching of the fluorescent compounds [21].

An interesting example of the use of VLPs for in vivo imaging was reported by Hooker et al. [22]. VLPs from the phage MS2 were chemically conjugated with Gd3+ ions, a contrast agent used in magnetic resonance imaging (MRI), a noninvasive technique widely used for the diagnosis of several diseases. In this work, the functionalization of the virus-like particles was carried out on both the internal and external surfaces of the protein nanoparticle. The modified nanoparticles showed improved solubility and stability and an important increase in relaxivity, which increased the sensitivity of the technique.

Functionalized VLPs carrying fluorescent proteins have been synthesized to elucidate the infection and pathogenesis of some viruses. The capsid protein VP2 from the human parvovirus B19, conjugated on its external surface with the enhanced green fluorescent protein (EGFP), was able to be internalized by tumor cells and, through the microtubule network, reach the nucleus (Figure 4) [23]. These nanoparticles could, in addition, be useful for understanding the biology of viral infections, as they can be monitored to elucidate the in vivo behavior of the virus.

Figure 4.

Confocal microscopy of Hep G2 cells incubated with VP2-EGFP virus-like nanoparticles. (A) Immunostaining with anti-VP2 antibodies visualized with Alexa-633 anti-mouse after 6 h of incubation. The EGFP was visualized directly. (B) Immunostaining with anti-α-tubulin visualized with Alexa-633 anti-mouse after 4 h of incubation. The colocalization of the VP2-EGFP VLPs and microtubules is shown in yellow (modified from [23]).

2.4 Diagnosis

Another application in which VLPs are gaining interest is as biosensors for the detection of DNA, toxins from pathogens, or protein disease markers. The virus-like nanoparticles act as carriers of different ligands, receptors, or reporter molecules to obtain sensors with high specificity and sensitivity for in vitro diagnosis [24]. VLPs have been used mainly in two different techniques: microarrays and immunoassays.

With the aim of increasing microarray sensitivity in samples with low DNA concentrations, VLPs from the Cowpea mosaic virus (CPMV) were functionalized with 42 molecules of the carbocyanine fluorophore Cy5 and one molecule of NeutrAvidin (NA), a protein able to bind biotin. The conjugation was made through cysteines introduced by genetic engineering. The NA-Cy5-CPMV nanoparticle was then used for the detection of genes from the pathogenic bacterium Vibrio cholerae O139 in a microarray assay. These nanoparticles bind specifically to biotinylated DNA through NA recognition (Figure 5). Due to the high fluorescence emitted by the Cy5 molecules, it is possible to detect DNA at levels as low as 1 to 10 copies of the genome [25].

Figure 5.

Comparative scheme of the detection method with NA-Cy5-CPMV and the conventional method with streptavidin-Cy5 (modified from [25]).

To increase the efficiency of immunoassays for the diagnosis of some diseases, highly sensitive systems are necessary. To achieve this, hepatitis B virus capsids have been genetically modified to display a fragment of protein A from Staphylococcus (SPAB) on the coat protein surface. This fragment has specific affinity for the Fc domain of immunoglobulins (IgG). With these virus-like nanoparticles, the antibodies are correctly oriented, leaving their variable antigen-binding domains fully accessible to recognize marker proteins such as troponin, a protein found in patients with cardiac muscle damage who have a high tendency to suffer heart attacks. The use of these VLPs together with plates functionalized with PVDF membranes or nickel nanostructures increased the sensitivity of the technique to the attomolar level (10−18 M), six orders of magnitude lower than the conventional procedure [26].

3. Biocatalytic nanoreactors for enzyme therapies

The treatment of disorders originating from deficiencies of enzymatic activity with therapeutic enzymes was first described 50 years ago [27]. Many diseases originate from the lack of one or more enzymatic activities, and in some cases the administration of exogenous enzymes has been successfully used as enzyme replacement therapy (ERT). Enzyme replacement therapy has been recently reviewed [28, 29, 30]. The success of ERT is mainly based on enzyme biochemistry, local enzyme concentrations, the ability of treated cells to turn over and be replaced by normal cells, and substrate movement, redistribution, and storage. All of these factors could be modulated and improved by the use of virus capsids as carriers.

Comellas-Aragonès et al. [31] developed a virus-based single-enzyme nanoreactor. Horseradish peroxidase was encapsulated in the Cowpea chlorotic mottle virus capsid, and the enzymatic behavior of the resulting nanoreactor was studied. This was the first report to show the permeability of a virus capsid to substrate and product and the alteration of this permeability by changes in pH. A breakthrough in VLP-based nanoreactors was achieved by Patterson et al. [32], who constructed a densely packed multienzyme system able to perform a coupled cascade of reactions. In this work, a 160-kDa protein was produced by the fusion of β-glucosidase, galactokinase, and glucokinase and encapsulated into bacteriophage P22-derived capsids. In addition, their results showed that intermediate channeling between sequential enzymes depends on both the inter-enzyme distance and the balance between the kinetic parameters of the enzymes involved.

Sánchez-Sánchez et al. [33] effectively encapsulated a cytochrome P450 (CYP) variant from Bacillus megaterium in VLPs constituted of the coat protein of Cowpea chlorotic mottle virus (Figure 6). These catalytic VLPs were able to transform the chemotherapeutic prodrug tamoxifen into active products similar to those obtained with human CYP. In a subsequent study, Sánchez-Sánchez et al. [34] encapsulated CYP into bacteriophage P22-derived capsids. The nanobioreactors contained 109 molecules of CYP per capsid, and the enzyme's stability toward protease degradation and acidic pH was increased. Furthermore, these nanobioreactors were internalized into human cervix carcinoma cells, and as expected, P22-CYP transfected cells showed a 10-fold higher CYP activity than nontreated cells. PEGylation of the VLP capsids strongly reduced or eliminated the immunogenic response, and by functionalizing specific ligands to the free end of the PEG molecules, the virus-derived nanoreactors could be targeted to receptors on tumor cells [35, 36]. Ligand-receptor-mediated cell internalization increased CYP activity inside cells and thus increased the tamoxifen sensitivity of both human cervix and breast tumor cells, reducing the dose needed to kill these cells by 50% [35].

Figure 6.

Bionanoreactor of cytochrome P450 encapsulated in virus-like particle for the activation of prodrug tamoxifen to the active drug for breast cancer treatment.

The multivalency and versatility of virus-derived nanovehicles were also exploited for targeted enzyme delivery combined with the synergistic effect of photodynamic therapy (PDT) by Chauhan et al. [36]. The P22 bacteriophage encapsulating CYP activity was multifunctionalized with a porphyrin-based photosensitizer for the PDT effect and an estradiol-based ligand for targeted delivery to ER+ breast tumor cells (Figure 7). Upon illumination, the system generated reactive oxygen species that, in synergy with the CYP activity transforming tamoxifen (prodrug) into 4-hydroxytamoxifen (active drug), produced approximately threefold higher toxicity than nontargeted VLPs. Additionally, the PEG coat of the multifunctional nanoreactors rendered them invisible to macrophages, and treatment of tumor cells with these nanoreactors significantly reduced the prodrug dosage required. The results indicate that a drastic reduction of chemotherapy side effects and an increase in treatment effectiveness can be expected. The study is the first example to show the applicability of the VLP platform as a combinatory treatment modality, inhibiting tumor cells in multiple ways at once, which will be advantageous for overcoming tumor heterogeneity and recurrence.

Figure 7.

Multifunctionalized biocatalytic P22 nanoreactor for combinatory treatment of ER+ breast cancer.

Finally, Schoonen et al. [37] encapsulated T4 lysozyme (T4L) inside elastin-like polypeptide (ELP)-stabilized CCMV capsids. The activity of T4L is highly dependent on salt concentration and pH, and under the required conditions these capsids are normally unable to assemble; however, by adding metal ions, the system was stabilized. Four T4L molecules were encapsulated per CCMV capsid and remained catalytically active. Their work opened the possibility of utilizing shielded T4L in antibiotic applications.

4. Challenges for the use of VLPs in nanomedicine

Bionanotechnology, as an emerging technological field, has opened up a vast number of potential applications, among which biomedical applications are included. VLPs carrying different cargos can be effectively functionalized with different ligands so that they are recognized and internalized by tissues bearing the specific receptor. In spite of the several advantages shown by VLPs in the biomedical field, some challenges must still be solved before they can be used in practice. These challenges are mainly related to the immunogenic response. Different strategies can be implemented to avoid recognition by the immune system, including coating the VLPs with a polymer such as polyethylene glycol or genetically engineering the viral coat proteins to mutate the most immunogenic epitopes [13]. Recent advances in bioinspired “active” stealth coatings, which combine the ability to reduce or eliminate the immunogenic response with active targeting, could be beneficial for treating diseased tissue via a biomimicry approach with fewer side effects. The toxicity of VLPs should also be evaluated, with emphasis on the fate of these nanoparticles in the organism and their possible side effects [38].

Another problem to be solved is the production of VLPs at large scale. Some VLPs can easily be produced at large scale, while others are difficult to express and purify. Research efforts are still needed to develop efficient systems for the heterologous expression of VLP coat proteins [39].

We can conclude that the use of VLPs for biomedical and therapeutic purposes is still in its infancy but shows enormous potential, and thus further research efforts are needed. It is already clear that VLPs are excellent systems for several applications in the biomedical field, including drug, enzyme, and gene delivery, medical imaging, and biosensing. These virus-derived nanoparticles are promising candidates for the treatment and diagnosis of diseases.

Rafael Vázquez-Duhalt

Centro de Nanociencias y Nanotecnología, Universidad Nacional Autónoma de México, Ensenada, Baja California, Mexico

*Address all correspondence to: rvd@cnyn.unam.mx

Mild Intervention Technologies for Increasing Shelf Life and/or Safety of Fresh Fruits: Opportunity and Challenge

Stella M. Alzamora, Paula L. Gómez, Silvia Raffellini, Eunice Contigiani, Gabriela Jaramillo, Angela Rocío Romero Bernal and Lucas González

Abstract

Postharvest diseases and senescence represent the most severe sources of loss in fruit production. Fruits are perishable products with an active metabolism during the postharvest period, which plays a major role in senescence and affects commercial life. Many different species of fungi and bacteria are associated with fruits, and contamination may occur during growing, harvesting, handling, and distribution, and while the fruit awaits processing. Fruits are also vehicles for the transmission of infectious microorganisms. Foodborne illness outbreaks and cases associated with fresh and minimally processed fruits have been rising over the last two decades, in developing countries as well as in the developed world. These issues lead to major economic losses, and the industry is constantly seeking postharvest treatments that extend fruit shelf life while retaining quality. This presentation aims to explore the application of some mild and environmentally friendly techniques (ozone, pulsed light, and ultraviolet light, among others), applied alone or in a hurdle approach, for improving the shelf life and safety of fruits and fruit products. Examples of the application of some tools to hurdle technology design for berries and other fruits are also given, highlighting opportunities and future challenges.

Keywords: fruit, mild preservation technologies, design, microorganisms

1. Introduction

Current scientific evidence shows that a high intake of fruit and vegetables reduces the risk of cardiovascular, ophthalmological, and gastrointestinal diseases, neurodegenerative disorders, some types of cancer, chronic obstructive pulmonary disease, and hypertension, among others. Concomitantly, consumption of fresh produce has increased considerably worldwide in recent years. Moreover, the consumption of at least 400 g of fruit and vegetables (five servings per day) has been recommended by the World Health Organization.

Fruits have a very limited postharvest life due to their high metabolic rates and vulnerability to decay, reflected in rapid dehydration, loss of firmness, tissue degradation, susceptibility to mechanical injury, and color degradation. Fruits differ in composition and structure, which determine the kind of deterioration they undergo and how easily they can be attacked by microorganisms. The acidic pH of most fruits and the presence of carbohydrates mainly favor deterioration by molds, yeasts, and some acid-tolerant bacteria. Water is the major component of fruits, and fruit water activity (aw) is determined by the nature and concentration of the naturally occurring dissolved chemicals, such as sugars, organic acids, inorganic salts, and other soluble substances. As the concentration of solutes (nonionic or ionizable) naturally present in the aqueous phase of fresh fruits is relatively small, aw is close to unity. This high value facilitates the growth of microbial populations that gain access to these foods, as is evident from the natural occurrence of numerous deteriorative genera of bacteria, molds, and yeasts as well as occasional pathogenic bacteria such as Listeria monocytogenes, Salmonella, Escherichia coli O157:H7, Clostridium botulinum, and others. Fruits have been increasingly identified as vehicles for disease-causing microorganisms, as reflected in the many documented outbreaks associated with fresh fruits and fresh juices in recent years [1]. Fruit contamination can occur either preharvest (soil, feces, irrigation water, dust, insects, wild or domestic animals, reconstituted fungicides and insecticides, manure, and human handling) or postharvest (human handling, harvesting equipment, rinse water, dust, ice, transport vehicles, and processing equipment) [2, 3]. Moreover, pathogens have been shown to enter plant tissues through both natural apertures (stomata, flowers, and cracks in the cuticle) and damaged tissues (wounds and cut surfaces), or they can be entrapped in crevices [4]. Common unit operations such as peeling, cutting, and slicing may damage tissues, which releases nutrients and facilitates microbial growth. In particular, pathogens internalized postharvest via cut surfaces appear to persist for long periods, and the decontaminating agents used during minimal processing are unlikely to reach them. Most fresh fruits receive minimal processing and are often eaten raw without a pathogen “kill” step before consumption. Therefore, minimal processing is expected to result in an increased risk. In general, mild preservation treatments for obtaining fresh-like fruit products are less robust and need to be well controlled through adequate product and process design, as well as proper implementation and monitoring through the hazard analysis and critical control point (HACCP) system [5].
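
For reference (an added note, not from the original text), water activity is the ratio of the water vapor pressure of the food to that of pure water at the same temperature and, by Raoult's law, is approximately the mole fraction of water in an ideal dilute solution:

```latex
a_{w} = \frac{p}{p_{0}} \approx x_{\mathrm{water}} = \frac{n_{\mathrm{water}}}{n_{\mathrm{water}} + n_{\mathrm{solutes}}}
```

Because the solute content of fresh fruit is small relative to its water content, aw remains close to 1, as stated above.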

These issues lead to major economic losses, and the industry is constantly seeking postharvest treatments to extend fruit shelf life while retaining its quality.

Current crop protection methods rely on horticultural practices, good agricultural practices (GAP), and the application of conventional synthetic fungicides. However, these chemicals may not be the best solution because of the development of fungicide resistance, the risks to human and environmental health, the restrictions of governmental regulatory agencies, and the commercial requirements imposed by marketing chains for commodities with low levels of pesticide residues [6]. Consequently, a number of alternative technologies have arisen to replace historically proven synthetic fungicides. Research efforts have focused on the following groups of treatments: microbial biocontrol agents [7], natural antimicrobials [6], disinfecting agents [8], and physical means [9, 10], as well as their combinations [11].

There is a wide range of modern agents that cause physical or chemical inactivation of microorganisms at ambient or sublethal temperatures. Inactivation agents under research include pulsed electric fields (PEF), high hydrostatic pressure (HHP), ultrasound (US), pulsed light (PL), shortwave ultraviolet light (UV-C), ozone, and hydrogen peroxide. These nonthermal factors are being encouraged for fruit preservation because, without the need for severe heating, they cause minimal damage to the flavor, texture, and nutritional quality of some foods. Most of them are effective in inactivating the vegetative cells of most microorganisms, but spores are far more tolerant. Thus, their applications are analogous to thermal pasteurization. It is likely that combining nonthermal agents with each other or with traditional preservation factors in a multi-hurdle preservation approach will control spoilage and foodborne microorganisms while reducing treatment intensities, detrimental effects on product quality, and energy input [12, 13, 14]. Combined preservation systems including emerging nonthermal agents are gaining commercial use most quickly with fruit-derived products, probably due to the low pH that naturally exists in this type of food material, a hurdle that cooperates in the overall preservation strategy. On the other hand, acid adaptation of the contaminating flora could increase microbial resistance to these technologies, a fact that encourages their intelligent combination with other stressors or hurdles.

Preservation procedures are effective when they overcome, temporarily or permanently, the various homeostatic reactions that microorganisms have evolved in order to resist stresses, and the degree of change in environmental conditions will determine whether the microorganisms lose their viability, become injured, or express adaptive mechanisms that allow them to survive or even grow during stress [15]. When stress is sensed by the microorganism, signals that induce mechanisms to cope with the stressor are generated. These mechanisms involve modifications in gene expression and protein activities [15]. The homeostatic mechanisms that vegetative cells have evolved to survive extreme environmental stresses are energy-dependent and allow microorganisms to keep functioning. In contrast, homeostasis in spores is passive, acting to keep the central protoplast in a constant low-water environment, this being the prime reason for the extreme metabolic inertness or dormancy and resistance of these cells. In foods preserved by combined methods (“hurdle” technologies), the active homeostasis of vegetative microorganisms and the passive refractory homeostasis of spores are disturbed by a combination of sublethal antimicrobial factors or stressors acting at a number of sites (“targets”) or in a cooperative manner [16]. That is, low levels of different stresses are employed rather than a single intense stress, allowing less severe preservation procedures and higher quality.

Overall, multiple disturbance of microbial homeostasis has been used or suggested in fruit preservation in different arrangements: (a) using two or more stressors simultaneously to prevent growth of spoilage and pathogenic microorganisms and (b) using one or more stressors (simultaneously or in sequence) to inactivate, injure, or physically remove some microorganisms and then, in sequential mode, one or more stressors to prevent the survival or proliferation of the remaining refractory or sublethally damaged cells (the latter having greater sensitivity to adverse agents).

The targeted application of the hurdle concept has aimed to improve the quality and safety of fruit products at the farm level and in the whole and fresh-cut minimally processed fruit industry [17, 18, 19, 20, 21, 22, 23, 24, 25].

This presentation will discuss the application of some mild stressors (ozone, PL, and UV-C, among others), used in a hurdle approach, for improving the shelf life and safety of fruits and fruit products. The impact on microbiota, structure, and quality factors will be analyzed. Some tools for preservation technology design will also be highlighted, evidencing opportunities and future challenges.

2. Selected nonthermal preservation factors

Table 1 presents selected hurdles or stressors (already used industrially) along with their mode of action, their advantages and disadvantages, and the combined processes in which they have been applied to preserve fruits. The emerging nonthermal factors reported here are not broad-spectrum inactivation processes like thermal treatment but represent pasteurization techniques that minimize the disadvantages of severe thermal processing. Most of these factors do not act on a single specific cellular target but on several constituents, structures, molecules, and reactions, killing cells through multiple mechanisms.

Table 1.

Selected nonthermal stressors for obtaining fresh-like fruit products (from various sources).

2.1 Ozone, hydrogen peroxide, and other oxidants

Oxidative stress by reactive oxygen species (ozone, chlorine dioxide, hydrogen peroxide, electrolyzed water, and peroxyacetic acid) and reactive nitrogen species causes an imbalance between the intracellular oxidant concentration and the cellular antioxidant protection, leading to oxidative damage of membrane lipids, proteins, and DNA repair enzymes [15].

The application of ozone (in gaseous or aqueous form) as a potential sanitizer against plant and human pathogens in easily damaged soft fruits such as blueberries, strawberries, and raspberries has been widely investigated. Ozone was approved by the US Food and Drug Administration for the decontamination of raw commodities in 2001. It is one of the most potent disinfectant agents due to its powerful oxidizing action, being effective against a broad spectrum of microorganisms [26]. It is very unstable, especially in aqueous solution. Its degradation product is oxygen, leaving no undesirable by-products on the produce surface [26, 27]. Its effectiveness largely depends on its concentration, pH, temperature, and the presence of organic material.

2.2 UV-C

A maximum lethal effect of shortwave ultraviolet light (UV-C) has been reported in the range of 250–260 nm, inactivating bacteria, viruses, protozoa, fungi, and algae [28]. While UV-C radiation can be strongly absorbed by different cellular components, the most severe cell damage has been reported to occur when nucleic acids absorb UV-C light, which cross-links the pyrimidine bases cytosine and thymine, impairing the formation of hydrogen bonds with the purine bases on the complementary DNA strand [28]. Cell death occurs after the threshold of cross-linked DNA molecules is exceeded. The damage can be reverted by dark and/or enzymatic repair mechanisms, depending on the repair systems of each microorganism. However, flow cytometry analysis has demonstrated that targets other than DNA could account for UV-C inactivation. UV-C radiation also produces significant damage to cytoplasmic membrane integrity and cellular enzyme activity [29]. Exposure to low doses of UV-C light has also been shown to elicit a range of chemical responses in fresh produce, ranging from antifungal enzymes to phytoalexins [30]. This beneficial plant response, or hormesis, which inhibits fungal pathogens and delays ripening, occurs from hours to days after UV-C irradiation. Hormesis is quite distinct from surface disinfection, occurs throughout the entire fruit, and may even be considered additive to it [28]. Direct inactivation of surface-associated microorganisms by UV-C is limited solely to the surface of the fruit, as UV-C has extremely low penetration into solids, but inactivation of this kind can occur at the dose levels used to induce hormesis (0.5–9 kJ m−2 for optimal effects, according to the type of fruit) [31]. The two inactivation effects, direct and induced, are not easy to distinguish in the literature.
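
As a simple illustrative calculation (the irradiance and exposure time below are hypothetical, not from the chapter), the UV-C dose delivered to a surface is the product of irradiance and exposure time:

```latex
D = E \cdot t, \qquad \text{e.g.}\; E = 10\ \mathrm{W\,m^{-2}},\ t = 60\ \mathrm{s} \;\Rightarrow\; D = 600\ \mathrm{J\,m^{-2}} = 0.6\ \mathrm{kJ\,m^{-2}},
```

which falls at the low end of the 0.5–9 kJ m−2 hormetic dose range cited above.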

2.3 Pulsed light

PL involves the use of intense, short-duration (1 μs to 0.1 s) pulses of broad-spectrum light with wavelengths ranging from UV to near-infrared (200–1100 nm). In addition to UV-C-induced photochemical changes, photophysical and photothermal effects, caused respectively by the high peak power and by the visible and near-infrared portions of the pulsed light spectrum, seem to be involved [32].

2.4 Ultrasound

Injury or disruption of microorganisms by high-energy ultrasound (US) (i.e., intensities higher than 1 W/cm2 and frequencies between 18 and 100 kHz) is widely attributed to cavitation, that is, the rupture of liquids under high-intensity ultrasound and the effects produced by the motion of the cavities or bubbles thus generated. In so-called stable cavitation, the bubbles undergo relatively stable, low-energy oscillations that provoke flows or streams in the liquid in the vicinity of the bubble (microstreaming effect), which can shear and disrupt cellular membranes or break cells. In transient cavitation, small bubbles expand rapidly, often to many times their original size, and, on the positive-pressure half cycle, collapse violently, breaking up into many smaller bubbles and producing shock waves with very high energy density and short flashes of light that shear and break cell walls and membrane structures and also depolymerize large molecules. Recent transmission electron microscopy and flow cytometry studies of yeast and Gram-negative and Gram-positive bacteria have demonstrated that (a) microbial cells contain several targets for the disruptive action of ultrasound (at least the cell wall, the cytoplasmic membrane, the DNA, the internal cell structure, and the outer membrane); (b) the cytoplasmic membrane does not appear to be the primary target of ultrasound, at least for S. cerevisiae, E. coli, and Lactobacillus spp.; and (c) the primary target would depend on the specific microorganism (for instance, the outer membrane in E. coli) [33, 34].

3. Design of preservation techniques: points to be addressed

Challenges associated with research and commercial adoptions of these technologies are still numerous. Ten years ago, Heldman et al. [35] indicated different aspects to be taken into account:

  1. Understanding and appropriate monitoring of processes to ensure uniform application of the stressors on the product.

  2. Fundamental knowledge about inactivation of spores, vegetative microorganisms, and enzymes to improve process effectiveness.

  3. Fundamental knowledge about the changes in food structure and functionality to evaluate the impact of the process.

  4. Identifying effective combinations of stressors to achieve acceptable safety and shelf life.

However, a lack of systematic studies on the effect of stressor type and dose on the safety and quality of food products is still evident in the literature.

The key points for the design and commercialization of these technologies should include not only a deeper understanding of the mode of action of combined stressors and of the microbial response but also the availability and interpretation of systematic kinetic data on the behavior of microorganisms and quality attributes (with special relevance to dose-response relationships and the influence of critical process parameters) and the optimization of equipment.

The design of hurdle techniques to obtain high-quality and safe food products needs a multidisciplinary perspective (Figure 1) [19]. The Food Safety Technology and Food Quality Technology approach, connecting science with engineering components, will provide systematized knowledge from which a consistent design of the hurdle strategy is more likely to emerge. Moreover, the complexity of the phenomena and their practical importance to food safety and quality require the qualification and quantification of these responses. This integration of the appropriate disciplines, and of the new and exciting tools that they offer, will undoubtedly reduce not only pathogen risk and the incidence of spoilage microorganisms but also uncertainty.

Figure 1.

Food safety technology and food quality technology approach in the design of mild techniques for fruit preservation (adapted from [19]).

3.1 Microbial aspects

The booming “omic” technologies (genomics, transcriptomics, proteomics, and metabolomics) contribute to the understanding of cellular behavior through a simultaneous approach in which the whole set of cellular biomolecules is studied in a given experimental setup. Cellular responses at the molecular level can then be used to study the physiology of cellular reactions to environmental conditions, supporting the development of effective food preservation processes.

So-called predictive microbiology not only allows comparison of the impact of different environmental stress factors and levels on the reduction or growth inhibition of microbial populations but also allows microbial behavior to be understood in a systematic way [36, 37]. Model predictions of survival curves would help the fruit industry select the optimum combinations and doses of preservation agents to obtain the desired impact on microbial (pathogenic and spoilage) behavior with minimal effects on cost and quality [19]. The final selection of preservation factors and their levels may then be made, on sensory grounds, among several equivalent “safe” combinations of interactive effects determined by the models.
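
As an illustration of the kind of model used in predictive microbiology (a minimal sketch with hypothetical data; neither the code nor the numbers come from the chapter), a Weibull-type survival model, log10(N/N0) = −(t/δ)^p, can be fitted to inactivation data and then used to interpolate treatment times:

```python
# Minimal sketch (hypothetical data): fitting a Weibull-type survival model,
# log10(N/N0) = -(t/delta)^p, to inactivation data.
# delta is the time for the first decimal reduction; p describes curve shape
# (p < 1 gives tailing, p > 1 gives shoulders).
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Mafart parameterization of the Weibull survival model."""
    return -(t / delta) ** p

# Hypothetical treatment times (min) and measured log10 survival ratios
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
log_s = np.array([0.0, -0.4, -0.9, -1.6, -2.8, -3.6, -4.2])

(delta, p), _ = curve_fit(weibull_log_survival, t, log_s, p0=(1.0, 1.0))
print(f"delta = {delta:.2f} min, p = {p:.2f}")

# Treatment time predicted to reach a 5-log reduction under the fitted model
t_5log = delta * 5 ** (1.0 / p)
print(f"predicted 5-log reduction time: {t_5log:.1f} min")
```

Fitted parameters of this kind can then be compared across stressor combinations and doses when selecting equivalent “safe” treatments.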

Microorganisms may die, survive, adapt, or grow when mild preservation factors or stressors are applied. Sublethal damage and subsequent recovery present a big problem to the manufacturing industry and catering services in terms of safety and spoilage. Microbial populations are heterogeneous. Different cells may exhibit chemical differences (they can be in different reproductive phases or in different physiological states due to differences in nutrient availability and/or environmental conditions). Also, the sharing of genetic material results in the existence of genetically different individuals [38]. Using multiparameter flow cytometry (FC), it is now possible to characterize the physiology of individual microorganisms. By means of both scattering and fluorescence signal measurements, information on cell parameters (size, surface roughness, and granularity, as well as physiological state, such as metabolic activity, internal pH, or integrity of the cytoplasmic membrane) at the single-cell level, and on their distribution within the cell population, is provided with a relatively high degree of statistical resolution (≈5000–50,000 cells per minute), enabling assessment of population heterogeneity [39].

Evaluation of the response of microorganisms and of the changes in quality over a storage period similar to the required shelf life is essential, since the major changes in quality attributes caused by these techniques generally occur not immediately after processing but during storage. Regarding microorganisms, the different patterns of microbial growth in nondecontaminated and decontaminated minimally processed vegetables reported in the literature were identified by Gómez-López et al. [40], evidencing the difficulty of controlling the microbial loads of these products during storage at low temperatures:

  • No decontamination occurred, but the growth rate of microorganisms in treated samples was slower than that in untreated samples.

  • No decontamination occurred, but microorganisms in treated samples exhibited a longer lag phase than that in untreated samples.

  • Decontamination occurred, and growth rate of microorganisms in treated samples was slower (or counts decreased) than that in untreated samples.

  • Decontamination occurred, and the growth rate of microorganisms in treated samples was equal to that in untreated samples.

  • Decontamination occurred, and microorganisms in treated samples did not grow or exhibited a lag phase.

  • Decontamination occurred, and the growth rate of microorganisms in treated samples was faster than that in untreated samples.

4. Application of food safety and food quality approaches to fruit preservation

Different examples from the literature and from studies carried out in our research group illustrate the use of these concepts; the following will be discussed during the presentation:

  • Evaluation of the combination of ozone and refrigeration for increasing the postharvest shelf life of strawberries and blueberries [41, 42].

  • Evaluation of the combination of PL and refrigeration for increasing the postharvest shelf life of strawberries [43].

  • Evaluation of the combination of PL and refrigeration for preserving fresh-cut apples [20].

  • Mathematical modeling and flow cytometry studies of different microorganisms subjected to PL, US, and ozone [29, 33, 44].

5. Future trends

  • The major challenges and opportunities in the future state of mild preservation techniques will arrive with a more in-depth knowledge of microbial behavior at molecular and physiological levels, as well as of the impact on quality attributes.

  • In addition, the key points for the design and commercialization of these technologies include the availability and interpretation of systematic kinetic data on the behavior of microorganisms and quality attributes (with special relevance to dose-response relationships and the influence of critical process parameters) and the optimization of equipment.

  • The selection of the stressors and their levels is fruit-specific and depends on the required shelf life.

Appendix

See Figure 1 and Table 1.

Author details

Stella M. Alzamora1,2*, Paula L. Gómez1,2, Silvia Raffellini3, Eunice Contigiani1,2, Gabriela Jaramillo1,2, Angela Rocío Romero Bernal1,2 and Lucas González1,2

CONICET-Universidad de Buenos Aires, Instituto de Tecnología de Alimentos y Procesos Químicos (ITAPROQ), Ciudad Universitaria, Ciudad Autónoma de Buenos Aires, Argentina

Consejo Nacional de Investigaciones Científicas y Técnicas, Ciudad Autónoma de Buenos Aires, Argentina

Departamento de Tecnología, Universidad Nacional de Luján, Avenida Constitución y Ruta, Luján, Argentina

* Address all correspondence to: smalzamora@gmail.com

Water Quality in the Twenty-First Century: New Tools for the Characterization and Remediation of Emerging Chemical Contaminants

Damian E. Helbling, Corey M.G. Carpenter and Yuhan Ling

Abstract

There are hundreds of thousands of chemicals used around the world to meet the global demands for food, energy, and a higher standard of living. Decades of environmental monitoring studies have demonstrated that many of these chemicals accumulate in the aquatic environment. The incredible number of chemicals that may be present in any given water system poses challenges for water quality monitoring and associated engineered solutions. In the first part of this contribution, new techniques for water quality monitoring afforded by high-resolution mass spectrometry will be introduced. Case studies will be used to highlight the advantages and challenges associated with target screening and nontarget screening techniques. In the second part of this contribution, a new polymer will be introduced that outperforms many conventional adsorbents for the removal of organic chemicals from water at environmentally relevant concentrations. The polymer is derived sustainably from cornstarch, and characterization studies demonstrate that it exhibits rapid adsorption kinetics, excludes interactions with natural organic matter (NOM), and can be regenerated with a mild washing solution at ambient temperatures without a loss in performance. These features all suggest that the polymer may be a promising alternative adsorbent for the removal of trace organic chemicals during water and wastewater treatment.

Keywords: water quality, micropollutant, adsorption, cyclodextrin, pharmaceutical, pesticide

1. Introduction

Data from monitoring studies have routinely confirmed the occurrence of thousands of organic micropollutants in surface water resources around the world [1, 2, 3]. The main targets of these monitoring studies have been pharmaceuticals [4], personal care products [5], illicit drugs [6], pesticides [7], industrial chemicals [8], or other anthropogenic chemicals [9] that have known or putative toxic effects on aquatic ecosystems or exposed human populations [10, 11, 12, 13, 14, 15]. The potential sources of these chemicals are varied, with much attention focused on sewage treatment plant (STP) outfalls [5], combined sewer overflows [16], industrial discharges [17], stormwater outfalls [18], and diffuse runoff from agricultural and urban landscapes [2], though many other potential sources have not yet been fully explored.

Improved understanding of the sources of micropollutants present in surface water resources is essential for risk assessment and for developing mitigation strategies. Recently, long-term monitoring data characterizing micropollutant occurrence at the watershed scale have been used to identify the relative contributions of various sources within particular watersheds. Mass balance and multivariate analyses revealed three distinct sources of micropollutants in the Minnesota River: upstream diffuse runoff, mixed pathways, and sewage outfalls [19]. High-resolution temporal sampling was used to identify an industrial source that was emitting pollutants to a river in Germany at randomly spaced temporal intervals [17]. Long-term longitudinal sampling along the Rhine River was used to identify several previously unknown sources of micropollutants, particularly from tributaries and industrial sources [3]. In each of these examples, the identification of micropollutant sources was predicated on two major features of the studies. First, each study examined a diverse set of micropollutants; the selected micropollutants included chemicals that might be expected to originate from a variety of sources, including agriculture, wastewater treatment plants, and industrial discharges. Second, each study employed high-frequency sampling within the watersheds, investigating either a large number of samples from a single location or a large number of samples distributed spatially throughout the watershed. These data show that evaluating the spatial and temporal variability of micropollutant occurrence can lead to key insights into the sources of micropollutants.

In addition to micropollutant monitoring, there is a clear need for solutions to remove micropollutants during drinking water production. Adsorption processes are widely employed to remove organic chemicals from water and wastewater. Activated carbons (ACs) are the most widespread adsorbents used to remove micropollutants; their efficacy derives primarily from their high surface areas, nanostructured pores, and hydrophobicity [20]. However, AC adsorption is relatively slow [21], performs poorly for many polar and semipolar micropollutants [22], and can be fouled by natural organic matter (NOM) [23]. Further, activation and regeneration of AC are energy-intensive and gradually degrade its performance relative to new AC [24]. New adsorbents that address these deficiencies of ACs will lead to more efficient removal of micropollutants during water and wastewater treatment.

We have recently discovered a promising alternative adsorbent that removes organic molecules from water with unprecedented speed and high capacity and can be regenerated by washing with benign solvents at room temperature [25]. This material is the first mesoporous, high-surface-area polymer containing β-cyclodextrin (β-CD), a macrocycle composed of seven glucose units (Figure 1). β-CD's cuplike shape provides a distinct hydrophobic interior cavity, which forms host-guest complexes with thousands of organic molecules. This property suggests that CDs would make ideal adsorbents for water purification, though CDs must first be rendered insoluble by incorporating them into a polymer network. Several CD-containing polymers have been described previously, though none have had the porosity or surface area required to perform as well as AC as an adsorbent [26, 27, 28, 29]. Our new β-CD polymer combines the molecular recognition properties of CDs with the porosity and high surface area of ACs to yield an adsorbent with superior adsorption kinetics and an adsorption capacity on the order of that of AC. However, we have not yet tested our β-CD polymer against diverse groups of micropollutants or under environmentally relevant conditions.

Figure 1.

Schematic of the β-CD polymer. β-CD is a macrocycle of glucose (blue) and is cross-linked with tetrafluoroterephthalonitrile (red) to generate the first mesoporous, high-surface area β-CD polymer.

The objectives of this research were twofold. First, we aimed to assess the relative contributions of various sources to micropollutant occurrence in the Hudson River Estuary, a major freshwater system in New York. We collected grab samples at 17 sites along the Hudson River Estuary between the Mohawk River and the Tappan Zee Bridge. Samples were collected in May, July, and September of 2016. A map of the 17 sampling sites is provided in Figure 2 along with a delineation of the watersheds for each of the tributaries. The sites include three sewage treatment plant outfalls, five sites at the mouth of tributaries of the Hudson River, seven sites inside the tributaries of the Hudson River, and two control sites in the midchannel of the Hudson River at the northern and southern ends of the study boundaries. The samples were analyzed using a target screening analysis to quantify the occurrence of up to 200 micropollutants commonly identified in surface waters around the world. Second, we aimed to evaluate the performance of porous β-CD polymers (P-CDPs) as adsorbents of micropollutants in aquatic matrices. Adsorption kinetics and micropollutant removal were measured in batch and flow-through experiments for a mixture of 90 micropollutants at environmentally relevant concentrations (1 μg L−1) and in the presence and absence of natural organic matter (NOM). The performance was benchmarked against a coconut shell activated carbon (CCAC). Data reveal slower and nonselective uptake on CCAC and faster and selective uptake on P-CDP. The presence of NOM had a negative effect on the adsorption of micropollutants to CCAC but had almost no effect on adsorption of micropollutants to P-CDP. These data highlight advantages of P-CDP adsorbents relevant to micropollutant removal during water and wastewater treatment.

Figure 2.

Map of the Hudson River Estuary and select tributary watersheds. The sampling locations are represented with circles and site ID.

2. Methodology

2.1 Micropollutant monitoring

All spatial analyses and mapping were conducted with ArcMap 10.4. All of the data used are freely available online, including the digital elevation models used to delineate the Hudson River Estuary catchment area and tributary watersheds, land cover data, industrial discharge sites including wastewater treatment plants, hospitals, and population data. Grab samples were collected in 1 L amber, trace-clean glass bottles. The samples were shipped in a cooler to our laboratory at Cornell University at the end of each sampling campaign. Samples were stored at 4°C in the dark until sample preparation, within 24 h of arrival in our laboratory. We used a mixed-bed solid-phase extraction (SPE) method to concentrate the 1 L samples as previously described [7, 30]. The high-performance liquid chromatography and tandem mass spectrometry (HPLC-MS/MS) method was previously developed and validated for a broad range of micropollutants [30, 31]. A target screening approach was used to quantify the concentrations of 200 micropollutants in each of the samples. Detection limits are generally in the low ng L−1 range for the micropollutants on this list. Statistical analyses were conducted using R Statistical Software, and an alpha level of 0.01 was used to determine significance. The hclust function was used to cluster micropollutants by Ward's method based on the occurrence profiles of all detected micropollutants at each sample site during each sampling event. Paired Wilcoxon rank-sum tests were used to assess differences between micropollutant concentration profiles across sample sites.
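
The statistical workflow can be sketched as follows (the authors used R; this is an illustrative Python reimplementation with hypothetical data, and all array sizes and values below are assumptions, not the study's results):

```python
# Illustrative sketch (not the authors' R scripts): Ward clustering of
# micropollutant occurrence profiles and a paired comparison between two sites.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical matrix: rows = micropollutants, columns = site/sampling events
occurrence = rng.random((160, 50))

# Hierarchical clustering of micropollutants (Ward's method, Euclidean distance)
Z = linkage(occurrence, method="ward")
clusters = fcluster(Z, t=5, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters)[1:])

# Paired comparison of concentration profiles at two hypothetical sites
# (SciPy's wilcoxon is the signed-rank test applied to paired samples)
site_a = rng.lognormal(mean=2.0, sigma=1.0, size=160)           # ng/L
site_b = site_a * rng.lognormal(mean=0.3, sigma=0.5, size=160)  # ng/L
stat, p_value = wilcoxon(site_a, site_b)
print(f"Wilcoxon p-value: {p_value:.3g} (significance threshold alpha = 0.01)")
```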

2.2 Micropollutant adsorption

P-CDP was synthesized as previously described [25], and the CCAC is commercially available (AquaCarb 1230C, Westates Carbon, Siemens, Roseville, MN). To increase the similarity in particle size between the P-CDP and CCAC, the CCAC was pulverized with a mortar and pestle until >95% (mass) passed a 74 μm sieve (200 US mesh). The P-CDP and the pulverized CCAC were dried under a vacuum in a desiccator for 1 week and stored in a refrigerator at 4°C. We selected 90 micropollutants based on their environmental relevance and previous reports of their adsorption onto AC. Stock solutions of each compound were prepared at a concentration of 1 g L−1 using 100% HPLC-grade methanol. The stock solutions were used to prepare an analytical mix containing all 90 micropollutants at a concentration of 10 mg L−1 using nanopure water.

2.3 Batch experiments

Batch experiments were performed in 125 mL glass Erlenmeyer flasks with magnetic stir bars on a multi-position stirrer (VWR) with a stirring rate of 400 revolutions per minute (rpm) at 23°C. Batch experiments were performed at an adsorbent dose of 10 mg L−1. The micropollutants were spiked to generate an initial concentration of each adsorbate of 1 μg L−1. Samples were collected in 8 mL volumes at predetermined sampling times (0, 0.05, 0.17, 0.5, 1, 5, 10, 30, 60, 90, 120 min) and filtered through a 0.22 μm PVDF syringe filter (Restek). Control experiments to account for other micropollutant losses were performed under the same conditions with no addition of adsorbent. All samples were analyzed by means of HPLC-MS/MS to determine the aqueous phase concentration of each micropollutant as a function of contact time with the adsorbent.
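
To make the data reduction concrete (a minimal sketch with hypothetical concentrations, not measurements from the study), percent removal and the adsorbed amount per unit adsorbent can be computed from the measured aqueous concentrations, corrected against the adsorbent-free control:

```python
# Minimal sketch (hypothetical values): percent removal over time in a batch
# experiment, corrected against the adsorbent-free control, plus the adsorbed
# amount q_t per mg of adsorbent at a dose of 10 mg/L.
import numpy as np

times_min = np.array([0, 0.05, 0.17, 0.5, 1, 5, 10, 30, 60, 90, 120])
c0 = 1.0  # initial adsorbate concentration, ug/L

# Hypothetical measured aqueous concentrations (ug/L)
c_adsorbent = np.array([1.00, 0.72, 0.55, 0.38, 0.27, 0.12, 0.08, 0.05, 0.04, 0.04, 0.04])
c_control   = np.array([1.00, 0.99, 0.99, 0.98, 0.98, 0.97, 0.97, 0.96, 0.95, 0.95, 0.94])

# Removal relative to the control at each sampling time
removal_pct = 100.0 * (1.0 - c_adsorbent / c_control)

# Adsorbed amount q_t (ug of adsorbate per mg of adsorbent)
dose_mg_per_L = 10.0
q_t = (c0 - c_adsorbent) / dose_mg_per_L

for t, r, q in zip(times_min, removal_pct, q_t):
    print(f"t = {t:6.2f} min  removal = {r:5.1f} %  q_t = {q:.3f} ug/mg")
```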

2.4 Flow-through experiments

Flow-through experiments were performed with a 10 mL Luer Lock glass syringe and Restek 0.22 μm PVDF syringe filters at 23°C with a constant flow rate of 25 mL min−1. Flow-through experiments were performed with either nanopure water or nanopure water amended with humic acid (HA) as a surrogate for NOM and NaCl as a surrogate for inorganic matrix constituents. Syringe filters were loaded with adsorbent by passing 1 mL of the adsorbent suspension through the syringe filter to form a thin layer of 1 mg of adsorbent on the filter surface. Following the loading of the filters with adsorbent, 8 mL of the analytical mix (1 μg L−1) was pushed through the adsorbent-loaded filter with constant pressure over 20 s. Control experiments were performed in the same way with no adsorbent on the filter to account for losses through the filter. The filtrates were analyzed by means of HPLC-MS/MS to determine the aqueous phase concentration of each micropollutant.

3. Results

3.1 Micropollutant monitoring

To complement the micropollutant data analysis and to enable a more comprehensive study of micropollutant sources, we first collected geospatial data for the Hudson River Estuary catchment area. We used ArcGIS and publicly available data to develop maps of the Hudson River Estuary catchment area that include geospatial references for land cover, industrial discharge locations, sewage outfalls, and hospitals. We expected that the occurrence and concentrations of certain types of micropollutants would be associated with the geographic distances to these types of catchment features. For example, a recent geospatial analysis of poly- and perfluoroalkyl substances (PFASs) revealed that PFASs were found at higher concentrations in more urban areas and that different types of PFASs were associated with different point sources such as airports, textile mills, and metal smelting [1]. Another recent study used spatial analysis techniques to predict mass flows and concentrations of pharmaceuticals in surface water samples using hospital locations, departments, and the number of beds [32]. These examples demonstrate powerful ways in which geospatial data can be combined with micropollutant occurrence data to improve our fundamental understanding of micropollutant sources.

We collected grab samples in May, July, and September 2016 from the 17 locations along the Hudson River Estuary. The sample collected in May 2016 from the Rondout Creek-Kingston STP Outfall was lost during sample shipment; therefore, a total of 50 samples were processed and analyzed in our laboratory. Our target list comprised 200 micropollutants, of which 134 were wastewater-derived compounds (pharmaceuticals, industrial compounds, personal care products, hormones, food additives, and illicit drugs) and 66 were pesticides (including herbicides, insecticides, and fungicides). From our target list, 160 of the micropollutants were detected in at least 1 of the 50 samples; 111 were wastewater-derived compounds and 49 were pesticides. Figure 3 presents the distribution of detected micropollutants by use class. Twelve of the 200 micropollutants were detected in all 50 samples, with an additional 25 being detected in at least 40 samples. The micropollutants detected in all samples included acesulfame (artificial sweetener), atenolol acid (metabolite of atenolol and metoprolol), atrazine (herbicide), benzotriazole-methyl-1H (corrosion inhibitor), carbamazepine (antiepileptic), DEET (insect repellent), gabapentin (antiepileptic), lamotrigine (anticonvulsant), metolachlor (herbicide), sucralose (artificial sweetener), desvenlafaxine (metabolite of the antidepressant venlafaxine), and valsartan (angiotensin II antagonist). The highest occurrence and concentrations of all micropollutants were detected in the STP outfall samples. Sucralose, atenolol acid, and metformin (antidiabetic) were detected at the highest concentrations, in the mid-mg L−1 range. It must be noted that data derived from grab samples do not necessarily reflect the expected dynamics of micropollutant occurrence or concentration in surface water systems [33]. However, longer time series of grab samples can be analyzed to provide more robust estimates of the likelihood of occurrence and average concentrations of specific micropollutants at a particular sample site. The majority of micropollutants that were detected were measured in the 1–100 ng L−1 range.

Figure 3.

Distribution by use class of the 160 micropollutants detected in at least one of the 50 samples; 111 were wastewater-derived compounds and 49 were pesticides.

We next aimed to investigate how the micropollutant occurrence profiles (defined as the occurrence of individual micropollutants in a sample) compared among the 17 sample sites. To do this, we used Ward’s method to cluster each micropollutant based on its spatiotemporal occurrence pattern. The resulting dendrogram is presented in Figure 4 and reveals four distinct micropollutant clusters that describe the relationship in spatiotemporal occurrence among the micropollutants. Cluster 1 contains 56 micropollutants that are present in most samples, regardless of sample type (tributaries, control sites, and STP outfalls) or sample date. Cluster 2 contains 28 micropollutants that were detected most frequently in the tributary and control sites but rarely in the STP outfalls. Cluster 3 contains 33 micropollutants that were detected more often in the STP outfalls than the tributary or control sites. Cluster 4 contains 43 micropollutants that were detected mostly in STP outfalls and in 2 separate tributary samples collected on different dates.

Figure 4.

Dendrogram of micropollutants (n = 160) clustered by spatiotemporal occurrence patterns in all samples.

We next aimed to identify whether STP outfalls or specific tributaries were important sources of different spatiotemporal occurrence clusters to the Hudson River. To determine the relative contributions of each cluster of micropollutants to the Hudson River, we compared the micropollutant concentrations from samples taken inside the tributaries to the micropollutant concentrations measured in the midchannel control samples. Then, we examined our geospatial datasets for associations between micropollutant concentrations and different types of catchment features. It is important to note here that concentration data obtained from grab samples may not accurately assess the influence of specific tributaries on the Hudson River due to variability in stream flow rates. In that respect, micropollutant loads are a more representative metric of overall contribution to the Hudson River. Nevertheless, concentration data enabled us to preliminarily identify specific tributaries that are likely important sources of micropollutants to the Hudson River.

Rondout Creek and Normans Kill were identified as the major contributors of wastewater-derived micropollutants to the Hudson River Estuary. Rondout Creek was also identified as a major contributor of agricultural micropollutants. Our analysis confirms that the Hudson River Estuary is more impacted by micropollutants as it flows south toward New York City. Additionally, our geospatial analysis revealed several associations between the spatiotemporal occurrence clusters and certain geographic catchment features including the extent of total agricultural land cover, extent of cultivated cropland land cover, number of the major STP outfalls, and hydraulic distances to the major STP outfalls. It is important to note that while this sampling campaign had high spatial resolution and a large number of targeted micropollutants, it has low temporal resolution with only three separate grab sampling events during the 2016 recreational season. Large-scale sampling campaigns such as these can benefit from higher temporal resolution to gain a more representative understanding of micropollutant concentrations and increase the power of the statistical tests.

3.2 Micropollutant adsorption

Data from the batch experiments were first evaluated to estimate pseudo-second-order adsorption rate constants (k_obs) for each micropollutant and each adsorbent. The estimated values of k_obs for each micropollutant on each adsorbent are summarized in Figure 5. Generally, if a k_obs could be estimated from the data for a particular micropollutant, its value was significantly greater on P-CDP than on CCAC. These data corroborate our earlier observations of nearly instantaneous equilibrium adsorption of several model organic molecules on P-CDP [25]. The rapid micropollutant uptake by P-CDP is attributed to the accessibility of the β-CD binding sites in the polymer due to its porosity and high surface area.

Figure 5.

Comparison of pseudo-second-order rate constants (k_obs) for the adsorption of each micropollutant by P-CDP and CCAC.
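A pseudo-second-order fit of this kind can be sketched in a few lines. The example below uses hypothetical uptake data and the integrated form of the pseudo-second-order model, q(t) = k q_e^2 t / (1 + k q_e t); the exact fitting procedure used for Figure 5 may differ.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    # Integrated pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t).
    return k * qe**2 * t / (1 + k * qe * t)

# Hypothetical uptake data: adsorbed amount q (ug/g) versus contact time (min).
t = np.array([0, 0.05, 0.17, 0.5, 1, 5, 10, 30, 60, 90, 120], dtype=float)
q = np.array([0, 5, 10, 20, 30, 55, 65, 75, 78, 79, 80], dtype=float)

# Nonlinear least-squares estimates of the equilibrium capacity qe and rate constant k_obs.
(qe_fit, k_obs), _ = curve_fit(pseudo_second_order, t, q, p0=[q.max(), 0.01])
print(f"qe = {qe_fit:.1f} ug/g, k_obs = {k_obs:.3g} g/(ug min)")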

The estimated values of kobs describe the rate at which adsorption equilibrium is attained but do not provide any insight on the affinity of each micropollutant for each adsorbent. To enable a more robust interpretation of micropollutant affinity for each adsorbent, we measured the percent removal of each micropollutant after 5 min for CCAC and P-CDP and summarize the data in Figure 6 . Despite the faster adsorption kinetics exhibited by P-CDP relative to CCAC, the extent of micropollutant uptake at 5 min is more evenly split between the two adsorbents, which we attribute to differences in the affinities of each micropollutant for each adsorbent under the experimental conditions. Whereas most micropollutants have moderate affinity for CCAC, the distribution of micropollutant affinity for P-CDP is more variable, with some micropollutants having very strong affinity (more than 80% removal) and others having relatively weak affinity (removal less than 20%). Overall, these data from the batch experiments demonstrate that micropollutant uptake on P-CDP is generally rapid but selective, with some micropollutants attaining complete uptake in 5 min and others being removed to only minor extents. In contrast, CCAC exhibits relatively slow uptake kinetics, though uptake is rather nonselective with increasing extents of uptake of most micropollutants over time.

Figure 6.

Comparison of the percent removal of each micropollutant after 5 min contact time with either P-CDP or CCAC.

We also characterized the performance of P-CDP as an adsorbent by evaluating the instantaneous removal of micropollutants in flow-through experiments designed to simulate filtration-type adsorption processes. The same mixture of 90 micropollutants was pushed through thin layers of each adsorbent immobilized on a nonadsorbent membrane at a constant flow rate. Experiments were conducted with an adsorbent dose of 1 mg. Differences in measured concentrations before and after filtration were used to calculate the removal of each micropollutant in each experiment. The removal percentages of each micropollutant on each adsorbent are summarized in Figure 7 .

Figure 7.

Comparison of the removal percentages by P-CDP and CCAC measured for each micropollutant from flow-through experiments.

A total of 47 micropollutants were removed to greater extents by P-CDP in flow-through experiments designed to simulate filtration-type adsorption processes. This provides another example of how the rapid adsorption kinetics exhibited by P-CDP can lead to significant removal of micropollutants even in processes providing limited contact time. These data again suggest a selectivity to micropollutant uptake on P-CDP. Finally, despite this selectivity, it is important to emphasize that the micropollutants that are efficiently removed by P-CDP are nearly completely removed in the flow-through experiments; 24 of the micropollutants exhibit greater than 95% removal in these experiments, whereas only 1 micropollutant is removed to that extent by CCAC.

One of the main deficiencies of CCAC as an adsorbent is its tendency to be fouled by NOM and other matrix constituents [23]. Therefore, we evaluated the performance of both adsorbents in the presence of NOM and inorganic ions. We repeated the flow-through experiments in the presence of 20 mg L−1 of humic acid (HA, as a surrogate for NOM) and 200 mg L−1 of NaCl to simulate the conditions in a typical surface water system. The removal percentages of each micropollutant are summarized in Figure 8 . As expected, the addition of matrix constituents had a significant negative influence on the adsorption of micropollutants to the CCAC, likely as the result of a direct site competition or pore blockage mechanism [34, 35]. In contrast, no significant negative effect was observed for P-CDP. This was not necessarily unexpected; the binding sites of β-CD are contained inside its 0.78-nm-diameter interior cavity [26]. Host-guest complex formation requires that organic molecules fit inside the interior cavity of β-CD and large organic molecules do not bind well with β-CD, presumably due to a size exclusion mechanism. This result is particularly exciting because it suggests that P-CDP might not be fouled by NOM in natural waters, instead reserving its binding sites for smaller organic molecules. Remarkably, 75 micropollutants were removed to greater extents by P-CDP in the presence of matrix constituents. Of those, 44 were removed to greater than 80% on P-CDP, whereas only 13 micropollutants were removed to greater than 80% on CCAC in the presence of NOM and matrix constituents.

Figure 8.

Comparison of the removal percentages by P-CDP and CCAC measured for each micropollutant from flow-through experiments conducted with added matrix constituents (20 mg L−1 NOM and 200 mg L−1 NaCl).

4. Conclusions

The first aim of this research was to improve our understanding of the sources of micropollutants in the Hudson River Estuary. We collected samples from 17 locations along the Hudson River Estuary during May, July, and September 2016. The sample locations were selected to target sewage treatment plant (STP) outfalls and tributaries that are expected to be the major sources of micropollutants in the Hudson River. The samples were analyzed to quantify the occurrence of 200 wastewater-derived micropollutants and pesticides. The data were analyzed to identify the relative contributions of various micropollutant sources and the specific outfalls or tributaries that are significant sources of micropollutants in the Hudson River Estuary; this analysis revealed four distinct clusters of micropollutants grouped by their occurrence profiles. Rondout Creek and Normans Kill were both identified as the major contributors of wastewater-derived micropollutants to the Hudson River Estuary. Rondout Creek was also identified as a major contributor of agricultural micropollutants. Our geospatial analysis revealed several associations between the spatiotemporal occurrence clusters and certain geographic catchment features, including the extent of total agricultural land cover, the extent of cultivated cropland land cover, the number of major STP outfalls, and the hydraulic distances to the major STP outfalls. These data can be used to develop targeted micropollutant mitigation strategies in the Hudson River Estuary. An expanded survey of micropollutants in the Hudson River Estuary that contains the data presented here has been published in the peer-reviewed literature [36].

The second aim of this research was to study cost-effective and energy-efficient technologies to enhance the removal of micropollutants in water and wastewater treatment systems. Despite their expense, AC adsorption processes have emerged as a leading alternative, though they are limited by relatively slow adsorption kinetics and a tendency to become fouled by NOM and other matrix constituents. Our results suggest that β-cyclodextrin polymer adsorbents address these specific deficiencies and therefore might be developed into a viable alternative or complementary adsorbent for water and wastewater treatment. Further, β-cyclodextrin polymer adsorbents are prepared in a single step from commercially available monomers, including the commodity chemical β-CD. Because they are synthesized through a rational process, many related compositions of β-cyclodextrin polymer adsorbents can be designed to target improved performance or different selectivity. These factors make it possible that β-cyclodextrin polymer adsorbents could be produced at large scales and deployed with life cycle costs competitive with those of the ACs used in water and wastewater treatment. Together, these features suggest that β-cyclodextrin polymer adsorbents may be a promising alternative adsorbent for the removal of micropollutants during water and wastewater treatment. An expanded study of micropollutant adsorption on cyclodextrin polymers has been published in the peer-reviewed literature [37].

Author details

Damian E. Helbling*, Corey M.G. Carpenter and Yuhan Ling

School of Civil and Environmental Engineering, Cornell University, Ithaca, New York, USA

*Address all correspondence to: deh262@cornell.edu

A Gentle Introduction to Cryo-EM Single-Particle Reconstruction Algorithms

Hemant D. Tagare

Abstract

This chapter provides a simplified review of cryo-EM single particle reconstruction algorithms at a level that engineering and computer science students might find accessible.

Keywords: cryo-EM, single particle reconstruction

1. Introduction

In cryogenic electron microscopy (cryo-EM), single-particle reconstruction is a method for reconstructing the three-dimensional (3D) structures of biological macromolecules. This paper provides an overview of the algorithms used for these reconstructions (to avoid cumbersome terminology, I will use the term “cryo-EM” as a simplified form of “cryo-EM single-particle reconstruction”).

Most histories of cryo-EM trace its origin to the late 1960s and early 1970s [1, 2]. Circa 2012, a minirevolution occurred in cryo-EM with the introduction of direct detector cameras; these cameras pushed cryo-EM reconstructions to near-atomic resolutions. In 2017, the Nobel Prize in Chemistry was awarded to Jacques Dubochet, Joachim Frank, and Richard Henderson for “The Development of Cryo-electron Microscopy.” The essay accompanying the announcement of the prize explains clearly that image processing algorithms were critical to the development of cryo-EM [3].

1.1 Organization of the paper

This paper is quite informal; I have stayed away from detailed mathematical calculations and proofs. My goal is to describe the reconstruction problem at a level that students in engineering and computer science might find accessible. I have also simplified the problem somewhat, hoping to retain the essential core while eliminating distracting details. I do provide pointers to the literature where the curious reader can glean additional information. For more general background information, the reader may consult [4].

I begin in Section 2 by briefly describing the imaging process in cryo-EM. Following this, Section 3 contains the signal model used in reconstruction. Section 4 describes the best-alignment class of reconstruction algorithms. Section 5 describes algorithms that are based on the expectation-maximization (EM) approach. Section 6 contains a discussion of postprocessing. Finally, Section 7 concludes the paper.

Several cryo-EM reconstruction packages are freely available (e.g., SPIDER [5], EMAN [6], FREALIGN [7], RELION [8, 9], and cryoSPARC [10]), and where relevant, I will point out which algorithms are used in these packages.

2. Cryo-EM

2.1 Macromolecules and their structure

A central fact of molecular biology is that large polymeric molecules (proteins, RNA, and DNA) and their complexes are vital to the functioning of a cell [11]. These macromolecules not only make biochemical reactions possible in the cell, but they also maintain cell structure, cause cell motion, and sense and respond to signals in the environment. Macromolecules are able to do all this because of their three-dimensional structure. Reconstructing the 3D structure and explaining the function of the molecule using the 3D structure is one of the goals of cryo-EM. Biological macromolecules and their assemblies are generically referred to as particles in cryo-EM. Most of the particles that we are interested in are proteins or protein complexes.

In very simple terms, the cryo-EM method is the following: first, several biochemical steps are carried out to isolate and purify many copies of the macromolecule from real cells. This is sample preparation. Then, the copies of the particle are frozen in a layer of vitreous (noncrystallized) ice, and a single image of the preparation is created using a transmission electron microscope ( Figure 1 ). This image, called a micrograph, contains tomographic projection images of the particle at random orientations; the orientation is determined by how the particle is frozen in ice. After a micrograph is obtained, each tomographic projection of the particle in the micrograph is isolated by a bounding box. This is called particle picking and is usually a semiautomatic process. The content of each bounding box is an image. The single-particle reconstruction (SPR) problem is to estimate the 3D structure of the particle using images obtained from one or more micrographs.

Figure 1.

A simplified schematic of a cryo-EM experiment. Particles are embedded in vitreous ice and exposed to the electron beam.

The above description of cryo-EM image formation as a tomographic projection is highly simplified; a more realistic description takes into account the details of how the electron beam interacts with the ice-embedded particle. Three effects of this interaction are important:

  1. The wave nature of the electron causes the image produced by the microscope to be a tomographic projection of the particle followed by convolution with a filter. The spectral response of the filter is called the contrast transfer function (CTF). The CTF depends on the microscope defocus. Figure 2 shows a CTF in the Fourier domain. The CTF is real-valued and circularly symmetric and takes positive and negative values. The CTF has zeros in the Fourier domain, and information about the particle is lost at these spatial frequencies. However, the CTF and its zeros can be changed by changing the microscope defocus. To take advantage of this, micrographs of the same particle are obtained at different defocus values. If the CTF zeros at these defoci do not coincide, then information is available at every frequency. (A simple functional form of the CTF is sketched after Figure 2.)

  2. The contrast in the image depends on the ice thickness. The thicker the ice, the lower the contrast.

  3. The third effect is more subtle but important for high-resolution reconstruction. When the beam is turned on in the microscope, there is a drift of the ice and the ice-embedded particles. If the micrograph is formed as an average image over this exposure time, then high-frequency spatial information about the particles is lost due to the drift. Direct detector cameras are designed to overcome this problem. They do not average over the exposure time; instead, they create a multi-frame movie. Judiciously discarding some movie frames and then aligning the remaining frames, followed by averaging, compensates for the drift and creates a micrograph that retains high-frequency information.

Figure 2.

Contrast transfer function.
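For concreteness, the short sketch below (in Python) generates a one-dimensional CTF curve of the kind plotted in Figure 2. It assumes the standard weak-phase-approximation form, CTF(k) = −sin(π λ Δz k^2 − (π/2) C_s λ^3 k^4), with purely illustrative parameter values; real packages add further terms such as amplitude contrast and an envelope function.

import numpy as np

def ctf_1d(k, defocus_A=15000.0, cs_A=2.7e7, wavelength_A=0.0197):
    # Weak-phase-approximation CTF as a function of spatial frequency k (1/Angstrom).
    # Illustrative values: ~300 kV electrons (lambda ~ 0.02 A), 1.5 um underfocus, Cs = 2.7 mm.
    chi = (np.pi * wavelength_A * defocus_A * k**2
           - 0.5 * np.pi * cs_A * wavelength_A**3 * k**4)
    return -np.sin(chi)

k = np.linspace(0.0, 0.3, 1000)  # spatial frequencies up to ~3.3 Angstrom resolution
ctf = ctf_1d(k)                  # oscillates between -1 and 1 and crosses zero repeatedly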

A final fact to consider is that cryo-EM images are noisy. Noise is introduced into the image by the camera and potentially also by the beam. A simple model for noise—one that is used in most reconstruction algorithms—is the Gaussian.

The effect of noise is compounded by the fact that long exposures to the electron beam damage the particles (they are bombarded by high-energy electrons), thereby altering their structure. To limit this damage, exposures are typically short, which in turn limits the amount of “signal” in the images.

The result is that the signal-to-noise ratio (SNR) in cryo-EM images is quite low. In engineering terms, it can be less than −10 dB.

3. The signal model

Assume that the particle has a standard orientation and that a rigid orthogonal reference frame is attached to the particle ( Figure 3 ). This frame is the particle reference frame or the attached frame.

Figure 3.

Tomographic projection in a particle reference frame and its embedding in a micrograph.

The term “structure of a particle” refers to the 3D electrostatic potential experienced by the electron as it passes near the particle. The structure can be described by a real-valued function S defined in the 3D space of the attached frame. Embedding the particle in ice at a random orientation and exposing it to a vertical electron beam are equivalent to taking the particle in its standard orientation and exposing it to the beam from a random direction in the attached frame. Let the beam direction be given by a unit vector n in the attached frame ( Figure 3 ). The tomographic projection of S along n is the image f defined on a 2D plane perpendicular to n (see Figure 3 ). The value of f at a point u in the plane is the line integral of S along the beam direction, f(u) = ∫ S(u + ρn) dρ.

We can write this relation concisely using operator notation by saying that the tomographic projection operator P_n gives the projection f of S via f = P_n S. Continuing with the operator notation, the effect of the CTF on the projection is the action of the CTF operator C and is given by C P_n S. This action is, of course, the filtering of P_n S with the CTF. The filtered image appears embedded in a micrograph with an additional 2D rotation and translation (Figure 3 ). Let R_{θ,t} be an operator that rotates a planar image by the angle θ and translates it by the vector t. Then, the image as observed in the micrograph is I = R_{θ,t} C P_n S + η, where η is the noise. The signal flow diagram corresponding to this equation is shown in Figure 4 .

Figure 4.

Signal flow for cryo-EM.

At this point, it is useful to introduce the adjoints of the projection operator P_n and the CTF operator C. The adjoint of the projection operator is the back-projection operator, denoted by P_n^T. The back-projection operator takes an image f and produces a 3D function S as follows: S(x) = f(Π_n(x)), where x is a point in three-dimensional space and Π_n(x) is the orthogonal projection of x onto the image plane (which is perpendicular to n). Note that P_n^T is the adjoint (loosely speaking, the “transpose”) of P_n, and not its inverse. The CTF operator C is self-adjoint, so its adjoint is itself.
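To make the operator notation concrete, the following toy sketch (in Python) implements a 2D analogue: the “structure” is a 2D array, projection sums along one axis to give a 1D “image,” and back-projection smears the 1D image back over the 2D grid. The numerical check confirms that back-projection behaves as the adjoint of projection, not as its inverse. This is purely illustrative and is not taken from any cryo-EM package.

import numpy as np

def project(S):
    # 2D analogue of P_n: integrate (sum) the structure along the beam axis.
    return S.sum(axis=0)

def backproject(f, n_rows):
    # 2D analogue of P_n^T: smear the 1D image back along the beam axis.
    return np.tile(f, (n_rows, 1))

rng = np.random.default_rng(0)
S = rng.standard_normal((32, 32))   # toy "structure"
f = rng.standard_normal(32)         # toy "image"

# Adjoint check: <P S, f> equals <S, P^T f>.
lhs = np.dot(project(S), f)
rhs = np.sum(S * backproject(f, 32))
assert np.isclose(lhs, rhs)

# P^T is not an inverse: back-projecting a projection does not recover S.
print(np.allclose(backproject(project(S), 32), S))   # prints False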

Every image picked from a micrograph has its own projection direction n, CTF, and in-plane rotation and translation. Suppose N images I_i, i = 1, …, N, are picked; then

I_i = R_{θ_i,t_i} C_i P_{n_i} S + η_i,    (1)

where n_i is the projection direction of I_i, C_i is the CTF operator of image I_i, θ_i and t_i are the in-plane rotation and translation of image I_i, and η_i is the noise in image I_i.

Of all the terms that appear on the right-hand side of Eq. (1), only the CTF C_i is known (because the defocus at which the micrograph was obtained is known). The challenge in cryo-EM reconstruction is to estimate S given that n_i, θ_i, t_i, and the noise are also unknown. Note that the set of all possible values of n_i is identical to the set of points on the surface of the unit sphere in 3D. The set of all possible values of θ_i is the interval [0, 2π). And the set of all possible values of t_i is identical to the set of points in some square in the plane (it is not the entire plane because the particle is usually located somewhere close to the center of the image).

Although Eq. (1) is commonly used, there are two variations of the equation that are worth noting. The first simply applies the in-plane rotations and translations to the image rather than to the projected structure:

R_{θ_i,t_i} I_i = C_i P_{n_i} S + η_i.    (2)

The second introduces a positive scalar ρ_i, which models contrast change due to the variable ice thickness:

I_i = ρ_i R_{θ_i,t_i} C_i P_{n_i} S + η_i,    or    R_{θ_i,t_i} I_i = ρ_i C_i P_{n_i} S + η_i.    (3)

In this version, the scalar ρ_i is also unknown.

The unknown variables in the cryo-EM reconstruction problem are the structure S, the set of projection directions N = {n_1, …, n_N}, the set of 2D rotations and translations T = {(θ_1, t_1), …, (θ_N, t_N)}, and the set of scalars ρ = {ρ_1, …, ρ_N} (if using the model of Eq. (3)). Of these variables, we are only interested in S; the rest are nuisance variables.

Cryo-EM reconstruction algorithms can be classified according to how they treat the nuisance variables. On the one hand, there are algorithms that simultaneously estimate the structure as well as the nuisance variables. I will call these algorithms (for reasons that will become clear below) best-alignment algorithms. On the other hand, there are algorithms that only determine the structure. These are based on the expectation-maximization algorithm.

4. Best-alignment algorithms

Taking Eq. (2) as the signal model and further assuming that the noise is Gaussian and white, the likelihood and the log-likelihood of the set of observed images {I_i} given the structure and the nuisance variables are:

Likelihood:

L({I_i} | S, N, T) = ∏_{i=1}^{N} 1/((2π)^{A/2} σ^A) exp( −‖R_{θ_i,t_i} I_i − C_i P_{n_i} S‖^2 / (2σ^2) )    (4)
L({I_i} | S, N, T) = ∏_{i=1}^{N} L(I_i | S, n_i, θ_i, t_i)    (5)

Log-likelihood:

log L({I_i} | S, N, T) = −∑_{i=1}^{N} ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S‖^2 / (2σ^2) + Const.    (6)

where σ^2 is the variance of the noise, which (for simplicity) I will assume is known and the same for every image; A is the number of pixels in an image; and Const. is a constant independent of S, N, and T. Also, by slight abuse of notation, I have expressed each term in the product on the right-hand side of Eq. (4) as L(I_i | S, n_i, θ_i, t_i) in Eq. (5).

The maximum likelihood estimates of S, N, and T are available as:

(Ŝ, N̂, T̂) = argmax_{S,N,T} L({I_i} | S, N, T).    (7)

In best-alignment algorithms, the maximization in Eq. (7) is carried out numerically, by first initializing S and then alternately maximizing over (N, T) and over S across several iterations. See Algorithm 1 (the superscript k is the iteration index).

The maximization with respect to (N, T) is carried out by some form of exhaustive search over a discrete grid. That is, a spherical grid is created over the unit sphere, and the maximization with respect to each n_i is restricted to a search over the vertices of this grid. Similarly, the angle interval [0, 2π) is covered by a 1D grid, the translation square is covered by a 2D grid, and the maximization with respect to each θ_i and t_i is restricted to the vertices of these grids.

The maximization in line 4 of the algorithm can be decomposed into independent maximizations with respect to (n_1, θ_1, t_1), (n_2, θ_2, t_2), …, because the right-hand side of Eq. (6) is a sum of terms, each depending only on one tuple (n_i, θ_i, t_i). The maximization of a single term is carried out by minimizing ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2 by exhaustive search, with (n_i, θ_i, t_i) restricted to the vertices of the grids mentioned above. By minimizing over (n_i, θ_i, t_i), the image I_i is best aligned to a projection of S^k, in the sense that ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2 is minimized (hence the name of this algorithm).

Algorithm 1: Simple Best-Alignment Reconstruction
1: procedure SIMPLE-BA({I_i}, S)    ▷ {I_i} = images, S = initial structure, k = iteration index
2:  k ← 1, S^k ← S
3:  while S^k not converged do
4:      (N^k, T^k) ← argmax_{N,T} L({I_i} | S^k, N, T)
5:      S^{k+1} ← argmax_S L({I_i} | S, N^k, T^k)
6:      k ← k + 1
7:   end while
8:   return S^k
9: end procedure

Having aligned every image, line 5 of the algorithm updates the structure. Because the log-likelihood is quadratic in S, this step has a closed-form solution:

S^{k+1} = ( ∑_{i=1}^{N} P_{n_i^k}^T C_i^2 P_{n_i^k} )^{-1} ( ∑_{i=1}^{N} P_{n_i^k}^T C_i R_{θ_i^k,t_i^k} I_i ).    (8)

Here, P_{n_i^k}^T is the back-projection operator defined earlier, and we use the fact that the CTF operator C_i is self-adjoint.

Thus, best-alignment reconstruction iteratively aligns images to the projections of the current estimate of the structure and updates the estimate of the structure using the aligned images. Figure 5 illustrates the algorithm.

Figure 5.

Best-alignment reconstruction.
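As an illustration of the overall loop, the toy sketch below runs a 2D version of Algorithm 1: the “structure” is a 2D image, the data are noisy 1D projections taken at unknown angles, there is no CTF and no translation, and the structure update is a simple average of back-projections rather than the closed-form solution of Eq. (8). It is meant only to show the alternation between alignment and structure update, not to reproduce any package's implementation, and its convergence depends on a reasonable initial reference.

import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def project(S, angle_deg):
    # Toy tomographic projection: rotate the 2D "structure" and sum along one axis.
    return rotate(S, angle_deg, reshape=False, order=1).sum(axis=0)

def backproject(p, angle_deg, shape):
    # Toy back-projection: smear the 1D projection over the grid and rotate it into place.
    return rotate(np.tile(p, (shape[0], 1)), -angle_deg, reshape=False, order=1)

# Ground-truth "structure": a simple blob pattern.
S_true = np.zeros((64, 64))
S_true[20:30, 35:45] = 1.0
S_true[40:48, 15:22] = 0.5

# Simulate noisy projection "images" at unknown orientations.
true_angles = rng.uniform(0, 360, size=200)
images = [project(S_true, a) + 0.5 * rng.standard_normal(64) for a in true_angles]

angle_grid = np.arange(0, 360, 5.0)   # discrete orientation grid
S = np.zeros_like(S_true)
for img in images:
    S += backproject(img, rng.uniform(0, 360), S_true.shape)
S /= len(images)                      # crude initial reference from random-angle back-projections

for iteration in range(5):
    # Alignment step (line 4): exhaustively search the grid for the best-matching orientation.
    templates = np.array([project(S, a) for a in angle_grid])
    best = [angle_grid[np.argmin(((templates - img) ** 2).sum(axis=1))] for img in images]
    # Structure update (line 5), simplified to an average of the best-aligned back-projections.
    S = np.mean([backproject(img, a, S.shape) for img, a in zip(images, best)], axis=0)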

The algorithm, as I have described it above, is computationally far too expensive to implement directly. To see why, consider some numbers: a typical high-resolution cryo-EM reconstruction uses on the order of 10^4–10^5 images, on the order of 10^3–10^4 projection directions (vertices of the spherical grid), and on the order of 10^2 rotations and translations per image. A naïve implementation would require (number of images) × (number of projections) × (number of rotations and translations), i.e., more than 10^4 × 10^3 × 10^2, evaluations of ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2 for every execution of line 4 of the algorithm. Moreover, calculating ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2 requires one image rotation plus translation. This is computationally expensive. A number of “tricks” have been developed to keep the computational cost manageable. I will discuss these below.

Another problem with the simple algorithm is the structure update step of Eq. (8). The matrix representations of the operators in this equation are far too large to compute with, and, in practice, tricks are also used to simplify this calculation. I will discuss these below as well.

Different cryo-EM reconstruction packages differ in the tricks that the packages use to speed up computation. The combination of various tricks can occasionally be overwhelming to understand, but it helps to keep in mind that the underlying algorithm is just Algorithm 1 or a minor variation of it. SPIDER, EMAN, FREALIGN, and cryoSPARC implement versions of best-alignment reconstruction.

4.1 Speeding up alignment

Approaches to speed up the alignment step are:

  1. Multiresolution alignment: alignment begins with very coarse grids. When the algorithm converges on the coarse grids, the grids are refined, and the alignment is carried out on the refined grids in a local neighborhood of the coarse-grid solution.

  2. Polar coordinates: image rotation is computationally expensive; however, if the image is represented in polar coordinates, then rotation is just a translation along the angle axis, an operation that is computationally far less expensive (a small illustration follows this list). SPIDER uses polar representations of images in the Fourier domain [5].

  3. Branch and bound: the main idea behind branch and bound is to find a computationally simple lower bound for ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2 (for mathematical details see the supplementary information in [10]). The lower bound is evaluated on the vertices of a coarse alignment grid. Then, the ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2 term is evaluated exactly at the vertex that has the smallest lower bound. All vertices whose lower bound is greater than this exact value cannot contain a better alignment and are ignored. The grid is then refined at the surviving vertices, and the procedure is repeated at the refined vertices. cryoSPARC introduced this method in best-alignment reconstruction [10].
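The polar-coordinate idea in item 2 can be demonstrated in a few lines of NumPy/SciPy. This is a generic sketch rather than SPIDER's implementation (which works with polar representations in the Fourier domain): once an image is resampled onto a (radius, angle) grid, an in-plane rotation by one grid step is just a circular shift along the angle axis.

import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=64, n_theta=180):
    # Resample an image onto a polar (radius, angle) grid centered on the image.
    cy, cx = (np.array(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    rows = cy + R * np.sin(T)
    cols = cx + R * np.cos(T)
    return map_coordinates(img, np.array([rows, cols]), order=1)

img = np.random.default_rng(1).random((64, 64))
polar = to_polar(img)

# Rotating the image by one angular grid step (2*pi/180 radians) is just a
# circular shift of its polar representation along the angle axis.
rotated_polar = np.roll(polar, shift=1, axis=1)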

4.2 Speeding up structure update

The structure update of Eq. (8) has a simpler form in the Fourier domain. The Fourier slice theorem [4, 12] shows that the projection and back-projection operators simplify to 2D slice-extraction and slice-insertion operators in the Fourier domain. The CTF filter operator also reduces to a point-wise multiplication by the CTF. With these simplifications, Eq. (8) becomes tractable.
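The Fourier slice theorem is easy to verify numerically in two dimensions, which conveys why the update simplifies. The following self-contained check (in Python) shows that the 1D Fourier transform of a projection equals the corresponding central line of the 2D Fourier transform; it is an illustration, not code from a reconstruction package.

import numpy as np

# 2D analogue of the Fourier slice theorem: the 1D Fourier transform of a
# projection (sum along one axis) equals the central slice of the 2D Fourier
# transform taken along the perpendicular frequency axis.
rng = np.random.default_rng(2)
S = rng.random((128, 128))

projection = S.sum(axis=0)                 # project along the vertical axis
slice_from_2d_fft = np.fft.fft2(S)[0, :]   # the k_y = 0 row of the 2D FFT

assert np.allclose(np.fft.fft(projection), slice_from_2d_fft)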

Inserting or extracting a 2D slice from a 3D volume requires careful numerical interpolation. A method called gridding is used for this [13, 14]. All cryo-EM reconstruction packages use some form of gridding. RELION uses a particularly sophisticated form of gridding incorporating a variant of the Pipe and Menon method [15].

4.3 Comments

The simplifications I made to describe best-alignment reconstruction are easy to discard. It is straightforward to allow for unknown and unequal noise variances for different images and also to account for nonwhite noise.

Some packages (e.g., SPIDER, EMAN) carry out the alignment step by maximizing the correlation coefficient between R_{θ_i,t_i} I_i and C_i P_{n_i} S^k rather than minimizing ‖R_{θ_i,t_i} I_i − C_i P_{n_i} S^k‖^2. This corresponds to using the signal model of Eq. (3) in the alignment step.

In classical statistics, the estimate of any set of parameters improves as more data are added, provided that the number of parameters stays fixed. The number of parameters in best-alignment reconstructions does not stay fixed; the number of parameters in (N, T) increases linearly with the number of images. This can potentially limit the asymptotic accuracy of best-alignment reconstructions. Expectation-maximization algorithms attempt to overcome this limitation by treating (N, T) as latent variables, i.e., as variables that influence the likelihood but which are not estimated. Only S is estimated.

5. Expectation-maximization algorithms

The theory of the expectation-maximization algorithm can be found in many textbooks, e.g., [16]. The EM algorithm works iteratively, where in each step the conditional mean of the latent variables is used to update the parameter estimate. For the cryo-EM reconstruction problem, the variables (N, T) are taken to be the latent variables, and S is taken to be the parameter to be estimated. A prior can also be included for S, if necessary.

RELION [8, 9] uses the EM algorithm with a Gaussian prior on the amplitude of the Fourier coefficients of S. The resulting algorithm is rather complex, and instead of discussing all of the details of the algorithm, I will discuss a simplified version in Algorithm 2.

Line 4 in Algorithm 2 calculates the conditional probability of the alignment parameters (n_i, θ_i, t_i) given the image I_i and the current estimate of the structure S^k. The L(I_i | S^k, n_i, θ_i, t_i) term in line 4 comes from Eq. (5). The term p(n_i, θ_i, t_i | S^k) in line 4 is the prior probability of the alignment parameters given S^k, and typically this can be set to a uniform probability density. The denominator on the right-hand side of the assignment in line 4 is the normalizing constant, which makes Γ^{k+1}_{n_i,θ_i,t_i} a probability density. Also, note that Γ^{k+1}_{n_i,θ_i,t_i} is a function of (n_i, θ_i, t_i) and is calculated for all values of (n_i, θ_i, t_i) on the vertices of the spherical, angular, and translation grids.

Algorithm 2: Simple EM Reconstruction
1: procedure SIMPLE-EM({I_i}, S)    ▷ {I_i} = images, S = initial structure
2:  k ← 1, S^k ← S    ▷ Initialization
3:  while S^k not converged do
4:    For each image I_i, calculate the following conditional probability for all values of (n_i, θ_i, t_i):
        Γ^{k+1}_{n_i,θ_i,t_i} ← L(I_i | S^k, n_i, θ_i, t_i) p(n_i, θ_i, t_i | S^k) / ∫ L(I_i | S^k, n_i, θ_i, t_i) p(n_i, θ_i, t_i | S^k) dn_i dθ_i dt_i
5:       Use the conditional probabilities to update the structure as:
        S^{k+1} ← ∑_{i=1}^{N} ∫ Γ^{k+1}_{n_i,θ_i,t_i} P_{n_i}^T C_i R_{θ_i,t_i} I_i / σ^2 dn_i dθ_i dt_i
        S^{k+1} ← ( ∑_{i=1}^{N} ∫ Γ^{k+1}_{n_i,θ_i,t_i} P_{n_i}^T C_i^2 P_{n_i} / σ^2 dn_i dθ_i dt_i )^{-1} S^{k+1}
6:       k ← k + 1
7:  end while
8:  return S^k
9: end procedure

Line 5 updates the structure. I have split the update over two lines to fit the format of this document. The update is effectively a weighted average of the back-projected images, where the weight depends on Γ^{k+1}_{n_i,θ_i,t_i}, P_{n_i}, P_{n_i}^T, and C_i. This weighting is apparent in the integration on the right-hand side of line 5. The integrals in line 5 are approximated as Riemann sums over the spherical, angular, and translation grids. The assignments in line 5 take an especially simple form in the Fourier domain; hence the algorithm is typically implemented in the Fourier domain. See [8, 9] for details.

Intuitively speaking, the EM algorithm can be viewed as a “smoother” version of the best-alignment algorithm. The “smoothing” corresponds to calculating the probability of matching I_i to all possible projections, rotations, and translations (line 4) and using these probabilities to reconstruct the “weighted-average” structure in line 5. This “smoothing” is in contrast to best-alignment, which only uses a single alignment (the best alignment) for reconstruction.
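The contrast between hard and soft assignment can be seen in a small numerical sketch. The example below is a toy in which the “projections of the current structure” are template vectors on a discrete orientation grid, with no CTF, rotation, or translation, and a uniform prior over orientations; line 4 of Algorithm 2 then reduces to a softmax of the negative squared residuals, and line 5 to probability-weighted averages. It is illustrative only and does not reproduce RELION's implementation.

import numpy as np

rng = np.random.default_rng(3)
sigma = 0.5

# Toy setting: 72 "projections of the current structure" (one per candidate
# orientation) and 500 noisy "images", each generated from one of them.
templates = rng.random((72, 64))
images = templates[rng.integers(0, 72, size=500)] + sigma * rng.standard_normal((500, 64))

# E-step (Algorithm 2, line 4): conditional probability of each orientation
# given each image; with a uniform prior this is a softmax of -||I - P S||^2 / (2 sigma^2).
sq_res = ((images[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
log_gamma = -sq_res / (2 * sigma**2)
log_gamma -= log_gamma.max(axis=1, keepdims=True)   # for numerical stability
gamma = np.exp(log_gamma)
gamma /= gamma.sum(axis=1, keepdims=True)

# Best-alignment keeps only the most probable orientation per image; EM keeps
# the full weights and forms probability-weighted averages, the toy analogue of
# the weighted sums in line 5.
hard_assignment = gamma.argmax(axis=1)
weighted_averages = gamma.T @ images / gamma.sum(axis=0)[:, None]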

As mentioned above, RELION uses the EM algorithm in the Fourier domain. RELION also estimates the noise power spectrum within the iteration and uses a weak prior on the structure (Section 6 discusses why) [8, 9].

5.1 Speeding up the EM algorithm

Calculating the conditional probabilities and the structure update in the EM algorithm is just as computationally expensive as the alignment step in the best-alignment algorithms. Several methods have been suggested to speed up the EM algorithm:

  1. Stochastic gradient descent: the idea is to limit computational complexity by choosing a small, random subset of the terms to be summed over in line 4 and line 5 of Algorithm 2, at every iteration. Stochastic gradient descent is used in cryoSPARC [10].

  2. Adaptive integration: the conditional probability distributions of the latent variables (line 4) often have very sharp peaks around which most of the probability mass of the distribution is concentrated. It is possible to adaptively choose an integration grid that samples the conditional probability distribution finely near the peaks and coarsely away from the peaks, thereby saving computation. This strategy was suggested in [17] and is used in RELION.

  3. Adaptive basis selection: this strategy is based on the idea that the projections of a structure along nearby projection directions are very similar. Of course, this implies that the images that align to these directions are also similar. It turns out that if the structure projections and the images can be represented on a small basis (small compared to the number of projection directions and number of images), then the EM calculations can be sped up. This strategy is proposed in [18], where the bases are adaptively adjusted within the EM framework.

6. Postprocessing, multiple structures, and symmetry

6.1 Postprocessing

Best-alignment, as well as the EM algorithm, gives maximum likelihood estimates of the structure, but maximum likelihood reconstructions can be noisy. The noise is especially prevalent in high frequencies and makes it difficult to visualize the details of the structure. A number of methods are employed to “filter out” the noise. Below, I group these methods together as postprocessing methods, but keep in mind that many of them are incorporated into the reconstruction algorithm itself:

  1. Filtering at the resolution: in this strategy, the spectral signal-to-noise ratio (signal-to-noise ratio in the Fourier domain) is calculated via the Fourier shell correlation (FSC), and the resolution of the structure is determined as the frequency at which the FSC falls below a threshold. The structure is then low-pass filtered at the resolution (a minimal sketch of the FSC computation follows this list). Many of the structures reported in the literature adopt this strategy.

  2. Wiener filtering: in the cryo-EM context, Wiener filtering may be viewed as a more sophisticated version of the FSC strategy. In Wiener filtering, the structure is low-pass filtered with the low-pass filter adapting to the spectral signal-to-noise ratio at each spatial frequency. This strategy is used in SPIDER, FREALIGN, and RELION.

  3. Sparse representation: modern approaches to denoising involve using a sparse representation of a signal in an over-complete basis and then using a joint L1–L2 minimization to reconstruct the signal from noisy data. This strategy has also been applied to cryo-EM [19] with improvement in the resolution of the structure.
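For reference, a minimal FSC computation (see item 1 above) is sketched below in Python. It assumes two reconstructions of the same particle on the same cubic grid and equally spaced frequency shells; production implementations typically use half-maps from independent refinements and more careful shell and threshold conventions.

import numpy as np

def fourier_shell_correlation(vol1, vol2, n_shells=32):
    # Correlate the Fourier coefficients of two reconstructions within
    # concentric shells of spatial frequency (assumes cubic volumes).
    F1, F2 = np.fft.fftn(vol1), np.fft.fftn(vol2)
    freqs = np.fft.fftfreq(vol1.shape[0])
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0, 0.5, n_shells + 1)
    fsc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (radius >= lo) & (radius < hi)
        num = np.abs(np.sum(F1[shell] * np.conj(F2[shell])))
        den = np.sqrt(np.sum(np.abs(F1[shell])**2) * np.sum(np.abs(F2[shell])**2))
        fsc.append(num / den if den > 0 else 0.0)
    return np.array(fsc)

# The reported "resolution" is commonly the frequency at which this curve
# first drops below a chosen threshold (0.143 is a common choice).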

6.2 Multiple structures

There are two reasons for reconstructing multiple structures instead of a single structure from cryo-EM images. One reason is that many proteins are not rigid and exhibit several different structures, called conformations. Thus, even a chemically pure cryo-EM sample may contain different structures, and reconstructing only a single structure from the sample is likely to give an “average” or even a meaningless structure. Another reason to consider multiple structures is possible problems with sample preparation. If the sample is a protein complex (e.g., several proteins held together by hydrogen bonds), then it is possible that, in a given sample, some of the complexes may have dissociated into their components. Multiple structures are used during reconstruction so that some of the reconstructed structures model the dissociated components and prevent them from corrupting the reconstruction of the main structure.

Reconstructing multiple structures fits well within the EM algorithm (see [9] for details), and is routinely used in RELION and other packages.

6.3 Symmetry

Several particles exhibit symmetry. For example, the capsid of a virus may exhibit icosahedral symmetry. If the symmetry of a particle is known beforehand, say from X-ray crystallography studies, then it is possible to incorporate the symmetry into the reconstruction algorithm.

7. Conclusion

Most cryo-EM reconstruction algorithms being used today are based on or can be thought of as being based on the maximum likelihood principle. This paper has outlined some of the main ideas and tricks used by such algorithms.

Author details

Hemant D. Tagare

Yale University, New Haven, CT, USA

*Address all correspondence to: hemant.tagare@yale.edu

Anti-Bourgeois Theory

Gary Hall

Abstract

In his celebrated 2009 memoir Returning to Reims, the Parisian intellectual and theorist Didier Eribon travels home for the first time in thirty years following the death of his father. There he tries to account for the change in politics of his working class family over the period he has been away: from supporting the Communist Party to voting for the National Front. (With the notable exception of the 2018 election of Andrés Manuel López Obrador in Mexico, it’s a shift toward the populist nationalism of the far right that’s apparent in many countries today: the UK, Germany, Poland, Italy, Greece, Hungary, the US, and Brazil.) But Eribon also discusses the transition he himself has undergone as a result of having escaped his working class culture and environment through education, and how this has left him unsure whom it is he is actually writing for. He may be addressing the question of what it means to grow up poor and gay; however, he is aware few working class people are ever likely to read his book.

At the same time, Eribon emphasizes that his nonconforming identity has left him with a sense of just how important it is to display a ‘lack of respect for the rules’ of bourgeois liberal humanist ‘decorum that reigns in university circles’ and that insists ‘people follow established norms regarding “intellectual debate” when what is at stake clearly has to do with political struggle.’ Together with his friend Édouard Louis and partner Geoffroy de Lagasnerie, Eribon wants to ‘rethink’ the antihumanist theoretical tradition of Foucault, Derrida, Cixous et al. to produce a theory ‘in which something is at stake’: a theory that speaks about ‘class, exploitation, violence, repression, domination, intersectionality’ and yet has the potential to generate the same kind of power and excitement as ‘a Kendrick Lamar concert’.

With the ‘Anti-Bourgeois Theory,’ I likewise want to reinvent what it means to theorize by showing a certain lack of respect for the rules of bourgeois decorum the university hardly ever questions. I want to do so, however, by also breaking with those bourgeois liberal humanist conventions of intellectual debate that–for all his emphasis on rebelling ‘in and through’ the technologies of knowledge production–continue to govern the antihumanist theoretical tradition Eribon and his collaborators are associated with. Included in these conventions are culturally normative ideas of the human subject, the proprietorial author, the codex print book, critical reflection, linear thought, the long-form argument, self-expression, originality, creativity, fixity, and copyright. I will argue that even the current landfill of theoretical literature on the posthuman and the Anthropocene is merely a form of bourgeois liberal humanism that is padded with nonhuman stuffing–technologies, objects, animals, insects, plants, fungi, compost, microbes, stones, and geological formations–to make it appear different. Can we not do better than this?

Keywords: class, culture, environment, climate crisis, Anthropocene, liberalism, humanism, posthuman, inhuman

I have no social class, marginalized as I am. The upper class considers me a weird monster, the middle-class worries I might unsettle them, the lower class never comes to me.

Clarice Lispector.

1. Culture

During the summer of 2018, I attended an event to mark the publication in English of two French volumes: Returning to Reims by Didier Eribon [1] and History of Violence by Édouard Louis [2]. In Eribon’s powerful memoir, the Parisian sociologist travels home for the first time in 30 years following the death of his father [3]. There, he tries to account for the shift in politics of his working-class family while he has been away, from supporting the communist party to voting for the National Front. (It’s a shift toward the populist nationalism of the far right that’s apparent in many countries today—the UK, the USA, Germany, Poland, Italy, Greece, Hungary, Brazil—the 2018 election of Andrés Manuel López Obrador in Mexico being a notable exception to this seemingly global trend.) Returning to Reims was a significant influence on Louis, inspiring him to write his best-selling first novel, The End of Eddy, which he dedicated to Eribon [4]. Like the latter’s memoir, History of Violence and The End of Eddy both in their different ways tell the story of how the author having grown up gay and poor in the north of France, was eventually able to escape his working-class environment through study and education.

As is customary on these occasions, the authors read from their books and discussed their work and lives, followed by a question and answer session with the audience. During this latter part of the evening, they spoke about the transition they had made from the social realm of the working class to that of the middle class, with its very different gestures, knowledge, and manners of speech. Recognizing they now had a foot in both camps, each said the process of reinventing themselves had nonetheless left them feeling they truly belonged to neither. Arriving in Paris at the age of 20, for instance, Eribon found it much easier to come out of the sexual closet and assert his homosexuality to his new cosmopolitan friends than to come out of the class closet.

Both authors also described how, as a consequence, they were unsure for whom they were actually writing. They may be addressing the question of what it means to grow up in a working-class environment in Returning to Reims and History of Violence: the profound racism, sexism, and homophobia they found there; the violent modes of domination and subjectivation; the social impoverishment; the lack of possibilities that are imaginable, to say nothing of those that are actually realizable. However, they were aware that few people from that social class were ever likely to read their books, so they could hardly say they were writing for them.

What really captured my attention, though, was the moment Eribon and Louis stressed that what they were trying to do with their writing is “reinvent theory”: to produce a theory in which “something is at stake.” (Together with Eribon’s partner, Geoffroy de Lagasnerie, they have described this elsewhere as a theory that speaks about “class, exploitation, violence, repression, domination, intersectionality [5]” and yet has the potential to generate the excitement of “a Kendrick Lamar concert [6].”) Eribon is, of course, the author of a well-known biography of the philosopher Michel Foucault. Nevertheless, this statement struck me, partly because I’m interested in theory, but mainly because it’s difficult to imagine many English literary writers of similar stature engaging with the kind of radical thought Foucault and his contemporaries are associated with, let alone expressing a desire to reinvent it. Since it undermines the idea of the self-identical human subject, that theoretical tradition is often described as antihumanist, or even as posthumanist in some of its more recent manifestations. By contrast, English literary culture (and I’m saying English rather than British literary culture here quite deliberately) is predominantly humanist and liberal, seeing education in general, and the reading and writing of literature in particular, as a means of freeing the mind of a rational human individual whose identity is more or less fixed and secure.

One explanation given for this difference in conceptual approaches is that, historically, writers in England have been more closely associated with the ruling elite: with public schools, Oxbridge colleges, and the tradition of the gentleman as amateur scholar. It’s an association that contrasts sharply with the cafes, streets, and factory shop floors of the more political French intellectual. Suspicious as much of English culture is of radical and abstract ideas, epitomized by the emphasis in France on the universal values of freedom, justice, and liberty since at least the revolution of 1789 (England, unlike France and Mexico, has never had a revolution), “the intellectual” is often viewed negatively: as someone who is arrogant, pretentious, and full of self-importance. Paradoxically, to be viewed approvingly as intellectual by the English, it’s better not to be too intellectual at all. So authors such as Yuval Noah Harari and Mary Beard are considered acceptable and taken seriously, as they can write clearly in “plain English” and communicate with a wider public, even attain the holy grail of a popular readership. Theorists such as Gilles Deleuze and Catherine Malabou are not, as their philosophy and use of language are held to be too complex for most “real” people to understand.

This constant policing of the parameters of acceptability explains why the literary novel in England today is so unashamedly humanist. Scottish journalist Stuart Kelly even goes so far as to compare it unfavorably to the “posthuman novel” that is the TV series Westworld. (I’m drawing on newspaper commentary here to show that mainstream culture in the UK is not entirely dominated by uncritical liberal humanist thought.) For Kelly, the modern literary novel and its understanding of life are “outdated,” still constrained by its eighteenth century origins. Nowhere is this more evident than with its “unquestioned foundations,” based as they are on the idea of the autonomous human subject as protagonist, someone who has an “intact self,” “cogent agency,” and “memories they trust—and can trust—and desires they understand [7].”

In Whatever Happened To Modernism?, Gabriel Josipovici characterizes the novel of the Julian Barnes/Martin Amis generation as the product of a nonmodernistic literary culture that is determinedly realist, preferring sentimental humanism and readability to the kind of ground-breaking experimentation he associates with previous eras of the European novel [8]. That may be, but the cure for English culture’s addiction to the worldview of prosperous, middle-class white men—or fear of revolution, the underclass, and the other, depending on how you look at it—is not simply more modernism. As Isabel Waidner emphasizes in their anthology of innovative writing (Waidner’s preferred pronouns are they/them/their), even experimental literature in England is predominantly white, bourgeois, and patriarchal, very much to the exclusion of (non-Oxbridge) BAME, LGBTQIAP+, working-class, and other nonconforming identities [9]. Nor is this particularly surprising. After all, 7% of the UK population attend private schools (that’s over 600,000 pupils, double the number of the 1970s), and approximately 1% graduate from Oxford or Cambridge. Yet it was reported in 2018 that “of the poets and novelists included in Who’s Who … half went to private schools, and 44% went to Oxbridge [10].” One result of this systematic bias is that nonwhite British authors published fewer than 100 titles in 2016 [11].

I began by referring to social realms that contain a lack of possibilities that are even imaginable, let alone achievable. It’s worth noting in this context that of the 9115 children’s books published in the UK in 2017, only 4% featured BAME characters. Just 1% had a BAME lead character, and 96% had no BAME characters whatsoever [12]. Nor is it only literary culture that’s affected by what Eribon describes as the “terrible injustice” of the “unequal distribution of prospects and possibilities” (p. 52). Comparable statistics can be provided for the arts, drama, music, business, politics, the law, medicine, the military, the civil service, the media, and journalism. 54% of the UK’s “top” news journalists were educated in private schools, for example; while of the 81% who attended university, more than half were educated at Oxbridge, with a third attending just one institution, Oxford [13]. Moreover, 94% of all the journalists in the UK are white and as few as 0.2% black [14].

In a modest bid to counter such inequality of opportunity and stalling of social mobility, the BBC Radio 6 Music presenter Cerys Matthews has said she wants to program less music on her show by artists who’ve been given a leg up by virtue of attending public school and more music by people from all walks of life, including women and those with a working-class upbringing [15]. Which makes me wonder: if we do want to foster a culture in England that’s not so liberal and humanist and anti-intellectual, and if we do want to develop an understanding of life, agency, and subjectivity that is more complex—or at least not quite so outdated and elitist—should we adopt a similar stance? Instead of setting up prizes like the Goldsmiths in order to reward literature that is daring and inventive, should we publish (and perhaps read and cite) fewer texts by people who went to public school or Oxbridge and more by writers from other backgrounds? [16].

2. Technology

I realize asking such questions can come across as strident, blunt, or even rude. However, I am guided here by another refreshing aspect of Eribon and Louis’ approach to reinventing theory: their willingness to be disrespectful. Eribon explains it best in Returning to Reims. Praising the philosopher Jean-Paul Sartre for having insulted the liberal sociologist Raymond Aron in 1968 for being a “defender of the bourgeois establishment,” Eribon stresses the importance of “daring to break with the conventions of polite academic ‘discussion’—which always works in favor of ‘orthodoxy’ and its reliance on ‘common sense’ and what seems ‘self-evident’ in its opposition to heterodoxy and critical thought” (p. 101).

In drawing attention to the fact that so many writers in the UK attended public schools and Oxbridge, I’m therefore not just making a crude and ill-mannered point about class inequality, a point that’s already quite familiar by now in any case. I’m also trying to explain why so much of the culture in England remains doggedly liberal humanist, middle class, and anti-intellectual. At the same time, I believe theory can help us to understand this situation and to think it through. Is the idea we should avoid difficult “jargon” in order to communicate better with the so-called ordinary people really so self-evident? Is it not rather an instance of what, following Antonio Gramsci, we can call society’s manufactured “common sense,” the ideology used to maintain the status quo—and more and more today to eliminate reasonable dissent? Is this one of the reasons we’re experiencing an ongoing backlash against theory, not just in journalism and the media but in academia too? The reason theory is important and should not be dismissed, no matter how abstract its ideas and how challenging its rhetorical style (and no matter how badly some ‘star’ theorists have behaved on a professional or personal level), is because it enables us to understand our modes of being and doing in the world and conceive of them differently and so change them.

That said, it’s not my intention to suggest we should all simply read more French theory: that we’d all now be posthumanists in England if only Napoleon had won at Waterloo. Like Eribon and Louis, I want to promote heterodoxy and critical thought; and I want to do so to the extent of daring to break even with the conventions of theory and what it’s currently considered to be. For this tradition of critical thought has its own blind spots that lead it to accept certain assumptions as common sense as well.

Many of these blind spots relate to how neoliberalism and its technical systems (e.g., social media such as Twitter and YouTube, professional social networks such as Academia.edu, online research portals and disciplinary repositories such as Elsevier’s PURE and SSRN) have found ways to incorporate those theorists McKenzie Wark calls “general intellects” in her book of the same name, who are today typically employed as academics as opposed to the public intellectuals of the past such as Sartre and Simone de Beauvoir [17]. My point is not that contemporary intellectual laborers are merely constituent elements of the general intellect or “social brain,” whose only purpose “is to keep commodification going and profits flowing.” I do not deny that such commercially oriented theorists are, as Wark says, also trying to “find ways to write and think and even act in and against this very system of commodification that has now found ways to incorporate even them.” My argument is that their efforts to do so contain a number of blind spots—or, perhaps better, datum points—which limit their “ability to grasp the general situation [18].” This is especially the case as far as the bourgeois liberal humanist categories and frameworks with which they continue to operate are concerned. For them, too, datum points such as the unique human author, originality, creativity, immutability, and copyright are in practice held as self-evidently providing the basis for well-mannered debate. Far from theory enabling individuals and groups to think differently about what they are and what they do, the taking-for-granted of such categories and frameworks leads many intellectual laborers today to likewise work in favor of orthodoxy and the perpetuation of the established order.

This is why I’m interested in experimenting with ways of working I’m aware a lot of people might find counterintuitive and difficult to grasp—and perhaps even to take seriously. I’m exploring what forms our work as theorists can take if, in its performance, it does not simply go along with the pressure the neoliberal university places on us to deliver more, ever quicker, and the accompanying spread of managerial technologies of measurement and commodification such as rankings, citation indexes, and other metrics. But I’m also exploring what forms our work can take if it likewise avoids falling into the trap of trying to counter the politics of the accelerated academy and its technological systems by resorting to a form of liberal humanism by default—evident in demands to “slow down,” or even go back, or in the “assertion of the intrinsic value/unquantifiable character of scholarship [19].”

This last part is tricky. There’s actually no easy way for us to avoid adhering to liberal humanist ways of being and doing as authors and academics—no matter how posthuman the content of our theory may be. This is because there’s a strong link between our copyright laws and the production of liberal humanist subjectivity and agency. (As Mark Rose shows: “Copyright is not a transcendent moral idea, but a specifically modern formation [of property rights] produced by printing technology, marketplace economics and the classical liberal culture of possessive individualism”) [20]. This link, in turn, means there are no legally and professionally recognized nonliberal, nonhumanist alternatives to publishing and sharing our work on an all-rights-reserved copyright basis. And this is so even with regard to those instances in which a writer identifies as having a fluid, nonbinary identity that is neither male nor female, and adopts personal pronouns such as they/them/their.

In large part, this lack of alternatives is due to the fact that, although the UK, USA, and Europe have different requirements for copyrightability, in all of them, copyright is dependent on the figure of the singular human author. From this standpoint, our current copyright laws have a threefold function: (1) they protect the author’s economic and moral rights, as is generally understood. Yet—and this is something that is less frequently appreciated—they also participate in (2) creating and shaping the author as a sovereign, liberal, human subject and (3) making it difficult for the author to adopt other forms—forms that are capable of acknowledging and assuming (rather than ignoring or repressing) the implications of texts coming into being through the various multiple and messy intra-actions of an extended assemblage of both humans and nonhumans.

Do the restrictions imposed on us by our laws of intellectual property explain why most radical philosophers today work in a surprisingly conservative (i.e., liberal) fashion? Even political theorists who are known for engaging directly with new forms of subjectivity and social relations, such as those associated with the horizontalist, self-organizing, leaderless mobilizations of the Occupy, Black Lives Matter, Dakota Standing Rock Sioux, gilets jaunes, and Extinction Rebellion protests, are no exception. I’m thinking here of Alain Badiou, Judith Butler, Michael Hardt and Antonio Negri, Chantal Mouffe, Jodi Dean, Slavoj Zizek, etc.; the list is an extremely long one. By working in a conservative fashion, I mean texts such as Assembly [21], Podemos [22], and Crowds and Party [23] are all written as if they were the absolutely authentic creative expressions of the minds of unique sovereign individuals who are quite entitled to claim the moral and legal right to be identified as their singular human authors. They are then made available on this basis for economic exploitation by a publisher as commodities, in the form of books that can be bought and sold according to a system of property exchange that is governed by the logic of capital and its competitive, individualistic ethos.

The situation is not helped by the fact that when radical thinkers do turn their attention to how scholars operate nowadays, their concern is predominantly with the neoliberal subjects we are supposedly transitioning into with the help of digital information technologies. They are not quite so concerned about the particular configurations of subjectivity and the related information technologies (i.e., commercially copyrighted, printed-paper codex books and journal articles) we are changing from. The point I’m making here is that it’s of fundamental importance to pay close critical attention to the latter, too. This is because in practice it has typically been a liberal, humanist subjectivity. When it comes to the actual creation, publication, and communication of research especially, this model of subjectivity has occupied a position of hegemonic dominance within the profession—and, in many respects, still does. The reason is simple: liberal humanism is built into the very system of the university [24]. As Christopher Newfield explains with regard to higher education in the USA, “a consensus version of university humanism has long consisted of ‘five interwoven concepts: the free self, experiential knowledge, self-development, autonomous agency, and enjoyment.’” What’s more, “university philosophers and administrators did not simply espouse these concepts as ideals but institutionalized them [25].”

If liberalism, in a nutshell, is concerned with the human individual’s right to life, liberty, and property, together with the political conditions and institutions that secure these rights (e.g., constitutional government and the rule of law), what’s really being condemned in many accounts of the corporatization of the academy is the manner in which a version of liberalism is being intensified and transformed into another, specifically neoliberal interpretation of what, among those rights, are deemed most important: the unassailable rights of property and extension of the values of the free market and its metrics to all areas of life [26]. Yet, as I say, the focus of critical attention has too often been on the process of change, and especially on what we are changing to (capitalist entrepreneurs, including entrepreneurs of our own selves and lives), and not on what we are changing from. What is a predominantly liberal, humanist mode of academic personhood is, in effect, held up as some kind of solution, or at least preferable alternative, to the shift toward the constantly self-disciplining, self-governing, self-exploitative subject of neoliberalism by default. (It’s an attitude on the part of internet scholars that’s encapsulated perfectly, albeit unwittingly, by a remark of Shoshana Zuboff’s on surveillance capitalism: “Once I was mine. Now I am theirs.”) [27].

In other words, a form of liberal humanism, along with the attendant concepts of the self-identical autonomous human subject, the individual proprietorial author, linear thought, the long-form argument, the single-voiced narrative, the fixed and finished object, originality, creativity, and copyright, acts as something of a blind spot or datum point in a lot of established theory. The writing of peer-reviewed, sequentially-ordered, bound and printed-paper codex books and journal articles is a professional practice that is perceived as transcending the age in which it is employed, which means continuity in these matters tends to be valued more highly than transformation, let alone revolution. It’s a manner of operating that is taken for granted as fixed and enduring (although in actual fact the activities and concepts it involves are constantly changing and being renegotiated over time), and that constitutes a preprogrammed mode of performance that many academics adopt more or less passively in order to construct theoretical frameworks and draw conclusions, hence the lack of care shown by even the most politically radical of thinkers for the materiality of their own ways of working and thinking.

It can even be argued that the failure to denaturalize and destabilize what, for the sake of economy, I have referred to as the liberal humanist model of subjectivity—to confront and rigorously think through liberal concepts of human rights, freedom and property as they apply to us as theorists (although we understand philosophically that critical theory’s questioning of liberal thought must involve questioning these concepts too)—is one of the reasons it’s been relatively easy for the commodifying, measuring, and monitoring logic of neoliberalism to reinterpret our ways of being too. With the wider historical tradition of liberalism having provided the discursive framework of modern capitalism, neoliberal logic is not necessarily always going against the liberal rights and values that many of us continue to adhere to in practice. It is rather, as I say, that under this logic aspects of our liberal ways of being and doing have been intensified and transformed into another, specifically neoliberal interpretation of what, among those rights and values, are deemed to be most significant.

It’s a set of circumstances that has left many of us in a state of melancholy, of unresolved mourning, for what we have lost: unresolved, because the liberal manner of performing as academics and theorists is not fully acknowledged as something we are attached to, so it’s not something we can work through when we do experience it as a loss. This, in turn, can be said to have led to a state of political disorientation and paralysis. Since it’s a loss we cannot fully acknowledge, we are unable to achieve an adequate understanding of how the process of corporatizing the academy can be productively reinflected, or what kind of institution we should be endeavoring to replace the neoliberal university with.

Still, the problem is not just that the political rationalities of neoliberalism find it relatively easy to shape and control any efforts to counter the becoming business of higher education by acting as liberals and calling for a return to the rights and values of the public university (i.e., of academic freedom and trust; of fundamental as opposed to applied research; of individualized rather than mass teaching; and of the relatively autonomous institution, the primary function of which is to help build and maintain our democracies through the education of their citizens, and so to contribute to public value in that fashion rather than through the generation of financial profit). It’s also that such calls have a tendency to moralistically discipline and reproach, if not indeed close down, attempts to question their own, often ahistorical, liberal premises, and to search for different means of being and doing as scholars that are neither simply liberal nor neoliberal. We could go so far as to say that, far from part of the solution, calls for a restoration of the importance of the liberal values of the public university and the traditional humanities, although they may have their hearts in the right place, are actually part of the problem.

3. Science

Making critical remarks about erstwhile radical political theorists continuing to claim the legal right to be identified as the original proprietorial authors of their books is often dismissed as a vulgar thing to do. Drawing attention to the fact such theorists are making their work available for commercial exploitation on this basis, according to a system of commodity exchange that is governed by the logic of capital, is considered something of a cheap shot. And there may be some truth in this. Still, do such dismissals risk serving as an alibi for the widespread failure to take on board the implications of not thinking through liberal concepts of human rights, freedom, and property as they apply to us as theorists? Liberalism may mean we are free to make rational choices about almost every aspect of life. But it also means we are free to choose only within certain limits. What we are not legally and professionally free to choose is an authorial identity that operates in a manner consistent with a more inhuman form of theory. I’m referring to an identity that functions in terms neither of the human nor the nonhuman. Instead, inhuman theory as I see it involves a form of communication that endeavors to take account of and assume (rather than ignore or otherwise deny) an intra-active relation with the supposedly nonhuman, be it animal, plant life, technology, the planet, or the cosmos.

Why inhuman? And why am I now switching to this term, rather than continuing with the posthuman?

My use of “inhuman” relates to the way the human cannot simply be opposed to the nonhuman. In this respect, there is no such thing as the nonhuman—nor the human for that matter. Not in any simple sense: the nonhuman is already in(the)human. Each is born out of its relation to the other. The inhuman is thus a mode of being and doing with the nonhuman. Based as it is on the performance of a nonunified, nonessentialist, polymorphous subject (rather than the sovereign, self-identical individual of both liberal and neoliberal humanism), it follows that inhuman theory can also be understood as an instance of the inhumanities. For if the inhuman equals the human intertwined with the nonhuman, then humanities with this intra-active inhuman figure at their heart must become the inhumanities.

Admittedly, such an understanding of subjectivity and authorship could be gathered under the sign of the posthuman. Approaches to the posthuman, however, have been dominated by the “posthuman humanities [28]” of Donna Haraway, Rosi Braidotti, Cary Wolfe, and others [29]. Like the radical political philosophers I referred to earlier, these theorists of the posthuman continue to work in quite conventional, liberal humanist ways. My proposal is that the above transformative conception of the human and the humanities can therefore on occasion be more productively articulated in terms of the inhuman. The idea is that such a rhetorical and conceptual shift might enable us to better challenge the humanist subject that serves as a datum point to so many theories—not just of the humanities but of the posthuman and posthumanities too. Building on the argument McKenzie Wark develops in “On the Obsolescence of the Bourgeois Novel in the Anthropocene”, could we go so far as to characterize the apparent inability of radical theory to operate according to a more inhuman mode of philosophy as a sign of its obsolescence? [30]

Wark’s text on the bourgeois novel was published on the blog of Verso Books as an addition to the collection of critical appreciations she provides in General Intellects: Twenty-One Thinkers For The Twenty-First Century [31]. While the chapters in that book offer succinct analyses of individual thinkers such as Isabelle Stengers, Hiroki Azuma, and Paul B. Preciado, Wark’s focus in “On the Obsolescence of the Bourgeois Novel in the Anthropocene” is The Great Derangement: Climate Change and the Unthinkable by the writer and novelist Amitav Ghosh [32]. In this nonfiction book, Ghosh contemplates the environmental crisis and global warming from a literary perspective that has its origins in the Indian subcontinent. As far as he is concerned, climate change is not just about ecological problems or even capitalism and its carbon-based political economy. Climate change is about empire, it’s about imperialism, and above all, it’s about climate justice. Providing an account of Ghosh’s influential lectures on the great derangement thus enables Wark to conceive of a geo-humanities project that brings earth science into contact with “post-colonial voices that have pushed back against imperial mappings of the world.” In doing so she acknowledges that approaching climate change in terms of social justice brings with it a conceptual challenge. “One has to avoid excluding the diversity of human voices,” Wark writes, quoting from The Great Derangement, “and yet at the same time avoid excluding the nonhuman world and rendering it a mere background, or ‘environment.’” One has to voice “the urgent proximity of nonhuman presences [33].”

Ghosh approaches this conceptual challenge as a literary problem. The difficulty, however, is that climate change (or climate crisis or climate breakdown, as many are now terming it in an attempt to describe more accurately the environmental emergency we are now facing) goes far beyond what can be expressed in the form of the bourgeois novel. The issue is summed up for Wark by the fact that “fiction that takes climate change seriously is not taken seriously as fiction.” Hence some of the best responses to the Anthropocene have been provided by science fiction. Hence, too, Ghosh’s concern that we are now “entering into a great derangement.” Wark describes this as “a time when art and literature concealed rather than articulated the nature of the times and the time of nature.” In place of dealing with the Anthropocene, novels become choked with what, following Franco Moretti, can be thought of as “filler, the everyday life of bourgeois society, its objects, decors, styles and habits [34].”

The reason the bourgeois novel is obsolete, then, is because it has not “adapted to new probabilities.” Instead, Wark characterizes the bourgeois novel as “a genre of fantasy fiction smeared with naturalistic details—filler—to make it appear otherwise. It excludes the totality so that bourgeois subjects can keep prattling on about their precious ‘inner lives.’” Yet, as we have seen, critical theory has not adapted to the Anthropocene either. In fact, to include it seriously in the argument Wark makes about literature and art only serves to place further emphasis on the idea that we are arriving at “a great derangement,” a period when no element remains in its original place. For ours is a time when established theory too can be said to obscure rather than express the changing nature of the times and the time of nature. As with the bourgeois novel, it’s a derangement that works through formal limitations. In the case of theory, these limitations involve the named individualistic author, the immutable object, intellectual property, and so forth. As with the modern novel, the screening out of this scaffolding “continues to be essential” to the functioning of what we might now rather teasingly refer to as bourgeois theory [35]. To further paraphrase Ghosh by way of Wark, here then is the great irony of theory in the Anthropocene: the very gestures with which it conjures up nonhuman actors, objects, and elements “are actually a concealment” of them [36].

The performance of serious theory today is thus as formally limited to bourgeois liberal humanism as the novel. This means it’s extremely difficult, if not impossible, for even the most radical of political theories to do anything other than exclude the diversity of human and nonhuman presences. To sample and remix Wark’s text on the novel in the Anthropocene in order to further undercut notions of the author as a self-identical human individual: anything that would actually disturb the concealment of theory’s established scaffolding, how it’s created, published, and disseminated, is regarded as not proper, as eccentric and odd, and risks banishment. “But from what? Polite bourgeois society?” The for-profit world of Verso Books and Routledge journals where proper theory is to be found? [37].

In this way theory eliminates the “improbable”—including nonhumanist, nonliberal modes of being and doing—“from serious consideration.” We could perhaps cite examples designed to provoke further speculation: the fact that an orangutan in Argentina called Sandra has been declared by the courts there to be a “nonhuman person” with legal rights [38]; that the Whanganui River in New Zealand has been given the same rights as a human person [39]; and that the Amazon has recently been declared a “subject of rights” by Colombia’s supreme court in a bid to protect it from further deforestation [40]. If nonhuman things can now have rights and be the party of interest in administrative proceedings—just as they have at various times and places in the past [41]—can we envisage reaching a point in the future where a work of critical theory can be legally and professionally recognized as having been co-authored by an ape, a river, a forest, an ecosystem, and even by nature in general? If so, what would the consequences be for our notions of the author, creativity, and copyright? [42]. Does even asking such improbable questions not involve us in imposing legal and professional strictures that are designed for humans onto nature? Certainly from the perspective of bourgeois theory that which is outside its inherited frame in this respect can only appear as “strange,” “weird,” and “freaky.” Any such “strangeness” emanating from an actual engagement with the implications of the Anthropocene can thus be kept in the “background,” the unmarked environment in which theory takes place, or moved into it. As is the case with the bourgeois novel, such theory—with rare exceptions—“draws a sharp distinction between the human and the nonhuman,” not to mention the “collective and collaborative.” Here, too, the actions of individual human agents are treated as “discontinuous with other agents,” elements, and energies (including “the masses, peoples, movements”), even though “the earth of the Anthropocene is precisely a world of insistent, inescapable continuities… [43].”

We can therefore see that bourgeois theory clearly “is not working.” The nonhuman, climate breakdown, the Anthropocene in general, exceeds what the form of proper theory can currently express. Like the novel, it has not adapted to the new reality ushered in by the Anthropocene, including all those laws and legal decisions that are starting to pile up around the question of the rights of nature. Instead, theory “imposes itself on a nature it cannot really perceive or value.” Just as “serious fiction, like bourgeois culture, now seems rather unserious, indeed frivolous,” so too does serious theory. The nonhuman may be what a lot of contemporary critical theory studies and writes about, but it cannot take seriously the implications of the nonhuman for theory. As a result, the current landfill of theoretical literature on the Anthropocene is merely a form of bourgeois liberal humanism smeared with nonhuman filler—objects, materials, technologies, animals, insects, plants, fungi, compost, microbes, stones, and geological formations—to make it appear otherwise.

4. Weird, unsettling monsters

To be fair, the situation I’ve described creates problems for my own ways of being a theorist, too. After all, if what I’m doing is placing a question mark against both our neoliberal and liberal humanist models of subjectivity, it’d be naïve to expect there’s going to be a large, preexisting audience out there I can appeal to with my research. So, much like Eribon and Louis, I’m not sure who I’m writing for. It could even be said that, in denaturalizing and destabilizing notions of the virtuoso human author, creativity, and copyright, my research is designed to challenge many of the common sense values and practices that could otherwise have been used to gather an audience around it.

This is another reason I’m interested in experimenting with ways of working as a theorist that a lot of people may find difficult to understand. It’s about doing something that is indeed strange, weird, awkward, confusing, and surprising, something that’s not so easy to approach unconsciously, in a default setting, as if it’s already known and understood in advance. I’m certainly not interested in making myself appear more human in my work. I do not want to think these issues through the lens of memoir in the way Eribon and Louis do. For me, the biographical human subject is more of a symptom than a cure. So I provide very little in the way of autobiographical information as a means of piquing people’s interest and holding their attention: next to nothing about my life, background, class, sexuality, personal vices, or virtues. I do not use either words or pictures to share what it feels like to be me or tell the story of the struggles I’ve overcome to get where I am and how that process has changed me. Nor do I create opportunities to form interpersonal relationships with me by using Instagram, LinkedIn, Twitter, etc. In fact, I try to avoid anything that might have the effect of obviously humanizing me.

Since it’s clearly leading me to break many of the rules about how to attract a twenty-first century audience, I realize this risks coming across as my being willfully difficult, if not self-defeating. (And all the more so in an era of intersectionality, when people are conceived as being the sum total of their class, race, gender, and other identities. It is an era when, as a number of commentators have pointed out, individuals “not only bear the entire history of these identities; they ‘own’ them. A person who is not defined by them cannot tell the world what it is like to be a person who is [44].” A backstory can be useful in such circumstances in making one appear more authentic.) But if I’m interested in transforming the dominant discourse network and its manufactured common sense about how (posthuman) knowledges are to be created, published, and circulated today, then it’s a risk I have to take.

Having said that, if we want to avoid falling passive victim to ways of acting already established in advance, we need to be careful not to merely substitute one set of rules for another: those associated with the production of long-form books of antihumanist or posthumanist theory, say. It’s for this reason that my work does not necessarily adhere to predefined ideas concerning what forms a theoretical text can take. I’m experimenting with new ways of being a theorist that are neither simply neoliberal nor liberal humanist; and I’m doing so because, rather than endeavoring to speak on behalf of a preexisting community or otherwise represent them—as we saw Eribon and Louis trying to do with the working class—it seems to me we have to actively invent the context and the culture in which such a missing community, replete with new notions of the subject, agency, the human, and so on, can emerge. What’s more we have to do so without any assurances or certainty on our part that this will actually happen. We know from Derrida that the future is monstrous. “A future that would not be monstrous would not be a future [45].” As theorists, we need to open ourselves to a future in which we do not simply adhere to the proper, accepted systems for creating, disseminating, and storing our work, together with their preprogrammed ideas regarding the singular human author, originality and copyright. Rather we need to display what Eribon describes as a “lack of respect” for those rules of bourgeois liberal humanist decorum that insist “people follow established norms regarding ‘intellectual debate’ when what is at stake clearly has to do with a political struggle” (p. 161). In short, we need to be weird, unsettling monsters.

Author details

Gary Hall

Faculty Research Centre in Postdigital Cultures at Coventry University, Coventry, UK

*Address all correspondence to: gary.hall@coventry.ac.uk

Trolox Protection against Oxidative Stress in Caenorhabditis elegans

Marco Antonio González-Peña, José Daniel Lozada-Ramírez and Ana Eugenia Ortega-Regules

Abstract

The aim of this work was to evaluate the antioxidant activity of Trolox (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid), a water-soluble vitamin E analog, on the biological model Caenorhabditis elegans, and to determine if the resistance provided to the nematode against oxidative stress has an inherited component. For this purpose, nematodes previously exposed to Trolox were transferred to medium plates with juglone (5-hydroxy-1,4-naphthalenedione), generating a prooxidant environment that induces lethal oxidative stress, and then nematode survival was evaluated every hour. Additionally, nematodes were synchronized and placed in new medium plates in the presence of Trolox, until the oxidative stress resistance of four different generations was evaluated. Trolox-treated C. elegans increased their oxidative stress resistance in comparison to those without treatment. Moreover, the protection was potentiated through each generation, suggesting that Trolox not only neutralizes the oxidative damage but also induces molecular changes that extend nematode survival.

Keywords: Trolox, antioxidant, oxidative stress resistance, Caenorhabditis elegans

1. Introduction

During the cellular respiration process of aerobic organisms, electrons are transferred toward oxygen, which acts as the final electron acceptor in the electron transport chain for energy production; this process generates reactive oxygen species (ROS). Moreover, when there are leaks in the flow of electrons, as part of oxidative metabolism or as a response of the immune system, free radicals are generated [1, 2]. To counter this, organisms have developed an antioxidant defense system capable of scavenging or neutralizing free radicals, comprising enzymes such as superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), glutathione transferase, and glutathione reductase, as well as endogenous (glutathione) and exogenous (ascorbic acid, tocopherol, phenolic compounds, flavonoids, and carotenoids) antioxidants. When the production of reactive species increases or antioxidant levels decrease, the prooxidant-antioxidant balance that exists in organisms is broken, leading to the so-called oxidative stress [3].

ROS, generated as by-products of chemical cell reactions as well as by several stress factors (ultraviolet light, metallic ions, drugs, chemical modifications, heat, and ionizing radiation), are responsible for oxidative damage to lipids, proteins, and DNA; for mutations; for processes involved in cellular aging; and for the onset or development of degenerative diseases such as diabetes, cataracts, hypertension, inflammation, cancer, rheumatoid arthritis, neuropathies, and cardiopathies, among others [4, 5, 6, 7, 8].

An antioxidant molecule is any substance that delays or prevents deterioration, damage, or destruction caused by oxidation. Antioxidants are compounds capable of slowing, inhibiting, or preventing the oxidation of molecules due to their ability to quench free radicals through electron transfer mechanisms. The antioxidant transfers an electron to the free radical to stabilize it [9]. Trolox (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid) is a water-soluble vitamin E analog with antioxidant properties [10]. In this sense, several authors have demonstrated the ability of Trolox to reduce hydrogen peroxide (H2O2) levels, inhibit cell membrane damage and DNA fragmentation, and protect against damage by lipid peroxidation [11, 12, 13, 14].

Caenorhabditis elegans is an organism frequently found in soils, feeding on bacteria and other microorganisms, and used as a biological model due to its small size, simple anatomy, short life span, transparent structure, easy reproduction, completely sequenced genome, and abundance of mutant strains [7, 15, 16]. The nematode genome contains around 19,000 genes, and 65% of them are associated with human diseases, making clear its importance as a research model for understanding the biological, metabolic, pathological, and molecular processes associated with the development of diseases and the functioning and toxicity of bioactive compounds and antioxidant substances [17, 18].

C. elegans can undergo experimental oxidative stress conditions upon being exposed to certain prooxidant compounds such as H2O2, tert-butylhydroperoxide, arsenite, paraquat, and juglone. This leads to increased levels of superoxide and other ROS, shortening the nematode life span and survival [19, 20], which has made C. elegans an advantageous research model organism for studies of antioxidant compounds. Several authors have established that its life span and/or resistance against oxidative stress increased after exposure to vitamin C [17]; vitamin E [21]; spinach extracts [22]; cocoa [23]; phenolic compounds such as quercetin [24, 25], epicatechin [26], and resveratrol [7]; and carotenoids such as β-carotene [17], among others.

The aim of this work was to evaluate the antioxidant effect of Trolox on the resistance of the nematode C. elegans against oxidative stress. Moreover, the effect of the antioxidant on successive generations was studied in order to determine whether that resistance is due to an inherited effect or solely to the antioxidant properties of Trolox.

2. Methodology

2.1 Materials

The biological model used in this study was the wild-type strain (Bristol N2) of the nematode C. elegans, fed with the uracil-auxotroph bacterium Escherichia coli OP50 grown in Luria-Bertani (LB) medium (10 g/L NaCl, 10 g/L tryptone, 5 g/L yeast extract) [27]. Both organisms were obtained through the Department of Chemical and Biological Sciences of Universidad de las Américas Puebla.

2.2 Maintenance and growth conditions

Nematodes were maintained at 22 ± 2°C in nematode growth medium (NGM) plates [3 g/L NaCl, 2.5 g/L peptone, 24 g/L bacteriological agar, 1 mL/L 1 M CaCl2, 1 mL/L 1 M MgSO4∙7H2O, 20 mL/L pH 6.0 phosphate buffer, 1 mL/L cholesterol (0.005 g in 1 mL of ethanol)] supplemented with 200 μL of E. coli OP50. Nematodes were taken from NGM plates and washed with 2 mL of M9 solution (6 g/L Na2HPO4, 3 g/L KH2PO4, 5 g/L NaCl, 0.215 g/L MgSO4∙7H2O). They were centrifuged at 4600 rpm and 4°C for 1 min (centrifuge Z 366 K, HERMLE Labortechnik, Germany). Two washes were performed by removing the supernatant, adding 1 mL of M9 solution, and centrifuging under the same conditions. The supernatant was removed, and the residue was placed in new NGM plates with E. coli OP50. The plates were incubated at 22 ± 2°C [27].

2.3 Synchronization

Synchronization was based on the methodology used by Surco-Laos et al. [25] with some modifications. Nematodes were taken in the adult stage (third day), washed with M9 solution to eliminate bacteria, and centrifuged at 4600 rpm and 4°C for 1 min. The supernatant was removed, and 1 mL of M9 solution was added and centrifuged again under the same conditions. The supernatant was removed, and 1 mL of 1 M NaOH was added, then vortexed (Vortex-Genie 2 G560, Scientific Industries, USA) for 30 s, and centrifuged under the same conditions. The supernatant was removed, and 500 μL of 1 M NaOH and 500 μL of NaOH:5% sodium hypochlorite (3 mL 1 M NaOH + 2 mL Cloralex®) were added, then vortexed for 60 s, and centrifuged under the same conditions. The supernatant was removed, and the pellet was washed twice with 1 mL of M9 solution, increasing the centrifugation speed to 5600 rpm. Lastly, the supernatant was removed, and the residue was placed on new NGM plates with E. coli OP50. The plates were incubated at 22 ± 2°C.

2.4 Oxidative stress resistance

For oxidative stress resistance assays, synchronized nematodes were divided into the following groups: control group (without antioxidant) and two antioxidant groups (900 μM Trolox).

For oxidative stress resistance assays, the methodology suggested by Sangha et al. [28] was used with some modifications. It consisted of selecting 60 ± 5 nematodes in the L4 larval phase (2 to 2½ days), previously exposed to antioxidants, and transferring them to NGM plates with 400 μM of the prooxidant juglone (5-hydroxy-1,4-naphthoquinone; Sigma-Aldrich, Mexico), which induces lethal oxidative stress. Nematode survival was evaluated every hour for up to 8 h; nematodes were scored as dead if they failed to respond to stimulation with a platinum wire [7].

Simultaneously, nematodes in the L4 stage were synchronized, and the eggs obtained were placed in new NGM plates with bacteria and antioxidants and incubated at 22 ± 2°C until the L4 stage was reached again. At this point, one of the antioxidant groups continued with Trolox treatment (AO1), while in the other group, the treatment was discontinued after the first generation (AO2). Nematode survival was evaluated following the above-described methodology. This procedure was repeated until the oxidative stress resistance of four different generations was evaluated for each condition. Experiments were carried out in duplicate.

2.5 Statistical analysis

Survival assays of C. elegans were analyzed using the Kaplan-Meier methodology and the log-rank test, using Minitab Statistical Software (version 18, Minitab Inc., USA).
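The original analysis was performed in Minitab; purely as an illustration of the same statistical approach, the minimal Python sketch below fits Kaplan-Meier curves and runs a log-rank test with the open-source lifelines library. The survival times, censoring flags, and group labels are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of a Kaplan-Meier fit and log-rank comparison (lifelines),
# analogous to the Minitab analysis described above. The hourly survival
# times below are hypothetical placeholders, not the study's data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hour at which each nematode was scored dead (event = 1) during the 8 h
# juglone exposure; nematodes still alive at 8 h are right-censored (event = 0).
control_hours = [2, 3, 3, 4, 4, 5, 6, 8]
control_event = [1, 1, 1, 1, 1, 1, 1, 0]
trolox_hours  = [3, 4, 5, 5, 6, 7, 8, 8]
trolox_event  = [1, 1, 1, 1, 1, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(control_hours, event_observed=control_event, label="Control")
print("Control median survival (h):", kmf.median_survival_time_)

kmf.fit(trolox_hours, event_observed=trolox_event, label="Trolox 900 uM")
print("Trolox median survival (h):", kmf.median_survival_time_)

# Log-rank test for a difference between the two survival curves
result = logrank_test(control_hours, trolox_hours,
                      event_observed_A=control_event,
                      event_observed_B=trolox_event)
print("log-rank p-value:", result.p_value)
```

The median survival time returned by the fitter corresponds to the half-life values reported in Table 1, and the log-rank p-value is the quantity used to declare significant differences between treatments.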

3. Results and discussion

Trolox-treated nematodes (900 μM) significantly increased their resistance against oxidative stress (p < 0.05) compared to untreated nematodes in all four generations evaluated ( Figure 1 and Table 1 ). AO1 nematodes were exposed to Trolox over all four generations. On the other hand, only the first generation of AO2 nematodes was exposed to Trolox, and later generations were not exposed to the antioxidant (like control nematodes). The purpose of this was to evaluate whether nematode survival increases due to the presence of the antioxidant or through an inherited effect. In both treatments (AO1 and AO2), resistance increased with each generation, as the antioxidant effect was potentiated, which indicates the existence of an inherited effect.

Figure 1.

Resistance against oxidative stress of nematodes treated with Trolox. Data shown correspond to mean (n = 2). AO1 nematodes were exposed to Trolox through all generations. AO2 nematodes were exposed to Trolox only in the first generation. Control nematodes were not exposed to any antioxidant through all generations.

Trolox treatment        Half-life time (h)
                        First generation    Second generation    Third generation    Fourth generation
Control                 3.5b                3.7b                 3.6c                 3.7c
AO1                     4.1a                5.1a                 5.9a                 6.6a
AO2                     4.2a                4.6a                 4.7b                 4.8b

Table 1.

Half-life time of Trolox-treated nematodes subjected to oxidative stress.

Data shown correspond to mean values (n = 2). Different letters following the values indicate significant differences (p < 0.05) between treatments within each generation.

In the first generation, no significant differences (p > 0.05) were observed between groups AO1 and AO2 (as they had received the same treatment). In the second generation, no significant differences (p > 0.05) were observed between the two groups, although AO1 nematodes were slightly more resistant than those from group AO2. However, in later generations, significant differences (p < 0.05) were detected between the two groups: the resistance of AO1 nematodes was higher than that of AO2 nematodes. In the third and fourth generations, 28 and 39% of AO1 nematodes survived in comparison to 2 and 10% of AO2 nematodes, respectively. Oxidative stress resistance was thus higher in nematodes with continuous exposure to Trolox. Nevertheless, AO2 resistance was superior to that of the control (p < 0.05) in all generations, proving once again the existence of an inherited effect that protected nematodes following exposure to the antioxidant (Trolox). These results are consistent with those of Yazaki et al. [29], who suggest that nematode resistance and survival are influenced by environmental factors (nutrients, oxygen, toxic substances, and pathogens) as well as by heredity (20–50%). Additionally, Harrington and Harley [30] found that continuous exposure to vitamin E promotes nematode survival, which is consistent with the results of AO1 nematodes. On the other hand, Pietsch et al. [24] showed that quercetin (100 and 200 μM) increased the life span of three different generations of nematodes; however, survival was not shown to improve from one generation to the next, in contrast to what was observed in this work. In the same way, Zuckerman and Geist [31] noticed that vitamin E increased the life span and survival of C. elegans, although the effect is not inherited from parents to progeny.

The increased resistance against oxidative stress in nematodes treated with Trolox agrees with the results reported by several authors. In this sense, Chen et al. [7] observed that resveratrol (50 μM) increased nematode resistance by 27.6%. Also, researchers found that blue corn anthocyanin extract provided protection against oxidative stress by reducing ROS production and increasing C. elegans life span. On the other hand, Abbas and Wink [1] demonstrated that epigallocatechin gallate (EGCG) (200 μM) decreased H2O2 levels in C. elegans nematodes and increased their life span and resistance against oxidative stress. In the same way, González-Manzano et al. [26] reported that epicatechin (200 μM) increased nematode resistance against thermal stress by decreasing ROS levels and increasing CAT activity. Additionally, Kampkötter et al. [5] concluded that kaempferol extends the life span of nematodes through increased resistance to thermal stress. Thermal stress leads to an accumulation of ROS; therefore, the treatment reduces ROS levels and gives protection to the nematodes. In addition, Surco-Laos et al. [25] found that quercetin (200 μM) increased breeding, survival, and resistance against thermal and oxidative stress of C. elegans. Similarly, Lee et al. [16] showed that vitexin (50 and 100 μM) increased life span, decreased ROS levels, and induced thermotolerance, leading to increased resistance against oxidative stress.

Trolox has been shown to increase the life span of nematodes as well as their thermotolerance, and it has been proposed that longevity is enhanced by the induction of a stress response [19, 20]. In this sense, Zhang et al. [32] showed that Trolox was capable of neutralizing ROS and reversing the oxidative damage induced by calycosin-7-O-β-D-glucoside, findings that are consistent with the observations of this work. In contrast, Schulz et al. [33] observed that treatment with Trolox (100 μM) did not improve the life span of nematodes. It is likely that those researchers did not observe any effect because of the low concentration of Trolox used, since in the current study the concentration was higher (900 μM) and a protective effect was exhibited. This interpretation is supported by Gems and Doonan [19], who explained that antioxidant dosage affects the oxidative stress response and survival of nematodes.

Vitamin E administration has been shown to increase life span, survival, and growth, protect against oxidative stress during gametogenesis, decrease toxicity and oxidative stress, and reverse damage caused by UV radiation in C. elegans [19, 20, 34]. In this sense, Harrington and Harley [30] and Zuckerman and Geist [31] showed that vitamin E (200 μg/mL) prolonged the life span and survival of C. elegans. Meanwhile, Kim et al. [35] mentioned that vitamin E improved the longevity of nematodes by increasing their resistance against oxidative stress. In contrast, a higher concentration of vitamin E (400 μg/mL) proved to be toxic, adversely affecting nematode survival and reproduction and delaying growth [30]. This is consistent with the findings of Li et al. [34], who showed that high concentrations of vitamin E had neurotoxic effects on C. elegans: elevated levels of vitamin E induce abnormal neuronal development that disrupts nematode thermosensation and thermotaxis. Similarly, nematodes supplemented with high doses of antioxidants, such as EUK-8 and EUK-134 (SOD mimetics), display a shorter life span due to increased ROS production [19, 21]. In addition, Chen et al. [7], Yazaki et al. [29], and Desjardins et al. [36] have suggested that high levels of antioxidants exhibit a prooxidant and toxic effect on the organism, while lower concentrations display a protective effect.

Antioxidant compounds achieve their protective effect through at least two mechanisms: direct suppression of free radicals and oxidants, and potentiation of the synthesis and activity of metabolites and enzymes in the body [37]. Numerous studies have shown that the resistance against oxidative stress of nematodes fed with antioxidants is not solely caused by the ability to scavenge free radicals and reverse oxidative damage; resistance also involves the regulation of antioxidant enzymes and defenses (SOD, CAT, and GPx) [8, 29]. In this sense, it has been noted that the mutant strains age-1 and daf-2 have higher expression and activity of CAT and SOD enzymes than the wild-type strain, thus increasing their life span and tolerance to oxidative and thermal stress. On the other hand, the mutant strain mev-1 has a shorter life span due to paraquat hypersensitivity and decreased SOD levels [20]. Other mechanisms involved in the resistance are the modulation of transcription factors and signaling pathways and the reduction of ROS production, mechanisms that influence the development, growth, metabolism, and survival of C. elegans [8, 24, 26, 29, 32, 35].

4. Conclusions

Continuous exposure of nematodes to Trolox increased their resistance against oxidative stress and survival in comparison to those without treatment. This suggests that Trolox is not only capable of neutralizing oxidative damage but also triggers changes at the physiological and molecular levels that enhance its antioxidant activity and the organism’s antioxidant defenses, allowing the nematodes to cope with oxidative stress and increasing their survival.

Acknowledgements

The author M.A. González-Peña thanks the Universidad de las Américas Puebla (UDLAP) and Consejo Nacional de Ciencia y Tecnología (CONACYT) for the scholarship granted to complete his doctoral degree.

Author details

Marco Antonio González-Peña1, José Daniel Lozada-Ramírez2 and Ana Eugenia Ortega-Regules3*

Department of Chemical and Food Engineering, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

Department of Chemical and Biological Sciences, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

Department of Health Sciences, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

*Address all correspondence to: ana.ortega@udlap.mx

Effect of Oil Content in the Physicochemical Characteristics of Spray-Dried Powders of Anise (Pimpinella anisum L.) Essential Oil Encapsulated by Complex Coacervation

Ruth Hernández-Nava, Nancy Ruíz-González and María Teresa Jiménez-Munguía

Abstract

Complex coacervation is a technique that involves the electrostatic attraction between two biopolymers of opposite charges that surround a compound of interest and can be stabilized by spray drying. This technique has been used to increase the shelf life of functional ingredients, such as essential oils, providing controlled release and offering an alternative for food processing. The aim of this work was to evaluate the effect of the essential oil content present in the coacervate on the physicochemical characteristics of spray-dried powders of anise essential oil. Complex coacervates between gelatin and chia mucilage were used to encapsulate 5.0 and 7.5% (w/w) of anise essential oil. These coacervates were spray-dried with an inlet air temperature of 160°C and a feeding rate of 5 g/min. Powders were characterized by particle size, moisture content, solid yield, flow properties, and encapsulation efficiency. The powder with 7.5% of anise essential oil had the highest encapsulation efficiency (96.6 ± 0.02%). All physicochemical characteristics of the powders were influenced by the essential oil content in the complex coacervates. Complex coacervation between gelatin and chia mucilage proved to be an effective method to encapsulate anise essential oil stabilized by spray drying, with high encapsulation efficiencies.

Keywords: anise essential oil, complex coacervation, spray drying

1. Introduction

Essential oils are secondary metabolites of aromatic plants, obtained from different plant materials [1]. Their chemical composition can be influenced by the climate and the soil where the plants are grown, as well as by the extraction processes used [2]. Essential oils have attracted increasing interest in the food industry because of their antioxidant and antimicrobial properties, which have the potential to eliminate free radicals and inhibit the presence of pathogenic microorganisms in food [3]. However, essential oils are chemically unstable when they are exposed to certain environmental conditions such as light, moisture, oxygen, and elevated temperatures, all of which can cause the loss of their antimicrobial and antioxidant properties [4].

In order to protect essential oils from environmental conditions and from interaction with other components of the food, they can be encapsulated. Encapsulation is a process of building a barrier between the core and wall material to avoid physicochemical reactions and to maintain the biological, functional, and physicochemical properties of the core materials [5]. It is a good resource in the application of essential oils as antimicrobial and antioxidant agents in food because it helps to mask the characteristic strong odor that can alter the sensory characteristics of the food product in which essential oils are used. Other advantages of encapsulation are as follows: it minimizes the interaction of the active compound with the environment by reducing the rate of evaporation or transfer of core components to the outside, and it permits easy handling of the encapsulated substance and allows a controlled release of the active compound [6]. Among the encapsulation methods for essential oils, encapsulation by complex coacervation involves the electrostatic attraction between two polymers of opposite charges, and coacervate formation occurs over a narrow pH range [5]. Complex coacervation consists of four steps: dissolution, emulsification, coacervate formation, and wall formation and drying [5, 7, 8]:

  1. Dissolution: The preparation of aqueous solutions containing two different biopolymers (commonly a protein and a polysaccharide) is achieved by mixing and sometimes applying heat [7].

  2. Emulsification: The material to be encapsulated is added to a solution rich in biopolymers, prepared in the previous step, at a temperature above the gelation point and a pH higher than the isoelectric point of the protein used as an encapsulating agent. Agitation is kept constant until obtaining the desired drop size [5, 7, 8].

  3. Coacervate formation: The pH is then adjusted below the isoelectric point of the protein to initiate the electrostatic interactions between the polymers with opposite charges: the protein has a positive charge and the polysaccharide a negative charge. As a result, the droplets of the dispersed phase agglomerate and the separation of phases takes place [5, 7, 8].

  4. Wall formation and drying: The temperature of the system is decreased below the gelation point of the protein, down to cooling temperatures, which allows the wall to form through the accumulation of the polymer-rich phase around the material of interest. Subsequently, the capsules are dried to obtain a powder [5, 7, 8].

The formation of capsules in complex coacervation is affected by several factors, such as the composition of the emulsion, which includes the mass ratio of the encapsulating agents, the concentration of the encapsulated material, and the quantity of emulsifiers added [9, 10]. On the other hand, a coacervate can be stabilized by spray drying, in which the liquid is atomized into small drops in a stream of hot gas (air) and the solvent evaporates, producing small particles of the encapsulated material [11].

Some research has been conducted to evaluate the formation of complex coacervates encapsulating essential oils, which allows a better understanding of their behavior [12, 13, 14]. The research possibilities on this subject are very broad because it is possible to evaluate the formation of complex coacervates of other essential oils using different encapsulating agents or combining encapsulation techniques. This work aims to evaluate the effect of varying the concentration of anise essential oil on the physicochemical characteristics (particle size, moisture content, solid yield, flow properties, and encapsulation efficiency) of complex coacervate powders obtained by spray drying.

2. Methodology

2.1 Materials

Anise essential oil (AEO) was purchased from Laboratorios Hersol (Mexico City, Mexico). To form the complex coacervates, chia seeds (Salvia hispanica L.) were purchased from Verde Limón Trading Company (Mexico City, Mexico), and gelatin (type B) was purchased from Gelco SA (Bogota, Colombia). Tween 80 (Sigma-Aldrich, USA) was used as an emulsifying agent. Other chemicals used (analytical grade) were purchased from Hycel (Jalisco, Mexico).

2.2 Extraction of chia mucilage

Chia mucilage was extracted using the modified method described in other studies [15, 16]. Chia seeds were hydrated with distilled water in a ratio of 1:20 (w/v) with constant stirring for 4 h at 35 ± 1.0°C. The hydrated seeds were freeze-dried (Triad™ Labconco, USA), and then the mucilage was mechanically separated from the seeds by sieving with a sieve mesh number 35 (500 μm).

2.3 Microencapsulation of anise essential oil by complex coacervation

Complex coacervates were prepared with 5.0 and 7.5% (w/w) of AEO using gelatin and chia mucilage as encapsulating agents and Tween 80 as an emulsifying agent. Coacervates were prepared using an ultrasound homogenizer (Cole-Parmer, CP 505, USA) adjusting the pH by adding HCl 0.1 N dropwise, and the system was cooled down to 25°C to allow the wall formation of the coacervate. The coacervate was spray-dried using a mini spray drier (B-290, BÜCHI Labortechnik, Switzerland). To guarantee a better yield, the coacervates were previously dispersed in aqueous maltodextrin (20% w/w) solution. An inlet air temperature of 160°C and a feeding rate of 5.0 g/min were used. The powders of complex coacervates were collected and stored inside an amber flask at 25 ± 1.0°C until further use.

2.4 Characterization of spray-dried powders of AEO

Particle size: The granulometric distribution of the powders was determined by a dynamic light scattering particle analyzer (Bluewave, Microtrac Inc., USA), and the instrument was previously calibrated. Span, expressing the polydispersity of the powder [17], was calculated using the following equation:

$$\text{Span} = \frac{D_{90} - D_{10}}{D_{50}} \tag{1}$$
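As a quick illustration of Eq. (1), the span can be computed directly from the percentile diameters reported by the particle analyzer; in the Python sketch below, the D50 value is the one reported for the 7.5% AEO sample in Section 3.1, while D10 and D90 are hypothetical and only show the calculation.

```python
def span(d10, d50, d90):
    """Polydispersity span of a particle size distribution, Eq. (1)."""
    return (d90 - d10) / d50

# D50 from the 7.5% AEO sample reported in Section 3.1; D10 and D90 are hypothetical
print(f"Span = {span(d10=6.2, d50=12.43, d90=19.4):.2f}")
```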

The moisture content of powders: Moisture content was determined using the AOAC method 926.12 [18].

Solid yield: It was calculated as the ratio of the weight of powder collected after the spray drying process to the initial weight of solid components in the liquid coacervate before drying [19].

Flow properties: To determine the flow properties of the powders, bulk density and tapped density were obtained [20]. Bulk density (ρbulk) of the powders was determined by introducing a known weight of powder into a graduated cylinder and then measuring its volume. Tapped density (ρtap) was determined in a similar way applying manual mechanical force to compact the powder particles until no difference in the volume was observed. Compressibility index (CI) and Hausner ratio (HR) are flow properties that measure the propensity of a powder to be compressed [20]. To calculate the CI and HR, the following equations were used:

$$\text{CI} = \frac{\rho_{\text{tap}} - \rho_{\text{bulk}}}{\rho_{\text{tap}}} \times 100 \tag{2}$$

$$\text{HR} = \frac{\rho_{\text{tap}}}{\rho_{\text{bulk}}} \tag{3}$$
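The following Python sketch applies Eqs. (2) and (3) to the bulk and tapped densities reported for the 5.0% AEO powder in Table 1; it is only an illustration of the calculation, not part of the original analysis.

```python
def compressibility_index(rho_bulk, rho_tap):
    """Carr compressibility index, Eq. (2), in percent."""
    return (rho_tap - rho_bulk) / rho_tap * 100

def hausner_ratio(rho_bulk, rho_tap):
    """Hausner ratio, Eq. (3), dimensionless."""
    return rho_tap / rho_bulk

# Densities (g/cm3) reported in Table 1 for the 5.0% AEO powder
rho_bulk, rho_tap = 0.233, 0.466
print(compressibility_index(rho_bulk, rho_tap))  # 50.0
print(hausner_ratio(rho_bulk, rho_tap))          # 2.0
```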

Encapsulation efficiency (EE): Surface oil (SO) and total oil (TO) of the dried coacervates were determined to calculate the encapsulation efficiency. SO was determined by the modified method described by other authors [21]. A sample of 0.5 ± 0.01 g of dried coacervates was dispersed in 20 mL of n-hexane with constant stirring for 15 s, then filtered (Whatman #41), and dried in a vacuum oven (G0553-10, Cole-Parmer, USA) at 70 ± 1.0°C to evaporate the excess n-hexane. TO content in the dried coacervates was determined by the modified acid digestion method suggested by other authors [22]. A sample of 0.5 ± 0.01 g of dried coacervates was dissolved in 3 mL of HCl 4 N with constant stirring until complete dissolution, and then 20 mL of n-hexane was added, maintaining constant stirring for 1 h. Using a separation funnel, the HCl was separated, and the oil phase with n-hexane was recovered in a flask and placed in a vacuum oven (G0553-10, Cole-Parmer, USA) to evaporate the n-hexane at 70 ± 1.0°C for 30 min. SO and TO contents were determined gravimetrically. The percentage of EE was calculated using the following equation:

$$\text{EE}\ (\%) = \frac{\text{TO} - \text{SO}}{\text{TO}} \times 100 \tag{4}$$
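A minimal Python sketch of Eq. (4) is shown below; the TO and SO values are hypothetical and serve only to illustrate the gravimetric calculation.

```python
def encapsulation_efficiency(total_oil, surface_oil):
    """Encapsulation efficiency, Eq. (4), in percent."""
    return (total_oil - surface_oil) / total_oil * 100

# Hypothetical gravimetric results (g of oil recovered from 0.5 g of dried coacervate)
print(f"EE = {encapsulation_efficiency(total_oil=0.0350, surface_oil=0.0012):.1f}%")
```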

2.5 Statistical analysis

Analysis of variance (ANOVA) and Tukey's comparison tests were performed to statistically analyze the obtained data, using Minitab v.17 (Minitab, Inc., USA) with a confidence level of 95%.
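For readers who want to reproduce this kind of analysis outside Minitab, the sketch below shows a one-way ANOVA and a Tukey comparison in Python using SciPy and statsmodels; the triplicate values are hypothetical stand-ins, not the study's raw data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate moisture contents (%, wet basis) for the two powders
aeo_5_0 = np.array([2.05, 2.07, 2.09])
aeo_7_5 = np.array([1.24, 1.25, 1.26])

f_stat, p_value = stats.f_oneway(aeo_5_0, aeo_7_5)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

values = np.concatenate([aeo_5_0, aeo_7_5])
groups = ["5.0% AEO"] * 3 + ["7.5% AEO"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # 95% confidence level
```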

3. Results

3.1 Particle size

Samples showed an asymmetrical particle size distribution with a positive skew (to the right) ( Figure 1 ). Span values for the samples with 5.0 and 7.5% of AEO were 1.60 and 1.06, respectively; these small span values indicate a narrow size distribution of the AEO powders [17]. Small particle sizes were observed in the evaluated microcapsules; the smallest particle size was obtained for the sample with 7.5% of AEO (D50 = 12.43 μm). Although the particle size distributions of the samples look similar ( Figure 1 ), a significant difference (P < 0.05) in the median diameter (D50) between the samples was found, which may be due to the variation of the essential oil concentration in the sample and the low standard deviation of the measurements. According to the literature, the particle size of the microcapsules produced depends on the material properties, the solid concentration, the viscosity of the encapsulating material, and the operating conditions of the spray drying process [23]. The obtained values agree with the size range for various powders dried by atomization, which is commonly between 10 and 50 μm [24].

Figure 1.

Particle size distribution of spray-dried powders with 5.0 and 7.5% of AEO.

3.2 Moisture content

The moisture content may change the physicochemical characteristics of powders during storage, making it an important stability variable during the shelf life of the product [25]. The values observed in this study are within the range of moisture content accepted for food powders (1–5%) [26]. As shown in Table 1 , the sample with 5.0% of AEO had the highest moisture content. This result is expected since this sample has fewer solids compared to the sample with 7.5% of AEO. A significant difference between samples (P < 0.05) was obtained, which suggests that the oil content affected the final moisture content of the dried coacervates.

| Property | 5.0% AEO | 7.5% AEO |
| --- | --- | --- |
| Moisture (%, w.b.) | 2.07 ± 0.02 a | 1.25 ± 0.01 b |
| Solid yield (%) | 80.0 * | 79.7 * |
| Bulk density (g/cm³) | 0.233 ± 0.001 a | 0.227 ± 0.001 b |
| Tapped density (g/cm³) | 0.466 ± 0.001 a | 0.454 ± 0.001 b |
| Compressibility index | 50 ± 0.01 a | 50 ± 0.01 a |
| Hausner ratio | 2.00 ± 0.01 a | 2.00 ± 0.01 a |
| EE (%) | 93.0 ± 0.3 a | 96.6 ± 0.2 b |

Table 1.

Characterization of spray-dried powders of AEO.

All properties were determined in triplicate except for solid yield (*).

Different letters within a row indicate significant differences (P < 0.05) between samples.

3.3 Solid yield

For spray drying processes, the yield is strongly influenced by the control of powder formation, avoiding wet quenching or material losses due to stickiness or adhesion of the material to the equipment walls. To reduce wet quenching, it is recommended to operate with a high solid content in the feed solution in order to reduce the quantity of water that must be evaporated during the process, since evaporating more water also increases operating costs [27]. In our study, the solid yield was influenced by the essential oil content of the complex coacervates; the sample with 5.0% of AEO had the highest solid yield ( Table 1 ). The lower oil content in the feed reduced the stickiness of the powder during drying, which contributed to recovering more solids at the end of the process.

Other authors [19, 25, 28] have reported solid yields in the range of 51.8–62.04% for the encapsulation of essential oils by spray drying; these values are lower than the ones obtained in this study. It can be stated that the process conditions used in this study, as well as the formulations of the samples, were adequate for this encapsulation technique, making it viable for practical application.

3.4 Flow properties of powders

The bulk densities obtained for both powders agree with the values reported in the literature for soy milk powder (0.21–0.22 g/cm³) [29], as well as for the encapsulation of rosemary essential oil using gum arabic as wall material (0.25–0.36 g/cm³) [30]. The values obtained were 0.233 and 0.227 g/cm³ for 5.0 and 7.5% of oil content in the coacervates, respectively; owing to the high repeatability of the method, the standard deviations were small, and a significant difference (P < 0.05) was found between the samples.

The tapped density is an important factor related to the packaging, transport, and commercialization of powders, since optimizing this value determines how much material can be held inside a given container [31]. The values obtained for both powders agree with that reported in the literature for the encapsulation of rosemary essential oil by atomization (0.41 g/mL) [32]. As with bulk density, a significant difference (P < 0.05) was found between the samples, owing to the high repeatability of the test ( Table 1 ).

The sample with 5.0% of AEO had higher bulk and tapped densities than the sample with 7.5% of AEO, which is related to its particle size distribution and moisture content. With higher moisture content, the particle size increases, and the bulk density and tapped density increase as well [33]. In addition, the more heterogeneous particle size distribution of the coacervate powder with 5.0% of AEO allows a better rearrangement of the particles when the powder is tapped, decreasing the volume occupied by the powder and, therefore, increasing the tapped density.

However, the flow properties determined for the powders, the compressibility index and the Hausner ratio, did not show significant differences (P > 0.05) between the samples ( Table 1 ). With a compressibility index of 50%, both powders were classified as materials with very poor flowability [34], and according to the Hausner ratio obtained for both powders (2.00), the cohesiveness of the samples is considered high [35].

3.5 Encapsulation efficiency

The coacervates with 7.5% of AEO had the highest encapsulation efficiency (EE) in this study (96.6 ± 0.2%), with a significant difference (P < 0.05) between the samples. The EE achieved in this study is higher than those reported in other studies of essential oil encapsulation by spray drying: rosemary essential oil (38.1 ± 0.2%) [36], orange essential oil (56.02 ± 0.81%) [28], and lemon essential oil (90.6 ± 0.8%) [37]. The differences in the EE values are attributed to the essential oils used, the combined encapsulation techniques applied, and the drying conditions. Complex coacervation is an encapsulation technique recognized as an efficient method to protect lipid compounds, with reported encapsulation efficiencies above 70% [28, 38, 39]. The interaction of the different components creates an important barrier against environmental conditions. The spray drying technique was subsequently applied to confer more stability to the coacervates, providing an additional protective barrier to the essential oils and resulting in the high EEs obtained in this study.

4. Conclusions

Complex coacervation between gelatin and chia mucilage, followed by spray drying, proved to be an effective method to encapsulate anise essential oil. All physicochemical characteristics of the powders (particle size, moisture content, solid yield, flow properties, and encapsulation efficiency) were affected by the oil content of the complex coacervates. The coacervate with 7.5% of AEO had a moisture content adequate for food powders and the highest encapsulation efficiency. Further research is needed to study the practical applications of this complex coacervate system and to explore other sensitive ingredients that need to be protected by encapsulation techniques.

Acknowledgements

This work was supported by the Universidad de las Américas Puebla (UDLAP) and Consejo Nacional de Ciencia y Tecnología (CONACYT). Authors, Hernández-Nava and Ruíz-González, acknowledge the financial support for their PhD studies in food science given by the UDLAP and CONACYT.

Author details

Ruth Hernández-Nava, Nancy Ruíz-González and María Teresa Jiménez-Munguía*

Department of Chemical and Food Engineering, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

Looking for the Killer Combination: pH, Protein, and Thyme Essential Oil Interactions that Weaken Salmonella and Listeria

Leonor Lastra Vargas, Aurelio Lopez-Malo and Enrique Palou

Abstract

The past few years have seen an increase in the search and validation of clean label antimicrobials for their application in ready-to-eat meat products. Essential oils (EOs) from plants have been found to be effective against foodborne pathogenic bacteria such as Salmonella and Listeria under laboratory conditions. In the present study, the effect of thyme EO, pH, and isolated soy protein (ISP), and their interactions on the inactivation of either a Listeria or Salmonella cocktail were studied through a Box–Behnken experimental design. It was observed that as the amount of ISP decreased and thyme EO concentration increased, the number of microorganisms declined; furthermore, at higher ISP concentrations, a higher concentration of thyme EO was needed. This suggests an interaction between ISP and thyme EO that reduces bacterial inactivation. The results obtained from this research are a first step to establish appropriate thyme essential oil concentrations for its application as an antimicrobial on ready-to-eat meat products depending on their pH and protein content.

Keywords: thyme essential oil, antimicrobials, model system

1. Introduction

Ready-to-eat meat products such as cold cuts and deli-style ham or sausages have been implicated in the risk of foodborne diseases caused by the ingestion of pathogenic bacteria such as Salmonella and Listeria [1], especially when sliced at retail points [2]. Antimicrobial compounds used as ingredients in product formulations have helped diminish foodborne disease cases, but the consumers' demand for "clean label" products has led to the search for and validation of alternative compounds such as plant essential oils [3]. These not only possess generally recognized as safe (GRAS) status and wide acceptance from consumers [4] but, as plant secondary metabolites, they also contain antimicrobial components that are important for plant defense [5] and are nowadays regarded as potential antimicrobial ingredients for food applications.

Oregano and thyme have been reported as among the most effective essential oils as antimicrobials, suggesting that low concentrations of them may be needed to achieve the desired effect. This, in turn, is an advantage since high concentrations of an EO can have a negative impact on the sensory acceptability of food products [4]. Although some studies have shown the efficacy of EOs as antimicrobials in meat products, others have reported very low antimicrobial activity and the need for high concentrations of the EO to accomplish the desired effect [3].

In this context, there is a need to evaluate EOs directly in food products or in model systems that simulate food composition [4]. The use of food model media before food application is important since conditions can be controlled in order to determine the interactions between EOs and food components that could influence their antimicrobial activity.

In this regard, the main objective of this work was to evaluate, by means of a Box-Behnken design (BBD) of response surface methodology, the effects of pH, protein, and thyme essential oil concentration on the log colony-forming units (CFU)/mL obtained after either a Salmonella or a Listeria cocktail was subjected to the selected factors in model culture media.

2. Materials and methods

Essential oil and its composition: Thyme essential oil was obtained from the Laboratorios Hersol S.A. de C.V. (San Mateo Atenco, Estado de México, Mexico). It was selected based on both its reported antimicrobial activity and potential flavor compatibility with ready-to-eat meat products.

Tested EO was analyzed by gas chromatography-mass spectrometry (GC-MS) using a gas chromatograph (Agilent Technologies, 6850 N model, Santa Clara, CA) coupled to a triple-axis mass selective detector (Agilent Technologies, 5975C VL model, Santa Clara, CA) following the methodology of Ávila-Sosa et al. [6], using a fused silica HP-5MS (5% phenyl-95% polydimethylsiloxane) capillary column (30 m × 0.250 mm; film thickness, 0.25 μm) and helium as the carrier gas (flow rate 1.1 mL/min). The injection volume was 1 μL of the EO samples prepared by dilution in ethanol (5:100 v/v). Injector and detector temperatures were set at 250 and 280°C, respectively, and the column oven temperature was programmed from 60°C (4 min) to 240°C (10 min) at 4°C/min. Retention indices were calculated using a homologous series of n-alkanes C8 to C18 (Sigma, St. Louis, MO), and compounds were identified by comparison with retention indices from the literature and with the mass profiles available from the US National Institute of Standards and Technology (NIST) library. The relative content of each component was calculated as % peak area. Thyme EO density was determined in triplicate from the relationship between mass and volume in accordance with AOAC method 962.3 [7].
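The retention indices mentioned above are commonly obtained with the van den Dool and Kratz relationship for temperature-programmed runs; the Python sketch below assumes that relationship and uses hypothetical retention times, so it illustrates the calculation rather than reproducing the authors' values.

```python
def linear_retention_index(t_x, t_n, t_n1, n):
    """van den Dool & Kratz linear retention index for temperature-programmed GC.

    t_x  : retention time of the compound of interest
    t_n  : retention time of the n-alkane (n carbons) eluting just before it
    t_n1 : retention time of the (n+1)-alkane eluting just after it
    """
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Hypothetical retention times (min) for a compound eluting between C10 and C11
print(linear_retention_index(t_x=12.80, t_n=12.10, t_n1=13.55, n=10))  # ~1048
```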

Microorganisms and cocktail preparation: Salmonella and Listeria strains ( Table 1 ) were maintained on trypticase soy agar (BD Bioxon, Cuautitlán Izcalli, Estado de México, Mexico) at 4°C and cultured in 10 mL of trypticase soy broth at 35°C for 18 h prior to use.

| Strain | Provider |
| --- | --- |
| L. monocytogenes (Scott A); Salmonella typhimurium (ATCC® 14028) | Universidad de las Américas Puebla Food Microbiology Laboratory collection |
| Salmonella spp. | Benemérita Universidad Autónoma de Puebla (BUAP) Food Microbiology Laboratory collection, isolated from tomatoes by Dra. María Lorena Luna Guevara |
| Salmonella typhi (ATCC® 19430) | BUAP Chemical Sciences Faculty Food Microbiology Laboratory collection |
| Listeria monocytogenes (CDBB-B-1426); Salmonella Typhimurium (ATCC® 13311) | Instituto Tecnológico de Estudios Superiores de Monterrey NutriOmics group |

Table 1.

Studied bacterial strains and providers.

Cocktails were prepared with equal amounts of each strain by mixing 1 mL of a fresh culture of each strain containing approximately 1 × 10⁸ CFU/mL in a sterile tube. Serial dilutions of each cocktail were made to obtain the desired inoculum concentration.

Minimum inhibitory (MIC) and bactericidal (MBC) concentrations: The minimum inhibitory concentration was determined by the agar dilution method [8, 9] in Petri dishes with trypticase soy agar for the strains L. monocytogenes (Scott A) and Salmonella Typhimurium (ATCC 14028). EO was added to the agar after sterilization, at 45°C, with Tween 80 (0.2%) as an emulsifier. For each strain, 50 μL of a fresh culture diluted to contain approximately 1 × 10⁴ CFU/mL was spiral plated (Autoplate 4000, Spiral Biotech, Norwood, MA) for each concentration of thyme EO tested, 0.065–0.093% (w/w).

MIC was considered as the lowest concentration of EO at which no bacterial growth was observed after 24 h of incubation at 35°C, and MBC as the lowest concentration that resulted in no visible growth on the plate and no growth in trypticase soy broth after streaking the plate surface and further incubating under optimal conditions [10].

Plate preparation and inoculation: The media for plates were prepared by mixing isolated soy protein (ISP) adjusted to the desired concentration, trypticase soy broth as the nutrient source, bacteriological agar as solidifying agent, NaCl to adjust water activity to 0.98, and distilled water. The mixture was heated until boiling under continuous agitation and sterilized by autoclaving for 5 min at 17 lb/in². After the media cooled down to 45°C, the pH was adjusted with citric acid (10%), and EO at the desired tested concentration was added under continuous agitation with a magnetic stirrer. The media were poured into Petri dishes and left to cool for at least 2 h prior to inoculation. Plates at room temperature were inoculated with 30 μL (in three drops of 10 μL) of the cocktail dilution which, by previous determination, allowed counts of 20–70 CFU in each drop.

Experimental design: A Box-Behnken design (BBD) of response surface methodology was applied using Minitab 18.1. (Minitab, Inc., State College, PA) to study the effect of medium pH, isolated soy protein (ISP) concentration (% w/w), and EO concentration (% w/w), as well as their interactions on the growth of Salmonella or Listeria cocktails measured as fraction of surviving microorganisms when compared to CFU/mL found in each cocktail.

Each factor was investigated at three different levels (−1, 0, +1), shown in Table 2 . The experimental design included 15 sets of test conditions, including 3 replicated center points; the experiments were randomized to avoid bias.

| Factor | Independent variable | Coded level −1 | Coded level 0 | Coded level +1 |
| --- | --- | --- | --- | --- |
| A | Isolated soy protein (% weight/weight) | 10.0 | 11.5 | 13.0 |
| B | pH | 5.5 | 6.0 | 6.5 |
| C | Essential oil (% weight/weight) | 0.19 | 0.22 | 0.25 |

Table 2.

Studied independent variables and their levels for the tested Box-Behnken design.
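The coded design in Table 2 can be written out directly; the Python sketch below (an assumed reconstruction, not the authors' script) builds the 12 edge runs and 3 center points, maps the coded levels to the factor values of Table 2, and randomizes the run order.

```python
import itertools
import random

# Coded Box-Behnken design for three factors: 12 edge runs plus 3 center points
coded_runs = []
for i, j in itertools.combinations(range(3), 2):       # pairs of factors varied at +/-1
    for a, b in itertools.product((-1, 1), repeat=2):
        run = [0, 0, 0]
        run[i], run[j] = a, b
        coded_runs.append(run)
coded_runs += [[0, 0, 0]] * 3                          # replicated center points

# Coded-to-actual mapping taken from Table 2
levels = {
    "ISP (% w/w)": {-1: 10.0, 0: 11.5, 1: 13.0},
    "pH":          {-1: 5.5,  0: 6.0,  1: 6.5},
    "EO (% w/w)":  {-1: 0.19, 0: 0.22, 1: 0.25},
}

random.shuffle(coded_runs)                             # randomized run order
for run in coded_runs:
    print(run, {name: mapping[c] for (name, mapping), c in zip(levels.items(), run)})
```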

3. Results and discussion

Essential oil composition: EO analysis revealed the presence of 37 different compounds accounting for 99.86% of the tested oil. Those making up 0.1% or more of the EO are displayed in Table 3 , the most abundant compounds being carvacrol (31.87%), β-cymene (also known as meta-cymene) (25.22%), linalool (17.86%), 1R-α-pinene (8.33%), and γ-Terpinene (6.04%).

| Compound | Percentage |
| --- | --- |
| 1R-α-Pinene | 8.33 |
| Camphene | 0.85 |
| 1-2-Methylene-(1S)-beta-pinene | 0.81 |
| β-Pinene | 0.86 |
| β-Cymene | 25.22 |
| γ-Terpinene | 6.04 |
| Terpinolene | 0.80 |
| Linalool | 17.86 |
| Carvacrol | 31.87 |
| Caryophyllene | 0.98 |
| 5-Isopropenyl-2-methyl-7-oxabicyclo[4.1.0]heptan-2-ol | 0.11 |
| Caryophyllene oxide | 3.12 |
| 12-Oxabicyclo[9.1.0]dodeca-3,7-diene | 0.13 |
| Cembrene | 0.14 |
| 2,4,7,9-Tetramethyl-5-decyne-4,7-diol | 0.10 |
| 1,2,5,5,8a-Pentamethyl-1,2,3,5,6,7,8,8a-octahydronaphthalen-1-ol | 0.50 |
| α-Isomethyl ionone | 0.57 |
| Androst-4-en-11-ol-3,7-dione, 9-thiocyanato | 0.48 |
| Retinoic acid | 0.19 |
| Abietic acid | 0.33 |

Table 3.

Main compounds found in tested thyme essential oil determined by gas chromatography-mass spectrometry.

It is worth noting that although the major thyme essential oil components found in this study agree with other reports [11, 12], thymol, a characteristic and major component of thyme essential oil [13], was not present in the tested essential oil; however, its monoterpene precursor (γ-terpinene) [14] was identified. Despite this, the antimicrobial activity of the tested thyme essential oil did not seem to be affected. Ruiz-Navajas and her team [15] determined that the predominant compounds in Thymus piperella essential oil were carvacrol (31.92%), p-cymene (16.18%), γ-terpinene (10.11%), and α-terpineol (7.29%); this essential oil proved to be effective, decreasing aerobic mesophilic and lactic acid bacterial counts when added to chitosan edible films applied as coatings to cooked cured ham.

Minimum inhibitory and bactericidal concentrations: Table 4 exhibits the values of tested thyme essential oil MIC and MBC against L. monocytogenes (Scott A) or S. Typhimurium (ATCC 14028).

| Strain | MIC % (w/w) | MIC % (v/v) | MBC % (w/w) | MBC % (v/v) |
| --- | --- | --- | --- | --- |
| L. monocytogenes (Scott A); S. Typhimurium (ATCC 14028) | 0.0740 | 0.0800 | 0.0927 | 0.1000 |

Table 4.

Minimum inhibitory and minimum bactericidal concentrations of tested thyme essential oil * against two of the studied strains.

*Density of the tested thyme essential oil = 0.9277 ± 0.0008 g/mL.


It can be observed that the tested essential oil was equally effective against both strains. Similar results have been obtained in other studies, where 0.1% (v/v) of thyme EO was enough to inhibit L. monocytogenes [11]; in contrast, Hammer et al. [16] reported that more than 2.0% (v/v) of thyme EO was needed to inhibit the same S. typhimurium strain.

Since other authors have reported the need for an increased concentration of the studied EOs for bacterial control when assessed in food matrixes instead of model media [17], such as in the case of ham [18] and barbecued chicken [19], a second set of tests was performed to determine EO concentrations suitable for the experimental design. Concentrations of 0.22, 0.25, 0.28, 0.30, and 1.00% (w/w) were tested with 10% ISP at pH 6.5 against Salmonella cocktails. It was observed (data not shown) that the proportion of surviving cells declined considerably with 0.22% (w/w) thyme EO, while no growth was observed at concentrations above 0.30% (w/w); therefore, 0.19, 0.22, and 0.25% (w/w) were set as the EO concentration levels for the experimental design.

In contrast with MICs derived from optimal growth conditions, in model media with increased protein content, an increased EO concentration was needed to inactivate bacteria. This is in accordance with several studies reporting that food components can interact with EO constituents impairing their function [3, 5]. The antimicrobial activity of Myagropsis myagroides ethanol extract was reduced in the presence of soybean oil, as a fat component, and >5% beef extract, as protein component but was not influenced by starch, as carbohydrate component [20].

Microbial inactivation: The complete experimental design and bacterial culture responses are shown in Table 5 .

| Run | A (ISP) | B (pH) | C (thyme EO) | Listeria cocktail** (log CFU/mL*) | Salmonella cocktail*** (log CFU/mL*) |
| --- | --- | --- | --- | --- | --- |
| 1 | −1 | −1 | 0 | 6.64 ± 0.03 | 6.66 ± 0.40 |
| 2 | +1 | −1 | 0 | 6.61 ± 0.19 | 7.40 ± 0.12 |
| 3 | −1 | +1 | 0 | 6.55 ± 0.06 | 6.75 ± 0.07 |
| 4 | +1 | +1 | 0 | 7.53 ± 0.06 | 8.22 ± 0.12 |
| 5 | −1 | 0 | −1 | 5.98 ± 2.93 | 8.56 ± 0.07 |
| 6 | +1 | 0 | −1 | 7.66 ± 0.15 | 8.46 ± 0.11 |
| 7 | −1 | 0 | +1 | 5.78 ± 0.06 | 6.80 ± 0.24 |
| 8 | +1 | 0 | +1 | 6.33 ± 0.29 | 7.99 ± 0.24 |
| 9 | 0 | −1 | −1 | 7.21 ± 0.27 | 8.63 ± 0.11 |
| 10 | 0 | +1 | −1 | 7.33 ± 0.15 | 8.10 ± 0.06 |
| 11 | 0 | −1 | +1 | 6.66 ± 0.05 | 6.42 ± 0.24 |
| 12 | 0 | +1 | +1 | 6.51 ± 0.13 | 7.47 ± 0.09 |
| 13 | 0 | 0 | 0 | 6.74 ± 0.15 | 7.43 ± 0.16 |
| 14 | 0 | 0 | 0 | 6.45 ± 0.21 | 7.64 ± 0.16 |
| 15 | 0 | 0 | 0 | 6.45 ± 0.27 | 7.52 ± 0.17 |

Table 5.

Box-Behnken design with independent variables and obtained response values.

*n = 6.

**Initial count of the Listeria cocktail = 8.32 ± 0.16 log CFU/mL (n = 24).

***Initial count of the Salmonella cocktail = 8.61 ± 0.05 log CFU/mL (n = 24).


The significance of the three studied factors was evaluated by analyzing the responses in uncoded units. The results of the Box-Behnken experimental design on the log CFU/mL of the Listeria and Salmonella cocktails allowed the construction of reduced quadratic models by stepwise backward elimination of terms with p > 0.10 for the Listeria cocktail model and p > 0.05 for the Salmonella cocktail model. In the case of the Listeria cocktail, the lack of fit was nonsignificant (p > 0.05), while for the Salmonella cocktail, the experimental data exhibited a good fit to the model, with an R² of 0.914. The terms and uncoded coefficients included in the generated models are displayed in Table 6 .

| Term | Listeria cocktail coefficient* | Listeria P-value | Salmonella cocktail coefficient* | Salmonella P-value |
| --- | --- | --- | --- | --- |
| Constant | 45.9 | <0.001 | 57.03 | <0.001 |
| A | 1.654 | 0.001 | −2.742 | <0.001 |
| B | −18.79 | 0.371 | 5.58 | <0.001 |
| C | 60.5 | 0.002 | −461.8 | <0.001 |
| B² | 1.582 | 0.016 | −1.148 | <0.001 |
| C² |  |  | 455.8 | <0.001 |
| AB |  |  | 0.2404 | <0.001 |
| AC | 6.31 | 0.072 | 7.16 | <0.001 |
| BC |  |  | 26.31 | <0.001 |

Table 6.

Coefficients of the reduced quadratic models for Listeria and Salmonella cocktails.

Coefficients are presented in uncoded units. A, B, and C represent isolated soy protein concentration, pH, and essential oil concentration, respectively.


As can be seen from these results, the interaction between ISP and thyme EO concentration, the quadratic term of pH, and the linear effects of ISP and thyme EO concentration have a significant effect on the log CFU/mL of the Listeria cocktail. For the Salmonella cocktail, all variables (pH, ISP, and EO concentration) and their interactions have a significant effect on the log CFU/mL, except the quadratic term of ISP concentration.
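As an illustration of how such reduced models can be obtained outside Minitab, the following Python sketch fits a full quadratic model to the mean Listeria responses of Table 5 in uncoded units and removes the least significant term until all remaining terms meet the threshold; it ignores the replicate structure and term hierarchy, so its coefficients will not exactly match Table 6.

```python
import pandas as pd
import statsmodels.api as sm

# Uncoded factor settings and mean Listeria responses (log CFU/mL) from Table 5
runs = [  # (ISP % w/w, pH, EO % w/w, log CFU/mL)
    (10.0, 5.5, 0.22, 6.64), (13.0, 5.5, 0.22, 6.61), (10.0, 6.5, 0.22, 6.55),
    (13.0, 6.5, 0.22, 7.53), (10.0, 6.0, 0.19, 5.98), (13.0, 6.0, 0.19, 7.66),
    (10.0, 6.0, 0.25, 5.78), (13.0, 6.0, 0.25, 6.33), (11.5, 5.5, 0.19, 7.21),
    (11.5, 6.5, 0.19, 7.33), (11.5, 5.5, 0.25, 6.66), (11.5, 6.5, 0.25, 6.51),
    (11.5, 6.0, 0.22, 6.74), (11.5, 6.0, 0.22, 6.45), (11.5, 6.0, 0.22, 6.45),
]
df = pd.DataFrame(runs, columns=["A", "B", "C", "logCFU"])

# Full quadratic model terms for the three factors
X = pd.DataFrame({
    "A": df.A, "B": df.B, "C": df.C,
    "A2": df.A**2, "B2": df.B**2, "C2": df.C**2,
    "AB": df.A*df.B, "AC": df.A*df.C, "BC": df.B*df.C,
})
y = df.logCFU
alpha = 0.10  # threshold used for the Listeria model

while X.shape[1] > 0:
    model = sm.OLS(y, sm.add_constant(X)).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() <= alpha:
        break
    X = X.drop(columns=pvals.idxmax())  # backward elimination of the worst term

print(model.summary())
```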

It is clear from Figure 1 that, for the Listeria cocktail, as ISP concentration decreases and thyme EO concentration increases, the number of bacteria (log CFU/mL) decreases, these effects being more pronounced at pH 6.0. For the Salmonella cocktail ( Figure 2 ), a decrease in the number of bacteria (log CFU/mL) can be observed as thyme EO concentration increases and pH and ISP concentration decrease.

Figure 1.

Contour plots for Listeria cocktail (log CFU/mL) as a function of (a) thyme essential oil and pH, (b) thyme essential oil and isolated soy protein, or (c) pH and isolated soy protein.

Figure 2.

Contour plots for Salmonella cocktail (log CFU/mL) as a function of (a) thyme essential oil and pH, (b) thyme essential oil and isolated soy protein, or (c) pH and isolated soy protein.

It can also be observed that the effect of thyme EO concentration becomes more relevant as ISP concentration increases. This becomes evident for both bacterial cocktails when considering the MIC and MBC values obtained from the previous tests in trypticase soy agar; under those conditions, a concentration of only 0.0927% (w/w) of thyme EO was enough to inactivate the strains L. monocytogenes (Scott A) and S. Typhimurium (ATCC 14028). However, under model media conditions, 0.25% (w/w) of thyme EO still allowed the survival of more than 5 and 6 log CFU/mL of the Listeria and Salmonella cocktails, respectively.

The interaction of food components and EO constituents can impair their function. These results reflect the protective effect of the protein on bacteria, which could be interacting with the EO and thereby reducing its effectiveness [21]. Veldhuizen and his team [22] incubated carvacrol with bovine serum albumin (BSA) for 15 min and then filtered the solution through a 10 kDa pore-size filter, which could easily be traversed by carvacrol (molecular mass of 150 Da) but not by BSA (molecular mass of 66 kDa). By measuring the amount of carvacrol recovered and comparing it to appropriate controls, these authors concluded that the reduction in the amount of carvacrol recovered was the result of the binding of carvacrol to BSA. This, in turn, explained the reduced antilisterial activity observed in the presence of egg yolk and BSA: the binding of carvacrol to the protein reduced the effective concentration of carvacrol in solution.

The microstructure of the food matrix must be considered as another possible reason for the diminished antimicrobial activity of EOs when tested in real foods in contrast to culture broths [23]. In our case, the effect of higher protein concentration on the reduced antimicrobial activity of thyme EO could also be explained by the physical nature of the media: in solid form, a higher protein concentration could limit EO diffusion and in turn decrease its effectiveness [21, 23]. Skandamis et al. [23] studied S. typhimurium in the presence or absence of oregano EO at two pH values, comparing its growth in liquid and solid media; they found that while the physical state of the media had no significant effect on the growth of colonies in the absence of the EO, the type of media did influence the inhibitory efficacy of oregano EO. Identical treatments with 0.03% oregano EO showed greater effectiveness in liquid culture than in gelatin gel, attributed to a better dispersion of the tested EO in broth, which could increase the interaction between cells and EO antimicrobial components.

For the Listeria cocktail, only a 2.54 log reduction could be expected at pH 6.0 with the lowest ISP (10% w/w) and highest thyme EO (0.25% w/w) tested concentrations, while for the Salmonella cocktail, the highest reduction (2.19 log) was achieved at an ISP concentration of 11.5% w/w with a pH 5.5 and 0.25% w/w of thyme EO.

4. Conclusions

Even though thymol was not found as a component of the tested thyme essential oil, its antimicrobial activity did not seem to be affected, exhibiting minimum inhibitory and bactericidal concentrations similar to or better than those found in other published reports. As expected, more thyme essential oil is needed to inhibit bacteria in food model media with high protein content than in laboratory media. Experiments with higher concentrations of the tested thyme essential oil should be carried out to seek more pronounced log reductions in both studied bacterial cocktails. Additionally, complementary experiments are needed to validate the models obtained in this study.

Acknowledgements

The author Lastra-Vargas gratefully acknowledges Universidad de las Américas Puebla (UDLAP) and the National Council for Science and Technology (CONACyT) of Mexico for the financial support of her PhD studies. This work was supported by CONACyT (grant number CB-2016-2101-283636) and UDLAP (grant numbers 2409 and 3555).

Author details

Leonor Lastra Vargas*, Aurelio Lopez-Malo* and Enrique Palou*

Department of Chemical and Food Engineering, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

*Address all correspondence to: leonor.lastravs@udlap.mx, aurelio.lopezm@udlap.mx and enrique.palou@udlap.mx

Alginate Beads with Essential Oil: Water and Essential Oil Release Behavior

Maria J. Paris, Enrique Palou and Aurelio López-Malo

Abstract

Essential oils have been studied for their antimicrobial activity in the vapor phase, which allows their application as antimicrobials for food preservation. This work evaluated water and essential oil release from cinnamon essential oil alginate beads in the vapor phase, in different packaging systems and at different temperatures. Alginate beads with 5% (v/v) cinnamon essential oil were prepared and placed in two types of containers (hermetic and perforated packages) at two different temperatures (4 and 25°C). Their weight (water and essential oil) loss was monitored over time. Essential oil released from beads in both packages at 4°C was also evaluated through gas chromatography-mass spectrometry. Water loss was fastest in the perforated package at 25°C, followed by the hermetic package at 25°C, the perforated package at 4°C, and the hermetic package at 4°C. At 4°C, the essential oil release rate was higher in the perforated than in the hermetic package, and after 21 days of storage, beads in both packages had already lost 96.27 and 96.19% of their initial content of cinnamaldehyde, respectively. Understanding water and essential oil release in different surroundings helps to choose suitable conditions for bead storage before their application in food packages and to determine the number of beads needed to inhibit foodborne microorganisms in fresh food packages.

Keywords: cinnamon essential oil, alginate beads, vapor phase, water loss, food package headspace

1. Introduction

Consumers are increasingly aware of what they eat and demand more natural foods, with fewer synthetic additives and preservatives. Furthermore, food researchers and producers seek ways to develop effective and stable systems for foodborne pathogen control based on natural antimicrobials. Among the wide variety of natural antimicrobials explored, essential oils have shown considerable effectiveness under many studied conditions. They are found in plant organs such as flowers, buds, fruits, stems, roots, and leaves and are constituted by volatile components that provide them with several properties, including antimicrobial activity [1, 2]. Some essential oils, such as cinnamon, thyme, clove, oregano, and mustard essential oils, have been reported as very effective against molds and bacteria [3, 4, 5]. Cinnamon essential oil is mainly composed of cinnamaldehyde (76.34%), which is also the major component responsible for its antimicrobial activity. Other compounds with antimicrobial activity can be found in this essential oil, namely, caryophyllene, caryophyllene oxide, linalool, and geraniol, among others, and might work synergistically with cinnamaldehyde in the antimicrobial effect of cinnamon essential oil. Some studies have demonstrated the inhibitory effect of cinnamon essential oil against food spoilage molds such as Aspergillus flavus, Aspergillus niger, Colletotrichum gloeosporioides, Rhizopus nigricans, and Penicillium expansum, as well as against some mesophilic aerobic bacteria [4, 6, 7, 8, 9].

Due to the volatility of their active components, essential oils can be effective against microbial growth when applied in the vapor phase, which is advantageous for food preservation without affecting sensory attributes. Besides, when applied in the vapor phase, lower concentrations are needed to inhibit microbial growth than in direct application [10]. Additionally, the use of carrier polymers capable of holding and yet releasing essential oil into the vapor phase has been suggested for food preservation.

Considering the above, it is interesting to contemplate the development of active packaging systems with essential oil for food microbial control in the vapor phase, in which essential oils are incorporated in carriers. Active packages are defined as those that interact with the food they hold to increase the food product shelf life [8]. Antimicrobial active packages containing essential oil can be designed to release volatile essential oil components into the package headspace, which eventually reach the food surface. Once in contact with foodborne microbes on the food surface, the lag phase can be extended, and complete growth inhibition is possible. The essential oil effect on foodborne microbes will, however, depend on food composition, which can support or prevent microorganism growth.

Some researchers have recently tried essential oil incorporation in films, while others propose their encapsulation in different polymeric matrices. Alginate is a polysaccharide obtained from brown algae that is widely used for the encapsulation of active compounds due to its biocompatibility, low toxicity, and low cost [11, 12, 13]. In food science and technology, it is, indeed, one of the most used polymers for the immobilization of enzymes, organic acids, amino acids, and essential oils [14]. In combination with bivalent (Ca²⁺, Ba²⁺, Fe²⁺, Sr²⁺) or trivalent (Al³⁺) cations, it forms soft and thermally stable gels, which is important in the preparation of alginate beads by extrusion [11, 15]. Extrusion is one of the simplest and most used methods to encapsulate active components in alginate beads [11] and consists of dropping alginate hydrogel containing the compounds of interest, through a needle, into a cation solution. Immediate gelation occurs due to the ionic interaction [16]. Some properties of the beads might affect encapsulation efficiency, such as bead size, shape, surface morphology, and the transfer properties of the polymer. The more spherical the beads, the stronger they are and the less prone to fracture [11]. On the other hand, the physical properties of the hydrogel improve with increasing alginate molecular weight and with increasing cation concentration [12, 14]. Cation solution ionic strength, surface tension, and viscosity, as well as needle internal diameter and dropping height, might also affect encapsulation efficiency. Some authors have tried this method for essential oil encapsulation with promising results [16, 17].

The success of an antimicrobial active package with essential oil depends on carrier polarity, permeability, and porosity but also on the volatility and molecular weight of the essential oil volatile components [18, 19]. Carrying material must be permeable and at the same time should have such barrier properties that avoid excessive or undesired loss of active components [20]. For these reasons, migration tests, run under conditions like those used for food storage, provide insights on the migration potential of the active compound in an active package.

In an aqueous environment, conditions such as temperature and pH affect active compound release [21], while in the vapor phase, temperature and relative humidity might be relevant. The considerable amount of water held within bead structural network also influences encapsulated compound release to the surrounding environment [21], and so it is important to study bead water release and its relation to other component release.

The objective of this study was to evaluate water and essential oil release from alginate beads containing cinnamon essential oil during storage in different packaging conditions and temperatures.

2. Methodology

2.1 Materials

The cinnamon essential oil was obtained from Laboratorios Hersol S.A. de C.V. (Mexico City, Mexico), and soybean oil (Nutrioli®) was bought in a local supermarket. While trans-cinnamaldehyde standard was provided by Sigma-Aldrich (St Louis, USA), sodium alginate was acquired from Sigma-Aldrich (Toluca, México).

2.2 Alginate bead preparation by extrusion

For the preparation of alginate beads with essential oil, sodium alginate (3% w/v) was dispersed in distilled water, with constant stirring at 50°C, until obtaining a homogeneous dispersion. Then, Tween 80 (0.2% v/v) was incorporated into the dispersion, followed by cinnamon essential oil (5% v/v). The mixture was emulsified at 1778 rpm, for 4 min using a high-speed mixer (Silverson L4R).

The oil in water emulsion obtained was loaded into a 10 mL syringe. This was adapted to a piston pump (Cole-Parmer, USA), and the emulsion was pumped at a 0.1 mL/min flow, into a calcium chloride solution (1 M). After extrusion of 3 g of gel (approximately 30 min), the formed beads were recovered, rinsed with distilled water, and dried in a laminar flow chamber for 1 hour.

Beads with soybean oil (Nutrioli®) were prepared using the same procedure described above, substituting the weight of essential oil for the same weight of soybean oil. Since the essential oil was added to the alginate gel in volumetric proportions, cinnamon essential oil and soybean vegetable oil densities had to be determined, to guarantee that the same weight of these components was used in the respective gels, to allow weight loss comparison between the two types of beads obtained.

2.3 Diameter of alginate beads with cinnamon essential oil

The beads’ average diameter was determined with a vernier caliper using 10 beads each time. The experiment was performed in duplicate.

2.4 Water activity of alginate beads with cinnamon essential oil

A dew point hygrometer (AQUA LAB, 4TEV, Decagon Devices, Inc., USA) was used to determine the water activity of the alginate beads with cinnamon essential oil. Once the equipment was switched on, it was left to equilibrate for 15 min. Subsequently, approximately 1 g of beads was placed in the sample pan and introduced into the sample port. This was closed, and the measurement was started. The experiment was run in duplicate, with three replicas each time.

2.5 Alginate beads weight loss monitoring

2.5.1 In hermetic package

Three lidless Petri dishes were filled with approximately 0.5 g of alginate beads with cinnamon essential oil and placed into a 1.7 L plastic hermetic package. This was closed and stored at 25°C, and the bead weight was registered over time. The experiments were also run at 4°C. All experiments were carried out in duplicate. The same procedures were followed for alginate beads with soybean oil.

2.5.2 In perforated fruit package

Approximately 0.5 g of alginate beads with cinnamon essential oil was deposited in a Petri dish and placed into a 485 mL perforated fruit package. This was closed and stored at 25°C, and the bead weight was registered over time. The same set of experiments was also run at 4°C. All experiments were carried out in duplicate, with three replicas each time. The same procedures were repeated for soybean oil beads.

2.6 Determination of relative humidity inside hermetic and perforated fruit packages

A fiber hygrometer (Durotherm, Haiterbach, Germany) was placed, without its basal container, inside an empty hermetic or perforated fruit package. The package was closed and stored at 25°C. The relative humidity value displayed by the equipment was registered until it reached a constant value. Once constant, this value was taken as the relative humidity of the environment inside the package. The same experiments were run at 4°C. The measurements were taken in duplicate.

2.7 Cinnamaldehyde release from alginate beads with cinnamon essential oil

Lidless Petri dishes were filled with approximately 0.5 g of cinnamon essential oil alginate beads; three of them were placed into a 1.7 L plastic hermetic package and one in a 485 mL perforated fruit package. The packages were closed and stored at 4°C for 7, 14, or 21 days.

After the storage time, approximately 0.5 g of alginate beads from each condition (hermetic or perforated fruit package) was left overnight in 40 mL of sodium citrate solution (0.055 M) in a closed glass flask under gentle stirring (578 rpm) with a magnetic stirrer. When the beads were completely disintegrated, 10 mL of ethyl acetate was added to recover the essential oil released from the beads. Stirring was maintained at 578 rpm for 15 min. The ethyl acetate phase was recovered, and its cinnamaldehyde concentration was quantified using an Agilent 6850 N gas chromatograph (GC) (Agilent Technologies, Santa Clara, CA, USA) coupled to an Agilent 5975C mass detector (MS) (Agilent Technologies, Santa Clara, CA), as described by Aguilar-González et al. [3]. The GC unit contained an Agilent HP-5 column (30 m × 0.25 mm, 0.25 μm film thickness), with helium used as carrier gas at a 1.1 mL/min flow. The injector temperature was 250°C, and the maximum oven temperature was 300°C, with the following program: 60°C held for 2 min, followed by a 10°C/min ramp to 250°C, which was finally held for 10 min.

Cinnamaldehyde peak areas obtained by GC-MS were related to the cinnamaldehyde concentration in the beads using a calibration curve. For the construction of the curve, six ethyl acetate solutions with different concentrations of cinnamaldehyde standard were prepared and submitted to GC analysis. The chromatographic conditions were the same as those described for the samples obtained from the disintegrated alginate beads. For each solution injected, a cinnamaldehyde peak area was obtained, and the calibration curve was obtained by plotting the cinnamaldehyde concentrations of the solutions against their respective peak areas.
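A minimal Python sketch of such a calibration curve is shown below; the concentrations and peak areas are hypothetical and only illustrate the linear fit and its use to quantify a sample.

```python
import numpy as np

# Hypothetical calibration data: cinnamaldehyde concentration (mg/mL) in ethyl
# acetate versus the corresponding GC-MS peak area (arbitrary units)
conc = np.array([0.05, 0.10, 0.20, 0.40, 0.80, 1.60])
area = np.array([1.1e6, 2.3e6, 4.4e6, 9.0e6, 17.8e6, 35.5e6])

# Linear calibration: concentration as a function of peak area
slope, intercept = np.polyfit(area, conc, 1)

sample_area = 6.2e6  # hypothetical peak area of a bead extract
print(f"Estimated concentration: {slope * sample_area + intercept:.3f} mg/mL")
```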

2.8 Cinnamon essential oil and soybean vegetable oil density

The densities of cinnamon essential oil and soybean vegetable oil, at room temperature, were determined using a 10 mL glass pycnometer. The pycnometer was first weighed empty. Then, it was filled with distilled water, cleaned outside with a paper tissue, and weighed again. Finally, the pycnometer was entirely filled with either oil, cleaned outside with a tissue, and weighed. The densities were calculated using Eq. (1):

$$\rho_S = \frac{S - E}{W - E} \times \rho_w \tag{1}$$

where S is the weight of the pycnometer with either oil, E is the empty pycnometer weight, W is the weight of the pycnometer with water, and $\rho_S$ and $\rho_w$ represent, respectively, the oil and distilled water densities.
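The sketch below applies Eq. (1) with hypothetical pycnometer weighings and an assumed water density of 0.997 g/mL at room temperature; it only illustrates the calculation.

```python
def pycnometer_density(sample_full, empty, water_full, rho_water=0.997):
    """Oil density from pycnometer weights (g), Eq. (1); result in g/mL.

    rho_water is an assumed water density at room temperature.
    """
    return (sample_full - empty) / (water_full - empty) * rho_water

# Hypothetical weighings (g) for a 10 mL pycnometer
print(pycnometer_density(sample_full=30.15, empty=20.00, water_full=29.97))
```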

3. Results and discussion

3.1 Alginate bead diameter

The obtained alginate beads with essential oil had an average diameter of 1.96 ± 0.09 mm, which can facilitate their handling and incorporation in the food package, due to their relatively large size.

3.2 Alginate beads weight loss monitoring

Weight loss over time was monitored in beads stored in both hermetic and perforated fruit packages. The high water activity of the alginate beads with cinnamon essential oil (0.971 ± 0.002) contributed to their loss of water into the headspaces of the studied packages, whose relative humidity was between 53.5 and 89.5%. For the same container type, storage at 25°C led to higher weight loss than at 4°C ( Figure 1 ). The literature suggests that at higher temperatures, the release rates of both water and essential oil components, such as cinnamaldehyde, are higher. At higher temperatures, increased molecular motion facilitates diffusion; also, gel expansion increases the mobility of the polymer, which further promotes component diffusion through bead pores [22].

Figure 1.

Cinnamon essential oil bead weight variation over time. The values are percentages of the initial bead weight.

Figure 1 also shows that the effect of temperature on weight loss is smaller in the perforated food package than in the hermetic one. In the fruit package, due to its perforations, regardless of the temperature, the volume of air with which the beads are in contact is larger than for beads contained in the hermetic package. Since air saturation and equilibrium are not reached as easily in the perforated fruit package as in the hermetic package, the mass gradient allows more rapid water and essential oil diffusion from the beads. In the hermetic package, the air is not easily renewed, and so in this case the effect of temperature on bead weight loss is more pronounced.

Lastly, bead water loss, and consequently the weight loss, is sharper when the relative humidity is the lowest. Figure 1 indicates higher weight loss for the perforated package at 25°C, followed by hermetic package at 25°C, perforated package at 4°C, and hermetic package at 4°C, and the relative humidities in the headspace of these containers were 53.5, 76.0, 86.0, and 89.5%, respectively.

3.3 Bead essential oil loss monitoring

Figure 2 provides a rather empirical way to estimate essential oil release from the beads. The graphs represent the difference between the weight loss from beads with essential oil, where both water and essential oil can be lost to the environment, and the weight loss from beads with soybean oil, where, in principle, water is the only component with appreciable volatility. Therefore, monitoring the weight difference between essential oil beads and vegetable oil beads over time could be a way to estimate the essential oil loss from the cinnamon beads. The graphs represent the cumulative difference between the weight loss from soybean oil beads and the weight loss from essential oil beads over time.

Figure 2.

Differences between the percentage of weight loss in soybean oil beads and the percentage of weight loss in essential oil beads. The values are cumulative over time.

Figure 2 suggests an increase in essential oil release over time, for the four studied cases. These results, however, should be only indicative of the trend observed for essential oil released and should not be taken as exact values for essential oil release. In point of fact, water release from the two kinds of beads could be different due to the different compositions of the beads, which might entrap water differently. On the other hand, the difficulty to maintain relative humidity stable inside the packages during measurements could threaten the accuracy of the method.

As for temperature and humidity effect, the results are not clear enough to draw a conclusion.

Figure 3 provides a more accurate determination of the essential oil loss at 4°C. This temperature was chosen for the trial since a considerable fraction of minimally processed or ready-to-eat food is kept at this temperature before consumption. Essential oil loss is faster in the perforated fruit package than in the hermetic one. While after 7 days of storage the beads in the perforated fruit package had lost 74% of their initial cinnamaldehyde concentration, the ones in the hermetic package had lost 43%. In the first case, the diffusion increases sharply until 14 days and then levels off, while in the second case there is a sharp increase, followed by a slow increase, and then the diffusion tends toward equilibrium.

Figure 3.

Percentage of initial cinnamaldehyde present in bead lost after 7, 14, and 21 days of incubation at 4°C.

Finally, the results for essential oil loss at 4°C, presented in Figure 3 , agree with the results for weight loss at the same temperature exhibited in Figure 1 , since both cases indicate higher releases in perforated fruit package.

3.4 Cinnamon essential oil and vegetable soybean oil density

The density of the cinnamon essential oil used in this study was slightly higher than that of water, with a value of 1.02 g/mL. This value is not in accordance with the literature, which normally reports essential oil densities lower than that of water [23]. Soybean oil density was 0.92 g/mL, which was lower than that of water, as expected, and similar to the value reported in the literature (0.91 g/mL) [24].

4. Conclusions

The study of water release behavior from beads is important to guarantee the best storage conditions for the beads and to keep them from losing their appearance qualities. If the beads' water loss is considerable, their appearance changes from white, shiny beads to yellowish, dried beads (data not shown). Therefore, during storage, beads should be kept in high-humidity environments where a rapid equilibrium with the atmosphere is achieved, if substantial water loss is to be avoided. The results obtained suggest storage in a hermetic environment with limited headspace air content and low temperature.

Temperature was the most influential parameter for weight loss (which was higher at 25°C), followed by the type of package used (weight loss was faster in the perforated fruit package).

At refrigeration temperature, the essential oil release rate was higher in perforated fruit package than in the hermetic package, which is expected due to the higher mass gradient in the first case.

Additional studies are needed to evaluate the temperature effects on essential oil release. This is helpful to assure the minimum essential oil loss during storage and to understand the essential oil release profile in a chosen package system for fresh food products.

Acknowledgements

The author M. J. Paris thanks Universidad de las Américas Puebla (UDLAP) and the Secretary of Foreign Affairs (Secretaría de Relaciones Exteriores) of Mexico for financing her doctoral studies in food science. This work was supported by the Mexican National Council of Science and Technology (CONACyT) under Grant [CB-2016-2101-283636].

Author details

Maria J. Paris, Enrique Palou and Aurelio López-Malo*

Department of Chemical and Food Engineering, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

*Address all correspondence to: aurelio.lopezm@udlap.mx

A Simplified Generative Model Based on Gradient Descent and Mean Square Error

Omar López-Rincón and Oleg Starostenko

Abstract

Generative models can have a high computational complexity, which is reflected in their implementation and training and in the specialized hardware needed to implement and test them. We propose a model with a simplified architecture of only one neural network with two layers. It can be conditioned to generate specific data with requested features. It is a simplified alternative to models with higher complexity, such as variational autoencoders (VAEs) or generative adversarial networks (GANs). Unlike the mentioned models, this method works with only one network of two fully connected layers. This chapter presents a simplified architecture with the conditioning advantage of generative adversarial networks and the ease of training of autoencoders. It can be used for generative purposes with a straightforward implementation, and it can run on a CPU for rapid prototyping. Also, the latent space can be visualized as in variational autoencoders, and it can be queried by conditioning and sampling from it. The learning error remains around 1% on average when abstracting the data into only two dimensions, and the model can generate interpolations over the latent space, producing new images from the continuous space.

Keywords: generative models, data completion, conditional generation

1. Introduction

Artificial intelligence is mostly known in the areas of data analysis and classification. Generative models, on the other hand, are suitable for the generation (synthesis), completion, or even removal of data. A recent application in image processing, which combines both classification and generation, is image captioning. One advantage of automatic image captioning is indexing, which is important in content-based image retrieval [1]. Supervised learning learns on labeled datasets, which take time to prepare and clean. The dataset context should be bias-free for a model to train properly, and this can be crucial in domains like criminal justice, infrastructure, or finance [2], or even in clinical settings [3]. Another example is the labeled maps used for road planning: these maps are satellite pictures with labels used in navigation systems, and the labels need to be prepared [4]. Generative models can help to expand datasets or generate new examples from the abstraction of features of different classes.

A class is the assigned classification given to a data value. Another area, involved in classification and generation, is natural language processing (NLP), with techniques used directly in captioning [5]. A successful method in automatic image processing is inpainting [6]; this model learns a distribution from data by sampling and learning from incomplete data. It is a supervised learning model with good results at data completion of images; it finds the distribution of surrounding areas of the pixels to complete the missing information. The downside of this method is that it is constrained to work only around existing information. It is not able to generate new data from conditions and specific constraints or without surrounding information for context. It has a simple architecture, but it is constrained to analyze pixel by pixel which makes it slow. It uses a dataset with images as targets and the same image with occlusions on different spots, creating gaps of information. Models like the restricted Boltzmann machine [7] or maximum likelihood [8] work on finding the probability density function of data and then maximizing the probability of it, but it is intractable so they result in only approximations. When these models are conditioned, generation can be specified with queries like text descriptions [9].

The generative adversarial networks (GANs) are an efficient solution for the generation of data [10]. They consist of two neural networks competing against each other in a min-max game. One of the networks takes Gaussian noise as input and generates data; the second network is trained to discriminate real data from generated data. Both networks compete constantly and get better at each epoch. Training continues until a Nash equilibrium is reached, which is how both networks learn and start generating realistic data. GANs improve on the image generation quality of the variational autoencoders (VAEs), but they are unstable, and finding parameters for training is not trivial. The Wasserstein GAN is a variation that implements techniques to find the hyperparameters in an interactive way [11].

The architecture known as the VAE is a model that learns a Bayesian distribution from the examples in a dataset. These distributions, learned in an unsupervised way, enable the generation of new data from the interpolation of point sets in the latent space [12]. PixelRNN is a model that uses recurrent neural networks to generate images; generation proceeds pixel by pixel, sampling each pixel from the previously generated ones. The problem with this kind of generation is the processing time, both at training and at generation runtime [9]. PixelCNN is a model that conditions the generation process with a one-hot vector [13] and uses convolutional layers; generation is also pixel by pixel, which again leads to a high processing runtime [14]. This chapter presents a One Network Generator (ONG) model, tested on the MNist dataset of handwritten digits, a simplified generative model based on the traditional mean square error [15] and gradient descent [16] for stable training. The rest of this chapter is organized as follows. In Section 2, the proposed methodology for data generation using one conditioned neural network is detailed. In Section 3, the results obtained by testing the proposal are presented. In Section 4, conclusions are drawn.

2. Methodology and the proposed ONG

The model consists of a neural network with two fully connected layers and sigmoid activation functions. It was tested with samples from an image dataset used to condition both sides of the network. At the output layer, the result is measured with the mean square error (Eq. (1)), and the network is trained with gradient descent (Eq. (2)) and backpropagation [16]. The inputs are updated using the delta errors of the input layer.

\mathrm{MSE} = \frac{1}{n}\sum_{k=1}^{n}\left(Y_k - \hat{Y}_k\right)^2 \qquad (1)

where Y_k is the kth output target and Ŷ_k is the output computed by the neural network.

\Theta = \Theta - \eta \nabla_{\Theta} J(\Theta) \qquad (2)

where Θ represents the weights that are updated at each training epoch, η is the learning rate that scales the magnitude of the adjustment, and ∇_Θ J(Θ) is the gradient of the error, which gives the direction of the update. All samples have associated x and y values, which are initialized with the same coordinates at the center of the learning space, (0.5, 0.5). The learning space is known as the latent space, which is the corresponding abstraction of the information; it represents different aspects of the data [12]. In this example, the latent space is a two-dimensional continuous space between zero and one, represented graphically along the x- and y-axes for visualization. The initial weights of the layers are drawn from a Gaussian distribution with a mean of zero and a variance of one.

The model was trained with a dataset of 10,000 labeled samples of handwritten numbers. The model creates a binary vector with 10 indexes, from 0 to 9, whose value indicates the class the image belongs to. For example, for the image of a number two, the only position with value one in the vector is the one at index 2 (X_{i=2} = 1); the rest are set to zero. The vector is then extended with two additional floating-point values that correspond to the learning space. The image database used is the MNist csv file [17], a comma-separated value file with 10,000 examples of images of handwritten digits of 28 by 28 pixels. MNist was chosen to show results visually in the same way they are shown in the GAN [10] and VAE [12] papers.

At the training stage, the network iterates over a random subset of the dataset. It keeps updating the weights inside the layers to improve the output, as well as the variables associated with each sample of the training data (the latent space) at the input. The values associated with the samples are stored in a representing vector created at the beginning of training; this vector has 12 dimensions and accompanies each of the examples. When training with each sample on every epoch, the model compares the generated output of the network with the target image and updates both the weights and the input value, which is the conditioner vector, so that the representation of the latent space is learned inside the vector. This vector starts as a one-hot vector [13], with the slight difference that it is explicitly extended with as many extra entries as there are latent-space dimensions, which hold the latent coordinates. The first 10 positions indicate, with a binary value, which number from 0 to 9 the image represents. For example, to query an image of the number two, the initial vector is [0,0,1,0,0,0,0,0,0,0,0.5,0.5], and the two floating-point values are used to learn the abstraction of the data (see Figure 1 ).

Figure 1.

Conditioning the neural network with a target image and a target incomplete vector.
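As an illustration, the following minimal Python sketch builds such a 12-dimensional conditioner vector, a one-hot class indicator followed by the two latent coordinates. The function name and the use of NumPy are illustrative assumptions; the chapter's own implementation was not written this way.

```python
import numpy as np

def make_conditioner(digit, latent_xy=(0.5, 0.5)):
    """Build the 12-dimensional conditioner: a one-hot encoding of the digit
    class in positions 0-9, followed by the two latent-space coordinates."""
    vec = np.zeros(12)
    vec[digit] = 1.0              # one-hot class indicator
    vec[10], vec[11] = latent_xy  # latent coordinates, initialized at the center
    return vec

# Query vector for the digit "2" at the center of the latent space
print(make_conditioner(2))  # [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.5 0.5]
```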

This helps to determine which values the network encodes in the forward pass (Eq. (3)).

X' = f'\left(W' f(W X + b) + b'\right) \qquad (3)

where X is the input vector, W and W′ are the weight matrices of each layer, b and b′ are the biases, and f and f′ are the activation functions of each layer, e.g., sigmoid or ReLU. When backpropagation runs, the deltas of the first layer update both dimensions of the input (x, y), which are the reference values of every example in the dataset. Each input of the dataset keeps its reference to the updated values computed from the delta error propagated from the MSE (see Figure 2 ).

Figure 2.

Architecture of the ONG model. At each epoch, the output and input are updated.
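The NumPy sketch below illustrates one training step of the scheme just described: the two-layer forward pass of Eq. (3), the MSE of Eq. (1), the gradient-descent weight update of Eq. (2), and the propagation of the deltas back to the input so that the two latent entries of the conditioner are updated. Layer sizes, the learning rate, and all names are assumptions made for illustration; the chapter's actual program was written in C# on the .NET Framework.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Assumed sizes: 12-entry conditioner in, 64 hidden units, 784 output pixels (28 x 28)
n_in, n_hid, n_out = 12, 64, 784
W1 = rng.normal(0.0, 1.0, (n_hid, n_in));  b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 1.0, (n_out, n_hid)); b2 = np.zeros(n_out)

def train_step(x, target, eta=0.1):
    """One ONG-style update on a single sample.

    x      : 12-entry conditioner (one-hot class + two latent coordinates),
             modified in place so that the latent entries are learned per sample.
    target : flattened 784-pixel target image with values in [0, 1].
    """
    global W1, b1, W2, b2
    # Forward pass, Eq. (3): X' = f'(W' f(W X + b) + b')
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    err = y - target
    # Backpropagation of the deltas through both layers and down to the input
    d2 = err * y * (1.0 - y)          # output-layer delta
    d1 = (W2.T @ d2) * h * (1.0 - h)  # hidden-layer delta
    d0 = W1.T @ d1                    # delta propagated to the input vector
    # Gradient-descent updates, Eq. (2)
    W2 -= eta * np.outer(d2, h);  b2 -= eta * d2
    W1 -= eta * np.outer(d1, x);  b1 -= eta * d1
    # Only the two latent entries of the conditioner are updated
    x[10:12] = np.clip(x[10:12] - eta * d0[10:12], 0.0, 1.0)
    return np.mean(err ** 2)          # per-sample MSE, Eq. (1)
```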

At each epoch, the reference values (x, y) that describe the images of the dataset become distributed over the learning space (latent space). The resulting distribution along both axes serves as a guide for sampling from the latent space and retrieving an interpolation in a continuous space.

3. Results and discussion

The experiments were run on a computer with 24 GB of RAM and an Intel(R) Xeon(R) E5-2603 v3 processor at 1.60 GHz running 64-bit Windows 10. A program implementing the model was developed in C# on the .NET Framework. The ONG model conditions the neural network on both the inputs and the generated data. As each epoch runs, the weights are updated, and the error deltas from the first (input) layer compute the updated values of the axes (dimensions) of the learned space. The x, y dimensions stored with the input values of each image span the latent space, and these two dimensions are the abstraction of the data in that area. The model learns the latent space as it keeps evolving, learning to generate the data by pulling similar references together (see Figure 3 ).

Figure 3.

Evolution of the latent space learned with the updated input.

The learned latent space shows a correlation between its two dimensions; this means that the distribution depends on both dimensions and does not fill the entire space spanned by both axes. In addition, abstracting the data into only two dimensions keeps the sampled results blurry, as in the VAEs.

Even when the distribution of the learned space has gaps in certain areas, the images generated from the given distribution are an interpolation of the learned dataset. The results are controlled by sampling the learned space with input vectors; again, the first 10 binary values represent the requested image number, and the extra two values are sampled from the latent-space area, with values in the range (0, 1) (see Figure 4 ).

Figure 4.

Sampled images at specific (x, y) values from the latent space. The sampling vectors have different conditioners (the position set to one); the ONG network was trained with these representations, and the results are obtained by picking that same point in the space.

The model can sample in a continuous manner from the learned space: picking points within the valid areas yields interpolated samples over continuous values of both axes. The results show a soft transition between consecutive samples of the selected class, with features that vary according to the position of the (x, y) values (see Figure 5 ).

Figure 5.

Sampled digits in continuous x, y values from the latent space interpolation. The conditioner of the vector changes only on the dimension that represents each class: 9, 5, and 2, respectively, top to bottom.
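As a sketch of how such a continuous query could look, the snippet below sweeps the latent x-coordinate while keeping the class fixed at "9" (the top row of Figure 5), reusing the weight matrices from the training sketch above; the `generate` helper and the chosen sweep values are illustrative assumptions.

```python
import numpy as np

def generate(x, W1, b1, W2, b2):
    """Forward pass of the trained network; returns a 28 x 28 image for a conditioner x."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2).reshape(28, 28)

# Sweep the latent x-coordinate with the class fixed at "9" to obtain a soft interpolation.
frames = []
for t in np.linspace(0.1, 0.9, 9):
    q = np.zeros(12)
    q[9] = 1.0              # conditioner: requested digit class
    q[10], q[11] = t, 0.5   # continuous point sampled from the latent space
    frames.append(generate(q, W1, b1, W2, b2))
```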

We were able to generate the image of the requested number; training took up to 30,000 epochs, with a learning error no lower than 1% as computed from the MSE (Eq. (1)) at the training stage. The generation was an interpolation of a continuous value sampled from the latent space. The latent space given by the reference values showed a correlation between both dimensions, which becomes a problem when sampling from outside the distribution of the latent space: the generated information is incomplete or incorrect. In addition, the abstraction of the data in two dimensions keeps the sampled results blurry, as in the VAEs. If more dimensions are used to abstract the information, black gaps appear inside the distributions of the latent space, and sampling from that empty space results in no information or in incomplete generated images.

A possible solution for completing this information, or for managing the latent space, could be training a K-autoencoder [18] to learn the sample space and generate a secondary latent space based on the incomplete dimensions, so that only valid data mapped from the control space is obtained. This would increase complexity, and it still needs to be tested. To improve generation, the update of the inputs should stop once the visible (x, y) axes of the latent space are covered. This is necessary because the encoded information and the resembled images are projected into a space of only two dimensions; as long as the projection of the inputs keeps updating, the learning keeps changing as well, so at some point the inputs must stop being modified. To condition on different aspects of a dataset, those features must already be labeled, as in supervised learning; otherwise, the network tries to cluster the information in a loop.

This technique of keeping the input in a fixed learned state helps to increase the quality of the generated images. Another problem of the dimensionality reduction is the variation among the classes of the dataset. By reducing this variation or increasing the dimensionality of the representation, the quality of the generated data increases and the learning error of the network decreases. The model was also able to learn without the labels, but it was slower, and the error rate settled at a higher level.

4. Conclusions

This method is a simplified version of a two-network neural generator reduced to a single-network architecture, which can learn to generate interpolations from given samples as the GAN and VAE architectures do. Both of those models are based on two neural networks whose parameters are optimized with extensions of the MSE; these differences make their implementation and training more complex, a complexity that is reduced in our proposed method. That makes this model a better fit as a generative model in simpler cases. In all these tests we only tried an abstraction in two dimensions, whereas the mentioned methods commonly use on the order of 100-120 dimensions for the abstraction, and we were still capable of conditioning the network to generate a requested image.

With the MNist csv file [17], we tested our one-network architecture to generate learned abstractions of the different handwritten numbers of the dataset and obtained new versions of the images which contained interpolated characteristics of the queried data.

The main difference in complexity with respect to other methods is that the one proposed in this chapter uses a single network with a two-layer architecture. The mentioned architectures commonly use two networks with more than two layers each to cluster the information and start generating new information from the abstraction of the features in a given dataset.

Author details

Omar López-Rincón* and Oleg Starostenko*

Department of Computing, Electronics and Mechatronics, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

*Address all correspondence to: omar.lopezrn@udlap.mx and oleg.starostenko@udlap.mx

An Approach for River Delta Generation Using L-Systems

Luis Oswaldo Valencia-Rosado and Oleg Starostenko

Abstract

The generation of river deltas is an open problem within the area of procedural terrain generation; there is not much research on automatically generating this type of feature. A river delta is a branching structure that is formed when a river reaches a calm water body such as the ocean or a lake; depending on its characteristics, it can be classified as dominated by the tides, the waves, or the river itself. Lindenmayer systems, or L-systems, are grammars capable of generating natural-looking branching structures, such as vegetation; for this reason, an adaptation of these systems is proposed for generating branching river skeletons. The strings generated by the L-system are graphically interpreted using turtle graphics. This approach proves useful for generating skeletons of deltas dominated by the tide or the river. The results presented in this chapter are preliminary, since this is a work in progress.

Keywords: procedural terrain generation, Lindenmayer system, branching river, skeleton generation

1. Introduction

Automatic terrain generation is a technique in which a computer system generates virtual environments with as little human interaction as possible. This has been an active topic for several years and has become more prominent as production costs have risen steadily in video game production, especially since high-definition graphics became standard [1]. Terrain generation techniques have evolved through the years, focusing on improving realism [2, 3] or achieving real-time generation [4, 5]; nevertheless, most works focus on generating a few terrain features, the most prominent being mountains and rivers, while the creation of other features, such as river deltas, is an open area for research.

In nature, a delta is formed when a river leads into a bigger and mostly stagnant water body, such as a lake or the ocean. The sediments carried by the river are deposited as the river speed decreases and the river cannot carry them anymore. These deposits form new land and provoke the division of the river into smaller branches called distributaries. The deltas can be classified depending on the influence of tides, waves, and the river itself [6]. As can be observed in Figure 1 , the shape of many of these deltas resembles other branching shapes found in nature such as trees, bushes, or corals; therefore, a technique used for the generation of branching structures could be adapted and used to generate the skeletons of river deltas. Lindenmayer systems or L-systems are a grammar-based technique that has been used for generating plant structures among other types of shapes [7]. A method for generating skeletons that resemble delta shapes is presented; it is based on L-systems and turtle graphics. In the following sections, there will be a brief exposition of previous works that generate river deltas, an explanation about L-systems and how they were adapted for generating delta skeletons, the presentation of graphic results and their discussion, and finally, the conclusions and future research work.

Figure 1.

Satellite images of river deltas from NASA GloVis database [8]. (a) Mississippi River and (b) Fly River.

2. Related work

Within the procedural generation area, the only work that can produce river deltas, to the best of the authors' knowledge, is that of Teoh [9]. The author presents a simple method in which a river is first generated; when it reaches the ocean, new land is generated at its mouth in an irregular semicircular shape. Random points on the new coast are then selected, and from those points, new distributary rivers run until reaching the original river mouth. This method is fast, but the resulting deltas are very limited, and it is not capable of generating many of the river delta types.

Other methods for the modeling of river deltas in computer systems, such as the one of Seybold [6], belong to geological simulations; therefore, they contain data about the terrain composition, slope, and other variables. The amount of data and their processing produces very realistic graphical results in terms of delta shapes, but the computational cost is considerable, and thus, these models are not suitable for procedural generation of virtual worlds. Finally, methods like that of Justić et al. [10] are mathematical models of the delta behavior.

L-systems are quite versatile and have a wide array of applications. The work of Leitner et al. [11] is related to plant generation, but it is focused on another branching structure: the roots. They present and adapt a method that generates root systems depending on the concentrations of minerals in the soil.

On the other hand, there are applications unrelated to vegetation, such as the generation of video game levels [12]. The authors propose pairing L-systems with grammar evolution to improve the variety of the generated levels. L-systems have also been used to generate cities [13], where the distribution of city blocks is obtained by the recursive subdivision of the system. Even the generation of virtual creatures is proposed in [14]; this method is also paired with evolutionary algorithms to make changes to the generated creatures.

Based on the analysis of the different methods, a string of symbols is first generated using an L-system; afterward, these symbols are used as instructions for a graphic interpretation. For this implementation, turtle graphics are used. In the upcoming sections, both the L-systems and the turtle graphics will be explained.

3. L-systems

These systems are formal grammars with parallel rewriting that were originally designed to describe plant growth. The grammar G is formed by an alphabet V, which contains the symbols that can be used, an initial state ω called the axiom, and a finite set of generation rules P; that is, G = (V, ω, P).

Symbols are put together to form words. The set of words that can be generated by the system is denoted by V*; the axiom can be a symbol or a word. The generation rules are applied to a symbol α, and the result is a word χ, which is called the successor; in the case where there is no generation rule for a symbol α, the symbol is repeated as is when rewriting the strings. The rewriting occurs for each character in order, producing a new word, and the process is repeated for an arbitrary number of iterations. In this chapter, the systems used are deterministic, which means that each character can have at most one rule; therefore, with the same parameters, the results are the same for every execution [7].
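A minimal Python sketch of this parallel, deterministic rewriting is shown below; the function name and signature are illustrative, not the chapter's implementation.

```python
def rewrite(axiom, rules, iterations):
    """Deterministic L-system rewriting: in each pass, every symbol of the word
    is replaced in parallel by its successor; symbols without a rule are copied
    unchanged."""
    word = axiom
    for _ in range(iterations):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
    return word
```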

4. Turtle graphics

This is a vector method that uses a cursor over a Cartesian plane. The cursor has an (x, y) position and a current angle θ, and there is a step size r. The next position of the cursor is given by (x′, y′), where x′ = x + r·cos θ and y′ = y + r·sin θ. The cursor draws a line between the current and the next position, and the next position then becomes the current one. For this implementation, the width and length of the line remain constant.

The required parameters are the initial position (x 0, y 0), the initial angle θ 0, a change in angle β that is added to or subtracted from the current angle, the step size r, which represents the line length, and a number of iterations i, which controls how many times the L-system rewriting is performed.

The symbols used for this implementation, and their respective graphic interpretation, are presented in Table 1 .

Symbol | Interpretation
F | "Forward": a straight line is drawn between two points
+ | The angle β is added to the current angle θ
− | The angle β is subtracted from the current angle θ
[ | The current position and angle are saved into a list
] | The last position and angle saved in the list are taken
L, R, S, X | Transition symbols with no graphic interpretation

Table 1.

L-system symbols and their turtle graphic interpretation.
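The symbols of Table 1 can be interpreted with a short routine such as the following sketch, which walks the word and returns the line segments to draw. The function name and the use of degrees are assumptions; the chapter's actual implementation draws directly with Python 3 and the Tk module.

```python
import math

def interpret(word, x0, y0, theta0, beta, r):
    """Turtle-graphics interpretation of an L-system word (Table 1).
    Returns the list of line segments ((x1, y1), (x2, y2)) to be drawn."""
    x, y, theta = x0, y0, theta0
    stack, segments = [], []
    for symbol in word:
        if symbol == "F":    # draw a straight line of length r
            nx = x + r * math.cos(math.radians(theta))
            ny = y + r * math.sin(math.radians(theta))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif symbol == "+":  # add beta to the current angle
            theta += beta
        elif symbol == "-":  # subtract beta from the current angle
            theta -= beta
        elif symbol == "[":  # save the current position and angle
            stack.append((x, y, theta))
        elif symbol == "]":  # restore the last saved position and angle
            x, y, theta = stack.pop()
        # L, R, S, X and any other symbol have no graphic interpretation
    return segments
```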

For example, let us assume the following L-system:

Axiom: ω = X

Production rules: X → −[XF] + [FX]; F → +FFX

The results after four iterations are shown in Table 2 . It can be observed that the symbols “+,” “−,” “[,” and “]” do not change because they do not have rules.

Table 2.

Resulting words after several iterations and their graphic interpretation.
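Using the two sketches above, the example system can be rewritten and interpreted as follows; the turtle parameters chosen here are arbitrary illustrations, not the values behind Table 2.

```python
# Example system: axiom X, rules X -> -[XF]+[FX] and F -> +FFX
rules = {"X": "-[XF]+[FX]", "F": "+FFX"}
word = rewrite("X", rules, 4)                                  # four iterations
segments = interpret(word, x0=0.0, y0=0.0, theta0=90.0, beta=25.0, r=5.0)
print(len(word), "symbols,", len(segments), "line segments")
```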

5. Implementation for generating river deltas

The contribution of this work within the area of procedural terrain generation is to provide a new method that generates river deltas. Likewise, this is a new adaptation of the L-systems. In this section rules and guidelines are provided to adapt the grammar and the graphic interpretation to achieve the generation of river delta skeletons.

L-systems are a form of chaotic system [15]; in practice, this means that a small change in the initial conditions, the axiom, or the rules will have an important impact on the results. These changes are hard to predict and control. For this reason, the experiments to obtain different systems that produce branching rivers were performed in an empirical way; nevertheless, certain observations were taken from [7]:

  • The symbols “[” and “]” are mandatory for creating branching structures in plants; this holds true for branching river skeletons.

  • To control growth, there are rules and symbols that do not translate into a line in the interpretation; they are used to slow down the growth of the shape. The more rules there are and the shorter they are, the slower the growth will be.

  • Using the axiom within the rules augments the self-similarity of the resulting shapes, but if used too many times, the shape gets cluttered within a few iterations.

There are other guidelines specific for river deltas:

  • Rivers are sinuous, and using the symbols “+” and “−” between draw-line symbols achieves this; they should be used in pairs, otherwise the shape becomes spiral-like.

  • The angle β should be lower than 45 degrees and greater than 15 degrees to generate natural-looking skeletons.

6. Results and discussion

The results are different systems, which are summarized in Table 3 . They are numbered in the first column; the second column contains the model, consisting of the axiom and the production rules; and the following columns contain the parameters of each system. For this chapter, the implementation was done using Python 3 and the Tk module, which provides the standard graphical user interface for Python. Images of the results are presented after Table 3 .

System | Axiom | Production rules | Iterations | β | r | (x0, y0) | θ0
1 | ω = I | I → F++F−−R[X]; X → FR[I]; R → +FI−[LS]; S → F+I; L → F−I | 9 | 20 | 17 | (10, 30) | 0
2 | ω = I | I → −F+F[−R]−F+F[+R]I; R → F−LSF−F; S → F−I; L → F+I | 8 | 25 | 15 | (40, 420) | 20
3 | ω = I | I → +F−F[−R]X; X → +F−F[+R]I; R → F−L+SF; S → F−I; L → F+I | 11 | 15 | 15 | (40, 220) | 20
4 | ω = I | I → FF[−R]X; X → FF[+R]I; R → FLS; S → F−I; L → F+I | 12 | 25 | 15 | (140, 120) | 20
5 | ω = I | I → F[X]; R → F[−I]F[+I]F; X → +F−[R]I; F → F+I− | 10 | 30 | 10 | (10, 130) | 310
6 | ω = I | I → −F[−X]+[+X]FI; X → +F−F[−R]FF; R → F[+S][−S]F; S → [−I]; F → −F+I | 6 | 20 | 10 | (10, 320) | 60
7 | ω = I | I → −F[−FR]+F[+FR]FI; R → F[+R−I]F[−R+I]F; F → −FI | 5 | 20 | 9 | (10, 150) | 40
8 | ω = I | I → +F−F[−RI]−F+F[RI]−FI+; R → F−LSF−F+F; S → F−I; L → F+I | 5 | 15 | 14 | (40, 220) | 44
9 | ω = I | I → −F[−RI]+F[R]−FI; R → F[+I]F[−I]F; F → −F+S; S → FL; L → F | 6 | 15 | 15 | (40, 220) | 60
10 | ω = I | I → +F−F−RX; X → +F−F+RI; R → F[SL]F; S → F−I+; L → F+I− | 10 | 20 | 15 | (110, 150) | 60
11 | ω = I | I → +F−F[−RI]−F+F[RI]−FI; R → F−LSF−F+F; S → F−I; L → F+I | 4 | 15 | 15 | (40, 220) | 70

Table 3.

Resulting L-systems.
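As a usage sketch in line with the Python 3 and Tk setup described above, the segments produced by one of the systems can be rendered on a Tk canvas; the `draw` helper below is an illustrative sketch rather than the chapter's program, and system 1 is used only as an example.

```python
import tkinter as tk

def draw(segments, width=800, height=600):
    """Render the line segments of a generated skeleton on a Tk canvas."""
    root = tk.Tk()
    canvas = tk.Canvas(root, width=width, height=height, bg="white")
    canvas.pack()
    for (x1, y1), (x2, y2) in segments:
        canvas.create_line(x1, y1, x2, y2)
    root.mainloop()

# System 1 of Table 3, written with plain hyphens for the "−" symbol
rules_1 = {"I": "F++F--R[X]", "X": "FR[I]", "R": "+FI-[LS]", "S": "F+I", "L": "F-I"}
word = rewrite("I", rules_1, 9)
draw(interpret(word, x0=10, y0=30, theta0=0, beta=20, r=17))
```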

According to the classification presented in [6], systems 1, 2, 3, and 4 (which are shown in Figure 2 ) generate deltas that are tide-dominated. These present many distributaries that become entangled and cover a wide area; an example is the Fly River presented in part (b) of Figure 1 . These systems generate skeletons that become too cluttered and stop looking natural when there are too many iterations. A problem when using straight lines is that their entanglement may look like a grid as observed in systems 2 and 4.

Figure 2.

Systems that generate tide-dominated deltas: (a) system 1, (b) system 2, (c) system 3, and (d) system 4.

Systems 5, 6, and 7 ( Figure 3 ) generate deltas that are of the river-dominated type. These have more elongated shapes, and their branches tend to have smaller lengths; also, the main branch can be identified. The Mississippi River is the best example of this type of deltas; it is shown on part (a) of Figure 1 .

Figure 3.

River-dominated deltas: (a) system 5, (b) system 6, and (c) system 7.

When generating this type of delta, having too many iterations does not have much effect; because of the fractal nature of the generation, the general shape remains the same. For this reason, it makes no sense to have too many iterations, neither for this type of delta nor for the previous one; in both cases, the number of iterations in every system remains low, as can be observed in Table 3 . To obtain river-dominated deltas, the symbol “F” has a rule that calls the axiom every time; this means that each line will start another system in the following iteration.

Systems 8, 9, 10, and 11, depicted in Figure 4 , present combinations of the previously mentioned types of deltas. System 9 is more closely related to the river-dominated type, while system 10 has more influence of the tide-dominated type. Systems 8 and 11 are variations of system 2, and the difference is recalling the axiom more times and adding more straight lines; therefore, they have more self-similarity. The problem with having more lines is that they tend to look less sinuous and thus less natural.

Figure 4.

Deltas that present combined characteristics: (a) system 8, (b) system 9, (c) system 10, and (d) system 11.

Also, system 8 has one more “+” symbol than “−” symbols in the axiom, and because of this, all the branches tend to move upwards. As mentioned before, balance is needed in the number of angle-changing symbols. System 11 is another variation in which the last “+” is omitted; therefore, the shape is even more curved, and it would become spiral-like if more iterations were performed. This also proves how challenging the control over these systems can be, as the changes are minimal but the rendered images are quite different.

Figure 5 depicts the evolution of system 5. It needs five iterations to reach the first stage shown, the second stage is reached at seven iterations, and the last one is at nine iterations. Exponential growth is easily observed. When comparing the systems, it can be observed that those systems that have longer rules need fewer iterations to reach the desired skeleton. This also applies to those that recall the axiom several times.

Figure 5.

From left to right system 5 at 5, 7, and 9 iterations, respectively.

There are important differences from the L-systems that generate vegetation. In plants, changes in angle occur almost exclusively when creating a new branch; that is, the symbols “+” or “−” appear just after a new branch is opened with the symbol “[”. In the case of the rivers, the change-angle symbols surround the draw-line symbols “F” and may also appear outside of branching. As an example, if system 9 has its change-angle symbols eliminated from the rule that rewrites each line (symbol F), most of the sinuosity is eliminated, and the resulting shape resembles a pine branch more than a river, as can be seen in Figure 6 .

Figure 6.

System 9 when eliminating changes in angles.

Another difference is that trees use exponential growth to differentiate the trunk from the branches and leaves. This is not desirable for river deltas, since the sinuosity and the presence of branching are not restricted to areas far from the start point, as they are in trees.

The presented skeletons are preliminary results, and they only use straight lines. Also, the systems are deterministic; this means that each symbol only has one rule at most and each system generates the same skeleton every time when using the same parameters.

River deltas have different widths in their branches; this changes the way they look, but the underlying structure remains constant. For example, section (a) of Figure 7 shows the entire Skeidarársandur river delta, and section (b) shows a zoom-in of the central part, which reveals smaller entangled distributaries. The entirety of the delta has these formations, but the greater width of some of the distributaries affects the shape and hides the root-like structure. As a comparison, section (c) shows an extract of system 3, where the width of the distributaries is constant.

Figure 7.

(a) Skeidarársandur delta [8]; (b) central part zoom in [16]; and (c) extract of system 3.

In Figure 8 there are comparisons of real river deltas and some of the deltas generated by the systems; the shown rivers are the Lena river (a), Papua river (b), and a secondary delta of the Mississippi system (c). Systems 1 (d) and 10 (e) were rotated for this comparison; this is achieved by altering the initial angle θ 0, and the final image (f) corresponds to system 6.

Figure 8.

Comparison between river deltas and generated systems, (a) Lena [8], (b) Papua, (c) [8] Mississippi branch, (d) [16] system 1, (e) system 10, and (f) system 6.

Finally, Figure 9 presents a comparison of the results of the method proposed by Seybold [6], the method proposed by Teoh [9], and system 7 of the proposed method. It should be noted that the method proposed by Seybold is intended for the simulation of delta growth and not for use in virtual worlds; therefore, it is quite accurate but computationally expensive in comparison with the other two.

Figure 9.

Results of different methods, (a) method proposed by Seybold [6], (b) method proposed by Teoh [9], and (c) system 7.

7. Conclusions and future work

In this chapter, a method for generating branching river skeletons using L-systems was presented. It was shown that L-systems can be successfully adapted to this task, as demonstrated by the preliminary resulting systems and their graphic interpretations. This is an original method for generating river deltas and a new application of L-systems. The generated skeletons resemble deltas dominated by the river or by the tides, as well as combinations in between. General guidelines were also provided to generate this type of branching structure. These skeletons can be used in procedural terrain generation to add more features to virtual terrains, depending on the characteristics of the generated terrains.

As this is a work in progress, some changes will be done in the future to generate more realistic results; curved lines could be used to increase the resemblance to river deltas. By using neural networks, textures could be added automatically over these skeletons. Variability could be improved by switching to stochastic systems, and this approach could be paired with machine learning techniques to perform an automatic generation of the skeletons.

Author details

Luis Oswaldo Valencia-Rosado* and Oleg Starostenko*

Department of Computing, Electronics, and Mechatronics, Universidad de las Américas Puebla, Cholula, Puebla, Mexico

*Address all correspondence to: luis.valenciaro@udlap.mx and oleg.starostenko@udlap.mx;
