Computational fluid dynamics (CFD) simulations aim at solving more complex, more realistic, more detailed, and bigger problems. To achieve this, they must rely on high-performance computing (HPC) systems, as these are the only environments where such simulations can be performed [1].
At the same time, HPC systems are becoming more and more complex, and the hardware exposes massive parallelism at all levels, making it a challenge to exploit the resources. In order to understand the concepts explained in the following sections, we briefly describe the different levels of parallelism available in a supercomputer. We do not aim at giving a full description or state of the art of the different architectures available in supercomputers, but a general overview of the most common approaches and concepts. First, we depict the different levels of hardware that form a supercomputer.
Core/Central Processing Unit: We can consider a core as the basic unit of computation: a core is able to decode instructions and execute them. Within the core, we find the first and lowest level of parallelism: instruction-level parallelism. This kind of parallelism is offered by superscalar processors, whose main characteristic is that they can execute more than one instruction during a clock cycle. Several developments enable instruction-level parallelism, such as pipelining, out-of-order execution, or multiple execution units. These techniques make it possible to obtain an instructions-per-cycle (IPC) rate higher than one. The exploitation of this parallelism relies mainly on the compiler and on the hardware units themselves. Reordering of instructions, branch prediction, register renaming, or memory access optimization are some of the techniques that help achieve a high level of instruction parallelism.
At this level, complementary to instruction-level parallelism, we find the data-level parallelism offered by vectorization. Vectorization applies the same operation to multiple pieces of data in the context of a single instruction. The performance obtained by such processors depends strongly on the type of code being executed. Scientific applications and numerical simulations often benefit from them, as the computation they perform usually consists in applying the same operation to large sets of independent data. We briefly see how this technique can accelerate the assembly process in Section 4.4.
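As an illustration (in Python for brevity, since the chapter shows no code of its own), the sketch below mimics the difference between an element-by-element loop and a vector unit consuming fixed-width chunks of data; VECTOR_WIDTH and the function names are ours, not part of any real SIMD API:

```python
VECTOR_WIDTH = 4  # e.g., four doubles fit in a 256-bit vector register

def add_scalar(x, y):
    """Non-vectorized baseline: one element processed per 'instruction'."""
    return [a + b for a, b in zip(x, y)]

def add_vectorized(x, y):
    """Sketch of data-level parallelism: each step consumes a whole chunk of
    VECTOR_WIDTH elements, mimicking a single vector instruction."""
    out = []
    for i in range(0, len(x), VECTOR_WIDTH):
        chunk_x = x[i:i + VECTOR_WIDTH]  # one 'vector register' load
        chunk_y = y[i:i + VECTOR_WIDTH]
        out.extend(a + b for a, b in zip(chunk_x, chunk_y))  # one vector add
    return out
```

Both produce the same result; the point is that the second form performs the loop in a quarter of the steps, which is exactly the benefit a vector unit provides on independent data.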
Socket/Chip: Coupling several cores in the same integrated circuit is a common approach, usually referred to as multicore or many-core processors (the name depends on the number of cores aggregated). One of the main advantages is that the different cores share some levels of cache. Shared caches can improve the reuse of data by threads running on cores in the same socket, with the added advantage of the cores being close on the same die (higher clock rates, less signal degradation, less power).
Having several cores in the same socket enables thread-level parallelism, as each core can run a different sequence of instructions in parallel while having access to the same data. Section 4.2 studies this parallelism through the OpenMP programming interface.
Accelerators/GPUs: Accelerators are specialized hardware that includes hundreds of computing units able to work in parallel on specific calculations over wide pieces of data, and they include their own memory. Accelerators need a central processing unit (CPU) to process the main code and off-load the specific kernels to them. To exploit the massive parallelism available within GPUs, the application kernels must be rewritten. OpenCL is a cross-platform programming standard for accelerators, while other alternatives are vendor dependent (e.g., CUDA [2]).
Node: A computational node can include one or several sockets and accelerators, along with main memory and I/O. A computational node is, therefore, the minimum autonomous computation unit, as it includes cores to compute, memory to store data, and a network interface to communicate. The main classification of shared memory nodes is based on the kind of memory access they have: uniform memory access (UMA) or nonuniform memory access (NUMA) [3]. In UMA systems, the memory is common to all the processors; this means that there is just one memory controller, which can only serve one request at a time, so when several cores issue memory requests it becomes a bottleneck. On the other hand, NUMA nodes partition the memory among the different processors; although the main memory is seen as a whole, the access time depends on the memory location relative to the processor issuing the request.
Within the node, thread-level parallelism can also be exploited, as the memory is shared among the different cores.
Cluster/Supercomputer: A supercomputer is an aggregation of nodes connected through a high-speed network with a specialized topology. Different network topologies (i.e., ways of connecting the nodes) exist, such as 2D or 3D torus or hypercube. The network topology determines the number of hops a message needs to reach its destination, as well as possible communication bottlenecks. A supercomputer usually includes a distributed file system to offer a unified view of the cluster from the user's point of view.
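As a small illustrative aside (the helper name is ours), in a hypercube topology each link connects nodes whose binary IDs differ in exactly one bit, so the minimum number of hops between two nodes is the Hamming distance of their IDs:

```python
def hypercube_hops(src, dst):
    """Minimum number of hops between two nodes of a hypercube network:
    the Hamming distance between their binary IDs, since traversing one
    link flips exactly one bit of the node ID."""
    return bin(src ^ dst).count("1")
```

For instance, in a 3D hypercube (8 nodes), going from node 0b000 to node 0b111 requires three hops, one per differing bit.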
The parallelism exploited at the supercomputer level follows a distributed memory approach. In this case, different processes run on different nodes of the cluster and communicate through the interconnection network when necessary. We go through the main techniques of such parallelism applied to the assembly and to iterative solvers in Sections 4.1 and 5.2, respectively.
Figure 1 shows the different levels of hardware of a supercomputer presented previously, together with the associated memory latency and size, as well as the type of parallelism used to exploit them. The numbers are expressed in orders of magnitude and are intended only as a rough orientation, as they are system dependent.
Anatomy of a supercomputer. Memory latency and size (left) and parallelism (right) to exploit the different levels of hardware (middle).
We have seen all the computational elements that form a supercomputer from a hierarchical point of view. All the levels explained above also include different levels of storage, organized in a hierarchy too. Starting from the core, we find the registers, where the operands of the instructions to be executed are stored. Usually also included in the core or CPU is the first level of cache, the smallest and fastest one; it is commonly divided into two parts: one to store instructions (L1i) and another to store data (L1d). The second level of cache (L2) is bigger, still fast, and also placed close to the core. A common configuration is that the third level of cache (L3) is shared at the socket level while L1 and L2 are private to the core, but any combination is possible.
The main memory can hold several gigabytes (GB) and is much slower than the caches. It is shared among the different processors of the node but, as explained before, it can have nonuniform memory access (NUMA), meaning that it is partitioned among the different sockets. At the supercomputer level, we find the disk, which can store petabytes of data.
We have seen an overview of the hardware available within a supercomputer and the different levels of parallelism that it exposes. The different levels of the HPC software stack are designed to help applications exploit the resources of a supercomputer (i.e., operating system, compiler, runtime libraries, and job scheduler). We focus on the parallel programming models because they are close to the application, and specifically on OpenMP and MPI because they are currently the de facto standards in HPC environments.
OpenMP: It is a parallel programming model that supports C, C++, and Fortran [4]. It is based on compiler directives that are added to the code to enable shared memory parallelism. These directives are translated by a compiler supporting OpenMP into calls to the corresponding parallel runtime library. OpenMP is based on a fork-join model: a single thread executes the code until it reaches a parallel region; at this point, additional threads are created (fork) to compute in parallel, and at the end of the parallel region all the threads join. Communication in OpenMP happens implicitly through the memory shared between the different threads, and the user must annotate each variable with the kind of data sharing it needs (i.e., private, shared). OpenMP is a standard defined by a nonprofit organization, the OpenMP Architecture Review Board (ARB). Based on this definition, different commercial and open source compilers and runtime libraries offer their own implementation of the standard.
Loop parallelism has been the most popular form of OpenMP parallelism in scientific applications. The main reason is that it fits perfectly the code structure typical of these applications, loops, allowing a very easy and straightforward parallelization of the majority of codes.
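OpenMP itself is a C/C++/Fortran directive standard, but the fork-join flavor of its loop parallelism can be sketched in plain Python: the iteration space is split into contiguous chunks, one worker per chunk is forked, and the partial results are joined at the end. The function parallel_for below is a hypothetical helper for illustration, not an OpenMP API:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(work, n, num_threads=4):
    """Fork-join sketch: split iterations [0, n) into contiguous chunks
    (static scheduling), 'fork' one worker per chunk, then 'join' and
    reduce the partial sums."""
    chunk = (n + num_threads - 1) // num_threads
    ranges = [(t * chunk, min((t + 1) * chunk, n)) for t in range(num_threads)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        partial = pool.map(lambda r: sum(work(i) for i in range(*r)), ranges)
    return sum(partial)
```

This mirrors what an OpenMP `parallel for` with a `reduction` clause does under static scheduling: each thread owns a contiguous block of iterations and the runtime combines the per-thread results.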
Since OpenMP 4.0, the standard also includes task parallelism, which, together with dependences, offers a more flexible and powerful way of expressing parallelism. But these advantages come at the cost of ease of programming: scientific programmers still have difficulties in expressing parallelism with tasks because they are used to seeing the code as a single flow with some parallel regions in it.
In Section 4.2, we describe how loop and task parallelism can be applied to the algebraic system assembly.
MPI: The message passing interface (MPI) is a parallel programming model based on an API for explicit communication [5]. It can be used in both distributed memory systems and shared memory environments. The standard is defined by the MPI Forum, and different implementations of it can be found. In the execution model of MPI, all the processes run the main() function in parallel. In general, MPI follows a single program multiple data (SPMD) approach, although it allows running different binaries under the same MPI environment (multicode coupling [6]).
The principal metrics used in HPC are the second (sec) for time measurements, the floating point operation (flop) for counting arithmetic operations, and the byte to quantify memory. A broad range of unit prefixes is required; for example, the time spent on individual operations is generally expressed in nanoseconds (ns), the computing power of a high-end supercomputer in petaflops (10^15 flop/sec), and the main memory of a computing node in gigabytes (GB). There are also more specific metrics; for example, cache misses count the data fetched from the main memory to the cache, and the IPC index refers to the instructions per clock cycle carried out by a CPU.
The principal measurements used to quantify the parallel performance are the load balance, the strong speedup, and the weak speedup. The load balance measures the quality of the workload distribution among the computing resources engaged in a computational task. If t_i denotes the computing time of each of the p parallel processes, the load balance can be written as LB = (sum_i t_i) / (p * max_i t_i), its ideal value being LB = 1.
Another common measurement is the strong speedup, which measures the relative acceleration achieved when increasing the computing resources used to execute a specific task. If T_1 is the execution time using one computing resource and T_p the time using p computing resources for the same overall workload, the strong speedup is S(p) = T_1 / T_p, the ideal one being S(p) = p.
Finally, the weak speedup measures the relative variation of the computing time when the workload and the computing resources are increased proportionally. If T_1 is the time to solve a problem of size W with one computing resource and T_p the time to solve a problem of size p * W with p computing resources, the weak speedup is S_w(p) = T_1 / T_p.
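These three measurements can be computed as follows; this is a minimal sketch using the definitions just given, with illustrative function names:

```python
def load_balance(times):
    """Load balance: average process time divided by the maximum time
    (1.0 means a perfectly balanced distribution)."""
    return sum(times) / len(times) / max(times)

def strong_speedup(t1, tp):
    """Strong speedup: time on one resource over time on p resources,
    same overall workload (ideal value: p)."""
    return t1 / tp

def weak_speedup(t1, tp_scaled):
    """Weak speedup: time for workload W on one resource over time for
    workload p*W on p resources (ideal value: 1)."""
    return t1 / tp_scaled
```

For example, a run taking 10 s on one process and 2.5 s on four processes has a strong speedup of 4, matching the ideal; if one of the four processes instead took 5 s while the others took 2.5 s, the load balance would drop accordingly.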
In the CFD context, the weak speedup is measured by proportionally increasing the mesh size and the computational resources engaged in the simulation. The ideal result would be 1; however, this is hardly achievable because the complexity of some parts of the problem, such as the solution of the linear system, grows superlinearly (see Section 5.4 on domain decomposition (DD) preconditioners). A second degradation factor is the communication cost, necessary to transfer data between parallel processes in a distributed memory context. This is especially relevant for strong speedup tests: the overall workload is kept constant, and therefore the workload per parallel process decreases as the computing resources increase; consequently, while the computing time goes down, the overhead produced by the communications grows.
While the strong and weak speedups measure the relative performance of a piece of code when varying the number of processes, the load balance measures the proper exploitation of a particular amount of resources. An unbalanced computation can be perfectly scalable if the dominant part is properly distributed on the successive partitions. This situation can be observed in Figure 9 from Section 4, which shows the strong scalability and the timings for two different parallel methods for matrix assembly. The faster method is not the one that shows the better strong scaling but, in this case, the one that guarantees a better load balance between the different parallel processes.
Nonetheless, a balanced and scalable code is not yet an optimal code in terms of performance. The aforementioned measurements account for the use of parallel resources but say nothing about how fast the code performs a task. In particular, if we consider a sequential code to be parallelized, the more efficient it is, the harder it will be to achieve good scalability, since the communication overheads become relatively more important. Indeed, “the easiest way of making software scalable is to make it sequentially inefficient” [7].
CFD is considered a memory-bound application, meaning that sequential performance is limited by the cost of fetching data from the main memory to the cache, rather than by the cost of performing the computations on the CPU registers. The reason is that the sparse operations that dominate CFD computations have a low arithmetic intensity (flop/byte); that is, few operations are performed for each byte moved from the main memory to the cache. Therefore, the sequential performance of CFD codes is mostly determined by the management of memory transfers. A strategy to reduce data transfers is to maximize data locality, that is, to maximize the reuse of the data uploaded to the cache by: (1) using the maximum percentage of the bytes contained in each block (referred to as a cache line) uploaded to the cache, known as spatial locality, or (2) reusing data that have been previously used and kept in the cache, known as temporal locality. An example illustrates data locality in Section 4.4.
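To make the notion of arithmetic intensity concrete, consider the classic daxpy kernel, y[i] = a * x[i] + y[i], on double precision data: each element costs 2 flops against roughly 24 bytes of traffic (two 8-byte loads and one 8-byte store). A minimal sketch, with our own helper names:

```python
def arithmetic_intensity(flops, bytes_moved):
    """Arithmetic intensity in flop/byte: useful work per byte of memory
    traffic. Low values indicate a memory-bound kernel."""
    return flops / bytes_moved

def daxpy_intensity(n):
    """Intensity of y[i] = a*x[i] + y[i] over n doubles: 2 flops per
    element (multiply + add), 3 memory accesses of 8 bytes each."""
    flops = 2 * n
    bytes_moved = 3 * 8 * n
    return arithmetic_intensity(flops, bytes_moved)
```

The result, 1/12 flop/byte regardless of n, is far below what current CPUs can sustain per byte of bandwidth, which is why such kernels are limited by memory transfers rather than by arithmetic.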
Moore’s law states that the number of transistors in a dense integrated circuit doubles approximately every two years. This law, formulated in 1965, not only proved to be true but also translated into a doubling of the computing capacity of the cores every 24 months. This was possible not only by increasing the number of transistors but also by increasing the frequency at which they work. In recent years, however, we have reached what computer scientists call the end of the free lunch: the performance of a single core is no longer increasing at the same pace. Three main issues explain this:
The memory wall: It refers to the gap in performance that exists between processor and memory [8] (CPU speed improved at an annual rate of 55% up to 2000, while memory speed only improved at 10%). The main method for bridging the gap has been to use caches between the processor and the main memory, increasing the sizes of these caches and adding more levels of caching. But memory bandwidth is still a problem that has not been solved.
The ILP wall: Increasing the number of transistors in a chip, as Moore’s law predicts, is used in some cases to increase the number of functional units, allowing a higher level of instruction-level parallelism (ILP) because there are more specialized units or several units of the same kind (e.g., two floating point units that can process two floating point operations in parallel). But finding enough parallelism in a single instruction stream to keep a high-performance single-core processor busy is becoming more and more difficult. One of the techniques to overcome this has been hyper-threading, which consists in making a single physical core appear as two (or more) cores to the operating system; running two threads allows better exploitation of the available instruction-level parallelism.
The power wall: As noted, not only the number of transistors but also the frequency was increasing. However, there is a technological limit to surface power density, and for this reason clock frequency cannot scale up freely any more. Not only would the amount of power to be supplied be unaffordable, but the chip would not be able to dissipate the amount of heat generated. To address this issue, the trend is to develop simpler, more specialized hardware and to aggregate more of it (e.g., Xeon Phi, GPUs).
Nevertheless, computer scientists still want to reach the Exascale machine, but they can no longer rely on increasing the performance of a single core as they used to. The workaround is that the number of cores per chip and per node has been growing quickly in recent years, along with the number of nodes in a cluster. This is pushing research into more complex memory hierarchies and network topologies.
Once Exascale machines are available, the challenge will be to have applications that can use them efficiently and scale on them. The increasing complexity of the hardware is a challenge for scientific application developers, because their codes must be efficient on more complex hardware and address a high level of parallelism at both the shared and distributed memory levels. Moreover, rewriting and restructuring existing codes is not always feasible; in some cases the codes run to hundreds of thousands of lines and, in others, their original authors are no longer around.
At this point, only a unified effort and a codesign approach will enable Exascale applications. Scientists in charge of HPC applications need to trust the parallel middleware and runtimes available to help them exploit the parallel resources. The complexity and variety of the hardware will no longer allow manual tuning of the codes for each different architecture.
A CFD simulation can be divided into four main phases: (1) mesh generation, (2) setup, (3) solution, and (4) analysis and visualization. Ideally, these phases would be carried out consecutively but, in practice, they are interleaved until a satisfactory solution is achieved. For example, when analyzing the solution, the user may realize that the integration time was not long enough or that the quality of the mesh was too poor to capture the physical phenomena being tackled. This would force a return to the solution or to the mesh generation phase, respectively. Indeed, supported by the continued increase of the available computing resources, CFD simulation frameworks have evolved toward a runtime adaptation of the numerical and computational strategies in accordance with the ongoing simulation results. This dynamicity includes mesh adaptivity, in situ analysis and visualization, and runtime strategies to optimize the parallel performance. These mechanisms make simulation frameworks more robust and efficient.
The following sections of this chapter outline the numerical methods and computational aspects related to the aforementioned phases. Section 2 is focused on meshing and adaptivity, and Section 3 on the partitioning required for the parallelization. Sections 4 and 5 focus on the two main parts of the solution phase, that is, the assembly and the solution of the algebraic system. Finally, Section 6 is focused on the I/O and visualization issues.
Mesh adaptation is one of the key technologies to reduce both the computational cost and the approximation errors of PDE-based numerical simulations. It consists in introducing local modifications into the computational mesh in such a way that the calculation effort needed to reach a certain level of error is minimized. In other words, adaptation strategies maximize the accuracy of the obtained solution for a given computational budget. Three main components constitute the mesh adaptation process: error estimators, to decide where and when mesh adaptation is required; remeshing mechanics, to change the density and orientation of the mesh entities; and, last but not least, dynamic load balancing, to ensure efficient computation on parallel systems. We develop the main ideas behind the first two components in the following subsections, while mesh partitioning and dynamic load balance are considered in Section 3.
The discretization of a continuous problem leads to an approximate solution that is more or less representative of the exact solution according to the care given to the numerical approximation and the mesh resolution. Therefore, in order to certify the quality of a given calculation, it is necessary to be able to estimate the discretization error between this approximate solution, resulting for example from the application of the finite element (FE) or finite volume method, and the exact (often unknown) solution of the continuous problem. This field of research has been the subject of much investigation since the mid-1970s. The first efforts focused on the a priori convergence properties of finite element methods to upper-bound the gap between the two solutions. Since a priori error estimates are often insufficient to ensure a reliable estimation of the discretization error, estimation methods called a posteriori were quickly preferred: once the approximate solution has been obtained, its deviation from the exact solution is studied and quantified. Error estimators of this kind provide information over the whole computational domain, as well as a map of local contributions, which is useful to obtain a solution satisfying a given precision by means of adaptive procedures. One of the most popular methods of this family is error estimation based on averaging techniques, such as those proposed in [9]. The popularity of these methods is mainly due to their versatile use in anisotropic mesh adaptation tools as a metric specifying the optimal mesh size and orientation with respect to the error estimation. An adapted mesh is then obtained by means of local mesh modifications that fit the prescribed metric specifications.
This approach can lead to elements with large angles that are not suitable for finite element computations, as reported in the standard error analysis, in which some regularity assumptions on the mesh and on the exact solution must be satisfied. However, if the mesh is adapted to the solution, it is possible to circumvent this condition [10].
We focus now on anisotropic mesh adaptation driven by directional error estimators, which are based on the recovery of the Hessian of the finite element solution. The purpose is to achieve an optimal mesh minimizing the directional error estimator for a constant number of mesh elements. As shown in Figure 2, this makes it possible to refine or coarsen the mesh and to stretch and orient the elements in such a way that, along the adaptation process, the mesh becomes aligned with the fine scales of the solution, whose locations are unknown a priori. As a result, highly accurate solutions are obtained with a much lower number of elements.
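The size prescription behind such Hessian-based estimators can be illustrated in 1D with a deliberately simplified sketch (ours, not the method of this chapter): for linear elements the interpolation error scales as h^2 |u''|, so equidistributing the error over the elements yields local sizes h proportional to |u''|^(-1/2):

```python
import math

def adapted_sizes(second_derivs, floor=1e-12):
    """1D sketch of Hessian-driven adaptation: given one sampled |u''| value
    per element, prescribe element sizes h ~ |u''|^(-1/2) (error
    equidistribution for linear interpolation), normalized so the elements
    tile a unit-length domain. 'floor' guards against zero curvature."""
    raw = [1.0 / math.sqrt(max(abs(d), floor)) for d in second_derivs]
    total = sum(raw)
    return [r / total for r in raw]
```

With curvatures [1.0, 4.0] the prescription gives sizes [2/3, 1/3]: the element where the solution bends four times more strongly is made twice as small, which is the refine/coarsen behavior described above in its simplest form.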
Meshing and remeshing of 3D complex geometries for CFD applications: Airship (top left), drone (top right) and F1 vehicle (bottom).
More recently, estimation techniques have also been developed to evaluate the error committed on quantities of interest to the engineer (e.g., drag, lift, shear, and heat flux). On this point, the current trend is to develop upper and lower bounds to delimit the observed physical quantity [11].
The parallelization of mesh adaptation methods goes back to the end of the 1990s. The SPMD MPI-based paradigm was adopted by the pioneering works [12, 13, 14]. The effectiveness of these methods depends on the repartitioning algorithms used and on how the interfaces between subdomains are managed. Indeed, mesh repartitioning is the key process for most of today’s parallel mesh adaptation methods [15]. Starting from a mesh partitioned into multiple subdomains, remeshing operations are performed using a sequential mesh adaptator on each subdomain, with an extra treatment of the interfaces. Two main approaches are considered in the literature: (1) The first approach is iterative. At the first iteration, remeshing is performed concurrently on each processor, while the interfaces between subdomains are locked to avoid any modification by the sequential remeshing kernel. Then, a new partitioning is calculated to move the interfaces and remesh them at the next iteration. The algorithm iterates until all items have been remeshed (Figure 3). (2) The second approach handles the interfaces by considering a complementary data structure to manage remeshing on remote mesh entities.
Iterative parallel remeshing steps on a 2D distributed mesh.
The first approach is preferred because of its high efficiency and its full reuse of sequential remeshing kernels. However, hardware architectures are becoming denser (more cores per compute node) to meet the energy constraints fixed as a sine qua non condition to reach Exascale supercomputers. Indeed, the nodes of a future Exascale system are expected to contain thousands of cores each. It is therefore important to rethink meshing algorithms in this new context of high levels of parallelism, using fine-grain parallel programming models that exploit the memory hierarchy. However, unstructured data-based applications are hard to parallelize effectively on shared-memory architectures, for the reasons described in [16].
It is clear that one of the main challenges is the efficiency of anisotropic adaptive mesh methods on modern architectures. New scalable techniques, such as asynchronous, hierarchical, and communication-avoiding algorithms, are recognized today as bringing more efficiency to linear algebra kernels and explicit solvers. Investigating the application of such algorithms to mesh adaptation is a promising path toward modern architectures. Before going further into algorithmic aspects, another challenge arises when considering the efficiency and scalability analysis of mesh adaptation algorithms. Indeed, the unstructured and dynamic nature of mesh adaptation leads to an imbalance of the initial workload. Unfortunately, the standard metrics for measuring the scalability of parallel algorithms are based either on a fixed mesh size, for strong scalability analysis, or on a fixed mesh size per CPU, for weak scalability.
From the finite element point of view, the problem to solve is subdivided into subproblems and the computational domain into subdomains. To adapt the mesh, an error estimator [17] is computed for each subdomain. According to this error estimator, and under the constraint of a given number of desired elements in the new adapted mesh, an optimal mesh is generated.
The constraint can be considered as local to each subdomain. In this case, solving the error estimation problem is straightforward: all computations are local and there is no need to exchange data between processors. Another advantage of considering a local constraint is the possibility of generating a new mesh with the same number of elements per processor, which avoids a heavy load balancing cost after each mesh adaptation. However, this approach tends to overestimate the mesh density on subdomains where the flow activity is almost negligible. From a scaling point of view, this approach leads to a weak scalability model, in which the problem size grows linearly with the number of processors.
To derive a strong scalability model, which in general refers to parallel performance for a fixed problem size, the constraint on the number of elements of the newly generated mesh should be global. The global number of elements over the entire domain is distributed according to the mesh density prescribed by the error estimator. This yields a proper strong scalability model that enables a fair performance analysis. However, load rebalancing is needed after each mesh adaptation stage. The parallel behavior of the mesh adaptation is then very close to the serial one, and the error analysis remains the same. For these reasons, this model is more relevant than the former one; nevertheless, new efficient load balancing algorithms (see Section 3), able to take into account the prescription derived from the error estimator, should be investigated.
The common strategy for the parallelization of CFD applications, to run on distributed memory systems, consists of a domain decomposition: the mesh that discretizes the simulation domain is partitioned into disjoint subsets of elements/cells, or disjoint sets of nodes, referred to as subdomains, as illustrated by Figure 4.
Partitioning into: (left) disjoint sets of elements. In white, interface nodes; and (right) disjoint sets of nodes. In white, halo elements.
Then, each subdomain is assigned to a parallel process which carries out all the geometrical and algebraic operations corresponding to that part of the domain and the associated components of the defined fields (velocity, pressure, etc.). Therefore, both the algebraic and geometric partitions are aligned. For example, the matrices expressing the linear couplings are distributed in such a way that each parallel process holds the entries associated with the couplings generated on its subdomain. As shown in Section 4.1, there are different options to carry out this partition, which depend on how the subdomains are coupled at their interfaces.
Some operations, like the sparse matrix-vector product (SpMV) or the norms, require communication between parallel processes. In the first case, the communications are related to couplings at the subdomain interfaces, so they are point-to-point communications between processes of neighboring subdomains. Indeed, a mesh partition requires the subsequent generation of a communication scheme to carry out operations like the SpMV. On the other hand, for parallel operations like the norm, a unique value is calculated by adding contributions from all the parallel processes; these are solved by means of a collective reduction operation and do not require a communication scheme.
Two properties are desired for a mesh partition: a good balance of the resulting workload distribution and minimal communication requirements. However, in a CFD code, different types of operations coexist, acting on fields attached to different mesh entities such as elements, faces, or nodes. This situation hinders the definition of a unique partition suitable for all of them, thus damaging the load balance of the whole code. For example, in the finite element (FE) method, the matrix assembly requires a good balance of the mesh elements, while the solution of the linear system requires a good distribution of the mesh nodes. In Section 4.3, we present a strategy to solve this trade-off by applying a runtime dynamic load balance for the assembly. Also, when dealing with hybrid meshes, the target balance should take into account the relative weights of the elements, which can in practice be difficult to estimate. Regarding the communication costs, these are proportional to the size of the subdomain interfaces; therefore, we target partitions minimizing them.
The two main options for mesh partitioning are the topological approach, based on partitioning the graph representing the adjacency relations between mesh elements, and the geometrical approach, which defines the partition from the location of the elements on the domain.
Mesh partitioning is traditionally addressed by means of graph partitioning, which is a well-studied NP-complete problem generally addressed by means of multilevel heuristics composed of three phases: coarsening, partitioning, and uncoarsening. Different variants of them have been implemented in publicly available libraries, including parallel versions, such as METIS [18], ZOLTAN [19] or SCOTCH [20]. These topological approaches not only balance the number of elements across the subdomains but also minimize the subdomains' interfaces. However, they present limitations on the parallel performance and on the quality of the solution as the number of parallel processes performing the partition grows. This lack of scalability makes graph-based partitioning a potential bottleneck for large-scale simulations.
Geometric partitioning techniques obviate the topological interaction between mesh elements and partition the mesh according to the elements' spatial distribution; typically, a space filling curve (SFC) is used for this purpose. An SFC is a continuous function used to map a multidimensional space into a one-dimensional space with good locality properties; that is, it tries to preserve the proximity of elements in both spaces. The idea of geometric partitioning using an SFC is to map the mesh elements into a 1D space and then easily divide the resulting segment into equally weighted subsegments. A significant advantage of SFC partitioning is that it can be computed very fast and does not present bottlenecks for its parallelization [21]. While the load balance of the resulting partitions can be guaranteed, the data transfer between the resulting subdomains, measured by means of edge cuts in the graph partitioning approach, cannot be explicitly measured and thus cannot be minimized either.
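As an illustration, the SFC idea can be sketched in a few lines: element centers are mapped to a one-dimensional Morton (Z-order) index, sorted, and the resulting segment is cut into equally sized pieces. The function names and the toy setup below are illustrative, not taken from any particular partitioning library.

```python
# Sketch of SFC partitioning of a 2D mesh via Morton (Z-order) codes.

def morton2d(ix, iy, bits=16):
    """Interleave the bits of the integer coordinates (ix, iy)."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)       # x bit -> even position
        code |= ((iy >> b) & 1) << (2 * b + 1)   # y bit -> odd position
    return code

def sfc_partition(centers, nparts, bits=16):
    """Map element centers in [0,1)^2 to balanced subdomain ids."""
    scale = 1 << bits
    keys = [morton2d(int(x * scale), int(y * scale), bits) for x, y in centers]
    order = sorted(range(len(centers)), key=lambda e: keys[e])
    part = [0] * len(centers)
    for rank, e in enumerate(order):
        # cut the sorted 1D segment into nparts equal pieces
        part[e] = rank * nparts // len(centers)
    return part
```

Because the cut positions along the curve can be placed anywhere, weighted elements are handled by cutting at equal cumulative weight instead of equal counts.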
Adaptive mesh refinement algorithms (see Section 2) can generate imbalance in the parallelization. This can be mitigated by migrating elements between neighboring subdomains [15]. However, at some point, it may be more efficient to evaluate a new partition and migrate the simulation results to it. In order to minimize the cost of this migration, we aim to maximize the intersection between the old and new subdomains of each parallel process, and this can be better controlled with geometric partitioning approaches.
In the finite element method, the assembly consists of a loop over the elements of the mesh, while it consists of a loop over cells or faces in the case of the finite volume method. We study the parallelization of such assembly process for distributed and shared memory parallelism, based on MPI and OpenMP programming models, respectively. Then, we briefly introduce some HPC optimizations. In the following, to respect tradition, we refer to elements in the FE context and to cells in the FV context.
According to the partitioning described in the previous section, there exist three main ways of assembling the local matrices, as illustrated in Figure 5.
Finite element and cell-centered finite volume matrix assembly techniques. From left to right: (1) FE: Partial rows, (2) FE: Full rows using communications, (3) FE: Full rows using halo elements, and (4) FV: Full rows using halo cells.
Partial row matrix. Local matrices can be made of partial rows (square matrices) or full rows (rectangular matrices). The first option is natural in the finite element context, where partitioning consists in dividing the mesh into disjoint element sets for each MPI process, and where only interface nodes are duplicated. In this case, the matrix rows of the interface nodes of neighboring subdomains are only partial, as their coefficients come from local element integrations, as illustrated in Figure 5 (1) by matrices
Full row matrix. The full row matrix consists in assigning rows exclusively to one MPI process. In order to obtain the complete coefficients of the rows of interface nodes, two options are available in the FE context, as illustrated by the middle examples of Figure 5: (1) by communicating the missing contributions of the coefficients between neighboring subdomains through MPI messages and (2) by introducing halo elements. The first option involves additional communications, while the second option duplicates the element integration on halo elements.
In the first case, referring to the example of Figure 5 (2), the full row of node 3 is obtained by communicating coefficients
The relative performance of partial and full row matrices depends on the size of the halos, which involve more memory and extra computation, compared to the cost of the additional MPI communications. Note that open-source algebraic solvers (e.g., MAPHYS [22], PETSC [23]) admit the first, the second, or both options and perform the communications internally if required.
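To make the two formats concrete, the following toy sketch (a hypothetical 1D linear FE Laplacian, not from any production code) assembles partial rows on two subdomains and shows that summing the interface row coefficients, or integrating a halo element, recovers the full row.

```python
import numpy as np

ke = np.array([[1.0, -1.0], [-1.0, 1.0]])  # 1D Laplacian element matrix

def assemble(elems, n):
    """Assemble the toy stiffness matrix over the given elements."""
    A = np.zeros((n, n))
    for i, j in elems:
        A[np.ix_((i, j), (i, j))] += ke
    return A

n = 5  # nodes 0..4; elements (0,1)..(3,4); node 2 is the interface node
full = assemble([(0, 1), (1, 2), (2, 3), (3, 4)], n)

# partial rows: each subdomain integrates only its own elements
partial1 = assemble([(0, 1), (1, 2)], n)
partial2 = assemble([(2, 3), (3, 4)], n)

# full rows via halo: subdomain 1 also integrates the halo element (2, 3)
halo1 = assemble([(0, 1), (1, 2), (2, 3)], n)
```

In the partial row format, the interface row is completed either by an MPI exchange of the missing coefficients or, as in `halo1`, by duplicating the halo element integration.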
In cell-centered FV methods, the unknowns are located in the cells. Therefore, halo cells are also necessary to fully assemble the matrix coefficients, as illustrated by Figure 5 (4). This is the option selected in practice in FV codes, although a communication could be used to obtain the full row format without introducing halos on both sides (only one side would be enough). In fact, let us imagine that subdomain 1 does not hold the halo cell 3. To obtain the full row for cell 2, a communication could be used to pass coefficient
Partial vs. full row matrix:
Load balance. As far as load balance is concerned, the partial row method is the one that a priori enables control of the load balance of the assembly, as elements are not duplicated. On the other hand, in the full row method, the number of halo elements depends greatly upon the partition. In addition, the work on these elements is duplicated, which limits the scalability of the assembly: for a given mesh, the relative number of halo elements with respect to interior elements increases with the number of subdomains.
Hybrid meshes. In the FE context, should the work load per element be perfectly predicted, the load balance would only depend on the partitioner efficiency (see Section 3). However, to obtain such a prediction of the work load, one should know the exact relative cost of assembling each and every type of element of the hybrid mesh (hexahedra, pyramids, prisms, and tetrahedra). This is a priori impossible, as this cost not only depends on the number of operations, but also on the memory access patterns, which are unpredictable.
High-order methods. When considering high-order approximations, the situations of the FE and FV methods differ. In the first case, the additional degrees of freedom (DOF) appearing in the matrix are confined to the elements. Thus, only the number of interface nodes increases with respect to the same number of elements with a first-order approximation. In the case of the FV method, high-order methods are generally obtained by introducing successive layers of halos, thus reducing the scalability of the method.
Sparse matrix vector product. As mentioned earlier, the main operation of Krylov-based iterative solvers is the SpMV. We will see in the next section that the partial row and full row matrices lead to different communication orderings and patterns.
The following is explained in the FE context but can be translated straightforwardly to the FV context. Finite element assembly consists in computing element matrices and right-hand sides (
Algorithm 1 Matrix Assembly in each MPI partition
1: for elements
2: Gather: copy global arrays to element arrays.
3: Computation: element matrix and RHS
4: Scatter: assemble
5: end for
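A serial sketch of Algorithm 1 for a toy 1D linear FE problem may help fix ideas; the gather, computation, and scatter steps are marked with the corresponding line numbers of the algorithm (the physical setup, a unit source on the Laplacian, is an illustrative assumption).

```python
import numpy as np

def assemble_system(coords, elements):
    """Gather/compute/scatter assembly of a toy 1D FE Laplacian."""
    n = len(coords)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for nodes in elements:                     # 1: loop over elements
        x = coords[list(nodes)]                # 2: gather nodal coordinates
        h = x[1] - x[0]
        ke = np.array([[1, -1], [-1, 1]]) / h  # 3: element matrix
        fe = np.full(2, h / 2.0)               #    element RHS (unit source)
        A[np.ix_(nodes, nodes)] += ke          # 4: scatter into global system
        b[list(nodes)] += fe
    return A, b
```

In step 4, two elements sharing a node write to the same global entries; this is harmless serially but is precisely the race condition discussed below for shared memory parallelism.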
OpenMP pragmas can be used to parallelize Algorithm 1 quite straightforwardly, as we will see in a moment. So why has this shared memory parallelism had so little success in CFD codes?
Amdahl’s law states that the scalability of a parallel algorithm is limited by the sequential kernels of a code. When using MPI, most of the computational kernels are parallel by construction, as they consist of loops over local mesh entities such as elements, nodes, and faces, even though scalability is obviously limited by communications. One example of a possible sequential kernel is the coarse grain solver described in Section 5.4. On the other hand, parallelization with OpenMP is incremental and explicit, making any remaining sequential kernels a limiting factor for the scalability, as stated by Amdahl’s law. This explains, in part, the reluctance of CFD code developers to rely on the loop parallelism offered by OpenMP. There exists another reason, which lies in the difficulty of maintaining large codes with this programming model, as any new loop introduced in the code should be parallelized to circumvent the ever-present Amdahl’s law. As an example, the Alya code [24] has more than 1000 element loops.
However, the situation is changing, for two main reasons. First, nowadays, supercomputers offer a great variety of architectures, with many cores per node (e.g., Xeon Phi). Thus, shared memory parallelism is gaining more and more attention, as OpenMP offers more flexibility to parallel programming. In fact, sequential kernels can be parallelized at the shared memory level using OpenMP: one example is once more the coarse solve of iterative solvers; another is the possibility of using dynamic load balance on shared memory nodes, as explained in [25] and introduced in Section 4.3.
As mentioned earlier, the parallelization of the assembly has traditionally been based on loop parallelism using OpenMP. Two main characteristics of this loop have led to different algorithms in the literature. On the one hand, there exists a race condition: different OpenMP threads can access the same degree-of-freedom coefficient when performing the scatter of the element matrix and RHS, in step 4 of Algorithm 1. On the other hand, spatial locality must be taken care of in order to obtain an efficient algorithm. The main techniques are illustrated in Figure 6 and are discussed below.
Shared memory parallelism techniques using OpenMP. (1) Loop parallelism using ATOMIC pragma. (2) Loop parallelism using element coloring. (3) Loop parallelism using element partitioning. (4) Task parallelism using partitioning and multidependences.
Loop parallelism using ATOMIC pragma. The first method to avoid the race condition consists in using the OpenMP ATOMIC pragmas to protect the shared variables
Loop parallelism using element coloring. The second method consists in coloring [26] the elements of the mesh such that elements of the same color do not share nodes [27], or such that cells of the same color do not share faces in the FV context. The loop parallelism is thus applied for elements of the same color, as illustrated in Figure 6 (2). The advantage of this technique is that one gets rid of the ATOMIC pragma and its inherent cost. The main drawback is that spatial locality is lessened by construction of the coloring. In [28], a comprehensive comparison of this technique and the previous one is presented.
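A minimal greedy coloring sketch (illustrative, simpler than the algorithms of [26, 27]) shows the invariant at stake: elements sharing a node receive different colors, so all elements of one color can be assembled concurrently without atomics.

```python
from collections import defaultdict

def color_elements(elements):
    """Greedy coloring: elements sharing a node get distinct colors."""
    node2elems = defaultdict(list)
    for e, nodes in enumerate(elements):
        for n in nodes:
            node2elems[n].append(e)
    colors = [-1] * len(elements)
    for e, nodes in enumerate(elements):
        # colors already taken by elements sharing a node with e
        used = {colors[o] for n in nodes for o in node2elems[n] if o != e}
        c = 0
        while c in used:
            c += 1
        colors[e] = c
    return colors
```

The assembly then loops color by color, with a plain parallel loop inside each color; the price is that consecutive elements of one color are scattered over the mesh, which degrades spatial locality.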
Loop parallelism using element partitioning. In order to preserve spatial locality while disposing of the ATOMIC pragma, another technique consists in partitioning the local mesh of each MPI process into disjoint sets of elements (e.g., using METIS [18]) to control spatial locality inside each subdomain. Then, one defines separators as the layers of elements which connect neighboring subdomains. By doing this, elements of different subdomains do not share nodes. Obviously, the elements of the separators should be assembled separately [29, 30], which breaks the classical element loop syntax and requires additional programming (Figure 6 (3)).
Task parallelism using multidependences. Task parallelism could be used instead of loop parallelism, but the three algorithms presented previously would not change [30, 31, 32]. There are two features implemented in OmpSs (a forerunner of OpenMP) that are not yet included in the standard and that can help: multidependences and commutative tasks. These allow us to express incompatibilities between subdomains. The mesh of each MPI process is partitioned into disjoint sets of elements, and by prescribing the neighboring information in the OpenMP pragma, the runtime takes care of not executing neighboring subdomains at the same time [33]. This method presents good spatial locality and circumvents the use of the ATOMIC pragma.
As explained in Section 1.2, efficiency measures the level of usage of the available computational resources. Let us take a look at a typical unbalanced situation illustrated in the trace shown in Figure 7. The
Trace of an unbalanced element assembly.
Load imbalance has many causes: mesh adaptation as described in Section 2.3, erroneous element weight prediction in the case of hybrid meshes (Section 3), hardware heterogeneity, software or hardware variabilities, and so on. The example presented in the figure is due to wrong element weights given to the METIS partitioner for the partition of a hybrid mesh [28].
There are several works in the literature that deal with load imbalance at runtime. We can classify them into two main groups: the ones implemented by the application (possibly using external tools) and the ones provided by runtime libraries, transparent to the application code.
In the first group, one approach is to perform local element redistribution between neighboring subdomains. Thus, only limited point-to-point communications are necessary, but this technique also provides limited control of the global load balance. Another option consists in repartitioning the mesh to achieve a better load distribution. For this to be efficient, a parallel partitioner (e.g., the space filling curve-based partitioning presented in Section 3) is necessary in order to circumvent Amdahl’s law. In addition, this method is an expensive process, so the imbalance should be high for it to be an interesting option.
In the second group, several parallel runtime libraries offer support to solve load imbalance at the MPI level: Charm++ [34], StarPU [35], or Adaptive MPI (AMPI). In general, these libraries detect the load imbalance and migrate objects or specific data structures between processes. They usually require the use of a concrete programming language, programming model, or data structure, thus requiring high levels of code rewriting in the application.
Finally, the approach that has been used by the authors is called DLB [25] and has been extensively studied in [28, 33, 36] in the CFD context. The mechanism enables idle MPI processes to lend their resources to working ones, as illustrated in Figure 8, by using a hybrid MPI + OpenMP approach and standard mechanisms of these programming models.
Principles of dynamic load balance with DLB [25], via resources sharing at the shared memory level. (Top) Without DLB and (bottom) with DLB.
In this illustrative example, two MPI processes launch two OpenMP threads each on a shared memory node. The threads running on cores 3 and 4 are clearly responsible for the load imbalance. When using DLB, the threads running on cores 1 and 2 lend their resources as soon as they enter the synchronization point, for example, an MPI reduction represented by the orange bar. MPI process 2 can then use four threads to finish its element assembly.
Let us take a look at the performance of two assembly methods: MPI + OpenMP with loop parallelism, and MPI + OpenMP with task parallelism and dynamic load balance. Figure 9 shows the strong scaling and timings of these two methods. The example corresponds to the assembly of a 140 million element mesh with highly unbalanced partitions [33], as illustrated by the trace shown in Figure 7. As already noted in Section 1.2, although the strong scaling of the MPI + OpenMP with loop parallelism method is better than that of the other method, its timing is around three times higher. This is due to the combination of two factors: substituting loop parallelism using coloring with task parallelism, which yields a higher IPC, and using the dynamic load balance library DLB to improve the load balance at the shared memory level.
Strong speedup and timings of two hybrid methods: MPI + OpenMP with loop parallelism and MPI + OpenMP with task parallelism and DLB, on 1024 to 16,384 cores.
Let us close this section with some basic HPC optimizations to take advantage of some hardware characteristics presented in Section 1.1.1.
Spatial and temporal locality. The main memory access bottleneck of the assembly depicted in Algorithm 1 consists of the gather and scatter operations. Figure 10 illustrates the concept. In the top part of the figure, we illustrate the effect of node renumbering [37, 38] on data locality: nodes are “grouped” according to their global numbering. For example, when assembling element 3945, the gather is more efficient after renumbering (top right part of the figure), as the nodal array positions are closer in memory. Data locality is thus enhanced. However, the assembly loop accesses the elements successively. Therefore, when going from element 1 to element 2, there is no data locality, as element 1 accesses positions 1, 2, 3, 4 and element 2 accesses positions 6838, 6841, 6852, 6839. Renumbering the elements according to the node numbering thus enables one to achieve temporal locality, as shown in the bottom right part of the figure: data already present in the cache can be reused (data of nodes 3 and 4).
Optimization of memory access by renumbering nodes and elements. (1) Top: spatial locality. (2) Bottom: temporal locality.
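The effect of node renumbering can be sketched with a breadth-first traversal (in the spirit of Cuthill–McKee, though simpler than the methods of [37, 38]): connected nodes end up with nearby numbers, which reduces the matrix bandwidth and thus the spread of the gather/scatter accesses.

```python
from collections import deque

def bfs_renumber(adj, start=0):
    """Return an old->new node numbering from a breadth-first traversal."""
    new_id = {}
    q = deque([start])
    while q:
        n = q.popleft()
        if n in new_id:
            continue
        new_id[n] = len(new_id)
        q.extend(sorted(a for a in adj[n] if a not in new_id))
    return new_id

def bandwidth(adj, num):
    """Matrix bandwidth induced by a numbering: max |num[i] - num[j]| over edges."""
    return max(abs(num[i] - num[j]) for i in adj for j in adj[i])
```

On a path graph whose labels are scrambled, the BFS numbering brings the bandwidth down to 1, the minimum possible for a path.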
Vectorization. Depending on the available hardware, vectorization may be exploited as a form of data-level parallelism. However, vectorization will only be efficient if the compiler is able to vectorize the appropriate loops. In order to help the compiler, one can apply some “not-so-dirty” tricks at the assembly level. Let us consider a typical element matrix assembly. Let us denote
This loop, part of step 3 of Algorithm 1, is carried out on each element of the mesh. Now, let us define Ne, a parameter fixed at compile time. In order to help vectorization, the last loop can be substituted by the following.
thus assembling Ne elements at the same time. To have an idea of how powerful this technique can be, in [39], a speedup of 7 has been obtained in an incompressible Navier–Stokes assembly with Ne = 32. Finally, note that this formalism can be relatively easily applied to port the assembly to GPU architectures [39].
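Since the original listings are not reproduced here, the following hedged NumPy sketch conveys the blocking idea: the element matrices of a block of Ne elements are computed at once, so the innermost operations act on contiguous arrays of length Ne that the hardware can vectorize (the 1D Laplacian element matrix is an illustrative choice, not the Navier–Stokes kernel of [39]).

```python
import numpy as np

NE = 32  # block size, fixed at "compile time"

def element_matrices_blocked(h):
    """1D Laplacian element matrices for a block of len(h) elements at once."""
    ke = np.empty((len(h), 2, 2))
    inv = 1.0 / h                  # one vectorized operation over the block
    ke[:, 0, 0] = inv
    ke[:, 1, 1] = inv
    ke[:, 0, 1] = -inv
    ke[:, 1, 0] = -inv
    return ke
```

In a compiled code, the same structure (a fixed-trip-count inner loop over the Ne elements of the block) is what allows the compiler to emit vector instructions.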
This section is devoted to the parallel solution of the algebraic system
coming from the Navier–Stokes equation assembly described in the previous section. The matrix and the right-hand side are distributed over the MPI processes, the matrix having a partial row or full row format.
As explained in the previous section, the assembly process is embarrassingly parallel, as it does not require any communication (except in the case illustrated in Figure 5 (2)). The algebraic solvers are mainly responsible for limiting the strong and weak scalabilities of a code (see Section 1.2). Thus, adapting the solver to a particular algebraic system is fundamental. This is a particularly difficult task for large distributed systems, where scalability and load balance come into play, in addition to the usual convergence and timing criteria.
In this section, we do not derive any algebraic solver, for which we refer to Saad’s book [40] or [41] for parallelization aspects, but rather discuss their behaviors in a massively parallel context. The section does not intend to be exhaustive, but rather to expose the experience of the authors on the topic.
The main techniques to solve Eq. (1) can be categorized as explicit, semi-implicit, and implicit. The explicit method can be viewed as the simplest iterative scheme for Eq. (1), namely a preconditioned Richardson iteration:
where
Semi-implicit methods are mainly represented by fractional step techniques [42, 43]. They generally involve an explicit update of the velocity, such as Eq. (2), and an algebraic system with an SPD matrix for the pressure. Other semi-implicit methods exist, based on a splitting of the unknowns at the algebraic level. This splitting can be achieved, for example, by extracting the pressure Schur complement of the incompressible Navier–Stokes equations [44]. The Schur complement is generally solved with iterative solvers, whose solution involves the consecutive solution of algebraic systems with unsymmetric and symmetric matrices (SPD for the pressure). These kinds of methods have the advantage of extracting better conditioned and smaller algebraic systems than the original coupled one, at the cost of introducing an additional iteration loop to converge to the monolithic (original) solution.
Finally, implicit methods deal with the coupled system (1). In general, much more complex solvers and preconditioners are required to solve this system than in the case of semi-implicit methods. So, in any case, we always end up with algebraic systems like Eq. (1).
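To fix ideas, here is a minimal sketch of the preconditioned Richardson iteration with a Jacobi (diagonal) preconditioner, the simplest instance of the iterative solvers discussed in this section (the diagonal preconditioner and the toy system in the usage below are illustrative assumptions).

```python
import numpy as np

def richardson(A, b, tol=1e-10, maxit=1000):
    """Preconditioned Richardson iteration with a Jacobi preconditioner."""
    Minv = 1.0 / np.diag(A)          # diagonal (Jacobi) preconditioner
    x = np.zeros_like(b)
    for _ in range(maxit):
        r = b - A @ x                # residual: the cost per iteration is one SpMV
        if np.linalg.norm(r) < tol:
            break
        x = x + Minv * r             # preconditioned update
    return x
```

In a distributed setting, the only communication-bearing operations are the SpMV `A @ x` and the residual norm, which is precisely why these two kernels dominate the parallel analysis that follows.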
We start with the parallelization of the operation that occupies the central place in iterative solvers, namely the sparse matrix vector product (SpMV).
Let us consider the simplest iterative solver, the so-called simple or Richardson iteration, which consists in solving the following equation for
The parallelization of this solver amounts to that of the SpMV (say
Synchronous parallelization of SpMV for the partial row and full row formats.
SpMV for partial row matrix with MPI. When using the partial row format, the local result of the SpMV (in each MPI process) is only partial as the matrices are also partial on the interface, as explained in Section 4. By applying the distributive property of the multiplication, the results of neighboring subdomains add up to the correct solution on the interface:
In practice, the exchanges of
SpMV for full row matrix with MPI. For this format, where all local rows are fully assembled and matrices are rectangular, the exchange is carried out before the local products on the multiplicands
Asynchronous SpMV with MPI. The previous two algorithms are said to be synchronous, as the MPI communication comes after the complete local SpMV for the partial row format and before it for the full row format. The use of nonblocking MPI communications enables one to obtain asynchronous versions of the SpMV [41]. In the case of the partial row format, the procedure would consist of the following steps: (1) perform the SpMV for the interface nodes; (2) use nonblocking MPI communications (MPI_Isend and MPI_Irecv functions) to exchange the results of the SpMV on interface nodes; (3) perform the SpMV for the internal nodes; (4) synchronize the communications (MPI_Waitall); and (5) assemble the interface node contributions. This strategy permits overlapping communication (the results of the SpMV for interface nodes) with computation (the SpMV for internal nodes).
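These five steps can be emulated serially on a toy 1D problem (no actual MPI calls are made; the communication points are only marked in comments, and the whole setup is illustrative).

```python
import numpy as np

# toy 1D FE Laplacian on 5 nodes; node 2 is the interface between two ranks
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])

def assemble(elems, n):
    A = np.zeros((n, n))
    for i, j in elems:
        A[np.ix_((i, j), (i, j))] += ke
    return A

n = 5
A_full = assemble([(0, 1), (1, 2), (2, 3), (3, 4)], n)  # reference matrix
A1 = assemble([(0, 1), (1, 2)], n)  # rank 1: partial rows for nodes 0-2
A2 = assemble([(2, 3), (3, 4)], n)  # rank 2: partial rows for nodes 2-4

def rank_spmv(A_loc, x, interface, interior):
    y = np.zeros(len(x))
    y[interface] = A_loc[interface] @ x  # (1) interface rows first
    sendbuf = y[interface].copy()        # (2) MPI_Isend/MPI_Irecv would go here
    y[interior] = A_loc[interior] @ x    # (3) interior work overlaps the comms
    return y, sendbuf                    # (4) MPI_Waitall before step (5)

x = np.arange(n, dtype=float)
y1, s1 = rank_spmv(A1, x, [2], [0, 1])
y2, s2 = rank_spmv(A2, x, [2], [3, 4])
y1[2] += s2[0]  # (5) assemble the neighbor's interface contribution
y2[2] += s1[0]
```

After step (5), each rank holds the exact rows of the monolithic product for the nodes it owns, including the shared interface node.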
SpMV with OpenMP. The loop parallelization with OpenMP is quite simple to implement in this case. However, care must be taken with the size of the chunks, as the overhead of creating the threads may be penalizing if the chunks are too small. Another consideration is the matrix format selected, such as CSR, COO, or ELL. For example, the COO format requires the use of an ATOMIC pragma to protect
Load balance. In terms of load balance, FE and cell-centered FV methods behave differently in the SpMV. In the FV method, the degrees of freedom are located at the centers of the elements. The partitioning into disjoint sets of elements can thus be used for both the assembly and the solver. In the case of the finite element method, the number of degrees of freedom involved in the SpMV corresponds to the nodes and can differ considerably from the number of elements involved in the assembly. The question of partitioning a finite element mesh into disjoint sets of nodes may thus be posed, depending on which operation dominates the computation. As an example, if a hexahedra subdomain and a tetrahedra subdomain are balanced in terms of nodes, the latter will hold around six times more elements than the former.
The Richardson iteration given by Eq. (7) is based only on the SpMV operation. The SpMV does not involve any global communication mechanism among the degrees of freedom (DOF) from one iteration to the next. In fact, the result for one DOF after a single SpMV is only influenced by its first neighbors, as illustrated by Figure 12 (1). To propagate a change from one side of the domain to the other, we thus need as many iterations as the number of nodes between both sides, that is
Accelerating iterative solvers. From top to bottom: (1) SpMV has a node-to-node influence; (2) domain decomposition (DD) solvers have a subdomain-to-subdomain influence; and (3) coarse solvers couple the subdomains.
Krylov subspace methods, represented by the GMRES, BiCGSTAB, and CG methods among others, construct specific Krylov subspaces where they minimize the residual
The selection of the preconditioner for Eq. (1) is the key to solving the system efficiently [45, 46]. Preconditioning should provide robustness at the lowest price for a given problem, and in general, robustness is expensive. Domain decomposition preconditioners provide this robustness but can turn out to be too expensive compared to smarter methods, as we now briefly analyze.
Domain Decomposition. Erhel and Giraud summarized the attractiveness of domain decomposition (DD) methods as follows:
One route to the solution of large sparse linear systems in parallel scientific computing is the use of numerical methods that combine direct and iterative methods. These techniques inherit the advantages of each approach, namely the limited amount of memory and easy parallelization for the iterative component and the numerical robustness of the direct part.
DD preconditioners are based on the exact (or almost exact) solution of the local problem to each subdomain. In brief, the local solutions provide a coupling mechanism between the subdomains of the partition, as illustrated in Figure 12 (2) (note that the subdomain matrices
Coarse solvers try to resolve this dependence by providing a global communication mechanism among the subdomains, generally with one degree of freedom per subdomain. The coarse solver is a “sequential” bottleneck, as it is generally solved using a direct solver on a restricted number of MPI processes. Let us mention the deflated conjugate gradient (DCG) method [47], which provides a coarse grain coupling but can be independent of the partition.
As we have explained, solvers involving DD preconditioners together with a coarse solver aim at making the solver convergence independent of the mesh size and of the number of subdomains. In terms of CPU time, this translates into the concept of weak scalability (Section 1.2). This can be achieved in some cases but is hard to obtain in the general case.
Multigrid solvers or preconditioners provide a similar multilevel mechanism, but within a different mathematical framework [48]. They involve a direct solver only at the coarsest level, and the intermediate levels are still treated iteratively, thus exhibiting good strong (based on the SpMV) and weak (multilevel) scalabilities. Convergence is nevertheless problem dependent [49, 50].
Physics- and numerics-based solvers. DD preconditioners are brute-force preconditioners in the sense that they attack local problems with a direct solver, regardless of the matrix properties. Smarter approaches may provide more efficient solutions, at the expense of not being weakly scalable. But do we really need weak scalability to solve a given problem on a given number of available CPUs? Well, this depends. Let us cite two physics-/numerics-based preconditioners. The linelet preconditioner is presented in [51]. In a boundary layer mesh, a typical situation in CFD, the discretization of the Laplacian operator tends to a tridiagonal matrix as the anisotropy tends to infinity (depending also on the discretization technique), and the dominant coefficients are along the direction normal to the wall. The anisotropy linelets consist of lists of nodes renumbered in the direction normal to the wall. By assembling a tridiagonal matrix along each linelet, the preconditioner thus consists of a series of tridiagonal matrices, very easy to invert.
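Each linelet solve is a tridiagonal solve, for which the Thomas algorithm is the natural O(n) choice; a minimal sketch follows (the array conventions, with the sub- and superdiagonal padded to full length, are illustrative).

```python
def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system by the Thomas algorithm.

    lower[i] multiplies x[i-1] (lower[0] unused); upper[i] multiplies
    x[i+1] (upper[n-1] unused).
    """
    n = len(diag)
    c, d = upper[:], rhs[:]
    c[0] /= diag[0]
    d[0] /= diag[0]
    for i in range(1, n):                 # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        if i < n - 1:
            c[i] /= m
        d[i] = (d[i] - lower[i] * d[i - 1]) / m
    x = d[:]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] -= c[i] * x[i + 1]
    return x
```

Applying this solve independently along each linelet is what makes the preconditioner both cheap and embarrassingly parallel across linelets.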
Finally, let us also mention the streamwise linelet [52]. In the discretization of a hyperbolic problem, the dependence between degrees of freedom follows the streamlines. By renumbering the nodes along these streamlines, one can thus use a bidiagonal or Gauss–Seidel solver as a preconditioner. In the ideal situation where the nodes align with the streamlines, the bidiagonal preconditioner makes the problem converge in one complete sweep.
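The one-sweep claim is easy to see on a toy lower bidiagonal system, such as would arise from a 1D upwind discretization with nodes numbered along the stream direction (an illustrative setup): a single forward substitution, that is, one Gauss–Seidel sweep, gives the exact solution.

```python
def forward_sweep(sub, diag, rhs):
    """One forward (Gauss-Seidel) sweep; exact for a lower bidiagonal matrix."""
    x = [0.0] * len(diag)
    for i in range(len(diag)):
        prev = sub[i] * x[i - 1] if i > 0 else 0.0
        x[i] = (rhs[i] - prev) / diag[i]
    return x
```

When the numbering does not follow the streamlines exactly, the matrix is no longer strictly bidiagonal and the sweep only acts as a (still effective) preconditioner.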
These two examples show that by listening to the physics and numerics of a problem, one can devise simple and cheap preconditioners performing only local operations.
Figure 13 illustrates the comments we have previously made concerning the preconditioners. In this example, we solve the Navier–Stokes equations together with a
Convergence of pressure equation with 512 CPUs using different solvers (4 M node mesh). (1) Residual norm vs. number of iterations. (2) Residual norm vs. time.
Iterative parallel solvers require many global synchronizations between processes, coming from the scalar products used to compute descent and orthogonalization parameters or residual norms. These synchronizations are very expensive due to the high latencies of the networks. They also imply a lot of wasted time if the workloads are not well balanced, as explained in Section 4.3 for the assembly. The heterogeneous nature of current machines makes such load balancing very hard to achieve, resulting in a higher time loss compared to homogeneous machines.
Pipelined solvers. Pipelined solvers are algorithmically equivalent to their classical counterparts (e.g., pipelined CG with respect to CG) and are devised by introducing new recurrence variables and rearranging some of the basic solver operations [55, 56]. The main advantage of the pipelined versions is the possibility of overlapping reduction operations with other operations, like preconditioning. This is achieved by means of the MPI-3 [5] nonblocking reduction operation, MPI_Iallreduce. This enables one to hide latency, provided that the work to be overlapped is sufficient, and thus to increase the strong scaling. Although algorithmically equivalent to their classical versions, pipelined solvers introduce local rounding errors due to the additional recurrence relations, which limit their attainable accuracy [57].
Communication avoiding solvers. Asynchronous iterations provide another mechanism to overcome the synchronism limitation. In order to illustrate the method, let us take the example of the Richardson method of Eq. (3). Each subdomain
This means that each subdomain
Scientific visualization focuses on the creation of images to provide important information about underlying data and processes. In recent decades, the unprecedented growth in computing and sensor performance has led to the ability to capture the physical world in unprecedented levels of detail and to model and simulate complex physical phenomena. Visualization plays a decisive role in the extraction of knowledge from these data. As the mathematician Richard Hamming famously said, “The purpose of computing is insight, not numbers” [61]. Visualization allows one to understand large and complex data in two, three, or more dimensions from different applications. It is of particular importance for CFD data, as CFD results lend themselves well to familiar three-dimensional representations.
Traditionally, I/O and visualization are closely related, as in most workflows, data used for visualization are written to disk and then read by a separate visualization tool. This is also called “postmortem” visualization, since the visualization may be done after the CFD solver has finished running. Other modes of interaction with visualization are becoming more common, such as “in situ” visualization (the CFD solver also directly produces visualization images, using the same nodes and partitioning), or “in-transit” visualization (the CFD solver is coupled to a visualization program, possibly running on other nodes and with a different partitioning scheme).
I/O. Output of files for postmortem visualization usually represents the highest volume of output from a CFD code, along with some possibly separate operations, especially explicit checkpointing and restart, which require writing and reading large datasets. Logging or output of data subsets also requires I/O, often with a smaller volume but higher frequency.
As CFD computations can be quite costly, codes usually have a “checkpoint/restart” feature, allowing the code to output its state (whether the converging state of a steady computation or the state reached so far in an unsteady case) to disk, for example, before running out of allocated computer time. This is called checkpointing. The computation may then be restarted from the state reached by reading the checkpoint from a previous run, which incurs both writing and reading. Some codes use the same file format for visualization output and checkpointing, but this assumes the data required are sufficiently similar, and often that the code has a privileged output format. When restarting requires additional data (such as field values at locations not exactly matching those of the visualization, or multiple time steps for a smooth restart of higher-order time schemes), code-specific formats are used. Some libraries, such as Berkeley Lab Checkpoint/Restart (BLCR) [62], try to provide a checkpointing mechanism at the runtime level, including for parallel codes. This may require less programming on the solver side, at the expense of larger checkpoint sizes. BLCR’s target is mostly making checkpoint/restart sufficiently transparent to the code that it may be checkpointed, stopped, and then restarted based on resource manager job priorities, not I/O size and performance. In practice, BLCR does not seem to have evolved in recent years, and support in some MPI libraries has been dropped, so it seems the increasing complexity of systems has made this approach more difficult.
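As a minimal illustration of application-level checkpointing (as opposed to the runtime-level approach of BLCR), the following Python sketch saves and restores a solver-state dictionary. The file name and state fields are hypothetical, and a real CFD code would write its field arrays in a raw binary format rather than pickling; the atomic-rename idiom, however, is a common safeguard so that an interrupted write never corrupts the previous checkpoint.

```python
import os
import pickle

def write_checkpoint(path, state):
    # Dump to a temporary file first, then rename: os.replace is atomic,
    # so a crash mid-write leaves the old checkpoint intact.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def read_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical solver state: iteration count, physical time, a field sample.
state = {"iteration": 1200, "time": 0.35, "pressure": [101325.0, 101310.5]}
write_checkpoint("run.chk", state)

restored = read_checkpoint("run.chk")
assert restored["iteration"] == 1200   # restart resumes from the saved state
```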
As datasets used by CFD tools are often large, it is recommended to use mostly binary representations rather than text representations. This has multiple advantages when done well:
avoiding the need for string-to-binary conversions, which can be quite costly;
avoiding loss of precision when outputting floating-point values;
reducing data size: 4 or 8 bytes for a single- or double-precision floating-point value, while text often requires more characters even with reduced precision;
providing fixed sizes (which are more difficult to ensure with text formats), allowing easier indexing for parallel I/O.
As binary data are not easily human-readable, additional precautions are necessary, such as providing sufficient metadata for the file to be portable. This can be as simple as providing a fixed-size string with the relevant information and associating a fixed-size description with name, type, and size for each array, or much more advanced depending on the needs. Many users with long experience tend to feel more comfortable with text files, so this advice may seem counterintuitive, but the issues which plagued older binary representations have disappeared, while text files are not as simple as they used to be, with many possible character encodings today. Twenty years ago, some systems such as Cray machines used proprietary floating-point types, while many others already used the IEEE-754 standard for single-, double-, and extended-precision floating-point values. Today, all known HPC systems use the IEEE-754 standard, so this is not an issue anymore.1 Other proponents of text files sometimes cite the impossibility of “repairing” slightly damaged binary files, or the possibility of understanding undocumented text files, but this is not applicable to large files anyway. Be careful if you are using Fortran: by default, “unformatted” files do not just contain the “raw” binary data that are written, but also small sections before and after each record, indicating at least the record’s size (allowing moving forward and backward from one record to another, as mandated by the Fortran standard). Though vendors have improved compatibility over the years, Fortran binary files are not portable by default. To use raw data in Fortran as would be done in C, the additional access='stream' option must be passed to the open statement.
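The record markers just described can be inspected directly. The following Python sketch builds and parses a Fortran-style unformatted sequential record, assuming the common convention of 4-byte little-endian record-length markers before and after the payload (compilers may differ in marker size and byte order, which is exactly why such files are not portable by default):

```python
import io
import struct

def write_fortran_record(f, payload: bytes):
    # Typical layout: [4-byte length][payload][4-byte length].
    marker = struct.pack("<i", len(payload))   # "<i": little-endian 4-byte int
    f.write(marker + payload + marker)

def read_fortran_record(f):
    n, = struct.unpack("<i", f.read(4))
    payload = f.read(n)
    n2, = struct.unpack("<i", f.read(4))
    # Matching markers let readers skip forward/backward record by record.
    assert n == n2, "corrupt record: leading/trailing markers disagree"
    return payload

buf = io.BytesIO()
values = struct.pack("<3d", 1.0, 2.5, -3.0)    # three raw IEEE-754 doubles
write_fortran_record(buf, values)

buf.seek(0)
assert struct.unpack("<3d", read_fortran_record(buf)) == (1.0, 2.5, -3.0)
```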
Some libraries, such as HDF5 [63] and NetCDF [64], handle binary portability issues, such as big-endian/little-endian conversions or floating-point type conversions, and provide a model for simple, low-level data such as sets of arrays. They also allow for parallel I/O based on MPI I/O. Use of HDF5 has become very common on HPC systems, as many other models build on it.
As the data represented by CFD tools are often structured in similar ways, some libraries such as the CFD General Notation System (CGNS) [65], Model for Exchange of Data (MED) [66], Exodus II, or XDMF offer a data model so as to handle I/O on a more abstract level (i.e., coordinates, element connectivity, and field values rather than raw data). MED and CGNS use HDF5 as a low-level layer.2 Exodus II uses NetCDF as a lower-level layer, while XDMF stores arrays in HDF5 files and metadata in XML files. In CFD, CGNS is probably the most used of these standards.
Parallel I/O. As shown in Figure 1, access to disk has by far the highest latency in the memory hierarchy.
There are several ways of handling I/O for parallel codes. The simplest solution is to read or write a separate file for each MPI task. On some file systems, this may be the fastest method, but it leads to the generation of many files on large systems and requires external tools to reassemble data for visualization, unless libraries are used which can assemble data when reading it (such as VTK with its own format). Reassembling data for visualization (or partitioning it on disk) requires additional I/O, so it is best avoided if possible. Another approach is to use “shared” or “flat” files, which are read and written collectively by all tasks. MPI I/O provides functions for this (for example, MPI_File_write_at_all), so the low-level aspects are quite simple, but the calling code must provide the logic by which data are transformed from a flat, partition-independent representation in the file to partition-dependent portions in memory. This approach provides the benefit of allowing checkpointing and restarting on different numbers of nodes and makes parallelism more transparent for the user, though it requires additional work from the developers. The parallel I/O features of libraries such as HDF5 and NetCDF seek to make this easier (and libraries built on them, such as CGNS and MED, can exploit those too).
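The offset logic behind a flat, partition-independent file can be sketched as follows. Here the per-rank writes are serialized in one Python process purely for illustration; a real code would issue them concurrently with collective calls such as MPI_File_write_at_all, but the offset arithmetic is the same.

```python
import struct

N_RANKS, CHUNK = 4, 3            # 4 hypothetical ranks, 3 doubles each

# Each rank owns a contiguous slice of the global array and writes it
# at its own byte offset into one shared file (8 bytes per double).
with open("field.bin", "wb") as f:
    for rank in range(N_RANKS):
        local = [float(rank * CHUNK + i) for i in range(CHUNK)]
        f.seek(rank * CHUNK * 8)
        f.write(struct.pack(f"<{CHUNK}d", *local))

# Any reader (including a restart on a different rank count) sees a single
# flat array, independent of the partitioning that produced it.
with open("field.bin", "rb") as f:
    data = struct.unpack(f"<{N_RANKS * CHUNK}d", f.read())

assert data == tuple(float(i) for i in range(N_RANKS * CHUNK))
```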
Performance of parallel I/O is often highly dependent on the combination of the approach used by a code and the underlying file system. Even on machines with similar systems but different file system tuning parameters, performance may vary. In any case, for good performance on parallel file systems (which should be all shared file systems on modern clusters), it is recommended to avoid funneling all data through a single node, except possibly as a fail-safe mode. Moreover, keeping data fully distributed down to the I/O level is key to handling very large datasets which do not fit in the memory of a single node. Given the difficulty of obtaining portable I/O performance, some libraries like the adaptable I/O system (ADIOS) [67] seek to provide an adaptable approach, allowing hybrid schemes between flat and separate files, with groups of files for process subsets, based on easily tunable XML metadata. ADIOS also provides other features, such as staging in memory (also possible with HDF5), at the cost of another library layer.
Visualization pipeline. The “visualization pipeline” is a common method for describing the visualization process. When the pipeline is run through, an image is calculated from the data using the individual steps of the pipeline.
While the selection of available visualization applications is considerable, visualization techniques in science are generally categorized by the dimensionality of the data fields. A distinction is made between scalar fields (temperature, density, pressure, etc.), vector fields (velocity, electric field, magnetic field, etc.), and tensor fields (diffusion, electrical and thermal conductivity, stress and strain tensors, etc.).
Regardless of the dimensionality of the data fields, any visualization of the whole three-dimensional volume can easily flood the user with too much information, especially on a two-dimensional display or piece of paper. Hence, one of the basic techniques in visualization is the reduction/transformation of data. The most common technique is slicing the volume data with cut planes, which reduces three-dimensional data to two dimensions.
Color information is often mapped onto these cut planes using another well-known basic technique called color mapping. Color mapping is a one-dimensional visualization technique: it maps scalar values to color specifications. The mapping is done by indexing into a color reference table, the lookup table; the scalar values serve as indices into this table, whose entries may also include local transparency. A more general form of the lookup table is the transfer function, which is any expression that maps scalar or multidimensional values to a color specification.
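A minimal Python sketch of color mapping via a lookup table; the blue-to-red table contents and the 256-entry resolution are arbitrary illustrative choices:

```python
def make_lookup_table():
    # Simple blue-to-red ramp with 256 (R, G, B, A) entries.
    return [(i / 255.0, 0.0, 1.0 - i / 255.0, 1.0) for i in range(256)]

def color_map(scalar, smin, smax, table):
    """Map a scalar value to a color by indexing into the lookup table."""
    t = (scalar - smin) / (smax - smin)
    t = min(max(t, 0.0), 1.0)              # clamp out-of-range values
    return table[int(t * (len(table) - 1))]

lut = make_lookup_table()
assert color_map(0.0, 0.0, 100.0, lut) == (0.0, 0.0, 1.0, 1.0)    # pure blue
assert color_map(100.0, 0.0, 100.0, lut) == (1.0, 0.0, 0.0, 1.0)  # pure red
```

A transfer function generalizes this: instead of a fixed table, any function of the (possibly multidimensional) data value may return the color and opacity.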
Color mapping is not limited to 2D objects like cut planes; it is also often used for 3D objects like isosurfaces. Isosurfaces belong to the general visualization techniques for scalar data fields, on which we focus in the following.
Visualization of scalar fields. For the visualization of three-dimensional scalar fields, there are two basic visualization techniques: isosurface extraction and volume rendering (Figure 14).
Isosurface extraction is a powerful tool for the investigation of volumetric scalar fields. An isosurface in a scalar volume is a surface on which the data value is constant, separating areas of higher and lower value. Given the physical or biological significance of the scalar value, the position of an isosurface and its relationship to other adjacent isosurfaces can reveal the essential structure of the scalar field.
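The core operation of common isosurface extraction algorithms such as marching cubes is locating, by linear interpolation, the point where the isovalue crosses a cell edge. A minimal Python sketch:

```python
def iso_crossing(p0, p1, v0, v1, iso):
    """Linearly interpolate the point where the isovalue `iso` crosses the
    edge from p0 (scalar value v0) to p1 (scalar value v1); this is the
    basic building block of marching-cubes-style surface extraction."""
    t = (iso - v0) / (v1 - v0)             # fractional position along the edge
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Isovalue 5.0 on an edge from value 2.0 at (0,0,0) to value 8.0 at (1,0,0):
pt = iso_crossing((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0, 8.0, 5.0)
assert pt == (0.5, 0.0, 0.0)               # crossing lies halfway along the edge
```

Repeating this per intersected edge of every cell, and connecting the crossing points into triangles, yields the full isosurface mesh.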
The second fundamental visualization technique for scalar fields is volume rendering. Volume rendering is a method of rendering three-dimensional volumetric scalar data into two-dimensional images without the need to calculate intermediate geometries. The individual values in the dataset are made visible by selecting a transfer function that maps the data to optical properties such as color and opacity; these are then projected and blended together to form an image. For a meaningful visualization, a transfer function must be found that highlights interesting regions and characteristics of the data; finding a good transfer function is crucial for creating an informative image. Multidimensional transfer functions enable a more precise separation of the important from the unimportant. They are therefore widely used in volume rendering for medical imaging and for the scientific visualization of complex three-dimensional scalar fields (Figure 14).
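The projection-and-blending step can be illustrated with front-to-back alpha compositing along a single ray; the grayscale transfer function below is an arbitrary example mapping high scalar values to bright, semi-opaque samples:

```python
def transfer_function(value):
    """Illustrative transfer function: map a scalar sample in [0, 1] to
    (r, g, b, alpha), rendering high values bright and more opaque."""
    a = min(max(value, 0.0), 1.0)
    return (a, a, a, a * 0.5)

def composite_ray(samples):
    """Front-to-back alpha compositing of the samples along one ray."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for v in samples:
        r, g, b, a = transfer_function(v)
        w = (1.0 - alpha) * a              # weight by remaining transparency
        color = [c + w * s for c, s in zip(color, (r, g, b))]
        alpha += w
        if alpha >= 0.999:                 # early ray termination
            break
    return color, alpha

color, alpha = composite_ray([0.2, 0.8, 0.5])
assert 0.0 < alpha < 1.0                   # ray is partially opaque
```

Doing this for one ray per pixel produces the final image, with no intermediate geometry ever constructed.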
Visualization of flame simulation results (left) using slicing and color mapping in the background, and isosurface extraction and volume rendering for the flame structure. Visualization of an inspiratory flow in the human nasal cavity (right) using streamlines colored by the velocity magnitude [68].
Visualization of vector fields. The visualization of vector field data is challenging because no natural representation exists that can visually convey large amounts of three-dimensional directional information. Visualization methods for three-dimensional vector fields must therefore reconcile the opposing goals of an informative and a clear representation of a large amount of directional information. The techniques relevant for the visual analysis of vector fields can be categorized as follows.
The simplest representations of discrete vector information are oriented glyphs. Glyphs are graphical symbols, ranging from simple arrows to complex graphical icons, that encode directional information and additional derived quantities such as rotation.
Streamlines provide a natural way to follow a vector dataset. From a user-selected starting position, numerical integration yields a curve that follows the vector field and can easily be displayed. Streamlines can be calculated quickly and provide an intuitive representation of the local flow behavior. Since streamlines cannot fill space without visual clutter, the task of selecting a suitable set of starting points is crucial for effective visualization. A limitation of streamline-based flow visualizations concerns the difficult interpretation of the depth and relative position of the curves in three-dimensional space. One solution is to create artificial lighting effects that accentuate the curvature and support the user's depth perception.
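A streamline trace is simply the numerical integration of a seed point through the vector field. The following Python sketch uses an explicit Euler scheme on a hypothetical rigid-rotation field (production tools typically use higher-order schemes such as Runge-Kutta, which drift far less):

```python
def velocity(p):
    """Hypothetical steady 2D vector field: rigid rotation about the origin."""
    x, y = p
    return (-y, x)

def streamline(seed, dt=0.01, steps=100):
    """Trace a streamline from a user-selected seed point by explicit
    Euler integration of the velocity field."""
    pts = [seed]
    p = seed
    for _ in range(steps):
        vx, vy = velocity(p)
        p = (p[0] + dt * vx, p[1] + dt * vy)
        pts.append(p)
    return pts

pts = streamline((1.0, 0.0))
# In a pure rotation the exact streamline is the unit circle; the Euler
# trace stays close to it for small dt.
r2 = pts[-1][0] ** 2 + pts[-1][1] ** 2
assert abs(r2 - 1.0) < 0.05
```

The seed-selection problem mentioned above corresponds to choosing the set of starting points passed to such a tracer.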
Stream surfaces represent a significant improvement over individual streamlines for the exploration of three-dimensional vector fields, as they provide a better understanding of depth and spatial relationships. Conceptually, they correspond to the surface spanned by an arbitrary starting curve that is advected along the flow. The standard method for stream surface integration is Hultquist's advancing-front algorithm [69]. A special type of surface extraction is based on the finite-time Lyapunov exponent (FTLE) [70], which enables the visualization of significant coherent structures in the flow.
Texture-based flow visualization methods are a unique means of addressing the limitations of representations based on a limited set of streamlines. They effectively convey the essential patterns of a vector field without requiring the lengthy interpretation of individual streamlines. Their main application is the visualization of flow structures defined on a plane or a curved surface. The best known of these methods is line integral convolution (LIC), proposed by Cabral and Leedom [71]. This work has inspired a number of other methods; in particular, improvements have been proposed, such as texture-based visualization of time-dependent flows or of flows defined on arbitrary surfaces. Some attempts have been made to extend the method to three-dimensional flows.
Furthermore, vector fields can be visualized using topological approaches, which have established themselves as a reference method for the characterization and visualization of flow structures. Topology offers an abstract representation of the flow and its global structure, for example, sinks, sources, and saddle points. A prominent example is the Morse-Smale complex, which is constructed based on the gradient of a given scalar field [72].
Visualization of tensor fields. Compared to the visualization of vector fields, the state of the art in the visualization of tensor fields is less advanced, and it remains an active area of research. Simple techniques for tensor visualization represent the three eigenvectors using color, vectors, streamlines, or glyphs.
In situ visualization. According to the currently most common processing paradigm for analyzing and visualizing data on supercomputers, the simulation results are stored on disk and then reloaded and analyzed/visualized after the simulation. However, with each generation of supercomputers, memory and CPU performance grow faster than the access speed and capacity of disks. As a result, relative I/O performance continuously decreases compared to the rest of the supercomputer. This trend hinders the traditional processing paradigm.
One solution is the coupling of simulations with real-time analysis/visualization, called in situ visualization. In situ visualization necessarily starts before the data producer finishes. The key aspect of this real-time processing is that data are used for visualization/analysis while still in memory. This type of visualization/analysis can extract and preserve important information from the simulation that would otherwise be lost as a result of aggressive data reduction.
Various interfaces for the coupling of simulation and analysis tools have been developed in recent years; for the scientific visualization of CFD data, ParaView/Catalyst [73] and VisIt/libSim [74] deserve particular mention. These interfaces provide a tight coupling between the simulation and the visualization and integrate large parts of the visualization libraries into the program code of the simulation. Recent developments [75, 76] favor methods for loose coupling, as tight coupling has proved inflexible and fault-prone. Here, the simulation program and the visualization are independent applications that only exchange certain data via clearly defined interfaces. This enables independent development of the simulation code and the visualization/analysis code.
Part of the research developments and results presented in this chapter were funded by: the European Union’s Horizon 2020 Programme (2014–2020) and from Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP) under the HPC4E Project, grant agreement 689772; EoCoE, a project funded by the European Union Contract H2020-EINFRA-2015-1-676629; PRACE Type C and Type A projects.
Research methodology is the path through which researchers conduct their research: it shows how they formulate their problem and objective and present their results from the data obtained during the study period. This research design and methodology chapter shows how the research outcome will be obtained in line with the objective of the study. It hence discusses the research methods used during the research process, from the research strategy through to the dissemination of results. In particular, the author outlines the research strategy; the research design; the research methodology; the study area; the data sources (primary and secondary); population consideration and sample size determination (for the questionnaires and for the workplace site exposure measurements); the data collection methods, including workplace site observation, desk review, questionnaires, expert opinion, and workplace site exposure measurement, together with a pretest of the data collection tools; the methods of data analysis (quantitative and qualitative) and the data analysis software; the reliability and validity analysis of the quantitative data; data quality management; inclusion criteria; ethical considerations; and the dissemination of results and approaches to their utilization. To satisfy the objectives of the study, qualitative and quantitative research methods were combined; the study used these mixed strategies because data were obtained from all types of sources during the study period.
Therefore, the purpose of this methodology is to satisfy the research plan and target devised by the researcher.
The research design is intended to provide an appropriate framework for a study. A very significant decision in the research design process is the choice of research approach, since it determines how relevant information for a study will be obtained; the research design process, however, involves many interrelated decisions [1].
This study employed a mixed-methods approach. The first part of the study consisted of a series of well-structured questionnaires (for management, employees' representatives, and technicians of the industries) and semi-structured interviews with key stakeholders (government bodies, ministries, and industries) in participating organizations. In addition, employees were interviewed about how they perceive the safety and health of their workplace, and field observations were undertaken at the selected industrial sites.
Hence, this study employs a descriptive research design to determine the effects of an occupational safety and health management system on employee health, safety, and property damage for selected manufacturing industries. Saunders et al. [2] and Miller [3] state that descriptive research portrays an accurate profile of persons, events, or situations. This design offers the researchers a profile of the relevant aspects of the phenomena of interest from individual, organizational, and industry-oriented perspectives. It therefore enabled the researchers to gather data from a wide range of respondents on the impact of safety and health on manufacturing industries in Ethiopia, and helped in analyzing how the responses obtained bear on workplace safety and health in the manufacturing industries. The overall research design and flow process are depicted in Figure 1.
Research methods and processes (author design).
To address the key research objectives, this research used both qualitative and quantitative methods and a combination of primary and secondary sources. The qualitative data support the quantitative data analysis and results. The results obtained are triangulated, since the researcher utilized both qualitative and quantitative data types in the analysis. The study area, data sources, and sampling techniques are discussed in this section.
According to Fraenkel and Warren [4], population refers to the complete set of individuals (subjects or events) having common characteristics in which the researcher is interested. The population of the study was determined based on random sampling. Data collection was conducted from March 07, 2015 to December 10, 2016, in selected manufacturing industries in and around Addis Ababa city. The manufacturing companies were selected based on their number of employees, year of establishment, the potential accidents prevailing, and the manufacturing industry type, even though all criteria were difficult to satisfy.
Primary data were obtained from the original sources of information. They are more reliable and allow more confident decision-making, since the analysis is in direct contact with the occurrence of the events. The primary data sources are the industries' working environments (through observation, pictures, and photographs) and industry employees, both management and lower-level workers (through interviews, questionnaires, and discussions).
A desk review was conducted to collect data from various secondary sources. This includes reports and project documents from each manufacturing sector (focusing on the medium and large scale). Secondary data were obtained from the literature on OSH, and the remaining data came from the companies' manuals, reports, and some management documents included under the desk review. Reputable journals, books, articles, periodicals, proceedings, magazines, newsletters, newspapers, websites, and other sources on the manufacturing industrial sectors were considered. Data obtained from existing working documents, manuals, procedures, reports, statistical data, policies, regulations, and standards were also taken into account for the review.
In general, for this research study, the desk review was completed to this end and was refined and updated based on the manuals and documents obtained from the selected companies.
The study population consisted of employees of manufacturing industries in and around Addis Ababa city, where the most representative manufacturing industrial clusters are found. To select a representative population, the industry types considered most prone to accidents were chosen using random and purposive sampling. The population was drawn from the textile, leather, metal, chemical, and food manufacturing industries. A total of 189 industries from the government's priority areas responded to the questionnaire survey. Random sample sizes and disproportionate methods were used: 80 respondents were from wood, metal, and iron works; 30 from food, beverage, and tobacco products; 50 from leather, textile, and garments; 20 from chemical and chemical products; and 9 from the remaining clusters of manufacturing industries.
Simple random sampling and purposive sampling methods were used to select the representative manufacturing industries and respondents for the study. Simple random sampling ensures that each member of the population has an equal chance of being selected. A sample size determination procedure was used to obtain optimal and reasonable information. In this study, both probability (simple random) and nonprobability (convenience, quota, purposive, and judgmental) sampling methods were used, as the nature of the industries varies. This was possible because of the characteristics of the data sources, which permitted the researchers to follow multiple methods; it helped the analysis triangulate the data obtained and increased the reliability of the research outcome and the decisions based on it. The companies' establishment time and duration of operation, the number of employees and their proportions, the ownership type (government or private), the type of manufacturing industry/production, the types of resources used at work, and the location in and around the city were some of the criteria for the selections.
The determination of the sample size was adopted from the Daniel [5] and Cochran [6] formula. The formula used, for an unknown population size, is given in Eq. (1) as

n = Z²P(1 − P)/d² (1)
where n = sample size, Z = statistic for a level of confidence, P = expected prevalence or proportion (in proportion of one; if 50%, P = 0.5), and d = precision (in proportion of one; if 6%, d = 0.06). Z statistic (Z): for the level of confidence of 95%, which is conventional, Z value is 1.96. In this study, investigators present their results with 95% confidence intervals (CI).
The expected sample size was 267 manufacturing industries at a margin of error of 6% for a 95% confidence interval. However, only 189 responses were usable for the analysis after rejecting those with too many missing values; hence, the actual data collection resulted in a 71% response rate. The sample of 267 was assumed to be satisfactory and representative for the data analysis.
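The reported numbers can be checked against Cochran's formula directly; the short Python computation below reproduces the expected sample size of 267 and the 71% response rate:

```python
import math

# Cochran's formula for an unknown population size: n = Z^2 * P * (1 - P) / d^2
Z = 1.96   # statistic for a 95% level of confidence
P = 0.5    # expected proportion (50%)
d = 0.06   # precision (6% margin of error)

n = Z**2 * P * (1 - P) / d**2
assert math.ceil(n) == 267          # expected sample size reported in the text

response_rate = 189 / 267           # usable responses over expected sample
assert round(response_rate * 100) == 71
```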
The sample size for the experimental exposure measurements of the physical work environment was based on the physical data prepared for the questionnaires and respondents. Workplaces with positive responses were considered for measurement of the physical exposure factors affecting health and causing disease, such as noise intensity, light intensity, pressure/stress, vibration, temperature (coldness or hotness), and dust particles, at 20 workplace sites. The selection combined random sampling with a purposive method. The measurement of the exposure factors was done in collaboration with the Addis Ababa city Administration and Oromia Bureau of Labour and Social Affair (AACBOLSA), from which some measuring instruments were obtained.
Data collection methods focused on the following basic techniques: secondary and primary data collection covering both qualitative and quantitative data, as defined in the previous section. The data collection mechanisms were devised and prepared with their proper procedures.
Primary data sources are qualitative and quantitative. The qualitative sources are field observation, interviews, and informal discussions, while the quantitative sources are survey questionnaires and interview questions. The next sections elaborate how the data were obtained from the primary sources.
Observation is an important aspect of science. Observation is tightly connected to data collection, for which there are different sources: documentation, archival records, interviews, direct observations, and participant observations. Observational research findings are considered strong in validity because the researcher is able to collect in-depth information about a particular behavior. In this dissertation, the researchers used the observation method as one tool for collecting information and data, both before the questionnaire design and after the start of the research. The researcher made more than 20 specific observations of manufacturing industries in the study areas. During the observations, the researcher gained a deeper understanding of the working environment, the different sections of the production systems, and the OSH practices.
The interview used is a loosely structured, qualitative, in-depth interview with people considered particularly knowledgeable about the topic of interest. The semi-structured interview is usually conducted in a face-to-face setting, which permits the researcher to seek new insights, ask questions, and assess phenomena from different perspectives. It let the researcher probe in depth the influential factors of the present working environment and their consequences. It provided opportunities for refining data collection efforts and examining specialized systems or processes. It was used when the researcher faced limitations in written records or published documents, or wanted to triangulate the data obtained from other primary and secondary data sources.
This dissertation thus also follows a qualitative approach, conducting interviews. The advantage of using interviews as a method is that they allow respondents to raise issues that the interviewer may not have expected. All interviews with employees, management, and technicians were conducted by the corresponding researcher on a face-to-face basis at the workplace, and all were recorded and transcribed.
The main tool for gaining primary information in practical research is the questionnaire, due to the fact that the researcher can decide on the sample and the types of questions to be asked [2].
In this dissertation, each respondent was requested to reply to an identical list of questions, presented in mixed order so that bias was prevented. Initially, the questionnaire items were coded and shuffled across the specific topics based on a uniform structure. Consequently, the questionnaire produced the valuable data required to achieve the dissertation objectives.
The questionnaires were based on a five-point Likert scale. Responses were given to each statement on that scale, with 1 = “strongly disagree” and 5 = “strongly agree.” The responses were summed to produce a score for each measure.
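The summing step described above can be sketched in a few lines; the item names and response values below are invented for illustration, not taken from the study's instrument.

```python
# Five-point Likert responses for one respondent on one measure:
# 1 = "strongly disagree" ... 5 = "strongly agree".
# Item names and values are hypothetical examples.
responses = {"item1": 4, "item2": 5, "item3": 3, "item4": 4}

# The measure's score is the sum of its item responses.
score = sum(responses.values())
print(score)  # 16
```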
Data were also obtained from expert opinion on the pairwise comparison of knowledge, management, collaboration, and technology utilization, including their sub-factors. These data were used to prioritize OSH-improving factors and to support decision-making. The factors were prioritized on the Saaty scale (1–9) and then converted to fuzzy values using the triangular fuzzy sets reported in previous research [7].
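The Saaty-to-fuzzy conversion can be illustrated as follows. The ±1 spread used here is only one common convention for building triangular fuzzy numbers from crisp Saaty judgments; the study's actual conversion table from [7] may differ.

```python
def saaty_to_tfn(n):
    """Map a crisp Saaty judgment (1-9) to a triangular fuzzy number (l, m, u).

    Illustrative convention: spread the judgment by +/-1, clipped to [1, 9].
    """
    if not 1 <= n <= 9:
        raise ValueError("Saaty judgments lie on the 1-9 scale")
    return (max(1, n - 1), n, min(9, n + 1))

print(saaty_to_tfn(1))  # (1, 1, 2)
print(saaty_to_tfn(5))  # (4, 5, 6)
print(saaty_to_tfn(9))  # (8, 9, 9)
```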
The researcher also measured dust, vibration, heat, pressure, light, and noise in the workplace to establish the level of each variable. The planned and actual coverage of the primary data sources is compared in Table 1.
Planned versus actual coverage of the survey.
The response rate for the proposed data sources was good, and the pilot test also confirmed the reliability of the questionnaires. Interviews/discussions yielded an 87% response rate, the survey questionnaire a 71% response rate, and the field observations a 90% response rate for the whole data analysis process. Hence, the quality of the data was not compromised.
This response rate can be considered representative for studies of organizations. A response rate of 30% is considered acceptable [8], and Saunders et al. [2] argue that a 20% response rate is acceptable for a scale-response questionnaire. A low response rate should not discourage researchers, because a great deal of published research also achieves low response rates. Hence, the response rate of this study is acceptable, indeed very good, for meeting the study objectives.
A pretest of the questionnaires, interviews, and tools was conducted to check whether the tool content was valid in the sense of being understood by the respondents. Content validity (the questions cover the target without excluding important points), internal validity (the questions answer the outcomes the researcher targets), and external validity (the results can be generalized from the survey sample to the whole population) were thereby addressed and confirmed in this pilot test prior to the start of the main data collection. Following the feedback process, a few minor changes were made to the originally designed data collection tools. The pilot test of the questionnaire was run on a sample of 10 respondents selected randomly from the target sectors and experts.
Secondary data are data collected by someone other than the user. These sources give insight into the current state of the art in the research area and help expose the research gap the researcher needs to fill. Secondary data may come from internal and external sources of information covering a wide range of areas.
Literature/desk review and industry documents and reports: To achieve the dissertation’s objectives, the researcher conducted an extensive review of the literature and of company documents and reports, both online and offline. From a methodological point of view, literature reviews can be understood as content analysis, in which quantitative and qualitative aspects are combined to assess structural (descriptive) as well as content criteria.
A literature search was conducted using database sources such as MEDLINE; Emerald; Taylor and Francis publications; EMBASE (medical literature); PsycINFO (psychological literature); Sociological Abstracts (sociological literature); accident prevention journals; US Statistics of Labor and the European Safety and Health database; ABI Inform; Business Source Premier (business/management literature); EconLit (economic literature); Social Service Abstracts (social work and social service literature); and other related materials. The search strategy focused on articles or reports that measure one or more of the dimensions in the research OSH model framework, and was based on a framework and measurement filter strategy developed by the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) group. Articles unrelated to the research model and objectives were excluded during screening. Prior to screening, the researcher (principal investigator) reviewed a sample of more than 2000 articles, websites, reports, and guidelines to determine whether each should be included for further review or rejected. Discrepancies were identified and resolved before the review of the main group of more than 300 articles commenced. After exclusion based on title, keywords, and abstract, the remaining articles were reviewed in detail, and information was extracted on the instrument used to assess each dimension of research interest. A complete list of items was then collated for each research target or objective and reviewed to identify any missing elements.
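The title/keyword screening step can be sketched as a simple filter. The search terms and records below are invented for illustration; the actual screening relied on human judgment against the full inclusion criteria.

```python
# Hypothetical screening terms drawn from the research model's vocabulary.
TERMS = {"occupational", "safety", "health", "osh"}

def relevant(record):
    """Keep a record if its title or keywords mention any screening term."""
    text = (record["title"] + " " + record["keywords"]).lower()
    return any(term in text for term in TERMS)

# Two invented database records: one on-topic, one off-topic.
records = [
    {"title": "OSH management in manufacturing", "keywords": "safety, knowledge"},
    {"title": "Retail pricing strategies", "keywords": "marketing"},
]

kept = [r["title"] for r in records if relevant(r)]
print(kept)  # ['OSH management in manufacturing']
```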
Data analysis followed the procedures listed in the following sections and answered the basic questions raised in the problem statement. The experiences of developed and developing countries with OSH in manufacturing industries were analyzed in detail, discussed, compared and contrasted, and synthesized.
Quantitative data were obtained from the primary and secondary sources discussed above in this chapter. They were analyzed according to data type using Excel, SPSS 20.0, Word, and other tools; this part of the analysis focuses on numerical/quantitative data.
Before analysis, the responses were coded. To ease the analysis, the data obtained from the questionnaires were coded in SPSS 20.0. This task involved identifying, classifying, and assigning a numeric or character symbol to the data, which was done in one way only: pre-coding [9, 10]. In this study, all responses were pre-coded; each selection from the list of responses was given a corresponding number. This process was applied to every question that needed such treatment. Upon completion, the data were entered into the statistical analysis software package SPSS version 20.0 on Windows 10 for the next steps.
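A minimal sketch of such pre-coding, assuming the standard five Likert labels; the codebook below is illustrative, not the study's actual coding frame.

```python
# Hypothetical pre-coded codebook: each Likert label maps to a fixed
# numeric code before entry into the statistics package.
CODEBOOK = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(raw):
    """Normalize raw label strings and look up their numeric codes."""
    return [CODEBOOK[r.strip().lower()] for r in raw]

print(code_responses(["Agree", "strongly agree", "Neutral"]))  # [4, 5, 3]
```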
During data analysis, the data were explored with descriptive statistics and graphical analysis. The analysis included exploring the relationships between variables and comparing groups to see how they affect each other, using cross tabulation/chi-square, correlation, factor analysis, and nonparametric statistics.
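As an illustration of the cross tabulation/chi-square step, the following computes a chi-square statistic of independence by hand on an invented 2 × 2 table (in practice a package such as SPSS or SciPy does this, degrees of freedom and p-value included).

```python
# Invented cross tabulation: rows = worker groups, columns = outcome counts.
observed = [[30, 10],   # e.g. trained workers: injured / not injured
            [20, 40]]   # untrained workers: injured / not injured

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 2))  # 16.67
```

A statistic this large on one degree of freedom would indicate a significant association between group and outcome.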
Qualitative data analysis was used to triangulate the quantitative analysis. The interview, observation, and report records were used to support the findings, and this analysis was incorporated into the quantitative discussion of results in the data analysis sections.
The data were entered and analyzed using SPSS 20.0 on Windows 10. The SPSS-supported analysis contributed much to the findings, including the validation and correctness of the results; the software analyzed and compared the results of the different variables used in the research questionnaires. Excel was also used to draw figures and calculate some analytical solutions.
The reliability of a measurement specifies the extent to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument [8]. In the reliability analysis, the stability and consistency of the data were checked, along with the accuracy and precision of the measurement procedure. Reliability has numerous definitions and approaches, but in most settings the concept comes down to consistency [8]: a measurement is reliable when it produces consistent results during the data analysis procedure. Reliability was determined through Cronbach’s alpha, as shown in Table 2.
Internal consistency and reliability test of questionnaires items.
K stands for knowledge; M, management; T, technology; C, collaboration; P, policy, standards, and regulation; H, hazards and accident conditions; PPE, personal protective equipment.
Cronbach’s alpha is a measure of internal consistency, i.e., how closely related a set of items are as a group [11], and is considered a measure of scale reliability. Internal consistency is most often measured by the Cronbach’s alpha value, and a reliability coefficient of 0.70 or above is considered “acceptable” in most research situations [12]. In this study, after deleting 13 items, the reliability coefficient for the remaining 76 Likert-scale items was 0.964, with the coefficients for the individual groupings shown in Table 2. The instruments were thus internally consistent by the Cronbach’s alpha test: Table 2 shows that the reliability of the seven major instruments falls in the acceptable range for this research.
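Cronbach's alpha can be computed directly from the item responses as k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal sketch on invented toy data (the study itself used SPSS):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list of responses per questionnaire item, all the same length
    (one entry per respondent).
    """
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent total scores
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Toy data: 3 items answered by 4 respondents (invented for illustration).
items = [[4, 5, 3, 4],
         [4, 4, 3, 5],
         [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 3))  # 0.818
```

With real data, an alpha of 0.818 like the toy value above, or the 0.964 reported in this study, would exceed the 0.70 acceptability threshold.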
Face validity, as defined by Babbie [13], is an indicator that makes an instrument seem a reasonable measure of some variable; it is the subjective judgment that the instrument measures what it intends to measure in terms of relevance [14]. The researcher therefore ensured, when developing the instruments for this study, that uncertainties were eliminated by using appropriate words and concepts to enhance clarity and general suitability [14]. Furthermore, the researcher submitted the instruments to the research supervisor and the joint supervisor, both occupational health experts, to ensure the validity of the measuring instruments and determine whether they could be considered valid on face value.
In this study, the researcher was guided by the reviewed literature on compliance with occupational health and safety conditions and on data collection methods before developing the measuring instruments. In addition, the pretest study conducted prior to the main study helped the researcher avoid uncertainties in the content of the data collection instruments. A thorough inspection of the measuring instruments by the statistician, the researcher’s supervisor, and the joint experts, to ensure that all concepts pertaining to the study were included, further enriched the instruments.
The data collectors were briefed on how to approach companies, and many of the questionnaires were distributed through MSc students at the Addis Ababa Institute of Technology (AAiT) and through experienced experts from the manufacturing industries; continual discussion with them kept the data quality reliable. The questionnaire was pretested on 10 workers to assure data quality and improve the data collection tools. Data collection was supervised to see how the collectors handled the questionnaire, and each completed questionnaire was checked for completeness, accuracy, clarity, and consistency on a daily basis, either face to face or by phone/email. Questionnaires of poor quality were rejected during screening. Of the 267 questionnaires planned, 189 were returned. Finally, the data were analyzed by the principal investigator.
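The questionnaire response rate follows directly from these counts:

```python
# 189 of the 267 planned questionnaires were returned.
planned, returned = 267, 189
rate = returned / planned
print(f"{rate:.0%}")  # 71%
```

This matches the 71% survey questionnaire response rate reported above.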
The data were collected from company representatives with knowledge of OSH. Articles written in English and Amharic were included in this study. Database records were included if they related to the OSH area, covering intervention methods, accident identification methods, the impact of occupational accidents, types of occupational injuries/diseases, and the impact of occupational accidents and diseases on company productivity and costs, and if they used at least one form of feedback mechanism. No specific time period was chosen, in order to access all available published papers. Questionnaire statements that duplicated other statements in the questionnaire were excluded from the data analysis.
Ethical clearance was obtained from the School of Mechanical and Industrial Engineering, Institute of Technology, Addis Ababa University. Official letters were written from the School of Mechanical and Industrial Engineering to the respective manufacturing industries. The purpose of the study was explained to the study subjects, who were told that the information they provided would be kept confidential and that their identities would not be revealed in association with it. Informed consent was secured from each participant. Where the assessment found a bad working environment, feedback will be given to the manufacturing industries involved in the study, and there is a plan to give a copy of the results to the respective manufacturing industries’ and ministries’ offices. Respondents’ privacy was protected: their responses were not individually analyzed or included in the report.
The results of this study will be presented to the School of Mechanical and Industrial Engineering, AAiT, Addis Ababa University. They will also be communicated to the Ethiopian manufacturing industries, the Ministry of Labor and Social Affairs, the Ministry of Industry, and the Ministry of Health, from which the data were collected. The results will also be made available through publication and online presence on Google Scholar. To this end, about five articles have been published and disseminated worldwide.
The research methodology and design indicate the overall flow of the research for this study, together with the data sources and data collection methods used. The overall research strategy and framework are laid out across the whole research process, from problem formulation to validation, including all parameters. This lays a foundation for how a research methodology can be devised and framed: researchers can treat it as a sample and model for data collection and for the research process, from problem statement to findings. In particular, this research flow should help researchers who are new to the research environment and methodology.
The author declares that there is no conflict of interest.
\n\nPolicy last updated: 2016-06-08
Hossain"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"9157",title:"Neurodegenerative Diseases",subtitle:"Molecular Mechanisms and Current Therapeutic Approaches",isOpenForSubmission:!1,hash:"bc8be577966ef88735677d7e1e92ed28",slug:"neurodegenerative-diseases-molecular-mechanisms-and-current-therapeutic-approaches",bookSignature:"Nagehan Ersoy Tunalı",coverURL:"https://cdn.intechopen.com/books/images_new/9157.jpg",editedByType:"Edited by",editors:[{id:"82778",title:"Ph.D.",name:"Nagehan",middleName:null,surname:"Ersoy Tunalı",slug:"nagehan-ersoy-tunali",fullName:"Nagehan Ersoy Tunalı"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"9961",title:"Data Mining",subtitle:"Methods, Applications and Systems",isOpenForSubmission:!1,hash:"ed79fb6364f2caf464079f94a0387146",slug:"data-mining-methods-applications-and-systems",bookSignature:"Derya Birant",coverURL:"https://cdn.intechopen.com/books/images_new/9961.jpg",editedByType:"Edited by",editors:[{id:"15609",title:"Dr.",name:"Derya",middleName:null,surname:"Birant",slug:"derya-birant",fullName:"Derya Birant"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"8686",title:"Direct Torque Control Strategies of Electrical Machines",subtitle:null,isOpenForSubmission:!1,hash:"b6ad22b14db2b8450228545d3d4f6b1a",slug:"direct-torque-control-strategies-of-electrical-machines",bookSignature:"Fatma Ben Salem",coverURL:"https://cdn.intechopen.com/books/images_new/8686.jpg",editedByType:"Edited by",editors:[{id:"295623",title:"Associate Prof.",name:"Fatma",middleName:null,surname:"Ben Salem",slug:"fatma-ben-salem",fullName:"Fatma Ben 
Salem"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}}]},subject:{topic:{id:"38",title:"Horticulture",slug:"horticulture",parent:{title:"Agricultural and Biological Sciences",slug:"agricultural-and-biological-sciences"},numberOfBooks:19,numberOfAuthorsAndEditors:560,numberOfWosCitations:561,numberOfCrossrefCitations:424,numberOfDimensionsCitations:1043,videoUrl:null,fallbackUrl:null,description:null},booksByTopicFilter:{topicSlug:"horticulture",sort:"-publishedDate",limit:12,offset:0},booksByTopicCollection:[{type:"book",id:"10165",title:"Legume Crops",subtitle:"Prospects, Production and Uses",isOpenForSubmission:!1,hash:"5ce648cbd64755df57dd7c67c9b17f18",slug:"legume-crops-prospects-production-and-uses",bookSignature:"Mirza Hasanuzzaman",coverURL:"https://cdn.intechopen.com/books/images_new/10165.jpg",editedByType:"Edited by",editors:[{id:"76477",title:"Dr.",name:"Mirza",middleName:null,surname:"Hasanuzzaman",slug:"mirza-hasanuzzaman",fullName:"Mirza Hasanuzzaman"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"8152",title:"Modern Fruit Industry",subtitle:null,isOpenForSubmission:!1,hash:"4ea4aff1aa2988e552a7a8ff3384c59a",slug:"modern-fruit-industry",bookSignature:"Ibrahim Kahramanoglu, Nesibe Ebru Kafkas, Ayzin Küden and Songül Çömlekçioğlu",coverURL:"https://cdn.intechopen.com/books/images_new/8152.jpg",editedByType:"Edited by",editors:[{id:"178185",title:"Ph.D.",name:"Ibrahim",middleName:null,surname:"Kahramanoglu",slug:"ibrahim-kahramanoglu",fullName:"Ibrahim Kahramanoglu"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"7014",title:"Horticultural 
Crops",subtitle:null,isOpenForSubmission:!1,hash:"62d269dbecb5881a63b040c9ec933e9d",slug:"horticultural-crops",bookSignature:"Hugues Kossi Baimey, Noureddine Hamamouch and Yao Adjiguita Kolombia",coverURL:"https://cdn.intechopen.com/books/images_new/7014.jpg",editedByType:"Edited by",editors:[{id:"201690",title:"Dr.",name:"Hugues",middleName:null,surname:"Kossi Baimey",slug:"hugues-kossi-baimey",fullName:"Hugues Kossi Baimey"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"6996",title:"Strawberry",subtitle:"Pre- and Post-Harvest Management Techniques for Higher Fruit Quality",isOpenForSubmission:!1,hash:"dc740162f400a4dd3e9377a140424917",slug:"strawberry-pre-and-post-harvest-management-techniques-for-higher-fruit-quality",bookSignature:"Toshiki Asao and Md Asaduzzaman",coverURL:"https://cdn.intechopen.com/books/images_new/6996.jpg",editedByType:"Edited by",editors:[{id:"106510",title:"Dr.",name:"Toshiki",middleName:null,surname:"Asao",slug:"toshiki-asao",fullName:"Toshiki Asao"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"6492",title:"Vegetables",subtitle:"Importance of Quality Vegetables to Human Health",isOpenForSubmission:!1,hash:"c9b3988b64bc40ab0eb650fe8a2b2493",slug:"vegetables-importance-of-quality-vegetables-to-human-health",bookSignature:"Md. 
Asaduzzaman and Toshiki Asao",coverURL:"https://cdn.intechopen.com/books/images_new/6492.jpg",editedByType:"Edited by",editors:[{id:"171564",title:"Dr.",name:"Md",middleName:null,surname:"Asaduzzaman",slug:"md-asaduzzaman",fullName:"Md Asaduzzaman"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"6203",title:"Potassium",subtitle:"Improvement of Quality in Fruits and Vegetables Through Hydroponic Nutrient Management",isOpenForSubmission:!1,hash:"b4208bd87e8d6c2569ebdda0e4868ad2",slug:"potassium-improvement-of-quality-in-fruits-and-vegetables-through-hydroponic-nutrient-management",bookSignature:"Md Asaduzzaman and Toshiki Asao",coverURL:"https://cdn.intechopen.com/books/images_new/6203.jpg",editedByType:"Edited by",editors:[{id:"171564",title:"Dr.",name:"Md",middleName:null,surname:"Asaduzzaman",slug:"md-asaduzzaman",fullName:"Md Asaduzzaman"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"5972",title:"Postharvest Handling",subtitle:null,isOpenForSubmission:!1,hash:"68eb74526fe2b5a328ad537425137a0d",slug:"postharvest-handling",bookSignature:"Ibrahim Kahramanoglu",coverURL:"https://cdn.intechopen.com/books/images_new/5972.jpg",editedByType:"Edited by",editors:[{id:"178185",title:"Ph.D.",name:"Ibrahim",middleName:null,surname:"Kahramanoglu",slug:"ibrahim-kahramanoglu",fullName:"Ibrahim Kahramanoglu"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"6026",title:"Active Ingredients from Aromatic and Medicinal Plants",subtitle:null,isOpenForSubmission:!1,hash:"f5988dd981b01f4497052300329105b2",slug:"active-ingredients-from-aromatic-and-medicinal-plants",bookSignature:"Hany A. 
El-Shemy",coverURL:"https://cdn.intechopen.com/books/images_new/6026.jpg",editedByType:"Edited by",editors:[{id:"54719",title:"Prof.",name:"Hany",middleName:null,surname:"El-Shemy",slug:"hany-el-shemy",fullName:"Hany El-Shemy"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"5286",title:"Products from Olive Tree",subtitle:null,isOpenForSubmission:!1,hash:"b1c4ed3e0237d388a235b51b1b415886",slug:"products-from-olive-tree",bookSignature:"Dimitrios Boskou and Maria Lisa Clodoveo",coverURL:"https://cdn.intechopen.com/books/images_new/5286.jpg",editedByType:"Edited by",editors:[{id:"77212",title:"Dr.",name:"Dimitrios",middleName:null,surname:"Boskou",slug:"dimitrios-boskou",fullName:"Dimitrios Boskou"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"5253",title:"Grape and Wine Biotechnology",subtitle:null,isOpenForSubmission:!1,hash:"5626f83050894f6dfc5640fa908dc920",slug:"grape-and-wine-biotechnology",bookSignature:"Antonio Morata and Iris Loira",coverURL:"https://cdn.intechopen.com/books/images_new/5253.jpg",editedByType:"Edited by",editors:[{id:"180952",title:"Prof.",name:"Antonio",middleName:null,surname:"Morata",slug:"antonio-morata",fullName:"Antonio Morata"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"5218",title:"New Challenges in Seed Biology",subtitle:"Basic and Translational Research Driving Seed Technology",isOpenForSubmission:!1,hash:"cbdf379c83007e5a7341c51bcd02db9a",slug:"new-challenges-in-seed-biology-basic-and-translational-research-driving-seed-technology",bookSignature:"Susana Araujo and Alma Balestrazzi",coverURL:"https://cdn.intechopen.com/books/images_new/5218.jpg",editedByType:"Edited 
by",editors:[{id:"156799",title:"Dr.",name:"Susana",middleName:null,surname:"Araújo",slug:"susana-araujo",fullName:"Susana Araújo"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}},{type:"book",id:"5179",title:"Organic Fertilizers",subtitle:"From Basic Concepts to Applied Outcomes",isOpenForSubmission:!1,hash:"93748f3bd6a9c0240d71ffd350d624b1",slug:"organic-fertilizers-from-basic-concepts-to-applied-outcomes",bookSignature:"Marcelo L. Larramendy and Sonia Soloneski",coverURL:"https://cdn.intechopen.com/books/images_new/5179.jpg",editedByType:"Edited by",editors:[{id:"14764",title:"Dr.",name:"Marcelo L.",middleName:null,surname:"Larramendy",slug:"marcelo-l.-larramendy",fullName:"Marcelo L. Larramendy"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter",authoredCaption:"Edited by"}}],booksByTopicTotal:19,mostCitedChapters:[{id:"43317",doi:"10.5772/54833",title:"Extreme Temperature Responses, Oxidative Stress and Antioxidant Defense in Plants",slug:"extreme-temperature-responses-oxidative-stress-and-antioxidant-defense-in-plants",totalDownloads:10639,totalCrossrefCites:45,totalDimensionsCites:92,book:{slug:"abiotic-stress-plant-responses-and-applications-in-agriculture",title:"Abiotic Stress",fullTitle:"Abiotic Stress - Plant Responses and Applications in Agriculture"},signatures:"Mirza Hasanuzzaman, Kamrun Nahar and Masayuki Fujita",authors:[{id:"47687",title:"Prof.",name:"Masayuki",middleName:null,surname:"Fujita",slug:"masayuki-fujita",fullName:"Masayuki Fujita"},{id:"76477",title:"Dr.",name:"Mirza",middleName:null,surname:"Hasanuzzaman",slug:"mirza-hasanuzzaman",fullName:"Mirza Hasanuzzaman"},{id:"166818",title:"MSc.",name:"Kamrun",middleName:null,surname:"Nahar",slug:"kamrun-nahar",fullName:"Kamrun Nahar"}]},{id:"51934",doi:"10.5772/64420",title:"Seed Priming: New Comprehensive Approaches for an Old Empirical 
Technique",slug:"seed-priming-new-comprehensive-approaches-for-an-old-empirical-technique",totalDownloads:6094,totalCrossrefCites:17,totalDimensionsCites:53,book:{slug:"new-challenges-in-seed-biology-basic-and-translational-research-driving-seed-technology",title:"New Challenges in Seed Biology",fullTitle:"New Challenges in Seed Biology - Basic and Translational Research Driving Seed Technology"},signatures:"Stanley Lutts, Paolo Benincasa, Lukasz Wojtyla, Szymon Kubala S,\nRoberta Pace, Katzarina Lechowska, Muriel Quinet and Malgorzata\nGarnczarska",authors:[{id:"94090",title:"Prof.",name:"Stanley",middleName:null,surname:"Lutts",slug:"stanley-lutts",fullName:"Stanley Lutts"},{id:"181730",title:"Prof.",name:"Paolo",middleName:null,surname:"Benincasa",slug:"paolo-benincasa",fullName:"Paolo Benincasa"},{id:"181732",title:"Dr.",name:"Lukasz",middleName:null,surname:"Wojtyla",slug:"lukasz-wojtyla",fullName:"Lukasz Wojtyla"},{id:"181733",title:"Dr.",name:"Szymon",middleName:null,surname:"Kubala",slug:"szymon-kubala",fullName:"Szymon Kubala"},{id:"181734",title:"Mrs.",name:"Katzzarina",middleName:null,surname:"Lechowska",slug:"katzzarina-lechowska",fullName:"Katzzarina Lechowska"},{id:"181735",title:"Dr.",name:"Muriel",middleName:null,surname:"Quinet",slug:"muriel-quinet",fullName:"Muriel Quinet"},{id:"181736",title:"Prof.",name:"Malgorzata",middleName:null,surname:"Garnczarska",slug:"malgorzata-garnczarska",fullName:"Malgorzata Garnczarska"}]},{id:"44143",doi:"10.5772/54592",title:"Production of Anthocyanins in Grape Cell Cultures: A Potential Source of Raw Material for Pharmaceutical, Food, and Cosmetic Industries",slug:"production-of-anthocyanins-in-grape-cell-cultures-a-potential-source-of-raw-material-for-pharmaceuti",totalDownloads:7177,totalCrossrefCites:19,totalDimensionsCites:53,book:{slug:"the-mediterranean-genetic-code-grapevine-and-olive",title:"The Mediterranean Genetic Code",fullTitle:"The Mediterranean Genetic Code - Grapevine and 
Olive"},signatures:"Anthony Ananga, Vasil Georgiev, Joel Ochieng, Bobby Phills and Violeta Tsolova",authors:[{id:"74792",title:"Dr.",name:"Joel W.",middleName:null,surname:"Ochieng",slug:"joel-w.-ochieng",fullName:"Joel W. Ochieng"},{id:"126149",title:"Dr.",name:"Anthony",middleName:null,surname:"Ananga",slug:"anthony-ananga",fullName:"Anthony Ananga"},{id:"136830",title:"Dr.",name:"Devaiah",middleName:null,surname:"Kambiranda",slug:"devaiah-kambiranda",fullName:"Devaiah Kambiranda"},{id:"137412",title:"Dr.",name:"Violetka",middleName:null,surname:"Tsolova",slug:"violetka-tsolova",fullName:"Violetka Tsolova"},{id:"165414",title:"Dr.",name:"Vasil",middleName:null,surname:"Georgiev",slug:"vasil-georgiev",fullName:"Vasil Georgiev"},{id:"165415",title:"Dr.",name:"Bobby",middleName:null,surname:"Phills",slug:"bobby-phills",fullName:"Bobby Phills"}]}],mostDownloadedChaptersLast30Days:[{id:"56159",title:"Processing and Preservation of Fresh-Cut Fruit and Vegetable Products",slug:"processing-and-preservation-of-fresh-cut-fruit-and-vegetable-products",totalDownloads:4307,totalCrossrefCites:3,totalDimensionsCites:6,book:{slug:"postharvest-handling",title:"Postharvest Handling",fullTitle:"Postharvest Handling"},signatures:"Afam I.O. Jideani, Tonna A. Anyasi, Godwin R.A. Mchau, Elohor O.\nUdoro and Oluwatoyin O. Onipe",authors:[{id:"169352",title:"Dr.",name:"Tonna",middleName:"Ashim",surname:"Anyasi",slug:"tonna-anyasi",fullName:"Tonna Anyasi"},{id:"200822",title:"Prof.",name:"Afam I. O.",middleName:null,surname:"Jideani",slug:"afam-i.-o.-jideani",fullName:"Afam I. O. Jideani"},{id:"204522",title:"Prof.",name:"Godwin R.A.",middleName:null,surname:"Mchau",slug:"godwin-r.a.-mchau",fullName:"Godwin R.A. Mchau"},{id:"204523",title:"Ms.",name:"Elohor O.",middleName:null,surname:"Udoro",slug:"elohor-o.-udoro",fullName:"Elohor O. Udoro"},{id:"205968",title:"Ms.",name:"Toyin O.",middleName:null,surname:"Onipe",slug:"toyin-o.-onipe",fullName:"Toyin O. 
Onipe"}]},{id:"51934",title:"Seed Priming: New Comprehensive Approaches for an Old Empirical Technique",slug:"seed-priming-new-comprehensive-approaches-for-an-old-empirical-technique",totalDownloads:6094,totalCrossrefCites:17,totalDimensionsCites:53,book:{slug:"new-challenges-in-seed-biology-basic-and-translational-research-driving-seed-technology",title:"New Challenges in Seed Biology",fullTitle:"New Challenges in Seed Biology - Basic and Translational Research Driving Seed Technology"},signatures:"Stanley Lutts, Paolo Benincasa, Lukasz Wojtyla, Szymon Kubala S,\nRoberta Pace, Katzarina Lechowska, Muriel Quinet and Malgorzata\nGarnczarska",authors:[{id:"94090",title:"Prof.",name:"Stanley",middleName:null,surname:"Lutts",slug:"stanley-lutts",fullName:"Stanley Lutts"},{id:"181730",title:"Prof.",name:"Paolo",middleName:null,surname:"Benincasa",slug:"paolo-benincasa",fullName:"Paolo Benincasa"},{id:"181732",title:"Dr.",name:"Lukasz",middleName:null,surname:"Wojtyla",slug:"lukasz-wojtyla",fullName:"Lukasz Wojtyla"},{id:"181733",title:"Dr.",name:"Szymon",middleName:null,surname:"Kubala",slug:"szymon-kubala",fullName:"Szymon Kubala"},{id:"181734",title:"Mrs.",name:"Katzzarina",middleName:null,surname:"Lechowska",slug:"katzzarina-lechowska",fullName:"Katzzarina Lechowska"},{id:"181735",title:"Dr.",name:"Muriel",middleName:null,surname:"Quinet",slug:"muriel-quinet",fullName:"Muriel Quinet"},{id:"181736",title:"Prof.",name:"Malgorzata",middleName:null,surname:"Garnczarska",slug:"malgorzata-garnczarska",fullName:"Malgorzata Garnczarska"}]},{id:"58261",title:"Software for Calculation of Nutrient Solution for Fruits and Leafy Vegetables in NFT Hydroponic System",slug:"software-for-calculation-of-nutrient-solution-for-fruits-and-leafy-vegetables-in-nft-hydroponic-syst",totalDownloads:3903,totalCrossrefCites:0,totalDimensionsCites:0,book:{slug:"potassium-improvement-of-quality-in-fruits-and-vegetables-through-hydroponic-nutrient-management",title:"Potassium",fullTitle:"Potassium 
- Improvement of Quality in Fruits and Vegetables Through Hydroponic Nutrient Management"},signatures:"Douglas José Marques, Francisco Donizeti Vieira Luz, Rogério\nWilliam Fernandes Barroso and Hudson Carvalho Bianchini",authors:[{id:"208047",title:"Prof.",name:"Hudson Carvalho",middleName:null,surname:"Bianchini",slug:"hudson-carvalho-bianchini",fullName:"Hudson Carvalho Bianchini"},{id:"215944",title:"Dr.",name:"Douglas José",middleName:"José",surname:"Marques",slug:"douglas-jose-marques",fullName:"Douglas José Marques"},{id:"215945",title:"MSc.",name:"Francisco Donizete Vieira",middleName:null,surname:"Luz",slug:"francisco-donizete-vieira-luz",fullName:"Francisco Donizete Vieira Luz"},{id:"215946",title:"MSc.",name:"Rogério William Fernandes",middleName:null,surname:"Barroso",slug:"rogerio-william-fernandes-barroso",fullName:"Rogério William Fernandes Barroso"}]},{id:"61691",title:"Role of Vegetables in Human Nutrition and Disease Prevention",slug:"role-of-vegetables-in-human-nutrition-and-disease-prevention",totalDownloads:1782,totalCrossrefCites:3,totalDimensionsCites:7,book:{slug:"vegetables-importance-of-quality-vegetables-to-human-health",title:"Vegetables",fullTitle:"Vegetables - Importance of Quality Vegetables to Human Health"},signatures:"Taha Gökmen Ülger, Ayşe Nur Songur, Onur Çırak and Funda Pınar\nÇakıroğlu",authors:[{id:"176588",title:"Prof.",name:"Funda Pınar",middleName:null,surname:"Çakıroğlu",slug:"funda-pinar-cakiroglu",fullName:"Funda Pınar Çakıroğlu"},{id:"244239",title:"Dr.",name:"Onur",middleName:null,surname:"Çırak",slug:"onur-cirak",fullName:"Onur Çırak"},{id:"251662",title:"Dr.",name:"Ayşe Nur",middleName:null,surname:"Songür",slug:"ayse-nur-songur",fullName:"Ayşe Nur Songür"},{id:"251663",title:"MSc.",name:"Taha Gökmen",middleName:null,surname:"Ülger",slug:"taha-gokmen-ulger",fullName:"Taha Gökmen Ülger"}]},{id:"55697",title:"Introductory Chapter: Postharvest Physiology and Technology of Horticultural 
Crops",slug:"introductory-chapter-postharvest-physiology-and-technology-of-horticultural-crops",totalDownloads:2547,totalCrossrefCites:3,totalDimensionsCites:8,book:{slug:"postharvest-handling",title:"Postharvest Handling",fullTitle:"Postharvest Handling"},signatures:"İbrahim Kahramanoğlu",authors:[{id:"178185",title:"Ph.D.",name:"Ibrahim",middleName:null,surname:"Kahramanoglu",slug:"ibrahim-kahramanoglu",fullName:"Ibrahim Kahramanoglu"}]},{id:"53418",title:"Fenugreek (Trigonella foenum-graecum L.): An Important Medicinal and Aromatic Crop",slug:"fenugreek-trigonella-foenum-graecum-l-an-important-medicinal-and-aromatic-crop",totalDownloads:2624,totalCrossrefCites:1,totalDimensionsCites:4,book:{slug:"active-ingredients-from-aromatic-and-medicinal-plants",title:"Active Ingredients from Aromatic and Medicinal Plants",fullTitle:"Active Ingredients from Aromatic and Medicinal Plants"},signatures:"Peiman Zandi, Saikat Kumar Basu, William Cetzal-Ix, Mojtaba\nKordrostami, Shahram Khademi Chalaras and Leila Bazrkar Khatibai",authors:[{id:"193070",title:"Dr.",name:"Peiman",middleName:null,surname:"Zandi",slug:"peiman-zandi",fullName:"Peiman Zandi"},{id:"196977",title:"Dr.",name:"Saikat",middleName:null,surname:"Kumar Basu",slug:"saikat-kumar-basu",fullName:"Saikat Kumar Basu"},{id:"196978",title:"Dr.",name:"William",middleName:null,surname:"Cetzal-Ix",slug:"william-cetzal-ix",fullName:"William Cetzal-Ix"},{id:"196979",title:"Dr.",name:"Mojtaba",middleName:null,surname:"Kordrostami",slug:"mojtaba-kordrostami",fullName:"Mojtaba Kordrostami"},{id:"196980",title:"MSc.",name:"Shahram",middleName:null,surname:"Khademi Chalaras",slug:"shahram-khademi-chalaras",fullName:"Shahram Khademi Chalaras"},{id:"196981",title:"Dr.",name:"Leila",middleName:null,surname:"Bazrkar Khatibai",slug:"leila-bazrkar-khatibai",fullName:"Leila Bazrkar Khatibai"}]},{id:"51881",title:"Recent Advances in Seed 
Enhancements",slug:"recent-advances-in-seed-enhancements",totalDownloads:3707,totalCrossrefCites:5,totalDimensionsCites:12,book:{slug:"new-challenges-in-seed-biology-basic-and-translational-research-driving-seed-technology",title:"New Challenges in Seed Biology",fullTitle:"New Challenges in Seed Biology - Basic and Translational Research Driving Seed Technology"},signatures:"Irfan Afzal, Hafeez Ur Rehman, Muhammad Naveed and Shahzad\nMaqsood Ahmed Basra",authors:[{id:"180245",title:"Dr.",name:"Irfan",middleName:null,surname:"Afzal",slug:"irfan-afzal",fullName:"Irfan Afzal"}]},{id:"43317",title:"Extreme Temperature Responses, Oxidative Stress and Antioxidant Defense in Plants",slug:"extreme-temperature-responses-oxidative-stress-and-antioxidant-defense-in-plants",totalDownloads:10639,totalCrossrefCites:45,totalDimensionsCites:92,book:{slug:"abiotic-stress-plant-responses-and-applications-in-agriculture",title:"Abiotic Stress",fullTitle:"Abiotic Stress - Plant Responses and Applications in Agriculture"},signatures:"Mirza Hasanuzzaman, Kamrun Nahar and Masayuki Fujita",authors:[{id:"47687",title:"Prof.",name:"Masayuki",middleName:null,surname:"Fujita",slug:"masayuki-fujita",fullName:"Masayuki Fujita"},{id:"76477",title:"Dr.",name:"Mirza",middleName:null,surname:"Hasanuzzaman",slug:"mirza-hasanuzzaman",fullName:"Mirza Hasanuzzaman"},{id:"166818",title:"MSc.",name:"Kamrun",middleName:null,surname:"Nahar",slug:"kamrun-nahar",fullName:"Kamrun Nahar"}]},{id:"53045",title:"Chemical Structure, Quality Indices and Bioactivity of Essential Oil Constituents",slug:"chemical-structure-quality-indices-and-bioactivity-of-essential-oil-constituents",totalDownloads:3339,totalCrossrefCites:5,totalDimensionsCites:12,book:{slug:"active-ingredients-from-aromatic-and-medicinal-plants",title:"Active Ingredients from Aromatic and Medicinal Plants",fullTitle:"Active Ingredients from Aromatic and Medicinal Plants"},signatures:"Nashwa Fathy Sayed Morsy",authors:[{id:"193168",title:"Associate 
Prof.",name:"Nashwa",middleName:null,surname:"Morsy",slug:"nashwa-morsy",fullName:"Nashwa Morsy"}]},{id:"54951",title:"Modified Atmosphere Packaging: Design and Optimization Strategies for Fresh Produce",slug:"modified-atmosphere-packaging-design-and-optimization-strategies-for-fresh-produce",totalDownloads:1495,totalCrossrefCites:2,totalDimensionsCites:3,book:{slug:"postharvest-handling",title:"Postharvest Handling",fullTitle:"Postharvest Handling"},signatures:"Diego A. Castellanos and Aníbal O. Herrera",authors:[{id:"203128",title:"Dr.",name:"Aníbal",middleName:null,surname:"Herrera",slug:"anibal-herrera",fullName:"Aníbal Herrera"},{id:"203129",title:"Dr.",name:"Diego",middleName:"Alberto",surname:"Castellanos",slug:"diego-castellanos",fullName:"Diego Castellanos"}]}],onlineFirstChaptersFilter:{topicSlug:"horticulture",limit:3,offset:0},onlineFirstChaptersCollection:[],onlineFirstChaptersTotal:0},preDownload:{success:null,errors:{}},aboutIntechopen:{},privacyPolicy:{},peerReviewing:{},howOpenAccessPublishingWithIntechopenWorks:{},sponsorshipBooks:{sponsorshipBooks:[{type:"book",id:"10176",title:"Microgrids and Local Energy Systems",subtitle:null,isOpenForSubmission:!0,hash:"c32b4a5351a88f263074b0d0ca813a9c",slug:null,bookSignature:"Prof. 
Nick Jenkins",coverURL:"https://cdn.intechopen.com/books/images_new/10176.jpg",editedByType:null,editors:[{id:"55219",title:"Prof.",name:"Nick",middleName:null,surname:"Jenkins",slug:"nick-jenkins",fullName:"Nick Jenkins"}],equalEditorOne:null,equalEditorTwo:null,equalEditorThree:null,productType:{id:"1",chapterContentType:"chapter"}}],offset:8,limit:8,total:1},route:{name:"profile.detail",path:"/profiles/203171/lucilia-alves-linhares-machado",hash:"",query:{},params:{id:"203171",slug:"lucilia-alves-linhares-machado"},fullPath:"/profiles/203171/lucilia-alves-linhares-machado",meta:{},from:{name:null,path:"/",hash:"",query:{},params:{},fullPath:"/",meta:{}}}},function(){var e;(e=document.currentScript||document.scripts[document.scripts.length-1]).parentNode.removeChild(e)}()