Abstract
Design of experiments (DOE) is a method used on a very large scale to study industrial processes through experimentation. It is a statistical approach in which mathematical models are developed through experimental trial runs to predict the possible output on the basis of the given input data or parameters. The aim of this chapter is to stimulate the engineering community to apply the Taguchi technique to experimentation and the design of experiments, and to tackle quality problems in the industrial chemical processes that they deal with. Based on years of research and applications, Dr. G. Taguchi standardized the methods for each of these DOE application steps. Thus, DOE using the Taguchi approach has become a much more attractive tool for practicing engineers and scientists. Over the last four decades, conventional experimental design techniques have shown limitations when applied to industrial experimentation, and the Taguchi design, also known as orthogonal array design, adds a new dimension to conventional experimental design. The Taguchi method is a broadly accepted DOE method which has proven effective in producing high-quality products at comparatively low cost.
Keywords
- DOE
- Taguchi method
- industrial chemical processes
- parameter optimization
- ANOVA
1. Introduction
Industries are engaged in a variety of activities such as developing new products, improving previous designs, maintenance, and controlling and improving ongoing processes. Experimentation is a frequent task in these activities for measuring and analysing the output, and for this purpose engineers and researchers use many tools such as statistics and analytical models, regardless of their background in them [1].
Montgomery [2] writes,
In today’s era, the purpose of experiments in industry is essentially optimization and robust design analysis (RDA), which is used to make the system less sensitive to variations in uncontrollable noise factors, in other words to make the system robust. DOE, or experimental design, is the name given to the techniques used for guiding the choice of the experiments to be performed in an efficient way. In general terms, process analysis can be expressed as the study of cause-effect relationships, which may be carried out by drawing inferences from a finite number of samples. One of its most important purposes is to design sampling experiments that are productive and cost-effective and provide a sufficient database in a qualitative sense [3]. Design of experiments has been applied successfully in diverse fields such as agriculture (improved crop yields have created grain surpluses), the petrochemical industry (highly efficient oil refineries) and Japanese automobile manufacturing (giving them a large market share for their vehicles), and its area of implementation is still spreading and providing optimized results. These developments are due in part to the successful implementation of design of experiments. The reason to use design of experiments is to implement valid and efficient experiments that will produce quantitative results and support sound decision-making [4].
1.1. Brief history
Statistical experimental design, together with the basic ideas underlying DOE, was born in the 1920s from the work of Sir Ronald Aylmer Fisher [5]. Fisher was the statistician who created the foundations for modern statistical science. The second era of statistical experimental design began in 1951 with the work of Box and Wilson [6], who applied the idea to industrial experiments and developed the response surface methodology (RSM), which is used to find the relationships between various process parameters and one or more responses. The work of Dr. Genichi Taguchi in the 1980s [7], despite having been very controversial (discussed briefly in Section 2.4), had a significant impact in making statistical experimental design popular and stressed the importance it can have in terms of quality improvement.
Usually, data subject to experimental error (noise) are involved, and the results can be significantly affected by noise. Thus, it is better to analyse the data with appropriate statistical methods. The basic principles of statistical methods in experimental design are replication, randomization and blocking. Replication is the repetition of the experiment to obtain a more precise result (sample mean value) and to estimate the experimental error (sample standard deviation). Randomization refers to the random order in which the runs of the experiment are to be performed. In this way, the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs. Blocking aims at isolating a known systematic bias effect and prevents it from obscuring the main effects [8]. This is achieved by arranging the experiments in groups that are similar to one another. In this way, the sources of variability are reduced, and the precision is improved.
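As a small illustration, the sketch below (with hypothetical factor settings and block labels) shows how replication, randomization and blocking can be realized when scheduling experimental runs; the temperature levels and batch names are purely illustrative.

```python
import random

# Replication: each hypothetical temperature setting is planned three times.
planned_runs = [{"temperature": t, "replicate": r}
                for t in (150, 200)        # illustrative factor levels
                for r in range(1, 4)]      # three replicates per setting

# Randomization: shuffle the run order so it follows no systematic pattern.
random.shuffle(planned_runs)

# Blocking: group runs by a known nuisance source, e.g. raw-material batch.
for i, run in enumerate(planned_runs):
    run["block"] = "batch_A" if i < len(planned_runs) // 2 else "batch_B"

for run in planned_runs:
    print(run)
```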
The design of experiments (DOE) is explained by Lye [9] as a methodology for systematically applying statistics to experimentation. In DOE, a sequence of tests is designed in which the input parameters (factors) of a product or process are purposefully varied in order to examine the reasons for the variation in the output response [10]. By the end of the twentieth century, DOE was no longer viewed as merely a stand-alone tool, because it was packaged together with a structured initiative for business improvement known as Six Sigma. Moreover, an increased emphasis on DOE took place during this period in the Six Sigma literature [11]. DOE is a good tool for understanding and optimizing product or process parameters. It is quick as well as cost-effective.
1.2. Advantages of DOE
With real engineering examples, Czitrom [12] listed the following advantages of DOE:
A good amount of data can be obtained with fewer resources (experiments, time, material, etc.).
The estimates of the effect of each factor (variable) on the response are more precise.
It is a systematic way to estimate the interactions between the process factors.
There is experimental information in a larger region of the factor space.
1.3. DOE techniques
A survey carried out within industry identified the need for an efficient and practical experimentation technique: 76% of the surveyed industries considered themselves in need of such a methodology [13]. Some of the techniques in use in industry are listed below. The list is far from complete, since the aim of this section is only to introduce the reader to the topic by showing the main techniques used in practice [14].
Randomized complete block design
Latin square
Full factorial
Fractional factorial
Central composite
Box-Behnken [15]
Plackett-Burman [16]
Taguchi [7]
Random
Halton, Faure and Sobol sequences
Latin hypercube
Optimal design
Response surface design
Several DOE techniques are available to the experimental designer. However, as always happens in optimization, there is no single best choice. The correct DOE technique selection depends on the problem to be investigated and on the aim of the experimentation.
M. Cavazzuti [14] concluded that the items to be considered are:
The number of experiments N which can be afforded. In determining the number of experiments, an important issue is the time required for a single experiment. It makes a big difference whether the response variable is extracted from a quick simulation, in which a number is computed or taken from a spreadsheet, or whether it involves setting up a complex laboratory experiment. In the former case it could take a fraction of a second to obtain a response; in the latter, each experiment could take days.
The number of parameters k of the experiment. For many DOE techniques, the number of experiments required grows exponentially with the number of parameters (Figure 1). A cheap technique is not necessarily the best choice, because a cheap technique means imprecise results and insufficient design space exploration. Unless the number of experiments which can be afforded is high, it is important to limit the number of parameters as much as possible to reduce the size of the problem and the effort required to solve it. Of course, the choice of the parameters to be discarded can be a particularly delicate issue. This can be done by applying a cheap technique (such as Plackett-Burman) as a preliminary study for estimating the main effects (a small numerical sketch of this growth is given after Figure 1).

Figure 1.
Number of experiments required by the DOE techniques.
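As a rough numerical illustration of the growth shown in Figure 1, the sketch below compares the run count of a two-level full factorial, which grows exponentially with the number of factors, with the fixed sizes of a few standard two-level orthogonal arrays (L4, L8, L12); the mapping from factor count to array used here is only indicative.

```python
# Run count of a two-level full factorial versus typical two-level orthogonal arrays.
# The array sizes are the standard L4/L8/L12 tables; the assignment of a given
# number of parameters to an array is indicative only.
full_factorial = {k: 2 ** k for k in range(2, 12)}           # 2^k runs for k factors
orthogonal_array = {2: 4, 3: 4, 4: 8, 5: 8, 6: 8, 7: 8,       # L4, L8
                    8: 12, 9: 12, 10: 12, 11: 12}             # L12

for k in range(2, 12):
    print(f"{k:2d} factors: full factorial = {full_factorial[k]:5d} runs, "
          f"orthogonal array = {orthogonal_array[k]:3d} runs")
```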
2. Introduction to Taguchi method
2.1. Brief history
After the Second World War, the allied forces observed some major drawbacks in the Japanese telephone system, namely extremely poor quality and unsuitability for long-term communication purposes. To overcome these drawbacks an improved system was required, and the allied command recommended establishing research facilities to develop a state-of-the-art communication system. At that time, the Electrical Communication Laboratories (ECL) came on the scene with Dr. Genichi Taguchi (Figure 2) in charge of improving R&D productivity and enhancing product quality. He observed that far more time and money were expended on engineering experimentation and testing than on the creative brainstorming needed to minimize the expenditure of resources. He noticed that inspection, screening and salvaging cannot improve poor quality: inspection checks quality, but it cannot by itself increase it. Therefore, he believed that quality concepts should be based upon, and developed around, the philosophy of prevention.

Figure 2.
Dr. G. Taguchi.
This moved Taguchi to develop new methods for optimizing the process of engineering experimentation. He believed that the best way to improve quality was to design and build it into the product [18].
He developed the techniques which are now known as the Taguchi methods (TM). His main contribution lies not in the mathematical formulation of the design of experiments, but rather in the accompanying philosophy. The Taguchi method differs from traditional techniques because of Taguchi’s concepts of design. With his methods, he developed robust manufacturing systems that are insensitive to daily or seasonal variations of environment, wear and other noise factors. His philosophy had far-reaching consequences, yet it is founded on three very simple concepts [19].
Taguchi’s new technique consists of three concepts about quality, which are:
Quality should be designed into the product and not inspected into it.
Quality is better achieved by minimizing the deviation from a target. The product should be so designed that it is immune to uncontrollable environmental factors.
The cost of quality should be measured as a function of the deviation from the standard, and the losses should be measured system-wide.
In Taguchi’s thinking, quality improvement is an ongoing effort. He endeavoured continually to reduce the variation around the target value. Selecting a population as near as possible to the target or desired value is the first step of quality improvement. To accomplish this, Taguchi designed experiments using specially constructed tables known as “orthogonal arrays” (OA), which make the design of experiments very easy and consistent.
Taguchi’s two most important contributions to quality engineering are as follows:
The use of Gauss’s quadratic loss function to quantify quality.
The development of robust designs (parameter and tolerance design).
Since the early 1980s, when applications to different industries began in the western hemisphere, the Taguchi method has been evaluated on different platforms such as books, articles and panel discussions. Taguchi methods have touched most manufacturing processes, optimizing them in such a way that noise factors do not affect the output. Several reports [20, 21, 22, 23, 24, 25, 26, 27, 28, 29] evaluated Taguchi methods from a statistical standpoint. In these reports, parameter design received the most attention. These reports confirm that Dr. Taguchi made important contributions to quality engineering; however, without some basic statistical knowledge it is hard to apply his technique. Specifically, the use of signal-to-noise ratios for identifying the nearly best factor levels to minimize quality losses may not be efficient [30].
2.2. Taguchi method
In the Taguchi method, we assume that we are designing an engineering system: it might be a machine that performs some intended function, or it might be a production process. We use fundamental knowledge about the system and the process parameters for efficient experimentation, so we can skip much of the extra effort that might otherwise go into investigating interactions and thereby reduce the number of factors. Taguchi categorizes the factors into two sets:
Control factors, which are under our control.
Noise factors, which are not under our control, except during experiments in the laboratories.
In the 1920s, Sir R. A. Fisher first proposed DOE with multiple factors, known as the factorial design of experiments. In a full factorial design, we work with all possible combinations derived from the preselected set of factors. Most industrial experiments involve many factors, and if we consider every possible combination, it becomes hard and time-consuming to execute such a large sequence of experiments. A full factorial design consists of k^n experiments, where “k” is the number of levels and “n” is the number of factors (factors are the different variables which determine the functionality or performance of a product or system); for example, five factors at three levels would already require 3^5 = 243 runs. Later, the partial (fractional) factorial method came into existence, in which the number of experiments is reduced and only a small subset of all the possibilities, the one producing the most information, is selected.
Taguchi’s approach complements the following two important areas:
Taguchi constructed a special set of orthogonal arrays (OA) to lay out his experiments. Taguchi prepared a new set of standard OAs which could be used for a number of experimental situations.
He also devised a standard method for analysis of the results.
As mentioned, the full factorial design requires many experiments to be carried out, and it becomes laborious and complex if the number of factors increases. To overcome this problem, Taguchi suggested a specially designed method based on the use of orthogonal arrays, with which we can study a larger factor or parameter space with a smaller number of experiments. In contrast to full factorial analysis, the Taguchi method reduces the number of experimental runs to a reasonable one, in terms of cost and time, using orthogonal arrays [31]. For example, suppose there are three factors A, B and C, all examined at two levels called “1” and “2” (in general, they are referred to as “1” and “−1”). Then, according to the full factorial approach, the number of experiments should be 2³ = 8. The full factorial array is shown in Table 1; such experiments can estimate all main effects and all two- and three-factor interactions.
A | B | C | AB | BC | AC | ABC |
---|---|---|---|---|---|---|
1 | 1 | 1 | 1 | 1 | 1 | 1 |
2 | 1 | 1 | 2 | 1 | 2 | 2 |
1 | 2 | 1 | 2 | 2 | 1 | 2 |
2 | 2 | 1 | 1 | 2 | 2 | 1 |
1 | 1 | 2 | 1 | 2 | 2 | 2 |
2 | 1 | 2 | 2 | 2 | 1 | 1 |
1 | 2 | 2 | 2 | 1 | 2 | 1 |
2 | 2 | 2 | 1 | 1 | 1 | 2 |
Table 1.
Full factorial factor assignments to experimental array columns.
At the same time, Taguchi’s L8 array can deal with seven factors at two levels each, as shown in Table 2; such experiments can estimate all seven main effects.
A | B | C | D | E | F | G |
---|---|---|---|---|---|---|
1 | 1 | 1 | 1 | 1 | 1 | 1 |
2 | 1 | 1 | 2 | 1 | 2 | 2 |
1 | 2 | 1 | 2 | 2 | 1 | 2 |
2 | 2 | 1 | 1 | 2 | 2 | 1 |
1 | 1 | 2 | 1 | 2 | 2 | 2 |
2 | 1 | 2 | 2 | 2 | 1 | 1 |
1 | 2 | 2 | 2 | 1 | 2 | 1 |
2 | 2 | 2 | 1 | 1 | 1 | 2 |
Table 2.
Orthogonal array factor assignments to experimental array columns.
So it can be seen that the full factorial method deals with three factors, while the orthogonal array deals with seven factors in the same eight experimental runs.
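As a minimal illustration of this comparison, the sketch below enumerates the 2³ = 8 runs of the full factorial for three two-level factors and checks the level balance of the seven-column L8 array reproduced from Table 2; everything apart from the array listing itself is illustrative.

```python
from itertools import product

# Full factorial for three two-level factors: 2**3 = 8 runs, but only 3 factors studied.
full_factorial = list(product((1, 2), repeat=3))
print("Full factorial runs:", len(full_factorial))

# L8 orthogonal array as listed in Table 2: 8 runs, 7 two-level columns (A..G).
L8 = [
    (1, 1, 1, 1, 1, 1, 1),
    (2, 1, 1, 2, 1, 2, 2),
    (1, 2, 1, 2, 2, 1, 2),
    (2, 2, 1, 1, 2, 2, 1),
    (1, 1, 2, 1, 2, 2, 2),
    (2, 1, 2, 2, 2, 1, 1),
    (1, 2, 2, 2, 1, 2, 1),
    (2, 2, 2, 1, 1, 1, 2),
]

# Balance check: every column contains each level the same number of times.
for col in range(7):
    levels = [row[col] for row in L8]
    print(f"Column {chr(ord('A') + col)}: level 1 x{levels.count(1)}, level 2 x{levels.count(2)}")
```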
The Taguchi method is based on mixed-level, highly fractional factorial designs and other orthogonal designs. It distinguishes between control variables and noise variables: we choose two sets of parameters, controlled and noise, and correspondingly we choose two orthogonal designs. The design chosen for the controlled variables is known as the inner array, and the one for the noise variables is known as the outer array. The combination of the inner and the outer arrays gives the crossed array, which is the list of all the samples scheduled by the Taguchi method. This means that for each sample in the inner array, the full set of experiments of the outer array is performed. The advantage of this cross combination is that it provides information about the relation between the parameters, which is very important for robust system design.
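A minimal sketch of such a crossed array is shown below; the L4-type inner array and the two-run outer (noise) array are only illustrative choices.

```python
# Inner array: control-factor settings (an L4-type array for three two-level factors).
inner = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

# Outer array: noise-factor settings replicated around every inner run (illustrative).
outer = [(1, 1), (2, 2)]

# Crossed array: every inner-array run is repeated under every outer-array condition.
crossed = [(control, noise) for control in inner for noise in outer]
print(f"{len(inner)} x {len(outer)} = {len(crossed)} experimental runs")
for control, noise in crossed:
    print("control levels:", control, "| noise levels:", noise)
```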
Dr. Taguchi then recommends using the quality loss function to measure the performance characteristics. The quality loss function is a continuous function defined in terms of the deviation of a design parameter from an ideal or target value. The value of this loss function is further transformed into a signal-to-noise (S/N) ratio. Performance characteristics fall into three categories for determining the S/N ratio:
Nominal-the-Best
Larger-the-Best
Smaller-the-Best
The Taguchi method has four basic phases in the optimization process, which are as follows:
The first phase is timely thinking about the quality characteristics and determining the parameters which are important to the product or process.
In the second phase, the sequence of experiments is designed and then executed accordingly.
In the third phase of the optimization process, the statistical analysis is done to determine the optimum conditions.
Finally, in the fourth phase, the confirmation test is run with the optimum conditions.
Barrado et al. [32] expanded the above-mentioned four phases into the following steps for implementing a Taguchi experimental design (a minimal code sketch of this workflow is given after the list):
Step 1. Selection of the output or target parameters.
Step 2. Identification of the input parameters and their levels.
Step 3. Determining the suitable orthogonal array (OA).
Step 4. Assign factors and interactions to the columns of the array.
Step 5. Conduct the experiments.
Step 6. Perform the statistical analysis, compute the signal-to-noise ratios and determine the optimum setting of the factor levels.
Step 7. Perform confirmatory experiment (if necessary).
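A compact sketch of Steps 3 to 6 is given below for a hypothetical study with three factors at three levels, laid out on the first three columns of an L9 array; the response data are invented and a larger-the-better quality characteristic is assumed.

```python
import math

# Hypothetical L9-style layout: 3 factors (A, B, C) at 3 levels each, 9 runs.
# (A full L9 has 4 columns; only three are used here for brevity.)
runs = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]
# Invented responses (e.g. yield in %) for each run, replicated twice.
responses = [(62, 64), (71, 69), (75, 77),
             (66, 65), (80, 78), (72, 70),
             (74, 73), (83, 85), (79, 81)]

def sn_larger_is_better(ys):
    """Larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

sn = [sn_larger_is_better(ys) for ys in responses]

# Main-effects analysis: average S/N for each level of each factor,
# then pick the level with the highest mean S/N (Step 6).
for factor in range(3):
    means = {level: sum(s for run, s in zip(runs, sn) if run[factor] == level) / 3
             for level in (1, 2, 3)}
    best = max(means, key=means.get)
    print(f"Factor {'ABC'[factor]}: mean S/N by level {means} -> optimum level {best}")
```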
First of all, the input parameters with their respective operating levels and the outputs or responses are selected. After this choice, the best-fitted or most economical matrix experiment (orthogonal array) is selected [33, 34]. The array selector is shown in Table 3.
Number of parameters (P) | 2 levels | 3 levels | 4 levels | 5 levels |
---|---|---|---|---|
2 | L4 | L9 | L16 | L25 |
3 | L4 | L9 | L16 | L25 |
4 | L8 | L9 | L16 | L25 |
5 | L8 | L18 | L16 | L25 |
6 | L8 | L18 | L32 | L25 |
7 | L8 | L18 | L32 | L50 |
8 | L12 | L18 | L32 | L50 |
9 | L12 | L18 | L32 | L50 |
10 | L12 | L27 | L32 | L50 |
11 | L12 | L27 | | L50 |
12 | L16 | L27 | | L50 |
13 | L16 | L27 | | |
14 | L16 | L36 | | |
15 | L16 | L36 | | |
16 | L32 | L36 | | |
17 | L32 | L36 | | |
18 | L32 | L36 | | |
19 | L32 | L36 | | |
20 | L32 | L36 | | |
21 | L32 | L36 | | |
22 | L32 | L36 | | |
23 | L32 | L36 | | |
24 | L32 | | | |
25 | L32 | | | |
26 | L32 | | | |
27 | L32 | | | |
28 | L32 | | | |
29 | L32 | | | |
30 | L32 | | | |
31 | L32 | | | |
Table 3.
Taguchi orthogonal array selector.
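In code, using the array selector amounts to a simple table lookup. The sketch below encodes only the first few rows of Table 3 (P = 2 to 7) as an illustration; the function name and structure are our own and not part of any standard library.

```python
# Partial encoding of the Taguchi array selector (Table 3): for a given number of
# parameters and levels, return the recommended orthogonal array name.
ARRAY_SELECTOR = {
    2: {2: "L4",  3: "L9",  4: "L16", 5: "L25"},
    3: {2: "L4",  3: "L9",  4: "L16", 5: "L25"},
    4: {2: "L8",  3: "L9",  4: "L16", 5: "L25"},
    5: {2: "L8",  3: "L18", 4: "L16", 5: "L25"},
    6: {2: "L8",  3: "L18", 4: "L32", 5: "L25"},
    7: {2: "L8",  3: "L18", 4: "L32", 5: "L50"},
}

def recommend_array(n_parameters: int, n_levels: int) -> str:
    """Look up the recommended orthogonal array for the given design size."""
    try:
        return ARRAY_SELECTOR[n_parameters][n_levels]
    except KeyError:
        raise ValueError("Combination not covered by this partial selector") from None

print(recommend_array(4, 3))   # -> "L9"
print(recommend_array(7, 2))   # -> "L8"
```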
After performing the experiments as per the chosen array, we choose the desired signal-to-noise ratio function (smaller-the-better, larger-the-better or nominal-the-best). The S/N ratio is a logarithmic function which can also be viewed as an inverse of variance. It is generally used in the optimization of process or product designs and in minimizing variability. If we maximize the S/N ratio, we reduce the variability of the process against undesirable changes in noise factors. Because the S/N ratio and the variance are inversely related, the chosen factor levels should produce the maximum value of S/N so that we obtain the minimum variability.
Three types of common problems and the respective signal-to-noise ratio functions are presented in Table 4.
Choose… | S/N ratio formula | Use when the goal is to… |
---|---|---|
Smaller-the-better | S/N = −10 log₁₀[(1/n) Σ yᵢ²] | Minimize the response |
Nominal-the-best | S/N = 10 log₁₀(ȳ² / s²) | Target the response and you want to base the S/N ratio on means and standard deviations |
Larger-the-better | S/N = −10 log₁₀[(1/n) Σ (1/yᵢ²)] | Maximize the response |
Table 4.
Types of problems and the respective signal-to-noise ratio functions.
In Table 4, yᵢ denotes the ith observed response value, n the number of observations for a run, ȳ their mean and s their standard deviation.
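The three S/N ratio formulas in Table 4 can be written directly in code; a minimal Python sketch follows, using invented measurement data for a single run.

```python
import math
from statistics import mean, stdev

def sn_smaller_is_better(y):
    """S/N = -10*log10((1/n) * sum(y_i^2)): minimize the response."""
    return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

def sn_nominal_is_best(y):
    """S/N = 10*log10(ybar^2 / s^2): hit a target, based on mean and std. dev."""
    return 10 * math.log10(mean(y) ** 2 / stdev(y) ** 2)

def sn_larger_is_better(y):
    """S/N = -10*log10((1/n) * sum(1/y_i^2)): maximize the response."""
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

measurements = [4.8, 5.1, 5.0, 4.9]   # invented repeated observations of one run
print(sn_smaller_is_better(measurements))
print(sn_nominal_is_best(measurements))
print(sn_larger_is_better(measurements))
```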
One of the most common methods used for analysing the data is ANOVA (analysis of variance). ANOVA is a statistical technique that assesses potential differences in a scale-level dependent variable by a nominal-level variable having two or more categories [35]. ANOVA was developed by Sir Ronald Fisher in 1918. It is an extension of the t-test and the z-test, whose limitation of allowing the nominal-level variable to have only two categories was the reason for its origin. This analysis method is also known as “the Fisher analysis of variance.”
The use of ANOVA depends on the research design. Commonly, ANOVAs are used in three ways (a minimal one-way example is sketched after this list):
One-way ANOVA
Two-way ANOVA
N-way multivariate ANOVA.
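As a minimal illustration of the one-way case, the sketch below applies scipy.stats.f_oneway to invented response data measured at three levels of a single factor; the data and level labels are purely hypothetical.

```python
from scipy.stats import f_oneway

# Invented responses measured at three levels of a single factor (one-way ANOVA).
level_1 = [12.1, 11.8, 12.4, 12.0]
level_2 = [13.5, 13.9, 13.2, 13.7]
level_3 = [12.9, 12.6, 13.1, 12.8]

f_statistic, p_value = f_oneway(level_1, level_2, level_3)
print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the factor level has a significant effect.
```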
2.3. Applications of Taguchi method in industrial chemical processes
The Taguchi method is used whenever the settings of the parameters of interest have to be determined, not only for manufacturing processes. Therefore, the Taguchi approach is used in many domains such as environmental sciences [36, 37], agricultural sciences [38], physics [39], statistics [40], management and business [41], medicine [42] and chemical processes as well [43]. Below, some of the literature is reviewed, which clearly shows the application of the Taguchi method in the chemical processes of various industries.
The identification and incorporation of quality costs and robustness criteria are becoming a critical issue while addressing chemical process design problems under uncertainty. Fernando P. Bernardo
Kundu et al. [45] investigated the optimal operating conditions for preparing activated carbon (AC) from palm kernel shell (PKS). Aided by the Taguchi optimization method, they chose four control factors for their research: irradiation time, microwave power, concentration of the impregnation substance (phosphoric acid) and the impregnation ratio between acid and PKS. After successful implementation of the Taguchi method, the optimal settings were found to be a microwave power of 800 W, an irradiation time of 17 min, an impregnation ratio of 2 and an acid concentration of 85% (undiluted). After the confirmation test with the optimal settings, activated carbon with a high BET surface area of 1473.55 m² g⁻¹ and high porosity was obtained.
Zolfaghari et al. [46] presented a systematic optimization approach using the Taguchi method for the removal of lead (Pb) and mercury (Hg) by a nanostructured, zinc oxide-modified mesoporous carbon CMK-3, denoted Zn-OCMK-3. CMK-3 was synthesized using SBA-15 and then oxidized by nitric acid. Their investigation using Zn-OCMK-3 in the frame of the Taguchi method yielded the optimum conditions for Pb and Hg adsorption from water. The determined optimum conditions for the removal of Pb(II) and Hg(II) were an agitation time of 120 min, an initial concentration of 10 mg/l, a temperature of 35 °C, a dose of 0.7 g/l and a pH of 6. Based on the confirmation test under optimum conditions for Zn-OCMK-3, it was observed that this nanoporous carbon is very effective in removing lead and mercury, with a high pollutant removal efficiency (PRE) of 97.25% for Pb(II) and 99% for Hg(II). The removal of lead and mercury was highly concentration dependent: the number of Pb and Hg ions increases strongly as the initial concentration rises from 10 to 400 mg/l, and many ions cannot be adsorbed at high concentration, which reduces the removal efficiency.
Venkata Mohan et al. [47] applied the design of experiments (DOE) methodology with the Taguchi approach to find out the effects of selected factors on H2 production, with the final aim of optimizing the process. They selected four factors for their study: inlet pH, inoculum type, feed composition and inoculum pre-treatment method. Here also, the Taguchi method enhanced the process, and the results showed significant variation in process efficiency. Among the factors studied with respect to H2 production, feed composition showed the strongest influence, followed by inlet pH, pre-treatment method and origin of the inoculum. The Taguchi approach also permits process optimization over a set of independent factors (within a specific region of interest, i.e. the levels) by identifying the influence of the individual factors, establishing the relationship between the variables and the operational conditions and finally estimating the performance at the optimum levels obtained. By using the optimized factor conditions, the rate of H2 production could be enhanced almost threefold (from 0.376 to 1.166 mmol/day); similarly positive results were recorded for substrate degradation.
Taguchi-based DOE methodology provides a systematic and efficient mathematical approach to understanding the complex processes of fermentative H2 production and substrate degradation, and to optimizing the near-optimum design parameters with only a few well-defined experimental sets [48].
Messias Borges Silva
Liao et al. [50] applied the Taguchi method and the design of experiments (DOE) approach to optimize the parameters of chemical mechanical polishing (CMP) processes in wafer manufacturing. The planning of experiments was based on a Taguchi orthogonal array table to determine an optimal setting, and significant results were found.
S. V. Mohan
The biological oxidation of ferrous ion by iron-oxidizing bacteria is potentially a useful industrial process for the removal of H2S from industrial gases, the desulphurization of coal, the removal of sulfur dioxide from flue gas, the treatment of acid mine drainage and the regeneration of the oxidant agent in hydrometallurgical leaching operations. Mousavi et al. [52] studied the optimum values of the process parameters for the ferrous biooxidation rate by immobilizing a native Sulfobacillus species on the surface of low-density polyethylene (LDPE) particles in a packed-bed bioreactor using the Taguchi method. An L16 array was used with five factors at four levels each. Temperature, initial pH of the feed solution, dilution rate, initial concentration of Fe3+ and aeration rate were considered as input parameters in the Taguchi technique. Analysis of variance (ANOVA) was used to determine the optimum conditions and the most significant process parameters affecting the reaction rate. The results indicated that the pH of the feed solution contributes most to the biooxidation rate of ferrous ion. When the parameters were set according to the Taguchi approach, a biological reaction rate of 8.4 g L−1 h−1 was obtained.
A Taguchi robust design with an L9 orthogonal array was implemented to optimize the experimental conditions for the preparation of nano-sized silver particles by a chemical reduction method [53]. The parameters for chemical-mechanical polishing (CMP) in an ultra-large-scale integrated (ULSI) planarization process were explored using the Taguchi method [54]. The popularity of the Taguchi method in industrial chemical processes can thus easily be seen.
2.4. Advantages and disadvantages of Taguchi method
The foundation of DOE in the Taguchi method (TM) is the orthogonal array design, which is a very simple method for analysing the outputs. His work was controversial and was criticized by some researchers for being inefficient and ineffective in some cases [55]. Still, the simplicity of the Taguchi method has increased its use in manufacturing industries. Here, some of the advantages and disadvantages are discussed briefly, which explains its popularity even amid the controversies. A survey shows that 51% of respondents are familiar with TM, although only 14% of them apply it [13].
The key aim of the TM is to increase the quality level while affecting the cost factor as little as possible. TM provides optimal settings for processes which can improve quality, and the settings obtained from TM are also insensitive to the variation of noise factors. Classical process parameter design is complex and not easy to use, whereas the Taguchi technique is user friendly [56].
Another advantage of the Taguchi method is that it emphasizes a mean performance characteristic value close to the target value rather than a value within certain specification limits, thus improving product quality. It is also straightforward and easy to apply to many engineering situations, which makes it a powerful yet simple tool for industry. It can be used to quickly narrow the scope of a research project or to identify problems in a manufacturing process from data already in existence [57].
It is probably unfortunate that the important concepts advocated by Taguchi have been overshadowed by the controversy associated with his approach to modelling and data analysis. Several research papers and books explain, review or raise questions about Taguchi’s ideas [55, 58]. One of its demerits is that the results obtained from it are only relative; it is unable to indicate exactly which parameter has the highest effect on the performance or response.
Another point of debate is that, since the orthogonal array does not test all possible combinations of the factors, the method should not be adopted when all relationships between all factors are required. The Taguchi method has also been criticized for its difficulty in accounting for interactions between parameters.
Another limitation is that Taguchi methods are offline, so in the case of dynamically changing processes, for example a simulation study, the TM is not appropriate. Moreover, the Taguchi method is most effectively applied at the design stage of product development; it cannot help after the process has started, because TM deals with designing quality in rather than correcting for poor quality [59].
3. Conclusion
Industries need methods of conducting experiments which optimize processes and increase the quality of products, and the same is desired in industrial chemical processes. For this, design of experiments is the basic step of quality improvement via optimized processes, and it requires proper planning and layout of the experiments and accurate analysis of the results. Dr. G. Taguchi studied these issues very well and developed his method; thus, DOE using the Taguchi approach has become a much more attractive tool for practicing engineers and scientists. When conventional experimental design techniques were applied to industrial applications or processes, they always suffered from drawbacks and limitations, and the Taguchi array, also known as orthogonal array design, adds a new dimension to conventional experimental design. The Taguchi method is a broadly accepted DOE method which has proven effective in producing high-quality products at comparatively low cost. In most industrial applications or processes, researchers and scientists have used the Taguchi method together with other analytical tools in their research work, and in industrial chemical processes it is also showing great results in optimizing processes. A very important and fundamental part of Taguchi’s method is to make sure that the product performs well even in the presence of noise; this helps in making the product long lasting. Taguchi’s method can be applied in a very short amount of time, does not take a lot of effort and improves processes dramatically, so manufacturing industries can improve their processes very quickly and efficiently by applying it. In industrial chemical processes, the Taguchi method is showing outstanding performance by optimizing the process parameters and reducing the number of experiments via orthogonal arrays. The Taguchi method takes processes to a new level by making them cost-effective and quick, with improved quality.