Open access peer-reviewed chapter

Statistical Approach to Mineral Engineering and Optimization

By Mehmet Deniz Turan

Submitted: May 30th 2017 | Reviewed: October 10th 2017 | Published: January 24th 2018

DOI: 10.5772/intechopen.71607


Abstract

Mineral deposits are the basic sources for metal production. Increasing metal demand, driven by the growing world population, and the decreasing grade of mineral deposits make the evaluation of mineral processing ever more important, so all metal production stages must be economical. Because of this requirement, many researchers and practitioners have focused on the optimization of all processes. The optimization of metal production processes provides advantages such as reducing the influence of experimental errors, enabling statistical analysis, identifying important and trivial parameters, and measuring interactions between parameters. Although there are many design methods, choosing the most appropriate one is of great importance for the results to be achieved. In this chapter, presumed experimental data on hydrometallurgical copper extraction with three parameters were applied to two different design models in order to compare the results.

Keywords

  • copper
  • mineral processing
  • hydrometallurgy
  • optimization
  • experimental design
  • statistics
  • leaching

1. Introduction

Different types of ore reserves are being depleted rapidly as the population grows. Metal production, which supports the manufacture of goods used to meet basic needs (cars, tools, buildings, ships, etc.), must also increase along with the growing population. The main source of all these metals is ore, and all ores on earth are nonrenewable resources. For these reasons, all stages of metal production, such as characterization, mining activities, mineral processing, pyrometallurgical/hydrometallurgical/electrometallurgical routes, and final production methods, must be economical. Worldwide, more than 10 billion tons of mineral raw materials are produced. Seventy-five percent of this production is derived from energy raw materials, 10% from metallic ores, and 15% from industrial raw materials. In addition, some metals have been assigned critical-metal status by the European Commission ( Figure 1 ).

Figure 1.

European Commission critical metal graphics [1].

As shown in Figure 1 , it is clear that some metals, such as beryllium, cobalt, magnesium, and germanium, carry a high supply risk. These data show that the economics of metal production are very important, and these conditions push researchers and producers toward the recovery of metals from secondary sources [2, 3, 4, 5].

  • Under these conditions, how can a metal production process be made economical?

  • How can we benefit more effectively from existing ore grades and reserves?

  • Will ore reserves meet future needs?

The answer to all of these questions is optimization.

Optimization, which may be numerical or nonnumerical (known as categorical), must be carried out at every stage of the metal production process. The processes that need to be optimized begin with ore detection, mineralization-mineralogy-petrography, geographical distribution, and planning, and continue with the extraction of ore under optimum conditions, mining operations, transportation, marketing, and planning. At this stage, many steps such as truck tours, excavator cycling, casting distance (in open-pit mines), reaching the ore by a suitable road, the number of people and machines to be employed, and the ore extraction period (in underground mines) can be optimized as a priority. The next stage is mineral processing, which comprises ore crushing, grinding, sieving, handling, conditioning, enrichment (one or more appropriate methods such as magnetic-electrostatic enrichment, flotation, gravimetric methods, etc.), and dewatering. It has been reported that 50–75% of the total energy in a mineral processing plant is consumed in the crushing and grinding unit [6]. This fact shows that the optimization of this stage (choice of appropriate comminution and sieving machines, crushing-grinding time, treatment conditions, etc.) is very important. The last stage is metal recovery using metallurgical methods, for instance, pyrometallurgy, hydrometallurgy, electrometallurgy, and casting. This stage also has many substages, some of which are: temperature level, acid-base selection, treatment time, solution volume, stirring conditions, electrode selection, applied current, and casting conditions. The phases can roughly be classified as follows: ore detection, characterization, mining, mineral processing, metallurgy, and usage. How can they be optimized? Which method should be used?

There are many computer software packages for this purpose. Among them, one of the most prominent is response surface methodology (RSM), which is used successfully in many engineering applications and especially in scientific articles. The use of response surface methodology is popular because of its easy-to-use interface and practical advantages. Its main advantages are easy optimization, obtaining maximum information from less experimental data, the ability to change significant parameters simultaneously, determination of interactions between parameters, and easy elimination of insignificant parameters.

2. Experimental approach

In any metal production process, the parameters that affect metal recovery are first estimated. The aim is then to determine the most favorable conditions by studying the parameters within a certain interval. However, in traditional leaching studies a single parameter is examined as a variable while the other effective parameters are kept constant at fixed values. Furthermore, in such studies it is only possible to define a model or model equation that represents the experimental method under those specific conditions. For example, in a hydrometallurgical study, the interaction of leaching parameters cannot be accounted for, since the parameters investigated for their effect on metal recovery are handled one by one. On the other hand, a study in which all the parameters are examined requires a large number of experiments.

2.1. RSM experimental design

Response surface methodology (RSM) was first defined and developed in 1951 by Box and Wilson. RSM is created with the help of model regression analysis. Box and Wilson set up experiments with the aim of reaching the point where the response variable has its maximum value on the response surface with the smallest possible number of observations. For this purpose, they compared several experimental schemes and identified composite experiments [7]. Response surface methods have been used successfully in many disciplines. A response surface design uses two or more factors; for example, the effects of time and temperature on the result can be investigated and their optimum values found. The results can be expressed as three-dimensional graphics or as contour graphics. Using a very small number of experimental combinations, it is possible to estimate factors and their combinations that are not actually tested [8, 9]. In this study, a Stat-Ease software package (version 6.0.10 trial) is used for data analysis.

RSM is a collection of mathematical and statistical techniques that are useful for the modeling and analysis of problems in which a response of interest is influenced by several variables and the objective is to optimize this response. For example, suppose that an engineer wishes to find the levels of temperature X1 and pressure X2 that maximize the yield y of a process. The process yield is a function of the levels of temperature and pressure,

y = f(X1, X2) + ε    (1)

where ε represents the noise or error observed in the response y. If we denote the expected response by E(y) = f(X1, X2) = η, then the surface is represented by

f(X1, X2) = η    (2)

In most RSM problems, the form of the relationship between the response and the independent variables is unknown. Thus, the first step in RSM is to find a suitable approximation for the true functional relationship between y and the set of independent variables. If the response is well modeled by a linear function of the independent variables, then the approximating function is the first-order model,

y = β0 + β1X1 + β2X2 + … + βkXk + ε    (3)

If there is a curvature in the system, then a polynomial of higher degree must be used, such as the second-order model,

y = β0 + Σ(i=1 to k) βi Xi + Σ(i=1 to k) βii Xi² + Σ(i<j) βij Xi Xj + ε    (4)

Almost all RSM problems use one or both of these models [10].
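To make the two models concrete, the following minimal Python sketch (not part of the original chapter; the factor settings and responses are hypothetical) fits the first-order model of Eq. (3) and the second-order model of Eq. (4) to a small two-factor data set by ordinary least squares.

```python
import numpy as np

# Coded settings of two factors (a small CCD-like layout) and a
# hypothetical response; the numbers are illustrative only.
X1 = np.array([-1, -1, 1, 1, -1.414, 1.414, 0, 0, 0, 0])
X2 = np.array([-1, 1, -1, 1, 0, 0, -1.414, 1.414, 0, 0])
y  = np.array([52., 60., 58., 69., 50., 66., 55., 64., 67., 68.])

# First-order model of Eq. (3): y = b0 + b1*X1 + b2*X2
A_first = np.column_stack([np.ones_like(X1), X1, X2])
b_first, *_ = np.linalg.lstsq(A_first, y, rcond=None)

# Second-order model of Eq. (4): adds squared and interaction terms
A_second = np.column_stack([np.ones_like(X1), X1, X2, X1**2, X2**2, X1 * X2])
b_second, *_ = np.linalg.lstsq(A_second, y, rcond=None)

print("first-order coefficients :", b_first)
print("second-order coefficients:", b_second)
```

The second-order fit simply augments the model matrix with squared and cross-product columns, which is exactly how the quadratic models used later in this chapter are built.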

Statistical design, also known as experimental design, is the methodology of planning and conducting experiments so as to obtain maximum information from a minimum number of experiments. The following questions should be answered before any design is selected to investigate the variable parameters, and all of them must be answered satisfactorily before the experiment is performed.

  • What method of data analysis should be used? How will the effects of the factors on the response value be measured? How many factors are effective on the response value? How many factors will be considered simultaneously? How many experiments will be needed?

All of these principles rest on statistical evidence and have been proven by reliable methods supported by trial and error. If a large number of variables affect the result of a study, the use of experimental design methods provides a number of advantages in designing such studies.

Engineering processes are complex systems that are influenced by many factors. Experimental design ensures that these systems are expressed by functions and that important interactions between a large number of active variables are revealed. Response variables (metal recovery in leaching, flotation yield in a flotation process, etc.) are observed as experimental outputs, while the independent variables in the design of experiments are controlled. Simultaneous variation of the variables makes it possible to arrive at a result in less time and with less experimental effort than the conventional approach in which a single variable is changed at a time. The most important advantage of experimental design is thus the simultaneous variation of several factors and the independent evaluation of each factor. Most experimental designs share several common steps. The first step is to define the problem to be solved. The second is to determine the factors affecting the process. The third is the study of the factors in different combinations in an experimental program. Finally, the best combination is chosen.

3. Design types

There are many types of designs that can be used to plan experimental work. These design types have become easier to understand with the improvement of appropriate computer software. The choice of design is important and depends on how well the researcher understands the work to be done. Because there is a variety of design methods, their design criteria and application areas also vary. Some of the design types available in the Design-Expert software are listed below [11]:

  • Central composite design
  • Box-Behnken design
  • Three-level factorial design
  • Hybrid design
  • One-factor design
  • Pentagonal design
  • Hexagonal design
  • D-optimal design
  • User-defined design, etc.

Each design type has its own characteristics, which are usually related to the position of design points and the number of design points. Short information on some design types is given below.

3.1. Central composite design

Each numeric factor is varied over five levels: plus and minus alpha (axial points), plus and minus 1 (factorial points), and the center point ( Figure 2 ). The biggest advantage of the central composite design (CCD) is that it allows experimentation outside the region bounded by the cube (factorial) points, through the axial points. In addition, the presence of these points gives the design rotatability, which is why it is highly preferred by researchers. CCD is divided into three subtypes: central composite circumscribed (CCC), central composite inscribed (CCI), and central composite face-centered (CCF).

Figure 2.

Central composite (CCD) design layout.
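As an illustration of this layout, the short sketch below (an assumed standard construction, not Design-Expert output) builds the coded point set of a rotatable CCD for three factors: eight cube points, six axial points at ±alpha, and replicated center points, which gives the 20 runs used later in this chapter.

```python
from itertools import product
import numpy as np

k = 3
alpha = (2 ** k) ** 0.25      # rotatable alpha = (2^k)^(1/4) ≈ 1.682 for k = 3
n_center = 6                  # replicated center points, as in the CCD used later

cube = np.array(list(product([-1.0, 1.0], repeat=k)))               # 8 factorial (cube) points
axial = np.vstack([sign * alpha * row
                   for sign in (-1.0, 1.0) for row in np.eye(k)])   # 6 axial points at ±alpha
center = np.zeros((n_center, k))                                    # 6 center points

ccd = np.vstack([cube, axial, center])
print(ccd.shape)              # (20, 3): the 20 runs quoted for the CCD in this chapter
```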

3.2. Box-Behnken design

Each numeric factor is varied over three levels. If categorical factors are added, the Box-Behnken design will be duplicated for every combination of the categorical factor levels ( Figure 3 ).

Figure 3.

Box-Behnken design layout.
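For comparison, the following sketch (again an assumed construction, not software output) lists the coded Box-Behnken points for three factors: each pair of factors takes the ±1 combinations while the third stays at its center level, plus replicated center points, giving the 17 runs used later in this chapter.

```python
from itertools import combinations, product
import numpy as np

k = 3
n_center = 5                                  # replicated center points, as in the BBD used later

points = []
for i, j in combinations(range(k), 2):        # every pair of factors
    for a, b in product([-1.0, 1.0], repeat=2):
        row = [0.0] * k                       # the remaining factor stays at its center level
        row[i], row[j] = a, b
        points.append(row)
points += [[0.0] * k for _ in range(n_center)]

bbd = np.array(points)
print(bbd.shape)                              # (17, 3): the 17 runs quoted for the BBD
```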

3.3. Three-level factorial design

Each numeric factor is varied over three levels. If categorical factors are added, the three-level factorial design will be duplicated for every combination of the categorical factor levels ( Figure 4 ).

Figure 4.

Three-level factorial design layout.

3.4. Hybrid design

A minimal-point design for three, four, six, or seven factors with five levels each. Because there is no replication, the lack-of-fit test is not available. These rotatable or nearly rotatable designs are better than a small central composite design, but are still highly sensitive to outliers or missing data.

3.5. One-factor design

One-factor design is a design for one numerical factor using three levels for a linear model, five levels for a quadratic model, seven levels for cubic models plus some replicated points.

The design will be duplicated for every combination of the categorical factor levels.

3.6. Pentagonal design

Design for two factors where factor A can have four levels and factor B has five levels. This minimal point design is extremely sensitive to outliers or missing data. The design will be duplicated for every combination of the categorical factor levels ( Figure 5 ).

Figure 5.

Pentagonal design layout.

3.7. Hexagonal design

Design for two factors where factor A has five levels and factor B has three levels. The design will be duplicated for every combination of the categorical factor levels ( Figure 6 ).

Figure 6.

Hexagonal design layout.

4. Optimization of comparative sample hydrometallurgy study

Hydrometallurgy is an important part of the metal production process; it is becoming more important because it has many advantages, such as being eco-friendly, easy operation, low energy consumption, and low cost. Recently, the use of RSM to plan hydrometallurgical studies has increased in order to examine the effects of the many influential parameters [12, 13, 14, 15, 16, 17, 18]. The advantages of using response surface methods in such studies can be listed as follows:

  • Reduces the influence of experimental errors.

  • Allows statistical analysis.

  • Helps determine important parameters and trivial parameters that need to be checked.

  • Helps to determine and measure interactions between parameters.

  • Allows the best results to be searched for within the examined intervals of the test parameters, and the data to be extrapolated.

  • Enables you to draw graphs describing how variables relate to each other and to determine the values of variables that give optimum results.

  • Allows the creation of predictive model equations that reveal the mathematical relationship between the independent variables and the dependent response values.

  • Ensures that results are displayed in three-dimensional or contour graphics.

  • Allows simultaneous modification of the parameters that are active on the result during the experimental run.

In this section, a sample hydrometallurgy study is presented comparatively using two different response surface methods. For this purpose, the determination of experimental conditions, examination of results, evaluation of statistical data, and optimization of results were studied with the central composite design (CCD) and Box-Behnken design (BBD) layouts. It is assumed that an oxidized copper ore (CuO) will be leached in the presence of sulfuric acid (H2SO4). The suggested reaction is as follows, and the effective parameters are H2SO4 concentration, leaching temperature, and leaching time.

CuO + H2SO4 → CuSO4 + H2O    (5)

The experimental design is considered with only one response value, the copper extraction. Tables 1 and 2 show the intervals of the examined parameters for CCD and BBD, respectively.

As seen in Tables 1 and 2 , the CCD allows experimental investigation at extra points (alpha points) located at a distance from the center of the design, whereas the BBD has the same design points as the CCD except for the alpha values. Here, (−1), (0), and (+1) represent the design low, center, and high levels, respectively. The alpha points are what give rotatability to the design layout. The intervals of the investigated parameters are 4–8 M H2SO4, 25–65°C leaching temperature, and 30–90 min leaching time. Assumed results are entered for the experimental conditions specified by the software. The CCD has 20 experimental runs with 6 center points, while the BBD has 17 experimental runs with 5 center points. Figure 7 shows the experimental design model and the investigated interval of each parameter.

Figure 7.

Experimental design for CCD and BBD.
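The relation between the coded levels in Tables 1 and 2 and the actual units can be expressed with two small helper functions. This is a sketch based on the usual coding convention (center ± half-range); it is not code from the chapter.

```python
def to_coded(actual, low, high):
    """Map an actual value to its coded level (-1 at low, +1 at high)."""
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (actual - center) / half_range

def to_actual(coded, low, high):
    """Map a coded level back to actual units."""
    return (high + low) / 2.0 + coded * (high - low) / 2.0

print(to_coded(8.0, 4.0, 8.0))      # +1.0  (8 M H2SO4, the design high level)
print(to_actual(1.682, 4.0, 8.0))   # ≈ 9.36 M, the +alpha level of H2SO4 in Table 1
```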

4.1. Analysis of comparative experimental results

The assumed experimental results described above were then analyzed. The standard error of design graphics is shown for CCD and BBD in Figures 8 and 9 . According to the error distribution results, CCD and BBD exhibit different graphic trends; in the CCD model, the error values increase as the low design points are approached.

Figure 8.

Standard error of central composite design model (Time: 60 min-constant).

Figure 9.

Standard error of Box-Behnken design model (Time: 60 min constant).

On the other hand, Tables 3 and 4 show the sequential model sum of squares, from which the type of model source that fits the design conditions can be decided. In practice, the source can be chosen according to a high value of the sum of squares. Thus, the appropriate model source can be chosen as linear and quadratic for CCD and quadratic for BBD.

Factor        Unit   −α       −1     0      +1     +α
H2SO4         M      2.64     4      6      8      9.36
Temperature   °C     11.36    25     45     65     78.64
Time          min    9.55     30     60     90     110.45

Table 1.

Interval of examined parameters for CCD.

Factor        Unit   −1     0      +1
H2SO4         M      4      6      8
Temperature   °C     25     45     65
Time          min    30     60     90

Table 2.

Interval of examined parameters for BBD.

Source      Sum of squares   DF   Mean square   F value      Prob > F
Mean        58644.45         1    58644.45
Linear      5950.16          3    1983.39       7.79         0.0020
2FI         543.37           3    181.12        0.67         0.5872
Quadratic   2324.65          3    774.88        6.42         0.0107
Cubic       1074.34          4    268.58        12.21        0.0048
Quartic     132.03           1    132.03        6.366E+007   <0.0001
Fifth       0.000            0
Sixth       0.000            0
Residual    0.000            5    0.000
Total       68669.00         20   3433.45

Table 3.

Sequential model sum of squares for CCD [degrees of freedom (DF)].

Source      Sum of squares   DF   Mean square   F value      Prob > F
Mean        55461.24         1    55461.24
Linear      620.75           3    206.92        1.72         0.2114
2FI         6.25             3    2.08          0.013        0.9977
Quadratic   1429.01          3    476.34        26.52        0.0003
Cubic       125.75           3    41.92         6.366E+007   <0.0001
Quartic     0.000            0
Fifth       0.000            0
Sixth       0.000            0
Residual    0.000            4    0.000
Total       57643.00         17   3390.76

Table 4.

Sequential model sum of squares for BBD [degrees of freedom (DF)].

Analysis of variance (ANOVA) may be used to assist in the interpretation of the results. ANOVA tables provide much useful information about the model design, such as the interactions of parameters, lack of fit, degrees of freedom, etc. The ANOVA data of CCD and BBD are shown in Tables 5 and 6 , respectively. In these tables, the leaching parameters sulfuric acid concentration, leaching temperature, and leaching time are symbolized as A, B, and C, respectively. Because the models are quadratic, squared terms and interactions between parameters are also included. Statistical information about the residual, lack of fit, and pure error for each model is also available. The ANOVA values also indicate the effective parameters according to the "F value" or "Prob > F" columns: parameters with a Prob > F value less than 0.05 are significant with respect to the result, while values greater than 0.1000 indicate that the model terms are not significant.

Source        Sum of squares   DF   Mean square   F value   Prob > F
Model         8818.19          9    979.80        8.12      0.0015
A             1734.48          1    1734.48       14.38     0.0035
B             3645.71          1    3645.71       30.22     0.0003
C             569.97           1    569.97        4.72      0.0548
A²            1023.62          1    1023.62       8.49      0.0155
B²            480.83           1    480.83        3.99      0.0738
C²            1249.58          1    1249.58       10.36     0.0092
AB            36.13            1    36.13         0.30      0.5962
AC            231.12           1    231.12        1.92      0.1964
BC            276.12           1    276.12        2.29      0.1612
Residual      1206.36          10   120.64
Lack of fit   1206.36          5    241.27
Pure error    0.000            5    0.000
Cor total     10024.55         19

Table 5.

ANOVA for response surface quadratic model of CCD.

Source        Sum of squares   DF   Mean square   F value   Prob > F
Model         2056.01          9    228.45        12.72     0.0015
A             55.13            1    55.13         3.07      0.1233
B             253.13           1    253.13        14.09     0.0071
C             312.50           1    312.50        17.40     0.0042
A²            931.64           1    931.64        51.86     0.0002
B²            331.64           1    331.64        18.46     0.0036
C²            55.33            1    55.33         3.08      0.1227
AB            6.25             1    6.25          0.35      0.5738
AC            0.000            1    0.000         0.000     1.0000
BC            0.000            1    0.000         0.000     1.0000
Residual      125.75           7    17.96
Lack of fit   125.75           3    41.92
Pure error    0.000            4    0.000
Cor total     2181.76          16

Table 6.

ANOVA for response surface quadratic model of BBD.
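For readers without Design-Expert, an ANOVA table of the same form as Tables 5 and 6 can be produced with open-source tools. The sketch below uses statsmodels on hypothetical data (the response is simulated, so the numbers will not match the tables); the formula names A, B, and C and the quadratic terms exactly as in the tables above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": rng.uniform(4, 8, 17),      # H2SO4 concentration, M
    "B": rng.uniform(25, 65, 17),    # leaching temperature, °C
    "C": rng.uniform(30, 90, 17),    # leaching time, min
})
# Hypothetical response with some curvature plus noise (illustration only).
df["Cu"] = (10 + 8 * df.A + 1.5 * df.B + 0.4 * df.C
            - 0.6 * df.A ** 2 - 0.01 * df.B ** 2 + rng.normal(0, 2, 17))

model = smf.ols("Cu ~ A + B + C + I(A**2) + I(B**2) + I(C**2) + A:B + A:C + B:C",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # term-by-term sums of squares, F values, Prob > F
```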

In this case, A, B, A², and C² are significant model terms for the central composite design, while B, C, A², and B² are significant model terms for the Box-Behnken design. This means that parameters A and B have a linear effect on the results and that the squared terms (such as A² and C²) have a second-order effect on the results for the central composite design model. For the Box-Behnken design, parameters B and C have a linear effect and A² and B² have a second-order effect on the results. According to the ANOVA of both models, an interesting situation arises: different parameters and different squared terms are significant in each design model, even though the two models were built on the same experimental data.

This situation can only be explained by the presence of the extra experimental points, which shows how important the choice of design model is in optimization. Some statistical data from both models are shown below ( Tables 7 and 8 ). The important values are the standard deviation (std. dev.), R² (R-squared), adjusted R² (adj R-squared), and adequate precision (adeq precision). According to the tabulated data, the standard deviation of the CCD model (10.98) is higher than that of the BBD model (4.24). Statistically, a model fits better when the standard deviation is low, and this also affects the R-squared and adjusted R-squared values. In this context, the R² and adjusted R² values of the BBD model are higher than those of the CCD model. Both R² values are important for model fitting, with higher values preferred. Comparing the R² values, it can be said that the BBD model is more appropriate for the same experimental conditions. The probable cause is that the BBD is a more nearly linear design model, although this advantage holds only when the parameter changes have a correspondingly simple effect on the response. On the other hand, adequate precision measures the signal-to-noise ratio, and a ratio greater than 4 is desirable. Both model ratios (10.007 and 10.228) indicate an adequate signal, so these models can be used to navigate the design space.

Std. dev.   10.98     R-squared        0.8797
Mean        54.15     Adj R-squared    0.7714
C.V.        20.28     Pred R-squared   0.0431
PRESS       9592.29   Adeq precision   10.007

Table 7.

Statistical data from CCD model.

Std. dev.   4.24      R-squared        0.9424
Mean        57.12     Adj R-squared    0.8683
C.V.        7.42      Pred R-squared   0.0778
PRESS       2012.00   Adeq precision   10.228

Table 8.

Statistical data from BBD model.
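The fit statistics in Tables 7 and 8 can be reproduced from any least-squares fit. The sketch below (illustrative only; the model matrix X and response y are assumed to be built as in the earlier snippets) computes R², adjusted R², PRESS, and predicted R², where PRESS uses the standard leave-one-out shortcut based on the hat-matrix leverages.

```python
import numpy as np

def fit_statistics(X, y):
    """R², adjusted R², PRESS, and predicted R² for an OLS fit of y on X."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())

    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)

    # PRESS via the leave-one-out shortcut using the hat-matrix leverages.
    hat = X @ np.linalg.pinv(X.T @ X) @ X.T
    press = float(((resid / (1.0 - np.diag(hat))) ** 2).sum())
    pred_r2 = 1.0 - press / ss_tot
    return r2, adj_r2, press, pred_r2

# Tiny demonstration on made-up data (3 coefficients, 8 runs):
X_demo = np.column_stack([np.ones(8), np.arange(8.0), np.arange(8.0) ** 2])
y_demo = np.array([3.0, 4.8, 9.1, 15.2, 22.8, 33.1, 44.9, 59.2])
print(fit_statistics(X_demo, y_demo))
```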

A model equation for the result can be created with RSM. As shown in Eqs. (6) and (7), the final equations in terms of actual factors have different coefficients for CCD and BBD (coefficients rounded to two digits).

Cu% (CCD model) = −82.59 + 23.15 H2SO4 + 1.21 Temp + 0.48 Time − 2.11 H2SO4² − 0.01 Temp² − 0.01 Time² + 0.05 H2SO4·Temp + 0.09 H2SO4·Time + 9.79 × 10⁻³ Temp·Time    (6)

Cu% (BBD model) = −164.77 + 47.34 H2SO4 + 2.47 Temp + 0.69 Time − 3.72 H2SO4² − 0.02 Temp² − 4.03 × 10⁻³ Time² − 0.03 H2SO4·Temp − 2.29 × 10⁻¹⁶ H2SO4·Time − 6.57 × 10⁻¹⁸ Temp·Time    (7)

The above equations are useful model equations. In particular, if parameter values other than the design points are entered, the model equation gives the expected outcome. Recall that the design intervals are 4–8 M H2SO4, 25–65°C temperature, and 30–90 min time. Let us run the model equations at values that are not design points, where the copper extraction result (response value) is of interest, say 5 M H2SO4, 30°C, and 80 min. According to the model equations, the result is Cu%: 49.11 for the central composite design model and Cu%: 59.94 for the Box-Behnken design model. Although the statistical data favor the BBD, the most accurate result can only be established by comparing with experiments carried out under these conditions.
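Writing Eqs. (6) and (7) as small functions makes such predictions easy to reproduce; the coefficients below are simply those of the equations above (with the rounding noted earlier), and evaluating them at 5 M, 30°C, and 80 min returns approximately the 49.11% and 59.94% quoted in the text.

```python
def cu_ccd(h2so4, temp, time):
    # CCD model, Eq. (6)
    return (-82.59 + 23.15 * h2so4 + 1.21 * temp + 0.48 * time
            - 2.11 * h2so4 ** 2 - 0.01 * temp ** 2 - 0.01 * time ** 2
            + 0.05 * h2so4 * temp + 0.09 * h2so4 * time
            + 9.79e-3 * temp * time)

def cu_bbd(h2so4, temp, time):
    # BBD model, Eq. (7)
    return (-164.77 + 47.34 * h2so4 + 2.47 * temp + 0.69 * time
            - 3.72 * h2so4 ** 2 - 0.02 * temp ** 2 - 4.03e-3 * time ** 2
            - 0.03 * h2so4 * temp - 2.29e-16 * h2so4 * time
            - 6.57e-18 * temp * time)

print(round(cu_ccd(5, 30, 80), 2))   # ≈ 49.11
print(round(cu_bbd(5, 30, 80), 2))   # ≈ 59.94
```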

The residual plots were examined for model adequacy for each set of metal extraction design values. In Figures 10 and 11 , the normal probability-studentized residual plots and the predicted-versus-actual plots are shown for the copper extraction of the two model designs. The normal probability plots show how well the model satisfies the assumptions of the ANOVA, where the studentized residuals measure the number of standard deviations separating the actual and predicted values [19]. An S-shaped distribution in the normal probability curve is not desirable, and on the predicted-versus-actual graphs the points should lie close to the line. The last stage of the design is the determination of the optimization criteria. The optimization criterion for both design models was maximum copper extraction, with the intervals of the three parameters kept in range. Under these conditions, the software proposes solution points where different response values (Cu extraction) can be obtained by changing the leaching parameters. These solutions are shown in Tables 9 and 10 .

Figure 10.

Diagnostic graphics of CCD model: (a) normal probability and (b) predicted vs. actual graphics.

Figure 11.

Diagnostic graphics of BBD model. (a) normal probability and (b) predicted vs. actual graphics.

Number   H2SO4 (M)   Temp (°C)   Time (min)   Cu (%)    Desirability
1        8.00        65.00       88.55        94.1577   0.989
2        8.00        65.00       88.59        94.1574   0.989
3        8.00        65.00       88.74        94.1571   0.989
4        8.00        65.00       88.89        94.1561   0.989
5        8.00        65.00       88.87        94.1542   0.989
6        8.00        64.99       88.51        94.1542   0.989
7        8.00        65.00       89.70        94.1434   0.989
8        8.00        65.00       87.31        94.1423   0.989
9        7.94        65.00       90.00        94.0657   0.988
10       8.00        65.00       80.37        93.4687   0.980

Table 9.

Optimum solution points of Cu percentage for CCD model.

Number   H2SO4 (M)   Temp (°C)   Time (min)   Cu (%)    Desirability
1        6.65        56.96       78.94        71.7234   1.000
2        6.14        51.34       65.60        72.0147   1.000
3        5.31        50.57       80.33        70.8928   1.000
4        5.53        48.11       79.95        71.8394   1.000
5        7.03        49.77       85.76        70.8148   1.000
6        5.88        53.32       61.76        70.9737   1.000
7        6.64        55.29       65.82        70.7235   1.000
8        6.56        54.94       65.38        71.0067   1.000
9        6.06        53.81       65.78        71.8761   1.000
10       5.76        47.27       60.14        70.046    1.000

Table 10.

Optimum solution points of Cu percentage for BBD model.
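The solution points in Tables 9 and 10 come from the software's desirability search. A simpler numerical alternative (a sketch, not the Design-Expert optimizer) is to maximize the model equation directly within the examined parameter ranges, for example with scipy; Eq. (7) is restated here with its negligible interaction terms omitted.

```python
from scipy.optimize import minimize

def cu_bbd(h2so4, temp, time):
    # BBD model, Eq. (7); the ~1e-16 interaction terms are omitted as negligible.
    return (-164.77 + 47.34 * h2so4 + 2.47 * temp + 0.69 * time
            - 3.72 * h2so4 ** 2 - 0.02 * temp ** 2 - 4.03e-3 * time ** 2
            - 0.03 * h2so4 * temp)

bounds = [(4.0, 8.0), (25.0, 65.0), (30.0, 90.0)]   # H2SO4 (M), temperature (°C), time (min)

# Maximize predicted Cu% by minimizing its negative inside the design ranges.
result = minimize(lambda x: -cu_bbd(*x), x0=[6.0, 45.0, 60.0],
                  bounds=bounds, method="L-BFGS-B")

print(result.x)       # optimum H2SO4, temperature, and time
print(-result.fun)    # predicted Cu% at that point (compare with the region in Table 10)
```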

As seen in Tables 9 and 10 , the desirability of the BBD model is higher than that of the CCD model, which indicates that the model adaptation for the current situation is better in BBD, even though the copper extraction values are higher in the CCD model. The response surface graphs for the metal extractions under the optimum conditions are shown in Figure 12 for CCD and Figure 13 for BBD.

Figure 12.

The effect of parameters interaction on the result for CCD model.

Figure 13.

The effect of parameters interaction on the result for BBD model.

The presence of "onion rings" in the graphics indicates that optimum regions were obtained under these conditions. While the Box-Behnken design model gives such optimum regions, the central composite design model does not. This is a controversial issue, so when choosing a design model, all conditions must be taken into account.

These three-dimensional result graphics are very useful for observing the results. In this kind of graph, the effect of more than one parameter on the result can be determined simultaneously, together with how the interaction between the changing parameters affects the result. Figure 12 shows that high copper extraction can be obtained at high values of the parameters. For instance, 95% copper extraction can be achieved under the conditions of 65°C leaching temperature, 90 min leaching time, and 8 M H2SO4 concentration. However, extraction that simply increases with increasing parameter values is not an acceptable outcome in such studies, because the goal is to reach optimum conditions, and it is well known that over-consumption of reagents and energy already yields high extraction values.

On the other hand, Figure 13 shows optimal copper extraction under optimum conditions. It is clear that the graphs of the temperature-H2SO4 and time-H2SO4 interactions have closed circular contour lines similar to onion rings. The center region of these rings corresponds to a hump on the surface graphic (3D graphics), which indicates the optimum point. Moving to either side of the hump (increasing and/or decreasing parameter values) causes a decrease in copper extraction, which is interpreted as moving away from the optimum region. According to Figure 13a , approximately 70% Cu extraction can be obtained at a leaching temperature of 45–55°C and an H2SO4 concentration of 5.5–6.5 M. Figure 13b shows that the optimum conditions are a leaching time of 75–90 min and an H2SO4 concentration of 5.5–6.5 M.

It is noted that if the parameter values are increased in the CCD for the same operation, the copper extraction increases, while the same increase in parameter values in the BBD causes the extraction values to drop significantly. This tendency arises from the way the effects of the parameters are examined.

When the one-factor plots of the two models are compared, copper extraction increases with increasing H2SO4 concentration, time, and temperature in every plot of the CCD model, whereas in the BBD model copper extraction decreases or reaches a plateau as the parameter values increase beyond the optimum points. Of course, it should not be ignored that these factors interact.

In this chapter, presumed experimental data on hydrometallurgical copper extraction with three parameters were applied to two different design models, central composite design (CCD) and Box-Behnken design (BBD), in order to compare the results. As described in the sections above, different statistical results and approaches were obtained even though the same basic design values were entered as data. There are numerous design models, but the user should select the one most suitable for their own process. In other words, we first need to know the process well, because design models are not robust to unexpected changes in the process. According to the above model comparison, the Box-Behnken design model is more appropriate for this process. This implies that the central composite design is not appropriate here, although it is generally more sensitive because it has extra design points and a rotatable feature. Although only two models are compared here and different results are obtained, it is impossible to test all models for a process. Instead, it is always more practical to understand the model designs and the process and to select a model accordingly.

Although the optimization comparison above used a numerical example, a similar study can be carried out for the modeling of nonnumerical processes (treated as categorical factors). In this context, the use of response surface methodology for optimization in related fields, such as mineralization, geochemistry, or the stirring profile of any process, can be considered to increase productivity.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
