
On the Practical Consideration of Evaluators’ Credibility in Evaluating Relative Importance of Criteria for Some Real-Life Multicriteria Problems: An Overview

Written By

Maznah Mat Kasim

Submitted: 03 December 2018 Reviewed: 14 April 2020 Published: 18 May 2020

DOI: 10.5772/intechopen.92541

From the Edited Volume

Multicriteria Optimization - Pareto-Optimality and Threshold-Optimality

Edited by Nodari Vakhania and Frank Werner


Abstract

A multicriteria (MC) problem usually consists of a set of predetermined alternatives, or subjects to be analyzed, that are described by a finite number of criteria. MC problems arise in a wide range of application areas. Solving them serves one of three goals: ranking, sorting, or grouping the alternatives according to their overall scores. Most MC methods require the criteria weights to be combined mathematically with the criteria values in order to obtain the overall score of each alternative. This chapter provides an overview of the practical consideration of evaluators’ credibility, or superiority, in calculating the criteria weights and the overall scores of the alternatives. To show how the degree of credibility of evaluators can be considered in practice when solving a real problem, a numerical example on the evaluation of students’ academic performance is provided in the Appendix at the end of the chapter. In the example, the degree of credibility of the teachers who participated in weighting the academic subjects was determined objectively, and rank-based criteria weighting methods were used. Including the degree of credibility of the evaluators who participate in solving multicriteria problems makes the results more realistic and accurate.

Keywords

  • multicriteria problem
  • credibility
  • weights
  • subjective
  • aggregation

1. Introduction

Multicriteria decision-making (MCDM) is now regarded as a discipline of knowledge in its own right, and it has been expanding very quickly. Basically, it is concerned with how to make decisions when the issue at hand involves multiple criteria. An MC problem consists of two main components, alternatives and criteria. In real-life situations, the alternatives are options, organizations, people, or units to be analyzed, which are described by a finite set of criteria or attributes. If the number of alternatives is finite and known, the task is to select the best or optimal alternative, to rank the alternatives according to their overall quality or performance, or to sort or group the alternatives based on certain measurements or values. In this case, the MC problem is usually called a multiattribute decision-making (MADM) problem, and the alternatives are described by a finite number of criteria or attributes [1]. MADM methods are utilized to handle discrete MCDM problems [2]. This chapter focuses on MADM problems, or more generally MCDM problems, in which a finite number of predetermined alternatives is described by several criteria or attributes. MCDM problems can be found in various sectors.

1.1 Examples of multicriteria decision-making problems

Selection problems are naturally of the MCDM type; a simple one that we face almost every day is choosing a dress or a shirt to wear. The decision of which dress or shirt to choose is based on certain attributes or factors, such as the occasion (office, leisure, or business), color preference, and style or fashion. Here, the garments are the alternatives, while all factors that form the basis of evaluation are the attributes. Another example is choosing the best location for projects such as housing, industrial or agricultural activities, recreation centers, hotels, and so on. Many factors or criteria, which may conflict with one another, must be considered by the decision-makers. Selecting the best candidate for a position, whether through face-to-face interviews or online tests, is also an MCDM problem, since the selection is based on certain requirements. Selecting employees in different organizations, with different scopes of work and different requirements imposed by the organization, can likewise be categorized as an MCDM problem.

Other examples include the selection of the best supplier for a manufacturing firm [3, 4], the selection of the best personal computer [5], and the selection of a suitable e-learning system [6] to be implemented in an educational institution. These studies focused on selecting the best alternative from a finite number of alternatives described by a few evaluation criteria. They share the same central issue, namely the relative importance, or weights, of the evaluation criteria with respect to the overall performance of the alternatives under study. The studies provide ways to find the weights subjectively and to aggregate them when a group of decision-makers is involved in judging the importance of the criteria.

In addition, the evaluation of a program, for example, is usually carried out after the aspects of the program to be evaluated have been identified. There may be many programs to be evaluated under several aspects, with one evaluator or a group of evaluators involved. In a different situation, there may be only one program to be evaluated under several aspects, again by one or many evaluators. Many other evaluation tasks are likewise performed in the presence of multiple criteria, such as the evaluation of students, the evaluation of employees’ performance, the evaluation of learning approaches [7], and the evaluation of students’ performance [8]. In the study on the evaluation of students’ academic performance in primary schools, five academic subjects were assumed to contribute differently to the overall performance of the students. A few experienced teachers were asked to evaluate the degree of importance of the subjects. The resulting weights of the academic subjects were incorporated in finding the overall academic performance of the year-six students in one selected primary school in the northern part of Malaysia. To illustrate the practical consideration of the credibility of the evaluators, this problem is extended here by considering the credibility of the teachers who participated in weighting the academic subjects. The detailed discussion is available in the Appendix at the end of the chapter.

1.2 Credibility of the evaluators

Referring to these examples of MCDM scenarios, decision-maker(s) or evaluator(s) are involved at many stages of the evaluation process in searching for the optimal solution. Since all MCDM problems have two main components, the alternatives and the criteria or attributes, the decision-maker(s) or evaluator(s) are involved in at least two tasks: judging the quality of each alternative on each criterion and determining the relative importance of the criteria with respect to the overall performance of the alternatives. As usually arises in solving MCDM problems, the criteria contribute at different levels of importance, and this should be a concern for the decision-maker(s) or evaluator(s). The criteria or attributes of the units to be analyzed should not be assumed to contribute equally to the overall quality of the alternatives.

Besides the challenge of finding suitable evaluator(s) or decision-maker(s), who may come with different backgrounds and experience, they also come with different levels of superiority or credibility that should be taken into consideration. This issue should be treated seriously, because the results may be misleading if those who carry out the evaluation or judgment do not have enough experience or are not credible enough to judge the MCDM problem under study. Moreover, the results may differ among the evaluators if the evaluators are at different levels of superiority [9]. Therefore, the credibility of the expert(s), evaluator(s), or decision-maker(s) who are involved in assessing the quality of the alternatives or the relative importance of the attributes should be taken into consideration.

Webster’s New World College Dictionary defines credibility as the quality of being trustworthy or believable. Credibility is also interpreted as good reputation, honor, and the standing of someone who is prominent in the professional community [10]. Meanwhile, professionalism refers to the competence or skill expected of a professional; in other words, a professional is someone who is skilled, reliable, and fully responsible in carrying out their duties and profession [11]. This definition of professionalism closely resembles the notion of credibility, so the two are like two sides of a coin that cannot be separated. For the purpose of assessment or evaluation, professionalism and credibility are the competencies of assessors in carrying out their functions and roles well, with full commitment, trustworthiness, and accountability.

It is normal for assessors to have different levels of credibility, and their credibility should be considered together with their assessments or evaluations. This chapter provides an overview of current work on how the credibility of the decision-maker(s) or evaluator(s) can be considered, especially in evaluating the importance of the criteria or attributes of the MCDM problem under investigation, how to quantify the credibility of these people, and how the resulting quantitative values can be incorporated in finding the overall scores of the alternatives. This issue falls under the concept of group decision-making and extends it by considering the degree of superiority or credibility of the decision-maker(s) or evaluator(s). By accounting for the different relative importance of the attributes as well as the different levels of credibility or superiority of those involved in finding the optimal solution of the MCDM problem, the solution becomes more realistic, accurate, and representative of the true setting of the problem.

To achieve this objective, the chapter is organized as follows. The next section describes the basic notation used in this chapter. Section 3 discusses the concept of weights and the related methods, particularly the rank-based weighting methods. Section 4 discusses the aggregation of criteria weights and criteria values. Section 5 explains how to incorporate the credibility of the evaluators who are involved in weighting, that is, in finding the relative importance of the criteria; it also illustrates two approaches for aggregating the degrees of credibility of evaluators in order to find the overall performance of the alternatives and their rankings. Section 6 suggests a few ways to quantify the credibility of the evaluators. The conclusion of the chapter is in Section 7, which is followed by the list of references. A numerical example is provided in the Appendix at the end of the chapter.


2. Basic notation

Let $A = \{A_1, \ldots, A_n\}$ be a set of $n$ alternatives that are described by $m$ criteria, $C = \{C_1, \ldots, C_m\}$, and let $x_{ij}$ be the value of alternative $i$ under criterion $j$, where $i = 1, \ldots, n$ and $j = 1, \ldots, m$. Let $w = (w_1, \ldots, w_m)$ be the weights of the criteria, with the conditions that $0 \leq w_j \leq 1$ and $\sum_{j=1}^{m} w_j = 1$. This information can be illustrated as a decision matrix, as shown in Figure 1.

Figure 1.

A multiattribute problem as a decision matrix.

In relation to the numerical example in the Appendix, the students are the alternatives, while the academic subjects are the criteria. So $A = \{A_1, \ldots, A_{10}\}$ represents the set of 10 students that are assessed under five academic subjects, $C = \{C_1, \ldots, C_5\}$, and $x_{ij}$ is the score of student $i$ in academic subject $j$, where $i = 1, \ldots, 10$ and $j = 1, \ldots, 5$. The weights of the criteria, $w = (w_1, \ldots, w_5)$, refer to the relative importance of the academic subjects toward the composite or final score of each student.


3. Weights of criteria

In finding the relative importance of the criteria, or simply the criteria weights $w = (w_1, \ldots, w_m)$, many methods are available in the literature, and they are classified into two main approaches: objective methods and subjective methods [12]. The objective methods are data-driven: the values of the criteria must be available before their relative importance can be evaluated. Based on these values, proxy measures such as the standard deviation, correlation, variance, range, coefficient of variation, and entropy [13, 14, 15, 16, 17] are calculated to represent the criteria weights. The concept of entropy was introduced in communication theory, where it usually refers to uncertainty, and the entropy measure is often used to quantify information; in the MCDM domain, however, it serves as a proxy measure of criterion weights. In other words, objective methods produce criteria weights based on the intrinsic information of the criteria and do not require evaluators to weight the criteria. No further discussion of them is included here, because objective weights are not the focus of this chapter.

3.1 Rank-based weighting methods

This subsection focuses on rank-based weighting methods [18, 19], as these methods are used in this chapter to illustrate the practical consideration of evaluators’ credibility in evaluating the relative importance of criteria for some real-life multicriteria problems. These methods are very easy to use yet effective [20]. Three popular rank-based methods are rank-sum (RS), rank reciprocal (RR), and rank order centroid (ROC). Their mathematical representations are as follows.

Let $r_j$ be the rank of criterion $j$ given by an evaluator, where $r_j$ is an integer with possible values from 1 to $m$. A smaller value of $r_j$ means that the criterion is ranked higher, that is, it is more important than the other criteria. The rank vector $r = (r_1, \ldots, r_m)$ can be transformed into weights $w = (w_1, \ldots, w_m)$ by using any of the following formulas for RS, RR, and ROC, respectively. It should be noted that the sum of the criteria weights equals one:

$$w_j^{RS} = \frac{2(m + 1 - r_j)}{m(m + 1)} \tag{1}$$

$$w_j^{RR} = \frac{1/r_j}{\sum_{k=1}^{m} 1/r_k} \tag{2}$$

$$w_j^{ROC} = \frac{1}{m} \sum_{k=1}^{m} \frac{1}{r_k}\, I(r_k \geq r_j) \tag{3}$$

where $I(r_k \geq r_j) = 1$ if $r_k \geq r_j$ and $I(r_k \geq r_j) = 0$ if $r_k < r_j$.

Referring to the numerical example in the Appendix, there are five criteria representing five academic subjects; $r_j$ is the rank of academic subject $j$, an integer with possible values from 1 to 5, and the rank vector $r = (r_1, \ldots, r_5)$ of the academic subjects can be transformed into their weights $w = (w_1, \ldots, w_5)$.
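To make the rank-to-weight conversion concrete, here is a minimal Python sketch of Eqs. (1)-(3); the ranks in the usage line are those given by teacher 1 in Table 2 of the Appendix.

```python
def rank_sum(ranks):
    """Rank-sum (RS) weights, Eq. (1): w_j = 2(m + 1 - r_j) / (m(m + 1))."""
    m = len(ranks)
    return [2 * (m + 1 - r) / (m * (m + 1)) for r in ranks]

def rank_reciprocal(ranks):
    """Rank reciprocal (RR) weights, Eq. (2): w_j = (1/r_j) / sum_k (1/r_k)."""
    total = sum(1 / r for r in ranks)
    return [(1 / r) / total for r in ranks]

def rank_order_centroid(ranks):
    """Rank order centroid (ROC) weights, Eq. (3):
    w_j = (1/m) * sum of 1/r_k over criteria ranked no better than criterion j."""
    m = len(ranks)
    return [sum(1 / rk for rk in ranks if rk >= rj) / m for rj in ranks]

# Teacher 1's ranks for the five subjects in the Appendix example.
ranks = [1, 3, 4, 5, 2]
print([round(w, 3) for w in rank_sum(ranks)])  # [0.333, 0.2, 0.133, 0.067, 0.267]
```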

Many studies have examined the performance of these rank-based methods as criteria weighting methods. For example, a simulation experiment investigated the performance of the three rank-based weighting methods (RS, RR, and ROC) and equal weights (EW) on randomly generated data [16]. Three performance measures were used: the "hit rate," the "average value loss," and the "average proportion of maximum value range achieved." The results show that ROC was the best technique in most cases and on every measure. Another study of these three rank-based weighting techniques and EW concludes that the rank-based methods have higher correlations with the so-called true weights than EW [21].

Another study compared the EW, RS, and ROC methods with the direct rating and ratio weight methods [22]. Basically, the direct rating method is a simple weighting approach in which the decision-maker or evaluator rates all criteria according to their importance, directly quantifying their preferences. The rating does not constrain the decision-maker's responses, since it is possible to alter the importance of one criterion without adjusting the weight of another [23]. The comparison was conducted under the condition that the evaluators' judgments of the criteria weights are uncertain and subject to random errors. The results show that direct rating tends to give decision results of better quality when the uncertainty is small, while ROC provides results comparable to the ratio weights when a large degree of error is present. Note that the ratio weight method requires the evaluators to first rank the criteria by importance; the least important attribute is assigned a fixed value such as 10, and the remaining attributes are judged as multiples of 10. The weight of a criterion is then obtained by dividing its assigned value by the sum of the values of all attributes.

The superiority of ROC over the other rank-based methods was subsequently confirmed under different simulation conditions [24]. An investigation of the RS, RR, and ROC weighting methods was also carried out in which the number of criteria was varied from two to seven [25]. It was found that ROC gives the largest gap between the weights of the most important and the least important criteria, whereas RS produces the flattest, nearly linear weight function. For RR, the weight drops most sharply from the most important criterion to the second most important one, and the function then flattens out. In relation to rank-based weighting, another method, called the generalized sum of ranks (GRS), was proposed [26]. A further investigation compared the performance of GRS with RS, RR, and ROC in a simulation experiment and showed that GRS performs similarly to ROC.

Based on the above discussion, it can be concluded that the three rank-based weighting methods, RS, RR, and ROC, have good properties, the ROC method in particular. Therefore, these rank-based methods are used in the current study to illustrate how to include the degree of credibility of the evaluators who rank the importance of the criteria. Furthermore, converting the ranks into weight values is straightforward, using the formulas given in Equations (1), (2), and (3).

3.2 Other subjective weighting methods

Other subjective weighting methods include the analytic hierarchy process (AHP) [4, 27, 28], the swing method [29, 30], the graphical weighting (GW) method [31], and the Delphi method [32]. The AHP technique was introduced in 1980 [33]. It is a very popular MC approach in which the importance of each pair of criteria is compared pairwise. A prioritization procedure is then applied to derive a corresponding priority vector, which represents the criteria weights. If the judgments are consistent, all prioritization procedures give the same results; if the judgments are inconsistent, different prioritization procedures yield different priority vectors [34]. Nevertheless, AHP is widely criticized for being a tedious process, especially when there is a significant number of criteria or alternatives.

In the swing method, the evaluator first considers a hypothetical alternative with the worst consequence on every attribute. The evaluator(s) can then change one of the criteria from its worst consequence to its best: the evaluator is asked to choose the criterion that he or she would most prefer to swing from its worst to its best level. The criterion chosen is the most important one, and 100 points are allocated to it.

The GW method begins with a horizontal line marked with a series of numbers, such as (9-7-5-3-1-3-5-7-9). The evaluator places a mark representing the relative importance of a criterion on this line, on the basis that a criterion is either more, equally, or less important than another criterion by a factor of 1-9. A decision matrix is then built as a pairwise comparison matrix. A quantitative weight for each criterion is calculated by taking the sum of each row, and the scores are then normalized to obtain the overall weight vector. The GW method enables evaluators to express preferences in a purely visual way; however, it is sometimes criticized for allowing evaluators to assign weights in a rather loose manner.

The Delphi subjective weighting method [35] requires a focus group of evaluators to evaluate the relative importance of the criteria. Each evaluator remains anonymous to the others, which reduces the risk of personal effects or individual bias. The evaluation is conducted in more than one round, until the group reaches a consensus on the relative importance of the criteria under study. The main advantage of this method is that it avoids confrontation among the experts [36]. However, assembling such a focus group is quite costly and time-consuming.


4. Aggregation of criteria weights and values of criteria

Finding the final score of each alternative is very important, since the final scores are required to rank the alternatives: alternatives with higher scores should be positioned at higher rankings, and vice versa. To find the overall (composite or final) value of each alternative, the criteria weights must be aggregated with the alternative's values on the corresponding criteria. Many aggregation methods are available in the literature. This section focuses on the simple additive weighted average (SAW) method, as the chapter uses SAW in the numerical example in the Appendix at the end of the chapter. Furthermore, the SAW method is very well established and very easy to use [16].

4.1 Simple additive weighted average (SAW) method

The mathematical equation for SAW is given as follows:

$$\text{Score}(A_i) = \sum_{j=1}^{m} w_j x_{ij} \tag{4}$$

$\text{Score}(A_i)$ is the overall score of alternative $i$. Based on $\text{Score}(A_i)$, where $i = 1, \ldots, n$, the $n$ alternatives can be ranked, selected, or sorted, with the condition that alternatives with higher overall scores are ranked at higher positions. Referring to the numerical example in the Appendix, $\text{Score}(A_i)$ represents the overall score of student $i$, where $i = 1, \ldots, 10$.
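As a minimal sketch of Eq. (4), the following Python function aggregates one set of criteria weights with the rows of a decision matrix; the three-alternative matrix in the usage example is hypothetical.

```python
def saw_scores(weights, decision_matrix):
    """Simple additive weighted average (SAW), Eq. (4):
    Score(A_i) = sum_j w_j * x_ij for each alternative i."""
    return [sum(w * x for w, x in zip(weights, row)) for row in decision_matrix]

# Hypothetical example: 3 alternatives assessed on 3 criteria.
weights = [0.5, 0.3, 0.2]
matrix = [
    [0.7, 0.4, 0.9],  # alternative 1
    [0.6, 0.8, 0.5],  # alternative 2
    [0.9, 0.2, 0.3],  # alternative 3
]
scores = saw_scores(weights, matrix)
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(scores)   # [0.65, 0.64, 0.57]
print(ranking)  # [0, 1, 2] -> alternative 1 ranks first
```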

SAW is an old method; MacCrimmon was one of the first researchers to summarize it, in 1968 [37]. As a well-established method, it is widely used [38] in solving MC problems, particularly for the evaluation of alternatives. Basically, this method is the same as the simple arithmetic average, except that instead of using equal weights for the criteria, the SAW method usually uses distinct weight values. As given in Eq. (4), the overall performance of each alternative is obtained by multiplying the rating of the alternative on each criterion by the weight assigned to that criterion and then summing these products over all criteria [15]. The best alternative is the one that obtains the highest score and is selected or ranked first. Many recent studies have used the SAW method, for example, [39, 40, 41], and a review of its applications is also available [42].

Besides SAW, also known as the weighted sum method (WSM), there is another averaging technique called the weighted product model (WPM), also referred to as the simple geometric weighted (SGW) or simple geometric average method. In WPM, the overall performance of each alternative is determined by raising the rating of the alternative on each criterion to the power of the criterion weight and then multiplying these terms over all criteria [15]. However, WPM is somewhat more complex than SAW, since it involves powers and multiplications.
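For comparison, here is a minimal sketch of the weighted product model described above, reusing the hypothetical weights and matrix from the SAW example.

```python
import math

def wpm_scores(weights, decision_matrix):
    """Weighted product model (WPM): Score(A_i) = prod_j x_ij ** w_j."""
    return [math.prod(x ** w for w, x in zip(weights, row)) for row in decision_matrix]

weights = [0.5, 0.3, 0.2]
matrix = [[0.7, 0.4, 0.9], [0.6, 0.8, 0.5], [0.9, 0.2, 0.3]]
print([round(s, 3) for s in wpm_scores(weights, matrix)])
```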

4.2 Other aggregation methods

AHP [14], the technique for order preference by similarity to ideal solution (TOPSIS), and VlseKriterijumska Optimizacija Kompromisno Resenje (VIKOR) [43] are also popular aggregation methods for solving MC problems. As mentioned in Section 3.2, AHP is built on the concept of pairwise comparison, both for finding the criteria weights and for finding the criteria values of the alternatives. The aggregation of the criteria weights and criteria values obtained by AHP is sometimes done using the SAW or SGW methods.

AHP and TOPSIS are two different aggregation methods. TOPSIS identifies the best alternative using the concept of a compromise solution: the best alternative is the one that has the shortest distance from the ideal solution and the farthest distance from the negative ideal solution [44]. In other words, the alternatives are prioritized according to their distances from the positive and negative ideal solutions, and the Euclidean distance is used to evaluate the relative closeness of the alternatives to the ideal solutions. TOPSIS involves a series of steps, but it starts with the weighted normalization of all performance values against each criterion. Some recent applications of the TOPSIS method are available in [45, 46, 47, 48].
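Since the chapter only outlines the TOPSIS steps, the sketch below shows one common formulation (vector normalization, weighting, positive and negative ideal solutions, Euclidean distances, relative closeness); it assumes that all criteria are benefit criteria and reuses the hypothetical weights and matrix from the earlier examples.

```python
import math

def topsis(weights, decision_matrix):
    """A common TOPSIS formulation, assuming all criteria are benefit criteria."""
    m = len(weights)
    # 1. Vector normalization of each column, then multiplication by the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in decision_matrix)) for j in range(m)]
    v = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in decision_matrix]
    # 2. Positive and negative ideal solutions (column-wise max and min).
    v_pos = [max(row[j] for row in v) for j in range(m)]
    v_neg = [min(row[j] for row in v) for j in range(m)]
    # 3. Euclidean distances to both ideals and relative closeness.
    closeness = []
    for row in v:
        d_pos = math.sqrt(sum((row[j] - v_pos[j]) ** 2 for j in range(m)))
        d_neg = math.sqrt(sum((row[j] - v_neg[j]) ** 2 for j in range(m)))
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness  # higher closeness = better alternative

weights = [0.5, 0.3, 0.2]
matrix = [[0.7, 0.4, 0.9], [0.6, 0.8, 0.5], [0.9, 0.2, 0.3]]
print([round(c, 3) for c in topsis(weights, matrix)])
```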

The VIKOR method [49] is quite similar to TOPSIS, but there are some important differences, one of which concerns the normalization process. TOPSIS uses vector normalization, where the normalized value can differ for different measurement units of a criterion, while VIKOR uses linear normalization, where the normalized value does not depend on the measurement unit of a criterion. VIKOR has been used in many real-world MCDM problems, such as mobile banking services [50], digital music service platforms [51], military airport location selection [52], concrete bridge projects [53], risk evaluation of construction projects [54], maritime transportation [55], and energy management [56].
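To make the normalization difference concrete, the sketch below contrasts the vector normalization typically used in TOPSIS with the linear (max-min) normalization typically used in VIKOR, for a single hypothetical benefit criterion.

```python
import math

# Hypothetical values of one benefit criterion across five alternatives.
column = [0.25, 0.33, 0.43, 0.55, 0.27]

# Vector normalization (TOPSIS): divide by the Euclidean norm of the column.
norm = math.sqrt(sum(x ** 2 for x in column))
vector_normalized = [x / norm for x in column]

# Linear normalization (VIKOR): distance to the best value, scaled by the
# best-worst range of the column, so the result is unit-free.
best, worst = max(column), min(column)
linear_normalized = [(best - x) / (best - worst) for x in column]

print([round(x, 3) for x in vector_normalized])
print([round(x, 3) for x in linear_normalized])
```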


5. Inclusion of credibility of evaluators in solving multicriteria problems

This section discusses how credibility can be included practically in solving MC problems. Suppose the evaluators are requested to evaluate the relative importance of the criteria using the rank-based weighting methods explained in Section 3.1. Suppose there is a panel of $p$ evaluators, and let $r_j^l$ be the rank of criterion $j$ given by evaluator $l$, where $l = 1, \ldots, p$. To include the credibility of the evaluators, a new set of values representing their different degrees of credibility is introduced. Let $u_l$ be the degree of credibility of evaluator $l$, where $0 \leq u_l \leq 1$ and $\sum_{l=1}^{p} u_l = 1$. There are two approaches [57] by which the degrees of credibility of the evaluators can be attached in finding the overall scores: the first attaches them when calculating the final criteria weights, as depicted in Figure 2, and the second attaches them when computing the overall performance of the alternatives, as depicted in Figure 3.

Figure 2.

Approach 1.

In the first approach, portrayed in Figure 2, the degrees of credibility of the evaluators are attached to the weights obtained from the ranks of the criteria by any of Eqs. (1), (2), or (3). Here there are $p$ sets of criteria weights, one per evaluator, and a single final weight for each criterion is obtained by averaging them: with equal credibility this is the simple arithmetic average, while with different credibility each evaluator's weight is multiplied by his or her degree of credibility and the products are summed (a weighted average, since the credibilities sum to one). The resulting single set of weights is then aggregated with the values of the alternatives on the corresponding criteria, as in Eq. (4), giving one set of overall performances for all $n$ alternatives.

In the second approach, the criteria weights obtained from each evaluator are kept separate, and each set of weights is aggregated with the quality values of the alternatives, giving $p$ sets of overall values. To obtain the final overall score of each alternative, the $p$ scores are averaged; when the evaluators' credibility is considered, each evaluator's score is first multiplied by his or her degree of credibility, and these credibility-weighted scores are then averaged. The ranking or sorting of the alternatives, or the selection of the best alternative, is based on these final scores. The following section provides some suggestions on how to quantify the credibility of the evaluators.

Referring to the numerical example in the Appendix, three evaluators were involved in ranking the importance of the five academic subjects, and the number of students is 10. So $r_j^l$ is the rank of academic subject $j$, with $j = 1, \ldots, 5$, given by evaluator $l$, where $l = 1, \ldots, 3$, and $n = 10$, while $u_l$ represents the degree of credibility of evaluator $l$, where $0 \leq u_l \leq 1$ and $\sum_{l=1}^{3} u_l = 1$.
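A minimal sketch of the two approaches follows, assuming rank-sum weights (Eq. (1)) and the SAW aggregation of Eq. (4); the averaging in Approach 2 mirrors the calculation used in the Appendix tables, where each evaluator's overall score is multiplied by his or her credibility before the $p$ values are averaged.

```python
def rank_sum(ranks):
    """Rank-sum weights, Eq. (1)."""
    m = len(ranks)
    return [2 * (m + 1 - r) / (m * (m + 1)) for r in ranks]

def saw(weights, row):
    """SAW score of one alternative, Eq. (4)."""
    return sum(w * x for w, x in zip(weights, row))

def approach_1(ranks_per_evaluator, credibility, decision_matrix):
    """Credibility-weighted average of the evaluators' weights, then one SAW pass."""
    weight_sets = [rank_sum(r) for r in ranks_per_evaluator]
    m = len(weight_sets[0])
    final_weights = [sum(u * ws[j] for u, ws in zip(credibility, weight_sets))
                     for j in range(m)]
    return [saw(final_weights, row) for row in decision_matrix]

def approach_2(ranks_per_evaluator, credibility, decision_matrix):
    """One SAW pass per evaluator; credibility-weighted scores averaged per alternative."""
    weight_sets = [rank_sum(r) for r in ranks_per_evaluator]
    p = len(weight_sets)
    return [sum(u * saw(ws, row) for u, ws in zip(credibility, weight_sets)) / p
            for row in decision_matrix]

# With the Appendix data, both functions take the 10 x 5 score matrix of Table 1.
ranks = [[1, 3, 4, 5, 2], [2, 3, 1, 5, 4], [2, 3, 1, 5, 4]]  # teachers 1-3 (Table 2)
credibility = [0.167, 0.333, 0.500]
```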


6. Quantification of credibility of evaluators

Credibility is synonymous with professionalism, integrity, trustworthiness, authority, and believability. One study focuses on how to assess the credibility of expert witnesses [59]. A 41-item measure was constructed based on ratings by a panel of judges, and a factor analysis showed that credibility is a product of four factors: likeability, trustworthiness, believability, and intelligence. Another study concerns the credibility of information in the digital era [58]. Credibility is said to have two main components: trustworthiness and expertise. However, the authors conclude that the relation among youth, digital media, and credibility today is sufficiently complex to resist simple explanations, and their study represents a first step toward mapping that complexity and providing a basis for future work that seeks explanations.

It can be argued that the degree of credibility of evaluators, judges, or decision-makers can be determined either subjectively or objectively: the former can be done by using a construct such as the one proposed in [59], while the latter can be based on objective, exact measures such as years of experience, salary scale, or salary amount. The quantification of the degree of credibility opens a new potential area of research, as very little research has been done, especially on finding suitable objective proxy measures of the degree of credibility.

Finding the degree of credibility subjectively requires more time and is much harder, as it involves a construct or an instrument to be used as a rating mechanism, whereas finding the degree of credibility from objective information is simpler and easier. As an illustration of how to quantify credibility objectively, suppose there are three experts whose basic salaries are in a simple ratio of 1:2:3. This ratio can be converted to 0.167:0.333:0.500, so that the sum of the credibility values equals 1; these values then represent the degrees of credibility of evaluators 1, 2, and 3, respectively. Making the degrees of credibility sum to one keeps the subsequent calculations simple and the values easy to interpret. Here, evaluator 3 is the most credible, since he or she has the highest salary among the three, and it is usual practice that those with higher expertise are paid more. The same computation can be used for years of experience or salary scale.
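As a small sketch of this conversion, the ratio values are simply divided by their sum; the salary ratio 1:2:3 is the one used in the example above.

```python
def credibility_from_ratio(ratio):
    """Normalize ratio values so the degrees of credibility sum to one."""
    total = sum(ratio)
    return [r / total for r in ratio]

print([round(u, 3) for u in credibility_from_ratio([1, 2, 3])])  # [0.167, 0.333, 0.5]
```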

The numerical example in the Appendix extends the problem of evaluating students' academic performance discussed in the Introduction by considering the credibility of the teachers who were asked to assess the relative importance of the five subjects. To incorporate the degrees of credibility of the teachers, a new set of values is introduced to represent these different degrees. The example shows two ways of including the credibility values in the calculation of the overall scores of the alternatives. As expected, the overall scores and the overall ranking differ from those obtained without considering the different credibility of the teachers. The details and the step-by-step methodology are included in the Appendix.


7. Conclusion

This chapter has provided an overview of the practical consideration of evaluators' credibility in evaluating the relative importance of criteria for some real-life multicriteria problems. The credibility of the evaluators who are involved in solving any multicriteria problem should be included in calculating the overall scores of the alternatives, or units of analysis. The chapter demonstrates how the credibility of the evaluators who participated in finding the criteria weights can be combined with the criteria weights and the criteria values of the alternatives. Rank-based criteria weighting methods are used as the illustration in the numerical example on the evaluation of students' academic performance at the end of the chapter. Other subjective criteria weighting methods can also be used, but with caution, especially at the stage of aggregating the criteria weights and criteria values, since the underpinning concepts of some aggregation methods may allow only one way of performing the aggregation. The chapter uses the simple additive weighted average method as the aggregation method, since it is very well established, although other aggregation techniques are also plausible. The chapter also suggests a few practical proxy measures of credibility, but these are still very limited, and more research should be conducted to find ways of measuring the credibility of evaluators or experts, either subjectively or objectively. Including the credibility of evaluators in solving multicriteria problems is realistic, since evaluators come from different backgrounds and levels of experience, and quantifying the evaluators' credibility, whether subjectively or objectively, opens a new line of inquiry in the group decision-making field. Furthermore, the credibility of the evaluators should also be considered in multicriteria problems in other areas, so that the results are more practical and accurate.

Appendix: A numerical example of the evaluation of students' academic performance

Mr. Zachariah is the class teacher of 10 excellent students in one of the best primary schools in the country. The 10 students have already been given their final marks in five main academic subjects by the respective subject teachers, as shown in Table 1. Mr. Zachariah must rank the students according to their performance, because these students will be given awards and recognition on their graduation day.

Student Native language English language Mathematics Science History
A1 0.25 0.34 0.12 0.36 0.45
A2 0.33 0.54 0.22 0.44 0.76
A3 0.43 0.65 0.57 0.42 0.91
A4 0.55 0.32 0.37 0.67 0.53
A5 0.27 0.66 0.57 0.82 0.61
A6 0.67 0.56 0.46 0.46 0.31
A7 0.58 0.87 0.39 0.27 0.43
A8 0.32 0.76 0.41 0.37 0.51
A9 0.91 0.36 0.47 0.45 0.45
A10 0.12 0.33 0.81 0.75 0.32

Table 1.

Ten students assessed under five academic subjects.

Suppose three experienced teachers, Edward, Mary, and Foong, were asked to evaluate the relative importance of the five academic subjects, with their degrees of credibility determined as discussed in the previous section; that is, the salary ratio of the three teachers gives credibility values of 0.167:0.333:0.500. The rank-sum technique, Eq. (1), is used to convert the rankings of importance of the academic subjects given by these three teachers into weights.

The results are given in Table 2. Column 2 displays the ranking of the criteria given by teacher 1, and column 3 shows the corresponding criteria weights computed with Eq. (1); columns 4 and 5 and columns 6 and 7 show the respective results for teachers 2 and 3. The second-to-last column of the table gives the final criteria weights when the teachers are treated as having the same credibility, computed as the simple arithmetic average of the corresponding criterion's weights. The last column gives the final weights when the different degrees of credibility are considered, following Approach 1 in Figure 2: each teacher's weight is multiplied by his or her degree of credibility and the products are summed. Note that both sets of final weights already sum to one, so no further normalization is needed.

Teacher 1 (0.167) Teacher 2 (0.333) Teacher 3 (0.500) Final weight, same credibility (SC) Final weight, different credibility (DC)
r1 w1 r2 w2 r3 w3
Native language 1 0.333 2 0.267 2 0.267 0.289 0.278
English language 3 0.200 3 0.200 3 0.200 0.200 0.200
Mathematics 4 0.133 1 0.333 1 0.333 0.267 0.300
Science 5 0.067 5 0.067 5 0.067 0.067 0.067
History 2 0.267 4 0.133 4 0.133 0.178 0.156

Table 2.

Criteria weights of five academic subjects evaluated by three teachers with the same and different credibility by using rank-sum weighting technique.
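The two final-weight columns of Table 2 can be reproduced with the short sketch below: Eq. (1) gives each teacher's weights, a simple average gives the same-credibility column, and a credibility-weighted sum gives the different-credibility column.

```python
def rank_sum(ranks):
    """Rank-sum weights, Eq. (1)."""
    m = len(ranks)
    return [2 * (m + 1 - r) / (m * (m + 1)) for r in ranks]

# Ranks of the five subjects given by teachers 1, 2, and 3 (Table 2).
teacher_ranks = [[1, 3, 4, 5, 2], [2, 3, 1, 5, 4], [2, 3, 1, 5, 4]]
credibility = [0.167, 0.333, 0.500]

weight_sets = [rank_sum(r) for r in teacher_ranks]
same_cred = [sum(ws[j] for ws in weight_sets) / 3 for j in range(5)]
diff_cred = [sum(u * ws[j] for u, ws in zip(credibility, weight_sets)) for j in range(5)]

print([round(w, 3) for w in same_cred])  # approx. [0.289, 0.2, 0.267, 0.067, 0.178]
print([round(w, 3) for w in diff_cred])  # approx. [0.278, 0.2, 0.3, 0.067, 0.156]
```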

Now, to find the overall performance of each student, consider student 1. Without consideration of the teachers' credibility in evaluating the relative importance of the academic subjects, the score is obtained simply by multiplying row 2 of Table 1 by the corresponding criteria weights in the second-to-last column of Table 2, using Eq. (4), as follows:

$$\text{Score}(A_1) = \sum_{j=1}^{5} w_j x_{1j} = 0.289(0.25) + 0.200(0.34) + 0.267(0.12) + 0.067(0.36) + 0.178(0.45) = 0.277$$
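This same-credibility calculation can be checked with a one-line weighted sum (a sketch using the figures above):

```python
weights = [0.289, 0.200, 0.267, 0.067, 0.178]  # SC weights from Table 2
student_1 = [0.25, 0.34, 0.12, 0.36, 0.45]     # row for A1 in Table 1
score = sum(w * x for w, x in zip(weights, student_1))
print(round(score, 3))  # 0.277
```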

The same process is performed to find the overall score of student 1 when the credibility of the teachers in weighting the criteria is considered, except that the weights in the last column of Table 2 are used instead:

$$\text{Score}(A_1) = \sum_{j=1}^{5} w_j x_{1j} = 0.278(0.25) + 0.200(0.34) + 0.300(0.12) + 0.067(0.36) + 0.156(0.45) = 0.244$$

Table 3 gives the overall scores and the corresponding final rankings of all students based on the average criteria weights with the same (SC) and different (DC) credibility of the teachers. The overall scores all differ between the two cases, and the rankings differ as well, particularly for ranks 8 and 9 and for ranks 4 and 5.

A1 A2 A3 A4 A5 A6 A7 A8 A9 A10
SC score 0.277 0.427 0.597 0.461 0.526 0.514 0.540 0.470 0.571 0.424
SC rank 10 8 1 7 4 5 3 6 2 9
DC score 0.244 0.408 0.592 0.439 0.518 0.540 0.550 0.462 0.561 0.422
DC rank 10 9 1 7 5 4 3 6 2 8

Table 3.

Overall scores and ranking of students with average criteria weights evaluated by teachers of the same and different credibility based on Approach 1.

Table 4 summarizes the three individual sets of overall scores for the three different teachers, without consideration of their credibility; the second-to-last column gives the average of the three overall scores for each student, and the last column gives the corresponding ranking.

Student Score (teacher 1) Score (teacher 2) Score (teacher 3) Average score Ranking
A1 0.311 0.259 0.259 0.276 10
A2 0.479 0.400 0.400 0.426 8
A3 0.620 0.584 0.584 0.596 1
A4 0.483 0.449 0.449 0.460 7
A5 0.515 0.530 0.530 0.525 4
A6 0.510 0.516 0.516 0.514 5
A7 0.552 0.534 0.534 0.540 3
A8 0.474 0.467 0.467 0.469 6
A9 0.588 0.561 0.561 0.570 2
A10 0.349 0.461 0.461 0.424 9

Table 4.

Same credibility: four different sets of overall scores and final ranking of the 10 students based on average overall scores.

Table 5 shows the three credibility-weighted overall scores, obtained by considering the credibility of the teachers who weighted the academic subjects, together with the average of these three scores. The ranking of the students is based on the average overall score in column 5 of the table. Here, Approach 2, as in Figure 3, is used to find the final overall scores of the students.

Student u1 × Score (teacher 1) u2 × Score (teacher 2) u3 × Score (teacher 3) Average score Ranking
A1 0.052 0.086 0.129 0.089 10
A2 0.080 0.133 0.200 0.138 9
A3 0.104 0.194 0.292 0.197 1
A4 0.081 0.150 0.225 0.152 7
A5 0.086 0.176 0.265 0.176 4
A6 0.085 0.172 0.258 0.172 5
A7 0.092 0.178 0.267 0.179 3
A8 0.079 0.155 0.233 0.156 6
A9 0.098 0.187 0.281 0.189 2
A10 0.058 0.153 0.230 0.147 8

Table 5.

Different credibility: four different sets of overall scores and final ranking of the 10 students based on average overall scores.

Figure 3.

Approach 2.

To make the comparison easier, Table 6 summarizes the overall scores and the corresponding rankings of the students with the same (SC) and different (DC) credibility of the teachers in calculating the academic subjects' weights, based on Approach 2.

A1 A2 A3 A4 A5 A6 A7 A8 A9 A10
SC score 0.276 0.426 0.596 0.460 0.525 0.514 0.540 0.469 0.570 0.424
SC rank 10 8 1 7 4 5 3 6 2 9
DC score 0.089 0.138 0.197 0.152 0.176 0.172 0.179 0.156 0.189 0.147
DC rank 10 9 1 7 4 5 3 6 2 8

Table 6.

Two different sets of overall scores of the students, obtained by averaging the students' overall performances, and their corresponding rankings based on Approach 2.

Although the two sets of overall scores are different, the rankings based on both sets are the same except for ranks 8 and 9. There is not much difference in the overall rankings because the MC problem considered here is only a small-scale one, with only 10 alternatives and 5 criteria, even though the two sets of overall values differ considerably. Greater differences in the rankings may appear for a larger MC problem with more alternatives and more criteria. The final ranking of the students obtained by considering the different credibility of the teachers should be selected as the practical and valid result.

References

1. Hwang CL, Paidy R, Yoon K, Masud AS. Mathematical programming with multiple objectives: A tutorial. Computers & Operations Research. 1980;7:5-31
2. Rezaei J. Best-worst multi-criteria decision-making method. Omega. 2015;53:49-57
3. Ahmad N, Kasim MM, Rajoo SSK. Supplier selection using a fuzzy multi-criteria method. International Journal of Industrial Management. 2016;2:61-71
4. Ahmad N, Kasim MM, Ibrahim H. The integration of fuzzy analytic hierarchy process and VIKOR for supplier selection. International Journal of Supply Chain Management. 2017;6(4):289-293
5. Kasim MM, Ibrahim H, Al-Bataineh MS. Multi-criteria decision-making methods for determining computer preference index. Journal of Information and Communication Technology. 2011;10:137-148
6. Mohammed HJ, Kasim MM, Shaharanee INM. Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights. In: Proceedings of the 13th IMT-GT International Conference on Mathematics, Statistics and Their Applications (ICMSA 2017); 4-7 December 2017; Malaysia. Sintok: AIP Conference Proceedings 1905. 2017. p. 040019-1-6
7. Mohammed HJ, Kasim MM, Hamadi AK, Al-Dahneem E. Evaluating collaborative and cooperative learning using MCDM method. Advanced Science Letters. 2018;24(6):4084-4088
8. Kasim MM, Abdullah SRG. Aggregating student academic achievement by simple weighted average method. Malaysian Journal of Learning and Instructions. 2013;10:119-132
9. Metzger MJ. Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology. 2007;58:2078-2091
10. Zulfiqar S, Bin Tahir S. Professionalism and credibility of assessors in enhancing educational quality. 2019;2:162-175. Available from: https://www.researchgate.net/publication/334599511_PROFESSIONALISM_AND_CREDIBILITY_OF_ASSESSORS_IN_ENHANCING_EDUCATIONAL_QUALITY [Accessed: 6 April 2020]
11. Korten D, Alfonso F. Bureaucracy and the Poor: Closing the Gap. Singapore: McGraw-Hill; 1981
12. Ma J, Fan Z, Huang L. A subjective and objective integrated approach to determine attribute weights. European Journal of Operational Research. 1999;112:397-404
13. Ray AM. On the measurement of certain aspects of social development. Social Indicators Research. 1989;21:35-92
14. Diakoulaki D, Koumoutos N. Cardinal ranking of alternative actions: Extension of the PROMETHEE method. European Journal of Operational Research. 1991;53:337-347
15. Hwang CL, Yoon K. Multiple Attribute Decision-Making: Methods and Applications. Berlin: Springer; 1981
16. Triantaphyllou E. Multi Criteria Decision Making: A Comparative Study. London: Kluwer Academic Publisher; 2000
17. Zeleny M. Multiple Criteria Decision Making. New York: McGraw-Hill Book Company; 1982
18. Barron FH, Barrett BE. Decision quality using ranked attribute weights. Management Science. 1996;42(11):1515-1523
19. Roberts R, Goodwin P. Weight approximation in multi-attribute decision models. Journal of Multi-Criteria Decision Analysis. 2002;11:291-303
20. Desa NHM, Jemain AA, Kasim MM. Construction of a composite hospital admission index in Klang Valley, Malaysia by considering the aggregated weights of criteria. Sains Malaysiana. 2015;44(2):239-247
21. Stillwell WG, Seaver DA, Edwards W. A comparison of weight approximation techniques in multi-attribute utility decision making. Organizational Behavior and Human Performance. 1981;28(1):62-77
22. Jia J, Fischer GW, Dyer JS. Attribute weighting methods and decision quality in the presence of response error: A simulation study. Journal of Behavioral Decision Making. 1998;11(2):85-105
23. Arbel A. Approximate articulation of preference and priority derivation. European Journal of Operational Research. 1989;43:317-326
24. Ahn BS, Park KS. Comparing methods for multi-attribute decision making with ordinal weights. Computers & Operations Research. 2008;35(5):1660-1670
25. Roszkowska W. Rank ordering criteria weighting methods—A comparative overview. Optimum Studia Ekonomiczne NR. 2013;5(65):14-33
26. Wang J, Zionts S. Using ordinal data to estimate cardinal values. Journal of Multi-Criteria Decision Analysis. 2015;22(3-4):185-196
27. Saaty TL. How to make decision: The analytical hierarchy process. Journal of Operational Research Society. 1990;48:9-26
28. Mohammed HJ, Kasim MM, Shaharanee INM. Evaluating of flipped classroom learning using analytic hierarchy process technique. International Journal of Trend in Research and Development (IJTRD). 2017;4(2):443-446
29. Von Winterfeldt D, Edwards W. Decision Analysis and Behavioral Research. Cambridge: University Press; 1986
30. Edwards W. How to use multi attribute utility measurement for social decision making. IEEE Transactions on Systems, Man, and Cybernetics. 1977:326-340
31. Hajkowicz SA, McDonald GT, Smith PN. An evaluation of multiple objective decision support weighting techniques in natural resource management. Journal of Environmental Planning and Management. 2000;43(4):505-518
32. Chang YC, Hsu CH, Williams G, Pan ML. Low cost carriers' destination selection using a Delphi method panel. Tourism Management. 2008;5(29):898-908
33. Saaty TL. The Analytic Hierarchy Process. New York: McGraw-Hill; 1980
34. Saaty RW. The analytic hierarchy process—What it is and how it is used. Mathematical Modelling. 1987;9(3-5):161-176
35. Still BG, May AD, Bristow AL. The assessment of transport impacts on land use: Practical uses in strategic planning. Transport Policy. 1999;6(2):83-98
36. Zardani NH, Ahmed K, Shirazi SM, Yusob ZB. Weighting Methods and Their Effects on Multi-Criteria Decision-Making Model Outcomes in Water Resources Management. Cham, Heidelberg, New York, Dordrecht, London: Springer; 2015
37. MacCrimmon KR. Decision-Making Among Multiple-Attribute Alternatives: A Survey and Consolidated Approach. Santa Monica: RAND Corporation; 1968
38. Chen SJ, Hwang CL. Fuzzy multiple attribute decision making methods. In: Fuzzy Multiple Attribute Decision Making. Berlin, Heidelberg: Springer; 1992
39. Wang LE, Liu HC, Quan MY. Evaluating the risk of failure modes with a hybrid MCDM model under interval-valued intuitionistic fuzzy environments. Computers & Industrial Engineering. 2016;102:175-185
40. Kalibatas D, Kovaiti V. Selecting the most effective alternative of waterproofing membranes for multifunctional inverted flat roofs. Journal of Civil Engineering and Management. 2017;23(5):650-660
41. Muddineni VP, Sandepud SR, Bonala AK. Improved weighting factor selection for predictive torque control of induction motor drive based on a simple additive weighting method. Electric Power Components and Systems. 2017;45(13):1450-1462
42. Abdullah L, Adawiyah CR. Simple additive weighting methods of multi criteria decision making and applications: A decade review. International Journal of Information Processing and Management. 2014;5(1):39-49
43. Opricovic S. Multicriteria Optimization of Civil Engineering Systems. Belgrade: Faculty of Civil Engineering; 1998
44. Tzeng GH, Huang JJ. Multiple Attribute Decision-Making: Methods and Applications. New York: Chapman and Hall/CRC; 2011
45. Mao N, Song M, Pan D, Deng S. Comparative studies on using RSM and TOPSIS methods to optimize residential air conditioning systems. Energy. 2018;144:98-109
46. Chen W, Shen Y, Wang Y. Evaluation of economic transformation and upgrading of resource-based cities in Shaanxi province based on an improved TOPSIS method. Sustainable Cities and Society. 2018;37:232-240
47. Shen F, Ma X, Li Z, Xu Z, Cai D. An extended intuitionistic fuzzy TOPSIS method based on a new distance measure with an application to credit risk evaluation. Information Sciences. 2018;428:105-119
48. Polat G, Eray E, Bingo BN. An integrated fuzzy MCGDM approach for supplier selection problem. Journal of Civil Engineering and Management. 2017;23(7):926-942
49. Opricovic S, Tzeng GH. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. European Journal of Operational Research. 2004;156(2):445-455
50. Lu MT, Tzeng GH, Cheng H, Hsu CC. Exploring mobile banking services for user behavior in intention adoption: Using new hybrid MADM model. Service Business. 2014;9(3):541-563
51. Lin CL, Shih YH, Tzeng GH, Yu HC. A service selection model for digital music platforms using a hybrid MCDM approach. Applied Soft Computing. 2016;48:385-403
52. Sennaroglu B, Varlik Celebi G. A military airport location selection by AHP integrated PROMETHEE and VIKOR methods. Transportation Research Part D: Transport and Environment. 2018;59:160-173
53. Gao Z, Liang RY, Xuan T. VIKOR method for ranking concrete bridge repair projects with target-based criteria. Results in Engineering. 2019;3:100018
54. Wang L, Zhang HY, Wang JQ, Li L. Picture fuzzy normalized projection based VIKOR method for the risk evaluation of construction project. Applied Soft Computing. 2018;64:216-226
55. Soner O, Celik E, Akyuz E. Application of AHP and VIKOR methods under interval type 2 fuzzy environment in maritime transportation. Ocean Engineering. 2017;129:107-116
56. Sakthivel G, Sivakumar R, Saravanan N, Ikua BW. A decision support system to evaluate the optimum fuel blend in an IC engine to enhance the energy efficiency and energy management. Energy. 2017;140:566-583
57. Kasim MK, Jemain AA. Involvement of panel of evaluators with different credibility in aggregating subjective rank-based values. Sains Malaysiana. 2013;42(5):667-672
58. Flanagin AJ, Metzger MJ. Digital media and youth: Unparalleled opportunity and unprecedented responsibility. In: Metzger M, Flanagin A, editors. Digital Media, Youth, and Credibility. Cambridge: The MIT Press; 2008
59. Brodsky SL, Griffin MP, Cramer RJ. The witness credibility scale: An outcome measure for expert witness research. Behavioral Sciences & the Law. 2010;28(6):892-907
