Measuring in Weighted Environments: Moving from Metric to Order Topology (Knowing When Close Really Means Close)

This chapter addresses the problem of measuring closeness in weighted environments (decision-making environments) and shows the importance of having a trustworthy cardinal measure of proximity in such environments. A weighted environment is a nonisotropic structure where different directions (axes) may have different importance (weight); hence, privileged directions exist. In this kind of structure, it is very important to have a reliable cardinal index that can state the closeness or compatibility of one individual's set of measures with respect to the group or to any other individual. Common examples of this structure are the interaction between factors in a decision-making process (system-values interaction), matching profiles, pattern recognition, and any situation that involves a process of measurement with qualitative variables.


Introduction
This chapter addresses the problem of measuring closeness in weighted environments (decision-making environments), using the concept of compatibility of priority vectors and value systems.
When using the concept of closeness, the question that immediately comes to mind is what it means to be close (when close really means close). Thus, when measuring closeness or proximity, we should add a point of comparison (a threshold) that makes it possible to compare or decide whether our positions, value systems, or priorities are really close.
For our purposes, compatibility is defined as the proximity or closeness between vectors within a weighted space [1].
We present a proposition for a compatibility index that can measure closeness in a weighted environment and can therefore support: pattern recognition; medical diagnosis (measuring the degree of closeness between disease and diagnosis profiles); buyer-seller matching (measuring the degree of closeness between house buyer and seller profiles); employment matching (measuring the degree of closeness between a person's profile and the desired position profile); curricula network design; conflict resolution (measuring the closeness of two different value systems, the ways of thinking, by identifying and measuring the discrepancies); and, in general, measuring the degree of compatibility between any priority vectors on a cardinal measurement basis (order topology) [1,2].
The chapter first presents some theory behind the concepts of distance (measurement) and closeness in different cases, as well as an interesting statistical view of distance. Then, it presents the concepts of scales, compatibility, the compatibility index G, and some analogies between G and the distance concept. Next, it shows a comparison with other compatibility indices present in the literature, reflecting the advantages of G over the others (especially within weighted environments). Then, it presents a necessary threshold, which allows establishing "when close really means close" in weighted environments.
Finally, three relatively simple examples are developed, each one presenting a different application of the compatibility index G: one questioning whether the order of choice should be a must when deciding if two rankings are compatible, one for quality testing (testing Saaty's consistency index through the compatibility index G), and one for measuring the comparability of two different rules of measurement (two different points of view).

Literature review
In metric topology [2], the distance function D(a,b) is used to assess the closeness of two points a, b. It is a real positive function that keeps three basic properties:

D(a,b) ≥ 0, with D(a,b) = 0 if and only if a = b (nonnegativity)

D(a,b) = D(b,a) (symmetry)

D(a,b) + D(b,c) ≥ D(a,c) (triangular inequality)

The general function of distance (the Minkowski norm of order k) used to calculate the separation between two points is given as follows:

D(a,b) = (Σ_i abs(a_i − b_i)^k)^(1/k)

For k = 1, D(a,b) = Σ_i abs(a_i − b_i). Norm1, absolute norm, or path norm: this norm measures the distance from a to b "walking" over the path, coordinate by coordinate, in one line dimension.

For k = 2, D(a,b) = (Σ_i (a_i − b_i)^2)^(1/2). Norm2 or Euclidean norm: this norm measures the distance from a to b within a plane, taking the shortest path (the straight line).

For k = +∞, D(a,b) = Max_i (abs(a_i − b_i)). Norm ∞ or Norm Max: this norm measures the distance from a to b taking the maximum coordinate difference among all the coordinates.
In the field of statistics, there is an interesting (and beautiful) case of distance calculation known as the Mahalanobis distance [3], which meets the metric properties shown before. This distance takes into consideration statistical parameters such as deviation and covariance (which can be assimilated to the concepts of weight and dependence in the Analytic Hierarchy Process/Analytic Network Process (AHP/ANP) world). Its formal presentation is:

D(X,Y) = [(X − Y)^T S^(−1) (X − Y)]^(1/2), with S the matrix of covariance between X, Y.
But, for a simpler case (without dependence), this formula can be written as:

D(X,Y) = [Σ_i (x_i − y_i)^2 / s_i^2]^(1/2), with S^(−1) the diagonal matrix built from the standard deviations of the variables X, Y.
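As an illustration, the following Python sketch computes the three norms and the simplified (diagonal) Mahalanobis distance; the points and deviations used are illustrative, not taken from the chapter.

```python
import math

def minkowski(a, b, k):
    """General Minkowski distance of order k between two points."""
    return sum(abs(x - y) ** k for x, y in zip(a, b)) ** (1.0 / k)

def norm_max(a, b):
    """Norm-infinity: the maximum coordinate difference."""
    return max(abs(x - y) for x, y in zip(a, b))

def mahalanobis_diag(x, y, s):
    """Simplified Mahalanobis distance with diagonal covariance:
    each squared difference is divided by the variance s_i**2,
    so variables with larger deviation weigh less in the distance."""
    return math.sqrt(sum((xi - yi) ** 2 / si ** 2
                         for xi, yi, si in zip(x, y, s)))

a, b = (0.0, 0.0), (3.0, 4.0)
print(minkowski(a, b, 1))   # Norm1: 7.0
print(minkowski(a, b, 2))   # Norm2: 5.0
print(norm_max(a, b))       # Norm max: 4.0
print(mahalanobis_diag(a, b, (1.0, 2.0)))  # second axis "matters" less
```

Note how doubling the deviation of the second variable shrinks its contribution, exactly the behavior questioned in the next paragraph.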
It is interesting to see that the importance of a variable (when calculating distance) depends on its deviation value (the bigger the deviation, the smaller the importance); that is, the importance of the variable does not depend on the variable itself, but only on the level of certainty about the variable (is this statement always true?).
We bring this approach into the discussion because factors such as weight and dependence are at the basis of the AHP and ANP structures [4,5]. But instead of having to understand and deal with probabilities and statistics (which, by the way, are not easy to build and later interpret), the idea here is to apply the natural way of thinking of human beings, which is based more on priorities than on probabilities. Indeed, we can manage the same information in a more comprehensive, complete, and easy-to-explain form by combining AHP/ANP with the compatibility index G and working with priorities, avoiding the need to collect big databases or to understand and interpret complex statistical functions. Thereby, the Multicriteria Decision Making (MCDM) approach through the AHP/ANP method gives a very nice tool for the investigation and treatment of the knowledge and experience that experts possess in different fields, while staying within the decision-making domain (order topology domain) and avoiding building huge and costly databases where the knowledge about individual behavior is lost.

Hypotheses/objectives
In order topology, measurement deals with dominance between preferences (intensity of preference); for instance, D(a,b) = 3 means that the dominance or intensity of preference of "a" over "b" is equal to 3, or that a is three times more preferred than b. When talking about preferences, a relative absolute ratio scale is applied. Relative: because a priority is a number created as a proportion of a total (percent or relative to the total) and has no need for an origin or predefined zero in the scale. Absolute: because it has no dimension, since it is a relationship between two numbers of the same scale, leaving the final number with no unit. Ratio: because it is built on a proportional type of scale (6 kg/3 kg = 2) [2].
So, making a general analogy between the two topologies, one might say that "Metric Topology is to Distance as Order Topology is to Intensity" [2,6].
An equivalent concept of distance is presented to make a parallel with the three properties of distance in metric topology [1,2]. This is applied in the order topology domain, considering a compatibility function (Eq. (1)) similar to the distance function, but defined over vectors instead of real numbers.
Consideration: A, B, and C are priority vectors of positive coordinates, with Σ_i a_i = Σ_i b_i = Σ_i c_i = 1.

G(A,B) is the compatibility function, expressed as:

G(A,B) = Σ_i [(a_i + b_i)/2] × [min(a_i, b_i)/max(a_i, b_i)]    (1)

This function presents the following properties [1, 2, 5, 6]:

1. Nonnegativity and boundedness: the compatibility function G returns a nonnegative real number that lies in the 0-1 range.

2. Symmetry: the compatibility measured from A to B is equal to the compatibility measured from B to A. This is easy to prove, just interchanging A for B and B for A in the compatibility function G.

3. The fact that A is compatible with B and B is compatible with C does not imply that A is necessarily compatible with C.

For property 3, it is easy to prove that if A, B, and C are compatible priority vectors (i.e., 0.9 ≤ G_i ≤ 1.0 for A, B, and C), then property 3 is always satisfied. But this property is also satisfied under the more relaxed (and interesting) condition where only two of the three vectors are compatible. For instance, if A is compatible with B (G(A,B) ≥ 0.9) and A is compatible with C (G(A,C) ≥ 0.9), or some other combination of A, B, and C, then condition 3 is also satisfied. This more relaxed condition allows compatible and noncompatible vectors to be combined while property 3 is still satisfied.
This situation can be viewed geometrically in the next figure. We may also define the incompatibility function as the arithmetic complement of compatibility; thus, incompatibility is equivalent to 1 − G. Incidentally, the incompatibility concept is closer to the idea of distance, since the greater the distance, the greater the incompatibility [1,2,5,6].
Two simple examples of this parallel between D(X,Y) and G(X,Y) are given. But first, to make the D and G functions comparable, the absolute distance D must be transformed into relative terms (a percentage value), since the priority vectors used by the G function are normalized vectors. The maximum possible value for D_1 (Norm1) is 2 and for D_2 (Norm2) is √2; the ratios are performed with respect to these maximum possible values, obtaining D in relative terms as a percentage of the maximum value. Table 2 shows the results of applying D_1, D_2, and 1 − G (incompatibility = 1 − compatibility). The trend of the results for the D and G functions is the same in both cases: when increasing the distance (making the vectors more perpendicular) and when decreasing the distance (making the vectors more parallel). This is an interesting parallel between these concepts and their trends, considering that different concepts (distance and incompatibility, in different ratio scales) are being used [2].
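Since Table 2 is not reproduced here, the common trend can be illustrated with hypothetical normalized vectors (our own choice), computing the relative distances and 1 − G side by side; the G expression assumed is the weighted min/max form used throughout the chapter.

```python
import math

def g(a, b):
    """Compatibility index G: per-coordinate min/max ratio weighted by the
    average priority of that coordinate."""
    return sum(min(x, y) / max(x, y) * (x + y) / 2 for x, y in zip(a, b))

def d1_rel(a, b):
    """Norm1 as a fraction of its maximum (2 for normalized vectors)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 2

def d2_rel(a, b):
    """Norm2 as a fraction of its maximum (sqrt(2) for normalized vectors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / math.sqrt(2)

a = (0.5, 0.5)
near, far = (0.6, 0.4), (0.9, 0.1)   # nearly parallel vs nearly perpendicular
for b in (near, far):
    print(round(d1_rel(a, b), 3), round(d2_rel(a, b), 3), round(1 - g(a, b), 3))
```

All three measures grow together as the second vector rotates away from the first, which is the parallel of trends the text describes.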

Research design/methodology
One way to calculate compatibility in a general form is by using the inner (scalar or dot) vector product, defined as A·B = |A||B|cos α. This expression of the dot product is preferable to the Cartesian version, since it highlights the relevance of the projection concept represented by cos α, and also because, when working with normalized vectors, the expression A·B becomes equal to cos α, showing that the projection is the most relevant part of the dot product.

Definitions and conditions [2]
Assuming:

A. Two normalized vectors are close (compatible) when the angle (α) formed by both vectors on the hyperplane is about 0°, or cos α is near 1. From a geometric point of view, they are represented by parallel or nearly parallel vectors. In this case, they are defined as compatible vectors.

B. Two normalized vectors are not close (not compatible) when the angle (α) formed by both vectors on the hyperplane is near 90°, or cos α is near 0. From a geometric point of view, they are represented by perpendicular or nearly perpendicular vectors. In this case, they are defined as noncompatible vectors.
Figure 3 [1,2] shows the geometric interpretation of vector compatibility. Therefore, there is an operative way to measure compatibility in terms of vector projection. This interpretation of the dot product is very useful for the purpose of finding a compatibility measurement of two priority vectors in the domain of order topology. Since the space is weighted (we are working in a weighted environment), it is also necessary to weight each projection (each cos α_i) and to take into account the changes of angle and weight coordinate by coordinate (coordinate "i" may have a different projection and weight than coordinate "i + 1"). Thus, the final formula to assess a general compatibility index of two consistent or near-consistent vectors A and B, point to point throughout both profiles, is [1, 2, 5-7]:

G(A,B) = Σ_i [(a_i + b_i)/2] × [min(a_i, b_i)/max(a_i, b_i)]

that is, the general compatibility index of DM1 with respect to DM2.
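A minimal Python sketch of this formula, checking its basic behavior numerically (the two sample vectors are illustrative):

```python
def g(a, b):
    """G(A,B) = sum_i ((a_i + b_i)/2) * (min(a_i,b_i)/max(a_i,b_i)).
    Both inputs are normalized priority vectors (coordinates sum to 1)."""
    return sum((x + y) / 2 * (min(x, y) / max(x, y)) for x, y in zip(a, b))

A = (0.6, 0.3, 0.1)
B = (0.5, 0.3, 0.2)
print(round(g(A, A), 10))    # identical vectors -> 1.0 (100% compatible)
print(g(A, B) == g(B, A))    # symmetry: True
print(round(g(A, B), 4))     # a value strictly between 0 and 1
```

The min/max ratio plays the role of the projection (how aligned coordinate i is), while (a_i + b_i)/2 weights that projection by the importance of the coordinate.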
This can be shown graphically in Figure 4 [1,2]. The G function is a transformation function that takes positive real numbers in the range [0, 1], coming from the normalized vectors A and B, and returns a positive real number in the same range. This transformation has two particularly good properties [2]:

1. It is bounded in 0-1 (presenting no singularity or divergence).

2. The outcome is very easily interpreted as a percentage of compatibility, with 0 = 0% (total incompatibility) and 1 = 100% (total compatibility).
It is also possible to define a threshold for the compatibility index at 0.90 [1,2]. Thus, when two vectors have an index of compatibility equal to or greater than 90%, they should be considered compatible vectors.
Since incompatibility = 1 − compatibility, the threshold for tolerable incompatibility is 10%. (This threshold is equivalent to the consistency index in AHP, which tolerates a maximum of 10% of inconsistency in the pair comparison matrix.) Next, a simple application example for 2D vectors A and B is presented.

The first case is for equal vectors
The result, being 100% compatible or 1 − 1 = 0% incompatible, is expected because A and B are the same vectors.

Comparing G with other compatibility indices in the literature
It should be noted what happens when using other compatibility index formulas. For instance, taking the classic dot product of the coordinate ratios and dividing the result by "n" ("n" is the vector dimension) is like taking the average instead of weighting to evaluate the incompatibility. The result of this operation applied to the vectors A = (0.1, 0.9) and B = (0.9, 0.1) is: (0.1/0.9 + 0.9/0.1)/2 = 4.55 (or 355% of incompatibility or deviation) [1,2,6,7].
Doing the same in a matrix environment, that is, forming the Hadamard product [4,5] and dividing by n² instead of "n" (again taking a double average instead of weighting) to assess the deviation, the result is 83.012/2² = 20.75 (20.75 − 1 = 19.75, or 1975% of incompatibility). Both situations present a singularity problem (unbounded functions), so they do not seem to be adequate compatibility indices.
Another formula to evaluate incompatibility is David Hilbert's formula [8,9], whose expression is:

log[Max_i(a_i/b_i) × Max_i(b_i/a_i)]

When applied to the same vectors, the formula returns the value of 1.908 (191% of incompatibility). Thus, it also presents a singularity problem (results above the 100% value are tough to interpret).
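The three divergent indices and G can be reproduced for A = (0.1, 0.9) and B = (0.9, 0.1), our reading of the example vectors (the Hadamard sum 83.012 and the value 1.908 both check out with them; base-10 log is assumed for Hilbert's formula):

```python
import math

A, B = (0.1, 0.9), (0.9, 0.1)
n = len(A)

# Dot-product style: average of the coordinate ratios a_i/b_i.
dot_index = sum(a / b for a, b in zip(A, B)) / n

# Hadamard (matrix) style: sum over (a_i/a_j)*(b_j/b_i), averaged by n**2.
hadamard_sum = sum((A[i] / A[j]) * (B[j] / B[i])
                   for i in range(n) for j in range(n))
hadamard_index = hadamard_sum / n ** 2

# Hilbert's projective formula (log base 10 reproduces the 1.908 value).
hilbert = math.log10(max(a / b for a, b in zip(A, B)) *
                     max(b / a for a, b in zip(A, B)))

# Compatibility index G stays bounded in [0, 1].
g = sum((a + b) / 2 * min(a, b) / max(a, b) for a, b in zip(A, B))

print(round(dot_index, 2))       # 4.56  (diverges)
print(round(hadamard_index, 2))  # 20.75 (diverges even faster)
print(round(hilbert, 3))         # 1.908 (above 100%, hard to interpret)
print(round(g, 3))               # 0.111 (11.1% compatibility, well bounded)
```

Only G returns a value that can still be read as a percentage of compatibility.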
Note: This research did not include compatibility indices based on ordinal scales, because the ordinal scale is not able to respond adequately to the richer and more complex concept of proximity (distance). In fact, order topology shows clearly that it is the intensity (not the ranking) that really matters when establishing proximity between vectors.
A simple example is shown in Figure 5 [10].

Comparing performance of G with other compatibility indices
Suppose the next situation: we have an AHP priority vector and an actual priority vector for the relative electric consumption of household appliances.
The AHP priority vector is the one obtained from the 7 × 7 pair comparison matrix of the alternatives, while the actual priority vector is obtained using the direct relation between the consumption of each alternative and the total.
The first three formulas present a divergence process (singularities) as vector A deviates from vector B (becoming more and more incompatible), that is, as the vectors become closer to perpendicular.
The G function becomes the only formula capable of correctly assessing the compatibility without falling into a singularity (divergence) or remaining immobilized, as a standard distance calculation does when the absolute difference between the coordinates is kept constant. In fact, a complete study of the behavior of different compatibility indices is presented in Table 4 [2,6,7].
The study was made initially for a 2D space (vectors of two coordinates), since the idea was to perform a sensitivity analysis and observe the patterns of behavior of the different compatibility indices in two special situations. Six formulas were tested on a 2D vector, for seven cases and two different trends (a parallel trend and a perpendicular trend). The results are shown in Figure 7.

Generating a threshold for compatibility index G
To answer the initial question (when close really means close), it is first necessary to have a reliable index of compatibility. However, that is not sufficient; a second condition is also necessary: a limit or threshold for the index.

Applications and Theory of Analytic Hierarchy Process -Decision Making for Strategic Decisions
For useful purposes, it is necessary to have a lower limiting value (minimum threshold) that indicates when two priority vectors are compatible, or close to being compatible, in order to define precisely when close really means close. There are four different ways to define a minimum threshold for compatibility [2]:

(1) Considering that compatibility ranges between 0 and 100% (0 < cos α < 1), with 100% being the case of total compatibility (represented by parallel vectors), it is reasonable to define a value of 10% of tolerance (1/10th of 100%) as the maximum threshold of incompatibility for considering two vectors compatible (which means a minimum of 90% of compatibility). This explanation is based on the idea of one order of magnitude as an admissible perturbation for measurement. This lower bound is also based on the accepted 10% used in AHP for the consistency index. In the comparison matrix of AHP, the 10% limit of tolerated inconsistency comes from the consistency ratio (CR), obtained by comparing the consistency index to a random index (CR = CI/RI), which in general has to be less than or equal to 10% (except for the 3 × 3 and 4 × 4 matrix cases). This says that the farther CI is from RI (the random index response), the better the CR. It is interesting to recall that CR is built as a comparison against the statistical analysis of RI (this idea will be reviewed in the last case analysis).
(2) The compatibility index is related to a topological analysis, since compatibility is related to measuring closeness in weighted environments (weighted spaces). Figure 8 presents a sequence of two-, three-, four-, and five-dimensional vectors. The first (initial) vector is obtained as an isotropic, flat-space situation, that is, with equal values (1/n) in each coordinate (no privileged direction in the space); the second one is obtained by perturbing (adding or subtracting) 10% on each coordinate, creating "small crisps" or little privileged directions. Then, the incompatibility index is calculated with the five different formulas (the reason to use all the formulas is that we are working on a near-flat space with no singularities, where every formula works relatively well).
When looking at the outputs for incompatibility, it is possible to observe a good response from every formula (equal to or less than 10%), with G and Norm1 at around 10% and 5% as upper bounds in every case.
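A sketch of this perturbation experiment, computing only the G incompatibility (we assume an alternating +10%/−10% perturbation, which keeps the vector normalized for even dimensions):

```python
def g(a, b):
    """Compatibility index G (weighted min/max form)."""
    return sum((x + y) / 2 * min(x, y) / max(x, y) for x, y in zip(a, b))

for n in (2, 4, 6):          # even n keeps the perturbed vector summing to 1
    flat = [1.0 / n] * n                      # isotropic flat-space vector
    crisp = [v * (1.1 if i % 2 == 0 else 0.9)  # +/-10% "small crisps"
             for i, v in enumerate(flat)]
    print(n, round(1 - g(flat, crisp), 4))    # incompatibility near 10%
```

The incompatibility stays just under 10% in every dimension, consistent with the 10% tolerance argument above.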
(3) In Figure 9, a simple test was run on an Excel spreadsheet, using the common area example of AHP, where the result (the importance of the areas of the figures) can be calculated precisely with the typical geometric formulas and then normalized to obtain the exact priorities as a function of the size of the areas. In this way, it is possible to have a reference point for the element values (the right coordinates for the actual area vector).
The next step is to perturb the actual area values by ±10%, producing a new vector of areas. Finally, the G function is applied over these two vectors (actual and perturbed) to measure their compatibility, obtaining a compatibility index of 91.92% (or 8.08% of incompatibility). This result is very close to the standard error deviation calculated as Σ abs(perturbed − actual)/actual = 10%, which shows that 90% might represent a good threshold, considering that the difference between both outputs is related to the significant fact that these numbers are not just numbers but weights.

(4) The last way to analyze the correctness of the 90% threshold consists of working with a random function, filling the area vector with random values, and calculating G for every case. The goal is to generate an average G for the case of fully random values for the areas ("fully random" means without keeping any previous order among the areas, such as figure A being clearly bigger than figure B, and so on), and then producing random values again, but this time keeping the correct order among the figures (imitating the behavior of a rational DM), and once again generating an average G for this case. Then, both results are compared against the actual values.
The average G over 15 experiments in the first case (keeping no order) was around 50% of compatibility, and 78% for the second case (keeping the order among the five figures). Both results show that the 90% limit might be a good threshold: in the first case, the ratio between the threshold and the fully random G is 1.8 (0.90 over 0.50), almost twice as large, keeping the 0.90 compatibility threshold far from random responses. In the second case (threshold over sorted figures), the ratio is much closer (as expected), with a value of 1.16 (0.90 over 0.78). This says that order may help to improve compatibility, but it is not enough: it is necessary to consider the weights (not just the preference but the intensity of the preference), which are related to the values of the elements that belong to the vector, as well as the angles of both vectors point to point (geometrically viewed as profiles).
Of course, this test should be carried out over a larger number of experiments to obtain a more reliable response. A second test conducted with 225 experiments (15 people making 15 experiments each) showed more or less the same initial results for the average G value in both cases, with and without order (±0.78 and ±0.50).
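The random experiment can be imitated in a few lines. The "actual" area vector below is hypothetical (the chapter's exact figure areas are not reproduced here), so only the qualitative gap between the ordered and unordered averages should be read from the output:

```python
import random

def g(a, b):
    """Compatibility index G (weighted min/max form)."""
    return sum((x + y) / 2 * min(x, y) / max(x, y) for x, y in zip(a, b))

random.seed(42)
actual = [0.45, 0.25, 0.15, 0.10, 0.05]   # hypothetical ordered area priorities

def random_vector(keep_order=False):
    """A random normalized 5D vector; optionally sorted to imitate a
    rational DM who at least keeps the correct ranking of the figures."""
    v = [random.random() for _ in range(5)]
    if keep_order:
        v.sort(reverse=True)
    s = sum(v)
    return [x / s for x in v]

trials = 1000
avg_free = sum(g(actual, random_vector()) for _ in range(trials)) / trials
avg_order = sum(g(actual, random_vector(keep_order=True))
                for _ in range(trials)) / trials
print(round(avg_free, 2), round(avg_order, 2))
```

As in the chapter's experiments, keeping the ranking raises the average G noticeably, yet it still stays below the 0.90 threshold: order alone is not enough.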
Next, Table 3 provides the meaning of the ranges of compatibility in terms of the index G, with a description of each. Finally, another interesting way to illustrate 90% as a good threshold for compatibility is the pattern recognition issue. Compatibility is the way to measure whether a set of data (a vector of priorities or a profile of behavior) corresponds to a recognized pattern or not. For instance, in the medical pattern recognition application, the diagnosis profile (the pattern) is built with the intensity values of the signs and symptoms that correctly describe the disease, and is then compared with the signs and symptoms gathered from the patient. When these two profiles reached about 90% or more of matching, the physician was confident to say that the patient had the disease.
When the matching between those profiles was 85-90%, the physicians in general agreed with the diagnoses offered by the software; but when the G value was below 85% (between 79 and 84%), the doctor sometimes found it difficult to recognize whether the new signs and symptoms (the new patient's profile) corresponded to the disease initially presented (nonconclusive information). Finally, when the matching value (the G index) was below 75%, the physician was no longer able to clearly recognize in the patient's profile the disease initially offered.
Notice: the new profiles were built artificially, changing some values of the signs and symptoms in an imaginary patient profile in order to achieve matching values of 90%, 85%, 80%, and so on. The intention was to evaluate when an experienced doctor changes his perception (mostly based on his pattern recognition ability).
Thus, two vectors may be considered compatible (similar or matching patterns) with great certainty or confidence when G is greater than or equal to 90%. Also, values between 85% and 90% in general have a good chance of being correct (a good level of certainty or approximation).

Is the order of choice a must?
We usually say that, facing the same decision problem, two compatible persons should make similar decisions. But what do we mean when we say "two compatible people should make similar decisions" [6,7]?
Does it mean that they should make the same choice?
Take a look at the following case: P3 is an extreme person; thus, his intensity of preference is 5-95 for candidate B.
Is P3 really more compatible with P2 than with P1, just because P3 makes the same choice as P2 (both have the same order of choice, voting for candidate B)? It seems that the order of choice is not the complete or final answer.
On the other hand, we know that in order topology the metric of decision means intensity of choice (dominance of A over B). So, compatibility is not related only to the simple order of choice, but to something more complex and more systemic, which is related to the intensity of choice.
Let us see the next numerical example (Figure 10). Suppose three people having equal and different orders of choice and their related priority vectors.
As we can see from Figure 10, the order of P1 is the same as the order of P3 and the inverse of P2.
G(P1;P3) = 0.77 (<90%), which implies that P1 and P3 have noncompatible choices (moderate to low compatibility), even though they share the same order of choice. Meanwhile, P1 and P2 turn out to be compatible people, a very interesting result considering that P2 has a totally inverted order of choice compared with P1.

So, P1 and P2, with inverted orders of choice, are compatible people, while P1 and P3, with the same order of choice, are not.
Hence, it is very important to be able to measure the degree of compatibility (alignment) in a reliable way.
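Since Figure 10's exact vectors are not reproduced here, hypothetical priority vectors (our own) show the same effect: an inverted ranking with close intensities can be compatible, while an identical ranking with extreme intensities is not.

```python
def g(a, b):
    """Compatibility index G (weighted min/max form)."""
    return sum((x + y) / 2 * min(x, y) / max(x, y) for x, y in zip(a, b))

P1 = (0.35, 0.33, 0.32)   # order A > B > C, mild intensities
P2 = (0.32, 0.33, 0.35)   # inverted order (C > B > A), but very close values
P3 = (0.80, 0.15, 0.05)   # same order as P1, extreme intensities

print(round(g(P1, P2), 3))  # ~0.943: compatible despite the inverted ranking
print(round(g(P1, P3), 3))  # ~0.390: incompatible despite the same ranking
```

This is the chapter's point in miniature: intensity, not ranking, drives compatibility.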

Mixing consistency and compatibility indices in a metric quality test drive
A different and interesting application of G is possible when it is used to check the quality of a metric.
When it is possible to compare a metric obtained by some method against the expected or actual metric, the compatibility index G represents a great tool to test and verify the quality of the created metric.
Suppose (for instance) that we want to measure quality in the following simple example.

The problem (the criticism)
Consider a hypothetical problem (a criticism made by some critic) about the quality of the consistency index in pair comparison matrices (Saaty's index) [4,5].
The hypothetical critic says: the index of consistency (Saaty's index) is wrong, since it lets pass values (comparisons) that are not acceptable by common sense.
Suppose three equal bars of the same length, like the ones in Figure 11. Of course, the correct comparison matrix for this situation is the consistent one, where every comparison equals 1. The obvious (correct) priority vector w is: (1/3, 1/3, 1/3), with 100% of consistency (CR = 0).
Suppose now that (due to some visualization mistake) a new appreciation of the bars is recorded. The new (perturbed) priority vector w* is (0.4126, 0.3275, 0.2599), with CR = 0.05 (95% of consistency), which according to the theory is the maximum acceptable CR for a 3 × 3 comparison matrix.
The critic claims that the A-C bar comparison has a 100% difference (100% of error), which is not an acceptable/tolerable error (easy to see even with the naked eye).
Also, the global error in the priority vectors is 15.85%, calculated with the common formula e = abs(u − v)/v for each coordinate, then taking the average over the coordinates. But Saaty's consistency index says that CR = 0.05 (95% of consistency), which is tolerable for a 3 × 3 comparison matrix. Hence, the critic claims that Saaty's consistency index is wrong.
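The critic's 15.85% global error figure can be checked directly from the two vectors:

```python
w = (1/3, 1/3, 1/3)                    # correct priority vector
w_star = (0.4126, 0.3275, 0.2599)      # perturbed priority vector

# e = abs(u - v) / v per coordinate, then the average over coordinates
errors = [abs(u - v) / v for u, v in zip(w_star, w)]
print(round(sum(errors) / len(errors) * 100, 2))  # 15.85 (%)
```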

The response
The problem just described has at least two big misunderstandings:

First: The CR (Saaty's index of consistency) comes from the eigenvalue-eigenvector problem, so it is a systemic approach (it does not care about one particular comparison) [4,5].

Second: The possible error should be measured by its final result (the resulting metric), not in the prior or middle steps.

The explanation
The first misunderstanding explains itself.
For the second one, before any calculation, we need to understand what kind of numbers we are dealing with (in what environment we are working), because being close to a big priority is not the same as being close to a small one. This is a weighted environment, and the measure of closeness (proximity) has to consider this situation.
We must work in the order topology domain to measure proximity (closeness) correctly in this environment. To do this, two aspects of the information must be considered: the intensity (the weight or priority) and the degree of deviation between the priority vectors (the projection between the vectors). The index that takes good care of these two factors simultaneously is the compatibility index G.
Summarizing, the vectors of the correct and perturbed metrics are w = (1/3, 1/3, 1/3) and w* = (0.4126, 0.3275, 0.2599). The basic question here is: how close is the approximated metric to the correct metric?
Evaluating G(correct, perturbed), the G value obtained is 85.72%, which in numerical terms represents almost compatible metrics (high compatibility).
As explained at the end of point 5, G = 90% is the threshold for considering two priority vectors compatible, while G = 85% is an acceptable lower limit (high compatibility).
Hence, the two metrics are relatively close (close enough considering that they are not physical measures).
Of course, better consistency can be achieved.

The question is: do we really obtain a better result by being totally consistent?
And the answer is: probably not, because in real problems we never have the "real" answer (the true metric to contrast against). Experience shows that pursuing consistent metrics per se may produce less sustained results.
For instance, in the presented problem one could answer that A-B = 2, A-C = 2, and B-C = 1, and he/she would be totally consistent, but consistently wrong.
By the way, the new priority vector would be w** = (0.5, 0.25, 0.25), with CR = 0 (totally consistent) and G = 71.5%, which means incompatible vectors (low compatibility). Thus, a totally consistent metric turns out to be incompatible with the correct result.
So, at the end it is better to be approximately correct than consistently wrong.
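Both compatibility values quoted above (85.72% for w* and 71.5% for w**) can be reproduced with the G formula:

```python
def g(a, b):
    """Compatibility index G (weighted min/max form)."""
    return sum((x + y) / 2 * min(x, y) / max(x, y) for x, y in zip(a, b))

w = (1/3, 1/3, 1/3)                  # correct metric
w_star = (0.4126, 0.3275, 0.2599)    # slightly inconsistent (CR = 0.05)
w_2star = (0.5, 0.25, 0.25)          # totally consistent, but wrong

print(round(g(w, w_star) * 100, 2))   # 85.72: high compatibility
print(round(g(w, w_2star) * 100, 1))  # 71.5: low compatibility
```

The slightly inconsistent metric lands close to the 90% threshold, while the perfectly consistent one falls well below it: approximately correct beats consistently wrong.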

Note 2: If metric B is compatible with metric A, then it is possible to use metric B as a good approximation of A. This is a useful property when metric A is not available (most of the time we do not know the correct/exact metric).
Note 3: The same exercise was performed for 4 × 4 to 9 × 9 matrices, that is, putting the value (n − 1) in cell position (1, n) (n = matrix dimension), obtaining similar results (sometimes even better).

Using compatibility index G to measure the comparability (closeness) of two different metrics
When you have two rules of measurement in the same decision structure (two different value systems), how do you combine and/or compare the outcomes of both?
First option: Try to reach a verbal consensus (the verbal or psychological option).

Second option: Use the geometric mean (the numerical or statistical option).

Third option: Measure the compatibility (the topological or closeness option).

The first two options are long known and described in the literature [1,2,4,5]. The third one is based on the compatibility principle and presents the following advantages [6]:

1. Finding out if the output of one rule is comparable with the other (compatible rules).

2. Finding out the closeness between the two rules (how comparable are they?).

3. Finding out the criteria responsible for the possible gap (where to act in the most efficient way?).

4. Finding out the closeness between the initial personal rule and the final group rule, assessing G(P1, GM).

For a better explanation, we will use an example to illustrate the idea in detail.
Suppose we face the following problem: a mining company needs to change its shift-work system from the current 6 × 1 × 2 × 3 (from the family of 8-working-hour shifts) to the new (desired) 4 × 4 shift-work system (from the family of 12-working-hour shifts).
6 × 1 × 2 × 3 means 6 days of work and 1 day off, then 6 days of work and 2 days off, then 6 days of work and 3 days off, with 8 h per day. After calculation, this gives 144 total working hours in each 24-day working cycle.
4 × 4 means 4 days of work followed by 4 days off, with 12 h per day. After calculation, we again have 144 total working hours in each 24-day working cycle.
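The cycle arithmetic above can be checked with a short sketch:

```python
# Sanity check: both shift systems yield the same total working hours
# per 24-day cycle, even though their structures differ.

# 6 x 1 x 2 x 3: 6 on / 1 off, 6 on / 2 off, 6 on / 3 off, 8 h per day
cycle_6123_days = 6 + 1 + 6 + 2 + 6 + 3   # 24-day cycle
hours_6123 = (6 + 6 + 6) * 8              # 18 working days x 8 h = 144 h

# 4 x 4: 4 on / 4 off, 12 h per day -> three full 8-day blocks in 24 days
cycle_44_days = 3 * (4 + 4)               # 24-day cycle
hours_44 = 3 * 4 * 12                     # 12 working days x 12 h = 144 h

print(cycle_6123_days, hours_6123)  # 24 144
print(cycle_44_days, hours_44)      # 24 144
```

Equal hours per cycle, yet (as argued below) far from equivalent systems.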
Are those shifts equivalent? How can we know which shift-work is better (or less risky, since any shift is inherently harmful)?
Even if the total of labor hours is the same (in terms of quantity of working hours), the shift-work systems are not equivalent in terms of quality of life and production. Their impact depends on a set of interdependent variables (the number of working hours per day, the entry time, the number of free days per year, the number of complete weekends per year, the number of nights per shift, the number of day/night/day changes per cycle, the number of sleeping hours, the opportunity to sleep, among many others). It also depends on how those variables are set and, of course, on the weight (importance) of each variable, which in turn depends on whom you ask (workers, managers, stakeholders, owners, families, or even the people living around the mine).
Suppose now we have two evaluation scenarios. First: the decision rule (DR) of measurement is built with the people that work in the 6 × 1 × 2 × 3 shift (weighting the variables involved in the rule of measurement of this shift), since they know best how their shift-work operates.
Second: the DR of measurement is built with the people that work in the 4 × 4 shift, weighting the variables involved in the rule of measurement of this shift, since they know their shift-work best.
As the process concludes, we end up with two different outputs: for the 6 × 1 × 2 × 3 shift (using the first rule of measurement) we have an impact index of 0.33 (33%), while for the second shift (with the second rule) we have an impact index of 0.37 (37%). Of course, we cannot say that shift 6 × 1 × 2 × 3 is better than 4 × 4 just because it has a lower impact (0.33 < 0.37), since the two indices were built with different rules of measurement.
In the end, we have two DRs for the same problem; of course, the question is not which rule is better, but how to make both DRs comparable.
One option could be to agree to use the same DR for both cases (the consensus, or verbal, solution). But this is not an option here, since the knowledge is located in different groups of people and is specific to each case; moreover, the groups did not feel comfortable making this agreement.
Another option could be to take the geometric mean (GM) of both rules and work with it as the final rule. Even if this could be a possibility, we really do not know what we are doing when combining or mixing both rules (by analogy, we cannot simply combine a rule of measurement in meters with one in inches). We first need to know whether both DRs are comparable. Graphically, the problem can be stated as: are these two sets of decision criteria (the gray and black profiles of Figure 13) comparable/compatible? Recall that they represent two different DRs. Rule one (the black bars) represents the rule of measurement for the 6 × 1 × 2 × 3 shift-work and is formed by four criteria (extracted from the global rule). Rule two (in gray) represents the rule of measurement for the 4 × 4 shift-work and is formed by the same four criteria, but with different intensities.
By the way, saying comparable DRs means compatible DRs (equivalent rules of measure); that is, we can measure the level of risk of the alternatives with either of the two rules described above.
To establish whether both DRs are equivalent, it is necessary to calculate G for both profiles: G(Profile gray, Profile black) = 0.91 (91%) ≥ 90%, which means they are compatible (sometimes 85% can be taken as an acceptable limit of compatibility, but no less).
Thus, we can use either of the two DRs to measure the effect of changing the shift-work. Moreover, we may use the geometric mean (GM) of both DRs as the final rule, but now knowing that we are combining rules that are in fact compatible; we are not mixing two far-apart points of view. This is a relevant issue, which helps greatly when working with different decision-making groups.
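As a sketch of this calculation, assuming G is defined as the sum over criteria of the min/max ratio of the two priorities weighted by their mean (the compatibility index used throughout this chapter), and using hypothetical weights for the four criteria (the actual values behind Figure 13 are not reproduced here):

```python
def compatibility_g(a, b):
    """Compatibility index G between two priority vectors.

    Assumes both vectors are normalized (components sum to 1) and uses
    G = sum_i [min(a_i, b_i) / max(a_i, b_i)] * (a_i + b_i) / 2.
    """
    assert len(a) == len(b), "profiles must share the same criteria"
    return sum(min(x, y) / max(x, y) * (x + y) / 2 for x, y in zip(a, b))

# Hypothetical 4-criterion profiles (illustrative only):
black = [0.40, 0.30, 0.20, 0.10]  # rule of measurement, 6x1x2x3 shift
gray  = [0.35, 0.30, 0.25, 0.10]  # rule of measurement, 4x4 shift

g = compatibility_g(black, gray)
print(f"G = {g:.3f}")  # about 0.91 -> >= 0.90, compatible rules
```

With these illustrative weights G comes out near 0.91, matching the compatibility verdict of the example.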
But how can both DRs be made comparable when they are not compatible? (Or: which rule should be used to measure the alternatives?) There are four different cases of DR compatibility where G can be applied.
Case 1: The case already described, where the DRs of the two decision-making groups are compatible.
In this case, it is possible to use either of the two rules or (better still) the geometric mean of both DRs.
Measuring the alternatives with this final DR makes the results comparable.

Case 2:
The DRs of the two persons (or groups) are not compatible with each other, but both are compatible with the GM. In this case, take the GM of both rules, then measure the compatibility of each DR with regard to the GM rule. If both initial DRs are compatible with the GM rule, then you may use the GM rule as the final rule.
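The GM rule here is the element-wise geometric mean of the two priority vectors, renormalized so that the combined rule still sums to 1. A minimal sketch (the function name and example weights are ours, for illustration):

```python
import math

def combine_gm(a, b):
    """Element-wise geometric mean of two priority vectors,
    renormalized so the combined rule sums to 1."""
    gm = [math.sqrt(x * y) for x, y in zip(a, b)]
    total = sum(gm)
    return [v / total for v in gm]

# Illustrative 4-criterion rules (hypothetical weights):
dr1 = [0.40, 0.30, 0.20, 0.10]
dr2 = [0.35, 0.30, 0.25, 0.10]
final_rule = combine_gm(dr1, dr2)
print([round(w, 3) for w in final_rule])  # each weight between dr1's and dr2's
```

One would then compute G(dr1, final_rule) and G(dr2, final_rule) to confirm Case 2 applies before adopting the GM rule.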

Case 3:
The DRs of the two persons are not compatible with each other, but one is compatible with the GM.
In this case, look at the compatibility profile of GM with P2, G(GM, P2), find the position with the smallest number, and proceed in the following sequence.
First, check whether there is any entry error in the calculation process of the P2 profile (some large inconsistency, or an inverted entry in the comparison matrix).
Second, check whether the comparisons in the matrix associated with that position are what P2 really meant to say.
Third, suggest that P2 test acceptable numbers that produce a larger G (getting closer to the GM rule), until reaching Case 2. Measuring the alternatives with this final DR makes the results comparable.
There is a fourth case: the initial DRs are not compatible with each other, and neither DR is compatible with the GM rule. This is the toughest case, since P1 and P2 have very different points of view.
The suggestion for this case is as follows: revise the structure of the model to find a missing criterion or border condition. The weights of the criteria have to be checked, and the supporting information for P1's and P2's opinions revised.
If the initial positions do not change, you may (as a last resort) apply (or impose) the GM as the final rule, but both persons (or groups) will probably not feel fully represented by that imposed DR.
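The four cases can be summarized in a small decision helper. This is a hypothetical sketch (the function name, arguments, and return codes are our own), assuming the 0.90 compatibility threshold used above:

```python
THRESHOLD = 0.90  # acceptable compatibility limit (0.85 at most, per the text)

def classify_case(g_p1_p2, g_p1_gm, g_p2_gm, threshold=THRESHOLD):
    """Map the three pairwise G values to compatibility Cases 1-4.

    g_p1_p2: G between the two decision rules P1 and P2
    g_p1_gm, g_p2_gm: G of each rule against their geometric-mean rule
    """
    if g_p1_p2 >= threshold:
        return 1  # DRs compatible: use either DR, or better, their GM
    if g_p1_gm >= threshold and g_p2_gm >= threshold:
        return 2  # both compatible with GM: adopt the GM rule
    if g_p1_gm >= threshold or g_p2_gm >= threshold:
        return 3  # only one compatible with GM: revise the other DR
    return 4      # neither compatible: revise model structure and weights

print(classify_case(0.80, 0.92, 0.93))  # Case 2
```

The helper only orders the checks; the remediation steps (entry errors, matrix review, re-weighting) remain a human task.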

Conclusions
There are two global conclusions from this study. First, we need some kind of index for distance/alignment measurement in weighted environments (order topology domain), in order to define mathematically whether two profiles (or behaviors) are really close.
This index (the compatibility index) makes possible:
• a matching analysis process
• analysis and testing of the quality of the results
• one more tool for conflict resolution in group decision making, to achieve a possible agreement, considering that those profiles may represent system values
• a pattern recognition process (assessing how close one pattern is to another)
• better benchmarking
• membership analysis (closeness analysis to estimate whether an element belongs to one set of elements or another).
Second, this analysis shows that the only compatibility index that performs correctly in every case is the G index, keeping the outcome always in the 0-100% range (like the Euclidean formula). This is an important condition, since any value outside the 0-100% range would be difficult to interpret (and the beginning of a possible divergence).
It is also important to note that the G and Euclidean outcomes in Table 4 are close, but G is much more accurate and sensitive to changes, because G is not based on absolute differences (Δxi) as a distance is, but on relative ratios in absolute ratio scales. We have to remember that we are working on ratio scales (absolute ratio scales, to be precise). In fact, the Euclidean distance calculation shows no differences in distance for the parallel trend from cases 1 to 6 (see Figure 8); a Euclidean-based index cannot detect the difference in the compatibility value among those cases because the absolute difference of the coordinates remains the same.
Therefore, with a Euclidean-based index one may reach the wrong conclusion that no difference exists in vector compatibility from cases 1 to 6 in the figure (case 1 is as incompatible as cases 2, 3, 4, 5, or 6), which is not the expected result. This behavior occurs because the Euclidean norm is based on differences, and also because it does not consider the weights of the coordinates or the projection between the priority vectors. It is important to remember that the numbers inside a priority vector represent preferences. Hence, in terms of proximity, it is better to be close to a large preference (large coordinate) than to a small one, and this issue is better resolved in ratio scales.
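A small numeric sketch of this point, using two pairs of 2D priority vectors with identical absolute coordinate differences (the values are chosen for illustration, and G follows the min/max ratio definition assumed earlier):

```python
import math

def g_index(a, b):
    """Compatibility index G (min/max ratio weighted by mean priority)."""
    return sum(min(x, y) / max(x, y) * (x + y) / 2 for x, y in zip(a, b))

def euclid(a, b):
    """Plain Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two pairs with the SAME absolute coordinate differences (+/- 0.10):
pair_low  = ([0.55, 0.45], [0.65, 0.35])  # weight spread over both axes
pair_high = ([0.80, 0.20], [0.90, 0.10])  # weight concentrated on axis 1

print(euclid(*pair_low), euclid(*pair_high))    # identical distances
print(g_index(*pair_low), g_index(*pair_high))  # different G values
```

The Euclidean distance is the same for both pairs, while G distinguishes them, because G weighs where on the priority scale the disagreement occurs.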
Other tests made in higher-dimensional spaces (3D to 10D) show the same trend. Moreover, the larger the space dimension, the greater the likelihood of finding singularity points for the other formulae, in both the parallel and perpendicular trends.
It is interesting to note that the function G is not a simple dot product, since it depends on two different dimensional factors: on the one hand, the intensity of preference (related to the weight of the element); on the other, the angle of projection between the vectors (the profiles).

• Compatibility for sensitivity analysis and threshold
G can help to establish the degree of membership, or the trend toward membership (tendency), of an alternative. The idea is equivalent to classic sensitivity analysis, making small changes in the variables. The resulting change in the G value (before and after the sensitivity analysis) shows where the alternative is more likely to belong (trend of belonging).
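This idea can be sketched as follows, with hypothetical set profiles and a small perturbation of the alternative (all vectors and the G definition are illustrative assumptions, as above):

```python
def g_index(a, b):
    """Compatibility index G (min/max ratio weighted by mean priority)."""
    return sum(min(x, y) / max(x, y) * (x + y) / 2 for x, y in zip(a, b))

group_a = [0.50, 0.30, 0.20]   # profile of set A (hypothetical)
group_b = [0.20, 0.30, 0.50]   # profile of set B (hypothetical)
alt     = [0.40, 0.30, 0.30]   # alternative under analysis

# Small change in the first and third criteria (still sums to 1):
perturbed = [0.42, 0.30, 0.28]

for v in (alt, perturbed):
    print(round(g_index(v, group_a), 3), round(g_index(v, group_b), 3))
```

After the perturbation, G against set A rises while G against set B falls, so the trend of belonging points toward set A.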