RANK ORDERING CRITERIA WEIGHTING METHODS – A COMPARATIVE OVERVIEW

Multicriteria decision making (MCDM) refers to screening, prioritizing, ranking or selecting alternatives based on human judgment from among a finite set of decision alternatives in terms of multiple, usually conflicting, criteria. The weights of criteria play a very significant role in MCDM models, as they usually provide information about the relative importance of the considered criteria. Several different methods have been developed to take criteria priorities into account. The aim of this paper is a comparative overview of several rank ordering weight methods, which convert the ordinal ranking of a number of criteria into numerical weights. Using ranks to elicit weights by some formula is more reliable than directly assigning weights to criteria, because usually decision makers are more confident about the ranks of some criteria than about their weights, and they can agree on ranks more easily. The great advantage of these methods is the fact that they rely only on ordinal information about attribute importance. They can be used, for instance, in situations of time pressure, qualitative nature of criteria, lack of knowledge, imprecise, incomplete or partial information, or the decision maker's limited attention and information processing capability. The equal weights, rank sum, rank exponent, rank reciprocal as well as centroid weight techniques are presented. These methods have been selected for their simplicity and effectiveness.


Introduction
Many real world decision making problems involve multiple criteria. Corporate decision making rarely involves a single criterion. Multicriteria decision making (MCDM) refers to screening, prioritizing, ranking or selecting alternatives based on human judgment from among a finite set of decision alternatives in terms of multiple, usually conflicting, criteria [Hwang, Yoon, 1981]. Three separate steps are utilized in MCDM models to obtain a ranking of alternatives: determine the relevant criteria and alternatives, attach weights to the criteria and numerical measures to the impacts of the alternatives on these criteria, and finally process the numerical values to determine a ranking score for each alternative [Hwang, Yoon, 1981]. In multicriteria models the weights of criteria play a very significant role, and they have different interpretations depending on the decision making context and on the multicriteria analysis method [Choo et al., 1999]. However, they usually provide information about the relative importance of the considered criteria.
Several different methods have been developed to take criteria priorities into account. In this paper a limited selection of weighting techniques, the ranking weight methods, is presented. Judgments of decision makers are frequently vague, and in practice their preferences as well as weights cannot be evaluated exactly with numerical values. The "true" weights of criteria remain unknown in practice. Even if the elicitation of precise weights were possible, it would probably be time-consuming and difficult, and therefore impractical. The rank ordering weighting methods provide approximations of the "true" weights of criteria when rank ordering information is known. Such an assumption, a complete rank ordering being provided by the decision maker (DM), is also made in this study. Attaching ranks and eliciting weights from them by some formula is more reliable than directly assigning weights to criteria. This is because usually even experts and decision makers are more confident about the ranks of some criteria than about their weights, and they can agree on ranks more easily. Hence, ranking is usually easier than weighting, for non-experts and even for experts. The great advantage of ranking weight methods is the fact that they rely only on ordinal information about criteria importance. They can be used, for instance, in situations of time pressure, qualitative nature of criteria, lack of knowledge, imprecise, incomplete or partial information, or the decision maker's limited attention and information processing capability. It follows that the decision maker may not be willing or able to provide exact estimations of decision parameters. Also, a group of decision makers may not be able to reach agreement on a set of exact weights, so in such a situation it may be realistic to expect agreement only on a ranking of weights. Moreover, the ranking methods are easy to use and simple to understand for the decision maker.
Generally, the ranking method of weight determination involves two steps: ranking the criteria according to their importance, and weighting the criteria from their ranks using one of the rank order weighting formulas. In fact, two practical and important problems arise here. The first problem concerns criterion ranks: how to determine which criterion is more important than another. The second problem is how to elicit weights from this information. This paper addresses only the second step: if only the criteria ranks are supplied by the decision maker, how do we determine the relative criteria weights?
The aim of the paper is a comparative overview of ranking methods of weight determination. Rank ordering weighting methods take into account the decision maker's information about the rank ordering of criteria weights, from which approximations for the criteria weights are calculated using a corresponding formula. The rank sum, rank order centroid and rank reciprocal methods, as the most used rank ordering weighting methods, are benchmarked in this study.
The paper is organized as follows. In Section 2 we formulate the MCDM problem. In Section 3 a literature overview of criteria weighting techniques is given. The conversion of ranks into weights is discussed in Section 4. In Section 5 we present the application of the analytic hierarchy process (AHP) method to weight determination. Comparisons of the presented rank ordering weight methods as well as the AHP are provided in Section 6. Finally, conclusions and comments are given in Sections 7-8.

The MCDM problem formulation
In this paper we consider a MCDM problem with m decision alternatives A_1, ..., A_m evaluated in terms of n criteria. We assume that the scores of all alternatives with respect to all criteria are known or have been estimated by the decision maker. The criteria may be grouped into two categories: benefit and cost. The benefit criteria are those attributes for maximization, whose values are always the larger the better. The cost criteria are those for minimization, whose values are the smaller the better. For the criteria we have a weight vector w = (w_1, w_2, ..., w_n) satisfying the conditions

  w_j >= 0 for j = 1, 2, ..., n,  Σ_{j=1}^{n} w_j = 1.  (1)

Finally, a choice from two or more alternatives requires a decision rule or ranking rule by which the DM can use the available information to make the best choice. At the next stage, all criteria are normalized to have commensurable units. For a benefit criterion the normalized values can be calculated using the following formula:

  c_ij = x_ij / max_i x_ij,  (2)

and for a cost criterion using the following formula:

  c_ij = min_i x_ij / x_ij,  (3)

where x_ij is the score of the i-th alternative with respect to the j-th criterion before normalization. After normalization all criteria are benefit criteria, where a higher c_ij is preferred. Let C = [c_ij] be the normalized decision matrix.
The simplest and most often used multiattribute decision technique is Simple Additive Weighting (SAW), which is also known as weighted linear combination or scoring methods. This method is based on the weighted average. The advantage of this method is that it is a proportional linear transformation of the raw data, which means that the relative order of magnitude of the standardized scores remains equal. To use this method, we first must express the values of the decision making matrix as linear non-scaled values. The SAW technique consists of three basic steps: scale the scores to make them comparable, apply the criteria weights, and sum the values along the rows and select the best (top ranked) alternative. In the SAW technique, the final score of each alternative is calculated as follows:

  S_i = Σ_{j=1}^{n} w_j c_ij,  (4)

where S_i is the score of the i-th alternative, c_ij is the normalized score of the i-th alternative with respect to the j-th criterion, and w_j is the weight of criterion j, as before. Next the final scores are ranked: the higher the value of S_i, the higher the rank.
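As a concrete illustration, the benefit/cost normalization and the SAW score described above can be sketched in Python. This is a minimal sketch; the function names are my own, not from the paper:

```python
def normalize(matrix, benefit):
    """Normalize a decision matrix column-wise.

    benefit[j] is True for a benefit criterion (x_ij / max_i x_ij)
    and False for a cost criterion (min_i x_ij / x_ij).
    """
    m, n = len(matrix), len(matrix[0])
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        if benefit[j]:
            top = max(col)
            for i in range(m):
                norm[i][j] = matrix[i][j] / top
        else:
            low = min(col)
            for i in range(m):
                norm[i][j] = low / matrix[i][j]
    return norm

def saw_scores(matrix, weights, benefit):
    """Final SAW score S_i = sum_j w_j * c_ij for each alternative."""
    c = normalize(matrix, benefit)
    return [sum(w * cij for w, cij in zip(weights, row)) for row in c]
```

Ranking the alternatives then amounts to sorting them by decreasing `saw_scores` value.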

The techniques of weight elicitation - a literature overview
Several approaches have been proposed to determine weights in multicriteria models. Some methods are more formal, and some have an axiomatic basis. For a comparison of weighting techniques, see Tzeng et al. [1998], Belton and Stewart [2002], Borcherding, Eppel and Winterfeldt [1991], and Xu [2004]. Tzeng et al. [1998] classify weighting methods into objective and subjective. The subjective approaches select weights based on preference information about the criteria and on intuitions or judgments drawn from the decision maker's knowledge; the objective methods determine the weights of criteria through mathematical calculation using objective information in a decision matrix. The subjective methods include the rank ordering methods [Ahn, Park, 2008; Solymosi, Dombi, 1986; Stillwell et al., 1981], the tradeoff method and the pricing-out method [Keeney, Raiffa, 1976], the ratio method [Edwards, 1977], the swing method [Kirkwood, 1997], the analytic hierarchy process (AHP) [Saaty, 1980], the direct rating (DR) method [Bottomley, Doyle, 2001], the point allocation (PA) method [Doyle et al., 1997], the Delphi method [Hwang, Yoon, 1981], the eigenvector method [Takeda et al., 1987], LINPAC (LINear ProgrAmming of preference Comparisons) [Horsky, Rao, 1984], and others [Hwang, Yoon, 1981]. The objective methods determine the weights of criteria from the information in a decision matrix through mathematical models, but they neglect the subjective judgment information of the decision maker. They include the entropy method [Hwang, Yoon, 1981], the standard deviation (SD) method [Diakoulaki et al., 1995], the CRITIC (Criteria Importance Through Intercriteria Correlation) method [Diakoulaki et al., 1995], the maximizing deviation method [Wu, Chen, 2007] and the ideal point method [Ma et al., 1999]. The integrated methods determine the weights of criteria using both the decision maker's subjective information and objective decision matrix information. The subjective and objective integrated approach [Ma et al., 1999] is formulated as a two-objective mathematical programming model; the integrated approach of Fan et al. [2002] integrates the decision maker's fuzzy preference information on decision alternatives and objective decision matrix information into one model. The integrated approach of Wang and Parkan [2006] integrates the decision maker's fuzzy preference relation on decision alternatives, the decision maker's multiplicative preference relation on the weights of criteria, and objective decision matrix information into a general model framework. Weber and Borcherding [1993] classify weight-determining procedures according to whether they are statistical or algebraic, holistic or decomposed, and direct or indirect. According to Weber and Borcherding [1993], the concept of weight can be defined only in reference to one of the specific theories of preference. Belton and Stewart [2002] summarize two kinds of weights: tradeoff-based weights and non-tradeoff-based weights. Tradeoff-based weights emphasize the "compensation" of values across criteria, which permits preference data to be compared as they are aggregated into a single representative evaluation. Non-tradeoff-based weights do not permit direct tradeoffs across criteria; they are usually associated with outranking methods. Among the tradeoff-based weight methods, the AHP and geometric ratio weighting are integrated methods, which means they proceed from preference data and weight assessments to aggregated preferences and final results.

Ranking methods of weight determination
There are a variety of situations where it is reasonable to use ranked weights, and various techniques have been developed to deal with ranked weights and arrive at a choice or a ranking of the alternatives under consideration. The relative criteria weights are based on the assumption that a universal weight-rank functional relationship exists between criteria ranks and average weight values. Such a conversion is based on different formulas. Rank ordering the importance of criteria may be easier than describing other imprecise weights such as bounded weights. This happens, for instance, in situations of time pressure, qualitative nature of criteria, lack of knowledge, imprecise, incomplete or partial information, or the decision maker's limited attention and information processing capability. It follows that the decision maker may not be willing or able to provide exact estimations of decision parameters. There are also more specific reasons why the assumption of exact weights is unrealistic. For instance, Barron and Barrett [1996a, 1996b] state that various methods for eliciting exact weights from the decision maker may suffer on several counts, because the weights are highly dependent on the elicitation method and there is no agreement as to which method produces more accurate results, since the "true" weights remain unknown. Also, a group of decision makers may not be able to reach agreement on a set of exact weights, so in such a situation it may be realistic to expect agreement only on a ranking of weights. The rank order weight determination involves two steps: ranking the criteria according to their importance, and weighting the criteria from their ranks using one of the rank order weighting formulas.
More precisely, the rank order describes the importance of the criteria, and the information describing them (the rank numbers) can then be used for generating numerical weights. We assume here that the decision maker (DM) is capable of providing preference information and of rank ordering the criteria. The decision maker selects and ranks a set of n criteria that he or she deems to be relevant, giving each criterion j a rank r_j in {1, 2, ..., n}. The rank is inversely related to the weight (rank r_j = 1 denotes the highest weight, rank r_j = n the lowest weight). In this paper we concentrate only on the second step of the ranking weight method. Our objective is to convert the list of ranks r_1, ..., r_n for the n criteria into numerical weights. Many authors have suggested specific functions for assigning weights w_j to n criteria with ranks r_j. Stillwell et al. [1981] propose three functions: rank reciprocal (inverse), rank sum (linear), and rank exponent weights. Solymosi and Dombi [1986] and Barron [1992] propose rank order centroid weights. Lootsma [1996] and Lootsma and Bots [1999] suggest two types of geometric weights. Here we present the equal weight, rank sum, rank exponent and rank reciprocal as well as the centroid weight techniques. These methods have been selected for their simplicity and effectiveness.

Equal weight method
The equal weight method (EW) requires minimal knowledge about the priorities of criteria and minimal input from the decision maker. If the decision maker has no information about the true weights, then the true weights can be represented as a uniform distribution on the n-simplex of weights defined by conditions (1). The n-simplex of weights is a geometric object. For instance, with n = 2 criteria and no information, the 2-simplex of weights is the set of points lying on the line segment between the points with coordinates (1, 0) and (0, 1). If we have no knowledge about the weights, then our knowledge can be represented by a uniform probability density function over this segment. The expected value of this distribution is the centroid (centre of mass) of the segment, the point (1/2, 1/2). More generally, in the case of no information about the weight distribution, the expected value of the distribution determines the following weight formula:

  w_j = 1/n,  (5)

where j = 1, 2, ..., n.
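The centroid argument behind EW can be checked numerically: weights drawn uniformly from the n-simplex average to w_j = 1/n. The following is my own sketch (not from the paper), using the standard fact that normalized exponential variates are uniformly distributed on the simplex:

```python
import random

def uniform_simplex_sample(n, rng):
    # Normalized exponential variates give a uniform point on the n-simplex.
    e = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(e)
    return [x / s for x in e]

rng = random.Random(42)
n, trials = 4, 20000
mean = [0.0] * n
for _ in range(trials):
    w = uniform_simplex_sample(n, rng)
    mean = [m + x / trials for m, x in zip(mean, w)]
# Each coordinate's empirical mean should be close to 1/n = 0.25.
print([round(m, 3) for m in mean])
```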

Rank Sum weight method
In the rank sum (RS) procedure the weight of each criterion is proportional to its inverse rank n - r_j + 1, normalized by dividing by the sum of these inverse ranks. The formula producing the weights is the following [Stillwell et al., 1981]:

  w_j = (n - r_j + 1) / Σ_{k=1}^{n} (n - r_k + 1) = 2(n + 1 - r_j) / (n(n + 1)),  (6)

where r_j is the rank of the j-th criterion, j = 1, 2, ..., n.
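The RS procedure described above can be sketched as follows (the function name is my own):

```python
def rank_sum_weights(ranks):
    """Rank sum weights: w_j = (n - r_j + 1) / sum_k (n - r_k + 1)."""
    n = len(ranks)
    total = sum(n - r + 1 for r in ranks)
    return [(n - r + 1) / total for r in ranks]
```

For a complete ranking 1, ..., n the denominator equals n(n + 1)/2, so the weights decrease linearly with the rank.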

Rank Exponent weight method
The rank exponent weight method (RE) is a generalization of the rank sum method. We have the following formula:

  w_j = (n - r_j + 1)^p / Σ_{k=1}^{n} (n - r_k + 1)^p,  (7)

where r_j is the rank of the j-th criterion, p >= 0 is a parameter describing the weights, and j = 1, 2, ..., n. The parameter p may be estimated by the decision maker using the weight of the most important criterion or through interactive scrolling (as in Table 1). The value p = 0 results in equal weights, and p = 1 in rank sum weights. As p increases, the weight distribution becomes steeper. In Table 1 and Figure 1 the estimated weights for some values of p in the case of n = 5 criteria are shown. Source: own study.
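A sketch of the RE formula, together with the kind of sweep over p mentioned above (the function name is my own):

```python
def rank_exponent_weights(ranks, p):
    """Rank exponent weights: w_j = (n - r_j + 1)^p / sum_k (n - r_k + 1)^p."""
    n = len(ranks)
    total = sum((n - r + 1) ** p for r in ranks)
    return [(n - r + 1) ** p / total for r in ranks]

# Scrolling over p for n = 5 criteria, in the spirit of Table 1.
ranks = [1, 2, 3, 4, 5]
for p in (0.0, 1.0, 2.0, 3.0):
    print(p, [round(w, 3) for w in rank_exponent_weights(ranks, p)])
```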

Inverse or reciprocal weights
The inverse (or reciprocal) weights method (RR) uses the reciprocals of the ranks, which are normalized by dividing each term by the sum of the reciprocals [Stillwell et al., 1981]. The formula is the following:

  w_j = (1/r_j) / Σ_{k=1}^{n} (1/r_k),  (8)

where r_j is the rank of the j-th criterion, j = 1, 2, ..., n.
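The RR formula can be sketched as follows (the function name is my own):

```python
def rank_reciprocal_weights(ranks):
    """Rank reciprocal weights: w_j = (1/r_j) / sum_k (1/r_k)."""
    total = sum(1.0 / r for r in ranks)
    return [(1.0 / r) / total for r in ranks]
```

For ranks 1, 2, 3 this gives 6/11, 3/11 and 2/11: the weight drops sharply after the most important criterion.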

Rank-order centroid weight method
The rank-order centroid (ROC) weight approach produces an estimate of the weights that minimizes the maximum error of each weight by identifying the centroid of all possible weights maintaining the rank order of criteria importance. Barron and Barrett [1996a] found that weights obtained in this manner are very stable. If we know the rank order of the true weights, but have no other quantitative information about them, then we may assume that the weights are uniformly distributed on the simplex of rank-ordered weights w_1 >= w_2 >= ... >= w_n >= 0, Σ_j w_j = 1. Barron and Barrett [1996a] generalized this argument for n >= 2, showing that the expected value of the weights can be calculated using the following formula:

  w_j = (1/n) Σ_{k=r_j}^{n} 1/k.  (9)

This method is called the rank-order centroid (ROC) weight method because these weights reflect the centroid (centre of mass) of the simplex defined by the ranking of the criteria. With more criteria, the error for ranked criteria will be much smaller.
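The ROC formula above can be sketched as follows (the function name is my own):

```python
def roc_weights(ranks):
    """Rank-order centroid weights: w_j = (1/n) * sum_{k = r_j}^{n} 1/k."""
    n = len(ranks)
    return [sum(1.0 / k for k in range(r, n + 1)) / n for r in ranks]
```

For n = 3 and ranks 1, 2, 3 this gives 11/18, 5/18 and 2/18, and the weights always sum to 1.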

The AHP method
The well-known subjective method for determining weights is the analytic hierarchy process (AHP) proposed by Saaty [1980]. When applying the AHP, the decision criteria are compared in a pairwise manner with regard to the element preceding them in the hierarchy. If two criteria are of equal importance, a value of 1 is given in the comparison, whereas a value of 9 indicates the absolute importance of one criterion over the other (see Table 2). The matrix of pairwise comparisons when there are n criteria at a given level can be formed as A = [a_ij], where a_ij = w_i/w_j and w_i, w_j are the relative importances of criteria i and j, respectively. We also have a_ji = 1/a_ij and a_ii = 1. Based on the matrix A, the criteria weights can be calculated by several methods, such as the arithmetic mean method, the characteristic root (eigenvalue) method, and the least squares method. Because individual judgments will never agree perfectly, the degree of consistency achieved in the pairwise comparisons is measured by a consistency ratio indicating whether the comparisons made are acceptable. The Consistency Index (CI) of the square matrix A was established by Saaty [1980]. This measure can be used to verify to what extent the supplied judgments are consistent.
where max  is the highest eigen value of the matrix A. The calculated priorities are plausible only if the comparison matrices are consistent or near consistent.The approximate ratio of consistency can be obtained using the degree of inconsistency of the square matrix A can be measured by the ratio of CI to RI, which is called the Consistency Ratio (CR).
where RI is the Random Consistency Index, which is the average CI of a randomly generated reciprocal matrices with dimension n [see: Saaty, 1980].
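The consistency check described above can be sketched numerically. The following pure-Python sketch is my own (not the paper's code); the RI values are Saaty's published Random Consistency Index table, and λ_max is approximated by power iteration on the positive comparison matrix:

```python
# Saaty's Random Consistency Index for matrix dimensions 1..7.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights_and_cr(A, iters=200):
    """Return (weights, CI, CR) for a pairwise comparison matrix A."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):  # power iteration towards the principal eigenvector
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # lambda_max estimated as the mean of (A w)_i / w_i
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return w, ci, cr
```

For a perfectly consistent matrix (a_ij = w_i/w_j) the sketch recovers the underlying weights with CI = CR = 0; a CR above 0.10 signals that the judgments should be revised.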
We can conclude that the matrix is sufficiently consistent, and we accept it, when CR <= 10%. In other cases the inconsistency is too large and unacceptable, so the decision makers must revise their judgments. The AHP is especially designed to assess weights within a hierarchical structure of the criteria. However, due to the fast-growing number of pairwise comparisons, it is not sensible to use the method for a large set of criteria. We show some relationships between the AHP and the ROC, RR and RS techniques in the case of 5 criteria. Let us denote by A_ROC, A_RR, A_RS the matrices of pairwise criteria comparisons. In Table 3 the results of weight estimation obtained by the AHP method based on matrices (13)-(15) and the weights obtained by the ROC, RR and RS methods in the case of n = 5 criteria are presented.

The comparison of the rank order weight functions
The decision about weight determination will strongly influence the final results of the decision making. The choice of weighting method depends mainly on knowledge of the underlying distribution of "true" weights. The first decision that needs to be made is between equal and differential weighting. Maggino and Ruviglioni [2011] suggest that: "equal weighting represents the preferred procedure, adopted in most of the applications. This happens mainly when: i) the theoretical structure attributes to each indicator the same adequacy in defining the variable to be measured, ii) the theoretical structure does not allow hypotheses to be consistently derived on differential weightings, iii) the statistical and empirical knowledge is not adequate for defining weights, iv) the correct adoption and application of alternative procedures do not find any agreement".
They also conclude that "differential weighting does not necessarily correspond to the identification of different weights but rather to the selection of the most appropriate approach in order to identify the weights among the identified ones. Assigning differential weights can be just as doubtful, especially when the decision is not supported by i) theoretical reflections that endow a meaning on each indicator or consider its impact on the synthesis, ii) methodological concerns that helps to identify the proper techniques, consistently with the theoretical structure". The ROC approach to rank order weights has a clear statistical basis and interpretation, whereas the other methods have taken a more heuristic approach. A choice between rank sum (RS) weights, reciprocal (RR) weights and centroid (ROC) weights depends in part on the decision maker's beliefs about the steepness of the true weights. The centroid (ROC) weights are "steeper", assigning relatively greater weight to the more important criteria; the rank sum (RS) weights are much "flatter" than the centroid (ROC) weights. The rank sum (RS) weights decrease linearly from the most important to the least important criterion. The reciprocal (RR) weights descend aggressively after the most important criterion, but at the least important end the centroid (ROC) weights are the lowest ones. Approximations for criteria weights given by the formulas (5)-(9), in the case of n = 2, ..., 7 criteria, are presented in Table 4. The differences between the weighting methods are also illustrated in Figures 2-7. Source: own studies.
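To make the comparison of steepness concrete, the following sketch (my own, not from the paper) computes the EW, RS, RR and ROC weight vectors for n = 5 criteria, in the spirit of Table 4:

```python
ranks = [1, 2, 3, 4, 5]
n = len(ranks)

ew = [1 / n] * n                                            # equal weights
rs = [2 * (n + 1 - r) / (n * (n + 1)) for r in ranks]       # rank sum
rr_raw = [1 / r for r in ranks]
rr = [x / sum(rr_raw) for x in rr_raw]                      # rank reciprocal
roc = [sum(1 / k for k in range(r, n + 1)) / n for r in ranks]  # centroid

for name, w in [("EW", ew), ("RS", rs), ("RR", rr), ("ROC", roc)]:
    print(name, [round(x, 3) for x in w])
```

The output shows the pattern described above: ROC assigns the largest weight to the top-ranked criterion and the smallest weight to the bottom-ranked one, while RS decreases linearly between the two extremes.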
There have been some studies comparing the decision quality of weighting methods. Several studies have compared weight functions and found centroid weights to be superior in terms of accuracy and ease of use. Some of these studies are the following:
- Olson and Dorai [1992] compare centroid weights to the AHP on a student job selection problem, concluding that centroid weights provide almost the same accuracy while requiring much less input and mental effort from decision makers.
- Edwards and Barron [1994], using centroid weights, extend SMART into SMARTER (SMART Exploiting Ranks).
- Barron and Barrett [1996a] analyse the effectiveness of centroid weights in SMARTER.
- Another study [2003] compared centroid weights with the AHP and a fuzzy method and found that the simplicity and ease of use of centroid weights make them a practical method for determining criteria weights.

Example
In multicriteria evaluation, different combinations of criteria weights have a great influence on the final evaluation result. To illustrate the input of rank order weight methods in multicriteria decision making, a simple numerical example is considered. This example reviews several applications of the SAW method using different weighting schemes and compares the results of different sets of weights applied to the same set of multicriteria data. We consider four different weight sources: centroid weights, reciprocal weights, rank sum weights and equal weights. Let us assume that the multicriteria decision problem consists of the set of five alternatives A_1, A_2, A_3, A_4, A_5 (Table 5). Source: own study.
First, a set of weights is determined according to the centroid method, the reciprocal method, the rank sum method and the equal weight method (see Table 4 for n = 5). Let us observe that the RS weights decrease linearly from the most important to the least important criterion, and the RR weights descend aggressively after the most important criterion, but at the least important end the ROC weights are the lowest ones. In the considered example the differences between the weighting methods for the five criteria are also illustrated in Figure 8. The SAW function is then used to rank the alternatives. Table 6 provides the values computed by formulas (2)-(4) and the ranking of the alternatives based on the various criteria weight methods. We obtain different alternative rank orderings depending on the weight determination. Source: own study based on Table 5.
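Since the data of Table 5 are not reproduced in the text, the following sketch uses an invented decision matrix (all values below are hypothetical, my own) to illustrate how the SAW ranking can depend on the weighting method:

```python
# Hypothetical decision matrix: 5 alternatives x 5 benefit criteria.
matrix = [
    [7, 9, 9, 8, 8],   # A1
    [9, 6, 6, 6, 6],   # A2: strong on the most important criterion only
    [8, 7, 8, 7, 7],   # A3
    [6, 8, 8, 8, 7],   # A4: weak on the most important criterion
    [8, 8, 7, 6, 9],   # A5
]
n = 5
col_max = [max(row[j] for row in matrix) for j in range(n)]
c = [[row[j] / col_max[j] for j in range(n)] for row in matrix]  # benefit normalization

ew = [1 / n] * n
# ROC weights, assuming criterion j+1 has rank j+1 (criterion 1 most important)
roc = [sum(1 / k for k in range(j + 1, n + 1)) / n for j in range(n)]

def saw_rank(weights):
    """Alternatives ordered from best (highest SAW score) to worst."""
    scores = [sum(w * cij for w, cij in zip(weights, row)) for row in c]
    return sorted(range(1, 6), key=lambda i: -scores[i - 1])

print("EW ranking :", saw_rank(ew))
print("ROC ranking:", saw_rank(roc))
```

With these (hypothetical) data, A2 and A4 swap places between the two weight sets: equal weights favour the balanced A4, while the steep ROC weights favour A2, which excels on the top-ranked criterion.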
As results from Table 6, the order of rating among those alternatives is the following:

Conclusion
The criteria weights are the key point in obtaining the total scores of alternatives and, most importantly, the conclusion of multicriteria analysis problems. In this paper the ranking methods of determining criteria weights in multicriteria analysis were presented. Ranking methods use the rank order of the considered criteria as input and convert it into weights for each of them. Ranking is a necessary first step in most procedures for eliciting more precise weights. There are a variety of situations where it is reasonable to use ranked weights, and various techniques have been developed to deal with ranked weights and arrive at a choice or a ranking of the alternatives under consideration. Using ranks to elicit weights by some formula is more reliable than directly assigning weights to criteria, because usually decision makers are more confident about the ranks of some criteria than about their weights, and they can agree on ranks more easily. Rank ordering the importance of criteria may be easier than describing other imprecise weights such as bounded weights. In the situation where there is a group of decision makers, it may be realistic to expect agreement only on a ranking of weights. However, real applications of these methods are needed to detect the real problems that the methods may encounter.
FIGURE 1. Estimated rank exponent weights in the case of n = 5 criteria for different values of the parameter p

TABLE 3. The results of weight estimation by the AHP, ROC, RR and RS techniques in the case of n = 5 criteria - a comparison.
Source: own study based on the matrices A_ROC, A_RR, A_RS and the ROC, RS, RR rank order weight functions.

TABLE 4. Approximations for criteria weights given by the different formulas used, in the case of n = 2, ..., 7 criteria
- Barron and Barrett [1996b] compare the approximate weights (RS, RR, ROC, and EW) using a simulation study and report that the ROC weights appear to perform better than the other approximate weights, finding centroid weights superior to rank sum and reciprocal (inverse) weights.
- Srivastava et al. [1995], on the basis of simulation experiments, compare five weight elicitation methods, including rank sum and centroid weights, finding centroid weights to be superior to the other methods.
- Jia et al. [1998] performed a detailed comparison of several weighting schemes and used simulation to compare centroid and rank sum weights with equal weighting and ratio weights, finding that equal weights do not always perform well, but rank-ordered centroid weights based only on an ordering of the criteria lead to much the same choices as the actual weights do.