Math of AHP

The Math of AHP -- Computing priorities from pairwise comparisons

Eigenvector computation

Priorities for the elements in a cluster are derived from the relative pairwise comparisons by normalizing the principal right eigenvector of the matrix consisting of the pairwise comparisons.  An insight into why this is so "natural" a computation can be obtained by considering n rocks of known weights w1, w2, ..., wn.  We form a matrix A of the ratios of the known weights of the rocks, with the entry in row i and column j equal to wi/wj:

A  =  | w1/w1   w1/w2   ...   w1/wn |
      | w2/w1   w2/w2   ...   w2/wn |
      |  ...     ...    ...    ...  |
      | wn/w1   wn/w2   ...   wn/wn |

If we want to "recover" or find the vector of weights [w1, w2, w3, ... wn] given these ratios, we can take the matrix product of the matrix A with the vector w to obtain [1]:

A * w  =  n * w
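As a quick numerical check of this identity, the following minimal Python sketch (using numpy, with made-up weights chosen only for illustration) builds the ratio matrix from a known weight vector and verifies that multiplying it by w returns n times w:

    import numpy as np

    # Hypothetical known rock weights (illustrative values only)
    w = np.array([1.0, 2.0, 4.0, 8.0])
    n = len(w)

    # Ratio matrix A with entries a_ij = w_i / w_j
    A = np.outer(w, 1.0 / w)

    # Each element of A*w is the sum over j of (w_i/w_j)*w_j = n*w_i
    print(A @ w)      # [ 4.  8. 16. 32.]
    print(n * w)      # [ 4.  8. 16. 32.]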

If we knew A, but not w, we could solve the above for w.  The problem of solving for a nonzero solution to this set of equations is very common in engineering and physics and is known as an eigenvalue equation:

Aw  = λ w

The solution to this set of equations is, in general, found by solving an nth order polynomial equation for λ (the characteristic equation det(A - λI) = 0).  Thus, in general, there can be up to n unique values of λ, with an associated eigenvector w for each of those values.

In this case, however, the matrix A has a special form since each row is a constant multiple of the first row.  For such a matrix, the rank of the matrix is one, and all the eigenvalues of A are zero, except one.  Since the sum of the eigenvalues of a square matrix is equal to the trace of the matrix (the sum of the diagonal elements), and every diagonal element of A is wi/wi = 1, the non-zero eigenvalue has a value of n, the size of the matrix.  This eigenvalue is referred to as λmax.

Notice that each column of A is a constant multiple of w.  Thus, w can be found by normalizing any column of A.
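Continuing the same illustrative sketch (same made-up weights as above), both claims can be verified numerically: the eigenvalues of A are n and zeros, and any column of A, normalized to sum to one, reproduces the normalized weight vector:

    import numpy as np

    w = np.array([1.0, 2.0, 4.0, 8.0])        # illustrative weights as before
    n = len(w)
    A = np.outer(w, 1.0 / w)

    # One eigenvalue equals n = 4; the rest are (numerically) zero
    print(np.round(np.linalg.eigvals(A).real, 6))

    # Normalizing any column of A recovers the normalized weights
    col = A[:, 2]                             # third column, an arbitrary choice
    print(np.round(col / col.sum(), 4))       # [0.0667 0.1333 0.2667 0.5333]
    print(np.round(w / w.sum(), 4))           # the same vector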

The matrix A is said to be strongly consistent in that

aik * akj = aij   for all i, j, k.
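As a small illustration (again with hypothetical weights), a ratio matrix built from a single weight vector satisfies this property exactly, which can be checked by brute force:

    import numpy as np

    w = np.array([1.0, 2.0, 4.0, 8.0])   # illustrative weights
    n = len(w)
    A = np.outer(w, 1.0 / w)

    # Check that a_ik * a_kj == a_ij for every i, j, k:
    # (w_i/w_k) * (w_k/w_j) = w_i/w_j
    strongly_consistent = all(
        np.isclose(A[i, k] * A[k, j], A[i, j])
        for i in range(n) for j in range(n) for k in range(n)
    )
    print(strongly_consistent)           # True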

Now let us consider the case where we do not know w, where we have only estimates of the aij's in the matrix A, and where the strong consistency property most likely does not hold.  (This allows for small errors and inconsistencies in judgments.)  It has been shown that, for any matrix, small perturbations in the entries imply similar perturbations in the eigenvalues; thus the eigenvalue problem for the inconsistent case is:

Aw  =  λmax w,

where λmax will be close to n (actually greater than or equal to n) and the other λ's will be close to zero.  The estimates of the weights for the activities can be found by normalizing the eigenvector corresponding to the largest eigenvalue in the above matrix equation.
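Putting the pieces together, here is a minimal Python sketch of this computation for a hypothetical 3 x 3 comparison matrix (the judgment values below are invented for illustration and are deliberately slightly inconsistent): form the reciprocal matrix of judgments, take the eigenvector belonging to the largest eigenvalue, and normalize it to obtain the priorities.

    import numpy as np

    # Hypothetical pairwise comparison judgments (reciprocal matrix);
    # a12*a23 = 6 while a13 = 5, so the matrix is slightly inconsistent
    A = np.array([
        [1.0,   3.0,   5.0],
        [1/3.0, 1.0,   2.0],
        [1/5.0, 1/2.0, 1.0],
    ])
    n = A.shape[0]

    # Principal eigenvalue and its right eigenvector
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    lam_max = eigvals[k].real
    v = np.abs(eigvecs[:, k].real)    # take the positive representative

    # Normalize to obtain the priority vector
    priorities = v / v.sum()

    print("lambda_max =", round(lam_max, 4))        # slightly above n = 3
    print("priorities =", np.round(priorities, 4))

A power-method iteration (repeatedly multiplying A by a trial vector and normalizing) would converge to the same principal eigenvector; np.linalg.eig is simply the shortest route for a sketch.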

Aside:  The eigenvalue equation, when solved, produces one or more eigenvalues with associated eigenvectors.  The word eigen, in German, means characteristic, or one's own.  This is appropriate because eigenvalue equations arise naturally in many instances of mathematics and science -- on their own -- as a characteristic of the system itself.  Eigenvalue equations are among the most frequently and universally occurring concepts in math and science; others include pi, the number e, the Fibonacci numbers, and the golden ratio.  Even the Google search algorithm uses eigenvectors.  (See the article "The $25,000,000,000 Eigenvector.")

[1] The matrix product is formed by multiplying, element by element, each row of the first factor, A, by the corresponding elements of the second factor, w, and adding.  Thus, the first element of the product would be: (w1/w1)*w1 + (w1/w2)*w2 + ... + (w1/wn)*wn = nw1.  Similarly, the second element would be (w2/w1)*w1 + (w2/w2)*w2 + ... + (w2/wn)*wn = nw2, and the nth element would be nwn.  Thus, the resulting vector would be nw.
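For concreteness, with made-up weights w = (1, 2, 4), so n = 3, the first element of the product is (w1/w1)*w1 + (w1/w2)*w2 + (w1/w3)*w3 = (1/1)*1 + (1/2)*2 + (1/4)*4 = 1 + 1 + 1 = 3 = 3*w1, and the remaining elements work out the same way to 3*w2 and 3*w3, so the product is 3w.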