You can manage the Measurement Methods in DEFINE MODEL > Set Measurement Options > Measurement Methods page:
The Measurement Methods page is used to designate how priorities are to be derived or assigned:
- For objectives with respect to their parent objective
- For alternatives with respect to the covering objectives
Depending on the selected option, you can define the measurement methods for objectives and alternatives on the same page or separately:
For example, in the model whose objectives hierarchy is shown below:
Vendor/Partner Access, Customer Access/Service, and Internal Access are covering objectives.
When the For Objectives option is selected, we define how the objectives will be measured with respect to their parent.
For objectives, we can select either Pairwise Comparisons or Direct Entry.
When the For Alternatives option is selected, we define how the alternatives are measured with respect to the covering objectives.
For alternatives, we can select the following methods:
When All is selected, we can define both the Objectives and Alternatives methods at the same time:
The For Objectives radio button on DEFINE MODEL > Set Measurement Options > Measurement Methods is where we designate how priorities are to be derived or assigned with respect to those objectives (elements) in the objectives hierarchy that have elements below them.
NOTE: You can also define Measurement Methods for Objectives in All mode (where both the For Objectives and For Alternatives options are available).
The default measurement method for each cluster in the objectives hierarchy is pairwise as can be seen below.
Pairwise comparisons are used to derive ratio scale priorities for the relative importance of objectives, possibly sub-objectives, sub-sub objectives... in the objectives hierarchy. (In some decisions, other types of elements such as scenarios or constituencies may be in the hierarchy. Priorities representing the relative likelihood of different scenarios, or the relative importance of different constituencies, are derived in the same manner.)
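As an illustration of how ratio-scale priorities emerge from pairwise judgments, the sketch below uses the row geometric mean, a common approximation of AHP's principal-eigenvector method. The matrix values and objective names are hypothetical; Comparion performs this computation for you internally.

```python
import math

# Hypothetical pairwise comparison matrix for three objectives.
# a[i][j] is how many times more important objective i is than objective j.
a = [
    [1.0, 3.0, 5.0],   # Objective A vs. A, B, C
    [1/3, 1.0, 2.0],   # Objective B
    [1/5, 1/2, 1.0],   # Objective C
]

# Row geometric means, then normalize so the priorities sum to 1.
gm = [math.prod(row) ** (1 / len(row)) for row in a]
total = sum(gm)
priorities = [g / total for g in gm]

for name, p in zip(["A", "B", "C"], priorities):
    print(f"{name}: {p:.3f}")
```

The resulting priorities sum to 1 and preserve the ratios implied by the judgments, which is what makes them ratio-scale measures.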
The pairwise comparisons can be made verbally, numerically, or graphically.
It is also possible to assign priorities directly to objectives instead of making pairwise comparisons. This is not generally recommended but can be useful if priorities have been derived in another model or specified by a process such as an RFP.
Enabling the Advanced Mode switch at the bottom of the page will show additional information and options per cluster.
The same pairwise comparison technique used for measuring the priorities of the objectives can be used to measure the priorities representing the relative preference of alternatives with respect to each lowest level objective (also called covering objective or terminal objective).
However, if a large number of alternatives are being prioritized (possibly hundreds or thousands), there would be an inordinate number of pairwise comparisons if all possible pairs were evaluated. There are two practical approaches that can be taken:
- Make pairwise comparisons on a subset or subsets of all possible pairs.
- Define and apply absolute measurement scales.
The most common approach and the one described here is to use absolute measurement.
There are three types of absolute measurement scales in Expert Choice.
- Rating scales
- Step functions
- Utility curves
When there are many alternatives to score, rating scales are an effective way to elicit judgments from participants. Priorities for rating scale intensities are derived using pairwise comparison by the project manager and team prior to scoring alternatives. Rating scales are appropriate for qualitative judgments or qualitative facts about alternatives.
Step functions and utility curves are appropriate for ingesting quantitative data and converting it into ratio-scale measures. Priorities for step function or utility curve data are derived using pairwise comparison by the project manager and team prior to scoring alternatives.
Specifying Measurement Type and Measurement Scale for Evaluating Alternatives with Respect to Covering Objectives
Use the Measurement Methods page to maintain measurement methods and to assign them to covering objectives. For each covering objective, use the drop-down menu in the “Measurement type” column to specify the measurement type, and the drop-down menu in the “Measurement scale” column to select the desired scale. To create a new measurement scale, choose “Create New” from the drop-down in the “Measurement scale” column. Click the pencil in the Actions column to edit the measurement scale.
Alternatively, use the Manage Scales button to maintain all measurement scales in one place. Then select the measurement method and measurement scale for each covering objective in the “Measurement scale” column, as explained above. A single measurement scale can be assigned to multiple covering objectives.
The Measurement Type is set by selecting from the drop-down menu under Measurement type:
For every measurement type except pairwise comparisons, you also specify a measurement scale (select an existing one or create a new one). The Measurement Scale is set by selecting from the Measurement Scale drop-down:
From our example above, alternatives with respect to Vendor/Partner Access are to be evaluated with the "Rating Scale" measurement type using "Scale for Vendor/Partner Access."
Clicking the eye icon will open the evaluation page specific to the cluster:
In the example above, clicking the eye icon for the Vendor/Partner Access will redirect you to the first evaluation step for evaluating the alternative(s) with respect to the Vendor/Partner Access.
You can manage (create, edit, delete, clone, etc.) the measurement scales of the model using the Manage Scales button.
Clicking the Manage Scales button will open a dialog box as below:
The existing scale(s) of the selected Measurement Method are populated in the Measurement Scale drop-down.
Comparion provides at least one "default" scale for each measurement type, and unless you have already defined additional scales specific to your application, you may have the choice of only one default scale per measurement type. Although creating scales specific to the evaluation being performed is more time-consuming, it is also more accurate, and we recommend doing so.
Measurement scale settings/options vary depending on the Measurement Type (Rating scale, Step Function, Utility curve).
You can specify, create, and edit measurement scales for covering objectives. Use the drop-downs to set measurement type and measurement scale for each covering objective (either an existing scale or "New" to create a new scale).
From above, the covering objectives are assigned with Rating Measurement Type with a specific scale for each.
Clicking the Measurement Scale drop-down on one covering objective will list all the Rating Scales available in the model. You can click "Create New ..." if you want to create a new one and assign it to that covering objective.
You can also edit the measurement scale currently selected in the drop-down by clicking the pencil button.
The parameters needed to define a measurement scale vary according to Measurement Method (rating scale, step function, or utility curve).
NOTE: If you edit the priorities or shapes of utility curves for any scales after some or all evaluations are performed, the new priorities will be applied to the existing evaluations.
However, if you choose a new scale or modify an existing scale by adding or removing intensities, any prior evaluations will need to be redone. You can also click the button in the grid to edit a specific scale selected in the row.
Rating Scales
Rating scales can be created and modified by specifying Intensities (words) and corresponding priorities. It is recommended that the rating scales contain more than just three or four intensities and that the priorities be derived, rather than arbitrarily assigned.
Copy / Paste Scale
Click the Copy button to copy the intensity names, priorities, and descriptions to the clipboard, which you can paste into another scale of the same or another model.
You can paste intensities from the clipboard using the Paste button. The scale to be pasted should be in this format:
Scale Name
Intensity Name<space>Priority<space>Description
e.g.:
Note: The Scale Name and Intensity Description are optional.
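For instance, a pasted scale might look like the following (the scale name, intensity names, priorities, and descriptions here are purely illustrative):

```
Access Quality Scale
Excellent 1.0 Fully meets the requirement
Good 0.6 Meets most of the requirement
Poor 0.2 Falls short of the requirement
```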
Clone Scale
Click to create an exact copy of a rating, step function, or utility curve scale. This includes all the scale details as well as the judgments made to derive the priorities (if any).
Deriving Rating Scale Priorities
For example, we will create a new rating scale with the following intensities:
- Very High
- High
- Average
- Low
- Very Low
- None
Clicking the button will open a window where you can select the option settings for assessing the intensities:
Click Proceed.
Another dialog box will be displayed where you can select which of the intensities will be excluded from the assessment. Excluded intensities will get a zero priority.
From above, we excluded the intensity "None."
Clicking the OK button will open the evaluation page where you can do the assessments as shown below:
Click the Next button.
Be sure to click the Finish button to generate the intensity priorities from the assessment made.
The derived priorities are normalized so that the maximum is 1 as shown above.
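As a sketch of that normalization (the raw priorities below are hypothetical, and an excluded intensity such as "None" simply receives zero):

```python
# Hypothetical raw priorities derived from the intensity assessments;
# "None" was excluded, so it receives a priority of 0.
raw = {
    "Very High": 0.42, "High": 0.27, "Average": 0.16,
    "Low": 0.10, "Very Low": 0.05, "None": 0.0,
}

# Normalize so the highest-rated intensity gets a priority of 1.
peak = max(raw.values())
scale = {name: p / peak for name, p in raw.items()}
print(scale)  # "Very High" becomes 1.0; the ratios between intensities are preserved
```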
Step Functions
Step functions are similar to rating scales in that they contain verbal intensities and priorities, but, like utility curves, they translate data into priorities.
By default, the Piecewise Linear option is enabled. Disabling this will produce the data-priority graph as shown below:
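To illustrate how a step function translates data into priorities, here is a minimal sketch with hypothetical breakpoints and priorities (a decreasing scale, where lower data values are preferred; Comparion's own step functions are configured through the dialog described above):

```python
import bisect

# Hypothetical step function mapping data values (x) to priorities.
breaks = [10, 20, 50]          # x thresholds between steps
steps  = [1.0, 0.6, 0.3, 0.1]  # priority for each interval

def step_priority(x):
    """Return the priority of the interval containing x."""
    return steps[bisect.bisect_right(breaks, x)]

print(step_priority(5))   # 1.0 (below the first breakpoint)
print(step_priority(25))  # 0.3 (between 20 and 50)
```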
You can also derive the step function priorities in the same way as when deriving the rating scale priorities.
Utility Curves
Utility curves translate data into priorities.
Utility curves can be increasing or decreasing, linear, or non-linear.
You can set the lower and upper limits of utility curves, which by default are 0 and 1.
The lower limit on the x-axis of the utility curve can be negative to accommodate negative data. Ratio scale priorities (y values), however, are always positive. Some care should be exercised in selecting the x value representing 0 priority because an arbitrary shifting of a utility curve will destroy the ratio scale property of the resulting priorities.
Curvature:
Increasing the value in the curvature specification will introduce more curvature and a concave utility curve:
Decreasing the value in the curvature specification to negative values will produce a convex utility curve:
For decreasing utility curves, data below the lower limit will translate to a priority of 1, and data above the upper limit will translate to a priority of 0. Data between the lower and upper limits will translate to priorities as specified by the utility curve:
Concave and Convex decreasing utility curves can be established by increasing or decreasing the curvature specification above or below zero in a similar fashion as for the increasing utility curves:
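The behavior described above can be sketched with a common exponential parameterization of a utility curve. The formula, limits, and curvature values here are illustrative assumptions, not Comparion's internal implementation:

```python
import math

def utility(x, lo, hi, curvature=0.0, increasing=True):
    """Map a data value x to a priority in [0, 1].

    Positive curvature bends the curve concave, negative curvature
    bends it convex, and zero gives a straight line. (This exponential
    form is an illustration; the product's internal formula may differ.)
    """
    t = (x - lo) / (hi - lo)
    t = min(max(t, 0.0), 1.0)        # clamp data outside the limits
    if not increasing:
        t = 1.0 - t                  # decreasing curves are mirrored
    if abs(curvature) < 1e-12:
        return t                     # linear case
    return (1 - math.exp(-curvature * t)) / (1 - math.exp(-curvature))

print(utility(5, 0, 10))                     # 0.5: linear midpoint
print(utility(5, 0, 10, curvature=2))        # > 0.5: concave curve
print(utility(5, 0, 10, curvature=-2))       # < 0.5: convex curve
print(utility(15, 0, 10, increasing=False))  # 0.0: above the upper limit, decreasing
```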
Enabling the Advanced Mode switch at the bottom of the page will show additional information and options per cluster:
The # of Elements column displays the number of alternatives that are contributing to a given cluster.
The # of Judgments for non-pairwise measurement methods is simply equal to the # of elements.
When defining the Measurement Methods, additional information and options can be displayed when Advanced Mode is ON.
Options per cluster for ALL measurement types:
The # of Elements in cluster column displays the number of sub-objectives of the given objective (parent).
The # of Judgments in Cluster shows the total number of judgments required on a given cluster (ignoring roles). For pairwise comparisons, the number of judgments depends on the number of elements and the number of comparisons (diagonals) specified. For non-pairwise comparisons, the number of judgments is equal to the number of elements.
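The counting rules just described can be sketched as follows (a hypothetical helper, using n(n-1)/2 for all pairs, n-1 for the first diagonal, and (n-1)+(n-2) for two diagonals):

```python
def judgments_in_cluster(n, method="pairwise", diagonals="all"):
    """Number of judgments required for a cluster of n elements.

    Non-pairwise methods need one judgment per element; pairwise
    needs n(n-1)/2 for all pairs, n-1 for the first diagonal, and
    (n-1)+(n-2) for two diagonals.
    """
    if method != "pairwise":
        return n
    if diagonals == "all":
        return n * (n - 1) // 2
    if diagonals == "one":
        return n - 1
    return (n - 1) + (n - 2)  # two diagonals

print(judgments_in_cluster(5, diagonals="two"))  # 7 (= 4 + 3)
```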
Hovering on the cell will display the formula used:
Options per cluster for Pairwise Comparisons
You can define the options for Pairwise Comparisons (No. of diagonals, Display, Pairwise Type, Force Graphical) used when evaluating objectives with respect to their parent, or alternatives with respect to the covering objectives, on the Judgments Options page, which determines the default options for each cluster.
When Advanced Mode is ON, you have additional control over these options by assigning them per cluster. This helps you balance rigor and accuracy against the time and effort required for the various evaluation steps.
In the example below, for "Goal: Optimize IT Portfolio Optimization" we keep the number of comparisons at Two Diagonals (as defined on the Judgments Options page), but for the "Leverage Knowledge" and "Improve Organizational Efficiency" clusters, we specify evaluating all possible pairs (All pairs).
The # of Comparisons can be All pairs, two diagonals, or one diagonal.
The first option (All pairs) provides the most redundancy and hence the most accurate results, but at the expense of taking more time; it is most practical when the number of elements in a cluster is small.
The choice of first and second diagonals in the above example would entail 4+3 judgments. This would consist of 3 "redundant" judgments (since at least 4 judgments are required for a spanning set) and would be reasonable even if verbal judgments were made.
Choosing the minimum number of comparisons is not recommended unless pairwise graphical judgments are made and you have confidence in the accuracy of each of the judgments.
The Display column defines the number of pairs to display during the evaluation -- whether to compare one pair at a time or display all pairs at once:
The Pairwise type can be Graphical/Numerical or Verbal:
Graphical judgments, being ratio-scale measures to begin with, generally produce more accurate results and require less redundancy than verbal judgments.