After structuring the model and collecting judgments, use the Synthesis Results menu to analyze the results.
Synthesize automatically combines qualitative judgments with quantitative data, if available. Qualitative judgments are based on the knowledge, experience, and judgment of the participants.
You can analyze objective priorities and alternative priorities, and you can view the results across all participants, for one or more participant groups, or for individual participants.
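As a rough illustration of how these two kinds of priorities combine, an alternative's overall priority is the objective-priority-weighted sum of its priorities under each objective. The objective names, alternative names, and numbers below are hypothetical, not from any Expert Choice model:

```python
# Hypothetical additive synthesis: each alternative's overall priority is the
# sum, over objectives, of (objective priority) x (alternative's local
# priority under that objective). Names and numbers are illustrative only.

objective_priorities = {"cost": 0.5, "quality": 0.3, "style": 0.2}

# Local alternative priorities under each objective (each set sums to 1.0).
alternative_priorities = {
    "cost":    {"X": 0.6, "Y": 0.3, "Z": 0.1},
    "quality": {"X": 0.2, "Y": 0.5, "Z": 0.3},
    "style":   {"X": 0.3, "Y": 0.3, "Z": 0.4},
}

def synthesize(obj_w, alt_p):
    alternatives = next(iter(alt_p.values()))
    return {a: sum(obj_w[o] * alt_p[o][a] for o in obj_w)
            for a in alternatives}

overall = synthesize(objective_priorities, alternative_priorities)
# e.g. X: 0.5*0.6 + 0.3*0.2 + 0.2*0.3 = 0.42
```

Because the objective weights and each set of local priorities sum to 1, the overall priorities also sum to 1.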
Charts display results in a graphical format, such as bar charts (called Column charts) and pie charts. Charts can be exported in .png, .pdf, .svg, or multipage PDF format.
Grids display results in tabular format. Grid data can be downloaded in Excel .xlsx format.
Sensitivity Analysis with respect to changes in objective priority allows you to judge how sensitive (or insensitive) individual alternatives are to changes in objective priority.
Sensitivity Analysis with respect to changes in alternative priority allows you to judge how sensitive (or insensitive) individual alternatives are to changes in alternative priority. It is also useful for detecting bias when looking at the results for individual participants.
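A minimal sketch of the objective-priority form of this analysis, with made-up names and numbers: raise one objective's priority, rescale the others so the weights still sum to 1, and re-synthesize to see whether the ranking of alternatives changes.

```python
# Hypothetical sensitivity check (all names and numbers are illustrative):
# shift one objective's priority, rescale the rest proportionally, and
# recompute the overall alternative priorities.

objective_priorities = {"cost": 0.5, "quality": 0.3, "style": 0.2}
alternative_priorities = {
    "cost":    {"X": 0.6, "Y": 0.3, "Z": 0.1},
    "quality": {"X": 0.2, "Y": 0.5, "Z": 0.3},
    "style":   {"X": 0.3, "Y": 0.3, "Z": 0.4},
}

def synthesize(obj_w, alt_p):
    alternatives = next(iter(alt_p.values()))
    return {a: sum(obj_w[o] * alt_p[o][a] for o in obj_w)
            for a in alternatives}

def with_weight(obj_w, name, new_w):
    """Set one objective's weight and rescale the others proportionally."""
    scale = (1.0 - new_w) / (1.0 - obj_w[name])
    return {o: (new_w if o == name else w * scale) for o, w in obj_w.items()}

base = synthesize(objective_priorities, alternative_priorities)
shifted = synthesize(with_weight(objective_priorities, "quality", 0.7),
                     alternative_priorities)
# With these numbers the top-ranked alternative changes from X to Y
# once quality dominates.
```

Sweeping the weight through a range of values, rather than picking one, produces the kind of sensitivity graph described above.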
Consensus View shows you the judgments with the highest level of disagreement and allows you to drill down to a TeamTime view showing each participant's judgment. This can be very useful for uncovering bias, and for surfacing and sorting out information asymmetry among your participants.
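A sketch of how such a disagreement ranking might be computed; the comparisons and judgment values below are invented, and Expert Choice's own measure may differ. Judgments on the 1-9 ratio scale are compared in log space so that a judgment of 3 and its reciprocal 1/3 are equally far from "equal importance":

```python
import math
from statistics import pstdev

# Invented participant judgments (AHP 1-9 ratio scale, reciprocals allowed)
# for three pairwise comparisons of objectives.
judgments = {
    ("cost", "quality"):  [3, 5, 1/3, 4],   # one participant strongly disagrees
    ("cost", "style"):    [5, 5, 4, 6],     # broad agreement
    ("quality", "style"): [2, 1/2, 1/4, 3],
}

def disagreement(values):
    # Log space makes 3 and 1/3 symmetric around 1 ("equal importance").
    return pstdev([math.log(v) for v in values])

most_contested = sorted(judgments,
                        key=lambda pair: disagreement(judgments[pair]),
                        reverse=True)
# most_contested[0] is the comparison to examine first
```

Sorting comparisons this way surfaces the judgment where a drill-down to individual participants is most likely to reveal bias or information asymmetry.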
The synthesis results include priorities for the competing objectives as well as overall priorities for the alternatives. Because of the structuring and measurement methods used by Expert Choice, the results are mathematically sound, unlike those of many traditional approaches, such as using spreadsheets to rate alternatives.
But having mathematically sound results is not enough. The results must be intuitively appealing as well. The synthesis workflow step provides tools (such as sensitivity analysis and consensus measures) to allow you and your colleagues to examine the results from numerous perspectives. Using these tools, you can ask and answer questions such as, "What might be wrong with this conclusion? Why is Alternative Y not more preferable than Alternative X? If we were to increase the priority of the financial objective, why does Alternative Z become more preferable? Why might others in the organization feel that Alternative W should have a higher priority than Alternative X?"
The answers to one or more of these questions might signal the need for iteration. If, for example, you feel that Alternative Y might be more preferable than Alternative X because of its style, and style is not one of the objectives in the model, iteration is necessary. If "style" is already in the model, then does increasing its importance shift the priorities so that Alternative Y becomes more preferable than Alternative X? If not, then perhaps the judgments were entered incorrectly, and iteration to re-examine the judgments is called for. If "style" is already in the model and the judgments are reasonable, how much would the importance of style have to change before the decision was reversed? If it is just a little bit, then you might reconvene those whose role it was to prioritize style and ask them to discuss their judgments and confirm that they are reasonable.
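The "how much would the importance of style have to change" question can be explored numerically. A hedged sketch with invented names and numbers: sweep the style weight upward in small steps, rescaling the other objective weights, until the top alternative flips.

```python
# Hypothetical break-even search (all names and numbers are illustrative):
# increase "style"'s weight in 0.01 steps until Alternative Y overtakes
# Alternative X overall.

objective_priorities = {"cost": 0.5, "quality": 0.3, "style": 0.2}
alternative_priorities = {
    "cost":    {"X": 0.7, "Y": 0.2, "Z": 0.1},
    "quality": {"X": 0.4, "Y": 0.4, "Z": 0.2},
    "style":   {"X": 0.2, "Y": 0.6, "Z": 0.2},
}

def overall(obj_w, alt):
    return sum(obj_w[o] * alternative_priorities[o][alt] for o in obj_w)

def with_style(style_w):
    """Set the style weight and rescale the other objectives proportionally."""
    scale = (1.0 - style_w) / (1.0 - objective_priorities["style"])
    return {o: (style_w if o == "style" else w * scale)
            for o, w in objective_priorities.items()}

breakeven = next(w / 100 for w in range(20, 101)
                 if overall(with_style(w / 100), "Y")
                 > overall(with_style(w / 100), "X"))
# With these numbers the style weight must more than double (0.2 -> 0.44)
# before Y wins, which suggests the original conclusion is fairly robust.
```

A break-even weight far from the current one supports the decision; a break-even just beyond the current weight is the "just a little bit" case, where reconvening the participants who prioritized style is worthwhile.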