Be Sensitive to Prioritizing Questions

A popular format in online surveys asks respondents to consider a list and assign priorities. One such question might read, “Consider the following challenges of working from home as compared to working at the law firm:” followed by the list “inadequate software, interruptions, noise, pets, social isolation, work-life balance, Other.” [Note that the selections are in alphabetical order, with “Other” at the end.] The instructions then tell respondents to assign those list items one of several kinds of priorities.

Ranking questions. One variant asks that the list be put in numeric order by priority. It asks survey respondents to compare a list of items to one another (e.g., “Please rank each of the following items in order of importance, with #1 being the most important and #10 being the least important.”). For our WFH question, the instructions would tell respondents to assign a number from 1, for the biggest challenge, to 7, for the least important. Note that the “gaps” between the ranked selections cannot be distinguished; they are all treated as if the same. The item ranked 1 is exactly as “far” from the item ranked 3 as the item ranked 5 is from the item ranked 7.
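For survey teams that clean their own data, a minimal Python sketch shows the basic check that a ranking response is complete and uses each rank exactly once (the item list comes from our WFH example; the function name is illustrative, not from any survey platform):

```python
# Hypothetical sketch: validate a ranking response for the seven WFH
# challenges. A complete ranking must use each of the ranks 1..7 once.
WFH_ITEMS = ["inadequate software", "interruptions", "noise", "pets",
             "social isolation", "work-life balance", "Other"]

def is_valid_ranking(response: dict) -> bool:
    """True if every item was ranked and the ranks are exactly 1..7."""
    return (set(response) == set(WFH_ITEMS)
            and sorted(response.values()) == list(range(1, len(WFH_ITEMS) + 1)))

example = {"pets": 1, "inadequate software": 2, "interruptions": 3,
           "noise": 4, "social isolation": 5, "work-life balance": 6,
           "Other": 7}
print(is_valid_ranking(example))  # True
```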

Rating questions. Another variant asks respondents to assign each list item a rating on a scale. Our rating question might state, “Please rate each of the following WFH challenges on a scale from 1 to 10, where 1 is ‘not at all important’ and 10 is ‘very important.’” For an image of such a question, see SnapSurveys.

Unlike with rankings, more than one item might receive the same rating. Usually when you define a rating scale, you specify an odd number of ratings so that the middle value of the scale represents neutral or no opinion. Other survey designers want to force respondents to take a position and so give them an even number of ratings (such as 1 to 4). If someone doesn’t rate an item, what should you do? My default solution is to assign the item the lowest rating (but ideally to confirm that with the respondent).
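As a rough sketch of that default (assuming a 1-to-10 scale; the helper name and items are hypothetical), skipped items can be imputed with the lowest rating before analysis:

```python
# Sketch: replace skipped (None) ratings with the lowest scale point.
# Scale bounds and item names are illustrative assumptions.
SCALE_MIN = 1  # lowest rating on the assumed 1-10 importance scale

def fill_missing_ratings(response: dict) -> dict:
    """Impute the lowest rating for any item the respondent skipped."""
    return {item: (SCALE_MIN if rating is None else rating)
            for item, rating in response.items()}

partial = {"pets": 7, "noise": None, "interruptions": 4}
print(fill_missing_ratings(partial))
# {'pets': 7, 'noise': 1, 'interruptions': 4}
```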

Instructions are particularly important for prioritizing questions because you want to avoid someone inadvertently reversing the rank order or scale. Your instruction might say, “This question uses 1 to rate the least desirable method of outside counsel cost control; it uses 10 to rate the best method. Be sure to follow that rating order in your answers.” You might also resort to font choices, such as bolding or coloring “least desirable” and “best.” We have discussed other measures to encourage compliance in “Add Instructions.”

Forced rank questions. Here, the question instructs respondents to assign ratings, but with the difference that constraints are imposed on the distribution of the ratings: e.g., no more than one third of the items may be rated 1-2 (high), one third 3-4 (medium), and one third 5-6 (low). Alternatively, no more than two “high” ratings and two “low” ratings. A third possibility would require that for each “high” there must be a “low,” or any other ratio between the ratings. It may, however, demand too much of your hosting software to enforce such constraints.
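If your hosting software cannot enforce such constraints, the check is simple to run on the responses afterward. Here is a hedged Python sketch of the first rule above (the bands and the one-third cap follow the example in the text; the function name is hypothetical):

```python
import math

# Sketch: on a 1-6 scale, no band (1-2 high, 3-4 medium, 5-6 low)
# may hold more than one third of the rated items.
def satisfies_forced_distribution(ratings: list) -> bool:
    """Check that no band exceeds one third of all ratings."""
    cap = math.ceil(len(ratings) / 3)               # max items per band
    high = sum(1 for r in ratings if r in (1, 2))
    medium = sum(1 for r in ratings if r in (3, 4))
    low = sum(1 for r in ratings if r in (5, 6))
    return max(high, medium, low) <= cap

print(satisfies_forced_distribution([1, 2, 3, 4, 5, 6]))  # True
print(satisfies_forced_distribution([1, 1, 1, 1, 5, 6]))  # False
```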

Percentage allocation questions. In this format, survey takers are told that they have 100 points to distribute among the items according to how important they deem each one. The percentages assigned must total 100, and the respondent need not assign a percentage to every selection. [Note how relatively complex the instructions must be. Further note that the hosting software should check that the total equals 100.]
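That total-equals-100 check is easy to express. A minimal Python sketch (the function name is hypothetical; a float tolerance is used because allocations may arrive as decimals):

```python
import math

def allocations_valid(allocations: dict) -> bool:
    """True if all allocations are non-negative and total 100."""
    return (all(pct >= 0 for pct in allocations.values())
            and math.isclose(sum(allocations.values()), 100))

print(allocations_valid({"pets": 30, "inadequate software": 15,
                         "interruptions": 10, "social isolation": 45}))  # True
print(allocations_valid({"pets": 30, "noise": 60}))                      # False
```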

Compared to ranking and rating questions, percentage allocations can finely differentiate between individual items. Someone might rank pets first, software second, and interruptions third, which says nothing about their relative weights beyond that ordering. The sponsor will learn much more if someone assigns pets 30%, software 15%, interruptions 10%, and so on. More sophisticated quantitative analyses become available with percentages than with rankings.
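To illustrate that analytic gain (the responses below are invented for the example), percentage allocations support averages of real weights across respondents, which ranks alone cannot supply:

```python
from statistics import mean

# Invented responses; an unallocated item counts as 0 for that respondent.
responses = [
    {"pets": 30, "inadequate software": 15, "interruptions": 10},
    {"pets": 20, "inadequate software": 25, "interruptions": 5},
]

items = sorted({item for r in responses for item in r})
for item in items:
    avg = mean(r.get(item, 0) for r in responses)
    print(f"{item}: average allocation {avg:.1f}%")
# inadequate software: 20.0%
# interruptions: 7.5%
# pets: 25.0%
```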

Each variant of a prioritizing question demands much more thought from a conscientious respondent than does the simpler “Check the most important” or even “Check the two most important.” Each requires the person answering to evaluate the entire list of items relative to each other in stack order (ranking) or against a standard scale (rating). If the ratings must follow a prescribed distribution (forced ranking) or sum to 100 (percentage allocation), the task becomes even more difficult.

These questions are much more cognitively demanding. As a result, the analyst can be less sure that the answers are diligent and thoughtful. Worse, the questions might fatigue or frustrate respondents to the point that they drop out. They do, however, allow more insightful findings, and thus present a trade-off.