Avoid Bias in Item Order for Multiple-choice Questions

When you create a multiple-choice question on a survey, whether it is single-choice style or check-all-that-apply, take time to order the items available for selection sensibly. This suggestion, which should be a command, applies both to radio-button designs and to drop-down menus; both list items, and the arrangement of those items influences respondents. Poorly constructed or poorly ordered items spawn all manner of pernicious, subtle distortions; here are a handful.

• Inferred priority: By “inferred priority,” I mean that some respondents might interpret the first item or two as deliberately elevated (more socially desirable, presumably correct, or expected) and the later items as deliberately demoted (less desirable, wrong, or an afterthought).

• Primacy effect: The “primacy effect” refers to the greater likelihood that respondents will pick one of the first choices rather than read all the way through to the end and choose one of the later items. Even lawyers, who are inured to reading dense language to the end, are satisficers, so we seize upon the first plausible answer.

• Style suggestiveness: It is good practice to write each item at approximately the same length and with the same grammatical structure. The goal is parallel form: write items as “bird,” “cat,” “dog,” not “One of those avians that flies,” “Animal of feline attributes,” “Snoopyish.” You don’t want an item’s brevity or formulation to put a thumb on the scale of choice, nor its distinctive syntax to make it stand out in any way.

• Individual words: An item that includes a popular, idiomatic phrase probably gets selected more often than a dry, technical one. For an engagement survey of paralegals in a law firm or law department, “WFH freedom” would likely attract more checks than “Non-office-based alternatives.”

To combat biasing answers with ill-considered placement of items, survey designers should stay vigilant. To remedy skewing of responses brought about by careless phrasing, follow a style pattern, test the items on a careful reader, and revise, revise, revise. As to the order of the items, consider these prophylactic practices:

• Randomized for each person: Some hosting software will assemble the items in random order for each respondent (see the sketch after this list). If your app can’t handle that, you might make multiple versions of the survey, each of which arrays the items differently, and distribute the links to those versions randomly. The display order of items on a question makes no difference to your analysis software, which records which item was chosen, not where it appeared. With the order changing from respondent to respondent, you won’t have to worry about these potential distortions.

• Alphabetical: You can put items in alphabetical order, which injects a semblance of randomness. My practice is to write each item as clearly and as consistently (in form) as I can and then let Word alphabetize them. But, of course, you can cheat and change the first word of an item so that it appears higher or lower in the list. Even with alphabetical order, you might depart from strict sequence to place related ideas together so that respondents can evaluate them side by side. Thus “Management Committee” might sit next to “Practice Group Heads” if those two choices are likely to be weighed against each other.

• Typical progression: If it is a fact question, such as “What is your highest level of education?”, you might put the items in an expected, natural progression: four-year college, master’s, MBA, JD, PhD. With similar logic, if you ask about the first state of bar admission, you list U.S. states in alphabetical order, or you display the months of the year in calendar order.
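If your survey platform lets you supply the item order programmatically, the first two practices take only a few lines of code. Here is a minimal sketch in Python using only the standard library; the item labels are illustrative (two echo the example above, the rest are hypothetical), and the function names are my own, not any platform’s API:

    import random

    # Illustrative items: "Management Committee" and "Practice Group Heads"
    # echo the example above; the other labels are hypothetical.
    ITEMS = [
        "General Counsel",
        "Management Committee",
        "Office Managing Partners",
        "Practice Group Heads",
    ]

    def randomized_order(items, seed=None):
        """Return a fresh ordering for one respondent.

        Pass a seed only for testing or for pre-building fixed versions;
        in production, leave it None so every respondent gets an
        independent shuffle.
        """
        rng = random.Random(seed)
        shuffled = list(items)  # copy, so the master list stays intact
        rng.shuffle(shuffled)
        return shuffled

    def alphabetical_order(items):
        """Fallback when per-respondent randomization is unavailable."""
        return sorted(items, key=str.casefold)  # case-insensitive sort

    # Per-respondent shuffle: each call yields a different arrangement.
    print(randomized_order(ITEMS))

    # If the platform supports only fixed lists, pre-build a few versions
    # and distribute their links at random.
    versions = [randomized_order(ITEMS, seed=s) for s in range(3)]

    print(alphabetical_order(ITEMS))

Because the shuffle works on a copy, the canonical item order your analysis software relies on stays untouched; only the display order varies.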

The goal behind item presentation is to avoid biasing the opinions or thinking of respondents, as reflected in their item selections. Randomized, alphabetized, or customary orderings lead to the fairest results.

While on the subject of item construction for multiple-choice survey questions, take a moment to note three related points.

  1. Regardless of the order you impose on the main items, put “Other” as the last item and always provide a text box where you urge the respondents who mark “Other” to explain what they have in mind.

  2. The number of choice items should probably be no more than 6 to 8. Those who take your survey are unlikely to hack through the dense underbrush of double-digit items.

  3. If respondents can choose more than one item, the biases of order may be muted because the respondent has in mind that several items might deserve to be checked. Privileging any of them by how they are written, however, still tips the scales against impartiality.