Deliberately Order the Questions

For most surveys, the questionnaire grows as the sponsor thinks of information or opinions they would like to collect. Only when the sprawl of brainstormed questions peters out does the project team step back and consider the flow of the questions in terms of logic, priorities, time demands, and the effort required of respondents.

My vote goes to starting with easy questions, such as demographics. I am against delaying demographics, such as gender, to the end. A person who has revealed much in their answers doesn’t want to discover that you ask at the end for their name, age, e-mail, and income. After the initial, automatic questions, you move to positive, feel-good questions. Respondents feel comfortable answering them and speed from one to the next.

In the middle are the meat-and-potatoes questions that most interest the sponsor and probably demand the most thought from the respondents.

The most complex or controversial few inquiries come at the end. The respondents may be fatigued, but they should be able to see that the finish line is close. Considered from a different perspective, once a person has spent time on a survey and invested in its topic, they might be more willing to persevere through the hardest parts. This psychology takes advantage of the sunk-cost effect.

A well-disciplined questionnaire also groups related questions, rather than forcing respondents to jump back and forth between topics. For example, cluster the questions about cybersecurity or the questions about paid-time-off policy.

When you group related questions, it’s good to insert an explanatory note at the start of the group that explains why the following questions focus on a certain topic. You can also be more efficient by defining key terms: “In the following set of questions, the term ‘fixed fees’ means that the law firm has agreed to perform specified tasks for a set amount.”

However, grouping related questions admits an exception. To learn important attitudes, you might ask for a ranking of priorities early in the survey and then later ask the same questions worded in the negative. Thus, the first (positive) version asks, “What are the advantages of learning software on your own?”; the second (negative) version turns it around: “What are the disadvantages of learning software on your own?” This technique of positive/negative paired questions, separated by an intervening question or two, reduces the risk that a person misreads the ranking scale. It also avoids some of the bias of a purely positive or purely negative framing.

I advocate that survey designers put the broadest, most thought-provoking questions at the end, such as free-text comments, because the preceding material has warmed up the participants and gotten them thinking about the topic. True, this strategy risks priming respondents and seeding their thoughts with the earlier questions, but counterbalancing that risk is the concerted mental energy you have nurtured.