Change the Question Set, but Cautiously
You want as many people as possible to complete your survey. For that reason, as you settle on the first version of the questionnaire, you inevitably drop questions that would stretch the survey discouragingly long – who will trudge through 55 questions? Or you’d love to know about the law department’s square footage, but that inquiry takes research for respondents to answer. Perhaps you admit that you can’t phrase the question or its selections in an unambiguous, answerable way. Even so, such candidate questions stay on your waiting list, so to speak.
Alternatively, as you study the responses that come in, you may realize that you hadn’t thought of an insightful question or two. In one survey of Executive Directors of law firms, we asked about years in the current role, but not about total years as an ED (across more than one law firm). We tacked on the new question.
If you insert new selections into a multiple-choice question, drop a selection, or even rewrite one, that change can throw off your overall conclusions. For example, a ranking of selections A through F by the first 50 respondents won’t match the ranking of selections B through G if you drop A and add G. Or in a law department’s survey of outside counsel management techniques, if you reword selection A from “Outsourcing work” to “Relying on Alternative Legal Service Providers (ALSPs)”, you disrupt the analysis, because participants might interpret those two selections as quite different. At the least, your methodology section ought to explain the modification and how it might affect your findings.
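To make that hazard concrete, here is a minimal sketch in Python with pandas (the responses are invented): once the choice set changes, any ranking has to be computed separately for each version of the question rather than pooled across all respondents.

```python
import pandas as pd

# Invented illustration: each row is one response to a pick-one question,
# tagged with the questionnaire version in force when it was submitted.
responses = pd.DataFrame({
    "version": ["v1"] * 5 + ["v2"] * 5,
    "choice":  ["A", "A", "B", "C", "B",   # v1 offered selections A-F
                "G", "B", "G", "C", "B"],  # v2 dropped A and added G
})

# Ranking all ten responses together silently mixes two different choice sets.
print(responses["choice"].value_counts())

# Segmenting by version keeps each ranking internally consistent.
for version, group in responses.groupby("version"):
    print(f"\n{version} (N = {len(group)})")
    print(group["choice"].value_counts())
```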
A third circumstance might encourage you to modify your survey: you might conclude that the answers so far have been so consistent that piling up more of them adds no further value. For example, you have established that Executive Directors report to only one of three positions. You drop the question. For any of these reasons, you may come to realize that you should weigh changing the question (or selection) set.
Now, you might foreclose that option because you want consistency with previous surveys. The “Annual Legal Survey” may value trend data above all else. Or an alteration to the survey may risk disrupting your hosting software or the analysis techniques and program code that you have so arduously constructed.
But my belief is that a survey has a life, and change is part of life. Furthermore, you are conducting research and want to dig as deeply as you can.
Therefore, once you have a respectable data set, if you send additional invitations to people or organizations, you can readily add a few questions (especially if you discard a few at the same time). For example, in a survey of law firm Chief Operating Officers, it became clear that their reporting lines fell into an evident pattern. After 150 responses, that pattern was not going to shift, so why keep asking the question? Out it went, and in came three new questions that explored fresh topics.
Those appended questions generated more insights, and adding them pays off in several ways. First, even though the number of respondents falls well below the total “Ns” of the questions asked all the way through, you can still learn something: if twenty or more COOs tell you about their prior position, you can populate a respectable plot and draw preliminary conclusions (see the sketch after this paragraph). Second, when you publicize your survey, you have one more finding or topic that you can tout. Third, if you conduct the survey the next year (or quarter), you will have field-tested the question – always a desirable result if you are wary of its wording, have concerns about whether people will answer a sensitive question, or fret about the quality of a selection list. Fourth, once you have accumulated enough survey responses to secure the research goals of your inquiry, you can be more daring in what you add to the mix. For example, you might include a question about gender or ask for a more detailed ranking of selections instead of “pick the top three.”
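As a minimal illustration of plotting a late-added question with a modest N, here is a sketch in Python with pandas and matplotlib; the role categories and tallies are hypothetical, and the key habit it shows is putting the N right in the title.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical tallies of a late-added question ("What was your prior
# position?") answered by only a subset of respondents.
prior_roles = pd.Series(
    {"CFO": 7, "Director of Administration": 6, "Consultant": 4, "Other": 3}
)

fig, ax = plt.subplots()
prior_roles.plot.bar(ax=ax)
# State the N in the title so readers see how preliminary the finding is.
ax.set_title(f"Prior position of law firm COOs (N = {prior_roles.sum()})")
ax.set_ylabel("Respondents")
fig.tight_layout()
plt.show()
```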
If you revise the questions in an online survey, you should clone the first survey, modify the clone, and use the link to the new version from then on. That method makes it easier to track uptake of the revised survey. Elsewhere I discuss ways to identify the source of a response.
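If your hosting platform lets you export each survey’s responses separately, a short pandas sketch (the file and column names are assumptions, not any platform’s actual export format) shows one way to tag and combine the two versions so uptake of the clone stays easy to track.

```python
import pandas as pd

# Hypothetical exports: one CSV per survey version.
original = pd.read_csv("coo_survey_v1.csv")
revised = pd.read_csv("coo_survey_v2.csv")

# Tag every response with its questionnaire version before combining,
# so uptake of the revised survey is easy to track.
original["version"] = "v1"
revised["version"] = "v2"

combined = pd.concat([original, revised], ignore_index=True)
print(combined["version"].value_counts())
```

The same version tag then lets you filter or segment every later analysis, which matters whenever a question or selection changed between versions.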
Remember that when the data you report comes from widely different numbers of respondents, it becomes imperative to state the “N = ” for each question – the number of respondents who answered that particular question. However, I know of no similarly simple method to report how many selections a respondent faced on a multiple-choice question.
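Computing those per-question Ns is straightforward. A sketch, assuming a single exported table (hypothetical file name) where each column is a question and blank cells mark questions a respondent skipped or never saw:

```python
import pandas as pd

# Hypothetical combined export: one column per question, blanks for skips.
combined = pd.read_csv("coo_survey_combined.csv")

# Count non-blank answers per question: the "N = " to report with each finding.
n_per_question = combined.notna().sum()
for question, n in n_per_question.items():
    print(f"{question}: N = {n}")
```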
On a side note, I believe that changing the order of questions also raises issues of interpretation. If you ask respondents about the disadvantages of working from home before you ask them about the advantages, but later swap the order of those two questions, have you subtly signaled your view of the matter? Will respondents answer differently when the positive question comes first than when the negative one does?