Deal with the Clean-up Challenges of Changing the Question Set
We have written about the advantages of modifying questions, or the selections offered with them, while a survey is still in the field. If those are the only changes you make, your clean-up tasks will stay light. However, if you drop a question from the survey or add one, your program code will become more convoluted.
One reason for this additional burden arises if you rename your variables to short names (doing so makes code easier to write and improves the look of axis titles on graphs, among other advantages). If you drop a question, you must remember to modify your renaming steps correspondingly. If you simply add a question or two, you must remember to rename the new questions and adjust the order in which you rename all subsequent questions. Other programming consequences follow as well, such as when you select a range of variables to analyze and an intruder question now lurks in the midst of them.
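To make that concrete, here is a minimal sketch in Python with pandas (the column names and short names are hypothetical). Renaming through an explicit mapping, rather than by position, keeps a dropped or added question from silently shifting every later name.

```python
import pandas as pd

# Hypothetical raw export: survey platforms typically label columns Q1, Q2, ...
df = pd.DataFrame({
    "Q1": [5, 3, 4],
    "Q2": ["yes", "no", "yes"],
    "Q3": [120, 85, 240],
})

# Rename with an explicit mapping rather than by position, so a dropped
# or added question cannot silently shift every later name.
rename_map = {"Q1": "satisfaction", "Q2": "remote_work", "Q3": "headcount"}
df = df.rename(columns=rename_map)

# Fail fast if the question set changed since the mapping was written.
unmapped = set(df.columns) - set(rename_map.values())
assert not unmapped, f"Unmapped questions: {unmapped}"
```

The assertion at the end is the point: when the question set changes, you want the code to complain loudly rather than quietly mislabel your variables.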
Additionally, if you follow sound methodology and state how many people responded to each question when you report data in a table or graph, you must wrestle with the fact that your “N = ” captions or disclosures will vary from question to question. Only later respondents will have answered the new questions, and those later respondents may never have been presented with the questions you dropped.
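One way to cope, sketched below in pandas with made-up data, is to compute each question’s N from the non-missing answers rather than assuming a single N for the whole survey.

```python
import numpy as np
import pandas as pd

# Hypothetical data: NaN marks a question a respondent never saw,
# because it was added after (or dropped before) they took the survey.
df = pd.DataFrame({
    "headcount":    [120, 85, np.nan, 240],
    "new_question": [np.nan, np.nan, 3, 5],
})

# Count non-missing answers per question; df.count() ignores NaN.
n_per_question = df.count()
caption = f"New question (N = {n_per_question['new_question']})"
print(caption)  # New question (N = 2)
```

Be aware that a blank cell also covers a voluntary skip, so you will need to track “skipped” and “never shown” separately if you want your N disclosures to distinguish the two.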
We can drill down one more step. If you revise the selections for a multiple-choice question, you lose the straightforward way to calculate the percentage of respondents who chose a particular selection or combination of selections (if they are able to “click all that apply”). I suspect you can handle this calculation by weighting the newly added selections once the survey closes, or at least by basing each selection’s percentage only on the respondents who actually saw it.
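For instance, if each selection is stored as its own 0/1 column, with blanks for respondents who answered before the selection existed, the adjusted percentages fall out naturally. A sketch, again with hypothetical data:

```python
import numpy as np
import pandas as pd

# Hypothetical "click all that apply" data, one 0/1 column per selection;
# NaN means the selection did not exist when that respondent answered.
df = pd.DataFrame({
    "chose_email": [1, 0, 1, 1],
    "chose_phone": [0, 1, 0, 1],
    "chose_chat":  [np.nan, np.nan, 1, 0],  # selection added mid-survey
})

# Base each percentage on the respondents actually shown the selection:
# mean() and count() both skip NaN by default.
pct = df.mean() * 100
n_shown = df.count()
for col in df.columns:
    print(f"{col}: {pct[col]:.0f}% (N = {n_shown[col]})")
# chose_email: 75% (N = 4)
# chose_phone: 50% (N = 4)
# chose_chat: 50% (N = 2)
```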
Even more difficult, if you ask for a ranking of the selections, the addition of a new selection means that earlier respondents could not have ranked it. Other than as speculated above, I don’t know an algorithmic way to resolve these two challenges. If you merely reword a selection, such as revising “number of offices” to “number of city locations,” that may not make much difference in how accurately people answer, but one can imagine a substantive shift in the results from some rewordings because the interpretations shift.
Likewise, if you change the instructions beyond clarifying how someone should craft her answer (“Do not insert dollar signs.”), anyone who responds from that point on may give a somewhat different answer. For example, suppose you ask for the number of a law firm’s locations and later add the instruction that a location must have at least one permanent partner and one permanent associate (to exclude offices that are just mail drops); you will have introduced an inconsistency between how the before-instruction and after-instruction respondents answered that question. Or perhaps you add an instruction to convert foreign currencies to U.S. dollars, but you can’t be sure that all respondents adhere to that step or do so correctly and consistently (on what date should the conversion rate be set?).
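At a minimum, you can flag which respondents answered before and after the change, so the two cohorts can be compared or disclosed separately rather than silently pooled. A sketch, assuming you have a submission timestamp and know the (made-up) date of the change:

```python
import pandas as pd

# Hypothetical responses with submission timestamps, plus the date on
# which the instruction was revised.
df = pd.DataFrame({
    "revenue":   [2.1, 1.8, 3.0, 2.4],
    "submitted": pd.to_datetime(
        ["2024-03-01", "2024-03-05", "2024-03-12", "2024-03-20"]),
})
INSTRUCTION_CHANGED = pd.Timestamp("2024-03-10")

# Tag each respondent's cohort so the groups can be compared or
# reported separately instead of silently pooled.
df["cohort"] = (df["submitted"] >= INSTRUCTION_CHANGED).map(
    {True: "post-instruction", False: "pre-instruction"})
print(df.groupby("cohort")["revenue"].mean())
```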
One of the most irritating knock-on effects of modifying a survey hits you when you merge the responses from the two (or more) versions of the survey. That seemingly innocuous move confronts the analyst with a host of issues: columns that exist in one version but not the other, the same question stored under different names, and the varying Ns described above.
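For example, when the two versions are stacked, every question that appears in only one version produces a column of blanks for the other version’s respondents, and you must decide question by question what those blanks mean. A sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical exports from the two versions of the survey; a question
# was reworded and renamed ("offices" became "city_locations") between them.
v1 = pd.DataFrame({"headcount": [120, 85], "offices": [3, 1]})
v2 = pd.DataFrame({"headcount": [240, 60], "city_locations": [5, 2]})

# Tag each frame so the version survives the merge; pd.concat aligns on
# column names and fills the gaps with NaN.
v1["version"], v2["version"] = 1, 2
merged = pd.concat([v1, v2], ignore_index=True, sort=False)

# "offices" is now NaN for version-2 respondents and "city_locations" is
# NaN for version-1 respondents. Whether to collapse the two columns into
# one is an analytic judgment, given the interpretation shift noted above.
print(merged)
```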
Here’s the point. You need to balance the urge to refine and extend your survey against the back-end challenges that such revisions create. You will also need to explain in your methodology section any changes you have made that might influence the findings.