Set Thresholds for Partial Responses
One of the frustrations of conducting surveys on behalf of law firms, law departments, or legal vendors is the partial response that occasionally comes back. By partial response, I mean that the respondent has left a material number of the questionnaire's questions unanswered or has plugged in obviously unconsidered answers for some of them, such as all “2’s” on 10 straight 1-to-10 evaluation questions. In my comments below I treat the two situations the same.
How you define a partial response for any given survey depends in part on how many questions you have asked, and thus on the proportion of answered questions to the total. In my consulting practice, I cast a critical eye on responses that have fewer than three-quarters of the questions answered.
As a rough rule of thumb, you should probably be able to live with a response that is two-thirds complete. Somewhat analogously to accepting participation rates of roughly two-thirds, this seems a realistic and pragmatic threshold.
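The two-thirds rule of thumb, together with the straight-run-of-identical-ratings pattern mentioned at the outset, can be sketched in a few lines of Python. This is an illustrative sketch only; the function names, the dictionary layout for a response, and the two-thirds default are my assumptions, not a prescription.

```python
def is_partial(response: dict, num_questions: int, threshold: float = 2 / 3) -> bool:
    """Flag a response whose answered-question ratio falls below the threshold.

    `response` maps question IDs to answers; an unanswered question is
    absent or None. The two-thirds default mirrors the rule of thumb above.
    """
    answered = sum(1 for v in response.values() if v is not None)
    return answered / num_questions < threshold


def is_straight_lined(ratings: list, min_run: int = 10) -> bool:
    """Flag an obviously unconsidered run of identical ratings,
    e.g. all 2's on 10 straight 1-to-10 evaluation questions."""
    run = 1
    for prev, cur in zip(ratings, ratings[1:]):
        run = run + 1 if cur == prev else 1
        if run >= min_run:
            return True
    return False
```

A response answering two of four questions would be flagged as partial (0.5 falls below two-thirds), while one answering three of four would pass (0.75 clears it).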
More subtly, however, some of your questions may seek more important information than others, so a non-answer to a relatively peripheral inquiry matters less than a blank on a key question. If a technology survey includes a table that asks, “How do you rate your facility with the following software?”, leaving that table blank eviscerates the survey. Given that void, and even with only that void, you might discard the response as partial. By contrast, a blank answer to “What year did you join the firm?” might be far less crucial to your analysis of the software survey’s data.
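One way to encode that distinction between key and peripheral questions is to let a blank on any designated key question disqualify a response regardless of its overall completeness. The question IDs below are hypothetical, chosen to echo the software-survey example; this is a sketch, not a fixed design.

```python
# Hypothetical question IDs for the software survey described above.
KEY_QUESTIONS = {"software_ratings"}    # the survey's sine qua non
PERIPHERAL_QUESTIONS = {"year_joined"}  # nice to have, not essential


def usable(response: dict) -> bool:
    """A response is unusable if any key question is blank,
    however complete it is otherwise; peripheral blanks are tolerated."""
    return all(response.get(q) is not None for q in KEY_QUESTIONS)
```

Under this sketch, a response that rates the software but omits the joining year survives, while one that supplies only the joining year does not.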
Sponsors and analysts of surveys may be miffed at partial responses, but the omissions might be quite innocent. A respondent might simply not know the answer to a question such as “How frequently did you ask for help from the law department in the last six months?” If there is no option for “not applicable” or “do not know,” they may skip the question. Then, too, a question might be unclear or ambiguous, without adequate clarifying instructions, so the respondent struggles a bit, gives up, and passes on. Or a question might be deemed too personal or sensitive, such as one asking about sexual orientation or physical health.
Further along the lines of not blaming the respondent, you might have to acknowledge that the questionnaire has flaws that lead to incomplete responses (perhaps a skip-logic question that misfires), or that the survey is so long that fatigue sets in and the tired, resentful respondent jumps over arduous or complicated questions. Nothing like a table with eight rows and five columns to be filled in to squash enthusiasm!
Alternatively, a person might skip a question because they need to rustle up a piece of information or want to ponder their response, but then forget to return to the question or can’t manage to navigate back.
Preempting partial responses, or curing them once submitted, is possible, quite aside from attending to the quality and number of your questions. But it is not as simple as flagging many questions, or at least the primary ones, as required: it is quite plausible that people will then not respond at all if they can’t get past the required questions or resent being forced to answer them.
A survey might be able to highlight a missing answer by nudging the respondent (“Did you mean to leave Question 16 blank?”). Whether such a real-time nudge is possible in your software, I don’t know. Short of that capability, you might preface key questions, the sine qua nons of your survey, with text that explicitly explains how crucial it is that the respondent answer them.
Assuming you have asked for the email address of the respondent, you might write back to someone who submitted a patchwork of answers and explain that omitted answers significantly dilute the value of their opinions and knowledge. If you write to a person who left too many questions unfilled, your survey software needs to accommodate either multiple submissions from the same person or the reopening of a submitted survey. More severely, if you are fielding a compensation survey and a respondent does not give a base salary figure, the person behind that materially deficient answer might not deserve to receive the resulting report of aggregated results.
Let’s close with a deeper question. Even if a person has not answered a majority of the questions, the ones they did answer may well be important to them, so why shouldn’t you include those answers? I believe doing so is professionally and methodologically sound. So long as questions do not interlock with each other, that is, so long as you can treat them on a standalone basis, even largely incomplete questionnaires still bring value through the questions that were answered. Also, when you conduct your post-mortem on the survey project, spend time on any question that had a high percentage of empty responses to figure out why that might have happened.
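For that post-mortem, tallying the blank rate per question takes only a few lines; the questions with the highest rates are the ones to scrutinize for ambiguity, sensitivity, or fatigue. The data layout below (a list of response dictionaries keyed by question ID) is an assumption for illustration.

```python
from collections import Counter


def blank_rates(responses: list[dict], question_ids: list[str]) -> dict[str, float]:
    """Return the fraction of respondents who left each question blank."""
    blanks = Counter()
    for r in responses:
        for q in question_ids:
            if r.get(q) is None:
                blanks[q] += 1
    return {q: blanks[q] / len(responses) for q in question_ids}
```

Sorting the result by rate, highest first, surfaces the candidates for redesign in the next round of the survey.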