Disqualify Unacceptable Responses
Every survey sponsor wants as many participants as possible to take their survey. Law firms want oodles of employees, or swathes of invited corporations, to take part; law departments want every internal client to weigh in, or every paralegal and lawyer on the rolls. With survey data, however, quality outranks quantity: respondents must both be eligible and provide qualified answers to the online questionnaire for their answers to count. One of an analyst's harder tasks, therefore, is to cull responses that should not make their way into the survey sample, which we will refer to collectively as disqualified responses.
The analyst should screen out disqualified responses early in the effort to clean up the survey data (disqualification is itself part of that cleanup) and before preparing the subsequent analytics.
Consider the range of circumstances under which a survey response might be rejected. I have listed these from my experience and ordered them roughly from most to least frequent.
Draft reviewers. Responses entered by those who test the survey for its mechanics as well as for ambiguities or other problems (the stress-testers) are automatically thrown out. To do so, someone needs to keep track of how many people put the draft through its paces and when they put in their responses.
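As a minimal sketch of this screen, assume the responses sit in a pandas DataFrame; the email and submitted_at columns, the launch date, and the tester list are all hypothetical, not drawn from any particular survey platform:

```python
import pandas as pd

# Hypothetical response data; column names are illustrative.
responses = pd.DataFrame({
    "email": ["tester1@firm.com", "gc@acme.com", "counsel@globex.com"],
    "submitted_at": pd.to_datetime(["2024-03-01", "2024-03-12", "2024-03-15"]),
})

LAUNCH_DATE = pd.Timestamp("2024-03-10")            # survey went live here
TESTERS = {"tester1@firm.com", "tester2@firm.com"}  # tracked stress-testers

# Throw out anything entered before launch or by a known draft reviewer.
live = responses[
    (responses["submitted_at"] >= LAUNCH_DATE)
    & ~responses["email"].isin(TESTERS)
]
```

Filtering on the timestamp as well as the tester list catches reviewers who tested under an untracked address, so long as they did so before launch.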
Overtly untrue or non-genuine answers. Giveaways of this irritating status include snarky text comments, blatantly contradictory answers, completely implausible runs of answers such as ten straight rankings of “1”, or nonsensical entries. Such maliciousness rarely happens when respondents identify themselves, but anonymous submissions may be infected.
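One cheap check for the implausible-series pattern, sketched below under the assumption that the rankings live in hypothetical rank_q1 through rank_q10 columns, is to flag rows with no variation at all:

```python
import pandas as pd

# Ten hypothetical ranking questions, each scored 1 through 10.
rank_cols = [f"rank_q{i}" for i in range(1, 11)]
responses = pd.DataFrame(
    [
        [1] * 10,                         # straight-liner: ten rankings of 1
        [3, 7, 2, 9, 5, 6, 1, 8, 4, 10],  # varied, plausible response
    ],
    columns=rank_cols,
)

# A row with a single distinct value across all rankings is a strong
# hint of a non-genuine response and worth flagging for human review.
straight_liners = responses[responses[rank_cols].nunique(axis=1) == 1]
```

A flag like this only surfaces candidates; whether a flagged response is truly non-genuine remains a judgment call.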
Too many unanswered questions. All respondents are entitled to skip a question or two, but if the omissions become too pervasive, the partial response is unusable. I suspect that at times people slap in minimal data simply to obtain the report that is delivered to all “respondents,” no matter how skimpy their answers. It is a manifestation of free-loading.
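A sketch of one way to enforce a completeness floor, assuming hypothetical question columns q1 through q4 and an illustrative tolerance of half the questions:

```python
import numpy as np
import pandas as pd

# Hypothetical answers; NaN marks a skipped question.
responses = pd.DataFrame({
    "q1": [5.0, np.nan, 4.0],
    "q2": [np.nan, np.nan, 3.0],
    "q3": [2.0, np.nan, np.nan],
    "q4": [1.0, np.nan, 2.0],
})

MAX_MISSING = 0.5  # tolerate skipping up to half the questions

# Share of unanswered questions in each submission; drop the worst.
missing_share = responses.isna().mean(axis=1)
usable = responses[missing_share <= MAX_MISSING]
```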
Duplicate submissions. Here and there the same person takes the survey twice (it happens when a survey stays open for a long time and multiple invitations are emailed to the entire group). My practice is to favor the second of the two submissions, if the answers otherwise appear to match, as it may represent the person's latest and best thinking or data. Otherwise, I choose the submission with the most complete answers. In either case, it is best to write to the person, if you have their email address, and ask for their preference and any differences they recall between the submissions.
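A short sketch of the keep-the-latest rule, assuming hypothetical email and submitted_at columns identify and timestamp each submission:

```python
import pandas as pd

responses = pd.DataFrame({
    "email": ["gc@acme.com", "gc@acme.com", "counsel@globex.com"],
    "submitted_at": pd.to_datetime(["2024-03-11", "2024-03-20", "2024-03-12"]),
})

# Keep the most recent submission per respondent, on the theory that it
# reflects the person's latest and best thinking or data.
deduped = (
    responses.sort_values("submitted_at")
             .drop_duplicates(subset="email", keep="last")
)
```

Swapping keep="last" for a sort on a completeness measure would implement the fallback rule of keeping the fuller submission instead.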
Multiple submissions from the same company or law firm. If a law department surveys the law firms it has retained about outside counsel guidelines, staffing procedures, technology use, or another topic, it almost always wants to receive only a single answer per law firm. If more than one arrives, the analyst needs to decide which one to keep. Or a law firm sends survey invitations to a set of corporations and both a senior executive and a more junior person respond.
I lean toward keeping only the senior person's submission because they have the wider perspective, even though the junior person may know more about the topic of the survey. My rule of thumb admits the most senior respondent's submission, but the relative thoroughness of the responses may shift my decision. Again, it is best to write to both of them and figure out the best set of answers to keep. They might suggest an amalgamation of their replies. Conceivably you could average the responses, but only for numeric entries.
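One way to sketch the seniority rule in code, with an assumed and entirely hypothetical mapping of titles to ranks:

```python
import pandas as pd

responses = pd.DataFrame({
    "company": ["Acme", "Acme", "Globex"],
    "title": ["General Counsel", "Staff Attorney", "Deputy GC"],
})

# Hypothetical seniority ladder; a higher number means more senior.
SENIORITY = {"Staff Attorney": 1, "Deputy GC": 2, "General Counsel": 3}

# One submission per company: the most senior respondent's.
keep = (
    responses.assign(rank=responses["title"].map(SENIORITY))
             .sort_values("rank")
             .drop_duplicates(subset="company", keep="last")
             .drop(columns="rank")
)
```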
Anonymous replies. Hard-and-fast rules about what to do with anonymous answers are impossible. They could be completely genuine and valuable, but the nagging worry is that you cannot confirm whether that is true; under the cloak of anonymity they could be polluting your data set with misleading or wrong answers. Many factors weigh on whether to keep or junk mystery responses; my lodestar is that you have lost an anchor of credibility and a means to check.
Below a threshold criterion. If a person who submits a survey ranks below the level you intend to keep, such as a manager when you want only directors and above, you must adhere to your guidelines for acceptability. A survey might want to hear only from those who clear a bar, such as in-house lawyers who managed more than $250,000 of outside counsel services in the past year. If a survey response reports a fee level below that bar, it should be set aside.
Not in the target group. If a law firm wants only paralegals and paraprofessionals to complete a survey, an administrative assistant who slips in should be disqualified. Or, if a law firm has sent a survey to corporations that are believed to have earned more than $1 billion of revenue in their latest fiscal year, smaller fry should be netted out.
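Both this screen and the threshold criterion above reduce to simple row predicates. A sketch with hypothetical role, outside_counsel_spend, and company_revenue columns:

```python
import pandas as pd

responses = pd.DataFrame({
    "role": ["In-house lawyer", "Administrative assistant", "In-house lawyer"],
    "outside_counsel_spend": [400_000, 300_000, 90_000],
    "company_revenue": [2.5e9, 1.8e9, 0.4e9],
})

# Threshold criterion: more than $250,000 of outside counsel spend.
meets_threshold = responses["outside_counsel_spend"] > 250_000

# Target group: in-house lawyers at companies above $1 billion in revenue.
in_target_group = (
    (responses["role"] == "In-house lawyer")
    & (responses["company_revenue"] > 1e9)
)

qualified = responses[meets_threshold & in_target_group]
```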
Enough responses already in a stratum. To avoid an egregiously imbalanced survey set, you might shut off responses from an overloaded tranche as you try to restore balance to the overall set.
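A sketch of capping an overloaded stratum, assuming a hypothetical stratum column and an arbitrary per-stratum ceiling:

```python
import pandas as pd

responses = pd.DataFrame({
    "stratum": ["Large firm", "Large firm", "Large firm", "Small firm"],
    "respondent": ["a", "b", "c", "d"],
})

CAP = 2  # hypothetical ceiling per stratum

# Keep at most CAP responses per stratum, in arrival order, so one
# overloaded tranche cannot swamp the overall set.
balanced = responses.groupby("stratum").head(CAP)
```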
Late responses. A sponsor or two may hold firmly to the survey deadline, as when they must complete the report by a certain date, but in my experience sponsors welcome as many responses as possible and will squeeze in a latecomer if at all possible.
Disqualification need not be a binary decision that a response is fully in or fully out. I can imagine treating problematic responses by weighting their data less, but in the real world I have never tried it or heard of it being done, and weighting would raise complicated issues. No matter what, whoever prepares the survey report should explain in the report's appendix which responses were disqualified and how and why they were filtered out.
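One practical way to support that appendix is to tag each response with the reason it was disqualified rather than deleting it outright; a sketch with hypothetical columns and two illustrative rules:

```python
import numpy as np
import pandas as pd

responses = pd.DataFrame({
    "email": [None, "gc@acme.com", "counsel@globex.com"],
    "missing_share": [0.7, 0.1, 0.0],
})

# Tag each response with the first disqualification reason that applies
# rather than silently dropping rows; None means the response survives.
conditions = [
    responses["missing_share"] > 0.5,
    responses["email"].isna(),
]
reasons = ["too incomplete", "anonymous"]
responses["disqualified"] = np.select(conditions, reasons, default=None)

# Tally the reasons for the report's appendix, then keep the survivors.
appendix_counts = responses["disqualified"].value_counts()
kept = responses[responses["disqualified"].isna()]
```

Keeping the tagged rows around, instead of deleting them, makes the filtering auditable and lets the appendix report exact counts per reason.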