Track Methods and Sources of Responses

Assuming you try more than one method to reach potential participants, you will want to learn which sources or methods were more successful. You would like to know absolute numbers of participants – “Method X produced 25 participants” – and proportionate numbers – “ZZ percent of the people sourced by phone took part.” On the negative side, you also want a handle on where efforts to publicize the survey produced too few new survey takers. Perhaps you purchased a list of email addresses for general counsel of U.S. companies and sent invitations to it (both a method and a source). Maybe you mailed snail-mail invitations. Was the investment worthwhile?

For example, you might send email invitations from your contacts in different tranches or from more than one mailing list. You would like to gauge the response rates because you altered the invitation text (a modest A/B test) or because you reached out to different groups (Associate GCs instead of GCs, perhaps). You would also like to know what percentage of your survey participants came from your own email compilations.

Here is a second reason to track numbers of responses. Suppose you agree to pay a vendor or consulting firm a fee unless the invitations sent to their mailing list generate fewer than a minimum threshold number of respondents. [This indirect invitation preserves the confidentiality of their mailing list, although it might duplicate some invitations.] Unless you can count how many responses their efforts produced, you cannot have such an arrangement. What are ways to pin participants to specific methods or sources?

  • Identifiable URL: Some hosting software allows you to append an identifier to the base URL of your survey so that you can track the source from that parameter as it is reported. When you use a new URL, you will know how many invitees took you up on that version of the survey. This feature allows you to track which mailing lists accounted for which submitted responses.

  • Identifiable Question: Another technique to help you tally sources or methods is to tweak the wording of one question, which then tags responses as coming from a certain source. This approach requires sorting the response pool by that question and counting.

  • Closing Directive: Alternatively, you might vary the “thank you” message at the end of the survey, asking the person who has just finished to email you, with a prompt that identifies the source. Unfortunately, you cannot depend on everyone obliging.

  • Matching Emails: Although the task is painstaking, you can match the email addresses of respondents to the source emails. That way you can attribute XX participants to the YY mailing list. Software can do the matching, but the effort requires attention to detail. Also, every now and then you send your invitation to one email address, such as the work email of a CFO at a law firm, but the response arrives with a personal email address, and the match fails.
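The “Identifiable URL” bullet above can be sketched in a few lines of Python. The base URL, the `src` parameter name, and the source labels here are all hypothetical; check your survey host’s documentation for the parameter it actually records and reports back.

```python
from urllib.parse import urlencode

# Hypothetical base URL for a hosted survey; the "src" parameter name is an
# assumption -- use whatever identifier your hosting software reports back.
BASE_URL = "https://example-surveyhost.com/s/lawdept2024"

def tracked_url(source: str) -> str:
    """Return the survey URL tagged with a source identifier."""
    return f"{BASE_URL}?{urlencode({'src': source})}"

# One distinct link per mailing list or outreach method.
for source in ["own_contacts", "vendor_list", "linkedin_post"]:
    print(source, "->", tracked_url(source))
```

Each mailing then gets its own link, and the host’s reports group submissions by the value of that parameter.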
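The email-matching bullet above can also be sketched, assuming you have the respondent addresses and each source list in hand. The addresses and list names below are hypothetical, and as the text notes, a respondent who answers from a personal address will still fail to match.

```python
# A minimal sketch of matching respondent emails to source mailing lists.
# All addresses and list names are invented for illustration.

def normalize(email: str) -> str:
    """Lowercase and strip whitespace so trivial differences don't block a match."""
    return email.strip().lower()

def attribute(respondents: list[str], sources: dict[str, set[str]]) -> dict[str, list[str]]:
    """Map each source (mailing list) to the respondents found on it."""
    matched = {name: [] for name in sources}
    matched["unattributed"] = []
    for email in respondents:
        e = normalize(email)
        for name, addresses in sources.items():
            if e in addresses:
                matched[name].append(e)
                break
        else:
            # e.g., the invitee answered from a personal address
            matched["unattributed"].append(e)
    return matched

sources = {
    "own_contacts": {normalize(e) for e in ["gc@acme.com", "counsel@bigco.com"]},
    "vendor_list": {normalize(e) for e in ["cfo@lawfirm.com"]},
}
respondents = ["GC@acme.com ", "someone@gmail.com"]
print(attribute(respondents, sources))
```

Normalizing both sides before comparing catches capitalization and stray-whitespace mismatches, but not the work-versus-personal address problem.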

Despite your best efforts, with a multi-pronged campaign you won’t be able to attribute a specific method or source to every response.

One cause of this shortfall in one-to-one links arises if you publish articles or post on social media about your survey. If you included your generic survey URL, you cannot trace responses back to the source; it will be hard to identify what led a person to take your survey. On social media, however, you can use a URL shortener, such as Bitly, and at least know how many people clicked on that short link (whether they carried on to complete the survey you cannot know).

Time frames of responses are not a reliable way to identify methods or sources. People may wait weeks to get around to completing a survey, so you cannot be sure whether responses received after the date of a second method (or source) are attributable to it or to the earlier wave of invitations. Likewise, if you encourage those you invite to forward the invitation to peers (aka **snowball participation**), or if you thank participants and urge them to enlist peers, you will receive responses that trace back to “old” invitations. You will be unsure of the sources of some respondents.

Mindful of all the uncertainties of the above methods, you might simply ask respondents where they heard about the survey. I ran a huge benchmarking survey for law departments where I asked a final question, “How did you learn about the survey?”, so that I could do better the next year and so that I could thank the people and groups that had helped me. I ended up creating a drop-down list so that I would not have to code all the variations by hand. I added to the drop-down choices as more sources came online. Here, too, not everyone bothers to complete the question.
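Tallying the answers to such a final question is a one-liner once the responses are in a list. The answer strings below are invented for illustration; blanks stand in for the respondents who skipped the question.

```python
from collections import Counter

# Hypothetical drop-down answers to a final "How did you learn about the
# survey?" question; empty strings represent respondents who skipped it.
answers = [
    "Email from you", "LinkedIn post", "Colleague forwarded it",
    "Email from you", "", "LinkedIn post", "Email from you",
]

# Count each source, skipping blanks -- not everyone answers.
tally = Counter(a for a in answers if a)
for source, count in tally.most_common():
    print(f"{source}: {count}")
```

`most_common()` sorts the sources by frequency, which is usually the order you want for a quick read on which outreach paid off.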