Publishing date October 2003. © 2003. All rights reserved. Copyright rests with the author. No part of this article may be reproduced without written permission from the author.


On-Line Survey Response Rates: Any Tricks of the Trade?

by Annie Pettit, Ph.D.


Response rates are currently the #1 issue in market research. As researchers, our first priority should be to determine why response rates are declining, and our second to discover what can be done to slow or stop the decline. The goal of this article is to consider issues related to slowing the decline for on-line surveys. A number of things are thought to affect survey response rates. Subject lines and invitation letters are usually at the top of the list, but other features can be equally or more important.

All of the surveys used in this research were similar in type and were sourced from the same opt-in list. Each survey was no longer than about 20 questions and asked about past and future purchases, as well as recall of receiving samples and/or coupons. The surveys generally did not include sensitive items such as ethnicity or income, and all questions were optional unless a skip pattern required a response.

Subject Lines
Your responder's first impression of the survey comes from the subject line. As we all know, you never get a second chance to make a first impression, and the "delete" button quickly eliminates your second chance. The simplest feature of a subject line to test is its length, in number of characters. The average e-mail program displays fewer than 75 characters of the subject line, and many display considerably fewer than that. Subject lines therefore need to be enticing, informative, and recognizable, and at the same time as short as possible. We tested 12 unique subject lines ranging from 21 to 50 characters, each accompanied by the same invitation letter and survey.
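For readers setting up a similar test, here is a minimal sketch of how subject-line cells might be assigned and tallied. Everything in it is illustrative: the variant texts are invented, and the stable-hash assignment is one reasonable approach, not necessarily the one we used.

    import hashlib

    # Hypothetical subject-line variants (our real test used 12, of 21-50 characters).
    VARIANTS = [
        "Win $500 - quick survey",
        "YourCompany survey: tell us about your shopping",
        "New sweepstakes inside - 5 minutes of your time",
    ]

    def assign_variant(email, variants=VARIANTS):
        """Stable assignment: the same address always lands in the same cell."""
        digest = hashlib.sha1(email.strip().lower().encode("utf-8")).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    def rates_by_variant(sent, completed):
        """Response rate (%) per variant: completes divided by invitations sent."""
        return {v: 100.0 * completed.get(v, 0) / n for v, n in sent.items()}

Deterministic assignment has the practical benefit of keeping each responder in the same cell even if reminder e-mails go out later.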

Chart A

Surprisingly, or not so surprisingly, our 'control' subject line of 32 characters outperformed all of the others in overall response rate. Consumers may already be somewhat familiar with our current subject line, but the other subject lines had similarly good response rates even though they were much longer or shorter. Chart A clearly shows the lack of a relationship between subject line length and response rate. (While the best measure of a subject line is its open rate, that information is unavailable to us.)

Another important test is the content of the subject line. We created various options with or without a company name, a mention of a sweepstakes, or a 'call to action'. In this case, we noticed that subject lines mentioning the company name and the sweepstakes early in the line performed very well.

Chart B

Invitation Letter
Invitation letters provide the second chance for potential responders to hit the "delete" button. We used our standard invitation letter as the 'control' letter and then tried four other letters, all of which had the same HTML text formatting and general appearance but differed in the amount of information provided and in tone. The only letter to have a slightly lower response rate was the informal letter (see Chart B), which made use of emoticons (characters that are put together to form images) and mild slang. However, the response rate was still acceptable. We plan to carry out additional research to look more closely at the design of the letter as opposed to its content.

Question Style
One feature that rarely gets the attention it deserves is the effect of question style on response rates. In this test, our original purpose was to investigate the effect of list order on incidence rates: would placing an item at the top or the bottom of a list affect the responses? We designed two questions that had either 10 or 20 items in the list, and the lists were then presented in either alphabetical or random order (a sketch of the two conditions appears below).
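As a rough illustration of the two presentation conditions (the item list here is invented, and any survey platform's built-in randomization would do the same job):

    import random

    # Hypothetical list items; the real questions used 10 or 20 items.
    ITEMS = ["Cereal", "Coffee", "Detergent", "Shampoo", "Yogurt"]

    def presentation_order(items, randomize, rng=random):
        """Return the order one responder sees: alphabetical or shuffled."""
        ordered = sorted(items)          # alphabetical condition
        if randomize:
            rng.shuffle(ordered)         # fresh random order per responder
        return ordered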

Chart C

Before we even began to consider incidence rates for specific items, we noticed an extreme difference in the survey response rate. Apparently, responders were so displeased with the version that contained two long, randomly ordered lists that they bailed out of the survey before finishing. Chart C shows that almost 40% of potential completes were lost simply because those two questions presented their items in random order. Researchers must seriously consider whether potential order effects are significant enough to outweigh the 'annoyance' factor, and the biasing effect that losing 40% of responders may cause.

There are so many other question styles that need to be researched. Next on our test list is the effect of forced responses, in particular, forcing responses to all, some, or none of the questions.

Chart D

Country of Research
Our research is carried out in both Canada and the US, so we can compare response rates for questionnaires that have the same types of subject lines, invitations, and surveys. We routinely see much higher response rates in Canada, ranging from 15% to 20%, than in the US, where rates range from 8% to 14%. Chart D clearly shows that Canadian response rates are consistently higher than rates in the US.
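As a point of reference, one common way to calculate a response rate is completes divided by invitations sent; whether this matches our internal calculation is an assumption on the reader's part, and the counts below are invented purely for illustration:

    def response_rate(completes, invitations):
        """Completed surveys as a percentage of invitations sent."""
        return 100.0 * completes / invitations

    # Invented counts, chosen only to fall inside the ranges reported above.
    print(response_rate(1800, 10000))  # 18.0 -> within the 15-20% Canadian range
    print(response_rate(1100, 10000))  # 11.0 -> within the 8-14% US range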

Conclusion
Ah, there are still so many interesting things to test! These issues represent only a small fraction. While your base response rates may differ, depending on the factors that differentiate your type of research and on the method by which you calculate response rates, the principles discussed here still apply. We need to be continually aware of the variety of factors that can affect response rates, and do whatever we can to keep them high. High response rates mean responders are satisfied with and interested in the survey process.


Annie Pettit is a Research Scientist with ICOM Information & Communication. She can be reached at (416) 297-7887 ext. 2466 or apettit@i-com.com

