
6 common survey writing mistakes to watch out for.



Fifteen or so years ago, if you wanted to run an online survey, you would have had to go to a research agency to get them to write, program, and field the survey for you. Since then, there’s been an explosion in self-service tools, including SurveyMonkey, Alchemer, Qualtrics, Typeform and more. Even Google got in on the act with Forms. 


But we’ve advised enough businesses to know that the ability to write surveys yourself doesn’t mean you’re immune to making mistakes. Writing a good survey seems like something anyone can do, but in reality, it takes training. What’s even more worrisome is that it also takes training to spot those mistakes before it’s too late.


Here then, in no particular order, are 6 common survey writing mistakes that companies should avoid making when using DIY tools: 


  1. Writing surveys that are too long. Just because you can ask every question your stakeholders want to ask doesn’t mean you should. Nobody has the time to answer a 20-minute survey, and even survey-takers with good intentions will start giving less thoughtful (and sometimes random) answers if a survey takes up more time than they are willing to give. In an ideal world, no survey should run more than 10 minutes. At least for now: as attention spans grow shorter, even that may change.

  2. Asking double-barrelled questions. Questions like, “How satisfied are you with our website’s design and navigation?” can seem straightforward to a survey-writer, but they’re quite confusing to a survey-taker. A user may be completely happy with the design but hate the navigation. Or the reverse may be true. Such a user wouldn’t know how to answer this question. A better way to go is to ask two separate questions, one about website design and another about navigation. 

  3. Asking questions that respondents can’t accurately answer. If you ask someone how many times they’ve been on vacation in the last year, they can probably give you an accurate answer. But if you ask them how many times they’ve been on vacation in their life, they most likely cannot. Similarly, it’s common for businesses to ask survey takers how much time they spend doing something, or how much money they’ve spent on something. But we know from academic research that recall of these things is often wildly incorrect. And unless the purpose of the research is to highlight how bad recall is, it really is not worth asking questions in this way. 

  4. Using the wrong question format. It’s common to see survey questions that ask respondents to choose answers from a list. But there are two versions of this type of question: the survey taker can be forced to choose only one response or they can choose all that apply. It’s really important to know when to use each type of question. We’ve seen surveys where respondents have selected contradictory answers because they were allowed to select more than one response when doing so made no sense. Examples include respondents selecting “none of the above” and another option that’s “above.” Or, survey-takers could be forced to provide incomplete data because they’re only allowed to select one response: when asking “Which social media platforms do you use?” or “Which of these brands of sodas do you consume?” it’s best to allow the selection of multiple answers.

  5. Not applying randomization. Randomization is when you program a survey tool to present answers (or in some cases, questions) in a random order for each survey taker. This is a necessary tactic for minimizing biases such as primacy or recency effects (the tendency to focus too much on the first or last items). In message or product concept testing, if you don’t randomize the order of the messages and concepts presented, the position of a message in your survey - and not the quality of the message itself - could impact which message gets the highest score. Surveys that fail to randomize can lead decision-makers toward a marketing message simply because it was the first one shown on a list. Nobody wants that. 
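Most survey platforms handle randomization for you, but the underlying mechanic is simple. Here is a minimal sketch of per-respondent answer shuffling; the function name and the convention of pinning “None of the above” to the bottom are our illustrative assumptions, not any particular tool’s API:

```python
import random

def randomized_options(options, pinned=("None of the above",)):
    """Return a per-respondent shuffle of answer options.

    Anchor options such as 'None of the above' stay pinned to the
    end of the list, as survey tools typically allow."""
    shuffled = [o for o in options if o not in pinned]
    random.shuffle(shuffled)  # a fresh random order for each respondent
    shuffled += [o for o in options if o in pinned]
    return shuffled

options = ["Message A", "Message B", "Message C", "None of the above"]
order = randomized_options(options)
print(order)  # e.g. ['Message C', 'Message A', 'Message B', 'None of the above']
```

Because each respondent sees a fresh order, no single message benefits from always sitting first on the list.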

  6. Ranking questions (including “drag and drop”). These types of questions ask survey takers to rank a list of items in order of importance, or to move items around until they’re ranked in the right order. Though these types of questions seem useful, the experience of answering them isn’t great. Dragging and dropping can be challenging, particularly on a mobile device, and ranking beyond four or five items can place too much cognitive load on respondents. If you really need to see a stack-ranked list, the best way to do this is via a MaxDiff exercise, a grid question rating the importance of each item, or simply by allowing people to select all items that are important. Rankings can be calculated from these results.
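To illustrate the last option in point 6: if respondents simply select every item that matters to them, a ranking falls out of the selection counts. The data below is made up for the example:

```python
from collections import Counter

# Hypothetical "select all that apply" responses; each inner list is
# one respondent's selections.
responses = [
    ["Price", "Quality", "Support"],
    ["Quality"],
    ["Quality", "Brand"],
    ["Price", "Quality"],
]

# Count how often each item was selected, then order by frequency.
counts = Counter(item for answer in responses for item in answer)
ranking = [item for item, n in counts.most_common()]
print(ranking)  # "Quality" ranks first with 4 selections
```

This avoids the drag-and-drop experience entirely while still producing an ordered list for decision-makers.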


Let Spark Insights make sure your survey makes sense and is done right! Contact us for guidance on your survey, or for quick, meaningful results at a price you can afford.


