“We need a survey,” says every company everywhere these days. At least that is how it can feel to customers, clients, consumers, and consultants. Every day my inbox is inundated with surveys from colleagues, associations, banks, corporations, and physicians. The list goes on.

As an evaluation consultant and strategist, I enjoy reading the questions and response options. I feel like an investigative journalist, exploring new ways of asking questions and scrutinizing response options. Each survey brings a new adventure. Some are exhilarating and some are concerning. Most make me wonder whether we are collectively relying on surveys as a tool of convenience rather than relevance.

Let’s look at the following case study.

An organization wants to gather information from local companies about their services, tools, and technology. They decide to use a combination of a survey plus follow-up interviews with a representative from each company.

During the survey development process, staff seek input from partners, which results in the addition of more questions. Partners are excited about the project and want to know even more! The survey expands to over 50 questions.

As questions are drafted, the team realizes that they are unsure which response options to include for many of the questions. They decide to write most of the survey items as open-ended questions, hoping to gather more detailed information and understand the full range of experiences.

While some team members question the length and format, they feel pressure to gather information quickly, so those concerns go unspoken or unheard.

Sound familiar?

I have found myself in some version of this scenario on more than one occasion.

Below are questions that I consider when urgency and scope drift appear to be influencing the project timeline and approach:

    • What learning and evaluation questions are guiding the project?
    • How do the data collection method(s) align with the learning and evaluation questions?
    • How does each question on the data collection tool(s) align with the learning objectives?
    • What information is currently available that could supplement our knowledge?
    • Whose voices are currently being elevated, and whose are being excluded?
    • What level of familiarity, collaboration, and trust currently exists between the data collector(s) and the potential respondents?
    • What power dynamics and assumptions are influencing the project?
    • What factors are contributing to the current timeline and the perceived sense of urgency?
    • Who are the decision makers for each step of the process?
    • What resources are available for collecting and analyzing the data (e.g., people, technology, skill, time, incentives)?

Depending on the answers to the questions above, we might consider changing the methodology in this example from a survey-first to an information-gathering or interview-first approach.

Perhaps more background information is needed through a literature review or environmental scan to understand similar programs and challenges. Or maybe we already have that foundational knowledge and are ready to move to interviews or community conversations to gather a more nuanced understanding of the range of experiences and responses.

This is not to say that we would never use a survey. We might need anonymous feedback, or we might need to collect data at a scale that would not be feasible through interviews. But maybe a survey is not the first step in the process. Maybe we are choosing a survey out of convenience rather than relevance.

When we ask partners for information, we are asking for their time, energy, expertise, and trust. It is our responsibility to choose approaches and tools that minimize the burden and honor their experiences.

Your turn!

How would you proceed in this example?

What questions and approaches would you consider?