LEWIS 19 Dec 2016 // 6:55PM GMT
Research for ink is a hot commodity right now, and it should be, given its ability to align brand messaging with hot-button media issues. It is also versatile: it can serve as content for lead-generation activities, media pitching, white papers, standalone landing pages, infographics and videos, and internal knowledge advancement.
LEWIS Research helps companies create research for ink by designing research projects using data derived from current media trends and topic mapping, along with input from PR professionals, to find the most impactful pitch angles before fielding the research. This process allows us to create research that gives voice to a brand’s positioning, generates interest in hot-button issues and can even coin a new phrase.
Lurking under the surface of the great infographics, expertly written surveys and intriguing findings associated with good research projects is the impact that respondent quality has on the reliability and, ultimately, the media worthiness of the results. Journalists and news organizations use methodology statements, survey questions and sample size calculations to determine whether the results actually reflect the population the project purports to represent.
So how do you know if the respondent quality on your survey is poor? Here are three areas to look at for clues:
Respondent Sources. Social channels, internal contact lists, website intercepts and online panels are the sources most frequently used to conduct online market research. Each has its use, but some tend to yield better respondent quality than others. Not being able to pre-identify respondents, as with social channels or website intercepts, leads to an uptick of roughly 38 percent in the number of bad responses we receive in a survey, and it is difficult to determine whether any respondent took the survey multiple times. Internal contact lists and online panels typically provide better-quality responses, though not all online panels are the same (see below).
Third-Party Quality Verification. When designing research to represent the opinions of a larger population, say mothers with children or IT professionals, working with an online panel is typically the most efficient method in price and timing. It also allows respondents to be drawn according to demographic needs, not just whoever is available in a client contact list or through social channels. That said, Pew Research Center taught us this spring that not all online sample companies are the same. One way to reduce bias and ensure respondents are who they say they are is to work with online panel companies that use quality control methods such as digital fingerprinting and identity verification.
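At its core, the digital fingerprinting mentioned above can be pictured as hashing a bundle of device signals into a stable identifier, so a repeat taker surfaces even when they sign up under a different name. Commercial panel vendors combine far more signals than this; the Python sketch below, with hypothetical field names, shows only the basic idea:

```python
import hashlib

def fingerprint(ip, user_agent, screen_res):
    """Toy stand-in for commercial digital fingerprinting, which
    combines many more device and browser signals than these three."""
    raw = f"{ip}|{user_agent}|{screen_res}".encode()
    return hashlib.sha256(raw).hexdigest()

# Maps each fingerprint to the first respondent id seen with it.
seen = {}

def is_duplicate(respondent_id, ip, user_agent, screen_res):
    """True if another respondent already produced this fingerprint."""
    fp = fingerprint(ip, user_agent, screen_res)
    if fp in seen and seen[fp] != respondent_id:
        return True
    seen[fp] = respondent_id
    return False
```

A panel running this check would quarantine flagged completes for manual review rather than discard them outright, since shared office networks can produce matching signals for genuinely different people.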
Data Traps. Data traps refer to identifying respondents who provide gibberish free-response answers, “straight-line” answers (selecting the same answer choice for every question in a grid), or who “speed” through a survey much faster than expected. It may be too late to save a project that already has a large number of respondents in those categories, but many survey programming and hosting tools can catch low-quality respondents as they give suspect responses and remove them before they harm the overall data set.
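The three traps above lend themselves to simple heuristics. This is a minimal sketch over made-up respondent records, with arbitrary thresholds (a low vowel share as a crude gibberish proxy, identical grid answers for straight-lining, under a third of the median completion time for speeding); real survey platforms use considerably more robust checks:

```python
from statistics import median

# Hypothetical respondent records: a free-text answer, chosen option
# indices for a grid question, and completion time in seconds.
respondents = [
    {"id": "r1", "free_text": "Better onboarding docs would help a lot.",
     "grid_choices": [1, 3, 2, 4, 1], "seconds": 410},
    {"id": "r2", "free_text": "asdf asdf",
     "grid_choices": [1, 1, 1, 1, 1], "seconds": 55},
    {"id": "r3", "free_text": "Pricing is the main blocker for my team.",
     "grid_choices": [2, 4, 3, 3, 2], "seconds": 380},
]

def looks_like_gibberish(text):
    """Crude heuristic: very short answers or an unusually low vowel share."""
    letters = [c for c in text.lower() if c.isalpha()]
    if len(letters) < 5:
        return True
    return sum(c in "aeiou" for c in letters) / len(letters) < 0.2

def is_straight_liner(choices):
    """Flags respondents who picked the same option on every grid row."""
    return len(set(choices)) == 1

def flag_low_quality(respondents, speed_factor=0.33):
    """Return ids to review: gibberish, straight-lining, or finishing
    in under speed_factor times the median completion time."""
    cutoff = speed_factor * median(r["seconds"] for r in respondents)
    flagged = set()
    for r in respondents:
        if (looks_like_gibberish(r["free_text"])
                or is_straight_liner(r["grid_choices"])
                or r["seconds"] < cutoff):
            flagged.add(r["id"])
    return flagged

print(sorted(flag_low_quality(respondents)))  # → ['r2']
```

Here r2 trips two traps at once (straight-lining and speeding), which is typical: low-quality respondents tend to fail several checks together, so requiring two or more flags before removal cuts the risk of dropping honest fast readers.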
Only one thing can stop a research project from gaining traction with the media, and it has nothing to do with pitching efforts, beautiful visuals or artfully written reports: bad data. Don’t let it ruin a project before it even gets going.