Director of Research Strategy
When McKinsey & Company released their employer study of health care benefits in June 2011, controversy quickly erupted because their findings were so at odds with projections from the Congressional Budget Office, the Urban Institute, and the RAND Corporation. While critics attacked the polling nature of the study and the fact that the questionnaire “educated” respondents, I kept thinking, “What if it’s the sample?” Even after McKinsey released the methodology and cross-tabs of the survey data, I still wondered whether I could entirely trust an online panel to deliver 1,300 qualified respondents in the health care benefit space for a self-administered survey. The high incidence of “Don’t know” on some fairly basic questions pointed to potential response quality issues.
It was not the first time I wondered about the quality of the online sample for B2B research.
A lot has been written about online panels with most authors focusing on issues pertinent to consumer studies and public opinion polling. B2B is in many ways a different animal: the size of the universe may or may not be measurable; the universe may be quite small (e.g., in managed care research); fielding costs are considerably higher; and study participants must be either in positions of influence (such as buying or influencing insurance coverage decisions) and/or in the position of knowledge (e.g., specific training or expertise enabling them to evaluate merits of new products or technologies).
Thus a critical approach to online B2B panels may be appropriate to ensure research objectives are reliably met.
Lesson 1: Trust but verify
The first time I had misgivings about the quality of B2B panel respondents was when I was overseeing the fielding and data analysis of an online survey of dentists. I’ll refer to this panel company as Panel A. A programming glitch allowed panel members to enter the survey multiple times, which led to the discovery that some respondents changed their answers to the screening questions in an attempt to qualify for the survey. Further, 3% were disqualified because they answered that they were NOT licensed to practice dentistry in the United States. So how could they have been included in an online panel of US dentists in the first place?
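The check that surfaced this problem can be sketched in a few lines. This is a hypothetical illustration, not the actual fielding system: the field names (“panelist_id”, “licensed_us”) and data are invented, and the idea is simply to flag anyone whose screener answer changes across repeated entry attempts.

```python
# Hypothetical sketch: flag panelists who entered the survey more than once
# and changed a screening answer between attempts. Field names and data
# are invented for illustration.
from collections import defaultdict

def flag_suspect_panelists(attempts):
    """attempts: list of dicts, one per survey entry attempt."""
    answers_by_id = defaultdict(set)
    for a in attempts:
        answers_by_id[a["panelist_id"]].add(a["licensed_us"])
    # Anyone whose screener answer varies across attempts is suspect.
    return sorted(pid for pid, ans in answers_by_id.items() if len(ans) > 1)

attempts = [
    {"panelist_id": "D101", "licensed_us": "No"},
    {"panelist_id": "D101", "licensed_us": "Yes"},  # changed answer to qualify
    {"panelist_id": "D202", "licensed_us": "Yes"},
]
print(flag_suspect_panelists(attempts))  # → ['D101']
```

The same pattern extends to any screener item: collect each panelist’s answers per question and flag inconsistency, rather than trusting the last attempt that happened to qualify.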
When we re-bid this study at a later time and evaluated other panel companies, I took a close look at the overall counts of dentists that the panel companies claimed and compared those to the estimates from the Bureau of Labor Statistics. Curiously, counts of dentists from Panel A approached 80% of the BLS-estimated universe of dentists, whereas a company we’ll call Panel B had only 15% of the universe – a far more reasonable subset.
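The sanity check is simple arithmetic: divide the panel’s claimed count by the BLS universe estimate. The numbers below are illustrative placeholders chosen to reproduce the 80% and 15% ratios, not the actual figures from the study.

```python
# Back-of-the-envelope coverage check: what share of the BLS-estimated
# universe does each panel claim to have? All figures are illustrative.

def coverage_ratio(panel_count, bls_universe):
    return panel_count / bls_universe

BLS_DENTISTS = 150_000  # assumed universe estimate for illustration
panels = {"Panel A": 120_000, "Panel B": 22_500}

for name, count in panels.items():
    print(f"{name}: {coverage_ratio(count, BLS_DENTISTS):.0%} of BLS universe")
# Panel A: 80% of BLS universe
# Panel B: 15% of BLS universe
```

A panel claiming to reach four out of five of all US dentists strains credulity; a 15% subset is the kind of figure one would expect from genuine opt-in recruitment.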
We have seen even more egregious overstatements (see Table 1 for examples from Panel C); but BLS statistics are of only limited help: most B2B respondents cannot be as neatly categorized as health care professionals.
Table 1. Counts of health care professionals from Panel C
Even though panel companies claim they validate their respondents (and with licensed health care professionals, member identity can be verified through external sources – see Unmasking the Respondent by Frost & Sullivan), it’s a good idea to screen respondents more rigorously than a typical screening questionnaire does. HSM routinely includes knowledge questions and “red herring”-type questions in screening (see Table 2). We also evaluate open-ended responses and look at response consistency. Both can be fairly time-consuming and, as a result, we may remove more respondents than is customary: up to 5%-10% of responses may be discarded during data processing.
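The kind of post-field quality screen described above can be sketched as follows. Everything here is an assumption for illustration – the fictitious red-herring product name, the field names, and the open-end length threshold – but the logic mirrors the approach: drop respondents who endorse an item no qualified professional should recognize, or who give throwaway open-ended answers.

```python
# Sketch of a post-field quality screen. The red-herring product name,
# field names, and thresholds are all invented for this illustration.

FAKE_PRODUCT = "Dentacryl ZX"  # fictitious product used as a red herring

def passes_quality_screen(resp):
    if FAKE_PRODUCT in resp["products_used"]:  # endorsed the red herring
        return False
    if len(resp["open_end"].strip()) < 10:     # low-effort open-ended answer
        return False
    return True

respondents = [
    {"id": 1, "products_used": ["BrandX"],
     "open_end": "We switched suppliers after a recall."},
    {"id": 2, "products_used": [FAKE_PRODUCT], "open_end": "good"},
]
kept = [r for r in respondents if passes_quality_screen(r)]
print([r["id"] for r in kept])  # → [1]
```

In practice the open-end review is done by a human analyst, not a length check; the code stands in for the decision rule, not the judgment.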
Lesson 2: Plausible data is not necessarily good data
My experience with Panel A prompted me to explore how other research companies approach panel use for B2B recruitment. I interviewed two research professionals. The conversations went like this:
HSM: How do you know that the B2B respondents you get from an online panel are who they say they are?
Other researcher: Well, the data that came back looked solid.
HSM: How do you know it was solid?
Other researcher: It was what we expected.
HSM: Did your survey include any red herrings or other implausible options? Did you evaluate the quality of open-ended responses?
Other researcher: No.
HSM: Then, can you really be sure the data was solid?
Other researcher: We had no reason to doubt it.
When I posed the same question to a few online forums of market research professionals, the response was dead silence.
Understandably, research is often undertaken to confirm a hypothesis or validate a decision that has already been made. But if a survey only offers a universe of plausible response options, then respondent data will only include plausible response selections, increasing measurement error and failing to reveal respondents’ ignorance of the subject. A few well-placed red herrings or an occasional open-ended question, even in rigorous quantitative instruments, let the analyst look for damning inconsistencies that suggest a closer look at the respondent is warranted.
Lesson 3: Expediency has a price
After the McKinsey study, I talked with another research professional whose company conducts large annual surveys of HR benefit managers. He admitted having doubts about the quality of the online sample, but for his company, pressures around deadlines, survey length, and quotas outweigh the methodological rationale for including knowledge questions and traps. HSM has been there, too. But just as research dollars are wasted when we fail to scratch below the surface for unexpected nuggets of insight, the whole exercise may be for naught if the sample fails to make the grade and strategic recommendations rest on data from uninformed respondents.
Frost & Sullivan. Unmasking the Respondent: How to Ensure Genuine Physician Participation in an Online Panel. Retrieved from http://www.frost.com/prod/servlet/cio/159368832