Views from the Hills by R. E. Stevens, GENESIS II (The Second Beginning) E-Mail views@aol.com

Our Devotion to "Definitely Would Buy" Scores

The "Definitely Would Buy" rating is key to assessing the potential of an initiative and to forecasting market share. It is used in concept evaluation and product use tests alike. Most algorithms use either the 80/20 or the 75/25 rule in forecasting. That is, either 80% or 75% of the "Definitely Would Buy" as the major component of the share projection. This is generally done without regard to how the data are collected. Also after looking at numerous databases relating to the "Intent to Purchase" ratings, I have noticed that they are not segmented by protocol or environment. From my perspective this is a real problem. The following studies reflect my basis for concern.

The type of interview, interviewer-administered or self-administered questionnaire, can have a major effect on the responses when a contingency question is present. Frequently an interview will contain the contingency question "If you did not rate the product 'Definitely Would Buy,' explain why." In an effort to understand the effects of this question in a self-administered questionnaire format vs. an interviewer-administered format, I conducted the following experiment.

Six hundred respondents were divided into two groups of 300. Each group was given an identical questionnaire except for one question: one group received the standard questionnaire, which contained the contingency question, while the other group's questionnaire did not. Each respondent received one of six concepts. Upon completion of the ratings, the "Definitely Would Buy" scores were averaged across the six concepts in each group of respondents. Those respondents receiving the questionnaire with the contingency question gave an average of 35% "Definitely Would Buy" ratings, compared to 16% for the group without the contingency question. My hypothesis is that the respondents in the group with the contingency question took the easy way out: seeing that they faced more work if they did not rate the product "Definitely Would Buy," they chose the route with less work.
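As an aside, the gap is far too large to be sampling noise. A quick two-proportion z-test on the figures above, assuming each group's 300 respondents contributed one rating apiece, illustrates the point (this check is mine; it was not part of the original study write-up):

    from math import sqrt

    # Illustrative two-proportion z-test on the reported figures,
    # assuming one top-box rating per respondent in each group.
    n1, n2 = 300, 300
    p1, p2 = 0.35, 0.16   # DWB rate with vs. without the contingency question

    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    print(round(z, 2))    # about 5.3 -- far beyond what sampling error explains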

In another study, we wanted to see if the test environment would affect the results. We elected to assess three concepts in two different environments: a central location test (CLT) vs. In-Store. In all three cases, the CLT yielded higher "Definitely Would Buy" (DWB) numbers. The actual differences were +23.7%, +21.7%, and +20.4%, an average of +21.9%. Where you conduct your research does have an effect on the data.
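For the record, the +21.9% figure is simply the mean of the three reported gaps:

    # The three CLT-minus-In-Store gaps reported above; their mean
    # reproduces the +21.9% average.
    gaps = [23.7, 21.7, 20.4]
    print(round(sum(gaps) / len(gaps), 1))  # 21.9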

My takeaways from these and other studies are twofold. First, databases should be separated by type of interview, test environment, and product category. Second, always include a benchmark in your study, because factors peculiar to your study can have a serious effect on the data.

Research in general contains too many uncontrolled variables; we should control as many as possible to increase our accuracy.

