I have many concerns about the quality of research conducted in political campaigns, by those governing and by industrialists. I even see those in the not-for-profit sector spending precious money on unproductive research.
“Original” research in universities is often conducted only at the doctoral level. The research that is done often involves relatively unproductive statistical tables or questionnaires that purport to be “scientific”. Many students find searching through microfilm, microfiche and original texts in libraries to be passé, if they are aware of the technique at all. This constricted research in universities translates into marginally beneficial or even irrelevant techniques and results in industry and government.
How is it that the best-educated generation the world has ever seen is relying on research techniques that would not earn a pass in a second-year social science course at a reputable university? In government, and in the political campaigns designed to lead to governing, senior managers are making decisions based on flawed methodology.
But we are in unstable times when we need excellent public policy and politics. America is polarized domestically and the European Union is beginning to show signs of eventually having similar economic and political clout in some parts of the world.
For those who look to the private sector for leadership and use the refrain of “running the government like a business”—please don’t. Fully 82% of all mergers and acquisitions in private industry fail to produce new value. There is a crisis of competence in all sectors, in part because of poor research.
Here are the top ten issues and comments on research techniques and challenges faced both in campaigns and then in governing:
The population is much more sophisticated than it was when the random sample telephone survey was invented. A telephone call is now an intrusion, especially at dinner time. Pollsters are experiencing refusal rates of up to 70%. I tell my clients that often the biggest message they are getting is that their constituents refuse to speak to them at all.
Compounding the problem is caller ID, which tips people off that it’s a pollster calling. The moment of silence before the questioner begins speaking is a further tip-off, as is the robotic reading of questions from a computer screen.
Perhaps the biggest challenge is that up to 10% of the population has only a hand-held device or mobile phone, and the proportion is higher in the crucial 18-24 age group. Many will not participate in phone surveys because they have to pay for the airtime.
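The arithmetic behind those refusal rates is worth seeing. A back-of-envelope sketch, using only the 70% figure cited above (the target of 1,000 completes is an illustrative number, not from the text):

```python
import math

def dials_needed(target_completes: int, refusal_rate: float) -> int:
    """Estimate how many contacts are required when a given share refuse outright."""
    completion_rate = 1 - refusal_rate
    return math.ceil(target_completes / completion_rate)

# At a 70% refusal rate, 1,000 completed interviews require roughly
# 3,334 contacts -- and that is before counting no-answers at all.
print(dials_needed(1000, 0.70))  # -> 3334
```

Every extra point of refusal also raises the question of who the remaining 30% are, which is a bias problem no amount of extra dialling fixes.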
People misremember dates, events and attitudes (what researchers call “backward and forward telescoping”). They also tell researchers what they wish had happened, or use answers to researchers’ questions as surrogates for other messages. The classic example is that far more Americans reported that they voted for President Kennedy after his assassination than could have done so in the closest election the US had had to that date.
Social science is too imprecise to determine that 22.3% of people think or do anything—often referred to as “spurious accuracy”.
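The “spurious accuracy” point can be made concrete with the standard margin-of-error formula for a simple random sample. A minimal sketch (the sample size of 1,000 is an assumed, typical value):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# With 1,000 respondents, a reported 22.3% carries roughly a +/-2.6-point
# margin, so anything from about 19.7% to 24.9% is consistent with the data.
# Quoting the decimal point implies a precision the method cannot deliver.
moe = margin_of_error(0.223, 1000)
print(f"22.3% +/- {moe * 100:.1f} points")
```

And that is the error under ideal sampling assumptions; refusals, coverage gaps and question wording only widen the true uncertainty.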
Citizens reserve the right to lie to pollsters. They park their votes in the undecided category, or tell pollsters they will vote for a party or candidate when they have no intention of doing so, in order to temporarily reward or punish candidates.
One joke about polls goes like this: “If an election were held today, everybody would be really surprised because it’s scheduled for November 4”. That kind of captures some of the unreality of polls these days.
Robert K. Merton invented the focus group; he also coined the terms “role model” and “self-fulfilling prophecy”. He later dissociated himself from the way practitioners implemented his ideas about focus groups.
The dirty little secret about focus groups is the number of times companies rely on semi-professional attendees who they know will show up on short notice to fulfill a client’s needs. Students, the disadvantaged and others who need an honorarium or have time on their hands are often overrepresented.
There are ways to make focus groups more reliable. What the Harvard Business Review calls “empathic testing” involves using a product or discussing an issue in real life conditions. Putting respondents around a board table and having a formal focus group leader ask questions is not a normal life experience or venue and the results will thus be forced and false.
Animatics is a similar technique: it makes the experience realistic and has participants focus on the element to be tested. Realism in the venue can be addressed by driving respondents around in a van while they listen to radio ads a politician wants tested. This is closer to how voters would listen to an ad than sitting at a board table.
For TV ads, we have stripped rough cut ads into tapes of the actual TV show in which they will appear. Testing can occur in shopping malls where hundreds or even thousands of people can view the potential ads and react to them.
For print ads and even editorial content, we have mocked up the copy and inserted it into real newspapers to see how respondents react. We don’t tell them what we want them to react to; we first want to know if they care to look at the ad or story at all. That’s the so-called “unaided” response. If they don’t look or read, we have some valuable information. Then we ask them to review the ad and get more valuable information in their “aided” response.
Campaigns and sitting politicians use lots of mail. Direct mail raises money and mobilizes troops. Newsletters and political “householders” let constituents know what their representative is doing. But nobody opens the mail or reads a householder while sitting around a boardroom table. These items should be thrown on the floor in a pile of other mail and magazines to see if anybody bothers to stoop down and pick them up. If someone does, the next question is whether the political piece is interesting enough to cull out of the pile and read. If not, that’s a valuable answer in itself.
While on the campaign literature theme, there’s always somebody in political meetings showing a mock-up of a brochure or householder who points out that the candidate’s picture or name or other important information is off on the right-hand side “where the eye naturally goes”. By this time in the meeting, I’m too exhausted from trivia and nonsensical issues to point out that we read from left to right in English, Spanish, French and most other languages prevalent in North America, and read right to left only in Arabic, Persian and some other languages. (I wonder where these received pieces of communication wisdom come from.)
With regard to video and TV production, audiences are very sophisticated. Most people own video cameras and watch TV dozens of hours per week. Research has shown that focus group attendees will review the production qualities of ads, rather than the content. To counter this, advertisements can be mocked-up by a graphic artist and one can then test the voice-over or content separately.
Candidates can test debate one-liners, still pictures for brochures, slogans and any other communication element, without layers of clutter or testing of extraneous elements.
- Graduated Questionnaires.
Self-administered questionnaires are not used much anymore, but they are a valid technique. One of the best examples is the old Bureau of Broadcast Measurement diaries that were mailed to households to survey radio listening and TV viewing. People often put down their favourite station, not the one they actually watched or listened to most.
With telephone or in-person surveys, respondents quickly become fatigued with having to choose among: strongly agree, agree, somewhat agree, mildly agree, mildly disagree, somewhat disagree, strongly disagree. What does “mildly agree” mean, other than that it is weaker than plain agreement and stronger than disagreement? How does one compare one person’s strong agreement with another person’s?
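One practical response to this scale fatigue is to collapse the fine-grained labels into a few robust buckets before analysis. A minimal sketch; the mapping below is my own illustrative choice, not a standard:

```python
# Collapse a fine-grained agreement scale into three buckets, since
# distinctions like "mildly" vs "somewhat" rarely survive comparison
# across respondents.
COLLAPSE = {
    "strongly agree": "agree",
    "agree": "agree",
    "somewhat agree": "agree",
    "mildly agree": "agree",
    "mildly disagree": "disagree",
    "somewhat disagree": "disagree",
    "strongly disagree": "disagree",
}

def collapse(responses):
    """Map raw scale labels to agree/disagree; anything else counts as no answer."""
    return [COLLAPSE.get(r.strip().lower(), "no answer") for r in responses]

print(collapse(["Mildly agree", "Strongly disagree", "refused"]))
# -> ['agree', 'disagree', 'no answer']
```

The collapsed buckets sacrifice false precision for comparability, which is usually the better trade.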
The best model to determine the weight to put on a respondent’s report is to see if that person actually changes behaviour as a result. People often report that they will change voting habits, but actually do not. This makes their threat to do so a surrogate for other matters that should be probed.
In industry, it’s the same. I have a telecommunications client that conducts quarterly research to determine how much its customers like it. The results show that up to 30% of respondents “agree”, “strongly agree” or “somewhat agree” with the notion of switching service to a new company. Yet for years the so-called “churn rate”, the rate at which customers actually change telecommunications providers (phones, hand-helds, internet, etc.), has been under 3%.
It is vital to distinguish between what people actually do and what they say they might do.
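The telecom example suggests a simple discounting model: weight stated intent by the follow-through rate actually observed in the past. A rough sketch, using the 30% stated intent and sub-3% churn figures cited above (the customer count is an illustrative number):

```python
def expected_switchers(customers: int, stated_rate: float, follow_through: float) -> int:
    """Discount stated intent by the historically observed follow-through rate."""
    return round(customers * stated_rate * follow_through)

# 30% say they would switch, but observed churn of roughly 3% implies a
# follow-through rate of about 0.03 / 0.30 = 0.1 -- talk outruns action tenfold.
print(expected_switchers(10_000, 0.30, 0.03 / 0.30))  # -> 300
```

The residual 27 points of unacted-upon intent is itself data: a surrogate complaint worth probing, as noted above.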
It may not sound egalitarian these days, but elites are good respondents because of how they became elites—they know their demographic well. These one-on-one, in-depth interviews can augment focus groups, polling and other techniques.
Who’s an elite? That’s easy. Ratepayer groups, condominium boards, religious groups, union leaders and even book club busybodies all rose to the top of their little heap, in part through knowing what their demographic is like. They can be a great source of information.
Triangulation, a term taken from navigation, stands for gathering data from several different sources, or with numerous methodologies. Where data intersect, the results are more reliable.
Researchers have identified several types of triangulation, including within-method, between-method, data, investigator, theory and methodological triangulation. Within-method means two separate polls, perhaps by different companies, that say the same thing. Between-method might be a poll and a focus group that produce similar results. Data triangulation might involve qualitative and quantitative results that are much the same. If several investigators find out the same thing, that too is triangulation. Theory triangulation might involve a psychological and a sociological explanation of the same behaviour. Finally, using mixed methods, both qualitative and quantitative, is increasingly the norm these days, to avoid the errors that each alone can produce.
The distinction between qualitative and quantitative data has been blurred for at least fifty years. Few branches of any science have the predictability of Newtonian physics. Current thinking is to engage in the mixture of methodologies mentioned above. So a reproducible poll with a large sample that claims to be “scientifically” accurate might be cross-referenced with qualitative focus groups, elite interviews and the like that plumb small samples more deeply.
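The simplest mechanical form of within-method triangulation is checking whether two independent polls are even statistically compatible. A crude sketch using confidence-interval overlap (a proper test would compare the difference of proportions; the poll figures here are invented for illustration):

```python
import math

def ci(p: float, n: int, z: float = 1.96):
    """95% confidence interval for a proportion from a simple random sample."""
    moe = z * math.sqrt(p * (1 - p) / n)
    return p - moe, p + moe

def polls_agree(p1, n1, p2, n2) -> bool:
    """Crude triangulation check: do the two polls' intervals overlap?"""
    lo1, hi1 = ci(p1, n1)
    lo2, hi2 = ci(p2, n2)
    return lo1 <= hi2 and lo2 <= hi1

# Two firms report 44% (n=800) and 47% (n=1,000) for the same candidate:
# the intervals overlap, so the polls corroborate rather than contradict.
print(polls_agree(0.44, 800, 0.47, 1000))  # -> True
```

When the intervals don’t overlap, at least one instrument is misfiring, and that disagreement is exactly what triangulation is meant to surface.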
Average people don’t speak the way telephone researchers do, or the way those who write questions think they should. It’s hard to imagine anyone constructing a questionnaire where a response could be “some good”, which is a common expression in the Canadian Maritimes, or “awesome”, as is currently popular. The California “Valley Girl” response of “gag me with a spoon” was probably not used, even in its heyday.
In addition to the long pause, the script reading and the intrusion, some companies balk at long-distance charges, skewing data toward urban respondents. For decades, first-year social scientists have been warned that telephone surveys obviously gather information only from those with telephones. Triangulation is the antidote.
Social scientists are supposed to keep notes, tapes and a reflexive diary to examine themselves as a scientific instrument while they are examining other people or issues. Commercial researchers rarely do this.
- The Heisenberg Uncertainty Principle.
The use of a particular research instrument has an effect on the outcome of the research. Heisenberg stated, “[o]n the level of molecular research, the presence of the experimenter is certain to affect the results in a way which cannot be measured”.
The mere fact that a pollster calls up respondents has such an effect. Asking about topics the respondent might not otherwise be concerned with puts those matters on the public agenda. Moreover, researchers cannot control for the myriad other variables in that respondent’s life.
In the end, perhaps my premise is flawed. Perhaps we are not the best-educated generation the world has ever seen. We have more degrees and a multiplicity of methodological choices, but we may lack the clarity and professionalism of previous generations. A pity, because we need that clarity.