Focus Groups

 

I’ve written often about the concerns I have with research techniques.  This column was inspired by my attendance at a focus group.

 

I was recently asked to attend some focus groups, and they reminded me of the dozen or so reasons I don’t like them.

 

First, rich and busy people usually won’t attend, skewing the sample toward a narrow demographic.  That alone would get the research flagged in a decent undergraduate social science class.  Worse, focus group companies know where they can get repeat attendees on short notice: often students, the unemployed or, at the very least, people who live near the focus group facility.  All of this taints the data, too.

 

In this particular focus group I feel a twinge as soon as the leader walks into the room.  She is too talkative, as if trying to entertain a client rather than conduct research.  Conducting professional scientific research is a learned skill.  Notice that I use the term scientific and make no distinction for social science.

 

In the 1920s, Heisenberg showed that the instrument used in a laboratory experiment affects the results of the experiment; a thermometer used to measure temperature changes the temperature of the very thing it’s measuring.  (Strictly speaking, that is the observer effect, though it is popularly linked to the Heisenberg uncertainty principle.)  In 1962, Kuhn pointed out that scientists have a tendency to collect data that supports their views and to ignore data that conflicts with them; he used the term paradigms to describe this phenomenon.  Merton and Feyerabend discussed how science is a value-laden pursuit, driven by the values of those performing it.

 

So the focus group leader is in a difficult position.  Waltzing in like the star of the show, trying to impress, performing for the client behind the two-way mirror or any other false behaviour can make that position untenable and skew the data.

 

A focus group leader should dress and talk one step up from the subjects in the room.  He or she is there to gather data for scientific purposes, not to make friends or entertain.

 

Most focus groups begin way too quickly.  The leader should allow the discussion to evolve slowly, with open-ended questions, to see what the respondents want to discuss.  Closed, specific questions, especially ones that can be answered with “yes” or “no” or a short sentence, won’t reveal as much as open-ended questions that encourage discussion.

 

Closed questions yield what researchers call an aided response: the questions tell respondents what to talk about.  Open questions yield an unaided response, in which the respondents tell researchers what to think about.  That is the way it should be!

 

In the focus group I was in recently, we were testing confidential matters.  But let’s say it was reactions to Candidate X.  One of the first questions was, “What kind of car do you think Candidate X drives?”

 

This sparks a discussion of myriad types of cars.  I’m reminded that some people follow car models more than others.  Some may name one car thinking it’s the sportiest on the market, while another person may name the same model because it represents fuel efficiency, economy or some other attribute.

 

In the end, without knowing what respondents mean by the car models they name, you end up with a mish-mash of information that could mean anything.  Does “Cadillac” mean luxury, high prices, high fuel consumption or perhaps the ability of American industry to compete with anyone on the planet?  Who knows?

 

But if there really were value in the car question, it should have begun in an open-ended fashion, such as, “How does Candidate X get around the district or the campaign trail?”

 

Respondents might name trains, boats, planes and cars.  If we find out people think the candidate flies around in a private jet, that could be a problem.  If cars really are relevant, then after an open discussion of other types of travel, the closed-ended question about what kind of car the candidate uses could be asked.

 

In the focus group I watched, one respondent got up on his hind legs and said he didn’t think the candidate used a car.  It takes guts to challenge an authority figure and say the question is wrong.  If a respondent does that, it’s a very powerful message that you could be on the wrong track.

 

Next, the leader held up printed pages from the candidate’s website, passed them around and asked for a review.  That’s not how people use websites, so it’s hard to tell what was being tested.

 

Next, the leader passed around literature that the candidate regularly sends out.  The question was, “What do you like about the brochures, householders and other communication?”

 

I have two problems with this line of questioning.  First, it misses the opportunity to find out whether anybody remembers getting anything in the mail from the candidate.  People may feel boxed into saying they received and remembered the material, when the most valuable information might be that they didn’t.  Second, what if people hate the literature?  Asking what they like about it cuts off any discussion of what they don’t like.

 

People like to be cooperative when being paid, so you have to be careful they aren’t so cooperative that you don’t find out what they’re really thinking.

 

Next came the Barbara Walters question.  She was famous for asking an interviewee, “If you were a tree, what kind of tree would you be?”

 

In this session, respondents were shown pictures of all kinds of people—young, old, various races, both genders and so on.  As they are looking at the pictures, the leader asks, “If the candidate’s campaign were a person, what person would it be?”

 

Not only is the question a bit odd, it’s hard to tell what people mean when they pick a picture of a fit, muscular-looking man.  Does it mean the campaign is intimidating, that it’s full of thugs, or that it has staying power for the long run?  Then respondents start answering questions that weren’t asked.  One picks the construction worker because the candidate comes from a part of the district where there’s lots of construction.  Another picks the young Asian woman because the district is becoming more multicultural.  Some seem to be picking people they find attractive or would like to be with, not people who embody the campaign.  This ends in another mish-mash of unusable information.

 

I innocently asked the client where the pictures came from.  It turns out employees of the research company picked them; they may use the same ones every session, for all I know.  Regardless, it does not make a lot of sense to me to have respondents judging pictures picked, either at random or purposefully, by a research company to make its sessions go more smoothly.

 

Next, the leader asks what the person in the picture does for a living and what that person does on weekends.  (I assume the construction worker works in construction.)

 

Ironically, after going to all this trouble to stimulate fairly irrelevant discussion, the leader cuts off the dialogue by asking whether anyone has any final thoughts.  When the leader leaves the room to get more instructions from the client, I listen in on the continuing discussion in the focus group room.  It’s actually a better discussion, unaided by the leader.  One older woman begins waxing nostalgic about the district, the party and the candidate.

 

So, what’s the right approach?  There are lots of them.  Credible scientists use several different methods and compare the results to obtain more reliable data.  Polls, questionnaires, elite interviews with opinion leaders and lay-elite dialogues that reveal the gaps between those in the know and regular folks can all help.  So can old-fashioned research: online databases now make it really easy to search everything from newspapers to academic journals.  For all a campaign knows, five distinguished academics and journalists have written 10,000 excellent words on the topic over the years.

 

Focus groups can be helpful, if run properly.  But they also need to be augmented with sound, old-fashioned research, preferably in libraries, surrounded by musty books.

 

 

 
