Unintelligent fieldwork for 3 research

I took a phone call this morning on my home telephone number, a Virgin Media landline that usually receives calls either from my mother or the smooth American voice that announces “You’ve won a Florida vacation!”

It was from Fieldworks, a market research firm, doing a customer survey on behalf of mobile operator 3.

The friendly, script-reading researcher asked me if I’d be willing to answer a few questions about my experience as a customer of 3 mobile broadband, to which I replied that I would.

And so Lee (that was his name) embarked on asking me a series of questions to which I responded during the course of our 18-minute conversation.

It became clear very early into our chat that here was an almost perfect example of wholly the wrong means being used to connect with a customer and seek his or her views on something.

To start with, nearly all the questions were multiple choice, requiring me to answer numerically, ie, choosing a number on a scale (two scales, actually: 1 to 7 and 1 to 10, depending on the question) to signify how positively or negatively I felt about a particular aspect of my 3 mobile broadband experience. There were also many ‘yes’ or ‘no’ questions, some of which I suggested to Lee should be ‘it depends’, which obviously wouldn’t fit his survey model.

With some questions, by the time Lee got to the fifth choice I’d forgotten what he’d asked in the first two and we had to go through things again. Hardly a good experience for either of us.

A few of the questions needed an “Other” answer, meaning Lee had to write down what I was saying. That interrupted the flow a bit, too.

I mentioned to Lee that I have a bit of a different relationship with 3, not only as a paying customer but also as someone who has tried out many of their products as part of 3’s PR outreach activities in the blogosphere. I have no idea if or how he included that titbit in his survey report.

The appropriate method for this survey surely would be online, one that would enable me to far more effectively consider the multiple questions by seeing them all on a screen. I’m sure I would have got through the survey in half the time it took for Lee to do it over the phone, saving both of us time (and especially me, the customer). I’d also have been able to save a copy of my answers.

When we concluded our conversation and Lee asked me if I had any other points to mention that I’d like him to feed back to 3, I suggested he tell them about this ineffective survey method. Interestingly, Lee said a number of other customers who’ve been surveyed the same way had said the same thing.

In fact, I expect it’s Fieldworks rather than 3 who need to think about this method of surveying customers. I can’t imagine it would be 3 who set up the mechanics of how to do a customer survey: Fieldworks are the experts apparently. Indeed their tagline is “intelligent fieldwork.”

Not in this case.

Neville Hobson

Social Strategist, Communicator, Writer, and Podcaster with a curiosity for tech and how people use it. Believer in an Internet for everyone. Early adopter (and leaver) and experimenter with social media. Occasional test pilot of shiny new objects. Avid tea drinker.

  1. Charles

I see what you mean – a very wasteful way of surveying, and badly designed too. I’m often reminded of one of Guy Kawasaki’s Dozen Don’ts for Entrepreneurs: “Don’t ask people to do something you wouldn’t do.” The people at Fieldworks really need to ask themselves ‘how would I feel taking this survey?’ I wonder if any of them did.

  2. Niall Cook

Whilst online might be a more appropriate survey method, I’m guessing that telephone still produces better response rates. If so, that’s probably why this is still the preferred approach for Fieldworks and 3.

    In that case, I wonder if it’s time for a hybrid approach: telephone someone to see if they’ll participate (better response rates) then give/send them a URL to carry out the research.

    • neville

      It may well be a better approach for Fieldworks and 3, Niall – and there is the major issue: this should be about the customer. The approach I experienced certainly isn’t a good one for him or her.

      As for a hybrid approach, I’d rather see an imaginative approach. For instance, maybe a phone call as you suggest – 3 is a phone company! – with an easy-to-remember URL you can go to, and with a reward, eg, some free time on your mobile broadband account. Even better: the URL knows you and automatically gives you the freebie (not sure the tech’s there for that yet, though), or it’s somehow linked to your account or via the 3 dialler app on your computer.

      The way the survey is currently, there’s nothing obvious in it for the customer, it’s all about 3.

      • Charles

The tech’s there for it, it’s just a question of linking these things together – that’s what EDI (electronic data interchange) was supposed to give us. It may well be that they get more responses in total from the phone survey, but what’s the quality of those responses? How many people, five minutes in, will think ‘this is taking ages, I’ll just pick a random number’? A hybrid approach means the customer can come back to it in her own time. Giving the customer something for their time/effort/attention is only fair.

        Whenever I get an invite to my frequent flyer programme’s annual survey, I fill it in online, truthfully, because they give me some miles on my account for taking the time to do so.

        • neville

          While I have no idea what 3’s specific objectives are with this particular customer survey, it surely must be to their advantage to see it as an engagement opportunity rather than simply a volume exercise to get boxes ticked in 18-minute phone conversations.

          I do exactly the same with airline surveys. Others, too, who behave in a way that makes me feel as if they actually do care what I have to tell them. Nice as Fieldworks’ Lee was this morning, I got the feeling that his survey was just a numbers game.

        • Clive

You’ll get more responses from a web-based survey than through a telephone-based survey (not many can afford to pay what’s required for a multi-thousand-response telephone survey).

I totally agree that if you are faced with a “Give me a score between 1 and 10” type survey, it very rapidly becomes a standard bell curve response to the questions. Therefore, it is a far better approach to use “perceptional” research, where the responses are not through standard scoring, but are far more along the lines of “which of the following is closest to your own feeling/belief?” – provided that the statements are not self-serving or leading and the respondent is allowed to say “none of the above” and provide their own response. It’s a very precarious tightrope to walk, and there is a lot of skill in designing a real survey (as opposed to just going to SurveyMonkey and putting in a bunch of questions designed to get to a desired end result).

It is a case of horses for courses – if you know the result you want, then design the survey to get there, run it on the web and use social media and anything else to drive people to it. If you just want marketing output, pretty much the same – but try and be a bit more independent on how the survey is put together. If it’s for strategy, then you really need to understand what the impact may be – what if the desired outcome isn’t what is real out there? Do you have a Plan B to fall back on – for example, if you find that the idea of selling blue widgets just won’t float in the market, did the survey design find out why, and whether pink widgets would sell? Did you profile your respondents correctly, or just blast it out there? Did you check the responses for fake respondents (not all agents are fully bought in to telephone surveying, and cheat), for constant average responses and for erratic responses with no correlation across a set of responses?

There’s room for all sorts of surveying out there – but it does have to be done professionally and correctly at all levels – and what seems to have failed here is the initial approach to survey design, and the use of script-driven agents who are not well trained enough to work outside the script.

          • neville

Thanks for that analysis, Clive. Your concluding comments are right on the ball: this was a prime example of a rigid script and someone who, while personable and engaging in his own approach, couldn’t deviate from the rigidity of his script.

            I’d love to know what 3’s specific (and measurable) goals are with this survey and its approach.

  3. Clive

Pure web-based research gives a skewed result – you only get those who violently agree or disagree with the topic; very few people will go “I don’t really know – hey, let’s do the survey anyway!”. Web-based surveys tend to be good for marketing (100% of people, when asked “Would they prefer to be fleeced by the government through stealth taxes or have a cut-price holiday with ABC Travel, Ltd”, said…) – telephone-based, or face-to-face (very expensive), surveys are better for strategic findings. Although telephone-based interviewing still has a skew (you can still say “no thanks” or just put the phone down), a reasonable response rate still gives better statistical results than web-based surveys. The big thing is getting the survey right – which it sounds like wasn’t the case here. As Niall says, a hybrid model may be better in many cases – more difficult to manage overall, and you don’t pick up the nuances that a fully capable, trained and professional survey agent can pick up, but many people would still prefer to use the web for this sort of thing.

As an example of web-based skew, look at any survey asking about Linux perceptions. If the findings are to be believed, the vast majority of the world is now using Linux at the desktop, and those that aren’t will move within the next 3 minutes. Telephone-based research shows a high degree of acceptance and even happiness with Windows – and a very high level of ignorance of the existence of Linux. If your total business model depended on the validity of one of these findings, which one would you go for?

    • neville

      Interesting points, Clive, thanks.

Some sweeping statements there, eg, “you only get those who violently agree or disagree with the topic” about web-based surveys. What’s the basis for your views? Evidence? (Heh, the results of a survey?)

You don’t link to anywhere about you online, so there’s no way of knowing your credentials, as it were :)

      As for “the big thing is getting the survey right”, I’d say the big thing is getting the customer engagement right, eg, with some kind of benefit to the customer for his or her taking the survey. I can’t see that happening with this example from Fieldworks.

      • Clive

Sorry about the lack of a link – here you go: http://www.quocirca.com.

Also sorry about the sweeping statement. We have looked at the use of on-line surveying, but the majority of our work is for more strategic reasons. We have looked at a lot of web-based surveys, and find that a lot of the results just don’t correlate with what we have found through telephony-based surveys. One of us is therefore probably wrong – and I hope it is not us.

As to customer engagement – do you mean the customer of the surveying company, or the respondent to the survey? If the former, I had made another sweeping assumption :-) that any professional organisation would have done this anyway (although I accept that there is a lot of revenue chasing going on these days). If the latter, then it is a difficult point, and we always run pilots across a profiled base first to make sure that any survey is engaging, is fully understood by agent and respondent, and that timings and so on all work.

Our work generally results in a public report, which is provided to any respondent who wants it. With consumers, a gift of something of financial value can often be offered; with corporate research, it is not recommended, as it is often contrary to the corporation’s policy.

        As I say, it’s a complex area, and one where it is far easier to get it wrong rather than right. Probably needs more focus on the differentiation between “market research” and “market analysis”?
