I do X. I don’t do research.
This is my reflection on research based on experience in the course of my professional work and the people I have come across. While pointing out that some criticisms about research do not make sense, this article is mainly a self-critique. I will conclude by suggesting what we should do, looking forward.
My focus is on market(ing) research (MR) and the MR industry, drawing examples from companies like Kantar, Ipsos and Nielsen or their smaller-scale counterparts. I will refer to the people who work in MR as MRers.
Setting the Record Straight
Please stop calling it something else just to prove you are different from MR.
Many non-MRers tell me that “Research (MR) is useless”. My reply is: Research (including MR) per se is not useless — people are responsible for making it useful.
According to the Oxford Dictionary, research is “a careful study of a subject, especially in order to discover new facts or information about it.” It involves a “systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions”.
MR is research for marketing. But in our day-to-day work, MRers also do research for non-marketing purposes, from communications, product innovation and design, political/opinion polling and customer experience to industry research and market analysis. MR skills are transferable. What we do not do is competitive intelligence — researching trade secrets.
Another comment I often hear is: “We don’t do MR. We do analytics!”
Comments like this often come from people who do not normally do MR (nor analytics) but have to do it now to prove that: 1) they are catching up with the “trend”* of big data and analytics; 2) their work is not purely gut feel.
So what is analytics? Is it better than MR? According to the Oxford Dictionary, analytics is “the systematic computational analysis of data using a model, usually performed by a computer” and “the information resulting from this analysis”.
There is nothing contradictory between MR and analytics! Analytics is one way of doing MR (or research in general) that utilises technology and systematic computational data modelling. People have been doing analytics (or machine learning or design thinking) for years. It is not a new discovery or invention, and certainly not something you can become an expert in just like that — not even after one or two Courseras. It takes years of learning, training, trying and failing to master.
All MR companies have an in-house marketing science (statistics) department. This is again not a new creation; we have had it since long before I even started working. In this department, we hire statisticians who run statistical analyses like regression, correspondence analysis (mapping), conjoint analysis, segmentation (cluster analysis) and statistical modelling^. Statisticians are well qualified for the job of analytics (one of my mentors in statistics/statistical modelling was the marketing science team head at my previous company).
The main reason non-researchers (they are neither MRers nor researchers) opt for analytics over research is frustration with the outputs produced by MRers. The critics are not entirely wrong. Indeed, many of their criticisms are valid and fair (see the next section for why). But they have confused methodologies with outputs. They believe that since the only methodologies they see MRers using are focus groups and surveys, and the results do not help them, the fault must lie with focus groups and surveys and the MRers using them. The problem is not focus groups or surveys per se, or any other methodology used by MRers — they are only tools. Humans are responsible for using the tools right, improving the tools and making new tools.
So MR companies have the capacity for analytics — we need to develop it to be client-ready. (We would also need to hire tech and other professionals to build and maintain the system.)
* I call it “trend” because, like “design thinking”, analytics has suddenly become “trendy”.
^ Does the list of statistical analyses sound familiar? Yes, you see the same names among machine learning algorithms! For those who rave about AI and machine learning: machine learning algorithms use similar methods to statistical modelling, but with different objectives. Having been trained in research statistics since university helped me a lot when I learned machine learning.
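To make this footnote concrete, here is a minimal sketch (my own illustration, with made-up numbers, not from any MR toolkit) of the same linear model fitted two ways: the closed-form least-squares route a statistician would recognise, and the iterative gradient-descent route a machine learning course would teach. Both land on the same coefficients; what differs is the framing and the objective.

```python
# Toy data roughly following y = 2x + 1 (values are illustrative assumptions).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Statistics view: ordinary least squares, closed form.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
slope_ols = sxy / sxx
intercept_ols = mean_y - slope_ols * mean_x

# Machine-learning view: minimise the same squared error by gradient descent.
slope_gd, intercept_gd = 0.0, 0.0
lr = 0.02
for _ in range(20000):
    grad_s = sum(2 * (slope_gd * x + intercept_gd - y) * x
                 for x, y in zip(xs, ys)) / n
    grad_i = sum(2 * (slope_gd * x + intercept_gd - y)
                 for x, y in zip(xs, ys)) / n
    slope_gd -= lr * grad_s
    intercept_gd -= lr * grad_i

print(round(slope_ols, 3), round(slope_gd, 3))  # the two routes agree
```

The statistician would go on to interpret the coefficients and their uncertainty; the machine learning practitioner would go on to measure prediction error on held-out data. Same method, different objectives.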
The most jaw-dropping criticism I have heard must be: “Those who do MR don’t do insight. Those who do insight don’t do analytics. We do analytics.”
The above comes from a few CFOs/COOs of ad/brand agencies. The race to trash MR has extended to the finance department! (I am not calling out anyone specific, as I hear this quite often from people who are not usually involved in day-to-day agency work.)
I accept criticisms that “MRers don’t offer useful/impactful/good/… insight, or even any insight” — this is what prompted me to write this article (and drove years of searching, learning, experimenting and humbly enquiring into how to do better). But the rest is nonsense.
According to my most trusted English dictionary, the Oxford Dictionary, insight is “an accurate and deep understanding”. The ad industry talks about the “Aha!” moment. Some believe an insight must tell us something new*, while others think that as long as it is relevant and actionable, it is an insight. Whatever the criteria, insight is the goal of doing research (including MR), and analytics is one way of achieving that goal.
* I believe we can call a finding an insight if it brings a new perspective. It takes scientists years — even a lifetime — to make one new discovery, so it is virtually impossible to expect someone without millions-to-billions in funding to deliver never-seen-and-never-heard-before newness every time from a 1–3 month MR project!
A Critique of MR
Caveat: If you want to criticise MR and MRers, you’d better know MR first. Otherwise, please keep your criticisms to evaluating how (un)helpful MR is. Criticisms like the above make you look silly.
Let me offer an insider’s view that gives much stronger criticisms. The crux of the problem is mentality. A few examples here:
#1. MR has become a factory production line churning out reports. A typical process goes like this. Upon receiving a brief, we classify it into one or more categories to decide how to write the proposal and design the research (methodology and sampling): brand positioning, brand health tracking, brand/ad concept test, concept and product test, ad campaign pre-/post-test, product innovation, segmentation, customer loyalty and satisfaction, etc. We often reference past materials to produce the outputs at each stage: proposals, sampling plans, questionnaires, discussion guides and reports. In a typical Category X MR project, there are certain default questions to ask — e.g., brand metrics in brand health trackers, evaluation metrics in concept tests and brand personality questions in brand positioning projects. There is nothing wrong with this, because research is a scientific enquiry in which consistency (being systematic) is key. The problem arises when certain questions no longer make sense to the research users or are no longer relevant (because times have changed). Findings from such questions are unusable by the research users who need to draw up strategies or design new processes/products. Yet we keep these questions because we have always done so and everyone else does it too.
There is a deep-rooted belief that because MR (research) is scientific*: “We find out the truth out there and it’s your responsibility to apply the truth. If you don’t know how to use my report, it’s because you don’t know research”. Very typical MR arrogance.
I have experienced it first hand as a brand strategist. Despite my background in MR, I could not find a way to use the MR reports in my strategy no matter how I tried because the report was a bunch of answers to the default questions.
* There is a scientifically proven rationale, backed up by theories, behind why we design our samples, ask questions and perform analyses in a certain way and order. But this does not mean we cannot change. The scientific community strives to make new discoveries and improve existing technologies, while the MR industry sticks to the old ways. I used to work for the research arm of a PR agency (Company A). An acquaintance who came from a traditional market research background was hired by another PR firm to head up their research team in a certain market. I was curious about the methodologies they were using. As soon as I mentioned the name of Company A, I got a “What the hell is that?” reply. Very typical market researchers’ rude arrogance, as they only knew Nielsen or Ipsos, etc., even though Mr Acquaintance was technically working at a PR firm! I held back my urge to retort and continued introducing Company A, sharing with Mr Acquaintance the amazing things Company A had been doing to meet the specific needs of the PR industry (e.g., adapting methodologies and frameworks that allow us to capture actionable insights for PR professionals), without giving away too many details. I then asked Mr Acquaintance what methodologies they were using. To my surprise, he replied with such disdain that it made me feel ashamed of my own researcher’s background: “There is no need for special methodology. Their work is very simple.” About a year or so later, he left the PR firm and sent me a message — perhaps the only message I received from him in the 6–7 years we have known each other — to ask which other PR agencies have a research department. I did not reply, because I was stunned that he did not know his competitors even after working at a PR firm for almost two years!
#2. MRers seldom push ourselves to do or learn other sub-fields, especially in larger agencies*. Those in the quantitative teams do not see the need to learn the basics of qualitative research, and vice versa. “Newer” divisions like neuroscience do not think they should know (the basics of) quantitative and qualitative research (the head of a new division overseeing the neuroscience sub-division told me, “This is their (quant/qual teams’) job”).
* I have always worked in both quantitative and qualitative research due to smaller agency size.
#3. Although everyone should have a grasp of the basics, the reality is that many MRers do not know the basics well, not even in their own sub-field. In quantitative research, most MRers rely on the marketing science team for statistical analyses, wait for the results, put them in their report and then add a commentary. Significance tests are routinely reported in MR quantitative reports, yet a manager in my first job (not listed on my LinkedIn) once explained to the client that “‘statistically significant’ means it is important”. (I immediately planned my next move after that encounter, because I wanted to learn from people who were serious about their work.)
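To show why that explanation is wrong, here is a hypothetical example (the numbers are mine, not from the article): with a large enough sample, even a practically negligible difference clears the significance bar. Significance tells you the difference is unlikely to be noise; it says nothing about whether it matters.

```python
# Hypothetical tracking-study numbers: two groups rate a brand on a 0-100
# scale. The difference is a trivial 0.2 points, but the sample is huge.
import math

n = 100_000                   # respondents per group (assumed)
mean_a, mean_b = 50.0, 50.2   # group means (assumed)
sd = 10.0                     # assumed common standard deviation

# Two-sample z-test for the difference in means.
se = sd * math.sqrt(2 / n)
z = (mean_b - mean_a) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))

# Effect size (Cohen's d) tells the practical story the p-value hides.
cohens_d = (mean_b - mean_a) / sd

print(f"z = {z:.2f}, p = {p_two_sided:.6f}, d = {cohens_d:.2f}")
# Highly "significant" (p far below 0.05), yet the effect is a
# negligible 0.2 points -- statistically significant, not important.
```

Halve the sample a few times and the same 0.2-point gap stops being significant, which is exactly why "significant" cannot be read as "important".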
#4. The rampant problem of fake respondents, especially in mainland China. Admit it — MRers spend a lot of time “dealing with fieldwork” (in-house or external vendors)*. I cannot count how many times I had to argue with fieldwork because they couldn’t seem to understand the word “responsibility”. Frustration with fieldwork was a big motivator for me to explore life outside MR.
* One commonly used tactic for avoiding responsibility is to withhold the collected data instead of sending it in batches as scheduled: they shamelessly ignore your emails, do not pick up the phone and hide when you visit their office, until the very last minute when they know your deadline is looming. Then they send you everything at once! On numerous occasions, more than 50% of the data had problems. On a few occasions, 100% of the data were fake!
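For illustration only (the field names and thresholds below are my assumptions, not the article's), this is the kind of simple screen researchers run on returned fieldwork to flag suspect interviews: “straight-liners” who give identical answers across a rating grid, and “speeders” who finish implausibly fast.

```python
# Minimal fieldwork quality screen (illustrative sketch, not a real vendor
# tool): flag respondents who straight-line a rating grid or who complete
# the interview in under 30% of the median duration.

def is_straight_liner(grid_answers):
    """True if every answer in a rating grid is identical."""
    return len(set(grid_answers)) == 1

def is_speeder(duration_seconds, median_duration_seconds, cutoff=0.3):
    """True if the interview took under `cutoff` of the median duration."""
    return duration_seconds < cutoff * median_duration_seconds

# Hypothetical returned batch.
respondents = [
    {"id": "r1", "grid": [4, 4, 4, 4, 4, 4], "secs": 95},
    {"id": "r2", "grid": [5, 3, 4, 2, 4, 5], "secs": 610},
    {"id": "r3", "grid": [3, 4, 2, 5, 3, 4], "secs": 540},
]
median_secs = sorted(r["secs"] for r in respondents)[len(respondents) // 2]

flagged = [
    r["id"]
    for r in respondents
    if is_straight_liner(r["grid"]) or is_speeder(r["secs"], median_secs)
]
print(flagged)  # r1 straight-lines AND speeds; the others pass
```

Screens like these catch only the laziest fabrication; they are no substitute for a fieldwork partner who understands the word “responsibility”.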
#5. Too much pride. Refusing to change. Living in the past. To our defence, MRers are not the only ones. I hear the same complaints about advertising, branding or PR.
In short, the MR industry suffers from robotic factory-product-line dogmatism (RFPD).
Looking Forward

#1. Have a point of view. Resist the urge to join the latest trend without knowing what bandwagon you are jumping onto.
#2. Be serious and work hard. Researchers (or designers or data scientists) spend years training ourselves up. No one can become an expert overnight.
You may look down on “traditional” MR. But such skills and experience are the foundation for analytics or whatever latest-fancy-thing you claim to be doing. You can’t do n+1 without n! Scientists cannot treat COVID-19 without years of training in everything that came before it.
#3. Be open and flexible. Just because we have not heard of something does not mean it is wrong. Don’t pigeon-hole ourselves into the field or sub-field we happen to be in. Learn from other professionals and adapt. We do not need to be experts in everything; we need to know the basics and collaborate with others if need be. Cost and time aside, clients do not care whether our insight comes from qualitative research, quantitative research, neuroscience or big data. To get actionable “killer insight”, we need an open and integrated research strategy: systematically incorporating and experimenting with methodologies and techniques within and outside MR, and continuously collecting and drawing insight, supported by ad hoc research whenever necessary.
The AI field is doing just that — they are ready to embrace anything.
#4. Research is not the only way to gain insight. I am not contradicting myself; I am living my motto #3. Read extensively. Observe curiously. Learn greedily. The accumulation of everyday, seemingly irrelevant little things helps us find new meanings and bring new perspectives to data.
A snapshot of me
I used to suffer from RFPD!
Starting out as an MRer, I have worked in both “traditional” MR and in research roles at public relations and public affairs, brand, advertising and media agencies. I also did a short stint in academic research. Driven by curiosity and a desire to “do better”, I have been learning and adapting skills from across professions and expanding my capabilities beyond research.
I owe a huge debt to the friends and strangers who have spent time answering my random questions about what is wrong with MR and how we can do better. They are the victims of MR, having been “forced” by their clients to use it (thank god I was not the MRer responsible), yet are generous enough to offer constructive feedback 👇:
I enjoy reading — anything interests me, except for Pop-Xs (popular psychology, popular economics, popular philosophy, popular XYZ, etc.). Most Pop-Xs are not grounded in rigorous research and tend to make exaggerated claims to grab attention.
As a strong believer in open- and cross-learning, I dislike pigeon holes (but love pigeons) and resent “field pride”. I prefer spending my energy on learning, exploring and experimenting to revelling in self-importance (“MR is not dead! It is still very important!”).
Confidence is knowing what we are good at and what we are not.