Lies, Damned Lies & Statistics – Battling Bias In Marketing Measurement

The human brain is incredibly powerful, but it does have limitations. Our limited ability to process information quickly means we create mental shortcuts, which result in a tendency to make illogical, irrational – sub-optimal – decisions. This tendency is cognitive bias. And it’s everywhere. It’s reported that there are as many as 175 different types of cognitive bias, all affecting human behaviour: our belief formation, reasoning processes, our decisions.

Marketers have long sought to take advantage of cognitive biases, understanding how they influence consumer behaviour and developing tactics accordingly (hopefully, in an ethical and transparent way). But bias in marketing also has the potential to damage the customer relationship and reduce the effectiveness of campaigns. For example, bias can lead to misrepresentation of your target audience, or to the neglect of potentially responsive audience segments. Bias can lead to bad decisions. Bias can be bad for business.

The ever-growing complexity marketers face fuels the potential negative impact of bias on marketing decisions, because greater complexity demands greater simplification. The prominence of adtech exacerbates the problem further: many biases become embedded in tools and systems, potentially amplifying the biases of the small group of people who built them. Bias is a growing problem. And because many biases are unconscious, it’s a problem that is very difficult to solve.

In marketing, measurement is often relied upon to mitigate bias in decision-making: to uncover truth, to confront subjectivity with objectivity, and to verify the impact of decisions and the corresponding actions, based on facts and evidence. Good measurement can certainly support more effective decision-making, but measurement is far from immune to bias. Several forms of bias can affect measurement, causing marketers to make poor decisions based on faulty information, possibly with serious consequences for the business.

Common Types of Bias In Measurement

The data used for analysis can itself lead to bias – sampling bias – where a systematic error is caused by choosing non-random data for statistical analysis. This is a common ‘test-related’ challenge, for example in Brand or Conversion ‘Lift’ tests. And whilst the application of advanced statistical methods can improve result reliability, it still presents a headache in some areas, such as evaluating the incremental impact of ‘Search’ ads. Survivorship bias is another example, where studying only the winners results in overly optimistic findings (indeed, this is a criticism – perhaps slightly unfair – of the Binet & Field ‘The Long and the Short of It’ research).
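To make the survivorship effect concrete, here’s a minimal sketch in Python. The figures are entirely invented: 1,000 hypothetical campaigns whose true average uplift is zero, with the ‘biased’ analysis looking only at the campaigns that happened to win.

```python
import random

# Illustrative only: 1,000 hypothetical campaigns with a true average uplift of 0%.
random.seed(42)
uplifts = [random.gauss(0.0, 2.0) for _ in range(1000)]

# Unbiased view: average over every campaign that was run
all_mean = sum(uplifts) / len(uplifts)

# Survivorship-biased view: study only the 'winners' that were kept running
winners = [u for u in uplifts if u > 0]
winners_mean = sum(winners) / len(winners)

print(f"Mean uplift, all campaigns:    {all_mean:+.2f}%")      # close to zero
print(f"Mean uplift, 'survivors' only: {winners_mean:+.2f}%")  # looks impressively positive
```

The second number is flattering but meaningless: the apparent uplift comes purely from which campaigns were allowed into the analysis.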

Further bias can arise if the analysis does not consider all factors that could have influenced the observed results. Just because two variables are correlated, it doesn’t mean that one caused the other; there could be additional variables at play. ‘Omitted variable’ bias occurs when the analysis excludes one (or more) relevant variables, with the result that the effect of the missing variables is misattributed to those that were included. Omitted variable bias is (typically) a major flaw in ‘digital attribution’ methodologies, and it remains prevalent in many forms of marketing evaluation. Good analysis considers all variables that might impact your KPIs.
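Here’s a minimal sketch of how that misattribution happens, using invented data in which TV advertising (the omitted variable) drives both branded search spend and sales, and search itself has no true effect.

```python
import numpy as np

# Illustrative synthetic data: TV (the confounder) drives both search spend and sales.
rng = np.random.default_rng(0)
n = 5000
tv = rng.normal(100, 20, n)                               # TV spend
search = 0.5 * tv + rng.normal(0, 5, n)                   # search spend rises with TV
sales = 2.0 * tv + 0.0 * search + rng.normal(0, 10, n)    # only TV truly drives sales

# Model 1: omit TV -> search 'soaks up' TV's effect (omitted variable bias)
X_omitted = np.column_stack([np.ones(n), search])
coef_omitted = np.linalg.lstsq(X_omitted, sales, rcond=None)[0]

# Model 2: include TV -> search's coefficient falls back towards its true value of 0
X_full = np.column_stack([np.ones(n), search, tv])
coef_full = np.linalg.lstsq(X_full, sales, rcond=None)[0]

print(f"Search effect, TV omitted:  {coef_omitted[1]:.2f}")  # roughly 3: badly overstated
print(f"Search effect, TV included: {coef_full[1]:.2f}")     # roughly 0: the true effect
```

Swap ‘search’ for any last-click channel and ‘TV’ for any upper-funnel driver, and this is essentially the flaw baked into many digital attribution set-ups.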

And then there are the expectations and beliefs of the analyst. If an analyst has pre-existing ideas about the results of a study, they can unintentionally influence the data and the analysis, even if they are trying to remain objective.

How to Minimise Bias In Marketing Measurement

So, what can you do to minimise the impact of bias on measurement in marketing? Here are some of the steps we always encourage at Entropy:

1. Simply be aware of the potential for bias. Although it’s incredibly difficult to completely remove bias, it’s critical to be aware of its sources in order to minimise its effects. Paying close attention to the data collection process and the analysis can help identify possible flaws and reduce their impact on the results.

2. Work collaboratively with a (potentially broad) range of stakeholders. Daniel Kahneman famously taught us that ‘what you see is all there is’; we don’t spend enough time thinking “…there are still many things I don’t know”. Simply, we assert what we do know. A diverse group of stakeholders can bring a range of fresh POVs – even an element of naivety – that can challenge beliefs and assumptions. A variety of perspectives and experiences will raise awareness of aspects the analyst may have overlooked or not considered. Bringing together collective opinions also reduces the likelihood of the results simply reflecting the beliefs of an individual, or of a small group of similar individuals.

3. Seek validation of results. Statistical analysis deals in estimates, not certainties: different samples, model specifications or assumptions can produce different results from the same ‘inputs’, and the true ‘answer’ is never known with complete certainty! Statistical validation of results should therefore be carried out, e.g. checking ‘out of sample’ performance (a simple sketch follows this list), along with common-sense checks and the consideration of counterfactuals. In addition, try not to rely on a ‘single source of truth’. The complexity of marketing means that many good ‘models’ can fit the data well. For example, it’s now commonly accepted that Marketing Mix Modelling (MMM) should be used alongside ‘experimentation’. Different analyses and techniques should not be used in silos; rather, they should be coordinated to triangulate results.

4. Scenario plan. Scenario planning can help overcome biases by forcing you to consider different perspectives, question assumptions, and expose uncertainties. The creation and comparison of alternative scenarios reduces the influence of individual preferences, biases and emotions, and increases awareness of the complexity and variability of the future.

5. Be forgiving – sometimes things are wrong! Accept this; the important thing is to get it right moving forward, to learn and improve. Ensure you learn from your successes and failures, and identify the best practices, lessons, and recommendations for your future marketing.
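On point 3, here’s a small sketch of an ‘out of sample’ check, using invented weekly figures: fit a simple spend-to-sales regression on an earlier training period, then compare its error on a held-out recent period. The data, the split point and the error metric are all illustrative assumptions, not a prescribed methodology.

```python
import numpy as np

# Illustrative only: two years of synthetic weekly media spend and sales.
rng = np.random.default_rng(1)
weeks = 104
spend = rng.uniform(50, 150, weeks)
sales = 500 + 3.0 * spend + rng.normal(0, 40, weeks)

# Hold out the most recent 26 weeks; fit only on the earlier 78
train, test = slice(0, 78), slice(78, None)
X_train = np.column_stack([np.ones(78), spend[train]])
coef = np.linalg.lstsq(X_train, sales[train], rcond=None)[0]

def mape(actual, predicted):
    """Mean absolute percentage error - one simple accuracy check."""
    return np.mean(np.abs((actual - predicted) / actual)) * 100

pred_train = coef[0] + coef[1] * spend[train]
pred_test = coef[0] + coef[1] * spend[test]

print(f"In-sample MAPE:     {mape(sales[train], pred_train):.1f}%")
print(f"Out-of-sample MAPE: {mape(sales[test], pred_test):.1f}%")
# A large gap between the two is a warning that the model may not generalise.
```

A model that only looks good on the data it was trained on is exactly the kind of flattering, biased evidence this article warns against; holding out data is one cheap way to catch it.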

In taking these steps, you can lead your teams onwards and upwards towards better measurement, research and analysis with more reliable insights. Armed with these insights, you can make better data-backed business decisions – optimal decisions – that keep your measurement programme, and broader organisation, moving in the right direction.

How Entropy Can Help

For more information about measurement or other ways Entropy can help accelerate your growth, contact us or read about our work.
