Uncertainty has always been a core aspect of marketing. In this industry more than almost any other, success or failure is dependent on a notoriously unpredictable factor: human behaviour. 

There’s nothing many marketers would like more than to be able to predict the future behaviour of their customers; it’s hardly surprising, then, that many brands are now embracing machine learning and propensity models as tools to better anticipate and optimise the impact of their marketing tactics. 

Propensity modelling in particular has quickly risen to prominence as a ‘predictive’ tool, allowing marketers to predict an individual’s probability of completing an action – usually making a purchase within a given window of time. It’s true that propensity models can be powerful marketing tools – but only if used correctly, and with a clear understanding of their limitations. In some cases, digging a little deeper makes it clear that other models may prove far more useful in helping marketers to capture incremental business value. 

A (semi-) technical summary of propensity models

Those familiar with machine learning (ML) jargon will recognise that propensity models are a form of “binary classifier”, a model that predicts the probability that a given event will occur. Binary classifiers are among the most ubiquitous types of models in machine learning and are usually introduced near the beginning of introductory data science courses. Given some input data – such as historical customer purchases – there are many very good open source libraries out there that will train one for you. 
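To make this concrete, here’s a minimal sketch of what training such a model might look like in Python with scikit-learn. The file name, the feature columns and the ‘purchased_next_month’ label are all hypothetical stand-ins for whatever a real CRM extract would contain:

```python
# A minimal propensity-model sketch: a binary classifier trained on historical
# customer data to predict purchase in the next month.
# File name, feature names and the label are illustrative, not a real schema.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical historical snapshot: one row per customer, with a label
# indicating whether they purchased in the following month.
df = pd.read_csv("customer_history.csv")
features = ["recency_days", "frequency_90d", "total_spend", "tenure_months"]
X, y = df[features], df["purchased_next_month"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# predict_proba returns P(purchase) for each customer - the "propensity score".
scores = model.predict_proba(X_test)[:, 1]
print("Hold-out AUC:", roc_auc_score(y_test, scores))
```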

The relative simplicity and accessibility of propensity models makes them a natural first use case for businesses – and practitioners – looking to use data science to improve their marketing efforts. This may be the reason that they are one of the most commonly used ML models in marketing, especially by companies making their first forays into using ML to inform or make business decisions. 

But just because you can train a predictive model on your historical customer data, that doesn’t mean it will actually be useful or drive business value. Without consideration for how the model will actually be used, propensity models often fail to deliver the business impact expected by those who conceived them. In the worst case, a poorly developed propensity model can have a negative impact on marketing ROI. 


Limitations – The sense check

The issue that almost always arises with propensity models is knowing what to do with the outputs they produce. Consider again a propensity model for predicting customer purchases: we train a model using historical purchase data and use it to predict – for every customer in our CRM – the probability they will purchase in the following month. 

How should the marketing team use these predicted purchase probabilities? By marketing to those with the lowest probabilities of purchasing, or the highest? The model alone cannot tell you this. 

Let’s say a mobile phone network provider is considering sending out an email with a 10% discount code to encourage customers to renew their subscription. They’ve invested in developing a propensity model, so they can see that their customers divide into two groups – Group 1 is unlikely to renew organically, while Group 2 is likely to renew their subscription unprompted. 

The next step seems clear: send the discount code to all of Group 1 and none of Group 2. There’s no point wasting marketing on customers who are likely to buy regardless; we want to be speaking to the people who have yet to be swayed. 
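In code, this naive policy is nothing more than a threshold on the propensity score. The file name, the column names and the 0.5 cut-off below are illustrative assumptions rather than recommendations:

```python
# Naive propensity-only targeting: contact everyone below a renewal-propensity
# threshold (Group 1), skip everyone above it (Group 2).
# 'renewal_propensity' is assumed to come from a model like the earlier sketch.
import pandas as pd

customers = pd.read_csv("scored_customers.csv")   # hypothetical CRM extract with scores

THRESHOLD = 0.5
is_group_1 = customers["renewal_propensity"] < THRESHOLD   # unlikely to renew organically
group_1 = customers[is_group_1]                            # gets the 10% discount email
group_2 = customers[~is_group_1]                           # left alone

send_discount_to = group_1["customer_id"]
```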

But on closer inspection, it’s clear that a key piece of data is missing: not just whether or not these customers are likely to renew their subscription, but whether your marketing is likely to have an effect on that decision. 

If we can begin measuring the probability that an individual will be more or less influenced by a discount offer, then the picture begins to look very different. 

We can see that, far from being homogeneous, Groups 1 and 2 each split further according to how the discount code affects them: those whose renewal decision is likely to be swayed by the offer (Groups 1a and 2a), and those whose decision is not (Groups 1b and 2b). 

If the company acted on the information the propensity model alone gave them and chose to market to all of Group 1: 

• Group 1a will likely be a good investment; they won’t renew on their own, but marketing is likely to increase their chances of doing so. 

• Group 1b will probably be unaffected by the offer – most would never have renewed, and most still won’t. The cost of sending an email is so low that it doesn’t matter much if the majority of recipients are unmoved. 

If our propensity model led us to choose not to send the offer to any of Group 2:

• This makes sense for Group 2b; a large number of people in this group will renew organically if not contacted, so giving them a discount is tantamount to throwing money away. Of those who won’t renew organically, most still won’t renew if they receive an offer, so there’s very little upside to contacting Group 2b and plenty of downside.

• Things are less clear-cut for Group 2a; many of these people will also renew organically, so there’s little point in contacting them. But many of those who wouldn’t have renewed will do so if provided with a discount. Without further investigation, it’s impossible to know which of these two effects is stronger and thus whether it’s beneficial to contact Group 2a.

In this example, using a propensity model to decide who receives the discount email is probably better than emailing no one at all or emailing everyone; given the context and the low cost of the campaign, the propensity model’s blind spots are unlikely to have a huge negative effect.
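To make the four sub-groups concrete, here’s a hedged sketch of how they might be defined if, alongside the propensity score, we also had an estimate of each customer’s responsiveness to the offer. That ‘estimated_uplift’ column is entirely hypothetical – it’s precisely the thing a plain propensity model doesn’t give you:

```python
# Hypothetical segmentation combining the renewal-propensity score with an
# estimate of how much the discount changes each customer's renewal probability.
# 'estimated_uplift' is NOT an output of a propensity model; it stands in for
# the kind of treatment-effect estimate discussed later in this post.
import pandas as pd

customers = pd.read_csv("scored_customers.csv")   # assumed to carry both columns

PROPENSITY_CUT = 0.5    # Group 1 vs Group 2 (illustrative)
UPLIFT_CUT = 0.05       # "swayed by the offer" vs "not swayed" (illustrative)

def segment(row) -> str:
    low_propensity = row["renewal_propensity"] < PROPENSITY_CUT
    swayed = row["estimated_uplift"] >= UPLIFT_CUT
    if low_propensity and swayed:
        return "1a"   # won't renew organically, but the offer helps
    if low_propensity and not swayed:
        return "1b"   # won't renew, and the offer doesn't change that
    if not low_propensity and swayed:
        return "2a"   # mostly renew anyway, but some are nudged by the offer
    return "2b"       # renew anyway; discounting them is money down the drain

customers["segment"] = customers.apply(segment, axis=1)
print(customers["segment"].value_counts())
```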

Limitations – Relative cost

The level of risk involved in relying on a propensity model increases rapidly when we look at a different use case. Let’s say our company’s marketing campaign uses one-on-one calling instead of emails. Now there are much greater costs associated with contacting every customer. In this paradigm, it is much more important to be able to distinguish between a customer in Group 1a and one in Group 1b. 

For example, the company might find that customers with a high probability of churning are dissatisfied with their service and are unlikely to be swayed if called and offered a discount. In this case, there would be few people in Group 1a and many in Group 1b. Likewise, you might never want to contact Group 2a, because they are highly likely to convert regardless of your intervention. It might turn out that the best trade-off is to contact people who sit somewhere in the middle of the propensity range (medium propensity scores). 
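One way to reason about this trade-off – assuming, hypothetically, that we had a per-customer estimate of the call’s effect, which only real-world testing can provide – is to weigh the expected incremental revenue of a call against its cost. Every number and column name in this sketch is an illustrative assumption:

```python
# Back-of-the-envelope value of a one-to-one retention call.
# All numbers and the 'estimated_uplift' / 'annual_contract_value' columns are
# illustrative assumptions; a propensity score alone cannot supply the uplift term.
import pandas as pd

customers = pd.read_csv("scored_customers.csv")

CALL_COST = 15.0        # hypothetical fully loaded cost of one call (GBP)
DISCOUNT_RATE = 0.10    # the 10% discount from the example

retained_value = customers["annual_contract_value"] * (1 - DISCOUNT_RATE)

# Expected incremental value of a call =
#   (change in renewal probability) * value retained - cost of the call.
customers["expected_call_value"] = (
    customers["estimated_uplift"] * retained_value - CALL_COST
)

# Under this framing we would only call customers whose expected value is
# positive - which may well turn out to be those with mid-range propensity scores.
call_list = customers[customers["expected_call_value"] > 0]
print(f"{len(call_list)} of {len(customers)} customers look worth calling")
```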

How do we navigate all of this uncertainty? Without real-world testing, it is impossible to know from the data alone. In this scenario, a propensity model might perform very poorly, even to the point of having a negative impact on ROI – and because this cannot be estimated from historical data, deploying the model becomes highly risky. 

Limitations – The need for real-world testing

We’ve established that, in some cases, it’s vital to move beyond propensity modelling and develop a clear understanding of the impact of marketing on conversion. But how do we develop this understanding?

Machine learning and data science are not all-seeing counterfactual tools; they are limited if they only have access to observational data from the past. If the business has always followed a rule that says “if contract value exceeds £X/month, contact the customer and offer a reduction to renew; otherwise do not contact”, we will have no data showing what happens when the rule is not applied. The data therefore cannot tell us whether this is a bad rule. We need to experiment by contacting people who have low contract values and not contacting people with high contract values. 

In many cases where there are unknowns – like the effect of marketing on a decision and its relationship to customer churn – we need to run real-world tests that randomly select individuals from across Groups 1 and 2 above. With this information, we can begin to understand the relationship between propensity score and the effect of a marketing intervention, and use this to decide where to intervene in future. 
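A minimal version of such a test might look like the sketch below: assign the treatment at random across the whole propensity range, run the campaign, then compare renewal rates between contacted and uncontacted customers within each propensity band. The file names, the 50/50 split and the band edges are all illustrative choices:

```python
# Sketch of a randomised test across the whole propensity range.
# File names, the 50/50 split and the band edges are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Phase 1: random assignment, ignoring the propensity score entirely.
customers = pd.read_csv("scored_customers.csv")            # hypothetical, as in earlier sketches
customers["treated"] = rng.random(len(customers)) < 0.5    # 50/50 coin flip per customer
customers.to_csv("experiment_assignments.csv", index=False)

# ... run the campaign and wait for the outcome window ...

# Phase 2: once each customer's renewal outcome ('renewed') has been recorded,
# measure uplift (treated minus control renewal rate) within each propensity band.
results = pd.read_csv("experiment_results.csv")            # assignments plus observed outcomes
results["propensity_band"] = pd.cut(
    results["renewal_propensity"],
    bins=[0.0, 0.25, 0.5, 0.75, 1.0],
    include_lowest=True,
)
renewal_rates = (
    results.groupby(["propensity_band", "treated"], observed=True)["renewed"]
    .mean()
    .unstack("treated")
)
renewal_rates["uplift"] = renewal_rates[True] - renewal_rates[False]
print(renewal_rates)
```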

This is, understandably, an intimidating prospect for most marketers; taking time to gather data and turning off some long-running marketing campaigns can feel risky and alien. But it’s the only way that many marketers will be able to achieve the results they dreamed of when they first invested in propensity modelling and begin truly optimising marketing spend. 

Ultimately, propensity modelling is often a sensible starting point for marketers looking to begin their journey with machine learning. But like any tool, it must be applied in the right way and with consideration of context. Rigorous testing and experimentation is the key to ensuring that you avoid the propensity modelling pitfalls outlined above.

Can we do better?

If marketers are going to get what they want from propensity models, they need to stop seeing them as magic bullets; no one model is capable of anticipating and optimising the infinite variety of human behaviour that marketers must consider and contend with every day. 

Instead, marketers must see propensity modelling as one of the many weapons in their arsenal – one that’s most effective when implemented with full consideration of the limitations outlined above and (most importantly) in conjunction with other techniques that supplement its weaknesses and enhance its strengths. 


In the next blog in this series, we’ll discuss just such a technique: a class of models that helps us predict the incremental effect of a marketing intervention on a consumer’s tendency to buy, churn, upgrade, or perform any other action.

