
    Marketing Mix Modelling & Incremental Experiments – How to Test New Marketing Activity

    Jun 11, 2024 | Written by: Alex Eisenhart

    Media (or Marketing) Mix Modelling has always had a great deal of value to offer digital advertisers, and now, with the decline of the third-party cookie and tighter data privacy regulations, the digital marketing industry has been developing quickly to adapt to the new landscape.

    Digital marketers are used to optimising with daily-refreshing, click-based data. Multi-Touch Attribution (MTA) is still the most popular system used to optimise spend across channels and campaigns, and its data is as granular as it gets: daily, at the advert level. Media Mix Modelling (MMM), by contrast, used to be delivered on a quarterly or yearly basis, but we now have tools which can give results daily, at the campaign or campaign-group level (or finer).

    The concern is that, in the quest to generate the most granular results possible, quality control around the validity of the models will suffer. This doesn’t need to be the case, and we certainly aren’t criticising the tools and methods being developed out there. However, if a basic MMM analysis can tell you little or nothing about the relative value of your key marketing channels, the best move isn’t to throw more advanced statistics at the problem. The forecasting methods can still guide an incremental testing strategy that anyone can run right now, even without any model output on which channels have performed better to date.

    The most common things that marketers are looking for with MMM now are: 

    • What are the diminishing returns to spending on different channels and campaigns?
    • What is the optimal budget mix for my business at different total budget levels?

    The reality is that for a lot of businesses, Media Mix Modelling is not able to generate a valid answer to those questions given the data that the business currently has.

    We would like to make the case that a near plug-and-play MMM SaaS tool for modelling campaign- or ad-level performance is not what most Ecommerce businesses need most. The development of this technology is exciting and the quest to improve MMM methods is wonderful, but users of these tools still need a strong awareness of what can cause the models to produce bogus results, and of how to interpret the certainty around what the models report.

    Both when building and when interpreting MMM models, we need:

    1. An understanding of how to design and interpret models that look for causal effects, not just correlation.
    2. Experience of digital marketing media planning.

    If the data is not rich enough to give a valid answer on how our different channels perform, the response should not be to throw every method we can at it until we have some kind of answer, but to acknowledge how little we can understand from the data.

    Modelling work still delivers key insights: forecasting future sales based on marketing spend, and identifying essential revenue drivers such as search demand, competition, price, weather, or macro-economic factors. These insights allow us to plan and evaluate experiments in which we make significant strategic marketing moves and judge whether they drive business growth efficiently.
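    As a sketch of what that kind of model can look like, here is a minimal OLS specification in Python. The file name and column names are hypothetical placeholders for your own daily data:

```python
# Minimal baseline revenue model with external controls (hypothetical
# file and column names). A log-log spec so coefficients read roughly
# as elasticities; the controls soak up demand drivers that would
# otherwise be credited to marketing spend.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("daily_sales.csv")  # hypothetical daily dataset

model = smf.ols(
    "np.log(revenue) ~ np.log(total_spend + 1) + np.log(search_demand + 1)"
    " + promo_active + price + C(weekday)",
    data=df,
).fit()
print(model.summary())
```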

    In this blog, the aim is to discuss in more detail some of the most common pitfalls of Media Mix Modelling that Ecommerce businesses should be aware of, and then outline a way of testing new marketing activity that everyone could be doing now.

    2 Core Obstacles to Valid MMM:

    1. Controlling for external effects

    The first major pitfall is leaving out important variables that predict revenue. If you invest in a new channel whilst also running a promotional sale, a model which does not adjust for the sale will see how much revenue grew while you invested in the new channel and conclude the channel is amazing, when really the additional sales may have been driven entirely by your limited-time 20%-off offer.
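    To make the omitted-variable problem concrete, here is a small simulation (all numbers invented) where a new channel launches shortly before a sale begins; leaving the promo flag out of the model badly inflates the channel’s estimated effect:

```python
# Toy simulation of the omitted-variable problem: a new channel launches
# on day 60, and a 20-day sale starts on day 80. All numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
days = np.arange(120)
new_channel_spend = np.where(days >= 60, 500.0, 0.0)   # channel launch
promo = ((days >= 80) & (days < 100)).astype(float)    # 20%-off sale
revenue = 10_000 + 0.5 * new_channel_spend + 3_000 * promo + rng.normal(0, 500, 120)

# Omitting the promo: the channel soaks up the sale's effect
# (spend coefficient lands around 2.5 here, against a true value of 0.5)
print(sm.OLS(revenue, sm.add_constant(new_channel_spend)).fit().params)

# Including the promo flag recovers something close to the true 0.5
X = sm.add_constant(np.column_stack([new_channel_spend, promo]))
print(sm.OLS(revenue, X).fit().params)
```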

    It is very important to note that when you run a sale or promotion, the bias in your click-based attribution data (including the platform-reported stats) changes as well. Using these numbers to forecast and plan cross-channel spend can lead to some very unrealistic media plans.

    2. Inter-correlation

    Anyone who has looked into this topic in some detail should be aware that MMM is supposedly only doable for companies of a certain size, with more than just a couple of marketing channels. Go a bit deeper and you will understand that budget size and number of channels are not actually the most important thing – “inter-correlation” is our enemy.

    What “inter-correlation” means is that if you have always dialled budgets up and down pro-rata across your campaigns and channels, there is no basis in your data for a correlational analysis to pry apart the different elements of your marketing activity. It is the same concept as launching a new channel at the same time as dropping your prices: you cannot infer how much each factor uniquely contributed to the growth in sales. Except here, the two inputs into our model have the same shape over time.
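    A quick way to check your own data for this problem is to look at the pairwise correlations and variance inflation factors (VIFs) of your daily channel spends. The sketch below assumes a hypothetical daily_spend.csv with one column per channel:

```python
# Quick check for inter-correlation in a spend history (hypothetical
# file and channel names): near-1 pairwise correlations, or large VIFs,
# mean the model has little basis to separate the channels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

spend = pd.read_csv("daily_spend.csv")[["search", "social", "video"]]
print(spend.corr().round(2))

X = sm.add_constant(spend)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # rule of thumb: VIF above ~10 is a red flag
```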

    There are valid methods to get around inter-correlation problems, but the reality is that, for a lot of businesses, these methods are still not going to be good enough. One way this becomes clear is when you see the level of confidence (or margin of error) in the return-on-ad-spend estimates from your provider’s models.

    In the quest to ascribe a number to each channel’s value, it is tempting to report a single figure (a “point estimate”), even if the uncertainty around that number is grossly unacceptable. Are they saying the ROAS of your channel is 2, but could be anywhere between 0.5 and 4? All of a sudden this doesn’t sound so useful. Even worse, the confidence interval’s lower value may be below 0 – so although they have given you an estimate, they may have failed to communicate that the model is uncertain as to whether the channel has any effect at all!

    A solid partner will clearly communicate the level of uncertainty, and recommend changes to your media plan to generate data which will be useful. Always ask or look for the confidence interval around the estimates that an MMM provider gives.
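    As an illustration of what that interval can look like, here is a minimal bootstrap sketch (with toy numbers) that resamples days to put a 95% interval around a ROAS point estimate:

```python
# Sketch of putting an interval around a ROAS point estimate via a
# simple bootstrap over days. Replace the toy arrays with real data.
import numpy as np

rng = np.random.default_rng(0)
daily_incremental_revenue = rng.normal(200, 400, 90)  # toy: noisy daily lift
daily_spend = np.full(90, 100.0)

boot = []
for _ in range(5_000):
    idx = rng.integers(0, 90, 90)  # resample days with replacement
    boot.append(daily_incremental_revenue[idx].sum() / daily_spend[idx].sum())

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ROAS point estimate: {daily_incremental_revenue.sum() / daily_spend.sum():.2f}")
print(f"95% interval: {lo:.2f} to {hi:.2f}")  # if lo < 0, 'no effect' can't be ruled out
```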

    An Incremental Marketing Experiment Method

    So, if the variance in your data is not rich enough, your marketing scientist should tell you that you need to conduct an experiment to understand the value of a specific channel or activity in your marketing mix.

    Say we would like to understand if adding a new channel into the mix adds value, but profit margins & targets are super tight and we can’t be taking any risks. Here is one way to run an incrementality test, keeping the methods and example as basic as possible:

    1. Estimate the smallest budget amount which can make a measurable impact on total sales

    We first estimate a model of diminishing returns to spending on all marketing.

    We then determine the smallest reduction in spend that would be felt in our top-line sales. We predict that if we drop spend by that amount, sales will fall but efficiency (ROAS or CAC) will improve. We are then going to see whether spending that amount on some new activity instead is at least as effective. (Note in the chart that at the new spend level, the upper bound of our confidence interval for predicted new customers sits below the lower bound for predicted new customers at the current spend level.)
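    A minimal sketch of that logic, on synthetic data: fit a simple log-saturation response curve, then walk spend downwards until the confidence band at the reduced level no longer overlaps the band at the current level. The curve shape and all numbers are illustrative assumptions:

```python
# Step 1 sketch: log-saturation curve for daily spend vs. new customers,
# then find the smallest spend cut whose predicted effect clears the
# confidence bands. All data here is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
spend = rng.uniform(2_000, 6_000, 180)  # 180 days of spend history
customers = 40 * np.log1p(spend / 500) + rng.normal(0, 8, 180)

X = sm.add_constant(np.log1p(spend / 500))
fit = sm.OLS(customers, X).fit()

def conf_bounds(s):
    """Lower/upper 95% bounds of predicted customers at spend levels s."""
    exog = np.column_stack([np.ones_like(s), np.log1p(s / 500)])
    ci = fit.get_prediction(exog).conf_int(alpha=0.05)
    return ci[:, 0], ci[:, 1]

current = 5_000.0
lower_now, _ = conf_bounds(np.array([current]))
# walk spend down until the upper bound at the cut level sits below the
# lower bound at current spend -- the smallest cut we expect to "feel"
for cut in np.arange(250, 3_000, 250):
    _, upper_cut = conf_bounds(np.array([current - cut]))
    if upper_cut[0] < lower_now[0]:
        print(f"Smallest detectable daily reduction: ~{cut:.0f}")
        break
```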

    2. Estimate how many days of results we need to collect to get a good read

    We must determine the minimum number of days to spend at this new level, so that the aggregated results from our experiment are unlikely to be due to random variance (i.e. so we get a large enough sample).
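    One rough way to size this is a standard two-sample power calculation, using the day-to-day variance of new customers from your history. The numbers below are illustrative:

```python
# Rough sample-size sketch for step 2: how many days until the expected
# drop in daily new customers is distinguishable from day-to-day noise?
# Standard two-sample power calculation with illustrative numbers.
import math

daily_sd = 12.0        # day-to-day std dev of new customers (from history)
expected_drop = 5.0    # predicted daily effect of the spend change
alpha_z, power_z = 1.96, 0.84  # ~95% confidence, ~80% power

days = math.ceil(2 * ((alpha_z + power_z) * daily_sd / expected_drop) ** 2)
print(f"Run each period for at least {days} days")  # ~91 with these numbers
```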

    We also make an assumption about “ad stock”. We know that spend on day 1 does not just lead to sales on day 1; instead there is a time lag between hitting prospective customers with advertising and them going on to make a purchase. We need an ad-stock assumption for our existing spend (it may be possible to estimate this from historic data, but if not we can make a conservative guess), and another for the new activity we want to test.
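    A common, simple way to encode the ad-stock assumption is geometric decay, where a fraction of each day’s effect carries over to the next day. A minimal sketch, with an illustrative decay rate:

```python
# Geometric adstock: a fraction of each day's spend effect carries over
# to the next day. The decay rate here is an illustrative guess.
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric adstock: stocked[t] = spend[t] + decay * stocked[t-1]."""
    stocked = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        stocked[t] = carry
    return stocked

spend = np.array([100.0, 100.0, 100.0, 0.0, 0.0, 0.0])
# effect lingers after spend stops: [100, 150, 175, 87.5, 43.75, 21.875]
print(adstock(spend, decay=0.5))
```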

    3. The experiment plan

    We first drop spend by the given amount for the recommended time period. If sales and CAC land where we predicted, this is a very strong validation of our model! Any efficiency gains from spending less at a better CAC can be reinvested into our new test activity, or go towards the next test.

    From here, we can invest the budget we cut into the activity we are testing. If sales then do not pick up, we can infer that the tested activity does not add enough value for us.

    If sales go back to where they were, great – the activity has added sales and we can estimate an incremental return on ad spend for it (in this case, at a similar incremental ROAS to our existing marketing). If results are at least this good, we can take a read on how results look in the ad platform or our website tracking data, and create benchmarks for what good performance looks like for the new activity.
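    A back-of-envelope version of that incremental ROAS calculation, with illustrative numbers:

```python
# Incremental ROAS for the test activity: the test period compared
# against the low-spend baseline period. Numbers are illustrative.
baseline_daily_customers = 95   # average during the reduced-spend period
test_daily_customers = 110      # average while the test activity ran
test_daily_spend = 500.0        # budget moved into the new activity
avg_order_value = 60.0

incremental_revenue = (test_daily_customers - baseline_daily_customers) * avg_order_value
iroas = incremental_revenue / test_daily_spend
print(f"Incremental ROAS of the new activity: {iroas:.2f}")  # (110-95)*60/500 = 1.80
```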

    If the incremental gains are even better, we have generated genuine, efficient scaling of the business. We now need to increase spend, as we are ahead of targets. We can choose to continue scaling our new activity, or increase spend on our existing marketing back to the pre-experiment level (will the incremental gain be the same, or is there an overarching market saturation at play?).

    When judging success, we can keep track of a rolling average of new customer volume, based on the number of days we estimated are needed to get a good read, and map out the threshold for success:
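    A minimal sketch of that tracking, on synthetic data, where the read window and success threshold come from the earlier steps:

```python
# Rolling-average tracking: a rolling mean of daily new customers over
# the estimated read window, compared against a success threshold.
# Data, window, and threshold are all synthetic/illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
daily_customers = pd.Series(rng.normal(105, 12, 60))  # toy test-period data
read_window = 28          # days needed for a good read (from step 2)
success_threshold = 100   # e.g. lower bound predicted at pre-test spend

rolling = daily_customers.rolling(read_window).mean()
print(rolling.dropna().round(1).tail())
print("Success so far:", bool((rolling.dropna() >= success_threshold).iloc[-1]))
```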

    This is a great way to work when under strict profitability goals: in the worst-case scenario, we lose some volume of new customers in return for better efficiency, whilst our profit targets stay intact. It also works well for explaining the concept! We could instead just shuffle our marketing budget and skip the period of lower spend. If we have a specific target we want to hit and some funding to leverage, we may instead want to model how much incremental value the new activity needs to bring in order to hit our goals, and then plan the parameters of our experiment accordingly. It is important to note, though, that different businesses have different levels of variance in daily sales and different diminishing returns on marketing spend, and we need to take account of that when planning experiments.

    In Summary:

    Taking an experimental approach to media planning is something everyone can start doing more of. Doing so generates the data you need to understand marketing effectiveness. Whether you are running some statistics or not, engaging in a push and pull of budget into different strategies is a great way to achieve growth.

    Even just modelling how growth scales with marketing spend, whilst accounting for external factors, is a problem that needs to be solved before a good granular MMM system can work, and it isn’t something we would feel comfortable allowing a machine to completely automate (at this point in time, at least 🤖).

    Solid experiment results inform benchmarks for what our KPIs in the ad platforms or Google Analytics should look like. Sense-check all of this with incremental lift experiments, and we have a solid, modern triangulation method for digital marketing attribution.