Customizing attribution logic increases cash flow

Posted by on September 02, 2018 · 4 mins read

Research indicates that testing different methods of allocating LTV (lifetime value) across multiple user touch points, known as multi-touch attribution (MTA), produces better budget allocations and statistically significant improvements in ROI. Marketers with large budgets should commit to investigating whether alternative MTA methods can provide value to their organizations.

The best way to create customized attribution models is to get programmatic access to all customer interactions and touch points: advertisements, cross-promotion activities, and tracking in digital properties. By mapping out each customer journey event by event, impression by impression, and click by click, data scientists can build (and test) different LTV attribution models for the various parts of the customer journey and see whether the resulting budget allocations yield better ROI.
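To make this concrete, here is a minimal sketch of two such models side by side. The data shape is hypothetical (each journey is an ordered list of ad-source names plus the user's LTV), and the source names are made up for illustration:

```python
from collections import defaultdict

def allocate_ltv(journeys, model="last_click"):
    """Allocate each user's LTV across ad sources under a given model.

    journeys: list of (touchpoints, ltv) pairs, where touchpoints is an
    ordered list of ad-source names ending at the install.
    (Hypothetical shape, for illustration only.)
    """
    credit = defaultdict(float)
    for touchpoints, ltv in journeys:
        if model == "last_click":
            # All credit goes to the final touch before install.
            credit[touchpoints[-1]] += ltv
        elif model == "linear":
            # Split LTV evenly across every touch in the journey.
            share = ltv / len(touchpoints)
            for source in touchpoints:
                credit[source] += share
    return dict(credit)

# Two made-up journeys: one multi-touch, one single-touch.
journeys = [
    (["network_a", "network_b", "network_b"], 30.0),
    (["network_a"], 12.0),
]
print(allocate_ltv(journeys, "last_click"))
print(allocate_ltv(journeys, "linear"))
```

Swapping the model changes which source looks most valuable: under last click, `network_b` captures the entire first journey's LTV; under linear, `network_a` earns a share for its early impression. That shift is exactly what drives different budget allocations downstream.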

Since collecting and processing this volume of data is difficult for a marketer who doesn’t know how to build a data pipeline (it won’t fit in an Excel file), most marketers start off with an out-of-the-box SaaS tool that has some kind of built-in attribution logic for allocating LTV to campaign sources. As a starting point, this solution is a no-brainer since it

  1. is simple to get started with, and
  2. does 90% of the optimization you need.

As marketing budgets grow, the limitations of cookie-cutter attribution models become more apparent. Intuitively, as a marketer’s budget grows, the end customer is more likely to run into multiple advertisements (multi-touch becomes more likely).

I work with developers every day who start off with the 90% solution, then graduate to the last 10% with MTA when their marketing budgets exceed $10M a year. Most out-of-the-box attribution solutions for mobile installs use the “last click” model, where the LTV of a user is attributed entirely to the source of the last clicked ad. For small budgets this is generally fine: there isn’t enough budget coming from the same marketer for advertisements to overlap before the user downloads the app.

Once marketers start spending more, this “last click” attribution model can be overly simplistic; the customer’s number of touch points grows before the user downloads the app. If the customer saw multiple ads in a 10-second sequence, there’s a good argument for allocating LTV across all of those ads, not just the last one that was clicked. In these cases, building MTA into your budget allocation can yield up to 15% better ROI.

Once a custom attribution model is built, you can test its efficacy by setting up a series of A/B tests that pit the previous attribution model against the new one. Such a test requires the same budget allocation discipline for both A and B when running campaigns. Over time you’ll be able to determine whether there is a significant difference in cash flow and ROI performance when using different attribution models.

For example, if your budget is $2M, you split it into two groups of $1M each. Group A is assigned a test MTA model and Group B the “last click” model. In both groups, you start off by allocating budgets the same way. Over time, the allocations in Group A and Group B should diverge because the test MTA model and the “last click” model will steer budget in different directions. It should become apparent which group performs better in the long run by measuring long-term cash flow. I’ve seen customers do this at small sample sizes, and indicators point to a 15% or more improvement.
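Deciding whether the gap between the two groups is real or noise is a standard two-sample comparison. Here is a sketch using a bootstrap confidence interval on the difference in mean ROI; the per-campaign ROI figures are entirely made up:

```python
import random

def bootstrap_roi_diff(roi_a, roi_b, n_boot=10_000, seed=0):
    """Bootstrap the difference in mean ROI between two test groups.

    roi_a / roi_b: per-campaign ROI samples from Group A (test MTA
    model) and Group B (last click). Returns the observed difference
    in means and a 95% confidence interval. (A sketch, not a full
    experiment framework; data shapes are hypothetical.)
    """
    rng = random.Random(seed)
    observed = sum(roi_a) / len(roi_a) - sum(roi_b) / len(roi_b)
    diffs = []
    for _ in range(n_boot):
        # Resample each group with replacement and recompute the gap.
        a = [rng.choice(roi_a) for _ in roi_a]
        b = [rng.choice(roi_b) for _ in roi_b]
        diffs.append(sum(a) / len(a) - sum(b) / len(b))
    diffs.sort()
    ci = (diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)])
    return observed, ci

roi_a = [1.18, 1.25, 1.10, 1.30, 1.22]  # made-up Group A campaign ROIs
roi_b = [1.05, 1.12, 0.98, 1.08, 1.02]  # made-up Group B campaign ROIs
diff, (lo, hi) = bootstrap_roi_diff(roi_a, roi_b)
print(f"mean ROI lift: {diff:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```

If the interval excludes zero, the attribution-driven allocation difference is likely real rather than an artifact of a small sample.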

If this plays out at scale, serious marketers need to employ and iterate on MTA models (as well as their campaigns) by testing new methods for ROI improvements. By customizing your MTA model and realizing performance improvements in ad campaigns, you can quickly justify the work that goes into hiring a data guru.