A Data-Driven Approach to Acquisition Lead Selection: FAQ
Posted by Francel Mitchell
January 11, 2018
Can you test a different approach? Is it better to use one channel or multiple channels? Is this approach really successful? My team and I frequently get asked these and other questions about a data-driven approach to acquisition lead selection. In this blog, I’ll answer some of the most frequent ones.
If I want to test a different selection approach, how do I compare the new method to the current one?
Before you start, it is of utmost importance to make sure that you roll out your new test on a random sample to avoid any bias in the test results.
If you are looking to test a new selection approach on, for example, 20% of your volumes, I would recommend randomly splitting your available universe in that proportion (80:20). The “available” universe is the total pool of leads you would consider for the selection. Then, if you need a selection of 100,000 leads, use your current selection strategy on the 80% sample to produce the best 80,000 leads based on your selection criteria, and select the best 20,000 leads from the 20% sample using the new strategy. This gives you the opportunity to compare the outcomes of the two strategies and make a data-backed decision, since you would expect similar results when rolling either strategy out on 100% of your universe.
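The split-and-select procedure above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the universe, the attribute, and both scoring rules are placeholders standing in for your actual lead data and selection strategies.

```python
import random

def split_universe(leads, test_fraction=0.2, seed=42):
    """Randomly split the available universe into a control sample
    (current strategy) and a test sample (new strategy) to avoid
    selection bias."""
    rng = random.Random(seed)
    shuffled = leads[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def select_top(sample, score_fn, n):
    """Pick the n best leads from a sample under a given scoring rule."""
    return sorted(sample, key=score_fn, reverse=True)[:n]

# Hypothetical universe of 500,000 leads with a placeholder attribute.
universe = [{"id": i, "attr": random.random()} for i in range(500_000)]

control, test = split_universe(universe, test_fraction=0.2)

# A 100,000-lead selection, split proportionally: 80,000 from the
# control arm via the current strategy, 20,000 from the test arm via
# the new one (both scoring rules are placeholders here).
current_pick = select_top(control, lambda l: l["attr"], 80_000)
new_pick = select_top(test, lambda l: l["attr"], 20_000)
```

Because the split is random, the two arms are comparable, and the per-arm outcomes can be measured against each other directly.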
Is it better to use a single, or multiple channels?
A combination of channels typically provides better results, but it’s important to test and monitor the results each channel gives you to fully understand the benefits. Using more channels is also likely to increase costs, so you need to find the ideal balance between benefit and cost. I recommend channel optimisation, where you use optimisation analytics to assign the best channel, or combination of channels, to each selected lead. Developing a channel optimisation strategy, however, requires a large volume of historic test data, so it is important to test your channel approach and store the data for future use, following the test approach described above.
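One simple way to think about assigning a channel per lead is to pick the channel with the highest expected net value. The sketch below assumes hypothetical per-channel contact costs and per-lead predicted response rates; real channel optimisation would be driven by the historic test data described above.

```python
# Hypothetical per-channel contact costs (not real figures).
CHANNEL_COST = {
    "email": 0.05,
    "sms":   0.10,
    "call":  2.50,
}

def best_channel(response_rates, lead_value):
    """Assign the channel with the highest expected net value:
    predicted response rate on that channel * lead value - contact cost."""
    def net_value(channel):
        return response_rates[channel] * lead_value - CHANNEL_COST[channel]
    return max(CHANNEL_COST, key=net_value)

# Example lead: responds best to calls, but the call cost outweighs
# the higher response rate, so a cheaper channel wins.
lead_rates = {"email": 0.02, "sms": 0.01, "call": 0.08}
print(best_channel(lead_rates, lead_value=30.0))  # -> email
```

The same comparison, run lead by lead, is where the benefit/cost balance mentioned above shows up concretely: a channel that converts better can still lose once its cost is counted.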
How do I determine whether this approach is working?
Keep a small, random sample in your selection that you can monitor separately. This sample represents the original, full database you were considering for selection, prior to applying any smarts. Compare the results from this sample to the lead selection produced by your analytically driven strategy. This allows you to monitor the lift provided by predictive models and analytically driven strategies.
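The lift measurement itself is a simple ratio of conversion rates. The campaign figures below are invented purely for illustration:

```python
def conversion_rate(conversions, contacted):
    """Fraction of contacted leads that converted."""
    return conversions / contacted

def lift(model_rate, holdout_rate):
    """Lift of the model-driven selection over the random holdout."""
    return model_rate / holdout_rate

# Hypothetical campaign outcomes.
holdout_rate = conversion_rate(conversions=40, contacted=5_000)     # 0.8%
model_rate = conversion_rate(conversions=1_760, contacted=80_000)   # 2.2%

print(f"Lift: {lift(model_rate, holdout_rate):.2f}x")  # Lift: 2.75x
```

A lift above 1.0 means the analytical selection is outperforming a purely random one; tracking this number campaign by campaign shows whether the approach keeps working.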
Can you prove the success of this approach?
A data-driven approach has resulted in significant lifts for some of our clients. Comparing the results of campaigns using data-driven lead selections to the random sample (or holdout), one of our insurance clients experienced a 5.5 improvement in quotation rate lift and a 5.5 improvement in activation rate lift.
When compared to competitors, our predictive models have also been proven to deliver higher lift, on both a campaign and a monthly basis.
If you have any questions that I haven’t yet answered in one of the three blogs in this mini-series, feel free to contact the Principa team. You can also have a look at Genius Leads for more information.
About the Author, Francel Mitchell
Francel Mitchell is the Head of Decision Analytics at Principa. Francel’s team has a winning track record using descriptive, predictive and prescriptive analytical techniques within the financial services, marketing and loyalty sectors. By utilising available data and applying advanced analytical techniques, the team takes pride in its ability to predict human behaviour, helping businesses make profitable decisions.