October 14, 2015 at 5:55 PM
When we started our Man vs. Machine (Learning) initiative, we did so to have a bit of fun and to learn a few lessons we could apply to earning our bread and butter: predicting customer behaviour, customer lifetime value and creditworthiness, among other things, for our clients.
We’re now half-way through the Rugby World Cup and, despite a couple of upsets that our algorithms did not predict, one of our two Machine Learning teams – Nero – ranks higher than 99.68% of all human players on the sports prediction site SuperBru.com. Why? We reckon it’s because our predictions are based purely on data and the patterns and correlations within that historical data (6,000 rugby matches played by 99 teams over the past 20 years). No emotional influence or “gut” feel comes into play to bias a prediction. What’s more, the predictive analytics tools we use can identify significant patterns and correlations across different data at a scale the human brain cannot match.
As we are now half-way through this challenge we set for ourselves, we thought we’d share some of the key lessons learned so far and the challenges we face in predicting the results of the upcoming knock-out-stage Rugby World Cup matches.
One of the main reasons we embarked on this fun challenge was to learn. Almost three weeks into it, we have learned a few lessons and have even managed to get some media attention to boot!
Here are some of the key lessons:
- The principles we apply in predicting customer behaviour for our customers can be applied successfully in other areas, such as predicting the outcomes and margins of World Cup rugby games. All of the knowledge and skills we apply every day for our customers – data sourcing, scrubbing, validation, characteristic generation, sampling, model development and testing – were applied to our Man vs. Machine challenge, and doing so has yielded predictions with a high degree of accuracy.
- Retraining the predictive models after every set of games has been shown to improve our predictions – something we expected to be the case, but nevertheless good to see proven. We’ve realised that this regular retraining of predictive models can be of great value for businesses requiring predictions in more dynamic areas, such as call centres or fraud detection.
- Combining predictive models – an ensemble modelling technique – is an effective way of further improving the accuracy of results. This approach extracts the strengths of each model into one final model that performs better than any individual model used on its own.
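The retraining and ensemble ideas in the last two lessons can be sketched, very roughly, in Python. Everything below – the toy rating update, the head-to-head average, the 0.6/0.4 weights and the invented match results – is illustrative only, not our actual models or data:

```python
# Sketch: two simple margin predictors, updated ("retrained") after every
# round of results, then combined into a weighted ensemble prediction.
from collections import defaultdict

class RatingModel:
    """Toy Elo-style rating: each team's rating drifts toward observed margins."""
    def __init__(self, k=0.1):
        self.k = k
        self.rating = defaultdict(float)

    def predict(self, home, away):
        return self.rating[home] - self.rating[away]

    def update(self, home, away, margin):
        error = margin - self.predict(home, away)
        self.rating[home] += self.k * error
        self.rating[away] -= self.k * error

class HeadToHeadModel:
    """Average historical margin between the two teams (0 if they never met)."""
    def __init__(self):
        self.margins = defaultdict(list)

    def predict(self, home, away):
        past = self.margins[(home, away)]
        return sum(past) / len(past) if past else 0.0

    def update(self, home, away, margin):
        self.margins[(home, away)].append(margin)
        self.margins[(away, home)].append(-margin)

def ensemble_predict(models, weights, home, away):
    # Weighted average of the individual models' margin predictions.
    return sum(w * m.predict(home, away) for m, w in zip(models, weights))

models = [RatingModel(), HeadToHeadModel()]
weights = [0.6, 0.4]  # illustrative weights, not a tuned choice

# Retrain after every set of games, as described above (scores invented).
rounds = [
    [("NZL", "ARG", 10), ("RSA", "JPN", -2)],
    [("NZL", "RSA", 7),  ("ARG", "JPN", 15)],
]
for results in rounds:
    for home, away, margin in results:
        for m in models:
            m.update(home, away, margin)

print(round(ensemble_predict(models, weights, "NZL", "ARG"), 2))  # → 4.58
```

In a real pipeline the component models would be far richer, but the shape is the same: refresh each model with the latest results, then blend their outputs rather than betting on any single one.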
Although we have been very accurate in predicting the outcome of the pool matches, the challenge has been predicting the margin of the scores – specifically, getting within five points of the actual game margin. This past week our margin predictions were almost spot on for the last few matches, until the last 10 or 15 minutes of the games, when an additional try or two were scored before the final whistle.
However, as we move into the knock-out stages this weekend, we expect to be more accurate in our margin predictions, since the teams playing are closer in performance and world rankings. Our challenge now shifts away from predicting the margin and more towards predicting the actual outcome.
Besides the teams being more evenly matched, their strategies in the knock-out rounds might change, which will be difficult for our models to predict; for example, a team may adopt a more defensive strategy – a behavioural shift that is difficult for the models to capture.
Fortunately, the teams playing have been playing against each other for many years, so there is more data available for us to develop our predictive models for these teams than there has been for some of the pool games.
One final challenge in obtaining an accurate prediction is minimising the impact of unpredictable variables – referee calls, last-minute player decisions, injuries – that could affect the outcome of a match. Any of these could make the difference between a team predicted to win and one that ends up losing.
Despite these challenges, by basing our predictions primarily on historical data, we have managed to rank in the top 0.32% of users on SuperBru. So, we seem to be winning the Man vs. Machine (Learning) Challenge. What’s interesting now is to see how our two “machines” have performed against each other and why one has been more accurate, more often, than the other.
Machine vs. Cyborg
Although 89% of our win / lose predictions were the same between our two teams, only 23% of our margin predictions were within five points of each other. This just shows how the data sources obtained, the subsequent processing of the data and the chosen modelling approach can significantly affect the accuracy of a predictive analytics system.
We created two teams internally, initially to create a bit of fun rivalry among our data scientists during the Rugby World Cup, but also to allow us to test two different modelling techniques against each other. The two teams set off in different directions – not sharing with each other what data sources they were using, what insights they had drawn or what approach they were taking to develop their predictive models. We did this to see how diverse the results could be, and that has indeed been the case.
So, which team is performing better, and why? If you’re a betting man, that’s literally the million-dollar question.
At the moment, Nero’s predictions have been more accurate more often than Trojan’s. And the reason for this is very interesting indeed. Team Trojan’s predictions have been based purely on statistics derived from the historical performance of the rugby teams analysed. Team Nero, on the other hand, has added to this the fantasy league’s values for those teams, as well as the bookies’ odds.
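As a rough illustration of how such market inputs can enter a model: bookmakers’ decimal odds can be converted into an implied win probability and blended with a purely statistical estimate. The odds, the 0.55 statistical probability and the 50/50 blend weight below are all invented for the example – they are not Nero’s actual figures or method:

```python
# Sketch: blending a statistics-only win probability (Trojan-style)
# with the probability implied by bookmaker decimal odds (Nero-style).

def implied_probability(decimal_odds_home, decimal_odds_away):
    """Convert decimal odds to a home-win probability, normalising away
    the bookmaker's overround (their built-in margin)."""
    raw_home = 1.0 / decimal_odds_home
    raw_away = 1.0 / decimal_odds_away
    return raw_home / (raw_home + raw_away)

def blended_win_probability(stat_prob, odds_home, odds_away, odds_weight=0.5):
    """Mix the statistical model's estimate with the market's view."""
    market = implied_probability(odds_home, odds_away)
    return (1 - odds_weight) * stat_prob + odds_weight * market

stat_prob = 0.55  # invented statistics-only estimate of a home win
nero_prob = blended_win_probability(stat_prob, odds_home=1.50, odds_away=2.60)
print(round(nero_prob, 3))  # → 0.592
```

The interesting point is that the odds encode thousands of human judgements, which is why folding them in amounts to adding a measured dose of “gut” feel.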
If Nero continues to out-perform Trojan, it will suggest that a little bit of human “gut” feel can improve the accuracy of predictive analytics. In a way, between our two teams it has now become less of a Machine vs. Machine challenge and more of a Machine vs. Cyborg one.