Many of us remember the hoopla around the predictive ability of the now-deceased FIFA World Cup predictor, Paul the Octopus. For those who don't recall, Paul was an octopus at a German aquarium who famously predicted, with 100% accuracy, the results of Germany's seven matches at the 2010 Soccer World Cup, as well as the final.
Goldman Sachs, defending its own 37.5% success rate at the time, contested that Paul would have been only 33% accurate had he been required to predict the results of all 48 group-stage games, draws included. Be that as it may, given the choice between breaking into the Cape Town Aquarium to kidnap an octopus to help us predict the outcome of the Rugby World Cup, or relying on data science and machine learning, we stuck to what we know and believe in, and it has paid off.
We simply believe in the power of machine learning and in a simple principle: predict future behaviour by analysing past behaviour, identifying patterns and trends, and leaning on the strong assumption that past behaviour carries forward into the future. One of the main drivers for initiating our Man vs. Machine-Learning initiative was to demonstrate the superiority of data science over gut feel (or even an octopus) as a basis for business decisions.
In our previous blog on this subject, we covered some of the lessons learned during the initial round of matches and the challenges we faced leading up to the knock-out rounds of last week.
In this blog, I’d like to go through some additional lessons learned during the knock-out rounds as we head into the final this weekend.
Adaptability is central to machine learning
Although our win and margin predictions for the Australia vs. Argentina match were spot-on, we overstated the margin in the All Blacks game (12 vs. 2). Our predictive models were built on a selection of historical data points, such as bookie odds, fantasy league value, world ranking and the number of tries scored in the last three games. We could have combined a wider range of characteristics that might have narrowed the gap for the knock-out rounds, but we had to draw the line somewhere in terms of the data we were injecting into our predictive models. Additional metrics like weather conditions, crowd size or even a referee's track record would have given us considerably more granular results, thereby narrowing the gap between our predictions and the actual results.
What this points to is the centrality of up-to-the-minute, dynamic data, some of which can only be determined in the 11th hour, to reaching the most accurate result. A combination of historical and near-live data yields much closer insight, which might appear almost psychic to the data novice. Machine learning, in essence, hinges on the availability of a wide spectrum of data from which it can learn, adapting its algorithms to more accurately calculate the possible outcomes of whatever situation it is applied to.
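To make the idea concrete, here is a minimal sketch of a linear margin predictor over hand-picked features, and of how an 11th-hour signal shifts its output. The feature names, values and weights are illustrative assumptions for this post, not our actual model inputs or coefficients.

```python
# A toy linear margin model: predicted points margin = weighted sum of features.
# All names and numbers below are illustrative, not our production model.

def predict_margin(features, weights):
    """Return the predicted points margin for one match."""
    return sum(weights[name] * value for name, value in features.items())

# Historical features, known well before kick-off.
historical = {
    "ranking_gap": 4.0,    # gap in world-ranking points
    "tries_last_3": 2.0,   # extra tries scored over the last three games
    "bookie_margin": 6.0,  # margin implied by bookmaker odds
}

# Illustrative weights, as if learned from past results.
weights = {
    "ranking_gap": 0.5,
    "tries_last_3": 1.0,
    "bookie_margin": 0.6,
    "heavy_rain": -4.0,    # near-live feature: rain tends to tighten games
}

base = predict_margin(historical, weights)

# Injecting an 11th-hour signal (a rain forecast) narrows the margin.
late = dict(historical, heavy_rain=1.0)
adjusted = predict_margin(late, weights)

print(base, adjusted)  # the adjusted margin is smaller than the base margin
```

The point of the sketch is the shape of the pipeline, not the numbers: stable historical features set the baseline, and near-live features nudge it just before kick-off.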
Your data needs to move at the speed of life
One often has grand ideas going into a real-world predictive modelling problem, but it is important to bear in mind that every change or new variable sets new dynamics in motion that predictive models need to factor in.
Our cognitive inner workings happen automatically thanks to the complexity of the human brain and its ability to process complex variables and stimuli instantaneously. This, of course, is the end goal of our adventures in machine learning and artificial intelligence, and it raises the question of how capable our current state of Information and Communications Technology (ICT) is of matching the human capacity to collect, filter, organise and analyse information, and ultimately to determine the right action based on those processes. Added to this is our ability to apply context to the stimuli we receive, which in turn influences our decision-making.
Let's use the case of the semi-finals: had we fed our predictive models contextual data indicating that these were knock-out matches rather than first-round games, our systems could have factored in the context-driven strategy changes teams make in such games, which could have narrowed the predicted score margin somewhat. Even so, we think we have done pretty well with the available data, sitting in the top 0.42% of the SuperBru league, and we wouldn't trade the rich insights we've gained for the inclinations of any tentacled beings anytime soon.
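One simple way to encode that context is a knock-out flag that shrinks the raw predicted margin, reflecting the tighter, lower-risk rugby typical of elimination games. This is a hypothetical sketch; the function name, the shrink factor and the example margin are assumptions for illustration, not what our models actually did.

```python
# Hypothetical context adjustment: knock-out games tend to be tighter,
# so shrink the raw predicted margin. The 0.7 factor is illustrative.

def adjust_for_context(predicted_margin, is_knockout, shrink=0.7):
    """Shrink the raw margin for knock-out matches; pass pool games through."""
    return predicted_margin * shrink if is_knockout else predicted_margin

raw = 12.0  # e.g. a raw 12-point margin like our All Blacks prediction
pool_margin = adjust_for_context(raw, is_knockout=False)
knockout_margin = adjust_for_context(raw, is_knockout=True)
print(pool_margin, knockout_margin)  # prints 12.0 8.4
```

In a fuller model the knock-out flag would simply be another input feature with a learned weight, rather than a hand-set multiplier, but the effect is the same: context compresses the predicted scoreline.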
As we learn and adapt our understanding of, and approaches to, predictive modelling, we'll continue to evolve toward the goals we've set for ourselves. We had fun and learned a lot in the machine learning space along the way, which is a positive outcome in itself. If anything, we've made our point: important business decisions should be based on data science and predictive analytics, not on gut feel or hocus-pocus.
Julian Diaz is Head of Marketing for Principa. American born and raised, Julian has worked in the IT industry for over 20 years. Having begun his career at a major software company in Germany, Julian made the move to South Africa in 1998, when he joined Dimension Data and later MWEB (a leading South African ISP). Since then, Julian has helped launch various South African technology brands into international markets.