We have a bit of a joke in the office about how data scientists in 2027 will have a good laugh at what we define as ‘Big Data’ in 2017. Pat pat, there there, I guess that was Big Data back then. Unlike the term Big Data, Machine Learning is here to stay. It is, after all, one of the foundations of Artificial Intelligence, which is rapidly becoming part of our culture. The impact of Machine Learning is felt on a daily basis, whether we are using interactive devices like Amazon’s Echo to do our shopping, learning a language through Duolingo, or interacting with chatbots to get a statement in under a second instead of waiting “for the next available agent”. So what has happened? Why the recent explosion of Machine Learning applications?
Firstly, to set the record straight, Machine Learning is not a new invention. In 1959, Arthur Samuel developed a self-training Checkers algorithm that reached ‘amateur status’. That’s right, people: 1959. Of course things have moved on since then, with Google’s AlphaGo beating Lee Sedol 4 games to 1 in the game of Go in 2016. Go is like chess on steroids, with 10^761 possible moves (compared with chess’s 10^120 possible moves). The winning machine ran on 1,920 CPUs and 280 GPUs (roughly 1 MW of power) versus the brightly lit 20 W bulb in Lee’s head. And Lee can do so much more than play Go: he can brush his teeth, line up dominoes, drive a car, and hold a deep and meaningful conversation. But unless you have been stuck on a deserted island, you will have noticed that things have been moving on quite quickly recently. It’s very much to do with timing. Over the last 20 years, processing and storage costs have dropped dramatically while computers have become exponentially more powerful.
This has resulted in a bit of a snowball effect:
Off the back of more powerful and cheaper computers, faster algorithms are developed and applied to more data, and this leads to…
Impressive lift in quick time, and this leads to…
A thirst for (and investment in) more data, and this leads to…
Investment in even faster algorithms that can handle more data… and the snowball keeps rolling.
On top of this we have a rapidly growing open source movement with Machine Learning capabilities: Python and R have well and truly been embraced by the data science community. The fruits of heavy investment in Machine Learning technologies by big players like IBM, Google, Microsoft and Amazon (to name a few) are now being realised and thrown open to the data science community, often for free. There is now a plethora of impressive ML tools that do not require a PhD in Mathematics or Statistics. Your average data citizen can now tap into the most advanced algorithms and yield impressive results in a short space of time and at a low cost (sometimes at no cost).
There are numerous examples of how Machine Learning is changing the way we live. In financial services too, Machine Learning is being used to improve efficiencies, reduce costs and increase revenues. For example: identifying customers who are about to leave for a competitor (do you let them go, or do you intervene?), or retraining models to more accurately predict whether someone is going to be a good customer (or not), so that onboarding costs are channelled into the right areas.
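To make the churn example above concrete, here is a minimal, purely illustrative sketch in Python. The field names and weights are entirely hypothetical (hand-set, not learned); a real model would fit these coefficients from historical customer data, but the scoring step looks much the same.

```python
import math

# Hypothetical, hand-set weights for illustration only -- in practice these
# would be learned from historical customer data by a Machine Learning model.
WEIGHTS = {
    "months_since_last_purchase": 0.30,
    "support_complaints": 0.45,
    "years_as_customer": -0.20,
}
BIAS = -1.5

def churn_probability(customer: dict) -> float:
    """Logistic (sigmoid) score: the closer to 1.0, the likelier the churn."""
    z = BIAS + sum(WEIGHTS[k] * customer.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A long-dormant, frequently complaining customer scores high...
at_risk = churn_probability(
    {"months_since_last_purchase": 8, "support_complaints": 3, "years_as_customer": 1}
)
# ...while a loyal, recently active one scores low.
happy = churn_probability(
    {"months_since_last_purchase": 1, "support_complaints": 0, "years_as_customer": 10}
)
print(f"at risk: {at_risk:.2f}, happy: {happy:.2f}")
```

The business decision (let them go, or intervene?) then becomes a matter of choosing a probability threshold that suits your retention budget.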
If you have a team of good data scientists who help you extract the most value out of your data universe, well done; hang onto them. However, if you have data that can provide rich insights into your customers and give you the edge over your competitors, but don’t have a team of data scientists to carefully construct Machine Learning models, then you will probably be very interested in MLaaS or, you guessed it, Machine Learning as a Service. This is a growing trend, with various options now available to the end user. Take a look at the graph below, which shows the rising frequency of searches for “Machine Learning as a Service” (where 100 on the y-axis simply marks the peak).
Examples of available MLaaS tools include IBM’s Watson, Microsoft Azure ML and BigML, to name but a few. Load up your data, press some buttons and, voilà! BigML’s tagline neatly summarises their approach: “Shockingly simple Machine Learning tasks using BigML's REST API”.
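The “press some buttons” step typically boils down to a handful of authenticated REST calls. The sketch below builds (but does not send) such a request with Python’s standard library; the endpoint URL, API key and payload fields are all hypothetical stand-ins, so consult your chosen provider’s API reference for the real resource names.

```python
import json
import urllib.request

# Hypothetical endpoint and API key, for illustration only -- a real MLaaS
# provider (BigML, Azure ML, etc.) documents its own URLs and auth scheme.
ENDPOINT = "https://mlaas.example.com/v1/models"
API_KEY = "your-api-key-here"

payload = {
    "name": "churn-model",
    "dataset": "customer-history-2017",
    "objective_field": "churned",  # the column the model should predict
}

# Build the HTTP POST request the service would receive.
request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
print(request.get_method(), request.full_url)
```

Training, evaluation and prediction then follow the same pattern against different resource URLs, which is precisely what makes these services approachable.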
But even with the wide range of easy-to-use MLaaS tools available that can accelerate the incorporation of Machine Learning into your business, there are still a couple of things missing from this approach: experience and business knowledge. One still needs experienced users who know what to look out for, how to put the available data together in the correct way, how to keep the wheels from coming off (and they can come off), and which directions to take in order to answer the actual business problem. The MLaaS tools available might give you all the required machinery, but they lack the intuition, and the ability to listen, that make a good data scientist. If one really is short on data science resources, then a more collaborative partnership is required with someone who has “been there, done that”.
Principa has realised that this is a key gap in what is currently available, and our MLaaS offering, Genius, has been developed to get you going quickly and safely. We have identified a few key applications of which we have intimate knowledge. We know what data is required, how the data needs to be put together, what to look out for and, of course, what Machine Learning tricks to apply to this data to give you the best solution for the specific application. We think it’s Genius.
About the Author, Robin Davies
Robin Davies is the Head of Product Development at Principa. Robin’s team packages complex concepts into easy-to-use products that help our clients to lift their business in often unexpected ways.