The Data Analytics Blog

Our news and views relating to Data Analytics, Big Data, Machine Learning, and the world of Credit.


Statistical Models Vs Machine Learning Models - What's Your Flavour?

September 11, 2018 at 8:05 AM

We are quite proud of our ability to develop performant, stable and trustworthy predictive models here at Principa. For nearly 20 years, we have been developing predictive models that have helped many of our clients to make better decisions, more often than not outperforming what our best competitors can achieve. The models that we have historically developed belong to the additive family of models – that is, a handful of predictive characteristics are selected and classed in a way that best separates the ‘goods’ from the ‘bads’ (i.e. the traditional binary classification application). When scoring new, unseen data, the weightings for the applicant's classes are added together to get a final score. For example, consider a 3-feature model that uses only Home Ownership, Years at Employer and Age. Let's say you are a homeowner (10 points), you have been with your employer for 5+ years (15 points), and you are 23 years of age (8 points); your final score is then 10 + 15 + 8 = 33, and the strategy will use this score to decide where you should go in the business decision tree.
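The 3-feature example above can be sketched in a few lines of code. This is a minimal illustration of an additive scorecard, not Principa's actual model: the characteristics, class boundaries and point values are all made up for the example.

```python
# A minimal additive scorecard mirroring the 3-feature example in the text.
# All bins and point values are illustrative, not a real scorecard.

SCORECARD = {
    "home_owner": {True: 10, False: 0},
    "years_at_employer": {"0-1": 0, "2-4": 7, "5+": 15},
    "age_band": {"18-25": 8, "26-40": 12, "41+": 20},
}

def score(applicant: dict) -> int:
    """Look up the points for each characteristic's class and add them up."""
    return sum(SCORECARD[feature][value] for feature, value in applicant.items())

applicant = {"home_owner": True, "years_at_employer": "5+", "age_band": "18-25"}
print(score(applicant))  # 10 + 15 + 8 = 33
```

Because the score is a plain sum of points, each characteristic's contribution can be read straight off the scorecard, which is a large part of why these models are easy to explain and monitor.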

We follow our own recipe, refined over the years, when developing these models: making sure that the output from the models will fulfil the business's needs, carefully specifying what is required during the data sourcing phase, and thoroughly understanding the underlying data before even thinking about developing the models, amongst many other things. Basically, working towards a "no surprises" delivery of the models into production for our clients. On the modelling side, let's say that we have also learnt a few tricks along the way. Yes, our modelling approach can sometimes be compared to standard logistic regression, but our models are similar to logistic regression models only in the final structure of the model. We use advanced techniques to give us close-to-optimal bins and weightings. Let's not go too much into our technical approach here, but suffice it to say that our models (a) successfully separate the goods from the bads, (b) provide accurate point predictions that can be used with confidence in a business strategy, and (c) do not degrade rapidly over time, due to the inherent nature of additive models and how they are constructed.
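The binning techniques themselves are proprietary, but a standard building block in scorecard development generally is the weight of evidence (WOE) of each bin, which measures how strongly a bin separates goods from bads. The sketch below shows the textbook WOE calculation on made-up counts; it is not a description of Principa's own method.

```python
import math

# Textbook weight-of-evidence (WOE) calculation for a binned characteristic.
# WOE = ln( (goods in bin / total goods) / (bads in bin / total bads) )
# The counts below are invented purely for illustration.

bins = {            # bin -> (goods, bads)
    "owner":  (800, 50),
    "renter": (500, 120),
    "other":  (200, 80),
}

total_goods = sum(g for g, _ in bins.values())   # 1500
total_bads  = sum(b for _, b in bins.values())   # 250

woe = {
    name: math.log((g / total_goods) / (b / total_bads))
    for name, (g, b) in bins.items()
}

for name, value in woe.items():
    print(f"{name}: {value:+.3f}")  # positive WOE = bin is richer in goods
```

Bins with similar WOE values are candidates for merging, and the spread of WOE across bins feeds directly into how many points each class ultimately earns in the scorecard.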

Enter the wave of machine learning. 

It took us a while to realise that we have been developing machine learning models since the beginning of our time. We don't brag about this, as we know it is not entirely true. We have been building statistical models that are retrained only when we see a degradation in performance (e.g. they are deployed in stable environments and only need to be redeveloped every year, or even every few years, really!). Some might consider these models to fall outside of machine learning. There are numerous debates about what machine learning is and what it is not - you can find a thorough debate here: https://stats.stackexchange.com/questions/158631/why-is-logistic-regression-called-a-machine-learning-algorithm.

Constructing machine learning models is generally regarded as a computer science challenge – i.e. the underlying model is more complex in nature, and the challenges faced relate more to computational efficiency (i.e. quicker processing of the complex trees or neural nets) than to nuances around the statistical approach. For example, a well-established machine learning algorithm that frequently wins Kaggle.com competitions is the Gradient Boosted Machine (GBM), which is computationally expensive to construct, especially on large training sets. Depending on the size of the data you have to work with and your computing power, it can take hours to train a model, as there are often thousands of trees that need to be optimally constructed and tested for a range of tuning parameters. This is often the trade-off between statistical and machine learning approaches. Statistical models are quick to re-class but require an analyst to construct the model carefully and fine-tune the bins to ensure stability in the model, avoid overfitting, etc. On the other hand, the construction of machine learning models depends on parameters and hyperparameters that can quite dramatically affect the performance of the final model. Where the underlying training problem is non-linear in nature (as is often the case), it is not easy to find the optimal parameters, and the best-known method is to run multiple experiments using a grid search or other, more advanced approaches that are also computationally expensive (e.g. using Bayesian techniques or genetic algorithms to find the optimal parameters).
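To make the grid search idea concrete, here is a bare-bones sketch. The hyperparameter names are typical GBM ones, and `validation_error` is a hypothetical stand-in: in reality each call would be a full, expensive training-and-validation run, which is exactly why the 27 combinations below can translate into hours of compute.

```python
import itertools

# A minimal exhaustive grid search over GBM-style hyperparameters.
# validation_error() is a toy stand-in for "train the model, score it
# on a hold-out set" - the expensive step in a real search.

grid = {
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [3, 5, 7],
    "n_trees": [100, 500, 1000],
}

def validation_error(params: dict) -> float:
    # Hypothetical objective: pretend the sweet spot is lr=0.05, depth=5, 500 trees.
    return (abs(params["learning_rate"] - 0.05)
            + abs(params["max_depth"] - 5) / 10
            + abs(params["n_trees"] - 500) / 1000)

best_params, best_error = None, float("inf")
for combo in itertools.product(*grid.values()):      # 3 * 3 * 3 = 27 runs
    params = dict(zip(grid.keys(), combo))
    err = validation_error(params)
    if err < best_error:
        best_params, best_error = params, err

print(best_params)
```

Bayesian optimisation and genetic algorithms attack the same loop, but choose the next combination to try based on the results so far instead of exhaustively enumerating the grid.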

There have been great strides in optimising how quickly these models are constructed, often using C++ to do the heavy computational lifting. For example, moving from the GBM library to XGBoost to LightGBM can reduce processing times by roughly a factor of five at each step: it might take a GBM library, say, 100 minutes to fit a new model, 20 minutes for XGBoost and 4 minutes for LightGBM. What this gives you is the ability to run more experiments in the time you have available, or the same number of experiments in a shorter time, or somewhere in between. We now have a good track record of constructing machine learning models, with a few deployed into production as challengers to our incumbent statistical models. The performance is very similar (sometimes slightly in favour of the boosted algorithm), which is an excellent outcome for us, as the bar was set very high with our statistical models.

The real benefit of the machine learning approach is that it lends itself to efficient retraining of the models into the future. This is particularly true if you have set up a machine learning pipeline that provides a data feedback loop, updating your training dataset as the outcomes are observed from ongoing campaigns. Efficient retraining means that the model's performance does not degrade over time and that the model adjusts for possible fundamental shifts in the operating environment. There are rapid and ongoing advances in the tools available (often open source) to efficiently construct performant models without the need to employ and run a large team of data scientists. We still construct both statistical and machine learning models, where challenges around deployment are often the deciding factor. We are in a fortunate position to see how these two different approaches compare to each other, and it is going to be interesting to see how things unfold as the machine learning landscape changes and develops further.
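The feedback loop described above can be sketched as a small pipeline object: observed campaign outcomes are appended to the training set, and a retrain is triggered once enough new labelled cases have accumulated. All class and method names here are illustrative, and the retrain step is a placeholder where a real pipeline would refit, say, a LightGBM model.

```python
# A sketch of a data feedback loop with periodic retraining.
# Names and the retrain trigger are assumptions for illustration only.

class RetrainingPipeline:
    def __init__(self, retrain_threshold: int = 1000):
        self.training_data = []              # list of (features, outcome) pairs
        self.retrain_threshold = retrain_threshold
        self.new_since_retrain = 0
        self.model_version = 0

    def record_outcome(self, features: dict, outcome: int) -> None:
        """Feed an observed campaign outcome back into the training set."""
        self.training_data.append((features, outcome))
        self.new_since_retrain += 1
        if self.new_since_retrain >= self.retrain_threshold:
            self.retrain()

    def retrain(self) -> None:
        # Placeholder: refit the model on self.training_data here.
        self.model_version += 1
        self.new_since_retrain = 0

pipeline = RetrainingPipeline(retrain_threshold=3)
for i in range(7):
    pipeline.record_outcome({"case": i}, outcome=i % 2)
print(pipeline.model_version)  # two retrains triggered by 7 outcomes
```

The retrain trigger could equally be a calendar schedule or a monitored performance metric; the key point is that the training set keeps pace with the operating environment without an analyst driving every redevelopment.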


Robin Davies
Robin Davies was the Head of Product Development at Principa for many years during which Robin’s team packaged complex concepts into easy-to-use products that help our clients to lift their business in often unexpected ways. Robin is currently the Head of Machine Learning at a prestigious firm in the UK.

Latest Posts

The 7 types of credit risk in SME lending

It is common knowledge in the industry that the credit risk assessment of a consumer applying for credit is far less complex than that of a business applying for credit. Why is this the case? Simply put, consumers are usually very similar in their requirements and risks (homogeneous) whilst businesses have far more varying risk elements (heterogeneous). In this blog we will look at all the different risk elements within a business (here SME) credit application. These are:

1) Risk of proprietors
2) Risk of business
3) Reason for loan
4) Financial ratios
5) Size of loan
6) Risk of industry
7) Risk of region

Before we delve into this list, it is worth noting that all of these factors need to be deployable as assessment tools within your originations system, so it is key that you ensure your system can manage them. If you are on the lookout for a loans origination system, then look no further than Principa’s AppSmart. If you are looking for a decision engine to manage your scorecards, policy rules and terms of business, then take a look at our DecisionSmart business rules engine. AppSmart and DecisionSmart are part of Principa’s FinSmart Universe, allowing for effective credit management across the customer life-cycle.

The different risk elements within a business credit application

1) Risk of proprietors

For smaller organisations the risk of the business is inextricably linked to the financial well-being of the proprietors. How small is small? The rule of thumb is that companies with up to two to three proprietors should have their proprietors assessed for risk too. This fits in with the SME segment. What data should be looked at? Generally, in countries with mature credit bureaux, credit data is looked at, including the score (there is normally a score cut-off) and then negative information such as the existence of judgements or defaults; these are typically used within policy rules. Those businesses with proprietors with excessive numbers of “negatives” may be disqualified from the loan application. Some credit bureaux offer a score of an individual based on the performance of all the businesses with which they are associated. This can also be useful in the credit risk assessment process. Another innovation being adopted internationally is the use of psychometrics in the credit evaluation of the proprietors. To find out more about adopting credit scoring, read our blog on how to adopt credit scoring.

2) Risk of business

The risk of the business should be managed through both scores and policy rules. Lenders will look at information such as the age of the company, the experience of directors and the size of the company within a score. Alternatively, many lenders utilise the business score offered by credit bureaux. These scores are typically not as strong as consumer scores, as the underlying data is limited and sometimes problematic. For example, large successful organisations may have judgements registered against their name which, unlike for consumers, is not necessarily a direct indication of an inability to service debt.

3) Reason for loan

The reason for a loan is used more widely in business lending than in unsecured consumer lending. Venture capital, working capital, invoice discounting and bridging finance are just some of the many types of loans/facilities available, and lenders need to equip themselves with the ability to manage each of these customer types, whether in originations or collections. Prudent lenders venturing into the SME space for the first time often focus on one or two of these loan types and then expand later, as the operational implication for each type of loan is complex.

4) Financial ratios

Financial ratios are core to commercial credit risk assessment. The main challenge here is to ensure that reliable financials are available from the customer. Small businesses may not be audited and thus the financials may be less trustworthy.

Financial ratios can be divided into four categories: profitability, leverage, coverage and liquidity. Profitability can be further divided into margin ratios and return ratios. Lenders are frequently interested in gross profit margins; this is normally explicit on the income statement. The EBITDA margin and operating profit margins are also used, as well as return ratios such as return on assets, return on equity and risk-adjusted returns. Leverage ratios are useful to lenders as they reflect the portion of the business that is financed by debt. Lower leverage ratios indicate stability. Leverage ratios assessed often incorporate debt-to-asset, debt-to-equity and asset-to-equity. Coverage ratios indicate the coverage that income or assets provide for the servicing of debt or interest expenses. The higher the coverage ratio, the better it is for the lender. Coverage ratios are worked out considering the loan/facility that is being applied for. Finally, liquidity ratios indicate the ability of a company to convert its assets into cash. There are a variety of ratios used here. The current ratio is simply the ratio of assets to liabilities. The quick ratio is the ability of the business to pay off its current debts with readily available assets. The higher the liquidity ratios, the better. Ratios are used both within credit scorecards and within policy rules. You can read more about these ratios here.

5) Size of loan

When assessing credit risk for a consumer, the risk of the consumer does not normally change with the loan amount or facility (subject to the consumer passing affordability criteria). With business loans, loan amounts can range quite dramatically, and the risk of the applicant is normally tied to the loan amount requested. The loan/facility amount will of course change the ratios (mentioned in the last section), which could affect a positive/negative outcome. The outcome of the loan application is usually directly linked to a loan amount, and any marked change to this loan amount would change the risk profile of the application.

6) Risk of industry

The risk of the industry in which the SME operates can have a strong deterministic relationship with the entity being able to service the debt. Some lenders use this, and those who do not normally identify it as a missing element in their risk assessment process. The identification of industry is always important. If you are in manufacturing, but your clients are the mines, then you are perhaps better identified as operating in mining as opposed to manufacturing. Most lenders who assess industry will periodically rule out certain industries and perhaps also incorporate industry within their scorecard. Others take a more scientific approach. In the graph below, the performance of an industry is tracked for two years and then projected over the next 6 months; this is then compared to the country’s GDP. As the industry appears to track above the projected GDP, a positive outlook is given to this applicant, and this may affect them favourably in the credit application.

7) Risk of region

The last area of assessment is risk of region. Of the seven, this one is used the least. Here businesses, either on book or on the bureau, are assessed against their geo-code. Each geo-code is clustered, and the projected outlook is given as positive, static or negative. As with industry, this can be used within the assessment process as a policy rule or within a scorecard.

Bringing the seven risk categories together in a risk assessment

These seven risk assessment categories are all important in the risk assessment process. How you bring it all together is critical. If you would like to discuss your SME evaluation challenges or find out more about what we offer in credit management software (like AppSmart and DecisionSmart), get in touch with us here.

Collections Resilience post COVID-19 - part 2


Collections Resilience post COVID-19
