Since the financial crisis of 2008, financial institutions have sought to reduce costs and optimize capital allocation. Banks need to develop robust internal models to cover losses, since these models are used to determine the amount of capital required by regulation. The approach used by banks is generally based on classical methods such as logistic regression. With the emergence of artificial intelligence, however, banks can improve the performance of their internal models by using more recent machine learning algorithms, which can achieve higher levels of predictive accuracy. The processing capabilities of modern computers now allow the use of methods developed years ago, such as Deep Learning, tree-based algorithms (Decision Trees, Random Forests and Gradient-Boosting Machines), or techniques that combine several machine learning models (such as Stacking). In the credit risk industry, the use of these techniques, especially for regulatory purposes, is met with skepticism.
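As a rough illustration of the model families mentioned above, the sketch below fits a logistic regression baseline, two tree-based ensembles, and a stacking combination with scikit-learn. It is not the report's actual setup: the synthetic dataset, hyperparameters, and the AUC metric are assumptions made for the example only; a real credit-default dataset would replace the generated data.

```python
# Minimal sketch: baseline vs. ensemble vs. stacking on synthetic "default" data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a default / no-default target (imbalanced classes)
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
# Stacking combines the base learners through a meta-model (here a logistic regression)
models["stacking"] = StackingClassifier(
    estimators=[(name, m) for name, m in models.items()],
    final_estimator=LogisticRegression(max_iter=1000),
)

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```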
Machine Learning models often lack interpretability. As described by Miller, interpretability is "the degree to which a human can understand the cause of a decision". Machine Learning methods are often described as black-box methods that do not provide an explanation of the results obtained. In the banking context, it is important to know the explanation behind a prediction: banks and regulators need to be able to understand the predictions of these models in order to meet regulatory constraints.
This is why research is focusing on interpretability methods, so that Machine Learning algorithms can become an integral part of the toolbox used in credit risk.
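As one illustration of what such an interpretability method can look like, the sketch below uses permutation importance, a model-agnostic technique available in scikit-learn. It is given here only as an example and is not necessarily one of the methods compared later in the report; the dataset and model are again synthetic assumptions.

```python
# Illustrative sketch: permutation importance on a "black-box" model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-default dataset
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A model whose individual predictions are hard to read off directly
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the held-out AUC degrades;
# large drops indicate features the model relies on most.
result = permutation_importance(model, X_test, y_test, scoring="roc_auc",
                                n_repeats=10, random_state=0)
ranking = sorted(enumerate(result.importances_mean), key=lambda t: t[1], reverse=True)
for idx, importance in ranking[:5]:
    print(f"feature_{idx}: mean AUC drop = {importance:.4f}")
```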
This report aims to present and test relevant methods for making machine learning models more interpretable. We first describe the credit risk environment, then address the issue of Machine Learning interpretability and present some solutions. We then turn to the practical part, where we work on a Kaggle dataset in order to apply and compare the interpretability solutions, before ending with a discussion of the results.