Open access peer-reviewed chapter

Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2

Written By

Charmele Ayadurai and Sina Joneidy

Submitted: 14 December 2020 Reviewed: 04 January 2021 Published: 03 February 2021

DOI: 10.5772/intechopen.95806

From the Edited Volume

Operations Management - Emerging Trend in the Digital Era

Edited by Antonella Petrillo, Fabio De Felice, Germano Lambert-Torres and Erik Bonaldi


Abstract

Banks have experienced chronic weaknesses as well as frequent crises over the years. As bank failures are costly and affect global economies, banks are constantly under intense scrutiny by regulators, making banking the most highly regulated industry in the world today. As banks grow into the 21st-century framework, they need to embrace Artificial Intelligence (AI) not only to provide personalised, world-class service to their large customer bases but, most importantly, to survive. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS: Capital (C), Asset (A), Management (M), Earnings (E), Liquidity (L) and Sensitivity (S). The taxonomy partitions the AI-related challenges along the main strands of CAMELS into distinct categories of 1 (C), 4 (A), 17 (M), 8 (E), 1 (L) and 2 (S) that banks and regulatory teams need to consider in evaluating AI use in banks. Although AI offers numerous opportunities for banks to operate more efficiently and effectively, banks also need to give assurance that AI will 'do no harm' to stakeholders. Posing many unresolved questions, it seems that banks are trapped between the devil and the deep blue sea for now.

Keywords

  • bank
  • bank soundness
  • financial sector
  • Artificial Intelligence (AI)
  • CAMELS

1. Introduction

The Global Financial Crisis (GFC) showcased that even banks that are well established, operate in robust markets and are governed by tough and forceful regulatory frameworks can fail. Over the years banks have grown larger in size and become more complex and complicated in their operations, making them opaque and incomprehensible. Bank supervisors are still struggling to demystify the risks undertaken by banks during the GFC. The unknown risk posed by yet another black box, in the name of Artificial Intelligence (AI), could pose identical challenges of increased systemic fragility, bank failures and frozen capital markets, all evident during the GFC. As such, bank supervisors are critical of banks adopting AI, and to move forward banks need to give assurance that they will 'do no harm'. It is therefore important to consider the lesson from the GFC, which calls into question not only the soundness of regulations to capture risk but, most importantly, the need to critically evaluate the processes in place, in this case the efficacy of the implementation of AI. It is necessary to know whether AI promotes safety and soundness in the financial system or adds undue burdens to the markets [1]. As such, the study critically assesses the challenges posed by AI that could influence bank soundness in light of Capital (C), Asset (A), Management (M), Earnings (E), Liquidity (L) and Sensitivity (S) (CAMELS), the determinants of a bank's health and wellbeing [2, 3, 4], while 'doing no harm' to individuals, corporations and society as a whole.

The chapter contributes to the literature in several ways. Earlier research has either focused on AI application across the entire financial sector, covering banks, fintech companies, mortgage lenders and security companies, amongst others [5, 6, 7], evaluated AI applications in the form of Machine Learning (ML), Neural Networks (NN) and Artificial Neural Networks (ANN) in specific areas such as credit evaluation, portfolio management, and financial prediction and planning [2, 8, 9, 10], or examined the user-friendly experiences of end users [5, 6, 11]. These studies are therefore not sufficient to understand the challenges posed by AI from solely a bank's perspective. The chapter fills this gap by taking a holistic approach to scrutinising the challenges banks face specifically in deploying AI. In doing so, the chapter provides significant insight into the challenges that AI technology can pose to the banking industry, depleting its chances of survival. The chapter further considers bank soundness under AI from the various aspects of Capital (C), Asset (A), Management (M), Earnings (E), Liquidity (L) and Sensitivity (S) (CAMELS), the determinants of bank soundness. This chapter is the first to review the challenges of deploying AI in banking operations in light of CAMELS; earlier research [2, 3, 4] has only focused on bank soundness from the CAMELS perspective. The chapter also focuses on both the service-provider and customer ends, providing further insight to regulators on what they need to look into. Most importantly, the intention is to examine through the lens of CAMELS how sound banks are having applied AI to their processes.

The chapter is organised as follows: the next section presents a brief theoretical discussion on the application of AI in different sections of the bank, from front-office through middle-office to back-office operations. Section three introduces the literature-gathering and research method. Section four presents results and a discussion of the challenges banks face in applying AI, from the CAMELS perspective. The last section concludes the chapter and highlights directions for further research.


2. Literature review

Central banks worldwide have actively embedded AI into their daily operations, from microprudential and macroprudential supervision to information management, forecasting and detecting fraudulent activities. The Monetary Authority of Singapore applies AI to scrutinise suspicious transactions, while the central bank of Austria has developed a prototype for data validation. The central bank of Italy uses AI techniques to predict price moves in the real estate market [7].

Banks have also deployed AI from front-office through middle-office to back-office operations in different subsets of the bank [7, 12]. AI is not only widely applied in conventional banks; it has been actively embraced in Islamic banks as well [13, 14]. AI in the form of Neural Networks (NN) has been used in risk management; in forecasting, e.g. inflation [15, 16, 17, 18]; in identifying complex patterns [16, 19, 20]; and in predicting future stock behaviour, market trends, market response, real estate valuations, financial crises, bank failures, bankruptcy and exchange rates, as well as detecting credit card fraud [9, 20, 21, 22].

Artificial Neural Networks (ANN) have been used to analyse the relationships of bonds and stocks with economic and financial phenomena, futures and financial markets, loan applications and overdraft checks, loan scoring, credit scoring, creditworthiness, mortgage choice decisions, portfolio optimisation, portfolio management, asset value forecasting, index movement prediction, exchange rate forecasting, global stock index forecasting, portfolio selection, portfolio resource allocation, forecasting, planning, generating time series, credit evaluations, bankruptcy prediction, predicting thrift failures, financial distress prediction and decision making [22, 23, 24, 25, 26].

Backpropagation neural networks (BPNN) are applied to classify loan applications as good or bad [20]. Decision Trees (DT) are applied in credit risk classification [26], and Support Vector Machines (SVM) in corporate credit rating [26].
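To make the classification setting concrete, the following is a minimal sketch (not drawn from the cited studies) of a decision tree classifying synthetic loan applications with scikit-learn; the features, label rule and thresholds are invented purely for illustration:

```python
# Illustrative only: synthetic loan data, invented features and label rule.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50_000, 15_000, n)     # annual income (synthetic)
dti = rng.uniform(0.0, 0.8, n)             # debt-to-income ratio
history = rng.integers(0, 30, n)           # years of credit history
X = np.column_stack([income, dti, history])
# Synthetic ground truth: default risk driven mainly by debt-to-income
y = (dti + rng.normal(0, 0.1, n) > 0.5).astype(int)   # 1 = "bad" loan

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

A shallow tree like this can be printed as explicit if/else rules, which is part of why decision trees are favoured where loan decisions must be explained.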

Machine Learning (ML), a subfield of AI, is used in customer services such as search engines and product recommendations [27], and in managing customer online queries, performing voice recognition, carrying out predictive analysis, providing financial advice, analysing risks, managing assets and engaging in algorithmic trading [6]. ML is deployed for call-centre optimisation, mortgage decision making, relationship management, treasury-management initiatives, customer-credit decisions and equity trading [27, 28], where algorithmic trading is used to pick stocks and is able to fulfil the job specification of a portfolio manager [29, 30]. JP Morgan uses ML to execute trades in equity markets [31].

Big Data and Machine Learning (BD/ML) in the form of robo-advisers use algorithms to deliver stock recommendations, analyse incoming information for investors, provide investment advice and financial planning services, and make credit decisions [32]. Goldman Sachs uses the ML platform Kensho to offer automated analysis of breaking news, compiling the information into regular summaries. Wells Fargo, on the other hand, uses AIERA (Artificial Intelligent Equity Research Analyst) to issue buy and sell calls on stocks, with bank officers offering recommendations through this platform [32]. Several studies have worked with various ML models for credit scoring, namely ensemble classifiers [33, 34, 35, 36, 37, 38, 39], support vector machines [40, 41, 42, 43, 44, 45, 46], neural networks [45, 47, 48, 49, 50, 51, 52], genetic programming [53, 54, 55], Bayesian networks [56, 57, 58] and decision trees [50, 59, 60, 61]. Automated trading, where systems make independent decisions without human intervention or oversight, is evident in stock markets: autonomous trading accounts for about 75% of trading on Nasdaq [25].

Development in AI is moving in the direction of hybrid models, where two or more artificially intelligent systems are combined to enhance performance, namely intelligent systems that integrate several intelligent techniques for problem-solving, such as the combined efforts of a neural network and a fuzzy system [26].


3. Methodology

The research is conducted as a conceptual chapter with the aim of providing a deeper understanding of the opportunities presented by AI from a service-provider and customer perspective. To answer the research question of how able banks are to effectively deploy AI into their daily operations to improve CAMELS from a bank's perspective, a systematic review of the literature and objective observations were undertaken to examine banks through the lens of the bank soundness determinants of CAMELS. The observations found in the existing literature were gathered to assemble a framework categorised by CAMELS (Figure 1). The literature was gathered using the Scopus database as the main source; the database offers a wide range of management and business-related studies relevant to the topic of research. In addition, other databases such as Google Scholar, the Social Science Research Network (SSRN), SpringerLink and IEEE Xplore were also examined. Journal articles from the period 2000–2020 were extracted using the prescribed keywords Bank, Bank soundness, Financial Sector, Artificial Intelligence (AI) and CAMELS. Only articles available in full text and published in scholarly, peer-reviewed journals were chosen for close examination. The search was also conducted using the backward and forward approach, where the reference lists of articles were utilised to find further research papers.

Figure 1.

Taxonomy of challenges posed by AI on bank soundness - a classification based on the CAMELS determinants of bank soundness.


4. Findings and discussion

This section presents an overview of the challenges banks face in deploying AI in their daily front-, middle- and back-office operations, organised from the CAMELS perspective (see Table 1 in the Appendix).

4.1 Capital and liquidity

Bank capital acts as a core determinant of a bank's survival. Capital absorbs losses during adversity, and insufficient capital holdings can cause banks to collapse. AI, with its limitless abilities and capabilities, helps banks to hold robust capital through stress testing.

Banks have to hold sufficient liquidity funding to ensure that they are able to meet unforeseen deposit outflows. Banks that struggle to meet their daily liquidity needs will eventually fail [3]. Central banks, working on larger scales and overseeing the workings of the market, use AI to sort large numbers of banknotes and detect liquidity problems.

AI's ability to detect or uncover crises depends on the quantity and quality of the data provided and used to train the algorithms. If the dataset lacks important conditions such as economic crashes (normal periods are far more common than crisis periods), the limited crisis data could reduce AI's predictive abilities, and the output will have limited use in measuring or projecting future risk under stress [7, 32]. Such output will also have little value for banks setting their minimum prescribed capital/liquidity holdings (Basel accords) to remain solvent while lending through recessions [27]. Banks have little choice but to rely on theoretical distributions of losses and parametric statistical structures to link normal-times data to the large losses that cause instability; yet a more accurate prediction would come from data on the distribution of losses itself [27].
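The data-scarcity point above can be sketched with synthetic numbers (all invented for illustration): a classifier trained on a sample in which crisis observations are rare achieves high overall accuracy while catching few of the crisis cases, exactly the failure mode that matters for stress projections.

```python
# Illustrative only: synthetic "normal" vs "crisis" observations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_normal, n_crisis = 990, 10                # crises are rare in the sample
X = np.vstack([rng.normal(0.0, 1.0, (n_normal, 3)),
               rng.normal(1.5, 1.0, (n_crisis, 3))])
y = np.array([0] * n_normal + [1] * n_crisis)

clf = LogisticRegression().fit(X, y)
overall_acc = clf.score(X, y)                     # looks excellent
crisis_recall = clf.predict(X[y == 1]).mean()     # fraction of crises caught
print(f"overall accuracy: {overall_acc:.2f}, crisis recall: {crisis_recall:.2f}")
```

The headline accuracy is flattering only because predicting "normal" is almost always right; the model's grasp of crisis periods, the part relevant to capital and liquidity planning, is far weaker.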

4.2 Asset

Asset quality is measured by the level of credit risk contained in a bank's assets [62]. Therefore, a bank that can detect, measure, monitor and regulate credit risk will hold higher-quality assets [63]. The GFC showcased that credit risk is the most challenging risk to manage and control, as it not only absorbs profits but exposes banks to failure as well. AI helps banks to clearly assess and evaluate customers' risk, eliminating ambiguity and bias while improving loan processes.

Banks are accountable for each decision they make, and as such employ verification and checks at several levels to weed out incorrect or weak decisions. Loan officers should be able to provide a logical explanation of the grounds on which a loan has been accepted or rejected to their superiors, compliance officers, auditors, regulators and customers [5, 7, 10, 12, 64, 65]. The working logic of an AI decision has to be traceable backwards. Customers need to understand the reasons why their loan application has been rejected, or why AI has recommended a particular product, before acting on it. Keeping customers in the dark without proper justification cuts short their chances of determining the real cause behind the rejection, finding solutions to their problems and improving their circumstances, or proving identity theft if it has happened to them. In short, an adverse AI decision can have a permanent detrimental effect on someone's future [28, 64, 66, 67, 68].

Transparency is also important for fully trusting the system through validating the decisions made by AI: not only detecting anomalies in the decision process, such as bias, mistakes, manipulation of data, deficiencies, non-compliance with rules (e.g. GDPR) and cybersecurity crimes linked to work processes such as dataset poisoning, internal network manipulation and side-channel attacks [69], but also detecting clearly and precisely at which step the anomalies occurred and what information the AI fed itself [10, 12, 64, 66, 67, 68, 70, 71, 72].

Although AI can assess customers from various angles, namely with non-traditional data such as customers' connections, internet searches and network diversity, how reliable is this information for making an informed decision about a person's repayment ability, and thus their future? Does the credit score of a person increase if they socialise with those who are creditworthy? Borrowers may also be judged on how they behave online or on their dishonesty in disclosing financial data, forming biases and leading to unfair judgements [73]. Moreover, are customers even aware that non-traditional data is used in the evaluation process to assess their loan repayment ability [6]?

AI trained through supervised learning, where both inputs and outputs are fed into the system, has zero chance of bias unless the data fed in is itself biased. Data used to train an ML algorithm must be representative of the wide range of customers who will apply for loans, namely the whole population [27, 72]. If a population is underrepresented, or there are rare cases defined by gender, race, ethnicity, marital status or zero credit history, and this information is used to train the AI, the AI will deliver biased results if the data is highly correlated with these categories [7, 28, 72, 73, 74].
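A hedged sketch of the representativeness point (groups, features and relations are all synthetic inventions): when one group dominates the training data and an underrepresented group follows a different pattern, the fitted model serves the majority group far better.

```python
# Illustrative only: two synthetic groups with different true relations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def make_group(n, coef):
    """Generate n applicants whose repayment label follows `coef`."""
    X = rng.normal(0.0, 1.0, (n, 4))
    y = (X @ coef + rng.normal(0.0, 0.5, n) > 0).astype(int)
    return X, y

coef_a = np.array([1.0, 0.5, 0.0, 0.0])   # majority group's pattern
coef_b = np.array([0.0, 0.0, 1.0, 0.5])   # underrepresented group's pattern

Xa, ya = make_group(2000, coef_a)          # group A dominates training
Xb, yb = make_group(50, coef_b)            # group B is underrepresented
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

acc_a = clf.score(*make_group(500, coef_a))
acc_b = clf.score(*make_group(500, coef_b))
print(f"majority-group accuracy: {acc_a:.2f}, minority-group accuracy: {acc_b:.2f}")
```

The disparity arises with no malicious intent and no group label in the features, purely from the composition of the training sample, which is why representativeness checks matter before deployment.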

In unsupervised learning, AI trains itself to make independent decisions; as such, depending on what the AI trains itself with, decisions can be biased. In reinforcement learning, AI uses its own initiative to combine various decisions into an ultimate decision, where bias can form as well. In checking creditworthiness, AI can discriminate on gender if more men are in certain professions or earn higher salaries, on race if more discount stores are located near ethnic minorities, or even on spelling mistakes in internet searches. Statistics reveal that algorithms accept white applicants and reject black applicants, evident from the gradual reduction in loan approvals for black applicants at banks [64].

According to Janssen [75], algorithms can systematically introduce inadvertent bias, reinforce historical discrimination, favour a political orientation or reinforce undesired practices. Standard affordability rules such as defaults, loan-to-value and loan-to-income may not be applicable to all groups of borrowers [76], causing low-income borrowers to be marginalised. Looked at from a different perspective, one person's data could lead a whole minority, race, gender, marital status or segment of society to be judged in a certain way, forming biases and causing more harm than intended. For example, an algorithm might pick up 20 black women who are constantly delinquent on their loans as representative of the whole black female population. AI could also link financially vulnerable customers to mental health issues [12], and banks could utilise this information to turn down loan applications, causing more harm to society than intended [12]. Yet training AI systems to replicate human decision-making skills is a challenge, as it is difficult to transform various algorithmic concepts into training data that solves every problem for a range of lending products [10, 77].

4.3 Management

Banks rely heavily on management not only to generate earnings and increase profit margins [3] but also to keep banks alive [78]. AI helps banks to be more efficient, effective, effectual and efficacious.

The legal profession requires predictability in its approach, i.e. contracts are written with knowledge of how they will be executed. As such, the legal system offers a predictable environment in which customers can improve their lives [64]. Therefore, AI needs to be predictable to customers.

The GFC was the outcome of human greed, manipulation and corruption. As such, AI algorithms need to be robust against exploitation [64]. Discontented employees or external foes may learn the inner workings of an AI model to corrupt its algorithms or use the AI application in malfeasant ways [28]. This could trigger a worse catastrophe than the GFC, as the involvement of AI increases the complexity and opaqueness of financial systems, making it difficult to configure a solution.

When an AI system fails at its assigned task, it is difficult to pin down who is to take the blame for its actions, as the AI ecosystem comprises a wide range of stakeholders: the philosopher, the AI researcher, the data scientist, the data provider, the developer, the library author, the hardware manufacturer, the OS provider, programmers, etc. Each has established procedures for their part of the AI, and responsibilities are distributed widely amongst them. As such, when a catastrophe strikes, it would be difficult to assign liability, which could be a perfect cover for mistakes, manipulation and exploitation [64, 72, 79]. In the pursuit of accumulating big data, banks could also cross boundaries by incorporating customers' private information. When any loss results from the use of AI, should the scientists who tune the experience to the needs of consumers, the employees who write the chatbot content, or the algorithm provider be liable [5, 7]?

As AI systems are interconnected, hackers or malicious software can manipulate a bank's data by hacking clients' financial details, creating false identities or flooding systems with fabricated data, resulting in misclassification or bad clusters that cause incorrect decisions, consumer backlash and regulatory repercussions [28, 72].

Algorithms constantly look to improve predictive power and are therefore on a constant lookout for correlations, producing spurious relationships that eventually lead to biased conclusions [80].

The literature has pointed out the potential for AIs to act on biased data [81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92]. Scientists have realised that ML can discriminate against customers based on race and gender; one such example is the 'white guy' syndrome, where men are picked over women. Input data is directly linked to the outcome, and as every individual has their own biases, norms and ethics, it is difficult to guarantee that biases will not persist even after the AI has gone through training data [84, 93]. Also, to correct existing biases under the Fair Lending Act, and to improve processes and innovation, more data from people of different disabilities, colours, ages, genders and creeds could be incorporated into the system, but only if customers feel comfortable sharing it [12, 32].

As developing and operating AI requires extensive resources and big data, only large banks can be players in this field. This encourages concentration, affecting healthy competition in the market [7]. Banks also have to rely heavily on technology companies for AI's critical tools and infrastructure, increasing operational risk [7].

As there are only a few players in the market, operational risk could easily feed into systemic risk. On top of that, the widespread use of AI in similar functions such as provision of credit or trading of financial assets and the uniformity in data, training and methodology employed to develop the algorithms could spark off herding and procyclical behaviour [7].

Banks that work extensively with AI need staff with expertise not only in finance but also with formal training in computer science, cybersecurity, cryptography, decision theory, machine learning, formal verification, computer forensics, steganography, ethics, mathematics, network security, psychology and other relevant fields. The challenge is to find a sufficient number of staff to fill this role.

AI in the form of robo-advisory services incurs high development, marketing and advertising costs. A single client acquisition costs between $300 and $1,000, with clients at the lower end generating only $100 in annual revenue [9, 94]. Robo-advisors' slim operating margins and low average account sizes would quickly eat up the profits garnered, taking banks a decade or more to cover the $10 to $100 million in marketing costs [9, 95].
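As a back-of-envelope check on the figures cited above, the per-client payback period alone (before any share of the marketing spend) is three to ten years:

```python
# Payback implied by the cited figures: $300-$1,000 acquisition cost,
# ~$100 annual revenue per client.
annual_revenue = 100                       # USD per client per year
for acquisition_cost in (300, 1_000):      # low and high end, USD
    payback_years = acquisition_cost / annual_revenue
    print(f"${acquisition_cost:>5} acquisition -> {payback_years:.0f} years")
```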

Some studies have pointed out that ML can only act on the primary stages of decision making, such as data processing and forming predictions. Higher levels of judgement, action and task require special skills such as empathy, lateral thinking and risk evaluation, which AI is unable to muster [6].

Algorithmic trading through ML could also facilitate trading errors. A single failure in an AI system could lead to devastating catastrophes without a chance for recovery, resulting in flash crashes [96, 97] and causing "excess volatility or increase pro-cyclicality as a result of herding" [98]. Besides, a major financial institution's compliance software stopped detecting trading issues because customer trades had been excluded [28]. Developers warn that new intelligent features embedded into AIs could pose unexpected and unknown risks, creating new points of vulnerability and leaving loopholes for hackers to exploit [28, 32].

If humans make mistakes or manipulate the system, they can be fired instantly. However, if AI makes mistakes or is corrupted, customers will lose hope and trust in the bank and its systems [5]. Robo-advisors work with several parties, namely clearing firms, custodians, affiliated brokers and other companies in the industry, to offer their services to customers. While Lewis suggests that robo-advisors resolve conflicts of interest amongst the parties [99], Jung et al. suggest conflicts remain, at a cost to customers [100]. If a company uses brokers, for example, this cost is transferred to customers, increasing the price of the service while the robo-advisor profits as the middleman [101]. In other scenarios, robo-advisors could receive a fee for order flow in exchange for routing trades to a clearing firm, or have an interest in the very securities that customers are looking into [101].

Scripting errors, lapses in data management and misjudgements in model-training data can compromise fairness, privacy, security and compliance [28]. As the volume of data being sorted, linked and ingested is large, and further complicated by unstructured data, mistakes such as revealing sensitive information can take place: a client's name might be redacted from the section used by an AI but still be present in the stockbroker's notes section of the record, thus breaching the European Union's General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) [28].

4.4 Earnings

Banks that manage their expenses well while fully utilising their assets to generate constant revenue streams are most likely to be sound [3]. AI enables banks to offer unique selling points in products that increase customer satisfaction, boosting sales and revenue [6].

Studies have recorded chatbot controversy and backlash [5, 6]. Microsoft's AI chatbot Tay tweeted racist, sexist, xenophobic and prejudiced opinions, learned through the tweets it read and its interactions with younger demographics on Twitter, upsetting customers [7, 102, 103, 104].

To increase market share and competitive position, improve the predictive power of algorithms and ensure AIs are trained properly to avoid biased decisions, banks need a large set of quality, diverse data. In the pursuit of, and under pressure to achieve, this goal, banks might share customers' private data without their consent when customers trusted the bank to keep it confidential [7, 9, 12]. Privacy is important not only to customers but also to banks, as it allows banks to remain in their competitive position [27].

Training AIs to exclude certain segments of customers in sales could also lead to discrimination and bias [28]. AI could also weave together zip codes and individuals' incomes to create targeted offerings, discriminating against other classes and groups of people [28]. Robo-advisors can provide incorrect risk evaluations if not equipped with all aspects of an investor's financial condition needed to accurately assess overall risk. When customised questions are unable to capture unique life experiences, such as receiving a large inheritance, customers are better off with human advisors [9].

Customers are more likely to rely on human advisers than on chatbots and robo-advisors when it comes to more personal and sensitive matters. One example is when large sums of money are involved, whether through wealthy customers or due to death and illness; another is during market volatility. Customers are less inclined to trust new technologies and prefer humans to handle such transactions for accountability purposes [6, 9, 105, 106]. Customers are also more confident gaining insight from human advisors when it comes to complex financial products such as derivatives, discussing complicated matters or making complaints [6].

AI lacks emotional quotient and as such is unable to connect with, understand or react to humans at a deeper level, to comprehend their emotions and to empathise, rejoice or sympathise with them [5, 6]. As such, some prefer a front-desk receptionist to a chatbot or an electronic menu that needs to be navigated.

Although the Equality Act, which oversees violations based on race and gender, acknowledges inequality when 'a person' treats 'another person' less favourably, it does not recognise discrimination by AIs, though it does recognise discrimination by other 'non-humans' such as a government agency or regulator. As such, AI cannot be taken to court [6].

Banks may not disclose their use of AI to customers, either to benefit the bank or to avoid the 'fear factor' of AI amongst customers. Banks have to be transparent with their customers, revealing whether they are working with AI or human advisors. Although Swedbank and Société Générale agree it is best to be honest with customers, others may not agree, as disclosure disadvantages banks in many ways. As such, AIs are being trained to mimic humans very closely so as to offer seamless interaction [6].

4.5 Sensitivity

Banks are subject to market risks (i.e. interest rate risk, foreign exchange risk, price risk, etc.) that can have adverse effects on a bank's earnings and capital. AI provides real-time solutions to real-world problems [20], enabling banks to keep up with, adapt and respond to constant and dynamic changes in the environment, thus improving bank stability and soundness.

Unsupervised ML techniques are the only ones that can be used to detect fraud, as they are able to identify unusual transaction characteristics that can then be investigated and tested further to prove the authenticity of a transaction [27, 32]. Unsupervised ML can also be used to closely monitor traders' behaviour, enabling auditing [107]. Yet, as unsupervised ML is linked to black-box decision making, it is difficult to tell whether a decision was made fairly.
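A minimal sketch of the idea (scikit-learn's IsolationForest is used here as one representative unsupervised technique, and the transactions are synthetic): the model flags observations whose characteristics are unusual relative to the bulk of the data, which an analyst would then investigate further.

```python
# Illustrative only: synthetic transactions described by (amount, hour).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 500 ordinary transactions: modest amounts around midday
normal = rng.normal(loc=[100.0, 12.0], scale=[30.0, 3.0], size=(500, 2))
# 5 unusual transactions: very large amounts in the small hours
unusual = np.array([[5000.0, 3.0], [7500.0, 2.0], [6200.0, 4.0],
                    [8000.0, 3.5], [9100.0, 1.0]])
X = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)                           # -1 = flagged as unusual
n_flagged = int((flags == -1).sum())
unusual_flagged = int((model.predict(unusual) == -1).sum())
print(f"{n_flagged} flagged in total, {unusual_flagged} of the 5 planted outliers")
```

No labels are used at any point, which is exactly why the approach suits fraud detection, where confirmed fraud labels are scarce; the cost, as noted above, is that the flagging decision itself is hard to audit for fairness.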

ML in the form of decision trees and logistic regression models is interpretable but lacks accuracy [10]. AI in the form of Deep Neural Networks (DNNs) and ensemble models such as random forest, XGBoost and AdaBoost has higher predictive power and accuracy, as it works with multiple layers of hundreds of thousands of parameters and neurons while applying nested non-linear structures, which makes these models opaque, complex black boxes [10, 27, 32, 71]. Various stakeholders have labelled the practice of allowing black boxes to make decisions as irresponsible, as the decisions these black boxes create are not justifiable, interpretable, trackable, legitimate or trustworthy, lacking detailed explanations [71].
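The trade-off can be illustrated on synthetic data (a sketch, not a benchmark): a depth-limited decision tree yields rules a human can read, while a random forest is typically more accurate but offers no comparable readout.

```python
# Illustrative only: synthetic classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(export_text(tree))          # the tree's full decision logic, as rules
tree_acc = tree.score(X_te, y_te)
forest_acc = forest.score(X_te, y_te)
print(f"tree accuracy: {tree_acc:.2f}, forest accuracy: {forest_acc:.2f}")
```

The shallow tree's entire logic fits in a few printed lines, whereas the forest aggregates 200 trees and cannot be summarised in any comparably faithful way, which is the interpretability-versus-precision tension described above.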


5. Conclusion

Investment in AI is one of the most essential and core elements of bank survival; therefore, it is vital for banks to continue to deploy AI in their operations. Yet AI suffers from a series of limitations that must be considered in assessing its use. Many studies have raised concerns about AI bias, discrimination, privacy violations and manipulation of political systems, compromising national security, economic stability, political stability, digital safety and financial safety, with disastrous consequences ranging from reputational damage, revenue losses and regulatory backlash to criminal investigation; AI also ignores equality and fair treatment, and its decisions are difficult to evaluate owing to poor explainability and transparency, resulting in a lack of trustworthiness, accountability and reliability [12, 20, 28, 108]. The chapter has portrayed bank soundness in the face of AI through the lens of CAMELS. The taxonomy partitions the challenges posed by AI into 1 (C), 4 (A), 17 (M), 8 (E), 1 (L) and 2 (S) distinct categories. Ironically, both AI and banks are opaque in nature and have diminished public trust. Governments will be held accountable once again by taxpayers if markets come to a standstill as a result of AI. As such, banks need to provide answers on how well they are protecting customers' privacy and security with a range of protocols, controls and measures. If a silver bullet is not found, then either banks will have to disappear, or the world will witness yet another catastrophe created by banks, this time with the help of AI, trapping banks further into the conundrum of being between the devil and the deep blue sea.


[Figure: drivers pushing banks toward AI adoption — government support of AI and banks, competition and survival, the nature of the banking industry, future customers, and tasks beyond human capacity.]
Challenges
Capital
  • Uncover crisis

Asset
  • Explainability

  • Transparency

  • Non-traditional data evaluation

  • Training and learning

Management
  • Predictability

  • Corruptibility

  • Responsibility

  • Cybersecurity risk

  • Bias from spurious relationships

  • Concentration risk

  • Systemic risk

  • Operational risk

  • Staff expertise

  • Conflict of interest

  • High cost

  • Decision making

  • Robo-advisor incorrect risk evaluation

  • Bias from data input

  • Flash crash

  • Reputational risk

  • Mistakes

Earnings
  • Trade-off 2: Data privacy vs. accuracy

  • AI backlash

  • Customer exclusion

  • Personal, sensitive and complex matters

  • Empathy, emotional quotient

  • Equality Act

  • Honesty

  • Robo-advisor incorrect risk evaluation

Liquidity
  • Uncover crisis

Sensitivity
  • Complex task

  • Trade-off 1: Interpretability vs. precision

Table 1.

Taxonomy of challenges posed by AI on bank soundness.

References

  1. O’Halloran, S., & Nowaczyk, N. (2019). An Artificial Intelligence Approach to Regulating Systemic Risk. Front. Artif. Intell., 2(7).
  2. Balasubramanyan, L., Haubrich, J., Jenkins, S., & Wallman, N. (2013). Focusing on the Future: Regional Banks and the Financial Marketplace. Federal Reserve Bank of Cleveland, 4-9.
  3. Ayadurai, C., & Eskandari, R. (2018). Bank soundness: a PLS-SEM approach. In Partial Least Squares Structural Equation Modeling (pp. 31-52). Springer, Cham.
  4. Zervoudi, E. K. (2019). Parallel banking system: Opportunities and Challenges. Journal of Applied Finance and Banking, 9(4), 47-70.
  5. Crosman, P. (2018). How Artificial Intelligence is reshaping jobs in banking. American Banker, 183(88), 1.
  6. Lui, A., & Lamb, G. W. (2018). Artificial intelligence and augmented intelligence collaboration: regaining trust and confidence in the financial sector. Information & Communications Technology Law, 27(3), 267-283.
  7. Fernández, A. (2019). Artificial intelligence in financial services. Banco de Espana Article, 3, 19.
  8. Fethi, M. D., & Pasiouras, F. (2010). Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey. European Journal of Operational Research, 204(2), 189-198.
  9. Hakala, K. (2019). Robo-advisors as a form of artificial intelligence in private customers’ investment advisory services.
  10. Sachan, S., Yang, J. B., Xu, D. L., Benavides, D. E., & Li, Y. (2020). An explainable AI decision-support-system to automate loan underwriting. Expert Systems with Applications, 144, 113100.
  11. Francisco, D. F. (2019). How artificial intelligence can help banks improve the customer experience of buying a house (Doctoral dissertation).
  12. Burkhardt, R., Hohn, N., & Wigley, C. (2019). Leading your organization to responsible AI. McKinsey Analytics.
  13. Abdullah, O. (2017). Digitalization in Islamic Finance. Retrieved from http://kliff.com.my/wpcontent/uploads/2016/09/Sesi-3-Digitalization-of-Islami-Finance-Othman.pdf
  14. Khan, N. (2017, December 13). The Impact of Financial Technology. Retrieved from https://www.linkedin.com/pulse/impact-financial-technology-nida-khan
  15. Huntley, D. G. (1991). Neural nets: An approach to the forecasting of time series. Soc. Sci. Comput. Rev., 9: 27-38. DOI: 10.1177/089443939100900104.
  16. Huang, Z., Chen, H., Hsu, C.-J., Chen, W.-H., & Wu, S. (2004). Credit rating analysis with support vector machines and neural networks: A market comparative study. Decision Support Systems, 37, 543-558.
  17. Limsombunchai, V., Gan, C., & Lee, M. (2005). An analysis of credit scoring for agricultural loans in Thailand. Am. J. Applied Sci., 2: 1198-1205.
  18. Binner, J. M., Gazely, A. M., & Kendall, G. (2009). An evaluation of UK risky money: an artificial intelligence approach. Global Business and Economics Review, 11(1), 1-18.
  19. Zhang, G. P. (2004). Neural Networks in Business Forecasting. 1st Edn., Idea Group Inc., ISBN: 1-59140-176-3, pp. 1-41.
  20. Eletter, S. F., Yaseen, S. G., & Elrefae, G. A. (2010). Neuro-based artificial intelligence model for loan decisions. American Journal of Economics and Business Administration, 2(1), 27.
  21. Conway, J. J. E. (2018). Artificial intelligence and machine learning: Current applications in real estate (Doctoral dissertation, Massachusetts Institute of Technology).
  22. Rossini, P. (2000). Using expert systems and artificial intelligence for real estate forecasting. In Sixth Annual Pacific-Rim Real Estate Society Conference, Sydney, Australia (pp. 24-27).
  23. Shachmurove, Y. (2002). Applying artificial neural networks to business, economics and finance. http://ideas.repec.org/p/cla/penntw/5ecbb5c20d3d547f357aa130654099f3.html
  24. Nag, A. K., & Mitra, A. (2002). Forecasting daily foreign exchange rates using genetically optimized neural networks. Journal of Forecasting, 21(7), 501-511.
  25. Rao, A. (2017). A strategist’s guide to artificial intelligence. Strategy+Business, 87, 46-50.
  26. Bahrammirzaee, A. (2010). A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems. Neural Computing and Applications, 19(8), 1165-1195.
  27. Wall, L. D. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business, 100, 55-63.
  28. Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, 1-9.
  29. Reuters (2017). “BlackRock Is Cutting Jobs and Banking on Robots to Beat the Stock Market.” Fortune (March 29). Available at http://fortune.com/2017/03/29/blackrock-robots-stockpicking/
  30. Siegal (2016). “BlackRock Is Making Big Data Bigger.” Institutional Investor. Available at http://www.institutionalinvestor.com/article/3598029/asset-managementfixed-income/blackrock-is-making-big-data-bigger.html#/.WahmfsiGOr1
  31. Noonan, Laura (2017). “JP Morgan Develops Robot to Execute Trades.” Financial Times (July 31).
  32. Jagtiani, J., Vermilyea, T., & Wall, L. D. (2018). The roles of big data and machine learning in bank supervision. Forthcoming, Banking Perspectives.
  33. Xu, D., Zhang, X., & Feng, H. (2018). Generalized fuzzy soft sets theory based novel hybrid ensemble credit scoring model. International Journal of Finance & Economics, 24(2), 903-921.
  34. Abellán, J., & Castellano, J. G. (2017). A comparative study on base classifiers in ensemble methods for credit scoring. Expert Systems with Applications, 73, 1-10.
  35. Xiao, H., Xiao, Z., & Wang, Y. (2016). Ensemble classification based on supervised clustering for credit scoring. Applied Soft Computing, 43, 73-86.
  36. Marqués, A. I., García, V., & Sánchez, J. S. (2012). Exploring the behaviour of base classifiers in credit scoring ensembles. Expert Systems with Applications, 39(11), 10244-10250.
  37. Wang, G., Ma, J., Huang, L., & Xu, K. (2012). Two credit scoring models based on dual strategy ensemble trees. Knowledge-Based Systems, 26, 61-68.
  38. Hung, C., & Chen, J. H. (2009). A selective ensemble based on expected probabilities for bankruptcy prediction. Expert Systems with Applications, 36(3), 5297-5303.
  39. Nanni, L., & Lumini, A. (2009). An experimental comparison of ensemble of classifiers for bankruptcy prediction and credit scoring. Expert Systems with Applications, 36(2), 3028-3033.
  40. Harris, T. (2015). Credit scoring using the clustered support vector machine. Expert Systems with Applications, 42(2), 741-750.
  41. Tomczak, J. M., & Zieba, M. (2015). Classification restricted Boltzmann machine for comprehensible credit scoring model. Expert Systems with Applications, 42(4), 1789-1796.
  42. Hens, A. B., & Tiwari, M. K. (2012). Computational time reduction for credit scoring: An integrated approach based on support vector machine and stratified sampling method. Expert Systems with Applications, 39(8), 6774-6781.
  43. Chen, W., Ma, C., & Ma, L. (2009). Mining the customer credit using hybrid support vector machine technique. Expert Systems with Applications, 36(4), 7611-7616.
  44. Huang, G. B., Ramesh, M., Berg, T., & Learned-Miller, E. (2007). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
  45. Li, S. T., Shiue, W., & Huang, M. H. (2006). The evaluation of consumer loans using support vector machines. Expert Systems with Applications, 30(4), 772-782.
  46. Schebesch, K. B., & Stecking, R. (2005). Support vector machines for classifying and describing credit applicants: Detecting typical and critical regions. Journal of the Operational Research Society, 56(9), 1082-1088.
  47. Luo, C., Wu, D., & Wu, D. (2017). A deep learning approach for credit scoring using credit default swaps. Engineering Applications of Artificial Intelligence, 65, 465-470.
  48. Zhao, Z., Xu, S., Kang, B. H., Kabir, M. M., Liu, Y., & Wasinger, R. (2015). Investigation and improvement of multi-layer perceptron neural networks for credit scoring. Expert Systems with Applications, 42(7), 3508-3516.
  49. Khashman, A. (2010). Neural networks for credit risk evaluation: Investigation of different neural models and learning schemes. Expert Systems with Applications, 37(9), 6233-6239.
  50. Bensic, M., Sarlija, N., & Zekic-Susac, M. (2005). Modelling small business credit scoring by using logistic regression, neural networks and decision trees. Intelligent Systems in Accounting, Finance & Management: International Journal, 13(3), 133-150.
  51. Kim, Y. S., & Sohn, S. Y. (2004). Managing loan customers using misclassification patterns of credit scoring model. Expert Systems with Applications, 26(4), 567-573.
  52. West, D. (2000). Neural network credit scoring models. Computers & Operations Research, 27(11-12), 1131-1152.
  53. Metawa, N., Hassan, M. K., & Elhoseny, M. (2017). Genetic algorithm based model for optimizing bank lending decisions. Expert Systems with Applications, 80, 75-82.
  54. Abdou, H. A. (2009). Genetic programming for credit scoring: The case of Egyptian public sector banks. Expert Systems with Applications, 36(9), 11402-11417.
  55. Ong, C. S., Huang, J. J., & Tzeng, G. H. (2005). Building credit scoring models using genetic programming. Expert Systems with Applications, 29(1), 41-47.
  56. Leong, C. K. (2016). Credit risk scoring with Bayesian network models. Computational Economics, 47(3), 423-446.
  57. Wu, W. W. (2011). Improving classification accuracy and causal knowledge for better credit decisions. International Journal of Neural Systems, 21(04), 297-309.
  58. Zhu, H., Beling, P. A., & Overstreet, G. A. (2002). A Bayesian framework for the combination of classifier outputs. Journal of the Operational Research Society, 53(7), 719-727.
  59. Bijak, K., & Thomas, L. C. (2012). Does segmentation always improve model performance in credit scoring? Expert Systems with Applications, 39(3), 2433-2442.
  60. Yap, B. W., Ong, S. H., & Husain, N. H. (2011). Using data mining to improve assessment of credit worthiness via credit scoring models. Expert Systems with Applications, 38(10), 13274-13283.
  61. Zhang, D., Zhou, X., Leung, S. C., & Zheng, J. (2010). Vertical bagging decision trees model for credit scoring. Expert Systems with Applications, 37(12), 7838-7843.
  62. Affes, Z., & Hentati-Kaffel, R. (2019). Predicting US banks bankruptcy: logit versus Canonical Discriminant analysis. Computational Economics, 54(1), 199-244.
  63. Christopoulos, A. G., Mylonakis, J., & Diktapanidis, P. (2011). Could Lehman Brothers' collapse be anticipated? An examination using CAMELS rating system. International Business Research, 4(2), 11.
  64. Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 1, 316-334.
  65. Kaya, O., Schildbach, J., AG, D. B., & Schneider, S. (2019). Artificial intelligence in banking. Artificial intelligence.
  66. Bary, E. (2018). How Artificial Intelligence Could Replace Credit Scores and Reshape How We Get Loans. MarketWatch. Available at: https://www.marketwatch.com/story/ai-based-credit-scores-will-soon-give-onebillion-people-access-to-banking-services-2018-10-09
  67. Hornigold, T. (2018). Life-or-Death Algorithms: Avoiding the Black Box of AI in Medicine. Singularity Hub. Available at: https://singularityhub.com/2018/12/18/life-or-death-algorithms-the-black-box-ofai-in-medicine-and-how-to-avoid-it/
  68. Gilchrist, K. (2018). Your next job interview could be with a robot. CNBC. Available at: https://www.cnbc.com/2018/10/03/future-of-jobs-your-next-jobinterview-could-be-with-a-robot.html
  69. Papernot, N., Abadi, M., Erlingsson, U., Goodfellow, I., & Talwar, K. (2016). Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755.
  70. Samek, W., & Müller, K. R. (2019). Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (pp. 5-22). Springer, Cham.
  71. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Chatila, R. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
  72. Nassar, M., Salah, K., ur Rehman, M. H., & Svetinovic, D. (2020). Blockchain for explainable and trustworthy artificial intelligence. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(1), e1340.
  73. Byrne, M. F., Shahidi, N., & Rex, D. K. (2017). Will computer-aided detection and diagnosis revolutionize colonoscopy? Gastroenterology, 153(6), 1460-1464.
  74. Petrasic, K., Saul, B., Greig, J., Bornfreund, M., & Lamberth, K. (2017). Algorithms and bias: What lenders need to know. White & Case.
  75. Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance.
  76. Gates, S. W., Perry, V. G., & Zorn, P. M. (2002). Automated underwriting in mortgage lending: Good news for the underserved. Housing Policy Debate, 13(2), 369-391.
  77. Aggour, K. S., Bonissone, P. P., Cheetham, W. E., & Messmer, R. P. (2006). Automating the underwriting of insurance applications. AI Magazine, 27(3), 36.
  78. Berger, A. N., & Bouwman, C. H. (2013). How does capital affect bank performance during financial crises? Journal of Financial Economics, 109(1), 146-176.
  79. Howard, P. K. (1994). The Death of Common Sense: How Law is Suffocating America. New York, NY: Warner Books.
  80. Research and Markets. (2017). Artificial Intelligence Market: Global Forecast to 2020. Ireland: Research and Markets. Retrieved from https://www.researchandmarkets.com/reports/3979203/artificial-intelligencechipsets-market-by
  81. IBM. (n.d.). AI and Bias. IBM. Available at: https://www.research.ibm.com/5-in5/ai-and-bias
  82. Bloomberg, J. (2018). Bias Is AI's Achilles Heel. Here's How To Fix It. Forbes. Available at: https://www.forbes.com/sites/jasonbloomberg/2018/08/13/bias-isais-achilles-heel-heres-how-to-fix-it/#2b782e676e68
  83. Eder, S. (2018). How Can We Eliminate Bias In Our Algorithms? Forbes. Available at: https://www.forbes.com/sites/theyec/2018/06/27/how-can-weeliminate-bias-in-our-algorithms/#63f855a9337e
  84. McCullen, A. (2018). Ethical AI: “Without reason, without heart, it destroys us”? [online] Medium. Available at: https://medium.com/thethursdaythought/ethical-aiwithout-reason-without-heart-it-destroys-us-5436fb81ae1a
  85. Sears, M. (2018). AI Bias And The 'People Factor' In AI Development. [online] Forbes. Available at: https://www.forbes.com/sites/marksears1/2018/11/13/ai-bias-andthe-people-factor-in-ai-development/#f5874879134c
  86. Simpson, G. (2018). The Societal Impact of AI. CIO. Available at: https://www.cio.com/article/3273565/the-societal-impact-of-ai.html
  87. Vanian, J. (2018). Unmasking A.I.'s Bias Problem. Fortune. Available at: http://fortune.com/longform/ai-bias-problem
  88. Hao, K. (2019). This is How AI Bias Really Happens—and Why It’s So Hard to Fix. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/612876/this-is-how-ai-bias-reallyhappensand-why-its-so-hard-to-fix/
  89. Intel AI. (2019b). The Risks Of Dirty Data And AI. Forbes. Available at: https://www.forbes.com/sites/intelai/2019/03/27/the-risks-of-dirty-data-andai/#70b8807e2dc7
  90. Leetaru, K. (2019). Why Is AI And Machine Learning So Biased? The Answer Is Simple Economics. Forbes. Available at: https://www.forbes.com/sites/kalevleetaru/2019/01/20/why-is-ai-and-machinelearning-so-biased-the-answer-is-simple-economics/#28a8b3bf588c
  91. Marr, B. (2019). Artificial Intelligence Has A Problem With Bias, Here's How To Tackle It. [online] Forbes. Available at: https://www.forbes.com/sites/bernardmarr/2019/01/29/3-steps-to-tackle-theproblem-of-bias-in-artificial-intelligence/#691845ed7a12
  92. Pandya, J. (2019). Can Artificial Intelligence Be Biased? Forbes. Available at: https://www.forbes.com/sites/cognitiveworld/2019/01/20/can-artificial-intelligencebe-biased/#678865bf7e7c
  93. McRaney, D. (2012). You are Not So Smart: Why You Have Too Many Friends on Facebook, why Your Memory is Mostly Fiction, and 46 Other Ways You're Deluding Yourself. New York: Gotham Books.
  94. Ernst & Young. (2018). The evolution of robo-advisors and advisor 2.0 model. 7. Retrieved from www.ey.com
  95. Wong, M. M. (2015). Hungry Robo-Advisors Are Eyeing Wealth Management Assets: We Believe Wealth Management Moats Can Repel the Fiber-Clad Legion. 16. Retrieved from https://www.morningstar.com/content/dam/marketing/shared/pdfs/Research/equityreserach/20150409_Hungry_RoboAdvisors_Are_Eyeing_Wealth_Management_.pdf
  96. Kirilenko, Andrei A., & Andrew W. Lo. (2013). Moore's Law versus Murphy's Law: Algorithmic Trading and Its Discontents. Journal of Economic Perspectives, 27(2), 51-72.
  97. Yampolskiy, R. V., & Spellchecker, M. S. (2016). Artificial intelligence safety and cybersecurity: A timeline of AI failures.
  98. Carney, M. (2017b). “The Promise of FinTech – Something New Under the Sun?” Speech at the Deutsche Bundesbank G20 conference on “Digitising finance, financial inclusion and financial literacy”, Wiesbaden, Germany. Available at http://www.bankofengland.co.uk/publications/Documents/speeches/2017/speech956.pdf
  99. Lewis, D. (2018). Computers may not make mistakes, but many consumers do (Vol. 10923). https://doi.org/10.1007/978-3-319-91716-0
  100. Jung, D., Glaser, F., & Köpplin, W. (2019). Robo-Advisory – Opportunities and Risks for the Future of Financial Advisory. https://doi.org/10.1007/978-3-319-95999-3
  101. Fein, M. L. (2015). Robo-Advisors: A Closer Look. 1-33. https://doi.org/10.2139/ssrn.2658701
  102. Vincent, J. (2016). Twitter Taught Microsoft’s AI Chatbot to be a Racist Asshole in Less Than a Day. The Verge. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
  103. Buranyi, S. (2017). Rise of the Racist Robots – How AI is Learning All Our Worst Impulses. [online] The Guardian. Available at: https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robotshow-ai-is-learning-all-our-worst-impulses
  104. Newman, D. (2017). Your Artificial Intelligence Is Not Bias-Free. [online] Forbes. Available at: https://www.forbes.com/sites/danielnewman/2017/09/12/yourartificial-intelligence-is-not-bias-free/#165b087c783a
  105. Ludden, C. (2015). The Rise of Robo-Advice: Changing the Concept of Wealth Management. Accenture, 12. https://doi.org/10.1007/978-3-319-54472-4_67
  106. Cocca, T. (2016). Potential and Limitations of Virtual Advice in Wealth Management. Journal of Financial Transformation, 44(December), 45-57. Retrieved from https://ideas.repec.org/a/ris/jofitr/1581.html
  107. Van Liebergen, B. (2017). Machine learning: A revolution in risk management and compliance? Journal of Financial Transformation, 45, 60-67.
  108. Di Maio, P. (2020). Neurosymbolic Knowledge Representation for Explainable and Trustworthy AI.
