Open access peer-reviewed chapter - ONLINE FIRST

Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea – Part 2

By Charmele Ayadurai and Sina Joneidy

Submitted: December 14th 2020 | Reviewed: January 4th 2021 | Published: February 3rd 2021

DOI: 10.5772/intechopen.95806

Abstract

Banks have experienced chronic weaknesses as well as frequent crises over the years. As bank failures are costly and affect global economies, banks are constantly under intense scrutiny by regulators, making banking the most highly regulated industry in the world today. As banks grow into the 21st-century framework, they need to embrace Artificial Intelligence (AI), not only to provide personalized, world-class service to their large customer bases but, most importantly, to survive. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS: Capital (C), Asset (A), Management (M), Earnings (E), Liquidity (L) and Sensitivity (S). The taxonomy partitions the challenges along the main strands of CAMELS into 1 Capital (C), 4 Asset (A), 17 Management (M), 8 Earnings (E), 1 Liquidity (L) and 2 Sensitivity (S) distinct categories that banks and regulatory teams need to consider in evaluating AI use in banks. Although AI offers numerous opportunities for banks to operate more efficiently and effectively, banks also need to give assurance that AI ‘does no harm’ to stakeholders. Posing many unresolved questions, it seems that banks are trapped between the devil and the deep blue sea for now.

Keywords

  • bank
  • bank soundness
  • financial sector
  • Artificial Intelligence (AI)
  • CAMELS

1. Introduction

The Global Financial Crisis (GFC) showed that even banks that are well established, operating in robust markets and governed by tough and forceful regulatory frameworks can fail. Over the years banks have grown larger in size and become more complex and complicated in their operations, making them opaque and incomprehensible. Bank supervisors are still struggling to demystify the risks undertaken by banks during the GFC. The unknown risk posed by yet another black box, in the name of Artificial Intelligence (AI), could pose identical challenges of increased systemic fragility, bank failures and frozen capital markets, all evident during the GFC. As such, bank supervisors are critical of banks adopting AI. To move forward, banks need to give assurance that they will “do no harm”. It is therefore important to consider the lesson of the GFC, which calls into question not only the soundness of regulations in capturing risk but, most importantly, the processes in place, in this case the efficacy of AI implementation. It is necessary to know whether AI promotes safety and soundness in the financial system or adds undue burdens to the markets [1]. As such, the study critically assesses the challenges posed by AI that could influence bank soundness in light of Capital (C), Asset (A), Management (M), Earnings (E), Liquidity (L) and Sensitivity (S) (CAMELS), the determinants of a bank’s health and wellbeing [2–4], while “doing no harm” to individuals, corporations and society as a whole.

The chapter contributes to the literature in several ways. Earlier research has either focused on AI application across the entire financial sector, covering banks, fintech companies, mortgage lenders and security companies among others [5–7], evaluated AI applications in the form of Machine Learning (ML), Neural Networks (NN) and Artificial Neural Networks (ANN) in specific areas such as credit evaluation, portfolio management, and financial prediction and planning [2, 8–10], or examined the user-friendly experiences of end users [5, 6, 11]. These studies are therefore not sufficient to understand the challenges posed by AI solely from a bank’s perspective. The chapter fills this gap by taking a holistic approach to scrutinizing the challenges banks face specifically in deploying AI. In doing so, it provides significant insight into the challenges AI technology can pose for the banking industry, challenges that can diminish a bank’s chances of survival. The chapter further considers bank soundness under AI from the perspectives of Capital (C), Asset (A), Management (M), Earnings (E), Liquidity (L) and Sensitivity (S) (CAMELS), the determinants of bank soundness. It is the first to review the challenges of deploying AI in banking operations in light of CAMELS; earlier research [2–4] has focused on bank soundness from the CAMELS perspective alone. The chapter also focuses on both the service provider and the customer end, providing further insight to regulators on what they need to look into. Most importantly, the intention is to examine, through the lens of CAMELS, how sound banks are once they have applied AI to their processes.

The chapter is organized as follows: the next section presents a brief theoretical discussion of the application of AI in different sections of the bank, from front-office through middle-office to back-office operations. Section three introduces the literature-gathering and research method. Section four presents results and a discussion of the challenges banks face in applying AI from the CAMELS perspective. The last section concludes the chapter and highlights directions for further research.

2. Literature review

Central banks worldwide have actively embedded AI into their daily operations, from microprudential and macroprudential supervision to information management, forecasting and detecting fraudulent activities. The Monetary Authority of Singapore applies AI to scrutinise suspicious transactions, the central bank of Austria has developed a prototype for data validation, and the central bank of Italy uses AI techniques to predict price moves in the real estate market [7].

Banks have also deployed AI from front-office through middle-office to back-office operations in different subsets of the bank [7, 12]. AI is not only widely used and applied in conventional banks; it has been actively embraced in Islamic banks as well [13, 14]. AI in the form of Neural Networks (NN) has been used in risk management and forecasting, e.g. of inflation [15–18]; to identify complex patterns [16, 19, 20]; and to predict future stock behaviour, market trends, market responses, real estate valuations, financial crises, bank failures, bankruptcy and exchange rates, and to detect credit card fraud [9, 20–22].

Artificial Neural Networks (ANN) have been used to analyse the relationships of bonds and stocks with economic and financial phenomena, futures and financial markets, loan applications and overdraft checks, loan scoring, credit scoring, creditworthiness, mortgage choice decisions, portfolio optimization, portfolio management, portfolio selection, portfolio resource allocation, asset value forecasting, index movement prediction, exchange rate forecasting, global stock index forecasting, forecasting and planning, generating time series, credit evaluations, bankruptcy prediction, predicting thrift failures, financial distress prediction and decision-making [22–26].

Backpropagation Neural Networks (BPNN) are applied to classify loan applications as good or bad [20]. Decision Trees (DT) are applied in credit risk classification [26], and Support Vector Machines (SVM) in corporate credit rating [26].

Machine Learning (ML), a subfield of AI, is used in customer services such as search engines and product recommendations [27], and to manage customer online queries, perform voice recognition and predictive analysis, provide financial advice, analyse risks, manage assets and engage in algorithmic trading [6]. ML is deployed for call-centre optimization, mortgage decision-making, relationship management, treasury-management initiatives, customer-credit decisions and equity trading [27, 28], where algorithmic trading is used to pick stocks and is able to fulfil the job specification of a portfolio manager [29, 30]. JP Morgan uses ML to execute trades in equity markets [31].

Big Data and Machine Learning (BD/ML) in the form of robo-advisers use algorithms to deliver stock recommendations, analyse incoming information for investors, provide investment advice and financial planning services, and make credit decisions [32]. Goldman Sachs uses the ML platform Kensho to offer automated analysis of breaking news compiled into regular summaries. Wells Fargo, on the other hand, uses AIERA (Artificial Intelligent Equity Research Analyst) to issue buy and sell calls on stocks, with bank officers offering recommendations through this platform [32]. Several studies have worked with various ML models for credit scoring, namely ensemble classifiers [33–39], support vector machines [40–46], neural networks [45, 47–52], genetic programming [53–55], Bayesian networks [56–58] and decision trees [50, 59–61]. Automated trading, where systems make independent decisions without human intervention or oversight, is evident in stock markets: 75% of trading on Nasdaq is autonomous [25].
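To make the credit-scoring strand above concrete, the sketch below trains one of the model families these studies cite (a gradient-boosted ensemble) on synthetic applicant data. It is a minimal illustration, not any cited study's method: the feature set, the data-generating process and the library choice (scikit-learn) are all assumptions.

```python
# A minimal, illustrative credit-scoring sketch (synthetic data, assumed
# features): train a gradient-boosted ensemble to score loan applicants.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(45_000, 15_000, n),  # annual income (hypothetical feature)
    rng.uniform(0.0, 0.9, n),       # debt-to-income ratio
    rng.integers(0, 10, n),         # past delinquencies
])
# Synthetic default labels: higher DTI and delinquency raise default odds.
logit = -3.0 + 3.5 * X[:, 1] + 0.4 * X[:, 2] - 1e-5 * X[:, 0]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("holdout AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```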

Development in AI is moving in the direction of hybrid models, where two or more AI techniques are combined to enhance performance, namely intelligent systems that integrate several intelligent techniques for problem-solving, such as the combined efforts of a neural network and a fuzzy system [26].

3. Methodology

The research is conducted as a conceptual chapter with the aim of providing a deeper understanding of the opportunities presented by AI from a service provider and customer perspective. To answer the research question of how effectively banks can deploy AI into their daily operations to improve CAMELS from a bank’s perspective, a systematic review of the literature and objective observations were undertaken to examine banks through the lens of the bank soundness determinants of CAMELS. The observations found in the existing literature were gathered to assemble a framework categorized by CAMELS (Figure 1). The literature was gathered primarily through the Scopus database, which offers a wide range of management and business-related studies relevant to the topic of research. In addition, other databases such as Google Scholar, the Social Science Research Network (SSRN), SpringerLink and IEEE Xplore were examined. Journal articles from the period 2000–2020 were extracted using the prescribed keywords bank, bank soundness, financial sector, Artificial Intelligence (AI) and CAMELS. Only articles that were available in full text and published in scholarly, peer-reviewed journals were chosen for close examination. The search was also conducted using the backward and forward approach, where the reference lists of articles were utilized to find further research papers.

Figure 1.

Taxonomy of challenges posed by AI on bank soundness: a classification based on the CAMELS determinants of bank soundness.

4. Findings and discussion

This section presents an overview of the challenges banks face in deploying AI in their daily front-, middle- and back-office operations, organized from the CAMELS perspective (see Table 1 in the Appendix).

4.1 Capital and liquidity

Bank capital acts as a core determinant of a bank’s survival. Capital absorbs losses during adversity, and insufficient capital holdings can cause banks to collapse. AI, with its extensive abilities and capabilities, helps banks to maintain robust capital holdings through stress testing.

Banks have to hold sufficient liquidity funding to ensure that they are able to meet unforeseen deposit outflows. Banks that struggle to meet their daily liquidity needs will eventually fail [3]. Central banks, working on larger scales and overseeing the workings of the market, use AI to sort large numbers of banknotes and detect liquidity problems.

AI’s ability to detect or uncover crises depends on the quantity and quality of the data provided and used to train the algorithms. Since normal periods far outnumber crisis periods, datasets may lack important conditions such as economic crashes; limited crisis data reduces AI’s predictive ability, and the output will have limited use in measuring or projecting future risk under stress [7, 32]. It will therefore have little value for banks setting their minimum prescribed capital/liquidity holdings (Basel accords) to remain solvent while lending through recessions [27]. Banks have little choice but to rely on a theoretical distribution of losses and a parametric statistical structure to link normal-times data to the large losses that cause instability, yet a more accurate prediction would come from data on the distribution of losses itself [27].
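The class-imbalance problem described above can be illustrated with a small synthetic experiment (an assumption-laden sketch, not a supervisory model): with roughly 2% "crisis" observations, an off-the-shelf classifier recalls few crisis periods, and reweighting the rare class is only a partial mitigation.

```python
# Synthetic illustration of scarce crisis data: ~2% of observations are
# crises, so a plain classifier recalls few of them; class weighting is
# a common but only partial mitigation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000
y = (rng.random(n) < 0.02).astype(int)            # 1 = crisis period
X = rng.normal(0, 1, (n, 4)) + 1.5 * y[:, None]   # indicators shift in stress

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
plain = LogisticRegression().fit(X_tr, y_tr)
balanced = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print("crisis recall, unweighted:", recall_score(y_te, plain.predict(X_te)))
print("crisis recall, balanced:  ", recall_score(y_te, balanced.predict(X_te)))
```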

4.2 Asset

Asset quality is measured by the level of credit risk contained in a bank’s assets [62]. Therefore, a bank that can detect, measure, monitor and regulate credit risk will hold higher-quality assets [63]. The GFC showed that credit risk is the most challenging risk to manage and control, as it not only absorbs profits but also exposes banks to failure. AI helps banks to clearly assess and evaluate customers’ risk, eliminating ambiguity and bias while improving loan processes.

Banks are accountable for every decision they make and therefore employ verification and checks at several levels to weed out incorrect or weak decisions. Loan officers should be able to provide their superiors, compliance officers, auditors, regulators and customers with a logical explanation of the grounds on which a loan has been accepted or rejected [5, 7, 10, 12, 64, 65]. The working logic of an AI decision has to be traceable backwards. Customers need to understand the reasons why their loan application has been rejected, or why AI has recommended a particular product, before acting on it. Keeping customers in the dark without proper justification reduces their chances of determining the real cause behind the rejection, finding solutions to their problems, improving their circumstances or proving identity theft if it has happened to them. In short, an adverse AI decision can have a permanent detrimental effect on someone’s future [28, 64, 66–68].
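One way to picture the "traceable backwards" requirement is with an interpretable model whose per-feature contributions can be read off directly for a single applicant. The sketch below is illustrative only: the features, the data and the choice of logistic regression are assumptions, not the chapter's prescription.

```python
# Illustrative traceability: with an interpretable model, each feature's
# contribution to one applicant's decision can be read off directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [annual income, debt-to-income, delinquencies].
X = np.array([[52_000, 0.4, 1], [23_000, 0.8, 5], [75_000, 0.2, 0],
              [31_000, 0.7, 3], [64_000, 0.3, 0], [28_000, 0.9, 4]],
             dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan repaid, 0 = defaulted

model = LogisticRegression(max_iter=10_000).fit(X, y)
applicant = X[1]
# Contribution of each feature to this decision's log-odds: coef * value.
for name, c in zip(["income", "dti", "delinquencies"],
                   model.coef_[0] * applicant):
    print(f"{name:14s} contributes {c:+.2f} to the log-odds")
```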

Transparency is also important for fully trusting the system by validating the decisions made by AI: not only detecting anomalies in the decision process, such as bias, mistakes, manipulation of data, deficiencies, non-compliance with rules such as the GDPR, and cybersecurity crimes linked to work processes such as dataset poisoning, internal network manipulation and side-channel attacks [69], but also detecting clearly and precisely at which step the anomalies occurred and what information the AI fed itself [10, 12, 64, 66–68, 70–72].

Although AI can assess customers from various angles, namely with non-traditional data such as customers’ connections, internet searches and network diversity, how reliable is this information for making an informed decision about a person’s repayment ability, and thus their future? Does a person’s credit score increase if they socialise with those who are creditworthy? Borrowers may also be judged on how they behave online or on their dishonesty in disclosing financial data, forming biases and leading to unfair judgements [73]. Moreover, are customers aware that non-traditional data is used in the evaluation process to assess their loan repayment ability [6]?

AI trained through supervised learning, where both inputs and outputs are fed into the system, has no chance of bias unless the data fed to it is itself biased. Data used to train an ML algorithm must be representative of the wide range of customers who will apply for loans, namely the whole population [27, 72]. If a population is underrepresented, or rare cases defined by characteristics such as gender, race, ethnicity, marital status or zero credit history are used to train the AI, it will deliver biased results where data is highly correlated with these categories [7, 28, 72–74].

In unsupervised learning, AI trains itself to make independent decisions, so depending on what the AI trains itself with, decisions can be biased. In reinforcement learning, AI uses its own initiative to combine various decisions into an ultimate decision, where bias can form as well. In checking creditworthiness, AI can discriminate by gender (if more men hold certain professions or earn higher salaries), by race (if more discount stores are located near ethnic minorities), by spelling mistakes in internet searches and so on. Statistics reveal that algorithms accept white applicants and reject black applicants, evident from the gradual reduction in loan approvals for black applicants at banks [64].
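Approval-rate disparities of the kind just described can at least be surfaced with a simple audit. The snippet below is a minimal sketch with hypothetical data and column names; real fair-lending reviews rely on legal definitions of adverse impact and far richer data.

```python
# A minimal approval-rate audit over a hypothetical protected attribute.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
rates = decisions.groupby("group")["approved"].mean()
print(rates)
# Four-fifths rule of thumb: flag if min rate / max rate falls below 0.8.
print("disparate-impact ratio:", round(rates.min() / rates.max(), 2))
```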

According to Janssen, algorithms can systematically introduce inadvertent bias, reinforce historical discrimination, favour a political orientation or reinforce undesired practices [75]. Standard affordability rules such as defaults, loan-to-value and loan-to-income may not be applicable to all groups of borrowers [76], causing low-income borrowers to be marginalised. Looked at from a different perspective, one individual’s data could cause a whole minority, race, gender, marital-status group or society to be judged in a certain way, forming biases and causing more harm than intended, for example an algorithm picking up 20 black women who are constantly delinquent on their loans as representative of the whole black female population. AI could also link financially vulnerable customers to mental health issues [12], and banks could utilize this information to turn down loan applications, causing more harm to society than intended [12]. Yet training AI systems to replicate human decision-making skills is a challenge, as it is difficult to transform various algorithmic concepts into training data that solves every problem across a range of lending products [10, 77].

4.3 Management

Banks rely heavily on management not only to generate earnings and increase profit margins [3] but also to keep banks alive [78]. AI helps banks to be more efficient and effective.

The legal profession requires predictability in its approach: contracts are written with knowledge of how they will be executed. The legal system thus offers a predictable environment in which customers can improve their lives [64]. AI, therefore, needs to be similarly predictable to customers.

The GFC was the outcome of human greed, manipulation and corruption. As such, AI algorithms need to be robust against exploitation [64]. Discontented employees or external foes may learn the inner workings of an AI model to corrupt its algorithms or use the AI application in malfeasant ways [28]. This could strike a worse catastrophe than the GFC, as the involvement of AI increases the complexity and opaqueness of financial systems, making it difficult to configure a solution.

When an AI system fails at its assigned task, it is difficult to pin down who should take the blame for its actions, as the AI ecosystem comprises a wide range of stakeholders: the philosopher, the AI researcher, the data scientist, the data provider, the developer, the library author, the hardware manufacturer, the OS provider, programmers and so on. Each has established procedures for their part of the AI, and responsibilities are distributed widely among them. As such, when a catastrophe strikes, it is difficult to assign liability, which could be a perfect cover for mistakes, manipulation and exploitation [64, 72, 79]. In the pursuit of accumulating big data, banks could cross boundaries and incorporate customers’ private information. When a loss results from the use of AI, should the scientists who tune the experience to the needs of consumers, the employees who write the chatbots’ content, or the algorithm provider be liable [5, 7]?

As AI systems are interconnected, hackers or malicious software can manipulate a bank’s data by hacking clients’ financial details, creating false identities, or flooding systems with fabricated data, resulting in misclassification or bad clusters; the incorrect decisions that follow expose the bank to consumer backlash and regulatory repercussions [28, 72].

Algorithms constantly seek to improve their predictive power and are therefore on a constant lookout for correlations, producing spurious relationships that eventually lead to biased conclusions [80].

The literature has pointed out the potential for AI to act on biased data [81–92]. Scientists have realised that ML can discriminate against customers based on race and gender; one such example is the ‘white guy’ syndrome, where men are picked over women. Input data is directly linked to the outcome, and as every individual has their own biases, norms and ethics, it is difficult to establish that biases will not exist even after the AI has gone through training data [84, 93]. Also, to address existing biases under the Fair Lending Act, and to improve processes and innovation, more data from people of different disabilities, colours, ages, genders and creeds could be incorporated into the system, but only if customers feel comfortable sharing it [12, 32].

As developing and operating AI requires extensive resources and big data, only large banks can be players in this field. This encourages concentration, affecting healthy competition in the market [7]. Banks also have to rely heavily on technology companies for AI’s critical tools and infrastructure, increasing operational risk [7].

As there are only a few players in the market, operational risk could easily feed into systemic risk. On top of that, the widespread use of AI in similar functions, such as the provision of credit or the trading of financial assets, and the uniformity of the data, training and methodology employed to develop the algorithms, could spark herding and procyclical behaviour [7].

Banks that work extensively with AI need staff with expertise not only in finance but also with formal training in computer science, cybersecurity, cryptography, decision theory, machine learning, formal verification, computer forensics, steganography, ethics, mathematics, network security, psychology and other relevant fields. The challenge is to find a sufficient number of staff to fit this role.

AI in the form of robo-advisory services incurs high development, marketing and advertising costs. A single client acquisition costs between $300 and $1,000, with clients at the lower end generating only $100 in annual revenue [9, 94]. Robo-advisors’ slim operating margins and low average account sizes quickly eat up the profits garnered, taking banks a decade or more to recover the $10 to $100 million in marketing costs [9, 95].
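To spell out the arithmetic (an illustrative calculation using the figures above): a client acquired for $1,000 who generates $100 in annual revenue implies a simple payback period of $1,000 / $100 = 10 years per client, before servicing costs or client attrition are considered, which is consistent with the decade-long recovery horizon cited.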

Some studies have pointed out that ML can only act on the primary stages of decision-making, such as data processing and forming predictions. The higher levels of judgement, action and task require special skills, such as empathy, lateral thinking and risk evaluation, which AI is unable to muster [6].

Algorithmic trading through ML could also facilitate trading errors. A single failure in the AI system could lead to devastating catastrophes without a chance for recovery, resulting in flash crashes [96, 97] and causing “excess volatility or increase pro-cyclicality as a result of herding” [98]. Besides, compliance software at major financial institutions has stopped detecting trading issues when customer trades were excluded from the data [28]. Developers warn that new intelligent features embedded into AI could pose unexpected and unknown risks, creating new points of vulnerability and leaving loopholes for hackers to exploit [28, 32].

If humans make mistakes or manipulate the system, they can be fired instantly; if AI makes mistakes or is corrupted, customers will lose hope and trust in the bank and its systems [5]. Robo-advisors work with several parties, namely clearing firms, custodians, affiliated brokers and other companies in the industry, to offer their services to customers. While Lewis suggests that robo-advisors resolve conflicts of interest amongst these parties [99], Jung et al. suggest that conflicts remain, at a cost to customers [100]. If a company uses brokers, for example, this cost is transferred to customers, increasing the price of the service, while the robo-advisor makes a profit as the middleman [101]. In other scenarios, robo-advisors could receive a fee for order flow in exchange for routing trades to a clearing firm, or hold an interest in the very securities that customers are looking into [101].

Scripting errors, lapses in data management and misjudgements in model-training data can compromise fairness, privacy, security and compliance [28]. As the volume of data being sorted, linked and ingested is large, and further complicated by unstructured data, mistakes such as revealing sensitive information can take place: a client’s name might be redacted from the section used by an AI but still be present in the stockbroker’s notes section of the record, thus breaching the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) [28].

4.4 Earnings

Banks that manage their expenses well while fully utilising their assets to generate constant revenue streams are most likely to be sound [3]. AI enables banks to offer unique selling points in products, increasing customer satisfaction and boosting sales and revenue [6].

Studies have recorded chatbot controversy and backlash [5, 6]. Microsoft’s AI chatbot Tay tweeted racist, sexist, xenophobic and prejudiced opinions, learned from the tweets it read and its interactions with younger demographics on Twitter, upsetting customers [7, 102–104].

To increase market share and competitive position, improve the predictive power of algorithms and ensure AI is trained properly to avoid biased decisions, banks need a large set of quality, diverse data. Under pressure to achieve this goal, banks might share customers’ private data without consent, when customers trusted the bank to keep it confidential [7, 9, 12]. Privacy is important not only to customers but also to banks, allowing them to retain their competitive position [27].

Training AI to exclude certain segments of customers from sales could also lead to discrimination and bias [28]. AI could weave together individuals’ zip codes and incomes to create targeted offerings, discriminating against other classes and groups of people [28]. Robo-advisors can provide incorrect risk evaluations if they are not equipped with all aspects of an investor’s financial condition needed to accurately assess overall risk. When customised questions are unable to capture unique life experiences, such as receiving a large inheritance, customers are better off with human advisors [9].

Customers are more likely to rely on human advisers than on chatbots and robo-advisors for more personal and sensitive matters. One example is when large sums of money are involved, whether through wealthy customers or due to death and illness; another is during market volatility. Customers are less inclined to trust new technologies and prefer humans to handle such transactions for accountability purposes [6, 9, 105, 106]. Customers are also more confident gaining insight from human advisors when it comes to complex financial products such as derivatives, discussing complicated matters or making complaints [6].

AI lacks emotional quotient and is thus unable to connect with, understand or react to humans at a deeper level, to comprehend their emotions and to empathise, rejoice or sympathise with them [5, 6]. As such, some customers prefer a front-desk receptionist to a chatbot or an electronic menu that must be navigated.

The Equality Act, which oversees violations involving race and gender, acknowledges inequality when “a person” treats “another person” less favourably. It recognises discrimination by other “non-humans” such as a government agency or regulator, but it does not recognise discrimination by AI. As such, AI cannot be taken to court [6].

Banks may not disclose their use of AI to customers, either to benefit the bank or to avoid the “fear factor” of AI amongst customers. Banks ought to be transparent with their customers, revealing whether they are working with AI or human advisors. Although Swedbank and Société Générale agree that it is best to be honest with customers, others may not, as disclosure disadvantages banks in many ways. As such, AI is being trained to offer seamless interactions that very closely mimic humans [6].

4.5 Sensitivity

Banks are subject to market risks (interest rate risk, foreign exchange risk, price risk, etc.) that can have adverse effects on banks’ earnings and capital. AI provides real-time solutions to real-world problems [20], enabling banks to keep up with, adapt and respond to constant, dynamic changes in the environment, thus improving bank stability and soundness.

Unsupervised ML techniques are the only ones that can be used to detect fraud, as they are able to identify unusual transaction characteristics that can then be investigated and tested further to prove a transaction’s authenticity [27, 32]. Unsupervised ML can also be used to closely monitor traders’ behaviour, enabling auditing [107]. Yet, as unsupervised ML is linked to black-box decision-making, it is difficult to establish whether a decision was made fairly.
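As a concrete, hedged illustration of this kind of unsupervised monitoring, the sketch below flags unusually large synthetic transactions with an isolation forest; the data, the contamination threshold and the library choice (scikit-learn) are assumptions, and flagged items would still require human investigation, as the text notes.

```python
# Hedged sketch of unsupervised transaction monitoring: an isolation
# forest flags atypical amounts in synthetic data for human follow-up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
typical = rng.normal(50, 15, (2_000, 1))   # everyday transaction amounts
unusual = rng.normal(500, 50, (10, 1))     # a handful of outliers
X = np.vstack([typical, unusual])

detector = IsolationForest(contamination=0.01, random_state=2).fit(X)
flags = detector.predict(X)                # -1 marks suspected anomalies
print("flagged transactions:", int((flags == -1).sum()))
```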

ML in the form of decision tree and logistic regression models is interpretable but lacks accuracy [10]. AI in the form of Deep Neural Networks (DNNs) and ensemble models such as random forest, XGBoost and AdaBoost has higher predictive power and accuracy, as it works with multiple layers of hundreds or thousands of parameters and neurons while applying nested non-linear structures, which makes it an opaque and complex black box [10, 27, 32, 71]. Various stakeholders have labelled allowing black boxes to make decisions as irresponsible, since the decisions these black boxes create are not justifiable, interpretable, trackable, legitimate or trustworthy, lacking detailed explanations [71].
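This interpretability-versus-accuracy trade-off can be seen in miniature below (synthetic data, not a benchmark): the logistic regression exposes inspectable coefficients, while the random forest typically scores higher but offers no comparably direct account of its decisions.

```python
# Miniature of the trade-off: interpretable logistic regression vs. a
# more accurate but opaque random forest, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3_000, n_features=10, n_informative=6,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

lr = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
rf = RandomForestClassifier(random_state=3).fit(X_tr, y_tr)
print("logistic regression accuracy:", lr.score(X_te, y_te))
print("random forest accuracy:      ", rf.score(X_te, y_te))
print("inspectable coefficients:", lr.coef_.round(2))  # no RF counterpart
```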

5. Conclusion

Investment in AI is one of the most essential and core elements of bank survival; it is therefore vital for banks to continue to deploy AI in their operations. Yet AI suffers from a series of limitations that must be considered in assessing its use. Many studies have raised concerns that AI exhibits bias and discrimination, violates privacy, and manipulates political systems, compromising national security, economic and political stability, and digital and financial safety, with disastrous consequences ranging from reputational damage, revenue losses and regulatory backlash to criminal investigation; that it ignores equality and fair treatment; and that its decisions are difficult to evaluate due to poor explainability and transparency, resulting in a lack of trustworthiness, accountability and reliability [12, 20, 28, 108]. The chapter has portrayed bank soundness in the face of AI through the lens of CAMELS. The taxonomy partitions the challenges posed by AI into 1 Capital (C), 4 Asset (A), 17 Management (M), 8 Earnings (E), 1 Liquidity (L) and 2 Sensitivity (S) distinct categories. Ironically, both AI and banks are opaque in nature and have diminished public trust. Governments will be held accountable by taxpayers once again if markets come to a standstill as a result of AI. As such, banks need to provide answers on how well they are protecting customers’ privacy and security with a range of protocols, controls and measures. If a silver bullet is not found, then either banks will have to disappear or the world will witness yet another catastrophe created by banks, this time with the help of AI, trapping banks further in the conundrum of being between the devil and the deep blue sea.

Capital
  • Uncover crisis

Asset
  • Explainability
  • Transparency
  • Non-traditional data evaluation
  • Training and learning

Management
  • Predictability
  • Corruptibility
  • Responsibility
  • Cybersecurity risk
  • Bias from spurious relationships
  • Concentration risk
  • Systemic risk
  • Operational risk
  • Staff expertise
  • Conflict of interest
  • High cost
  • Decision making
  • Robo-advisor: incorrect risk evaluation
  • Bias from data input
  • Flash crash
  • Reputational risk
  • Mistakes

Earnings
  • Trade-off 2: data privacy vs. accuracy
  • AI backlash
  • Customer exclusion
  • Personal, sensitive and complex matters
  • Empathy and emotional quotient
  • Equality Act
  • Honesty
  • Robo-advisor: incorrect risk evaluation

Liquidity
  • Uncover crisis

Sensitivity
  • Complex tasks
  • Trade-off 1: interpretability vs. precision

Table 1.

Taxonomy of challenges posed by AI on bank soundness.


© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
