
Introductory Chapter: Artificial Intelligence - Latest Advances, New Paradigms and Novel Applications

Written By

Esther Villar, Eneko Osaba, Jesús L. Lobo and Ibai Laña

Submitted: 06 July 2021 Published: 01 September 2021

DOI: 10.5772/intechopen.99289

From the Edited Volume

Artificial Intelligence - Latest Advances, New Paradigms and Novel Applications

Edited by Eneko Osaba, Esther Villar, Jesús L. Lobo and Ibai Laña

1. Introduction

In the current technological society, data are considered the new oil of the digital economy. For this reason, Artificial Intelligence (AI), a field of knowledge that emerges and flourishes on data, can arguably be deemed the principal driving force of the current social and economic revolution. AI makes it possible to extract knowledge from data in order to infer, decide and proactively act in areas critical to human beings, such as transportation, energy, industry, health, security or the financial sector.

Today, AI services and products can be found in many everyday applications, from the mobile tools that enrich the user experience on our devices to the online shopping sector, where AI intervenes in the whole process, from targeted advertising to recommendation systems. Furthermore, the application of AI algorithms in industry has been a recurrent research topic for some years and represents one of the catalyst technologies of the digital transformation that industry is currently experiencing [1]. In this context, we can find heterogeneous systems such as predictive analytics methods, decision support techniques or artificial vision systems. Many additional applications are being developed and deployed by companies to help in diverse contexts, such as assistance with diagnosis and planning decisions, automated inspection, robotic applications or advanced manufacturing [2].

In any case, we are still at the very dawn of a technological revolution. It is widely considered that, in an increasingly interconnected and automated world, the companies that master AI technologies will exercise control over the market.

2. Main areas of artificial intelligence

Artificial Intelligence is a wide field of knowledge dedicated to the design, modeling and implementation of intelligent systems that automatically respond to complex problems arising in the real world. Several subfields can be found within this broader paradigm, among which Machine Learning (ML) and optimization stand out.

ML comprises those algorithms designed to extract knowledge from data, relying on fundamental concepts from computer science, statistics and probability. Beyond that, ML goes one step further, being capable of unveiling additional features of the data, such as causality or advanced cognitive reasoning. Thus, ML techniques are meant to represent raw data describing past experience and to render it into a model able to gain insights and make decisions or predictions. ML is closely related to data mining, although the latter fundamentally concentrates on exploratory analysis, whilst the former draws upon other artificial intelligence disciplines such as computational statistics or pattern recognition.

ML algorithms comprise both descriptive and predictive techniques. On the one hand, descriptive analysis refers to those methods aiming at describing the data, summarizing or categorizing it. On the other hand, predictive analysis is focused on finding trends, behaviors or conclusions that could be valuable for anticipating future outcomes. With respect to the learning style applied to model generation, ML algorithms are typically categorized as supervised, unsupervised, semi-supervised and reinforcement learning:

  • Supervised Learning [3]: in this category, labeled input data feed a learning algorithm in the training phase. The model, or inferred function, is generated under the premise of minimizing an error function or, equivalently, of maximizing accuracy. These systems are intended to correctly map unseen examples. The most commonly addressed problems in this case are classification and regression (a minimal code sketch is given after this list).

  • Unsupervised Learning [4]: no label for any input vector is provided. The objective in this case is to find the structure behind the patterns, with no supervisory or reward signal. These models analyze and deduce peculiarities or common traits in the instances so as to discover similarities and associations among the samples. Example problems are clustering and latent variable models.

  • Semi-Supervised Learning [5]: both labeled and unlabeled instances feed the algorithm, hence this category falls between the two previously mentioned ones. The acquisition of labeled data is fairly expensive and often requires human effort, whereas unlabeled data can be of great practical value for surpassing the performance of purely supervised approaches. The goal of such systems can be oriented towards transductive learning (deriving the labels of the unlabeled data by searching for analogies) or inductive learning (inferring the mapping from the initially labeled vectors to their corresponding categories).

  • Reinforcement Learning [6]: in this last category, the system interacts with its environment by producing actions and receiving either a positive or a negative stimulus from the resulting events. This feedback is translated into a learning process aimed at minimizing the punishment or maximizing the gained reward. This sort of learning is typical of robotics, whose realistic environments require algorithms able to identify relevant events in the stream of sensory inputs.
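
As a brief illustration of the supervised setting described above, the following sketch fits a classifier on labeled training data and then asks it to map unseen examples. It is a minimal example, assuming scikit-learn and its bundled Iris dataset are available; the choice of a random forest is purely illustrative, not a recommendation of any particular algorithm.

```python
# Minimal supervised-learning sketch (illustrative only): a model is fitted
# on labeled training data and then asked to map previously unseen examples.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # feature vectors and their labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)    # hold out unseen examples

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)                  # training phase on labeled data

y_pred = model.predict(X_test)               # map the unseen examples
print("Accuracy on unseen data:", accuracy_score(y_test, y_pred))
```

An unsupervised counterpart would simply drop the labels y and, for instance, group the samples of X with a clustering algorithm instead of fitting a classifier.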

Apart from these classical categories, the natural evolution of the field, together with the technological advances of recent years, has led to the proposal of more sophisticated paradigms. This sophistication can come from the way knowledge is acquired, as in transfer and online learning; the way knowledge is shared (Federated Learning); or the inner complexity of models that allows for the representation of complex knowledge (Deep Learning). Furthermore, the growing complexity of the systems developed in these areas has created a demand for understanding the behavior of the resulting applications, with the purpose of reaching systems that are comprehensible and reliable for users unfamiliar with AI technologies, and also of automating tasks for researchers and practitioners (with paradigms such as AutoML [7]). As a result of this need, the field known as Explainable AI has recently emerged [8], whose objective is to facilitate the interpretation and visualization of complex ML models (mainly Deep Learning models).
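
As a hedged illustration of the online-learning idea mentioned above, the sketch below updates a linear model incrementally as data arrive in small batches, rather than training it once on a fixed dataset. It assumes scikit-learn's SGDClassifier and a synthetic dataset; it is only a sketch of the general idea, not a reference implementation of any particular online-learning method.

```python
# Sketch of online (incremental) learning: the model is updated batch by
# batch with partial_fit instead of being trained once on all the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
classes = np.unique(y)                       # all class labels, declared up front

model = SGDClassifier(random_state=0)
batch_size = 500
for start in range(0, len(X), batch_size):   # simulate data arriving as a stream
    X_batch = X[start:start + batch_size]
    y_batch = y[start:start + batch_size]
    model.partial_fit(X_batch, y_batch, classes=classes)

print("Mean accuracy after streaming:", model.score(X, y))
```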

In another vein, optimization is the other widely studied topic within the broad field of AI. Hundreds of studies are published every year fully focused on answering many diverse real-world problems of this kind. In a nutshell, an optimization problem can be defined as the intelligent search for the best solution among the whole group of feasible ones. In this regard, a feasible solution is one that lies within the boundaries demarcated by the established constraints. Analogously, the word best refers in this context to the most desirable solution with respect to an objective function (or fitness function), which is expected to be maximized or minimized. In other words, an optimization procedure consists of finding the optimal solution to a problem taking into account i) the previously mentioned objective function, which provides a quantitative measure of performance, ii) the decision variables that compose the optimization problem and the parameters on which the solving algorithm is based, and iii) the constraints that must be met, which delimit the allowable search space.
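
To make these three ingredients concrete, the following sketch minimizes a simple quadratic objective over two decision variables subject to one inequality constraint, using SciPy. The objective, the variables and the constraint are made up purely for illustration and do not correspond to any problem discussed in this book.

```python
# Toy constrained-optimization sketch: objective function, decision
# variables and a constraint delimiting the feasible search space.
from scipy.optimize import minimize

def objective(x):
    # Quantitative measure of performance to be minimized (illustrative).
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Constraint: solutions are feasible only when x0 + x1 <= 3, i.e. 3 - x0 - x1 >= 0.
constraints = [{"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]}]

x0 = [0.0, 0.0]                               # initial guess for the decision variables
result = minimize(objective, x0, constraints=constraints)
print("Optimal decision variables:", result.x)
print("Objective value at the optimum:", result.fun)
```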

The nature and characteristics of the objective function, variables and restrictions described above give rise to a broad variety of optimization problems, such as numerical, linear, continuous or combinatorial optimization. We can also distinguish between single-objective optimization, whose aim is to optimize one sole objective, and multiobjective optimization [9], which entails finding a group of solutions that provide the optimal balance among different objectives. We would also like to highlight dynamic optimization [10], in which the constraints and/or the fitness function of the problem can vary over time; stochastic optimization [11], defined as the process of optimizing a problem in which one or more of its values are subject to randomness; and Transfer Optimization [12], devoted to exploiting the knowledge acquired while optimizing one problem in order to solve another related or unrelated problem. Additional categories include multimodal optimization and robust optimization, among many others.
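
As a small illustration of the multiobjective case, the helper below checks whether one solution Pareto-dominates another, i.e., is no worse in every objective and strictly better in at least one, assuming all objectives are to be minimized. The function name and the example objective vectors are hypothetical and serve only to illustrate the notion of balancing several objectives.

```python
# Pareto dominance for multiobjective minimization: solution a dominates b
# if it is no worse in every objective and strictly better in at least one.
def dominates(a, b):
    no_worse = all(ai <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai < bi for ai, bi in zip(a, b))
    return no_worse and strictly_better

# Example with two objectives (e.g. cost and delivery time), both minimized.
print(dominates([1.0, 2.0], [1.5, 2.0]))   # True: better cost, equal time
print(dominates([1.0, 3.0], [1.5, 2.0]))   # False: neither solution dominates
```

Solutions for which no other feasible solution is dominating form the Pareto front, which is precisely the group of balanced solutions that multiobjective methods aim to approximate.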

The interest in solving optimization problems can be justified in two different ways. On the one hand, optimization problems are usually modeled to give an efficient solution to a real-world problem, so their resolution entails both social and business interest. More precisely, this means that different real situations can be modeled as optimization problems and thereby treated and solved with greater efficiency [13]. On the other hand, many optimization problems are highly complex to solve, so finding efficient solutions constitutes an attractive challenge for researchers. More specifically, a large number of these problems are classified as NP-Hard. According to the theory of computational complexity, a problem is considered NP-Hard when no known algorithm is capable of finding an optimal solution for all of its instances in polynomial time.

3. Motivation behind the book edition

Regarding scientific production, AI represents one of the fastest-growing areas in the current research community, with more than 350,000 papers published since the nineties. A quick glance at the renowned Scopus® database confirms this statement and reveals a clear upward trend. More concretely, AI-related scientific production has evolved at a significant rate, from nearly 3,500 papers in 1990 to 14,100 in 2010 and more than 35,000 in 2020. For this reason, and considering the interest that this area is generating in the current community, the edited book that this chapter introduces revolves around recent prominent theories and developments of AI methods, as well as their application to different fields of engineering. Thus, this material represents a great opportunity for practitioners, lecturers and researchers interested in AI as a whole.

References

  1. Matt, C., Hess, T., Benlian, A.: Digital transformation strategies. Business & Information Systems Engineering 57(5) (2015) 339–343
  2. Lee, J., Davari, H., Singh, J., Pandhare, V.: Industrial artificial intelligence for industry 4.0-based manufacturing systems. Manufacturing Letters 18 (2018) 20–23
  3. Caruana, R., Niculescu-Mizil, A.: An empirical comparison of supervised learning algorithms. In: Proceedings of the 23rd International Conference on Machine Learning (2006) 161–168
  4. Barlow, H.B.: Unsupervised learning. Neural Computation 1(3) (1989) 295–311
  5. Zhu, X., Goldberg, A.B.: Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 3(1) (2009) 1–130
  6. Sutton, R.S., Barto, A.G.: Reinforcement learning: An introduction. MIT Press (2018)
  7. He, X., Zhao, K., Chu, X.: AutoML: A survey of the state-of-the-art. Knowledge-Based Systems 212 (2021) 106622
  8. Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.R.: Explainable AI: Interpreting, explaining and visualizing deep learning. Volume 11700. Springer Nature (2019)
  9. Abraham, A., Jain, L.: Evolutionary multiobjective optimization. In: Evolutionary Multiobjective Optimization. Springer (2005) 1–6
  10. Nguyen, T.T., Yang, S., Branke, J.: Evolutionary dynamic optimization: A survey of the state of the art. Swarm and Evolutionary Computation 6 (2012) 1–24
  11. Schneider, J., Kirkpatrick, S.: Stochastic optimization. Springer Science & Business Media (2007)
  12. Osaba, E., Del Ser, J., Martinez, A.D., Lobo, J.L., Herrera, F.: AT-MFCGA: An adaptive transfer-guided multifactorial cellular genetic algorithm for evolutionary multitasking. Information Sciences (2021)
  13. Osaba, E., Villar-Rodriguez, E., Del Ser, J., Nebro, A.J., Molina, D., LaTorre, A., Suganthan, P.N., Coello, C.A.C., Herrera, F.: A tutorial on the design, experimentation and application of metaheuristic algorithms to real-world optimization problems. Swarm and Evolutionary Computation (2021) 100888
