Open access peer-reviewed chapter

Applied Mathematics Tools in Digital Transformation

Written By

Francesco Calabrò, Maurizio Ceseri and Roberto Natalini

Reviewed: 18 February 2022 Published: 05 April 2022

DOI: 10.5772/intechopen.103806

From the Edited Volume

Digital Transformation - Towards New Frontiers and Business Opportunities

Edited by Antonella Petrillo, Fabio De Felice, Monica Violeta Achim and Nawazish Mirza


Abstract

Digital transformation is a process that companies start with different purposes. Once an enterprise embarks on a digital transformation process, it translates all of its business processes (or at least part of them) into a digital replica. Such a digital replica, the so-called digital twin, can be described with the tools of the Mathematical Sciences, allowing cost reduction in industrial processes, faster time-to-market of new products and, in general, an increase in the company's competitive advantage. A digital twin is a descriptive or predictive model of a given industrial process or product that is a valuable tool for business management, both in planning, because it can provide different scenario analyses, and in managing daily operations; moreover, it permits optimization of product and process operations. We present widespread applied mathematics tools that can help this modeling process, along with some successful cases.

Keywords

  • data mining
  • digital twin
  • modeling simulation optimization (MSO)
  • numerical linear algebra
  • scientific machine learning

1. Introduction

What is digital transformation? According to Ebert and Duarte [1], Digital Transformation (DX) is “about adopting disruptive technologies to increase productivity, value creation, and the social welfare”. This definition is quite general, meaning that DX can be adopted in many circumstances and by many actors: governments (from the local to the national level), multilateral organizations, and industries. Digitalization, and in particular the concept of the Digital Twin, has applications in many different fields [2]: Health, Meteorology, Manufacturing, Education, Cities and Transportation, and Energy.

Indeed, DX is having a huge impact on society at large. For instance, concerning the labour force, a report from the World Economic Forum [3] states that by 2025, as a consequence of digitalization, 85 million jobs will be destroyed, while 97 million new jobs will be created worldwide. Thus, new competencies and skills will be necessary in a digitally transformed world. Among the top skills listed in the WEF report are the following: Critical thinking and analysis, Active learning and learning strategies, Analytical thinking and innovation, Complex problem-solving, Systems analysis and evaluation. We can observe that a person with a background in mathematics possesses the above skills. More generally, STEM education will offer great opportunities in DX. A recent study [4] shows that in the US automation and the use of robots in the productive environment have increased enrolment at universities and in higher education in fields like Computer Science and Engineering. This shows an increasing awareness among the workforce of the importance of updating labour competencies.

This paper focuses in particular on the digital transformation of Industry. Throughout this paper, the term “Industry” means business and commerce, public and private research, development, and production facilities; in practice, all the activities that lie outside the field of academic research and education. For the industrial sector, DX leads to huge changes in how a company is managed and can also significantly affect customer satisfaction and product quality. The effects of digital transformation can be seen at many levels of an organization: the way employees work, how business processes operate, and how to collect, analyze, and use data. All the above considerations imply that DX does not mean simply the digitization of information. This is just a first indispensable step of a bigger transformation of the way a company is managed: in short, DX requires a digital culture within an enterprise [5].

Digitalization is enabled by a series of technologies:

  • IoT: different objects in a system (a product or a manufacturing environment) are connected by sensing devices to the Internet and can interact with one another; this allows the control of the system throughout its life cycle.

  • Big Data: the huge amount of data collected at every moment on the system is a powerful tool for understanding the process and extracting information.

  • Cloud: the collected data have to be stored properly and safely and made available to the users; the users, in turn, may operate on the data, for instance through simulations, to understand what is happening in the system under control.

But one of the most important technologies enabling the Digital Transformation is Mathematics. Modeling, Simulation, and Optimization methods (MSO) have demonstrated their usefulness for solving problems in real life: forecasting of air pollution, image processing, filtration processes, cultural heritage conservation, just to mention a few. MSO are becoming very important in a digitalized world, since they make it possible to extract information and knowledge from the data collected. In fact, in recent years researchers have developed new mathematical tools to be applied, for example, in Digital Twins [6, 7], and some authors have talked explicitly of an “Era of Mathematics” [8].

This paper will give an overview of the mathematical tools that can be applied in a digitally transformed enterprise. It is organized as follows. Section 2 will describe the two main approaches of applied mathematics to digitalization: Physics-Based and Data-Driven. We will describe the main differences and how they can be combined to increase their effectiveness, with an example from an industrial case. Section 3 will introduce some tools from linear algebra that can be applied to process the collected data. Finally, in a concluding section, we will summarize our points and stress the importance of promoting Mathematics towards Industry.

2. Physics-based versus data-driven approach: competition or collaboration?

The Digital Transformation implies the use of mathematical models and methods to take advantage of the (possibly huge) amount of data collected by an enterprise about its own processes. When an enterprise wants to digitalize its processes, it may apply a physics-based approach or a data-driven approach. These approaches imply different methods to deal with a problem, each one with its pros and cons [9]. In recent years, however, hybrid methods have been developed that build on the two approaches, combining the advantages of both model-based and data-driven approaches while reducing their disadvantages.

2.1 Physics-based

By the Physics-Based (PB) approach we mean a description of the process or device based on first principles. Thus, the digital counterpart of the process is a mathematical model that describes the physics of the system, taking into account all the relevant scales. Such an approach is also named Model-Based to emphasize the role of mathematical modeling.

Thus, given a system to be digitized, the enterprise has to translate it into a mathematical model with a variable degree of complexity. The complexity of the model depends on the problem at hand and on the business objectives. The model can be used to monitor the performance of the system and then decide how to manage possible anomalies. How can possible anomalies be detected? Sensors are positioned on the system to collect data about its functioning. The data are then compared with the model output by evaluating the residual, i.e.

$$r_i = u_i - d_i, \qquad \text{for } i = 1, \ldots, N \tag{1}$$

where $u_i$ is the outcome of the mathematical model, $d_i$ is the data collected by the sensors, and $N$ is the number of data collected. If the residual exceeds a given threshold, then the system does not “behave properly” and some countermeasure has to be implemented.
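For concreteness, the residual check of Eq. (1) amounts to a few lines of code. The following sketch is only an illustration (NumPy is assumed, and all names and values are synthetic), not part of any specific workflow from the literature cited here:

```python
import numpy as np

def detect_anomalies(model_output, sensor_data, threshold):
    """Flag the indices i where the residual |u_i - d_i| exceeds the threshold."""
    residuals = np.abs(np.asarray(model_output) - np.asarray(sensor_data))
    return np.flatnonzero(residuals > threshold)

# Hypothetical example: the model predicts a steady temperature of 70.0
u = np.full(5, 70.0)                           # model outcomes u_i
d = np.array([69.8, 70.1, 70.3, 73.2, 70.0])   # sensor readings d_i
print(detect_anomalies(u, d, threshold=1.0))   # -> [3]: a countermeasure is due
```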

Physics-based models can be powerful tools to create a Digital Twin. They can give great support in the first design of the digital system; in fact, they can give useful information on the processes to be described without large amounts of data. As an example, physics-based modeling can give a first idea of which variables should be monitored.

Physics-based models are characterized by transparency: this makes the interpretation of results straightforward.

Generalization is another important characteristic of PB models. The underlying assumptions, as well as the approximations made during the development of the model, determine the extent to which a model can be generalized to cases not previously considered. In general, in a PB approach, the limits of application of a model are known in advance, and this makes clear the range of problems it can describe.

One of the main problems of such an approach is the complexity of the resulting system of equations; in the presence of multiple processes interacting with one another (multi-physics systems) and at multiple spatial and/or temporal scales (multi-scale systems), the difficulty of solving the equations increases very rapidly. Techniques such as Reduced Order Modeling have been developed to diminish model and computational complexity and end up with a more manageable system of equations, while maintaining the accuracy of the results [10].
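To give the flavor of such techniques: a basic building block of many reduced order models is the Proper Orthogonal Decomposition (POD), which extracts a small basis from simulation snapshots via the singular value decomposition. The following is a minimal sketch on synthetic data (NumPy assumed), not a full ROM pipeline:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the system state at one time;
# the data are built so that 3 spatial patterns dominate
rng = np.random.default_rng(0)
modes = rng.standard_normal((1000, 3))
snapshots = modes @ rng.standard_normal((3, 50))
snapshots += 1e-3 * rng.standard_normal((1000, 50))   # small noise

# POD: SVD of the snapshots; singular values measure each mode's energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1    # modes for 99.9% of the energy
basis = U[:, :r]                               # reduced basis (1000 x r)
print(f"kept r = {r} modes")                   # typically r = 3 here

# Any state is now represented by r coefficients instead of 1000 values
x = snapshots[:, 0]
x_approx = basis @ (basis.T @ x)               # project and reconstruct
print(np.linalg.norm(x - x_approx) / np.linalg.norm(x))   # small error
```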

Nevertheless, this is the approach that permits the automotive industry to perform crash tests via numerical simulations alone, and new applications of the PB approach keep appearing. For example, we are now able to simulate a whole heartbeat, where all the physical components (electrophysiology, the passive and active mechanics of the cardiac muscle, the microscopic force generation in sarcomeres, the blood flow in the heart chambers, and the valve dynamics) contribute to the final simulation; see [11].

PB approaches have also been applied in the field of building automation: in [12] the authors focused on the predictive maintenance of biomass boilers while trying to minimize user discomfort.

2.2 Data-Driven

The Data-Driven (DD) approach implies the use of methods from Machine Learning, or even Deep Learning, to directly exploit the data collected by sensors on system performance. Artificial Intelligence tools are suitable when very large amounts of data are available and can be used to find hidden patterns in the sample that could not be discovered otherwise. Such patterns can be refined whenever new data are collected. In this sense, the tool “learns from experience”.

The methods applied are diverse and comprise the following:

  • Support Vector Machines

  • Artificial Neural Network

  • Convolutional Neural Network

  • Recurrent Neural Network

  • Generative Adversarial Network

The main advantage of the DD approach is the ease with which solutions can be found, compared with the first-principle approach. DD methods have proven to be a very good alternative to PB models. If enough data are available, tools like neural networks can find hidden structures in very complex problems, where it is difficult to describe the underlying physical structure exactly. To give an example in the field of Material Sciences, in [13] the authors developed a neural network to simulate a Cellular Mechanical Metamaterial (CMM) and compared its performance with Direct Numerical Simulations (DNS), usually applied to this kind of problem. While the DNS needs about $10^5$ degrees of freedom (DoF) and about $5 \times 10^2$ s to describe the mechanical properties of the CMM, the neural network scheme needs only 290 DoF and about 6 to 8 s to provide a solution.

The ability of a DD model to generalize is very limited: it can describe only circumstances spanned by the data already available, and it needs a large amount of new data to take into account a more general setting. The need for (possibly large) amounts of information is an overall limit to the development of the DD approach; sometimes, one simply does not have sufficient data to describe a given process properly.

Another problem is the lack of transparency. DD models in general build so-called black boxes that do not permit a clear interpretation of the results. This has led to the development of explainable AI [14].

According to Gartner [15], thirty-seven percent of companies have implemented AI in some form, translating to a 270 percent increase over the last four years. Customers are accustomed to bots and AI mechanisms that provide, among other things, recommendations on services such as Netflix or Spotify. Not surprisingly, 31 percent of companies plan to increase the share of artificial intelligence in their business.

2.3 Hybrid approach

Recently, some researchers have attempted to compare the two approaches in order to understand the advantages and disadvantages of both. For example, in [16] the authors compared PB and DD approaches to the problem of fault detection and isolation in an automotive application. They found the same performance for both approaches in terms of detection and robustness and described the main shortcomings of each. However, Moallemi and coauthors [17] applied both Model-Based and DD models to the problem of structural monitoring of a building; they found that the Model-Based approach yields the best performance and point out that scarcity of data limits the behavior of DD algorithms. Thus, the debate is still open on which is the best approach to digitalization.

The recent tendency, however, is to take advantage of the benefits of both by building hybrid methods that mix first principles with data-intensive modeling [18, 19, 20]. In [21] the authors survey the research literature on this topic and propose a taxonomy of the developed methods. The integration of PB and DD approaches has several applications. PB models of complex processes require approximations to arrive at a usable set of equations; approximations are necessary whenever the process is not completely understood in all of its elements, and they introduce bias in the model results. Another issue is the presence of physical parameters that have to be estimated, sometimes with a small amount of data. On the other hand, machine learning tools need a large amount of data to reproduce a given process with precision and have limited generalization capabilities. The use of DD techniques in combination with first-principle modeling may help to overcome such limitations and has been shown to be very useful in practice.

One attempt is the so-called Physics-Informed Neural Networks (PINNs) [22]: a deep learning framework for the resolution of PDEs in several applications (fluid dynamics, quantum physics, reaction–diffusion systems, etc.). The neural network is trained by minimizing a loss function that incorporates the physical constraints. If one wishes to solve an equation of the type $\mathcal{A}u = 0$, the solution is given by the minimum of the following function

$$\frac{1}{N}\sum_{i=1}^{N}\left|\mathcal{A}u(x_i)\right|^2 + \frac{1}{M}\sum_{j=1}^{M}\left|u(x_j) - u_j\right|^2 \tag{2}$$

where $(x_j, u_j)$ for $j = 1, \ldots, M$ are the training data (including initial and boundary conditions), and $x_i$ for $i = 1, \ldots, N$ are the collocation points where the equation is enforced.
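A minimal sketch of this idea, assuming PyTorch and the toy ODE $u'(x) = -u(x)$ with $u(0) = 1$ (exact solution $e^{-x}$), is reported below; the network size and training setup are illustrative choices, not those of [22]:

```python
import torch

# A small network representing the unknown solution u(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x_col = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)
x_dat = torch.zeros(1, 1)      # "training data": the initial condition u(0)=1
u_dat = torch.ones(1, 1)

for step in range(2000):
    opt.zero_grad()
    u = net(x_col)
    # du/dx by automatic differentiation
    du = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    loss_pde = ((du + u) ** 2).mean()              # |A u(x_i)|^2 term of Eq. (2)
    loss_data = ((net(x_dat) - u_dat) ** 2).mean() # |u(x_j) - u_j|^2 term
    (loss_pde + loss_data).backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # close to exp(-1) ~ 0.368
```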

The interplay of PB and DD approaches has been employed in Unmanned Aerial Vehicle (UAV) management; in [23], the authors present a Component-Based Reduced Order Model coupled with Bayesian state estimation that allows a small aircraft to dynamically adjust its trajectory.

Hybrid methods have been applied to describe a spring-mass system subjected to damping that evolves according to multiple time scales [24]; such an application is relevant for many industrial settings and is generally very difficult to solve numerically.

2.4 An example in predictive maintenance

Through the technologies described, it is possible to solve the usual problems of engineering and computational mechanics, but also to explore new fields in which, up to now, mathematical modeling had not been able to replace experience or direct control of the mechanism. One of the most interesting areas is that of predictive maintenance [25], already mentioned above. In some sectors, such as aircraft maintenance, the benefits of having guidelines that ensure the correct functioning of machines in service are well known: in civil aeronautics, periodic checks and the periodic replacement, for example, of bolts or panels subjected to greater stress are an established practice, and this practice comes from the combination of experience and modeling. On the other hand, it is well known that in a complex system it is not at all easy to keep all the components under control, guarantee correct functioning, and prevent possible failures and/or breakages. Machines are often equipped with many sensors that monitor correct operation, signaling anomalies in whatever each sensor is capable of measuring. Unfortunately, the general picture is often obtained only from the combination of information; if only a single sensor is monitored, the information collected is not really helpful for decisions. Some industrial sectors, such as boating or highly automated production, would greatly benefit from the possibility of preventing critical situations before failures occur that can endanger the correct functioning of the machinery. The models available for complex tools, which work on different scales and interact with each other, often have to overlook some effects and do not allow predictive models accurate enough to help decision makers determine if or when to intervene with replacements or repairs. Most of the time, a problem is noticed only when it is already too late.

The recent literature on predictive maintenance foresees the possibility of combining the effect of a control based on the physics of the phenomena, possibly linked to the information from the sensors, with the experience given by the analysis of the available data processed through artificial intelligence. For example, in [26] the authors describe the use of hybrid techniques for controlling the operation of Computer Numerical Control (CNC) machines. These are fundamental bricks in modern industries (from Wikipedia): “CNC is the automated control of machining tools (such as drills, lathes, mills and 3D printers) by means of a computer. A CNC machine processes a piece of material (metal, plastic, wood, ceramic, or composite) to meet specifications by following coded programmed instructions and without a manual operator directly controlling the machining operation.” The authors of [26] show how it is possible to build a digital twin that provides information on the expected behavior of the machine and, at the same time, through the data collected by the sensors inside the machine, on its in-situ behavior. Comparing the two reveals a good ability to highlight anomalies, based on the physical modeling. At the same time, the acquisition of a large amount of data through sensors allows the elaboration of a database that also feeds a sort of forecasting model: this increases the knowledge of the ongoing process and helps in prediction.
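A toy sketch of this physics-plus-data pattern (entirely synthetic signals, NumPy assumed) could look as follows: the physics model gives the expected nominal behavior, the residual of Eq. (1) monitors the deviation, and a simple trend fitted on past residuals forecasts when the alarm threshold will be crossed:

```python
import numpy as np

# Physics-based expectation: nominal vibration level of the machine
t = np.arange(100.0)
u_model = np.full_like(t, 2.0)

# Sensor data: nominal behavior plus noise and a slowly growing drift (wear)
rng = np.random.default_rng(1)
d_sensor = 2.0 + 0.01 * t + 0.05 * rng.standard_normal(t.size)

# Physics-based part: residual between expectation and measurement, Eq. (1)
r = d_sensor - u_model

# Data-driven part: fit a linear trend on the past residuals and extrapolate
# to estimate when the alarm threshold will be crossed
slope, intercept = np.polyfit(t, r, deg=1)
threshold = 0.5
t_alarm = (threshold - intercept) / slope if slope > 0 else np.inf
print(f"estimated threshold crossing at t = {t_alarm:.0f}")   # about t = 50
```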

The use of a hybrid approach helps to meet the targets of predictive maintenance:

  • Fault diagnosis

  • Fault forecast

  • Intelligent decision

  • Intelligent maintenance.

3. Applied linear algebra tools

We now consider how tools from linear algebra can help solve the problems considered in the previous section. We consider both problems: the search for a model that can justify some available data, and the case where the model is given and some prediction is made via numerical computations. What we present next is inspired by [27, 28], where readers can find full details on the topics introduced in this section.

3.1 Modeling with linear functions

First, we assume that a dataset is given; instances $(x_i, y_i)$ are known for $i = 1, \ldots, M$. As previously discussed, such a dataset is described with numbers, and we identify $x$ as the input and $y$ as the output. Each input is a vector of numbers $x_i \in \mathbb{R}^d$, while we consider the output to be a number $y_i \in \mathbb{R}$. The dataset can then be described by an input matrix $D \in \mathbb{R}^{M \times d}$, whose rows are the instances and whose columns are the features, and a column vector $Y$ that stores the outputs.

Our modeling problem is to describe the dependence of the output on the input, that is, to identify a mathematical law $F: \mathbb{R}^d \to \mathbb{R}$ in good agreement with the available dataset: $F(x_i) \approx y_i$. The simplest mathematical law we can introduce is the linear function, which is completely described by its coefficients $w \in \mathbb{R}^d$: $F(x) = \sum_{j=1}^{d} w_j x_j$. If we impose agreement with the data, the unknown coefficients solve the problem:

$$\text{Find } w \in \mathbb{R}^d \text{ such that } Dw \approx Y \qquad \text{(Linear System PB)}$$

Such a problem, if set in the square ($d = M$) and exact framework, is the well-known resolution of a linear system. We prefer to write the search problem in an approximate way, using the symbol $\approx$, because usually we deal with inexact data or non-square datasets, so that the existence of a solution is not guaranteed. In the following sections we discuss the resolution of the (approximate) linear problem. Finally, once the $w$ are calculated, for a new instance $\bar{x} \in \mathbb{R}^d$ the model predicts $\bar{y} = \sum_{j=1}^{d} w_j \bar{x}_j$.
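As a minimal sketch (NumPy assumed, synthetic data), the problem (Linear System PB) can be solved in the least-squares sense as follows:

```python
import numpy as np

# Synthetic dataset: M = 100 instances with d = 3 features
rng = np.random.default_rng(2)
D = rng.standard_normal((100, 3))
w_true = np.array([1.5, -2.0, 0.5])
Y = D @ w_true + 0.01 * rng.standard_normal(100)   # noisy outputs

# Solve D w ~ Y in the least-squares sense
w, *_ = np.linalg.lstsq(D, Y, rcond=None)
print(w)                       # close to w_true

# Prediction for a new instance: y_bar = sum_j w_j x_bar_j
x_bar = np.array([1.0, 0.0, 2.0])
print(x_bar @ w)
```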

3.2 Model approximation

When a model is given, it is used to predict the behavior of the quantity of interest, i.e., to make forecasts in unseen cases. Mathematical models are relations involving, in many cases, operations that are not tabulated or easily computable. In order to understand how the tools of applied mathematics apply in such cases, we can start from a general formulation where the quantity of interest is $u$ (an unknown function in the general case, or a vector/scalar) that is the solution of a problem $P(u; \delta) = 0$, where $\delta$ are the given data.

To begin with a simple example, consider the approximation/extrapolation of data. As in the previous section, the inputs are vectors of numbers $x_i \in \mathbb{R}^d$, while we consider the output to be a number $y_i \in \mathbb{R}$, so that the known data are $\delta = \{(x_i, y_i)\}_{i=1,\ldots,M}$. Possibly, the outputs $y_i$ are evaluations of a “black box” function that we want to describe in a different way. In the previous section, we considered the resolution of a linear problem in order to look for a linear model that could justify the dependence of the output on the input data. In this case, we start from different knowledge and look for a non-linear model that gives “exactly” the dependence of $y$ on $x$. The approximate solution will be a function $\tilde{u} \approx u$ that can be described by a finite set of coefficients. We look for $\tilde{u}$ such that $\tilde{u}(x_i) \approx y_i$, where $\tilde{u}(x) = \sum_{j=1}^{N} \alpha_j f_j(x)$, i.e., $\tilde{u}$ is a linear combination of $N$ fixed functions $f_j(x)$. The coefficients of the linear combination, $\alpha \in \mathbb{R}^N$, determine the solution of our approximated problem. One way to fix such coefficients is to impose accordance with the available data:

$$\text{Find } \alpha \in \mathbb{R}^N \text{ such that } \tilde{u}(x_i) = \sum_{j=1}^{N} \alpha_j f_j(x_i) \approx y_i, \qquad i = 1, \ldots, M.$$

This problem is in exactly the same framework as the ones seen in the previous section: a matrix problem where the matrix is the so-called collocation matrix $\Phi \in \mathbb{R}^{M \times N}$, $\Phi_{i,j} = f_j(x_i)$.
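A minimal sketch of this construction (NumPy assumed), choosing monomials as the fixed functions $f_j$, is the following:

```python
import numpy as np

# Hypothetical "black box": outputs y_i sampled at the sites x_i (here d = 1)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x)

# Fixed basis functions f_j: the monomials 1, x, x^2, ..., x^(N-1)
N = 6
Phi = np.vander(x, N, increasing=True)   # collocation matrix, Phi_ij = f_j(x_i)

# Coefficients alpha from accordance with the data (least squares, M > N)
alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# The approximant u(x) = sum_j alpha_j f_j(x) evaluated at a new site
x_new = 0.35
u_new = np.vander([x_new], N, increasing=True) @ alpha
print(u_new[0], np.sin(2 * np.pi * x_new))   # approximation vs. true value
```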

Remark. The choice of the functions $f_j$ strongly affects both the quality of the solution and the ability to solve the discrete problem easily. Usual choices are polynomials or piecewise polynomial functions. Nevertheless, many other systems of functions can be used instead, and results on their capability to approximate general functions, known as Universal Approximation Theorems (UAT), are available. One of the reasons why Neural Network (NN) functions are widely used in numerical approximation is that UAT results are available in this case. We remark that in the general NN case some parameters inside the functions can also be tuned, so that the overall determination of the network is a nonlinear optimization problem.

Moving from the approximation of a given function to different scientific models, we encounter the case of equilibrium laws based on first principles, as described in the previous section on the Physics-Based approach. Also in this case the unknown is a function, but now the model is written as the solution of a mathematical problem $P(u; \delta) = 0$ that involves some operators applied to the unknown function: derivatives and/or integrals (which are linear operators) and possibly nonlinear transformations. (The interpolation problem described at the beginning of this section can be seen as the resolution of a mathematical problem that involves only the evaluation of functions at sites, which is a linear functional.) Even in the case where only linear operators are considered, approximated problems for integration and differentiation are needed, because both operations are intrinsically infinite-dimensional [29, 30]. In general, two main roads, or combinations of the two, can be taken for the numerical resolution of problems that involve the use of operators:

  • instead of considering general functions, look for a solution written as a linear combination of simple functions on which the operations can be performed easily, $\tilde{u} = \sum_i \alpha_i \phi_i \approx u$;

  • instead of considering the exact model, approximate the operators and look for solutions in particular cases, for example at fixed sites: $P^N \approx P$.

Applied mathematics tools aim to introduce methods that translate the original model into some approximation that is consistent with the original formulation and gives an accurate approximation of the unknowns. The initial problem $P(u; \delta) = 0$ is then reformulated in an approximate way ($P^N \approx P$), and the sought solution is an approximation of the mathematical solution ($\tilde{u} \approx u$). The final resolution step, in most cases, reduces to the resolution of a linear problem like the ones seen before.
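As an elementary illustration of this path from operators to linear systems, the sketch below (NumPy assumed) approximates the problem $-u'' = f$ on $(0,1)$ with homogeneous boundary conditions by finite differences, so that $P^N \approx P$ and the final step is indeed the resolution of a linear system:

```python
import numpy as np

# Discretize -u'' = f on (0, 1), u(0) = u(1) = 0, at n interior sites
n, h = 99, 1.0 / 100
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)   # chosen so the exact solution is sin(pi x)

# Tridiagonal matrix of the discrete (negative) second derivative
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u_approx = np.linalg.solve(A, f)   # the final step: a linear system
print(np.max(np.abs(u_approx - np.sin(np.pi * x))))   # small O(h^2) error
```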

3.3 Numerical resolution of linear systems

Because many of the problems seen in this excursus are finally modeled via linear systems, we present some of the ingredients that can be used for their resolution. The problem written in (Linear System PB) is the one we aim to solve.

As commented, the unknown vector $w$ has to solve the problem of interpolating the available data with a linear function: if $D_{ij}$, $j = 1, \ldots, d$, are the features of the $i$-th individual and $Y_i$ is the output, then the $w_j$ are such that $\sum_j w_j D_{ij}$ should be close to $Y_i$, $i = 1, \ldots, M$. The first case we consider is the one where we have more parameters to fix than available information, so that the number of features $d$ is greater than the number of individuals $M$; the problem is then also referred to as overparametrized.

First of all, we introduce the rank $r \le d$ of the matrix $D$: this is the number of columns of the matrix that are linearly independent, i.e., columns that cannot be written as a linear combination of the other ones. Thinking of the matrix as a collection of features from individuals, the rank is the number of pieces of information we collect that are linearly independent. Because our aim is to find the coefficients $w_j$ that solve the linear system, if we call $D_j$ the $j$-th column of $D$, the linear system can be written as $\sum_j w_j D_j \approx Y$; thus only columns that are linearly independent give useful information for the determination of $w$. The real dimensionality of the problem, the number of degrees of freedom that can be sought, is then the rank of the matrix.

An important issue when data are collected with some error, or “noise”, is that the rank of the matrix can be affected by this error: some columns may be linearly independent only because of the randomness of the noise. Then a preliminary analysis, the so-called feature selection, that reduces the number of columns may be necessary. One possible way to proceed is to explore the Singular Value Decomposition of the matrix and, in particular, the order of magnitude of the singular values. All these procedures, which avoid redundancy in the feature collection, end up with the so-called active features. When this preprocessing of the data is done, we need to solve the problem of determining the $d' \le d$ coefficients $w$ to be fixed from the $M$ linear equations: a linear problem of the type $Dw \approx Y$. If $d'$ is still greater than or equal to $M$, the problem, which is referred to as square if $d' = M$ or underdetermined if $d' > M$, always has a solution, i.e., a solution of the matrix problem $Dw - Y = 0$. If the problem is square, the solution is unique. There are more solutions when $d' > M$: to select among these solutions we can introduce the square problem $D^T D \tilde{w} = D^T Y$, where $T$ stands for the transposed matrix. Such equations are referred to as the “normal equations” of the non-square linear problem. The solution $\tilde{w}$ solves a square linear system and is one of the possible vectors $w$.
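A minimal sketch of this preprocessing (NumPy assumed, with synthetic data built to have numerical rank 3) is reported below; here the `rcond` cutoff of `lstsq` plays the role of the feature-selection threshold discussed above:

```python
import numpy as np

# Overparametrized dataset: M = 5 individuals, d = 8 features, constructed
# so that only 3 directions carry independent information
rng = np.random.default_rng(3)
D = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))
D += 1e-8 * rng.standard_normal((5, 8))   # noise slightly perturbs the rank
Y = D @ rng.standard_normal(8)            # outputs consistent with the data

# The singular values reveal the real dimensionality: 3 large, the rest tiny
s = np.linalg.svd(D, compute_uv=False)
rank = int(np.sum(s > 1e-6 * s[0]))       # numerical rank = 3
print("singular values:", s, "numerical rank:", rank)

# Minimum-norm solution of D w ~ Y computed through the (truncated) SVD
w, *_ = np.linalg.lstsq(D, Y, rcond=1e-6)
print("residual:", np.linalg.norm(D @ w - Y))   # essentially zero
```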

Overdetermined linear problems are those where the number of features (or active features, as described above) is less than the number of individuals, that is, the number of rows $M$ of the matrix; the problem is then also referred to as overfitted. To focus on such an overdetermined case, we can consider the scalar case, where $d = 1$. The coefficient $w \in \mathbb{R}$ is then the slope of the straight line that we want to construct in order to give a law of direct proportionality between $x$ and $y$: $w x_i = y_i$. This problem admits a solution only if $M = 1$ or if the points $(x_i, y_i)$ are aligned (and aligned with the origin $(0,0)$).

In the general case, in order to solve the linear problem $Dw \approx Y$, one has to introduce an optimality condition and solve the problem of “best fit”. After some computation, what turns out to be a good solution is the solution of the normal equations introduced in the underdetermined case.
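In the scalar case the normal equations reduce to a single equation, $\left(\sum_i x_i^2\right) w = \sum_i x_i y_i$, whose solution is the best-fit slope. A minimal sketch (NumPy assumed, synthetic data):

```python
import numpy as np

# Overdetermined scalar case (d = 1): fit a direct-proportionality law w*x ~ y
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])   # roughly, but not exactly, aligned

# Normal equations D^T D w = D^T Y reduce to a single scalar equation
w = (x @ y) / (x @ x)
print(w)                              # best-fit slope, close to 2

# The same answer through the general least-squares machinery
w_lstsq, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)
print(w_lstsq[0])
```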

4. Conclusions

What about the need of companies for such technologies? What we experience every day is that new technologies are deeply changing markets, as if in a “second wave of digital transformation”. In fact, Robotics, Machine Learning, Big Data and Artificial Intelligence are becoming more and more common.

By using DX properly, companies have the opportunity to optimize operating costs and thus increase process efficiency via new services. Additionally, the digitalization of company data produces so-called “digital capital”, made of huge amounts of information from various sources, which increases the possibility of setting up predictive models. This allows companies to better meet customers’ needs. DX is the present for companies that aim to remain competitive, and the use of even simple mathematical tools opens up great opportunities in unexpected fields.

However, enterprises are not completely aware of this point, nor of how they would benefit from the application of mathematical research in their daily operations. In the last few years, a new professional figure has emerged worldwide to promote:

  • MSO towards enterpises, and

  • Industry-Academic Collaborations.

The above role is played by the Technology Translator. A Technology Translator has strong competencies in mathematical research and, on the other hand, a good knowledge of industrial processes; thus, he/she is the right person to talk both with mathematicians and with enterprises. Since Academics and Industrialists speak different languages,² the need has emerged for a professional figure who can translate industrial problems into mathematical terms, enabling cooperation between the two worlds. There are several institutions in Europe employing Technology Translators. In Italy, the National Research Council of Italy promotes the project Sportello Matematico per l’Innovazione e le Imprese.³ The main objectives of Sportello Matematico are: promoting Mathematical Technologies as a source of industrial innovation; activating cooperation between enterprises and research centers; facilitating the employment of mathematicians in Industry [31]. The work of Technology Translators contributes to the change of perspective required by the complex challenges of Digital Transformation.

References

  1. Ebert C, Duarte CH. Digital transformation. IEEE Software. 2018;35(4):16-21
  2. Rasheed A, San O, Kvamsdal T. Digital twin: Values, challenges and enablers from a modeling perspective. IEEE Access. 2020;8:21980-22012
  3. World Economic Forum. The Future of Jobs Report. Geneva, Switzerland: World Economic Forum; 2020
  4. Di Giacomo G, Lerch B. Automation and Human Capital Adjustment: The Effect of Robots on College Enrollment. 2021. Available from: SSRN 3920935
  5. Ivančić L, Vukšić VB, Spremić M. Mastering the digital transformation process: Business practices and lessons learned. Technology Innovation Management Review. 2019;9(2):36-50
  6. Hartmann D, Van der Auweraer H. Digital twins. In: Progress in Industrial Mathematics: Success Stories. Cham: Springer; 2021. pp. 3-17
  7. Hauret P. At the crossroads of simulation and data analytics. European Mathematical Society Magazine. 2021;16(121):9-18
  8. Bond P. The Era of Mathematics: An Independent Review of Knowledge Exchange in the Mathematical Sciences. 2018. Available from: epsrc.ukri.org/newsevents/pubs/era-of-maths/ [Accessed: February 9, 2022]
  9. Rueckert D, Schnabel JA. Model-based and data-driven strategies in medical image computing. Proceedings of the IEEE. 2019;108(1):110-124
  10. Magargle R, Johnson L, Mandloi P, Davoudabadi P, Kesarkar O, Krishnaswamy S, et al. A simulation-based digital twin for model-driven health monitoring and predictive maintenance of an automotive braking system. In: Proceedings of the 12th International Modelica Conference, Prague, Czech Republic, May 15-17, 2017. Vol. 132. Linköping: Linköping University Electronic Press; 2017. pp. 35-46
  11. Quarteroni A, editor. Modeling the Heart and the Circulatory System. Berlin, Germany: Springer; 2015
  12. Cauchi N, Macek K, Abate A. Model-based predictive maintenance in building automation systems with user discomfort. Energy. 2017;138:306-315
  13. Xue T, Beatson A, Chiaramonte M, Roeder G, Ash JT, Menguc Y, et al. A data-driven computational scheme for the nonlinear mechanical properties of cellular mechanical metamaterials under large deformation. Soft Matter. 2020;16(32):7524-7534
  14. Longo L, Goebel R, Lecue F, Kieseberg P, Holzinger A. Explainable artificial intelligence: Concepts, applications, research challenges and visions. In: International Cross-Domain Conference for Machine Learning and Knowledge Extraction. Cham: Springer; 2020. pp. 1-16
  15. Gartner. What Is Artificial Intelligence? Available from: https://www.gartner.com/en/topics/artificial-intelligence
  16. Yang R, Rizzoni G. Comparison of model-based vs. data-driven methods for fault detection and isolation in engine idle speed control system. In: Annual Conference of the PHM Society. Vol. 8(1). San Diego, CA: PHM Society; 2016
  17. Moallemi A, Burrello A, Brunelli D, Benini L. Model-based vs. data-driven approaches for anomaly detection in structural health monitoring: A case study. In: 2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). Manhattan, NY: IEEE; 2021. pp. 1-6
  18. Liu Z, Meyendorf N, Mrad N. The role of data fusion in predictive maintenance using digital twin. In: AIP Conference Proceedings. Vol. 1949(1). Melville, NY: AIP Publishing LLC; 2018. p. 020023
  19. Shetty P, Mylaraswamy D, Ekambaram T. A hybrid prognostic model formulation, system identification and health estimation of auxiliary power units. In: 2006 IEEE Aerospace Conference. Manhattan, NY: IEEE; 2006. p. 10
  20. Slimani A, Ribot P, Chanthery E, Rachedi N. Fusion of model-based and data-based fault diagnosis approaches. IFAC-PapersOnLine. 2018;51(24):1205-1211
  21. Willard J, Jia X, Xu S, Steinbach M, Kumar V. Integrating physics-based modeling with machine learning: A survey. arXiv preprint arXiv:2003.04919. 2020;1(1):1-34
  22. Raissi M, Perdikaris P, Karniadakis GE. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics. 2019;378:686-707
  23. Kapteyn MG, Knezevic DJ, Huynh DB, Tran M, Willcox KE. Data-driven physics-based digital twins via a library of component-based reduced-order models. International Journal for Numerical Methods in Engineering. 2020:1-18
  24. Chakraborty S, Adhikari S. Machine learning based digital twin for dynamical systems with multiple time-scales. Computers & Structures. 2021;243:106410
  25. Ran Y, Zhou X, Lin P, Wen Y, Deng R. A Survey of Predictive Maintenance: Systems, Purposes and Approaches. arXiv preprint arXiv:1912.07383. 2019
  26. Luo W, Hu T, Ye Y, Zhang C, Wei Y. A hybrid predictive maintenance approach for CNC machine tool driven by digital twin. Robotics and Computer-Integrated Manufacturing. 2020;65:101974
  27. Deisenroth MP, Faisal AA, Ong CS. Mathematics for Machine Learning. Cambridge: Cambridge University Press; 2020. Available from: https://mml-book.github.io/
  28. Strang G. Linear Algebra and Learning from Data. Cambridge: Wellesley-Cambridge Press; 2019
  29. Calabrò F, Manni C, Pitolli F. Computation of quadrature rules for integration with respect to refinable functions on assigned nodes. Applied Numerical Mathematics. 2015;90:168-189
  30. Calabrò F et al. Null rules for the detection of lower regularity of functions. Journal of Computational and Applied Mathematics. 2019;361:547-553
  31. Bertsch M, Ceseri M, Natalini R, Santoro M, Sgalambro A, Visconti F. On the Italian network of industrial mathematics and its future developments: Sportello Matematico per l’industria italiana. In: European Consortium for Mathematics in Industry. Cham: Springer; 2014. pp. 173-178

Notes

  2. This is also related to the fact that Industry and Academia have different objectives and timescales.
  3. www.sportellomatematico.it
