On the Effect of Fabrication and Testing Uncertainties in Structural Health Monitoring



Introduction
Sensitive engineering structures are designed to be safe such that catastrophic failures can be avoided. Traditionally, this has been achieved by introducing safety factors to compensate for the lack of consideration of a structure's full-scale behavior beyond the expected loads. Safety factors create a margin between real-time operational loading and the residual strength remaining in the structure. Although fail-safe and safe-life methodologies were the dominant design strategies for many years, the increasing impact of economic considerations and emerging inspection technologies led to a new design strategy called the damage tolerance strategy [1]. Damage-tolerant structures carry an added cost related to the frequency and duration of inspections. For such structures, inspection intervals and damage thresholds are estimated, and at every inspection the structure's health is assessed by searching for a maximum flaw size, crack length, and orientation. If necessary, modified inspection intervals are proposed, especially at vulnerable locations of the structure. Other limitations of the damage tolerance strategy include the lack of continuous assessment of the structure's health status and the need to pause regular operation of the structure during off-line inspections. Over time, prompted by several historical catastrophic failures as well as the advancement of nondestructive technologies and the associated economic benefits, designers introduced the concept of Structural Health Monitoring (SHM). It may be hard to find a comprehensive and consistent definition of SHM, but as Boller suggested in [1], "SHM is the integration of sensing and possibly actuation devices to allow the loading and damage state of the structure to be monitored, recorded, analyzed, localized, quantified and predicted in a way that nondestructive testing becomes an integral part of the structure".
This definition contains two major elements: load monitoring and damage diagnosis as a consequence of operational loading (which is often stochastic in nature).
The literature shows an increasing number of research programs devoted to the development of damage identification systems, addressing problems such as devising cost-effective methods for the optimal number and positioning of sensors; identifying structural features that are sensitive to small damage levels; discriminating changes caused by damage from those due to changes in environmental and testing conditions; clustering and classification algorithms for separating damaged and undamaged states; and comparative studies of different damage identification methods applied to common datasets [2]. These topics are currently the focus of various groups in major industries including aeronautics [3,4], civil infrastructure [5], oil [6,7], railways [8], condition monitoring of machinery [9,10], automotive, and semiconductor manufacturing [2]. In particular, new multidisciplinary approaches are increasingly developed and used to advance the capabilities of current SHM techniques.

Motivation of this study
A standard SHM technique for a given structure compares its damaged and healthy behaviors (by contrasting signals extracted from sensors embedded at specific points of the structure) against a database pre-trained by simulating/testing the behavior of the structure under different damage scenarios. Ideally, the change in the vibration spectra or stress-strain patterns can be related to damage induced in the structure, but it is equally possible that these deviations from a healthy pattern are caused by imperfect manufacturing processes, including uncertainty in material properties or misalignment of fibers inside the matrix (in the case of composite structures), an offset of an external loading applied to the structure during testing, etc. In the context of strain-based SHM, this article addresses the important effect of manufacturing/testing uncertainties on the reliability of damage predictions. To this end, a benchmark problem from the literature is used as a case study, along with finite element analysis and a design of experiments (DOE) method. Among the many existing DOE experimental designs (e.g., [11][12][13][14][15][16]), here we use the well-known full factorial design (FFD).
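The enumeration behind a two-level full factorial design can be sketched in a few lines of Python. This is a generic illustration only: the factor names A-D and the 0/+5 degree levels anticipate the assignments made in the case study below.

```python
from itertools import product

# Enumerate a two-level full factorial design (FFD) for four factors.
# Factor names A-D and the 0/+5 degree levels follow the case study's
# assignments; the code itself is a generic illustration.
factors = ["A", "B", "C", "D"]
levels = [0, 5]  # degrees of angular offset

# Every combination of levels across the four factors: 2**4 = 16 runs.
design = [dict(zip(factors, combo)) for combo in product(levels, repeat=len(factors))]

print(len(design))  # 16 runs per health state (healthy / damaged)
print(design[0])    # the baseline run: all factors at their nominal level
```

Running the same enumeration once for the healthy and once for the damaged model yields the 32 simulations used in this study.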

Case study description
The structure under investigation is a composite T-joint introduced in [17], where a strain-based structural health monitoring program, GNAISPIN (Global Neural network Algorithm for Sequential Processing of Internal sub Networks), was developed using MATLAB and NASTRAN-PATRAN. The T-joint structure, shown in Figure 1, consists of four major segments: the bulkhead, the hull, the overlaminates, and the filler section. The finite element model of the structure is assumed to be two-dimensional (2D), and strain patterns are considered identical through the thickness of the structure. The geometrical constraints and applied load are also shown in Figure 1. The left-hand constraint permits only rotation about the z-axis and prevents all other rotational and translational degrees of freedom. The right-hand constraint permits translation along the x-axis (horizontal direction) and rotation about the z-axis. The displacement constraints are positioned 120 mm away from the corresponding edges of the hull. The structure is subjected to a pull-off force of 5 kN. In [17], several delaminations were embedded at different locations of the structure, but in this study only a single delamination is considered, located between the hull and the left overlaminate. The strain distribution is then obtained for nodes along the bond-line (the top line of the hull between the right- and left-hand constraints), which are the nodes most affected by the presence of the embedded delamination.

Figure 1. Geometry of the T-joint considered in the case study [17]
Using ABAQUS software, two-dimensional orthotropic elements were used to mesh the surfaces of the bulkhead, hull, and overlaminates, whereas isotropic elements were used to model the filler section. The elastic properties of the hull, bulkhead, and overlaminates [17] correspond to 800 grams-per-square-metre (gsm) plain-weave E-glass fabric in a vinylester resin matrix (Dow Derakane 411-350). The properties of the filler correspond to chopped glass fibers in the same vinylester resin matrix, as summarized in Table 1. In order to verify the developed base ABAQUS model, strain distributions along the bond-line for the two cases of the healthy structure and the structure with an embedded delamination were compared to the corresponding distributions presented in [17]. Figures 2.a and 2.b show a good accordance between the current simulation model and the one presented in [17] using NASTRAN-PATRAN. The only significant difference between the two models is found at the middle of the T-joint, where the results in [17] show a significant strain drop compared to the ABAQUS simulation. Figure 3 also illustrates the 2D strain distribution obtained by the ABAQUS model for the healthy structure. Next, using the ABAQUS model for the DOE study, the fiber orientations in the bulkhead, hull, and overlaminate, as well as the pull-off loading offset, were considered as four main factors via a full factorial design, which resulted in sixteen runs for each of the health states (healthy and damaged structure). Two levels were considered for each factor: 0 or +5 degrees counterclockwise with respect to the x-axis (Figure 4). Table 2 shows the assignment of the considered factors and their corresponding levels.
Table 3. Full factorial design resulting in a total of 32 simulations (2^4 for the healthy structure and 2^4 for the damaged structure)
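Why a few degrees of fiber misalignment matters can be hinted at with classical lamination theory: rotating a ply changes its transformed in-plane stiffness. The sketch below uses placeholder elastic constants for a plain-weave E-glass/vinylester ply (assumed round numbers, not the Table 1 data) to estimate the stiffness perturbation caused by a 5-degree offset.

```python
import math

# Illustrative transformed lamina stiffness under a small fiber misalignment.
# Material constants are assumed placeholder values for a plain-weave
# E-glass/vinylester ply; the actual properties are listed in Table 1 of [17].
E1, E2, nu12, G12 = 20.0e9, 20.0e9, 0.15, 3.5e9  # Pa (assumed)

nu21 = nu12 * E2 / E1
denom = 1.0 - nu12 * nu21
Q11, Q22 = E1 / denom, E2 / denom
Q12, Q66 = nu12 * E2 / denom, G12

def qbar11(theta_deg):
    """Transformed reduced stiffness Q11 for a ply rotated by theta (degrees)."""
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    return Q11 * c**4 + 2.0 * (Q12 + 2.0 * Q66) * s**2 * c**2 + Q22 * s**4

# A 5-degree misalignment perturbs this stiffness term by under 1 percent,
# yet the article shows the resulting strain deviation can rival delamination.
change = (qbar11(5.0) - qbar11(0.0)) / qbar11(0.0)
print(f"relative change in Q11 for 5 deg misalignment: {change:.4%}")
```

The point of the sketch is qualitative: a seemingly small material-axis error propagates into the strain field that the SHM system monitors.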

Table 1. Elastic properties of the hull, bulkhead, overlaminates, and filler [17]
In order to illustrate the importance of the effect of uncertainty in fiber misalignment (e.g., introduced during manufacturing of the structure's components), one can readily compare the difference between the strain distribution obtained for a case containing, e.g., a 5° misalignment in the overlaminate (run #2 in Table 3) and that of the perfectly manufactured healthy case (run #1). A similar difference can be plotted between the case without any misalignment but in the presence of delamination (damage), corresponding to run #17, and the perfectly manufactured healthy case (run #1). These differences are shown in Figure 5. By comparing the strain distributions in Figures 5.a and 5.b, one can conclude that a 5-degree misalignment of fibers in the overlaminate (run #2) results in a deviation from the base model (run #1) that is significant compared to the deviation caused by the presence of delamination (run #17), emphasizing the importance of considering fiber misalignment in real SHM applications and database development. The next section is dedicated to a more detailed factorial analysis of the results, to obtain the relative effects of the four alignment factors A, B, C, and D as samples of uncertainty sources in practice.
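The comparison just described can be mimicked numerically. The sketch below uses entirely synthetic strain profiles (placeholder curves, not the article's ABAQUS results) purely to show how the two deviations from the base case are computed and compared.

```python
import numpy as np

# Sketch of the deviation comparison behind Figure 5, on synthetic strain
# profiles. The real profiles come from the ABAQUS bond-line output; the
# numbers below are placeholders, not results from the article.
x = np.linspace(0.0, 1.0, 200)                     # normalized bond-line coordinate
strain_base = 1e-3 * np.sin(np.pi * x)             # run 1: healthy, no misalignment
strain_misaligned = strain_base * 1.08             # run 2: overlaminate offset (synthetic)
strain_delaminated = strain_base + 5e-5 * np.exp(-((x - 0.3) / 0.05) ** 2)  # run 17 (synthetic)

dev_misalignment = np.abs(strain_misaligned - strain_base).max()
dev_delamination = np.abs(strain_delaminated - strain_base).max()

# The article's point: manufacturing error alone can deviate the response
# as much as, or more than, the damage signature itself.
print(dev_misalignment, dev_delamination)
```

With these synthetic curves the misalignment deviation exceeds the delamination deviation, mirroring the qualitative conclusion drawn from Figure 5.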

DOE effects analysis
Two different approaches are considered in the effects analysis: a point-to-point analysis and an integral analysis. In the point-to-point approach, the differences between the horizontal strain values at three locations along the bond-line (the first, middle, and last nodes in Figure 4) and those of the ideal case are considered as three output variables. The integral approach, on the other hand, evaluates the strain continuously along the bond-line, as if the number of considered points (sensors) tended to infinity. In effect, the strain values obtained from the FE analysis correspond to the strain data that would be extracted from sensors embedded in the T-joint. The integral analysis, for each given run, calculates the area under the strain distribution along the bond-line minus the corresponding area for the ideal case. Comparing the two approaches hence provides an opportunity to assess the impact of increasing the number of sensors on the performance of SHM in the presence of manufacturing errors (here, misalignments). For each approach, the most dominant factors are identified by comparing their relative percentage contributions to the output variables as well as the corresponding half-normal probability plots (see [16] for theoretical details). Subsequently, an ANOVA was performed to statistically determine the significance (F-value) of the key factors. Figure 4 shows the positions of the nodes assigned for the point-to-point analysis strategy. The first and last sensor points are located 50 mm away from the nearest constraint on the contact surface of the hull and overlaminate. The middle point is located below the pull-off load point. Table 4 shows the results of the FE runs based on the factor combinations introduced in Table 3.
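The two objective functions can be sketched as follows, again on placeholder data; the strain curves and the three sensor indices are illustrative assumptions, not the FE output.

```python
import numpy as np

# Two objective functions from the effects analysis, on synthetic data:
# (a) point-to-point strain differences at three sensor nodes, and
# (b) the integrated strain difference along the whole bond-line.
x = np.linspace(0.0, 1.0, 401)          # normalized bond-line coordinate
base = 1e-3 * np.sin(np.pi * x)         # ideal-case strain (placeholder curve)
run = base * 1.05                       # one perturbed run (placeholder)

# (a) Point-to-point: first, middle, and last sensor node (indices assumed).
sensor_idx = [0, len(x) // 2, len(x) - 1]
point_outputs = [run[i] - base[i] for i in sensor_idx]

# (b) Integral: area between the run's strain curve and the base curve
# (trapezoidal rule written out to stay independent of the NumPy version).
diff = np.abs(run - base)
dx = x[1] - x[0]
area = float(0.5 * dx * (diff[:-1] + diff[1:]).sum())

print(point_outputs, area)
```

Note how the discrete objective can miss the perturbation entirely when a sensor sits at a node of the difference curve (here the endpoints), whereas the integral objective aggregates it over the whole bond-line.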
As addressed before, the presented data for the first group of runs (healthy structure, runs 1 to 16) are the differences between the strain values of each run and run 1, while the corresponding data for the second group (damaged T-joint, runs 17 to 32) represent the differences between the strain values of each run and run 17. Table 5 presents the ensuing percentage contributions of the factors and their interactions at each node for the two cases of the healthy and delaminated T-joint. For the first node, which is close to the most rigid constraint on the left-hand side of the structure, the only important factors are the misalignment of fibers in the hull (factor C) and its interaction with the loading angle offset (CD). This can be explained by the type of constraints imposed on the structure, namely the free horizontal translation of the opposite constraint on the right side. Figure 6 shows the half-normal probability plot of the factor effects for the 1st node, confirming that C and CD are the distinctly dominant parameters affecting the strain response at this node. Logically, one would expect the mid-node response to be strongly influenced by any loading angle offset, as it produces a horizontal force component and magnifies the effect of the free-translation boundary condition at the neighboring constraint; accordingly, for the middle point response, the misalignment of fibers in the hull (C), the loading angle error (D), and their interaction (CD) are the most significant factors, as also shown by the corresponding half-normal probability plot in Figure 7. Finally, due to the short distance of the last (3rd) measuring node to the right constraint point and the strong influence of the large hull section beneath this measuring node, parameter C was found to be the most dominant factor, followed by D, CD, AC, AD, and ACD (Figure 8).
Table 6. Results of ANOVA for the 3rd node response, considering the identified factors from Figure 8, for the block of healthy runs
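The percentage contributions reported in Table 5 follow from standard two-level factorial contrasts. Below is a minimal sketch using a synthetic response in place of the Table 4 strain deviations; the built-in dominance of C, CD, and D is an assumption of the example, chosen to echo the pattern described above.

```python
import math
from itertools import product, combinations

# Percentage contribution of every main effect and interaction in a 2^4
# design, computed from two-level contrasts. The response is a synthetic
# stand-in for the bond-line strain deviations of Table 4: a strong C
# effect, a weaker D effect, and a C*D interaction.
factors = ["A", "B", "C", "D"]
runs = list(product([-1, +1], repeat=4))          # coded levels, 16 runs

y = [3.0 * c + 1.0 * d + 1.5 * c * d for (a, b, c, d) in runs]

# Sum of squares for each effect: SS = contrast**2 / N for a 2-level design.
ss = {}
for r in range(1, 5):
    for combo in combinations(range(4), r):
        name = "".join(factors[i] for i in combo)
        contrast = sum(yi * math.prod(run[i] for i in combo)
                       for yi, run in zip(y, runs))
        ss[name] = contrast ** 2 / len(runs)

total = sum(ss.values())
contribution = {k: 100.0 * v / total for k, v in ss.items()}

# Ranking by contribution mirrors the half-normal-plot reading: the few
# effects that stand apart from the near-zero cluster are the real ones.
top = sorted(contribution, key=contribution.get, reverse=True)[:3]
print(top)  # dominant terms for this synthetic response: C, CD, then D
```

The half-normal probability plots of Figures 6-8 present the same information graphically: ordered absolute effects plotted against half-normal quantiles, with the dominant effects falling off the straight line formed by the inert ones.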

Point-to-point analysis results
Next, based on the significant factors identified above for the 3rd node, an ANOVA (Table 6) was performed with the remaining insignificant effects pooled into the error term. As expected, the p-value for factor C is zero (to the reported precision) and the corresponding values for factors D and CD are 0.001, while the p-values for all other factors are greater than 0.001. Therefore, assuming a significance level of 1%, for the 3rd node response, much like for the 1st and middle nodes, factors C and D and their interaction CD can be reliably considered the most significant. Table 7 shows the ANOVA results for all three nodes when only these three factors were included.
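The pooled-error ANOVA described above can be sketched as follows. The response is again synthetic (a deterministic "noise" term stands in for the inert effects), so the F-values are purely illustrative of the mechanics: keep C, D, and CD in the model, pool the other twelve effects into the error, and form F as the ratio of each term's mean square to the error mean square.

```python
import math
from itertools import product, combinations

# Pooled-error ANOVA for a single-replicate 2^4 design, keeping only the
# dominant terms C, D, and CD in the model, as done for Tables 6 and 7.
# The response is synthetic: C, D, CD effects plus small deterministic noise.
factors = ["A", "B", "C", "D"]
runs = list(product([-1, +1], repeat=4))

y = [3.0 * c + 1.0 * d + 1.5 * c * d + 0.01 * math.sin(i + 1)
     for i, (a, b, c, d) in enumerate(runs)]

ss = {}
for r in range(1, 5):
    for combo in combinations(range(4), r):
        name = "".join(factors[i] for i in combo)
        contrast = sum(yi * math.prod(run[i] for i in combo)
                       for yi, run in zip(y, runs))
        ss[name] = contrast ** 2 / len(runs)

model_terms = ["C", "D", "CD"]
ss_error = sum(v for k, v in ss.items() if k not in model_terms)
df_error = len(ss) - len(model_terms)      # 15 effects minus 3 model terms
ms_error = ss_error / df_error

# Each model term has one degree of freedom, so F = SS_term / MS_error.
f_value = {t: ss[t] / ms_error for t in model_terms}
print({t: round(v, 1) for t, v in f_value.items()})
```

A term is then declared significant when its F-value exceeds the critical F for (1, df_error) degrees of freedom at the chosen significance level, which is equivalent to the p-value comparison made in the text.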
One interesting observation during the above analysis was that no significant deviation of the main results was found when the analysis was repeated for the block of runs with delamination (compare the corresponding values under each node in Table 5).
Table 7. Results of ANOVA analysis for factors C, D and CD (point-to-point analysis approach)
In the main-effect and interaction plots (Figure 9), for the first and last points, the lines for the interaction of the hull fiber misalignment and the loading angle offset cross, indicating a high interaction between these parameters at the corresponding node. This agrees well with the high F-value obtained by the ANOVA for CD at the first node in Table 7. For the middle node, the individual lines for C and D in the main-effect plots run in the same direction but with slightly different slopes. For the last (3rd) node, the main-effect plots for parameters C and D have slopes of opposing signs, suggesting that at this node the fiber misalignment angle and the loading angle offset influence the strain response in opposite directions. This again can be explained by the type of constraint imposed on the right side of the T-joint.

Integral analysis results
In this approach, the objective function for each run was taken as the area between the curve representing the strain distribution of the nodes lying on the bond-line and that of the base case. For the first group of runs (healthy structure, runs 1 to 16), the first run provides the base curve, whereas for the second group (embedded delamination, runs 17 to 32), the 17th run (i.e., delamination only, with no fiber misalignment or loading angle error) is considered the base. Table 8 lists the objective values for each run in this analysis, and Table 9 presents the resulting percentage contribution of each factor. Parameters C and CD again play the main role in the strain distribution, but for more accuracy one may also consider other factors such as A, D, AD, and AC.

In order to show the dominant factors graphically, the corresponding half-normal probability plot (Figure 10) was constructed; Figure 10 suggests considering AC as the last of the dominant factors. Next, a standard ANOVA was performed (Table 10), and the results suggested ignoring the effects of factors AC and D at a statistical significance level of α=0.01. Nevertheless, recalling the percentage contributions in Table 9, it is clear that the top two main factors are C and CD, as was the case in the point-to-point analysis. However, in the point-to-point analysis D was also highly significant at the selected nodes, whereas in the integral method it shows a much smaller overall contribution. This means that the number and locations of sensors used during SHM can vary the sensitivity of the prediction results to particular noise/uncertainty factors, such as D (the loading angle offset). Figure 11 illustrates the main-effect and interaction plots for the factors A, C, and D. From Figure 11.a, unlike in the point-to-point analysis (Figure 9), the slope of every main factor, including D, is positive in the current analysis. This indicates that increasing the magnitude of each noise factor also increases the deviation of the structure's overall response from the base model. The interaction plot for C and D in Figure 11.b confirms an overall high interaction of these two main factors, which is interesting given that, according to the ANOVA in Table 10, D alone was not significant.
Table 10. Results of ANOVA analysis based on the dominant factors in Figure 10 for the integral approach
Finally, similar to the point-to-point analysis, comparing the percentage contribution values for the two blocks of runs in Table 9 (healthy vs. damaged structure), the delamination appears to have no major interaction with the other four uncertainty factors. To statistically test this conclusion, a 2^5 full factorial design was performed, considering delamination as a new fifth factor, E.
Ignoring the third-order interactions and pooling them into the error term, the ANOVA results in Table 11 were obtained. It is clear that factor E (damage) itself has a significant contribution, but none of its interaction terms with the noise factors do.
Table 11. Results of ANOVA analysis considering delamination as the 5th factor for the integral approach
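The logic of this 2^5 check can be sketched numerically. In the synthetic response below, the damage factor E shifts the output purely additively (an assumption built into the example), so its main-effect sum of squares is large while every sum of squares for an E interaction vanishes, mirroring the pattern reported in Table 11.

```python
import math
from itertools import product, combinations

# Sketch of the 2^5 check: delamination enters as a fifth factor E, and the
# question is whether any E-by-noise-factor interaction carries real weight.
# The response is synthetic: E acts additively and does not interact.
factors = ["A", "B", "C", "D", "E"]
runs = list(product([-1, +1], repeat=5))           # 32 runs, as in Table 3

y = [3.0 * c + 1.0 * d + 1.5 * c * d + 4.0 * e    # E shifts, but additively
     for (a, b, c, d, e) in runs]

def sum_sq(combo):
    """Sum of squares of the effect indexed by `combo`: contrast**2 / N."""
    contrast = sum(yi * math.prod(run[i] for i in combo)
                   for yi, run in zip(y, runs))
    return contrast ** 2 / len(runs)

ss_E = sum_sq((4,))                                # main effect of E (damage)
ss_E_interactions = sum(sum_sq(c) for r in (2, 3)
                        for c in combinations(range(5), r) if 4 in c)

print(ss_E, ss_E_interactions)   # large main effect, zero interaction SS
```

In the real analysis the interaction sums of squares are of course not exactly zero, but the ANOVA in Table 11 shows them to be statistically indistinguishable from the error term.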

Conclusions
Two different approaches, a point-to-point analysis and an integral analysis, were considered in a case study on the potential effect of uncertainty factors on SHM predictability in composite structures. The point-to-point (discrete) analysis is closer to real applications, where the number of sensors is normally limited and SHM investigators can only rely on the data extracted at specific sensor locations. The integral approach, on the other hand, calculates the area under a continuous strain distribution and hence simulates an ideal situation in which a very large number of sensors is embedded inside the structure. The comparison of the two approaches showed the impact of increasing the number of strain measurement points on the behavior of the prediction model and the associated statistical results. Namely, for all sensor positions considered in the point-to-point (discrete) analysis, the main factors were the misalignment of fibers in the hull and the loading angle offset, whereas for the integral (continuous) approach, the aggregation of smaller effects over the bond-line increased the significance of other parameters, such as the overlaminate misalignment angle and its interactions with other factors. However, the top contributing factors remained the same between the two analyses, indicating that increasing the number of sensors does not eliminate the effects of fabrication noise such as fiber misalignment and loading angle offset. Another conclusion from this case study was that, statistically, there was no sign of significant deviation in the contribution patterns of the factors between the healthy and damaged structures. This suggests that different sensor positioning scenarios may change the sensitivity of the response to noise factors, but that this sensitivity holds regardless of the absence or presence of delamination; in other words, the relative importance of the studied noise factors is nearly identical in the healthy and damaged structures.
Finally, the results suggested that the absolute effect of individual manufacturing uncertainty factors on the deviation of the structure's response, relative to the response of the healthy case, can be as high as that caused by the presence of delamination itself in the absence of misalignment errors. Hence, a basic SHM damage prediction system operating in the presence of pre-existing manufacturing/testing errors may lead to wrong decisions or false alarms. A remedy to this problem is the use of new stochastic SHM tools.