Analysis of structures will in general involve large and complex numerical models, which require extensive computational effort. Such models are frequently referred to as digital twins. The analysis becomes particularly cumbersome when a large number of response calculations must be performed repeatedly, as in Monte Carlo simulation. One way of avoiding this is to introduce simplified numerical models, which are no longer twins but rather more distant numerical relatives. As an example of such a simplified numerical representation, a so-called response surface model can be applied in order to overcome the excessive computational effort. Such models are also sometimes referred to as meta-models or cyber-physical models. One possible approach is to use a response surface model based on first- or second-order polynomials as approximating functions, with the function parameters being determined by multivariate regression analysis. In this chapter, various types of approximate models are first discussed in connection with a simplistic example. The application of response surface techniques is subsequently illustrated for a quite complex physics-based structural model of an offshore jacket structure in combination with Monte Carlo simulation techniques.
- digital representation
- structural analysis
- Monte Carlo simulation
- response surface techniques
- structural integrity management
Analysis of structures will in general involve large and complex numerical models, both with respect to the loading and the structure itself. This typically implies extensive computational effort. For cases where a large number of load and response calculations are performed repeatedly, such as in Monte Carlo simulation, this becomes particularly cumbersome. In the present chapter, the application of physics-based response surface methods for the purpose of reducing computation time is illustrated. In Section 2, various types of numerical approximations are first discussed in connection with a very simple structure. In Section 3, a complex offshore jacket structure is analyzed by means of response surface techniques for the loading and a physics-based "digital twin" of the structure.
2. Numerical representations of physical structures
2.1 The (near to) perfect twin based on multi-physics models
In the present text, the concept of a digital twin is understood in the following sense: A digital twin is a numerical model capable of reproducing the state and behavior of a unique real asset in real time (or faster), with this model also being able to represent the performance of the asset for new and artificially generated conditions (i.e., in connection with extrapolated predictions). As a primary candidate for a digital twin, a complete numerical model based on first principles in terms of multi-physics modeling seems to be most relevant. Such a model will also be able to represent non-linear features of the structural behavior of the asset.
As an example, a relatively simple structure with pronounced non-linear behavior is considered: Figure 1 shows a structure composed of two truss members. The structure is subjected to a vertical load R.
If the geometry is assumed to remain undeformed, the relationship between the vertical load R and the vertical displacement r is obtained as:
which for small angles can be approximated by
where α0 is the slope angle of both truss members.
However, by accounting for changing geometry due to the vertical load, a different relationship between the vertical load, R, and displacement, r, is obtained. By consideration of geometric compatibility, equilibrium conditions and a linear stress-strain relationship, the expression for the load-displacement curve can then be derived as:
where h is the height of the truss and l is half the horizontal span length. E is the modulus of elasticity for the relevant material and A is the cross-section area of both truss members. The model uncertainty associated with this relationship is presently considered to be negligible, such that it can be assumed to represent a “digital twin” of the structure (implying that, e.g., buckling of the members themselves is not relevant due to their non-slender characteristics).
By inserting α0 = h/l (also assuming small angles), this can be written as
Both of the R-r (i.e., load-displacement) relationships according to Eqs. (3) and (5) are shown in Figure 2 for a slope angle of α0 = π/15. It is seen that they can barely be distinguished from one another. (This implies that the third-order representation can also be regarded as a digital twin, although not of the identical, i.e., one-egg, kind.)
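To make the comparison concrete, the two load-displacement relations can be evaluated numerically. The sketch below follows the standard von Mises two-bar truss derivation (linear-elastic bars, equilibrium in the deformed configuration); since Eqs. (3) and (5) are not reproduced in the text, the exact normalization is an assumption, and the example is illustrative only.

```python
import numpy as np

def truss_load_exact(r, h, l, EA=1.0):
    """Vertical load R for the two-bar truss with full geometric nonlinearity.

    Assumed form (standard von Mises truss derivation, not taken verbatim
    from the chapter): compatibility and equilibrium in the deformed state.
    """
    L0 = np.sqrt(l**2 + h**2)           # undeformed bar length
    L = np.sqrt(l**2 + (h - r)**2)      # deformed bar length
    eps = (L - L0) / L0                 # bar strain (negative = compression)
    # Two bars; vertical equilibrium of the apex node
    return -2.0 * EA * eps * (h - r) / L

def truss_load_cubic(r, h, l, EA=1.0):
    """Small-angle third-order approximation: R ~ EA*a0^3*x(1-x)(2-x), x = r/h."""
    a0 = h / l
    x = r / h
    return EA * a0**3 * x * (1.0 - x) * (2.0 - x)

h, l = 1.0, 1.0 / np.tan(np.pi / 15)    # slope angle of pi/15 as in Figure 2
r = np.linspace(0.0, 2.0 * h, 401)
R_exact = truss_load_exact(r, h, l)
R_cubic = truss_load_cubic(r, h, l)
# Both curves vanish at r = 0, at snap-through (r = h) and at r = 2h
```

For this slope angle the two curves stay within a few percent of each other over the plotted range, consistent with the observation that they can barely be distinguished in Figure 2.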
Both of the curves are characterized by a highly non-linear behavior, where a so-called snap-through occurs when the two truss members are displaced to a completely horizontal position. After snap-through has occurred, a second equilibrium configuration is reached, for which a further increase of the vertical load can take place. However, this second equilibrium configuration will in most cases represent a "failed condition," in the sense that the structure survives but an unwanted large displacement has taken place (which would, e.g., be the case if the structure represents a load-carrying roof structure or an arch system).
Up to around one quarter of the maximum load point, the load-displacement curve is quite close to being linear. Accordingly, if only empirical load-displacement data points for this interval are available, this would typically lead to the assumption that the structural behavior is linear for any load level (unless the physical behavior of the system is taken into consideration). Having available data sets for many different structures of the same type, it is very unlikely that any of the sets contain information about the post-snap interval if all the structures are still in operation.
For structures of the present type, in cases where the stress-strain behavior of the material is also nonlinear, numerical solution methods will generally be required in order to compute the load-displacement curve. This increases the computation time significantly, which becomes particularly cumbersome in connection with Monte Carlo simulation procedures, where a large number of repeated calculations is typically required (e.g., of the order of millions and upwards). In any case, simplified but "adequate" models need to be introduced.
2.2 More distant numerical relatives based on different kinds of simplified physics-based models
One way of reducing the computation time for even larger and more complex numerical models is to introduce a simplified representation, which is no longer a twin but some more distant numerical relative. As an example, a so-called response surface model can be applied in order to overcome excessive computational efforts. Such models are frequently also referred to as "meta-models" or "cyber-physical" models. One possible approach is to use a response surface representation based, e.g., on first- or second-order polynomials as approximating functions. The parameters of these functions and their weighting coefficients are then determined, e.g., by minimization of the mean square error. The "control points" for the approximate model are established by applying the physics-based model at just these points (i.e., for given input parameter values). By a proper selection of control points, the prediction error over the entire range of structural displacement levels can be limited in magnitude.
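The control-point idea can be sketched as follows, with the normalized truss cubic standing in for an expensive physics-based solver (an assumption for illustration; in practice each control-point evaluation would be a full load or finite element analysis):

```python
import numpy as np

def physics_model(x):
    """Stand-in for an expensive physics-based solver (here the normalized
    truss cubic, x = r/h). In practice each call would be a full analysis."""
    return x * (1.0 - x) * (2.0 - x)

# Control points: a few carefully chosen evaluations of the expensive model
x_ctrl = np.array([0.0, 0.4, 1.0])
y_ctrl = physics_model(x_ctrl)

# Quadratic response surface passing through the control points
coef = np.polyfit(x_ctrl, y_ctrl, deg=2)
surrogate = np.poly1d(coef)

# Alternative: minimum mean-square-error quadratic over a dense grid
x_grid = np.linspace(0.0, 1.0, 101)
coef_mse = np.polyfit(x_grid, physics_model(x_grid), deg=2)
surrogate_mse = np.poly1d(coef_mse)

# Worst-case error of the control-point surrogate on the interval [0, 1]
err = np.max(np.abs(surrogate(x_grid) - physics_model(x_grid)))
```

By construction, the MSE variant cannot have a larger mean square error on the grid than the control-point variant, although its pointwise error at the control points is generally nonzero.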
As examples, we consider approximation of the exact load-displacement relationship with a quadratic and also an alternative quadratic response surface model. For the former, the control points are selected as (0, 0); (0.5, 3/16) and (1.0, 0.0), where 3/16 represents the exact maximum value of the cubic function (but with the location of the maximum point shifted to an abscissa value of 0.5). For the latter, a minimum mean square error approximation within the interval 0.0–1.0 is applied. The first of these approximations is compared to the "exact physics-based model" in Figure 3.
The error associated with the second-order approximations over the range from r/h = 0 to 1 is seen to be acceptable, while for the (in the present context) less interesting range from 1 to 2 they are highly inaccurate and of little use.
2.3 Data-driven simplified models
A numerical representation of the load-displacement relationship based on a data-driven simplified model is next considered. First, it is assumed that 10 data points in the range from r/h = 0 to 0.15 are available, which is mainly in the weakly non-linear regime. These are, e.g., obtained during normal operation of the structure. A measurement noise with a standard deviation of 10% of the measured signal is also introduced. The extrapolated second order approximation (based on regression analysis) is shown in Figure 4a together with the data points themselves. It is seen that the maximum value of the load R is significantly underpredicted by this curve. As a second approximation, the 10 data points (including noise) are next taken to lie in the range from r/h = 0 to 0.2 (i.e., into the slightly more nonlinear regime). The corresponding second order approximation is shown in Figure 4b. The maximum load is somewhat closer to the true value, but still a significant underprediction is observed.
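A minimal sketch of this experiment is given below, assuming the normalized truss cubic as the underlying "true" curve and the 10% multiplicative measurement noise described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def physics_model(x):
    """Stand-in 'true' load-displacement curve (normalized truss cubic)."""
    return x * (1.0 - x) * (2.0 - x)

# 10 noisy operational measurements in the weakly nonlinear regime r/h <= 0.15
x_data = np.linspace(0.0, 0.15, 10)
y_data = physics_model(x_data) * (1.0 + 0.10 * rng.standard_normal(10))

# Second-order regression fit, extrapolated far beyond the data range
coef = np.polyfit(x_data, y_data, deg=2)
grid = np.linspace(0.0, 1.0, 201)
peak_pred = np.max(np.polyval(coef, grid))
peak_true = np.max(physics_model(grid))
# The extrapolated peak load will in general misestimate the true maximum,
# typically underpredicting it, as in Figure 4
```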
These results are intended to illustrate the limitations of extrapolations based on data-driven models unless measurement points are available in the region with "high nonlinearity." For structural systems, such data points are generally scarce, as they represent rare events that may even correspond to failure of the structure.
2.4 Comparison of failure probabilities calculated by application of the different numerical models
By introducing a structural failure criterion for the truss in addition to joint statistical models for the inherent random variables, the failure probability corresponding to a given reference period can be computed. Presently, the failure function is expressed in terms of the maximum allowable load (i.e., Rmax), and the only random variable is the external extreme environmental load (i.e., Rex) which follows a Gumbel distribution with a mean value of 0.9 Rmax and a coefficient of variation of 10% (i.e., a standard deviation of 0.09 Rmax). In the present section, a comparison is made between structural failure probabilities, which are obtained by application of the different structural representations that were considered above (Table 1).
| Numerical representation | Probability of failure |
| --- | --- |
| Exact (digital twin) | 0.0719 |
| Response surface (physics-based), quadratic | 0.0719 |
| Response surface (physics-based), MSE quadratic | 0.0671 |
| Data-driven, cubic regression, low loading | 1.0000 |
| Data-driven, cubic regression, intermediate loading | 0.9102 |

Table 1. Failure probabilities corresponding to different numerical representations.
Not unexpectedly, the accuracy of the physics-based representations is significantly higher than that of the data-driven models for the present example. While the cubic response surface almost corresponds to the twin representation, the data-driven model for the low loading regime could at best be referred to as a more distant relative (e.g., a half-brother or a cousin).
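The Monte Carlo machinery behind such failure probabilities can be sketched for the Gumbel load model of Section 2.4 (mean 0.9 Rmax, CoV 10%). Note that the values in Table 1 additionally depend on the respective capacity representations, which are not reproduced here, so this sketch only illustrates the sampling procedure, not the tabulated numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gumbel extreme-load model from the text: mean 0.9*Rmax, CoV 10%
mean, std = 0.9, 0.09                  # in units of Rmax
scale = std * np.sqrt(6.0) / np.pi     # Gumbel scale parameter
loc = mean - np.euler_gamma * scale    # Gumbel location parameter

n = 10**6
load = rng.gumbel(loc, scale, size=n)  # sampled extreme loads
pf_mc = np.mean(load > 1.0)            # P(Rex > Rmax), with Rmax normalized to 1

# Closed-form cross-check: 1 - F(1) for the Gumbel CDF
pf_exact = 1.0 - np.exp(-np.exp(-(1.0 - loc) / scale))
```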
3. Example of a more complex structural analysis by Monte Carlo simulation
In the following, an application of a physics-based digital twin model is illustrated for the analysis and the structural integrity management optimization of a specific jacket structure, also in combination with Monte Carlo simulation techniques. The loading is represented by a response surface with the basic environmental parameters as input. The control points are based on physics-based load models. The structural response is obtained by means of a numerical model, which is able to account for large deformations and plastic behavior. This implies that the load-displacement curve is characterized by a maximum value, which is followed by a rapid decline of load-carrying ability similar to the previous simplified example.
3.1 System modeling and reliability formulation
The failure of a structural system, e.g., an offshore jacket platform, is often defined as the total collapse of the structure. The collapse event can be modeled as a series system of several parallel subsystems as follows:
where n is the number of components in the system, N is the number of failure modes, and is the limit state function of component i for failure mode j. The system failure probability for systems like offshore jacket platforms can be accurately estimated by considering a single failure mode and expressing the system resistance R and the system load S in terms of base shear [3, 4]. The system resistance R is the ultimate-capacity base shear, which is a function of the system damage state matrix D. The system load S is the base shear load for a given environmental variable E. The probability of system overload failure for a given system damage state D is calculated as shown in Eq. (7).
The performance of structural components in the system deteriorates over time due to, e.g., fatigue damage or corrosion. The system damage state matrix D contains the (fatigue) damage state of each component at time t, i.e., , where is an indicator function that equals one if component i fails (i.e., ) and zero otherwise. Y is a vector of random variables that influence the fatigue damage (see Section 3.2). The total probability theorem is then utilized to calculate the probability of system failure due to both overload and fatigue failures as follows:
where is the system failure probability due to overload in the intact condition, is the fatigue failure probability for component i, is the conditional system failure probability due to overload after fatigue failure occurs at component i, and is the probability that fatigue failures occur at components i and j before the overload failure. Eq. (9) is often referred to as the annual probability of system failure in the context of structural integrity management, where is defined as the probability of failure at component i given survival up until year t. As a first approximation, the annual probability of system failure can be calculated by keeping only the first two terms:
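A minimal numerical sketch of this truncated total-probability expansion is given below; all probabilities are chosen hypothetically for illustration:

```python
import numpy as np

# First two terms of the truncated expansion: overload failure of the
# intact system plus overload failure after a single fatigue failure.
# All probability values below are hypothetical.
p_overload_intact = 5e-6                           # P(overload | intact)
p_fatigue = np.array([1e-3, 5e-4, 2e-4])           # annual P(F_i) per component
p_overload_given_D = np.array([5e-5, 3e-5, 1e-5])  # P(overload | component i failed)

p_sys = p_overload_intact + np.sum(p_fatigue * p_overload_given_D)
# The fatigue-induced contribution adds only a small correction here,
# since both factors in each product term are small
```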
3.2 Response surface
The system load L is a function of the environmental variable vector E. In this work, the wave height H and the wave period T are considered as the environmental random variables, i.e., E = [H, T]. The system load L is expressed as the base shear for a given combination of wave height and wave period. The response surface method with a quadratic polynomial function is utilized to estimate the system load as follows:
where a0, …, a5 are the coefficients to be determined. Probabilistic linear regression analysis is employed to obtain the coefficients and the predictive distribution of the system load L. The linear model is written as follows:
where L is an (m × 1) vector of "responses" (i.e., here the loads), β is a (p × 1) vector of regression coefficients (see Eq. (11)), and ε is an (m × 1) vector containing the error terms. The error is assumed Normal-distributed with zero expected value and variance . X is an (m × p) design matrix, which consists of the p individual terms (see Eq. (11)) evaluated for the m samples as follows:
The unconditional predictive distribution is given by the multivariate non-central Student's t-distribution, i.e., with parameters as follows:
and are matrices that contain the pre-computed load points from, e.g., finite element analyses. is a vector of load predictions from the regression analysis for given , calculated, e.g., from samples of the wave height and wave period. The predictive distribution of the load, , can be seen as a measure of the model uncertainty associated with the response surface.
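A least-squares sketch of the quadratic response surface in Eq. (11) is given below. The synthetic "pre-computed load points" and the coefficient vector are assumptions for illustration, and the sketch stops at the point prediction and the noise-variance estimate rather than the full Student's t predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def design(H, T):
    """Quadratic design-matrix rows: [1, H, T, H^2, T^2, H*T] (cf. Eq. (11))."""
    return np.column_stack([np.ones_like(H), H, T, H**2, T**2, H * T])

# Hypothetical pre-computed load points (in practice: wave-load/FE analyses)
H = rng.uniform(2.0, 25.0, 248)
T = rng.uniform(6.0, 22.0, 248)
true_beta = np.array([1.0, 0.5, 0.2, 0.08, 0.01, 0.03])   # assumed, demo only
L = design(H, T) @ true_beta + rng.normal(0.0, 0.5, 248)  # noisy "base shear"

X = design(H, T)
beta_hat, _, _, _ = np.linalg.lstsq(X, L, rcond=None)
# Unbiased residual-variance estimate (m - p degrees of freedom)
sigma2_hat = np.sum((L - X @ beta_hat) ** 2) / (len(L) - X.shape[1])

# Point prediction at a given sea state (e.g., H = 24.3 m, T = 20.8 s)
L_pred = design(np.array([24.3]), np.array([20.8])) @ beta_hat
```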
where hs is the significant wave height. Probabilistic models for the wave period are less studied than those for the wave height. In the present work, the wave period is assumed to follow a Lognormal distribution conditional on the wave height, with parameters defined as follows:
where b1, b2, b3 and c1, c2, c3 are the coefficients to be determined. Eqs. (19) and (20) ensure that the wave period depends on the wave height, in order to avoid drawing unrealistic combinations of wave height and period (e.g., very large wave heights with very small wave periods).
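A sketch of this conditional sampling is given below. The functional forms assumed for the Lognormal parameters (a power law in the wave height for the mean and an exponential decay for the standard deviation) follow a commonly used metocean parameterization; since the chapter's Eqs. (19) and (20) are not reproduced here, these forms, as well as the wave-height model, are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Coefficient values as in the case study of Section 3.5
b = np.array([1.322, 0.8, 0.242])
c = np.array([0.005, 0.09374, 0.32])

def lognormal_T_params(h):
    """Conditional Lognormal parameters of the wave period given wave height h.

    Assumed forms: mu = b1 + b2*h**b3 and sigma = c1 + c2*exp(-c3*h),
    a common metocean parameterization (illustrative, not Eqs. (19)-(20)).
    """
    mu = b[0] + b[1] * h ** b[2]
    sigma = c[0] + c[1] * np.exp(-c[2] * h)
    return mu, sigma

h = rng.weibull(1.5, 10_000) * 3.0   # hypothetical wave-height samples [m]
mu, sigma = lognormal_T_params(h)
T = rng.lognormal(mu, sigma)         # conditional wave-period samples [s]
# Larger wave heights now come with systematically longer periods
```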
3.3 Structural fatigue and reliability updating with monitoring and inspection information
Fatigue failure occurs if the crack size exceeds a critical crack size, and this can be modeled by means of a limit state function as follows:
where δc is the critical crack size and δ(Y,t) is the crack size at time t. Here, failure of the structure will occur if this failure function becomes negative. The crack growth is modeled using Paris’ Law as follows:
where m and C are the empirical model parameters, Ns is the number of stress cycles, and ΔK is the stress intensity factor range. For through-thickness cracks in an infinite panel, the solution to Eq. (22) can be written as follows:
where δ0 is the initial crack size, and ν is the annual cycle rate. BSIF and BΔS are the model uncertainties of the stress intensity factor and of the stress range calculation, respectively (see e.g., ). is the so-called equivalent stress range and is calculated as follows:
Y is a vector of random variables, i.e., , where γ and are the scale and shape parameters of the Weibull-distributed stress range. The probability of fatigue failure is calculated as follows:
is defined as the annual probability of fatigue failure given survival up until year t. Statistical dependencies between fatigue hotspots are modeled using correlation coefficients of the random variables in the Y vector. There are six correlation coefficients: . The coefficient represents the statistical dependence due to the same fabrication process. indicates the dependence due to common material characteristics. and describe the statistical dependence due to similar loading patterns. and depict the dependence due to common stress intensity factor and stress range calculations.
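The closed-form Paris-law crack growth for a through-thickness crack in an infinite panel (where ΔK scales with the stress range and √(πδ)) can be sketched as follows. The Paris exponent m = 3, the cycle rate, the initial crack size, and the stress range are illustrative assumptions; lnC is set to its expected value from Table 3:

```python
import numpy as np

def crack_size(t, a0, dS_eq, C, m=3.0, nu=5e6, B_sif=1.0, B_ds=1.0):
    """Closed-form Paris-law crack size after t years for a through-thickness
    crack in an infinite panel (Delta K = B * dS * sqrt(pi * a)).

    m = 3 and the cycle rate nu are illustrative assumptions; units follow
    Table 3 (N and mm). Valid for m != 2.
    """
    N = nu * t                                       # accumulated stress cycles
    geom = (B_sif * B_ds * dS_eq * np.sqrt(np.pi)) ** m
    base = a0 ** (1.0 - m / 2.0) + (1.0 - m / 2.0) * C * geom * N
    if base <= 0.0:                                  # crack has run away
        return np.inf
    return base ** (2.0 / (2.0 - m))

C = np.exp(-29.97)     # expected value of lnC from Table 3 (N and mm units)
a0 = 0.11              # assumed initial crack size [mm]
dS = 20.0              # assumed equivalent stress range [N/mm^2]
sizes = [crack_size(t, a0, dS, C) for t in (0, 5, 15, 25)]
# The crack size grows monotonically (and accelerates) with time
```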
Probabilistic models that are able to represent inspection activities in a proper way are also required. Information regarding structural performance can be obtained by carrying out inspection or structural monitoring. There are two outcomes of an inspection: no damage indication (I1) or damage indication (I2). The objective of inspection modeling is to obtain the marginal probability of indication (and no indication) followed by an update of the probability of system failure. By utilizing detection theory, the probability of an indication can be derived from the noise and signal distributions (see e.g., [9, 12]). Signal and noise characteristics are typically modeled by means of a Normal distribution (see e.g., [9, 13]). The updating of component fatigue failure probability is performed by utilizing Bayes’ law. Given no indication after an inspection, the probability of fatigue failure is updated as given in .
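A minimal sketch of the inspection updating by Bayes' law is given below, assuming a hypothetical prior crack-size distribution, a signal whose mean grows with the crack size (an assumed form), and the Normal noise model with a standard deviation of 0.5 calibrated to a probability of false indication of 0.01 (cf. Section 3.5.2):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical prior crack-size samples at the inspection time [mm]
crack = rng.lognormal(mean=-2.0, sigma=0.8, size=100_000)

# Detection model (assumed form): signal = crack size + Normal noise.
# Threshold = sigma_noise * z_0.99 gives P(false indication) ~ 0.01.
sigma_noise = 0.5
ths = sigma_noise * 2.3263
signal = crack + rng.normal(0.0, sigma_noise, crack.size)
indicated = signal > ths

# Bayes update: condition the crack-size distribution on "no indication"
crack_post = crack[~indicated]
p_fail_prior = np.mean(crack > 1.0)       # P(crack > critical size of 1 mm)
p_fail_post = np.mean(crack_post > 1.0)
# "No indication" shifts probability mass toward small cracks, so the
# updated fatigue failure probability drops below the prior value
```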
Furthermore, models for the statistical representation of structural monitoring methods are required. Structural health monitoring (SHM) systems can be installed to monitor specific structural properties such as, e.g., vibration or strain in the structural system. Information from an SHM system can be viewed as one of the possible realizations of the model uncertainty (see e.g., [15, 16]) associated with the measured property, such as, e.g., the stress ranges. In the present work, the SHM modeling proposed by  is employed, i.e., three different possible SHM outcomes are considered: The outcome Z1 corresponds to the case where monitoring indicates lower stress ranges than expected, implying that the monitored component has a high performance. Outcome Z3 indicates that the monitored component has a low performance due to higher-than-expected stress ranges. Outcome Z2 indicates that the monitored component performs as expected. Calculation of the updated probability of system failure is carried out as described in .
3.4 Quantification of the value of SIM strategies
The quantification of the value of SIM strategies builds upon the Bayesian pre-posterior decision analysis framework as formulated by Benjamin and Cornell . A SIM strategy decision problem can be modeled by a decision tree in pre-posterior form as shown in Figure 5. The information space S consists of the available information acquirement strategies i (e.g., inspection and monitoring). The outcome space O comprises the possible outcomes of a given information acquirement strategy i. The action space A consists of the possible actions that can be taken, such as, e.g., repair. The state space θ contains the possible states, such as, e.g., failure or survival.
The value of SIM strategies is quantified by utilizing the value of information and action (VOIA) analysis (see ). A VOIA analysis consists of a base and an enhancement scenario. The base scenario is defined as the scenario without any SHM/inspection and without risk-mitigating actions such as, e.g., repair. There are two states considered in this analysis: the system state (collapse/no collapse) and the component state (failure/no failure). Therefore, the expected cost C0 in the base scenario is the sum of the expected system E[CFS] and component E[CF,i] failure costs over the service life TSL:
where n is the number of structural components. Procedures for calculation of E[CFS] and E[CF,i] are described in . The failure probabilities, which are required in order to calculate these costs, are computed by means of Monte Carlo simulation.
Two different SIM strategies are analyzed as enhancement scenarios. The first strategy is to perform inspections and repair if required during the service life. The second strategy is to install the SHM system for 1 year at one component and to perform inspections and repair if required. For both strategies, a repair action is performed if an inspection indicates damage. The expected total cost of the first strategy is calculated as the sum of the expected inspection E[Ci], repair E[CR,i], system failure E[CFS], and fatigue failure E[CF,i] costs over the service life as follows:
where NIC is the number of inspected components. E[CFS] and E[CF,i] in Eq. (27) are calculated by considering the updated probabilities of system and fatigue failure, respectively. E[CR,i] is the expected repair cost over the service life with NIC repaired components.
The expected total cost for the second strategy is calculated as follows:
where is the expected SHM cost. Further details are given in . The VOIA is then calculated as follows:
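The VOIA bookkeeping of Eqs. (26)–(29) can be sketched with hypothetical cost and probability figures; none of the numbers below are taken from the chapter's cost model:

```python
# Minimal VOIA bookkeeping sketch (all figures hypothetical).
# Base scenario: expected system + component failure costs only (Eq. (26));
# enhancement: add inspection and repair costs, but with updated (lower)
# failure probabilities after the inspections (Eq. (27)).

C_FS, C_F = 1e8, 1e6           # cost of system failure / one component failure
C_I, C_R = 5e4, 2e5            # inspection and repair cost per component

# Expected cost, base scenario (10 critical components, no inspections)
p_sys_base, p_comp_base = 2.0e-3, 5.0e-2   # lifetime probabilities (assumed)
E_C0 = p_sys_base * C_FS + 10 * p_comp_base * C_F

# Expected cost, inspection strategy: 3 inspected components, updated risks
p_sys_upd, p_comp_upd, p_repair = 0.8e-3, 2.0e-2, 0.3
E_C1 = (3 * C_I + 3 * p_repair * C_R
        + p_sys_upd * C_FS + 10 * p_comp_upd * C_F)

VOIA = E_C0 - E_C1             # positive VOIA: the strategy pays off
```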
3.5 Case study
A typical deepwater offshore jacket platform with 25 years of service life, located at 190 m water depth, is considered in the present work (see Figure 6a). The jacket platform has 200 components, and each component is subjected to fatigue deterioration. In this study, each component is assumed to have exactly one hotspot, for which a through-thickness crack will result in fatigue failure. The incoming wave direction is taken as 135°, and the 100-year significant wave height HS,100y equals 24.3 m.
System failure is defined as the collapse of the jacket platform due to overload and fatigue deterioration. In this example, it is assumed that only a single component failure is possible before the overload failure, and the probability of system failure is calculated by Eq. (10). The resistance R is the ultimate base shear for a given damage state matrix D and is calculated by performing pushover analysis with the software USFOS . The system load L is approximated by utilizing the response surface analysis outlined in Section 3.2 with 248 pre-computed load points. The samples of L are drawn from the predictive distribution for given H and T. The coefficients in Eqs. (19) and (20) are assumed to have the following values (based on a typical North Sea environment): b = [1.322; 0.8; 0.242] and c = [0.005; 0.09374; 0.32]. The conditional probability of system collapse (see Eq. (7)) is calculated by utilizing Monte Carlo simulation with 10^6 samples. The response surface of the system load L is shown in Figure 6b with a coefficient of determination of R2 = 0.9905. Figure 6c shows the predictive distribution of L for T = 20.8 s.
The system failure probability in the intact condition is 5E−6. The conditional system failure probabilities for different damaged components and their associated ultimate base shears are given in Table 2. The components in Table 2 are located on the jacket's legs.
3.5.1 Fatigue model
All components are subjected to fatigue deterioration over the service life, with one critical hotspot for each component. The contributions from the fatigue failure probabilities to the system failure are weighted with respect to the conditional system failure probabilities. Due to the high number of structural components, only the 10 most critical components with the highest conditional system failure probabilities (see Table 2) are presently considered. Fatigue failure contributions from other components are considerably smaller and can hence be neglected. The fatigue failure probability at time t is calculated by application of Monte Carlo simulation. The probability is calculated for a period of 1 year, given survival until year t. All fatigue hotspots are modeled with the same probabilistic models, which are shown in Table 3. are assumed fully correlated between components following . The other random variables are assumed to have a correlation coefficient of 0.8 .
| Damaged component | Ultimate base shear (N) | P(FS,O\|D) |
| --- | --- | --- |
| Variable | Dimension | Distribution | Expected value | St. deviation |
| --- | --- | --- | --- | --- |
| lnC | N and mm | Normal | −29.97 | 0.5095 |
| lnγ | N and mm | Normal | 2.1 | 0.22 |
3.5.2 Structural integrity management (SIM)
Two SIM strategies are considered: inspection and repair, and inspection with SHM and repair. For the first strategy, inspections are performed at 3 critical components 1 year before the annual system failure probability P(FS) is estimated to exceed the threshold Pth(FS) (i.e., a constant-threshold approach). The minimum system failure probability threshold during operation is set equal to , which corresponds to the target system failure probability recommended by JCSS  for structures with large consequences of failure and a large relative cost of safety measures.
In order to simplify the decision analysis, a repair action is performed only if damage is detected by an inspection. This decision rule is practical and VoI-optimal, see . Repaired components are assumed to behave as components with no damage indication. The probability of indication is derived from the noise and signal distributions. The noise is assumed to follow a Normal distribution with zero mean and a standard deviation of 0.5. The signal threshold ths is calibrated to a probability of false indication (PFI) of 0.01. The signal S is also Normal-distributed with the following parameters:
where δ(t) is the crack size at year t.
In the second SIM strategy, a SHM system for stress range monitoring is installed 1 year before the first inspection is performed, with a monitoring duration of 1 year (i.e., up to the time of the inspection itself). The SHM performance is modeled as proposed in  by utilizing the stress range model uncertainty . The two thresholds distinguishing the outcomes Z1 (low stress ranges), Z2 (stress ranges as designed), and Z3 (high stress ranges) are calibrated to target probabilities of P1T(Fi) = 1·10−4 and P2T(Fi) = 1·10−3, respectively. The target reliabilities are selected following  for structures with minor consequences of failure with normal and large relative costs of safety measures. The calibration of the time-dependent thresholds is illustrated in Figure 7. The measurement uncertainty U is assumed Normal-distributed with an expected value of 1.0 and a standard deviation of 0.05. A summary of the probabilistic models is shown in Table 4.
| Variable | Dimension | Distribution | Expected value | Std. deviation |
| --- | --- | --- | --- | --- |
The costs considered in this case study consist of inspection costs CI, SHM costs CSHM, repair costs CR, system failure costs CFS, and component failure costs CFi. The SHM costs are further divided into investment costs CInvSHM, installation costs CInstSHM, and operational costs CopSHM. The cost model used in this example is shown in Table 5, based on [23, 24].
The annual component and system failure probabilities for t = 1, …, 25 years are shown in Figure 8. The system failure probability for the intact condition is P(FS,O|D = 0) = 5E−6. The annual system failure probability at the end of the service life is 9.5E−5, which is less than the minimum operational threshold, i.e., no SIM implementations are required to achieve the minimum operational requirement. However, the decision-maker may wish to increase the structural reliability above the minimum requirement and thereby enhance the value of SIM. In this work, three different annual system failure probability thresholds are studied: 6E−5, 7E−5, and 8E−5.
For the inspection-only strategy (S = S1), inspections and repairs are performed at three components 1 year before the system failure probability threshold Pth(FS) is predicted to be reached. After each inspection, the probabilities of fatigue and system failure are updated. Figure 9a shows the annual system failure probability for the inspection-only strategy as a function of time for a specific threshold value. The first inspection time is at t = 8 years. Increasing the threshold means that the inspections are performed later during the service life. Because of this, the inspection frequency during the service life decreases with a higher threshold, in exchange for a higher annual system failure probability.
The second SIM strategy (S = S2) is based on the installation of a SHM system at one component to monitor stress ranges for 1 year before the first inspection. There are three possible outcomes of the monitoring, based on the component performance. Figure 9b shows the annual system failure probability with monitoring and inspections for a system failure probability threshold of 6E−5. Compared to Figure 9a, it is observed that the outcome of the monitoring can influence the future inspection schedule. The SHM installation time differs depending on the thresholds (see Table 6). A higher threshold means that the SHM system is installed closer to the end of the service life. With an increasing annual system failure probability threshold, the probability of obtaining the low-performance outcome (Z3) becomes higher.
| SHM outcome | 6.00E−05 (tSHM = 6 years) | 7.00E−05 (tSHM = 8 years) | 8.00E−05 (tSHM = 11 years) |
| --- | --- | --- | --- |
The values of information and action (VOIA) of the two SIM strategies are shown in Figure 10. It is observed that increasing the system failure probability threshold reduces the value of information and action. With a higher threshold, inspection and monitoring are performed later in the service life, which reduces the benefits due to the higher annual system failure probability during the remaining service life compared to a lower threshold. It is also observed that the VOIA of the SIM strategy with SHM, inspection, and repair (S2) is higher than that of the inspection-only strategy (S1) for all investigated system failure probability thresholds. This shows that information from the SHM system can enhance the value of the SIM, i.e., reduce the expected total cost. In this example, the cost of system failure dominates the expected total cost over the service life.
4. Summary and conclusions
In the present analysis, physics-based numerical models of the load, the structural behavior, and the integrity management have been utilized in combination with response surface techniques and Monte Carlo simulation. An application of a physics-based digital twin model has been illustrated for the structural and integrity management analysis of a specific jacket structure. The loading is represented by a response surface with the basic environmental parameters as input. The control points are based on physics-based load models. The structural response is obtained by means of a numerical model, which is able to account for large deformations and plastic behavior.
A framework has been developed to plan and optimize the structural integrity management (SIM) by utilizing the physics-based digital twin model. By extending the concept of the value of information to a value of information and action analysis, the value of inspection and monitoring information and of repair actions is quantified. A novel approach to SHM modeling introduced by Agusta and Thöns  has been employed in conjunction with inspection modeling based on a probabilistic representation of inspections. The optimal SIM strategy, leading to the least expected costs and structural risks, is associated with the lowest annual system failure probability threshold. It is further demonstrated that structural systems with a high reliability requirement will benefit more from a SHM system implementation.
It is believed that for the present type of analysis, which involves large structural deformations and structural failure behavior, data-driven models will not be adequate due to an insufficient amount of relevant data. Clearly, this belief is also based on the assumption that the model uncertainties associated with the physics-based numerical models can be adequately controlled. This can be achieved by collecting data from laboratory (destructive) testing and full-scale measurements including failure records. In this way, data-calibrated and physics-based numerical models can be developed, rather than relying on data-driven models based on conditions corresponding to normal operation of the structures.
The authors acknowledge the funding received from the Center for Oil and Gas-DTU/Danish Hydrocarbon Research and Technology Center (DHRTC). The authors are also grateful to Professor Jørgen Amdahl for his support regarding the Finite element analysis with USFOS.
Conflict of interest
There is no conflict of interest.