## 1. Introduction

One of the primary goals of structural engineers is to ensure proper levels of safety for the structures they design. This seemingly simple task is complicated by uncertainties associated with the materials from which the structure is built and the loads it must resist, as well as by inaccuracies in analysis and design. Structural reliability and probabilistic analysis/design are tools which can be employed to quantify these uncertainties and inaccuracies, and to produce designs and design procedures meeting acceptable levels of safety. Recent research in the area of structural reliability and probabilistic analysis has centred on the development of probability-based design procedures. These include load modelling, ultimate and service load performance, and evaluation of current levels of safety/reliability in design (Farid Uddim, 2000; Afolayan, 1999; Afolayan, 2003; Afolayan and Opeyemi, 2008; Opeyemi, 2009).

Deterministic methods are very subjective and are generally not based on a systematic assessment of reliability, especially when we consider their use in the entire structure. These methods can produce structures with some "over-designed" components and perhaps some "under-designed" components. The additional expense incurred in constructing the over-designed components probably does not contribute to the overall reliability of the structure, so this is not a very cost-effective way to produce reliable structures. In other words, it would be better to redirect some of the resources used to build the over-designed components toward strengthening the under-designed ones. Therefore, there is increasing interest in adopting reliability-based design methods in civil engineering. These methods are intended to quantify reliability, and thus may be used to develop balanced designs that are both more reliable and less expensive. Also, according to Coduto (2001), the methods can be used to better evaluate the various sources of failure and to use this information to develop design and construction methods that are both more reliable and more robust, that is, insensitive to variations in materials, construction techniques, and environment.

The reliability of an engineering system can be defined as its ability to fulfil its design purpose for some time period. The theory of probability provides the fundamental basis for measuring this ability. The reliability of a structural element can be viewed as the probability of its satisfactory performance, according to some performance function, under specific service and extreme conditions within a stated time period. In estimating this probability, system uncertainties are modelled using random variables with mean values, variances, and probability distribution functions. Many methods have been proposed for structural reliability assessment, such as the First-Order Second-Moment (FOSM) method, the Advanced Second-Moment (ASM) method, and computer-based Monte Carlo Simulation (MCS) (e.g., Ang and Tang, 1990; Ayyub and Haldar, 1984; White and Ayyub, 1985; Ayyub and McCuen, 1997), as reported by Ayyub and Patev (1998).
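The Monte Carlo approach mentioned above can be illustrated with a minimal sketch: sampling a resistance R and a load effect S and counting limit state violations. The normal statistics used below are illustrative assumptions, not values from the studies cited.

```python
import random


def mc_failure_probability(n=200_000, seed=1):
    """Crude Monte Carlo estimate of Pf = P(R < S) for independent,
    normally distributed resistance R and load effect S."""
    rng = random.Random(seed)
    mu_R, sd_R = 100.0, 10.0   # hypothetical resistance mean / std dev
    mu_S, sd_S = 60.0, 15.0    # hypothetical load-effect mean / std dev
    failures = 0
    for _ in range(n):
        R = rng.gauss(mu_R, sd_R)
        S = rng.gauss(mu_S, sd_S)
        if R - S < 0:          # safety margin violated
            failures += 1
    return failures / n
```

For these assumed statistics the exact answer is Φ(−β) with β = 40/√325 ≈ 2.22, i.e. P_f ≈ 0.013, which the simulation approaches as n grows.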

The concept of the First-Order Reliability Method (FORM) is a powerful tool for estimating the nominal probability of failure associated with uncertainties, and it is the method adopted for the reliability estimations in this text. The general problem to which FORM provides an approximate solution is as follows. The state of a system is a function of many variables, some of which are uncertain. These uncertain variables are random with joint distribution function $F_X(x)$, and the probability of failure is approximated as

$$P_f \approx \Phi(-\beta_R)$$

in which $\beta_R$ = the reliability or safety index and $\Phi(\cdot)$ = the standard normal distribution function (Melchers, 2002).

Mathematical models, be they deterministic or stochastic, are intended to mimic real world systems. In particular, they can be used to predict how systems will behave under specified conditions. In scientific work, we may be able to conduct experiments to see if model predictions agree with what actually happens in practice. But in many situations, experimentation is impossible. Even if experimentation is conceivable in principle, it may be impractical for ethical or financial reasons. In these circumstances, the model can only be tested less formally, for example by seeking expert opinion on the predictions of the model.

The examples, based on the author's research work, introduce a collection of stochastic models for the analysis of the carrying capacities of piles based on static and dynamic approaches, with special consideration of steel and precast concrete pile types in cohesive, cohesionless, intermediate (between sand and clay) and layered soils. Steel and precast concrete piles were analysed using static pile capacity equations in these various soils. Likewise, three dynamic formulae, namely Hiley, Janbu and Gates, were closely examined and analysed using the first-order reliability concept (Afolayan and Opeyemi, 2008; Opeyemi, 2009).

## 2. General methods and procedures for modelling structural elements

Structural reliability is concerned with the calculation and prediction of the probability of limit state violation for an engineered structural system at any stage during its life. In particular, the study of structural safety is concerned with violation of the ultimate or safety limit states for the structure. In probabilistic assessments any uncertainty about a variable is expressed in terms of its probability density function and taken into account explicitly. This is not the case in traditional measures of safety, such as the "factor of safety" or "load factor". These are "deterministic" measures, since the variables describing the structure, its strength and the applied loads are assumed to take on known (if conservative) values about which there is assumed to be no uncertainty.

The loads applied to a structure fluctuate with time and are of uncertain value at any one point in time. This is carried over directly to the load effects (or internal actions) S. Somewhat similarly, the structural resistance R will be a function of time (but not a fluctuating one) owing to deterioration and similar actions. Loads have a tendency to increase, and resistances to decrease, with time. It is usual also for the uncertainty in both these quantities to increase with time. This means that the probability density functions f_S( ) and f_R( ) become wider and flatter with time, and that the mean values of S and R also change with time. The safety limit state will be violated whenever, at any time t,

$$R(t) - S(t) \le 0$$

The probability that this occurs for any one load application (or load cycle) is the probability of limit state violation, or simply the probability of failure P_{f}.

### 2.1. Reliability analysis

For the basic structural reliability problem only one load effect S, resisted by one resistance R, is considered, each described by a known probability density function, f_S( ) and f_R( ) respectively. S is obtained from the applied loading Q through a structural analysis, and R and S must be expressed in the same units.

For convenience, but without loss of generality, only the safety of a structural element is considered and, as usual, that structural element will be considered to have failed if its resistance R is less than the stress resultant S acting on it. The probability of failure, P_f, of the structural element is then

$$P_f = P(R \le S) = P\bigl(G(R, S) \le 0\bigr)$$

where G( ) is termed the "limit state function" and the probability of failure is identical with the probability of limit state violation.

In general, R is a function of material properties and element or structure dimensions, while S is a function of applied loads Q, material densities and perhaps dimensions of the structure, each of which may be a random variable. Also, R and S may not be independent, such as when some loads act to oppose failure (e.g. overturning) or when the same basic variables (e.g. dimensions) affect both R and S.

*2.1.1. Modelling procedures*

The first step is to define the variables involved in the generalized reliability problem. The fundamental variables which define and characterize the behaviour and safety of a structure, termed "basic" variables, are usually the variables employed in conventional structural analysis and design. Examples are dimensions, densities or unit weights, materials, loads, material strengths, etc. It is very convenient to choose the basic variables such that they are independent, since dependence between basic variables usually adds complexity to a reliability analysis.

Probability distributions are then assigned to the basic variables, depending on the knowledge that is available. If it can be assumed that past observations and experience for similar structures can validly be used for the structure under consideration, the probability distributions might be inferred directly from such observed data. More generally, subjective information may be employed, or some combination of techniques may be required. Sometimes physical reasoning may be used to suggest an appropriate probability distribution.

The parameters of the distribution are then estimated from the data by one of the usual methods, viz. the method of moments, maximum likelihood, or order statistics.
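As a small illustration of the method of moments just mentioned, the sketch below fits a normal model to a sample by matching its first two sample moments (the data values are hypothetical):

```python
import math


def moments_fit(data):
    """Method-of-moments estimates of the mean and standard deviation,
    i.e. matching the first two sample moments of the data."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n  # second central moment
    return mean, math.sqrt(var)
```

For example, `moments_fit([2, 4, 4, 4, 5, 5, 7, 9])` gives a mean of 5.0 and a standard deviation of 2.0.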

Finally, when the model parameters have been selected, the model would be compared with the data. A graphical plot on appropriate probability paper is often very revealing, but analytical “goodness of fit” tests could be used also.

Once the basic variables and their probability distributions have been established, the next step is to replace the simple R – S form of the limit state function with a generalized version expressed directly in terms of the basic variables.

The vector X will be used to represent all the basic variables involved in the problem. Then the resistance will be expressed as R = G_R(X) and the loading or load effect as S = G_S(X). The limit state function G(R, S) can thus be generalized: when the functions G_R(X) and G_S(X) are used in G(R, S), the resulting limit state function will be written simply as G(X), where X is the vector of all relevant basic variables and G( ) is some function expressing the relationship between the limit state and the basic variables.

The limit state equation G(x) = 0 now defines the boundary between the satisfactory or 'safe' domain G > 0 and the unsatisfactory or 'unsafe' domain G ≤ 0 in n-dimensional basic variable space.

With the limit state function expressed as G(x), the generalization of the probability of failure in terms of R and S becomes

$$P_f = P\bigl(G(X) \le 0\bigr) = \int_{G(x) \le 0} f_X(x)\, dx \qquad (6)$$

One may sidestep the integration process completely by transforming f_X(x) in (6) to a multi-normal probability density function and using some of its remarkable properties to determine, approximately, the probability of failure: the so-called 'First-Order Second-Moment' methods.

Rather than using approximate (and numerical) methods to perform the integration required in the reliability integral, the probability density function f_X( ) in the integrand is simplified. In this case each variable is represented only by its first two moments, i.e. by its mean and standard deviation. This is known as the "second-moment" level of representation. A convenient interpretation of the second-moment representation is that each random variable is represented by the normal distribution, a continuous probability distribution that is described completely by its first two moments.

Because of their inherent simplicity, the so-called “Second-moment” methods have become very popular. Early works by Mayer (1926), Freudenthal (1956), Rzhanitzyn (1957) and Basler (1961) contained second moment concepts. Not until the late 1960s, however, was the time ripe for the ideas to gain a measure of acceptance, prompted by the work of Cornell (1969) as reported by Melchers (1999).

When the resistance R and the load effect S are each second-moment random variables (e.g. each having a normal distribution), the limit state equation is the "safety margin" Z = R – S and the probability of failure P_f is

$$P_f = P(Z \le 0) = \Phi(-\beta), \qquad \beta = \frac{\mu_Z}{\sigma_Z} = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}$$

where β is the (simple) 'safety' (or 'reliability') index and Φ( ) the standard normal distribution function.
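For this normal case, β and P_f can be computed directly from the first two moments; a minimal sketch (the numerical values used in the usage note are illustrative assumptions):

```python
import math


def beta_and_pf(mu_R, sd_R, mu_S, sd_S):
    """Safety index beta = mu_Z / sigma_Z for the safety margin
    Z = R - S with R, S independent normal variables, and
    Pf = Phi(-beta) evaluated via the complementary error function."""
    beta = (mu_R - mu_S) / math.hypot(sd_R, sd_S)  # sqrt(sd_R^2 + sd_S^2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))    # standard normal tail
    return beta, pf
```

With the illustrative values mu_R = 100, sd_R = 10, mu_S = 60, sd_S = 15, this gives β ≈ 2.22 and P_f ≈ 0.013.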

*2.1.2. First Order Reliability Method (FORM)*

FORM has been designed for the approximate computation of general probability integrals over given domains with locally smooth boundaries, especially the probability integrals occurring in structural reliability. Similar integrals are also found in many other areas, for example in hydrology, mathematical statistics, control theory, classical hardware reliability and econometrics, which are areas where FORM has already been applied.

The concept of FORM is, essentially, based on approximating the limit state surface G(x) = 0 by its tangent hyperplane at the point of the (transformed, standard normal) basic variable space closest to the origin, the so-called design point. The reliability index β is the distance from the origin to this point, and the failure probability is approximated as P_f ≈ Φ(−β).
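The iterative search for the design point can be sketched minimally as below, assuming the basic variables have already been transformed to independent standard normal variables (a Hasofer-Lind / Rackwitz-Fiessler style update with a numerical gradient; the example limit state in the usage note is hypothetical):

```python
import math


def form_beta(g, x0, tol=1e-6, max_iter=100):
    """HL-RF iteration for the reliability index beta, assuming the
    variables of the limit state function g are standard normal.
    g: callable on a list of variables; x0: starting point."""
    n = len(x0)
    x = list(x0)

    def grad(x, h=1e-6):
        # forward-difference gradient of g at x
        g0 = g(x)
        out = []
        for i in range(n):
            xp = list(x)
            xp[i] += h
            out.append((g(xp) - g0) / h)
        return out

    for _ in range(max_iter):
        gx, dg = g(x), grad(x)
        norm2 = sum(d * d for d in dg)
        # HL-RF update: project onto the linearized limit state surface
        lam = (sum(d * xi for d, xi in zip(dg, x)) - gx) / norm2
        x_new = [lam * d for d in dg]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            x = x_new
            break
        x = x_new
    return math.sqrt(sum(xi * xi for xi in x))
```

For the linear limit state g(u) = 3 + u1 + u2 this converges to β = 3/√2 ≈ 2.12, matching the exact distance from the origin to the plane g = 0.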

## 3. Example applications

Some examples are hereby introduced to show particular cases of stochastic modelling as applied to the analysis of the carrying capacities of piles based on static and dynamic approaches, with special consideration of steel and precast concrete pile types in cohesive, cohesionless, intermediate (between sand and clay) and layered soils, taken from the author's experience in research in the structural and geotechnical engineering field. Steel and precast concrete piles were analysed using static pile capacity equations in these various soils. Likewise, three dynamic formulae (namely Hiley, Janbu and Gates) were closely examined and analysed using the first-order reliability concept.

Pile capacity determination is very difficult. A large number of different equations are in use, and seldom will any two give the same computed capacity. Organizations which have been using a particular equation tend to stick to it, especially when a successful database has been established. It is for this reason that a number of what are believed to be the most widely used (or currently accepted) equations are included in most literature.

Also, the technical literature provides very little information on the structural aspects of pile foundation design, in sharp contrast to the mountains of information on the geotechnical aspects. Building codes present design criteria, but these often are inconsistent with criteria for the superstructure, and sometimes are incomplete or ambiguous. In many ways this is an orphan topic that neither structural engineers nor geotechnical engineers have claimed as their own.

### 3.1. Static pile capacity of concrete piling in cohesive soils

The functional relationship between the allowable design load and the allowable pile capacity can be expressed as follows: G(X) = Allowable Design Load – Allowable Pile Capacity,

so that the limit state function, equation (9), follows in terms of the basic variables, in which *f*_{cu} = characteristic strength of concrete, *D*_{1} = pile diameter, *D*_{2} = steel diameter, *f*_{y} = characteristic strength of steel, *S*_{u} = cohesion, *L*_{b} = pile length, α = adhesion factor, and SF = factor of safety.
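The geotechnical side of such a capacity check can be sketched with the standard α-method: shaft adhesion α S_u over the shaft surface plus end bearing 9 S_u over the tip area, reduced by the factor of safety. The function below is an illustrative assumption, not the author's exact equation (9), and the numbers in the usage note are hypothetical.

```python
import math


def allowable_capacity_cohesive(D, L_b, S_u, alpha, SF):
    """Allowable pile capacity in cohesive soil by the alpha-method:
    Q_all = (alpha * S_u * A_shaft + 9 * S_u * A_tip) / SF.
    D: pile diameter, L_b: embedded length, S_u: undrained cohesion."""
    A_shaft = math.pi * D * L_b        # shaft surface area
    A_tip = math.pi * D ** 2 / 4.0     # tip cross-sectional area
    Q_ult = alpha * S_u * A_shaft + 9.0 * S_u * A_tip
    return Q_ult / SF
```

Under this sketch the predicted capacity grows with pile length, e.g. `allowable_capacity_cohesive(0.4, 20.0, 50.0, 0.9, 2.5)` exceeds the value for an otherwise identical 10 m pile.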

For concrete in cohesive soils, the safety level associated with piling capacity shows consistent conservatism. At the same time, the safety level decreases with increasing pile length (Fig. 3) and with the undrained shear strength of the soil (Fig. 2). The decrease, however, depends on the diameter of the pile. Also, it can be seen in Fig. 1 that the safety level increases with the diameter of the pile, though it decreases with pile length. At any rate, the safety level is generally high over the whole range of pile lengths, implying that the estimated pile capacity is highly conservative.

### 3.2. Static pile capacity of concrete piling in cohesionless soils

Similar to the functional relationship between the allowable design load and the allowable pile capacity expressed for cohesive soils, we also have: G(x) = Allowable Design Load – Allowable Pile Capacity

Equation (10) then follows as the expression for assessing the performance of concrete piling in cohesionless soils. The additional variables in equation (10), not present in equation (9), include the unit weight of the soil and the angle of internal friction.

The predicted capacity in cohesionless soils leads to a more rapid degeneration of the safety level than in cohesive soils. For instance, it is noted that when piling length is between 10 m and 20 m, there is some margin of safety. At a length of 30 m, the safety level has reduced to zero, implying a total violation of the predicted capacity. This can be seen in Fig. 5. The unit weight of the soil plays a significant role in capacity prediction: the denser the soil, the lower the safety level, no matter the diameter of the pile (see Fig. 6). A similar trend is noted when the angle of internal friction increases (Fig. 7).

### 3.3. Static pile capacity of steel piling in cohesive soils

G(x) = Allowable Design Load – Allowable Pile Capacity

The implied safety level associated with the piling capacity of steel in cohesive soils shows consistent conservatism, although it degenerates with cohesion, the adhesion factor and the area of shaft, as shown in Figs. 8, 9 and 10 respectively. This decrease, however, depends on the area of the pile. While the safety level of the piling capacity follows the same trend with the area of shaft of the pile, it degenerates less rapidly and remains more consistently conservative. It is also noted that the higher the adhesion factor and the cohesion, the less conservative the static pile capacity.

### 3.4. Static pile capacity of steel piling in cohesionless soils

G(x) = Allowable Design Load – Allowable Pile Capacity, which yields equation (13).

Piling reliability/safety level decreases as pile length, unit weight of soil and angle of internal friction increase (Figs. 11, 12 and 13), but the rate of decrease is more rapid for precast concrete than for steel. For concrete piling, pile lengths greater than 30 m will result in catastrophe, while steel piles as long as economy will permit are admissible.

### 3.5. Dynamic pile capacity using Hiley, Janbu and Gates formulae

Estimating the ultimate capacity of a pile while it is being driven into the ground at the site has resulted in numerous equations being presented to the engineering profession. Unfortunately, none of the equations is consistently reliable, or reliable over an extended range of pile capacity. Because of this, the best dynamic means of predicting pile capacity consists of driving a pile, recording the driving history, and load testing the pile. It would be reasonable to assume that other piles with a similar driving history at that site would develop approximately the same load capacity.

Dynamic formulae have been widely used to predict pile capacity. Some means is needed in the field to determine when a pile has reached a satisfactory bearing value other than by simply driving it to some predetermined depth. Driving the pile to a predetermined depth may or may not obtain the required bearing value because of normal soil variations both laterally and vertically.

*3.5.1. Dynamic pile capacity using Hiley formula*

G(x) = Allowable Design Load – Allowable Pile Capacity
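The Hiley formula itself is not reproduced above; as a reference sketch, its commonly quoted textbook form (e.g. in Bowles) is

```latex
Q_u \;=\; \frac{e_h\, W_r\, h}{s + \tfrac{1}{2}\,(k_1 + k_2 + k_3)}
      \cdot \frac{W_r + n^2 W_p}{W_r + W_p}
```

in which e_h = hammer efficiency, W_r = ram weight, h = height of fall, s = point penetration per blow, k_1, k_2, k_3 = elastic compressions of the cap, pile and soil, n = coefficient of restitution, and W_p = pile weight. The symbols follow the common textbook statement and may differ from the author's notation.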

*3.5.2. Dynamic pile capacity using Janbu formula*

G(x) = Allowable Design Load – Allowable Pile Capacity
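For reference, the Janbu formula is commonly quoted (e.g. in Bowles) in the form

```latex
Q_u \;=\; \frac{e_h\, E_h}{k_u\, s}
```

where e_h = hammer efficiency, E_h = hammer energy rating, s = point penetration per blow, and k_u is a dimensionless driving coefficient that depends on the weight ratio W_p/W_r and on the pile stiffness through the term e_h E_h L / (A E s²). The notation follows the common textbook statement and may differ from the author's.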

*3.5.3. Dynamic pile capacity using Gates formula*

G(x) = Allowable Design Load – Allowable Pile Capacity
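For reference, the Gates formula is an empirical fit commonly quoted in the generic form

```latex
Q_u \;=\; a\,\sqrt{e_h\, E_h}\;\bigl(b - \log_{10} s\bigr)
```

where a and b are empirical constants that depend on the unit system, e_h = hammer efficiency, E_h = hammer energy rating, and s = point penetration per blow. This is a sketch of the usual textbook form, not necessarily the exact version analysed by the author.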

As is common in practice, the areas of piles, hammer efficiency, hammer energy rating and point penetration per blow are subject to variations, and the results of the assessment are displayed in Figs. 14 to 17.

The Hiley formula generally provides a grossly conservative pile capacity, as seen in Figs. 14 to 17. Nevertheless, the safety level does not change with the area of the pile (Fig. 15). As hammer efficiency and hammer energy rating increase, the safety level reduces significantly, as in Fig. 16 and Fig. 17 respectively. On the other hand, the safety level grows with increasing factor of safety, as normally expected (Fig. 14).

Just like the Hiley formula, the Janbu formula leads to a grossly conservative pile capacity. However, Janbu's prediction is not as conservative as Hiley's with respect to hammer efficiency and hammer energy rating. The implied safety level associated with piling capacity using the Gates formula is the most grossly conservative of the three, and it does not change with the area of pile, hammer efficiency or hammer energy rating (Figs. 14 to 17).

## 4. Conclusion

Reliability-based design methods could be used to address many different aspects of foundation design and construction. However, most of these efforts to date have focused on geotechnical and structural strength requirements, such as the bearing capacity of shallow foundations, the side friction and toe-bearing capacity of deep foundations, and the stresses in deep foundations. All of these are based on the difference between load and capacity, so we can use a more specific definition of reliability as the probability of the load being less than the capacity for the entire design life of the foundation. Various methods are available to develop reliability-based design of foundations, most notably stochastic methods, the First-Order Second-Moment method, and the Load and Resistance Factor Design method.

Deterministic methods are not a very cost-effective way to produce reliable structures; therefore, reliability-based design should be adopted to quantify reliability and thus to develop balanced designs that are both more reliable and less expensive.

From all the preceding sections it is apparent how stochastic modelling can contribute to the improvement of structural design, as it can take into account the uncertainties that are present in all human projects.