Open access peer-reviewed chapter

Control Laws Design and Validation of Autonomous Mobile Robot Off-Road Trajectory Tracking Based on ADAMS and MATLAB Co-Simulation Platform

By Yang Yi, Fu Mengyin, Zhu Hao and Xiong Guangming

Submitted: November 11th 2010 | Reviewed: April 20th 2011 | Published: September 9th 2011

DOI: 10.5772/20954


1. Introduction

Autonomous automobile technology is a rapidly developing field, attracting interest in both academia and industry. Outdoor navigation of autonomous vehicles, especially rough-terrain driving, has become a new research focus. The DARPA Grand Challenge and the LAGR program represent the state of the art in this area. Rough-terrain driving poses the challenge that the in-vehicle control system must handle rough and curvy roads and quickly varying terrain types, such as gravel, loose sand, and mud puddles, while stably tracking trajectories between closely spaced hazards. The vehicle must be able to recover from large disturbances without intervention (Gabriel, 2007).

Since autonomous navigation tasks in outdoor environments can be effectively performed by skid-steering vehicles, these vehicles are widely used in military applications, academic research, space exploration, and so on. In reference (Luca, 1999), a model-based nonlinear controller is designed following the dynamic feedback linearization paradigm. In reference (J.T. Economou, 2000), the authors applied experimental results to enable fuzzy-logic modelling of the vehicle-ground interactions in an integrated manner; these results illustrate the complexity of systematically modeling ground conditions and the necessity of using two variables to identify the surface properties. In reference (Edward, 2001), the authors described relevant rover safety and health issues and presented an approach to maintaining vehicle safety in a navigational context, with fuzzy-logic approaches to reasoning about safe attitude and traction management. In reference (D. Lhomme-Desages, 2006), the authors introduced a model-based control for fast autonomous mobile robots on soft soils; this control strategy takes slip and skid effects into account to extend mobility over planar, granular soils. Unlike the above research, which controls robots on tarmac, grass, sand, gravel or soil, this chapter focuses on motion control for skid-steering vehicles on bumpy, rock-strewn terrain and presents novel and effective trajectory tracking control methods, including the longitudinal, lateral, and sensor pan-tilt control laws. Furthermore, based on an ADAMS and MATLAB co-simulation platform, the iRobot ATRV2 is modelled and bumpy off-road terrain is constructed; the trajectory tracking control methods are then validated on this platform.

2. Mobile robot dynamic analysis

To study the motion of skid-steering robots, a 4-wheeled differentially driven robot, moving on a horizontal road normally without longitudinal wheel slippage, is analyzed for kinematics and dynamics. As shown in Fig. 1 (a), a global Cartesian frame and an in-vehicle Cartesian frame are established. In the global frame $OXYZ$, the origin $O$ sits on the horizontal plane where the robot is running, and the $Z$ axis is orthogonal to the plane. In the in-vehicle frame $oxyz$, the origin $o$ is located at the robot's center of mass, the $z$ axis is orthogonal to the chassis of the robot, the $x$ axis is parallel to the rectilinear translation orientation of the robot, and the $y$ axis is orthogonal to the translation orientation in the plane of the chassis. The robot wheelbase is $A$ and the distance between the left and right wheels is $B$. According to the relation between $OXYZ$ and $oxyz$, the kinematics equation is as follows.

$$\begin{bmatrix}\dot X\\ \dot Y\end{bmatrix}=\begin{bmatrix}\dot x\cos\theta-\dot y\sin\theta\\ \dot x\sin\theta+\dot y\cos\theta\end{bmatrix}=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}\dot x\\ \dot y\end{bmatrix}\tag{1}$$

The derivative of equation (1) with respect to time is

$$\begin{bmatrix}\ddot X\\ \ddot Y\end{bmatrix}=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}\ddot x-\dot y\dot\theta\\ \ddot y+\dot x\dot\theta\end{bmatrix}=\begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}\begin{bmatrix}a_x\\ a_y\end{bmatrix}\tag{2}$$

Figure 1.

Dynamics analysis of the robot and the motion constraint about C o

In the above equations, $\dot X$, $\dot Y$, $\ddot X$, $\ddot Y$ represent the absolute longitudinal and lateral velocities and the absolute longitudinal and lateral accelerations, respectively; $\theta$ is the angle between the $x$ and $X$ axes; $\dot x$, $\dot y$, $\dot\theta$ denote the longitudinal, lateral and angular velocities in the in-vehicle frame, respectively; $a_x$, $a_y$ are the longitudinal and lateral accelerations in the in-vehicle frame.
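As a minimal illustration of equation (1) (the function name is ours), the in-vehicle velocities can be rotated into the global frame:

```python
import math

def body_to_global(x_dot, y_dot, theta):
    """Equation (1): rotate in-vehicle velocities (x_dot, y_dot) into
    the global frame through the heading angle theta."""
    X_dot = x_dot * math.cos(theta) - y_dot * math.sin(theta)
    Y_dot = x_dot * math.sin(theta) + y_dot * math.cos(theta)
    return X_dot, Y_dot

# Driving purely forward while heading along the global Y axis:
print(body_to_global(1.0, 0.0, math.pi / 2))  # ≈ (0.0, 1.0)
```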

The equations of the robot motion are as follows:

$$\begin{cases}m a_x=\displaystyle\sum_{i=1}^{4}F_{xi}-\sum_{i=1}^{4}f_{xi}\\[4pt] m a_y=\displaystyle\sum_{i=1}^{4}f_{yi}\\[4pt] J\ddot\theta=\dfrac{B}{2}\big[(F_{x1}-F_{x3})+(F_{x2}-F_{x4})\big]-M_r\end{cases}\tag{3}$$

where $J$ is the moment of inertia of the robot; $F_{xi}$ denotes the tractive force produced by the $i$-th wheel, with $F_{outside}$ and $F_{inside}$ the tractive forces of the outside and inside wheels; $f_{xi}$ is the longitudinal resistance of the $i$-th wheel; $f_{yi}$ is the lateral resistance of the $i$-th wheel; $M_r$ is the resistive moment around the center $o$.

During turning, the robot is subject to the centripetal force $f_{cen}$, which satisfies,

$$f_{cen}\cos\beta=f_{y3}+f_{y4}-f_{y1}-f_{y2}\tag{4}$$

Let $\mu_r$ and $\mu_s$ be the coefficient of longitudinal rolling resistance and the coefficient of lateral friction, respectively. In (4), $\beta$ is the angle between $C_o o$ and the $y$ axis. Accordingly, when the centripetal acceleration is considered and $\beta \approx 0$ is assumed, the equation of the tractive force of the robot is,

$$\begin{cases}F_{outside}=\left(\dfrac{mg}{2}+\dfrac{h m a_y}{B}\right)\mu_r+\dfrac{m a_y x_{co}}{2R}+\dfrac{\mu_s m g A}{4B}\left[1-\left(\dfrac{a_y}{g\mu_s}\right)^2\right]\\[8pt] F_{inside}=\left(\dfrac{mg}{2}-\dfrac{h m a_y}{B}\right)\mu_r+\dfrac{m a_y x_{co}}{2R}-\dfrac{\mu_s m g A}{4B}\left[1-\left(\dfrac{a_y}{g\mu_s}\right)^2\right]\end{cases}\tag{5}$$

where $h$ is the height of the robot's center of mass, i.e. the distance between the center of mass and the ground, and $R$ is the turning radius of the robot (J. Y. Wong, 1978).

Since the robot motion can be regarded as an instantaneous fixed-axis rotation, straight-line motion corresponds to $R = \infty$. $C_o$ is the instantaneous center of rotation, and its coordinates can be expressed as $[x_{co}\;\; y_{co}]^T = [-\dot y/\dot\theta\;\;\; \dot x/\dot\theta]^T$.

In Fig. 1 (b), the solid arrowheads represent the lateral transfer orientations of the four wheels when $C_o$ lies within the wheelbase range of the robot; the dashed arrowheads stand for the transfer orientations when $C_o$ moves beyond the wheelbase range. Clearly, if $C_o$ moves out of the wheelbase range, the lateral transfer orientations of the four wheels are the same; as a result, the motion of the robot will be out of control. The motion constraint is as follows,

$$\begin{bmatrix}\sin\theta & -\cos\theta & -l\end{bmatrix}\begin{bmatrix}\dot X\\ \dot Y\\ \dot\theta\end{bmatrix}=0\qquad(|l|\le A/2)\tag{6}$$

where $l$ is the distance from $C_o$ to the $y$ axis. According to Fig. 1 (a) and equation (5), $l = A a_y \cos\beta / (2\mu_s g)$ can be obtained. Therefore the following motion constraint is imperative.

$$(\dot x)^2/R\le\mu_s g\quad\text{or}\quad(\dot\theta)^2 R\le\mu_s g\tag{7}$$

In this chapter, a description of trajectory space (Matthew Spenko, 2006) is presented according to the robot's dynamic analysis; it is defined as the two-dimensional space of the robot's turning angular speed $\dot\theta$ and longitudinal velocity $V_{long}$ ($V_{long} = \dot x$). This description is very useful for motion control. Consider inequalities (7): $(V_{long}(t))^2/R \le \mu_s g$ or $(\dot\theta(t))^2 R \le \mu_s g$; it follows that $|V_{long}(t)| \le \sqrt{\mu_s g R}$ or $|\dot\theta(t)| \le \sqrt{\mu_s g / R}$. Taking the additional constraint $V_{long} = \dot\theta R$ into consideration, a figure of the robot's $(V, \dot\theta)$ space (see Fig. 2) can be obtained. The boundary curve in the figure satisfies $V_{long}(t)\,\dot\theta(t) = \mu_s g$. When the robot's $(V, \dot\theta)$ state is in the shaded region, that is $V_{long}(t)\,\dot\theta(t) \le \mu_s g$, the robot is safe, meaning that hazardous situations such as side slippage will not occur. As a result, inequalities (7) are crucial for control decision making.

Fig. 2. The $(V, \dot\theta)$ space of motion control of the robot (in this figure, $\mu_s = 0.49$ and $g = 9.8\,\mathrm{m/s^2}$).
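The safe-region test of Fig. 2 reduces to a single comparison; a minimal sketch (the function name is ours):

```python
MU_S = 0.49  # lateral friction coefficient used in Fig. 2
G = 9.8      # gravitational acceleration, m/s^2

def is_safe(v_long, theta_dot, mu_s=MU_S, g=G):
    """True if the (V, theta_dot) state lies inside the shaded safe
    region of Fig. 2, i.e. |V_long * theta_dot| <= mu_s * g."""
    return abs(v_long * theta_dot) <= mu_s * g

print(is_safe(1.0, 0.5))  # True: a slow, gentle turn
print(is_safe(5.0, 2.0))  # False: 10.0 exceeds mu_s*g = 4.802
```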

3. Trajectory tracking control laws design

An autonomous mobile robot achieves outdoor navigation through three processes: environment information is acquired by the perception module, control decisions are made by the planner module, and the motion plan is performed by the motion control module (Gianluca, 2007). Consequently, for safe and accurate outdoor navigation it is vital to harmonize the performance of the three modules. In this chapter, the emphasis is on the robot's control laws; the controlled objects include the longitudinal velocity, the lateral velocity, and the angles of the sensor pan-tilts.

As shown in Fig. 3, in an off-road environment the robot uses a laser range finder (LRF) with a one-degree-of-freedom (DOF) pan-tilt (tilt only) to scan the bumpiness of the ground just ahead, on which the robot is moving, and employs stereo vision with a two-DOF pan-tilt to perceive the drivability of the ground farther ahead. With the data acquired from the laser and vision sensors, a passable path can be planned, and the velocities of the robot's left and right sides can be controlled to track the path; in this way, off-road running is accomplished.

Figure 2.

Off-road driving of the robot

3.1. Longitudinal control law

In this section, a humanoid-driving longitudinal control law based on fuzzy logic is proposed. First, the factors affecting the longitudinal velocity are classified. As shown in Fig. 4,

Figure 3.

The longitudinal control law of the robot

these factors include the curvature radius of the trajectory, the road roughness, the processing time of the sensors and the speed requirement. Based on these four factors and the analysis of kinematics and dynamics, the longitudinal velocity equation can be given,

$$V_{long}=\big(\mu_{cv}(r_c)+\mu_{rr}(R_{rough})\big)\,\mu_t(T_{sen})\,\mu_v(V_{re})\,V_{re}\tag{8}$$

where $V_{long}$ is the longitudinal velocity command; $\mu_{cv}$, $\mu_{rr}$, $\mu_t$ and $\mu_v$ are the weight factors of the trajectory curvature radius $r_c$, the road roughness $R_{rough}$, the sensor processing time $T_{sen}$ and the speed requirement $V_{re}$, respectively. The weight factors can vary within $[0, 1]$.
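A sketch of equation (8), assuming the four weights have already been produced by the fuzzy evaluations of the corresponding factors (the function name is ours):

```python
def longitudinal_command(mu_cv, mu_rr, mu_t, mu_v, v_req):
    """Equation (8): blend the weight factors (each in [0, 1]) for
    curvature radius, roughness, sensor processing time and the
    speed requirement into a longitudinal velocity command."""
    return (mu_cv + mu_rr) * mu_t * mu_v * v_req

# Moderate curvature and roughness, fast sensors, full speed request:
print(longitudinal_command(0.5, 0.5, 1.0, 1.0, 1.2))  # 1.2
```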

3.2. Lateral control law

Autonomous vehicle off-road driving is a special control problem because the mathematical models are highly complex and cannot be accurately linearized. Fuzzy logic control, however, is robust and efficient for nonlinear system control (Gao Feng, 2006): it is a well-tested method for dealing with this kind of system, provides good results, and can incorporate human procedural knowledge into control algorithms. Fuzzy logic also lets us mimic human driving behavior to some extent (José E. Naranjo, 2007). Therefore, this chapter presents a novel fuzzy lateral control law for robot off-road running.

Thereafter, the lateral control law is designed for the purpose of position tracking by adjusting navigation orientation to reduce position error.

Definition: when the robot moves toward the trajectory, the orientation angle of the robot is positive, whichever of the two regions of the tracking position error (the left error region or the right error region) the robot is in. When the robot moves against the trajectory, the angle is negative. When the robot motion is parallel to the trajectory, the angle is zero.

Figure 4.

Plots of membership functions of $e_d$ and $\theta_e$. The upper plot shows $\mu(e_d)$; $[-0.1, 0.1]$ is the range of the center region, where $\mu(e_d) = 1.0$. The lower plot shows $\mu(\theta_e)$, indicated with a triangle membership function. In this figure, the colored bands represent the continuous changing course of $e_d$.

In the course of trajectory tracking, the position error $e_d$ and the orientation error $\theta_e$ between the robot and the trajectory are the inputs of the lateral controller. First, the error inputs are pre-processed to improve the steady-state precision of the trajectory tracking and to restrain its oscillation. The error $E$ can be written as $E = \bar E + \Delta E$, where $\bar E$ is the result of a smoothing filter over $E$, i.e. $\bar E = \sum_{i-n+1}^{i} e_i / n$, and $\Delta E$ forecasts the error produced in the next system sample time, $\Delta E = \sum_{i-n+1}^{i} k_{ei} e_i / n$ ($i \ge n$).

As a result, $E = \sum_{i-n+1}^{i} (1 + k_{ei}) e_i / n$ ($i \ge n$), with $n = 4$ and $k_{ei} = f_{ei}(\dot x, T_s)$.

$T_s$ represents the system sample time. Second, the pre-processed error inputs are fuzzified. As shown in Fig. 5, the degrees of membership $\mu(e_d)$ and $\mu(\theta_e)$ are indicated in terms of triangle and rectangle membership functions:

Part One: the space of $e_d$ is divided into LB, LS, MC, RS, RB. According to the result of this $e_d$ division, the phenotype rules of the lateral fuzzy control law hold.

Part Two: the space of $\theta_e$ is also divided into LB, LS, MC, RS, RB. According to the result of this $\theta_e$ division, the recessive rules of the control law are obtained.

It is necessary to explain that the phenotype rules are the basis of the recessive rules; namely, each phenotype rule possesses a group of relevant recessive rules. Each phenotype rule has its own expected orientation angle $\tilde\theta_e$, that is, the robot is expected to run at this orientation angle, and this angle is the center angle of that group of recessive rules. When the center angle $\tilde\theta_e$ of a group of recessive rules varies, the division of $\theta_e$ in that group changes accordingly.

All of the control rules can be expressed as continuous functions $f_{LB}$, $f_{LS}$, $f_{MC}$, $f_{RS}$, $f_{RB}$; consequently, the global continuity of the control rules is established.

The functions $f_{LB}$, $f_{LS}$, $f_{RS}$, $f_{RB}$, which express the rule functions of the non-center regions of $e_d$, are as follows:

$$\dot\theta=k_{\tilde\theta_e}k_t\frac{\tilde\theta_e-\theta_e}{T_s}+k_t\dot{\tilde\theta}_{tr}\tag{9}$$

In (9), $\tilde\theta_e$ is the expected tracking angle in the current position-error region, and $\tilde\theta_e$ is monotonically increasing with $e_d$. It is given by $\tilde\theta_e = \arctan(k_{ed} e_d / V_{long})$, with $k_{ed} = \mu_{DL}(e_d) k_{DL} + \mu_{DS}(e_d) k_{DS}$, where $\mu_{DL}$ stands for the membership degree of $e_d$ in the large-error regions (LB and RB), and $\mu_{DS}$ for the membership degree of $e_d$ in the small-error regions (LS and RS). $\dot{\tilde\theta}_{tr}$ is the estimate of the trajectory-angle rate. $k_{\tilde\theta_e}$ can be worked out from $k_{\tilde\theta_e} = \mu_{\theta L}(\theta_e) k_{\theta L} + \mu_{\theta S}(\theta_e) k_{\theta S}$, where $\mu_{\theta L}$ and $\mu_{\theta S}$ denote the membership degrees of $\theta_e$ in the large-error (LB, RB) and small-error (LS, RS) regions, respectively. $k_{DL}$ and $k_{\theta L}$ express the standard coefficients of the large-error regions of $e_d$ and $\theta_e$; $k_{DS}$ and $k_{\theta S}$ indicate those of the small-error regions. $k_t$ is the proportional coefficient of the system sample time $T_s$.
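Equation (9) and the $\tilde\theta_e$ formula above can be sketched as follows, assuming $k_{ed}$ and $k_{\tilde\theta_e}$ have already been blended from the membership degrees (the function names and signatures are ours):

```python
import math

def expected_angle(e_d, v_long, k_ed):
    """Expected tracking angle: arctan(k_ed * e_d / V_long)."""
    return math.atan(k_ed * e_d / v_long)

def non_center_rate(theta_e, e_d, v_long, k_ed, k_theta, k_t, T_s,
                    traj_rate=0.0):
    """Angular-rate command of equation (9) for the LB/LS/RS/RB regions."""
    theta_ref = expected_angle(e_d, v_long, k_ed)
    return k_theta * k_t * (theta_ref - theta_e) / T_s + k_t * traj_rate
```

When the robot already runs at the expected angle, the command reduces to the feed-forward term $k_t\dot{\tilde\theta}_{tr}$.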

$f_{MC}$ is deduced as follows:

Figure 5.

Tracking trajectory of the robot. In this figure, the red dash-dotted line stands for the trajectory tracked by the robot. The dotted lines of different colors represent the boundaries of the different error regions of $e_d$.

When the robot moves into the center region at orientation $\alpha$, the motion state of the robot can be divided into two situations.

Situation One: assume that $\alpha$ has decreased into the rule admission angular range of the center region, i.e. $0 \le \alpha \le \theta_{cent}$, where $\theta_{cent}$, which is subject to (7), is the critical angle of the center region. To make the robot approach the trajectory smoothly, the planner module requires the robot to move along a certain circular path. As the robot moves along the circular path in Fig. 6, the values of $e_d$ and $\theta_e$ decrease synchronously. In Fig. 6, $\lambda$ is the variation range of $e_d$ in the center region, and $\alpha$ is the angle between the orientation of the robot and the trajectory when the robot just enters the center region. $R \approx 2\lambda/\alpha^2$ can be worked out by geometry; in addition, since the value of $\alpha$ is very small, the process of approaching the trajectory can be represented as $\Delta\alpha = \alpha\,\Delta\lambda/\lambda$.

Situation Two: $\alpha \le 0$ or $\alpha \ge \theta_{cent}$. If the motion decision from the planner module were the same as in Situation One, the motion would not meet (7). According to the above analysis, the tracking error cannot converge until the adjusted $\tilde\theta_e$ makes $\alpha$ satisfy Situation One. Therefore, the purpose of control in Situation Two is to decrease $\theta_e$.

Based on the above deduction, $f_{MC}$ is as follows:

$$\dot\theta=k_t\frac{\tilde\theta_e-\theta_e}{T_s}+k_t\dot{\tilde\theta}_{tr}\tag{10}$$

where $\tilde\theta_e = e_d\theta_{cent}/\lambda$, and $\lambda$ is the variation range of $e_d$ in the center region, $[-0.1\,\mathrm{m}, 0]$ or $[0, 0.1\,\mathrm{m}]$. $\dot\theta$ is the output of (9) and (10); at the same time $\dot\theta$ is subject to (7), so $(\dot\theta)^2 R \le \mu_s g$ is required by the control rules.
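A sketch of the center-region rule (10) under the same assumptions (the $\lambda$ and $\theta_{cent}$ values below are illustrative, not the chapter's):

```python
def center_rate(e_d, theta_e, k_t, T_s, lam=0.1, theta_cent=0.3,
                traj_rate=0.0):
    """Equation (10) in the center region MC, with the expected angle
    theta_ref = e_d * theta_cent / lam."""
    theta_ref = e_d * theta_cent / lam
    return k_t * (theta_ref - theta_e) / T_s + k_t * traj_rate

# On the trajectory with zero heading error, the command is zero:
print(center_rate(0.0, 0.0, 1.0, 0.05))  # 0.0
```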

The execution sequence of the control rules is as follows:

First, the phenotype control rules are enabled, namely to determine which error region (LB, LS, MC, RS, RB) the current $e_d$ of the robot belongs to, and to enable the relevant recessive rules. Second, the relevant recessive rules are executed, and $\tilde\theta_e$ is updated in time.

The lateral control law is exemplified in Fig. 7. In this figure, the concentric circle bands of different colors represent the different position errors $e_d$. From the outermost circle band to the

Figure 6.

Plot of the lateral control law of the robot. The dashed curves stand for parts of the performance result of the control law.

center round, the values of $e_d$ decrease. The red center round stands for MC of $e_d$, that is, the center region of $e_d$; at the center point of the red round, $e_d = 0$. According to the above definition, the orientation range of the robot is $(-\pi, \pi]$, and the two 0-degree axes of $\theta_e$ stand for the 0-degree orientations in the left and right regions of the trajectory, respectively. At the same time, the $\pi/2$ axis and $-\pi/2$ axis of $\theta_e$ are the two common axes of the orientation of the robot in the left and right regions of the trajectory. In the upper sub-region of the 0-degree axes, the orientation of the robot is toward the trajectory; in the lower sub-region, the orientation of the robot is opposite to the trajectory. The result of the control rules converges to the center of the concentric circle bands along the direction of the arrowheads in Fig. 7. Based on the analysis of the figure, the global asymptotic stability of the lateral control law can be established; if $e_d = 0$ and $\theta_e = 0$, the robot reaches the unique equilibrium at zero. The proof is as follows:

Proof: from the kinematic model (see Fig. 8), it can be seen that the position error $e_d$ of the robot satisfies the following equation,

Figure 7.

Trajectory Tracking of the mobile robot

$$\dot e_d(t)=-V_{long}(t)\sin(\tilde\theta_e(t))\tag{11}$$

  • When the robot is in the non-center region, a controller is designed to control the robot's lateral movement:

$$\tilde\theta_e(t)=\arctan\frac{k_{ed}\,e_d(t)}{V_{long}(t)}\tag{12}$$

Combining Equations (11) and (12), we get

$$\dot e_d(t)=-V_{long}(t)\sin\!\left(\arctan\frac{k_{ed}\,e_d(t)}{V_{long}(t)}\right)=-\frac{k_{ed}\,e_d(t)}{\sqrt{1+\left(\dfrac{k_{ed}\,e_d(t)}{V_{long}(t)}\right)^2}}\tag{13}$$

Figure 8.

LRF pan-tilt and stereo vision pan-tilt motion

As the sign of $\dot e_d$ is always opposite that of $e_d$, $e_d$ will converge to 0. From equation (11), $|\dot e_d(t)| \le V_{long}(t)$, and from equation (13), $|\dot e_d(t)| \le k_{ed}|e_d(t)|$. Therefore the convergence rate of $e_d$ is between linear and exponential. When the robot is far away from the trajectory, it heads for the trajectory vertically; then $\tilde\theta_e = \pi/2$, $\dot e_d(t) = -V_{long}(t)$ and $e_d(t) = -V_{long}(t - t_0) + e_d(t_0)$. When the robot is near the trajectory, $e_d \to 0$; then in equation (13), $\sqrt{1 + (k_{ed} e_d(t)/V_{long}(t))^2} \to 1$ and $\dot e_d(t) \approx -k_{ed} e_d(t)$.

According to equation (12), $e_d$ and $\tilde\theta_e$ converge to 0 simultaneously.

  • When the robot enters the center region, another controller is designed,

$$\tilde\theta_e(t)=\frac{e_d(t)\,\theta_{cent}}{\lambda}\tag{14}$$

Combining equations (11) and (14), we get $\dot e_d = -V_{long}\sin(e_d\theta_{cent}/\lambda)$.

In this region $e_d$ is very small; consequently $e_d\theta_{cent}/\lambda$ is also very small, and $\sin(e_d\theta_{cent}/\lambda) \approx e_d\theta_{cent}/\lambda$ is derived. Therefore $\dot e_d = -V_{long}\theta_{cent}e_d/\lambda$, and then $e_d(t) = e_d(t_1)\exp\{-V_{long}\theta_{cent}(t - t_1)/\lambda\}$, where $t_1$ is the time when the robot enters the center region. In other words, $e_d$ converges to 0 exponentially. Then, according to $\tilde\theta_e(t) = e_d\theta_{cent}/\lambda$, $\tilde\theta_e(t)$ converges to 0.

So the origin is the only equilibrium in the $(e_d, \tilde\theta_e)$ phase space.
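The convergence argument can be checked numerically; a small Euler simulation of equation (13) (the step size and gains are our own choices):

```python
import math

def simulate_error(e0, v_long=1.0, k_ed=1.0, dt=0.01, steps=2000):
    """Integrate e_d' = -V_long * sin(arctan(k_ed * e_d / V_long)),
    i.e. equation (13): the position error should decay toward 0."""
    e = e0
    for _ in range(steps):
        e += dt * (-v_long * math.sin(math.atan(k_ed * e / v_long)))
    return e

print(abs(simulate_error(5.0)) < 1e-3)   # True: a 5 m error vanishes
print(abs(simulate_error(-3.0)) < 1e-3)  # True: symmetric from the other side
```

The decay is roughly linear while the error is large (the $\sin$ term saturates near 1) and exponential near the trajectory, matching the analysis above.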

3.3. LRF Pan-tilt and stereo vision pan-tilt control

Perception is the key to high-speed off-road driving. A vehicle needs maximum data coverage of the regions along its trajectory, but must also sense these regions in time to react to obstacles in its path. In off-road conditions the vehicle is not guaranteed a traversable path through the environment, so better sensor coverage provides improved safety when traveling. It is therefore important for off-road driving to apply active sensing technology. In this chapter, the angular control of the sensor pan-tilts assists in achieving active sensing. Equation (15) represents the relation between the measured angles $\varphi_c$, $\gamma_c$ and $\gamma_l$ of the sensors mounted on the robot and the motion state $\theta_e$ and $\dot x$ of the robot.

$$\begin{bmatrix}\varphi_c\\ \gamma_c\\ \gamma_l\end{bmatrix}=\begin{bmatrix}k_{\varphi c}&0&0\\ 0&k_{\gamma c}&0\\ 0&0&k_{\gamma l}\end{bmatrix}\begin{bmatrix}\theta_e\\ \dot x\\ \dot x\end{bmatrix}\tag{15}$$

In (15), $\varphi_c$ and $\gamma_c$ are the pan and tilt angles of the stereo vision, respectively, and $\gamma_l$ is the tilt angle of the LRF; $k_{\varphi c}$, $k_{\gamma c}$ and $k_{\gamma l}$ are the experimental coefficients between the measured angles and the motion state; they are given by practical sensor experiments and connected with the measurement-range requirement of off-road driving. At the same time, the coordinates of the scanning centers are $x_{ec} = x_c + h_c\cot\gamma_c\cos\varphi_c$, $y_{ec} = y_c + h_c\cot\gamma_c\sin\varphi_c$, and $x_{el} = x_l + h_l\cot\gamma_l$, $y_{el} = 0$. In these equations $x_c$, $y_c$, $x_l$, $y_l$ are the coordinates of the sense center points of the stereo vision and the LRF in the in-vehicle frame. As shown in Fig. 9, $h_c$ and $h_l$ are their respective heights above the ground.

The angular control and the longitudinal control are achieved by PI controllers, the same as in reference (Gabriel, 2007).
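Equation (15) and the scanning-center formulas can be sketched as follows (the coefficient values are placeholders; the chapter obtains them from sensor experiments):

```python
import math

def pan_tilt_commands(theta_e, x_dot, k_phi=0.8, k_gc=0.05, k_gl=0.1):
    """Equation (15): diagonal mapping from the motion state to the
    sensor angles. Returns (phi_c, gamma_c, gamma_l)."""
    return k_phi * theta_e, k_gc * x_dot, k_gl * x_dot

def vision_scan_center(x_c, y_c, h_c, gamma_c, phi_c):
    """Ground coordinates (x_ec, y_ec) of the stereo-vision scan center."""
    r = h_c / math.tan(gamma_c)  # h_c * cot(gamma_c)
    return x_c + r * math.cos(phi_c), y_c + r * math.sin(phi_c)
```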

4. Simulation tests

4.1. Simulation platform build

In this section, the ADAMS and MATLAB co-simulation platform is built. In the course of co-simulation, the platform can harmonize the control system and the simulated mechanical system, provide a 3D performance result, and record the experimental data. Based on the analysis of the simulation results, the design of real-world experiments can become more reasonable and safer.

First, based on the characteristic data of the test agent, ATRV2, such as the geometric dimensions (H×L×W = 65×105×80 cm), the mass (118 kg), the tire diameter (Φ38 cm) and so on, the simulated robot vehicle model is built, as shown in Fig. 10.

Figure 9.

ATRV2 and its model in ADAMS

Second, according to the test data of the tires of ATRV2, the attributes of the tires and the contact characteristics between the tires and the ground are set. The ADAMS sensor interface module can be used to define the motion-state sensor parameters, which provide position and orientation information to ATRV2.

Road roughness affects the dynamic performance of vehicles, the driving state and the dynamic load on the road. Therefore, the ability of a vehicle to overcome stochastic road roughness is the key to testing the performance of the control law during off-road driving. In this chapter, the simulation terrain model is built from a Gaussian-distributed pseudo-random number sequence and a power spectral density function (Ren, 2005). The details are described as follows:

  • A Gaussian-distributed random number sequence $x(t)$, with variance $\sigma = 18$ and mean $E = 2.5$, is generated;

  • The power spectrum $S_X(f)$ of $x(t)$ is worked out via the Fourier transform of $R_X(\tau)$, the autocorrelation function of $x(t)$,

$$S_X(f)=\int_{-\infty}^{+\infty}R_X(\tau)e^{-j2\pi f\tau}\,d\tau=T\sigma^2\left(\frac{\sin\pi fT}{\pi fT}\right)^2\tag{16}$$

where Tis the time interval of the pseudo random number sequence;

  • Assume the following,

$$y(t)=x(t)*h(t)=\int_{-\infty}^{+\infty}x(\tau)h(t-\tau)\,d\tau\tag{17}$$
$$h(t)=\int_{-\infty}^{+\infty}H(f)e^{j2\pi ft}\,df\tag{18}$$

where $h(t)$ is deduced by the inverse Fourier transform of $H(f)$, and both are real even functions. Then,

$$H(f)=\sqrt{\frac{S_Y(f)}{S_X(f)}}\tag{19}$$
$$y_k=y(kT)=T\sum_{r=-M}^{+M}x(rT)\,h(kT-rT)=T\sum_{r=-M}^{+M}x_r h_{k-r}\tag{20}$$

where $S_Y(f)$ is the power spectrum of $y(t)$; $y_k$ is the pseudo-random sequence with power spectrum $S_Y(f)$, $k = 0, 1, 2, \dots, N$; and $M$ can be established from the condition $h_m = h(MT) = 0$ for $|m| \ge M$;

  • Assign a certain value to the road roughness and adjust the parameters of the special points on the road according to the test design; the resulting simulation test ground is shown in Fig. 11.
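The steps above amount to shaping Gaussian noise with a filter whose impulse response vanishes beyond $\pm M$. A compact stdlib sketch of equation (20), up to the constant factor $T$ (the kernel shape and all parameter values here are illustrative assumptions, since the chapter's exact $H(f)$ depends on the chosen road class):

```python
import math
import random

def rough_terrain(n=1024, sigma=math.sqrt(18.0), mean=2.5, M=8, seed=0):
    """Generate terrain heights y_k = sum_r x_r * h_{k-r} (equation (20),
    up to the factor T). x is the Gaussian sequence; h is a truncated
    impulse response with h(+/-M) = 0 (an assumed raised-cosine kernel)."""
    rng = random.Random(seed)
    x = [rng.gauss(mean, sigma) for _ in range(n)]
    # Assumed low-pass kernel, normalized so the mean height is preserved:
    h = [1.0 + math.cos(math.pi * r / M) for r in range(-M, M + 1)]
    s = sum(h)
    h = [v / s for v in h]
    y = []
    for k in range(n):
        acc = 0.0
        for r in range(-M, M + 1):
            if 0 <= k - r < n:
                acc += x[k - r] * h[r + M]
        y.append(acc)
    return y

heights = rough_terrain()
print(len(heights))  # 1024
```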

Figure 10.

The simulation test ground in ADAMS

4.2. Simulation tests

In this section, the control law is validated with the ADAMS&MATLAB co-simulation platform.

Based on the position-orientation information provided by the simulation sensors and on the control law, the lateral and longitudinal motion of the robot and the motion of the sensor pan-tilts are achieved. The test is designed to make the robot track two different kinds of trajectories composed of straight-line, sinusoidal and circular paths. In Test One, the tracking trajectory consists of a straight-line path and a sinusoidal path, in which the wavelength of the sinusoidal path is $5\pi$ m and the amplitude is 3 m; the simulation result is shown in Fig. 12. In Test Two, the tracking trajectory contains a straight-line path and a circular path of radius 5 m; the simulation result is shown in Fig. 13.

Figure 11.

Plots of the result of Test One ($T_s = 0.05$ s)

Figure 12.

Plots of the result of Test Two ($T_s = 0.05$ s)

In Fig. 12, as in Fig. 13, sub-figure (a) shows the simulation data recorded by ADAMS. In sub-figure (a), the upper-left part is the 3D animation of the robot's off-road driving on the simulation platform, in which the white path shows the motion trajectory of the robot. The upper-right part is the velocity magnitude plot of the robot; it indicates that the velocity of the robot is adjusted according to the longitudinal control law. In addition, it is clear that the longitudinal control law, whose output changes mainly with the curvature radius of the path and the road roughness, helps the lateral control law track the trajectory more accurately. In Test One the average velocity is approximately 1.2 m/s, and in Test Two approximately 1.0 m/s. The bottom-left part presents the height of the robot's center of mass during tracking, from which the road roughness can be inferred. The bottom-right part shows the kinetic energy magnitude required by the robot's motion in the course of tracking. Sub-figure (b) shows the angle data of the stereo vision pan rotation; the pan rotation angle varies according to the trajectory. Sub-figure (c) is the error statistics plot of the trajectory tracking; as shown, the error values almost converge to 0. The factors producing these errors include the roughness and the curvature variation of the trajectory. In Fig. 13 (d), the biggest error occurs at the start point, due to the initial error between the start point and the trajectory. Sub-figure (d) is the trajectory tracking plot, which contains the objective trajectory and the real tracking trajectory. It is obvious that the robot is able to recover from large disturbances without intervention and accomplish the tracking accurately.

5. Conclusions

The ADAMS and MATLAB co-simulation platform facilitates the control method design, as well as the dynamics modeling and analysis of the robot on rough terrain. According to practical requirements, various terrain roughness values and obstacles can be configured by modifying the relevant parameters of the simulation platform. In the simulation environment, extensive experiments on control methods for rough-terrain trajectory tracking of a mobile robot can be carried out. The experimental results indicate that the control methods are robust and effective for a mobile robot running on rough terrain. In addition, the simulation platform makes the experimental results more vivid and credible.

How to cite: Yang Yi, Fu Mengyin, Zhu Hao and Xiong Guangming (September 9th 2011). Control Laws Design and Validation of Autonomous Mobile Robot Off-Road Trajectory Tracking Based on ADAMS and MATLAB Co-Simulation Platform, in Applications of MATLAB in Science and Engineering, Tadeusz Michałowski (Ed.), IntechOpen, DOI: 10.5772/20954.
