Open access peer-reviewed chapter - ONLINE FIRST

Innovative Adaptive Imaged Based Visual Servoing Control of 6 DoFs Industrial Robot Manipulators

Written By

Rongfei Li and Francis F. Assadian

Submitted: 06 February 2024 Reviewed: 12 February 2024 Published: 10 April 2024

DOI: 10.5772/intechopen.1004857


From the Edited Volume

Latest Adaptive Control Systems [Working Title]

Dr. Petros Ioannou


Abstract

Image-based visual servoing (IBVS) methods are well developed and widely used in many applications, especially in pose (position and orientation) alignment. However, most research has focused on developing control solutions for the case in which 3D point features can be detected inside the field of view. This work proposes an innovative feedforward-feedback adaptive control algorithm structure based on the Youla parameterization method. A designed feature estimation loop ensures stable and fast motion control when point features are outside the field of view. As 3D point features move inside the field of view, the IBVS feedback loop preserves the precision of the pose at the end of the control period. In addition, an adaptive controller is developed in the feedback loop to stabilize the system over the entire range of operations. The nonlinear camera and robot manipulator model is linearized and decoupled online by an adaptive algorithm, and the adaptive controller is then computed based on the linearized model evaluated at the current linearization point. The proposed solution is robust and easy to implement in different industrial robotic systems. Various scenarios are used in simulations to validate the effectiveness and robust performance of the proposed controller.

Keywords

  • image-based visual servoing
  • adaptive control
  • robust control
  • feature estimation
  • feedforward control

1. Introduction

Automatic alignment plays a crucial role in industrial tasks such as micromanipulation, autonomous welding, and industrial assembly [1, 2, 3]. Visual servoing is a powerful tool commonly used in these applications to guide robotic systems to their desired poses [4, 5, 6]. The task of a visual servoing system is to provide end-effector velocities that stabilize the system and minimize the difference between the image features extracted by a vision device and their desired configurations [7].

The best-known visual servoing techniques can be divided into two main categories: image-based visual servoing (IBVS) and position-based visual servoing (PBVS) [7]. IBVS drives the robot manipulator via feedback loops that match 2D image features, such as points [7], lines [8], and entire images [9], while the PBVS architecture extracts features from an image and estimates the pose (position and orientation) with respect to a 3D coordinate frame in the workspace; the difference between the current pose and the desired pose is defined as the control error [10]. In either case, an interaction matrix that relates the derivative of the image or pose features to the spatial velocities of the end-effector must be computed [7, 11].

In this research paper, the IBVS structure is explored and applied to a fastening and unfastening manufacturing scenario. Compared to PBVS, IBVS has the advantage of being more robust against camera parametric variations and noise, but it is more vulnerable to local minima and image singularities [7]. In general, the interaction matrix can be computed by using direct depth information [12, 13], by approximating the depth [14, 15], or by using a depth-independent interaction matrix [16]. However, these methods use redundant features, and inverting the resulting non-square interaction matrix, as required in the controller design, in turn causes problems of local minima and image singularities. To address these issues, [17] uses a stereo visual system that provides a square interaction matrix. There has been a trend toward stereo visual configurations in recent visual servoing works [18, 19].

In most related works, only a simple kinematic model of the camera is used to generate the interaction matrix [7, 20, 21]. Dynamic visual servoing, studied in [22, 23, 17], includes the dynamics of the robot arms in the model, and the authors report higher performance and enhanced stability in the response. However, the controllers built in these papers are usually obtained with the PID control technique or its simplified variations. One improvement in this work is the use of the Youla robust control design technique [24], which includes both kinematics and dynamics in the model development stage. This advanced control technique can increase the robustness and stability of the system for high-speed tasks.

The eye-in-hand (EIH) vision system and the eye-to-hand (ETH) vision system are the two kinds of camera configurations that have been widely used in visual servoing [18, 19]. EIH has the freedom to obtain adequate environmental information, but the camera attached to the robot arm occupies more space and reduces the flexibility of robot movement. On the other hand, in ETH systems, robot movements are not affected by the image extraction process. However, ETH usually suffers from loss of information and loss of control of the robot when point features are outside the view of the camera. One possible way to tackle this problem is to install several cameras to cover the whole workspace [19, 25]. In this paper, we address this problem by introducing an innovative method of designing a feedforward-feedback control architecture. A designed feature estimation loop ensures stable and fast motion control when point features are outside the field of view. As 3D point features move inside the field of view, the IBVS feedback loop preserves the precision of the pose at the end of the control period.

Adaptive control methods have been widely used in recent research on visual servoing. Liu and Wang [26] presented a new simplified adaptive controller for visual servoing of robot manipulators, based on the Slotine-Li algorithm [27]. They developed an adaptive algorithm to estimate unknown geometric parameters, such as the depths of the image features in the interaction matrix. In another paper [28], researchers explored a new resilient adaptive dynamic tracking control scheme for a fully uncalibrated IBVS system with unknown actuator faults. An effective adaptive algorithm was developed to estimate uncalibrated parameters of the camera, robot, and end-effector, which appear in a highly coupled and nonlinear form in the composite Jacobian matrix. Moreover, we have seen applications of adaptive control with neural networks in visual servoing. For example, Qiu and Wu [29] developed an adaptive neural network-based IBVS dynamic method for both eye-in-hand and eye-to-hand camera configurations with unknown dynamics and external disturbances. In most applications, adaptive algorithms are used to estimate unknown parameters in the models. However, adaptive control can also be used to handle coupled, nonlinear dynamic models in IBVS structures, and this application is less addressed in the literature. In this paper, one innovative contribution is the development of an adaptive feedback loop controller based on linear Youla parametrization that stabilizes the nonlinear IBVS system over the whole range of operations. Simulation results for various scenarios are presented, and robustness to noise and model uncertainties in the manufacturing process of fastening and unfastening is examined.


2. System configuration

2.1 System topology for fastening and unfastening scenarios

In this paper, an automatic alignment system is used to align a screwdriver, which is attached to the gripper of the robot arm, and to guide it above a screw at a prescribed location. The system is composed of a 6-DoF robot manipulator, a stereo camera, and the tools, as shown in Figure 1. In this figure, a camera is placed at the front of the workspace. The base frame {O} and the end-effector frame {E} are attached to the robot. A Cartesian frame {C} is attached to the optical center of the camera. The body of the screwdriver is modeled as a cylinder, and its central axis is approximately parallel to the Z-axis of the end-effector frame {E}.

Figure 1.

Configuration of the alignment system.

2.2 Point feature extraction

To capture the pose (position and orientation) of the screwdriver, two circular fiducial markers are placed on it. The Hough transform [30] in computer vision can isolate the circular markers in the image and localize their centers. The general Hough transform process is discussed below.

A circle with radius R and center (a, b) is defined by the following equation:

$(x - a)^2 + (y - b)^2 = R^2$  (1)

where (x, y) are points on the perimeter of the circle. The three-element tuple (a, b, R) uniquely parameterizes a circle in an image.

To provide an example of this mapping, we consider only one of the circles on our tool and explain the process as follows: for a single image point (x, y), the locus of (a, b) points in the parameter space falls on a circle of radius R centered at (x, y). Each point in the geometric space therefore generates a circle in the parameter space, as shown in Figure 2 (only three points are shown in this example). If the geometric-space points belong to the same circle, the center of that circle is represented by the intersection of the circles in the parameter space. During the calculation of the Hough transform, each image point with coordinates (x, y) generates tuples (a, b, R) with unknown R.

Figure 2.

Feature extraction by the Hough transform. (Note: The red point in the parameter space represents a possible tuple (a, b, R).)

By considering different points in the geometric space, we can generate different tuples. The tuple with the most intersections of circles in the parameter space provides the center and the radius of the circle in the geometric space.
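To make this step concrete, the following sketch shows how the circular markers could be located with OpenCV's Hough circle transform. This is a minimal illustration: the function name and all parameter values (blur kernel, accumulator resolution, thresholds, radius range) are our own assumptions rather than the settings used in this work.

```python
import cv2
import numpy as np

def detect_marker_centers(frame_gray, min_r=5, max_r=60):
    """Return (u, v, R) for each circular marker found in a grayscale image."""
    blurred = cv2.medianBlur(frame_gray, 5)              # suppress pixel noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=40,
                               minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return []                                         # no marker detected
    # circles[0] is an array of (u, v, R) triples voted for in parameter space
    return [tuple(c) for c in np.round(circles[0]).astype(int)]
```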

Then the stereo camera provides the image coordinates of the center of the two fiducial markers as:

$\hat{p}_{I1} = [\hat{u}_{l,I1}\;\; \hat{u}_{r,I1}\;\; \hat{v}_{I1}]^T$  (2)

$\hat{p}_{I2} = [\hat{u}_{l,I2}\;\; \hat{u}_{r,I2}\;\; \hat{v}_{I2}]^T$  (3)

where $\hat{p}_{I1}$ and $\hat{p}_{I2}$ are the image coordinates of the circular centers of the first and the second fiducial markers, $\hat{u}_l$ and $\hat{u}_r$ are the u-axis coordinates measured on the left and right lens image planes of the stereo camera, and $\hat{v}$ is the v-axis coordinate.

2.3 Control system block diagram

The feedback-feedforward pose alignment control block diagram with feature estimation feedback is given in Figure 3. The point features of the fiducial markers are extracted from images captured by the stereo camera. The current point features are denoted as $\hat{p}_I = [\hat{p}_{I1}^T\;\;\hat{p}_{I2}^T]^T$, and the target point features are denoted as $\bar{p}_I = [\bar{p}_{I1}^T\;\;\bar{p}_{I2}^T]^T$. During the alignment, the coordinates of the target point features are fixed in the image frames, while the coordinates of the current point features vary with the movement of the end-effector.

Figure 3.

Block diagram of the feedforward-feedback control loop with feature estimation (note: Blocks outlined in red refer to HIL models).

The inner-loop joint control is an inner feedback loop (not shown in this figure; its details are discussed in a later section) that moves the robot arm to the target rotational angle $q_{T_{ref}}$, which is commanded by the outer feedback controller and the feedforward controller. The dynamic model of the robot manipulator is included in the inner-loop controlled system. Sensor noise may originate in the inner joint control loop from low-fidelity, inexpensive joint encoders and from dynamic errors at the joints, e.g., compliances. All sources of noise from the joint control loop are combined and modeled as an input disturbance, $d_{qT}$, to the outer control loop.

The feedforward control loop is an open loop that brings the tool as close to the target position as possible in the presence of the input disturbance, $d_{qT}$. The outputs of the feedforward controller are the reference joint angles of rotation, $q_{T\_feedforward}$, which are added to the outer feedback controller outputs, $q_{T\_feedback}$, and set as the targets for the joint control inner loop. The function of the tool-on-robot kinematics model is to transform a set of current joint angles of the tool manipulator to the current pose of the tool on the end-effector using a kinematic model of the robot arm.

The movement of the tool can be adjusted by the adaptive feedback control loop. The adaptive controller is computed online according to the current image coordinates $\hat{p}_I$. The feedback control loop rejects the input disturbance, $d_{qT}$, by minimizing the error between the target coordinates of the two markers, $\bar{p}_I$, and the image estimation, $\hat{p}_I$, in the image frame.

The feedforward and feedback controllers work simultaneously to move the tool to the target pose in the tool manipulation system. The combined target $q_{T_{ref}}$ is the input to the joint control loop, so both controllers manipulate the tool pose. The benefit of designing both feedback and feedforward controls for the manipulation system is to reduce the task duration. If only feedback control is utilized, the pose estimation generated from the visual system requires image processing and makes the tool movement very slow. We can divide the task of the tool movement control into two stages. In the first stage, under the action of the feedforward control, the tool moves to an approximate location that is close to the desired destination. In the second stage, the feedback controller moves the tool to the precise target location using the tool pose estimation from the camera. In addition, the camera has a limited range of view and can only detect the tool and measure its 2D features as $\hat{p}_I$ when the tool is not far away from the target. When the tool is moving from a location that is not in the camera range of view, we must estimate the features as $\tilde{p}_I$ until the tool moves into the range of view.

We developed another feedback loop so that we can estimate the 2D coordinates of the tool from the same mathematical model, similar to the Hardware-in-the-Loop (HIL) models, when the real measurements are unavailable. In Figure 3, the normal feedback loop (blue lines) is preserved when the tool is inside the camera range of view and hence the camera can measure the tool's 2D features $\hat{p}_I$. However, when the tool is outside the range of view, the 2D features can only be estimated as $\tilde{p}_I$ (green dashed lines). We can implement a bump-less switch to smoothly switch between these modes of operation. The switching signal is activated when the tool moves in or out of the camera range of view (or when the camera detects the fiducial point features).
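As an illustration of this switching logic, the sketch below selects between the measured and the model-estimated features and blends them with a simple first-order filter so that the feedback signal does not jump at the switching instant. The class, the blend constant, and the visibility flag are our own assumptions and only approximate the bump-less switch described above.

```python
import numpy as np

class FeatureSwitch:
    """Bump-less selection between measured and model-estimated 2D features."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha        # first-order blending factor per sample
        self.p_out = None         # last feature vector sent to the feedback controller

    def update(self, p_meas, p_est, markers_visible):
        # p_meas: features from the camera; p_est: features from the estimation model
        target = np.asarray(p_meas if markers_visible else p_est, dtype=float)
        if self.p_out is None:
            self.p_out = target
        else:
            # low-pass blend toward the currently selected source
            self.p_out = (1.0 - self.alpha) * self.p_out + self.alpha * target
        return self.p_out
```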


3. Model development

3.1 Tool-on-robot kinematics

The robot model we use in this article is a specific manipulator, the ABB IRB 4600 [31], which is an elbow manipulator with a spherical wrist, as shown in Figure 4. The physical dimension parameters of this robot manipulator ($a_1, L_1, L_2, L_3, L_4, L_t$) are summarized in the Appendix. The robot has a total of six links: three compose the arm and the other three compose the wrist. Adjacent links are connected by revolute joints. In addition to the base of the robot arm, we attach a Cartesian coordinate frame to each joint, as shown in Figure 4. The joint axes $Z_0, \ldots, Z_5$ are the rotation axes of the joints, and their rotation angles are defined as $\theta_1, \ldots, \theta_6$. The benefit of a spherical wrist is that the wrist joint center (where $Z_3$, $Z_4$, and $Z_5$ intersect) remains stationary regardless of how the wrist is oriented. Therefore, the orientation task does not affect the position of the wrist center. Each coordinate frame is assigned following the procedure that derives the forward kinematics by the Denavit-Hartenberg (D-H) convention [32]. In the coordinate frame $O_6X_6Y_6Z_6$ attached to the end-effector, we set the unit vector $\hat{k}_6 = \hat{a}$ along the direction of $Z_6$, set $\hat{j}_6 = \hat{s}$ in the direction of gripper closure, and set $\hat{i}_6 = \hat{n} = \hat{s} \times \hat{a}$. The values of $\hat{a}$, $\hat{s}$, and $\hat{n}$ in the base frame $O_0X_0Y_0Z_0$ define the orientation of the object (tool) mounted at the end-effector.

Figure 4.

IRB ABB 4600 model with attached frames.

In the D-H convention, each homogeneous transformation matrix $A_i^{i-1}$ (from frame $i$ to frame $i-1$) can be represented as a product of four basic transformations:

$A_i^{i-1} = \mathrm{Rot}_{z,\theta_i}\,\mathrm{Trans}_{z,d_i}\,\mathrm{Trans}_{x,a_i}\,\mathrm{Rot}_{x,\alpha_i}$  (4)

$A_i^{i-1} = \begin{bmatrix} c_{\theta_i} & -s_{\theta_i} c_{\alpha_i} & s_{\theta_i} s_{\alpha_i} & a_i c_{\theta_i} \\ s_{\theta_i} & c_{\theta_i} c_{\alpha_i} & -c_{\theta_i} s_{\alpha_i} & a_i s_{\theta_i} \\ 0 & s_{\alpha_i} & c_{\alpha_i} & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (5)

Note:

$c_{\theta_i} \equiv \cos\theta_i,\quad c_{\alpha_i} \equiv \cos\alpha_i,\quad s_{\theta_i} \equiv \sin\theta_i,\quad s_{\alpha_i} \equiv \sin\alpha_i$  (6)

where $\theta_i$, $a_i$, $\alpha_i$, and $d_i$ are the parameters of link $i$ and joint $i$: $a_i$ is the link length, $\theta_i$ is the rotation angle, $\alpha_i$ is the twist angle, and $d_i$ is the offset length between the previous $(i-1)$th and the current $i$th robot links. The value of each parameter $\theta_i$, $a_i$, $\alpha_i$, and $d_i$ in Eq. (5) is listed in Table 1, based on the steps in [32].

Link | ai | αi (rad) | di | θi (rad)
1 | a1 | π/2 | L1 | θ1*
2 | L2 | 0 | 0 | θ2* − π/2
3 | L3 | π/2 | 0 | θ3*
4 | 0 | π/2 | L4 | θ4*
5 | 0 | π/2 | 0 | θ5*
6 | 0 | 0 | Lt | θ6* + π

Table 1.

D-H parameters for the elbow manipulator with a spherical wrist.

Only the joint angles are variables; they are marked with *.

We can generate the transformation matrix $T_E^O$ from the base inertial frame $O_0X_0Y_0Z_0$ (frame {O}) to the end-effector frame $O_6X_6Y_6Z_6$ (frame {E}):

$T_E^O = A_1^0 A_2^1 A_3^2 A_4^3 A_5^4 A_6^5$  (7)

Assume a screwdriver of length $L_{tool}$ is grasped by the gripper at the end-effector, with the body of the screwdriver parallel to the $Z_6$ axis. Two fiducial feature points are placed on the screwdriver; their coordinates measured in the end-effector frame are $P_{I1}^E = [0\;\;0\;\;0]^T$ and $P_{I2}^E = [0\;\;0\;\;L_{tool}/2]^T$. These coordinates can then be transformed from frame {E} to frame {O} as:

$P_{I1}^O = T_E^O P_{I1}^E$  (8)

$P_{I2}^O = T_E^O P_{I2}^E$  (9)

To be consistent with the notation in the control block diagram (Figure 3), the disturbed angles of rotation in the transformation matrix $T_E^O$ are written as $q_T = [\theta_1\;\theta_2\;\theta_3\;\theta_4\;\theta_5\;\theta_6]^T$.

Taking $P_I^E = [P_{I1}^E\;\;P_{I2}^E]$ and $P_I^O = [P_{I1}^O\;\;P_{I2}^O]$, the kinematics model can be written as the following nonlinear function:

$P_I^O = T_E^O(q_T) \cdot P_I^E$  (10)
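To make the kinematics concrete, the sketch below builds the D-H transforms of Eqs. (4)-(7) from Table 1 and the link lengths listed in the Appendix, and maps a point from frame {E} to frame {O} as in Eqs. (8)-(10). The function names are ours, and the twist-angle signs are taken from Table 1 as printed; they should be verified against the robot documentation before any real use.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform of Eq. (5) for a single joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,       d],
                     [0.,       0.,       0.,      1.]])

def forward_kinematics(q, a1=0.175, L1=0.495, L2=0.9, L3=0.175, L4=0.96, Lt=0.135):
    """Transform T_E^O from base frame {O} to end-effector frame {E}, Eq. (7)."""
    # Rows follow Table 1: (theta, d, a, alpha); lengths from the Appendix.
    dh_rows = [(q[0],             L1, a1, np.pi / 2),
               (q[1] - np.pi / 2, 0., L2, 0.),
               (q[2],             0., L3, np.pi / 2),
               (q[3],             L4, 0., np.pi / 2),
               (q[4],             0., 0., np.pi / 2),
               (q[5] + np.pi,     Lt, 0., 0.)]
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Example, Eq. (9): second fiducial point (homogeneous coordinates) in frame {O}.
# P_I2_O = forward_kinematics(q_T) @ np.array([0., 0., 0.127 / 2, 1.])
```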

3.2 Dynamic models of 6 DoFs robot manipulator

Dynamic models are included in the inner control loop in Figure 3. The dynamic model of a serial 6-link rigid, non-redundant, fully actuated robot manipulator can be written as:

$\left(D(q) + J\right)\ddot{q} + C(q,\dot{q})\dot{q} + B r \dot{q} + g(q) = u$  (11)

where $q \in \mathbb{R}^{6\times 1}$ is the vector of joint positions, $u \in \mathbb{R}^{6\times 1}$ is the vector of electrical power inputs from the DC motors inside the joints, $D(q) \in \mathbb{R}^{6\times 6}$ is a symmetric positive-definite inertia matrix, $C(q,\dot{q}) \in \mathbb{R}^{6\times 6}$ is the matrix of centripetal and Coriolis effects, $g(q) \in \mathbb{R}^{6\times 1}$ is the vector of gravitational torques, $J \in \mathbb{R}^{6\times 6}$ is a diagonal matrix expressing the sum of the actuator and gear inertias, $B \in \mathbb{R}^{6\times 1}$ is the vector of damping factors, and $r \in \mathbb{R}^{6\times 1}$ is the vector of gear ratios.

3.3 Stereo camera model

Assume the camera's position and orientation are fixed and known in advance in the inertial frame. The transformation matrix $T_C^O$ from the camera-attached Cartesian frame {C} to the inertial base frame {O} is:

$T_C^O = \begin{bmatrix} n_x & s_x & a_x & d_x \\ n_y & s_y & a_y & d_y \\ n_z & s_z & a_z & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (12)

where $[n_x\;n_y\;n_z]^T$, $[s_x\;s_y\;s_z]^T$, and $[a_x\;a_y\;a_z]^T$ are the camera's directional vectors of yaw, pitch, and roll in the base frame $O_0X_0Y_0Z_0$, and $[d_x\;d_y\;d_z]^T$ is the absolute position of the camera center in the base frame $O_0X_0Y_0Z_0$.

The transformation matrix $T_O^C$ from the inertial base frame {O} to the camera frame {C} can be derived as:

$T_O^C = \left(T_C^O\right)^{-1}$  (13)

Then the coordinates of the fiducial points in frame {O}, as expressed in Eqs. (8) and (9), can be transformed to frame {C} as:

$P_{I1}^C = T_O^C P_{I1}^O$  (14)

$P_{I2}^C = T_O^C P_{I2}^O$  (15)

To detect the depth of an object, a stereo camera is required. As shown in Figure 5, two identical cameras are separated by a baseline distance b. In this paper, the pinhole camera model is used to represent each camera. A 3D point $P_I^C = [X_I^C\;\;Y_I^C\;\;Z_I^C]^T$, measured in the camera Cartesian frame {C}, is projected onto two parallel virtual image planes, and each plane is located between its optical center ($C_L$ or $C_R$) and the object point $P_I$. Denote the coordinates of the point projected on the left image plane as $[u_l\;\;v_l]^T$ and the coordinates of the point projected on the right image plane as $[u_r\;\;v_r]^T$.

Figure 5.

The projection of a scene object on the stereo camera's image planes. Note: The v coordinate on each image plane is not shown in the plot; it is measured along the axis that is perpendicular to the plot and points out of it.

The relationship between the image coordinates $(u_l, v_l)$, $(u_r, v_r)$ and the 3D coordinates $(X_I^C, Y_I^C, Z_I^C)$ of a point in the camera frame is given by:

$\begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} = \frac{1}{Z_I^C}\begin{bmatrix} f_u & s_c & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_I^C \\ Y_I^C \\ Z_I^C \end{bmatrix} - \frac{b}{2 Z_I^C}\begin{bmatrix} f_u \\ 0 \\ 0 \end{bmatrix}$  (16)

$\begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix} = \frac{1}{Z_I^C}\begin{bmatrix} f_u & s_c & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_I^C \\ Y_I^C \\ Z_I^C \end{bmatrix} + \frac{b}{2 Z_I^C}\begin{bmatrix} f_u \\ 0 \\ 0 \end{bmatrix}$  (17)

where $f_u$ and $f_v$ are the horizontal and vertical focal lengths, and $s_c$ is a skew coefficient. In general, $f_u$ and $f_v$ differ when the pixels are not square, and $s_c$ is nonzero when the image axes are not perpendicular. To avoid negative pixel coordinates, the origin of the image plane is usually chosen at the upper-left corner instead of the center; $u_0$ and $v_0$ describe the corresponding coordinate offsets.

When there are no coordinate offsets and no skew in the u-v image plane, that is, $s_c = u_0 = v_0 = 0$, the above relationships between the 3D domain and the 2D image domain of a point P in the workspace can be simplified as follows.

From 3D to 2D:

$v = v_l = v_r = \dfrac{Y_I^C}{Z_I^C} f_v$  (18)

$u_l = \dfrac{2X_I^C - b}{2 Z_I^C} f_u$  (19)

$u_r = \dfrac{2X_I^C + b}{2 Z_I^C} f_u$  (20)

Note that $v_l$ and $v_r$ have the same value and are denoted by the same parameter $v$.

Finally, from stereo camera model equations above, the coordinates of two fiducial points in the image frame can be expressed as:

$\hat{p}_{I1} = \left[\dfrac{2X_{I1}^C - b}{2Z_{I1}^C} f_u \;\;\; \dfrac{2X_{I1}^C + b}{2Z_{I1}^C} f_u \;\;\; \dfrac{Y_{I1}^C}{Z_{I1}^C} f_v\right]^T = [u_{l,I1}\;\; u_{r,I1}\;\; v_{I1}]^T$  (21)

$\hat{p}_{I2} = \left[\dfrac{2X_{I2}^C - b}{2Z_{I2}^C} f_u \;\;\; \dfrac{2X_{I2}^C + b}{2Z_{I2}^C} f_u \;\;\; \dfrac{Y_{I2}^C}{Z_{I2}^C} f_v\right]^T = [u_{l,I2}\;\; u_{r,I2}\;\; v_{I2}]^T$  (22)

where $P_{I1}^C = [X_{I1}^C\;\;Y_{I1}^C\;\;Z_{I1}^C]^T$ and $P_{I2}^C = [X_{I2}^C\;\;Y_{I2}^C\;\;Z_{I2}^C]^T$ are the coordinates of the two fiducial points measured in the camera frame {C}.

A specific stereo camera, the Zed 2 [33], is used in the model development and simulations. Its parameters are summarized in the Appendix.
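As a minimal sketch of the simplified projection of Eqs. (18)-(20), the snippet below maps a point in frame {C} to the stereo image coordinates. It assumes the zero-offset, zero-skew case and that the focal lengths are already expressed in pixel units; converting the Zed 2 datasheet values to pixels is omitted.

```python
import numpy as np

def stereo_project(P_c, fu, fv, b):
    """Project P_c = (X, Y, Z), given in camera frame {C}, to (u_l, u_r, v)."""
    X, Y, Z = P_c
    v  = fv * Y / Z                       # Eq. (18): same row coordinate on both planes
    ul = fu * (2.0 * X - b) / (2.0 * Z)   # Eq. (19): left image u-coordinate
    ur = fu * (2.0 * X + b) / (2.0 * Z)   # Eq. (20): right image u-coordinate
    return np.array([ul, ur, v])

# Depth follows from the disparity: Z = fu * b / (u_r - u_l).
```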


4. Control strategy

4.1 Inner-loop joint control

Our previous paper [34] provides the details of developing the inner-loop joint controller with Youla feedback linearization. In this section, we only summarize the design.

Eq. (11) expresses the dynamic model of a 6 DoFs robot manipulator. We simplify this equation as follows:

$M(q)\ddot{q} + h(q,\dot{q}) = u$  (23)

with $M(q) = D(q) + J$  (24)

$h(q,\dot{q}) = C(q,\dot{q})\dot{q} + B r \dot{q} + g(q)$  (25)

Then, transform the control input as following:

$u = M(q)v + h(q,\dot{q})$  (26)

where $v$ is a virtual input. Then, substitute for $u$ in Eq. (23) using Eq. (26); since $M(q)$ is invertible, we obtain the reduced system equation:

$\ddot{q} = v$  (27)
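The feedback-linearizing (computed-torque) input of Eq. (26) can be sketched as below; the rigid-body terms M(q) and h(q, q̇) are assumed to be supplied by a dynamics routine for the manipulator and are not reproduced here.

```python
import numpy as np

def computed_torque(q, q_dot, v, M, h):
    """Input of Eq. (26): u = M(q) v + h(q, q_dot), reducing the dynamics to Eq. (27)."""
    # M: callable returning the 6x6 inertia matrix; h: callable returning the 6x1 bias terms
    return M(q) @ np.asarray(v) + h(q, q_dot)
```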

By using feedback linearization with Youla parameterization, we can design the controller system as shown in Figure 6.

Figure 6.

Block diagram of inner joint control loop with feedback linearization.

The Joint controller is designed as:

$G_{c,sys\_inner} = \dfrac{3\tau_{in}\, s + 1}{\tau_{in}^3\, s + 3\tau_{in}^2} \cdot I_{6\times 6}$  (28)

And the closed-loop transfer function can be expressed as:

$T_{sys\_inner} = \dfrac{3\tau_{in}\, s + 1}{(\tau_{in}\, s + 1)^3} \cdot I_{6\times 6}$  (29)

where $I_{6\times 6}$ is the 6×6 identity matrix and $\tau_{in}$ specifies the pole and zero locations and sets the bandwidth of the control system. $\tau_{in}$ can be tuned so that the response is fast with little overshoot.
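As a sketch of this design, the snippet below builds one decoupled channel of Eqs. (28)-(29) with the python-control package and checks that the closed loop reduces to Eq. (29); the value of tau_in is only illustrative.

```python
import control as ct

tau_in = 0.01                                  # ~100 rad/s inner-loop bandwidth (illustrative)
s = ct.tf('s')

Gp_inner = ct.tf([1], [1, 0, 0])               # Eq. (27): one decoupled double-integrator channel
Gc_inner = (3 * tau_in * s + 1) / (tau_in**3 * s + 3 * tau_in**2)   # Eq. (28), one channel

T_inner = ct.feedback(Gp_inner * Gc_inner, 1)  # closed loop
print(ct.minreal(T_inner))                     # should match Eq. (29): (3*tau_in*s+1)/(tau_in*s+1)^3
```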

4.2 Adaptive controller design

In the control system block diagram (Figure 3), the linear closed-loop transfer function of the inner-loop joint control is derived in Eq. (29). The tool-on-robot kinematics model and the stereo camera model, which can be combined into a new model, the camera-and-robot model, compose the nonlinear part of the plant in the outer feedback loop. Combining the equations in Section 3, we can derive a nonlinear function, denoted as $F$, that relates the disturbed robot joint angles $q_T$ to the 2D image coordinates of the two fiducial points $\hat{p}_I$:

$\hat{p}_I = F(q_T)$  (30)

We can apply a model linearization method by linearizing the nonlinear function in Eq. (30) at different linearization points and designing linear controllers based on these linearized models. Choosing a linearization point $q_{T0}$, the nonlinear function in Eq. (30) can be linearized using the Jacobian matrix as:

$\hat{p}_I = J(q_{T0})\, q_T + F(q_{T0})$  (31)

where $J(q_{T0}) \in \mathbb{R}^{6\times 6}$ is the Jacobian matrix of $F(q_T)$ evaluated at $q_T = q_{T0}$.

Assuming $C_1 = J(q_{T0})$ and $C_2 = F(q_{T0})$, Eq. (31) can be rewritten as:

$\hat{p}_I = C_1 q_T + C_2$  (32)

Let us define $\hat{p}_I' = \hat{p}_I - C_2$; then, the overall block diagram of the linearized system is shown in Figure 7.

Figure 7.

Block diagram of feedback loop with linearized camera-and-robot combined model.

The linearized plant transfer function is derived as:

$G_{p,sys\_outer}^{linear} = \dfrac{\hat{p}_I'}{q_{T_{ref}}} = C_1 \dfrac{3\tau_{in}\, s + 1}{(\tau_{in}\, s + 1)^3} \cdot I_{6\times 6}$  (33)

As C1 is coupled, the first step to derive a controller for the multivariable system using model linearization is to find the Smith-McMillan form of the plant [35].

To obtain the Smith-McMillan form, we can decompose Gp using singular value decomposition (SVD) as:

$G_{p,sys\_outer}^{linear} = U_L M_p U_R$  (34)

where $U_L \in \mathbb{R}^{6\times 6}$ and $U_R \in \mathbb{R}^{6\times 6}$ are the left and right unimodular matrices and $M_p \in \mathbb{R}^{6\times 6}$ is the Smith-McMillan form of $G_{p,sys\_outer}^{linear}$. The SVD can replace the Smith-McMillan computation in this case because we are dealing with the product of a constant matrix and a diagonalized transfer function matrix.

$M_p$ is a diagonalized transfer function matrix in which each nonzero entry equals a gain multiplying the transfer function $\frac{3\tau_{in} s + 1}{(\tau_{in} s + 1)^3}$. For the $i$th row of $M_p$, the entry on the diagonal is:

$M_{p,ii} = gain_i \cdot \dfrac{3\tau_{in}\, s + 1}{(\tau_{in}\, s + 1)^3}, \quad i = 1, 2, \ldots, 6$  (35)

where $gain \in \mathbb{R}^{6\times 1}$ is a numerical vector.

The design of a Youla controller for each nonzero entry in $M_p$ is trivial in this case, as all poles and zeros of the plant transfer function matrix are in the left half-plane and therefore stable. The decoupled Youla transfer function matrix, $M_Y$, can be selected to shape the decoupled closed-loop transfer function matrix, $M_T$. All poles and zeros in the original plant can be canceled out, and new poles and zeros can be added to shape the closed-loop system. Let us select a Youla transfer function matrix so that each decoupled closed-loop SISO system behaves like a second-order Butterworth filter, such that:

$M_T = \dfrac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \cdot I_{6\times 6}$  (36)

where $\omega_n$ is the natural frequency and approximately sets the bandwidth of the closed-loop system. It must be ensured that the bandwidth of the outer loop is smaller than that of the inner loop, i.e., $1/\omega_n > \tau_{in}$. The parameter $\zeta$ is the damping ratio, which is another tuning parameter.

We can then compute the decoupled, diagonalized Youla transfer function matrix $M_Y$. The diagonal entry of the $i$th row is denoted as $M_{Y,ii}$:

$M_{Y,ii} = \dfrac{M_{T,ii}}{M_{p,ii}} = \dfrac{1}{gain_i} \cdot \dfrac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \cdot \dfrac{(\tau_{in}\, s + 1)^3}{3\tau_{in}\, s + 1}, \quad i = 1, 2, \ldots, 6$  (37)

The final coupled Youla, closed loop, sensitivity, and controller transfer function matrices are computed as:

$Y_{sys\_outer}^{linear} = U_R^{-1} M_Y U_L^{-1}$  (38)

$T_{y,sys\_outer}^{linear} = G_{p,sys\_outer}^{linear} \cdot Y_{sys\_outer}^{linear}$  (39)

$S_{y,sys\_outer}^{linear} = I - T_{y,sys\_outer}^{linear}$  (40)

$G_{C,sys\_outer}^{linear} = Y_{sys\_outer}^{linear} \cdot \left(S_{y,sys\_outer}^{linear}\right)^{-1}$  (41)
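The decoupling and recoupling steps of Eqs. (34)-(41) can be sketched as below. A numerical SVD stands in for the Smith-McMillan computation, as described above; the recoupling with the transposed SVD factors reflects our reading of Eq. (38), and all numerical values are placeholders rather than the values used in this work.

```python
import numpy as np
import control as ct

def youla_outer_controller(C1, tau_in=0.01, wn=10.0, zeta=0.707):
    """Coupled outer-loop controller following Eqs. (33)-(41) for a given Jacobian C1."""
    n = C1.shape[0]
    s = ct.tf('s')
    g  = (3 * tau_in * s + 1) / (tau_in * s + 1)**3          # inner closed loop, Eq. (29)
    MT = ct.tf([wn**2], [1, 2 * zeta * wn, wn**2])           # Butterworth target, Eq. (36)

    U, sigma, Vt = np.linalg.svd(C1)                         # C1 = U diag(sigma) Vt
    MY = [MT / (float(sig) * g) for sig in sigma]            # decoupled Youla entries, Eq. (37)

    # Recoupling, Eq. (38): Y = UR^{-1} MY UL^{-1}; with an SVD the inverses are transposes.
    Y = [[sum(float(Vt[k, i]) * MY[k] * float(U[j, k]) for k in range(n))
          for j in range(n)] for i in range(n)]

    # Gc = Y S^{-1}, Eq. (41); here T_y = MT * I, so S^{-1} = 1 / (1 - MT), written explicitly.
    S_inv = (s**2 + 2 * zeta * wn * s + wn**2) / (s**2 + 2 * zeta * wn * s)
    return [[ct.minreal(Y[i][j] * S_inv) for j in range(n)] for i in range(n)]
```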

The controller developed above is based on the linearization of the combined model at a particular linearization point $q_{T0}$. This controller can only stabilize the system over a certain range of joint angles around $q_{T0}$. As the current joint angles $q_T$ deviate from $q_{T0}$, the error between the estimated linearized system in Eq. (33) and the true nonlinear system in Eq. (30) increases.

To tackle this problem, we develop an adaptive controller that is computed online based on a linearization of the model at the current joint angles. This control process is depicted in Figure 8.

Figure 8.

Block diagram of adaptive control design.

The first step is to estimate the current joint angles $\tilde{q}_T$ from the currently measured image coordinates $\hat{p}_I$. The mathematical models of the stereo camera and the robot kinematics were given in the sections above, and the combined model is defined in Eq. (30). Therefore, the inverse model can be derived and expressed as:

$\tilde{q}_T = F^{-1}(\hat{p}_I)$  (42)

where $F^{-1}$ is the inverse of the function in Eq. (30). Given the estimated current angles $\tilde{q}_T$, we can calculate the Jacobian matrix of the nonlinear model at the current time. By obtaining the left and right unimodular matrices and the Smith-McMillan form from the singular value decomposition, the current linear controller $G_{C,sys\_outer}^{linear}$ can be built following the steps in Eqs. (30)-(41).

In this way, the adaptive outer-loop controller is computed online based on the current linearized plant model, which is evaluated at the current estimated joint angles of the robot. The mapping between the nonlinear plant model and the linearized models covers all joint-angle configurations; thus, the adaptive feedback controller is robust over the entire range of operations.
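A sketch of one online adaptation step is given below: the joint angles are recovered through the inverse model of Eq. (42), the Jacobian C1 is formed, and the Youla controller is redesigned. Here F, F_inv, and youla_outer_controller stand for the combined camera-and-robot model, its inverse, and the design routine sketched earlier; the finite-difference Jacobian is our substitute for an analytical one.

```python
import numpy as np

def numeric_jacobian(F, q, eps=1e-6):
    """Central-difference Jacobian of the feature map p = F(q), cf. Eq. (31)."""
    q = np.asarray(q, dtype=float)
    p0 = np.asarray(F(q), dtype=float)
    J = np.zeros((p0.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (np.asarray(F(q + dq)) - np.asarray(F(q - dq))) / (2.0 * eps)
    return J

def adapt_controller(p_meas, F, F_inv):
    """One adaptation step: estimate q_T, relinearize, and rebuild the Youla controller."""
    q_est = F_inv(p_meas)                 # Eq. (42): joint angles from the measured features
    C1 = numeric_jacobian(F, q_est)       # linearize the camera-and-robot model at q_est
    return youla_outer_controller(C1)     # redo the design of Eqs. (34)-(41)
```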

4.3 Feedforward control

The feedforward controller generates a target rotational angle $q_{T\_feedforward}$ based on the tool's coordinates at the target location. As shown in Figure 9, the feedforward controller contains two inverse processes of the original plant. The first block is the inverse of the tool-on-robot kinematics model, and the second block, $T_{forward}$, is the inverse of the inner closed-loop transfer function $T_{inner\_closed}$.

Eq. (10) provides the mathematical expression of the tool-on-robot kinematics model. The inverse of Eq. (10) gives an expression for $q_T$ in terms of $P_I^O$ and $P_I^E$. We can define the inverse of the tool-on-robot kinematics as follows:

$q_T = H(P_I^O, P_I^E)$  (43)

This process is the so-called inverse kinematics of the robot manipulator, which finds the joint configuration given the coordinates of points in both the base frame and the end-effector frame.

$T_{forward}$ can be designed as:

$T_{forward} = \dfrac{1}{T_{inner\_closed}} \cdot \dfrac{1}{(\tau_{forward}\, s + 1)^2} = \dfrac{(\tau_{in}\, s + 1)^3}{3\tau_{in}\, s + 1} \cdot \dfrac{1}{(\tau_{forward}\, s + 1)^2} \cdot I_{6\times 6}$  (44)

The double poles at $s = -1/\tau_{forward}$ are added to make $T_{forward}$ proper. $\tau_{forward}$ is chosen so that the added double poles are 10 times larger than the bandwidth of the original improper $T_{forward}$. In other words, $\tau_{forward}$ is chosen as (Figure 9):

Figure 9.

Block diagram of feedforward control design.

$\tau_{forward} = 0.1\, \tau_{in}$  (45)
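A per-channel sketch of Eqs. (44)-(45) with the python-control package is shown below; tau_in is illustrative.

```python
import control as ct

tau_in = 0.01
tau_forward = 0.1 * tau_in                                  # Eq. (45)
s = ct.tf('s')

T_inner_inv = (tau_in * s + 1)**3 / (3 * tau_in * s + 1)    # inverse of Eq. (29), one channel
T_forward = T_inner_inv / (tau_forward * s + 1)**2          # Eq. (44): made proper by two fast poles
print(ct.minreal(T_forward))
```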

4.4 Feature estimation

Because the camera is static (it does not move with the robot arm), the tool pose cannot be recognized and measured visually when it is outside the camera range of view. To tackle this problem, the 2D features (image coordinates of the tool points) can be estimated from the same combined model as in Eq. (30), with the joint angles $q_T$ as the input, as shown in Figure 3:

$\tilde{p}_I = F(q_T)$  (46)

5. Simulation results

5.1 Response of tool’s pose in the control system

In this section, we simulate the closed adaptive control loop in different scenarios to check the performance and stability of the proposed controller. For all scenarios, the tool is guided to a particular target position measured in the base inertial frame {O}: $\bar{P}_I = [1\;\mathrm{m}\;\;0.2\;\mathrm{m}\;\;0.3\;\mathrm{m}]^T$, where m represents meters, and the tool should be aligned vertically so that its rotation matrix equals $[\bar{n}\;\;\bar{s}\;\;\bar{a}] = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, where $\bar{n}$, $\bar{s}$, $\bar{a}$ represent, respectively, the end-effector's directional unit vectors of yaw, pitch, and roll in the inertial frame {O}. All scenarios are simulated with an inner-loop bandwidth of 100 rad/s and an outer-loop bandwidth of 10 rad/s.

In the following three plots, the blue lines represent the responses of the end-effector position, and the red lines are the targets. In the trajectory plots, the circles represent the starting position of the end-effector, and the stars represent the target position. Black arrows show the orientation of the trajectory points, which changes over time, and red arrows show the target orientation of the end-effector.

Three simulation scenarios are illustrated in Figures 10-12, respectively. In the first, the tool (end-effector) starts at the pose $P_I^O = [1.404\;\mathrm{m}\;\;0.228\;\mathrm{m}\;\;1.171\;\mathrm{m}]^T$ and $[n_0\;\;s_0\;\;a_0] = \begin{bmatrix} 0.4893 & 0.0262 & 0.8717 \\ 0.2427 & 0.9560 & 0.1650 \\ 0.8377 & 0.2932 & 0.4614 \end{bmatrix}$, where the two fiducial points are inside the view of the camera. Therefore, their coordinates can be detected and measured during the entire control period. In the second and third scenarios, the tool (end-effector) starts at the pose $P_I^O = [1.285\;\mathrm{m}\;\;0\;\mathrm{m}\;\;1.57\;\mathrm{m}]^T$ and $[n_0\;\;s_0\;\;a_0] = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$, where the two fiducial points are not inside the view of the camera at the beginning. Therefore, their coordinates must be estimated at the initial stage of the control loop. Disturbances of the joint angles, $d_q = [0.1°\;\;0.5°\;\;0.2°\;\;0.3°\;\;0.1°\;\;0.3°]^T$, are added to the first and the third scenarios to test the controller's robustness against these input disturbances.

Figure 10.

Response of the tool’s (end-effector’s) pose in scenario 1: Start in the field view of camera with disturbances.

Figure 11.

Response of the tool’s (end-effector’s) pose in scenario 2: Start outside field view of camera with no disturbances.

Figure 12.

Response of the tool’s (end-effector’s) pose in scenario 3: Start outside field view of camera with disturbances.

The responses in Figures 10 and 12 illustrate that the designed feedforward-feedback adaptive controller can reach the target in a stable manner even in the presence of input disturbances. Comparing the orientation trajectories in Figures 11 and 12, the disturbances introduce oscillations in the transient response, but the control system is able to stabilize the response in steady state. When the tool's coordinates are measurable by the camera (as shown in Figure 10), the feedforward and feedback controllers work together to guide the tool to its target pose quickly and precisely, indicated by a response within 0.3 seconds and small overshoots in the transient response. However, when the tool starts outside the view of the camera (as shown in Figures 11 and 12), the controlled system still converges quickly (in less than a second) but generates large overshoots in some of the position responses. The large overshoots may come from the accumulated disturbances that cannot be eliminated by the feedforward control without the intervention of the feedback control. The issue of the overshoots will be investigated in future research; it can be dealt with either by upgrading the camera to one with a wider range of view or by using a switching algorithm, such as switching from a feedforward to a feedback controller, rather than a continuous feedforward-feedback controller.

5.2 Robustness analysis of model uncertainties

In this section, we study the impact of varying some parameters on the steady-state error of the end-effector X-coordinate position. The lengths of link 2 and link 4 of the robot manipulator, $L_2$ and $L_4$, are chosen as the uncertain parameters because they have the two largest numerical dimensions among all geometric parameters of the robot manipulator.

Figure 13 shows the robustness test for simulation scenario 1. All points in the variation range have a steady-state error of less than 1%, which we use as the criterion for robust performance of the controller against model variation. From the plot, we conclude that the designed control system is robust to variations of these two parameters, each varying individually between 50 and 110% of its nominal value.

Figure 13.

Robustness analysis of model uncertainties.


6. Conclusion

This paper proposes a new feedforward-feedback adaptive control algorithm based on the image-based visual servoing technique. The core of this method is the design of a feedforward controller and a feature estimation loop so that the tool's pose can be stabilized when it is outside the view of the camera. Furthermore, an adaptive Youla parameterization-based controller is developed in the feedback loop to ensure stability and performance over the entire range of operations of the robot manipulator in the workspace. Various scenarios of the fastening and unfastening alignment problem have been simulated, and robustness against input disturbances and model uncertainties has been examined. In summary, our control design has shown fast and robust performance in different scenarios, so it has great potential for implementation in real-world applications in automated manufacturing.


A. Appendix

See Table 2.

Parameter (ABB IRB 4600 robot manipulator) | Value
Length of link 1: L1 | 0.495 m
Length of link 2: L2 | 0.9 m
Length of link 3: L3 | 0.175 m
Length of link 4: L4 | 0.96 m
Link 1 offset: a1 | 0.175 m
Spherical wrist: Lt | 0.135 m
Tool length of screwdriver: Ltool | 0.127 m

Parameter (Zed 2 stereo camera) | Value
Focal length: f | 2.8 mm
Baseline: b | 120 mm
Angle of view in width: α | 86.09°
Angle of view in height: β | 55.35°

Table 2.

Parameter values of ABB IRB 4600 robot manipulator [31] and Zed 2 Stereo camera [33].

References

  1. Qin F, Xu D, Zhang D, Pei W, Han X, Yu S. Automated hooking of biomedical microelectrode guided by intelligent microscopic vision. IEEE/ASME Transactions on Mechatronics. 2023;28(5):2786-2798. DOI: 10.1109/TMECH.2023.3248112
  2. Jung W, Jin L, Hyun J. Automatic Alignment Type Welding Apparatus and Welding Method Using the Above Auto-Type Welding Apparatus; 2020
  3. Guo P, Zhang Z, Shi L, Liu Y. A contour-guided pose alignment method based on Gaussian mixture model for precision assembly. Assembly Automation. 2021;41(3):401-411
  4. Chang W, Wu C-H. Automated USB peg-in-hole assembly employing visual servoing. In: Proceedings of the 3rd International Conference on Control, Automation and Robotics (ICCAR). Nagoya, Japan: IEEE; 2017. pp. 352-355
  5. Xu J, Liu K, Pei Y, Yang C, Cheng Y, Liu Z. A noncontact control strategy for circular peg-in-hole assembly guided by the 6-DOF robot based on hybrid vision. IEEE Transactions on Instrumentation and Measurement. 2022;71: Art. no. 3509815
  6. Zhu W, Liu H, Ke Y. Sensor-based control using an image point and distance features for rivet-in-hole insertion. IEEE Transactions on Industrial Electronics. 2020;67(6):4692-4699
  7. Chaumette F, Hutchinson S. Visual servo control part I: Basic approaches. IEEE Robotics and Automation Magazine. 2006;13:82-90
  8. Chaumette F. Image moments: A general and useful set of features for visual servoing. IEEE Transactions on Robotics. 2004;20:713-723
  9. Collewet C, Marchand E, Chaumette F. Visual servoing set free from image processing. In: IEEE International Conference on Robotics and Automation. Pasadena, California, USA: IEEE; 2008. pp. 81-86
  10. Cervera E, del Pobil AP, Berry F, Martinet P. Improving image-based visual servoing with three-dimensional features. The International Journal of Robotics Research. 2003;22(10-11):821-839
  11. Kanatani K. Linear algebra method for pose optimization computation. In: Sergiyenko O, Flores-Fuentes W, Mercorelli P, editors. Machine Vision and Navigation. New York: Springer; 2020. pp. 293-319
  12. Feddema J, Lee CSG, Mitchell O. Model-based visual feedback control for a hand-eye coordinated robotic system. Computer. 1992;25(8):21-31
  13. Mezouar Y, Chaumette F. Optimal camera trajectory with image-based control. The International Journal of Robotics Research. 2003;22(10):781-804
  14. Janabi-Sharifi F, Deng L, Wilson W. Comparison of basic visual servoing methods. IEEE/ASME Transactions on Mechatronics. 2011;16(5):967-983
  15. Nematollahi E, Janabi-Sharifi F. Generalizations to control laws of image-based visual servoing. International Journal of Optomechatronics. 2009;3(3):167-186
  16. Wang H, Liu Y-H, Chen W. Uncalibrated visual tracking control without visual velocity. IEEE Transactions on Control Systems Technology. 2010;18(6):1359-1370
  17. Cai C, Dean-León E, Somani N, Knoll A. 3D image-based dynamic visual servoing with uncalibrated stereo cameras. In: IEEE ISR 2013. Seoul, Korea (South): IEEE; 2013. pp. 1-6. DOI: 10.1109/ISR.2013.6695650
  18. Ma Y, Liu X, Zhang J, Xu D, Zhang D, Wu W. Robotic grasping and alignment for small size components assembly based on visual servoing. International Journal of Advanced Manufacturing Technology. 2020;106(11-12):4827-4843
  19. Hao T, Xu D, Qin F. Image-based visual servoing for position alignment with orthogonal binocular vision. IEEE Transactions on Instrumentation and Measurement. 2023;72:1-10. Art. no. 5019010. DOI: 10.1109/TIM.2023.3289560
  20. Chaumette F, Hutchinson S. Visual servoing and visual tracking. In: Siciliano B, Oussama K, editors. Handbook of Robotics. Berlin Heidelberg, Germany: Springer-Verlag; 2008. pp. 563-583. DOI: 10.1007/978-3-540-30301-5.ch25
  21. Wilson WJ, Hulls CCW, Bell GS. Relative end-effector control using Cartesian position based visual servoing. IEEE Transactions on Robotics and Automation. 1996;12(5):684-696. DOI: 10.1109/70.538974
  22. Hashimoto K, Ebine T, Kimura H. Dynamic visual feedback control for a hand-eye manipulator. In: Proceedings of the 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vol. 3. Raleigh, NC, USA: IEEE; 1992. pp. 1863-1868
  23. Zergeroglu E, Dawson D, de Queiroz M, Nagarkatti S. Robust visual-servo control of robot manipulators in the presence of uncertainty. In: IEEE Conference on Decision and Control. Vol. 4. Phoenix, AZ, USA: IEEE; 1999. pp. 4137-4142
  24. Youla D, Jabr H, Bongiorno J. Modern Wiener-Hopf design of optimal controllers - Part II: The multivariable case. IEEE Transactions on Automatic Control. 1976;21(3):319-338. DOI: 10.1109/TAC.1976.1101223
  25. Liu S, Xu D, Liu F, Zhang D, Zhang Z. Relative pose estimation for alignment of long cylindrical components based on microscopic vision. IEEE/ASME Transactions on Mechatronics. 2016;21(3):1388-1398
  26. Liu Y et al. An adaptive controller for image-based visual servoing of robot manipulators. In: 2010 8th World Congress on Intelligent Control and Automation. Jinan, China: IEEE; 2010. DOI: 10.1109/wcica.2010.5554505
  27. Slotine JJ, Li W. On the adaptive control of robot manipulators. International Journal of Robotics Research. 1987;6:49-59
  28. Liu A et al. Resilient adaptive trajectory tracking control for uncalibrated visual servoing systems with unknown actuator failures. Journal of the Franklin Institute. 2024;361(1):526-542. DOI: 10.1016/j.jfranklin.2023.12.011
  29. Qiu Z et al. Adaptive neural network control for image-based visual servoing of robot manipulators. IET Control Theory and Applications. 2022;16(4):443-453. DOI: 10.1049/cth2.12238
  30. Illingworth J, Kittler J. A survey of the Hough transform. Computer Vision, Graphics, and Image Processing. 1988;44(1):87-116
  31. Anonymous. ABB IRB 4600-40/2.55 Product Manual [Internet]. 2013. Available from: https://www.manualslib.com/manual/1449302/Abb-Irb-4600-40-2-55.html#manual [Accessed: December 01, 2021]
  32. Denavit J, Hartenberg RS. A kinematic notation for lower-pair mechanisms based on matrices. American Society of Mechanical Engineers. 1955;23(2):215-221. DOI: 10.1115/1.4011045
  33. Anonymous. ZED 2 Camera and SDK Overview [Internet]. STEREOLABS. Available from: https://cdn2.stereolabs.com/assets/datasheets/zed2-camera-datasheet.pdf
  34. Li R, Assadian F. Role of uncertainty in model development and control design for a manufacturing process. In: Majid T, Pengzhong L, Liang L, editors. Production Engineering and Robust Control. London: IntechOpen; 2022. pp. 137-167. DOI: 10.5772/intechopen.101291
  35. Assadian F, Mallon K. Robust Control: Youla Parameterization Approach. New Jersey: Wiley; 2022
