Control Architecture Design and Localization for a Gas Cutting Robot



Introduction
Conventional control architectures employed in mobile robot control software fall into two categories: knowledge-based and behavior-based. Early implementations of control architecture mainly focused on sensing the environment, modeling it, planning based on this perceived model, and executing the planned action to achieve a certain task. This design approach is called sense-model-plan-act (SMPA), or knowledge-based. A mobile robot using a knowledge-based controller tries to achieve its goal by closely following the sense-model-plan-act procedure. An SMPA controller also needs initial knowledge, required to model its task environment, before the robot executes the planned task. Hence, if the initial knowledge is suited to the working environment, the resulting tasks are guaranteed to succeed. Although the resulting overall behavior is predictable, the controller often suffers from being slow and becomes complex when it deals with a dynamic environment, because most of the controller's processing time is consumed in building a model, doing general perception, and planning. It is therefore a suitable controller for robots that require high-level intelligence and work in a static and predictable environment. Brooks proposed a radically different approach to the design of mobile robot control architecture to address this drawback of knowledge-based control. His control architecture uses a horizontal computation scheme, so that each behavior is a fixed action pattern with respect to the sensory information. When confronted by a dynamic environment, a behavior-based robot can react fast because of the direct coupling between its behaviors and the sensed states. The robot controller can also be built incrementally, making it highly modular and easy to construct. However, this reactive approach still struggles to plan productive and efficient actions in an unstructured environment, because its behaviors are confined to reactions to sensors and to the changing states of other modules.

In this chapter, we address the functional safety problems potentially embedded in the control system of the developed mobile robot and introduce a concept of the Autonomous Poor Strip Cutting Robot (APSCR) control architecture, with a focus on safety, in order to design the walkthrough procedure for each behavior. In section 2, we explain the working environment in which the robot will be manually or autonomously operated. Section 3 explains the control architecture of the APSCR. In section 4, we explain the localization system of the APSCR. Finally, section 5 shows some experimental results, and section 6 presents the conclusions.

During rolling, deformation of the material occurs between rotating, driven rolls. The transporting force during rolling is the friction between the rolls and the processed material. When this tremendous force is applied to the slabs in the wrong direction, poor strips are produced, as shown in Figure 1. Because of these poor strips, the whole facility has to be stopped until they are taken away from the roller conveyor, after being cut into pieces of the right size and weight so that the heavy-weight crane can hold and carry them out. In order to cut a poor strip (generally 1.5 m x 60 m), workers have to put on fire-resistant work wear and perform oxygen cutting on the surface of the strip, whose temperature is over 200 °C. While inspecting the working environment, we gathered strong demands for the development of a new type of automation that can provide both safety and the cutting operation. We carefully determined the requirements with respect to the workers and the technical barriers. We concluded that the APSCR must be equipped with all the safety requirements and be designed to respond to severe faults, whether caused by operator error or by mechanical malfunction.

Technical requirement
We must take the robot's functionality and safety into consideration for all cutting operations, because the working environment is extremely harsh for the robot itself. To fulfill all the tasks, we analyzed the survey results for technically critical requirements from the engineer's point of view, and also took task achievement into consideration from the customer's point of view. Below is a list of important information required for the robot design from the customer's point of view.


The robot must cut a poor strip in a single cutting operation, which means it must not move back to points where cutting is already done. When a flame is not detected, because the flame detection sensor malfunctions or is clogged by heavy dust, the robot must stop all operations.


Poor strips should be cut in the width direction within 10 to 15 minutes, regardless of their thickness.


The cutting of one poor strip (approximately 1.5 m x 60 m) should be completed within 1 hour.


The robot should be equipped with a backfire protection device.


The recoil of the torch at the moment the gas and oxygen are emitted must be taken into consideration.


The maximum thickness of the poor strips is 150 mm.


Electrical sparks must be sealed off, because a gas tank and an oxygen tank are loaded on the robot.

We also took the design of the robot into consideration from the engineer's point of view, as follows:

The torch tip should be designed to rotate automatically.


The ignition system should be automatic.


The robot should move along the guide fence installed at the edge of the conveyor stand.


A backfire protection device should be mounted in the gas pipe.


Workers should control the flow of gas and oxygen manually.


The robot body should support a fully stretched arm (full reach: 3.8 m), so that the links of the arm segments do not sag.


The cutting operation is divided into two steps, pre-heating and blowing. Each step must be accurately controlled by the system or by a worker.


For safety protection against gas explosion, the flame detection sensor should be able to trigger both hardware and software interrupts.


An object detection mechanism, implemented with a highly sensitive ultrasonic sensor, must be considered in order to avoid collisions while moving to the next cutting point.


A weight balance mechanism should be taken into consideration, since the robot's total weight is approximately 700 kg.


The intensity of the flame should not be strong enough to cut the bottom plate under a poor strip.

We collected many opinions and suggestions regarding the robot's functionalities and behaviors. The lists above are a part of the survey results, reflecting what workers considered the fundamental tasks and operations that the robot developer should take into account.

System configuration for APSCR
When we develop industrial robots for an unpredictable environment such as a steelworks, there are many critical requirements, in the sense that the hardware must be complemented by supplementary functionality in software. In practice, however, software and hardware are often developed separately and combined without considering the control architecture. The proposed control architecture mainly makes use of the advantages of conventional reactive control architectures. In this section, we focus on the software architecture, assuming that the hardware architecture is well designed, and describe how the safety walkthrough procedure is incorporated into the reactive controller by utilizing hardware and software interrupts. Figures 3 and 4 illustrate the proposed gas cutting robot mentioned in section 2.

Design of safety interaction for software interrupts
In the context of developing safety-oriented software, the safety protection functionalities in the control architecture are the key to making the robot function as designed, with a focus on safety in this project. The safety protection functionalities are embodied within the task-achieving behaviors as follows.
BEHAVIOR_SettingCuttingParameter

The task-achieving behaviors described above provide the functional mechanism for the APSCR to carry out the poor strip cutting procedure autonomously. Figure 5 shows how the behavior modules depend on each other and how they are grouped according to their functions. To operate the task-achieving behaviors as defined in Figure 6, the APSCR starts by checking its system with BEHAVIOR_SelfWalkThrough and performs the alignment job that pre-positions it for the cutting procedure by calling the functions CheckDistance10cm and InitialAlignemtnwithStand in the Before Cutting stage. After the initial setup is done, the robot measures the thickness of the poor strip (MeasureDepthOfPoorStrip), finds the starting position at which the edge of the poor strip is detected by the roller mounted on the torch holder (SetStartingPostion4Cutting), cuts the poor strip while controlling the speed of the boom and derrick and maintaining the gap between the torch tip and the surface of the strip (StartFiring, Preheating4Cutting, SettingCuttingParameter), and finally detects the finishing edge of the strip (SetfinishigPosition4cutting). After finishing the cutting procedure, the robot moves to the next cutting position (MovingNextCuttingPosition) while monitoring the alignment between the robot and the poor strip stand and keeping the obstacle avoidance modules running (CheckSensorStatus4Navigation). Each of these task-achieving behaviors comprises primitive action modules that support the functional mechanism of the robot and sort out the interactions between the behaviors. Every primitive action module has its own timer or interrupts, because each behavior or primitive action must be carefully considered for safety.
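The cutting cycle above can be sketched as a list of watchdog-guarded behaviors. This is a minimal sketch, not the APSCR code: the Behavior and Robot classes, the timeout values, and the stubbed primitive actions are illustrative assumptions; only the behavior names and their ordering follow the text.

```python
import time

class Robot:
    """Minimal robot state: a flame-fault flag and a log of completed behaviors."""
    def __init__(self):
        self.flame_fault = False
        self.log = []

class Behavior:
    """One task-achieving behavior guarded by its own watchdog timer."""
    def __init__(self, name, action, timeout_s=5.0):
        self.name = name
        self.action = action          # callable(robot) -> True when finished
        self.timeout_s = timeout_s

    def run(self, robot):
        start = time.monotonic()
        while time.monotonic() - start < self.timeout_s:
            if robot.flame_fault:     # safety interrupt: flame lost or sensor clogged
                raise RuntimeError("flame fault: all operations stopped")
            if self.action(robot):
                robot.log.append(self.name)
                return True
        raise TimeoutError(self.name + " exceeded its watchdog timer")

def done(robot):
    # Stub primitive action; the real modules would drive sensors and actuators.
    return True

# The cutting cycle in the order described in the text.
CUTTING_CYCLE = [
    Behavior("SelfWalkThrough", done),
    Behavior("MeasureDepthOfPoorStrip", done),
    Behavior("SetStartingPostion4Cutting", done),
    Behavior("Preheating4Cutting", done),
    Behavior("SettingCuttingParameter", done),
    Behavior("SetfinishigPosition4cutting", done),
    Behavior("MovingNextCuttingPosition", done),
]

def run_cycle(robot):
    for behavior in CUTTING_CYCLE:
        behavior.run(robot)
    return robot.log
```

A raised exception here stands in for the hardware and software interrupts of the chapter: any behavior can abort the whole cycle the moment the flame fault is flagged.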

Design of motion controller
As shown in Figure 6, every primitive action module is implemented so that each sensor's different properties and I/O interface are taken into account. The controller integrates all the modules discussed previously using the safety-oriented reactive control architecture. One significant advantage of encapsulating the behavior modules is that, when programming, we can hide system-level details such as sensor calibration and hardware or software interrupts. More importantly, we do not have to program the relevant detailed safety modules. The motion controller for the gas cutting robot is composed of a microprocessor and peripheral circuits with digital input and output (DIO), an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), a serial communication circuit, and timers. The microprocessor of the controller is the ATmega128 made by Atmel.

Localization for a gas cutting robot
The localization system for the gas cutting robot is a unique sensor system for the indoor localization of industrial mobile robots. It analyzes an infrared image reflected from a passive landmark with an independent ID, and outputs the position and heading angle of the robot with very precise resolution at high speed. It is seldom affected by ambient light sources such as infrared sources, fluorescent lights, and sunshine. The system is composed of an IR projector and an image processing unit, and it computes the position and heading angle with high resolution and at high speed. The landmarks are used by being attached to the ceiling. This localization system does not require any device for synchronization or communication between the robot and a landmark. The area the localization system covers is extended simply by adding landmarks to the ceiling, and the sections can then be easily distinguished from each other by using landmarks with different IDs. The system automatically measures and calibrates the distance between landmarks and the ceiling height. Its greatest advantage is that it is nearly unaffected by environmental lighting such as lamps and sunlight, and it performs excellent localization at night as well as in the daytime.

Background art
In general, to control an indoor mobile robot, it is necessary to recognize the position of the robot. There are two self-localization methods that a robot can perform by itself using a camera.

First, there is the method of using an artificial landmark. A landmark having a certain meaning is installed on a ceiling or a wall, the landmark is photographed by a CMOS image sensor, the landmark is extracted from the image, and the coordinates on the screen are matched with the coordinates of the mobile robot, so that the mobile robot calculates its own localization. Alternatively, the landmark is installed on top of the mobile robot and the CMOS image sensor is installed on the ceiling.

Second, there is the method of using a natural landmark. The ceiling is photographed by a camera; information about structures such as the lightings installed on the ceiling and the straight lines and edges of the interfaces between the ceiling and the walls is extracted, and the mobile robot calculates its own localization using this information.

However, when using an artificial landmark, the landmark may be affected by lightings, and the color information of the landmark may be distorted by sunlight. When using a natural landmark, since the natural landmark is strongly affected by the brightness of the ambient light, and odometer information or another robot position reader is required when recording the position of a landmark feature, a large memory is needed and an additional device is essential. In particular, when there is no illumination, it is very difficult to use a natural landmark. Accordingly, a new self-localization method for mobile robots is required that is robust to lighting and reduces the calculation time of the image processing.

Also, since the two conventional methods described above calculate the position information of the mobile robot from the coordinates of the camera and the coordinates of a landmark attached to the ceiling while assuming that there is no rotation in any direction other than the direction of gravity, the robot position information calculated from an image obtained by the camera may contain many errors when the mobile robot goes over a small mound or is inclined by an external force or by the inertial force of rapid acceleration or deceleration. Although there may be an initial correction for the inclination that occurs when one of the CMOS or CCD sensors used for the camera device is attached to the robot, this initial correction only covers the error introduced at installation, not the error caused by inclination occurring while the robot is actually driving.

One aspect of the present localization system provides a landmark for recognizing the position of a mobile robot, from whose photographed image the position and area information of the mobile robot can be detected, and which can be recognized regardless of indoor lighting. It also provides an apparatus and method for recognizing the position of a mobile robot, in which an image of the landmark is obtained by an infrared camera and the position and area information of the gas cutting robot can be obtained without particular preprocessing of the obtained image. The proposed method further provides an apparatus and method with an inclination correction function, which precisely recognizes the position of the mobile robot by correcting the position information of the landmark image photographed by the infrared camera, using the inclination information of the mobile robot, such as the roll angle and the pitch angle, measured by a two-axis inclinometer.

Technical solution
According to an aspect of the present localization system, the landmark is used for recognizing the coordinates and azimuth information of a mobile robot. The landmark includes a position recognition part, formed of a mark in any position and at least two marks on the X axis and Y axis centered on that mark. The landmark may further include an area recognition part, formed of a combination of a plurality of marks, to distinguish an individual landmark from others.

According to another aspect of the present localization system, an apparatus is provided for recognizing the position of a mobile robot. The apparatus includes: an infrared lighting unit irradiating an infrared ray onto a landmark formed of a plurality of marks reflecting the infrared ray; an infrared camera photographing the landmark and obtaining a binary image; a mark detector labeling the partial images included in the binary image and detecting the marks by using the number and/or dispersion of labeled pixels of each partial image; and a position detector detecting the coordinates and azimuth of the mobile robot by using the centric coordinates of the detected marks. The landmark may include a position recognition part formed of a mark in any position and at least two marks located on the X axis and Y axis centered on that mark.

Also provided is a method of recognizing the position of a mobile robot with an inclination correction function, the method including: (a) obtaining a binary image by irradiating an infrared ray onto a landmark, which includes a position recognition part formed of a mark in any position and at least two marks located on the X axis and Y axis centered on that mark to reflect the infrared ray, and photographing the landmark; (b) detecting two-axis inclination information of the mobile robot with respect to the ground and obtaining the binary image again when the detected two-axis inclination information exceeds a predetermined threshold; (c) labeling the partial images included in the binary image and detecting the marks by using the number and/or dispersion of labeled pixels of each partial image; and (d) detecting the coordinates and azimuth of the mobile robot by using the centric coordinates of the detected marks. The coordinates and azimuth of the mobile robot can be detected by correcting the centric coordinates of the detected marks with a coordinate transformation matrix according to the two-axis inclination information.

An artificial landmark designed to enhance the reflective gain is shown in Figure 7.

Consider the camera model. A straight line drawn from a point M in three-dimensional space to a point C meets a plane r. The point C is designated the optical center, and the plane r the retinal plane. A straight line passing through the point C and perpendicular to the plane r exists, which is designated the optical axis. Generally, the point C is taken as the origin of the camera coordinates, and the optical axis is made identical with the Z axis of the orthogonal coordinate system. After the camera model is determined, the structure of the camera may be expressed with various parameters. These parameters are divided into two kinds, intrinsic parameters and extrinsic parameters. The intrinsic parameters describe the correspondence between points in the camera coordinates, expressed in three-dimensional coordinates, and the points projected onto the retinal plane, expressed in two-dimensional coordinates. The extrinsic parameters describe the transformation between the camera coordinates and the world coordinates. Hereinafter, the intrinsic parameters will be described.
A point $M = (X, Y, Z)^\top$ in the camera coordinates is projected onto the retinal plane at

$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z} \qquad (1)$

wherein $f$ indicates the focal length, that is, the distance between the optical center C and the point c at which the optical axis meets the retinal plane.
The retinal-plane coordinates $(x, y)$ are related to the pixel coordinates $(u, v)$ by

$u = k_u\,x + u_0, \qquad v = k_v\,y + v_0 \qquad (2)$

wherein $k_u$ and $k_v$ are values indicating a scale transformation between the two coordinate systems, and $u_0$ and $v_0$ are the pixel coordinates of the principal point c. The relationship given by Eq. (2) is effective when the CCD array is formed at a perfect right angle. However, since it is actually difficult to form a perfect right angle, a relationship that accounts for this is required. As shown in Figure 8, when the angle formed by the two axes of the pixel coordinates is designated as $\theta$, the relationship between the coordinates on the retinal plane and the pixel coordinates is as follows:

$u = k_u\,x - k_u\cot\theta\,y + u_0, \qquad v = \frac{k_v}{\sin\theta}\,y + v_0 \qquad (3)$
Applying Eq. (1) to Eq. (3), the relationship between the three-dimensional camera coordinates and the pixel coordinates is finally obtained as

$s\begin{bmatrix}u\\v\\1\end{bmatrix} = \begin{bmatrix}\alpha & -\alpha\cot\theta & u_0\\ 0 & \beta/\sin\theta & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}X\\Y\\Z\end{bmatrix}, \qquad \alpha = f k_u,\quad \beta = f k_v \qquad (4)$

As described above, the intrinsic parameters are five: $\alpha$, $\beta$, $\theta$, $u_0$, and $v_0$. Hereinafter, the extrinsic parameters will be described. Generally, points in three-dimensional space are described in coordinates different from the camera coordinates, generally designated as world coordinates. Accordingly, a transformation from the world coordinates to the camera coordinates is required; it can be expressed by a displacement vector indicating the relative position between the origins of the respective coordinate systems and a rotation matrix giving the rotation of each coordinate axis:

$M_c = R\,M_w + t \qquad (5)$

wherein $R$ indicates the rotation matrix and $t$ the displacement vector. Since $R$ includes three independent parameters and $t$ also includes three independent parameters, the number of extrinsic parameters is six. Hereinafter, the camera model will be described using projective geometry.
The pinhole model of a camera may be expressed linearly by using homogeneous coordinates. When a point on the two-dimensional pixel coordinates is defined as $m = [u\ v]^\top$ and the corresponding point on the three-dimensional world coordinates as $M = [X\ Y\ Z]^\top$, the homogeneous coordinates formed by adding 1 as the last term become $\tilde m = [u\ v\ 1]^\top$ and $\tilde M = [X\ Y\ Z\ 1]^\top$. The relationship between the three-dimensional point $M$ and its projection $m$ is expressed using the pinhole model described above as

$s\,\tilde m = A\,[R\ \ t]\,\tilde M \qquad (6)$

wherein $s$ is a scale factor, and $R$ and $t$ are the rotation matrix and displacement vector, respectively, which are the extrinsic parameters. $A$ is the matrix of the intrinsic parameters and is designated the calibration matrix:

$A = \begin{bmatrix}\alpha & \gamma & u_0\\ 0 & \beta & v_0\\ 0 & 0 & 1\end{bmatrix}$
wherein $\alpha$ and $\beta$ correspond to the scale values on the $u$ and $v$ axes, $\gamma$ corresponds to the skewness of the two image axes, and $u_0$ and $v_0$ are the coordinates of the principal point. We next consider the homography between the model plane and its image. Without loss of generality, we assume the model plane is on $Z = 0$ of the world coordinate system. Let us denote the $i$-th column of the rotation matrix $R$ by $r_i$. From Eq. (6), we have

$s\,\tilde m = A\,[r_1\ \ r_2\ \ r_3\ \ t]\begin{bmatrix}X\\Y\\0\\1\end{bmatrix} = A\,[r_1\ \ r_2\ \ t]\begin{bmatrix}X\\Y\\1\end{bmatrix}$

By abuse of notation, we still use $M$ to denote a point on the model plane, but $M = [X\ Y]^\top$ and $\tilde M = [X\ Y\ 1]^\top$. Therefore, a model point $M$ and its image $m$ are related by a homography $H$:

$s\,\tilde m = H\,\tilde M, \qquad H = A\,[r_1\ \ r_2\ \ t] \qquad (7)$

As is clear, the $3\times 3$ matrix $H$ is defined up to a scale factor. Given an image of the model plane, the homography can be estimated. Let us denote it by $H = [h_1\ \ h_2\ \ h_3]$. From Eq. (7), we have

$[h_1\ \ h_2\ \ h_3] = \lambda\,A\,[r_1\ \ r_2\ \ t]$

where $\lambda$ is an arbitrary scalar. Using the knowledge that $r_1$ and $r_2$ are orthonormal, we have

$h_1^\top A^{-\top}A^{-1} h_2 = 0, \qquad h_1^\top A^{-\top}A^{-1} h_1 = h_2^\top A^{-\top}A^{-1} h_2 \qquad (8)$

These are the two basic constraints on the intrinsic parameters, given one homography.
Because a homography has 8 degrees of freedom and there are 6 extrinsic parameters (3 for rotation and 3 for translation), we can only obtain 2 constraints on the intrinsic parameters.
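These two constraints can be checked numerically. The sketch below builds the columns of a homography from a known intrinsic matrix and pose and verifies the orthogonality and equal-norm conditions; the intrinsic values, the pose, and the pure-Python linear algebra are illustrative assumptions, not the chapter's implementation.

```python
import math

def mat_vec(M, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def solve_upper(A, b):
    """Back-substitution: solve A x = b for upper-triangular A, i.e. x = A^(-1) b."""
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# Illustrative intrinsic matrix and a pose whose rotation is about the Z axis.
A = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
c, s = math.cos(0.3), math.sin(0.3)
r1, r2 = [c, s, 0.0], [-s, c, 0.0]     # first two (orthonormal) columns of R
t = [0.1, -0.2, 2.0]

# First two columns of the homography H = A [r1 r2 t].
h1, h2 = mat_vec(A, r1), mat_vec(A, r2)

# Undo A: x1 and x2 recover r1 and r2 up to the scale factor lambda.
x1, x2 = solve_upper(A, h1), solve_upper(A, h2)
dot = sum(a * b for a, b in zip(x1, x2))   # corresponds to h1' A^-T A^-1 h2
n1 = sum(a * a for a in x1)                # corresponds to h1' A^-T A^-1 h1
n2 = sum(a * a for a in x2)                # corresponds to h2' A^-T A^-1 h2
# dot is numerically 0 and n1 equals n2, matching the two constraints.
```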
Note that $A^{-\top}A^{-1}$ actually describes the image of the absolute conic (Luong and Faugeras, 1997). Among nonlinear optimization methods, we use the closed-form solution to solve the camera calibration problem. This solution is obtained by minimizing an algebraic distance which is not physically meaningful, so we refine it through maximum likelihood inference. We are given $n$ images of a model plane, and there are $m$ points on the model plane. Assume that the image points are corrupted by independent and identically distributed noise. The maximum likelihood estimate is then obtained by minimizing the following functional:

$\sum_{i=1}^{n}\sum_{j=1}^{m}\left\| m_{ij} - \hat m(A, R_i, t_i, M_j)\right\|^2 \qquad (20)$

where $\hat m(A, R_i, t_i, M_j)$ is the projection of point $M_j$ in image $i$, according to Eq. (7). A rotation $R$ is parameterized by a vector of 3 parameters, denoted by $r$, which is parallel to the rotation axis and whose magnitude is equal to the rotation angle; $R$ and $r$ are related by the Rodrigues formula (Faugeras, 1993). Minimizing Eq. (20) is a nonlinear minimization problem, which is solved with the Levenberg-Marquardt algorithm (Moré, 1977). It requires an initial guess of $A$ and $\{R_i, t_i\}$, which can be obtained using the technique described in the previous subsection. Up to now, we have not considered the lens distortion of the camera. However, a desktop camera usually exhibits significant lens distortion, especially radial distortion; therefore, we consider only the first two terms of radial distortion. Let

 
$(u, v)$ be the ideal (nonobservable, distortion-free) pixel image coordinates and $(\breve u, \breve v)$ the corresponding real observed image coordinates. With $(x, y)$ the ideal normalized image coordinates, the two are related by

$\breve u = u + (u - u_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right] \qquad (11)$

$\breve v = v + (v - v_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right] \qquad (12)$

Experimentally, we found that the convergence of the above alternation technique is slow. A natural extension of Eq. (20) is then to estimate the complete set of parameters by minimizing the following functional:

$\sum_{i=1}^{n}\sum_{j=1}^{m}\left\| m_{ij} - \breve m(A, k_1, k_2, R_i, t_i, M_j)\right\|^2$

where $\breve m(A, k_1, k_2, R_i, t_i, M_j)$ is the projection of point $M_j$ in image $i$ according to Eq. (6), followed by distortion according to Eqs. (11) and (12) (Zhang, 1997). This is a nonlinear minimization problem, which is solved with the Levenberg-Marquardt algorithm (Moré, 1977). A rotation is again parameterized by a 3-vector $r$. An initial guess of $A$ and $\{R_i, t_i\}$ can be obtained using the technique prescribed above. An initial guess of $k_1$ and $k_2$ can be obtained with the technique described in the last paragraph, or simply by setting them to 0.
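The two-term radial distortion model can be sketched in a few lines. The coordinate values and the $k_1$, $k_2$ coefficients below are made-up illustrations; only the algebra follows Eqs. (11) and (12).

```python
def distort(u, v, x, y, u0, v0, k1, k2):
    """Apply the two-term radial distortion of Eqs. (11) and (12):
    (u, v) are ideal pixel coordinates, (x, y) ideal normalized coordinates."""
    r2 = x * x + y * y
    f = k1 * r2 + k2 * r2 * r2
    return u + (u - u0) * f, v + (v - v0) * f

# A point to the right of the principal point is pushed further out for
# positive k1; the displacement grows with distance from the principal point.
ud, vd = distort(360.0, 240.0, 0.05, 0.0, 320.0, 240.0, 0.1, 0.01)

# With k1 = k2 = 0 the observed and ideal coordinates coincide.
u_same, v_same = distort(360.0, 240.0, 0.05, 0.0, 320.0, 240.0, 0.0, 0.0)
```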
A vector giving the coordinates and the azimuth of the mobile robot may be obtained by using the calibration equation, which is disclosed in detail in several theses, as follows. The transformation projecting a point in world coordinates to camera pixel coordinates will be described with reference to Figure 9. When the roll angle and the pitch angle corresponding to the inclination of the camera are $\phi$ and $\theta$, respectively, the inclination is expressed in matrix form as

$R_x(\phi) = \begin{bmatrix}1 & 0 & 0\\ 0 & \cos\phi & -\sin\phi\\ 0 & \sin\phi & \cos\phi\end{bmatrix}, \qquad R_y(\theta) = \begin{bmatrix}\cos\theta & 0 & \sin\theta\\ 0 & 1 & 0\\ -\sin\theta & 0 & \cos\theta\end{bmatrix}$

The measured image coordinates are first corrected with these matrices; the corrected homogeneous pixel coordinates $\tilde m$ and the corresponding scale parameter $s$ then satisfy Eq. (6):

$s\,\tilde m = A\,[R\ \ t]\,\tilde M$

When the displacement vector $t$ is assumed to be known, the point $M$ in world coordinates may be obtained as

$M = R^{-1}\left(s\,A^{-1}\tilde m - t\right)$

When a point $M$ corresponding to a known reference point is given, the displacement vector $t$ that is finally to be calculated is obtained as

$t = s\,A^{-1}\tilde m - R\,M$

thereby calculating the self-localization of the mobile robot.
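The back-projection step can be made concrete with a round-trip sketch. The simplifications here (identity rotation, zero skew, a made-up intrinsic matrix, and a camera 2 m above the landmark plane) are illustrative assumptions; only the inversion of Eq. (6) follows the text.

```python
def project(A, t, M):
    """Eq. (6) with R = I: s m~ = A (M + t); returns pixel (u, v)."""
    p = [sum(A[i][j] * (M[j] + t[j]) for j in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]

def back_project(A, t, u, v, Z=0.0):
    """Invert Eq. (6) for R = I and gamma = 0: M = s A^(-1) m~ - t,
    with the scale s fixed by the known plane height Z."""
    alpha, u0 = A[0][0], A[0][2]
    beta, v0 = A[1][1], A[1][2]
    d = [(u - u0) / alpha, (v - v0) / beta, 1.0]   # A^(-1) m~ (zero skew)
    s = Z + t[2]                                    # third row: s * 1 - t_z = Z
    return [s * d[i] - t[i] for i in range(3)]

A = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]            # camera 2 m above the landmark plane
u, v = project(A, t, [0.1, 0.0, 0.0])
M = back_project(A, t, u, v)   # round-trip recovers the original point
```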
As described above, the vector quantity giving the coordinates and azimuth of the mobile robot simultaneously may be obtained by a vector operation using the three detected marks of the position recognition part and the calibration equation, which allows the system to be embodied on a low-priced microprocessor.

Coordinates calculation for mobile robot
The proposed localization system includes a landmark indicating position information, such as the coordinates and azimuth of a mobile robot, together with a position recognition apparatus and method. A landmark indicating the position of a mobile robot will be described with reference to Figure 10.
According to an embodiment of the proposed localization system, the landmark is attached to the ceiling of the space in which the mobile robot moves and is photographed by a camera installed on the mobile robot, or it is attached to the top of the mobile robot and photographed by a camera installed on the ceiling, in order to recognize the position of the mobile robot. A landmark according to an exemplary embodiment includes a position recognition part, formed of three marks, to recognize essential position information such as the coordinates and azimuth of the mobile robot, and an area recognition part, formed of a plurality of marks that distinguish an individual landmark from others, to recognize additional area information of the mobile robot. The position recognition part is formed of one mark B in any position and two marks C and A located on the X axis and Y axis, respectively, centered on the mark B. The three marks B, A, and C provide the landmark with a reference point and reference coordinates. Though three marks are shown in Figure 10, the number of marks is not limited to this, and more than two marks may be used. Though the area recognition part is formed of 4×4 marks inside the position recognition part as shown in Figure 10, the positions and the number of the marks forming the area recognition part may be varied according to purpose. By giving an ID corresponding to the number and positions of the marks forming the area recognition part, each individual landmark may be distinguished from others. For example, when the area recognition part is formed of 3×3 marks, 512 IDs can be given. In this case, the positions of the marks forming the area recognition part may be determined according to the reference coordinates provided by the position recognition part, and each ID may be binary coded, thereby quickly recognizing the area information of the mobile robot.

Fig. 10. Design of artificial landmark.

On the other hand, an infrared reflection coating may be applied, or a reflection sheet attached, to the marks forming the landmark in order to diffusely reflect an infrared ray in a certain wavelength band, particularly the band of 800 to 1200 nm. Accordingly, not only at night but also when reflected lights exist, only the infrared ray reflected by the marks is detected by the infrared camera, so the position of the mobile robot is recognized quickly without other image processing methods. In this case, the mark may be formed in the shape of a circle of a predetermined size on a plane, or in the protruding shape of a hemisphere. A mark formed in the shape of a circle or hemisphere makes it easy to obtain the number, dispersion, and centric coordinates of its pixels when detecting the mark. Though the marks may all be formed identically, the marks of the position recognition part are made different from those of the area recognition part in size and/or color, so that the position recognition part is easily distinguished from the area recognition part.
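The binary coding of the area recognition part can be sketched as follows. The row-major bit ordering is an assumption for illustration; the 3×3 case yields the 2^9 = 512 distinct IDs mentioned above.

```python
def encode_landmark_id(grid):
    """grid: 3x3 list of 0/1 flags (1 = mark present), read row by row
    into a 9-bit landmark ID in the range 0..511."""
    bits = [cell for row in grid for cell in row]   # row-major scan
    id_ = 0
    for b in bits:
        id_ = (id_ << 1) | b
    return id_

def decode_landmark_id(id_):
    """Inverse: recover the 3x3 mark pattern from a 9-bit ID."""
    bits = [(id_ >> (8 - i)) & 1 for i in range(9)]
    return [bits[0:3], bits[3:6], bits[6:9]]
```

Because the mark positions are read relative to the reference coordinates given by the position recognition part, the same pattern decodes to the same ID from any viewing angle.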
The marks forming the landmark described above may also be applied to a conventional mobile robot position recognition apparatus that does not use an infrared camera; the use of the marks is not limited to the position recognition apparatus of the exemplary embodiment of the proposed localization system.
Next, an apparatus and method for recognizing the position of a mobile robot, according to an exemplary embodiment of the proposed localization system, will be described in the order of operations. This embodiment may be applied when the space in which the mobile robot moves, or the surface to which a landmark is attached, has no bend and is flat. An infrared light emitting diode (LED) irradiates the landmark with infrared light, and the image reflected by the marks forming the position recognition part is photographed by a camera, thereby obtaining a binary image. Namely, each mark in the image obtained by the camera appears as a bright spot close to white, and the image is converted into a binary image by selecting a predetermined threshold brightness value. Considering the camera in detail, it includes a plurality of infrared LEDs, an infrared light controller, a CMOS array, and an image processing controller around a wide-angle lens. The camera is installed either on the mobile robot or on the ceiling of the space in which the robot moves, to obtain an image of the landmark attached to the ceiling, a wall, or the top of the robot. Each bright partial image in the binary image is labeled, and the marks are detected from the number and/or dispersion of the labeled pixels. Here, labeling denotes the procedure of recognizing each individual partial image, giving it a reference number, and building a label list that records the position and size of each bright partial image in the binary image.
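The thresholding and labeling steps above can be sketched as follows; this is a minimal pure-Python version (4-connected breadth-first labeling), not the embedded implementation used on the robot.

```python
from collections import deque

def binarize(image, threshold):
    """Convert a grayscale image (list of rows) to 0/1 by thresholding."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def label_components(binary):
    """4-connected component labeling. Returns a label list of
    (label, pixel_count, (centroid_row, centroid_col)) tuples."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    label_list, next_label = [], 1
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                # Flood-fill one bright region, collecting its pixels.
                queue, pixels = deque([(r, c)]), []
                labels[r][c] = next_label
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                n = len(pixels)
                cy = sum(p[0] for p in pixels) / n
                cx = sum(p[1] for p in pixels) / n
                label_list.append((next_label, n, (cy, cx)))
                next_label += 1
    return label_list
```

The returned pixel counts and centric coordinates are exactly the quantities the mark-detection steps below filter on.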
After the labeling, centric coordinates are obtained for each label, and the marks are detected from the number and/or dispersion of the labeled pixels. There are various methods of detecting a mark from a label list. One method limits the number of pixels forming a label: since a mark is formed in the shape of a circle of uniform size, only a label having approximately the expected number of pixels is kept as a mark candidate, and labels with substantially more or fewer pixels are deleted from the label list.
Another method determines a predetermined dispersion value, corresponding to a dispersion index with respect to the centric coordinates of each label, and deletes from the label list those labels whose pixels are not clustered, thereby determining the mark candidates; this works because the pixels of a mark are clustered in the shape of a circle. The two methods of detecting a mark from labels may be used separately or together as necessary. When only the marks of the position recognition part exist in the landmark, the three marks may be detected by the above methods alone. However, when marks of the area recognition part of the same size as those of the position recognition part are present, the marks of the position recognition part must be separated from the rest by an additional step: among the mark candidates, three labels are sought whose distances from each other are similar and which are arranged at a right angle. For example, the inner products of the vectors connecting candidate labels are computed, and the triple whose inner product is closest to zero, corresponding to a right angle, is selected. When the indexes of the labels corresponding to A, B, and C of Figure 10 are designated i, j, and k, the triple is obtained as

(i, j, k) = argmin_{(i, j, k)} | <c_i - c_j, c_k - c_j> |, subject to |c_i - c_j| ≈ |c_k - c_j|,   (6)

where c_i denotes the centric coordinates of label i and < , > denotes the inner product.

When the existence and positions of the marks have been recognized by using Eq. (6), the identifier (ID) of the landmark may be easily obtained by calculating each area-recognition position from the reference coordinates and detecting whether a label exists at that position. Position information, such as the coordinates and azimuth of the mobile robot, and area information are then derived from the detected marks. The ID, determined by the number and positions of the marks of the area recognition part, may be obtained quickly, yielding the area information of the mobile robot. This area information is allocated to the ID and gives the approximate position in which the robot is located. Detailed position information, such as the coordinates and the azimuth, may be obtained from the centric coordinates of the three detected marks A, B, and C of the position recognition part. According to an exemplary embodiment of the present invention, the coordinates of the mobile robot may be obtained by taking as reference coordinates a point derived from the centric coordinates of the three marks A, B, and C shown in Figure 10, for example their center of gravity. Since the center of gravity averages the errors of the three centric coordinates, the error of the robot coordinates obtained this way is reduced. The azimuth of the mobile robot may be obtained from a direction vector derived from the three centric coordinates, for example the sum of the vector from B to A and the vector from B to C.
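The right-angle triple search and the pose computation described in this section can be sketched as follows. The near-zero inner product test and the 20% side-length tolerance are our assumptions about the selection criterion, not values from the chapter.

```python
import math
from itertools import permutations

def find_position_marks(centroids, length_tol=0.2):
    """Among candidate mark centroids, find the triple (A, B, C) whose
    vectors B->A and B->C are closest to perpendicular and of similar
    length, i.e. minimize |<c_i - c_j, c_k - c_j>|. Returns the index
    triple (i, j, k) with j the corner mark B, or None if no triple fits."""
    best, best_score = None, math.inf
    for i, j, k in permutations(range(len(centroids)), 3):
        if i > k:          # (i, j, k) and (k, j, i) describe the same corner
            continue
        ax = centroids[i][0] - centroids[j][0]
        ay = centroids[i][1] - centroids[j][1]
        cx = centroids[k][0] - centroids[j][0]
        cy = centroids[k][1] - centroids[j][1]
        la, lc = math.hypot(ax, ay), math.hypot(cx, cy)
        if la == 0 or lc == 0 or abs(la - lc) / max(la, lc) > length_tol:
            continue       # arms must be of similar, nonzero length
        score = abs(ax * cx + ay * cy) / (la * lc)  # normalized inner product
        if score < best_score:
            best, best_score = (i, j, k), score
    return best

def robot_pose(a, b, c):
    """Pose from the three position marks: coordinates are the center of
    gravity of A, B, C; the azimuth is the angle of (B->A) + (B->C)."""
    x = (a[0] + b[0] + c[0]) / 3.0
    y = (a[1] + b[1] + c[1]) / 3.0
    dx = (a[0] - b[0]) + (c[0] - b[0])
    dy = (a[1] - b[1]) + (c[1] - b[1])
    return (x, y), math.degrees(math.atan2(dy, dx))
```

The brute-force search over triples is acceptable here because only a handful of mark candidates survive the pixel-count and dispersion filters.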
A vector allowing both the coordinates and the azimuth of the mobile robot to be known may be obtained by using a calibration equation, which is discussed in detail in several theses (Hartley, 1994; Liebowitz and Zisserman, 1998).
The calibration equation is as follows:

s m_0 = A (R_0 M + t_0),   (30)

where m_0 is the projected pixel coordinates corresponding to a reference position of the mobile robot, R_0 and t_0 are the corresponding rotation matrix and displacement vector, s is a scale value, A is the calibration matrix, m_1 is the pixel coordinates of the position to which the mobile robot has rotated and moved, R_1 is the matrix corresponding to the amount of rotation, and t_1 is the displacement vector. Eqs. (31) and (32) may be obtained from Eq. (30):

M = R_0^-1 (s A^-1 m_0 - t_0),   (31)

t_1 = s A^-1 m_1 - R_1 M.   (32)
A coordinate value M is obtained by using Eq. (31), and the obtained value is substituted into Eq. (32). In Eq. (32), R_1 may be calculated by using the sum of vectors of the recognized marks. Since all values in Eq. (32) except t_1 are then known, the displacement vector t_1 may be calculated; that is, the displacement vector of the mobile robot may be obtained by using Eq. (32). As described above, the vector allowing the coordinates and the azimuth of the mobile robot to be known simultaneously may be obtained from the three detected marks of the position recognition part and a vector operation using the calibration equation, which makes it possible to implement the method on a low-cost microprocessor. In addition, the brief area information and the detailed position information, such as the coordinates and azimuth of the mobile robot, may be converted into code information, which is transmitted to the mobile robot to perform the necessary operations.

An apparatus and method of recognizing the position of a mobile robot with an inclination correction function, according to another embodiment of the proposed localization system, will be described with reference to Figure 11; the entire flowchart of the localization algorithm is depicted in Figure 12. Position information, such as the coordinates and azimuth of the mobile robot, is detected by using the detected marks together with the two-axis inclination information. The detailed position of the mobile robot, such as its coordinates and azimuth, may again be obtained from the centric coordinates of the three detected marks A, B, and C of the position recognition part.
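Under the flat-floor assumption, the vector operations of Eqs. (31) and (32) reduce to small matrix arithmetic. The following is a simplified planar (2×2) sketch of that computation; the matrices and test values are illustrative assumptions, and the real system uses the full camera calibration matrices.

```python
import math

def mat2_inv(m):
    """Inverse of a 2x2 matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def mat2_vec(m, v):
    """2x2 matrix times a 2-vector."""
    (a, b), (c, d) = m
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

def rot2(theta):
    """2x2 rotation matrix for angle theta in radians."""
    return ((math.cos(theta), -math.sin(theta)),
            (math.sin(theta), math.cos(theta)))

def displacement(m0, m1, A, R0, t0, R1, s=1.0):
    """Planar sketch of Eqs. (31)-(32): recover M from the reference
    observation m0, then solve for the displacement t1."""
    # Eq. (31): M = R0^-1 (s A^-1 m0 - t0)
    p = mat2_vec(mat2_inv(A), m0)
    p = (s * p[0] - t0[0], s * p[1] - t0[1])
    M = mat2_vec(mat2_inv(R0), p)
    # Eq. (32): t1 = s A^-1 m1 - R1 M
    q = mat2_vec(mat2_inv(A), m1)
    rM = mat2_vec(R1, M)
    return (s * q[0] - rM[0], s * q[1] - rM[1])
```

With A and R_0 set to the identity and t_0 = 0, a point M observed at m_0 and re-observed at m_1 after a known rotation R_1 yields the translation directly, mirroring the statement that only t_1 is unknown in Eq. (32).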

Evaluation
A mobile robot can identify its own position relative to landmarks whose locations are known in advance. The main contribution of this research is that it gives various ways of reducing the self-localization error by referring to special landmarks developed from high-gain reflection material with coded mark arrays. In order to validate the proposed localization system, we developed an embedded system using the TMS320DM640 from Texas Instruments, and the proposed localization system was then tested on the mobile robot. The schematic diagram of the embedded system is depicted in Figure 13, and the embedded system mounted on the mobile robot is shown in Figure 14. The localization system is composed of a microprocessor, a CMOS image sensor with a digital bus, a serial communication circuit, and an infrared LED driver, as shown in Figure 14. The calibration system for camera distortion has a 3-axis motion controller in order to control the posture of the reference plates. The reference images for camera calibration are shown in Figure 15; these are acquired by the calibration system shown in Figure 26. In order to find the labels in the reference images automatically, we use Jarvis's march algorithm. This is perhaps the most simple-minded algorithm for the convex hull, and yet in some cases it can be very fast. The basic idea is as follows: start at some extreme point, which is guaranteed to be on the hull. At each step, test each of the points, and find the one which makes the largest right-hand turn.
That point has to be the next one on the hull. Because this process marches around the hull in counter-clockwise order, like a ribbon wrapping itself around the points, the algorithm is also called the "gift-wrapping" algorithm. The accuracy of self-localizing a mobile robot with landmarks is evaluated based on the proposed indices, as shown in Figure 17, together with a rational way to reduce the computational cost of selecting the best self-localizing method. The simulation results show high accuracy and good performance, as depicted in Figure 16. The localization errors of the proposed algorithm are shown in Figures 17(a) and (b); the peak error is less than 3 cm, as shown in Figure 17.
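The gift-wrapping idea above can be sketched as follows; this is a textbook version for points in general position, not the embedded implementation.

```python
def jarvis_march(points):
    """Gift-wrapping convex hull: start from the leftmost point, which is
    guaranteed to be on the hull, and repeatedly wrap to the next hull
    point in counter-clockwise order. Assumes points in general position."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means b is left of o->a
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    if len(points) < 3:
        return list(points)
    hull = []
    start = min(points)            # leftmost (then lowest) point
    p = start
    while True:
        hull.append(p)
        # Candidate next point; any point other than p will do to start.
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if cross(p, q, r) < 0:  # r makes a harder right turn than q
                q = r
        p = q
        if p == start:             # wrapped all the way around
            break
    return hull
```

Each wrap step is O(n), so the whole hull costs O(n h) for h hull points, which is fast when the hull is small, exactly the case noted above.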

Simulations and experiments
One of the most important considerations in the cutting procedure is to use the proper cutting torch according to the hot slab thickness, which ranges from 50 mm to 300 mm, because choosing the right torch tip has a major effect on cut quality and on the amount of oxygen and gas consumed. For automated oxygen cutting, which blows the hot slab cobble into two pieces with the oxygen cutting torch mounted at the end of the robot arm, there are several steps to consider:
Step 1: ignition of the torch with a spark.
Step 2: control of the flammable gas mixed with oxygen without human intervention.
Step 3: verifying that the metal is hot enough to melt.
Step 4: detecting the end of blowing out of the melt.
In order for the torch to be lit with a spark, we installed a stepping motor with a 300 mm guided arm carrying an ignition plug of the type used in car engines. When the signal is on, the arm rolls out all the way to the bottom of the torch tip and sparks until the gas is lit, and the oxygen is then adjusted automatically based on the oxygen insertion table. Before cutting, we must preheat the starting area. In order to locate the starting area, we used a limit switch mounted inside the torch holder wheel. The decision signal that determines the preheating status depends on a human decision. The most challenging task in oxygen cutting is to hold the flame on the focused area while moving at the proper speed along the cut to the end. To determine the proper speed for each slab thickness, we designed a table that maps slab thickness to cutting speed. Figure 18 shows pictures of the gas cutting experiments. In summary, the first procedure of the developed oxygen cutting robot for the hot slab cobble is to choose the proper torch tip based on the slab thickness. Secondly, the ignition procedure is operated based on Table 1. Thirdly, the starting area is located automatically by using the limit switch inside the torch holder. At last, the human inspects the right condition for preheating and sends the "Moving Forward" command to the robot.
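The thickness-to-parameters lookup described above can be sketched as a small table-driven function. The tip numbers and speeds below are placeholder values for illustration only; the real values come from Table 1 and the speed map designed for the robot.

```python
# Hypothetical thickness -> (torch tip, cutting speed) table. Real values
# are taken from Table 1, not from this sketch.
CUTTING_TABLE = [
    # (max thickness in mm, torch tip number, cutting speed in mm/min)
    (100, 1, 450),
    (200, 2, 350),
    (300, 3, 250),
]

def select_cutting_params(thickness_mm):
    """Pick a torch tip and cutting speed for a slab thickness in 50-300 mm."""
    if not 50 <= thickness_mm <= 300:
        raise ValueError("slab thickness out of the supported 50-300 mm range")
    for max_thickness, tip, speed in CUTTING_TABLE:
        if thickness_mm <= max_thickness:
            return tip, speed
```

Rejecting out-of-range thicknesses before the lookup mirrors the safety-oriented design: the controller refuses to cut outside the range the torch tips were chosen for.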

Conclusion
In this chapter, we proposed the safety-oriented reactive controller implemented in an innovative and robust industrial mobile robot designed for cutting a poor strip in the steelworks. This mobile robot, called "APSCR", which provides both fully autonomous operation and wireless controlled operation, is designed with functional safety in mind because it controls the gas and oxygen without human intervention. To guarantee safety, we encapsulated the robot controller, including the behavior and primitive action modules, so that the behaviors do not need to care how the safety protection mechanism works behind the hardware and software. We also proposed a set of indices to evaluate the accuracy of self-localizing methods using the selective reflection landmark and infrared projector; the indices are derived from the sensitivity enhancement obtained through 3D distortion calibration of the camera. The accuracy of self-localizing a mobile robot with landmarks was then evaluated based on these indices, and a rational way to reduce the computational cost of selecting the best self-localizing method was proposed. The simulation results show high accuracy and good performance. With these preliminary results, we have demonstrated the robustness and reliability of the proposed control architecture. To prove that APSCR can be used in the steelworks, we still have to perform further experiments with mechanical modifications.

Fig. 1. Picture of poor strip produced during hot rolling.

Fig. 8. Camera model, and the case of an out-of-square CCD array.
Referring to Figure 8, the relationship between a point M_C = [X_C, Y_C, Z_C]^T in camera coordinates and the corresponding point m_r = [u_r, v_r]^T on the retinal plane is provided as follows.

When a point M_w in world coordinates is transformed into M_c in camera coordinates, the relationship between M_w and M_c is shown as follows.

Fig. 9. Diagram illustrating a coordinate system when a camera rotates.

Fig. 11. The layout of the proposed localization system.
The apparatus of Figure 11 further includes two-axis inclinometers. An infrared light emitting diode (LED) irradiates the landmark, and the image reflected by the marks of the position recognition part is photographed by a camera and converted into a binary image by selecting a predetermined threshold brightness value. Considering the camera in detail, as shown in Figure 11, it includes a plurality of infrared LEDs, an infrared light controller, a CMOS array, a vision controller, and two-axis inclinometers around a wide-angle lens. The camera is installed on the mobile robot to obtain an image of the landmark attached to the ceiling or a wall. The entire flowchart of the localization algorithm is depicted in Figure 12. Position information, such as the coordinates and azimuth of the mobile robot, is detected by using the detected marks together with the two-axis inclination information, and the detailed position is obtained from the centric coordinates of the three detected marks A, B, and C of the position recognition part.

Fig. 16. Calibration errors for the camera.
The point c indicates the principal point. The image formed on the retinal plane is sampled by the CCD or CMOS array, converted into a video signal, output from the camera, and stored in a frame buffer. Accordingly, the finally obtained coordinate value is not a coordinate value on the retinal plane but a pixel coordinate value. When pixel coordinates corresponding to r

Table 1. Oxygen and LPG insertion table.