
Technology » "Mine Action - The Research Experience of the Royal Military Academy of Belgium", book authored by Charles Beumier, Damien Closson, Vinciane Lacroix, Nada Milisavljevic and Yann Yvinec, ISBN 978-953-51-3304-9, Print ISBN 978-953-51-3303-2, Published: August 30, 2017 under CC BY-NC 4.0 license. © The Author(s).

Chapter 2

Positioning System for a Hand-Held Mine Detector

By Charles Beumier and Yann Yvinec
DOI: 10.5772/65784




Humanitarian mine clearance aims at reducing the nuisance of regions contaminated by explosive devices. These devices need to be detected with a high rate of success while keeping a low false alarm rate to reduce time losses and operator fatigue. This chapter describes a positioning system developed to track hand-held detector movements in the context of close-range mine detection. With such a system, the signals captured by the detector over time can be used to build two- or three-dimensional data. Objects possibly present in the data can then be visually assessed by an operator to detect specific features such as shape, size or known signatures. The positioning system, developed in the framework of the HOPE European project, requires only a camera and an extra bar. It adds few constraints to current mine clearance procedures and requires limited additional hardware. The software developed for calibration and continuous acquisition of the position is described, and evaluation results are presented.

Keywords: mine detection, hand-held detector, positioning system, camera calibration

1. Introduction

For several decades, humanitarian mine clearance has received much attention as a way to reduce the nuisance of contaminated regions. Although the recent trend in demining consists in reducing the search area thanks to heterogeneous sources of information [1], the actual detection of explosive devices still needs to be addressed. It requires high detection rates to maximise the chance of detection while keeping a low rate of false alarms to avoid losses of time and energy and to reduce operator fatigue.

Metal detectors are possibly the most popular equipment for detecting mines. The fact that mines may contain less and less metal has forced manufacturers to produce ever more sensitive metal detectors. As a result, metal detectors frequently detect small harmless pieces of metal, each of which takes time for the clearance teams to investigate. One possible solution consists in showing the operators a display of what lies below the ground surface. Another is to consider other mine detectors such as a ground-penetrating radar (GPR), with the possibility of combining both options.

In the European ESPRIT project Hand-held Operational Demining System (HOPE [2], 1999–2001), higher detection rates and lower false alarm rates were expected from the combination of a metal detector (MD), a GPR (sensitive to metallic and dielectric objects) and a microwave radiometer (MWR, indicating flushed objects). Tracking the position of the sensors during scanning enabled the reconstruction of registered images, allowing for the fusion of the sensor signals and for the discrimination of objects from their shape and size. The advantage of image analysis had already been shown in [3, 4]. However, since a hand-held detector was considered, as opposed to a robotic one [5], sensor head positioning was not trivial.

In the late 1990s, local positioning was commonly based on triangulation, scene analysis or proximity, possibly using time-of-flight, infrared or ultrasound emitters/receptors or magnetic sensors [6]. In the HOPE project, we opted for a camera attached to the hand-held detector to report positions relative to a marker bar laid on the ground close to the area to be scanned.

This chapter describes the optical positioning system developed in the framework of the HOPE project. The different parts of the system are described, such as the camera attached to the detector, the marker bar used for reference and the trigger box aimed at synchronising sensor data and positioning. The mathematical solution for positioning the camera, and finally the detector, relative to the bar is explained. A specific calibration procedure was developed to deliver positions of the detector relative to the bar. The image processing tools for positioning and for calibration are detailed. Finally, the evaluation of the system is given in terms of positioning and orientation precision.

2. Methodology

2.1. Necessity for a positioning system

The major role of the positioning system for a hand-held detector is to enable the reconstruction of an image of the detector response from a series of individual measurements obtained over time while scanning the area. This image may serve several purposes. First, it represents a map of detector responses, providing a reference for later device removal. Secondly, it may give a better view and description of the scene, such as the object size and shape. Thirdly, the map indicates the scan progress and the areas still to be scanned before moving further.

Besides these direct products of the reconstructed image, the registration of the image pixels allows for the fusion of several captures, possibly from different detectors. The image(s) can be processed by traditional or new image processing tools. For instance, noise reduction by low-pass filtering is efficient and trivial on an image but quite imprecise on a sequence of measures with no localisation.

In the specific case of hand-held detectors for buried mine detection, the positioning system should also be able to capture the sensor orientation. A non-vertical orientation of the detector may lead to a shifted localisation of an object in the map. Recording the angles of the detector makes it possible to compensate for this effect.

2.2. Requirements

In the case of hand-held detectors, we cannot rely on mechanical systems to provide position. The freedom of movement makes position estimation more difficult.

The required precision is high since the localisation of suspect objects is of prime importance for their removal. The a priori precision requirement was set to 2 cm for positions and 1° for angles.

The solution must be resilient to practical conditions such as changing weather, varying light conditions and uncontrolled movements of the operator. In particular, should the positioning system fail for some measurements, it should be able to resume delivering positions without a lengthy restart procedure.

Finally, the developed solution should be consistent with the demining procedure so that it does not disturb the operator and does not ruin the chances of adoption by the demining team.

2.3. Discussion of possible methods

At the time of the HOPE project, there was no existing system for position estimation of a hand-held mine detector.

One classic way to localise an object is to attach an emitter to it and use triangulation from a few receivers. The setup requires some arrangement to position the receivers correctly, with good reception and appropriate distance and angle for precision. This is not trivial in real conditions, especially on unfriendly terrain. In the context of a team demining several lanes in parallel, the system installation may become complex, especially if there are interferences.

With the availability of cost-effective, light and small image sensors, positioning has also been addressed by image matching. One possible solution consists in matching the part of the ground seen by a camera attached to the hand-held detector [7]. A global picture of the area must first be acquired before starting the scan. Alternatively, the localisation can be made relative, picture after picture during the scan, but the drift has to be compensated for at the end of the procedure, and a matching failure during the scan may force the operator to abandon and restart it. The problem with the image matching approach is that it relies entirely on the existence of image features to be matched. These are typically colours, corners, lines or areas that have to be present and properly detected to bring images into correspondence. The matching is highly sensitive to these features, whose detectability strongly depends on image quality. The presence of shadows (cast by the operator or by the detector) and changes in illumination (due to reflections or clouds) are practical conditions which prevent image systems from functioning perfectly.

2.4. Selected method: the optical position monitoring system (OPMS)

In a common demining procedure, the suspected area to be cleared of explosive devices is divided into lanes of about 1 m width. A bar is laid on the ground in front of the deminer to materialise the limit up to which the lane is considered safe, having been inspected and, if necessary, cleared of mines. In the context of the HOPE project, it was found that this bar, used in most manual demining procedures, can appropriately serve as spatial reference.

We decided to realise positioning by attaching a camera to the arm of the hand-held detector and localising this camera relative to a marker bar laid on the ground. In a feasibility study, we considered adding an accelerometer to measure the angular deviation from the vertical direction [8, 9]. Due to the significant noise in angular measures and the drift associated with acceleration measurements, we abandoned the accelerometer and turned to a higher-quality camera, deriving the orientation from images of squares of known size printed on the marker bar.

3. A positioning system for hand-held detectors

3.1. Context

At the time of the HOPE project, building a camera-based positioning system was quite a challenge. Special attention had to be paid to designing a precise and near real-time system given the camera speed and resolution, the available standards for image transfer and the limited computational power.

3.2. Principle

In order to get geometrical data and to make correspondences between different sensors for fusion, a position monitoring system is attached to the detector to assign positions to the data acquired during scanning. It consists of a camera acquiring images of a marker bar laid on the ground being scanned. Reference points of the marker bar are tracked by the camera, which allows for 3D camera positioning relative to the bar. The bar must not be moved during scanning. It was decided that the marker bar should be laid at the end of the search area (see Figure 1).


Figure 1.

Typical mine clearance setting.

The safety bar used by some EOD teams is not needed by the optical position monitoring system (OPMS). Requesting the marker bar to be placed at the end of the search area is in accordance with some typical mine clearance procedures (e.g. the Belgian EOD procedure), which rely on two bars (one before and one after the search area). The new marker bar position reduces the risk of accidental displacement, either by the operator’s feet or when scanning the area with the detector. Moreover, the camera can look forward, reducing the occlusion of the bar by the detector and allowing for the adjustment of the carrying bar (‘ang’ in Figure 2) without moving the camera relative to the detector (Figure 3).


Figure 2.

Setting of the camera on the detector.


Figure 3.

System at work.

3.3. System description

The proposed positioning system consists of five components which are detailed in this section. A marker bar with a specific pattern, positioned at the end of the search area, serves as the geometrical reference. A camera tracks the marker bar, from which the camera position is derived in a reference axis system linked to the bar. A calibration grid is used to estimate the OPMS parameters necessary for correct three-dimensional position estimation. A trigger box sending trigger signals to the camera helps synchronise the capture of position with other sensor data (GPR, MD, MWR). Finally, a portable computer gets camera images in real time, assigns timestamps for synchronisation with other sensor data and derives calibration parameters and XYZ positions.

In the context of HOPE, the portable computer is connected to a controlling computer (the Master PC) which sends acquisition requests, displays position data and manages the database for data collection.

3.3.1. The marker bar

The marker bar is 1 m long and 3 cm wide. A black pattern consisting of 33 squares (2 × 2 cm), surrounded by small rectangles coding each square’s identity, is printed on a white background (see Figure 4). The squares are regularly spaced on a planar and rigid bar. Thanks to this regularity, the noise present in image localisation is partially compensated by a least-squares calculation.


Figure 4.

Part of the marker bar.

3.3.2. The camera

We considered a digital camera with at least 1000 pixels in one direction to offer maximal precision across the small dimension of the bar, which is only 3 cm wide. This direction was oriented perpendicularly to the bar to obtain more precision on the size of the squares. Compared to the analogue cameras still very common in 2000, a digital camera appeared more adequate for subpixel estimation of the square edges.

With the additional constraint of real-time capture to get enough positions along the trajectory, we selected a SONY camera, model XCD-X700, delivering up to 15 images per second at 1024 × 768.

This model was equipped with FireWire for connection to a portable computer through a PCMCIA card. A repeater was used for electrical power, allowing for a total cable length of 20 m. This camera had an electronic shutter control and an external trigger that allowed for capture synchronisation.

The optics was chosen for its large field of view with acceptable distortion. Since the image sensor of the XCD-X700 was large (1 inch), a 6 mm focal length lens allowed for a capture area of about 50 × 70 cm at a distance of 40 cm.

3.3.3. The calibration grid

Two calibration procedures are required by the system. The first estimates the camera parameters needed to make correct 3D measurements (“intrinsic calibration”). The second (“external calibration”) estimates the transform giving the position of the sensor from the estimated camera position. Figure 5 shows the grid used for calibration. The line crossings of the grid are accurate reference points which allow for intrinsic calibration if several images are captured from different points of view. For external calibration, the camera is placed at a precise position on the grid (represented by the footprint of the sensor).


Figure 5.

The calibration grid.

3.3.4. The trigger box

The aim of the positioning system is to track the position of the sensors during scanning. For precise estimation, it is necessary to synchronise position gathering with sensor data acquisition. A common TTL-rising signal is sent to the sensors (GPR, MD, MWR and trigger box of the OPMS) at the start of acquisition. From this TTL signal, the trigger box issues a clock signal with a period of 100 ms to the camera trigger. The camera was operated at a rate of ten images per second, below the limit of 15, to allow for the desired range of shutter speeds.

3.3.5. The OPMS computer

The OPMS computer hosts the image acquisition interface and the procedures for calibration and position estimation involving image processing. In the context of HOPE, considering a system with several sensors, a Master PC centralises data acquisition requests, data display and database management. Acquisition controls such as the mode of operation (among “image capture”, “calibration” and “position estimation”) and the shutter or gain setting of the camera are modified through a graphical user interface running on the OPMS computer. Images and calibration or position data are locally saved on the OPMS computer on request.

3.4. Functional description

We describe here the interaction between the different components and the algorithms that achieve position estimation. This functional description focuses on the OPMS but briefly describes the role of the Master PC, presented in Section 3.4.1. The integration of the OPMS in the detector (consisting of several sensors) requires solving data synchronisation, the topic of Section 3.4.2. The third subsection presents the user interface that gives access to the functionality and options of the OPMS. The three main operational modes of the OPMS, namely image acquisition, calibration and position estimation, are described in the last three subsections.

3.4.1. The Master PC

The OPMS can be driven by a computer, here called the Master PC. In the HOPE context, this PC hosts the database, a graphical man-machine interface (MMI) for data display and a communication protocol (through Ethernet) between the PCs driving the sensors.

In the remote acquisition mode of the OPMS, when a scan is requested through the MMI of the Master PC, the latter issues a “software start” asking the sensors to get ready and provides a filename under which the data will be stored. A “hardware start” is then sent to the sensors as a rising TTL signal. This signal is used as the reference time for synchronisation of the sensors. It fires the clock of the trigger box, which starts to send triggers periodically (every 100 ms) to the camera. The acquired images are captured by the OPMS application, which either saves images, extracts reference points for position estimation or extracts grid nodes for calibration, according to the selected mode of the OPMS. The extracted data is saved locally on the OPMS PC under the filename assigned by the Master PC database, containing the date and time for easy later reference. The position data is also sent to the MMI of the Master PC for real-time display and direct evaluation and control by the operator. The operator interacts with the MMI of the Master PC to send a “software stop” to the sensors to stop signal acquisition (Figure 6).


Figure 6.

The OPMS synoptic.

3.4.2. Synchronisation

Because the signals from the different sensors (MD, GPR, and MWR) must be synchronised for fusion and image reconstruction, a consistent timestamping procedure is required.

Since the precision of time measurements delivered by the PC cannot be guaranteed due to possible operating system latency, and because the transfer of the image from the camera to the PC takes an undetermined time, an external synchronisation mechanism has been developed based on the TTL signal (“hardware start”) sent by the Master PC to all the sensors at the start of acquisition. When the OPMS trigger box receives the TTL-rising signal, it fires a clock signal that sends trigger signals to the camera. The TTL-rising signal is used as the starting reference, and the image acquisition timestamps are derived from the periodicity of the clock issued by the trigger box.
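This timestamping scheme can be sketched as follows; the 100 ms period comes from the text, while the function name and the millisecond representation are illustrative:

```python
def image_timestamps(t0_ms, n_images, period_ms=100):
    """Derive image timestamps from the TTL 'hardware start' time t0_ms and
    the fixed period of the trigger-box clock, rather than from the PC clock,
    whose latency cannot be guaranteed."""
    return [t0_ms + i * period_ms for i in range(n_images)]

# Five images triggered after a hardware start at t = 0
stamps = image_timestamps(0, 5)
```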

3.4.3. User interface

The user interface described below presents the functionality of the OPMS. The minimum functionality (start and stop acquisition, real-time display of acquired data, filename assignment for storage) was located on the Master PC, a few metres behind the HOPE detector. The OPMS computer, hosting data acquisition, camera settings control, and position and calibration algorithms, was controlled remotely by the Master PC thanks to a network application (through an Ethernet connection) which makes the OPMS graphical interface accessible on the Master PC’s screen. In real operation, the OPMS computer is intended to be a screenless computation unit carried in the backpack.

Three main operational modes are available and detailed in the following subsections: “Image acquisition”, which acquires one or several images from the camera; “Calibration”, which acquires images, extracts reference points from the calibration grid and applies the calibration algorithm; and “Position estimation”, which acquires images, extracts reference points from the marker bar and estimates the sensor position.

Image capture can be either single (one shot) or continuous, and can be controlled by an external trigger. The image quality can be enhanced by a proper setting of the shutter (choice of the integration time) to solve the compromise between low motion blur and sufficient contrast. The camera gain can also be adapted, modifying the image contrast and lighting level, to solve the compromise between good contrast and a low noise level without saturation.

Several graphical outputs are available. Acquired images are displayed in real time, with overlaid detected reference points. Position data are displayed in real time on a 2D map with an additional elevation chart for the Z (height) dimension. The orientation is printed in text boxes and shown graphically thanks to a symbolic detector with proper position and orientation. Another representation for orientation consists of two quadrants, one displaying the horizontal orientation and the other one showing the angular deviation from the vertical.

A last option allows for the storage of the acquired data (such as images, reference points and position values) for debugging or logging.

3.4.4. Image acquisition

In order to set up a portable system that can be deployed in the field, a portable PC was selected as the OPMS platform. A FireWire connection was retained as a fast camera link to the PC. The SONY camera with its FireWire connection is interfaced through a ‘Microsoft Filter’ that delivers the acquired images. These images receive a serial number and a timestamp. They are buffered in the application for delayed processing in another thread.

The graphical interface allows control of the camera settings (the shutter for image exposure and the contrast by gain adjustment). On the one hand, a small exposure time results in little motion blur at the expense of low contrast. On the other hand, a large gain for good contrast is limited by the acceptable noise level. A shutter of 10 ms, resulting in a motion blur of a few mm, proved adequate for a correct noise level in normal illumination conditions.

Closing or opening the camera lens diaphragm also helps solve the previous compromise of good contrast versus sharp images, though here sharpness relates to focusing and not to motion blur. In our application, since the distance between the bar reference points and the camera varies considerably, a large depth of field was necessary. We thus opted for a rather closed position of the lens and compensated the lower light levels on the CCD by a higher camera gain. The electronic noise due to amplification did not disturb the localisation of the bar reference points.

The remaining difficult problem is the large illumination variation. Weather conditions drastically change the illumination level, sometimes within a few seconds due to clouds. Shutter and gain might be adapted during image capture: the idea is to optimise the detection of the marker bar by counting the number of detected squares, while a grey-level analysis adapts the camera gain to avoid low contrast and saturation. Unfortunately, the control of the camera shutter and gain was too slow to allow for continuous automatic adaptation. An automatic gain control built into the camera would be a better solution.

3.4.5. Calibration

Principle

The calibration principle consists in determining the model parameters from grid crossings (‘nodes’) extracted in images and the corresponding 3D coordinates of these points on the grid. N images of the grid from different points of view are captured to avoid the need for a 3D calibration object. These images are taken during a manual vertical movement of the sensor from its original position on the footprint of the calibration grid. The movement must be done in such a way that the images are taken in various positions and orientations.

Mathematical formulation

The axis systems used in the presentation are depicted in Figure 7.


Figure 7.

Coordinate systems.

Let (O, X, Y, Z) be the world coordinate axis system linked to the bar;

(C, Xc, Yc, Zc) the camera axis system, C being the optical centre;

and (L, Xi, Yi) the image coordinates, L being the upper left corner.

The camera calibration parameters intervene in the transform from the world coordinates of a point in the field of view of the camera to the position in the image where this point can be seen.

The transformation of the 3D coordinates of a point (x, y, z) into its position in the image (xi, yi) can be divided into four steps:

1. Change of coordinates from the world axis system to the camera axis system.

(xc, yc, zc) = R · (x, y, z) + T

where (x, y, z) are world coordinates; (xc, yc, zc) are coordinates in the camera axis system; R is the rotation matrix defined by the angles of the absolute axis system in the camera axis system and T is the translation vector of the absolute axis system relative to the camera axis system.

2. Projection to the image plane.

We used the pinhole model for the camera, with the optical centre at the camera axis system origin. In the following equations linking the coordinates in the camera axis system with the image coordinates xp and yp, we used the model focal length f, modelling the whole camera system with optics and CCD:

xp = f · xc / zc,  yp = f · yc / zc
3. Distortion.

To model the distortion due to the lens, we used the four-parameter Seidel model with two radial distortion parameters R1 and R2 and two tangential distortion parameters T1 and T2. With r² = xp² + yp², the distorted coordinates are

xd = xp · (1 + R1 · r² + R2 · r⁴) + 2 · T1 · xp · yp + T2 · (r² + 2 · xp²)

yd = yp · (1 + R1 · r² + R2 · r⁴) + T1 · (r² + 2 · yp²) + 2 · T2 · xp · yp
4. Translation from camera axis system (mm) into image axis system (pixel).

The position of the principal point P (orthogonal projection of the camera axis system origin on the image plane) gives two more parameters Px and Py. If Nx and Ny are the number of pixels in both directions and sx and sy are the dimensions of the CCD, we have

xi = Px + (Nx / sx) · xd,  yi = Py + (Ny / sy) · yd
with α the ratio Nx/Ny.
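The four steps can be combined into a single projection routine. The following pure-Python sketch follows the parameter names of the text, with a standard radial/tangential form for the Seidel distortion of step 3; the numerical values in the example call are purely illustrative:

```python
def project(point_w, R, T, f, seidel, principal, scale):
    """Project a 3D world point (mm) to pixel coordinates in four steps.

    R, T      : rotation matrix (3x3 nested list) and translation (step 1)
    f         : model focal length in mm (step 2)
    seidel    : (R1, R2, T1, T2) distortion parameters (step 3)
    principal : (Px, Py) principal point in pixels (step 4)
    scale     : (Nx/sx, Ny/sy) pixels per mm on the CCD (step 4)
    """
    x, y, z = point_w
    # Step 1: world axis system -> camera axis system
    xc = R[0][0] * x + R[0][1] * y + R[0][2] * z + T[0]
    yc = R[1][0] * x + R[1][1] * y + R[1][2] * z + T[1]
    zc = R[2][0] * x + R[2][1] * y + R[2][2] * z + T[2]
    # Step 2: pinhole projection onto the image plane (mm)
    xp, yp = f * xc / zc, f * yc / zc
    # Step 3: Seidel distortion (two radial, two tangential parameters)
    R1, R2, T1, T2 = seidel
    r2 = xp * xp + yp * yp
    radial = 1 + R1 * r2 + R2 * r2 * r2
    xd = xp * radial + 2 * T1 * xp * yp + T2 * (r2 + 2 * xp * xp)
    yd = yp * radial + T1 * (r2 + 2 * yp * yp) + 2 * T2 * xp * yp
    # Step 4: mm on the CCD -> pixel coordinates
    Px, Py = principal
    kx, ky = scale
    return Px + kx * xd, Py + ky * yd

# Illustrative example: camera 400 mm above a point 10 mm off-axis,
# no distortion, principal point (512, 384), 160 x 120 pixels per mm
u, v = project((10, 0, 0),
               [[1, 0, 0], [0, 1, 0], [0, 0, 1]], (0, 0, 400),
               6.0, (0, 0, 0, 0), (512, 384), (160, 120))
```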

To sum up, we have six extrinsic parameters describing the location (X0, Y0 and Z0) and orientation (θ, ϕ and ψ) of the camera, and eight intrinsic parameters describing the camera itself (the model focal length f; the Seidel distortion parameters R1, R2, T1 and T2; the principal point coordinates Px and Py; and the pixel dimension ratio α).

This model takes points in the field of view of the camera as input and provides as output their positions in the image (their projections in the image plane). To estimate the parameters, we need a well-chosen set of points with their coordinates in the world axis system and their pixel coordinates in the image.

These points must not lie in a plane. That is why a calibration grid offering reference points on two orthogonal planes is often used. We preferred to use several views of a planar grid since a 2D object is much easier to manufacture than a 3D one. The detector is first positioned at the footprint reference on the calibration grid for extrinsic calibration and then lifted up about 20 cm to deliver several views (about 20 seem enough). To get a unique solution, the displacement must not be parallel to the optical axis of the camera, which is satisfied in our setup when the detector is lifted vertically since the camera is not vertically oriented.

Image points: node extraction

The localisation of the nodes, intersections of the lines of the calibration grid, is carried out by a three-step procedure.

First, the centres of the white squares surrounded by grid lines are looked for, starting from the middle of the image. From an initial position, vertical and horizontal crossings with grid lines are searched for, refining the centre point (Figure 8). Each centre point is used as a “seed centre” giving rise to four new centres, its neighbours in the cardinal directions, whose initial positions are guessed from the seed centre and the size of the square (Figure 9). These four new centres are then refined in the same way and provide new neighbours. This loop continues until no new centres are found in the image.
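The seed-and-propagate loop can be sketched as a breadth-first traversal. Here `refine` stands for the centre-refinement step via line crossings and is assumed given; all names are illustrative, not the original implementation:

```python
from collections import deque

def propagate_centres(refine, seed, step, in_image):
    """Breadth-first propagation of square centres over the grid.

    refine(p)   : snaps a rough guess p to the true centre (assumed given);
                  returns None when no centre is found there
    seed        : refined centre near the middle of the image
    step        : approximate square size in pixels
    in_image(p) : True while p falls inside the image
    """
    found = {seed}
    queue = deque([seed])
    while queue:
        cx, cy = queue.popleft()
        # spawn the four cardinal neighbours from this seed centre
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            guess = (cx + dx, cy + dy)
            if not in_image(guess):
                continue
            centre = refine(guess)
            if centre is not None and centre not in found:
                found.add(centre)
                queue.append(centre)
    return found

# Toy check: identity 'refine', 3 x 3 grid of centres 10 px apart in a 20 x 20 image
centres = propagate_centres(lambda p: p, (10, 10), 10,
                            lambda p: 0 <= p[0] <= 20 and 0 <= p[1] <= 20)
```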


Figure 8.

Centre refinement.


Figure 9.

Centre propagation.

Secondly, each pair of horizontally or vertically neighbouring centres allows for the detection of a grid point (Figure 10), the intersection of the line segment joining the two centres with a grid line. The grid point is localised as the minimum of the grey level along this segment. Subpixel accuracy is obtained by interpolation between the two pixels around the sign inversion of the first derivative of the grey-level values.
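The subpixel interpolation can be sketched as follows, assuming grey levels sampled at integer steps along the segment; the function name is illustrative:

```python
def subpixel_minimum(values):
    """Locate the grey-level minimum along a sampled segment with
    subpixel accuracy.

    values: grey levels sampled at integer positions along the segment.
    Returns the fractional index of the minimum, obtained by linear
    interpolation of the zero crossing of the first derivative.
    """
    # first derivative, located at half-integer positions i + 0.5
    d = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    for i in range(len(d) - 1):
        if d[i] < 0 <= d[i + 1]:  # sign inversion: minimum lies in between
            return (i + 0.5) + (-d[i]) / (d[i + 1] - d[i])
    return float(values.index(min(values)))  # fallback: integer minimum

# Symmetric grey-level dip centred on sample 3
loc = subpixel_minimum([9, 4, 1, 0, 1, 4, 9])
```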


Figure 10.

Node localisation.

Thirdly, the four grid points around each node are used to localise the node precisely. The left and right grid points and the top and bottom grid points define two line segments whose intersection gives a subpixel-accurate node position (Figure 10).
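The intersection of the two segments is a standard line-line intersection; a minimal sketch, with a hypothetical helper name:

```python
def node_from_grid_points(left, right, top, bottom):
    """Subpixel node position as the intersection of the segment left-right
    with the segment top-bottom (each point is an (x, y) pair)."""
    (x1, y1), (x2, y2) = left, right
    (x3, y3), (x4, y4) = top, bottom
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None  # degenerate case: the two segments are parallel
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```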

Once all the nodes have been found, the marker (a small dark square close to the node at the centre of the calibration grid) is looked for to identify the reference node. This node is labelled (50, 50) in the symbolic coordinate system. It is very important that the marker is correctly detected in the different images for proper calibration.

Figure 11 presents node localisation applied to the top left part of an image of the grid. The superimposed white dots are the initial centres, the black ‘+’ marks are the refined centres and the white ‘×’ marks are the localised nodes.


Figure 11.

Results of node localisation on the grid.

The whole image processing procedure was optimised and ran in less than 100 ms on a Pentium II 500 MHz, allowing for fast calibration, although speed was not mandatory here.

Parameter estimation

Each calibration image provides two equations per detected grid node, linking the image coordinates (xi, yi) to the world coordinates (x, y, z) through the model. In the specific case of a planar grid, we chose z = 0 for all the points.

The total number of parameters to estimate is 8 + 6 × N, where N is the number of calibration images. The eight intrinsic parameters depend only on the camera, while the six extrinsic parameters depend on the position of the camera, which is different for each image and a priori unknown.

The error function to minimise is the sum of the image distances between the image coordinates of the extracted grid nodes and the projection of the corresponding 3D points by the model. We used the calibration software from Heikkilä [10], written in MATLAB. All the points of all considered images are processed in the same minimisation procedure.
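The actual estimation used Heikkilä's MATLAB software; the error function can nevertheless be sketched for a simplified pinhole model (a single focal length and no distortion, unlike the eight intrinsic parameters of the chapter). The synthetic check below only verifies that the reprojection error vanishes at the ground-truth parameters; all names and values are illustrative assumptions.

```python
import numpy as np

def project(points_3d, f, cx, cy, R, T):
    """Pinhole projection: camera coordinates R @ X + T, then perspective
    division and shift to the principal point (cx, cy)."""
    pc = points_3d @ R.T + T
    return f * pc[:, :2] / pc[:, 2:3] + np.array([cx, cy])

def reprojection_error(observed, points_3d, f, cx, cy, R, T):
    """Sum of squared image distances between the extracted grid nodes and
    the projections of the corresponding 3D points by the model."""
    d = observed - project(points_3d, f, cx, cy, R, T)
    return float(np.sum(d ** 2))

# Planar grid (z = 0) seen by a synthetic camera 500 mm above it
grid = np.array([[x, y, 0.0] for x in (0, 25, 50) for y in (0, 25, 50)])
R, T = np.eye(3), np.array([0.0, 0.0, 500.0])
obs = project(grid, 800.0, 320.0, 240.0, R, T)
err = reprojection_error(obs, grid, 800.0, 320.0, 240.0, R, T)
```

In the real procedure this error is minimised jointly over the intrinsic parameters and the per-image extrinsic parameters, with all points of all considered images in one minimisation.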

The camera-to-detector transform, necessary to report the position and orientation of the detector, is easily derived from the solution of the full problem. The first image of the sequence shows the detector at a known position (its footprint) on the calibration grid, and the six extrinsic parameters (position and orientation of the camera relative to the grid) are eventually available for each image; the camera-to-detector transform is therefore obtained from the six extrinsic parameters of the first image. The position and orientation of the detector then follow by combining the position and orientation of the camera with this camera-to-detector transform.
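The composition of rigid transforms involved here can be sketched as follows; the frame-naming convention (grid-to-camera, grid-to-detector) and the synthetic poses are illustrative assumptions.

```python
import numpy as np

def compose(Ra, ta, Rb, tb):
    """Composition of rigid transforms: x -> Ra @ (Rb @ x + tb) + ta."""
    return Ra @ Rb, Ra @ tb + ta

def invert(R, t):
    """Inverse of the rigid transform x -> R @ x + t."""
    return R.T, -R.T @ t

# Synthetic grid-to-camera pose for the first image, and the known
# grid-to-detector pose of the footprint on the calibration grid
c, s = np.cos(0.3), np.sin(0.3)
R_gc = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_gc = np.array([10.0, -5.0, 300.0])
R_gd = np.eye(3)                      # footprint: detector axes align with grid
t_gd = np.array([100.0, 50.0, 0.0])

# Camera-to-detector transform derived from the first image
R_cd, t_cd = compose(R_gd, t_gd, *invert(R_gc, t_gc))

# For any image, the detector pose follows from the camera pose
R_det, t_det = compose(R_cd, t_cd, R_gc, t_gc)
```

Applying the fixed camera-to-detector transform to the estimated camera pose recovers the detector pose exactly in this synthetic check.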

Only a dozen evenly spaced images captured during calibration are considered, in order to reduce memory needs, computation time and information correlation between images.

3.4.6. Position estimation

Principle

The position estimation of the detector is obtained from the position estimation of the attached camera and the camera-to-detector transform estimated during calibration. Position and orientation are reported relative to the marker bar, upon which the world coordinate system is built (see Figure 7). They correspond to the six extrinsic parameters (X0, Y0, Z0, θ, ϕ, Ψ) introduced in the calibration procedure; the eight intrinsic parameters remain constant as long as neither the camera sensor nor the lens is modified.

Problem solving

The position and orientation of the camera in the world axis system are obtained from the equations presented in the previous subsection. The world coordinates of the reference points of the marker bar and their projections in the image yield 4 × N equations in the six unknowns (X0, Y0, Z0, θ, ϕ, Ψ), where N is the number of detected squares (each square provides two reference points, and each point two equations).

The world coordinates of the reference points are known from the six-bit code (see Figure 12) surrounding each square to specify its identity. Should the determination of this code fail, an invalid code label is assigned to the square (see ‘Reference point extraction’).


Figure 12.

Close-up of the square pattern.

In order to deal with errors in the data acquisition or point extraction, a first least mean square procedure is applied to one of the two rows of reference points (Figure 12) to filter and complete the information extracted from the image. The set of reference point coordinates (xi, yi) with an identified label is reduced to those matching the least mean square fit within a tolerance of 1 pixel. A consistent labelling of the reference points, necessary to know the corresponding absolute coordinates, is deduced from the labels of the reduced set and applied to all the detected reference points with an invalid code label. As this least mean square procedure concerns a one-dimensional row of points along Y (the bar), only one row of the matrix R intervenes, leading to linear equations in the unknowns (X0, Y0, Z0, R21, R22, R23). Since the solution is defined up to a multiplicative factor (the unknowns appear linearly on both sides of the equations), one unknown (Z0) can be set to 1. Another constraint is the unit norm of the vector (R21, R22, R23). This partial solution aims at extracting a set of correct reference points with coherent labelling. It also provides initial values for five of the parameters of the full system solving described below. It cannot estimate the orientation around the bar (φ) since it relies on points aligned along axis Y.
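The filtering side of this first step can be illustrated on a single row of points: fit a straight image line, reject points beyond the 1-pixel tolerance, and refit. The fit-reject-refit loop and the example values are illustrative assumptions; the label propagation of the actual procedure is omitted.

```python
import numpy as np

def filter_row(points, tol=1.0):
    """Fit a straight line to one row of reference points and keep only the
    points within `tol` pixels of the fit; one refit step makes the
    rejection robust to a moderate outlier."""
    x, y = points[:, 0], points[:, 1]
    keep = np.ones(len(points), dtype=bool)
    for _ in range(2):                        # fit, reject, refit once
        a, b = np.polyfit(x[keep], y[keep], 1)
        keep = np.abs(y - (a * x + b)) <= tol
    return points[keep], (a, b)

# Four points near the line y = x, plus one mislocalised point (30, 33)
row = np.array([[0, 0.1], [10, 9.9], [20, 20.0], [30, 33.0], [40, 40.1]])
inliers, (a, b) = filter_row(row)
```

The mislocalised point is rejected while the four consistent points survive, giving a slope close to the true value.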

A second least mean square procedure considers the two reference points of each square, with the label completion and outlier elimination obtained in the first step. This two-dimensional set of points allows for the determination of the whole rotation matrix. Instead of solving this non-linear overdetermined system in the unknowns (X0, Y0, Z0, θ, φ, ψ), we preferred to consider the linear system in the 12 redundant unknowns contained in the rotation matrix R and the translation vector T, as presented in the calibration subsection, and to add the constraints on the vectors (perpendicularity and unit norm). Since the pinhole model is projective, multiplying all the unknowns by a constant changes nothing, so we can set Z0 to 1, reducing the number of independent unknowns to 11.

The orientation around the bar (φ), and consequently the precision on Z, was clearly the sensitive point of the design. Since this estimation is implicitly based on the distance between the two rows of reference points, the precision on φ is best where the derivative of that distance with respect to φ is maximal. It is therefore advantageous to tilt the bar so that the squares are less perpendicular to the camera, while avoiding too grazing a point of view so that the squares remain visible. A value of 45° is a good compromise, which for our camera setup (25° from the vertical) implies a bar tilt of about 20°.

Reference point extraction

The objective of this image processing algorithm is to detect the squares, localise their two reference points and identify the square label coded in small surrounding rectangles (see Figure 12).

The first square is searched for by visiting pixels in the image sparsely (one line out of 16), looking for large horizontal grey-level transitions. From each such point, the contour of a hypothetical square is followed until the contrast falls below a threshold or the number of visited points becomes too high. A candidate contour is accepted if the corresponding curve is closed and its size is in a proper range. Then, the four most prominent points of the retained contour are labelled as the corners. Two of them are detected as the opposite points on the contour with the largest distance (points 1 and 2 in Figure 13). The other two corners (‘3’, ‘4’) are roughly localised as the midpoints on the contour between the first two corners and then refined by picking the points with maximal distance to the segment 1–2. Finally, points ‘1’ and ‘2’ are refined as the points with maximal distance to the segment 3–4.
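The geometric part of this corner search can be sketched as follows; the contour following itself is omitted, and the final refinement of points 1 and 2 against segment 3–4 is left out for brevity.

```python
import numpy as np

def four_corners(contour):
    """Corner detection as in Figure 13: corners 1 and 2 are the farthest
    contour pair; corners 3 and 4 are the points farthest from segment 1-2,
    one on each side."""
    pts = np.asarray(contour, dtype=float)
    # farthest pair (brute force is fine for a small square contour)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    p1, p2 = pts[i], pts[j]
    ab = p2 - p1
    # signed distance (up to a constant factor) of each point to line 1-2
    signed = ab[0] * (pts[:, 1] - p1[1]) - ab[1] * (pts[:, 0] - p1[0])
    return p1, p2, pts[np.argmax(signed)], pts[np.argmin(signed)]

# Square contour sampled every 2 pixels along its perimeter
square = [(t, 0) for t in range(0, 10, 2)] + [(10, t) for t in range(0, 10, 2)] \
       + [(10 - t, 10) for t in range(0, 10, 2)] + [(0, 10 - t) for t in range(0, 10, 2)]
c1, c2, c3, c4 = four_corners(square)
```

Taking the two points on opposite sides of segment 1–2 guarantees that corners 3 and 4 do not collapse onto the same vertex.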


Figure 13.

Corner detection.

Once the first square has been found, more squares are searched for in the directions of the sides of the first detected square. The corner positions and the near-linearity of the pattern at short distances in the image, even with distortion, give hints about the direction and distance at which to find neighbouring squares.

The detected squares are then analysed to provide reference points. The direction of the bar is guessed from the distribution of the positions of the squares. Each square provides two reference points: the midpoints of the sides parallel to the direction of the bar (Figure 12). They are roughly localised as the midpoints of the two neighbouring corners and refined perpendicularly to the side by a subpixel grey-level analysis (zero crossing of the second derivative of the grey-level profile perpendicular to the side, Figure 14). The position along the bar can be well estimated from the distribution of the whole set of reference points by a least mean square procedure (see ‘Problem solving’ above). In contrast, the measures across the bar implicitly relate to small distances (2 cm), which makes the estimation of the rotation angle φ highly sensitive; subpixel localisation is thus welcome.


Figure 14.

Subpixel localisation of the reference points.

The squares are finally labelled to identify their absolute position on the bar, leading to the world coordinates of the reference points. For this, the six-bit code represented by small rectangles surrounding each square is extracted. The positions of the small rectangles are known relative to the square corners, and their colour, either black or white, determines the code. The code always contains at least one black rectangle for visibility checking. The codes of a few neighbouring squares bring enough redundancy to enable error detection and correction in the labelling. However, an invalid label is assigned to squares with an unidentified code.
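A minimal decoder for such a six-bit code might look as follows; the bit order (least significant bit first) and the black = 1 convention are assumptions, as the chapter does not specify the encoding.

```python
def decode_square_label(bits):
    """Decode the six-bit code surrounding a square: each small rectangle
    contributes one bit (black = 1, white = 0; bit order assumed LSB first).
    An all-white code fails the visibility check and yields an invalid label."""
    if len(bits) != 6 or not any(bits):
        return None                 # invalid label: code not identified
    return sum(b << i for i, b in enumerate(bits))

label = decode_square_label([1, 0, 1, 0, 0, 0])  # -> 5
```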

The design and implementation of the bar pattern and the related algorithms enable real-time detection and labelling of the reference points. Less than 20 ms were necessary on a Pentium II 500 MHz to perform all the processing required for camera and detector positioning.

4. Evaluation of the HOPE positioning system

4.1. Principle

The evaluation of the OPMS concerns here the accuracy of the position and orientation estimation of the system in a typical range of use (1 m displacement along the bar, 40 cm perpendicular to the bar and 11.5 cm in height). The OPMS was mounted on the carrying bar (see Figure 15), and the detector was placed manually on a grid at known coordinates. Three pieces of wood were added one by one under the detector to change the height to 0, 3.8, 7.7 and 11.5 cm. Other settings, such as the best shutter, gain and lens aperture of the camera, were determined empirically during development and a measurement campaign in Bosnia. Although the tests reported here considered position estimation with little or no motion, the precision estimates should remain valid for dynamic estimation thanks to the short shutter setting (around 12 ms).


Figure 15.

Evaluation grid and placement of the detector.

The evaluation consisted of two phases: calibration and position measurements. The calibration phase, explained in Section 3.4.5, was carried out to ensure that the parameters are adapted to the selected lens aperture and camera position on the detector. Position measurements are made relative to a marker bar printed with a calibration grid to form the so-called “evaluation grid” described in Section 4.2. To complete the tests, an independent marker bar was laid tilted on the printed marker bar to analyse the influence of the bar inclination on the position and orientation accuracy.

4.2. Evaluation grid

The evaluation grid consists of the calibration grid and the marker bar, both presented in Section 3.3 about the system description, printed on an A0 sheet of paper. Since paper cockles easily with humidity, we taped the paper onto a metal sheet.

The square grid first gives the reference points for internal and external calibration. It is then used to position the system at known coordinates and extract positions relative to the printed marker bar pattern. The residual errors after parameter tuning for the internal and external calibrations are given below; they determine the quality of the calibration procedure and of the proposed model. The estimated positions from the marker bar are compared to the known positions on the grid to derive error statistics over the test area.

4.3. Calibration

4.3.1. Results

Results of a typical calibration experiment are presented in Figure 16. This diagram shows the reference points (z = 0) of the grid that were detected in at least one calibration image. Only a dozen evenly spaced images captured during calibration are used in order to reduce memory needs, computation time and information correlation between images. For each image, the camera axis system resulting from the estimation of the position and orientation (extrinsic parameters) is plotted. The positions in the diagram show the vertical trajectory of the detector during the calibration acquisition. The orientation reveals the inclination of the forward-looking camera. The corresponding errors for this experiment, expressed in pixel distance in the image, are given in Table 1.

Standard error    0.259 pixel
Mean error        0.185 pixel
Maximal error     1.56 pixel

Table 1.

On-field calibration results.


Figure 16.

Estimation of the camera position and orientation during calibration. All units in mm.

4.3.2. Discussion

The residual error after calibration can have three origins. First, the model is not perfect and does not, for instance, account for special lens artefacts. Secondly, the calibration plate for the reported tests was printed on a sheet of hard paper that clearly cockled with humidity, so it lost its planarity. Finally, the image localisation is not perfectly accurate, due to image noise and the hypothesis of segment linearity in the vicinity of the crosses.

The error values given here are not to be compared with results obtained during laboratory calibration of a camera with little distortion. They derive from fast practical tests in a building close to the field, with no special care. The most critical feature of the system is its sensitivity to illumination, which varies largely outdoors. This mainly affects position estimation, as the calibration can be performed indoors. The fact that the OPMS has no drift is a valuable feature which reduces the required frequency of calibration to a minimum.

The influence of the calibration imprecision on the overall position accuracy appears to be small in comparison with problems affecting the quality of reconstructed images (the ultimate aim of position estimation in our application), such as the sensor resolution, the difficulty of sensor capture synchronisation and the influence of the manual scanning (area coverage and height of scanning). No major calibration development was necessary apart from printing the final calibration plate on a plastic substrate, bringing a better stability over time.

The typical processing time amounts to 1 minute on a Pentium II 500 MHz. Further optimisation could be envisaged, such as a progressive estimation starting from a reduced set of points, or the conversion of the programme into a compiled language. However, the calibration procedure is an offline task, to be performed from time to time for verification or when a critical parameter is modified.

4.4. Position estimation

4.4.1. Results

As mentioned above, the results concern the position and orientation accuracy of the OPMS when used on the evaluation grid, at several heights and with either a flat marker bar (the one printed on the calibration grid) or a tilted one (tilt angle: 19.3°).

The visualisation of the coordinates (X′, Y′ and Z′) estimated by the OPMS at different positions on the evaluation grid (X, Y) and at three different heights (Z = 3.8, 7.7 and 11.5 cm) in Figure 17 clearly shows the qualitative correctness of the system, except in Z for the flat marker bar. This is due to the less precise estimation of the orientation around the bar. To a smaller extent, we can notice a systematic deviation in the Y′ position, which should normally appear as horizontal rows of crosses on the chart. This is attributed to the distortion compensation, which was imperfectly handled at the border of the images.


Figure 17.

X′, Y′ and Z′ charts (in mm) for all the positions collected during the evaluation at three different heights for a tilted bar.

The histograms of the Euclidean distance (in mm) between the estimated and real 3D positions (Figure 18) show again the advantage of the tilted bar, as confirmed by the error levels in Table 2. The RMS distance error is the root mean square of 3D distances between the estimated and real positions.

                              Flat marker bar   Tilted marker bar
RMS distance error (mm)       16.8              12.93
Std angular error X (degree)  1.58              1.16
Std angular error Y (degree)  1.51              2.0
Std angular error Z (degree)  1.47              1.51

Table 2.

Table of position and orientation errors.
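The RMS distance error of Table 2 is the root mean square of the 3D distances between estimated and real positions; it can be computed as follows (the positions below are synthetic values for illustration).

```python
import numpy as np

def rms_distance_error(estimated, real):
    """Root mean square of the 3D Euclidean distances between estimated
    and real positions."""
    d = np.linalg.norm(np.asarray(estimated) - np.asarray(real), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

est = [[0.0, 0.0, 3.0], [1.0, 4.0, 0.0]]
true = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
err = rms_distance_error(est, true)   # distances 3 and 4 -> sqrt(12.5)
```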


Figure 18.

Left: histogram of 3D errors (in mm) for the positions at three different heights with a horizontal marker bar. Right: similar evaluation with a tilted marker bar.

As the results with a tilted marker bar appeared better, we selected this mode of operation; the remainder of the chapter concentrates on it.

Figure 19 compares the real and estimated positions in the XY plane. We see here again the problem at the border of the area. As this problem mainly originates from an algorithmic misbehaviour in the distortion compensation, we analyse the error level for a central square region of increasing size (see Figure 20). Note that, with the constraint of an error level of 5 mm in position and 1 degree in orientation, the acceptable area covers nearly the entire 100 × 50 cm zone.


Figure 19.

Real (circle) and estimated (cross) positions in the XY plane.


Figure 20.

Position error (mm) and orientation error (degree) versus square size (cm) of interest centred in the middle of the 100 × 50 scan area, height = 11.5 cm.

Let us finally present in Figure 21 the error matrices, representing the 100 × 50 cm scan area in 10 × 10 cm slots, for the scan at Z = 11.5 cm. Figure 21 gives the mean error (Errlm), standard deviation (Stdlm) and maximal value (MaxErrlm) in X, Y and Z and in 3D, with the amplitude coded by a colour corresponding to an error level (in mm) as specified in the legend bar.


Figure 21.

Matrices of error for the scan 100 × 50 cm at Z = 11.5, for position (top) and orientation (bottom).

4.4.2. Discussion

Several problems affected the precision of the estimation in position and orientation.

First, the accuracy of the point localisation in the images depends on the algorithm and on the illumination and camera settings (lens aperture, shutter and gain). For instance, the localisation algorithm was optimised to subpixel accuracy in the direction perpendicular to the bar but simply used the midpoint of two corners along the bar, which limits the precision along the Y axis. The results show that the least mean square positioning of the reference points along the bar does not completely compensate for this localisation imprecision in Y. Illumination saturation also influences the edge localisation due to the non-linearity of the grey levels.

Secondly, a larger number of detected squares contributes to a better estimation by the least mean square procedure. The left and right sides of the scan area are the worst zones for positioning since only part of the bar is visible from the camera. Low contrast, which reduces the detectability of the squares, also penalises precision.

Thirdly, the parameter estimation is more precise when the reference points considered are separated by larger distances, following the general principle of triangulation. Those distances are however limited by the camera field of view, which has to stay reasonable to limit distortion.

Finally, the model uses intrinsic parameters which must be determined during a calibration phase. This determination is not error-free and directly influences the position and orientation precision.

4.5. Evaluation conclusions

All the steps required by calibration and position estimation were designed to fulfil the constraints of the application and to realise a prototype that can work in the field. The complete solution is lightweight and cheap. The calibration phase requires the calibration plate and is carried out in a few minutes. As the position estimation of the OPMS has no drift over time, the calibration remains valid until the camera is dismounted from the detector or the lens aperture is modified. Position estimation is performed in real time with about ten measurements per second, which captures the dynamics of the detector movement.

The principal matters of concern in the current implementation are the illumination and synchronisation with the sensors. The quantity of light outdoors varies considerably during the day, and image acquisition has not yet been provided with automatic light adaptation to ensure contrast and avoid saturation. The synchronisation precision of the collected data also conditions the overall accuracy as signals should be captured at the same instants.

In the current implementation, the average position estimation error in the considered area (100 × 50 × 11 cm), evaluated at 1.3 cm, is above the initial requirement of 0.5 cm in each direction (about 0.85 cm of Euclidean distance). The average error in orientation is 2°, also above the initial requirement of 1° in each direction. This however does not affect the quality of image reconstruction for the metal detector and the microwave radiometer. According to Figure 20, we are confident that we can reduce these error levels to the requirements thanks to the correction of distortion compensation at the border of the images.

5. Conclusion

Hand-held detectors may benefit from continuous position estimation to localise measurements, to reconstruct 2D (or 3D) measurement maps suited for interpretation and to control if the area is completely scanned.

The OPMS system developed in the HOPE project answers practical constraints of an operational solution. It is weight-, size- and cost-effective since only one camera is necessary. It relies on a reference bar, consistent with many demining protocols, which is typically made of a rigid, straight beam with a specific printed pattern. The additional equipment is easily adapted to an existing detector as long as the reference bar can be kept in the field of view.

The limitations of the approach concern the reference bar and the camera, which should not be moved during measurements. The bar is indeed the geometrical reference. Delivered positions remain valid since the last calibration as long as the camera is not moved relative to the detector and its lens is not modified.

The presented solution for positioning requires a camera, a bar with a pattern and a calibration procedure. Nowadays, GPS technology offers high precision in a very small casing. When differential GPS (DGPS) or real-time kinematic (RTK) positioning can be used, the precision may be high enough to represent a valuable solution. The cost and the need for satellite visibility may however represent an obstacle in some circumstances.

More recently, a new solution based on satellite positioning was designed in the TIRAMISU project. Called the TCP box, this system uses satellite positioning systems to acquire very precise positions. Mounted on a sensor, it can add the time and location to the sensor’s data and transfer the information to a remote FTP server. The TCP boxes can create a local mesh network even when there is no Internet connection.


Acknowledgements

This chapter describes work performed by Yann Yvinec, Pascal Druyts and Charles Beumier. The authors would like to thank the Belgian Ministry of Defence and the European Commission for their financial support (Esprit Project 29870, DG III).


References

1 - IMAS 07.11, “Land Release”, First Edition, 10 June 2009, Amendment 2, March 2013, Geneva: Geneva International Centre for Humanitarian Demining.
2 - HOPE, Hand-held OPErational demining system, Esprit Project 29870, DG III, accessed on 8 February 2017.
3 - Milisavljevic, N, B. Scheers, O. Thonnard, M. Acheroy, “3D visualization of data acquired by laboratory UWB GPR in the scope of mine detection”, In Proceedings of Euro Conference on: Sensor Systems and Signal Processing Techniques Applied to the Detection of Mines and Unexploded Ordnance (Mine '99), Florence, Italy, 1999.
4 - Thonnard, O, N. Milisavljevic, “Metallic shape detection and recognition with a metal detector”, In Proceedings of the European Workshop on Photonics applied to Mechanics and Environmental Testing Engineering (PHOTOMEC '99 – ETE '99), Liège, Belgium, 1999.
5 - Havlik, S, P. Licko, “Humanitarian demining: the challenge for robotic research”, Journal of Humanitarian Demining, (2.2), June 1998, accessed on 8 February 2017.
6 - Hightower, J, G. Borriello, A Survey and Taxonomy of Location Sensing Systems for Ubiquitous Computing, Technical report UW CSE 01-08-03, University of Washington, Dept. of Computer Science and Engineering, Seattle, WA, August 2001.
7 - Sato, M, J. Fujiwara, K. Takahashi, ALIS Evaluation Tests in Croatia, SPIE, Detection and Remediation Technologies for Mines and Minelike Targets XIV, Florida, USA, 2009.
8 - Beumier, C, P. Druyts, Y. Yvinec, M. Acheroy, “Motion estimation of a hand-held mine detector”, Signal Processing Symposium, Hilvarenbeek, The Netherlands, 23rd–24th March, 2000.
9 - Beumier, C., P. Druyts, Y. Yvinec, M. Acheroy, “Real-time optical position monitoring using a reference bar”, Signal Processing and Communications (SPC2000), IASTED International Conference, Marbella, Spain, September 19-22, 2000, pp. 468-473.
10 - Heikkilä, J, “Geometric camera calibration using circular control points”, IEEE Transactions on Pattern Analysis and Machine Intelligence; 22(10), 1066-1077, October 2000.