
Calibration of EM Sensors for Spatial Tracking of 3D Ultrasound Probes

Written By

Andrew Lang, Vijay Parthasarathy and Ameet Jain

Submitted: 22 November 2011 Published: 23 August 2012

DOI: 10.5772/50695

From the Edited Volume

Data Acquisition Applications

Edited by Zdravko Karakehayov

1. Introduction

Ultrasound (US) imaging is one of the modalities commonly used for interventional guidance. It offers many advantages over other imaging systems such as MRI, CT, and fluoroscopy: it is considerably cheaper, is portable, offers real-time imaging, and involves no radiation exposure. Recent developments have advanced 3D US technology, making it a more feasible alternative to 2D US imaging despite its higher cost. Among these technologies is real-time 3D transesophageal echocardiography (TEE), which enables US imaging of the heart from within the body. Unfortunately, US has several disadvantages that must be addressed. Due to the presence of speckle, image quality is poor, especially away from the focal zone. The imaging field-of-view (FOV) is also quite limited by the size of the US transducer. These disadvantages significantly limit the usability of US for many interventional procedures.

To make US a more feasible interventional tool, it should be fused with information from another modality or a tracking system. A tracking system allows tools to be visualized in the US volume, which is especially helpful when imaging a needle or a catheter. Overlaying information from other modalities gives the clinician more information, allowing them, for example, to see outside the limited FOV of the US. Often, the fusion of multiple modalities is done using a tracking system as an intermediate link between them. Calibration of the tracking system to each individual modality is thus required. Calibration of a US probe is a well-studied problem with a variety of available methods. Multiple types of systems can be used for tracking, including optical and electromagnetic (EM) tracking, and the methods for calibrating the US can generally be applied to any type of tracking system.

There are several papers that provide comprehensive reviews of the various methods for calibrating a tracking system to US imaging [8, 11]. These papers focus primarily on the calibration of 2D US systems, but some of these methods can be extended to 3D. Other reviews have been done specifically for the calibration of 3D US, which compare new methods made possible through the use of a 3D probe [5, 14].

Among methods for the calibration of a US probe, there are two commonly used categories of techniques: phantom-based and tracked stylus-based. There are numerous phantom-based calibration methods. Some of the more common types of phantom designs use “N-wire” fiducials [6, 13]. Other phantom designs consist of plane phantoms [15, 16], point phantoms [4], and cross-wire phantoms [15].

It is often more beneficial to calibrate the US without using a phantom. First and foremost is the simplicity of calibrating with a readily available tracked stylus or needle. Also, manufacturing a phantom may require additional cost and development time. Hsu et al. [9] provide an overview of stylus-based calibration methods. The simplest method of calibration using a tracked stylus or needle is to image the tip of the needle precisely in the scan plane of the US image. Given that the needle is tracked, the tip can be segmented from the US images and the collection of all positions can be registered using a point-based registration method [12, 19]. Zhang et al. [19] showed an accuracy of up to 3.5 mm using this method with only nine images for each calibration. In 2D, this method suffers from the difficulty of precisely positioning the stylus tip in the US scan plane. Even in 3D, the image resolution and artifacts make it difficult to locate the needle tip accurately. Poon and Rohling [14] performed a similar stylus calibration for a 3D US probe using five pairs of points and showed an RMS accuracy of 2.36 mm. An optical tracking system was used, which has significantly better accuracy than EM tracking systems.

Another similar stylus-based method relies only on the intersection of the stylus with the imaging plane [10]. The intersection of the US scan plane with the needle appears as a point that is easily segmented, and the calibration transformation can be estimated from these measurements. Unfortunately, the calibration accuracy was not quantitatively analyzed and the method was only demonstrated in 2D. The biggest drawback of this method is that the stylus must be imaged over a wide range of angles and positions.

In this paper we examine the problem of EM-to-TEE calibration for a 3D TEE probe by looking at two separate but similar solutions based on prior 2D methods [10, 19]. We examine how these methods perform when extended to 3D using EM tracking. In addition, we analyze the calibration errors using a statistical framework and examine the effect of speed-of-sound errors on the calibration.


2. Methods

The relationship between the TEE image volume and the attached EM sensor is shown in Figure 1. Two similar calibration methods are employed to calibrate the TEE probe and determine this relationship to the EM tracking system. In both methods, the calibration is a multi-step process. The data acquisition step is first: images of the needle from the TEE probe are acquired concurrently with the EM position information of the probe and needle sensors. Segmentation is then carried out; in the first method the tip of the needle must be found, while in the second method segmenting any point along the needle is sufficient. The calibration is finally performed using either a point-to-point or a point-to-line registration method.

Figure 1.

The relationship between the TEE image volume and the attached EM sensor.

2.1. Data acquisition

Ultrasound imaging was performed using an X7-2t TEE probe with a Philips iE33 echocardiography system. An EM sensor has been permanently mounted on the end of the probe (Figure 2). EM tracking was done using the Aurora Electromagnetic Measurement System (Northern Digital Inc., Waterloo, Canada). For each calibration experiment, a set of 30-50 volumes was captured, each containing the image of an EM-tracked needle at a different orientation. It is important that these images are spread over a wide range of probe and needle positions to ensure the best accuracy of the calibration; not doing so may introduce a bias into the calibration results. All volumes were captured at a depth of 10 cm. TEE volumes are acquired in both the Live 3D (L3D) and Thick Slice (TS) imaging modes. L3D produces a volume of 112 × 48 × 112 voxels, or 91 × 47 × 100 mm in the width, height, and depth directions. TS produces a volume of 240 × 16 × 112 voxels, or 140 × 15 × 100 mm, almost twice the width of L3D with a very narrow thickness. Because of this narrow cross-section, the TS volume can be thought of, in essence, as a 2D plane with some thickness. An overview of the acquisition system can be seen in Figure 3.
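For readers implementing a similar pipeline, the record saved for each acquisition might look like the following Python sketch; the class and field names are illustrative, not part of the system described here.

```python
# A minimal sketch of the per-volume record captured during acquisition.
# Names (TrackedVolume and its fields) are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class TrackedVolume:
    image: np.ndarray        # TEE volume, e.g. 112 x 48 x 112 voxels in L3D mode
    voxel_size_mm: tuple     # physical voxel dimensions (width, height, depth)
    T_em_probe: np.ndarray   # 4x4 pose of the probe EM sensor in the EM frame
    T_em_needle: np.ndarray  # 4x4 pose of the needle EM sensor in the EM frame

# During an experiment, 30-50 such records are saved, each with the needle at a
# different orientation spread widely over the imaging volume.
acquisitions: list[TrackedVolume] = []
```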

Figure 2.

An image of a TEE probe with the EM sensor permanently mounted near the transducer face.

Figure 3.

The TEE probe was calibrated by acquiring images of the needle at different orientations. The positions of the EM sensor attached to the TEE probe and the needle are concurrently saved with the image.

2.2. Needle segmentation

After the data is acquired, the needle must be segmented from the volumes in 3D. The segmentation technique is the same in both calibration methods. Needle segmentation is performed manually using three 2D maximum intensity projection images. Segmenting a needle point in two of the three images is sufficient to identify the point in 3D; if the needle point is identified in all three views, the final 3D position is taken as the average over the views.

The images were processed to enhance the needle’s appearance. Following the needle segmentation method described in Aboofazeli et al. [2], anisotropic diffusion [18] is first used for speckle reduction and smoothing, followed by an exponential contrast enhancement filter. The contrast adjustment filter has the form

I_c(x) = \exp\left( -\frac{1}{2} \left( \frac{I(x) - \mu}{\sigma} \right)^{4} \right)    (1)

where I(x) is the intensity of the volume at voxel x, μ and σ are parameters specifying the range of voxel intensities representing the needle, and I_c(x) is the contrast-adjusted intensity at voxel x. Values of μ = 190 and σ = 70 were found to work well. The effect of these filters can be visualized in Figure 4.
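As an illustration, a minimal Python sketch of the contrast adjustment of equation 1 is shown below; a Gaussian blur stands in for the anisotropic diffusion step, and the function name is hypothetical.

```python
# Sketch of the exponential contrast-enhancement filter of equation (1),
# applied after speckle-reducing smoothing. The anisotropic diffusion step [18]
# is replaced here by a Gaussian blur as a simple stand-in.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_needle(volume: np.ndarray, mu: float = 190.0, sigma: float = 70.0) -> np.ndarray:
    smoothed = gaussian_filter(volume.astype(float), sigma=1.0)  # stand-in for anisotropic diffusion
    # Voxels with intensities near mu map to ~1, background decays toward 0.
    return np.exp(-0.5 * ((smoothed - mu) / sigma) ** 4)

# Maximum intensity projections for manual segmentation in three orthogonal views:
# mip_xy = enhanced.max(axis=2); mip_xz = enhanced.max(axis=1); mip_yz = enhanced.max(axis=0)
```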

Figure 4.

A look at the different modes of processing on the maximum intensity projections of the TEE volume. Left: original image. Left middle: anisotropic diffusion filtered image. Right middle: exponential filtered image. Right: exponential and anisotropic diffusion filtered image.

2.3. Calibration

In order to calibrate the TEE probe to the EM tracking system, four different measurements are required as inputs. The first input is the set of segmented needle points, p_TEE,i, where i runs from 1 to the number of images acquired. Next are two sets of measurements from the EM tracking system: the positions of the TEE probe EM sensor, ^EM T_probe,i, and the positions of the needle sensor, ^EM T_needle,i. The notation ^b T_a represents the homogeneous transformation from coordinate system a to coordinate system b. The final input for the calibration is the position of the needle tip relative to the needle EM sensor, p_needle. This position is constant for all measurements and is determined prior to the experiments [19].

The purpose of calibration is to find the transformation between the TEE coordinate system and the TEE probe EM sensor coordinate system, probeTTEE. After this transformation is estimated, a point measured in the TEE volume, pTEE, can be related back to the EM tracker coordinate system using the relation

p_{EM} = {}^{EM}T_{probe} \times {}^{probe}T_{TEE} \times p_{TEE}    (2)

where p_EM is that point in the EM coordinate system. Note that EM is simply the coordinate system of the stationary EM field generator, relative to which all EM sensor measurements are made. It could alternatively represent the fixed location of a reference EM sensor.

2.3.1. Calibration using segmented needle tips

If the needle tip position has been segmented from each TEE volume, the needle tip position can be determined relative to the EM coordinate system using p_EM = ^EM T_needle × p_needle, where p_needle is the calibrated needle tip offset. Given a set of needle positions in the EM frame and in the TEE volume, we can rearrange equation 2 into

{}^{probe}T_{EM} \times p_{EM} = {}^{probe}T_{TEE} \times p_{TEE}    (3)

Since the points p_TEE are 3D, we can solve for ^probe T_TEE using the closed-form least-squares solution proposed by Arun et al. [3], which uses the singular value decomposition. An alternative method must be used for 2D images [19].
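A compact sketch of this closed-form step, assuming NumPy and paired N × 3 point arrays, could look as follows; the function name is illustrative.

```python
# Sketch of the closed-form least-squares point-to-point registration of
# Arun et al. [3], used to estimate probe_T_TEE from paired 3D points.
import numpy as np

def arun_registration(p_src: np.ndarray, p_dst: np.ndarray) -> np.ndarray:
    """Return the 4x4 rigid transform T such that T @ p_src ~= p_dst.
    p_src, p_dst: N x 3 arrays of corresponding points, e.g. segmented needle
    tips in the TEE frame and the same tips mapped into the probe sensor frame."""
    c_src, c_dst = p_src.mean(axis=0), p_dst.mean(axis=0)
    H = (p_src - c_src).T @ (p_dst - c_dst)                      # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The determinant check in the SVD step simply prevents the solution from collapsing to a reflection when the point sets are noisy.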

2.3.2. Calibration using segmented needle points

The second calibration method is similar to that proposed by Khamene and Sauer [10], with an extension to 3D. In this method, any point on the needle may be segmented; the segmentation is not restricted to the needle tip as in the first method. However, the calibration problem becomes more difficult to solve. We must change our representation of the needle to that of a line defined by two points. The first point, p_l1, is taken to be the origin of the needle sensor measurements relative to the TEE probe sensor, i.e. the translation vector of the transformation ^probe T_needle. The second point, p_l2, is the position of the needle tip relative to the TEE probe sensor, i.e. ^probe T_EM × p_EM.

We now have the problem of estimating the transformation that minimizes the distance between the TEE points and the line representing the needle. The distance between a point p_i and a line l_i = {p_l1,i, p_l2,i} is given by

d(p_i, l_i) = \frac{\left\| (p_{l2,i} - p_{l1,i}) \times (p_{l1,i} - p_i) \right\|}{\left\| p_{l2,i} - p_{l1,i} \right\|}    (4)

where × is the cross product. The calibration transformation is estimated using a nonlinear least-squares optimization framework. The quantity to be minimized is

e = \sum_{i=1}^{N} d\left( T_c \times p_{TEE,i}, \; l_i \right)^2    (5)

Here we are minimizing the distance between each individual needle point segmented in the TEE volume and the line representing the needle relative to the probe EM sensor. The TEE point is transformed by the transformation T_c. The transformation T_c that minimizes equation 5 becomes the estimate of the calibration transformation ^probe T_TEE. Since no closed-form solution exists for this transformation, an iterative method must be used; the Levenberg-Marquardt algorithm is used here. The transformation T_c is represented by six parameters: the x, y, and z Euler angles as well as the x, y, and z translations. Also note that, for increased efficiency, the denominator in equation 4 can be left out of the calculation of equation 5 since it is a constant equal to the needle tip offset, ‖p_needle‖.

Posing the calibration problem in this way, using the point-to-line distance, unfortunately makes the optimization very susceptible to being caught in a local minimum; it requires a close initial estimate of the solution. An approximation to the solution can be obtained by solving equation 3 as in the previous section. This solution works well as an initial guess for T_c provided that the segmented points in the TEE volume are relatively close to the needle tip (within several mm). Also, as previously mentioned, the stylus must be imaged over a wide range of angles and positions. Note that in the extreme case where all of the images are taken with the needle at the same orientation, there would be uncertainty in the calibration translation along the direction of the needle. A sketch of this optimization is given below.
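The following is a possible implementation sketch of the point-to-line optimization, assuming SciPy's Levenberg-Marquardt solver and the data arranged as N × 3 arrays; the function names are illustrative, and the initial guess would come from the point-to-point solution of equation 3.

```python
# Sketch of the point-to-line calibration of Section 2.3.2. scipy's
# least_squares with method='lm' provides Levenberg-Marquardt; the six
# parameters are the three Euler angles and three translations of T_c.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def params_to_T(params: np.ndarray) -> np.ndarray:
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", params[:3]).as_matrix()
    T[:3, 3] = params[3:]
    return T

def point_to_line_residuals(params, p_tee, p_l1, p_l2):
    """p_tee: N x 3 segmented needle points in the TEE frame.
    p_l1, p_l2: N x 3 line points (needle sensor origin and needle tip) in the
    probe sensor frame. Residual i is the numerator of equation (4); the
    constant denominator ||p_l2 - p_l1|| is dropped, as noted in the text."""
    T = params_to_T(params)
    p = (T[:3, :3] @ p_tee.T).T + T[:3, 3]
    return np.linalg.norm(np.cross(p_l2 - p_l1, p_l1 - p), axis=1)

def calibrate_point_to_line(p_tee, p_l1, p_l2, initial_T):
    # initial_T would typically be the point-to-point solution of equation (3)
    r0 = Rotation.from_matrix(initial_T[:3, :3]).as_euler("xyz")
    x0 = np.concatenate([r0, initial_T[:3, 3]])
    result = least_squares(point_to_line_residuals, x0,
                           args=(p_tee, p_l1, p_l2), method="lm")
    return params_to_T(result.x)  # estimate of probe_T_TEE
```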

2.4. Considerations for the speed of sound

For the best calibration accuracy, careful consideration must be given to the speed of sound (SOS) in water. The US machine assumes that the TEE probe is imaging human tissue, which has an approximate SOS of 1540 m/s. The SOS in water is different and depends on the water temperature; at room temperature (20°C), the SOS in pure water is approximately 1480 m/s [1]. There is therefore a speed-of-sound mismatch when the EM-tracked needle is imaged in water by the US. Although the water could be heated so that its SOS approaches 1540 m/s, maintaining the water at room temperature is more practical, especially for the lengthy calibration experiments carried out here.

When performing a calibration experiment in water, the voxel scaling can simply be changed to account for the SOS. For water at room temperature, the voxel size must be multiplied by 1480/1540 in each direction. A key point to consider regarding the SOS of water is that the calibration transformation obtained from experiments performed in water is not directly valid for experiments performed on real tissue: the physical sizes of the water and tissue volumes differ for the same depth setting. At an imaging depth of 10 cm there would be a physical size difference between tissue and water volumes of about 4 mm along the depth direction. The error incurred by using the incorrect calibration matrix depends on the location of the volume origin (Figure 5).

To illustrate the effect that the speed of sound has on the calibration, let's look at how the volumes compare for different speed-of-sound values. Consider a 2D case where the speed of sound in water is half of that assumed by the US machine and the origin of the US image is in the top-left corner of the image (the SOS mismatch is exaggerated for visual purposes). Figure 5(a) shows the US image of several points at various locations in the image, with the imaged points denoted by red ×’s and their actual physical locations denoted by blue ×’s. The arrow represents the calibration transformation between the US and EM sensor coordinate systems. Figure 5(b) displays the result of calibration using an incorrect SOS value. Figure 5(c) shows the result of calibration with the correct SOS value; note that the acquired image is physically smaller and therefore the origin of the image has moved. With the origin of the image in a different location, the calibration transformation changes as well (Figure 5(d)).

In this figure, the calibration transformation in water, calculated after scaling the voxels appropriately, corresponds to the interior square image of Figure 5(d). A volume acquired in real tissue will be larger, requiring a different calibration transformation, as represented by the outer square image of Figure 5(d). Therefore, care must be taken to convert the calibration transformation when imaging real tissue. This conversion can be done through a translation of the calibration transformation based on the origin of the volumes.
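The scaling correction itself is a one-line computation; a small sketch using the values quoted above is given here (function name is illustrative).

```python
# Speed-of-sound correction for voxel sizes reported by the scanner, which
# assume 1540 m/s in tissue; in room-temperature water the true voxel size is
# smaller by the factor 1480/1540.
SOS_TISSUE = 1540.0  # m/s, assumed by the US machine
SOS_WATER = 1480.0   # m/s, approximate value at 20 degrees C [1]

def corrected_voxel_size(nominal_size_mm, sos_actual=SOS_WATER, sos_assumed=SOS_TISSUE):
    scale = sos_actual / sos_assumed
    return tuple(s * scale for s in nominal_size_mm)

# At a 10 cm depth setting, the physical depth difference between water and
# tissue is roughly 100 mm * (1 - 1480/1540) ~= 3.9 mm, the ~4 mm quoted above.
```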

Figure 5.

An illustration of the effect of the speed of sound on calibration. (a) US imaged points with an incorrect SOS (red ×’s) compared to their correct physical locations (blue ×’s), where the SOS of the imaged points is half of the assumed SOS. (b) Calibration of points with an incorrect SOS. (c) Calibration of points with the correct SOS. (d) The calibration transformation changes for different SOS values depending on the image origin.

2.5. Statistical analysis

A statistical analysis can be performed on the data to remove outlying points and to analyze the overall accuracy of the calibration. We wish to determine the best points to include for a single calibration, as well as to find the best calibration transformation with some indication of how well the calibration model fits. A single calibration using all available points can only provide a simple indication of the spread of the points. The analysis performed here includes outlier removal using RANSAC followed by a cross-validation analysis of the calibration using the remaining points. This analysis yields a statistical view of the calibration accuracy using all available points.

A RANSAC framework is used for identifying outlying points [7]. The RANSAC algorithm examines random "minimal" combinations of points, where minimal means the minimum number of points required to perform the calibration. Three points are technically sufficient; however, we choose to use 5 points since the calibration method of Section 2.3.2 is not robust when only 3 points are used. A variation of RANSAC called MLESAC was used for our calibration, in which each random sample set is ranked based on a likelihood function rather than the number of points that fit the model [17]. A simplified sketch of such an outlier-rejection loop is shown below.
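Note that the sketch scores hypotheses by inlier count, whereas MLESAC [17] uses a likelihood score; the calibrate and residual callbacks, the distance threshold, and the iteration count are all illustrative assumptions.

```python
# Simplified RANSAC-style outlier rejection over the calibration points.
# 'calibrate' is either of the two calibration routines; 'residual' is the
# corresponding point-to-point or point-to-line error for one point.
import numpy as np

def ransac_calibration(points, calibrate, residual,
                       n_min=5, n_iter=2000, thresh_mm=2.0, rng=None):
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = rng.choice(len(points), size=n_min, replace=False)
        T = calibrate([points[i] for i in sample])          # fit on a minimal set
        errs = np.array([residual(T, p) for p in points])   # test against all points
        inliers = errs < thresh_mm
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers  # the calibration is then repeated on the inlier set
```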

Having eliminated outlying points, we attempt to find the best calibration model from the remaining set of points. A random sample validation framework is used as an analysis tool to choose the best calibration transformation. Within this framework, we also look at the convergence of the calibration error as more points are used. By examining the convergence of the error, we are able to have confidence that enough points have been acquired for a successful calibration.

For the measure of error, when a single calibration is carried out, the fiducial registration error (FRE) indicates how well the EM and TEE points fit together. This error is either the RMS point-to-point distance or the RMS point-to-line distance over all points used in the calibration. Additionally, the target registration error (TRE) measures how well the validation points fit the calibration model. These points were not used to calculate the calibration transformation and therefore provide a better indication of the accuracy of the calibration.
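For concreteness, with d_i denoting the point-to-point or point-to-line residual of point i under the estimated calibration, the two RMS error measures described above can be written as

\mathrm{FRE} = \sqrt{\frac{1}{N_{\mathrm{cal}}} \sum_{i \in \mathrm{cal}} d_i^{2}}, \qquad \mathrm{TRE} = \sqrt{\frac{1}{N_{\mathrm{val}}} \sum_{j \in \mathrm{val}} d_j^{2}}

where the FRE sum runs over the points used in the calibration and the TRE sum over the held-out validation points.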

To examine the convergence of the calibration error, 5 points are used initially, incrementing by one up to N − 3 points, where N is the number of points available for calibration. For each set of calibration points, three of the points that were not used in the calibration are used for validation (to calculate TRE). Because a single calibration at each interval would be biased towards the points included in the calibration, the calibration is repeated 1000 times at each interval. For each trial, the included points are drawn at random from the set of all points, and the validation points are likewise randomized. A sketch of this procedure follows.
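In Python-like pseudocode, the random-subsample loop described above might be organized as follows; the callback names and data layout are assumptions mirroring the earlier sketches.

```python
# Random-subsample validation: for each calibration set size k (5 .. N-3),
# run repeated randomized trials, calibrating on k points and computing TRE on
# 3 held-out points, then record the mean and standard deviation of TRE.
import numpy as np

def convergence_analysis(points, calibrate, residual, n_trials=1000, rng=None):
    rng = rng or np.random.default_rng()
    N = len(points)
    stats = {}
    for k in range(5, N - 2):                      # 5 up to N-3 calibration points
        tre = []
        for _ in range(n_trials):
            idx = rng.permutation(N)
            cal, val = idx[:k], idx[k:k + 3]       # k calibration, 3 validation points
            T = calibrate([points[i] for i in cal])
            tre.append(np.mean([residual(T, points[i]) for i in val]))
        stats[k] = (np.mean(tre), np.std(tre))
    return stats  # convergence point: where successive means/stds change below a threshold
```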

Given multiple FRE, TRE, and calibration values at each incremental number of calibration points, the means and standard deviations can be calculated and plotted. Example plots of this information are given in Figure 6. The final calibration transformation is taken to be the average over all of the trials beyond the convergence point. The convergence point is the point at which the TRE mean and standard deviation values become "constant", i.e. when their differential values fall below a threshold (set to 0.01 mm for the mean and 0.005 mm for the standard deviation). A black vertical line represents the convergence point in Figure 6(a). If a convergence point cannot be found, the calibration cannot be reliably used. Also, provided enough points are acquired, we can estimate the minimum number of experimental points required for a successful calibration.

Table 1.

Calibration results for a single calibration using all of the points.

Figure 6.

The resulting plots after the statistical analysis of the calibration. Note the scale of the Y-axis. (a) A plot of the convergence of the FRE (red) and TRE (blue) values as more points are used. For each number of points used, the average and standard deviation over 1000 trials is plotted. The black vertical line represents the approximate convergence point of these values. (b) The convergence of the x (red), y (blue), z (green) translation parameters of the resulting calibration matrices. (c) The convergence of the x (red), y (blue), z (green) Euler angle rotation parameters of the resulting calibration matrices.


3. Results and discussion

The calibration methods were analyzed for two separate TEE probes using each imaging mode (L3D and TS). Both probes were analyzed using each calibration method, except that TS volumes were only calibrated using method 2, since the volume is too narrow to image the tip of the needle precisely. The speed of sound during calibration was assumed to be 1480 m/s based on the approximate water temperature, and the TEE volumes were scaled accordingly. In what follows, method 1 refers to the method described in Section 2.3.1 and method 2 to the method described in Section 2.3.2.

3.1. Single calibration

We first examine the results of a single calibration (Table 1). The calibration result is calculated after discarding the 10% of points having the highest FRE. Looking at the results, method 2 gives a lower FRE value than method 1. The reason is that in method 1 the error is based on the point-to-point distance of the needle tip position, and there is more variation in segmenting the specific needle tip in the TEE volume; as a result, the distance will be larger. Also, the point-to-line distance is orthogonal to the needle direction, while the needle tip error need not be orthogonal to the needle itself, so the point-to-point error is expected to be larger.

It should also be noted that the calibration parameters differ between the two calibration methods. We cannot tell from this table which one is better, but we can attempt to understand why they are different. Although the translational terms are relatively similar, all within 1.5 mm of each other, the rotational terms show larger differences, up to about 4 degrees. Again, one main reason for these differences is the difficulty in segmenting the needle tip. There may also be a bias in the needle tip segmentation, since the low resolution of the TEE volume makes it nearly impossible to determine which point is the tip. Looking back at Figure 4, there is generally a significant bright region at the needle tip, and the tip is located somewhere within this region. It was taken to be the center of this region for this work, but it could in fact be closer to the end.

Looking at the Thick Slice mode, we see parameters similar to those of the Live 3D mode. In fact, only one parameter, the z-translation, is dissimilar between modes. Since the other five parameters are similar, we can infer that the TS volume is located at a similar orientation to the L3D volume, only translated along one axis. In this case, the differences in the five similar parameters would be due to calibration error. Higher errors are expected in TS mode because it is essentially a calibration in two dimensions. Also, many of the needle positions were along a similar direction, orthogonal to the slice axis, so there will be a higher error localizing the plane along this direction.

Table 2.

Calibration results for a calibration statistical analysis.

3.2. Calibration statistical analysis

Next, we examine the statistical analysis of the calibrations. The analysis described in Section 2.5 was performed on the data acquired from each probe in each imaging mode (L3D and TS). The resulting FRE, TRE, and calibration parameters are listed in Table 2. Figure 7 shows the convergence of the FRE and TRE as more points are included in the calibrations; note the X- and Y-axis scaling in these plots when making comparisons. The results from both calibration methods are displayed in this figure.

In Table 2, we see many similarities to Table 1. There is a lower FRE for method 2 than for method 1. The listed parameters are also quite similar between the two tables, with translational differences all within 1.5 mm and rotational differences all within 4 degrees. The differences may be attributed to the fact that RANSAC rejected a different number of outliers than the single calibration.

Much of the information from the statistical analysis is in the plots that are generated, as displayed in Figure 7. The TRE plots are the most important: they show how well the error for points not included in the calibration converges. In L3D mode, the two methods perform similarly. With only a few points used, the standard deviation is much higher for method 2; however, both methods converge to similar values, and both seem to converge within about 20-25 points. The convergence for the Thick Slice mode was much slower, requiring about 35 points.

3.3. Needle position error

Assessing the accuracy of the TEE calibration must be done using points not included in the calibration. Since the resulting calibration from the statistical analysis includes all of the points, a new set of points must be acquired. A new set of 10 TEE images of an EM-tracked needle at different positions was acquired along with the corresponding EM measurements. The needle tip was then manually segmented, and the average distance error between the needle tip in the TEE volume and the tip in the EM frame, after applying the calibration transformation, was calculated. The average over all 10 points was 1.39 ± 0.54 mm. This result was for Probe 2 using method 2. The error is reasonably close to the TRE seen in Table 2, as would be expected.

Table 3.

Calibration results comparing the default pixel scale with a pixel scale adjusted for the speed-of-sound difference in water.

3.4. Effect of speed of sound

To examine the effect of the speed of sound on the calibration, we perform a single calibration on all points using pixel scalings that assume speeds of sound of 1540 and 1480 m/s. The FRE values are given in Table 3. A more significant increase in error is seen for Probe 1 than for Probe 2. One possible reason is that Probe 2 was calibrated with much less variation in needle position: all of its segmented points lie in a small area, whereas for Probe 1 the points are spread out much more. Having the points more spread out means that the calibration will be more affected by scaling errors.


4. Conclusions

A framework for the calibration of a 3D TEE probe to an EM tracking system was examined. The calibration framework consists of three basic steps: data acquisition, needle segmentation, and calibration. We analyzed the calibration results of two separate TEE probes. Two different needle-based calibration techniques were explored: the first method requires the tip of the needle to be segmented from the TEE volume, while the second method allows any point along the needle to be segmented.

It is very difficult to say which calibration method is better. Method 1 shows faster convergence, requiring fewer images to be captured. However, it is more difficult to segment the needle tip from the TEE volume: a large imaging artifact at the needle tip prevents it from being precisely localized. The likelihood of introducing a bias in method 1 due to segmentation error is therefore much greater than in method 2, so method 2 is preferred. It does, however, require that more images be acquired; more than 30 images are recommended for good convergence properties. The speed of sound also clearly affects the calibration accuracy: FRE values improved after applying a scaling adjustment, indicating that the segmented points fit better.

One avenue for future work is to automate the calibration process. The presented calibration methods are not well suited to acquiring more than a few dozen images, since acquisition is performed one image at a time with the needle being manually segmented. Image acquisition could be made simpler by acquiring a continuous series of images while the user simply moves the needle around the field of view. Continuous data acquisition would introduce other problems, however, such as a temporal delay between the US acquisition and the EM tracking data. The needle segmentation could be improved through the use of more sophisticated, automatic needle identification methods; there are several published needle segmentation algorithms [2]. One way of segmenting the entire needle is to use a form of 3D Hough transform. If the needle can be automatically segmented, the calibration method described in Section 2.3.2 can be extended to a line-to-line registration method. More information extracted from these volumes can only improve the calibration consistency and accuracy.

Advertisement

Acknowledgement

The authors are sincerely grateful to Philips Research North America for providing the much-needed support to conduct the research presented here.

References

1. Ablitt J. (2000). Speed of sound in pure water. National Physical Laboratory. http://resource.npl.co.uk/acoustics/techguides/soundpurewater/
2. Aboofazeli M., Abolmaesumi P., Mousavi P., Fichtinger G. (2009). A new scheme for curved needle segmentation in three-dimensional ultrasound images, 1067-1070.
3. Arun K. S., Huang T. S., Blostein S. D. (1987). Least-squares fitting of two 3-D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence 9(5): 698-700.
4. Barratt D., Penney G., Chan C., Slomczykowski M., Carter T., Edwards P., Hawkes D. (2006). Self-calibrating 3D-ultrasound-based bone registration for minimally invasive orthopedic surgery. IEEE Transactions on Medical Imaging 25(3): 312-323.
5. Bergmeir C., Seitel M., Frank C., De Simone R., Meinzer H.-P., Wolf I. (2009). Comparing calibration approaches for 3D ultrasound probes. International Journal of Computer Assisted Radiology and Surgery 4(2): 203-213.
6. Chen T. K., Thurston A. D., Ellis R. E., Abolmaesumi P. (2009). A real-time freehand ultrasound calibration system with automatic accuracy feedback and control. Ultrasound in Medicine & Biology 35(1): 79-93.
7. Fischler M. A., Bolles R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6): 381-395.
8. Hsu P.-W., Prager R. W., Gee A. H., Treece G. M. (2009). Freehand 3D ultrasound calibration: a review. In: Sensen C. W., Hallgrímsson B. (eds), Advanced Imaging in Biology and Medicine, Springer Berlin Heidelberg, chapter 3, 47-84.
9. Hsu P.-W., Treece G. M., Prager R. W., Houghton N. E., Gee A. H. (2008). Comparison of freehand 3-D ultrasound calibration techniques using a stylus. Ultrasound in Medicine & Biology 34(10): 1610-1621.
10. Khamene A., Sauer F. (2005). A novel phantom-less spatial and temporal ultrasound calibration method. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2005), Lecture Notes in Computer Science, vol. 3750, Springer Berlin/Heidelberg, 65-72.
11. Mercier L., Langø T., Lindseth F., Collins L. D. (2005). A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound in Medicine & Biology 31(2): 143-165.
12. Muratore D. M., Galloway R. L. (2001). Beam calibration without a phantom for creating a 3-D freehand ultrasound system. Ultrasound in Medicine & Biology 27(11): 1557-1566.
13. Pagoulatos N., Haynor D. R., Kim Y. (2001). A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer. Ultrasound in Medicine & Biology 27(9): 1219-1229.
14. Poon T. C., Rohling R. N. (2005). Comparison of calibration methods for spatial tracking of a 3-D ultrasound probe. Ultrasound in Medicine & Biology 31(8): 1095-1108.
15. Prager R. W., Rohling R. N., Gee A. H., Berman L. (1998). Rapid calibration for 3-D freehand ultrasound. Ultrasound in Medicine & Biology 24(6): 855-869.
16. Rousseau F., Hellier P., Barillot C. (2005). Confhusius: a robust and fully automatic calibration method for 3D freehand ultrasound. Medical Image Analysis 9(1): 25-38.
17. Torr P., Zisserman A. (2000). MLESAC: a new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding 78(1): 138-156.
18. Weickert J. (1998). Anisotropic Diffusion in Image Processing. Teubner, Stuttgart.
19. Zhang H., Banovac F., White A., Cleary K. (2006). Freehand 3D ultrasound calibration using an electromagnetically tracked needle. Proc. SPIE 6141, Medical Imaging 2006, 61412M. San Diego, CA, USA.
