
Detection of Craters and Its Orientation on Lunar

Written By

Nur Diyana Kamarudin, Kamaruddin Abd. Ghani, Siti Noormiza Makhtar, Baizura Bohari and Noorlina Zainuddin

Submitted: 02 December 2011 Published: 26 September 2012

DOI: 10.5772/48526

From the Edited Volume

MATLAB - A Fundamental Tool for Scientific Computing and Engineering Applications - Volume 1

Edited by Vasilios N. Katsikis


1. Introduction

Craters are commonly used as research landmarks in preference to other landforms such as rocks, mountains and cliffs. Because of their simple, unique geometry and relatively stable appearance under different conditions, the authors selected craters as ideal landmarks for detection and spacecraft localization. This chapter focuses on the identification of craters in terms of their characteristics and on the detection of these visual features of the moon in order to determine a safe landing site for a lunar Lander. Cheng et al. proposed using craters as landmarks for navigation purposes because their geometric model allows robust detection under different lighting conditions. Moreover, craters appear in sufficient density on most planetary bodies of interest, and they are known to have fairly stable appearance over time and under different conditions and environments. These properties make them an appropriate type of landmark to observe. Currently, there are many on-going studies on crater detection and optical navigation systems for the moon, and these studies still adopt complex and broadly similar approaches, such as detection using the Hough transform. To move away from this limitation, the authors build a simple algorithm for detecting craters on the moon's surface based on two measurements: distance and angle. The advantages of this approach are threefold: (1) simplicity, (2) fast detection, and (3) its output can be used further in an ellipse reconstruction algorithm to determine the position and orientation of a crater. This chapter discusses the use of MATLAB and its image processing tools on an optical image, together with the fundamentals of morphological image detection.
In addition, some geometrical projection analysis for reconstructing an ellipse as a disc is evaluated in order to obtain the orientation of the disc (crater) for an autonomous optical navigation system.

1.1. Background and context

The first lunar exploration spacecraft, Luna 1, was flown to the moon in January 1959 [21]. Nonetheless, this mission had limited impact, as it did not land on the moon itself. Continuing the journey of these research pioneers, Luna 2 became the first spacecraft to land on the moon's surface in late 1959 [21]. This history of moon exploration motivates new researchers and moon explorers to find out more about the moon and its unique features.

Craters are a vital feature for estimating the age of the moon's surface when no sample specimen is available [10, 11]. An autonomous crater detection algorithm helps space research scientists reduce the laboratory work of manually identifying craters. Several automatic and semi-automatic crater detection algorithms have previously been proposed [12], but their accuracy was insufficient for crater chronology and they have yet to be fully tested for practical uses (for example, spacecraft navigation). Crater chronology means the history, or the sequence of events, that formed the craters on the moon's surface and the variety of their features. Optical landmark navigation using craters on a planetary surface was first used operationally by the Near Earth Asteroid Rendezvous (NEAR) mission [15, 16], to determine the spacecraft orbit and the range to the body during close flybys and low-altitude orbiting [13].

Many planetary missions, such as SELENE (Selenological and Engineering Explorer) and Clementine, have taken images of the moon's surface for on-going research. This attention to lunar exploration will help reveal previously unknown information and characteristics of planetary science, specifically of the moon's surface. In 2006, a Japanese lunar orbiting spacecraft was launched and was expected to return a large amount of useful data for on-going planetary research. It is known that images taken under low sun elevation, such as those from 'Lunar Orbiter' and 'Apollo', are suitable for crater detection, as mentioned before, because they make it easier to differentiate the 'light and dark patches' in the subsequent analysis.

Current descent and landing technology for planetary operations, such as lunar landings, yields a landing error ellipse larger than 30x100 kilometres, without terrain recognition or hazard avoidance capability. Most previous research on lunar pinpoint landing is limited by the requirement of an a priori reference map, built from past and future lunar imaging and digital elevation map data sets, in order to detect landmarks on a particular planetary surface. Due to this drawback, the authors propose a landmark-based detection algorithm, named the craters detection algorithm, which detects the main hazards on the moon's surface independently of such references, in order to produce a reliable and repeatable identification and detection system. This intelligent imagery-based algorithm detects craters based on their pointing direction relative to the sun and on a classification that differentiates the light and dark patches. Furthermore, by matching the detected craters against an internal lunar atlas, the Lander can further determine the spacecraft's motion and velocity relative to the lunar surface.

1.2. State of the art

A spacecraft mission on the moon involving Entry, Descent and Landing (EDL) requires precise and intelligent landing techniques. There have been numerous research efforts and various methods for determining landing sites that are safe for a moon Lander. Seeking a new technique that can search for hazard-free locations, this chapter proposes an intelligent algorithm, described as the craters identification algorithm, to recognize and detect craters consistently and repeatedly over most of the moon's surface. In addition, using geometric recognition techniques, the authors can also determine the position, attitude, velocity and angular velocity of the spacecraft (the four important parameters needed to land safely on the moon) by matching the detected craters to a database containing the 3D locations of the craters (the internal lunar atlas).

The lunar surface contains several hazardous features such as rocks, mountains, boulders, slopes and, mainly, craters. In this chapter, the authors choose craters as the primary hazards to detect because their geometric shape makes them easy to identify using image detection code. Over the years, craters have been created by the continuous bombardment of objects from outer space, such as meteorites, asteroids and comets, which strike the lunar surface at various speeds, typically 20 kilometres per second. In addition, unlike the earth, the moon has no atmosphere to protect it from collision with potential impactors.

Previous researchers such as Cheng and Ansar [5] proposed a feature detector and tracker algorithm that detects craters as mapped landmarks and matches them during EDL for spacecraft. In sequence, one can then determine the position and velocity of the spacecraft from the parameters obtained by this crater-matching technique. In this approach, craters are classified based on their size and on the orientation of their outlining ellipses, and databases of previously matched craters are used to detect the desired impact craters. Position is estimated from a subset of at least three matched craters using a linear pose estimation algorithm [6]. The actual velocity, dictated by the output of the image processing algorithm, is obtained by combining the average velocity between two image-based positions with the velocity computed by integrating the accelerometer readings.

There has also been preceding research on on-board hazard detection and avoidance for safe landing, which aims to autonomously detect hazards near the landing site and determine a new landing site free from those hazards [7]. To detect potential hazards on the moon's surface, there are specific requirements agreed by the ALHAT project: the system must detect hazards that are 0.3 metres tall or higher and slopes that are 5 degrees or greater, mainly for craters. Moreover, the requirement is not just to detect hazards meeting these criteria; the system must also be able to find a safe landing site with a diameter of around 15 metres over most of the moon's surface. The proposed system achieves this by using imaging LIDAR sensors to obtain direct measurements of the lunar surface elevation from high altitude. The probability that a hazard-free landing site exists is determined as a function of Lander diameter, hazard map area and rock coverage, and together these procedures guide the LIDAR sensors and the overall Navigation and Control architecture.

Figure 1.

Terrain sensing and recognition functions for safe land site determination [5].

Hazard Detection and Avoidance (HDA) and Terrain Relative Navigation (TRN) are on-board capabilities that use sensing and computing technologies to achieve safe and precise optical terrain navigation to within 100 metres of a predetermined location on the lunar surface [8]. They are based on three functions, global position estimation, local position estimation and velocity estimation, as illustrated in Figure 1 above. All these functions can be realized using passive imaging or active range sensing. One TRN approach uses pattern matching, which requires an a priori reference map built from past and future imaging and digital elevation map datasets (a map-dependent system). The pattern matching approach applies landmark (crater) matching instead of patch correlation and employs a passive visible imagery system. Several parameters are required, such as crater diameters and the relative distances and angles between landmarks. Craters are first identified in a map of the landing site and stored in a database. During the landing process, craters are detected in the descent imagery and matched against the database. Only then is the position of the Lander determined.

Continuous research on developing greyscale imagery, mainly for detecting landforms, has been carried out over the past few years. One approach to detecting craters on a planetary body uses the Hough Transform for shape detection [9]. The proposed algorithm focuses on detecting the sensor-independent geometric features of impact craters (i.e. centre position and crater radius), with the identification of sensor-dependent geometric features (i.e. rim height) as a subsequent task. The use of a simple circular model for crater detection makes it possible to exploit the algorithm in different operational environments (i.e. recognition of craters on Earth and other planetary bodies in the Solar System) using data acquired by dissimilar sensors such as Synthetic Aperture Radar (SAR). Because of its computational complexity, the Hough Transform is not applied directly to the original image; some pre-processing steps are necessary to obtain a better result and better system performance, as illustrated in Figure 2 below. The Hough Transform was introduced by Paul Hough (1962) for the identification of lines in pictures. For a circle of radius r and centre coordinates (a, b), the parametric representation is:

R (x, y) = {x = a + r cos θ, y = b + r sin θ}

where θ = [0, 2π]

Each image point (x, y) is mapped, over the parameters (a, b, r), onto a cone surface with the following representation:

H (a, b, r) = {a = x - r cos θ, b = y - r sin θ}

where θ = [0, 2π]
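The voting scheme implied by these two mappings can be sketched in a few lines. The following is an illustrative NumPy example (the chapter's own work uses MATLAB, and the function name here is a hypothetical helper): for a fixed, known radius r, each edge point votes along its cone slice in (a, b) space, and the accumulator peak gives the circle centre.

```python
import numpy as np

def hough_circle_centre(edge_points, r, shape, n_theta=360):
    """Accumulate votes in (a, b) space for circles of known radius r:
    each edge point (x, y) votes along a = x - r cos(theta), b = y - r sin(theta)."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        a = np.rint(x - r * np.cos(thetas)).astype(int)
        b = np.rint(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)            # one vote per (point, theta)
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return tuple(int(i) for i in peak)               # the most-voted centre

# synthetic rim: 100 edge points on a circle of radius 10 centred at (30, 40)
ts = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
rim = [(30 + 10 * np.cos(t), 40 + 10 * np.sin(t)) for t in ts]
print(hough_circle_centre(rim, r=10, shape=(64, 64)))   # -> (30, 40)
```

In practice the radius is unknown, so the accumulator gains a third dimension over candidate radii, which is precisely why pre-processing is needed to keep the computation tractable.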

Figure 2.

Result obtained using Hough Transform in SAR (Synthetic Aperture Radar) Image [7].

There are also multiple-approach algorithms for detecting craters on the lunar surface, as proposed by Sawabe, Matsunaga and Rokugawa (2005). It is known that a crater's features change with its size: small craters form a simple circle, and the larger the crater, the more complex its shape becomes [3]. This change in features makes it difficult to detect craters of different sizes with a single approach. In their data-dependent algorithm, a crater is defined as a circular topographical feature in images, with a minimum detectable crater size of two pixels in radius [13]; the algorithm uses data from SELENE (Selenological and Engineering Explorer) to visualize the surface geological settings and the subsurface structure of the moon. These approaches differ from the authors' research, which considers the crater to be an ellipse in the detection algorithm and which proposes a data-independent algorithm. Four different methods were used in Sawabe et al.'s crater detecting algorithm to find (1) 'shady and sunny' patterns in images with low sun angle, (2) circular features in edge images, (3) curves and circles in thinned and connected edge lines, and (4) discrete or broken circular edge lines using a fuzzy Hough transform. The detected craters are also classified by spectral characteristics derived from Clementine UV-Vis multi-spectral images [13]. The main advantages of the authors' proposed algorithm compared with this previous work are that the detection algorithm is uncomplicated and has an outstanding detection success rate. These detection methods and the determination of their accuracy are evaluated in the experimental results later.

In the Landmark Based Pinpoint Landing Simulator (LAMPS) by Cheng and Ansar, a robust yet complex crater detection algorithm has been developed for autonomous spacecraft navigation. Based on their research, craters may vary in appearance with their age and size; for example, younger craters may have sharper, more regular rims [14]. Spatial densities of craters also form the primary basis for assessing the relative and absolute ages of geological units on planetary surfaces [14]. However, typical craters have an elliptical rim, with a light-to-dark pattern dictated by the sun azimuth and elevation as well as the crater's own topography. This is very similar to the authors' own approach of defining a crater as a composition of a light and a dark patch. Technically, Cheng and Ansar's algorithm consists of five major steps: edge detection, rim edge grouping, ellipse fitting, precision fitting and crater confidence evaluation. Another important property of a landmark-based detection system is its use for spacecraft pinpoint landing (PPL) as an autonomous navigation method. To decrease the probability of landing on a hazardous surface, one of two safe landing approaches must be taken: crater hazard detection and avoidance, which detects all hazardous craters upon landing on the moon's surface, or pinpoint landing, which determines the Lander's position in real time and guides the spacecraft to a safe, free landing site away from those hazards (craters).

According to recent studies of the size and frequency of craters on the Martian surface [17], a sufficient number of adequately sized craters for determining spacecraft position is very likely to be found in descent imagery. For instance, in an image taken with a camera field of view of 45 degrees from 8 km above the surface, there will be on average 94 craters of less than 200 m in diameter. Ideally, these craters can be used as landmarks to match against a pre-existing crater database and thereby determine the position of the Lander. This pattern matching approach is left as future work in the authors' research. For the time being, the authors propose to use a projection geometry concept to determine the orientation and position of the spacecraft using two vital equations, discussed later.

The proposed pinpoint landing, shown in Figure (3) below, proceeds as follows. First, the landing site on the targeted body (the moon's surface, Mars' surface, etc.) is pre-determined on Earth using orbital imagery, and the landmarks within the landing ellipse (red ellipse) are mapped. During EDL, the Lander's preliminary position relative to the landmarks and the selected landing site is determined. The Lander's position is then frequently tracked and guided using continuous updates of position and velocity all the way through the descent.

Figure 3.

Craters pattern matching for position estimation of the spacecraft during EDL [14]


2. Methodology

2.1. Real time craters detection algorithm

The reliable topography-based craters detection algorithm that the authors propose in this chapter is based mainly on real (optical) image analysis and morphological image analysis. Several stages of coding are required to obtain a satisfactory result, provided that the sun's elevation angle is known. This algorithm is suitable for any optical images taken from 'Lunar Orbiter' or 'Apollo' sources. The algorithm flowchart is presented in Figure (4) below:

Figure 4.

Flowchart of the proposed Craters Detection Algorithm

In the proposed craters detection algorithm, the authors use an original optical image of the moon's surface. This image is originally a colour image. As a pre-processing step, the authors apply a colour conversion from the RGB (red, green, blue) image to an HSV (hue, saturation, value) image using the rgb2hsv function in MATLAB. Converting the RGB image to HSV transforms the colours from RGB space into HSV space: value (brightness) gives the amount of light in the colour; hue describes the dominant wavelength, which determines the colour type; and saturation indicates the vibrancy of the colour. The main reason for this colour conversion is that the authors analyze only the value component, in the range 0 (dark) to 1 (light), which is then used in the thresholding calculation. Moreover, the HSV space is popular for analyzing images because of its similarity to the way humans perceive colour. The RGB and CMYK (cyan, magenta, yellow and black) colour models are additive and subtractive models respectively, defining colour in terms of combinations of primary colours, whilst HSV encapsulates information about a colour in terms familiar to human perception: what colour is it? How light or dark is it? How vibrant is the colour?
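Only the value plane is used in the thresholding that follows. As an illustration of what the conversion computes, here is a NumPy sketch of the RGB-to-HSV mapping; the chapter itself simply calls MATLAB's rgb2hsv, so this function and its variable names are assumptions for illustration only.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV for float images in [0, 1].
    V = max(R, G, B); S = chroma / V; H scaled to [0, 1) as rgb2hsv does.
    (Illustrative sketch of the standard hexcone conversion.)"""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)                                  # Value: amount of light
    c = v - img.min(axis=-1)                              # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0.0)   # Saturation: vibrancy
    safe_c = np.where(c > 0, c, 1)                        # avoid divide-by-zero
    h = np.zeros_like(v)
    h = np.where(v == r, ((g - b) / safe_c) % 6, h)       # Hue: dominant wavelength
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c > 0, h / 6.0, 0.0)                     # grey pixels get hue 0
    return np.stack([h, s, v], axis=-1)

pixel = np.array([[[1.0, 0.0, 0.0]]])                     # one pure-red pixel
print(rgb_to_hsv(pixel)[0, 0])                            # hue 0, S = 1, V = 1
```

For the detection pipeline, only the last channel (V) of the result would be carried forward into the thresholding stage.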

For thresholding, an approach similar to Sawabe et al.'s [13] is implemented and is discussed thoroughly in the technical section. The purpose of applying a threshold is to separate the craters on the moon's surface into light and dark patch groups. Thresholding is the simplest method of image segmentation; this segmentation classifies pixels into object pixels (light and dark patches) and non-object pixels (background). In this classification of images, it is clearly seen that a crater is formed by two different patterns, a light patch and a dark patch, under an oblique sun beam, and the authors use this property of craters to analyze and detect them on the lunar surface. The light and dark patch patterns are distinguished by pixel intensity: a light patch consists of pixels whose values are above the calculated threshold brightness, whilst a dark patch consists of pixels whose values are below that same threshold.

Furthermore, in the morphological image analysis, erosion and dilation are applied in combination to the tested image. These two operations can be described simply as adding or removing pixels from the binary image according to rules that depend on the pattern of neighbouring pixels. Erosion removes pixels from features in an image, or equivalently turns OFF pixels that were originally ON [20]. Fundamentally, erosion can entirely remove extraneous pixels representing point noise or line defects that are only a single pixel wide. Dilation, the complementary operation, is widely used to add pixels: the dilation rule is to add (set to ON) any background pixel that touches another pixel already part of a foreground region [20]. This adds a layer of pixels around the periphery of all regions, which increases their dimensions and may cause regions to merge or expand. The image processed using erosion and dilation is shown in Figure (9) below for better visualization; these two methods are discussed fully in the experimental results section (craters detection algorithm results) later.
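A minimal sketch of these two operations on a binary image, assuming a 3x3 square structuring element (an illustration; the chapter uses MATLAB's morphological operators):

```python
import numpy as np

def _neighbourhoods(img):
    """All nine 3x3-shifted copies of img, padded with OFF outside the image."""
    h, w = img.shape
    p = np.pad(img, 1, constant_values=False)
    return [p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def erode(img):
    """A pixel stays ON only if its whole 3x3 neighbourhood is ON
    (removes single-pixel noise and peels one layer off every region)."""
    return np.logical_and.reduce(_neighbourhoods(img))

def dilate(img):
    """A pixel turns ON if any pixel in its 3x3 neighbourhood is ON
    (adds one layer around every region; touching regions may merge)."""
    return np.logical_or.reduce(_neighbourhoods(img))

img = np.zeros((7, 7), dtype=bool)
img[1:6, 1:6] = True   # a 5x5 blob
img[0, 6] = True       # one isolated noise pixel
cleaned = dilate(erode(img))   # erosion then dilation ("opening")
print(int(cleaned.sum()), bool(cleaned[0, 6]))  # -> 25 False
```

Applying erosion first and then dilation removes the isolated pixel while restoring the blob to its original 5x5 extent, which is exactly the noise-cleaning behaviour described above.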

For centroid determination, the authors use the 'regionprops' function to obtain the centre of each light and dark blob classified previously. The authors then link the blobs together in a single picture; the final image comprises groups of clusters or patches that correspond to craters on the moon's surface. These groups of blobs (light and dark patches) are then used to measure the minimum distance and the angle between each pair. First, the algorithm calculates the distances between all patches and keeps only the minimum ones; light patches at a minimum distance from an adjoining dark patch are provisionally considered craters. Second, the angle between the known input sun vector and each pairing-patch vector is calculated using the scalar (dot) product. Technically, all these methods are elaborated further in the technical section and the experimental results section.

2.2. Geometrical analysis

The geometric analysis proceeds in several stages, as shown in Figure 5 below:

Figure 5.

Flowchart of Geometrical Analysis

The authors consider a crater in a real image to be an ellipse. This method converts an ellipse in a 2D image into a circle on a plane using conical projection analysis: any ellipse appears as a circle from a certain point of view, that is, an ellipse is projected into a circle at a certain projection point. In the final stage, this method calculates the orientation and position of a crater (modelled as a disc) that was detected by the proposed detection algorithm.

2.2.1. Fundamentals of ellipse and rotation matrix

Mathematically, an ellipse can be defined as the locus of all points in the plane whose distances R1 and R2 (see Figure 6 below) to two fixed points sum to the same constant, written as:

R1 + R2 = 2a

where a is the semi-major axis and the origin of the coordinate system is at the centre, midway between the foci (-c, 0) and (c, 0). These foci are chosen to be consistent with the bounding ellipse algorithm equation. An ellipse is sometimes defined instead as the conic section obtained by cutting a circular conical or cylindrical surface with an oblique plane. An ellipse has five vital parameters: the semi-major axis a, the semi-minor axis b, the centre x-coordinate Xc, the centre y-coordinate Yc, and the angle of rotation ω. An ellipse with all these parameters is illustrated below:

Figure 6.

Ellipse

An ellipse lying along the horizontal x-axis with foci F1(-c, 0) and F2(c, 0), as shown in Figure (6) above, has the equation

x²/a² + y²/(a² - c²) = 1, where a > c for the ellipse.

For an ellipse, the distance c between the centre and a focus is less than the distance a between the centre and a vertex, so a² - c² is positive and a new constant b > 0 is introduced by setting [2]:

b² = a² - c² for ellipses

Hence the equation of an ellipse with foci F1(-c, 0) and F2(c, 0) simplifies to

x²/a² + y²/b² = 1, where 0 < b < a

For both the hyperbola and the ellipse, a number e, called the eccentricity, is introduced by setting [2]:

e = c/a = √(a² - b²)/a
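As a quick numerical check of these relations (an illustrative example, not taken from the chapter), take a = 5 and b = 3:

```python
import math

a, b = 5.0, 3.0                  # semi-major and semi-minor axes
c = math.sqrt(a**2 - b**2)       # centre-to-focus distance, from b^2 = a^2 - c^2
e = c / a                        # eccentricity, equivalently sqrt(a^2 - b^2) / a
print(c, e)                      # -> 4.0 0.8
```

An eccentricity near 0 means the crater rim is viewed almost face-on (nearly circular), while an eccentricity near 1 corresponds to a strongly oblique view.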

In this mathematical and geometrical analysis, the authors first review the ellipse equations and the rotation matrix, which are then used in the bounding ellipse algorithm and in the reconstruction of the ellipse as a circle on a 2-D plane. These methods make it possible to determine the orientation and position of the spacecraft during Entry, Descent and Landing (EDL) applications. The rotation of an ellipse is illustrated in Figure 7 below:

Figure 7.

Rotation Ellipse

In the S frame, as in Figure 7 above, it can be shown using the standard ellipse equation that x″ and y″ satisfy

(x″)²/a² + (y″)²/b² = 1, where 0 < b < a

Where

\[
\begin{pmatrix} x'' \\ y'' \end{pmatrix}
= \begin{pmatrix} \cos\omega & \sin\omega \\ -\sin\omega & \cos\omega \end{pmatrix}
\begin{pmatrix} x' \\ y' \end{pmatrix}
\tag{E1}
\]
\[
= \begin{pmatrix} x'\cos\omega + y'\sin\omega \\ -x'\sin\omega + y'\cos\omega \end{pmatrix}
\tag{E2}
\]

and

\[
\begin{pmatrix} x' \\ y' \end{pmatrix}
= \begin{pmatrix} x - x_0 \\ y - y_0 \end{pmatrix}
\tag{E3}
\]

After substituting these three equations, the formula for the rotated ellipse, which is also the bounding ellipse equation, becomes:

\[
\frac{\big((x-x_0)\cos\omega + (y-y_0)\sin\omega\big)^2}{a^2}
+ \frac{\big(-(x-x_0)\sin\omega + (y-y_0)\cos\omega\big)^2}{b^2} = 1
\tag{E4}
\]

Again, the rotated ellipse can also be expressed through this formula:

\[
(\tilde{x}-\tilde{c})^T A\,(\tilde{x}-\tilde{c})
= (x-x_0)^2 A_{11} + (x-x_0)(y-y_0)(A_{21}+A_{12}) + (y-y_0)^2 A_{22} = 1
\tag{E5}
\]

where A11, A21, A12, A22 are the entries of the matrix A determined from the bounding ellipse algorithm, which has the form

\[
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}
\tag{E6}
\]

Thus, by comparing equations (E4) and (E5) above, the authors express the ellipse parameters a, b, the centre (Xc, Yc) and ω in terms of the entries A11, A21, A12, A22 when drawing the bounding ellipse around the target patch:

\[
a = \left(\frac{2\sin 2\omega}{(A_{11}+A_{22})\sin 2\omega + 2A_{12}}\right)^{1/2}
\tag{E7}
\]
\[
b = \left(\frac{2\sin 2\omega}{(A_{11}+A_{22})\sin 2\omega - 2A_{12}}\right)^{1/2}
\tag{E8}
\]
\[
\omega = \frac{1}{2}\tan^{-1}\!\left(\frac{2A_{12}}{A_{22}-A_{11}}\right)
\tag{E9}
\]

Furthermore, the remaining two ellipse parameters, Xc and Yc, are obtained directly from the bounding ellipse algorithm.
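For readers who prefer a numerical route, the same parameters can also be recovered from the matrix A by eigendecomposition, since the eigenvalues of A are 1/a² and 1/b² and the eigenvector of the smaller eigenvalue points along the major axis. The sketch below is an alternative to the closed-form expressions above, not the chapter's derivation:

```python
import numpy as np

def ellipse_params_from_A(A):
    """Recover (a, b, omega) from the quadric (x - c)^T A (x - c) = 1.
    Eigenvalues of A are 1/a^2 and 1/b^2; the eigenvector belonging to the
    smaller eigenvalue gives the major-axis direction."""
    w, v = np.linalg.eigh(A)                    # eigenvalues in ascending order
    a = 1.0 / np.sqrt(w[0])                     # semi-major axis
    b = 1.0 / np.sqrt(w[1])                     # semi-minor axis
    omega = np.arctan2(v[1, 0], v[0, 0])        # rotation of the major axis
    return float(a), float(b), float(omega % np.pi)  # omega defined modulo pi

# sanity check: build A for a known ellipse (a = 4, b = 2, omega = 30 degrees)
a0, b0, w0 = 4.0, 2.0, np.radians(30)
R = np.array([[np.cos(w0), -np.sin(w0)],
              [np.sin(w0),  np.cos(w0)]])
A = R @ np.diag([1 / a0**2, 1 / b0**2]) @ R.T
print(tuple(round(p, 6) for p in ellipse_params_from_A(A)))
```

This route avoids any trigonometric sign ambiguity, since the eigendecomposition fixes ω only modulo π, which is all an undirected major axis requires.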

2.2.2. Bounding ellipse algorithm

This is the first method used in the geometrical analysis; its purpose is to obtain the bounding ellipse around the targeted patch, which is then used in the final stage of this section, the reconstruction (projection) of the ellipse to a circle in a 2-D plane. The bounding ellipse takes as input the five ellipse parameters described previously: the semi-major axis a, the semi-minor axis b, the centre x-coordinate Xc, the centre y-coordinate Yc and the rotation angle ω. These parameters are embedded in the A matrix produced by the algorithm as an output, along with the Xc and Yc values. In order to draw the bounded ellipse, the authors express the ellipse parameters in terms of the entries A11, A21, A12, A22 of the A matrix; the derivations are described comprehensively in the previous section. The results of the bounding ellipse algorithm are then used to reconstruct the 2-D ellipse as a 2-D circle in a plane using the reconstruction ellipse algorithm. Two vital disc reconstruction equations are used in this algorithm to determine the orientation of the spacecraft, as described below [18]; nonetheless, in this chapter the authors determine only the orientation:

\[
\tilde{q} = \frac{B}{\sin\alpha}\left\{\cos\beta\,\tilde{R}_1
\pm \frac{1}{\cos\alpha}\sin\beta\sqrt{\tan^2\alpha - \tan^2\beta}\,\tilde{R}_2\right\}
\tag{E10}
\]
\[
\tilde{p} = \pm 2\cot\alpha\left\{\sin\beta\cos\alpha\,\tilde{R}_1
\pm \frac{1}{\cos\beta}\sqrt{\tan^2\alpha - \tan^2\beta}\,\tilde{R}_2\right\}
\tag{E11}
\]

where:

q̃ = the position of the reconstructed disc

p̃ = the orientation of the reconstructed disc

B = the radius of the disc (the crater, which the authors model as a disc)

α = the arc length of the semi-major axis

β = the arc length of the semi-minor axis

R̃1, R̃2 = column vectors of the rotation matrix

These are the reconstruction (projection) ellipse equations, in which the half-length along the axis of symmetry, A, is taken to be 0, since the authors model the bounded crater as a flat disc; this A is not the matrix A determined previously. As can be seen, the equation for p̃, the orientation of the disc, is independent of the radius parameter B, which means the reconstruction algorithm can quickly determine the orientation of the spacecraft relative to the moon's surface. There is an ambiguity in the p̃ equation, namely the ± sign. Here p̃ always takes the positive value rather than the negative one, because a crater's orientation points upward towards the camera; in other words, a disc is only visible if its outward face points towards the camera rather than away from it [18]. This is a discerning case: when one considers how a human estimates an object's position, its exact size is not needed to find the direction of its centre or its orientation. In contrast, the equation for q̃ depends on the disc radius B, and the authors have no knowledge of the crater's radius or of its distance from the camera. Therefore, solving for q̃ is an interesting piece of future work, required if the authors want to find the position of the spacecraft during EDL operations.


3. Technical sections

Different mathematical equations and fundamentals are applied during this project's development. To make the project run smoothly as planned, the authors have divided its logical structure into three sections:

3.1. Craters detection algorithm

This project involves the development of the detection algorithm using the MATLAB image processing toolbox and an image of the moon's surface. It is based mainly on binary image and morphological image analysis. In the first stage, the authors introduce the concept of HSV (hue, saturation, value) as well as morphological operations such as dilation and erosion to exploit the real image. The hue is expressed as an angle around a colour hexagon, with the red axis taken as the 0° axis. The value is measured along the axis of the cone: the end of the axis with V = 0 is black, and the end with V = 1 is white [4]. The saturation is the purity of the colour, measured as the distance from the V axis in the hue-saturation-value hexagonal cone.

The HSV colour system is based on cylindrical coordinates. Mathematically, converting from an RGB (red, green, blue) image to HSV amounts to transforming Cartesian coordinates to cylindrical coordinates. To reduce the complexity of the detection analysis, the authors analysed a 2-D optical image. The image threshold is set using the approach of Sawabe, Matsunaga and Rokugawa for classifying images, as shown in the conditions below. Using this approach, images are classified into two components, light and dark patches, also known as 'sunny and shady patches'. Ideally, these two groups of patches are easily recognizable if the image was taken under low sun elevation. A pattern of light and dark patches is detected when all of the following conditions [13] are satisfied:

Rmin < Rm - σ

Rmax > Rm + σ

Pmin < Pmax

where Rmin indicates the minimum pixel value, Rmax the maximum pixel value, Rm the average pixel value and σ the standard deviation in the small area containing the targeted patches. Pmin and Pmax indicate the positions of the minimum- and maximum-value pixels along the direction of the sun radiation, or sun vector [13].
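A sketch of this test on a small image area follows. This is illustrative Python, and it interprets Pmin < Pmax as comparing the projections of the darkest and brightest pixel positions onto the sun vector, which is an assumption about the intended reading of the condition:

```python
import numpy as np

def is_patch_candidate(region, sun_vec):
    """Sawabe-style test on a small image area:
    Rmin < Rm - sigma, Rmax > Rm + sigma, and Pmin < Pmax, where the last
    condition is interpreted here (an assumption) as comparing projections of
    the darkest and brightest pixel positions onto the sun vector."""
    rm, sigma = region.mean(), region.std()
    rmin, rmax = region.min(), region.max()
    p_min = np.unravel_index(np.argmin(region), region.shape)   # darkest pixel
    p_max = np.unravel_index(np.argmax(region), region.shape)   # brightest pixel
    s_min, s_max = np.dot(p_min, sun_vec), np.dot(p_max, sun_vec)
    return bool((rmin < rm - sigma) and (rmax > rm + sigma) and (s_min < s_max))

# dark-to-light gradient along the sun direction resembles a crater pattern
region = np.array([[0.1, 0.5, 0.9],
                   [0.1, 0.5, 0.9],
                   [0.1, 0.5, 0.9]])
sun = np.array([0.0, 1.0])                            # sun along the +column axis
print(is_patch_candidate(region, sun))                # -> True
print(is_patch_candidate(np.full((3, 3), 0.5), sun))  # flat area -> False
```

The first two conditions demand strong contrast (both a markedly dark and a markedly bright pixel), while the third ties the shadow-to-highlight order to the sun direction.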

Apart from that, two fundamental morphological operations are applied in the algorithm: dilation and erosion. Dilation is an operation that grows or expands objects in a binary image. A binary image can be represented by a matrix of 1’s and 0’s, but usually it is convenient to show only the 1’s. The dilation of an image A by a structuring element B is denoted by A ⊕ B and is defined as [1]:

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }   (12)

Conversely, erosion shrinks or thins objects in a binary image, the opposite of the dilation operation. The mathematical definition of erosion can be expressed as [1] below; the result of the eroded and dilated image is shown in Figure 8, which can be compared with the original image in Figure (9):

A ⊖ B = { z | (B)z ∩ Aᶜ = ∅ }   (13)
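The two operations can be sketched in a few lines of pure Python on 0/1 grids. This is an illustrative stand-in for MATLAB's imdilate/imerode; a symmetric structuring element is assumed, so the reflection B̂ in the dilation definition coincides with B:

```python
def dilate(img, se):
    """Binary dilation: set a pixel if the structuring element centred
    there overlaps any foreground pixel of img."""
    h, w = len(img), len(img[0])
    sh, sw = len(se), len(se[0])
    cy, cx = sh // 2, sw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for j in range(sh):
                for i in range(sw):
                    if se[j][i]:
                        yy, xx = y + j - cy, x + i - cx
                        if 0 <= yy < h and 0 <= xx < w and img[yy][xx]:
                            out[y][x] = 1
    return out

def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring element
    centred there fits entirely inside the foreground."""
    h, w = len(img), len(img[0])
    sh, sw = len(se), len(se[0])
    cy, cx = sh // 2, sw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ok = True
            for j in range(sh):
                for i in range(sw):
                    if se[j][i]:
                        yy, xx = y + j - cy, x + i - cx
                        if not (0 <= yy < h and 0 <= xx < w and img[yy][xx]):
                            ok = False
            out[y][x] = 1 if ok else 0
    return out
```

Dilating a single pixel by a 3×3 element grows it into a 3×3 block; eroding that block by the same element shrinks it back to the single pixel, illustrating that the two operations are duals.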

Figure 8.

Image after dilation and erosion were applied

The previous stages are then followed by another image analysis step, the regionprops function, where the region indicates the patch the authors want to analyze. This region-descriptor function is used to plot the centroid of each light and dark patch and to gather the light and dark patches back together in a single picture, which is then used to calculate the minimum distance and the angle between each pair relative to the known sun direction. Regionprops takes a labelled matrix as input and uses the x-axis and y-axis to describe the horizontal and vertical coordinates respectively.

To classify and consider this region of interest as a crater, the authors proposed two ways of detection which are minimum distance measurement and angle detection based on the known sun vector. In the distance measurement, the minimum distances between each of the centroids calculated previously are determined using this formula:

|Distance| = √((x₂ − x₁)² + (y₂ − y₁)²) = |r|   (23)

where |Distance| is the distance between the two patches of a pairing (light and dark), x₂ and x₁ are the x-components of the two centroids, and y₂ and y₁ are their y-components. For the angle determination, the authors provide the sun vector as an input, which is known from the sunray effect at the craters (the positions of the light and dark shades). The algorithm then computes the angle between each pairing of blobs (joined by its minimum distance) and the sun vector, using the scalar (dot) product of vector analysis:

r · s = |r| |s| cos θ   (24)

where r = vector of each pairing of blobs

s = sun vector

|r| = distance/length of each pairing of blobs

|s| = length of the sun vector (a unit vector, so |s| = 1)

θ = angle between the sun vector and the vector of each pairing of blobs

The angle of each pairing of blobs is calculated using the above equation; those with a minimum angle to the known sun vector and a minimum distance calculated previously are considered and stored as craters. Conversely, those that run against the sun direction and have the maximum angles to the sun vector are discarded as noise. At the end of these two measurements (distance and angle), the authors obtained the best eight craters using this craters detection algorithm.
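The distance-then-angle selection can be sketched as follows. This Python sketch pairs each light-patch centroid with its nearest dark-patch centroid and keeps the pair only if its vector lies close to the sun vector; the greedy nearest-neighbour strategy, the 20° cutoff and the function name are illustrative assumptions, not the authors' exact procedure:

```python
import math

def pair_craters(light, dark, sun_angle_deg, max_angle_deg=20.0):
    """Join each light centroid to its nearest dark centroid (the |Distance|
    formula), then keep the pair only if the pair vector lies within
    max_angle_deg of the sun vector (the dot-product test)."""
    s = (math.cos(math.radians(sun_angle_deg)),
         math.sin(math.radians(sun_angle_deg)))       # unit sun vector, |s| = 1
    craters = []
    for lx, ly in light:
        # nearest dark patch by Euclidean distance
        dx, dy = min(dark, key=lambda d: math.hypot(d[0] - lx, d[1] - ly))
        rx, ry = dx - lx, dy - ly
        r = math.hypot(rx, ry)
        if r == 0:
            continue
        cos_t = (rx * s[0] + ry * s[1]) / r           # r·s / (|r||s|)
        theta = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
        if theta <= max_angle_deg:
            craters.append(((lx, ly), (dx, dy), r, theta))
    return craters
```

A pair aligned with the sun direction is kept, while a pair at right angles to it is rejected as noise, mirroring the selection described above.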

3.2. Geometrical analysis

3.2.1. Bounding ellipse algorithm

The geometrical part is mainly based on mathematical analysis of vector calculus, the rotation matrix and ellipse equations, used to determine two major features: the orientation of a crater, which has been modelled as a disc, and the position of the disc after projection. These two features are vital for a safe landing on the moon’s surface. Firstly, from the previous code, the authors choose one of the best eight craters in the list and run the bounding ellipse algorithm around the targeted crater. The bounding ellipse algorithm takes as input the P matrix, which is composed from the targeted crater itself. The tolerance of the bounding ellipse is set to obtain a more precise result. As discussed above in the methodology section, the output of this algorithm is the matrix A and c, the centre of the ellipse. The matrix A has the form A = [A₁₁ A₁₂; A₂₁ A₂₂]. The authors highlight this because the other important ellipse parameters are embedded in this A matrix: the semi-major axis a, the semi-minor axis b, and the angle of rotation ω. As a result, the final equations (13), (14) and (15) derived in the methodology section are used to draw the ellipse binding the targeted patches. The next step is to draw the ellipse using another algorithm, the parameters above and the centre of the ellipse, which is obtained directly from the bounding ellipse algorithm. After the algorithm calculated all the parameter values, the semi-major axis a and the semi-minor axis b turned out to be equal, which means a circle is formed. This tells the authors that the camera is pointing roughly straight down, about 90 degrees vertically, at the targeted crater on the moon’s surface. If an ellipse had been obtained instead of a circle, the camera would not have been pointing straight down at the crater.
That is the first assumption the authors made before the third-stage algorithm, the reconstruction of an ellipse to a circle in a 2-D plane.

3.2.2. Reconstruction of an ellipse on image plane

First, the inputs of this algorithm are the five primary parameters of the ellipse, a, b, Xc, Yc and ω, together with the half-length of the semi-major axis, denoted by capital A, and the half-length of the semi-minor axis, denoted by capital B. The aim of this reconstruction is to model the crater, or projected ellipse, as a disc. Mathematically, the focal point is taken as the origin of the camera frame, the x-axis is aligned with the focal axis of the camera, and the camera looks along x > 0. Furthermore, the image plane is assumed to lie in front of the focal point, rather than behind it as in a physical camera. With this in mind, the half-length of the semi-major axis A is set to 0 because the object is a disc; the half-length of the semi-minor axis B is then actually the radius of the crater (disc). This reconstruction, or ellipse projection, algorithm is based on Stephen’s proposed method for reconstructing a spheroid. The authors use equation (17) above to determine the orientation of the disc (crater) modelled by the reconstruction algorithm.

p̃ = ±₂ cot α { sin β cos α R̃₁ ±₁ (1/cos β) √(tan²α − tan²β) R̃₂ }   (15)

The equation p̃ describes the orientation of the reconstructed disc (crater) and the equation q̃ describes the position of the disc (crater) after reconstruction or projection. From the two equations above there are four possible solutions (given the ambiguity of the ±₂ sign discussed before in the methodology section): two from equation p̃ and another two from equation q̃. As can be seen, the equation p̃ is independent of the half-length of the semi-minor axis B, so the authors can determine the orientation of the disc directly from the algorithm. Unfortunately, the equation q̃ depends on the half-length of the semi-minor axis B, i.e. on the radius of the crater, which the authors do not know because they cannot determine the distance to the moon’s surface. This is the main challenge. However, obtaining p̃ can eventually lead to the position q̃, since the two are related; determining the position q̃ of the disc is left as future work.


4. Experimental results and discussion

4.1. Craters detection algorithm result

In this section, some experimental results obtained by applying the craters detection algorithm are reported. In particular, the authors chose real (optical) images of the moon’s surface and developed the code mainly based on binary image analysis and morphological techniques. As mentioned before, this is an independent algorithm that does not depend on a digital elevation map or on past and future imaging data, which is an advantage compared with other detection algorithms proposed previously. There are seven stages in this algorithm, as described in the methodology (refer to the flowchart). The authors discuss the results from the first stage to the last, attaching the resulting images for a better view and understanding of the concepts used in the analysis.

Erosion and dilation are two fundamentals of morphological image analysis. In Figures 10 and 11, erosion and dilation are applied as a combined process to the dark patches and to the light patches. For an ideal result, the authors used only the eroded image for both the light and dark patches, adding them together into a single picture for centroid detection later. If both the dilation and the erosion are kept as a combined process, the image produces too much noise, as shown in Figures 10 and 11. For centroid determination, therefore, the authors want the noise kept to a minimum, in order to attach valid pairing patches and to minimise the number of small blobs or patches that have no pair (light and dark patch). This also reduces the processing (running) time and the complexity of the centroid calculation.

Figure 9.

Real Image of Craters (Optical Image)

Figure 10.

Erosion and Dilation for dark patches

Figure 11.

Erosion and Dilation for light patches

Figure 12.

Threshold Image

Secondly, after converting the RGB image to the HSV plane and considering only the ‘Value’ channel of the HSV image, thresholding is used for image segmentation. Thresholding first differentiates the object (foreground) from the background. The targeted foreground is ideally the craters themselves, composed of the light-and-dark-patch pattern. A light patch occurs where the pixel value exceeds the threshold brightness calculated with the approach fully described in the technical section, whilst a dark patch is detected where the pixel value falls below the same threshold. The thresholded image is shown in Figure (12) above.

Figure 13.

Centroid Determination using ‘regionprops’ function

The centroid determination stage is completed using the ‘regionprops’ function in MATLAB, which computes the desired region properties. This step also attaches the pairing patches together to form complete craters for further analysis. Regionprops takes only a label matrix as input, so the authors must first pre-process the image by labelling it with the ‘bwlabel’ function. In MATLAB, bwlabel labels the connected components in a binary image and supports 2-D inputs only. Only after labelling all the targets can they be passed to regionprops, which returns an image with each centroid marked by a cross (*) symbol, as in Figure (13) above. For a cleaner image, the authors apply another MATLAB function, bwareaopen, which removes all unnecessary objects, i.e. all connected components (objects) with fewer than P pixels, producing another binary image.
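The bwlabel/regionprops/bwareaopen chain can be mimicked in a few lines of Python. This sketch uses 4-connectivity and hypothetical helper names (MATLAB's bwlabel defaults to 8-connectivity, so it is an approximation, not the toolbox behaviour):

```python
from collections import deque

def label_components(img):
    """4-connected component labelling of a binary image (bwlabel analogue).
    Returns (labels, n): an integer grid and the component count."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                n += 1
                q = deque([(y, x)])
                labels[y][x] = n
                while q:                      # flood-fill this component
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return labels, n

def centroids(labels, n, min_area=1):
    """Per-label centroid (x, y), dropping blobs smaller than min_area pixels
    (a crude regionprops 'Centroid' plus bwareaopen analogue)."""
    sums = {k: [0, 0, 0] for k in range(1, n + 1)}   # [count, sum_x, sum_y]
    for y, row in enumerate(labels):
        for x, k in enumerate(row):
            if k:
                sums[k][0] += 1
                sums[k][1] += x
                sums[k][2] += y
    return {k: (sx / c, sy / c) for k, (c, sx, sy) in sums.items()
            if c >= min_area}
```

On a small grid with one 2×2 blob and one isolated pixel, a min_area of 2 removes the isolated pixel, exactly as bwareaopen removes blobs below the pixel threshold.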

The final stage of this craters detection algorithm is the minimum radial distance and angle calculation. The authors tested the algorithm under two different image conditions (different sun angles and background noise). In the algorithm, the user can select the number of craters to be detected by the system based on the original image taken. In this case, it selects the eight best craters that satisfy the minimum-distance and minimum-angle conditions, assuming the sun direction is known. An image of the landing site on the moon’s surface has to be captured first, and the number of craters to be detected is counted manually. Based on the original image, the authors assumed the sun direction by identifying the locations of the shady and sunny patterns that form the craters. If the image has fewer than eight craters, the system chooses the maximum number of craters in the image. For an image with many craters, one can choose any number to detect, based on the original image, which has to be captured prior to detection. Although the system sometimes detects craters with wrong pairings (a light patch connected to the wrong dark patch and vice versa), the Lander should treat each of them as a possible hazard to be avoided during landing. The radial distance, drawn as the green lines in Figures (14) and (15) below, is calculated for each detected pair using equation (23), and the shortest distance between adjacent pairs (light and dark patch) is chosen as a preliminary result for the subsequent angle calculation.

After the light patches are connected to the dark patches with a minimum distance between them, the system calculates the minimum angle by inputting the direction of the sun (assuming the sun’s angle is known) and comparing it with the pairing-patch angles using the dot product (equation 24), as thoroughly described in the methodology section. By taking both techniques (minimum distance and angle) into account, the authors can determine the best craters in an image. The final detected craters are denoted by yellow lines in Figures (14) and (15) below.

Ideally, this algorithm works on any image with a clear pattern of light and dark patches, and the authors do not even have to know parameters of the craters such as radius, gradient and so on. However, the algorithm works effectively only on images that have this clear pattern: a crater with a clear pattern in this context has distinct light and dark patches that together constitute one crater. To assess the accuracy of the algorithm, the authors provide figures from two separate images tested with it. The final results have to be compared with the real image in order to determine which detected craters are true. In a real scenario, all detected craters are considered hazards even when they are connected to the wrong pair of light or dark patches.

Comparing the results from Image 1 with the original image (a real image of the moon’s surface), there are eleven valid or true craters after pre-processing (erosion, noise reduction with the bwareaopen function, and labelling with bwlabel). Valid or true craters in this context are craters with a clear pattern of light and dark blobs, regardless of size and other features. Of these eleven valid craters, eight are successfully detected by the craters detection algorithm, giving a 73% success rate. The authors tested the algorithm on two different optical images with different sun angles, 10 degrees and > 10 degrees, as well as on a noisy image, as mentioned below. The evaluations below measure the accuracy of the algorithm on the two proposed images and its robustness. The accuracy is based on how often craters are detected, and the calculations are shown below:

Based on the original Image 1 as can be illustrated in Figure (14) below

Sun direction: 10 degrees

Manual Detection (number of craters after pre-processed): 11

Automatic Detection (number of craters detected): 8

8/11 x 100 = 73% accuracy

Figure 14.

Craters detected from Image 1 in yellow line based on distance and angle measurements

Based on the original Image 2 as can be illustrated in Figure (15) below

Sun direction: > 10 degrees

Manual Detection (number of craters detected after pre-processed): 10

Automatic Detection (number of craters detected): 8

8/10 x 100 = 80% accuracy

Figure 15.

Craters detected from Image 2 in yellow line based on distance and angle measurements

In Figure (15) above, the yellow lines denote craters detected using both the minimum-distance and the minimum-angle criteria, while the green lines denote craters detected using the minimum distance only, prior to the minimum-angle detection. The angle detection is the final stage in defining the craters based on the light-and-dark-patch pattern (sometimes called the sunny and shady parts), and the final craters are those marked with yellow lines. Overall, the accuracy of the algorithm on these two images, with different types of craters, sun angles and lighting conditions, is 77%, which is quite satisfactory.

This accuracy can be improved if the sun elevation angle is known exactly, since in this research the authors only assumed the angle and the value is not very accurate. In a real application, the sun angle can be measured separately, using a satellite, altimeter or radar, prior to the detection process, making the value more accurate. Besides, this algorithm detects only craters larger than 0.0265 metres in image size (100 pixels). This is guaranteed by the ‘bwareaopen’ function, which removes every blob with fewer than 100 pixels.

4.2. Bounding ellipse algorithm results

In the geometrical analysis, the authors start with the bounding ellipse algorithm, using the information from the proposed craters detection algorithm to bound the targeted blob as shown in Figure 17. The blob is selected randomly from the true craters detected by the detection algorithm, as in Figure (16), by labelling the targeted output using ‘bwlabel’, a function that numbers all the objects in a binary image. Using this information, the user can select a true matching pair for further study (to determine its position and orientation) with the MATLAB function ‘find’, and this step can be repeated for every true matching pair detected by the algorithm. This is the first step before the authors can draw the ellipse around the target (Figure 16) to bound it and then reconstruct the selected crater as a disc using the ellipse reconstruction algorithm. This reconstruction is beneficial for later determining the orientation and position of a Lunar Lander during EDL of the spacecraft, using the equations p̃ and q̃ proposed before. Some mathematical fundamentals and equations need to be understood before the bounding ellipse algorithm can be applied to a targeted object: the fundamentals of the ellipse and of the rotation matrix. This knowledge is used to extract the entities embedded in the output of the bounding ellipse algorithm, namely the basic ellipse parameters a, b, Xc, Yc and ω, which are further used to draw an ellipse around the targeted object (target patch). Generally, the authors want to express all these ellipse parameters in terms of the E matrix, the output of the bounding ellipse algorithm, which takes the form E = [A₁₁ A₁₂; A₂₁ A₂₂]. Finally, the ellipse parameters are expressed in terms of A₁₁, A₂₁, A₁₂ and A₂₂ in order to draw the bounding ellipse and then reconstruct it as a disc.
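The extraction of a, b and ω from the 2×2 output matrix can be sketched via the eigen-decomposition of E. The parameterisation below, with the ellipse written as {x : (x − c)ᵀ E (x − c) = 1}, is an assumption about the bounding-ellipse convention, not necessarily the authors' exact equations:

```python
import math

def ellipse_params(E):
    """Recover (a, b, omega) from a symmetric positive-definite 2x2 matrix E
    describing the ellipse {x : (x - c)^T E (x - c) = 1}: the semi-axes are
    the inverse square roots of E's eigenvalues, and omega is the angle of
    the major-axis eigenvector, normalised into [0, pi)."""
    (p, q), (q2, r) = E
    assert abs(q - q2) < 1e-12, "E must be symmetric"
    mean = (p + r) / 2.0
    dev = math.hypot((p - r) / 2.0, q)
    lam_min, lam_max = mean - dev, mean + dev     # eigenvalues of E
    a = 1.0 / math.sqrt(lam_min)                  # semi-major axis
    b = 1.0 / math.sqrt(lam_max)                  # semi-minor axis
    if q == 0:
        omega = 0.0 if p <= r else math.pi / 2.0
    else:
        omega = math.atan2(lam_min - p, q)        # eigenvector (q, lam_min - p)
    return a, b, omega % math.pi
```

For an axis-aligned ellipse x²/4 + y² = 1 this yields a = 2, b = 1, ω = 0, and for E equal to the identity it yields a = b (a circle), matching the circle case the authors observed.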

Figure 16.

Targeted blobs chosen randomly from the matched true pairs in distance and angle measurement using bwlabel function.

After the bounding ellipse around the targeted patch is obtained, as shown in the figure below, the next step is to reconstruct the bounded crater using the ellipse reconstruction algorithm. As a result, the authors get a circle, which suggests that the camera is pointing vertically straight down at the lunar surface.

Figure 17.

Bounded crater using bounding ellipse algorithm and Image plane ellipse algorithm in green circle line

As can be seen in Figure (17) above, a circle, rather than an ellipse, appeared after the authors ran the bounding ellipse algorithm along with the image plane ellipse algorithm. This is because the semi-major axis a is equal to the semi-minor axis b, so a circle is produced; it means that the camera is pointing down vertically at the moon’s surface, at an angle of about 90 degrees relative to it. Next, the reconstruction ellipse algorithm takes the inputs a, b, centre (Xc, Yc), A and B and produces as output the orientation p̃ of the ellipse after reconstruction or projection.

4.3. Ellipse reconstruction algorithm results

This algorithm models a crater as a disc and reconstructs an ellipse to a circle in a two-dimensional (2-D) plane in order to determine the position and orientation of the crater relative to the spacecraft. For the first case, the authors used a real (optical) image. To realize the above purposes, the authors assumed several altitudes from the spacecraft to the moon’s surface. As mentioned before, after running the bounding algorithm and drawing the bounding ellipse on a particular targeted crater, the authors obtained a circle rather than an ellipse bounding the crater, and they therefore assume that the camera on the spacecraft is pointing vertically, at almost 90 degrees measured from a flat lunar surface. From the bounding ellipse algorithm, the authors determined the image plane ellipse parameters, and the results show that the semi-major axis a equals the semi-minor axis b. Taking advantage of this, a circle yields only one solution for the position of the crater, so the ambiguity case (an ellipse has two solutions) can be eliminated. The reconstruction algorithm [18] described previously in the methodology and technical sections is then used to reconstruct the possible disc (crater) positions and orientations, taking into account the camera’s parameters [18], where Xc and Yc are the coordinates of the centre of the ellipse. The authors use an optical image with dimensions of 1024 x 1024 pixels.

In real applications, the lunar surface is not flat and the crater is not directly below the camera. In this case, the authors had determined in the bounding ellipse algorithm a circle that bounds the target, hence the assumption illustrated in the figure above. Looking at the figure, the focal axis line is not exactly aligned with the centre of the disc, so perspective distortion has an effect, as described further in the next section. When the authors set A = 0, the ellipse becomes a disc; this means that p̃ is parallel to q̃ and a circle should be obtained. Conversely, if p̃ were perpendicular to q̃, a line would be obtained rather than a circle.

In a real situation, the altitude assumptions above are measured prior to landing, usually by an altimeter or a satellite. Before the authors can determine the position of the disc, they must determine its orientation first. The orientation and position of the disc are obtained from equations (18) and (19) above. As mentioned before, the orientation equation is free of the term B and can be determined fully from the reconstruction algorithm. The orientation of the disc can be described as a unit vector giving the direction of the centre of the disc. It has a positive value since the crater is seen oriented upwards rather than downwards (the negative side).

An ellipse is detected on the image plane for each disc visible in the camera’s view. For each detected ellipse, two discs are reconstructed in 3-D space, each with an orientation and a position; one points away from the plane, which is the true direction, while the other points in the wrong direction. As can be seen in the result above, the authors obtained the orientations (1.0000, -0.0000, 0.0000) and (1.0000, -0.0049, -0.0007). This ambiguity can be removed by noting that, from the camera’s perspective, a disc is visible only if it is oriented upwards (positive values) rather than downwards (negative values). Using the information above, it is clear that the image plane is one unit away (P(1, 0, 0)) from the origin (the camera on the spacecraft), which corresponds to the orientation of the spacecraft itself. The reader should also be reminded that in this case the orientation vector is the unit vector giving the direction of the centre of the crater; hence this orientation vector is also the normal vector of the crater, pointing upward.

For the second of these orientations, there is an error: the norm of the orientation vector should be exactly 1. One drawback of displaying values in MATLAB is that it rounds them, for example 0.99995 to 1. That is why, when the components (1.0000, -0.0049, -0.0007) are squared, summed and square-rooted, the result is slightly more than 1. In theory this value should be 1; the discrepancy arises because the displayed x-component has been rounded to 1.0000.
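The rounding effect described above is easy to reproduce. In the snippet below, the two orientation vectors are taken verbatim from the reported results; the helper name is an illustrative choice:

```python
import math

def norm(v):
    """Euclidean length of an orientation vector."""
    return math.sqrt(sum(x * x for x in v))

# The two reconstructed orientations reported above.
v_first = (1.0000, 0.0000, 0.0000)      # exactly unit length
v_second = (1.0000, -0.0049, -0.0007)   # leading component rounded for display

# norm(v_second) exceeds 1 because the true x-component was slightly
# below 1 before being rounded up to 1.0000 for display.
```

Evaluating the norms confirms the first vector is exactly unit length while the second, taken at face value, is slightly longer than 1.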

Furthermore, in evaluating the error of the reconstructed ellipse, the ellipse itself has an error on the image. This is because the digitization of a real shape involves an inherent loss of information compared with the original shape. Note that the original ellipse can never be recovered exactly from the digital ellipse, but the errors can be reduced by increasing the resolution of the image. If the image is unclear or has poor resolution, the authors can pre-process it to reduce the noise using a smoothing technique [9], implemented as a low-pass filter applied to the original image. Its main purpose is to attenuate the high spatial frequencies while keeping the low spatial frequencies of the signal [9].

Other causes of error are the uncertainties arising from the hardware (altimeter, satellite or radar), the software (MATLAB) and the landing site topography itself. In a real situation, sensor noise from the altimeter also has to be considered, as it affects the accuracy of the results determined by the system.


5. Conditions under which craters are not detected

The higher the successful detection rate, the lower the false alarm rate. When the detection rate is 80%, the false alarm rate is just 17%, whereas for a detection rate of 73% the false alarm rate increases to 25%. The authors plotted the successful detection rate versus the false alarm rate in Figure (18) below. The FAR (False Alarm Rate) is the percentage of non-signals that were detected as signals (craters) and is calculated from the number of false alarms and correct rejections as [19]:

FAR = Num. Of False alarms/(Num. Of False alarms + Number of Correct-Rejections)

A false alarm in this case is a signal that was not present but was mistakenly detected by the system, whilst a correct rejection is a signal that was not present and was not detected by the system. The lower the false alarm rate, the better the system/algorithm. The main source of these false alarms is the assumed sun angle, which can lead to faulty detection of a true crater pair (a true light patch connected to a true dark patch) and hence decrease the accuracy of the detection rate.
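The FAR formula above translates directly into code. The counts in the example are illustrative values consistent with the reported 25% rate; the actual false-alarm and correct-rejection counts are not given in the text:

```python
def false_alarm_rate(false_alarms, correct_rejections):
    """FAR = false alarms / (false alarms + correct rejections)."""
    return false_alarms / (false_alarms + correct_rejections)

# Illustrative example: 1 false alarm against 3 correct rejections
# gives FAR = 0.25, i.e. the 25% reported for the 73% detection-rate case.
```

The same function with 1 false alarm and 5 correct rejections gives roughly 17%, matching the other reported operating point.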

As in any true scenario, an image has to be captured before the system can detect safe landing sites free from hazards (craters). As mentioned before, this algorithm assumes that the sun’s direction is known and uses the sun angle as one of the steps in detecting craters on the moon’s surface. However, errors arise when the sun angle is assumed without knowing its true direction; this assumption can cause the algorithm to pick up wrong pairs (light patches connected to wrong dark patches and vice versa). Nowadays, the sun elevation can readily be determined in many ways, for example from a satellite system or LIDAR.

Figure 18.

Relationship between successful detection rate and false alarm rate for the proposed craters detection algorithm

This algorithm is not effective on a noisy image with many tiny craters, on undesired features that look like craters, on overlapping or segmented crater rims, or on a blurry image. Moreover, an image with too high or too low a sun elevation angle prevents the system from differentiating the pattern (the light and dark patches/blobs), so craters are rarely detected. The algorithm works accurately and efficiently with a sun elevation angle between 10 and 50 degrees, based on the earlier experiments under different image conditions. For a noisy image, the only way to reduce the tiny blobs is the MATLAB function ‘bwareaopen’. As discussed before, this function removes from a binary image all connected components (objects) that have fewer than G pixels (set by the authors), producing another binary image. Figure (19) below shows an example of the image when the algorithm does not apply the ‘bwareaopen’ function to eliminate tiny blobs, making the algorithm ineffective, while Figures (14) and (15) above show the craters detected after ‘bwareaopen’ is applied. Without it, the detection rate falls to only 36%, an obvious accuracy difference of about 37%. Capturing an image with a clear pattern of light and dark patches is vital, as it is one of the most important factors in improving the detection accuracy. Therefore, the image has to be taken by the spacecraft’s camera under an ideal sun elevation angle (neither too low nor too high), and a low-noise image is a bonus.

Figure 19.

Low detection of craters by the algorithm because of too many unnecessary blobs (tiny blobs)

As for the advantages, the algorithm can detect craters without knowing the main parameters such as the size (radius/diameter) or the gradient of the craters. It is an uncomplicated detection algorithm with fast detection performance. On a clear image (low noise, good lighting conditions and an ideal sun elevation angle), where the pattern is easily distinguishable, the accuracy is much higher. Besides, the crater detection is independent of the detected shape, whether it is a circle or an ellipse.


6. Comparison with previous detection methods

In terms of performance, this craters detection algorithm compares reasonably, in accuracy, with the previous algorithm proposed by Sawabe, Matsunaga and Rokugawa in 2005. As calculated above, the craters detection algorithm has a detection accuracy of 77% on the two images tested. This percentage is based on how many craters are detected relative to the pre-processed image, as in Figures (14) and (15) above. The detected craters are measured from groups of pairing patches (light and dark) using the minimum-distance and minimum-angle detection.

This craters detection algorithm can be improved by introducing additional approaches, such as edge detection of each crater, and by evaluating further techniques from morphological image analysis. To experiment further with image morphology, the authors tested the Prewitt and Canny edge detectors on the original Image 2, with the results shown in Figures (22) and (23) below. As can be seen, the Canny method detects the edges more precisely than the Prewitt method. Several previous craters detection algorithms use edge detection in this way, such as the technique proposed by Yang Cheng and Adnan Ansar [5]. Edge detection is usually applied as a pre-processing step to obtain better results before the crater's shape is detected.
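For illustration, a minimal NumPy sketch of Prewitt edge detection, i.e. the gradient-magnitude thresholding behind MATLAB's edge(I,'prewitt'); the threshold value and the toy image are assumptions, not the authors' settings:

```python
import numpy as np

def prewitt_edges(img, thresh):
    """Binary edge map from the gradient magnitude of the
    3x3 Prewitt kernels (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = np.sum(win * kx)   # horizontal gradient
            gy[r, c] = np.sum(win * ky)   # vertical gradient
    return np.hypot(gx, gy) > thresh      # threshold the magnitude

# toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = prewitt_edges(img, thresh=1.0)
```

The step between columns 3 and 4 produces a two-pixel-wide vertical band of edge responses, the behaviour Prewitt typically shows on sharp intensity steps.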

Figure 20.

Edge detection using the 'prewitt' detector

The multiple-approach craters detection and automated crater classification algorithm proposed by Sawabe, Matsunaga and Rokugawa in 2005 has an accuracy of 80%. Four approaches are combined in that algorithm to find shady and sunny patterns in images with low sun angles, circular features in edge images, curves and circles in thinned and connected edge lines, and discrete or broken circular edge lines using the fuzzy Hough transform. In that research, a crater is treated as a circle and the circular Hough transform is used to detect its circular feature. Each detected crater is then classified by spectral characteristics derived from Clementine UV-Vis multi-spectral images. Although it achieves a higher accuracy than the algorithm proposed by the authors, it is limited by the assumption that a crater is a circle; if the image contains an ellipse, this detection method becomes difficult to apply.
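The circular Hough transform used in that work can be sketched, for a single known radius, as follows. This is a minimal NumPy illustration, not Sawabe et al.'s implementation; the accumulator resolution and the synthetic rim points are assumptions:

```python
import numpy as np

def circular_hough(edge_pts, shape, r):
    """Circular Hough transform for one fixed radius r: each edge
    point votes for every candidate centre at distance r from it."""
    acc = np.zeros(shape, int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (y, x) in edge_pts:
        for t in thetas:
            a = int(round(y - r * np.sin(t)))
            b = int(round(x - r * np.cos(t)))
            if 0 <= a < shape[0] and 0 <= b < shape[1]:
                acc[a, b] += 1
    return acc

# synthetic crater rim: points on a circle of radius 5 centred at (20, 20)
centre, r = (20, 20), 5
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(int(round(centre[0] + r * np.sin(t))),
        int(round(centre[1] + r * np.cos(t)))) for t in angles]
acc = circular_hough(pts, (40, 40), r)
peak = np.unravel_index(acc.argmax(), acc.shape)   # accumulator peak ~ rim centre
```

The votes concentrate at the true centre, which is why the method is robust for circular rims but, as noted above, needs two extra parameters (both semi-axes and the orientation replace the single radius) once the rim is elliptical.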

Figure 21.

Edge detection using the 'canny' detector

Previously, quite a number of craters detection algorithms used the Hough transform, particularly circular-feature detection, as proposed by E. Johnson, A. Huertas, A. Werner and F. Montgomery in their paper [7]. As emphasized above, a camera captures an ellipse when the image is taken at an angle and from a certain distance relative to the moon's surface. An ellipse has five parameters to consider in the Hough algorithm when detecting shapes, whereas a circle has only three, so ellipse detection is more complicated: the code is correspondingly more complex and takes longer to construct. This is why the authors created an uncomplicated and robust algorithm for detecting hazards, mainly craters, on the moon's surface, for easy implementation.


7. Ellipse reconstruction algorithm using artificial image

To test the algorithm with various image types, the authors created a two-dimensional (2-D) image, shown in Figure (22), using Adobe Photoshop, with an axis of symmetry of half-length A = 0 (to model the spheroid as a disc) and degenerate axes of half-length B = 48. The objective is to determine the position of the disc's centre from the reconstruction algorithm and to compare it with the known centre position determined by the Bounding Ellipse Algorithm. The ellipse created has a position of q = (168.5469, 140.0172). The generated image is 494 x 494 pixels. The image-plane ellipse determined by the Bounding Ellipse algorithm is described by:

a = 133.998 pixels

b = 48.934 pixels

Xc = 168.5469 pixels

Yc = 140.0172 pixels

Orientation angle, ω = -0.0017 degrees

Figure 22.

Artificial Image in 2-D created using Adobe Photoshop
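The minimum bounding ellipse named in the later figure caption can be computed with Khachiyan's algorithm. The following is a NumPy sketch of that algorithm, not the authors' MATLAB code; the function name, tolerance, and synthetic test points are assumptions:

```python
import numpy as np

def min_bounding_ellipse(P, tol=1e-4):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipse of
    2-D points P (n x 2). Returns centre c and matrix A such that
    (x - c)^T A (x - c) <= 1 describes the ellipse."""
    n, d = P.shape
    Q = np.column_stack([P, np.ones(n)]).T          # lift to homogeneous coords
    u = np.full(n, 1.0 / n)                         # initial uniform weights
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        # Mahalanobis-type distances of each lifted point
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = M.argmax()
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step                            # shift weight to farthest point
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                                     # ellipse centre
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return c, A

# points on an axis-aligned ellipse, semi-axes a = 4 and b = 2, centre (10, 5)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.column_stack([10 + 4 * np.cos(t), 5 + 2 * np.sin(t)])
c, A = min_bounding_ellipse(pts)
axes = 1.0 / np.sqrt(np.linalg.eigvalsh(A))        # recovered semi-axis lengths
```

Recovering the centre and semi-axes from c and the eigenvalues of A is the same parameter extraction described below for the bounding ellipse output (a, b, Xc, Yc, ω).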

To centralize the coordinate system and scale the image, a translation of half the image dimensions is required. The reconstruction algorithm is then used to determine the position and orientation of the modelled disc after taking the camera parameters into account, as below:

Xc = (Xc_image − 247) / 494

Yc = (Yc_image − 247) / 494

a (semi-major axis) = a_image / 494

b (semi-minor axis) = b_image / 494
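Substituting the Bounding Ellipse values listed above into these formulas gives the normalised quantities; a quick arithmetic check (using 247 = 494/2, the half-dimension stated in the text):

```python
# image-plane ellipse values from the bounding ellipse output (pixels)
Xc_image, Yc_image = 168.5469, 140.0172
a_image, b_image = 133.998, 48.934
size = 494                                  # the image is 494 x 494 pixels

# centre the coordinates on the image midpoint and scale by the image size
Xc = (Xc_image - size / 2) / size
Yc = (Yc_image - size / 2) / size
a = a_image / size                          # normalised semi-major axis
b = b_image / size                          # normalised semi-minor axis
# Xc ~ -0.1588, Yc ~ -0.2166, a ~ 0.2713, b ~ 0.0991
```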

The ellipse generated in Figure (23) undergoes the same detection process the authors applied to the real optical image earlier. For the reconstruction results, the authors evaluate two axes, namely y and z, to determine the position of the reconstructed disc, because the centre coordinate of the 2-D plane is set in the algorithm as [Zc, Yc]. The disc's position is therefore analysed in two dimensions only, Zc and Yc, and the reconstructed position is compared with the centroid position calculated by the bounding ellipse algorithm. The results of the reconstruction algorithm are satisfactory and similar to those of the bounding ellipse algorithm. The positional errors for both solutions are shown in the table below; the solution with the lower error is taken as the true position. As the results show, the positional errors from the two solutions are quite high and nearly equal, 8.8722 and 8.8715. The authors therefore take solution 2, which has the lower error, as the reconstructed disc position.

Figure 23.

Artificial image of ellipse processed using minimum bounding ellipse with Khachiyan Algorithm

The positional errors are evaluated as shown in Table (3) below. The causes of the errors in the disc position are similar to those discussed in section 4.1.2.4: pixellation, bounding ellipse algorithm error, MATLAB rounding as described before, uncertainties from the hardware, and the like. These explain the relatively high errors in the position determination. In addition, when the ellipse is too eccentric, both outputs of the reconstruction algorithm become complex.

The bounding ellipse algorithm also introduces error in bounding the targeted patch. As can be seen in Figure (26) above, the targeted ellipse is not completely bounded by the red lines, which affects the image-plane ellipse parameters such as the semi-major and semi-minor axes. These parameters in turn affect the output of the reconstruction algorithm and cause errors in the disc's position. In practice, this problem can be mitigated by reducing the eccentricity of the image-plane ellipse or by increasing the eccentricity of the spheroid to be reconstructed. The results of the disc reconstruction algorithm are shown in the table below:

Reconstruction possible results  | Solution number 1             | Solution number 2
Position, q (with A = 2, B = 6)  | Xc = 163.2225, Yc = 147.1142  | Xc = 163.2224, Yc = 147.1137
Positional error                 | 8.8722                        | 8.8715
Orientation, p (with A = 2, B = 6) | (1.0000, 0.0038, 0.0016)    | (1.0000, -0.0085, -0.0036)

Table 1.

Reconstruction algorithm results for disc position tested in artificial image
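The tabulated positional errors are consistent with taking the error as the Euclidean distance between each reconstructed centre and the known centre q = (168.5469, 140.0172) from the bounding ellipse. That definition is an assumption here, but it reproduces the tabulated values to within rounding:

```python
import math

q_true = (168.5469, 140.0172)              # centre from the bounding ellipse
solutions = [(163.2225, 147.1142),         # solution 1
             (163.2224, 147.1137)]         # solution 2

# Euclidean distance from each reconstructed centre to the known centre
errors = [math.hypot(x - q_true[0], y - q_true[1]) for (x, y) in solutions]
# errors come out near 8.8722 and 8.8719, matching the tabulated
# 8.8722 and 8.8715 to within rounding of the printed coordinates
best = errors.index(min(errors))           # solution 2 has the lower error
```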

For the orientation, the authors take the positive solution, (1.0000, 0.0038, 0.0016), consistent with the result for the real optical image. As discussed before, crater orientations are only visible when the crater is oriented upwards (positive values) rather than downwards (negative values), so the negative orientation can be eliminated. The evaluation of the orientation error is similar to that for the real image, as discussed in the previous section.


8. Conclusion and recommendation

This paper focuses primarily on the identification and detection of craters on the lunar surface. To realize these goals, an algorithm is proposed to detect craters, the main hazardous features on the moon. The evaluation of this algorithm is organised as a flowchart, presented in the methodology section. First, using the original image of craters on the moon's surface, the RGB image is converted to the HSV plane and only the Value channel is analysed. Thresholding is then applied to the image for classification, using the Value and thresholding approaches of Sawabe et al. After classifying the image into light and dark patches, the patches are labelled and the centre of each patch is determined with the 'regionprops' function. This is followed by the vital stage of selecting the best crater candidates using the two proposed measures: minimum distance and angle. This is a new and simple method proposed by the authors for detecting craters as the main hazardous features on the moon's surface.
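The first two stages of this pipeline, Value-channel extraction and thresholding into light and dark patches, can be sketched in Python/NumPy as follows. The threshold values and the toy image are illustrative assumptions, not the authors' MATLAB code or Sawabe et al.'s actual thresholds:

```python
import numpy as np

def value_channel(rgb):
    """Value channel of HSV: the per-pixel maximum of R, G and B for an
    image scaled to [0, 1] (a stand-in for V from MATLAB's rgb2hsv)."""
    return rgb.max(axis=2)

def light_dark_patches(v, t_dark, t_light):
    """Classify pixels into dark and light binary masks with two
    thresholds on the Value channel, as described in the text."""
    return v <= t_dark, v >= t_light

# toy 2 x 2 RGB image
rgb = np.array([[[0.1, 0.0, 0.05], [0.9, 0.8, 0.7]],
                [[0.2, 0.1, 0.15], [0.4, 0.5, 0.45]]])
v = value_channel(rgb)
dark, light = light_dark_patches(v, t_dark=0.25, t_light=0.75)
```

The two masks would then be labelled (as 'regionprops' does in MATLAB) so that each patch centroid can enter the distance-and-angle pairing stage.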

For precise moon landing, the authors then proposed a geometrical analysis consisting of the projection, or reconstruction, of the ellipse to a 2-D circle on an image plane. At this stage, the bounding ellipse algorithm is applied as the first step in modelling a crater as a disc. All ellipse parameters are calculated from the information in the output of the bounding ellipse algorithm, and the bounding ellipse is drawn around the targeted patch. This output is then used in the ellipse reconstruction algorithm to obtain the orientation p̃ and the position q̃ of the disc (crater) relative to the camera's projection.

Some limitations should be stated here for further extension and modification. The craters detection algorithm depends on the sun angle, and these assumptions can lead to errors in detecting a true pair (a light and dark pairing patch). In addition, there are uncertainties, as discussed before, from the software (MATLAB) and from the hardware in a real application (the altimeter measuring the altitude). The Hough method gives more precise results but is constrained by the assumed shape of the crater. For instance, the reconstruction needs to model a crater as an ellipse rather than a circle, and the Hough ellipse transform requires analysing five parameters instead of the three for a circle. These limitations make the Hough transform unreliable here and its computational cost a burden to use together with this craters detection algorithm.

For future work, this research can be extended to crater pattern matching, as described in the Literature Review section above. Crater pattern matching has been proposed by previous researchers to obtain position and velocity estimates of a spacecraft or Lander during Entry, Descent and Landing (EDL) and for autonomous precision landing. With pattern matching, one can compare the position it determines with the position from the reconstruction algorithm, and the difference in the crater's position between the two methods can be evaluated to determine which is better in a real application. In reality, the lunar surface is not flat and the camera parameters are not estimated perfectly. The image does require scaling, but the true amount cannot be identified without also knowing the camera's specifications (focal length and field of view). In most cases the picture is not taken straight at the centre of the image, and perspective distortion has an effect, as discussed before. Since none of these ideal conditions hold in real applications, the reconstruction algorithm is needed to find the crater's position; its evaluation was discussed in detail in the previous section. The velocity of the spacecraft can then be determined from the position and orientation of the crater: if the position and orientation are found in a single frame, the velocities follow from the differences between successive frames. This research therefore has great value for future work and clear benefits for spacecraft missions seeking to avoid hazardous craters and for a moon Lander aiming at a precise lunar landing.
Furthermore, the position determined using q̃ can be compared with the position determined by crater pattern matching, which would be a worthwhile future work for new researchers.

References

  1. Gonzalez, R.C., Woods, R.E., Eddins, S.L. (2004). Digital Image Processing Using MATLAB. Prentice Hall, New Jersey.
  2. Randolph, J.F. (1967). Calculus, Analytic Geometry, Vectors. Dickenson Publishing Company, California.
  3. Heiken, G.H., Vaniman, D.T., French, B.M. (1991). Lunar Sourcebook. Cambridge University Press.
  4. Seul, M., O'Gorman, L., Sammon, M.J. (2000). Practical Algorithms for Image Analysis: Description, Examples and Code. Cambridge University Press, United Kingdom.
  5. Cheng, Y., Ansar, A. (2005). Landmark Based Position Estimation for Pinpoint Landing on Mars. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), 4470-4475, Barcelona, Spain.
  6. Ansar, A., Daniilidis, K. (2003). Linear Pose Estimation from Points or Lines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5), 578-589.
  7. Johnson, E., Huertas, A., Werner, A., Montgomery, F. (2008). Analysis of On-Board Hazard Detection and Avoidance for Safe Lunar Landing. IEEEAC Paper, California.
  8. Johnson, E., Montgomery, F. (2008). Overview of Terrain Relative Navigation Approaches for Precise Lunar Landing. IEEEAC Paper, California.
  9. Bruzzone, L., Lizzi, L., Marchetti, P.G., Earl, J., Milnes, M. Recognition and Detection of Impact Craters from EO Products. University of Trento, Italy.
  10. Neukum, G., Konig, B., Arkani-Hamed, J. (1975). A Study of Lunar Impact Crater Size-Distributions. The Moon, 12, 201-229.
  11. Neukum, G., Ivanov, B., Hartmann, W.K. (2001). Cratering Records in the Inner Solar System: Chronology and Evolution of Mars, 55-86. Kluwer, Dordrecht.
  12. Honda, R., Iijima, Y., Konishi, O. (2002). Mining of Topographic Feature from Heterogeneous Imagery: Its Application to Lunar Craters. Progress of Discovery Science, LNAI, 395-407.
  13. Sawabe, Y., Matsunaga, T., Rokugawa, S. (2005). Automated Detection and Classification of Lunar Craters Using Multiple Approaches. COSPAR (Advances in Space Research).
  14. Cheng, Y., Ansar, A. A Landmark Based Pinpoint Landing Simulator. Jet Propulsion Laboratory, California Institute of Technology, USA.
  15. Williams, J.K. Navigation Results for NASA's Near Earth Asteroid Rendezvous Mission. AIAA/AAS Astrodynamics Specialists Conference.
  16. Miller, J.K. (2002). Determination of Shape, Gravity and Rotational State of Asteroid 433 Eros. Icarus, 155, 3-17.
  17. Bernard, G., Golombek, M. (2001). Crater and Rock Hazard Modelling for Mars Landing. AIAA Space 2001 Conference, Albuquerque, NM.
  18. Wokes, D.S., Palmer, P.L. (2009). Perspective Projection and Reconstruction of a Spheroid onto an Image Sphere. SIAM Journal on Imaging Sciences.
  19. Coombs, C.H., Dawes, R.M., Tversky, A. (1970). Mathematical Psychology, 165-201. Prentice-Hall, Englewood Cliffs, NJ.
  20. http://www.engnetbase.com/ejournals/books/book_summary/summary.asp?id=792
  21. http://science.nationalgeographic.com/science/space/space-exploration/moon-exploration-article.html
