## Abstract

A crucial task for integrated geoscientific image (geo-image) interpretation is the relevant geological representation of multiple geo-images, which demands high-dimensional techniques for extracting latent geological features from high-dimensional geo-images. A standalone mathematical tool called SFE2D (spatiospectral feature extraction in two dimensions) is developed based on independent component analysis (ICA), continuous wavelet transform (CWT), *k*-means clustering segmentation, and RGB color processing; it iteratively separates, extracts, clusters, and visualizes highly correlated and overlapping geological features from multiple sources of geo-images. The SFE2D offers spatial feature extraction and wavelet-based spectral feature extraction for further extraction of frequency-dependent features. We show that the SFE2D is a robust tool for automated pattern recognition, fast pseudo-geological mapping, and detection of regions of interest, with a wide range of applications at different scales, from regional geophysical surveys to the interpretation of microscopic images.

### Keywords

- feature extraction
- independent component analysis
- continuous wavelet transform
- k-means clustering segmentation
- pseudo-geological mapping

## 1. Introduction

Global trends show that the number of major mineral discoveries has declined drastically during the last 25 years because most outcropping mineralization systems have already been mapped and excavated [1]. Therefore, a significant challenge for mineral exploration is the deep targeting of hidden resources. “Deep” targets are usually considered to be resources at depths of hundreds to thousands of meters. However, occurrences of “deep” targets are not limited to the third dimension. They can also be seen as complex, obscured mineralization systems barely detectable within a large amount of hyper-dimensional geoscientific data. Consequently, advanced data mining methods are needed to extract useful information from such large data sets. Responding to these challenges of integrating multidisciplinary information is the motivation of the present research.

Automated interpretation of multiple images is an important topic in large data interpretation and multivariate pattern recognition. No matter how accurate the geoscientific images (geo-images) are, each geo-image is only sensitive to a limited number of geological features. Therefore, different parts of the subsurface geology can be reconstructed from various geo-images. The critical question is how one can put together the interpreted jigsaw puzzles inside a high-dimensional space to rebuild a relevant geological model. This has led us to the topic of feature extraction in the treatment of large data sets.

The fundamental problem in integrated geoscientific interpretation is the proper geological understanding of multiple geo-images, which demands high-dimensional techniques for the extraction of geological information from hyperdimensional data sets. Several methods are available for feature extraction in high-dimensional space, which can be seen as a dimensionality-increase problem [2]. The difficulty is that the increased dimensionality due to feature extraction leads to an overload of information that creates difficulties for human visual interpretation as well as machine learning optimization [2]. That is to say, latent patterns are lost as one moves to higher-dimensional spaces. Several recent studies, mainly in seismic attribute analysis, have investigated the problem of high-dimensional pattern recognition to extract relevant geological features from broadband seismic data [3, 4, 5, 6, 7]. In mineral exploration, the treatment of multiple non-seismic data sets puts forth a similar pattern recognition problem, which seeks to extract relevant geological features from potential field and electromagnetic data in the form of multiple images [8].

This study aimed to provide a tool for spatial and spectral feature extraction from multiple geo-images to facilitate the identification of geoscientific features. The study employed a quantitative approach combining different source separation methods for feature extraction and representation. The output is the SFE2D program, a standalone 2D spatiospectral feature extraction tool based on unsupervised source separation methods such as principal component analysis (PCA) and independent component analysis (ICA), together with the continuous wavelet transform (CWT), RGB color processing, and *k*-means clustering segmentation. The program is implemented in a MATLAB environment for feature extraction and dimensionality reduction of hyperdimensional geoscientific data sets through variance/kurtosis/negentropy maximization, *k*-means clustering segmentation, a color-pick algorithm, and RGB visualizations (SFE2D ver. 2.0).

## 2. Theoretical background of source separation

### 2.1 Feature extraction

The theoretical framework underpinning this study is based on blind source separation (BSS) methods. Generally, BSS aims to recover latent source signals (or images) from a set of highly mixed signals (or images). Consider a mixing model where every geo-image ($g_i$) is a mixture of several latent features ($f_j$) with different contributions in constructing the observed image. A mathematical description for this model can be formulated as a linear mixing system that links the latent features to the observed geo-image by a set of mixing weights ($a_{ij}$) that determine the contribution of each feature in the geo-image construction:

$$g_i = \sum_{j=1}^{n} a_{ij}\, f_j, \tag{1}$$

where $j = 1, 2, \ldots, n$ denotes the number of latent features and $i = 1, 2, \ldots, m$ indicates the number of geo-images. The vector form of Eq. (1) can be rewritten as:

$$\mathbf{g}(x, y) = \mathbf{A}\,\mathbf{f}(x, y), \tag{2}$$

where the unknown mixing weights are defined as the matrix $\mathbf{A}$, and $x$ and $y$ denote the coordinates on a two-dimensional space in the spatial feature extraction scheme. The problem is to find a separation matrix ($\mathbf{W}$) that tends to unmix the geo-images ($\mathbf{g}$) to recover the hidden features ($\mathbf{f}$):

$$\mathbf{f}(x, y) = \mathbf{W}\,\mathbf{g}(x, y), \tag{3}$$

where the separation matrix $\mathbf{W}$ is an estimate of the inverse of the mixing matrix $\mathbf{A}$.
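The linear mixing and unmixing model of Eqs. (1)–(3) can be sketched numerically. The NumPy sketch below, with an invented 2 × 2 mixing matrix and random stand-in "features", only illustrates that once the mixing matrix is known, unmixing reduces to a matrix inversion; BSS methods must estimate $\mathbf{W}$ without knowing $\mathbf{A}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented latent features f (one per row), sampled at 1000 points, are
# combined into two observed geo-images g by an invented mixing matrix A.
f = rng.uniform(-1, 1, size=(2, 1000))
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
g = A @ f                      # Eq. (2): g = A f

# If A were known, the separation matrix W would simply be its inverse,
# and Eq. (3) recovers the latent features exactly.
W = np.linalg.inv(A)
f_rec = W @ g

print(np.allclose(f_rec, f))   # True
```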

### 2.2 Spatial feature extraction through PCA and ICA

Estimating a relevant separation matrix is possible by making assumptions about statistical measures such as correlation. In some instances, it is common for two geo-images to be statistically correlated. For example, suppose there is a linear relationship between two gamma-ray concentration images (e.g., K vs. eTh). In that case, the amount of information that the first image provides is the same as that of the second one. Therefore, one can transform the bivariate data to a univariate form without losing any valuable information. This transformation is called dimensionality reduction, which is the basis of PCA algorithms. PCA algorithms maximize a second-order statistical measure (variance) for image separation and produce linearly uncorrelated images (Figure 2a). However, when there is a nonlinear form of correlation (dependency) between images, PCA will not work. In this case, ICA is useful to separate the geo-images into nonlinearly uncorrelated images by maximizing non-Gaussianity (Figure 2b).

PCA is, in fact, the first step in the ICA process. PCA looks for a weight matrix $D$ so that a maximal variance of the principal components ($\mathbf{y} = D\mathbf{g}$) is obtained. During this preprocessing step, the mean of the input geo-images ($g_i$) is removed (also known as centering). Then whitening is performed by eigenvalue decomposition, which results in unit variance and an identity covariance matrix. The matrix $D$ that is calculated for linear transformation of the centered geo-images into whitened images is:

$$D = E\,\Lambda^{-1/2}\,E^{T},$$

where $E$ is the matrix of eigenvectors and $\Lambda$ the diagonal matrix of eigenvalues of the covariance matrix of the centered geo-images.

Whitening solves half of the ICA problem. The second step in ICA is increasing the non-Gaussianity of the whitened images, or principal components ($\mathbf{y}$), by finding an orthogonal rotation matrix $R$ that, during multiplication with the principal components, produces the least Gaussian outputs ($\mathbf{f} = R\mathbf{y}$).
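The centering and eigenvalue-decomposition whitening step can be sketched as follows (a NumPy illustration with an invented correlation structure, not the SFE2D code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated correlated "geo-images": three images flattened to rows,
# correlated by an invented mixing matrix.
g = rng.standard_normal((3, 5000))
g = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.0]]) @ g

# Centering: remove the mean of each image.
g = g - g.mean(axis=1, keepdims=True)

# Whitening matrix D = E Lambda^{-1/2} E^T from the eigendecomposition
# of the covariance matrix of the centered images.
cov = (g @ g.T) / g.shape[1]
lam, E = np.linalg.eigh(cov)
D = E @ np.diag(lam ** -0.5) @ E.T
y = D @ g                              # whitened images

# The whitened images have unit variance and a (near-)identity covariance.
print(np.allclose((y @ y.T) / y.shape[1], np.eye(3), atol=1e-8))   # True
```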

### 2.3 Fast-ICA algorithms

This study has employed two alternative methods to obtain the rotation matrix $R$, based on Fast-ICA through kurtosis maximization and through negentropy maximization. Both methods, performed through the Hyvärinen fixed-point algorithm, employ higher-order statistics to recover the independent sources [9]. The fixed-point algorithm has a couple of properties that make it superior to gradient-based methods: (a) it has a cubic convergence speed, and (b) in contrast to gradient-based algorithms, it does not need any learning-rate adjustment or parameter tuning, making it easy to implement [9].

#### 2.3.1 Fast-ICA through kurtosis maximization

According to the central limit theorem, the distribution of a sum of independent random variables tends toward a Gaussian (normal) distribution [9]. One can use this principle to maximize the non-Gaussianity through kurtosis maximization (kICA). Kurtosis, the fourth-order cumulant of the whitened images, can be expressed as the normalized version of the fourth moment:

$$\mathrm{kurt}(y) = E\{y^{4}\} - 3\left(E\{y^{2}\}\right)^{2},$$

where $E\{\cdot\}$ denotes the expectation over the unknown density of the input geo-images. Kurtosis provides a measure of how Gaussian ($\mathrm{kurt} = 0$), super-Gaussian ($\mathrm{kurt} > 0$), or sub-Gaussian ($\mathrm{kurt} < 0$) the probability density functions of the geo-images are. Maximization of the kurtosis of the principal components estimates the latent independent components ($\mathbf{f}$). The fixed-point algorithm for kurtosis maximization in the SFE2D program is as follows:
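As a quick numerical check of this measure (a NumPy sketch; the three sample distributions are arbitrary choices for illustration):

```python
import numpy as np

def kurt(y):
    """Kurtosis as the normalized fourth moment: E{y^4} - 3 (E{y^2})^2."""
    return np.mean(y**4) - 3 * np.mean(y**2) ** 2

rng = np.random.default_rng(2)
n = 200_000
gauss   = rng.standard_normal(n)       # Gaussian:        kurt ~ 0
uniform = rng.uniform(-1, 1, n)        # sub-Gaussian:    kurt < 0
laplace = rng.laplace(0, 1, n)         # super-Gaussian:  kurt > 0

print(abs(kurt(gauss)) < 0.1, kurt(uniform) < 0, kurt(laplace) > 0)  # True True True
```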

Step 1. Center and whiten the geo-image data $\mathbf{g}$ to give $\mathbf{y}$.

Step 2. Set $N$ as the number of independent components to be estimated and a counter $p \leftarrow 1$.

Step 3. Set randomly an initial unit-norm vector $\mathbf{w}_{p}$.

Step 4. Let $\mathbf{w}_{p} \leftarrow E\{\mathbf{y}\,(\mathbf{w}_{p}^{T}\mathbf{y})^{3}\} - 3\mathbf{w}_{p}$.

Step 5. Rotate by deflationary orthogonalization: $\mathbf{w}_{p} \leftarrow \mathbf{w}_{p} - \sum_{j=1}^{p-1}\left(\mathbf{w}_{p}^{T}\mathbf{w}_{j}\right)\mathbf{w}_{j}$.

Step 6. Normalize: $\mathbf{w}_{p} \leftarrow \mathbf{w}_{p} / \lVert\mathbf{w}_{p}\rVert$.

Step 7. Go to step 4 if not converged; otherwise, $\mathbf{w}_{p}$ is row $p$ of the estimated separation matrix.

Step 8. If $p < N$, set $p \leftarrow p + 1$ and go back to step 3.

Step 9. Obtain the latent features by $\mathbf{f} = \mathbf{W}\mathbf{y}$.
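The nine steps above can be condensed into a short NumPy sketch (a toy illustration, not the SFE2D MATLAB implementation; the mixing matrix and sub-Gaussian sample sources are invented):

```python
import numpy as np

def fast_ica_kurtosis(y, n_components, n_iter=200, seed=0):
    """Deflationary Fast-ICA on whitened images y (one image per row),
    using the kurtosis fixed-point update of Step 4."""
    rng = np.random.default_rng(seed)
    m = y.shape[0]
    W = np.zeros((n_components, m))
    for p in range(n_components):
        w = rng.standard_normal(m)                          # Step 3: random start
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            w_new = (y * (w @ y) ** 3).mean(axis=1) - 3 * w  # Step 4
            w_new -= W[:p].T @ (W[:p] @ w_new)               # Step 5: deflation
            w_new /= np.linalg.norm(w_new)                   # Step 6: normalize
            converged = abs(abs(w_new @ w) - 1) < 1e-10      # Step 7 (sign-safe)
            w = w_new
            if converged:
                break
        W[p] = w                                             # Step 8 via the loop
    return W

# Invented demo: two sub-Gaussian (uniform) features, mixed then whitened.
rng = np.random.default_rng(3)
f = rng.uniform(-1, 1, (2, 20_000))                   # latent features
g = np.array([[0.8, 0.2], [0.3, 0.7]]) @ f            # observed geo-images
g = g - g.mean(axis=1, keepdims=True)                 # Step 1: center...
lam, E = np.linalg.eigh((g @ g.T) / g.shape[1])
y = (E @ np.diag(lam ** -0.5) @ E.T) @ g              # ...and whiten

W = fast_ica_kurtosis(y, 2)
f_est = W @ y                                         # Step 9: latent features

# Up to permutation and sign, each estimate matches one original feature.
corr = np.abs(np.corrcoef(np.vstack([f_est, f]))[:2, 2:])
print((corr.max(axis=1) > 0.95).all())
```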

#### 2.3.2 Fast-ICA through fixed-point negentropy maximization

For large-scale studies, kurtosis maximization is time-consuming because of its computational complications. Also, kurtosis is not a robust measure of non-Gaussianity in the presence of unknown noise and image artifacts. Kurtosis is very sensitive to outliers; that is, a single erroneous outlier value in the tails of the distribution can make the kurtosis extremely large. Therefore, using kurtosis is well justified only if the independent components are sub-Gaussian and there are no outliers.

Another principle in information theory enables us to obtain the rotation matrix $R$ more efficiently in the presence of outliers. It states that the spatial complexities of the input images are equal to or greater than that of the least complex latent features [9]. This general principle enables us to maximize the non-Gaussianity of multiple geo-images through negentropy maximization (nICA).

Negentropy is based on the information-theoretic quantity of entropy. The entropy ($H$) of a geo-image ($\mathbf{g}$) with probability density $p(\mathbf{g})$ is defined as:

$$H(\mathbf{g}) = -\int p(\mathbf{g}) \log p(\mathbf{g})\, d\mathbf{g}.$$

The more random (unpredictable and unstructured) the variable is, the larger its entropy. Among all random variables of equal covariance, the Gaussian variable has the largest entropy. The negentropy of a geo-image $\mathbf{g}$ is its normalized differential entropy, that is, the difference between the entropy of a Gaussian random vector ($\mathbf{g}_{\text{gauss}}$) with the same covariance matrix as $\mathbf{g}$ and the entropy of $\mathbf{g}$:

$$J(\mathbf{g}) = H(\mathbf{g}_{\text{gauss}}) - H(\mathbf{g}).$$

Negentropy is always non-negative and is zero only when the signal has a Gaussian distribution [9]. The negentropy approximation used in this study is based on classical higher-order cumulants:

$$J(y) \approx \frac{1}{12}E\{y^{3}\}^{2} + \frac{1}{48}\,\mathrm{kurt}(y)^{2}.$$

Since entropy estimation is not performed directly on the geo-images, and instead the centered/whitened images (principal components) are used for the entropy calculations, $\mathbf{g}$ is replaced by $\mathbf{y}$. Replacing the polynomial functions $y^{3}$ and $y^{4}$ by non-quadratic functions $G_{i}$ ($i$ is an index denoting odd and even functions) gives:

$$J(y) \approx k_{1}\left(E\{G_{1}(y)\}\right)^{2} + k_{2}\left(E\{G_{2}(y)\} - E\{G_{2}(\nu)\}\right)^{2},$$

where $\nu$ is a standardized Gaussian variable with unit variance and zero mean, and $k_{1}$ and $k_{2}$ are positive constants. In the case where only one non-quadratic function $G$ is used, the approximation becomes simpler:

$$J(y) \propto \left(E\{G(y)\} - E\{G(\nu)\}\right)^{2}.$$

In this moment-based approximation, we used a slow-growing exponential function for $G$:

$$G(u) = -\exp\left(-u^{2}/2\right).$$
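The one-function approximation with this $G$ can be sketched as follows (NumPy; the analytic value $E\{G(\nu)\} = -1/\sqrt{2}$ follows from the Gaussian integral, and the two test distributions are arbitrary choices):

```python
import numpy as np

def G(u):
    # Slow-growing exponential function used in the text: G(u) = -exp(-u^2/2)
    return -np.exp(-u**2 / 2)

# For a standardized Gaussian variable v, E{G(v)} = -1/sqrt(2) analytically.
E_G_gauss = -1 / np.sqrt(2)

def negentropy_approx(y):
    """One-function approximation  J(y) ~ (E{G(y)} - E{G(v)})^2
    on standardized (zero-mean, unit-variance) data."""
    y = (y - y.mean()) / y.std()
    return (np.mean(G(y)) - E_G_gauss) ** 2

rng = np.random.default_rng(4)
n = 100_000
j_gauss   = negentropy_approx(rng.standard_normal(n))
j_laplace = negentropy_approx(rng.laplace(0, 1, n))

# Negentropy is ~0 for Gaussian data and clearly positive otherwise.
print(j_gauss < 1e-4, j_laplace > 10 * j_gauss)   # True True
```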

The criterion for negentropy maximization is based on bringing an objective function to an approximated maximum value. The algorithm for negentropy maximization in the present SFE2D program is as follows:

Step 1. Center and whiten the geo-image data $\mathbf{g}$ to give $\mathbf{y}$.

Step 2. Set $N$ as the number of independent components to be estimated and a counter $p \leftarrow 1$.

Step 3. Set randomly an initial unit-norm vector $\mathbf{w}_{p}$.

Step 4. Let $\mathbf{w}_{p} \leftarrow E\{\mathbf{y}\,g(\mathbf{w}_{p}^{T}\mathbf{y})\} - E\{g'(\mathbf{w}_{p}^{T}\mathbf{y})\}\,\mathbf{w}_{p}$, where $g(u) = G'(u) = u\,e^{-u^{2}/2}$.

Step 5. Rotate by deflationary orthogonalization: $\mathbf{w}_{p} \leftarrow \mathbf{w}_{p} - \sum_{j=1}^{p-1}\left(\mathbf{w}_{p}^{T}\mathbf{w}_{j}\right)\mathbf{w}_{j}$.

Step 6. Normalize: $\mathbf{w}_{p} \leftarrow \mathbf{w}_{p} / \lVert\mathbf{w}_{p}\rVert$.

Step 7. Go to step 4 if not converged; otherwise, $\mathbf{w}_{p}$ is row $p$ of the estimated separation matrix.

Step 8. If $p < N$, set $p \leftarrow p + 1$ and go back to step 3.

Step 9. Obtain the latent features by $\mathbf{f} = \mathbf{W}\mathbf{y}$.
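A compact NumPy sketch of the same deflation loop with the negentropy-based update of Step 4 (a toy illustration with invented Laplace-distributed sources, not the SFE2D code):

```python
import numpy as np

def nica_update(w, y):
    """Step 4 with the negentropy update derived from G(u) = -exp(-u^2/2):
    g(u) = u exp(-u^2/2),  g'(u) = (1 - u^2) exp(-u^2/2),
    w <- E{y g(w^T y)} - E{g'(w^T y)} w."""
    u = w @ y
    e = np.exp(-u**2 / 2)
    return (y * (u * e)).mean(axis=1) - np.mean((1 - u**2) * e) * w

# Invented demo: two super-Gaussian (Laplace) features, mixed then whitened.
rng = np.random.default_rng(5)
f = rng.laplace(0, 1, (2, 20_000))
g = np.array([[0.6, 0.4], [0.2, 0.8]]) @ f
g = g - g.mean(axis=1, keepdims=True)
lam, E = np.linalg.eigh((g @ g.T) / g.shape[1])
y = (E @ np.diag(lam ** -0.5) @ E.T) @ g              # whitened images

W = np.zeros((2, 2))
for p in range(2):                                    # deflation over components
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        w = nica_update(w, y)                         # Step 4
        w -= W[:p].T @ (W[:p] @ w)                    # Step 5: orthogonalize
        w /= np.linalg.norm(w)                        # Step 6: normalize
    W[p] = w

f_est = W @ y                                         # Step 9
corr = np.abs(np.corrcoef(np.vstack([f_est, f]))[:2, 2:])
print((corr.max(axis=1) > 0.95).all())
```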

### 2.4 Spectral feature extraction through wavelet-based PCA and ICA

Spectral decomposition of geo-images provides a unique way for feature extraction in the frequency domain. Although the Fourier transform is a powerful tool for image decomposition, it does not represent abrupt changes efficiently due to the infinite oscillation of the periodic function in any given direction. In contrast, wavelets are localized in space and have finite durations. Therefore, the output of wavelet decomposition effectively reflects the sharp changes in images, and that makes it an ideal tool for feature extraction [10, 11, 12].

Mathematically, the 2D CWT of an image $I(x, y)$ is defined as a decomposition of that image on translated and dilated versions of a mother wavelet $\psi(x, y)$. Thus, the 2D CWT coefficients are given by:

$$W(a, b_{1}, b_{2}) = \frac{1}{a} \iint I(x, y)\, \psi^{*}\!\left(\frac{x - b_{1}}{a}, \frac{y - b_{2}}{a}\right) dx\, dy,$$

where $b_{1}$ and $b_{2}$ control the spatial translation, $a > 1$ is the scale, and $\psi^{*}$ is the complex conjugate of the mother wavelet $\psi(x, y)$. (For anisotropic mother wavelets, an additional rotation angle enters the argument of $\psi$.)
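A minimal NumPy sketch of a single-scale 2D CWT via FFT-based correlation with an isotropic Mexican Hat wavelet (the toy "geo-image" and the $1/a$ normalization follow the definition above; this is an illustration, not the SFE2D implementation):

```python
import numpy as np

def mexican_hat_2d(n, a):
    """Isotropic 2D Mexican Hat mother wavelet dilated by scale a,
    sampled on an n x n grid centered at the grid midpoint."""
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x)
    r2 = (X**2 + Y**2) / a**2
    return (2 - r2) * np.exp(-r2 / 2) / a      # 1/a normalization, as in the text

def cwt2d(img, a):
    """CWT coefficients at scale a for every translation (b1, b2), computed as
    an FFT-based cross-correlation. The Mexican Hat is real-valued, so the
    conjugated spectrum implements psi*."""
    psi = np.fft.ifftshift(mexican_hat_2d(img.shape[0], a))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(psi))))

# Toy "geo-image": a sharp, shallow-like anomaly plus a broad regional trend,
# both centered on the grid.
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
img = np.exp(-(X**2 + Y**2) / (2 * 2**2)) + np.exp(-(X**2 + Y**2) / (2 * 30**2))

small_scale = cwt2d(img, 2)    # responds mainly to the sharp high-frequency part
large_scale = cwt2d(img, 30)   # responds mainly to the broad low-frequency part

# The strongest small-scale coefficient sits on the sharp anomaly's center.
print(np.unravel_index(np.argmax(small_scale), small_scale.shape))   # (64, 64)
```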

Analysis of isolated sources in potential field data with the CWT was introduced by Moreau [13], who showed that the maxima lines of the CWT indicate the locations of the potential field sources [10]. He also showed that the maxima lines bear the highest signal-to-noise ratios, allowing the treatment of noisy data sets [14]. An example of a CWT on a geophysical image, in which the mother wavelet is shifted and scaled in one direction, is shown in Figure 4. The scale is inversely proportional to frequency: large scale factors correspond to widely dilated mother wavelets (low frequencies).

As can be seen, different features are detectable at different frequencies (Figure 4b). Scaling and shifting the mother wavelet in other directions also reveals other sets of features (Figure 4c). If the mother wavelet is isotropic, there is no dependence on the angle in the CWT. The Mexican Hat mother wavelet used in Figure 4b is an example of an isotropic wavelet.

On the other hand, an anisotropic mother wavelet is dependent on the angle in the analysis; therefore, the CWT acts as a local filter for an image in scale, position, and angle. The Cauchy wavelet is an example of an anisotropic wavelet (Figure 4c). To better see the effect of directional transformation, we can use different anisotropic mother wavelets and compare the results.

## 3. The SFE2D architecture

To separate the spectral features, we created a 2D feature extraction scheme that combines 2D CWT with variance, kurtosis, and negentropy maximization algorithms (PCA, kICA, and nICA) to extract spectral features from wavelet spectra. The algorithm consists of three main stages (Figure 5):

1. A preprocessing stage, in which 2D interpolation and filtering of the raw datasets prepare the input images for feature extraction.
2. A spatial feature extraction stage, which employs PCA/ICA for spatial source separation and dimensionality reduction.
3. A spectral feature extraction stage, which consists of two substages: (a) continuous wavelet transform (CWT) and (b) spectral PCA/ICA (SPCA/SkICA/SnICA in Figure 5).

An effective spectral feature extraction depends on computer hardware specifications, especially for the larger numbers of spectral features produced by increasing the number of scales and mother-wavelet directions. In practice, users can create a larger number of features by changing the CWT parameters in the SFE2D program. For large numbers of CWT features (100–1000), the PCA/ICA methods provide a way to decorrelate and separate spectral features from the CWT images.

If needed, users can also reduce the dimensionality to summarize the features. By running the proposed workflow in the SFE2D program, the redundant frequency volumes could be reduced to a more manageable number of components. Taking advantage of the ICA statistical properties, we can keep the most geologically pertinent information within the spectral decomposed data.

## 4. Feature representation

The SFE2D program provides interpretation tools for feature representation in two dimensions:

### 4.1 RGB image compilation and analysis

SFE2D users can select the RGB image compilation method to assemble a colored image from three manually chosen features extracted by the PCA and ICA methods. The RGB combination makes it possible to bring out hidden characteristics. To control which extracted features are used for image compilation, the user can graphically set the label numbers of the desired features. For example, in nICA feature extraction, the labels 1, 2, 3 mean that features 1, 2, and 3 are used for image compilation. The user can also change the polarity of the features, since the PCA/ICA methods recover the polarities randomly. For example, one can set the labels −2, 1, 3 for RGB compilation.

A color-pick algorithm is also developed to select regions of interest (ROI) based on color intensities. To find objects of a specific color, the SFE2D program assigns low and high thresholds for each RGB color band from several clicks on the regions of interest. The more the user clicks on the specified zones, the more accurately the thresholding works. The program then automatically picks the minima and maxima from the red/green/blue bands. To reduce the effect of outliers, only the 5th to 95th percentile range is kept, which results in smoother color picking. Figure 6 represents an example of testing the ROI detection method in the SFE2D program to detect the red-colored objects in a standard image.
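The percentile-based thresholding described above can be sketched as follows (NumPy; the image, the clicked pixel values, and the helper names `pick_thresholds`/`roi_mask` are invented for illustration):

```python
import numpy as np

def pick_thresholds(clicked_pixels):
    """Derive low/high RGB thresholds from user-clicked pixels, keeping only
    the 5th-95th percentile range per band to suppress outliers."""
    p = np.asarray(clicked_pixels, dtype=float)        # shape (n_clicks, 3)
    return np.percentile(p, 5, axis=0), np.percentile(p, 95, axis=0)

def roi_mask(img, lo, hi):
    """Boolean mask of pixels whose R, G, B values all fall inside the range."""
    return np.all((img >= lo) & (img <= hi), axis=-1)

# Invented example: a red patch in a gray image, plus simulated user clicks
# (the last click is an outlier that the percentile clipping suppresses).
img = np.full((50, 50, 3), 120, dtype=float)
img[10:20, 10:20] = [200, 30, 30]                      # 10 x 10 red ROI
clicks = [[198, 28, 29], [202, 31, 30], [199, 30, 32], [250, 90, 90]]

lo, hi = pick_thresholds(clicks)
mask = roi_mask(img, lo, hi)
print(mask.sum())   # 100 -- exactly the 10 x 10 red patch
```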

### 4.2 The color image segmentation algorithm

SFE2D uses a *k*-means clustering algorithm for the segmentation of color images. The segmentation helps to reduce the color space to a manageable number of colors. The process produces a pseudo-geological map based on the spatially extracted RGB features. This eventually helps geologists to detect hidden geological contacts and structures in geo-images. In addition, segmentation significantly reduces memory usage and speeds up image analysis by focusing on the relevant information.

The output of segmentation is a set of $k$ non-overlapping segments $\{S_{1}, S_{2}, \ldots, S_{k}\}$ that comprises the whole segmented representation of a dataset $X$ in the form of [15]:

$$X = \bigcup_{i=1}^{k} S_{i}.$$

For the non-overlapping condition, we should have $S_{i} \cap S_{j} = \emptyset$ for $i \neq j$.

SFE2D incorporates a $k$-means segmentation for grayscale or RGB images. The program aims to iteratively separate the extracted features of an image into different clusters. The algorithm computes a hyperdimensional centroid for each cluster. Each segment $S_{i}$ is uniquely defined by a center $c_{i}$:

$$c_{i} = \frac{1}{\lvert S_{i} \rvert} \sum_{p_{j} \in S_{i}} p_{j},$$

where $\lvert S_{i} \rvert$ is the number of data patterns in the $i$th cluster.

The centroid gets modified iteratively through minimizing a cost function that is the distance between the data patterns $p_{j}$ and the centroids $c_{i}$:

$$J = \sum_{i=1}^{k} \sum_{p_{j} \in S_{i}} \left\lVert p_{j} - c_{i} \right\rVert^{2}.$$

If the image is RGB, the program calculates a 3D Euclidean distance in the RGB color space. The SFE2D program reads the data points $p_{j}$ and returns $k$ cluster centroids as output. The $k$-means algorithm for RGB image segmentation in the SFE2D program is as follows:

Step 1: Generate $k$ random initial centroids $c_{1}, \ldots, c_{k}$.

Step 2: Repeat the following until the segments $S_{i}$ do not change: assign each data point to the cluster with the nearest centroid, then recompute each centroid as the mean of its assigned points.

Step 3: Return all clustered segments as $S = \{S_{1}, \ldots, S_{k}\}$.
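Steps 1–3 can be sketched in NumPy as follows (a toy illustration, not the SFE2D code; note that, to make the demo deterministic, the random initialization of Step 1 is replaced here by a farthest-point choice):

```python
import numpy as np

def kmeans_rgb(pixels, k, n_iter=100):
    """k-means in 3D RGB color space following Steps 1-3 above."""
    # Step 1 (deterministic variant): first centroid = first pixel, then
    # repeatedly add the pixel farthest from the chosen centroids.
    centroids = [pixels[0]]
    while len(centroids) < k:
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centroids], axis=0)
        centroids.append(pixels[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):                                        # Step 2
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)               # assign to nearest centroid
        new = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new, centroids):         # segments no longer change
            break
        centroids = new
    return labels, centroids                                       # Step 3

# Invented example: pixels drawn from two well-separated colors plus noise.
rng = np.random.default_rng(6)
reds  = [200, 30, 30] + rng.normal(0, 5, (300, 3))
blues = [30, 30, 200] + rng.normal(0, 5, (300, 3))
pixels = np.vstack([reds, blues])

labels, centroids = kmeans_rgb(pixels, 2)
# Each cluster contains exactly one color family.
print(set(labels[:300]) == {0} and set(labels[300:]) == {1})   # True
```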

## 5. Evaluation

Before applying the SFE2D algorithms to geoscientific applications, we tested the workflows, including PCA, kICA, nICA, the ROI color-pick algorithm, and *k*-means segmentation, in two sections dedicated to grayscale and RGB ICA-based image segmentation.

### 5.1 Spatial feature extraction performance

In grayscale, we tested the performance of the kICA versus nICA algorithms and the effect of feature overlaps (shadow effects) on image segmentation. An example of the PCA and ICA procedures is presented in Figure 7. We simulated and evaluated the feature extraction process on an overlapped photo mixing three images: a human face, fruits, and a vegetable scene (features $f_{1}$, $f_{2}$, and $f_{3}$ in Figure 7). The problem can be formulated as the extraction of the human face (feature $f_{2}$) from the mixed images ($g_{1}$, $g_{2}$, and $g_{3}$) without any prior information about the original face. This simulation helps us to evaluate the performance of the SFE2D algorithm before implementation on more complex applications.

While PCA whitening fails to recover all three features, the kICA and nICA algorithms successfully separate them, even in the presence of added unpredictable noise (Figure 8).

Another influencing factor, at the same level of added unpredictable noise, is the sampling interval. This is crucially important in the geophysical sense, since the sampling interval directly influences the geophysical survey budget: data sets sampled at smaller intervals contain more samples and give more detail about geological structures. As shown in Figure 9, increasing the number of samples reduces the performance of kICA compared to nICA, indicating that for large datasets with small sampling intervals, negentropy maximization is superior to kurtosis maximization.

However, there are two inherent ambiguities in the ICA framework: (1) ambiguity in the permutation of the recovered sources, that is, the order of the estimated independent components is unspecified; and (2) ambiguity in the recovered amplitudes of the sources. One can partially solve the second problem by assuming unit variance for all sources. However, there is still a 50% chance that the polarity of each source is not determined correctly. In cases where we have enough prior information, we can multiply sources by −1 to achieve the best results.
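Both ambiguities can be illustrated, and undone when prior information is available, with a small NumPy sketch (the `align_to_reference` helper is a hypothetical name, not part of SFE2D):

```python
import numpy as np

def align_to_reference(sources, refs):
    """Reorder and re-sign estimated sources to best match reference images
    (the 'prior information' mentioned in the text)."""
    aligned = np.empty_like(refs)
    for i, r in enumerate(refs):
        c = [np.corrcoef(r, s)[0, 1] for s in sources]
        j = int(np.argmax(np.abs(c)))             # best-matching source
        aligned[i] = np.sign(c[j]) * sources[j]   # multiply by -1 if needed
    return aligned

rng = np.random.default_rng(7)
f = rng.uniform(-1, 1, (2, 1000))                 # "true" sources
scrambled = np.vstack([-f[1], f[0]])              # permuted and sign-flipped,
                                                  # as an ICA output might be
fixed = align_to_reference(scrambled, f)
print(np.allclose(fixed, f))   # True
```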

The effect of mixing on image segmentation is also explored in Figure 10. ICA significantly reduces the impact of feature overlaps and enhances the performance of image segmentation. The proposed approach integrates information from multiple geo-images and makes sure that the combined images are maximally independent and unique. In other words, the feature overlaps are minimal. This has an important implication in image segmentation since a slight presence of image overlaps and artifacts distort the segmentation output. As shown in Figure 10, the segmentation after negentropy maximization improves the human face detection.

### 5.2 Spectral feature extraction

Further feature extraction can be performed on the images to extract spectral features like edges and lineaments inside the photos. Figure 11a shows the results of the CWT with a Cauchy mother wavelet and 20 scales on the second independent component (the recovered human face). As can be seen, different spectral features appear at different CWT scales/directions. Low-level features like edges appear at higher frequencies, and high-level features appear at lower frequencies. However, the transition between scales/directions is relatively smooth, which allows some features from each scale/direction to leak into the following scale/direction, creating unwanted overlap of spectral features. To reduce these effects, we again perform source separation with the PCA and ICA algorithms. The results are shown in Figure 11b. As shown, spectral source separation with nICA has improved the low/high-level features, with crisper edges and more visible boundaries.

## 6. Applications

### 6.1 Integrated interpretation of geophysical data

#### 6.1.1 Fast pseudo-geological mapping

Geophysical surveys are essential for fast geological mapping in mineral exploration programs. Nevertheless, there is not necessarily a one-to-one correlation between geophysical responses and contrasts in lithotypes, because the rock-forming minerals do not entirely control the geophysical properties. For example, in magnetic surveys, most magnetic anomalies are related only to the distribution of rock-forming magnetite inside the rocks. Therefore, a “geological” map created using geophysical data sets should be referred to as a “pseudo-geological” map [16]. However, the integrated interpretation of multiple data types, for example, magnetic, radiometric, and conductivity, helps reduce uncertainty and acquire a more reliable geological model. SFE2D provides such an integration tool for exploration. A geological map of the exemplified region indicates a very complex geological setting along with complex structural faults and fractures (Figure 12).

The geophysical survey of the area is part of a regional program that aims to complete the geological mapping of the southwestern part of James Bay at a scale of 1:50,000. The region’s rocks are mainly of Archean age and belong to the Superior Province, which forms the heart of the Canadian Shield, one of the largest Precambrian cratons exposed on Earth. There are also a number of dykes of Neoarchean to Paleoproterozoic diabase [18]. The region is subdivided into five sub-provinces, and the boundaries between the different sub-provinces are based on lithological contrasts and on metamorphic, structural, and geophysical information.

However, the geological compilation is subject to varying interpretations due to intrinsic uncertainties in geoscientific data sets. This section provides an example of feature extraction for integrated geophysical interpretation through PCA- and ICA-based color image segmentation by *k*-means clustering on three sources of geophysical data sets.

GGMplus Bouguer gravity data sets [19], high-resolution airborne magnetic data sets, and a digital elevation model extracted from SIGEOM [17] are used as inputs for integrated feature extraction and segmentation (Figure 13).

The program can perform an image segmentation based on the *k*-means algorithm over the RGB-merged independent components. Unlike traditional segmentation methods, the integrated *k*-means segmentation algorithm helps calculate segments based on principal or independent components of the original data sets. The results are pseudo-geological maps that integrate extracted features from the three sources of data sets (Figure 14a and b).

#### 6.1.2 Spectral feature extraction

The CWT is performed on the magnetic data sets (Figure 13c). Decomposing the magnetic data sets with the CWT forms raw spectral features that eventually increase the dimensionality. 2D wavelet coefficients are calculated here for eight scales with a Mexican Hat mother wavelet, containing several frequency-dependent raw features (Figure 15). However, the algorithm allows choosing any number of scales and directions (in the case of anisotropic mother wavelets), as long as the computer hardware specifications allow.

As shown in Figure 15, spectral decomposition reveals many latent features that are not properly visible in the spatial domain. The major difficulty arises in extracting and selecting the spectral features due to the statistical interdependence of features in various spectra. Spectral decomposition of images also overloads the computational cost of interpretation and demands a methodology to pick the best spectral representation of images. The SFE2D program tackles this problem with the spectral independent component analysis. We used Fast-ICA through negentropy maximization to separate the eight raw wavelet features to produce the spectral inputs necessary for interpretation. The Fast-ICA algorithm can reduce the dimensionality of the raw features. This is important when we produce a large number of raw features through CWT, and as a result, our computational hardware resources are not enough to handle the inevitable high dimensionality.

We reduced the dimensionality of the raw spectral features and thus produced three independent features (Figure 16). As shown in Figure 16, the program uncovered several low-frequency features in higher wavelet scales related to deep sources and high-frequency features related to shallow sources.

2D wavelet coefficients are also calculated with Cauchy mother wavelet on eight scales and eight directions, producing several directional/frequency-dependent raw features (Figure 17). As can be seen, most of the wavelet spectral content is redundant, and similar features are repeated in 64 subsequent directions/scales. Three independent spectral components are extracted through spectral feature extraction and dimensionality reduction (Figure 18a–c). Therefore, the 64-dimensional hyperspace is reduced to a 3D RGB space to facilitate visual interpretations (Figure 18d). As can be seen, the process helped to uncover several hidden lineaments in the NE–SW direction.
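The reduction from many redundant wavelet features to a 3-band RGB composite can be sketched as follows (a NumPy illustration using PCA via SVD on synthetic stand-in "CWT features"; SFE2D uses spectral ICA for this step):

```python
import numpy as np

rng = np.random.default_rng(8)
h = w = 64

# Synthetic stand-in for 64 redundant CWT feature images: 64 noisy mixtures
# of only 3 underlying spatial patterns, each image flattened to a row.
base = rng.standard_normal((3, h * w))
feats = rng.standard_normal((64, 3)) @ base + 0.01 * rng.standard_normal((64, h * w))

# PCA via SVD: the top-3 right singular vectors are 3 decorrelated component
# images capturing nearly all of the 64 features' variance.
x = feats - feats.mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(x, full_matrices=False)
comps = Vt[:3]

# Rescale each component image to [0, 1] and stack as R, G, B bands.
lo = comps.min(axis=1, keepdims=True)
hi = comps.max(axis=1, keepdims=True)
rgb = ((comps - lo) / (hi - lo)).reshape(3, h, w).transpose(1, 2, 0)

print(rgb.shape)   # (64, 64, 3)
```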

### 6.2 Microscopic image analysis

Optical microscopy with polarized light microscopes has become one of the most powerful techniques in petrography and mineral exploration. However, due to the complexity of mineral assemblages, acquiring quality images with details of mineral grains and fine textures is a challenging task. Here we present an application of the SFE2D program in the micromorphological characterization of minerals under thin section. An example of an optical microscopic image is presented in Figure 19a. The SFE2D program can separate mineral zones on a highly mixed texture.

The algorithm starts by balancing the colors of the original image based on the normal distribution. Then, source separation on the color bands helps extract hidden features into principal or independent components. The RGB compilation of the independent components of the image is also shown in Figure 19b–d. As can be seen, different polarities offer different feature representations.

The program also offers fast detection of regions of interest (ROI) with a color-pick algorithm based on a few clicks on specified color zones. As can be seen in Figure 20, subtle mineralogical zones are optimally extracted for geological interpretations. This eventually helps calculate the percentage of specified minerals on thin sections, which is very important for mineral exploration studies.

## 7. Conclusions

The results presented here illustrate the theory, design, performance, and applications of a standalone mathematical tool (SFE2D) for 2D spatial and spectral feature extraction, based on PCA, ICA, CWT, *k*-means clustering segmentation, and image processing algorithms. SFE2D provides an integration tool for the interpretation of multiple geoscientific data sets at once. SFE2D has straightforward applications in various geophysical and geoscientific explorations where one needs to add value to observed geo-images by recovering hidden features in hyperdimensional data sets.

The program can perform an image segmentation based on the *k*-means algorithm over the RGB-merged independent/principal components. The results are pseudo-geological maps that integrate extracted features from multiple data sets. Unlike traditional segmentation methods, the integrated *k*-means segmentation algorithm helps calculate segments based on principal or independent components of the original data sets. The proposed approach integrates information from multiple geophysical data sets and ensures that the combined images are maximally independent and unique. In other words, the feature overlaps are minimal. This has an important implication for image segmentation, since even a slight presence of image overlaps and artifacts distorts the segmentation output.

Spectral decomposition of images also provides a unique way for feature extraction in the frequency domain. Deploying the SFE2D algorithms helps eliminate redundant frequency volumes and reduce them to a more manageable number of components. Taking advantage of the ICA statistical properties, we can keep the most geologically pertinent information within the spectral decomposed data. The feature extraction algorithms in SFE2D can also be used in deep learning applications where feature extraction is a primary step in optimizing neural network design.

The SFE2D program can also be used in the micromorphological characterization of minerals in thin sections. The program offers fast detection of regions of interest (ROI) with a color-pick algorithm based on a few clicks on specified color zones.

The SFE2D application is straightforward and ready for immediate use, as long as users have already installed the MATLAB Runtime library on their computer.

## Acknowledgments

This study is funded by FRQNT (Fonds de recherche, Nature et technologies du Québec) and MERNQ (ministère de l’Énergie et des Ressources naturelles du Québec).

The SFE2D is provided as a standalone executable program. To use the executable program (SFE2D.exe), users do not need any previously installed MATLAB software on their PC. The only prerequisite is installing the latest MATLAB 2021a Runtime library, free to download from MathWorks [21]. To do so, users can install the required runtime codes by double-clicking on the offline program installer “Installer_SFE2D (Offline).exe” or its online version “Installer_SFE2D (Online).exe”, a lightweight installer that downloads the program and the runtime codes automatically from the MathWorks server. Users can also directly download the runtime from the MathWorks website [21].

After unzipping the file and installing the runtime, the program should work by simply double click on the SFE2D.exe. It is recommended to copy/paste the SFE2D.exe to a project folder where data sets are located with read/write permission. Running the program for the first time needs activation. To activate the program, one needs to double click on the “Activate.exe” file to create a “Key” file. A dialog box (Password Required) appears that asks to insert the password provided for the user. As soon as typing it and pressing Enter, a “Key” file is going to be generated. The user needs to keep that file and save it in the same project folder where “SFE2D.exe” is copied. When running the SFE2D, the program interface appears as in Figure A1. On the right side, the user can control the program parameters, and on the left, the progress of calculations and possible errors can be tracked with the windows command shell console for execution (black screen on the left).