Open access peer-reviewed chapter

Mathematical Basics as a Prerequisite to Artificial Intelligence in Forensic Analysis

Written By

KP Mredula Pyarelal

Submitted: 15 September 2022 Reviewed: 03 October 2022 Published: 27 November 2022

DOI: 10.5772/intechopen.108416

From the Edited Volume

Numerical Simulation - Advanced Techniques for Science and Engineering

Edited by Ali Soofastaei


Abstract

This chapter reviews and revisits advances in the mathematical and statistical methods underlying the much-sought-after field of artificial intelligence (AI). References are included to smooth the discussion and to clarify the difficulties commonly faced by readers. Mathematics motivates image processing, and image processing in turn improves the methods of mathematics; this interplay is explored here. Mathematics forms the backbone, and a discussion of the basics of neural networks builds the pathway to artificial neural networks. The struggle researchers face in recalling the prerequisites is addressed in this chapter. It provides an aerial view by stating the definitions of prerequisites such as mathematics for image processing and mathematics for forensic image processing, which includes the basics of neural networks and, as a subsection, the prerequisites of probability theory. Forensic science makes extensive use of probability densities. The topics covered give readers a quick recap of concepts that appear to belong to different specializations but are deeply connected to one another. The first section is dedicated to mathematics for image processing, and the second connects mathematics and image processing with forensic science.

Keywords

  • mathematics
  • probability
  • forensic sciences
  • image processing
  • neural network
  • score-based likelihood

1. Introduction

The chapter aims to bridge the gap between AI and the mathematics at its core. They represent two branches of the same tree [1]. But since they are taught independently, it is often difficult to connect the two and look at the broader picture from a satellite view. AI dates back to the 1950s, when John McCarthy coined the phrase "the science and engineering of making intelligent machines." A brief history can be found in [2].

The three layers of AI broadly consist of incremental advancement in algorithm design, application to a specific domain, and step changes performed for improvement. Clarity in mathematical and statistical concepts is highly desirable for understanding and implementing AI. Here, an attempt is made to show the connection lucidly through a few topics. Some explanations are beyond the scope of the chapter and are therefore stated with detailed references for further exploration.

Since there is a deep connection between mathematics, image processing, and forensic science, the topics included provide an initial know-how for scholars to start exploring the possibilities of forensics-based image processing. Multimedia forensics uses AI techniques involving convolutional neural networks. Forensic image analysis is used to detect faked photographs using machine learning [3]; another important application is in medicine, for forensic anthropology [4]. To understand such recent research in multimedia, the concepts of image sampling, enhancement, convolution, and neural networks are included in this chapter. IoT-based forensic analysis also uses the statistical approaches discussed.

Beginning with a few basic image-processing and mathematical expressions, a systematic development is traced through the score-based likelihood used in forensic exploration. This will benefit readers interested in quantitative procedures for weighing evidence using the likelihood ratio [5]. The later part of the chapter gives a brief introduction to neural networks, paving the path to digital forensic neural networks.

Keeping the above idea in mind, the first section includes a brief discussion of image processing and its mathematical representation: the image formation model, image sampling, intensity transformation and spatial filtering, image enhancement using the Laplacian mask, image denoising, order-statistics filters, and convolution in image processing. The second section is dedicated to forensic image processing and mathematics, covering steganography, the discrete Fourier transform, the score-based likelihood, and finally an overview of neural networks and probability theory.


2. Brief of mathematics for image processing

Digital image processing has developed rapidly alongside advances in computing and mathematics, nurturing the ever-growing demand for technological luxuries.

2.1 Mathematics in image processing

Mathematical image processing is widely used in fields such as medical imaging, surveillance, video transmission, astrophysics, and many more. Signals are one-dimensional images, planar images are two-dimensional, and volumetric images are three-dimensional. Grey-scale images are classified as single-valued functions, while coloured images are vector-valued functions. Imperfections such as blurring and noise reduce the quality of an image.

2.2 Image formation model

Mathematically, a planar image is represented by a function mapping a point of the spatial domain to a function value,

(x, y) ↦ f(x, y).   (E1)

In an image, the intensity value is the energy radiated by the physical source, so it is positive and finite:

0 < f(x, y) < ∞.   (E2)

f(x, y) depends on the illumination i(x, y) and the reflectance r(x, y); hence

f(x, y) = i(x, y) r(x, y).   (E3)

A similar expression is applicable to the images formed by transmission through a medium.

2.3 Image sampling

Sampling means digitizing the coordinate values; digitizing the amplitude is called quantization. A digital image is represented as a matrix, and the number of grey levels is taken as a power of 2.
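As a minimal sketch of quantization (NumPy assumed, all values illustrative), the snippet below maps an 8-bit image onto 2^b grey levels by uniform binning:

```python
import numpy as np

def quantize(img, bits):
    """Quantize pixel amplitudes to 2**bits grey levels by uniform binning.
    Assumes 8-bit input with values in [0, 255]."""
    step = 256 // (2 ** bits)
    return (img // step) * step   # snap each pixel to the bottom of its bin

# A 16x16 ramp image containing every 8-bit grey level once.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
q = quantize(img, 2)   # 2 bits -> 4 grey levels: 0, 64, 128, 192
```

Lowering `bits` makes the false contouring typical of coarse quantization visible.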

2.4 Intensity transformation and spatial filtering

For improving contrast, the statistical tool of histogram equalization is used. Consider a low-contrast (dark or light) input image p(x, y) and a high-contrast output image m(x, y).

Assume that the grey-level range consists of L grey levels. In the discrete case, let

r_k = p(x, y), k = 0, 1, 2, …, L − 1,   (E4)

be a grey level of p, and let s_k = m(x, y) be the desired grey level of the output image m.

A transformation T is sought,

T: [0, L − 1] → [0, L − 1] such that s_k = T(r_k) for all k = 0, 1, 2, …, L − 1.   (E5)

Define h(r_k) = n_k, where r_k is the kth grey level and n_k is the number of pixels in the image p taking the value r_k. The visualization of this discrete function is the histogram; normalizing by the total pixel count N gives p(r_k) = n_k/N.

In the discrete case, with r_k = p(x, y), the histogram-equalized image m at (x, y) is defined by

m(x, y) = s_k = (L − 1) Σ_{j=0}^{k} p(r_j).   (E6)

For a theoretical interpretation of histogram equalization (continuous case), consider the grey levels r and s as random variables with associated probability density functions p_r(r) and p_s(s). The continuous version uses the cumulative distribution function, and we define

s = T(r) = (L − 1) ∫_0^r p_r(w) dw.   (E7)

If p_r(r) > 0 on [0, L − 1], then T is strictly increasing from [0, L − 1] to [0, L − 1], and thus T is invertible. Moreover, if T is differentiable, then we can use a formula from probability: if s = T(r), then

p_s(s) = p_r(r) |dr/ds|,   (E8)

where we view s = s(r) = T(r) as a function of r, and r = r(s) = T⁻¹(s) as a function of s. From the definition of s, differentiating Eq. (E7) gives

ds/dr = (L − 1) p_r(r) = T′(r),   (E9)

so that

dr/ds = d/ds T⁻¹(s) = 1/T′(r(s)) = 1/[(L − 1) p_r(r(s))],
p_s(s) = p_r(r) dr/ds = p_r(r) · 1/[(L − 1) p_r(r(s))] = 1/(L − 1).   (E10)

So p_s is the uniform probability density function on the interval [0, L − 1], corresponding to a flat histogram in the discrete case.
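The discrete transform of Eq. (E6) can be sketched in a few lines of NumPy (a minimal illustration, not production code; rounding s_k to integers is an implementation choice):

```python
import numpy as np

def equalize(img, L=256):
    """Histogram-equalize a grey-scale image via Eq. (E6):
    s_k = (L-1) * sum_{j<=k} p(r_j), the scaled empirical CDF."""
    hist = np.bincount(img.ravel(), minlength=L)            # h(r_k) = n_k
    p = hist / img.size                                     # p(r_k) = n_k / N
    s = np.round((L - 1) * np.cumsum(p)).astype(img.dtype)  # the transform T
    return s[img]                                           # map each pixel r_k -> s_k

# A dark, low-contrast 4x4 test image: values crowd the low end.
img = np.array([[0, 1, 1, 2],
                [1, 2, 2, 3],
                [2, 3, 3, 4],
                [3, 4, 4, 5]], dtype=np.uint8)
out = equalize(img)
```

The six grey levels 0–5 are stretched to span 16–255, flattening the histogram as far as the discrete levels allow.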

2.5 Image enhancement

Considering an image function f with second-order partial derivatives, the Laplacian of f in continuous form is defined as

Δf = ∂²f/∂x² + ∂²f/∂y².   (E11)

The operator f ↦ Δf is linear and rotationally invariant.

Implementing the finite-difference approximation of numerical analysis, the second derivatives can be approximated as

∂²f/∂x² (x, y) ≈ [f(x + h, y) − 2f(x, y) + f(x − h, y)] / h²   (E12)

and

∂²f/∂y² (x, y) ≈ [f(x, y + k) − 2f(x, y) + f(x, y − k)] / k².   (E13)

Here the second derivative of f(x, y) with respect to x is approximated using the values of f at x, x + h, and x − h with y held constant; a similar approach is used for the second derivative with respect to y.

Taking h = k = 1, as appropriate for any digital image, gives the 5-point Laplacian formula

Δf(x, y) ≈ f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y),   (E14)

which can be applied to discrete images.

The Laplacian mask defined for a spatial filter is

m = [0 1 0; 1 −4 1; 0 1 0].   (E15)

The Laplacian operator can also be discretized with the 9-point Laplacian formula

Δf(x, y) ≈ f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) + f(x + 1, y + 1) + f(x − 1, y + 1) + f(x − 1, y − 1) + f(x + 1, y − 1) − 8f(x, y),   (E16)

with Laplacian mask m = [1 1 1; 1 −8 1; 1 1 1].

The sum of the coefficients is 0 for both Laplacian masks m, which is related to the sharpening property of the filter. The Laplacian can be used to enhance images. For example, in 1D a smooth-edged profile can be sharpened by applying the operator e = f − f″; in 2D, for a blurry image, the operator is e(x, y) = f(x, y) − Δf(x, y). In the discrete case, if the 5-point Laplacian is used, we obtain the linear spatial filter

e(x, y) = f(x, y) − Δf(x, y) = 5f(x, y) − f(x + 1, y) − f(x − 1, y) − f(x, y + 1) − f(x, y − 1)   (E17)

with mask

m = [0 −1 0; −1 5 −1; 0 −1 0].

Similarly, the 9-point Laplacian gives the mask

m = [−1 −1 −1; −1 9 −1; −1 −1 −1].   (E18)
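A minimal sketch of the 5-point sharpening filter of Eq. (E17), implemented with NumPy slicing rather than an explicit mask (border handling is simplified: interior pixels only):

```python
import numpy as np

def sharpen5(f):
    """Apply the 5-point sharpening formula of Eq. (E17):
    e(x,y) = 5 f(x,y) - f(x+1,y) - f(x-1,y) - f(x,y+1) - f(x,y-1).
    The one-pixel border is left unchanged."""
    e = f.astype(float).copy()
    e[1:-1, 1:-1] = (5 * f[1:-1, 1:-1]
                     - f[2:, 1:-1] - f[:-2, 1:-1]
                     - f[1:-1, 2:] - f[1:-1, :-2])
    return e

# A blurry vertical step edge, constant along rows.
f = np.array([[10, 10, 20, 30, 30]] * 5, dtype=float)
e = sharpen5(f)
```

On the step edge the output undershoots to 0 on the dark side and overshoots to 40 on the bright side, exactly the edge exaggeration that makes the result look sharper.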

2.6 Image denoising

Human vision can classify and categorize an image at different levels, but for a digital camera, denoising is difficult. Enhancing observations and interpolating missing image data must be performed rigorously to improve image quality. There are many causes of image contamination, such as heat generated by the camera or external sources, which may free electrons from the image sensor itself, thus contaminating the true photoelectrons.

Mathematically, one can write the observed image captured by devices as [6]:

v(i) = u(i) + n(i),   (E19)

where v(i) is the observed value and u(i) is the true value, which needs to be recovered from v(i); n(i) is the noise perturbation.

For a grey-value image, the range of pixel values is [0, 255], where 0 represents black and 255 represents white. To measure the amount of noise in an image, one may use the signal-to-noise ratio (SNR),

SNR = σ(u) / σ(n),   (E20)

where σ(u) denotes the empirical standard deviation of u(i),

σ(u) = sqrt( Σ_i (u(i) − ū)² / N )   (E21)
σ(n) = sqrt( Σ_i (u(i) − v(i))² / N )   (E22)

and ū = Σ_i u(i) / N is the average grey level computed from the clean image.

SNRs are usually expressed on the logarithmic decibel scale, as signals have a wide dynamic range. In decibels, the SNR is, by definition, 10 times the logarithm of the power ratio:

SNR = 10 log₁₀ [ Σ_i (u(i) − ū)² / Σ_i (u(i) − v(i))² ].   (E23)
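Eq. (E23) is straightforward to evaluate; the snippet below builds a synthetic noisy image (all parameters illustrative) and computes the decibel SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.integers(0, 256, size=(64, 64)).astype(float)   # "clean" image u(i)
v = u + rng.normal(0, 10, size=u.shape)                 # observation v(i) = u(i) + n(i)

# Eq. (E23): 10 log10 of the ratio of signal power to noise power.
snr_db = 10 * np.log10(np.sum((u - u.mean()) ** 2) / np.sum((u - v) ** 2))
```

With noise of standard deviation 10 against grey levels spread over [0, 255], the SNR lands around 17 dB; halving the noise raises it by about 6 dB.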

A denoising method D_h, applied to an image u, can be written as

u = D_h u + n(D_h, u),   (E24)

where h is the filtering parameter, D_h u is the denoised image, and n(D_h, u) is the noise guessed by the method.

It is not sufficient simply to smooth u to obtain the denoised image. More recent methods not only smooth but also try to recover lost information in n(D_h, u), as discussed in [7, 8]; for instance, in an image captured by a digital SLR camera, we often need to keep the sharpness and detail while the noise is blurred away. In the literature, work has been done on local filtering methods, including the Gaussian smoothing model [9] and bilateral filters (Elad); PDE-based methods, including the anisotropic filtering model [10, 11] and the total variation model (F. Guichard); approaches using frequency-domain filtering [12]; steering kernel regression [13]; and so on.

2.7 Order statistics filters

Order filters are used in image processing. Order-statistics filters are non-linear spatial filters whose response is based on ordering the pixels contained in the image area encompassed by the filter and then replacing the value of the centre pixel with the value determined by the ranking result.

An order-statistics filter is an estimator of the mean that uses a linear combination of order statistics. How, then, does it differ from the mean filter? The mean filter is a simple sliding-window spatial filter that replaces the centre value in the window with the average of all pixel values in the window. An order-statistics filter instead considers N observations arranged in ascending order,

X₍₁₎ < X₍₂₎ < … < X₍N₎,   (E25)

where the X₍ᵢ₎ are the order statistics of the N observations [14]. An order-statistics filter is an estimator F(X₍₁₎, X₍₂₎, …, X₍N₎) of the mean of X of the form

F(X₍₁₎, X₍₂₎, …, X₍N₎) = a₁X₍₁₎ + a₂X₍₂₎ + … + a_N X₍N₎.   (E26)

The linear average has coefficients aᵢ = 1/N, while the median filter has coefficients

aᵢ = 1 if i = (N + 1)/2, and 0 otherwise.   (E27)

The trimmed-mean filter, which averages the M central order statistics, has coefficients

aᵢ = 1/M if (N − M + 1)/2 ≤ i < (N + M + 1)/2, and 0 otherwise.   (E28)

For any distribution, one can determine the optimal coefficients by minimizing the criterion function J(a) = E[(aᵀX − μ)²], where a is the vector of order-statistics filter coefficients, X is the vector of order statistics, and μ is the mean of X; here E[X] = Σᵢ Xᵢ / N. A property of order-statistics filters that turns out to be important for grid filters is that they are piecewise linear. Order-statistics filters are generally applied to 3×3, 5×5, or 7×7 windows.
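The generic form of Eq. (E26) can be sketched directly: sort each 3×3 window and take the inner product with a coefficient vector a. The mean and median filters then differ only in their coefficients (a toy illustration; real implementations vectorize this loop):

```python
import numpy as np

def order_statistic_filter(img, coeffs):
    """In each 3x3 window, sort the 9 pixels (Eq. E25) and output the
    linear combination a^T X of Eq. (E26). Borders are left unchanged."""
    coeffs = np.asarray(coeffs, dtype=float)
    out = img.astype(float).copy()
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            window = np.sort(img[x - 1:x + 2, y - 1:y + 2], axis=None)
            out[x, y] = coeffs @ window
    return out

mean_coeffs = np.full(9, 1 / 9)        # a_i = 1/N: the mean filter
median_coeffs = np.zeros(9)
median_coeffs[4] = 1.0                 # a_i = 1 only at i = (N+1)/2: the median filter

flat = np.full((5, 5), 10.0)
flat[2, 2] = 250.0                     # one salt-noise outlier
med = order_statistic_filter(flat, median_coeffs)
avg = order_statistic_filter(flat, mean_coeffs)
```

The median filter removes the outlier completely (the centre returns to 10), while the mean filter only smears it across the window.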

2.8 Convolution in image processing

The filter effect for images is due to convolution. A matrix operation is applied to the image, in which a mathematical operation on integer weights is performed. The idea is to determine the value of a central pixel by adding together the weighted values of all its neighbours. The outcome is a new, modified, filtered image. Convolution is performed to smooth, enhance, or sharpen an image.

A convolution is done by multiplying each pixel and its neighbouring pixels' colour values by a matrix called a kernel. A kernel is a small matrix of numbers used in image convolutions. Differently sized kernels containing different patterns of numbers produce different results under convolution. The size of a kernel is arbitrary, but 3×3 is often used [15].

The formula for convolution is given by

W = (1/F) Σ_{i=1}^{q} Σ_{j=1}^{q} f(i, j) d(i, j),   (E29)

where

W is the output pixel value,
f(i, j) is the coefficient of the convolution kernel at position (i, j) in the kernel matrix,
d(i, j) is the data value of the pixel that corresponds to f(i, j),
F is the sum of the coefficients of the kernel matrix, and
q is the dimension of the kernel.
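Eq. (E29) for a single output pixel can be sketched as follows. The division by F normalizes the output; when the kernel coefficients sum to zero, as for Laplacian masks, F is conventionally taken as 1, an assumption made explicit in the code:

```python
import numpy as np

def convolve_pixel(patch, kernel):
    """Eq. (E29) for one output pixel: W = (1/F) * sum_ij f(i,j) * d(i,j),
    with F the sum of kernel coefficients (taken as 1 when that sum is 0)."""
    F = kernel.sum()
    if F == 0:
        F = 1.0
    return float((kernel * patch).sum() / F)

patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
w_avg = convolve_pixel(patch, np.ones((3, 3)))   # 3x3 averaging kernel
laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)   # the mask of Eq. (E15)
w_lap = convolve_pixel(patch, laplacian)
```

Strictly speaking, convolution flips the kernel before the multiply-add; for the symmetric kernels above the flipped and unflipped results coincide.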

The next section draws the connection between the mathematics behind image processing and forensics-based image processing.


3. Mathematics for forensic image processing

Forensics can provide information security when the information source is not trusted. Forgery-detection algorithms detect the interplay between actual and modified details. Probability and linear algebra help in learning forgery detection. Fourier-transform infrared microspectroscopy is used in analysing traumatic brain injuries [16]. Hypothesis testing with local estimates helps in distinguishing unaltered from falsified images. Stochastic gradient descent approaches are also used in forensic editing detection. These are used because metadata is unreliable, and multimedia forensics is found superior in such cases. In certain advanced studies, graph theory is also employed to improve image identification. We now walk through the different aspects to capture the essence of the approaches used in forensic science.

3.1 Steganography

The art of hidden writing is known as steganography. Steganography differs from cryptography (the art of secret writing), which is used to make a message unreadable by anyone except its intended recipient. Steganography has commercial uses in the digital world, most notably digital watermarking: if someone steals a file and claims the work as his or her own, the artist can later prove ownership because only he or she can recover the watermark [17, 18].

Kessler [19] noted that a computer forensics examiner might suspect the use of steganography because of the nature of the crime, books in the suspect's library, the type of hardware or software discovered, large sets of seemingly duplicate images, statements made by the suspect or witnesses, or other factors. A website might be suspect by the nature of its content or the population that it serves. These same items might give the examiner clues to passwords as well. And searching for steganography is not only necessary in criminal investigations and intelligence-gathering operations: forensic accounting investigators are realizing the need to search for steganography, as it has become a viable way to hide financial records, as discussed by [20].

3.2 Discrete Fourier transform

The discrete Fourier transform maps a sequence of N complex numbers x₀, x₁, …, x_{N−1} to another sequence of N complex numbers X₀, X₁, …, X_{N−1} via

X_r = Σ_{n=0}^{N−1} x_n e^{−2πi r n / N} = Σ_{n=0}^{N−1} x_n [cos(2πrn/N) − i sin(2πrn/N)],   (E30)

written compactly as

X = F x.   (E31)
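Eq. (E30) translates directly into code; the naive O(N²) sum below (a sketch for clarity, not efficiency) agrees with NumPy's FFT:

```python
import numpy as np

def dft(x):
    """Eq. (E30): X_r = sum_{n=0}^{N-1} x_n * exp(-2*pi*i*r*n/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * r * n / N)) for r in range(N)])

x = [1.0, 2.0, 3.0, 4.0]
X = dft(x)   # X_0 is the plain sum of the samples, here 10
```

The fast Fourier transform computes the same quantity in O(N log N), which is what makes frequency-domain image filtering practical.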

3.3 Score-based likelihood

Statistical methods are widely used in digital image forensics. One of them is the score-based likelihood ratio (SLR) for camera device identification. The score-based likelihood is a deciding factor in assessing the similarity between pieces of evidence, such as a fingerprint obtained from a crime scene and one taken from a suspect. Here, photo-response non-uniformity serves as a camera fingerprint, and one minus the normalized correlation serves as a similarity score [21]. The procedure covers source-identification problems as well as forgery- and tampering-detection problems. The camera device identification problem uses the SLR to decide whether a given digital image was taken by a known camera device, along with the strength or weakness of the evidence in favour of the decision; it thus helps quantify the weight of the evidence. SLRs fit probability density functions to both sets of scores. Both pdfs are evaluated at the score between the noise residual of the image in question and the camera fingerprint, and the SLR is the ratio of the results.

Let us discuss a simple example. Let I₁ be the image output from a camera, which includes noise, and let I₀ be the perfect image without noise. Denote the camera fingerprint by K and all other noise components of the image by φ. Then the image output I₁ can be modelled as

I₁ = I₀ + I₀K + φ.   (E32)

The true fingerprint K cannot be obtained in practice, so we estimate it with the maximum-likelihood estimator K̂. K̂ can be obtained by collecting N images I₁⁽¹⁾, I₁⁽²⁾, …, I₁⁽ᴺ⁾ from the camera in question. The noise residual of each image can be computed with a denoising filter F using the formula

W⁽ⁱ⁾ = I₁⁽ⁱ⁾ − F(I₁⁽ⁱ⁾),   i = 1, 2, …, N.   (E33)

Then the photo-response non-uniformity estimate is given by the formula

K̂ = Σ_{i=1}^{N} W⁽ⁱ⁾ I₁⁽ⁱ⁾ / Σ_{i=1}^{N} (I₁⁽ⁱ⁾)².   (E34)

For accuracy of the similarity score, a peak-to-correlation score is used in place of the normalized correlation, as it has better decision-threshold stability [21]. A trace-anchored score-based likelihood ratio can also be studied. The SLR framework uses hypothesis-testing methodology to justify conclusions based on the evidence received.
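The estimator of Eqs. (E33)–(E34) can be exercised on synthetic data. Everything below is hypothetical: a made-up fingerprint K, flat-field frames generated from the model of Eq. (E32), and a deliberately crude denoising filter F (just the frame mean, adequate only for flat frames):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: a small zero-mean PRNU fingerprint.
K_true = rng.normal(0.0, 0.02, size=(16, 16))

# N flat-field frames following Eq. (E32): I = I0 + I0*K + noise, with I0 = 100.
frames = [100 + 100 * K_true + rng.normal(0, 1, size=K_true.shape)
          for _ in range(200)]

def denoise(img):
    """Crude stand-in for the filter F: the frame mean (flat frames only)."""
    return np.full_like(img, img.mean())

# Eq. (E33): noise residuals W_i = I_i - F(I_i).
W = [img - denoise(img) for img in frames]

# Eq. (E34): maximum-likelihood PRNU estimate.
K_hat = sum(w * img for w, img in zip(W, frames)) / sum(img ** 2 for img in frames)

# Normalized correlation between estimate and truth as a similarity score.
score = np.corrcoef(K_hat.ravel(), K_true.ravel())[0, 1]
```

With 200 frames the estimate correlates strongly with the planted fingerprint; an image from a different camera would instead score near zero, which is the separation the SLR's two score distributions capture.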

3.4 Basics of neural network

A neural network consists of artificial neurons connected in a network structure. Its structure is inspired by the human brain, and it learns from the data fed to it. It has an extensive approximation property. There are several types of neural networks: recurrent, convolutional, radial-basis-function, feedforward, and modular neural networks. A basic structure is discussed here to give insight into the architecture. A single-layer neural network is expressed mathematically as

y₁ = f(α₁₁x₁ + α₁₂x₂ + α₁₃x₃).   (E35)

The output y₁ is derived from the inputs x_j using weights α₁ⱼ, j = 1 to 3, through the function f (Figure 1).

Figure 1.

Representation of a single layer neural network without hidden layer.

3.4.1 Neural network with one hidden layer

The neural network with three inputs, one hidden layer, and one output is given by

y₁⁽²⁾ = g(α₁₁⁽¹⁾x₁ + α₁₂⁽¹⁾x₂ + α₁₃⁽¹⁾x₃)
y₂⁽²⁾ = g(α₂₁⁽¹⁾x₁ + α₂₂⁽¹⁾x₂ + α₂₃⁽¹⁾x₃)
y₃⁽²⁾ = g(α₃₁⁽¹⁾x₁ + α₃₂⁽¹⁾x₂ + α₃₃⁽¹⁾x₃)
y₁⁽³⁾ = g(α₁₁⁽²⁾y₁⁽²⁾ + α₁₂⁽²⁾y₂⁽²⁾ + α₁₃⁽²⁾y₃⁽²⁾)   (E36)

The hidden outputs y_j⁽²⁾ are derived from the inputs x_j using weights α_ij⁽¹⁾, for i, j = 1 to 3; the superscript denotes the layer. The final output y₁⁽³⁾ is then computed from the hidden-layer values. Similarly, architectures with two hidden layers can be generated (Figure 2).

Figure 2.

Visualization of the neural network with one hidden layer having three inputs and one output with one hidden layer.

Such structures are classified as single-layer feedforward networks. In multilayer feedforward networks, the inclusion of one or more hidden layers makes the network computationally more effective. Feedback networks take their output as input, closing the loop to gain improvements in the procedure. There are other patterns as well that enhance network performance.
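The forward pass of Eq. (E36) is two matrix-vector products wrapped in the activation g; the weights below are arbitrary illustrative numbers, and tanh stands in for g:

```python
import numpy as np

def forward(x, W1, W2, g=np.tanh):
    """One-hidden-layer network of Eq. (E36): the rows of W1 are the weights
    alpha_ij^(1) of the three hidden units; W2 holds alpha_1j^(2)."""
    hidden = g(W1 @ x)       # y_1^(2), y_2^(2), y_3^(2)
    return g(W2 @ hidden)    # final output y_1^(3)

x = np.array([0.5, -1.0, 2.0])            # inputs x_1, x_2, x_3
W1 = np.array([[0.1, 0.2, 0.3],
               [0.0, -0.1, 0.4],
               [0.2, 0.2, -0.3]])         # illustrative weights alpha^(1)
W2 = np.array([[0.5, -0.5, 1.0]])         # illustrative weights alpha^(2)
y = forward(x, W1, W2)
```

Training consists of adjusting the entries of W1 and W2 so that y matches the desired outputs over the training data.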

Kingston [22] quoted that Simulation experiments with a type of neural network known as a Hopfield net indicate that it may have value for the storage of tool mark patterns (including bullet striation patterns) and for the subsequent retrieval of the matching pattern using another mark by the same tool for input.

A convolutional neural network is a deep-learning algorithm that takes an image as input, assigns importance to its various aspects, and is able to differentiate between features. Its architecture is inspired by the neurons of the human brain. An image that is, say, a 3×3 matrix is written as a 9×1 vector and fed to a multilayer perceptron for classification. Digital forensics implements neural networks for better analysis [23].

3.5 Basics of probability

Forensic analysis uses probabilistic inference and probabilistic reasoning. Probability can be termed a specialized facet of logical reasoning, whereas statistics deals with the collection and summary of data and may be used in measurement. Forensics may include criminal proceedings with chemical analysis of suspicious substances and measurements of the elemental composition of glass fragments. Probability is a branch of mathematics that aims to conceptualize uncertainty and render it tractable to decision-making (Aitken-Roberts-Jackson). In the criminal-justice context, the accused is either factually guilty or factually innocent; there is no other possibility. Hence, p(Guilty, G) + p(Innocent, I) = 1. Applying the ordinary rules of arithmetic, this further implies that p(G) = 1 − p(I) and p(I) = 1 − p(G). Probabilistic formulae are also used to measure the levels of uncertainty associated with a particular estimate.

Probability is widely used in AI, since AI revolves around collecting, handling, and analysing data to reach conclusive outcomes. With the foreseen uses of AI in fields like agriculture, engineering, demography, medicine, education, and marketing, probability will remain at the core of the algorithms. An overview of a few important basic concepts is included in this section.

Statistics deals with data characterization and analysis. It involves grouping and selection by hypothesis testing. An overview is included here.

An experiment is a process of observation and measurement. Randomness is the key desired property of the experiments in question. The subsets of a sample space are events.

For an experiment with finitely many equally likely outcomes, the probability P(A) of an event A is

P(A) = (number of outcomes favourable to the occurrence of event A) / (total number of equally likely outcomes),

and it satisfies 0 ≤ P(A) ≤ 1. An impossible event has P(A) = 0, and a certain event has P(A) = 1.

Conditional probability deals with the occurrence of one event given that another event has already occurred. The probability of event A given that event B has already occurred is

P(A|B) = P(A ∩ B) / P(B),   P(B) ≠ 0.

3.6 Bayes theorem

Bayes' theorem is used in a variety of applications where conditional probability is involved. It updates the probability that a hypothesis is true, initially held without any specific evidence (the prior), once evidence is observed, and is thus used in classification.

Bayes' theorem states that if S is a sample space subdivided into C₁, C₂, …, C_k with P(Cᵢ) ≠ 0 for i = 1, 2, …, k, then for any arbitrary event A in S with P(A) ≠ 0 we have, for r = 1, 2, …, k,

P(C_r | A) = P(C_r ∩ A) / Σ_{i=1}^{k} P(Cᵢ ∩ A) = P(C_r) P(A | C_r) / Σ_{i=1}^{k} P(Cᵢ) P(A | Cᵢ).   (E37)

Although Bayes' theorem is widely used, it has a few limitations: the requirement of mutually exclusive and exhaustive hypotheses is not always achievable in practice; if the data is huge, the computation may not be feasible; in many problems, the requirement that the relevant conditional probabilities be stable over time is also not achievable; and assuming a constant likelihood ratio in the denominator is often infeasible in practical cases. Bayes' theorem helps in drawing interpretations from samples collected at the crime scene [24].
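As a toy numerical illustration of Eq. (E37), with all probabilities invented for the example, take C1 = "the trace comes from the suspect's device", C2 = "it comes from some other device", and A = "the similarity score exceeds the threshold":

```python
# Priors P(C1), P(C2) and likelihoods P(A|C1), P(A|C2) -- illustrative values.
p_C = [0.5, 0.5]
p_A_given_C = [0.95, 0.05]

# Eq. (E37): P(C1|A) = P(C1) P(A|C1) / sum_i P(Ci) P(A|Ci).
total = sum(pc * pa for pc, pa in zip(p_C, p_A_given_C))
posterior = p_C[0] * p_A_given_C[0] / total
```

Here the posterior is 0.95: with equal priors, the posterior odds simply equal the likelihood ratio 0.95/0.05 = 19.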

3.7 Probability density functions

The probability density function is widely used in AI and neural networks. A probability density function describes where a continuous random variable falls: the area under the curve over an interval is the probability that the variable takes a value in that interval.

f(x) is called a probability density function if

∫ f(x) dx = 1 and f(x) ≥ 0 for all x.   (E38)

A probability density function differs from a probability mass function, which gives the probability that a discrete random variable takes a specific value rather than falls in a range, as in the case of a probability density function.

The probability density function of an image (Rafael C. Gonzales) starts from the total pixel count

A = Σ_{g=0}^{255} m_{I_k}(g + 1),   (E39)

where m_{I_k}(g + 1) represents the number of pixels in I_k (k denotes the colour band of image I) with value g, and A denotes the number of pixels in I. The grey-level probability density function of I_k is then given by

p_{I_k}(g + 1) = m_{I_k}(g + 1) / A.   (E40)
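Eqs. (E39)–(E40) amount to a normalized histogram; the sketch below uses 0-based indexing p[g] in place of the book's 1-based m(g + 1):

```python
import numpy as np

def grey_level_pdf(channel):
    """Eq. (E39): m(g) counts the pixels of the band with value g and A is the
    pixel total; Eq. (E40): p(g) = m(g)/A is the empirical grey-level pdf."""
    m = np.bincount(channel.ravel(), minlength=256)
    A = channel.size
    return m / A

band = np.array([[0, 0, 255],
                 [255, 255, 128]], dtype=np.uint8)
p = grey_level_pdf(band)
```

Being a pdf, p sums to 1; here half the pixels are white, so p[255] = 0.5.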

The use of probability lies at the starting point of neural networks for making decisions. Casual and probabilistic decisions are made naturally by the human brain, but a machine needs to be trained using probability theory. Machine learning uses Bayes' theorem, which relates marginal and conditional probabilities. Experiment outcomes, termed events, are mutually exclusive if they cannot occur simultaneously; events are called exhaustive if their occurrence constitutes all possibilities.

Mathematically, events A and B are said to be mutually exclusive and exhaustive if

A ∩ B = ∅   and   A ∪ B = S,

where S constitutes all possible outcomes of the sample space.

Further enhancement of these approaches requires many more algorithms, which can be understood by starting with the ideas discussed above. Probability is used to understand the interpretation of forensics in law when reaching legal decisions [25].


4. Conclusion

Artificial intelligence is built along the lines of human cognition, i.e., learning and retrieving. A network is expected to recognize a previously learned pattern even when some noise is involved. Associative memory is desirable in building multimedia databases. Here, optimization plays a significant role, as it deals with finding the solution that satisfies a given set of constraints. The purpose of AI is to create an admissible model of the human brain: the idea is to produce computational structures similar to neurons or neuron systems, and connections between them, to form neural networks. The applications of AI are vast, and this chapter attempts to sensitize the reader to a few basic prerequisites for starting to explore AI using mathematics, towards advancements in forensic science. The topics involved may be studied in detail to implement the procedures. This will pave the way to understanding and exploring new search methods.

A variety of search mechanisms are employed for such problems, including blind search, searching in extent, and heuristic search. A basic introduction to graph theory and probability, with examples of implementation in the games of chess or Go, motivates the reader to gain insight into the applications: for example, the study of Bayesian networks as directed acyclic graphs, with a probability distribution associated with each node defining the mutual relationships between nodes and edges.

Modelling data with the algorithms mentioned could use MATLAB techniques. For example, if it is known how different parameters influence the energy load, one might use statistics or curve-fitting tools to model the data with linear or nonlinear regression. If the number of variables is large, the underlying system particularly complex, or the governing equations unknown, then machine-learning techniques such as decision trees or neural networks could be used, e.g. through the Neural Fitting app (Filion).

Further, deeper conceptualization aims to modify existing research for the benefit of society as a whole. Future research in IoT forensic analysis has wide scope, as different algorithms could be modified to incorporate data from relevant inputs, leading to the capture of the limitations of the prototype generated.

To conclude, the prerequisites in this chapter should enable the reader to work with the underlying concepts in depth and advance research in the fields of image processing and forensic science. In future study, researchers may undertake the improvement of forensic image-processing algorithms for enhanced AI development, which is not possible without a basic understanding of the key procedures discussed. The topics may also be of interest to readers beyond forensic science, as they are applied in various engineering fields [26, 27, 28, 29, 30, 31, 32, 33].

References

  1. Garrido A. Mathematics and artificial intelligence, two branches of the same tree. Procedia Social and Behavioral Sciences. 2010;2(2):1133-1136
  2. Rigano C. Using Artificial Intelligence to Address Criminal Justice Needs. National Institute of Justice; 2019
  3. Atiyah J. Image forensic and analytics using machine learning. International Journal of Computing and Business Research. 2022;12:69-93
  4. Thurzo A et al. Use of advanced artificial intelligence in forensic medicine, forensic anthropology and clinical anatomy. Healthcare (Basel, Switzerland). 2021;9(11):1545
  5. Iyer SP. Likelihood ratio as weight of forensic evidence: A closer look. Journal of Research of the National Institute of Standards and Technology. 2017;122
  6. Deng H. Mathematical Approaches to Digital Color Image Denoising. Thesis; 2009
  7. Malgouyres F. A noise selection approach of image restoration. 2001:34-41
  8. Osher S. Using geometry and iterated refinement for inverse problems. Total Variation Based Image Restoration. 2004:4-13
  9. Bruchkstein ML. On Gabor contribution to image enhancement. 1994;27:1-8
  10. Morel LA-L-M. Image selective smoothing and edge detection by nonlinear diffusion (II). SIAM Journal of Numerical Analysis. 1992;29:845-866
  11. Malik PP. Scale space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1990;12:629-639
  12. Zhou YW. A total variation wavelet algorithm for medical image denoising. The International Journal on Biomedical Imaging. 2006. DOI: 10.1155/IJBI/2006/89095
  13. Takeda H. Kernel regression for image processing and reconstruction. IEEE Transactions on Image Processing. 2007;16(2)
  14. David HA. Order Statistics. 3rd ed. Toronto: Wiley; 1981
  15. Mather PM. Computer Processing of Remotely Sensed Images: An Introduction. West Sussex: John Wiley & Sons Ltd; 2004
  16. Zhang J et al. Characterization of protein alterations in damaged axons in the brainstem following traumatic brain injury using Fourier transform infrared microspectroscopy: A preliminary study. Journal of Forensic Sciences. 2015:759-763
  17. Arnold MS. Techniques and Applications of Digital Watermarking and Content Protection. Artech House; 2003
  18. Barni MP. Watermark embedding: Hiding a signal within a cover image. IEEE Communications. 2001;39(8):102-108
  19. Kessler GC. An overview of steganography for the computer forensics examiner. Forensic Science Communications. 2015
  20. Hosmer CA. Discovering covert digital evidence. In: Digital Forensic Research Workshop (DFRWS). Proceedings DFRWS; 2003
  21. Reinders S. Statistical Methods for Digital Image Forensics: Algorithm Mismatch for Blind Spatial. Iowa State University; 2020
  22. Kingston C. Neural network in forensic science. Journal of Forensic Sciences. 1992
  23. Mohammad R. A neural network based digital forensics classification. In: 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA). 2018. pp. 1-7
  24. Pete Blair J. Evidence in context: Bayes theorem and investigations. 2010
  25. Taroni F. Uncertainty in forensic science: Experts, probabilities and Bayes' theorem. Italian Journal of Applied Statistics. 2015:129-144
  26. Jegede OA. Neural networks and its application in engineering. In: Proceedings of Informing Science & IT Education Conference (InSITE). 2009
  27. El-Sharkawi MA. Neural networks and their application to power engineering. Control and Dynamic Systems. 1991
  28. Singh Y. Application of neural networks in software engineering: A review. Communications in Computer and Information Science. 2009:128-137
  29. Aitken-Roberts J. Fundamentals of Probability and Statistical Evidence. Royal Statistical Society, Practitioner Guide No. 1. 2022. pp. 1-122
  30. Elad M. On the Origin of the Bilateral Filter and Ways to Improve It. 2022. DOI: 10.1109/TIP.2002.801126
  31. Guichard F. Image Analysis and P.D.E.'s. Available from: https://archive.siam.org/meetings/an00/talks_online/MorelGuichard.pdf
  32. Filion SD. Data-driven insights with MATLAB analytics: An energy load forecasting case study. MathWorks. 2012
  33. Rafael C. Gonzales. Digital Image Processing. 2022
