Open access peer-reviewed chapter

Algorithm for Calculating Coordinates of Images in an Optoelectronic Device with a Matrix Photodetector

Written By

Viktor Pachkov

Submitted: 05 January 2023 Reviewed: 09 January 2023 Published: 11 April 2023

DOI: 10.5772/intechopen.109949

From the Edited Volume

Optoelectronics - Recent Advances

Touseef Para

Abstract

The algorithm described below simulates the operation of an optoelectronic angle meter with a CCD matrix. The algorithm makes it possible to generate realizations of the useful signals from point emitters, of external interference and internal noise, and of their additive mixture at the output of the device. In addition, it makes it possible to solve the problem of extracting the useful signal from a mixture with a constant background and with external and internal noise, to determine the coordinates of the emitter images, and to evaluate the accuracy of determining these coordinates.

Keywords

  • CCD
  • simulation
  • optical electronic
  • simulation flowchart
  • background
  • noise

1. Introduction

Autonomous optical navigation and astro-orientation of spacecraft is one possible way of improving spacecraft reliability. Self-determination of a spacecraft is inextricably linked to the need to determine its exact orientation, which can be carried out using onboard optical devices based on multi-pixel converters. This work consists of an introduction, six chapters, a conclusion, and a list of references. The first chapter is devoted to the review and analysis of optical electronic instruments (OEIs) based on the CCD. The second and third chapters discuss the features of signal conversion and processing in an OEI with a CCD and the development of quasi-optimal methods for estimating image coordinates. The fourth chapter considers methods of estimating the coordinates of moving images of point emitters. The fifth chapter is devoted to the method of modeling angle-measuring OEIs. The sixth chapter shows the simulation results. The conclusion summarizes the main results of the work. The modeling algorithms are attached. This work is intended for engineers, researchers, and students.

2. Algorithm for calculating coordinates of images in an optoelectronic device with a matrix photodetector

2.1 Mathematical setting of the problem

The algorithm requires the following source data.

  • Characteristics of emitters: stellar magnitudes and spectral classes for stars; radiant intensity and its spectrum, as well as range, for emitters illuminated by the sun. The number of point sources and their angular coordinates must also be specified;

  • Characteristics of the optical system: angular field, diameter of the entrance pupil, focal length, and point spread function (PSF);

  • CCD matrix characteristics: format, pixel sizes, organization (frame, line-frame), operation mode (frame transfer or full-frame illumination), quantum efficiency (spectral sensitivity), aperture characteristic (pixel sensitivity), charge transfer inefficiency, dark current density, output noise, dark current nonuniformity (“geometric noise”), and potential well depth;

  • Characteristics of the output circuit: bit depth of the analog-to-digital converter (ADC).

Then, based on the initial data, the realization of the signal at the output of the CCD is calculated according to formula (26), the useful signal is extracted according to formulae (27)-(29), its coordinates are determined using expressions (31)-(39), and the a posteriori probability of detecting a signal in a frame is calculated according to formulae (40)-(42).

2.2 Algorithm description

The characteristics of an optoelectronic device with a CCD matrix can be investigated by simulation modeling. To this end, the processes taking place in such a device can be presented in the form of a flowchart shown in Figure 1. We will consider the simulation flowchart [1, 2, 3].

Figure 1.

Simulation flowchart.

Unit 1 simulates the background-target situation in object space. Here, the number of point emitters “n” is set; the emitters are represented as spatial δ-functions with coordinates αi, βi, and their radiation characteristics are the stellar magnitude mv for stars and the radiant intensity I for mobile objects, including those irradiated by the sun.

The background is modeled based on initial data on its brightness at each point. The background radiant intensity at a point is defined as:

$$I_\Phi = \frac{B\,\omega_1\,\omega_2\,L^2}{N\,M} \tag{1}$$

where:

ω1 is the angular field along the “x” axis, rad;

ω2 is the angular field along the “y” axis, rad;

M is the number of matrix pixels along the “x” axis;

N is the number of matrix pixels along the “y” axis;

L is the range to object space (range to target), m; and

B is the brightness of the background at this point.

Calculation sequence.

  1. Calculation of the photodetector diagonal d

$$d^2 = (M\Delta x)^2 + (N\Delta y)^2 \tag{2}$$

  2. Focal length of the lens

$$f = \frac{d}{2\tan\omega}, \tag{3}$$

where ω is the angular field of the lens, rad.

  3. Calculation of ω1 and ω2

$$\omega_1 = 2\arccos\frac{f}{\sqrt{(M\Delta x/2)^2 + f^2}},\qquad \omega_2 = 2\arccos\frac{f}{\sqrt{(N\Delta y/2)^2 + f^2}} \tag{4}$$

Then, the background is calculated by the formula (1).
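
As an illustration of the block 1 calculations, a minimal numpy sketch of formulas (1)-(4) is given below; all numeric parameter values are assumed for illustration and are not taken from the chapter.

```python
import numpy as np

# Geometry of block 1, Eqs. (1)-(4); the parameter values are assumed.
M, N = 512, 512              # matrix format, pixels
dx, dy = 16e-6, 16e-6        # pixel sizes, m
omega = np.deg2rad(4.0)      # angular field of the lens used in Eq. (3), rad
B = 1e-6                     # background brightness at the point
L = 4.0e5                    # range to object space, m

d = np.hypot(M * dx, N * dy)                  # photodetector diagonal, Eq. (2)
f = d / (2.0 * np.tan(omega))                 # focal length, Eq. (3)
omega1 = 2.0 * np.arccos(f / np.hypot(M * dx / 2.0, f))   # field along "x", Eq. (4)
omega2 = 2.0 * np.arccos(f / np.hypot(N * dy / 2.0, f))   # field along "y", Eq. (4)
I_bg = B * omega1 * omega2 * L**2 / (N * M)   # background radiant intensity, Eq. (1)
```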

In block 2, the signal from the point objects and the signal from the background in the plane of the photodetector matrix are simulated. We simulate only those areas of the matrix where the useful signal is located. Assume that the region of the matrix where the useful signal and the background are generated is 16 × 16 pixels. This is based on the experience of developing an OEI with a CCD matrix, the ОЗД-1 optical star sensor, at the Institute of Space Research of the Russian Academy of Sciences. There must be “n” such submatrices. We determine the coordinates of the pixel where the useful signal from the nth emitter falls

$$x_n = f\tan\alpha_n;\qquad y_n = f\tan\beta_n \tag{5}$$

or in number of pixels

$$M_1 = x_n/\Delta x;\qquad N_1 = y_n/\Delta y$$

M1, N1: emitter coordinates in matrix pixels, counted from the matrix center.

If the origin of coordinates is taken as the upper left corner of the matrix, then we get

$$M_2 = M/2 + M_1,\qquad N_2 = N/2 - N_1 \tag{6}$$

Thus, the final formulas for calculating the coordinates of the nth emitter in the plane of the matrix take the form.

$$M_2 = M/2 + f\tan\alpha_n/\Delta x,\qquad N_2 = N/2 - f\tan\beta_n/\Delta y \tag{7}$$

The main beam from the point source is projected to this point.
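
A short sketch of formulas (5)-(7) is shown below; the helper function name is introduced here only for illustration.

```python
import numpy as np

def emitter_pixel(alpha_n, beta_n, f, dx, dy, M, N):
    """Pixel (column M2, row N2) hit by the main beam of the n-th emitter,
    counted from the upper-left corner of the matrix, Eqs. (5)-(7)."""
    x_n = f * np.tan(alpha_n)       # Eq. (5): image-plane coordinates, m
    y_n = f * np.tan(beta_n)
    M2 = M / 2 + x_n / dx           # Eq. (7): column index
    N2 = N / 2 - y_n / dy           # Eq. (7): row index
    return M2, N2
```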

Now the submatrix occupied by the image of the point source must be formed. As mentioned above, such a submatrix occupies 16 × 16 pixels. The value of the useful signal from a source of radiant intensity I in the pixels is defined as:

$$Q[i,j] = \frac{\pi D^2}{4L^2}\,\tau\,\chi[i,j]\cos^4\omega_n\,\frac{I(\lambda)\,\eta(\lambda_{cp})\,\Delta\lambda\,\lambda_{cp}\,t}{hC}, \tag{8}$$

where

D: diameter of the entrance pupil of the optical system, m;

τ: lens transmission, rel. unit;

χ[i, j]: a factor that takes into account the share of energy falling on one pixel, rel. units;

I(λ): source radiant intensity, W/(sr·μm);

η(λcp): quantum efficiency, rel. unit;

Δλ: spectral range, m;

t: accumulation time, s;

h: Planck constant, J·s; and

C: speed of light, m/s.

Signals from stars are calculated by formulas:

$$\text{for G0:}\quad Q[i,j] = 0.25\,\pi\,t\,D^2\,\tau\,\chi[i,j]\cos^4\omega_n\cdot 10^{\,4.423 - 0.4m_v} \tag{9}$$
$$\text{for B3V:}\quad Q[i,j] = 0.25\,\pi\,t\,D^2\,\tau\,\chi[i,j]\cos^4\omega_n\cdot 10^{\,4.411 - 0.4m_v}$$

Calculation sequence

  1. Calculation of coefficients χ [i, j]

    The image of the point source is projected onto a 4 × 4 pixel submatrix. The main beam hits the point with coordinates M2, N2, which in the coordinates of the submatrix given in Figure 2 falls in one of the pixels: i = 2, j = 2; i = 3, j = 2; i = 2, j = 3; or i = 3, j = 3.

    The total signal from the PSF image, equal to IΣ, is determined as the sum of the signals from the polychromatic light rays of the entire PSF along the “x” and “y” axes.

    $$I_\Sigma = \sum_y\sum_x \Delta I,$$

    where ΔΙ is the number of rays (or the proportion of illumination) that falls on the elementary resolution cell of the PSF (the cell is 1.5 ÷ 3 μm, sometimes 5 μm).

    The share of illumination per pixel will be

    $$I_1[i,j] = \iint_{\Delta x\,\Delta y} I(x,y)\,dx\,dy = \sum_{\Delta x\,\Delta y}\Delta I,$$

    thus

    $$\chi[i,j] = \frac{\sum_{\Delta x\,\Delta y}\Delta I}{\sum_{x,y}\Delta I} = \frac{I_1[i,j]}{\sum_{x,y}\Delta I} \tag{10}$$

  2. Calculation of cos ωn

    $$\cos\omega_n = \frac{f}{\sqrt{x_n^2 + y_n^2 + f^2}} \tag{11}$$

  3. Calculation of the useful signal value in the 4 × 4 pixel submatrix from an emitter with radiant intensity I, taking into account the lens transmission (see the code sketch after Eq. (13) below)

    $$Q[i,j] = \frac{\pi D^2\,\tau\,\chi[i,j]\cos^4\omega_n\,I\,\eta\,t\,\Delta\lambda\,\lambda}{4L^2 hC} \tag{12}$$

  4. Background calculation

Figure 2.

Radiation distribution by submatrix.

The background is calculated over the 16 × 16 pixel submatrix

$$Q[k,l] = \frac{\pi D^2\,\tau\,\cos^4\omega_{kl}\,\eta\,t\,\Delta\lambda\,\lambda}{4L^2hC}\sum_{p,q=-1}^{1}\chi_1[k+p,\,l+q]\,I[k+p,\,l+q] = \frac{\pi D^2\,\tau\,\cos^4\omega_{kl}\,\eta\,t\,\Delta\lambda\,\lambda}{4L^2hC}\big(\chi_1[k{-}1,l{-}1]\,I[k{-}1,l{-}1] + \chi_1[k,l{-}1]\,I[k,l{-}1] + \chi_1[k{+}1,l{-}1]\,I[k{+}1,l{-}1] + \chi_1[k{-}1,l]\,I[k{-}1,l] + \chi_1[k,l]\,I[k,l] + \chi_1[k{+}1,l]\,I[k{+}1,l] + \chi_1[k{-}1,l{+}1]\,I[k{-}1,l{+}1] + \chi_1[k,l{+}1]\,I[k,l{+}1] + \chi_1[k{+}1,l{+}1]\,I[k{+}1,l{+}1]\big) \tag{13}$$
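
The sketch below shows one possible implementation of Eqs. (10)-(12): the energy-share coefficients χ[i, j] are obtained by summing a sampled PSF over pixel-sized cells, after which the useful signal in the 4 × 4 submatrix is computed. The PSF array, the function names, and the way the PSF grid is aligned with the pixels are assumptions made for illustration.

```python
import numpy as np

H_PLANCK, C_LIGHT = 6.626e-34, 3.0e8   # Planck constant, J*s; speed of light, m/s

def chi_coefficients(psf, cell, dx, dy):
    """chi[i, j] of Eq. (10): share of the total PSF energy falling on each
    pixel of a 4x4 submatrix.  `psf` is a 2-D array of ray counts dI sampled
    on a fine grid with cell size `cell` (m), assumed to cover exactly 4x4 pixels."""
    nx, ny = int(round(dx / cell)), int(round(dy / cell))
    total = psf.sum()
    chi = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            chi[i, j] = psf[i * ny:(i + 1) * ny, j * nx:(j + 1) * nx].sum() / total
    return chi

def useful_signal(chi, D, tau, omega_n, I, eta, t, dlam, lam, L):
    """Charge per pixel from an emitter of radiant intensity I, Eq. (12)."""
    return (np.pi * D**2 * tau * chi * np.cos(omega_n)**4
            * I * eta * t * dlam * lam / (4.0 * L**2 * H_PLANCK * C_LIGHT))
```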

Unit 3 performs the procedure of generating the noise of the visible-range CCD matrix in the accumulation mode. The mathematical expectation of the charge in a pixel is defined as [3, 4]

$$\bar q[k,l] = \frac{\Delta x\,\Delta y\,j_d\,t}{e}\cdot 10^{-2} \tag{14}$$

where jd: average dark current density, A/cm2; and

e: charge of the electron, C.

The MSE of the “geometric noise” will be

$$\sigma_\Gamma = \frac{H}{200}\,\bar q \tag{15}$$

where H is dark current nonuniformity, %.

The realization of the dark current, taking into account the “geometric noise,” is defined as:

$$q_\Gamma[k,l] = \bar q + \sigma_\Gamma\,\xi \tag{16}$$

where ξ are random numbers (here and below) with a normal distribution density with parameters (0, 1).

The MSE of dark charge fluctuations in the pixel will be calculated by the formula

$$\sigma_{\mathrm{fl}} = \sqrt{q_\Gamma} \tag{17}$$

Then, the realization of the dark charge in the pixel will be defined as:

$$q[k,l] = q_\Gamma + \sqrt{q_\Gamma}\,\xi_1 \tag{18}$$

Thus, a 16 × 16 pixel noise array in the signal accumulation mode is formed. The realization of the background signal is determined similarly:

$$Q_{1\Phi}[k,l] = Q_\Phi[k,l] + \sqrt{Q_\Phi}\,\xi_2 \tag{19}$$
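
A hedged sketch of block 3 (Eqs. (14)-(19)) follows; it reproduces the dark-charge, “geometric”-noise, and shot-noise realizations as reconstructed above, with illustrative parameter meanings noted in the docstrings.

```python
import numpy as np

rng = np.random.default_rng()

def dark_charge(dx_, dy_, j_d, t, H, shape=(16, 16)):
    """Dark-charge realization q[k, l], Eqs. (14)-(18).
    dx_, dy_: pixel sizes; j_d: dark current density, A/cm^2;
    t: accumulation time, s; H: dark-current nonuniformity, %."""
    e = 1.602e-19
    q_mean = dx_ * dy_ * j_d * t * 1e-2 / e                 # Eq. (14), as reconstructed
    sigma_g = H / 200.0 * q_mean                            # "geometric" noise MSE, Eq. (15)
    q_g = q_mean + sigma_g * rng.standard_normal(shape)     # Eq. (16)
    q_g = np.clip(q_g, 0.0, None)                           # keep the charge non-negative
    return q_g + np.sqrt(q_g) * rng.standard_normal(shape)  # Eqs. (17)-(18)

def background_realization(Q_bg):
    """Background realization with shot noise, Eq. (19)."""
    return Q_bg + np.sqrt(Q_bg) * rng.standard_normal(np.shape(Q_bg))
```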

In block 4, a signal mixture with noise and background is calculated.

Formulas for calculating the noise and background realizations were given above. The realization of the useful signal is calculated similarly.

$$Q_1[i,j] = Q[i,j] + \sqrt{Q[i,j]}\,\xi_3 \tag{20}$$

where Q [i, j] is a signal from a star or a point emitter.

Then, the realization of the mixture of signal, noise, and background in each pixel of the 16 × 16 matrix, part of which is occupied by the signal, is calculated as an additive mixture.

$$Q_2[k,l] = Q_{1\Phi}[k,l] + q_1[k,l] + Q_1[i,j] \tag{21}$$

This completes the signal accumulation process in each of the “n” 16 × 16 pixel submatrices. Now, it is necessary to calculate the realization of the signal at the output of the matrix and when writing it to the RAM.

In block 5, the signal realization at the output of the matrix is calculated.

The value of the signal when transmitting charge to the CCD output, taking into account losses, will be determined as:

$$Q_3[k,l] = \big(1 - 3(M_3 + N_3)\,\varepsilon\big)\,Q_2[k,l], \tag{22}$$

where M3 and N3 correspond to indices k and l, respectively, but in the coordinates (column and row numbers) of the entire matrix, and ε is the charge transfer inefficiency.

When reading a signal, noise is added to it, the MSE of which is determined by the value

$$\sigma_{\mathrm{read}} = \frac{2}{e}\sqrt{\frac{8}{3}\cdot\frac{KT}{g_m}\cdot f_c}\,\big(C_2 + C_3\big), \tag{23}$$

where

e: charge of the electron,

K: Boltzmann constant, J/k.

T: absolute temperature, deg. K,

gm: transistor slope, mA/V,

fc: clock frequency, Hz, and

C2 + C3 = 1÷5 pf.

Then, the realization of the signal at the output of the readout circuit (before the ADC) is defined as:

$$Q_4[k,l] = Q_3[k,l] + \sigma_{\mathrm{read}}\,\xi_4 \tag{24}$$

Now, it is necessary to determine the value of one ADC count in electrons. It can be calculated as:

$$D_N = Q_m/2^b \tag{25}$$

where Qm: the maximum charge that can be accumulated in the potential well (for a matrix with a buried channel, ∼130,000 e¯), and

b: the number of ADC bits.

Then, the value of the signal after digitization will be determined as:

$$Q_5[k,l] = Q_4[k,l]/D_N, \tag{26}$$

wherein Q5 must be written as an integer.

If Q5 [k, l] ≥ 2b, you must assign Q5 [k, l] = 2b (saturation mode). Thus, “n” digitized 16 × 16 pixel arrays to be processed in subsequent blocks are formed.
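
The sketch below strings together Eqs. (22)-(26) for block 5: transfer losses, readout noise, and ADC quantization with saturation. The default values of ε, σ_read, Q_m, and b are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def ccd_output(Q2, M3, N3, eps=1e-5, sigma_read=30.0, Q_max=130_000, b=12):
    """Q2: 16x16 charge array; M3, N3: column/row indices of each pixel in the
    full matrix (number of transfers to the output register)."""
    Q3 = (1.0 - 3.0 * (M3 + N3) * eps) * Q2                  # transfer losses, Eq. (22)
    Q4 = Q3 + sigma_read * rng.standard_normal(Q2.shape)     # readout noise, Eq. (24)
    DN = Q_max / 2**b                                        # value of one ADC count, Eq. (25)
    Q5 = np.floor(Q4 / DN).astype(int)                       # digitization, Eq. (26)
    return np.minimum(Q5, 2**b)                              # saturation mode
```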

Block 6 solves the task of detecting the useful signal. To do this, each of the “n” matrices is viewed line by line. First, the mathematical expectation of the constant background on a line in the vicinity of the useful signal, and the MSE of this value, are estimated at the rate of arrival of the samples and taking into account their accumulation, that is,

$$EV = \frac{1}{16}\sum_{i=1}^{16} Q_4[k,l] \tag{27}$$
$$MSE = \sqrt{\frac{1}{15}\sum_{i=1}^{16}\big(EV - Q_4[k,l]\big)^2} \tag{28}$$

Then, the signal is detected in the following lines by the condition

$$Q_4[k,l] - EV \ge 3\,MSE \tag{29}$$

If the condition is met, the corresponding pixel (or pixels of the line) is registered; if not, then after passing the line, the expected value (EV) and MSE are refined and condition (29) is checked again for the next line. In this way, all lines are viewed, and all pixel numbers where condition (29) is met are recorded, as well as the pixel where the signal is maximum.
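
One possible reading of this row-by-row procedure is sketched below: the background statistics (27)-(28) are taken from the current signal-free row and the test (29) is applied to the following rows, refreshing EV and MSE whenever a row contains no detections.

```python
import numpy as np

def detect_rows(Q4, k=3.0):
    """Return (row, column) pixels of a 16x16 sub-array satisfying Eq. (29)."""
    ev = Q4[0].mean()                                   # Eq. (27) on the first row
    mse = np.sqrt(((ev - Q4[0]) ** 2).sum() / 15.0)     # Eq. (28)
    hits = []
    for r in range(1, Q4.shape[0]):
        cols = np.flatnonzero(Q4[r] - ev >= k * mse)    # detection condition, Eq. (29)
        if cols.size:
            hits.extend((r, c) for c in cols)
        else:                                           # no signal: refine EV and MSE
            ev = Q4[r].mean()
            mse = np.sqrt(((ev - Q4[r]) ** 2).sum() / 15.0)
    return hits
```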

The next step is to determine the 4 × 4 pixel subarray occupied by the useful signal.

Place the submatrix on the maximum signal Qm as shown in Figure 3 and find the total signal over all submatrix pixels.

Figure 3.

Charge distribution in submatrix.

$$Q_{\Sigma} = \sum_{i=1}^{4}\sum_{j=1}^{4} Q_4[i,j] \tag{30}$$

Then, we move the window one pixel to the right, again calculate the sum according to formula (30), and compare it with the previous one. If it is larger, we remember it; if it is smaller, we keep the first one. From this position, the window is shifted one pixel down, the sum is again found according to formula (30) and compared with the recorded one. If the new sum is larger, we remember it together with the new position of the window. The last position of the window is a shift by one pixel to the left; the sum is again calculated and compared with the recorded one, and if the resulting sum is larger, it is recorded; if not, the previous largest sum remains. This completes the signal selection (window setting). Now the “clean” signal must be extracted. To do this, the mathematical expectation of the constant background is subtracted from the signal values in the 4 × 4 pixel submatrix. Thus, a useful signal with zero mathematical expectation is extracted in each formed 16 × 16 pixel subarray and is supplied to the unit for estimating the coordinates of the obtained images.
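
A sketch of this window-setting step is given below: starting from a 4 × 4 window containing the brightest pixel, the right, down, and left shifts are tried in turn, the position with the largest sum (30) is kept, and the constant background EV is then subtracted to obtain the “clean” signal. The function signature is an assumption for illustration.

```python
import numpy as np

def select_signal_window(Q4, r0, c0, ev):
    """Q4: 16x16 sub-array; (r0, c0): upper-left corner of the starting 4x4
    window; ev: constant-background estimate.  Returns the zero-mean signal."""
    def window_sum(r, c):                      # Eq. (30)
        return Q4[r:r + 4, c:c + 4].sum()

    best = (r0, c0)
    for dr, dc in ((0, 1), (1, 0), (0, -1)):   # shift right, then down, then left
        r, c = best[0] + dr, best[1] + dc
        if 0 <= r <= Q4.shape[0] - 4 and 0 <= c <= Q4.shape[1] - 4 \
                and window_sum(r, c) > window_sum(*best):
            best = (r, c)
    r, c = best
    return Q4[r:r + 4, c:c + 4] - ev           # "clean" signal with zero expectation
```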

The following quasi-optimal methods are used to estimate the coordinates of the images in block 7.

  1. Estimation of image coordinates by the energy center (the “weighting” method).

  2. Modified “weighting” method.

  3. Finite difference method.

  4. Least squares estimation.

  5. An iterative method for estimating the mismatch based on a Kalman filter.

  6. Nine-pixel algorithm.

Methods 1 ÷ 5 assume the formation of one-dimensional rulers of samples along the “x” and “y” axes. Method 6 involves working with the original submatrix without converting it into one-dimensional signals. One-dimensional rulers are formed from the submatrix as shown in Figure 4. Then, the expressions for estimating the image coordinate along the “x” axis (and similarly along “y”) using methods 1 ÷ 4 have the form [3, 4].

Figure 4.

Lines of readings by axes.

$$\hat{x}_m = \frac{\Delta x}{2}\,\frac{-3Q_1 - Q_2 + Q_3 + 3Q_4}{Q_1 + Q_2 + Q_3 + Q_4} \tag{31}$$
$$\hat{x}_m = \frac{\Delta x}{2}\,\frac{-3Q_1^2 - Q_2^2 + Q_3^2 + 3Q_4^2}{Q_1^2 + Q_2^2 + Q_3^2 + Q_4^2} \tag{32}$$

Assuming for definiteness that the signal Q23 is maximum (although the maximum may fall on another pixel), the finite difference method has the form

$$\text{a)}\ \hat{x}_m = \Delta x\,\frac{Q_2 - Q_3}{Q_2 - 2Q_3 + Q_4};\qquad \text{b)}\ \hat{x}_m = \Delta x\,\frac{Q_2 - Q_4}{Q_2 - 2Q_3 + Q_4};\qquad \text{c)}\ \hat{x}_m = \Delta x\,\frac{Q_3 - Q_4}{Q_2 - 2Q_3 + Q_4} \tag{33}$$

where the expressions (a)–(c) are derived from the interpolation formulas of Newton, Newton-Bessel, and Newton-Stirling.

$$\hat{x}_m = \frac{3\Delta x}{2}\,\arctan\frac{-Q_1 - Q_2 + Q_3 + Q_4}{-Q_1 + Q_2 + Q_3 - Q_4} \tag{34}$$
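
The four one-dimensional estimators (31)-(34), as reconstructed above, are collected in the sketch below; Q is the 4-sample ruler along one axis and dx is the pixel size. The prefactor of Eq. (34) is taken as printed; an equivalent form with the factor 2Δx/π appears later in the text.

```python
import numpy as np

def weighting(Q, dx):                 # energy-centre ("weighting") method, Eq. (31)
    Q1, Q2, Q3, Q4 = Q
    return dx / 2.0 * (-3*Q1 - Q2 + Q3 + 3*Q4) / (Q1 + Q2 + Q3 + Q4)

def weighting_modified(Q, dx):        # modified "weighting" method, Eq. (32)
    Q1, Q2, Q3, Q4 = (q**2 for q in Q)
    return dx / 2.0 * (-3*Q1 - Q2 + Q3 + 3*Q4) / (Q1 + Q2 + Q3 + Q4)

def finite_difference(Q, dx):         # Eq. (33a), Newton interpolation form
    Q1, Q2, Q3, Q4 = Q
    return dx * (Q2 - Q3) / (Q2 - 2*Q3 + Q4)

def harmonic_fit(Q, dx):              # Eq. (34), truncated-Fourier (arctan) form
    Q1, Q2, Q3, Q4 = Q
    return 3*dx/2.0 * np.arctan((-Q1 - Q2 + Q3 + Q4) / (-Q1 + Q2 + Q3 - Q4))
```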

The mismatch between the reference function and the x-axis ruler formed from the image can be determined as follows (the algorithm for calculating the reference function will be given below). For this, an analytical description of the distribution of samples over the ruler in the form of a truncated Fourier series can be used.

$$Q(x) = a_0 + a_1\sin\frac{\pi x}{2\Delta x} + a_2\cos\frac{\pi x}{2\Delta x},$$

where

$$a_0 = \frac{1}{4}\big(Q(x_1)+Q(x_2)+Q(x_3)+Q(x_4)\big);\quad a_1 = \frac{\sqrt{2}}{4}\big({-}Q(x_1)-Q(x_2)+Q(x_3)+Q(x_4)\big);\quad a_2 = \frac{\sqrt{2}}{4}\big({-}Q(x_1)+Q(x_2)+Q(x_3)-Q(x_4)\big)$$

Such a description makes it possible to normalize the reference function and the generated ruler, as well as to determine the required mismatches at the points x1 = −1.5Δx, x2 = −0.5Δx, x3 = 0.5Δx, x4 = 1.5Δx. Consider the normalization operation first. To do this, the position and value of the maximum of Q(x) must be found. In accordance with (34), the maximum position is defined as:

$$\hat{x}_m = \frac{2\Delta x}{\pi}\arctan\frac{-Q(x_1)-Q(x_2)+Q(x_3)+Q(x_4)}{-Q(x_1)+Q(x_2)+Q(x_3)-Q(x_4)},$$

thus

$$Q_m = a_0 + a_1\sin\frac{\pi\hat{x}_m}{2\Delta x} + a_2\cos\frac{\pi\hat{x}_m}{2\Delta x},$$

and the normalized values of the generated reference function ruler are defined as:

$$Q^H(x_1) = \frac{Q(x_1)}{Q_m},\ \dots,\ Q^H(x_4) = \frac{Q(x_4)}{Q_m};\qquad F_1^H = \frac{F_1}{F_m},\ \dots,\ F_4^H = \frac{F_4}{F_m}$$

Further, for the newly formed rulers Q^H(x_1), …, Q^H(x_4) and F_1^H, …, F_4^H, where F_i is a sample of the reference function, new coefficients a_0^H, a_1^H, a_2^H must be defined for both the reference function and the signal realization being processed. As a result of these procedures, we obtain the mismatches δx_1 ÷ δx_4, which can be defined as:

$$\delta x_1 = \frac{\Delta F_1}{\left.\dfrac{\partial F}{\partial x}\right|_{x=-1.5\Delta x}},\ \dots,\ \delta x_4 = \frac{\Delta F_4}{\left.\dfrac{\partial F}{\partial x}\right|_{x=1.5\Delta x}},$$

where

$$\Delta F_1 = F_1^H - Q^H(x_1),\ \dots,\ \Delta F_4 = F_4^H - Q^H(x_4).$$

Derivatives in the denominator are defined as:

$$\frac{\partial F}{\partial x} = a_1\frac{\pi}{2\Delta x}\cos\frac{\pi x}{2\Delta x} - a_2\frac{\pi}{2\Delta x}\sin\frac{\pi x}{2\Delta x}$$

and will have values at the appropriate points

$$\left.\frac{\partial F}{\partial x}\right|_{x=-1.5\Delta x} = \frac{\pi}{2\Delta x}\Big({-}a_1\frac{\sqrt{2}}{2} + a_2\frac{\sqrt{2}}{2}\Big) = \frac{\pi}{4\Delta x}\big(F_2^H - F_4^H\big);\qquad \left.\frac{\partial F}{\partial x}\right|_{x=-0.5\Delta x} = \frac{\pi}{4\Delta x}\big(F_3^H - F_1^H\big);$$
$$\left.\frac{\partial F}{\partial x}\right|_{x=0.5\Delta x} = -\left.\frac{\partial F}{\partial x}\right|_{x=-1.5\Delta x};\qquad \left.\frac{\partial F}{\partial x}\right|_{x=1.5\Delta x} = -\left.\frac{\partial F}{\partial x}\right|_{x=-0.5\Delta x}$$

Now the obtained series of values must be processed in order to obtain the estimate δx̂. This can be done using a Kalman filter, which in our case has the form [3]

$$P_{k+1} = \tilde{P}_k;\quad B_k = \frac{P_k}{P_k + \Delta^2};\quad \tilde{P}_k = (1 - B_k)\,P_k;\quad \widehat{\delta x}_k = \widehat{\delta x}_{k-1} + B_k\big(\delta x_k - \widehat{\delta x}_{k-1}\big) \tag{35}$$

where Δ² is the variance of the systematic (or methodical) error of determining δx, which in our case does not exceed (0.05Δx)²; and

P* is, in the general case, a matrix of variances; in our case, it is the variance of the random error of “measuring” the value δx.

In a real device, it is determined by both external interference and internal noise. Since we are now considering the system without interference, the random error is determined only by the errors of calculating and rounding the signal values. In (35), P* is a nominal value and is defined as:

$$P_k^* = \left(\frac{\sigma_\Phi}{Q_m}\left(\frac{\partial F}{\partial x}\right)_k^{-1}\right)^2,$$

where σΦ is the MSE of the interference error.

Note that P* is about two orders of magnitude less than Δ². This is sufficient to calculate the first value B of the filter gain.
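
A minimal scalar-filter sketch based on the recursion reconstructed in Eq. (35) is shown below; the initialization of the variance with P* is an assumption made for illustration.

```python
def kalman_mismatch(dx_seq, P_star, delta2):
    """Smooth the mismatches dx_1..dx_4 into a single estimate, Eq. (35).
    P_star: variance of the random "measurement" error of dx;
    delta2: variance of the systematic (methodical) error."""
    dx_hat = 0.0
    P = P_star                       # assumed initial variance
    for dx_k in dx_seq:
        B = P / (P + delta2)         # filter gain B_k
        dx_hat += B * (dx_k - dx_hat)
        P = (1.0 - B) * P            # P~_k, carried over as P_{k+1}
    return dx_hat
```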

Algorithmic calibration of the optoelectronic system

The essence of the algorithmic calibration is to estimate the impulse response, or reference function, of the primary path of the meter. The main stages for a star-sensor-type OEI are as follows:

  1. A portion of the starry sky is imaged so as to register as many stars as possible in one frame throughout the field of the device, and these data are recorded in the computer memory.

  2. An appropriate method is used to subtract the background, normalize the images, bring them to a common origin, and average them. By performing an algorithmic calibration, one can obtain both a one-dimensional and a two-dimensional reference function. Consider the calibration procedure in more detail.

Obtaining a one-dimensional reference function

As mentioned above, the OEI with a CCD must first register a portion of the starry sky. The received frame, representing a digitized signal matrix, is recorded in computer memory. The signal from each individual star, which is an ideal point source, is nothing more than the point spread function of the lens, already spatially sampled and digitized. The initial step of processing is to determine the mathematical expectation of the background and the variance (or MSE) of the background (this procedure has already been described above).

This is applied to 16 × 16 pixel fragments. First, the 4 × 4 pixel subarray occupied by the useful signal from the star is isolated; then the mathematical expectation of the background and the background (or noise) variance are determined in a known manner. Further, the background signal is subtracted from the signals (samples) of the 4 × 4 submatrix occupied by the useful signal, and a new submatrix containing only the useful signal and random noise with zero mathematical expectation is obtained.

The next step of processing is the formation of rulers along the “x” and “y” axes, each of which contains 4 pixels. This procedure is described above. Since the position of the reference function relative to the coordinate center of the 4 × 4 pixel submatrix is random, it is necessary to bring it to the coordinate center of the submatrix. For this, the above analytical description can be used; then the operation of bringing the reference function to the center of the submatrix reduces to calculating derivatives and determining corrections ΔFk to Fk at the points

$$x_1 = -1.5\Delta x;\quad x_2 = -0.5\Delta x;\quad x_3 = 0.5\Delta x;\quad x_4 = 1.5\Delta x,$$

that is,

$$\Delta F_1 = \hat{x}_m\left(\frac{\partial F}{\partial x}\right)_1;\quad \Delta F_2 = \hat{x}_m\left(\frac{\partial F}{\partial x}\right)_2;\quad \Delta F_3 = \hat{x}_m\left(\frac{\partial F}{\partial x}\right)_3;\quad \Delta F_4 = \hat{x}_m\left(\frac{\partial F}{\partial x}\right)_4,$$

where x̂_m and (∂F/∂x)_k are defined by the expressions given above, that is,

$$\left(\frac{\partial F}{\partial x}\right)_1 = \frac{\pi}{4\Delta x}\big(Q(x_2) - Q(x_4)\big);\qquad \left(\frac{\partial F}{\partial x}\right)_2 = \frac{\pi}{4\Delta x}\big(Q(x_3) - Q(x_1)\big);$$
$$\left(\frac{\partial F}{\partial x}\right)_3 = -\frac{\pi}{4\Delta x}\big(Q(x_2) - Q(x_4)\big);\qquad \left(\frac{\partial F}{\partial x}\right)_4 = -\frac{\pi}{4\Delta x}\big(Q(x_3) - Q(x_1)\big).$$

Then, substituting x̂_m and (∂F/∂x)_k, we obtain

$$\Delta F_1 = \frac{Q(x_2) - Q(x_4)}{2}\arctan\frac{-Q(x_1)-Q(x_2)+Q(x_3)+Q(x_4)}{-Q(x_1)+Q(x_2)+Q(x_3)-Q(x_4)}$$
$$\Delta F_2 = \frac{Q(x_3) - Q(x_1)}{2}\arctan\frac{-Q(x_1)-Q(x_2)+Q(x_3)+Q(x_4)}{-Q(x_1)+Q(x_2)+Q(x_3)-Q(x_4)} \tag{36}$$
$$\Delta F_3 = \frac{Q(x_4) - Q(x_2)}{2}\arctan\frac{-Q(x_1)-Q(x_2)+Q(x_3)+Q(x_4)}{-Q(x_1)+Q(x_2)+Q(x_3)-Q(x_4)}$$
$$\Delta F_4 = \frac{Q(x_1) - Q(x_3)}{2}\arctan\frac{-Q(x_1)-Q(x_2)+Q(x_3)+Q(x_4)}{-Q(x_1)+Q(x_2)+Q(x_3)-Q(x_4)}$$

Corrected values of the given reference function are determined by expressions

$$F_1' = F_1 + \Delta F_1,\ \dots,\ F_4' = F_4 + \Delta F_4$$

and the maximum value of the reference function at the point x = 0 is

$$F_{\max} = \frac{1}{4}\big(F_1 + F_2 + F_3 + F_4\big) + \frac{\sqrt{2}}{4}\big({-}F_1 + F_2 + F_3 - F_4\big) \tag{37}$$

To obtain the reference function averaged over the frame (and, accordingly, over the angular field), each image should be normalized, which reduces to dividing Fi by Fm and calculating the average value F̄i over all L star images in the frame along each coordinate axis

$$\bar{F}_{i,j} = \frac{1}{L}\sum_{l=1}^{L}\big(F_{i,j}/F_m\big)_{x,y};\qquad i,j = 1,2,3,4 \tag{38}$$
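
The calibration steps (36)-(38) are sketched below: each star ruler is corrected toward the submatrix center, normalized by its maximum (37), and the normalized rulers are averaged over the L star images (38). During calibration, the same star samples play the role of both Q(x_i) and F_i, which is an interpretation assumed here.

```python
import numpy as np

def centre_and_normalise(F):
    """F: 4-sample star ruler along one axis.  Returns the ruler corrected to
    the submatrix centre (Eq. 36) and normalised by its maximum (Eq. 37)."""
    F1, F2, F3, F4 = F
    phi = np.arctan((-F1 - F2 + F3 + F4) / (-F1 + F2 + F3 - F4))
    dF = 0.5 * phi * np.array([F2 - F4, F3 - F1, F4 - F2, F1 - F3])   # Eq. (36)
    Fc = np.asarray(F, dtype=float) + dF                              # corrected samples
    F1c, F2c, F3c, F4c = Fc
    Fmax = 0.25 * (F1c + F2c + F3c + F4c) \
        + np.sqrt(2.0) / 4.0 * (-F1c + F2c + F3c - F4c)               # Eq. (37)
    return Fc / Fmax

def average_reference(rulers):
    """Average the normalised rulers of all L star images, Eq. (38)."""
    return np.mean([centre_and_normalise(F) for F in rulers], axis=0)
```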

Thus, we obtain normalized submatrices of the reference function values along the “x” and “y” axes, centered relative to the origin. For ease of calculation, a suitable analytical description may be adopted as the reference function. Thus, for example, to describe the reference function of the OEI ОЗД-1 (optical star sensor), the Student distribution is the most suitable:

$$F(x) = \frac{2}{\pi\sqrt{3}}\left[1 + \frac{(x - x_0)^2}{3}\right]^{-2}$$

The distribution of illumination over the image of a point emitter, as a rule, has one maximum, so it is often described by a Gaussian of revolution. Based on these representations, an algorithm can be obtained for estimating the parameters of such a Gaussian: the position of the maximum x̂_m, ŷ_m and the distribution parameters σx, σy, taking into account averaging over the samples within the submatrix occupied by the useful signal. Suppose that the envelope of the distribution of counts over the image can be described by a two-dimensional Gaussian function [4]

$$Q(x,y) = Q_0\exp\!\left(-\frac{(x-\hat{x}_m)^2}{2\sigma_x^2} - \frac{(y-\hat{y}_m)^2}{2\sigma_y^2}\right)$$

where

Q0: signal amplitude;

x̂_m, ŷ_m: position of the maximum (note that it lies within pixel Q33, Figure 5); and

σx, σy: distribution parameters.

Figure 5.

Distribution of signals caused by the image over the matrix.

Expressions for image coordinate estimates have the form

$$\hat{\sigma}_x^2 = 3\Big/\ln\frac{Q_{23}^2\,Q_{33}^2\,Q_{43}^2}{Q_{22}Q_{24}Q_{32}Q_{34}Q_{42}Q_{44}}$$
$$\hat{x}_m = \frac{\hat{\sigma}_x^2}{6}\ln\frac{Q_{24}Q_{34}Q_{44}}{Q_{22}Q_{32}Q_{42}} \tag{39}$$
$$\hat{\sigma}_y^2 = 3\Big/\ln\frac{Q_{32}^2\,Q_{33}^2\,Q_{34}^2}{Q_{24}Q_{44}Q_{23}Q_{43}Q_{22}Q_{42}}$$
$$\hat{y}_m = \frac{\hat{\sigma}_y^2}{6}\ln\frac{Q_{22}Q_{23}Q_{24}}{Q_{42}Q_{43}Q_{44}}$$
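
A direct transcription of Eq. (39), as reconstructed above, is given below; Q is assumed to be indexed so that Q[i, j] matches the pixel labels Qij of Figure 5 (maximum within Q[3, 3]).

```python
import numpy as np

def gaussian_estimates(Q):
    """Gaussian parameters and maximum position from the 3x3 neighbourhood of
    the brightest pixel, Eq. (39); estimates are in matrix pixels."""
    sx2 = 3.0 / np.log(Q[2, 3]**2 * Q[3, 3]**2 * Q[4, 3]**2
                       / (Q[2, 2] * Q[2, 4] * Q[3, 2] * Q[3, 4] * Q[4, 2] * Q[4, 4]))
    xm = sx2 / 6.0 * np.log(Q[2, 4] * Q[3, 4] * Q[4, 4]
                            / (Q[2, 2] * Q[3, 2] * Q[4, 2]))
    sy2 = 3.0 / np.log(Q[3, 2]**2 * Q[3, 3]**2 * Q[3, 4]**2
                       / (Q[2, 4] * Q[4, 4] * Q[2, 3] * Q[4, 3] * Q[2, 2] * Q[4, 2]))
    ym = sy2 / 6.0 * np.log(Q[2, 2] * Q[2, 3] * Q[2, 4]
                            / (Q[4, 2] * Q[4, 3] * Q[4, 4]))
    return xm, ym, sx2, sy2
```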

The estimates given by these formulas are obtained in units of matrix pixels. It should also be noted that the resulting values x̂_m, ŷ_m will differ from the estimates of the previous algorithms by 0.5 pixel in each coordinate, since the center of the coordinate system is placed at the center of a pixel.

Block 9 is intended for calculating the error of coordinate estimation. Its value Δx_m is defined as follows. The original position of the image is set, and the value of the signal in the pixels is calculated, taking into account external and internal interference within the image, according to the modeling algorithms given above. The original image position values x0, y0 range from −0.5Δx to +0.5Δx. Then, the values Δx_m = x0 − x̂_m and Δy_m = y0 − ŷ_m are determined from the expressions for estimating the coordinates. At least 250 realizations in each position of the image are processed, and the distribution law of this error is constructed, from which its parameters can be determined.
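
Block 9 can be sketched as the Monte-Carlo loop below; `simulate_submatrix` stands for the signal-plus-noise model assembled in blocks 2-5 and, like the other names here, is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng()

def coordinate_error(estimator, simulate_submatrix, dx, n_real=250):
    """Distribution of the estimation error dx_m = x0 - x_hat_m for a given
    estimator; at least 250 realizations per image position are processed."""
    x0 = rng.uniform(-0.5 * dx, 0.5 * dx)          # true image position within a pixel
    errors = np.empty(n_real)
    for i in range(n_real):
        Q = simulate_submatrix(x0)                 # one noisy realization of the image
        errors[i] = x0 - estimator(Q, dx)          # coordinate estimation error
    return errors                                  # histogram these to get the error law
```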

In the last block 10, an a posteriori estimation of the probability of detecting a signal in one frame is made for each image [5]. The calculation sequence is as follows. We set the relative detection threshold for each of the “n” images, z_t = 3 MSE, where MSE is the mean square deviation calculated in block 7.

Next, we find the signal-to-noise ratio ρ in the pixel containing the maximum signal within the image

$$\rho = \frac{Q_m[i,j]}{MSE},$$

where Qm is the maximum count (minus the constant background) in the submatrix. From these values, we find z0 = z_t − ρ and

$$\Phi(z_0) = \frac{1}{\sqrt{2\pi}}\int_0^{z_0} e^{-\theta^2/2}\,d\theta, \tag{40}$$

which makes it possible to calculate the probability of detecting the signal

$$P_{\mathrm{det}} = 0.5\big(1 - 2\Phi(z_0)\big) \tag{41}$$

for each of the “n” images in the frame.

The probability of a false alarm will be determined by the expression

$$P_{\mathrm{fa}} = 0.5 - \Phi(z_t) \tag{42}$$
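
Block 10 reduces to the few lines sketched below, using the reconstructed formulas (40)-(42); the threshold and the signal-to-noise ratio are both expressed in units of MSE.

```python
from math import erf, sqrt

def phi(z):
    """Phi(z) = (1/sqrt(2*pi)) * integral from 0 to z of exp(-t^2/2) dt, Eq. (40)."""
    return 0.5 * erf(z / sqrt(2.0))

def detection_probabilities(Q_max_count, mse, z_t=3.0):
    """A posteriori detection and false-alarm probabilities, Eqs. (41)-(42)."""
    rho = Q_max_count / mse            # signal-to-noise ratio in the brightest pixel
    z0 = z_t - rho
    p_det = 0.5 * (1.0 - 2.0 * phi(z0))   # Eq. (41)
    p_fa = 0.5 - phi(z_t)                 # Eq. (42)
    return p_det, p_fa
```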

This procedure completes the simulation of the primary processing of still images in the optoelectronic device with a matrix photodetector.

3. Conclusion

The studies carried out in this work produced the following result: a method of modeling the angle-measuring OEI with a CCD, of assessing its accuracy and detection characteristics, and of assessing the requirements for the parameters of such devices, including:

  • External input impact model;

  • OEI internal interference model;

  • Model of operation of the OEI pixels;

  • Model of processes of detection, extraction of useful signal, and estimation of its parameters.

References

  1. Pashkov V. On-board Optoelectronic Measuring System. Lightning Source Incorporated; 2022. p. 240. DOI: 10.31772/2587-6066-2019-20-4-416-422
  2. Pashkov V. On-board Optoelectronic Measuring System. Petersburg, Russia; 2021. p. 263. DOI: 10.31772/2587-6066-2019-20-4-416-422
  3. Pashkov V. On-board Optoelectronic Measuring System. Saarbrucken, Germany: Palamarium Academic Publishing/OmniScriptum GmbH & Co; 2014. p. 253. DOI: 10.31772/2587-6066-2019-20-4-416-422
  4. Pashkov V. Modelling of the optical-electronic device with the matrix photodetector. Danish Scientific Journal. 2017;5(119):85-101
  5. Pashkov VS. The signal detection simulation in the optical-electronic devices with matrix photodetectors. Znanstvena Misel Journal. 2017;10(104):91-95
