Open access

A Method and Electronic Device to Detect the Optoelectronic Scanning Signal Energy Centre

Written By

Moisés Rivas, Wendy Flores, Javier Rivera, Oleg Sergiyenko, Daniel Hernández-Balbuena and Alejandro Sánchez-Bueno

Submitted: 25 June 2012 Published: 16 January 2013

DOI: 10.5772/51993

From the Edited Volume

Optoelectronics - Advanced Materials and Devices

Edited by Sergei L. Pyshkin and John M. Ballato


1. Introduction

In optoelectronic scanning, the signal obtained when locating a light source has a Gaussian-like shape. This is mainly observed when the light source sought by the optoelectronic scanner is punctual (point-like); as the light expands from the source, a cone or an even more complex shape is formed, depending on the properties of the medium through which the light travels. To reduce errors in position measurements, the best solution is to take the measurement at the energy centre of the signal generated by the scanner, see [1].

The signal energy centre concept considers the points listed below, see [2], in order to determine which of them yields the most precise measurement results:

  • The Signal Energy Centre could be found in the peak of the signal.

  • The Signal Energy Centre could be found in the centroid of the area under the Gaussian-like signal curve.

  • The Signal Energy Centre could be found in the Power Spectrum Centroid.

The energy centre of the signal can be found by processing the optoelectronic scanner sensor output with a computer algorithm, taking into account the points mentioned above, in high-level technical computing software for engineering and science such as MATLAB. However, our contribution is a method and electronic hardware that produce an output signal related to the energy centre of the optoelectronic scanning sensor signal, for applications in position measurements.
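The three candidate definitions above can be compared numerically. The sketch below (plain Python; the synthetic noisy Gaussian pulse and all variable names are our illustrative assumptions, not data from the chapter) computes the peak position, the area centroid, and a power spectrum centroid for a Gaussian-like signal:

```python
import cmath
import math
import random

def peak_position(t, y):
    """Candidate 1: the time of the maximum sample."""
    return t[max(range(len(y)), key=lambda i: y[i])]

def area_centroid(t, y):
    """Candidate 2: centroid (centre of mass) of the area under the curve."""
    return sum(ti * yi for ti, yi in zip(t, y)) / sum(y)

def power_spectrum_centroid(y, dt):
    """Candidate 3: power-weighted mean frequency of the signal spectrum
    (naive DFT over the positive frequencies; O(N^2), for illustration)."""
    n = len(y)
    num = den = 0.0
    for k in range(1, n // 2):
        coeff = sum(yj * cmath.exp(-2j * math.pi * k * j / n)
                    for j, yj in enumerate(y))
        power = abs(coeff) ** 2
        num += k / (n * dt) * power
        den += power
    return num / den

# Synthetic noisy Gaussian-like scan pulse centred at t = 5 (arbitrary units)
random.seed(1)
dt = 0.05
t = [i * dt for i in range(201)]
y = [math.exp(-0.5 * ((ti - 5.0) / 0.8) ** 2) + 0.01 * random.gauss(0, 1)
     for ti in t]

peak = peak_position(t, y)          # near 5
centroid = area_centroid(t, y)      # near 5
psc = power_spectrum_centroid(y, dt)
```

For a clean symmetric pulse the peak and the area centroid coincide; with noise and deformation the centroid is usually the more stable of the two, which motivates the method described in this chapter.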

This method is based on the assumption that the signal generated by optical scanners for position measurements has a Gaussian-like shape. However, during experimentation it has been seen that the optoelectronic scanning sensor output is a Gaussian-like signal with some noise and deformation. This is due to internal and external error sources, such as the motor eccentricity at low scanning speed, and noise and deformation that can interfere with the wavelength of the light sources. Other phenomena, such as reflection, diffraction, absorption and refraction, may also affect the signal; the resulting error can be minimized by taking the measurement at the energy centre of the signal.

The main interest of this chapter is to describe and explain a method to find the energy centre of the signal generated by optical scanners based on a dynamic triangulation, see [3], to reduce errors in position measurements.


2. Optoelectronic scanners for position measurements

Nowadays optoelectronic scanners are widely used in multiple applications; most position- or geometry-measuring scanners use the triangulation principle or a variant of this measurement method. There are two kinds of scanners for position measuring tasks: scanners with static sensors and scanners with rotating mirrors. Optical triangulation sensors with a CCD or PSD are typically used to measure manufactured goods, such as tire treads, coins, printed circuit boards and chips, principally for monitoring the target distance of small, fragile parts or soft surfaces likely to be deformed if touched by a contact probe.

2.1. Scanners with position triangulation sensors using CCD or PSD

A triangulation scanner sensor can be formed by three subsystems: emitter, receiver, and electronic processor, as shown in figure 1. A spot of light is projected onto the work target; a portion of the light reflected by the target is collected through the lens by the detector, which can be a CCD, CMOS or PSD array. The angle (α) is calculated from the position of the beam on the detector array, and hence the distance from the sensor to the target is computed by the electronic processor. As stated by Kennedy William P. in [4], the size of the spot is determined by the optical design and influences the overall system design by setting a limit on the detectable target feature size. For instance, if the spot diameter is 30 μm, it will be difficult to resolve a lateral feature smaller than 30 μm.
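Under the similar-triangles geometry of figure 1, the target range follows from the baseline between emitter and lens, the lens focal length, and the spot displacement on the detector. A minimal sketch (the numeric values and names are our illustrative assumptions, not values from the text):

```python
def triangulation_distance(baseline_m, focal_m, spot_offset_m):
    """Classic laser-triangulation range estimate from similar triangles:
    z = baseline * focal_length / spot_offset.  All arguments in metres."""
    if spot_offset_m <= 0:
        raise ValueError("spot must be displaced from the optical axis")
    return baseline_m * focal_m / spot_offset_m

# Example: 50 mm baseline, 25 mm lens, spot 0.5 mm off-axis on the array
z = triangulation_distance(0.050, 0.025, 0.0005)  # 2.5 m
```

Note the inverse relation: resolution is best at short range, where a small change in distance produces a large shift of the spot on the CCD/PSD array.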

Figure 1.

Principle of Triangulation.

Many devices are commonly utilized in different types of optical triangulation position scanners, and have been built or considered in the past, for measuring the position of a light spot more efficiently. One method of position detection uses a video camera to electronically capture an image of an object; image processing techniques then determine the object's location. For situations requiring the location of a light source on a plane, a position sensitive detector (PSD) offers the potential for better resolution at a lower system cost [5]. However, there are other kinds of scanners commonly used for long-distance measurement or structural health monitoring (SHM) tasks; these scanners are explained in the next section.

2.2. Scanners with rotating mirrors and remote sensing

In the previous section, we described the operational principle of scanners for monitoring the distance of small objects; we now describe the operational principle of scanners with rotating mirrors, used for long-distance measurement and structural health monitoring tasks.

There are two main classifications of optical scanning: remote sensing and input/output scanning. Remote sensing detects objects from a distance, as from a space-borne observation platform, for example infrared imaging of terrain. Sensing is usually passive, and the radiation incoherent and often multispectral. Input/output scanning, on the other hand, is local. A familiar example is document reading (input) or writing (output). The intensive use of the laser makes the scanning active and the radiation coherent. The scanned point is focused via finite-conjugate optics from a local fixed source, see [6].

In remote sensing there is a variety of scanning methods for capturing the data needed for image formation. These methods may be classified into framing, push broom, and mechanical. In the first, there is no need for physical scan motion, since it uses electronic scanning, which implies that the sensor has a two-dimensional array of detectors. At present the most used sensor is the CCD, and such an array requires an optical system with 2-D wide-angle capability. In push broom methods, a linear array of detectors is moved along the area to be imaged, as in airborne and satellite scanners. Mechanical methods include one- and two-dimensional scanning techniques incorporating one or multiple detectors; image formation by one-dimensional mechanical scanning requires the platform with the sensor, or the object itself, to be moved in order to create the second dimension of the image.

Nowadays a technique named hyperspectral imaging (also known as imaging spectroscopy) is used in many research fields. It is applied in remotely sensed satellite imaging and aerial reconnaissance, for example by one of NASA's premier instruments for Earth exploration, the Jet Propulsion Laboratory's Airborne Visible-Infrared Imaging Spectrometer (AVIRIS). With this technique the instruments are capable of collecting high-dimensional image data, using hundreds of contiguous spectral channels, over the same area on the surface of the Earth, as shown in figure 2, where the instrument measures the reflected radiation in the wavelength region from 0.4 to 2.5 μm using 224 spectral channels, at a nominal spectral resolution of 10 nm. The wealth of spectral information provided by the latest generation of hyperspectral sensors has opened ground-breaking perspectives in many applications, including environmental modelling and assessment; target detection for military and defence/security deployment; urban planning and management studies; risk/hazard prevention and response, including wild-land fire tracking; biological threat detection; and monitoring of oil spills and other types of chemical contamination [7].

Figure 2.

The concept of hyperspectral imaging illustrated using NASA’s AVIRIS sensor [7].

While remote sensing requires capturing passive radiation for image formation, active input/output scanning needs to illuminate an object or medium with a "flying spot", derived typically from a laser source. Table 1 lists some examples divided into the two principal functions: input (when the scattered radiation from the scanning spot is detected) and output (when the radiation is used for recording or displaying). Therefore, we can say that in input scanning the radiation is modulated by the target to form a signal, while in output scanning it is modulated by a signal.

Input / Output Scanning

Input                               Output
Image scanning / digitising         Image recording / printing
Bar-code reading                    Colour image reproduction
Optical inspection                  Medical image outputs
Optical character recognition       Data marking and engraving
Optical data readout                Micro image recording
Graphic arts camera                 Reconnaissance recording
Scanning confocal microscopy        Optical data storage
Colour separation                   Phototypesetting
Robot vision                        Graphic arts platemaking
Laser radar                         Earth resources imaging
Mensuration (measurement)           Data / image display

Table 1.

Examples of Input / Output Scanning.

2.2.1. Polygonal scanners

These scanners have a polygonal mirror rotating at constant speed, driven by an electric motor; the radiation received through the lens is reflected onto a detector. The primary advantages of polygonal scanners are speed, the availability of wide scan angles, and velocity stability. They are usually rotated continuously in one direction at a fixed speed to provide repetitive unidirectional scans, which are superimposed in the scan field, or plane, as the case may be. When the number of facets is reduced to one, it is identified as a monogon scanner. Figure 3 illustrates a hexagonal rotating mirror scanner.

Figure 3.

Polygon scanner.

2.2.2. Pyramidal and prismatic facets

In these types of scanners, the incoming radiation is focused on a regular pyramidal polygon with a number of plane mirror facets at an angle, rather than parallel, to the rotational axis. This configuration permits smaller scan angles with fewer facets than polygonal mirrors. The principal arrangements of facets are termed prismatic or pyramidal. The pyramidal arrangement allows the lens to be oriented close to the polygon, while the prismatic configuration requires space for a clear passage of the input beam.

2.2.3. Holographic scanners

Almost all holographic scanners comprise a substrate which is rotated about an axis, and utilize many of the characterising concepts of polygons. An array of holographic elements disposed about the substrate serves as facets, transferring a fixed incident beam into a scanning one. As with polygons, the number of facets is determined by the optical scan angle and duty cycle, and the elemental resolution is determined by the incident beam width and the scan angle. In radially symmetric systems, scan functions can be identical to those of the pyramidal polygon. While there are many similarities to polygons, there are also significant advantages and limitations.

Figure 4.

Polygonal scanner.

2.2.4. Galvanometer and resonant scanners

To avoid the scan non-uniformities which can arise from the facet variations of polygons or holographic deflectors, one may avoid multiple facets altogether. Reducing the number to one, the polygon becomes a monogon. This adapts well to the internal drum scanner, which achieves a high duty cycle, executing a very large angular scan within a cylindrical image surface. Flat-field scanning, however, as projected through a flat-field lens, allows only a limited optical scan angle, resulting in a limited duty cycle from a rotating monogon. If the mirror is vibrated rather than rotated completely, the wasted scan interval may be reduced. Such components must, however, satisfy the system's speed, resolution, and linearity requirements. Vibrational scanners include the familiar galvanometer and resonant devices, and the less commonly encountered piezoelectrically driven mirror transducer, as shown in figure 5.

Figure 5.

Galvanometer scanner.

2.2.5. 45° cylindrical mirror scanner

Optical scanning systems can use coherent light-emitting sources, such as lasers, or incoherent light sources, like the lights of a vehicle. With a laser as the light-emitting source, the measurements are independent of environmental lighting, so it is possible to scan during day and night. However, there are disadvantages: the initial cost, the hazard due to the high energy output, and the fact that lasers cannot penetrate dense fog, rain, or the warm air currents that rise along structures and interfere with the beam; besides, it is difficult to properly align the emitter and receiver. A passive optical scanning system for SHM can instead use conventional light-emitting sources placed on a structure to determine whether its position changes due to deterioration. Figure 6 illustrates a general schematic diagram with the main elements of the optical scanning aperture used to generate the signals to test the proposed method.

Figure 6.

Cylindrical Mirror Scanner.

The optical system is composed of the light emitter source, set at a distance from the receiver; the receiver comprises the mirror E, which spins with angular velocity ω. The emitted beam arrives with an incident angle β with respect to the perpendicular of the mirror and is reflected at the same angle β, according to the reflection principle (C. L. Wyatt, 1991), passing through a lens that concentrates the beam onto the photodiode, which generates a signal "f" with a shape similar to the Gaussian function. When the mirror starts to spin, the sensor "s" is synchronized with the origin, generating a pulse that indicates the start of the measurement, which finishes when the photodiode releases the stop signal. This stop signal is released when the energy centre of the Gaussian signal has been detected.

Figure 7 shows that the light intensity increases towards the centre of the signal generated by the scanner. The sensor "s" generates a starting signal when tα = 0; the stop signal is then activated when the geometric centre of the Gaussian function has been detected.

Figure 7.

Signal generated by a 45° cylindrical mirror scanner.

The period T2π is equal to the time between two successive passes over the mark m1 (one full revolution), and is expressed by the code N2π as defined in equation 1.

N2π = T2π f0 (1)
On the other hand, the time tα, equal to the interval between m1 and m2, can be expressed by the code Nα defined in equation 2.

Nα = tα f0 (2)
where f0 is a standard reference frequency. With this consideration, the time variable can be eliminated from equation 2, obtaining equation 3, see [8].

α = 2π Nα / N2π (3)
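Equations 1-3 reduce the angle measurement to a ratio of pulse counts of the reference clock, with the time variable and f0 cancelling out. A minimal sketch (the counter values are our illustrative assumptions):

```python
import math

def scan_angle(n_alpha, n_2pi):
    """Angular position from pulse counts: n_alpha pulses of the reference
    clock f0 are counted during t_alpha, and n_2pi pulses during one full
    mirror revolution T_2pi, so alpha = 2*pi * n_alpha / n_2pi (radians).
    f0 cancels, so the result is independent of the clock frequency."""
    return 2 * math.pi * n_alpha / n_2pi

# Example: 12,500 pulses out of 100,000 per revolution -> 45 degrees
alpha = scan_angle(12_500, 100_000)
```

This is why detecting the energy centre of the pulse matters: the stop signal that latches n_alpha must fire at the true centre of the Gaussian-like pulse, or the count, and hence the angle, is biased.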
3. Scanner sensors

All detectors (sensors) act as transducers that receive photons and produce an electrical response, which can be amplified and converted into a form suitable for handling the input data and interpreting the results. Among the relevant parameters are spectral response, spectral bandwidth, linearity, dynamic range, quantum efficiency, noise, imaging properties and time response. Photon detectors respond directly to individual photons: an absorbed photon releases one or more bound charge carriers in the detector that modulate the electric current in the material, which is passed directly to an output amplifier. Photon detectors can be used in a spectral bandwidth from the X-ray and ultraviolet to the visible and infrared spectral regions. We can classify them as analogue-waveform-output detectors and image detectors; other classifications are possible, but only these types are described in this section.

3.1. Analogue waveform output detectors

Analogue waveform output detectors are used as optical receivers to convert light into electricity. This principle applies to photodiodes, phototransistors and other detectors such as photovoltaic cells and photoresistors, but the most widely used today in position measuring processes are the photodiode and the phototransistor.

3.1.1. Photodiode

The photodiode can convert light into either current or voltage, depending on the mode of operation. A photodiode is based on a junction of oppositely doped regions (a pn junction) in a sample of semiconductor. This creates a region depleted of charge carriers, which results in high impedance. The high impedance allows the construction of detectors using silicon and germanium that operate with high sensitivity at room temperature.

Figure 8.

Cross section of a typical silicon photodiode.

A cross section of a typical silicon photodiode is shown in figure 8. N-type silicon is the starting material. A thin "p" layer is formed on the front surface of the device by thermal diffusion or ion implantation of the appropriate doping material (usually boron). The interface between the "p" layer and the "n" silicon is known as a pn junction. Small metal contacts are applied to the front surface of the device, and the entire back is coated with a contact metal. The back contact is the cathode; the front contact is the anode. The active area is coated with silicon nitride, silicon monoxide or silicon dioxide for protection and to serve as an anti-reflection coating. The thickness of this coating is optimized for particular irradiation wavelengths.

In semiconductors whose bandgaps permit intrinsic operation in the 1–15 µm range, a junction is often necessary to achieve good performance at any temperature. Because these detectors operate through intrinsic rather than extrinsic absorption, they can achieve high quantum efficiency in small volumes. However, high performance photodiodes are not available at wavelengths longer than about 15 µm because of the lack of high-quality intrinsic semiconductors with extremely small bandgaps. Standard techniques of semiconductor device fabrication allow photodiodes to be constructed in arrays with many thousands, even millions, of pixels. Photodiodes are usually the detectors of choice for 1–6 µm and are often useful not only at longer infrared wavelengths but also in the visible and near ultraviolet.

The photodiode operates by using an illumination window, which allows the use of light as an external input. Since light is used as an input, the diode is operated under reverse bias conditions. Under the reverse bias condition the current through the junction is zero when no light is present, this allows the diode to be used as a switch or relay when sufficient light is present.

Photodiodes for fast applications are mainly made from gallium arsenide instead of silicon, because silicon creates crystal lattice vibrations, called phonons, when photons are absorbed to create electron-hole pairs. Gallium arsenide can produce electron-hole pairs without the slowly moving phonons; this allows faster switching between on and off states, and GaAs is also more sensitive to light intensity. Once charge carriers are produced in the diode material, the carriers reach the junction by diffusion.

Photodiodes are similar to regular semiconductor diodes, except that they may either be exposed (to detect vacuum UV or X-rays) or packaged with a window or optical-fibre connection to allow light to reach the sensitive part of the device. Many diodes designed specifically for use as photodiodes employ a PIN junction rather than a p-n junction, to increase the speed of response [9].

Spectral response: The wavelength of the radiation to be detected is an important parameter. As shown in figure 9, silicon becomes transparent to radiation of a wavelength longer than 1100 nm.

Linearity: the current output of the photodiode is very linear with radiant power throughout a wide range. Nonlinearity remains below approximately 0.02% up to 100 µA of photodiode current. The photodiode can produce output currents of 1 mA or greater at high radiant power, but nonlinearity increases in this region. This excellent linearity at high radiant power assumes that the full photodiode area is uniformly illuminated; if the light source is focused on a small area of the photodiode, nonlinearity will occur at lower radiant power.

Figure 9.

Spectral responsivity and response vs. incident angle of a photodiode.

Dynamic range: the dynamic response varies with the feedback resistor. Using a 1 MΩ feedback resistor, the dynamic response of the photodiode can be modelled as a simple RC circuit with a −3 dB cut-off frequency of 4 kHz. This yields a rise time of approximately 90 µs (10% to 90%). See figure 10.
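The quoted rise time follows directly from the single-pole RC model: for a first-order low-pass response, t_r(10%-90%) = ln(9)/(2π f_c) ≈ 0.35/f_c. A quick numerical check:

```python
import math

def rc_rise_time(f_cutoff_hz):
    """10%-90% rise time of a single-pole RC low-pass response:
    t_r = ln(9) / (2 * pi * f_c), approximately 0.35 / f_c."""
    return math.log(9) / (2 * math.pi * f_cutoff_hz)

t_r = rc_rise_time(4e3)  # about 87 us, consistent with the ~90 us quoted
```

The same relation lets one trade bandwidth for noise by changing the feedback resistor: a larger resistor lowers f_c and slows the rise time proportionally.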

Noise: the noise performance of a photodetector is often characterized by the noise equivalent power (NEP). This is the radiant power which would produce an output signal equal to the noise level, and it has units of radiant power (watts). The typical performance curve "Noise Equivalent Power vs. Measurement Bandwidth" shows how NEP varies with the feedback resistor RF and the measurement bandwidth.

Imaging properties: the output is a voltage measured over time, imaging a Gaussian-like signal shape [10].

Figure 10.

Small and large signal dynamic response of a photodiode.

3.1.2. Phototransistors

The phototransistor is similar to the photodiode, except for an n-type region added to the photodiode configuration. The phototransistor comprises a photodiode with internal gain; it can be represented as a bipolar transistor enclosed in a transparent case, so that photons can reach the base-collector junction. The electrons generated by photons in the base-collector junction are injected into the base, and the photodiode current is then amplified by the transistor's current gain β (or hFE). Despite this gain, a phototransistor cannot detect light any better than a photodiode: it is unable to detect lower levels of light. A further drawback of the phototransistor is its slower response time in comparison to a photodiode. If the emitter is left unconnected, the phototransistor becomes a photodiode [11].

Figure 11.

Relative spectral sensitivity and collector current vs. angular displacement of a phototransistor.

3.2. Image sensors

Nowadays image sensors are recognized as the most advanced technology to record electronic images.

These sensors are based on the photoelectric effect in silicon. When a photon of an appropriate wavelength (in general between 200 and 1000 nm) hits silicon, it generates an electron-hole pair. If an electric field is present, the electron and the hole are separated and charge can accumulate, proportional to the number of incident photons, and therefore the scene imaged onto the detector will be reproduced if a proper X-Y structure is present. Each basic element, defining the granularity of the sensor, is called a pixel (picture element) [12].

3.2.1. CCD sensor (charge coupled device)

This sensor is used in scanners to capture digital images. Typically it is a linear array that performs the scan row by row, capturing one horizontal row of pixels at a time and moving the scan line down with a carriage motor. Scanners that use CCD sensor cells employ an optical lens, often like a fine camera lens, and a system of mirrors to focus the image onto the CCD cells.

The CCD sensor cell is an analogue device: when light strikes the chip, it is held as a small electrical charge in each photo sensor. The charges are converted to voltage, one pixel at a time, as they are read from the chip. Additional circuitry is required to convert the analogue signal to digital to produce an image, as shown in figure 12.

Figure 12.

CCD operating principle.

The basic concept of CCDs is a simple series connection of metal-oxide-semiconductor (MOS) capacitors. The individual capacitors are physically located very close to each other. The CCD is a type of charge storage and transport device: charge carriers are stored on the MOS capacitors and transported between them. To operate the CCD, digital pulses are applied to the top plates of the MOS structures, so that charge packets can be transported from one capacitor to its neighbour. If the chain of MOS capacitors is closed with an output node and an appropriate output amplifier, the charges forming a moving charge packet can be translated into a voltage and measured outside the device. The way the charges are loaded into the CCD is application dependent.
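The bucket-brigade transport described above can be illustrated with a toy model (pure Python; real CCDs clock analogue charge packets, and the well values below are invented for illustration):

```python
def ccd_readout(wells):
    """Toy model of CCD readout: on each clock cycle the charge packet
    beside the output node is measured, and every remaining packet
    shifts one capacitor toward the output (modelled here by pop())."""
    register = list(wells)   # charge packets, output node at the right end
    measured = []
    while register:
        measured.append(register.pop())  # packet at the output end leaves
    return measured

# Four charge packets along the register; readout reverses their order
readout = ccd_readout([10, 40, 250, 80])  # -> [80, 250, 40, 10]
```

The serial nature of this readout is exactly why CCDs are slower than CMOS sensors, where each pixel has its own amplifier and can be addressed directly.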

The advantages of CCDs are size, weight, cost, power consumption, stability and image quality (low noise, good dynamic range, and colour uniformity). A disadvantage is susceptibility to vertical smear from bright light sources when the sensor is overloaded [13].

Figure 13.

Responsivity and quantum efficiency vs. wavelength of a CCD.

As light enters the active photo sites in the image area, electron hole pairs are generated and the electrons are collected in the potential wells of the pixels. The wells have a finite charge storage capacity determined by the pixel design [14].

3.2.2. CMOS sensor (complementary metal oxide semiconductor)

This is an active pixel sensor: an image sensor fabricated as an integrated circuit containing an array of pixel sensors, each pixel comprising a photodetector and an active amplifier. Extra circuitry next to each photo sensor converts the light energy to a voltage, and additional circuitry is required to convert the analogue signal to digital. In a CMOS sensor the incoming photons pass through colour filters, then through the glass layers supporting the metal interconnect layers, and then into the silicon, where they are absorbed, exciting electrons that travel to photodiode structures to be stored as signal. These sensors are commonly used in cell phone cameras and web cameras. They can potentially be implemented with fewer components, use less power, and/or provide faster readout than CCDs, scaling to high-resolution formats, and they are cheaper to manufacture than CCD sensors. However, a disadvantage is that they are susceptible to the undesired effects that come as a result of a rolling shutter [15].

Figure 14.

CMOS sensor.

Typically the sensitivity of the sensor is evaluated through the quantum efficiency (QE): the probability that one photon generates one electron in the sensor at a given wavelength. This is a good indicator of the minimum amount of light that can be detected. In general, CMOS sensors have a higher QE due to their design structure, and this can be further optimized by producing the sensor with a thicker epitaxial layer (shown as CMOS 1-b in figure 15). Hence, at 800 nm, the CMOS sensor with the thicker epitaxial layer has the best QE.

Figure 15.

Relative radiant sensitivity vs. angular displacement and CCD vs. CMOS sensitivity.

3.2.3. Position sensing detector

Another device widely used as a triangulation position sensor is the PSD (position sensing detector), which converts an incident light spot into continuous position data (figure 16). It is more accurate and faster than a CCD because the PSD is a continuous sensor, while a CCD is a matrix of dots switched on and off, whose resolution depends on how many dots are located on the sensor; typically a linear CCD has 1024 or 2048 dots.

Figure 16.

PSD Operating principle.

A PSD has, in principle, infinite resolution because it is a continuous sensor; the digital resolution of a PSD system therefore depends not on the PSD itself but on the external electronics. Alignment sensors using CCDs have to be programmed to perform multiple measurements at every step to improve accuracy and lower noise, because linear CCDs have low resolution; to match the accuracy of a PSD, a CCD should perform no fewer than 32 measurements and calculate their average. However, the CCD is generally preferred, because the PSD needs an expensive circuit design, including analogue-to-digital conversion [17].
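A common readout relation for a one-dimensional lateral-effect PSD derives the spot position from the two end-electrode photocurrents; the sketch below uses the standard centre-referenced form (the electrode naming and example currents are our illustrative assumptions):

```python
def psd_position(i1, i2, length_m):
    """Spot position on a 1-D lateral-effect PSD of active length L,
    measured from the centre: x = (L/2) * (I2 - I1) / (I1 + I2),
    where I1 and I2 are the photocurrents at the two end electrodes."""
    return 0.5 * length_m * (i2 - i1) / (i1 + i2)

# Equal currents -> spot at the centre; all current at one end -> edge
x_centre = psd_position(1.0e-6, 1.0e-6, 0.010)  # 0.0 m
x_edge = psd_position(0.0, 2.0e-6, 0.010)       # +0.005 m
```

Because the position is a ratio of currents, it is, to first order, independent of the total light intensity, which is what makes the PSD attractive for triangulation.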


4. Typical optoelectronic scanners signals

Different shapes of signals are generated during an optical scanning process, depending on the kind of light source and the scanner's sensor. Some precision semiconductor optical sensors, like the CCD or PSD, produce output currents related to the "centre of mass" of the light incident on the surface of the device, see [18]. All the light registered by the CCD or PSD originates an ideal signal shape, as shown in figure 17.

Figure 17.

Ideal photon distribution on CCD and PSD sensors.

A typical position measuring process includes an emitter source of light, such as a laser diode or an incoherent light lamp, and a position sensitive detector like a CCD or PSD as the receiving device, which collects a portion of the back-reflected light from the target. The position of the spot on the PSD is related to the target position and the distance from the source, see [19]. However, the real photon distribution on the sensor depends on the characteristic dimensions related to the diffraction pattern of the light in space. Common examples of signals generated by the light registered on a CCD camera appear in a study on super-resolution by spectrally selective imaging, shown in figure 18 (image credit: A. M. van Oijen and J. Köhler).

Figure 18.

Real photon distributions as a function of the detector for different diffraction patterns.

Based on the study of A. M. van Oijen and J. Köhler, we can observe that the spatial distribution function of the light has an Airy-function-like shape, see [20]. It is well known that CCD, CMOS and PSD sensors use the light-quantity distribution of the entire beam spot entering the light-receiving element to determine the beam spot centre or centroid, and identify this as the target position. However, these are not the only sensors that generate such a Gaussian-like shape; many other sensors remain to be investigated. For example, a simple photodiode can also originate a similar Gaussian-like shape when used as the sensor of a scanner with a rotating mirror [21]. Figure 19 illustrates a hypothetical spot model and explains how the signal is created by the photodiode on a scanner with a rotating mirror.

Figure 19.

Principle of electrical signal formation during rotational scanning.

In this case, the signal created, with its Gaussian-like shape, rises (Fig. 19, a) and falls (Fig. 19, e), with fluctuating activity around its maximum in Figs. 19 (b-d). As mentioned before, in real practice the signal becomes noisy, see [22]. The experiment recently developed by Rivas M. and Flores W., with the scanner shown in Figure 6, for angular position measurement using an incoherent light source and a simple photodiode, validated the model shown in Figure 20. During experimentation, it was observed that the optoelectronic scanning sensor (photodiode) output is a Gaussian-like shape signal with some noise and deformation. This is due to internal and external error sources, such as the motor eccentricity at low scanning speed, and noise and deformation that can interfere with the wavelength of the light sources. Other phenomena, such as reflection, diffraction, absorption and refraction, can also affect the signal, producing the distortion seen in Figure 20.

As we can see, the photodiode signal originates a function similar to that of a CCD; consequently, it is possible to enhance measurement accuracy in optical scanners with a rotating mirror by using a method that improves centroid accuracy: taking the measurement at the energy centre of the signal. In the following section, we propose a new method, and its respective circuit, to find the centre of the signal generated by an optical scanner.

Figure 20.

Scanning sensor Gaussian like shape, measurements at different angular positions of the light source.


5. Signal processing methods to locate signal energy centre

In the previous sections, we described different sensors and scanners that produce output currents related to the "centre of mass" of the light incident on the surface of the device. In this section we compare some techniques to find the energy centre of the signal and discuss their advantages.

5.1. Time-series simple statistics algorithms for peak detection

Peak detection algorithms are simple statistical algorithms for non-normally distributed data series [23] that find the signal peak through statically calculated threshold criteria [23]. The algorithms that identify peaks in a given time-series were selected to be applied to power distribution data, whose peaks indicate high demands; the highest peak corresponds to the energy centre. Each algorithm is based on a specific formalization of the notion of a peak, chosen according to the characteristics of the optical signal. These algorithms are classified as simple because the signal requires neither pre-processing to smooth it nor fitting to a known function. However, such algorithms detect all peaks, whether strong or weak, so to reduce the effects of noise the signal-to-noise ratio (SNR) must be over a certain threshold [23]:

h = (max + abs_avg) / 2 + K · abs_dev (E4)

S1(k, i, x_i, T) = ( max{x_i − x_{i−1}, x_i − x_{i−2}, …, x_i − x_{i−k}} + max{x_i − x_{i+1}, x_i − x_{i+2}, …, x_i − x_{i+k}} ) / 2 (E5)

S2(k, i, x_i, T) = ( [(x_i − x_{i−1}) + (x_i − x_{i−2}) + … + (x_i − x_{i−k})] / k + [(x_i − x_{i+1}) + (x_i − x_{i+2}) + … + (x_i − x_{i+k})] / k ) / 2 (E6)

S3(k, i, x_i, T) = ( [x_i − (x_{i−1} + x_{i−2} + … + x_{i−k}) / k] + [x_i − (x_{i+1} + x_{i+2} + … + x_{i+k}) / k] ) / 2 (E7)

S4(k, i, x_i, T) = H_w(N(k, i, T)) − H_w(N′(k, i, T)) (E8)

This method consists in defining the following variables. Let T = x_1, x_2, …, x_N be a given univariate, uniformly sampled time-series containing N values (1, 2, …, N), and let x_i be a given i-th point in T, with k > 0 a given integer. N+(k, i, T) = ⟨x_{i+1}, x_{i+2}, …, x_{i+k}⟩ is the sequence of k right temporal neighbours of x_i, and N−(k, i, T) = ⟨x_{i−1}, x_{i−2}, …, x_{i−k}⟩ is the sequence of k left temporal neighbours of x_i. N(k, i, T) = N+(k, i, T) • N−(k, i, T) denotes the sequence of 2k points around the i-th point (without the i-th point itself) in T, where • denotes concatenation, and N′(k, i, T) = N+(k, i, T) • {x_i} • N−(k, i, T). Finally, let S be a given peak function, which computes a non-negative real number S(k, i, x_i, T) for the i-th element x_i of the given time-series T.

A given point x_i in T is a peak if S(k, i, x_i, T) > θ, where θ is a user-specified (or suitably calculated) threshold value.
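As an illustration, the S1 peak function (E5) can be sketched in a few lines of Python. This is a hedged sketch, not the authors' implementation: the window size k, the Gaussian-like test pulse and the argmax-based selection of the strongest peak (which, per the discussion above, we take as the energy centre) are our own choices.

```python
import math

def s1(x, i, k):
    """Palshikar's S1 peak function (E5): the average of the maximum
    rise of x[i] over its k left and its k right temporal neighbours."""
    left = max(x[i] - x[i - j] for j in range(1, k + 1))
    right = max(x[i] - x[i + j] for j in range(1, k + 1))
    return (left + right) / 2.0

def peak_energy_centre(x, k):
    """Score every interior point with S1 and return the index of the
    strongest peak, taken here as the energy centre of the signal."""
    scores = [s1(x, i, k) for i in range(k, len(x) - k)]
    return k + scores.index(max(scores))

# A noise-free Gaussian-like pulse sampled at unit intervals, apex at t = 50:
signal = [math.exp(-((t - 50) / 10.0) ** 2) for t in range(101)]
print(peak_energy_centre(signal, k=10))  # -> 50
```

In a noisy trace, the threshold rule S > θ (with θ computed as in E4) would be applied first, so that only significant peaks compete for the maximum; k should be chosen comparable to the expected pulse width.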

5.2. Calculation of the centroid of the light distribution

This method has been widely used in digital imaging for the location of different image features with subpixel accuracy. By definition, the centroid of a continuous 1-D light intensity distribution is given by:

x̄ = ∫_{−∞}^{∞} x f(x) dx / ∫_{−∞}^{∞} f(x) dx (E9)

where f (x) is the irradiance distribution at the position x on the image, see [24].

In our case the signal geometric centroid is a function of the voltage signal shape generated by the scanner (a plane figure of two dimensional shape X) and is the intersection of all straight lines that divide X into two parts of equal moment about the line [25].

For the geometric centroid computation the "integral plane figures method" will be used, providing two coordinates: x̄, which we assign to the time axis (T̄ = x̄), and ȳ, which we assign to the voltage axis (V̄ = ȳ). Only the T̄ coordinate is needed to correlate the geometric centroid with the position in time (sample number) where the energy centre is located. The signal generated by the optical aperture is represented by the function:

y = v(t) (E10)
In Figure 21 the area under the curve, delimited by the function y = v(t) and the lines Aa and Bb, defines the integration limits of the plane figure. Selecting the differential area:

dA = v(t) dt (E11)

The integration limits on t are (a, b). Since the differential strip of width dt is a rectangle, its geometric centroid lies at half its base and half its height; as dt tends to zero, the horizontal offset dt/2 becomes negligible, so the strip centroid is taken at (t, v/2). The following equations are then used to calculate the geometric centroid [5]:

T̄ = t,  V̄ = v/2 (E12)

First Integral, to calculate the complete area under the signal curve.

A = ∫ dA = ∫ v(t) dt (E13)

Second integral: to find the T̄ coordinate that corresponds to the energetic signal centre.

T̄ A = ∫ T̄ dA = ∫ t v(t) dt (E14)
Solving for T̄:

T̄ = (∫ T̄ dA) / A = (∫ t v dt) / A (E15)

Third integral: to find the V̄ coordinate, which gives the voltage value present at the energetic signal centre coordinate, if required for future experimentation.

V̄ A = ∫ V̄ dA = ∫ (v/2) v dt (E16)

Solving for V̄:

V̄ = (∫ V̄ dA) / A = (∫ (v/2) v dt) / A = (1 / 2A) ∫ v² dt (E17)

Figure 21.

Optical Signal represented as a plane figure.
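The three integrals above have a direct discrete counterpart when the signal is a vector of uniformly spaced voltage samples. The following Python sketch is illustrative only; the sample step dt = 1 and the triangular test pulse are our assumptions.

```python
def geometric_centroid(v, dt=1.0):
    """Discrete form of E13, E15 and E17: each sample is a vertical
    strip of width dt and area v*dt whose centroid sits at (t, v/2)."""
    area = sum(val * dt for val in v)                                 # E13
    t_bar = sum(i * dt * val * dt for i, val in enumerate(v)) / area  # E15
    v_bar = sum((val / 2.0) * val * dt for val in v) / area           # E17
    return t_bar, v_bar

# For a symmetric triangular pulse the time centroid lies at the apex:
t_bar, v_bar = geometric_centroid([0, 1, 2, 3, 4, 3, 2, 1, 0])
print(t_bar)  # -> 4.0
```

Only t_bar is needed to locate the energy centre in time; v_bar is kept because E17 provides it for future experimentation.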

5.3. Power (energy) spectrum centroid

The Power Spectrum Centroid is a parameter from spectrum characterization mainly used so far in music computing, since the spectral centroid corresponds to a timbral feature that describes the brightness of a sound. In this application we correlate the Power Spectrum Centroid with the Energy Centre of the Signal, because the Power Spectrum Centroid can be thought of as the centre of gravity of the frequency components in a spectrum. The power spectrum is a positive real function of a frequency variable associated with a function of time; it has dimensions of power per hertz (Hz) or energy per hertz, that is, the power carried by the wave (signal) per unit frequency.

Power Spectrum Method:

The first step, to go from the time domain to the frequency domain, is to apply the Fourier series, which provides an alternate way of representing data. Instead of representing the signal amplitude as a function of time, the Fourier series represents the signal by how much information (power) is contained at different frequencies, and also allows certain frequency ranges, possibly originating from noise sources, to be isolated if necessary. Whenever we have a vector of data (a finite series), with MATLAB we can apply the FFT (Fast Fourier Transform) to convert from the time to the frequency domain, computing the Discrete Fourier Transform (DFT), the Fourier transform for discrete data with a finite number of non-zero values.

The second step is to compute the power spectrum, that is, the square of the absolute value of the FFT, whose result is taken as the power of the signal at each frequency.

S(ω) = (1/2π) |Σ_{n=−∞}^{∞} f_n e^{−iωn}|² = F(ω) F*(ω) / 2π (E18)

where F(ω) is the discrete-time Fourier transform of f_n.

The third step consists of applying the Power Spectrum Centroid:

SC (Hz) = Σ_{k=1}^{N−1} k · X_d(k) / Σ_{k=1}^{N−1} X_d(k) (E19)
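The chapter performs these steps with MATLAB's FFT; as an illustrative alternative, the pipeline of E18 and E19 can be sketched with a plain-Python DFT. The test tone, its length, and the fact that the 1/2π scale of E18 is dropped (it cancels in the centroid ratio) are our assumptions.

```python
import cmath
import math

def power_spectrum(x):
    """E18 (up to the 1/2pi scale): squared magnitude of the DFT at
    each non-negative frequency bin of a real signal."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2 + 1)]

def spectral_centroid(x):
    """E19: power-weighted mean of the frequency-bin indices."""
    p = power_spectrum(x)
    return sum(k * pk for k, pk in enumerate(p)) / sum(p)

# A pure sinusoid occupying exactly bin 5 has its centroid at bin 5:
tone = [math.cos(2 * math.pi * 5 * t / 64) for t in range(64)]
print(round(spectral_centroid(tone), 6))  # -> 5.0
```

The direct DFT here is O(N²) and only meant to make the formulas concrete; a real implementation would use an FFT, as the text describes.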

6. Method and electronic device to locate signal energy centre

The principal focus of this chapter is a method to find the energy centre of the signal generated by optical scanners and to reduce errors in position measurements. The method is based on the assumption that the signal generated by optical scanners for position measurement is a Gaussian-like shape signal, and this signal is processed by means of an electronic circuit.

6.1. Electronic method operating theory

A signal V(t) is obtained from the optical scanning aperture, as shown in Figure 22.

Figure 22.

a) Channel 1: original signal from aperture. b) Original signal representation.

The signal V(t) is amplified through an operational amplifier until saturation to obtain a square signal; this signal can be expressed as:

Vs(t) = Vs,max (E20)

which is a constant for a ≤ t ≤ b, as shown in Figure 23.

Figure 23.

a) Channel 1: original signal from aperture; Channel 2: square signal. b) Square signal representation.

The signal Vs(t) is integrated with respect to t in order to obtain the ramp Vr(t), as shown below in Figure 24:

Vr(t) = ∫ Vs(t) dt (E21)

The energetic signal centre is then located at the time where the ramp reaches half of its maximum value, Vr,max/2, as shown in Figure 24.

Figure 24.

a) Channel 1: original signal from aperture; Channel 2: ramp signal. b) Channel 1: ramp signal; Channel 2: pulse indicating the energetic signal centre overlapped on the ramp signal. c) Energetic signal centre search process representation.

This whole process is carried out by a circuit similar to the one shown in Figure 25.

Figure 25.

Electronic control circuit representation.

The advantage of this method is that the mathematical processing is performed by electronic components in real time; the resulting data vector of the saturated signal can then be handled in MATLAB [8].
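The saturate-integrate-halve sequence of Section 6.1 can be mimicked in software as a rough sketch of what the circuit does. The comparator fraction of 0.5 and the Gaussian-like test pulse are our own assumptions; the real circuit amplifies the signal to rail saturation rather than comparing against a fixed fraction.

```python
import math

def energy_centre_index(v, fraction=0.5):
    """Software analogue of the circuit: saturate the signal into a
    square pulse (E20), integrate it into a ramp (E21), and return the
    sample index where the ramp crosses half its maximum value."""
    top = max(v)
    square = [1.0 if s > fraction * top else 0.0 for s in v]  # comparator stage
    ramp, acc = [], 0.0
    for s in square:                                          # integrator stage
        acc += s
        ramp.append(acc)
    half = ramp[-1] / 2.0
    return next(i for i, r in enumerate(ramp) if r >= half)

# A symmetric Gaussian-like pulse centred at t = 30:
pulse = [math.exp(-((t - 30) / 6.0) ** 2) for t in range(100)]
print(energy_centre_index(pulse))  # -> 30
```

For a symmetric pulse any threshold yields the same centre, which is why the hardware version tolerates saturating the signal completely before integrating.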

6.2. Electronic control circuit method experimentation

The first stage of the experimentation started with regular signals produced by a function generator: a rectified sine signal, a rectified square signal, a rectified triangle signal and an Airy function signal, as illustrated in Figure 26.

Figure 26.

Regular signals simulated by a function generator: a) Rectified sine signal. b) Rectified square signal. c) Rectified triangle signal. d) Airy function signal.

The second stage consisted of processing each signal by means of the electronic circuit to obtain the energetic centre, as shown in Figure 27.

Figure 27.

Detailed example. a) Channel 1: original signal from function generator; Channel 2: square signal obtained from saturation. b) Channel 1: saturated signal; Channel 2: ramp signal obtained from integration. c) Channel 1: original signal from generator; Channel 2: impulse signal overlapped on the original signal from the generator to indicate the energetic signal centre.

The circuit was tested with different signals, obtaining satisfactory results. In Figure 28, the results are illustrated with a triangle signal and an Airy function signal.

Figure 28.

Impulse signal indicating the energetic signal centre overlapped on: a) triangle signal. b) Airy function signal.

Finally, to characterize and increase the accuracy and resolution of the signal measurements, four methods were selected and compared to obtain the most applicable settings for finding the energetic centre of the signal. The resulting comparison is shown in [2].


7. Conclusion

A pertinent method to detect the energetic centre of the signal generated by optical scanning with a rotating mirror and a simple photodiode was presented. The results of a series of experiments and simulations were used to analyse the performance of the method and the circuit with regular signals; consequently, further work is required to reduce the problems encountered in processing real signals. The method can also be used to detect the geometric centre of the light distribution on CCD and PSD sensors, so future experiments with these kinds of sensors should be considered. The results suggest that the proposed circuit can support different patterns of light distribution. To conclude, it is strongly recommended that this circuit and the photodiode be manufactured on the same integrated circuit.


  1. Rivas L. Moisés, Sergiyenko Oleg, Tyrsa Vera, Hernández Wilmar. Optoelectronic method for structural health monitoring. International Journal of Structural Health Monitoring 2010; 9(1) 105-120.
  2. Flores F. Wendy, Rivas L. Moisés, Sergiyenko Oleg Y., Rivera C. Javier. Comparison of signal peak detection algorithms in the search of the signal energy center for measuring with optical scanning. In: Falcón S. Bertha, Trejo O. René, Gómez E. Raúl (eds.) IEEE ROC&C2011: XXII autumn international conference on communications, computer, electronics, automation, robotics and industrial exposition, 27 Nov-1 Dec 2011, Acapulco Gro., México.
  3. Rodríguez Q. Julio C., Sergiyenko Oleg, Tyrsa Vera, Básaca P. Luis C., Rivas L. Moisés, Hernández B. Daniel. 3D body & medical scanners' technologies: methodology and spatial discriminations. In: Sergiyenko Oleg (ed.) Optoelectronic Devices and Properties. InTech; 2011. p307-322.
  4. Kennedy William P. The basics of triangulation sensors. CyberOptics Corp. (accessed 18 June 2012).
  5. Vahelal Ahmedabad. Seminar report: Sensors on 3D digitization. Hasmukh Goswami College of Engineering, Gujarat, India; 2010.
  6. Bass M. Handbook of Optics, Vol. II: Devices, Measurements and Properties. New York: McGraw-Hill; 1995.
  7. Plaza Javier, et al. Multi-channel morphological profiles for classification of hyperspectral images using support vector machines. Sensors; 2009. p197.
  8. Rivas M., Sergiyenko O., Aguirre M., Devia L., Tyrsa V., Rendón I. Spatial data acquisition by laser scanning for robot or SHM task. In: IEEE International Symposium on Industrial Electronics (ISIE-2008), Cambridge, United Kingdom; 2008. p1458-1463.
  9. Rieke George. Detection of Light: From the Ultraviolet to the Submillimeter. Cambridge University Press, West Nyack, NY, USA; 2002. pISBN 9780521816366.
  10. Burr-Brown Corporation. OPT301: integrated photodiode and amplifier datasheet; 1994.
  11. Everlight Electronics Co. Ltd. PT334-6C: technical data sheet, 5mm phototransistor T-1 3/4; 2005.
  12. Jung R. Image sensors technology for beam instrumentation. CERN, CH-1211 Geneva 23, Switzerland.
  13. Theuwissen Albert J.P. Solid-State Imaging with Charge-Coupled Devices. Kluwer Academic Publishers; 1995.
  14. Texas Instruments. TC281: 1036×1010-pixel CCD image sensor datasheet; 2003.
  15. Battiato Sebastiano, Bruna Arcangelo Ranieri, Messina Giuseppe. Image Processing for Embedded Devices. Bentham Science Publishers, Sharjah, UAE; 2010.
  16. ADIMEC. CCD vs. CMOS image sensors in defense cameras; 2011.
  17. Boni Fabio. What is the difference between PSD and CCD sensor technology? FASEP (accessed 20 June 2012).
  18. Williams R.D., Schaferg P., Davis G.K., Ross R.A. Accuracy of position detection using a position sensitive detector. IEEE Transactions on Instrumentation and Measurement 1998; 47(4) 914.
  19. Massari N., Gottardi M., Gonzo L., Simoni A. High speed digital CMOS 2D optical position sensitive detector. In: Castello C., Baschirotto A. (eds.) ESSCIRC 2002: Proceedings of the 28th European Solid-State Circuits Conference, 24-26 September 2002, Firenze, Italy.
  20. van Oijen A.M., Köhler J., Schmidt J., Müller M., Brakenhoff G.J. 3-Dimensional super-resolution by spectrally selective imaging. Chemical Physics Letters 1998; 292: 183-187.
  21. Rivas L. Moisés, Sergiyenko Oleg, Tyrsa Vera. Machine vision: approaches and limitations. In: Xiong Zhihui (ed.) Computer Vision. InTech; 2008. p396-428.
  22. Sergiyenko Oleg, Tyrsa Vira, Básaca P. Luis, Rodríguez Q. Julio C., Hernández W., Nieto H. Juan I., Rivas L. Moisés. Electromechanical 3D optoelectronic scanners: resolution constraints and possible ways of its improvement. In: Sergiyenko (ed.) Optoelectronic Devices and Properties. InTech; 2011. p549-582.
  23. Palshikar Girish Keshav. Simple algorithms for peak detection in time-series. Tata Research Development and Design Centre (TRDDC), 54B Hadapsar Industrial Estate, Pune 411013, India.
  24. Stanton R., Alexander J.W., Dennison E.W., Glavich T.A., Hovland L.F. Optical tracking using charge-coupled devices. Optical Engineering 1987; 26(9) 930-938.
  25. Herrera Larry. Magister in Educational Planning, Oriente University of Bolívar (accessed 10 June 2012).
