Open access peer-reviewed chapter

Physical Characteristics, Sensors and Applications of 2D/3D-Integrated CMOS Photodiodes

Written By

Oscal T.-C. Chen, Yi-Yang Lee and Robin R.-B. Sheen

Submitted: 02 March 2015 Reviewed: 30 April 2015 Published: 07 October 2015

DOI: 10.5772/60737

From the Edited Volume

Optoelectronics - Materials and Devices

Edited by Sergei L. Pyshkin and John Ballato


Abstract

Two-dimensional (2D) photodiodes are reverse-biased at a moderate voltage, whereas 3D photodiodes are typically operated in the Geiger mode. The design of integrated 2D and 3D photodiodes is investigated in terms of quantum efficiency, dark current, crosstalk, response time and so on. Beyond the photodiodes themselves, a charge supply mechanism provides proper charge for a high dynamic range in 2D sensing, and a feedback pull-down mechanism shortens the response time of 3D sensing for time-of-flight applications. In particular, rapid parallel reading in the 3D mode is achieved by a bus-sharing mechanism. Using the TSMC 0.35 μm 2P4M technology, a 2D/3D-integrated image sensor including P-diffusion_N-well_P-substrate photodiodes, pixel circuits, correlated double sampling circuits, sense amplifiers, a multi-channel time-to-digital converter, column/row decoders, bus-sharing connections/decoders, readout circuits and so on was implemented with a die size of 12 mm × 12 mm. The proposed 2D/3D-integrated image sensor can capture a 352×288-pixel 2D image and an 88×72-pixel 3D image with a dynamic range up to 100 dB and a depth resolution of around 4 cm, respectively. Therefore, our image sensor can effectively capture gray-level and depth information of a scene at the same location without additional alignment and post-processing. Finally, currently available 2D and 3D image sensors are discussed and compared.

Keywords

  • CMOS photodiode
  • active pixel
  • Geiger mode
  • time of flight
  • image sensor

1. Introduction

Nowadays, three-dimensional (3D) images and videos are widely used in various applications such as games, robotics and cinema. How to effectively capture 3D information has become a critical issue. In general, stereo images from two slightly different viewpoints are used to establish 3D perception, a mechanism based on binocular parallax. For example, the Panasonic LUMIX® DMC-3D1K 3D camera takes two pictures simultaneously and then displays them to the left and right eyes of a viewer for 3D perception [1]. Since the parallax between the two capturing viewpoints is fixed, such 3D perception cannot easily adapt to different viewpoints.

Another approach also adopts two cameras, in which one captures a gray-level image and the other captures object depths using an active light source, such as infrared. Based on the characteristics of the light reflected from objects, a 3D image can be built via post-processing computation. For instance, Kinect I and Kinect II for Microsoft Xbox acquire object depths using the structured-light technique and the Time-Of-Flight (TOF) technique, respectively [2], [3]. In particular, the TOF technique estimates the positions of objects in space according to the travel time of light that is emitted from a light source, reaches an object, is reflected and arrives at a sensor [4], as depicted in Fig. 1. Based on the different travel times captured by photodiodes, a depth map of a scene can be obtained. The information from the luminance (2D) and depth (3D) cameras is used to reconstruct complete 3D pictures that can be observed from multiple viewpoints. Since these two cameras may be located at different positions, the capture-point difference needs to be compensated to yield a correct 3D picture.

Figure 1.

Concept of TOF.
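
Since depth follows directly from the round-trip time, a minimal sketch of this relation may be helpful; the round-trip time below is an illustrative value, not a measurement from the chapter.

```python
# Depth from time of flight: the light travels to the object and back,
# so the object distance is half the round trip times the speed of light.
C = 3.0e8  # speed of light (m/s), approximated

def tof_depth_m(round_trip_time_s: float) -> float:
    """Object distance corresponding to a measured round-trip time."""
    return C * round_trip_time_s / 2.0

# Illustrative example: a 20 ns round trip maps to a 3 m object distance.
print(tof_depth_m(20e-9))  # 3.0
```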

In this chapter, the physical characteristics of CMOS photodiodes are illustrated first, and the design of high-efficiency photodiodes for 2D and 3D sensing is addressed. Second, a 2D/3D-integrated active pixel is presented, in which the same photodiodes are shared by 2D and 3D capturing. Additionally, the readout circuits used for the 2D and 3D image sensors are integrated. In particular, two feedback mechanisms are adopted to delay saturation in 2D luminance sensing and to accelerate the response of 3D depth sensing. Third, in order to achieve a compact pixel array, high readout speed and minimal coupling between transmission lines and photodiodes, a bus-sharing mechanism is developed to effectively accomplish parallel reading. Fourth, based on the TSMC 0.35 μm 2P4M CMOS technology, the proposed 2D/3D-integrated image sensor was implemented with a die size of 12 mm × 12 mm, where the 2D and 3D modes have resolutions of 352×288 and 88×72 pixels, respectively, under a fill factor of 44%. Finally, the proposed image sensor and the currently available 2D and 3D image sensors are presented and compared.


2. Physical characteristics of CMOS photodiodes for 3D sensing

The p-n junction of a diode under a reverse-biased voltage is employed to perceive photons. Such a diode is called a photodiode. Owing to the electric field across the reverse-biased junction, a depletion region between the p and n layers is created. The size of the depletion region depends on the magnitude of the reverse bias and on the doping concentrations of the p and n layers. A photon absorbed in this depletion region can generate an electron-hole pair, which is drifted and guided by the electric field to form a photocurrent. When the reverse bias approaches the breakdown voltage, the depletion region becomes large enough to increase the generation rate of electrons and holes, and the high electric field speeds up carrier drift and reduces the chance of recombination. This operating manner is called the Geiger mode [5], which responds rapidly to incident light. Restated, even weak light can induce a photocurrent sufficient for detection.

A 3D image can be captured by the time-of-flight technique, where object depths are interpreted from the round-trip time of light that originates from a light source, such as a light-emitting diode (LED), shines on objects and is reflected back. To effectively capture the reflected light, photodiodes must respond quickly to photons owing to the high speed of light. Hence, an avalanche photodiode operating in the Geiger mode is a good choice for TOF sensing. An avalanche photodiode has high sensitivity and can precisely detect the first few arriving photons. With such physical characteristics, a 3D depth map can be successfully obtained from an array of avalanche photodiodes.

Many avalanche complementary metal-oxide-semiconductor (CMOS) photodiodes have been explored and proposed in the literature. For instance, Niclass et al. proposed a 3-D imager based on a 2-D array of single-photon avalanche diodes fabricated in a standard CMOS technology, where sub-millimetric precision could be achieved over a depth of field of several meters [6]. Marshall et al. realized a CMOS 64×64 pixel array in which avalanche photodiodes and active pixel sensors were integrated [7]. Zappa et al. adopted a standard 0.8-μm CMOS technology to realize an integrated sensor consisting of photodiodes, an input sensing circuit, and photon-counting and control circuits, where active quenching and active reset circuits were implemented to drive a single-photon avalanche diode [8]. Faramarzpour et al. developed avalanche photodiodes and their corresponding driving circuits in a standard 0.18-μm CMOS technology, where the breakdown voltage was 10.2 V and the dead time was 30 ns [9]. Atef et al. implemented two photodiode structures using a 40-nm standard CMOS technology, where one could perform like an avalanche photodiode and the other functioned as a regular photodiode [10]. Pancheri et al. presented a low-noise avalanche photodiode based on a graded junction in a 0.15-μm CMOS technology [11]. Other studies focused on improving processes, materials and doping concentrations in order to minimize noise, lower dark currents and enhance sensitivity [12]-[15].

Using the TSMC 0.35 μm CMOS technology, P-diffusion_N-well_P-substrate, N-well_P-substrate and N-diffusion_P-substrate photodiodes are explored to understand their physical characteristics; their cross sections are shown in Fig. 2 [16]. Figures 3 and 4 depict the breakdown voltages of P-diffusion_N-well_P-substrate and N-well_P-substrate photodiodes of the same area with different perimeters, respectively. The measurement results reveal that the breakdown voltage becomes larger as the perimeter decreases. Additionally, the breakdown voltage of a P-diffusion_N-well_P-substrate photodiode is smaller than that of an N-well_P-substrate photodiode. A photodiode with a deeper p-n junction, larger area and smaller perimeter has a higher breakdown voltage. In particular, the lateral region of a photodiode may form a dead space, which decreases the fill factor, quantum efficiency and breakdown voltage [16]. From the abovementioned phenomena, a circular photodiode with the maximum ratio of area to perimeter is preferred to attain a high breakdown voltage.

In addition to the boundary effect, the efficiency of induced photocurrents is examined using an 850 nm laser as the light source. Referring to [17], the current gain can be expressed as

$$G=\frac{I_{photo\_Geiger}-I_{dark\_Geiger}}{I_{photo\_typical}-I_{dark\_typical}}\tag{1}$$

Iphoto_typical and Idark_typical are the induced and dark currents of a photodiode at a typical reverse-biased voltage, respectively. This typical voltage can be Vdd, the supply voltage of circuits in the adopted process technology. Restated, it is the maximum reverse-biased voltage for 2D sensing of a photodiode. Additionally, Iphoto_Geiger and Idark_Geiger are the induced and dark currents of a photodiode in the Geiger mode, respectively. The measurement results, depicted in Fig. 5, reveal that the current gain of the N-diffusion_P-substrate photodiode is the best, followed by the P-diffusion_N-well_P-substrate photodiode, where each of the three photodiodes has an area of 30 μm × 30 μm and Vdd is 3.3 V. Owing to the large variations of the currents, a logarithmic scale of 'log(fA)' on the left Y axis is adopted to clearly present the measured currents.
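
As a hedged illustration of Eq. (1), the following sketch evaluates the gain from four measured currents; the femtoampere values are placeholders, not data from Fig. 5.

```python
# Current gain of a photodiode in the Geiger mode relative to a typical
# reverse bias (Eq. 1): the net photocurrent (induced minus dark) in the
# Geiger mode divided by the net photocurrent at the typical bias.
def current_gain(i_photo_geiger: float, i_dark_geiger: float,
                 i_photo_typical: float, i_dark_typical: float) -> float:
    return (i_photo_geiger - i_dark_geiger) / (i_photo_typical - i_dark_typical)

# Hypothetical currents in fA, for illustration only:
g = current_gain(5.0e6, 1.0e5, 2.0e3, 5.0e1)
print(f"current gain G = {g:.0f}")  # ~2513
```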

In addition to the breakdown voltage of a photodiode, the crosstalk effect, as shown in Fig. 6, needs to be investigated to minimize interference from neighboring pixels [18], [19]. Figure 7 displays a top view of 3×3 photodiode arrays, with and without an N-well guard ring, implemented in the TSMC 0.35 μm CMOS technology. The photodiode in the center of a 3×3 photodiode array is measured while the neighboring eight photodiodes are biased at different voltages. The measured breakdown voltages of 3×3 P-diffusion_N-well_P-substrate and N-well_P-substrate photodiodes with and without an N-well guard ring are depicted in Fig. 8. The breakdown voltage of a photodiode with a guard ring is greater than that of a photodiode without a guard ring. Notably, the central pixel has a higher breakdown voltage when its neighboring pixels are reverse-biased at a higher voltage. Figure 9 shows the breakdown voltages of 3×3 P-diffusion_N-well_P-substrate, N-well_P-substrate and N-diffusion_P-substrate photodiodes with N-well guard rings while the neighboring pixels are reverse-biased at voltages from 0 V to 9 V. The N-well_P-substrate photodiode yields the largest breakdown voltage, followed by the P-diffusion_N-well_P-substrate photodiode and then the N-diffusion_P-substrate photodiode.

Figure 2.

Cross sections of photodiodes. (a) P-diffusion_N-well_P-substrate. (b) N-well_P-substrate. (c) N-diffusion_P-substrate photodiodes.

Figure 3.

Breakdown voltages of P-diffusion_N-well_P-substrate photodiodes.

Figure 4.

Breakdown voltages of N-well_P-substrate photodiodes.

Figure 5.

Induced currents, dark currents and gains of photodiodes under an 850 nm laser.

Figure 6.

Topology of a crosstalk effect.

Figure 7.

Top view of 3×3 photodiode arrays. (a) Without a guard ring. (b) With an N-well guard ring.

Figure 8.

Measured breakdown voltages. (a) 3×3 P-diffusion_N-well_P-substrate photodiodes. (b) 3×3 N-well_P-substrate photodiodes.

Figure 9.

Measured breakdown voltages. (a) P-diffusion_N-well_P-substrate. (b) N-well_P-substrate. (c) N-diffusion_P-substrate.


3. 2D/3D-integrated pixel

A 3D depth sensor can yield a depth map of a scene but fails to provide fine gray-scale information. In contrast, a 2D image sensor can interpret a pixel at a fairly fine gray scale but offers no depth resolution. Accordingly, there is a demand for using multiple 2D and/or 3D sensors to capture a real-world scene and then display a 3D picture to a viewer. For example, if two 2D image sensors are employed to mimic binocular vision, the 3D perception adheres to a specific viewpoint. If a 2D image sensor and a 3D depth sensor are used together to capture objects, the difference between the viewpoints of these two sensors at different positions has to be corrected. Additionally, a camera including two individual sensors incurs relatively high hardware cost and large power consumption. Therefore, we propose to realize 2D luminance and 3D depth perception in one sensor, in which the 2D and 3D operations are executed alternately. The design concept of the proposed 2D/3D-integrated image sensor is illustrated in Fig. 10 [20].

Figure 10.

Design methodology of the 2D/3D-integrated sensor.

The first step in integrating a 2D image sensor and a 3D depth sensor is to design a photodiode that can be shared by both. Figure 11 depicts the measured spectral responses of P-diffusion_N-well_P-substrate, N-well_P-substrate and N-diffusion_P-substrate photodiodes using the TSMC 0.35 μm CMOS technology, where the reverse-biased voltage is zero. From the measured results, the N-well_P-substrate and P-diffusion_N-well_P-substrate photodiodes are the best and second best, respectively. In the following, the operations of photodiodes in 2D image and 3D depth sensors are studied to determine which photodiode structure is preferred.

Figure 11.

Measured spectral responses of three CMOS photodiodes.

3.1. Photodiodes of 2D and 3D sensors

When a PN photodiode is reverse-biased, incident light reaching the depletion region generates numerous electron-hole pairs, which create a photocurrent. The photocurrent increases with the light intensity. Over an exposure period, the sensing circuit of a pixel converts the integrated photocurrent to an analogue voltage that represents gray-level luminance. Such a mechanism accomplishes 2D image sensing. When the reverse bias approaches the breakdown voltage of a photodiode, the photodiode has a relatively large electric field and depletion region in which electron-hole pairs are easily generated by a few photons. This kind of photodiode, operating in the Geiger mode, is used to acquire a 3D depth map, where the time of flight of light emitted from a source, reaching objects, being reflected and arriving at photodiodes is computed.

Physical characteristics of photodiodes with different geometric and junction structures must be well understood in order to effectively integrate 2D and 3D photodiodes. From our previous study [16], a corner in the geometric shape of a photodiode easily leads to breakdown because charge tends to gather at such a point. Accordingly, as the number of corners in a photodiode increases, the breakdown voltage decreases. Furthermore, the dark current is correspondingly enlarged as the number of corners increases. From the previous section, we conclude that the breakdown voltage is roughly proportional to the area of a photodiode and inversely proportional to its perimeter, and that increasing the number of pixels in an array lowers the breakdown voltage as well. Therefore, a photodiode with a circular shape and a guard ring is chosen to achieve high-efficiency sensing capability.

According to the quantum efficiency of the photodiodes in Fig. 11, the N-well_P-substrate and P-diffusion_N-well_P-substrate photodiodes are good candidates. The reverse-biased voltages of a photodiode used for 2D and 3D sensing are quite different. If the N-well_P-substrate photodiode were adopted for both 2D and 3D sensing, the P-substrate would have to be driven by two different voltages. However, the P-substrate is a common base shared by the transistors of all circuits, and under the standard CMOS technology it is always connected to ground. Hence, the N-well_P-substrate photodiode does not satisfy our demand. In the P-diffusion_N-well_P-substrate photodiode, the P-diffusion and P-substrate are tied to ground to form two PN junctions for 2D sensing, where the P-diffusion_N-well and N-well_P-substrate junctions receive short-wavelength and long-wavelength light, respectively. When 3D sensing is conducted, the P-diffusion and P-substrate are biased at a large negative voltage and 0 V, respectively. Such biasing makes P-diffusion_N-well an avalanche photodiode and N-well_P-substrate a photodiode under a normal reverse bias. Once a few photons reach the P-diffusion_N-well_P-substrate photodiode, the P-diffusion_N-well junction reacts rapidly, while the N-well_P-substrate junction responds only weakly.

3.2. Pixel circuits

To capture 2D gray-level information, the dynamic range is one of the key issues in a pixel design. A pixel circuit with a larger dynamic range can interpret a greater range of light luminance. Figure 12 shows the pixel circuit in the 2D mode, which employs an extra path to provide charges [21]. This path includes two transistors, M2 and M3, where M2 is manipulated by an external control signal, and M3 functions like an active resistor with gate-node sensing. Icharge-supply flows through this path to compensate the current sunk by Iphoto, and thus delays the gate node of M6 from reaching 0 V. The simulations in Fig. 13 reveal that the pixel circuit with the charge supply mechanism can reach up to 110 dB, greatly extending the dynamic range compared with the design without a charge supply mechanism.
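
To make the dynamic-range figure concrete, the sketch below expresses it as the usual 20·log10 ratio between the largest and smallest detectable signal levels; the endpoint values are hypothetical, chosen only to reproduce a 110 dB span.

```python
import math

def dynamic_range_db(max_signal: float, min_signal: float) -> float:
    """Dynamic range as the dB ratio of the largest to smallest detectable signal."""
    return 20.0 * math.log10(max_signal / min_signal)

# A signal ratio of about 316,000:1 corresponds to ~110 dB,
# the simulated figure for the charge-supply pixel in Fig. 13.
print(f"{dynamic_range_db(3.16e5, 1.0):.1f} dB")
```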

Figure 12.

2D pixel circuit.

Figure 13.

Simulated dynamic range of a 2D pixel circuit. (a) With a charge supply mechanism. (b) Without a charge supply mechanism.

The pixel circuit in the 3D mode detects objects with depth information. The photodiode in a pixel is reverse-biased close to the breakdown voltage. Once the photodiode is triggered by a few photons, a significant current is generated. Such an instant and large current accelerates the response of the pixel circuit. Referring to a simple passive quenching structure [22], we propose a modified 3D pixel circuit, as depicted in Fig. 14. When weak light reaches this 3D pixel, a large current is induced, causing a voltage drop at the N node of the photodiode. This voltage drop is further expedited by M4, M5 and an inverter. The timing diagram of the proposed 3D pixel circuit is depicted in Fig. 15. During reset, the N node of the photodiode is charged to a voltage above 1/2 Vdd, which lies within the input noise margin of an inverter outputting 0 V. The source and gate nodes of M4 are connected to the input and output of the inverter, respectively, to create a feedback loop. Additionally, M5 is linked to the drain node of M4 and is manipulated by the '3D control' signal to build a pull-down path. After reset, the photodiode begins to perceive photons. Once a few photons are detected, a discharge action is taken. At the same time, M4 and M5 are activated to accelerate the inverter input dropping below 1/2 Vdd. Such a pull-down path shortens the time interval associated with light detection and thus raises the depth resolution. Figure 16 shows the circuit diagram and layout of the proposed 2D/3D-integrated pixel, which is easily switched between operation modes using the 2D and 3D control signals.

Figure 14.

3D pixel circuit.

Figure 15.

Timing diagram of a 3D pixel circuit. (a) Timing chart. (b) Simulation chart.

Figure 16.

2D/3D-integrated pixel circuit. (a) Circuit diagram. (b) Layout diagram.


4. Parallel reading using the bus-sharing mechanism

When the proposed 2D/3D-integrated pixel circuit operates in the 3D mode, a pulse signal is triggered by the perceived photons. Afterwards, the time difference between light emission and detection is calculated, and the corresponding depth information is derived from the velocity of light. An independent bus line from each pixel of a large pixel array cannot be realized at low hardware cost. Hence, there is a need for low-cost parallel reading and timely computation. Nevertheless, as the size of a pixel array grows, the hardware complexity of parallel reading expands dramatically. Not only is the area increased, but coupling between photodiodes and transmission lines is likely induced as well. In the conventional works [23], [24], the reading of trigger pulses is fulfilled in a time-multiplexed manner.

Figure 17.

Parallel reading of the bus-sharing mechanism.

To overcome the abovementioned drawbacks, a bus-sharing mechanism is proposed to realize parallel reading at low hardware cost. This mechanism employs a connection topology in which each pixel connects to multiple shared buses, as displayed in Fig. 17. Since each bus is connected to multiple photodiodes, a decoder associated with the shared buses is needed to determine which photodiode is activated. Restated, when buses are enabled by one or multiple photodiodes simultaneously, these buses become Vdd. Based on the pattern of the enabled buses, the locations of the photodiodes are discovered. For instance, when light is sensed by pixel P1, the buses LO1, C1, R1 and ROp are pulled up to Vdd. When P2 also observes light, it enables the buses LO2, C1, R2 and ROp-1. Although the bus C1 is shared by P1 and P2, the other three buses are quite different and can be used to distinguish P1, P2 and others. Therefore, according to the enabled buses and triggered time, when and which photodiode(s) are activated can be decoded and ascertained.
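
The following sketch illustrates this decode under assumed bus-index conventions (one row bus, one column bus and two oblique buses per pixel); the naming and indexing are ours for illustration, and the simultaneous-trigger corner case discussed next is ignored here.

```python
# Bus-sharing decode: every pixel drives one row (R), one column (C) and
# two oblique (LO, RO) buses; a pixel is reported as triggered when all
# four of its buses are high, mirroring the AND-gate decoder in the text.
ROWS, COLS = 72, 88  # 3D-mode array size of the proposed sensor

def buses_of(r: int, c: int) -> tuple[int, int, int, int]:
    """Assumed indexing: row r, column c, anti-diagonal r+c, diagonal r-c."""
    return r, c, r + c, r - c + (COLS - 1)

def decode(row_bus, col_bus, lo_bus, ro_bus) -> set[tuple[int, int]]:
    """Return the set of pixel coordinates whose four buses are all enabled."""
    hits = set()
    for r in range(ROWS):
        for c in range(COLS):
            rb, cb, lb, ob = buses_of(r, c)
            if row_bus[rb] and col_bus[cb] and lo_bus[lb] and ro_bus[ob]:
                hits.add((r, c))
    return hits

# Bus budget matches the chapter: 72 + 88 + 2*(72 + 88 - 1) = 478 lines.
assert ROWS + COLS + 2 * (ROWS + COLS - 1) == 478
```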

Figure 18.

Special case of a mistakenly decoded photodiode.

Photodiodes triggered at different times can be identified by the proposed bus-sharing mechanism in an effective and low-cost manner. However, when many photodiodes receive photons simultaneously, a special condition must be considered: an un-triggered pixel completely encircled by pixels triggered at the same moment is mistaken for a triggered one. In the special case depicted in Fig. 18, P10 does not capture any light, while the photodiodes P5 ~ P7, P9, P11 and P13 ~ P15 are activated by light. The buses LO4, C3, RO3 and R2 addressed by P10 are enabled by (P7, P13), (P9, P11), (P5, P15) and (P6, P14), respectively. When decoding P10, an AND gate with inputs LO4, C3, RO3 and R2 yields logic-1, which is erroneous. Such a situation can be resolved because P10 alone would be enabled at an earlier or later time. Restated, by analyzing the activation patterns of P10 and its neighboring pixels over different time intervals, the accurate triggering status of P10 can be attained.

4.1. Multi-channel time-to-digital converter

When an LED generates light projecting onto an object, the time-to-digital converter (TDC) begins to measure the travel time of light from the LED to a photodiode. The derived travel time multiplied by the velocity of light represents twice the distance from the sensor to the object. According to this distance information, the depth of an object from the sensor can be attained. Figure 19 shows the block diagram of the multi-channel TDC, which is an event-driven approach [25]. The counter in the TDC obtains a timescale number correlated with the time of flight. Additionally, the delay-line circuit interpolates a fractional scale. To attenuate the influence of temperature and process, differential pairs are adopted to realize the flip-flops of the delay line.

A 3D depth map comes from a pixel array in which every pixel demands timing information. If each pixel had its own timing circuit to compute depth, the many timing circuits would incur great hardware cost and high power dissipation. To allow the TDC to be shared by multiple pixels, a multi-channel TDC composed of a ring TDC, a thermometer encoder and a 4-bit counter, as displayed in Fig. 19, is adopted. Referring to [25], Fig. 20 shows a 15-stage ring TDC, which is the core of the multi-channel timing circuit. When the 'Start' signal is active, a NAND gate and 14 inverters form a ring oscillator. This ring TDC produces 15 outputs, C1, C2, ... and C15, which are compacted by the thermometer encoder to give a 4-bit fine result. In the meantime, the counter yields a 4-bit coarse result as well. The 4-bit fine and coarse results form 8-bit timing information, which is stored in the latch array and used to interpret the depth information; a sketch of how such a timestamp can be assembled and converted to depth is given below.
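
A hedged sketch of this coarse/fine combination follows; the 15-stage ring, thermometer encoder, 4-bit counter and 8-bit result come from the text, while the bit packing and lap arithmetic are our assumptions for illustration.

```python
# Coarse/fine TDC timestamp: the 4-bit counter counts ring laps while the
# thermometer-coded ring outputs interpolate within a lap.
STAGE_DELAY_S = 234e-12   # measured per-stage delay reported in Section 5
STAGES = 15               # one NAND gate plus 14 inverters

def thermometer_to_fine(ring_outputs) -> int:
    """Compact the 15 thermometer-coded outputs C1..C15 into a fine count."""
    return sum(1 for c in ring_outputs if c)

def tdc_code(coarse: int, ring_outputs) -> int:
    """8-bit timestamp: 4-bit coarse count above 4 bits of fine interpolation."""
    return ((coarse & 0xF) << 4) | (thermometer_to_fine(ring_outputs) & 0xF)

def code_to_depth_m(code: int) -> float:
    """Map a timestamp back to object depth (half of the light round trip)."""
    coarse, fine = code >> 4, code & 0xF
    travel_time_s = (coarse * STAGES + fine) * STAGE_DELAY_S
    return 3.0e8 * travel_time_s / 2.0

# Example: coarse = 1 lap plus 13 fine stages -> roughly 1 m of depth.
print(code_to_depth_m(tdc_code(1, [1] * 13 + [0, 0])))  # ~0.98 m
```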

Figure 19.

Block diagram of multi-channel TDC.

Figure 20.

Ring TDC.


5. 2D/3D-integrated image sensor

The proposed 2D/3D-integrated image sensor employs the P-diffusion_N-well_P-substrate photodiode, which can be switched between different reverse-biased voltages to operate in the 2D or 3D photo-sensing mode. The Correlated Double Sampling (CDS) circuit and readout circuit used in the 2D mode, and the Sense Amplifiers (SA), multi-channel TDC and readout circuit used in the 3D mode are implemented. Additionally, sequential and parallel reading is realized by row and column decoders in the 2D mode, and by bus-sharing connections and decoders in the 3D mode. The CDS circuit reduces the fixed-pattern noise, and the SA boosts a trigger pulse generated from a pixel to lower the dead time. The block diagram of the proposed 2D/3D-integrated image sensor is depicted in Fig. 21. Since human visual perception has better resolution in luminance than in depth, the proposed sensor adopts resolutions of 352×288 and 88×72 pixels for 2D and 3D sensing, respectively. Restated, the pixel count in the 3D mode is one sixteenth of that in the 2D mode to lessen hardware cost. Particularly, to effectively decrease the overhead of parallel reading in the 3D mode, the bus-sharing mechanism can address the 88×72 pixels using 478 lines rather than 6,336 lines. After decoding the 478 signal bits, 88×72 bits are stored in latches at every counting-time interval, where each bit indicates whether a pixel is triggered.
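
The 478-line figure is consistent with each pixel driving one row, one column and two oblique buses, as assumed in the earlier sketch; the breakdown below is our arithmetic, with each oblique set spanning 88 + 72 − 1 = 159 lines.

$$\underbrace{72}_{\text{rows}}+\underbrace{88}_{\text{columns}}+\underbrace{2\,(88+72-1)}_{\text{oblique buses}}=478\;\ll\;88\times72=6336$$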

Figure 21.

Block diagram of the proposed 2D/3D-integrated image sensor.

The TSMC 0.35 μm 2P4M CMOS technology was employed to implement the proposed 2D/3D-integrated image sensor with a die size of 12 mm × 12 mm [26]. The fill factor is about 44%, where a photodiode has a diameter of 10 μm. The peak and average powers are 2.56 W and 0.58 W, respectively, at a supply voltage of 3.3 V. The dynamic range of luminance detection can reach 100 dB. Each stage of the ring TDC was measured to have a delay of 234 ps, which corresponds to a depth resolution of about 4 cm. When capturing an object, the external start signal resets the pixel circuit, enables the TDC and triggers the light source. The proposed sensor then receives the light reflected from the object and measures the travel time using the TDC. The counted travel time is used to derive object depths. In our measurement, an FPGA board is programmed to manipulate an LED array and provide the timing control signals to read data from the proposed chip, as displayed in Fig. 22, where a cylindrical box is located in front of the sensor. Under illumination from an 850 nm LED array, Fig. 23 shows the depth map of the cylindrical box. This depth map exhibits a similar cylindrical shape but includes a little noise.
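
As a sanity check on these figures, the per-stage delay translates into the quoted depth resolution through the TOF relation; the arithmetic below is ours, rounding up to the stated ~4 cm.

$$\Delta d=\frac{c\,\Delta t}{2}=\frac{(3\times10^{8}\,\mathrm{m/s})\,(234\times10^{-12}\,\mathrm{s})}{2}\approx3.5\,\mathrm{cm}$$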

Figure 22.

Measurement setup using the FPGA board. (a) Block diagram of measurements. (b) Proposed chip with on-board package and a convex lens. (c) Discovering a cylindrical box.

Figure 23.

Measured depth map of a cylindrical box.


6. Currently available techniques of 2D and 3D image sensors

Nowadays, CMOS image sensors generally offer high resolutions, high frame rates, large dynamic ranges and low power dissipation. To meet these demands, pixel circuits, readout structures, analogue-to-digital converter (ADC) architectures and 3D integrated circuits (ICs) have been explored extensively, along with high-efficiency photodiodes. For example, Lim et al. adopted two column-shared cyclic ADCs arranged in a two-stage stack to reduce the power consumption of the readout circuit [27]. Seo et al. employed a column-parallel folding-integration and cyclic ADC architecture to minimize noise, reaching 13-bit or higher resolution for each pixel [28]. To enhance pixel sensitivity, Xu et al. designed a capacitive trans-impedance amplifier pixel with a tiny metal-oxide-metal (MOM) capacitor; to compensate the mismatch of the small integration capacitors across the pixel array, an on-chip calibration scheme with in-pixel circuits was developed [29]. Afterwards, Xu et al. further implemented a small multi-layer MOM integration capacitor to achieve high sensitivity and low mismatch across a pixel array, effectively shrinking their previous capacitive trans-impedance amplifier pixel [30]. Sakakibara et al. added a floating diffusion node and a storage node in a pixel circuit to support two-channel readouts for low and high intensities; via single-slope 12-bit ADCs, their pixel with a dual-storage structure achieved up to an 83 dB dynamic range [31]. Chung et al. proposed a 0.5 V pulse-width-modulation CMOS image sensor with threshold-variation-canceling and programmable current-controlled threshold schemes, in which the fixed-pattern noise could be effectively minimized [32]. Yeh et al. used 3D IC technology to stack a pixel array, an ADC array and an image-processing array, which could operate in parallel, to achieve a high frame rate and high spatial resolution [33]. Sukegawa et al. combined a back-illuminated photosensitive layer and a logic layer into one image sensor; this approach shrank an 8-Mpixel sensor to a 1/4-inch format. Additionally, RGBW (Red, Green, Blue and White) coding was added in the color filter of the pixel to improve sensitivity, and binning-SVE was adopted to reach a high dynamic range [34]. The specifications of the abovementioned approaches are summarized in Table 1.

| | Xu et al. [30] | Sukegawa et al. [34] | Yeh et al. [33] | Chung et al. [32] | Sakakibara et al. [31] | Seo et al. [28] | Lim et al. [27] |
|---|---|---|---|---|---|---|---|
| Pixel architecture | CTIA | N/A | 4T-APS PPD | N/A | N/A | 4-TR PPD | 2-T PPD |
| Shutter | Global | N/A | Rolling | N/A | Global | N/A | N/A |
| Process (CMOS) | 0.18 μm | 65 nm (1P4M) | 0.18 μm (1P6M) | 0.18 μm (1P6M) | 90 nm (1P4M) | 0.18 μm (1P4M) | 0.13 μm (1P4M) |
| Array size | 640×480 | 3280×2464 | 2048×1536 | 64×40 | 5M | 1032×1024 | 1696×1212 |
| Pixel size (μm²) | 8.7×8.22 | 1.12×1.12 | 2.8×2.8 | 10×10 | 5.86×5.86 | 7.5×7.5 | 2.25×2.25 |
| Fill factor (%) | 41 | N/A | 28 | 25.4 | N/A | 52 | N/A |
| Frame rate (fps) | 400 | 30 | 100 | 78.5 | N/A | 2.2 | 250 |
| Dynamic range | 50.1 dB | 60 dB | N/A | 82 dB | 83 dB | 78 dB | 59 dB |
| FPN | 0.55% | N/A | 0.43 mVrms | 0.055% | N/A | N/A | 0.1% |
| Sensitivity | 1.89 V/lux·s | 6.7k e−/lux·s (white pixel) | 3.69 V/lux·s | N/A | 78k e−/lux·s | 10 V/lux·s | 8.6k e−/lux·s |
| Temporal noise | 15.6 e− rms | 2.2 e− rms | 16 e− rms | N/A | 4.8 e− rms | 1 e− rms | 12.5 e− rms |
| ADC resolution (bit) | 12 | 10 | 12 | 10 | 12 | 13-19 | 10 |
| ADC type | SAR | N/A | Off-chip | Ramp | Column single-slope | Folding-integration/cyclic | Shared cyclic |
| Power | N/A | 185 mW | 19 mW | 29.6 μW | 2 W | 450 mW | 300 mW |

Table 1.

Specifications of currently available 2D image sensors.

Note: CTIA: Capacitive Trans-Impedance Amplifier; PPD: Pinned Photo-Diode


In addition to 2D image sensors, there are some special approaches to implementing 3D image sensors. For instance, Koyama et al. used a single camera lens and a single sensor to realize binocular images for the left and right eyes [35]. In particular, a lenticular lens separates the incident light into two beams for the left-eye and right-eye viewpoints, which are captured by multiple pairs of neighboring pixels. To overcome the energy loss due to beam splitting, crosstalk and so on, polycyclic Si3N4 digital micro-lenses were employed to focus the beams on the photodiodes. Wang et al. used a light-field image sensor with angle-sensitive pixels to capture the intensity and angle of the incident light, and then derived depth information [36]. Based on the traditional pixel architecture and diffraction gratings, the Talbot effect could be exploited to capture the information of the incident light angle. By calculating the convergence and divergence angles, the depth information is attained. Kim et al. used a single lens to capture 2D and 3D images in a time-multiplexed manner [37]. The pixel architecture consists of a 4-transistor active pixel and two floating diffusion nodes, where each pixel unit is adjacent to two transfer gates. The transfer gates are used to switch the operational modes in a time-multiplexed manner. In the normal mode, the capture manner is identical to that of a traditional CMOS image sensor; in the 3D mode, all of the transfer gates are switched on to form many 4×4 pixel units for capturing depth information. Restated, they enlarged the sensing area of a photodiode to enhance the sensitivity in the 3D mode. However, increasing the reverse-biased voltage usually brings a greater sensitivity improvement than increasing the sensing area. The specifications of the abovementioned 3D image sensors and the proposed 2D/3D-integrated image sensor are listed in Table 2.

| | | Koyama et al. [35] | Wang et al. [36] | Kim et al. [37] | Proposed |
|---|---|---|---|---|---|
| Luminance | Pixel architecture | 3-T APS | 3-T APS | 4-T APS (PPD) | 5-T APS |
| | Process (CMOS) | 0.11 μm (1P3M) | 0.18 μm (1P6M) | 0.11 μm (1P4M) | 0.35 μm (2P4M) |
| | Array size | 2.1M | 400×384 | 1920×1080 (2D), 480×270 (3D) | 352×288 (2D), 88×72 (3D) |
| | Pixel size (μm²) | 7.6 | 56.3 | 13.3 (2D), 213.2 (3D) | 78.5 |
| | Fill factor (%) | N/A | 58 | 38.5 | 44 |
| | Frame rate (fps) | 60 | 200 | 30 | 30 |
| Depth | Technique | Lenticular lens | Light field | Time of flight | Time of flight |
| | Measured range (m) | 0.2~1 | 1 | 0.75~4.5 | 1~3 |
| | Non-linearity (%) | 5 | 0.25 | 0.93 | N/A |
| | Frame rate (fps) | N/A | N/A | 11 | N/A |
| | Resolution | N/A | 2.5 mm | 38 mm | 40 mm |
| | Calculation | Yes | Yes | No | No |

Table 2.

Specifications of currently available 3D image sensors and the proposed 2D/3D-integrated image sensor.

Note: PPD: Pinned Photo-Diode



7. Conclusion

This chapter has presented a 2D/3D-integrated image sensor that includes photodiodes, pixel circuits, CDS circuits, sense amplifiers, a multi-channel TDC, readout circuits, row/column decoders, bus-sharing connections/decoders and so on. The luminance and depth information of a scene can be captured by the same pixel in a time-multiplexed manner. Based on a standard CMOS technology, the circular P-diffusion_N-well_P-substrate photodiode is utilized thanks to its good quantum efficiency, fair breakdown voltage and easy integration of 2D and 3D pixels. In particular, the proposed integrated pixel yields a high dynamic range in the 2D mode using a charge supply mechanism, and a high response speed in the 3D mode using a feedback pull-down mechanism. Additionally, the bus-sharing mechanism is employed to diminish the hardware cost of parallel reading. Using the TSMC 0.35 μm 2P4M CMOS technology, the 352×288-pixel 2D and 88×72-pixel 3D integrated image sensor was designed to have a dynamic range up to 100 dB and a depth resolution of around 4 cm. The measured results reveal very promising performance. Therefore, the 2D/3D-integrated image sensor proposed herein can be widely applied to various multimedia capturing applications at low hardware cost and low power dissipation.


Acknowledgments

This work was partially supported by the National Science Council (NSC) and the Ministry of Science and Technology (MOST), Taiwan, under project numbers NSC 97-2221-E-194-060-MY3 and MOST 102-2221-E-194-047. The authors would like to thank Wei-Jean Liu, Meng-Lin Hsia, Zhe Ming Liu, Ming-Chih Lin, Chieh Ning Chan, Kuan-Hsien Lin, Shu Chun Wang and Hsiu-Fen Yeh, who helped in designing, simulating, implementing and measuring the 2D and 3D image sensors. Additionally, the Chip Implementation Center (CIC), Hsinchu, Taiwan, is highly appreciated for providing VLSI CAD tools and chip fabrication services.

References

  1. Panasonic. Panasonic Product Support - DMC-3D1K. Available from: http://shop.panasonic.com/support-only/DMC-3D1K.html.
  2. Andersen MR, Jensen T, Lisouski P, Mortensen AK, Hansen MK, Gregersen T. Kinect depth sensor evaluation for computer vision applications. Aarhus University, Department of Engineering; 2012.
  3. Microsoft. Kinect for Windows features. Available from: http://www.microsoft.com/en-us/kinectforwindows/meetkinect/features.aspx.
  4. Charbon E. Introduction to time-of-flight imaging. In: Proc. of IEEE Sensors Conference; 2-5 Nov. 2014; Valencia. p. 610-613.
  5. Aull BF, Loomis AH, Young DJ, Heinrichs RM, Felton BJ, Daniels PJ. Geiger-mode avalanche photodiodes for three-dimensional imaging. Lincoln Laboratory Journal. 2002;13(2):335-349.
  6. Niclass C, Rochas A, Besse P-A, Charbon E. Toward a 3-D camera based on single photon avalanche diodes. IEEE Journal of Selected Topics in Quantum Electronics. 2004;10(4):796-802. DOI: 10.1109/JSTQE.2004.833886.
  7. Marshall GF, Jackson JC, Denton J, Hurley PK, Braddell O, Mathewson A. Avalanche photodiode-based active pixel imager. IEEE Trans. on Electron Devices. 2004;51(3):509-511. DOI: 10.1109/TED.2003.823051.
  8. Zappa F, Lotito A, Tisa S. Photon-counting chip for avalanche detectors. IEEE Photonics Technology Letters. 2005;17(1):184-186. DOI: 10.1109/LPT.2004.838136.
  9. Faramarzpour N, Deen MJ, Shirani S, Qiyin F. Fully integrated single photon avalanche diode detector in standard CMOS 0.18-μm technology. IEEE Trans. on Electron Devices. 2008;55(3):760-767. DOI: 10.1109/TED.2007.914839.
  10. Atef M, Polzer A, Zimmermann H. Avalanche double photodiode in 40-nm standard CMOS technology. IEEE Journal of Quantum Electronics. 2013;49(3):350-356. DOI: 10.1109/JQE.2013.2246546.
  11. Pancheri L, Dalla Betta GF, Stoppa D. Low-noise avalanche photodiode with graded junction in 0.15-μm CMOS technology. IEEE Electron Device Letters. 2014;35(5):566-568. DOI: 10.1109/LED.2014.2312751.
  12. Kang Y, Mages P, Clawson AR, Yu PKL, Bitter M, Pan Z. Fused InGaAs-Si avalanche photodiodes with low-noise performances. IEEE Photonics Technology Letters. 2002;14(11):1593-1595. DOI: 10.1109/LPT.2002.803370.
  13. Ong DSG, Jo Shien N, Yu Ling G, Chee Hing T, Shiyong Z, David JPR. InAlAs avalanche photodiode with type-II superlattice absorber for detection beyond 2-μm. IEEE Trans. on Electron Devices. 2011;58(2):486-489. DOI: 10.1109/TED.2010.2090352.
  14. Jun H, Banerjee K, Ghosh S, Hayat MM. Dual-carrier high-gain low-noise superlattice avalanche photodiodes. IEEE Transactions on Electron Devices. 2013;60(7):2296-2301. DOI: 10.1109/TED.2013.2264315.
  15. Zhen Guang S, Dun Jun C, Hai L, Rong Z, Da Peng C, Wen Jun L. High-gain AlGaN solar-blind avalanche photodiodes. IEEE Electron Device Letters. 2014;35(3):372-374. DOI: 10.1109/LED.2013.2296658.
  16. Chan CN, Chen OTC. Physical effects of avalanche CMOS photodiodes. In: Proc. of OptoElectronics and Communications Conference; 5-9 July 2010. p. 824-825.
  17. He J, Xi X, Chan M, Hu C, Li Y, Zhang X. Equivalent junction method to predict 3-D effect of curved-abrupt p-n junctions. IEEE Transactions on Electron Devices. 2002;49(7):1322-1325.
  18. Hsia M-L, Liu ZM, Chan CN, Chen OTC. Crosstalk effects of avalanche CMOS photodiodes. In: Proc. of IEEE Conference on Sensors; 28-31 Oct. 2011; Limerick, Ireland: IEEE; 2011. p. 1689-1692.
  19. Chen OTC, Lin K-H, Liu ZM. High-efficiency 3D CMOS image sensor. In: Proc. of 18th OptoElectronics and Communications Conference; June 30 - July 4, 2013; Kyoto, Japan: Optical Society of America; 2013.
  20. Liu ZM, Lin M-C, Chan CN, Chen OTC. 2D and 3D-integrated image sensor. In: Proc. of IEEE 53rd Midwest Symposium on Circuits and Systems; 1-4 Aug. 2010; Seattle, USA: IEEE; 2010. p. 292-295.
  21. Liu W-J, Yeh H-F, Chen OTC. A high dynamic-range CMOS image sensor with locally adjusting charge supply mechanism. In: Proc. of IEEE 48th Midwest Symposium on Circuits and Systems; 7-10 Aug. 2005: IEEE; 2005. p. 384-387.
  22. Cova S, Ghioni M, Lacaita A, Samori C, Zappa F. Avalanche photodiodes and quenching circuits for single-photon detection. Applied Optics. 1996;35(12):1956-1976.
  23. Niclass C, Favi C, Kluter T, Gersbach M, Charbon E. A 128×128 single-photon image sensor with column-level 10-bit time-to-digital converter array. IEEE Journal of Solid-State Circuits. 2008;43(12):2977-2989.
  24. Gersbach M, Maruyama Y, Labonne E, Richardson J, Walker R, Grant L. A parallel 32×32 time-to-digital converter array fabricated in a 130 nm imaging CMOS technology. In: Proc. of IEEE European Conference on Solid-State Circuits; 14-18 Sept. 2009; Athens: IEEE; 2009. p. 196-199.
  25. Yu J, Dai FF, Jaeger RC. A 12-bit vernier ring time-to-digital converter in 0.13 μm CMOS technology. IEEE Journal of Solid-State Circuits. 2010;45(4):830-842.
  26. Chen OTC, Lin K-H, Liu ZM, Wang SC, Hsia M-L. 2D and 3D integrated image sensor with a bus-sharing mechanism. In: Proc. of IEEE 55th Midwest Symposium on Circuits and Systems; 5-8 Aug. 2012; Boise, Idaho, USA: IEEE; 2012. p. 138-141.
  27. Lim S, Cheon J, Chae Y, Jung W, Lee D-H, Kwon M. A 240-frames/s 2.1-Mpixel CMOS image sensor with column-shared cyclic ADCs. IEEE Journal of Solid-State Circuits. 2011;46(9):2073-2083.
  28. Seo M-W, Suh S-H, Iida T, Takasawa T, Isobe K, Watanabe T. A low-noise high intrascene dynamic range CMOS image sensor with a 13 to 19b variable-resolution column-parallel folding-integration/cyclic ADC. IEEE Journal of Solid-State Circuits. 2012;47(1):272-283.
  29. Xu R, Liu B, Yuan J. A 1500 fps highly sensitive 256×256 CMOS imaging sensor with in-pixel calibration. IEEE Journal of Solid-State Circuits. 2012;47(6):1408-1418.
  30. Xu R, Ng WC, Yuan J, Yin S, Wei S. A 1/2.5 inch VGA 400 fps CMOS image sensor with high sensitivity for machine vision. IEEE Journal of Solid-State Circuits. 2014;49(10):2342-2351.
  31. Sakakibara M, Oike Y, Takatsuka T, Kato A, Honda K, Taura T. An 83dB-dynamic-range single-exposure global-shutter CMOS image sensor with in-pixel dual storage. In: Technical Digest of IEEE International Solid-State Circuits Conference; 19-23 Feb. 2012; San Francisco, CA: IEEE; 2012. p. 380-382.
  32. Chung M-T, Lee C-L, Yin C, Hsieh C-C. A 0.5 V PWM CMOS imager with 82 dB dynamic range and 0.055% fixed-pattern-noise. IEEE Journal of Solid-State Circuits. 2013;48(10):2522-2530.
  33. Yeh S-F, Hsieh C-C, Yeh K-Y. A 3-Megapixel 100-fps 2.8-μm pixel pitch CMOS image sensor layer with built-in self-test for 3D integrated imagers. IEEE Journal of Solid-State Circuits. 2013;48(3):839-849.
  34. Sukegawa S, Umebayashi T, Nakajima T, Kawanobe H, Koseki K, Hirota I. A 1/4-inch 8Mpixel back-illuminated stacked CMOS image sensor. In: Technical Digest of IEEE International Solid-State Circuits Conference; 17-21 Feb. 2013; San Francisco, CA: IEEE; 2013. p. 484-485.
  35. Koyama S, Onozawa K, Tanaka K, Kato Y. A 3D vision 2.1 Mpixel image sensor for single-lens camera systems. In: Technical Digest of IEEE International Solid-State Circuits Conference; 17-21 Feb. 2013; San Francisco, CA: IEEE; 2013. p. 492-493.
  36. Wang A, Molnar A. A light-field image sensor in 180 nm CMOS. IEEE Journal of Solid-State Circuits. 2012;47(1):257-271.
  37. Kim S-J, Kang B, Kim JD, Lee K, Kim C-Y, Kim K. A 1920×1080 3.65-μm-pixel 2D/3D image sensor with split and binning pixel structure in 0.11 μm standard CMOS. In: Technical Digest of IEEE International Solid-State Circuits Conference; 19-23 Feb. 2012; San Francisco, CA: IEEE; 2012. p. 396-398.
