Two-dimensional photodiodes are reverse-biased at a moderate voltage, whereas 3D photodiodes are typically operated in Geiger mode. The design of integrated 2D and 3D photodiodes is investigated in terms of quantum efficiency, dark current, crosstalk, response time and so on. Beyond the photodiodes, a charge supply mechanism provides adequate charge for a high dynamic range in 2D sensing, and a feedback pull-down mechanism expedites the response time of 3D sensing for time-of-flight applications. In particular, rapid parallel reading in the 3D mode is achieved by a bus-sharing mechanism. Using the TSMC 0.35μm 2P4M technology, a 2D/3D-integrated image sensor including P-diffusion_N-well_P-substrate photodiodes, pixel circuits, correlated double sampling circuits, sense amplifiers, a multi-channel time-to-digital converter, column/row decoders, bus-sharing connections/decoders and readout circuits was implemented with a die size of 12mm×12mm. The proposed 2D/3D-integrated image sensor can capture a 352×288-pixel 2D image with a dynamic range of up to 100dB and an 88×72-pixel 3D image with a depth resolution of around 4cm. Therefore, our image sensor can effectively capture gray-level and depth information of a scene at the same location without additional alignment and post-processing. Finally, the currently available 2D and 3D image sensors are discussed and compared.
- CMOS photodiode
- active pixel
- Geiger mode
- time of flight
- image sensor
Nowadays, three-dimensional (3D) images and videos are widely used in various applications such as games, robotics and cinema. How to effectively capture 3D information has become a critical issue. In general, stereo images from two slightly different viewpoints are used to establish 3D perception, a mechanism based on binocular parallax. For example, the Panasonic LUMIX® DMC-3D1K 3D camera takes two pictures simultaneously and then displays them to the left and right eyes of a viewer for 3D perception. Since the parallax of the two capturing viewpoints is fixed, the resulting 3D perception cannot easily be adapted to different viewpoints.
Another approach also adopts two cameras, in which one captures a gray-level image and the other captures object depths using an active light source such as infrared. Based on the characteristics of light reflected from objects, a 3D image can be built via post-processing computation. For instance, Kinect I and Kinect II from Microsoft Xbox use the structured-light technique and the Time-Of-Flight (TOF) technique, respectively, to acquire object depths. In particular, the TOF technique estimates the positions of objects in space according to the travel time of light emitted from a light source, reaching an object, being reflected and arriving at a sensor, as depicted in Fig. 1. Based on the different travel times captured by photodiodes, a depth map of a scene can be obtained. The information from the luminance (2D) and depth (3D) cameras is used to reconstruct complete 3D pictures that can be observed from multiple viewpoints. Since these two cameras may be located at different positions, the capture-point difference needs to be compensated to yield a correct 3D picture.
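The TOF depth relation described above can be sketched numerically. The helper below is illustrative (the function name and the example timing are assumptions, not values from the chapter): it converts a measured round-trip time into an object distance via d = c·t/2, since the emitted pulse covers the sensor-to-object distance twice.

```python
# Sketch of the time-of-flight depth relation: an emitted light pulse
# travels to the object and back, so depth d = c * t / 2.

C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Object depth in meters from a measured round-trip time."""
    return C * round_trip_time_s / 2.0

# A reflection detected 20 ns after emission corresponds to about 3 m.
d = tof_depth(20e-9)
```

The same relation also shows why TOF sensing demands fast photodiodes: resolving a few centimeters of depth requires timing precision on the order of hundreds of picoseconds.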
In this chapter, the physical characteristics of CMOS photodiodes are illustrated first, and the design of high-efficiency photodiodes for 2D and 3D sensing is addressed. Second, a 2D/3D-integrated active pixel is presented in which the same photodiodes are shared by 2D and 3D capturing. Additionally, the readout circuits used for the 2D and 3D image sensors are integrated. In particular, two feedback mechanisms are adopted to delay saturation in 2D luminance sensing and to accelerate the response time of 3D depth sensing. Third, in order to achieve a compact pixel array, high readout speed and a minimized coupling effect between transmission lines and photodiodes, a bus-sharing mechanism is developed to effectively accomplish parallel reading. Fourth, based on the TSMC 0.35μm 2P4M CMOS technology, the proposed 2D/3D-integrated image sensor was implemented with a die size of 12mm×12mm, where the 2D and 3D modes have resolutions of 352×288 and 88×72 pixels, respectively, under a fill factor of 44%. Finally, the proposed image sensor and the currently available 2D and 3D image sensors are presented and compared.
2. Physical characteristics of CMOS photodiodes for 3D sensing
A 3D image can be captured by the time-of-flight technique, where object depths are interpreted from the round-trip time of light that originates from a light source, such as a light-emitting diode (LED), shines on objects and is reflected back. To effectively capture the reflected light, photodiodes must respond quickly to photons owing to the high speed of light. Hence, an avalanche photodiode operated in Geiger mode is a good choice for TOF sensing: it has sufficiently high sensitivity to precisely detect the few earliest-arriving photons. With such physical characteristics, a 3D depth map can be successfully obtained from an array of avalanche photodiodes.
Many avalanche complementary metal-oxide semiconductor (CMOS) photodiodes have been explored and proposed in the literature. For instance, Niclass
Using the TSMC 0.35μm CMOS technology, P-diffusion_N-well_P-substrate, N-well_P-substrate and N-diffusion_P-substrate photodiodes are explored to understand their physical characteristics; their cross sections are shown in Fig. 2. Figures 3 and 4 depict the breakdown voltages of P-diffusion_N-well_P-substrate and N-well_P-substrate photodiodes of the same area but different perimeters, respectively. The measurement results reveal that the breakdown voltage becomes larger as the perimeter is decreased. Additionally, the breakdown voltage of a P-diffusion_N-well_P-substrate photodiode is smaller than that of an N-well_P-substrate photodiode. A photodiode with a deeper p-n junction, larger area and smaller perimeter has a higher breakdown voltage. In particular, the lateral region of a photodiode may form a dead space, which decreases the fill factor, quantum efficiency and breakdown voltage. From the abovementioned phenomena, a circular photodiode with the maximum ratio of area to perimeter is preferred to attain a high breakdown voltage.
In addition to the boundary effect, the efficiency of induced photocurrents is examined, where the light source is a laser at 850nm. The current gain can be expressed as
In addition to the breakdown voltage of a photodiode, the crosstalk effect, as shown in Fig. 6, needs to be investigated to minimize interference from neighboring pixels. Figure 7 displays a top view of 3×3 photodiodes, with and without an N-well guard ring, implemented in the TSMC 0.35μm CMOS technology. The photodiode in the center of a 3×3 photodiode array is measured while the eight neighboring photodiodes are biased at different voltages. The measured breakdown voltages of 3×3 P-diffusion_N-well_P-substrate and N-well_P-substrate photodiodes with and without an N-well guard ring are depicted in Fig. 8. The breakdown voltage of a photodiode with a guard ring is greater than that of a photodiode without one. Notably, the central pixel has a higher breakdown voltage when its neighboring pixels are reverse-biased at a higher voltage. Figure 9 shows the breakdown voltages of 3×3 P-diffusion_N-well_P-substrate, N-well_P-substrate and N-diffusion_P-substrate photodiodes with N-well guard rings while the neighboring pixels are reverse-biased at voltages from 0V to 9V. The N-well_P-substrate photodiode yields the largest breakdown voltage, followed by the P-diffusion_N-well_P-substrate photodiode and then the N-diffusion_P-substrate photodiode.
3. 2D/3D-integrated pixel
A 3D depth sensor can yield a depth map of a scene but fails to provide fine gray-scale information. Conversely, a 2D image sensor can interpret a pixel at a fairly fine gray scale but provides no depth resolution. Accordingly, there is a demand for using multiple 2D and/or 3D sensors to capture a real-world scene and then display a 3D picture to a viewer. For example, if two 2D image sensors are employed to mimic binocular vision, the 3D perception adheres to a specific viewpoint. If a 2D image sensor and a 3D depth sensor are instead used together to capture objects, the difference between the viewpoints of these two sensors at different positions has to be corrected. Additionally, a camera including two individual sensors incurs relatively high hardware cost and considerable power consumption. Therefore, we propose to realize 2D luminance and 3D depth perception in a single sensor whose 2D and 3D operations are executed alternately. The design concept of the proposed 2D/3D-integrated image sensor is illustrated in Fig. 10.
The first step in integrating a 2D image sensor and a 3D depth sensor is to design a photodiode that can be shared by the two sensors. Figure 11 depicts the measured spectral responses of P-diffusion_N-well_P-substrate, N-well_P-substrate and N-diffusion_P-substrate photodiodes in the TSMC 0.35μm CMOS technology at a reverse-bias voltage of zero. From the measured results, the N-well_P-substrate and P-diffusion_N-well_P-substrate photodiodes are the best and second best, respectively. In the following, the operations of photodiodes in 2D image and 3D depth sensors are studied to determine which photodiode structure is preferred.
3.1. Photodiodes of 2D and 3D sensors
When a PN photodiode is biased at a reverse voltage, incident light reaching the depletion region of the photodiode generates numerous electron-hole pairs, which create a photocurrent. The induced photocurrent increases with light intensity. Over an exposure period, the sensing circuit of a pixel converts the integrated photocurrent to an analogue voltage that represents gray-level luminance. Such a mechanism accomplishes 2D image sensing. When the reverse-bias voltage approaches the breakdown voltage of a photodiode, the photodiode has a relatively large electric field and depletion region, in which electron-hole pairs are easily generated by only a few photons. Such a photodiode, operating in Geiger mode, is used to acquire a 3D depth map, where the time of flight of light emitted from a source, reaching objects, being reflected and arriving at the photodiodes is computed.
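The 2D sensing mechanism above, integrating a photocurrent over an exposure period onto the photodiode node capacitance, can be sketched as a simple charge-balance calculation. All component values below are illustrative assumptions, not the chip's parameters:

```python
def integrated_pixel_voltage(v_reset: float, i_photo: float,
                             t_exposure: float, c_node: float) -> float:
    """Pixel node voltage after integrating a photocurrent onto the
    photodiode node capacitance; clamps at 0 V once the node saturates.

    v_reset: reset voltage (V), i_photo: photocurrent (A),
    t_exposure: exposure time (s), c_node: node capacitance (F).
    """
    v = v_reset - i_photo * t_exposure / c_node
    return max(v, 0.0)

# Illustrative numbers: 3.3 V reset, 100 fA photocurrent, 10 ms
# exposure, 10 fF node -> the node drops by 0.1 V to about 3.2 V.
v_dim = integrated_pixel_voltage(3.3, 100e-15, 10e-3, 10e-15)

# A much brighter pixel drives the node all the way to 0 V (saturation),
# which is exactly the condition the charge supply mechanism delays.
v_bright = integrated_pixel_voltage(3.3, 1e-9, 10e-3, 10e-15)
```

The saturating case motivates the charge supply mechanism discussed in the next subsection: without extra charge, bright pixels pin the sense node at 0V and all luminance information above that intensity is lost.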
The physical characteristics of photodiodes with different geometric and junction structures must be well understood in order to effectively integrate 2D and 3D photodiodes. From our previous study, corners in the geometric shape of a photodiode easily lead to breakdown because charge tends to gather at these points. Accordingly, as the number of corners in a photodiode increases, the breakdown voltage decreases. Furthermore, the dark current is correspondingly enlarged as the number of corners increases. In the previous section, we concluded that the breakdown voltage is roughly proportional to the area of a photodiode and inversely proportional to its perimeter. An increased number of pixels in an array lowers the breakdown voltage as well. Therefore, a photodiode with a circular shape and a guard ring is chosen to achieve a high-efficiency sensing capability.
According to the quantum efficiencies of the photodiodes in Fig. 11, the N-well_P-substrate and P-diffusion_N-well_P-substrate photodiodes are good candidates. The reverse-bias voltages of a photodiode used for 2D and 3D sensing are quite different. If the N-well_P-substrate photodiode were adopted for both 2D and 3D sensors, the P-substrate would have to be driven by two different voltages. However, the P-substrate is a common base that is usually shared by the transistors of all circuits, and under the standard CMOS technology the P-substrate is always connected to ground. Hence, the N-well_P-substrate photodiode does not satisfy our demand. In the P-diffusion_N-well_P-substrate photodiode, the P-diffusion and P-substrate are tied to ground to form two PN junctions for 2D sensing, where the P-diffusion_N-well and N-well_P-substrate junctions receive short-wavelength and long-wavelength light, respectively. When 3D sensing is conducted, the P-diffusion and P-substrate are biased at a large negative voltage and 0V, respectively. Such biasing makes the P-diffusion_N-well junction an avalanche photodiode and the N-well_P-substrate junction a photodiode under a normal reverse bias. Once a few photons reach the P-diffusion_N-well_P-substrate photodiode, the P-diffusion_N-well junction reacts rapidly, while the N-well_P-substrate junction responds comparatively little.
3.2. Pixel circuits
To capture 2D gray-level information, the dynamic range is one of the key issues in a pixel design. A pixel circuit with a larger dynamic range can interpret a greater range of light luminance. Figure 12 shows the pixel circuit in the 2D mode, which employs an extra path to provide charges. This path includes two transistors, M2 and M3, where M2 is manipulated by an external control signal, and M3 functions like an active resistor with gate-node sensing. Icharge-supply flows through this path to compensate the current sunk by Iphoto, and thus delays the gate node of M6 from reaching 0V. The simulations in Fig. 13 reveal that the pixel circuit with the charge supply mechanism can reach a dynamic range of up to 110dB, greatly exceeding that of the circuit without the charge supply mechanism.
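Dynamic range is conventionally quoted in decibels as 20·log10 of the ratio between the largest and smallest resolvable signals. The sketch below uses hypothetical photocurrent limits, chosen only so that the result lands near the 110dB figure quoted above; they are not measured values from the chip:

```python
import math

def dynamic_range_db(sig_max: float, sig_min: float) -> float:
    """Dynamic range in decibels, 20*log10(max/min resolvable signal)."""
    return 20.0 * math.log10(sig_max / sig_min)

# Hypothetical limits: about 5.5 decades between the largest
# non-saturating photocurrent and the smallest detectable one.
dr = dynamic_range_db(31.6e-9, 100e-15)   # roughly 110 dB
```

In this convention, extending the saturation limit by one decade, which is what the charge supply path effectively does by delaying the sense node from reaching 0V, adds 20dB of dynamic range.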
The pixel circuit in the 3D mode extracts the depth information of objects. The photodiode in a pixel is reverse-biased close to its breakdown voltage. Once the photodiode is triggered by a few photons, a significant current is generated. Such an instant, large current accelerates the response of the pixel circuit. Referring to a simple passive quenching structure, we propose a modified 3D pixel circuit, as depicted in Fig. 14. When faint light reaches this 3D pixel, a large current is induced to make a voltage drop at the
4. Parallel reading using the bus-sharing mechanism
When the proposed 2D/3D-integrated pixel circuit is operated in the 3D mode, a pulse signal is triggered by the perceived photons. Afterwards, the time difference between light emission and detection is calculated, and the corresponding depth information is then derived based on the velocity of light. An independent bus line for each pixel of a large pixel array cannot be realized at low hardware cost. Hence, there is a need for low-cost parallel reading and computation in time. Nevertheless, as the size of a pixel array grows, the hardware complexity of parallel reading expands drastically. Not only is the area increased, but the coupling effect between photodiodes and transmission lines is also likely induced. In the conventional works, the reading of trigger pulses is fulfilled in a time-multiplexed manner.
To overcome the abovementioned drawbacks, a bus-sharing mechanism is proposed to realize parallel reading at low hardware cost. This bus-sharing mechanism employs a connection topology in which each pixel connects to multiple shared buses, as displayed in Fig. 17. Since each bus is connected to multiple photodiodes, a decoder associated with the shared buses is required to determine which photodiode is activated. Restated, when buses are enabled by one or multiple photodiodes simultaneously, these buses become Vdd. Based on the pattern of the enabled buses, the locations of the photodiodes are discovered. For instance, when light is sensed by a pixel P1, the buses LO1, C1, R1 and ROp are pulled up to Vdd. When P2 also observes light, it enables the buses LO2, C1, R2 and ROp1. Although the bus C1 is shared by P1 and P2, the other three buses are different and can be used to distinguish P1, P2 and others. Therefore, according to the enabled buses and the triggering time, when and which photodiode(s) are activated can be decoded and ascertained.
Photodiodes triggered at different times can be discovered by the proposed bus-sharing mechanism in an effective and low-cost manner. However, when many photodiodes receive photons simultaneously, a special condition must be considered: an un-triggered pixel completely encircled by simultaneously triggered pixels is mistaken for a triggered one. Such a case, depicted in Fig. 18, mistakes P10, where P10 does not capture any light while the photodiodes P5~P7, P9, P11 and P13~P15 are activated by light. The buses LO4, C3, RO3 and R2 addressed by P10 are enabled by (P7, P13), (P9, P11), (P5, P15) and (P6, P14), respectively. When decoding P10, an AND gate with inputs LO4, C3, RO3 and R2 yields logic-1, which is erroneous. Such a situation can be resolved when P10 is enabled only at an earlier or later time. Restated, by analyzing the activated patterns of P10 and its neighboring pixels at different time intervals, the accurate triggering time associated with P10 can be attained.
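The bus-sharing decode, including the ambiguity of a fully encircled pixel, can be reproduced in a small simulation. The exact topology below, one row bus, one column bus and two diagonal (oblique) buses per pixel, is an assumption patterned on Fig. 17, and all names and the 5×5 array size are hypothetical:

```python
# Minimal sketch of bus-sharing decode: a pixel is declared triggered
# when all four of its buses are pulled high.

ROWS, COLS = 5, 5

def buses_of(r: int, c: int) -> tuple:
    # row bus, column bus, left-oblique bus, right-oblique bus
    return ("R%d" % r, "C%d" % c, "LO%d" % (r + c), "RO%d" % (r - c + COLS - 1))

def decode(triggered: set) -> set:
    """Return the set of pixels the enabled-bus pattern decodes as triggered."""
    high = set()
    for (r, c) in triggered:           # triggered pixels pull their buses high
        high.update(buses_of(r, c))
    return {(r, c) for r in range(ROWS) for c in range(COLS)
            if all(b in high for b in buses_of(r, c))}

# A single triggered pixel decodes unambiguously:
assert decode({(1, 2)}) == {(1, 2)}

# But the eight neighbours of (2, 2) firing together also pull every bus
# of (2, 2) high, so the untriggered centre pixel is decoded as well --
# the same false positive as P10 in Fig. 18:
ring = {(r, c) for r in range(1, 4) for c in range(1, 4)} - {(2, 2)}
assert (2, 2) in decode(ring)
```

As the text notes, the false positive is resolved in time: the fully encircled pattern only fools the decoder when all eight neighbours trigger in the same counting interval as each other.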
4.1. Multi-channel time to digital converter
When an LED generates light projected onto an object, the time-to-digital converter (TDC) begins to measure the travel time of the light from the LED to a photodiode. The derived travel time multiplied by the velocity of light gives twice the distance from the sensor to the object. From this distance information, the depth of an object from the sensor can be attained. Figure 19 shows the block diagram of the multi-channel TDC, which is an event-driven approach. The counter in the TDC is applied to obtain a timescale number correlated with the time of flight of light. Additionally, the delay-line circuit is used to interpolate a fractional scale. To attenuate the influence of temperature and process, differential pairs are adopted to realize the flip-flops of the delay line.
A 3D depth map comes from a pixel array in which every pixel demands timing information. If each pixel had its own timing circuit to compute its depth, the many timing circuits would incur high hardware cost and high power dissipation. To allow a TDC to be shared by multiple pixels, a multi-channel TDC composed of a ring TDC, a thermometer encoder and a 4-bit counter, as displayed in Fig. 19, is adopted. Fig. 20 shows a 15-stage ring TDC, which is the core of the multi-channel timing circuit. When the 'Start' signal is active, a NAND gate and 14 inverters form a ring oscillator. This ring TDC produces 15 outputs, C1, C2, ... and C15, which are compacted by the thermometer encoder to give a 4-bit fine result. In the meantime, the counter yields a 4-bit coarse result as well. The 4-bit fine and coarse results form 8-bit timing information, which is stored in the latch array and used to interpret the depth information.
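The way the 4-bit coarse and 4-bit fine results combine into 8-bit timing information can be sketched as follows. The per-step delay T_LSB is an assumed value for illustration, not the chip's measured resolution:

```python
T_LSB = 250e-12     # assumed delay per fine step (250 ps), illustrative only
FINE_STEPS = 16     # 4-bit fine interpolation from the delay line

def tdc_code(coarse: int, fine: int) -> int:
    """Pack the 4-bit coarse counter value and the 4-bit fine
    delay-line value into one 8-bit TDC code."""
    assert 0 <= coarse < 16 and 0 <= fine < FINE_STEPS
    return (coarse << 4) | fine

def tdc_time(code: int) -> float:
    """Convert an 8-bit TDC code back to a time interval in seconds."""
    return code * T_LSB

# 3 full coarse periods plus 5 fine steps -> code 0x35 = 53 steps.
code = tdc_code(coarse=3, fine=5)
interval = tdc_time(code)
```

The split mirrors the hardware: the counter counts whole oscillation periods cheaply, while the delay line resolves the fraction of a period, so the fine stage only ever spans one coarse step.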
5. 2D/3D-integrated image sensor
The proposed 2D/3D-integrated image sensor employs the P-diffusion_N-well_P-substrate photodiode, which can be switched between different reverse-bias voltages to operate in the 2D or 3D photo-sensing mode. The Correlated Double Sampling (CDS) circuit and readout circuit used in the 2D mode, and the Sense Amplifiers (SA), multi-channel TDC and readout circuit used in the 3D mode are implemented. Additionally, sequential and parallel reading is realized by row and column decoders, and by bus-sharing connections and decoders, in the 2D and 3D modes, respectively. The CDS circuit reduces the fixed-pattern noise, and the SA boosts the trigger pulse generated from a pixel to lower the dead time. The block diagram of the proposed 2D/3D-integrated image sensor is depicted in Fig. 21. Since human visual perception has better resolution in luminance than in depth, the proposed 2D/3D-integrated image sensor adopts pixel dimensions of 352×288 and 88×72 pixels for 2D and 3D sensing, respectively. Restated, the pixel dimension in the 3D mode is one sixteenth of that in the 2D mode, to lessen hardware cost. In particular, to effectively decrease the overhead of parallel reading in the 3D mode, the bus-sharing mechanism can address the 88×72 pixels using 478 lines rather than 6,336 lines. After decoding the 478 signal bits, 88×72 bits are stored in latches at every counting-time interval, where each bit indicates whether a pixel is triggered or not.
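The 478-line figure can be checked arithmetically under the assumed topology of one row bus, one column bus and two diagonal buses (one per diagonal direction, each direction needing rows+cols−1 lines) for the 88×72 array; the topology is inferred from Fig. 17, not stated as a formula in the text:

```python
# Bus count for an R x C array under the assumed row/column/two-diagonal
# bus-sharing topology, versus one dedicated line per pixel.
rows, cols = 72, 88

row_buses = rows
col_buses = cols
diag_buses = 2 * (rows + cols - 1)   # one set per diagonal direction

n_shared = row_buses + col_buses + diag_buses   # 72 + 88 + 2*159 = 478
n_dedicated = rows * cols                       # 6336
```

So the shared-bus scheme needs 478 lines in place of 6,336, matching the figures quoted above, at the cost of the decode ambiguity handled in the previous section.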
The TSMC 0.35μm 2P4M CMOS technology was employed to implement the proposed 2D/3D-integrated image sensor with a die size of 12mm×12mm. The fill factor is about 44%, where a photodiode has a diameter of 10
6. Currently available techniques of 2D and 3D image sensors
Nowadays, CMOS image sensors generally offer high resolutions, high frame rates, large dynamic ranges and low power dissipation. To meet these demands, pixel circuits, readout structures, analogue-to-digital converter (ADC) architectures and 3D integrated circuits (ICs) have been well explored, as well as high-efficiency photodiodes. For example, Lim
Specifications of currently available 2D image sensors:

| Authors | Xu | Sukegawa | Yeh | Chung | Sakakibara | Seo | Lim |
|---|---|---|---|---|---|---|---|
| Pixel architecture | N/A | N/A | 4-TR PPD | 2-T PPD | | | |
| Pixel size (μm×μm) | 8.7×8.22 | 1.12×1.12 | 2.8×2.8 | 10×10 | 5.86×5.86 | 7.5×7.5 | 2.25×2.25 |
| Fill factor (%) | 41 | N/A | 28 | 25.4 | N/A | 52 | N/A |
| Frame rate (fps) | 400 | 30 | 100 | 78.5 | N/A | 2.2 | 250 |
| ADC resolution (bit) | 12 | 10 | 12 | 10 | 12 | 13–19 | 10 |
| ADC type | SAR | N/A | Off-chip | Ramp | Column single-slope | Folding-integration/cyclic | Shared cyclic |
In addition to 2D image sensors, there are some special approaches to implementing 3D image sensors. For instance, Koyama
| | Authors | Koyama | Wang | Kim | Proposed |
|---|---|---|---|---|---|
| Luminance | Pixel architecture | 3-T APS | 3-T APS | 4-T APS (PPD) | 5-T APS |
| | Process (CMOS) | 0.11μm (1P3M) | 0.18μm (1P6M) | 0.11μm (1P4M) | 0.35μm (2P4M) |
| | Array size | 2.1M | 400×384 | 1920×1080 (2D) | 352×288 (2D) |
| | Pixel size (μm) | 7.6 | 56.3 | 13.3 (2D) | |
| | Fill factor (%) | N/A | 58 | 38.5 | 44 |
| Depth | Technique | Lenticular lens | Light-field | Time of flight | Time of flight |
| | Measured range (m) | 0.2~1 | 1 | 0.75~4.5 | 1~3 |
| | Frame rate (fps) | N/A | N/A | 11 | N/A |
This chapter presents a 2D/3D-integrated image sensor that includes photodiodes, pixel circuits, CDS circuits, sense amplifiers, a multi-channel TDC, readout circuits, row/column decoders, bus-sharing connections/decoders and so on. The luminance and depth information of a scene can be captured by the same pixel in a time-multiplexed manner. Based on the standard CMOS technology, the circular P-diffusion_N-well_P-substrate photodiode is utilized thanks to its good quantum efficiency, fair breakdown voltage and easy integration of 2D and 3D pixels. In particular, the proposed integrated pixel can yield a high dynamic range in the 2D mode using a charge supply mechanism and a fast response in the 3D mode using a feedback pull-down mechanism. Additionally, the bus-sharing mechanism is employed to diminish the hardware cost of parallel reading. Using the TSMC 0.35μm 2P4M CMOS technology, the 352×288-pixel 2D and 88×72-pixel 3D integrated image sensor was designed to have a dynamic range of up to 100dB and a depth resolution of around 4cm. The measured results reveal very promising performance. Therefore, the 2D/3D-integrated image sensor proposed herein can be widely applied to various multimedia capturing applications at low hardware cost and low power dissipation.
This work was partially supported by the National Science Council (NSC) and the Ministry of Science and Technology (MOST), Taiwan, under project numbers NSC 97-2221-E-194-060-MY3 and MOST 102-2221-E-194-047. The authors would like to thank Wei-Jean Liu, Meng-Lin Hsia, Zhe Ming Liu, Ming-Chih Lin, Chieh Ning Chan, Kuan-Hsien Lin, Shu Chun Wang and Hsiu-Fen Yeh, who helped in designing, simulating, implementing and measuring the 2D and 3D image sensors. Additionally, the Chip Implementation Center (CIC), Hsinchu, Taiwan, is highly appreciated for providing VLSI CAD tool services and chip fabrication.
Panasonic. Panasonic Product Support - DMC-3D1K. Available from: http://shop.panasonic.com/support-only/DMC-3D1K.html.
Andersen MR, Jensen T, Lisouski P, Mortensen AK, Hansen MK, Gregersen T. Kinect depth sensor evaluation for computer vision applications. Aarhus University, Department of Engineering, 2012.
Microsoft. Kinect for Windows features. Available from: http://www.microsoft.com/en-us/kinectforwindows/meetkinect/features.aspx.
Charbon E. Introduction to time-of-flight imaging. In: Proc. of IEEE Sensors Conference; 2-5 Nov. 2014; Valencia. p. 610-613.
Aull BF, Loomis AH, Young DJ, Heinrichs RM, Felton BJ, Daniels PJ. Geiger-mode avalanche photodiodes for three-dimensional imaging. Lincoln Laboratory Journal. 2002; 13(2):335-349.
Niclass C, Rochas A, Besse P-A, Charbon E. Toward a 3-D camera based on single photon avalanche diodes. IEEE Journal of Selected Topics in Quantum Electronics. 2004; 10(4):796-802. DOI: 10.1109/JSTQE.2004.833886.
Marshall GF, Jackson JC, Denton J, Hurley PK, Braddell O, Mathewson A. Avalanche photodiode-based active pixel imager. IEEE Trans. on Electron Devices. 2004; 51(3):509-511. DOI: 10.1109/TED.2003.823051.
Zappa F, Lotito A, Tisa S. Photon-counting chip for avalanche detectors. IEEE Photon Technology Letters. 2005;17(1):184-186. DOI: 10.1109/LPT.2004.838136.
Faramarzpour N, Deen MJ, Shirani S, Fang Q. Fully integrated single photon avalanche diode detector in standard CMOS 0.18-μm technology. IEEE Trans. on Electron Devices. 2008; 55(3):760-767. DOI: 10.1109/TED.2007.914839.
Atef M, Polzer A, Zimmermann H. Avalanche double photodiode in 40-nm standard CMOS technology. IEEE Journal of Quantum Electronics. 2013; 49(3):350-356. DOI: 10.1109/JQE.2013.2246546.
Pancheri L, Dalla Betta GF, Stoppa D. Low-noise avalanche photodiode with graded junction in 0.15-μm CMOS technology. IEEE Electron Device Letters. 2014; 35(5):566-568. DOI: 10.1109/LED.2014.2312751.
Kang Y, Mages P, Clawson AR, Yu PKL, Bitter M, Pan Z. Fused InGaAs-Si avalanche photodiodes with low-noise performances. IEEE Photonics Technology Letters. 2002; 14(11):1593-1595. DOI: 10.1109/LPT.2002.803370.
Ong DSG, Ng JS, Goh YL, Tan CH, Zhang S, David JPR. InAlAs avalanche photodiode with type-II superlattice absorber for detection beyond 2-μm. IEEE Trans. on Electron Devices. 2011; 58(2):486-489. DOI: 10.1109/TED.2010.2090352.
Huang J, Banerjee K, Ghosh S, Hayat MM. Dual-carrier high-gain low-noise superlattice avalanche photodiodes. IEEE Transactions on Electron Devices. 2013; 60(7):2296-2301. DOI: 10.1109/TED.2013.2264315.
Shao ZG, Chen DJ, Lu H, Zhang R, Chen DP, Luo WJ. High-gain AlGaN solar-blind avalanche photodiodes. IEEE Electron Device Letters. 2014; 35(3):372-374. DOI: 10.1109/LED.2013.2296658.
Chan CN, Chen OTC. Physical effects of avalanche CMOS photodiodes. In: Proc. of OptoElectronics and Communications Conference; 5-9 July 2010. p. 824-825.
He J, Xi X, Chan M, Hu C, Li Y, Zhang X. Equivalent junction method to predict 3-D effect of curved-abrupt p-n junctions. IEEE Transactions on Electron Devices. 2002; 49(7):1322-1325.
Hsia M-L, Liu ZM, Chan CN, Chen OTC. Crosstalk effects of avalanche CMOS photodiodes. In: Proc. of IEEE Conference on Sensors; 28-31 Oct. 2011; Limerick, Ireland: IEEE; 2011. p. 1689-1692.
Chen OTC, Lin K-H, Liu ZM. High-efficiency 3D CMOS image sensor. In: Proc. of 18th OptoElectronics and Communications Conference; June 30 - July 4, 2013; Kyoto, Japan: Optical Society of America; 2013.
Liu ZM, Lin M-C, Chan CN, Chen OTC. 2D and 3D-integrated image sensor. In: Proc. of IEEE 53rd Midwest Symposium on Circuits and Systems; 1-4 Aug. 2010; Seattle, USA: IEEE; 2010. p. 292-295.
Liu W-J, Yeh H-F, Chen OTC. A high dynamic-range CMOS image sensor with locally adjusting charge supply mechanism. In: Proc. of IEEE 48th Midwest Symposium on Circuits and Systems; 7-10 Aug. 2005: IEEE; 2005. p. 384-387.
Cova S, Ghioni M, Lacaita A, Samori C, Zappa F. Avalanche photodiodes and quenching circuits for single-photon detection. Applied Optics. 1996; 35(12):1956-1976.
Niclass C, Favi C, Kluter T, Gersbach M, Charbon E. A 128×128 single-photon image sensor with column-level 10-bit time-to-digital converter array. IEEE Journal of Solid-State Circuits. 2008; 43(12):2977-2989.
Gersbach M, Maruyama Y, Labonne E, Richardson J, Walker R, Grant L. A parallel 32×32 time-to-digital converter array fabricated in a 130 nm imaging CMOS technology. In: Proc. of IEEE European Conference on Solid-State Circuits; 14-18 Sept. 2009; Athens: IEEE; 2009. p. 196-199.
Yu J, Dai FF, Jaeger RC. A 12-bit vernier ring time-to-digital converter in 0.13 μm CMOS technology. IEEE Journal of Solid-State Circuits. 2010; 45(4):830-842.
Chen OTC, Lin K-H, Liu ZM, Wang SC, Hsia M-L. 2D and 3D integrated image sensor with a bus-sharing mechanism. In: Proc. of IEEE 55th Midwest Symposium on Circuits and Systems; 5-8 Aug. 2012; Boise, Idaho, USA: IEEE; 2012. p. 138-141.
Lim S, Cheon J, Chae Y, Jung W, Lee D-H, Kwon M. A 240-frames/s 2.1-Mpixel CMOS image sensor with column-shared cyclic ADCs. IEEE Journal of Solid-State Circuits. 2011; 46(9):2073-2083.
Seo M-W, Suh S-H, Iida T, Takasawa T, Isobe K, Watanabe T. A low-noise high intrascene dynamic range CMOS image sensor with a 13 to 19b variable-resolution column-parallel folding-integration/cyclic ADC. IEEE Journal of Solid-State Circuits. 2012; 47(1):272-283.
Xu R, Liu B, Yuan J. A 1500 fps highly sensitive 256×256 CMOS imaging sensor with in-pixel calibration. IEEE Journal of Solid-State Circuits. 2012; 47(6):1408-1418.
Xu R, Ng WC, Yuan J, Yin S, Wei S. A 1/2.5 inch VGA 400 fps CMOS image sensor with high sensitivity for machine vision. IEEE Journal of Solid-State Circuits. 2014; 49(10):2342-2351.
Sakakibara M, Oike Y, Takatsuka T, Kato A, Honda K, Taura T. An 83dB-dynamic-range single-exposure global-shutter CMOS image sensor with in-pixel dual storage. In: Technical Digest of IEEE International Solid-State Circuits Conference; 19-23 Feb. 2012; San Francisco, CA: IEEE; 2012. p. 380-382.
Chung M-T, Lee C-L, Yin C, Hsieh C-C. A 0.5 V PWM CMOS imager with 82 dB dynamic range and 0.055% fixed-pattern-noise. IEEE Journal of Solid-State Circuits. 2013; 48(10):2522-2530.
Yeh S-F, Hsieh C-C, Yeh K-Y. A 3-Megapixel 100-fps 2.8-μm pixel pitch CMOS image sensor layer with built-in self-test for 3D integrated imagers. IEEE Journal of Solid-State Circuits. 2013; 48(3):839-849.
Sukegawa S, Umebayashi T, Nakajima T, Kawanobe H, Koseki K, Hirota I. A 1/4-inch 8Mpixel back-illuminated stacked CMOS image sensor. In: Technical Digest of IEEE International Solid-State Circuits Conference; 17-21 Feb. 2013; San Francisco, CA: IEEE; 2013. p. 484-485.
Koyama S, Onozawa K, Tanaka K, Kato Y. A 3D vision 2.1 Mpixel image sensor for single-lens camera systems. In: Technical Digest of IEEE International Solid-State Circuits Conference; 17-21 Feb. 2013; San Francisco, CA: IEEE; 2013. p. 492-493.
Wang A, Molnar A. A light-field image sensor in 180 nm CMOS. IEEE Journal of Solid-State Circuits. 2012; 47(1):257-271.
Kim S-J, Kang B, Kim JD, Lee K, Kim C-Y, Kim K. A 1920×1080 3.65μm-pixel 2D/3D image sensor with split and binning pixel structure in 0.11μm standard CMOS. In: Technical Digest of IEEE International Solid-State Circuits Conference; 19-23 Feb. 2012; San Francisco, CA: IEEE; 2012. p. 396-398.