Introduction to CMOS Image Sensors

The arrival of high-resolution solid state imaging devices, primarily charge-coupled devices (CCDs) and complementary metal oxide semiconductor (CMOS) image sensors, has heralded a new era for optical microscopy that threatens to eclipse traditional image recording technology, such as film, video tubes, and photomultipliers. Charge-coupled device camera systems designed specifically for microscopy applications are offered by numerous original equipment and aftermarket manufacturers, and CMOS imaging sensors are now becoming available for a few microscopes.


Both technologies were developed between the early and late 1970s, but CMOS sensors had unacceptable performance and were generally overlooked or considered just a curiosity until the early 1990s. By that time, advances in CMOS design were yielding chips with smaller pixel sizes, reduced noise, more capable image processing algorithms, and larger imaging arrays. Among the major advantages enjoyed by CMOS sensors are their low power consumption, single master clock, and single-voltage power supply, unlike CCDs, which often require five or more supply voltages at different clock speeds with significantly higher power consumption. Both CMOS and CCD chips sense light through similar mechanisms, by taking advantage of the photoelectric effect, which occurs when photons interact with crystallized silicon to promote electrons from the valence band into the conduction band. Note that the term "CMOS" refers to the process by which the image sensor is manufactured and not to a specific imaging technology.

When a broad wavelength band of visible light is incident on specially doped silicon semiconductor materials, a variable number of electrons are released in proportion to the photon flux density incident on the surface of a photodiode. In effect, the number of electrons produced is a function of the wavelength and the intensity of light striking the semiconductor. Electrons are collected in a potential well until the integration (illumination) period is finished, and then they are either converted into a voltage (CMOS processors) or transferred to a metering register (CCD sensors). The measured voltage or charge (after conversion to a voltage) is then passed through an analog-to-digital converter, which forms a digital electronic representation of the scene imaged by the sensor.

The photodiode, often referred to as a pixel, is the key element of a digital image sensor. Sensitivity is determined by a combination of the maximum charge that can be accumulated by the photodiode, coupled to the conversion efficiency of incident photons to electrons and the ability of the device to accumulate the charge in a confined region without leakage or spillover. These factors are typically determined by the physical size and aperture of the photodiode, and its spatial and electronic relationship to neighboring elements in the array. Another important factor is the charge-to-voltage conversion ratio, which determines how effectively integrated electron charge is translated into a voltage signal that can be measured and processed. Photodiodes are typically organized in an orthogonal grid that can range in size from 128 × 128 pixels (16 K pixels) to a more common 1280 × 1024 (over a million pixels). Several of the latest CMOS image sensors, such as those designed for high-definition television (HDTV), contain several million pixels organized into very large arrays measuring more than 2000 pixels on a side. The signals from all of the pixels composing each row and each column of the array must be accurately detected and measured (read out) in order to assemble an image from the photodiode charge accumulation data.

In optical microscopy, light gathered by the objective is focused by a projection lens onto the sensor surface containing a two-dimensional array of identical photodiodes, termed picture elements or pixels. Thus, array size and pixel dimensions determine the spatial resolution of the sensor. CMOS and CCD integrated circuits are inherently monochromatic (black and white) devices, responding only to the total number of electrons accumulated in the photodiodes, not to the color of light giving rise to their release from the silicon substrate. Color is detected either by passing the incident light through a sequential series of red, green, and blue filters, or with miniature transparent polymeric thin-film filters that are deposited in a mosaic pattern over the pixel array.

Anatomy of the CMOS Photodiode

A major advantage that CMOS image sensors enjoy over their CCD counterparts is the ability to integrate a number of processing and control functions, which lie beyond the primary task of photon collection, directly onto the sensor integrated circuit. These features generally include timing logic, exposure control, analog-to-digital conversion, shuttering, white balance, gain adjustment, and initial image processing algorithms. In order to perform all of these functions, the CMOS integrated circuit architecture more closely resembles that of a random-access memory cell rather than a simple photodiode array. The most popular CMOS designs are built around active pixel sensor (APS) technology in which both the photodiode and readout amplifier are incorporated into each pixel. This enables the charge accumulated by the photodiode to be converted into an amplified voltage inside the pixel and then transferred in sequential rows and columns to the analog signal-processing portion of the chip.

Thus, each pixel (or imaging element) contains, in addition to a photodiode, a triad of transistors that converts accumulated electron charge to a measurable voltage, resets the photodiode, and transfers the voltage to a vertical column bus. The resulting array is an organized checkerboard of metallic readout busses that contain a photodiode and associated signal preparation circuitry at each intersection. The busses apply timing signals to the photodiodes and return readout information back to the analog decoding and processing circuitry housed away from the photodiode array. This design enables signals from each pixel in the array to be read with simple x,y addressing techniques, which is not possible with current CCD technology.


The architecture of a typical CMOS image sensor is presented in Figure 1 for an integrated circuit die that contains an active image area of 640 × 480 pixels. The photodiode array, located in the large reddish-brown central area of the chip, is blanketed by an ordered thin layer of red, green, and blue-dyed polymeric filters, each sized to fit over an individual photodiode (in a manner similar to the technology utilized for color CCDs). In order to concentrate incident photons into the photodiode electron collection wells, the filtered photodiodes are also housed beneath a miniature positive meniscus lens (see Figures 2, 3, and 4) known as a microlens, or lenticular, array. The inset in Figure 1 reveals a high magnification view of the filters and microlens array. Also included on the integrated circuit illustrated in Figure 1 is analog signal processing circuitry that collects and interprets signals generated by the photodiode array. These signals are then sent to the analog-to-digital conversion circuits, located adjacent to the photodiode array on the upper portion of the chip (as illustrated in Figure 1). Among the other duties performed by the CMOS image sensor are clock timing for the stepwise charge generation, voltage collection, transfer, and measurement duties, as well as image processing and output of the accumulated signals.

A closer look at the photodiode array reveals a sequential pattern of red, green, and blue filters that are arranged in a mosaic pattern named after Kodak engineer Bryce E. Bayer. This color filter array (a Bayer filter pattern) is designed to capture color information from broad bandwidth incident illumination arriving from an optical lens system. The filters are arranged in a quartet (Figure 2(a) and Figure 2(b)) ordered in successive rows that alternate either red and green or blue and green filters (Figure 2(a)). Presented in Figure 2 are digital images captured by a high-resolution optical microscope of a typical Bayer filter array and the underlying photodiodes. Figure 2(a) illustrates a view of alternating filter rows. Each red filter is surrounded by four green and four blue filters, while each blue filter is surrounded by four red and four green filters. In contrast, each green filter is surrounded by two red, four green, and two blue filters. A high magnification image of the basic repeating unit is presented in Figure 2(b), and contains one red, one blue, and two green filters, making the total number of green filters in the array equal to the number of red and blue filters combined. The heavy emphasis placed upon green filters is due to human visual response, which reaches a maximum sensitivity in the 550-nanometer (green) wavelength region of the visible spectrum.
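To make the filter geometry concrete, the short Python sketch below (not part of the original article; the function name and array size are purely illustrative) lays out the label pattern for an RGGB Bayer mosaic and confirms that the green sites equal the red and blue sites combined.

```python
import numpy as np

def bayer_mask(height, width):
    """Label each photodiode site with its filter color for an RGGB Bayer mosaic:
    even rows alternate red/green, odd rows alternate green/blue."""
    mask = np.empty((height, width), dtype="<U1")
    mask[0::2, 0::2] = "R"   # red sites
    mask[0::2, 1::2] = "G"   # green sites on the red rows
    mask[1::2, 0::2] = "G"   # green sites on the blue rows
    mask[1::2, 1::2] = "B"   # blue sites
    return mask

mask = bayer_mask(4, 4)
print(mask)
for color in "RGB":
    print(color, np.count_nonzero(mask == color))   # R: 4, G: 8, B: 4
```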

Also illustrated in Figure 2(b) is a small portion of the microlens array (also termed lenslets) deposited by photolithography onto the surface of the Bayer filters and aligned so that each lens overlies an individual filter. The shape of the miniature lens elements approaches that of a convex meniscus lens and serves to focus incident light directly into the photosensitive area of the photodiode. Beneath the Bayer filter and microlens arrays are the photodiodes themselves, which are illustrated in Figure 2(c) as four complete photodiode assemblies or pixel units. One of the photodiodes in Figure 2(c) is identified with a large white box (upper right-hand corner) that also contains a smaller rectangular box within the larger grid. The white boxes are identified with the letters P and T, which refer to the photon collection (photosensitive) and support transistor areas of the pixel, respectively.

As is evident from examining the photodiode elements in Figure 2(c), a majority of the pixel (approximately 70 percent in this example) area is dedicated to the support transistors (amplifier, reset, and row select), which are relatively opaque to visible light photons and cannot be utilized for photon detection. The remaining 30 percent (the smaller white box labeled P in Figure 2(c)) represents the photosensitive part of the pixel. Because such a small portion of the photodiode is actually capable of absorbing photons to generate charge, the fill factor or aperture of the CMOS chip and photodiodes illustrated in Figures 1, 2, and 3 represents only 30 percent of the total photodiode array surface area. The consequence is a significant loss in sensitivity and a corresponding reduction in signal-to-noise ratio, leading to a limited dynamic range. Fill factor ratios vary from device to device, but in general, they range from 30 to 80 percent of the pixel area in CMOS sensors.

Compounding the reduced fill factor problem is the wavelength-dependent nature of photon absorption, a property referred to as the quantum efficiency of CMOS and CCD image sensors. Three primary mechanisms operate to hamper photon collection by the photosensitive area: absorption, reflection, and transmission. As discussed above, over 70 percent of the photodiode area may be shielded by transistors and stacked or interleaved metallic bus lines, which are optically opaque and absorb or reflect a majority of the incident photons colliding with the structures. These stacked layers of metal can also lead to undesirable effects such as vignetting, pixel crosstalk, light scattering, and diffraction.


Reflection and transmission of incident photons occurs as a function of wavelength, with a high percentage of shorter wavelengths (less than 400 nanometers) being reflected, although these losses can (in some cases) extend well into the visible spectral region. Many CMOS sensors have a yellow polyimide coating applied during fabrication that absorbs a significant portion of the blue spectrum before these photons can reach the photodiode region. Reducing or minimizing the use of polysilicon and polyimide (or polyamide) layers is a primary concern in optimizing quantum efficiency in these image sensors.

Shorter wavelengths are absorbed in the first few microns of the photosensitive region, but progressively longer wavelengths penetrate to greater depths before being totally absorbed. In addition, the longest visible wavelengths (exceeding 650 nanometers) often pass through the photosensitive region without being captured (or generating an electron charge), leading to another source of photon loss. Although the application of microlens arrays helps to focus and steer incoming photons into the photosensitive region and can double the photodiode sensitivity, these tiny elements also demonstrate a selectivity based on wavelength and incident angle.

Presented in Figure 3 is a three-dimensional cutaway drawing of a typical CMOS active sensor pixel illustrating the photosensitive area (photodiode), busses, microlens, Bayer filter, and three support transistors. As discussed above, each APS element in a CMOS image sensor contains an amplifier transistor, which represents the input device of what is generally termed a source follower (the load of the source follower being external to the pixel and common to all the pixels in a column). The source follower is a simple amplifier that converts the electrons (charge) generated by the photodiode into a voltage that is output to the column bus. In addition, the pixel also features a reset transistor to control integration or photon accumulation time, and a row-select transistor that connects the pixel output to the column bus for readout. All of the pixels in a particular column connect to a sense amplifier.

In operation, the first step toward image capture is to initialize the reset transistor in order to drain the charge from the photosensitive region and reverse bias the photodiode. Next, the integration period begins, and light interacting with the photodiode region of the pixel produces electrons, which are stored in the silicon potential well lying beneath the surface (see Figure 3). When the integration period has finished, the row-select transistor is switched on, connecting the amplifier transistor in the selected pixel to its load to form a source follower. The electron charge in the photodiode is thus converted into a voltage by the source follower operation. The resulting voltage appears on the column bus and can be detected by the sense amplifier. This cycle is then repeated to read out every row in the sensor in order to produce an image.

One of the major drawbacks of the three-transistor APS design is the relatively high level of an artifact known as fixed pattern noise (FPN). Variations in amplifier transistor gain and offset, which arise from process fluctuations during CMOS manufacture, produce a mismatch in transistor output performance across the entire array. The result is a noise pattern evident in captured images that is constant and reproducible from one image to another. In most cases, fixed pattern noise can be significantly reduced or eliminated by design tuning of the analog signal processing circuitry located at the periphery of the array or by electronic subtraction of a dark image (flat-field correction).
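The dark-image subtraction mentioned above is easy to demonstrate numerically. The brief sketch below is hypothetical in every detail (array size, noise amplitudes, variable names); it simply shows that a fixed, reproducible offset pattern cancels when a dark reference frame is subtracted, while random temporal noise does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4 x 6 sensor: a fixed per-pixel offset (FPN) plus random temporal noise.
fixed_pattern = rng.normal(0.0, 5.0, size=(4, 6))   # constant from frame to frame
scene = np.full((4, 6), 100.0)                       # uniform illumination

def capture(signal):
    temporal_noise = rng.normal(0.0, 1.0, size=signal.shape)
    return signal + fixed_pattern + temporal_noise

dark_frame = capture(np.zeros((4, 6)))   # shutter closed: offsets only
raw_image = capture(scene)               # illuminated exposure

corrected = raw_image - dark_frame       # fixed pattern cancels; temporal noise remains
print(np.std(raw_image - scene))         # large error, dominated by the fixed pattern
print(np.std(corrected - scene))         # small error, only temporal noise (times sqrt(2))
```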

Mosaic Filter Arrays and Image Reconstruction

The unbalanced nature of Bayer filter mosaic arrays, having twice as many green filters as blue or red, would also appear to present a problem with regard to accurate color reproduction for individual pixels. Typical transmission spectral profiles of the common dyes utilized in the construction of Bayer filters are presented in Figure 4. The quantum efficiency of the red filters is significantly greater than that of the green and blue filters, which are close to each other in overall efficiency. Note the relatively large degree of spectral overlap between the filters, especially in the 520 to 620 nanometer (green, yellow, and orange) region.


A question often arises as to the exact nature of color reproduction and spatial resolution from photodiode arrays having pixels divided into the basic elements of the Bayer filter pattern. A photodiode array having pixel dimensions of 640 × 480 pixels contains a total of 307,200 pixels, which yields 76,800 Bayer quartets. Does this mean that the actual useful image spatial resolution is reduced to 320 × 240 pixels? Fortunately, spatial resolution is primarily determined by the luminance component of color images and not the chrominance (color) component. This occurs because the human brain enables rather coarse color information to be added to fine spatial information and integrates the two almost seamlessly. In addition, the Bayer filters have broad wavelength transmission bands (see Figure 4) with large regions of overlap, which allows spatial information from other spectral regions to pass through the filters rendering each color with a considerable degree of spatial information.

For example, consider an object that reflects a significant amount of yellow light (centered at 585 nanometers) into the lens system of a CMOS digital camera. By examining the Bayer filter transmission spectra in Figure 4, it is obvious that the red and green filters transmit identical amounts of light in this wavelength region. In addition, the blue filters also transmit approximately 20 percent of the wavelengths passed through the other filters. Thus, three of the four Bayer filters in each quartet pass an equal amount of yellow light, while the fourth (blue) filter also transmits some of this light. In contrast, shorter wavelength blue light (435 nanometers; see Figure 4) passes only through the blue filters to any significant degree, reducing both the sensitivity and spatial resolution of images composed mainly of light in this region of the visible spectrum.

After a raw image has been obtained from a CMOS photodiode array blanketed by a Bayer pattern of color filters, it must be converted into standard red, green, and blue (RGB) format through interpolation methodology. This important step is necessary in order to produce an image that accurately represents the scene imaged by the electronic sensor. A variety of sophisticated and well-established image processing algorithms are available to perform this task (directly on the integrated circuit after image capture), including nearest neighbor, linear, cubic, and cubic spline techniques. In order to determine the correct color for each pixel in the array, the algorithms average color values of selected neighboring pixels and produce an estimate of the color (chromaticity) and intensity (luminosity) for each pixel in the array. Presented in Figure 5(a) is a raw Bayer pattern image before reconstruction by interpolation, and in Figure 5(b), the results obtained after processing with a correlation-adjusted version of the linear interpolation algorithm.


As an example of how color interpolation functions, consider one of the green pixels nested in the central region of a Bayer filter array. The pixel is surrounded by two blue, two red, and four green pixels, which are its immediate nearest neighbors. Interpolation algorithms produce an estimate of the green pixel's red and blue values by examining the chromaticity and luminosity values of the neighboring red and blue pixels. The same procedure is repeated for each pixel in the array. This technique produces excellent results, provided that image color changes slowly over a large number of pixels, but can also suffer from artifacts, such as aliasing, at edges and boundary regions where large color and/or intensity transitions occur.
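A minimal sketch of the linear (bilinear) interpolation idea follows, assuming an RGGB mosaic and using convolution kernels that average each pixel's nearest same-color neighbors. It illustrates the general technique only, not the proprietary on-chip algorithm, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Reconstruct an RGB image from a single-channel RGGB Bayer mosaic by
    averaging the nearest neighbors of each missing color at every pixel."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # 4 edge neighbors
    k_rb    = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # edge and diagonal neighbors

    green = convolve(raw * g_mask, k_green, mode="mirror")
    red   = convolve(raw * r_mask, k_rb,    mode="mirror")
    blue  = convolve(raw * b_mask, k_rb,    mode="mirror")
    return np.stack([red, green, blue], axis=-1)

raw = np.random.default_rng(1).random((8, 8))   # stand-in for a raw Bayer frame
print(bilinear_demosaic(raw).shape)             # (8, 8, 3)
```

At a pixel that already carries a given color, the kernel's center weight returns the measured value unchanged; at every other pixel it returns the mean of the two or four nearest measurements of that color, which is exactly the neighbor averaging described above.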

In order to improve quantum efficiency and spectral response, several CMOS designers are turning to the use of color filter arrays based on the primary subtractive colors: cyan, yellow, and magenta (CMY), instead of the standard additive primaries red, green, and blue (RGB) that were discussed above. Among the advantages of using CMY filter arrays are increased sensitivity, resulting from improved light transmission through the filters, and a stronger signal. This occurs because subtractive filter dyes display a reduced absorption of light waves in the visible region when compared to the corresponding additive filters. In contrast to the red, green, and blue filters, which are composites of two or more layers producing additive absorption, CMY filters are applied in a single layer that has superior light transmission characteristics. The downside of CMY filters is a more complex color correction matrix required to convert CMY data collected from the sensor into RGB values that are necessary in order to print or display images on a computer monitor. These algorithms result in the production of additional noise during color conversion, but the enhanced sensitivity obtained with CMY filter arrays can often offset problems encountered during image processing.
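As a rough illustration of the color conversion involved, the sketch below assumes the textbook idealization that each subtractive filter passes exactly two additive primaries (cyan = green + blue, magenta = red + blue, yellow = red + green); an actual sensor would apply a calibrated color-correction matrix rather than the simple matrix shown here, and the names used are hypothetical.

```python
import numpy as np

# Idealized CMY -> RGB conversion derived from cyan = G + B, magenta = R + B,
# yellow = R + G.  Real devices substitute a calibrated 3 x 3 correction matrix.
CMY_TO_RGB = 0.5 * np.array([[-1.0,  1.0,  1.0],   # R = (M + Y - C) / 2
                             [ 1.0, -1.0,  1.0],   # G = (C - M + Y) / 2
                             [ 1.0,  1.0, -1.0]])  # B = (C + M - Y) / 2

def cmy_to_rgb(cmy):
    rgb = np.asarray(cmy, dtype=float) @ CMY_TO_RGB.T
    return np.clip(rgb, 0.0, None)

# A pixel illuminated by pure red light: the cyan filter blocks it (0), while
# the magenta and yellow filters both pass it (1).
print(cmy_to_rgb([0.0, 1.0, 1.0]))   # -> [1. 0. 0.]
```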

Sources and Remedies of Noise

A major problem with CMOS image sensors is the high degree of noise that becomes readily apparent when examining images produced by these devices. Advances in sensor technology have enabled the careful integration of signal processing circuitry alongside the image array, which has substantially dampened many noise sources and dramatically improved CMOS performance. However, other types of noise often plague both designers and end users. As discussed above, fixed pattern noise has been practically eliminated by modern CMOS post-acquisition signal processing techniques, but other forms, such as photon shot noise, dark current, reset noise, and thermal noise are not so easily handled.

During initialization or resetting of the photodiode by the reset transistor, a large noise component termed kTC (or reset) noise is generated that is difficult to remove without enhanced circuit design. The abbreviation k refers to Boltzmann's constant, while T is the operating temperature and C is the total capacitance appearing at the input node of the amplifier transistor, composed of the sum of the photodiode capacitance and the input capacitance of the amplifier transistor. Reset noise can seriously limit the signal-to-noise ratio of the image sensor. Both reset noise and another noise source, commonly referred to as amplifier or 1/f low-frequency noise, can be controlled with a technique known as correlated double sampling (CDS), which must be implemented by adding a fourth "measuring" (or transfer) transistor to every pixel. The double sampling algorithm functions by measuring the reset or amplifier noise alone, then measuring the combined image signal plus reset noise, and subtracting the first measurement from the second.
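The magnitude of reset noise follows directly from the quantities just defined: the rms noise voltage is sqrt(kT/C), or equivalently sqrt(kTC)/q when expressed in electrons. The short calculation below uses a hypothetical but representative sense-node capacitance.

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def reset_noise(temperature_k, capacitance_f):
    """RMS kTC (reset) noise: sqrt(kT/C) as a voltage, sqrt(kTC)/q in electrons."""
    v_rms = math.sqrt(k * temperature_k / capacitance_f)
    electrons_rms = math.sqrt(k * temperature_k * capacitance_f) / q
    return v_rms, electrons_rms

# Hypothetical pixel: 10 fF of input-node capacitance at room temperature (300 K).
v_rms, e_rms = reset_noise(300.0, 10e-15)
print(f"{v_rms * 1e3:.2f} mV rms, about {e_rms:.0f} electrons rms")   # ~0.64 mV, ~40 e-
```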

Photon shot noise is readily apparent in captured images as a random pattern that occurs because of temporal variation in the output signal due to statistical fluctuations in the amount of illumination. Each photodiode in the array produces a slightly different level of photon shot noise, which in the extreme can seriously affect CMOS image sensor performance. This type of noise is the dominant source of noise for signals much larger than the intrinsic noise floor of the sensor, and is present in every image sensor, including CCDs. Dark current is generated by artifacts that produce signal charge (electrons) in the absence of illumination, and can exhibit a significant degree of fluctuation from pixel to pixel, which is heavily dependent upon operating conditions. This type of noise is temperature-sensitive, and can be removed by cooling the image sensor or through an additional frame store, which is placed in random access memory and subtracted from the captured image.
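Because photon arrival obeys Poisson statistics, the rms shot noise equals the square root of the collected signal, so the shot-noise-limited signal-to-noise ratio also grows as the square root of the signal. A short illustration with hypothetical charge levels:

```python
import math

def shot_noise_snr(signal_electrons):
    """Shot-noise-limited SNR: noise = sqrt(signal), so SNR = sqrt(signal)."""
    return signal_electrons / math.sqrt(signal_electrons)

for signal in (100, 10_000, 40_000):             # hypothetical collected charges
    print(f"{signal} e-  ->  SNR {shot_noise_snr(signal):.0f}:1")
# 100 e-    ->  SNR 10:1
# 10000 e-  ->  SNR 100:1
# 40000 e-  ->  SNR 200:1
```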

Dark current is virtually impossible to eliminate, but can be reduced through the utilization of pinned photodiode technology during CMOS sensor fabrication. To create a pinned photodiode pixel, a shallow layer of P-type silicon is applied to the surface of a typical N-well photosensitive region to produce a dual-junction sandwich that alters the visible light spectral response of the pixel. The surface junction is optimized for responding to lower wavelengths (blue), while the deeper junction is more sensitive to the longer wavelengths (red and infrared). As a result, electrons collected in the potential well are confined near the N region, away from the surface, which leads to a reduction of dark current and its associated noise elements. In practice, it can be difficult to construct a pinned photodiode pixel that produces a complete reset in the low-voltage environment under which CMOS sensors operate. If a complete reset condition is not achieved, lag can be introduced into the array with a corresponding increase in reset transistor noise. Another benefit of pinned photodiode technology is improved blue response, due to enhanced capture of short-wavelength visible light in the vicinity of the P-silicon layer interface.

The transistors, capacitors, and busses intertwined among the photosensitive areas of the pixels are responsible for inducing thermal noise in CMOS image sensors. This type of noise can be reduced by fine-tuning the imager bandwidth, increasing the output current, or cooling the camera system. In many cases, the CMOS pixel readout sequence can be utilized to reduce thermal noise by limiting the bandwidth of each transistor amplifier. It is not practical to add complex and expensive Peltier or similar cooling apparatus to low-cost CMOS image sensors, so such cooling methods are generally not employed for noise reduction in these devices.

CMOS Pixel Architecture

There are two basic photosensitive pixel element architectures utilized in modern CMOS image sensors: photodiodes and photogates (see Figure 6). In general, photodiode designs are more sensitive to visible light, especially in the short-wavelength (blue) region of the spectrum. Photogate devices usually have larger pixel areas, but a lower fill factor and much poorer blue light response (and general quantum efficiency) than photodiodes. However, photogates often reach higher charge-to-voltage conversion gain levels and can easily be utilized to perform correlated double sampling to achieve frame differencing.


Photogate active pixel sensors utilize several aspects of CCD technology to reduce noise and enhance the quality of images captured with CMOS image sensors. Charge accumulated under the photogate during integration is localized to a potential well controlled by an access transistor. During readout, the support pixel circuitry performs a two-stage transfer of charge (as a voltage) to the output bus. The first step occurs by conversion of the accumulated charge into a measurable voltage by the amplifier transistor. Next, the transfer gate is pulsed to initiate transport of charge from the photosensitive area to the output transistor, from which the signal is passed on to the column bus. This transfer technique allows two signal sampling opportunities that can be utilized through efficient design to improve noise reduction. The pixel output is first sampled after photodiode reset, and once again after integrating the signal charge. By subtracting the first signal from the second to remove low frequency reset noise, the photogate active pixel architecture can perform correlated double sampling.

A major benefit of photogate designs is their reduced noise when operating at low light levels, as compared to photodiode sensors. Photodiode-based CMOS sensors are useful for mid-level performance consumer applications that do not require highly accurate images with low noise, superior dynamic range, and highly resolved color characteristics. Both devices capitalize on economical power requirements that can be satisfied with batteries, low voltage supplies from computer interfaces (USB and FireWire), or other direct current power supplies. Typically, the voltage requirement for a CMOS processor ranges from 3.3 to 5.0 volts, but newer designs are migrating to values that are reduced by half.

CMOS Image Sensor Operational Sequence

In most CMOS photodiode array designs, the active pixel area is surrounded by a region of optically shielded pixels, arranged in 8 to 12 rows and columns, which are utilized in black level compensation. The Bayer (or CMY) filter array starts with the upper left-hand pixel in the first unshielded row and column. When each integration period begins, all of the pixels in the same row will be reset by the on-board timing and control circuit, one row at a time, traversing from the first to the last row catalogued by the line address register (see Figure 7). For a sensor device with analog output, when integration has been completed, the same control circuitry will transfer the integrated value of each pixel to a correlated double sampling circuit (CDS block in Figure 7) and then to the horizontal shift register. After the shift register has been loaded, the pixel information will be serially shifted (one pixel at a time) to the analog video amplifier. The gain of this amplifier is controlled either by hardware or software (and in some cases, a combination of both). In contrast, CMOS image sensors with digital readout utilize an analog-to-digital converter for every column, and conversion is conducted in parallel for each pixel in a row. A digital bus having a width equal to the number of bits over which the conversion is accomplished is then employed to output the data. In this case, only the digital values are "serially" shifted. White balance algorithms are often applied to the pixels at this stage.

After the gain and offset values are set in the video amplifier (labeled Video Amp in Figure 7), the pixel information is then passed to the analog-to-digital converter where it is rendered into a linear digital array of binary digits. Subsequently, the digital pixel data is further processed to remove defects that occur in "bad" pixels and to compensate black levels before being framed and presented on the digital output port. The black level compensation algorithm (often referred to as a frame rate clamp) subtracts the average signal level of the black pixels surrounding the array from the digital video output to compensate for temperature and time-dependent dark noise levels in the active pixel array.
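A simplified sketch of this frame rate clamp is shown below, assuming an 8-pixel optically shielded border as described earlier; the function name and frame values are illustrative only.

```python
import numpy as np

def clamp_black_level(frame, border=8):
    """Subtract the mean signal of the optically shielded border pixels so the
    dark level of the active array sits at zero, compensating dark current drift."""
    dark_reference = np.concatenate([
        frame[:border, :].ravel(),     # shielded rows, top
        frame[-border:, :].ravel(),    # shielded rows, bottom
        frame[:, :border].ravel(),     # shielded columns, left
        frame[:, -border:].ravel(),    # shielded columns, right
    ])
    return frame - dark_reference.mean()

# Hypothetical raw frame with a uniform 64-count dark offset on every pixel.
raw = np.full((480, 640), 64.0)
raw[8:-8, 8:-8] += 100.0                       # illuminated active region
print(clamp_black_level(raw)[240, 320])        # ~100.0 after the clamp
```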

The next step in the sequence is image recovery (see Figure 7) and the application of fundamental algorithms necessary to prepare the final image for display encoding. Nearest neighbor interpolation is performed on the pixels, which are then filtered with anti-aliasing algorithms and scaled. Additional image processing steps in the recovery engine often include anti-vignetting, spatial distortion correction, white and black balancing, smoothing, sharpening, color balance, aperture correction, and gamma adjustment. In some cases, CMOS image sensors are equipped with auxiliary circuits that enable on-chip features such as anti-jitter (image stabilization) and image compression. When the image has been sufficiently processed, it is sent to a digital signal processor for buffering to an output port.


Because CMOS image sensors are capable of accessing individual pixel data throughout the entire photodiode array, they can be utilized to selectively read and process only a selected portion of the pixels captured for a specific image. This technique is known as windowing (or window-of-interest readout), and dramatically expands the image-processing possibilities with these sensors. Windowing is controlled directly on the chip through the timing and control circuit, which enables any size window in any position within the active region of the array to be accessed and displayed with one-to-one pixel resolution. This feature can be extremely useful when temporal motion tracking of an object in one subregion of the image is necessary. It can also be employed for on-chip control of electronic pan, zoom, accelerated readout, and tilt operations on a selected portion or the entire image.
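In software terms, window-of-interest readout amounts to addressing only a rectangular subregion of the array; on the actual chip the selection is performed by the timing and control circuitry rather than by slicing a full frame, so the sketch below is only a conceptual illustration.

```python
import numpy as np

def read_window(frame, top, left, height, width):
    """Return only the requested subregion, at full one-to-one pixel resolution."""
    return frame[top:top + height, left:left + width]

frame = np.arange(480 * 640).reshape(480, 640)     # stand-in for the full pixel array
roi = read_window(frame, top=100, left=200, height=64, width=64)
print(roi.shape)                                   # (64, 64)
```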

A majority of high-end CMOS sensors feature several readout modes (similar to those employed in CCD sensors) to increase versatility in software interface programming and shuttering. Progressive scan readout mode enables every pixel in each row within the photodiode array to be consecutively accessed (one pixel at a time) starting with the upper left-hand corner and progressing to the lower right-hand corner. Another popular readout mode is termed interlaced, and operates by reading pixel data in two consecutive fields, an odd field followed by an even field. The fields alternate in rows from the top of the array to the bottom, and each row of a group is recorded sequentially before the next group is read. As an example, in a sensor having 40 pixel rows, the first, third, fifth and so on down to the 39th row are read first, followed by the second, fourth, sixth, down to the 40th row.
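The two readout orders can be illustrated with the 40-row example from the text; the helper functions below are hypothetical and simply enumerate 1-based row numbers in the order they would be read.

```python
def progressive_rows(n_rows):
    """Progressive scan: rows are read consecutively from top to bottom."""
    return list(range(1, n_rows + 1))

def interlaced_rows(n_rows):
    """Interlaced scan: the odd field (rows 1, 3, 5, ...) is read first,
    followed by the even field (rows 2, 4, 6, ...)."""
    odd_field = list(range(1, n_rows + 1, 2))
    even_field = list(range(2, n_rows + 1, 2))
    return odd_field + even_field

order = interlaced_rows(40)
print(order[:4], "...", order[18:22], "...", order[-2:])
# [1, 3, 5, 7] ... [37, 39, 2, 4] ... [38, 40]
```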

Electronic shuttering in CMOS image sensors requires the addition of one or more transistors to each pixel, a somewhat impractical approach considering the already compromised fill factor in most devices. This is the case for most area-scan image sensors. However, line-scan sensors have been developed that have shutter transistors placed adjacent to the pixel active area in order to reduce the fill factor load. Many designers have implemented a nonuniform rolling shutter solution that exposes sequential rows in the array at different time intervals utilizing a minimum of in-pixel transistors. Although rolling shutter mechanisms operate well for still images, they can produce motion blurs leading to distorted images at high frame rates. To solve this problem, engineers have crafted uniform synchronous shutter designs that expose the entire array at one time. Because this technique requires extra transistors at each pixel, there is some compromise of fill factor ratios unless larger pixels are simultaneously implemented.

The dynamic range of a CMOS image sensor is determined by the maximum number of signal electrons accumulated by the photodiodes (charge capacity) divided by the sum of all components of sensor read noise (noise floor), including temporal noise sources arising over a specific integration time. The contribution from all dark noise sources, such as dark current noise, as well as pixel read noise, and temporal noise arising from the signal path (but not photon shot noise), is included in this calculation. The noise floor limits image quality in dark regions of the image, and increases with exposure time due to dark current shot noise. In effect, therefore, the dynamic range is the ratio of the largest detectable signal to the smallest simultaneously detectable signal (the noise floor). Dynamic range is often reported in gray levels, decibels, or bits, with higher ratios of signal electrons to noise producing greater dynamic range values (more decibels or bits). Note that dynamic range is governed by sensor signal-to-noise characteristics, while bit depth is a function of the analog-to-digital converter(s) employed in the sensor. Thus, a 12-bit digital conversion corresponds to slightly over 4,000 gray levels or 72 decibels, while 10-bit digitization can resolve 1,000 gray levels, an appropriate bit depth for a 60-decibel dynamic range. As the dynamic range of a sensor is increased, the ability to simultaneously record the dimmest and brightest intensities in an image (intrascene dynamic range) is improved, as are the quantitative measurement capabilities of the detector. The interscene dynamic range represents the spectrum of intensities that can be accommodated when detector gain, integration time, lens aperture, and other variables are adjusted for differing fields of view.
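The relationships quoted here follow from the standard definition of dynamic range as the ratio of full-well capacity to the noise floor, expressed in decibels (20 log10 of the ratio) or equivalent bits (log2 of the ratio). A quick check with hypothetical sensor values:

```python
import math

def dynamic_range(full_well_electrons, noise_floor_electrons):
    """Dynamic range as a ratio, in decibels, and in equivalent bits."""
    ratio = full_well_electrons / noise_floor_electrons
    return ratio, 20.0 * math.log10(ratio), math.log2(ratio)

# Hypothetical sensor: 40,000 e- charge capacity, 10 e- read-noise floor.
ratio, db, bits = dynamic_range(40_000, 10)
print(f"{ratio:.0f}:1  {db:.0f} dB  {bits:.1f} bits")      # 4000:1  72 dB  12.0 bits

# Consistency with the text: 12 bits span ~4,096 levels (~72 dB) and
# 10 bits span ~1,024 levels (~60 dB).
print(20 * math.log10(2 ** 12), 20 * math.log10(2 ** 10))  # 72.2  60.2
```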

One of the most versatile capabilities of CMOS image sensors is their ability to capture images at very high frame rates. This enables recording of time-lapse sequences and real-time video through software-controlled interfaces. Rates between 30 and 60 frames per second are common, while several high-speed imagers can achieve accelerated rates of more than 1000 frames per second. Additional support circuitry, including co-processors and external random access memory, is necessary in order to produce camera systems that can take advantage of these features.

Conclusions

CMOS image sensors are fabricated on well-established standard silicon processes in high-volume wafer plants that also produce related chips such as microprocessors, memory circuits, microcontrollers, and digital signal processors. The tremendous advantage is that digital logic circuits, clock drivers, counters, and analog-to-digital converters can be placed on the same silicon foundation and at the same time as the photodiode array. This enables CMOS sensors to participate in process shrinks that move to smaller linewidths with a minimum of redesign, in a manner similar to other integrated circuits. Even so, in order to guarantee low-noise devices with high performance, the standard CMOS fabrication process must often be modified to specifically accommodate image sensors. For example, standard CMOS techniques for creating transistor junctions in logic chips might produce high dark currents and low blue response when applied to an imaging device. Optimizing the process for image sensors often involves tradeoffs that render the fabrication scenario unreliable for common CMOS devices.

Pixel size has continued to shrink during the past few years, from the 10-20 micron giant pixels that ruled the mid-1990s devices, to the 6-8 micron sensors currently swamping the market. A greater demand for miniature electronic imaging devices, such as surveillance and telephone cameras, has prompted designers to drop pixel sizes even further. Image sensors featuring 4-5 micron pixels are being utilized in devices with smaller arrays, but multi-megapixel chips will require pixel sizes in the 3 to 4 micron range. In order to achieve these dimensions, CMOS image sensors must be produced on 0.25-micron or narrower fabrication lines. By employing narrower line widths, more transistors can be packed into each pixel element while maintaining acceptable fill factors, provided that scaling ratio factors approach unity. With 0.13 to 0.25-micron fabrication lines, advanced technology, such as in-pixel analog-to-digital converters, full-color processing, interface logic, and other associated complex circuitry tuned to increase the flexibility and dynamic range of CMOS sensors should become possible.

Although many CMOS fabrication plants lack the process steps for adding color filters and microlens arrays, these steps are being increasingly implemented for image sensor production as market demands grow. In addition, optical packaging techniques, which are critical to imaging devices, require clean rooms and flat-glass handling equipment not usually found in plants manufacturing standard logic and processor integrated circuits. Thus, ramp-up costs for image sensor fabrication can be significant.

The list of applications for CMOS image sensors has grown dramatically in the past several years. Since the late 1990s, CMOS sensors have accounted for increasing numbers of the imaging devices marketed in applications such as fax machines, scanners, security cameras, toys, games, PC cameras and low-end consumer cameras. The versatile sensors will also probably begin to appear in cell phones, bar code readers, optical mice, automobiles, and perhaps even domestic appliances in the coming years. Due to their ability to capture sequential images at high frame rates, CMOS sensors are being increasingly utilized for industrial inspection, weapons systems, fluid dynamics, and medical diagnostics. Although not expected to replace CCDs in most of the higher-end applications, CMOS image sensors should continue to find new homes as the technology advances.

Original article: Digital Imaging in Optical Microscopy - Introduction to CMOS Image Sensors | Olympus LS (olympus-lifescience.com.cn)
