Deep color

Reposted from Doctor HDMI


In computer graphics, color depth or bit depth is the number of bits used to indicate the color of a single pixel in a bitmapped image or video frame buffer. The same concept is also expressed as bits per pixel (bpp), particularly when the number of bits is quoted explicitly. Higher color depth gives a broader range of distinct colors.
Color depth is only one aspect of color representation, expressing how finely levels of color can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut).
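As a quick illustration of the relationship between bit depth and the number of distinct values it can encode, here is a minimal Python sketch (the depths listed are just common examples from this article):

```python
# Number of distinct colors representable at a given bit depth: 2 ** bpp
for bpp in (1, 4, 8, 16, 24, 30):
    print(f"{bpp:>2} bpp -> {2 ** bpp:,} colors")

# e.g. 24 bpp -> 16,777,216 colors; 30 bpp -> 1,073,741,824 colors
```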

Indexed color

With relatively low color depth, the stored value is typically a number representing the index into a color map or palette. The colors available in the palette itself may be fixed by the hardware or modifiable within the limits of the hardware. For instance, both color Macintosh systems and VGA-equipped IBM PCs typically ran at 8-bit depth due to limited VRAM, but while even the best VGA systems offered only an 18-bit (262,144-color) palette from which colors could be chosen, all color Macintosh video hardware offered a 24-bit (16 million color) palette. Modifiable palettes are sometimes referred to as pseudocolor palettes.
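A minimal sketch of how an indexed-color frame buffer resolves to displayable RGB values; the palette entries and pixel data below are made up purely for illustration:

```python
# Hypothetical 2-bit indexed image: each pixel stores an index, not a color.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
    (0, 0, 255),      # index 3: blue
]

indexed_pixels = [0, 1, 2, 3, 1, 0]  # 2 bits per pixel suffice for 4 entries

# The display hardware (or software) looks each index up in the palette.
rgb_pixels = [palette[i] for i in indexed_pixels]
print(rgb_pixels)
```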
[Image: 1 bit (2 colors)]

  • 1-bit color (2^1 = 2 colors) monochrome, often black and white, compact Macintoshes, Atari ST.
  • 2-bit color (2^2 = 4 colors) CGA, gray-scale early NeXTstation, color Macintoshes, Atari ST.

[Image: 2 bits (4 colors)]

  • 3-bit color (2^3 = 8 colors) many early home computers with TV displays

[Image: 4 bits (16 colors)]

  • 4-bit color (2^4 = 16 colors) as used by EGA and by the least common denominator VGA standard at higher resolution, color Macintoshes, Atari ST.
  • 5-bit color (2^5 = 32 colors) Original Amiga chipset.
  • 6-bit color (2^6 = 64 colors) Original Amiga chipset.

[Image: 8 bits (256 colors)]

  • 8-bit color (2^8 = 256 colors) most early color Unix workstations, VGA at low resolution, Super VGA, color Macintoshes, Atari TT, AGA, Falcon030.
  • 12-bit color (2^12 = 4096 colors) some Silicon Graphics systems, Neo Geo, Color NeXTstation systems, and Amiga systems in HAM mode.
  • 13-bit color (2^13 = 8192 colors)
  • 14-bit color (2^14 = 16384 colors)
  • 15-bit color (2^15 = 32768 colors)
  • 16-bit color (2^16 = 65536 colors) some color Macintoshes.

Old graphics chips, particularly those used in home computers and video game consoles, often feature an additional level of palette mapping in order to increase the maximum number of simultaneously displayed colors. For example, in the ZX Spectrum, the picture is stored in a two-color format, but these two colors can be separately defined for each rectangular block of 8x8 pixels.
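A toy model of this per-block scheme (deliberately simplified, not the Spectrum's exact memory layout): the bitmap stores 1 bit per pixel, while a separate attribute array holds the two colors used inside each 8×8 block:

```python
# Toy attribute-color model: a 1-bit bitmap plus a per-block pair of colors.
BLOCK = 8

def pixel_color(x, y, bitmap, attributes):
    """Resolve the on-screen color of pixel (x, y).

    bitmap[y][x]       -> 0 or 1
    attributes[by][bx] -> (background_rgb, foreground_rgb) for that 8x8 block
    """
    bg, fg = attributes[y // BLOCK][x // BLOCK]
    return fg if bitmap[y][x] else bg
```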

Direct color

As the number of bits increases, the number of possible colors becomes impractically large for a color map. So in higher color depths, the color value typically directly encodes relative brightnesses of red, green, and blue to specify a color in the RGB color model.

A typical computer monitor and video card may offer 8 bits of resolution (256 output levels) per R/G/B color channel, for an overall 24-bit color space (or a 32-bit space once alpha-transparency bits, which have no bearing on output resolution, are included), though earlier standards offered 6 bits per channel (64 levels) or less. The DVD standard defines up to 10 bits of resolution (1,024 levels) for each of the Y/U/V video-encoding channels (luminance plus two chrominance channels).

8-bit color

A very limited but true direct-color system: there are 3 bits (8 possible levels) for each of the R and G components, and the two remaining bits of the pixel's byte go to the B component (4 levels), enabling 256 (8 × 8 × 4) different colors. The normal human eye is less sensitive to the blue component than to the red or green,[citation needed] so blue is assigned one bit fewer than the others. Used, amongst others, in the MSX2 series of computers in the early to mid 1990s.

This should not be confused with an indexed color depth of 8 bpp (although 3-3-2 color can be simulated on such indexed systems by selecting a suitable palette).
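A minimal sketch of packing and unpacking the 3-3-2 layout described above; the bit positions (red in the high bits, blue in the low two bits) are chosen here for illustration:

```python
def pack_rgb332(r, g, b):
    """Pack 8-bit-per-channel RGB into one 3-3-2 byte (lossy)."""
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def unpack_rgb332(byte):
    """Expand a 3-3-2 byte back to approximate 8-bit-per-channel RGB."""
    r = (byte >> 5) & 0b111
    g = (byte >> 2) & 0b111
    b = byte & 0b11
    # Scale each field back to the 0-255 range.
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)

print(unpack_rgb332(pack_rgb332(200, 100, 50)))
```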

High color (15/16-bit)

High color supports 15 or 16 bits for the three RGB channels. In 16-bit direct color, there can be 4 bits (16 possible levels) for each of the R, G, and B components, plus optionally 4 bits for alpha (transparency), enabling 4,096 (16 × 16 × 16) different colors with 16 levels of transparency. Alternatively, some systems use 5 bits per color component and 1 bit of alpha (32,768 colors, with each pixel either fully transparent or fully opaque), or 5 bits for red, 6 bits for green, and 5 bits for blue, for 65,536 colors with no transparency. These color depths are sometimes used in small devices with a color display, such as mobile telephones.
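A minimal sketch of the common 5-6-5 packing mentioned above, with a 1-5-5-5 alpha variant for comparison; the field order is an assumption here, as it differs between systems:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit 5-6-5 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def pack_argb1555(a, r, g, b):
    """Pack 1-bit alpha plus 5-5-5 RGB into a 16-bit value."""
    return ((1 if a else 0) << 15) | ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)

print(hex(pack_rgb565(255, 255, 255)))      # 0xffff
print(hex(pack_argb1555(True, 255, 0, 0)))  # 0xfc00
```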

Variants with 5 or more bits per color component are sometimes called high color, which is sometimes considered sufficient to display photographic images.

Almost all cheap LCD displays (such as typical twisted nematic types) operate with 18-bit color (64 × 64 × 64 = 262,144 combinations) to achieve faster transition times; they must then either use dithering or frame rate control to approximate 24-bit-per-pixel truecolor, or throw away 6 bits of color information entirely. The best LCD displays can show 24-bit or greater color depth.
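A minimal sketch of spatial (ordered) dithering from 8 bits per channel down to the 6 bits such a panel can actually show; the 2×2 Bayer threshold matrix used here is one common choice, and frame rate control works similarly but varies the rounding over time rather than across neighbouring pixels:

```python
# Ordered dithering of one channel from 8-bit (0-255) to 6-bit (0-63) values.
BAYER_2X2 = [[0, 2],
             [3, 1]]  # thresholds 0..3, matching the 4:1 quantization step

def dither_channel_to_6bit(value, x, y):
    threshold = BAYER_2X2[y % 2][x % 2]
    return min(63, (value + threshold) // 4)

# Neighbouring pixels of the same 8-bit value round differently,
# so their average approximates the original level.
print([dither_channel_to_6bit(130, x, 0) for x in range(4)])  # [32, 33, 32, 33]
```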

True color (24-bit)
[Image: 24 bits (16,777,216 colors, “truecolor”)]

True color supports 24 bits for the three RGB channels. It is a method of representing and storing graphical image information (especially in computer processing) in an RGB color space such that a very large number of colors, shades, and hues can be displayed in an image, as in high-quality photographic images or complex graphics. Usually, truecolor is defined to mean at least 256 shades each of red, green, and blue, for a total of at least 16,777,216 color variations. The human eye is capable of discriminating among as many as ten million colors.
[Image: Color images composed from three grayscale images A, B, and C assigned to R, G, and B in different orders]
Truecolor can also refer to an RGB display mode that does not need a color look-up table (CLUT).

For each pixel, generally one byte is used per channel, while the fourth byte (if present) is used either for alpha channel data or is simply ignored. Byte order is usually either RGB or BGR. Some systems exist with more than 8 bits per channel, and these are often also referred to as truecolor (for example, a 48-bit truecolor scanner).
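A minimal sketch of a 32-bit pixel in the layout described above, with one byte per channel and the fourth byte carrying alpha; whether memory order is RGBA or BGRA is platform-dependent:

```python
import struct

def pack_rgba8888(r, g, b, a=255):
    """Pack four 8-bit channels into a 4-byte pixel (RGBA memory order here)."""
    return struct.pack("4B", r, g, b, a)

def pack_bgra8888(r, g, b, a=255):
    """Same channels, BGR(A) memory order, as used by some systems."""
    return struct.pack("4B", b, g, r, a)

print(pack_rgba8888(255, 128, 0).hex())  # 'ff8000ff'
print(pack_bgra8888(255, 128, 0).hex())  # '0080ffff'
```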

Even with truecolor, monochromatic images, which are restricted to 256 levels, owing to their single channel, can sometimes still reveal visible banding artifacts.

Truecolor, like other RGB color models, cannot express colors outside of the gamut of its RGB color space (generally sRGB).

On Macintosh systems, 24-bit color is referred to as “millions of colors.”

Many modern desktop systems (Mac OS X, GNOME, KDE, Windows XP/Vista/7, etc.) offer an option for 24-bit truecolor with 8 bits for an alpha channel, which is referred to as “32-bit color”. When switching to an 8/16/24-bit color option in those systems, generally transparency/translucency effects are disabled, and the only reduction in color depth is seen when going to 8/16-bit color.

Deep color (30/36/48-bit)

Deep color is a term used to describe a gamut comprising a billion or more colors. The xvYCC, sRGB, and YCbCr color spaces can be used with deep color systems.

Deep color supports 30, 36, 48, or 64 bits for the three RGB channels. Video cards with 10 bits per color component (30-bit RGB) started coming onto the market in the late 1990s. An early example was the Radius ThunderPower card for the Macintosh, which included extensions for QuickDraw and Adobe Photoshop plugins to support editing 30-bit images.

Systems using more than 24 bits in a 32-bit pixel for actual color data exist, but most of them opt for a 30-bit implementation with two bits of padding so that they can have an even 10 bits of color for each channel, similar to many HiColor systems.
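A minimal sketch of the 30-bit-in-32 layout just described: 10 bits per color channel plus 2 padding (or alpha) bits; the exact field order is an assumption here:

```python
def pack_rgb10a2(r10, g10, b10, a2=0):
    """Pack three 10-bit channels (0-1023) and a 2-bit field into 32 bits."""
    assert 0 <= r10 < 1024 and 0 <= g10 < 1024 and 0 <= b10 < 1024 and 0 <= a2 < 4
    return (a2 << 30) | (r10 << 20) | (g10 << 10) | b10

print(hex(pack_rgb10a2(1023, 512, 0)))  # 0x3ff80000
```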

While some high-end graphics workstations and the accessories marketed for use with them, such as those from SGI, have always used more than 8 bits per channel (12 or 16 bits, i.e. 36-bit or 48-bit color), such color depths have only worked their way into the general market more recently.

As bit depths climb above 8 bits per channel, some systems use the extra bits to store more intensity range than can be displayed all at once, as in high dynamic range imaging (HDRI). Floating-point numbers are used to describe values in excess of “full” white and black. This allows an image to accurately describe the intensity of the sun and deep shadows in the same color space, with less distortion after intensive editing. Various models describe these ranges, many employing 32-bit accuracy per channel. A newer format is ILM's “half”, which uses 16-bit floating-point numbers; this appears to be a much better use of 16 bits than 16-bit integers and is likely to replace them entirely as hardware becomes fast enough to support it.
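A minimal sketch (using NumPy for its IEEE half-precision type) of why floating point suits HDR storage: a 16-bit integer channel clips at its maximum “full white” value, whereas a 16-bit float can carry values above 1.0 and still keep fine steps near black:

```python
import numpy as np

# 16-bit integer channel: anything brighter than "full white" clips to the max.
over_exposed = np.clip(1.5 * 65535, 0, 65535).astype(np.uint16)
print(over_exposed)          # 65535 -- the extra intensity is lost

# 16-bit "half" float channel: values above 1.0 survive for later editing.
sun = np.float16(1.5)        # 50% brighter than nominal white
shadow = np.float16(0.0004)  # deep shadow detail still representable
print(sun, shadow)
```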

Windows 7 includes support for up to 48-bit color.

Industry support

The HDMI 1.3 specification defines bit depths of 30 bits (1.073 billion colors), 36 bits (68.71 billion colors), and 48 bits (281.5 trillion colors). In that regard, the NVIDIA Quadro graphics cards support 30-bit deep color as do some models of the Radeon HD 5900 series such as the HD 5970. The ATI FireGL V7350 graphics card supports 40-bit and 64-bit color.

The DisplayPort specification also supports color depths greater than 24 bpp.

At WinHEC 2008, Microsoft announced that color depths of 30 bits and 48 bits would be supported in Windows 7, along with the wide color gamut scRGB (which can be converted to xvYCC output).

Television color

Virtually all television displays and computer displays form images by varying the strength (technically, tristimulus values) of just three primary colors: red, green, and blue. Bright yellow, for example, is formed by roughly equal red and green contributions, with little or no blue contribution. Recent technologies such as Texas Instruments’s BrilliantColor augment the typical red, green, and blue channels with up to three other primaries: cyan, magenta and yellow. Mitsubishi and Samsung, among others, use this technology in some TV sets. However, the associated signal processing typically fails to mimic additive mixing; hence, the displayed colors are distorted (for example, compared to sRGB source values). The Sharp Aquos line of televisions has introduced Quattron technology, which augments the usual RGB pixel components with a yellow subpixel. Again, signal processing fails in general to follow the laws of additive mixing, and colors are distorted.

Analog TVs use continuous signals which have no fixed number of different colors, although the signals are subject to noise introduced in transmission.
