Visual Perception and Image Quality

Reposted from http://www.ricoh.com/about/company/technology/voice/column/009.html

We can read small characters without difficulty. Do our eyes really have such high resolution? Is the resolution of the eye's lens much better than that of a state-of-the-art camera lens?

Before answering, test your vision as follows. In a dark room, try to read a book under a light that flashes once every second. Because the human eye retains an afterimage, in theory you should be able to read without difficulty. In reality, you cannot. You can grasp the rough shape of the object, but you cannot focus on the characters. Even when the eye happens to focus on the object, the image is still blurred, and the perceived image area is too small to read the document.

Image quality improves with eye movement. The quality of each individual snapshot is low, but the perceived image is good enough for reading. What mechanism supports this image processing? It is said that resolution improves by more than a factor of 10, and that gray-level tone reaches about 12 bits (4096 levels). Figure 1 shows a simulation in which several coarsely sampled low-resolution images (a) are processed into a single high-resolution image (b). The human brain appears to have a similar processing capability. For details, see Shin Aoki, "Super resolution image processing from multiple digital images", Image Sensing Symposium, 1999, Yokohama.
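The idea behind Figure 1 can be sketched as "shift-and-add" super-resolution: each low-resolution frame is placed onto a finer grid at its known sub-pixel shift, and overlapping samples are averaged. This is a minimal illustration of the principle, not the algorithm from the paper cited above; the function name and the assumption that the shifts are known exactly are mine.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive multi-frame super-resolution on 2-D grayscale frames:
    place each low-resolution sample onto a grid `factor` times finer,
    offset by that frame's known sub-pixel shift (dy, dx), then
    average wherever samples overlap."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its nearest high-res grid cell.
        hy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        hx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1.0)
    cnt[cnt == 0] = 1.0          # avoid division by zero in empty cells
    return acc / cnt
```

With four frames shifted by half a pixel in each direction and factor 2, every cell of the fine grid receives exactly one sample, which is why several randomly shifted low-resolution views can fill in a higher-resolution image.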


Fig.1-(a) Coarsely sampled low resolution image


Fig.1-(b) Processed high resolution image using multiple randomly phase shifted images


On the human retina, only a small area around the fovea is densely packed with light-sensitive cells. If the eye could not move, the effective visual field would be very small. To recognize a document, which often spans more than 30 degrees, eye movement is essential. A camera simulation is shown in Figure 2. A camera on a fixed tripod takes two images of the same object. The two red circles mark the identical object, yet the two views of it are slightly deformed relative to each other. Why does this happen with the same camera, from the same shooting point? The only difference is the direction of the lens. We never notice such deformation in what we see; in reality, automatic image processing appears to be at work in the brain. For details, see Ejiri, Aoki, Guan, "Panorama Image Synthesis", Optronics, 1999, pp. 140-144 (in Japanese).


Fig.2

The two images are taken in two different directions, with the common object circled in red. The two views of the object are deformed relative to each other, but the human eye does not perceive the deformation.
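The deformation in Figure 2 has a simple projective explanation: two images taken from the same optical center but with different lens directions are related by the homography H = K R K^{-1}, where K is the camera's intrinsic matrix and R the rotation between the two shots. A minimal sketch under a pinhole-camera assumption (the function names and the example intrinsics below are illustrative, not from the cited paper):

```python
import numpy as np

def rotation_homography(K, yaw_deg):
    """Homography relating two images taken from the same optical
    center, differing only by a rotation of yaw_deg about the
    vertical axis: H = K R K^{-1}."""
    t = np.radians(yaw_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Apply homography H to pixel (x, y) in homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Because H is a full projective transform rather than a pure shift, straight shapes are warped differently in the two views; stitching software (and, apparently, the brain) must undo exactly this warp before the images agree.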


Figure 3 shows the result of panorama image processing.

Fig.3 Panorama image synthesis from 4 different subimages.

Figure 4 is more advanced. After each image is shot with a digital camera, the panorama-processed images are projected onto the surface of a sphere. The images on the sphere are then projected onto the two sides of the equatorial plane by stereographic projection, so that every viewing direction is represented on just two flat planes. This representation is powerful enough for use with video images.


Fig.4-(a) Whole directional image composed from 40 images


Fig.4-(b) Stereo projection images

(Ej, 2004.3)

