I don't understand
[quote name='Frank' timestamp='1346335943' post='19891']

Hi BC and Photonius:

Thank you for your response. I think I understand it now. The doubled area of green cells on a Bayer sensor is used to increase the resolution of the image in green and to reduce noise in green, the color to which human eyes are most sensitive, not to increase the recorded density of green light.

However, I have another question: in a scene with very high contrast, people usually do not perceive the contrast as a camera does, because human eyes respond to light differently than a camera sensor. As a result, the camera records an image with much higher contrast than what we saw with our naked eyes. Assume that the camera indeed faithfully recorded the contrast of the scene and produced an image that faithfully reflects it. Why, then, don't our eyes respond to the image the same way as to the real scene?

Best regards,

Frank

[/quote]
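The 2:1 green ratio mentioned in the quote comes straight from the repeating RGGB tile of a Bayer mosaic. A minimal sketch (in Python, purely for illustration; the function name is made up):

```python
# Sketch of a Bayer RGGB mosaic, illustrating the quoted point:
# half of all photosites are green, twice as many as red or blue.

def bayer_mosaic(rows, cols):
    """Return the Bayer color of each photosite for an RGGB layout."""
    tile = [["R", "G"], ["G", "B"]]  # the repeating 2x2 RGGB unit
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mosaic = bayer_mosaic(4, 4)
counts = {c: sum(row.count(c) for row in mosaic) for c in "RGB"}
print(counts)  # green photosites outnumber red and blue 2:1
```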



Well, if you look again at the eye link I gave, we actually only see well in a narrow field of view, for which viewing is constantly optimized. Our eyes constantly adjust to the light with the iris and other mechanisms (the eye has an aperture, and can adjust its "ISO", so to speak; e.g. after some time in a dark place you start to see). Think of it like a camera with a spot meter: as you move the camera (and the spot meter in the center) around, automatic exposure adjusts constantly depending on whether you point the spot at a dark or a bright area, while the camera still covers a large field. In the brain, the whole thing gets assembled into something like an HDR image. And it's not perfect: if a bright light hits your eye at night, you can't see the rest of the dark scene.
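The spot-meter analogy can be sketched in code. This is a toy model, not how the eye actually works: each "fixation" scales the exposure so the metered region comes out mid-grey, so both a shadow patch and a highlight patch look properly exposed while they are being looked at.

```python
# Toy model of the spot-meter analogy (illustrative only): each "fixation"
# rescales exposure so the spot-metered region averages to mid-grey.

def spot_exposure(scene, region, mid_grey=0.18):
    """Scale the whole frame so the mean of `region` hits mid-grey."""
    r0, r1 = region
    spot_mean = sum(scene[r0:r1]) / (r1 - r0)
    gain = mid_grey / spot_mean
    return [min(1.0, v * gain) for v in scene]  # clip at sensor max

scene = [0.001, 0.002, 0.5, 0.9]  # shadows on the left, highlights on the right

dark_fix = spot_exposure(scene, (0, 2))    # "looking at" the shadows
bright_fix = spot_exposure(scene, (2, 4))  # "looking at" the highlights
print(dark_fix, bright_fix)
```

Note that while the shadows are metered, the highlights blow out (they clip at 1.0), matching the "bright light at night" point above.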



The dynamic range of a day scene can be huge. The way to capture it with a camera is to expose for each part separately, like the eye does.

The problem comes later, when you view the result as a whole. You can of course create an HDR image that shows everything, but the dynamic range of the resulting image is never like that of the original scene, so it doesn't look right: you have compressed the difference between black and white into a much narrower range.
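That compression step can be illustrated with a simple global tone map. A minimal sketch in Python (log compression is just one of many possible curves, and the 100,000:1 scene range is an assumed figure for illustration):

```python
import math

# Illustrative global tone map: squeeze a huge scene dynamic range
# into the [0, 1] display range, i.e. the "compression" described above.

def tone_map(luminance, scene_max):
    """Log mapping of luminance in [0, scene_max] into [0, 1]."""
    return math.log1p(luminance) / math.log1p(scene_max)

scene_max = 100_000.0  # assumed ~100,000:1 scene contrast, about 17 stops
for lum in (1.0, 100.0, 10_000.0, scene_max):
    print(f"{lum:>9.1f} -> {tone_map(lum, scene_max):.3f}")
```

The whole scene fits on screen, but a 100,000:1 luminance difference is now rendered as, at best, the monitor's own contrast ratio, which is why the result never looks like standing in the scene.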

With computer monitors, even with the backlight, the dynamic range is still limited. Most monitors are actually only 8 bit in each color channel, i.e. 256 levels, about 8 f-stops.
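The 8-bit arithmetic works out as follows, on the simplifying assumption (as in the sentence above) that each f-stop corresponds to a doubling of the level value:

```python
import math

bits = 8
levels = 2 ** bits          # 256 discrete values per channel
stops = math.log2(levels)   # number of doublings from 1 to 256
print(levels, stops)        # 256 levels, 8 stops
```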
  


Messages In This Thread
I don't understand - by frank - 08-30-2012, 03:08 AM
I don't understand - by Brightcolours - 08-30-2012, 07:05 AM
I don't understand - by Guest - 08-30-2012, 12:50 PM
I don't understand - by frank - 08-30-2012, 02:12 PM
I don't understand - by Guest - 08-30-2012, 04:11 PM
