Got a question that's been bothering me — if anyone has read about this or knows the answer, please enlighten me.
A digital sensor captures the scene as it is, relying on manual white balance input from the user. Our eyes, on the other hand, automatically account for the light source and its spectrum; the brain does the correction for us, which is why we don't notice the odd colour casts that show up in photos when the correct colour balance isn't set.
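To make the sensor side concrete, here's a toy sketch of one common automatic white balance heuristic, the gray-world assumption (the scene is assumed to average out to neutral gray, so each channel is scaled to match the overall mean). This is only an illustration of the kind of correction the user or camera has to supply; real cameras use more sophisticated methods:

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: assume the scene averages to neutral
    gray, and scale each RGB channel so its mean matches the overall
    mean brightness. A toy stand-in for in-camera colour correction."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    gains = channel_means.mean() / channel_means      # per-channel gain
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A tiny 1x2 "image" with a green cast, like an uncorrected
# fluorescent-light shot (hypothetical values for illustration):
cast = np.array([[[80, 120, 80], [40, 60, 40]]], dtype=np.uint8)
balanced = gray_world_balance(cast)
```

After balancing, each pixel's R, G and B values come out equal, i.e. the green tinge is gone and the pixels are neutral gray.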
My question is this: if we look at a scene under fluorescent light and our mind corrects the green tinge, why is it that when a sensor captures that same scene without any colour balance correction and produces a picture with a green tinge, we can see the tinge in the result — why doesn't our mind autocorrect it there too?
Is it because the colour cast that reaches our eyes is somehow at a different wavelength, or has different properties, once it has been captured by the sensor and reproduced on a print or a screen? Would that explain why the brain only corrects what the eyes see directly, and not what they see in the uncorrected print?