DXO Mark: the free resource for comparing RAW sensor quality


Status
Not open for further replies.

theRBK

Came across this website by the people from DXO Labs, better known for their RAW processing and automatic image optimization software, DXO Optics Pro:

DXO Mark

It displays the data they have collected from their testing of cameras, allowing us to look into the capabilities of sensors and make apples-to-apples comparisons between cameras :thumbsup: ... if you can make out what the data is saying, that is :)
 

Great website. Finally someone has done some lab tests on raw files. I've always found the dpreview and Imaging Resource way of doing things not very accurate, with a lot of weird results.
 

Looks like the night shooter's camera will be the Nikon D700, since according to the website it beats the rest in low-light conditions and is still reasonably priced compared to Sony's Alpha 900.
 

Looks like the night shooter's camera will be the Nikon D700, since according to the website it beats the rest in low-light conditions and is still reasonably priced compared to Sony's Alpha 900.

It's about 400 SGD cheaper... but for landscapes, I'd still pick the A900 for the extra detail.
 

Yup, the website normalises the comparison results (under the PRINT tab) to an 8-megapixel file for cameras of different resolutions.

So that in a way benefits the D700, as the higher-resolution advantage of the A900 and 5D2 is not reflected at low ISO.
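The effect of that normalisation can be sketched with a simplified noise-averaging model: downsampling N pixels to the 8 MP reference improves SNR with the square root of the pixel-count ratio. This is an assumption about the principle, not DxO's published pipeline, and the SNR figures below are made up for illustration.

```python
import math

REF_MP = 8.0  # the 8-megapixel reference used for "print" comparisons

def print_snr_db(screen_snr_db, sensor_mp):
    """Normalise a per-pixel ('screen') SNR figure to the 8 MP reference.

    Downsampling averages pixels, so SNR improves with the square root
    of the pixel-count ratio (a simplified noise model).
    """
    return screen_snr_db + 20 * math.log10(math.sqrt(sensor_mp / REF_MP))

# Hypothetical bodies with equal per-pixel SNR of 30 dB:
print(round(print_snr_db(30.0, 12.0), 2))  # 12 MP body -> 31.76 dB
print(round(print_snr_db(30.0, 24.0), 2))  # 24 MP body gains more -> 34.77 dB
```

This is why a higher-resolution sensor can pull ahead in the PRINT view even when its per-pixel numbers are no better.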

One thing I noticed is that Canon DSLRs' red pixels are not very pure... they seem to sense both green and red light equally..... isn't that bad?
 

One thing I noticed is that Canon DSLRs' red pixels are not very pure... they seem to sense both green and red light equally..... isn't that bad?

The skewed color gamut of Canon cameras has been a problem for a long time. But better not to talk about it; some people will always believe their colors and cameras are perfect.
 

One thing I noticed is that Canon DSLRs' red pixels are not very pure... they seem to sense both green and red light equally..... isn't that bad?

For good colour reproduction, it should not respond purely to red. The sensor needs to mimic the spectral response of the human eye. It is a widespread misconception that human colour vision is based on RGB. It is not.

To see how "pure" the response of the human eye is, see e.g. here:

http://en.wikipedia.org/wiki/Image:Cones_SMJ2_E.svg

You'll find that the responses of the three types of cones overlap, and that most of the overlap is in the red/green region of the spectrum. A camera sensor with good colour reproduction needs this overlap.

Addendum: More precisely, the spectral responses of the photosensing devices need to form a set of basis vectors. That is, they don't need to follow the sensitivity curves of the cone cells; any set of linear combinations that is not linearly dependent will do. However, if you choose a linear combination that narrows the spectral response (i.e. is more specific to "red" or "green"), the sensor also needs to generate a negative signal (i.e. "less than no light") at some wavelengths. This is, with conventional sensor technology, physically impossible.
 

response of the eye is one thing... how the brain reacts to the stimulus is another... Dan Margulis, in his book "Photoshop LAB Color", suggested that humans might actually be said to function in LAB color rather than RGB... but that's a whole different story... those interested should go read the "A Closer Look" section of Chapter 3 of that book...
 

response of the eye is one thing... how the brain reacts to the stimulus is another

The interpretation of the brain doesn't matter at all here. In a photo, you record and play back image information before it even reaches the brain. Whatever the brain does comes after the photo. So the best you can do is to make your pictures in a way that generates the same stimuli to the eye as the real scene.

Where the brain matters is when you ask "How much error can I introduce before it becomes intolerable?". When your question is "What is needed for the most accurate colours?", there's pretty much only one answer.
 

The interpretation of the brain doesn't matter at all here. In a photo, you record and play back image information before it even reaches the brain. Whatever the brain does comes after the photo. So the best you can do is to make your pictures in a way that generates the same stimuli to the eye as the real scene.

Where the brain matters is when you ask "How much error can I introduce before it becomes intolerable?". When your question is "What is needed for the most accurate colours?", there's pretty much only one answer.
actually, to follow that line of reasoning to its logical conclusion: if images are to be recorded according to the stimulus patterns of the eye, then what is recorded should also take into account how the brain interprets the colour... because if the brain is ultimately the arbiter of colour, anything the brain does not accept would be wasted anyway... this would probably deal more with colour gradation and colour contrast (not luminance contrast) within the gamut than with absolute accuracy (and what is absolute accuracy... see next point) :)

on the other hand, to talk about accuracy, one would have to record a scene with as much fidelity as possible rather than make assumptions about eye or brain processing... because colour processing varies between individuals, as evidenced at the most extreme by those who are colour blind (who are not so much blind to colour as processing colours differently) :)
 

actually, to follow that line of reasoning to its logical conclusion: if images are to be recorded according to the stimulus patterns of the eye, then what is recorded should also take into account how the brain interprets the colour...

You are offering the brain a tristimulus (set of 3 values). There is not much left that you can reduce without gross effects (such as throwing away one value and simulating colour blindness).

this would probably deal more with colour gradation and colour contrast (not luminance contrast) within the gamut than with absolute accuracy (and what is absolute accuracy... see next point) :)

Ah... no, even then you don't escape from the sensitivity curves. Human eyes, just like film or camera sensors, suffer from metamerism, i.e. different spectra can give rise to the same colour. If you do not follow the spectral sensitivity of the eye, you'll end up with a situation where two objects look the same colour to the human eye but turn out very different in the photograph. The human brain is not very good at identifying specific colours, but it can be very good at spotting differences in colour. This is a well-known problem with cameras that deviate a lot from the human eye (e.g. the Leica M8, where you have to correct the flawed sensor response with an optical filter).
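Metamerism is easy to show numerically. Below is a toy sketch (the three Gaussian sensitivities are invented, not real cone or CFA curves): perturbing a spectrum by any vector from the null space of the sensitivity matrix changes the spectrum but leaves the tristimulus response exactly the same.

```python
import numpy as np

wl = np.linspace(400, 700, 301)          # wavelengths in nm

def gauss(mu, sigma):
    """Toy Gaussian spectral sensitivity curve."""
    return np.exp(-((wl - mu) ** 2) / (2 * sigma ** 2))

# A toy tristimulus sensor: three broad sensitivity curves, shape (3, 301)
S = np.stack([gauss(600, 40), gauss(540, 40), gauss(450, 40)])

# A smooth, physically plausible reflectance spectrum (values in [0.2, 0.8])
spec1 = 0.5 + 0.3 * np.sin(wl / 40.0)

# Rows 3.. of Vt span the null space of S; any such vector is "invisible"
# to all three channels
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[-1]
spec2 = spec1 + 0.1 * null_vec / np.abs(null_vec).max()

resp1, resp2 = S @ spec1, S @ spec2
print(np.abs(spec1 - spec2).max() > 0.05)  # the spectra clearly differ
print(np.allclose(resp1, resp2))           # but the responses are identical
```

Two such spectra are metamers for this sensor: indistinguishable to it, even though a sensor with different sensitivity curves (or the eye) could tell them apart.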

Following the spectral sensitivity of the eye is the only objective method of recording colour (apart from recording a full spectrum); everything else involves arbitrary trade-offs that can only make things worse, never better.

on the other hand, to talk about accuracy, one would have to record a scene with as much fidelity as possible rather than make assumptions about eye or brain processing... because colour processing varies between individuals, as evidenced at the most extreme by those who are colour blind (who are not so much blind to colour as processing colours differently) :)

Differences in how the brain processes the information do not matter. But it is true that there is variability between individuals when it comes to colour vision: one source is genetic variation (i.e., the spectral sensitivities of the cones differ), the other is the filtering effect of the eye (which may have slight discolorations). So yes, the entire idea of reproducing colours with three primaries is based on a "standardised" human eye, and it breaks down once you take these differences into account.

However, for the large majority, these differences are rather minor. If you reject this standardisation, then no photographic process is capable of good colour reproduction, apart from one that records full spectra (e.g. Lippmann's interferometric colour process). In particular, all the colour management that people love to talk about would be utterly pointless, because it is based on the same standard model of human colour perception.

Anyway, my main reason for writing all this is to point out that a strong spectral overlap of "red" and "green" sensors is not a flaw, but a necessity.
 
