Thread: Film better tonal range than digital?

1. I thought up a better explanation. Imagine an analog electrical circuit that can handle between 0 and +5V; any higher and the cutout will be activated. Assume a circuit with no noise (superconducting) to simplify the example. Now, with an 8-bit A-D converter, you will be able to differentiate differences at an interval of 1/256 of the range (no noise, remember?). A 16-bit A-D converter will be able to differentiate at an interval of 1/65536.

The 0 to +5V range remains the same, but its discriminatory ability has increased dramatically. The 0 to +5V is equivalent to the light intensity. When we say "7 f-stops", we are saying that the intensity of light runs from zero photons to "x" photons per second (assuming monochromatic light). "7 f-stops" is just a shorthand.
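The resolution difference can be sketched numerically. A minimal Python illustration of the noiseless 0 to +5V example above (the function name is just for illustration):

```python
# Smallest distinguishable voltage difference for an ideal, noiseless,
# linear ADC spanning 0 to +5 V, as in the example above.
FULL_SCALE = 5.0

def step_size(bits):
    """Volts per code for a linear ADC with the given bit depth."""
    return FULL_SCALE / (2 ** bits)

print(step_size(8))    # 8-bit: 5/256 V per code
print(step_size(16))   # 16-bit: 5/65536 V per code
```

The range covered is identical in both cases; only the granularity changes.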

2. Originally Posted by Watcher
I thought up a better explanation. Imagine an analog electrical circuit that can handle between 0 and +5V; any higher and the cutout will be activated. Assume a circuit with no noise (superconducting) to simplify the example. Now, with an 8-bit A-D converter, you will be able to differentiate differences at an interval of 1/256 of the range (no noise, remember?). A 16-bit A-D converter will be able to differentiate at an interval of 1/65536.

We have to assume that the ADC is linear, right?
Let's assume a few more properties.
ADC input range from 0V to +8V.

Then,

For 8 bits conversion,
0V --> 00000000b
8V --> 11111111b
Since the conversion is linear,
4V --> 01111111b
2V --> 00111111b
1V --> 00011111b
0.5V --> 00001111b
0.25V --> 00000111b
and so on...

For 16 bits conversion,
0V --> 0000000000000000b
8V --> 1111111111111111b
Since the conversion is linear,
4V --> 0111111111111111b
2V --> 0011111111111111b
1V --> 0001111111111111b
0.5V --> 0000111111111111b
0.25V --> 0000011111111111b
and so on...

Obviously, with 16 bits, more info can be recorded in the darker areas, and to higher precision.
The 8-bit values are effectively truncated versions of the 16-bit ones. This is because there is no need for so many bits due to noise issues; we don't need to be so precise.

So how to translate the raw bit values to dynamic range?
The scale here is linear. That is why you need a curve to brighten up the darker areas, so that they become less dark (and more visible to the naked eye on print/screen). How much the dark areas can be brightened is attributable to the number of bits used (assuming no noise). Using 8 bits will not allow you to brighten up very much, because there is not enough information stored. That is why the more bits the better (again, assuming no noise).
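A hypothetical numeric sketch of this point: quantize only the darkest 5% of a scene at 8 bits and at 16 bits, and count how many distinct levels survive. More surviving levels means more room to "push" the shadows with a curve before posterization appears. The ramp and percentages are arbitrary illustrative choices:

```python
def quantize(x, bits):
    """Map x in [0, 1] to the nearest code of a linear ADC, back to [0, 1]."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

shadow_ramp = [i / 1000 * 0.05 for i in range(1001)]  # darkest 5% of range

def distinct_shadow_levels(bits):
    """How many distinct quantized values exist in the shadow region."""
    return len({quantize(v, bits) for v in shadow_ramp})

print(distinct_shadow_levels(8))   # only a handful of levels
print(distinct_shadow_levels(16))  # far more
```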

Now, the crux of our argument is "that each bit represents one f-stop".
Unless you can prove otherwise, or come up with an acceptable theory that this is false, I'll admit defeat. Don't tell me that somewhere it says some camera has only 6.85 stops for 12 bits. That doesn't mean anything here.

Originally Posted by Watcher
When we say "7 f-stops", we are saying that the intensity of light runs from zero photons to "x" photons per second (assuming monochromatic light). "7 f-stops" is just a shorthand.

You didn't explain this part properly... I'll explain it for you.

7 f-stops means that the ratio between the largest and smallest amounts of light received is about 2^7 (128) times. Each f-stop is twice (or half) the amount of light of the previous f-stop.
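The doubling relationship above can be written as a one-line formula (the function name here is just illustrative):

```python
import math

# One f-stop is a factor of 2 in light intensity, so n stops = a 2**n ratio.
def stops_between(brighter, darker):
    """Number of f-stops separating two light intensities."""
    return math.log2(brighter / darker)

print(2 ** 7)                  # 128: the intensity ratio across 7 stops
print(stops_between(128, 1))   # 7.0
```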

--------------------------------------------------

Talking about an 11 f-stop DR: I am saying that THEORETICALLY it's true. In the real world, there are many factors which prevent this from happening.


3. Originally Posted by AReality
Obviously, with 16 bits, more info can be recorded in the darker areas, and to higher precision.
The 8-bit values are effectively truncated versions of the 16-bit ones. This is because there is no need for so many bits due to noise issues; we don't need to be so precise.
The 8 bits are usually not truncated, but result from a nonlinear transform (e.g. applying gamma) to the high-resolution data. The useful dynamic range is typically larger than that of linear 8-bit data.

4. Originally Posted by AReality
We have to assume that the ADC is linear. Right?
<chopped>

Obviously, with 16 bits, more info can be recorded in the darker areas, and to higher precision.
The 8-bit values are effectively truncated versions of the 16-bit ones. This is because there is no need for so many bits due to noise issues; we don't need to be so precise.
It is not "truncated", although you can interpret it that way, as a 16-bit to 8-bit conversion will "truncate" the LSBs. Rather, digitization during sampling makes each step bigger. Another example: when you display colors on your screen and change the mode from 8 bits to 4 bits per channel, the range is the same, RGB (0,0,0) to RGB (255,255,255) on an LCD, but the steps are far bigger, giving you banding. It is the same with audio: imagine a simple sine wave sampled with only 4 or 8 steps (2 or 3 bits). The magnitude is captured but not the subtlety.

Originally Posted by AReality
So how to translate the raw bit values to dynamic range?
The scale now is linear. That is why you need a curve to brighten up the darker areas, so that the darker areas will be less dark (and will be more visible to the naked eye on print/screen). How much dark areas can be brightened up will be attributed to the amount of bits used (assume no noise issue). Using 8 bits will not allow u to brighten up too much, because there is not enough information stored. That is why the more bits the better (again assume no noise).
No. You assumed that the system maps the magnitude to bits with fixed intervals. That is the mistake. When an ADC works, it uses the entire range of the analog signal, say -5 to +5 or 0 to +8. It does not "truncate" the signal at, say, 6.75V; that would be a poor way of implementing it, crippling the system (like how the 300D was crippled) when it can take the entire range of the signal. The curve exists because of gamma (as littlewolf said); our eyes do not respond linearly to brightness.

Using your 0 to +8V example, on an 8-bit ADC the quantization step = 8/256, so each code represents a delta of 0.03125V. On a 16-bit ADC, it is 8/65536 volts per code. Although a higher number of ADC bits gives you smoother gradients, you can see that the increment becomes so small that noise in the circuits will overwhelm the subtlety of the ADC, making it counter-productive to include a higher-end ADC. The balance seems to be 12 to 14 bits (the D2X is 14 bits, IIRC) for current DSLRs. Medium-format digital backs use 16 bits because of the higher-end sensors that they use.
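The diminishing-returns argument above can be checked with a few lines of arithmetic. The 1 mV noise figure is an assumed, purely illustrative value, not a measured one:

```python
# Once the ADC step falls below the circuit noise, extra bits stop
# buying real information. NOISE_V is an assumed illustrative figure.
FULL_SCALE_V = 8.0
NOISE_V = 0.001

def step_v(bits):
    """Volts per ADC code on a linear converter over 0..FULL_SCALE_V."""
    return FULL_SCALE_V / 2 ** bits

for bits in (8, 12, 14, 16):
    buried = step_v(bits) < NOISE_V
    print(bits, round(step_v(bits), 6), "step below noise" if buried else "ok")
```

With this (hypothetical) noise level, the step size drops below the noise somewhere between 12 and 14 bits, which is roughly where the post says the practical balance lies.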

Originally Posted by AReality
Now, the crux of our arguement is "that each bit represents one f-stop".
Unless you can prove otherwise, or come up with an acceptable theory that this is false, then I'll admit defeat. Don't tell me whereever said that whatever camera has only 6.85 stops for 12 bits. It doesn't mean anything there.
The "crux" is never that 1 bit == 1 f-stop; the DR is not primarily tied to the ADC, although the ADC does influence it. It is actually down to the design and construction of the sensor itself. If the "wells" of the sensors fill up too quickly, then the DR will be lesser. It is not predicated/determined by the ADC. I can have a 12 f-stop DR sensor (ignoring noise) and use an 8-bit ADC that will still give you 12 f-stops of DR, just like I can have a 24-bit color LCD and still feed it a 4-bit-per-channel (12-bit total) signal (EGA, ha ha, remember?). ADC bits != DR; they are different things.
Originally Posted by AReality
U didn't explain this part properly... I'll explain it for u..
I assumed that readers are also camera users who understand that 1 f-stop = 2x the amount of light. I was trying to explain from the point of view of physics.

Originally Posted by AReality
Talking about 11 f-stop DR, I am saying THEORETICALLY it's true. In the real world, there are many factors which prevent this from happening.
Yes, but the S/N ratio prevents it, rendering it not practically workable at 11 f-stops of DR, but around 7-8 stops depending on ISO settings, etc., as a realistic limit. Search DPReview with ("dynamic range" stops) inside the dpreview forums.

I'm not an electronics engineer, but these are basic sampling and signal-processing theories. The principles are the same across the entire spectrum of analog-to-digital conversion processes.

For a better understanding, please read up on the net or any signal processing book.

5. Guys, can you please not talk in engineering terms? All talk but no evidence wastes time and server resources. Go take some pictures and show me proof of how many stops there are. I do not need to know how much you know.

6. I don't think there is a standard f-stop scale, is there? Then just shoot that and check...

7. Originally Posted by theITguy
Guys, can you please not talk in engineering terms? All talk but no evidence wastes time and server resources. Go take some pictures and show me proof of how many stops there are. I do not need to know how much you know.
Eh... this subforum is "General, Review, Tech Talk". I understand that it can be overwhelming, and I initially mentioned that I didn't want to go into details. But if you don't need or want this, please skip reading it.

However, this is like explaining how the chemical composition of film or the developer works... no different. Also, it is not just theory. I can bet you that signaling and ADCs are already all around you: your mobile phone, your television, your computer, your portable music players (even CDs) all use these principles.

As for proofs, I'll take Thom Hogan and Bjørn Rørslett and other notables on DPReview on it.

8. Originally Posted by Watcher
It is not "truncated", although you can interpret it that way, as a 16-bit to 8-bit conversion will "truncate" the LSBs. Rather, digitization during sampling makes each step bigger. Another example: when you display colors on your screen and change the mode from 8 bits to 4 bits per channel, the range is the same, RGB (0,0,0) to RGB (255,255,255) on an LCD, but the steps are far bigger, giving you banding. It is the same with audio: imagine a simple sine wave sampled with only 4 or 8 steps (2 or 3 bits). The magnitude is captured but not the subtlety.
Yes, this point I agree with you on. No argument here. I said "The 8 bits are actual truncated values of 16 bits." Please take the sentence as a whole, not just one word.

Originally Posted by Watcher
No. You assumed that the system maps the magnitude to bits with fixed intervals. That is the mistake. When an ADC works, it uses the entire range of the analog signal, say -5 to +5 or 0 to +8. It does not "truncate" the signal at, say, 6.75V; that would be a poor way of implementing it, crippling the system (like how the 300D was crippled) when it can take the entire range of the signal. The curve exists because of gamma (as littlewolf said); our eyes do not respond linearly to brightness.
Where did I assume that the whole range from 0V to +8V (in my previous example) is not used? I am now having doubts that you understood what I was saying. Once again, in my previous example, the whole range (0V to +8V) was used.

Originally Posted by Watcher
Using your 0 to +8V example, on an 8-bit ADC the quantization step = 8/256, so each code represents a delta of 0.03125V. On a 16-bit ADC, it is 8/65536 volts per code. Although a higher number of ADC bits gives you smoother gradients, you can see that the increment becomes so small that noise in the circuits will overwhelm the subtlety of the ADC, making it counter-productive to include a higher-end ADC. The balance seems to be 12 to 14 bits (the D2X is 14 bits, IIRC) for current DSLRs. Medium-format digital backs use 16 bits because of the higher-end sensors that they use.
This point I agree with you on.
But I still doubt that you actually read and understood my previous post, as I also did state "this is because there is no need for so many bits due to noise issues; don't need to be so precise.".

Originally Posted by Watcher
The "crux" is never that 1 bit == 1 f-stop; the DR is not primarily tied to the ADC, although the ADC does influence it. It is actually down to the design and construction of the sensor itself. If the "wells" of the sensors fill up too quickly, then the DR will be lesser. It is not predicated/determined by the ADC. I can have a 12 f-stop DR sensor (ignoring noise) and use an 8-bit ADC that will still give you 12 f-stops of DR, just like I can have a 24-bit color LCD and still feed it a 4-bit-per-channel (12-bit total) signal (EGA, ha ha, remember?). ADC bits != DR; they are different things. ... ...

Please do elaborate on your example. I can spot a few major errors though.
"If the "wells" of the sensors fill up too quickly, then the DR will be lesser." Please show an example of how the speed at which the wells fill up has anything to do with DR.

Consider a sensor with only 2 wells.
Assume the wells are exposed to different amount of light.
Well #1 fills up in 1 second, well #2 fills up in 2 seconds.
Now, we set the shutter speed to only 1 sec.
This will cause #1 to fill to the brim without overflowing, while #2 will be only half filled.
So it can be said that there is one stop between #1 and #2; #1 receives twice the amount of light compared to #2.

Using the example as in previous (0V to +8V),
#1 output will be at +8V, (11111111) in binary value.
#2 output will be at +4V, (01111111) in binary value.

This one most significant bit change accounts for the one stop difference in light.

NOW, another scenario.
#1 only takes 0.5 secs to fill, #2 takes 1 sec to fill.
The shutter speed is set to 0.5sec so as not to let #1 overflow.
The above example still holds true.
Therefore, the speed at which the well fills up has got nothing to do with the DR.
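The two-well example above can be simulated directly with its own numbers (function and variable names here are just for illustration):

```python
# A small simulation of the two-well example: well #1 fills in 1 s,
# well #2 in 2 s, and the shutter speed is 1 s.
FULL_WELL_V = 8.0

def well_voltage(fill_time_s, exposure_s):
    """Voltage after the exposure; the well clips at FULL_WELL_V when full."""
    return min(exposure_s / fill_time_s, 1.0) * FULL_WELL_V

def adc8(volts):
    """Truncating linear 8-bit conversion, like the tables in the post."""
    return int(volts / FULL_WELL_V * 255)

v1 = well_voltage(1.0, 1.0)   # fills to the brim
v2 = well_voltage(2.0, 1.0)   # half filled
print(adc8(v1), adc8(v2))     # 255 and 127: one stop (a factor of 2) apart
```

Scaling both fill times and the shutter by the same factor (the second scenario in the post) leaves the codes unchanged.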

Originally Posted by Watcher
Yes, but the S/N ratio prevents it, rendering it not practically workable at 11 f-stops of DR, but around 7-8 stops depending on ISO settings, etc., as a realistic limit. Search DPReview with ("dynamic range" stops) inside the dpreview forums.

I'm not an electronics engineer, but these are basic sampling and signal-processing theories. The principles are the same across the entire spectrum of analog-to-digital conversion processes.

For a better understanding, please read up on the net or in any signal-processing book.

Now you're saying that, ignoring S/N, there are 11 f-stops? Kind of contradictory, right?

9. Originally Posted by LittleWolf
The 8 bits are usually not truncated, but result from a nonlinear transform (e.g. applying gamma) to the high-resolution data. The useful dynamic range is typically larger than that of linear 8-bit data.

Analog Voltage --> Quantisation --> Binary values --> Apply Non-Linear Transform

Is that what you're trying to say?

For JPEG output from camera, YES.
For RAW, the last step is omitted.

Again, the "truncation" I'm referring to is tied to that example. Take the sentence as a whole, not just the one word.

10. Originally Posted by AReality
Where did I assume that the whole range from 0V to +8V (in my previous example) is not used? I am now having doubts that you understood what I was saying. Once again, in my previous example, the whole range (0V to +8V) was used.
Ok, in that case, what happens when a signal of >+8V is applied? Remember that light becomes electricity via the sensor, and then the signal is interpreted. What happens when more light is added? That is when the highlights are blown.

Originally Posted by AReality
Please do elaborate on your example. I can spot a few major errors though.
"If the "wells" of the sensors fill up too quickly, then the DR will be lesser." Please show an example of how the speed at which the wells fill up has anything to do with DR.

Consider a sensor with only 2 wells.
Assume the wells are exposed to different amount of light.
Well #1 fills up in 1 second, well #2 fills up in 2 seconds.
Now, we set the shutter speed to only 1 sec.
This will cause #1 to fill to the brim without overflowing, while #2 will be only half filled.
So it can be said that there is one stop between #1 and #2; #1 receives twice the amount of light compared to #2.

Using the example as in previous (0V to +8V),
#1 output will be at +8V, (11111111) in binary value.
#2 output will be at +4V, (01111111) in binary value.

This one most significant bit change accounts for the one stop difference in light.

NOW, another scenario.
#1 only takes 0.5 secs to fill, #2 takes 1 sec to fill.
The shutter speed is set to 0.5sec so as not to let #1 overflow.
The above example still holds true.
Therefore, the speed at which the well fills up has got nothing to do with the DR.
If the two wells fill up at different rates, then the sensitivity (which can be interpreted as ISO sensitivity) is different. Which sensor can you name that, on the same piece of silicon (or GaAs or whatever), can be programmed with different sensitivity at a per-pixel level? A different ISO per pixel? Technology is not that advanced.

A gain is applied uniformly across the sensor. Sensor A will have its pixels fill up after t seconds of exposure, with a DR of, say, 7 stops. Sensor B will fill up after t2 seconds, with 10 stops.

Now, if a scene with a uniform gradient of illumination spanning 7 f-stops is exposed to sensors A and B, properly metered at m1 for both at the same "ISO" sensitivity, then both will produce the same image without blown highlights. Sensor A's wells are filled to the brim when recording the brightest items that are not blown, while Sensor B's wells do not fill up, as they are 'deeper' than Sensor A's. So at a given exposure (aperture + shutter speed), the wells that end up filled to the brim fill to the top faster, don't they?

Originally Posted by AReality
Therefore, the speed at which the well fills up has got nothing to do with the DR.
I've got two pails. Pail A can hold 10 litres, Pail B can hold 20 litres. If I put the pails under taps that flow at 10 litres per minute, which pail fills up to the top first?

In this scene, if I then add items that are brighter than the 7 f-stops, say at 9 f-stops, and I keep both metered at m1, then Sensor A, whose wells fill to the brim at 7 f-stops, will have blown highlights, while the wells in Sensor B have not reached that state yet.

As for reading, please tell me why the audio example is not applicable, as it is an easier way of viewing this issue. You have not responded to it yet.

Originally Posted by AReality
Now you're saying that, ignoring S/N, there are 11 f-stops? Kind of contradictory, right?
No. The usable range is around 7 f-stops; that is the point at which you cannot distinguish between noise and actual information. Once you put in noise, it is a factor you cannot ignore... I can apply a gain such that the S/N ratio becomes so bad I cannot differentiate between real signal and noise, and this need not be very high. In any case, in your original post (#28) you tied the DR to the ADC. With different gain, the DR will change as the S/N drops. Therefore, the number of bits of the ADC is irrelevant to the DR.

Let me put it this way: there can be a sensor with a usable 10 f-stops into which I put an 8-bit ADC, and I can have a 4 f-stop sensor with a 16-bit ADC. There is no direct restriction between these two attributes.

11. Originally Posted by AReality
Analog Voltage --> Quantisation --> Binary values --> Apply Non-Linear Transform

Is that what you're trying to say?
Dynamic range is limited by saturation (sensor, A/D converter) on one side and noise (sensor noise, quantisation noise) on the other side. Nonlinear encoding typically reduces the quantisation noise at low light levels, increasing the dynamic range beyond what the simple model "bit depth = f stops" predicts.
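The nonlinear-encoding point can be made concrete: gamma-encode a signal before quantizing it to 8 bits, and the darkest stop keeps many more distinct codes than a linear 8-bit encoding does. The 1/2.2 gamma is an assumed, typical encoding value, and the sampling is purely illustrative:

```python
# Compare how many distinct 8-bit codes cover the darkest stop of a scene
# under linear vs gamma-encoded quantization.
GAMMA = 1 / 2.2  # an assumed, common encoding gamma

def encode8(x, nonlinear):
    """Quantize x in [0, 1] to an 8-bit code, optionally gamma-encoded first."""
    y = x ** GAMMA if nonlinear else x
    return round(y * 255)

# Sample intensities below 1/256 of full scale (beneath one linear LSB).
shadows = [i / 10000 * (1 / 256) for i in range(1, 10001)]
print(len({encode8(v, False) for v in shadows}))  # linear: almost no codes
print(len({encode8(v, True) for v in shadows}))   # gamma: many more codes
```

This is why "bit depth = f-stops" underestimates what a gamma-encoded 8-bit file can hold.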

Also, one has to be careful in comparing pixelated sensors to traditional film. The characteristic curves of film are deceiving, as they do not reflect the noise level (graininess) of the film. Conversely, one could increase the dynamic range of pixel-based sensors by averaging over several pixels.

I do not want to heat the debate any further. I just want to warn about too simplistic assumptions when comparing two different beasts (film and pixelated sensor).

12. Originally Posted by Watcher
Ok, in that case, what happens when a signal of >+8V is applied? Remember that light becomes electricity via the sensor, and then the signal is interpreted. What happens when more light is added? That is when the highlights are blown.

If more light is added, the highlights will be blown. But you'll get more readings from the darker areas.

Originally Posted by Watcher
If the two wells fill up at different rates, then the sensitivity (which can be interpreted as ISO sensitivity) is different. Which sensor can you name that, on the same piece of silicon (or GaAs or whatever), can be programmed with different sensitivity at a per-pixel level? A different ISO per pixel? Technology is not that advanced.

I didn't say that the sensitivity is different. I said that the amount of light landing on each of the wells is different. The sensitivity across the wells is the same.

Originally Posted by Watcher
I've got two pails. Pail A can hold 10 litres, Pail B can hold 20 litres. If I put the pails under taps that flow at 10 litres per minute, which pail fills up to the top first?
Why are the pails of different sizes? The pails (sensor wells on the same piece of silicon) should be the same size.

We are not comparing silicon chip A vs silicon chip B.

Originally Posted by Watcher
Let me put it this way: there can be a sensor with a usable 10 f-stops into which I put an 8-bit ADC, and I can have a 4 f-stop sensor with a 16-bit ADC. There is no direct restriction between these two attributes.

Ok, please explain how you get 10 f-stops with an 8-bit ADC. It's ok if you need to be technical. Try to explain to the best of your knowledge...

F-stops are not measured linearly; they are log base 2. You can't directly associate quantised light values with a sensor using a linear scale.

This is the last example.

Sensor A has sensitivity=1.
Let's say for every photon landing on a well, the voltage will increase by 0.1V. (max +8V as in the example).
So the max number of photons that the well can measure before overflowing is 80.

Sensor B has sensitivity=4 (4 times that of Sensor A).
Let's say for every photon landing on a well, the voltage will increase by 0.4V. (max +8V as in the example).
So the max number of photons that the well can measure before overflowing is 20.

For a given scene, let's say the scene delivers 20 photons (we do not want to overflow, as that is not our aim here).
Sensor A will have a voltage of +2V. = 00111111b (binary value)
Sensor B will have a voltage of +8V. = 11111111b

For another scene, let's say the light is halved; 10 photons.
Sensor A will have a voltage of +1V. = 00011111b
Sensor B will have a voltage of +4V. = 01111111b

From here, it can be seen that regardless of the sensor, as long as the values read are in range, when the light is halved, the binary value is halved also.

The sensitivity issue only affects the upper limit of how much light the sensor can record. Sensor A can be exposed to a greater light source than sensor B. But sensor B can capture darker scenes than sensor A (given the same shutter speed).
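The two-sensor example above can be run directly in code, using its own numbers (the helper names are just for illustration):

```python
# AReality's two-sensor example: as long as nothing clips, halving the
# light halves the ADC code regardless of the sensor's sensitivity.
FULL_SCALE_V = 8.0

def adc8(volts):
    """Truncating linear 8-bit conversion over 0..FULL_SCALE_V."""
    return int(min(volts, FULL_SCALE_V) / FULL_SCALE_V * 255)

def reading(photons, volts_per_photon):
    return adc8(photons * volts_per_photon)

# Sensor A: 0.1 V/photon; Sensor B: 0.4 V/photon (4x the sensitivity).
print(reading(20, 0.1), reading(20, 0.4))  # 63 and 255
print(reading(10, 0.1), reading(10, 0.4))  # 31 and 127
```

In both scenes, halving the photon count halves each sensor's code, matching the post's binary tables.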


13. Originally Posted by LittleWolf
Dynamic range is limited by saturation (sensor, A/D converter) on one side and noise (sensor noise, quantisation noise) on the other side. Nonlinear encoding typically reduces the quantisation noise at low light levels, increasing the dynamic range beyond what the simple model "bit depth = f stops" predicts.

Also, one has to be careful in comparing pixelated sensors to traditional film. The characteristic curves of film are deceiving, as they do not reflect the noise level (graininess) of the film. Conversely, one could increase the dynamic range of pixel-based sensors by averaging over several pixels.

I do not want to heat the debate any further. I just want to warn about too simplistic assumptions when comparing two different beasts (film and pixelated sensor).

We are not comparing film vs sensor here, nor discussing the gamma curves.

We are discussing whether each bit of depth is equivalent to one stop of light difference. (Note the word "difference".)


14. Originally Posted by AReality

We are not comparing film vs sensor here, nor discussing the gamma curves.

We are discussing whether each bit of depth is equivalent to one stop of light difference. (Note the word "difference".)


This thread IS comparing film and digital sensors. So keep this thread on point.

15. just to butt in with some personal experience
because what Jempala said is really misinformation
I use an Imacon 848 scanner and I use a Nikon coolscan 9000
35mm film is NOT equivalent to 40megapixels
it's NOT equivalent to 20megapixels even
You can scan up to 40megapixels but most of it will be NOISE. There won't be any increase in your details past the grain.
And ISO 400 35mm film is disgusting. You can't print past 11x14 inches with ISO 400 film unless you want a heart attack, or like your pictures grainy. A lot of detail is lost already, with all that horrendous grain.

I don't know how the heck you got 3 stops of dynamic range for slide film and 7stops for negative film. 3 stops would mean WHITE-GREY-BLACK literally 3 colours only. We're looking at around 13 stops for negative film.
And the film used in moviemaking is slightly different.
You'll find all sorts of film types used in moviemaking that you'll never see in your camera store. A cinematographer told me about them; one of Fuji's ISO 500 films has 11 stops of latitude, which is amazing.

Another thing, when you project an image on a screen it is not equivalent to printing a picture that big.

The dynamic range of film may be slightly better, but this applies to NEGATIVE film. One reason why film may look like it has more dynamic range is due to the unevenness of the grain and the structure

And negative film can only take about a stop of underexposure. Even with the extra latitude, you'll just get a very grainy image.
Negative film does not necessarily print with much more dynamic range on PAPER than slide film.
Because the printing paper itself doesn't have that latitude to handle the dynamic range of negative film. And if it did, what you would get is a very flat uncontrasty image.
Slide film is more like digital, if you blow out the highlights, they're gone, if the shadows are too dark, you won't be able to get more detail out of it without getting horrendous noise.
Digital backs have great dynamic range. I am using a Kodak Pro Back and it can overexpose by 2 stops without losing detail in the highlights.
Underexposure will lead to the same problem as negative film:the shadows will be noisy.
I have yet to use a 16bit digital back, but their quality is higher.
I recently took two pictures, one with my digital back and one with 645 Kodak Portra 160VC film rated at 100 ISO
On scanning, the 645 film could not resolve tiny details, the grain was in the way. Interpolating it would have just made the picture look softer. Sharpening the file just made the grain more apparent.
The digital back pictures were clean as hell, without grain, and took very well to interpolation (and also a bit of sharpening).

The Fuji Frontier is an amazing machine, so no bad words can be said about it. I used to do my own colour printing with a cold-head enlarger and a kryptonite processor, and those prints cannot compare with the Fuji Frontier.

Digital cameras do NOT reduce a lens' optical quality, more likely they show the lens' defects more clearly because you can zoom in and take a look. you don't notice it so much with film because film tends to look softer than digital (thanks in part to the grain in film)

With regards to the Contax cameras, those are amazing, I have to say
I don't understand how it is, but my friend takes pictures with a Contax 35mm camera and can enlarge her prints to 24x36INCHES, with acceptable grain
Caveat: This is a huge exception, I can't say the same for any other 35mm cameras

sorry this is so long but I nearly had a heart attack reading jempala's post
you can read those magazines which talk all sorts of nonsense (esp Popular Photography---BEWARE!) but you'll only really know it when you have actually tested these yourself.

16. Originally Posted by AReality
Ok, pls explain how u get 10 f-stops with an 8-bit ADC. It's ok if u need to be technical. Try to explain to the best of your knowledge...
Let's say a sensor can detect properly (i.e. above noise) at 0.001 lux (lux being a unit of light intensity). For 10 f-stops, the top would be 0.001*1024 = 1.024 lux. Assuming linear digitization using an 8-bit ADC, the system would be calibrated to (1.024-0.001)/256 ≈ 0.003996 lux per code. Now, if the light falling on the sensor is between 0.001 lux and <0.004996 lux, it registers as "1" (00000001 binary); from 0.004996 lux to 0.008992 lux, it registers as "2" (00000010 binary); and so on.
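Watcher's calibration scheme can be reproduced numerically with the same figures (the function name and the clamping at code 255 are illustrative assumptions):

```python
# Watcher's example: a 0.001 lux floor, a 10-stop range, and a linear
# 8-bit ADC calibrated across that range.
FLOOR_LUX = 0.001
TOP_LUX = FLOOR_LUX * 2 ** 10            # 1.024 lux: 10 f-stops up
STEP_LUX = (TOP_LUX - FLOOR_LUX) / 256   # ~0.003996 lux per 8-bit code

def code_for(lux):
    """Linear 8-bit code for a light level, per the scheme described above."""
    if lux < FLOOR_LUX:
        return 0
    return min(int((lux - FLOOR_LUX) / STEP_LUX) + 1, 255)

print(round(STEP_LUX, 6))  # lux per code
print(code_for(0.002))     # within the first step above the floor -> 1
print(code_for(0.006))     # within the second step -> 2
```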

Originally Posted by AReality
F-stops are not measured linearly. They are log base 2. You can't directly associate some quantised light values with a sensor using a linear scale.
Why not? Sound intensity is given in log base 10 (bels/decibels); we scale it to a linear scale in the volume controls on our sound systems. Please explain why we can do this for sound but not for light.

You have not responded at all to the various other ways of digitizing (sound or display) that I have mentioned, even though they use exactly the same principle and support the fact that DR (signal width) is independent of resolution (ADC bit size).

<example chopped>

Note that when we discuss DR, we assume that 1) the sensitivity is the same (e.g. ISO 100), and 2) we have a monochromatic and fixed amount of light to measure.

Your example, "The sensitivity issue only deals with the upper limit of how much light the sensor can record. Sensor A can be exposed to a greater light source than sensor B. But sensor B can capture darker scenes than sensor A", is what happens when you change the sensitivity of the sensor. It is exactly the same as increasing the ISO setting on your one DSLR with its one sensor from, say, ISO 100 ("sensor A") to ISO 400 ("sensor B"). Of course "sensor A" (or more correctly, setting A) can be exposed to a greater amount of light before blowing the highlights at a fixed aperture and shutter speed, while B can detect less light and capture darker scenes. Thanks for explaining how changing the ISO on film or digital works.

This is not the same as measuring the DR between different types of sensors/film.

17. Originally Posted by mattlock
just to butt in with some personal experience
because what Jempala said is really misinformation
I use an Imacon 848 scanner and I use a Nikon coolscan 9000
35mm film is NOT equivalent to 40megapixels
it's NOT equivalent to 20megapixels even
You can scan up to 40megapixels but most of it will be NOISE. There won't be any increase in your details past the grain.
And ISO400 35mm film is disgusting. you can't print past 11x14 inches with ISO400 film unless you want a heart attack, or like your pictures grainy. A lot of detail is lost already, with all that horrendous grain.
Thom Hogan concurred (search for his post, those interested) that 35mm at ISO 100 is around 16-18+ MP equivalent for a properly exposed image. Any higher and you get the grain.

Originally Posted by mattlock
I don't know how the heck you got 3 stops of dynamic range for slide film and 7stops for negative film. 3 stops would mean WHITE-GREY-BLACK literally 3 colours only. We're looking at around 13 stops for negative film.
The generally accepted numbers are about 7 stops for slides, 11-13 stops for negatives (ISO 100).

Originally Posted by mattlock
<chopped>
Digital cameras do NOT reduce a lens' optical quality, more likely they show the lens' defects more clearly because you can zoom in and take a look. you don't notice it so much with film because film tends to look softer than digital (thanks in part to the grain in film)
Agree! If you want to see how lenses are taxed by digital bodies, look at the highest-end models from Nikon and Canon: CA, light falloff, etc. In fact, there has been talk about how we are already hitting the diffraction limit! We don't notice this as much with film because the grain tends to smudge fine detail at that high a resolution. In any case, we would need to print very large from film to see what we see on even a 6MP DSLR at 100%.
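A rough sketch of where that diffraction-limit talk comes from, using the standard Airy-disk formula with assumed numbers (green light at 550 nm; no particular camera's pixel pitch):

```python
# Airy-disk diameter is approximately 2.44 * wavelength * f-number.
def airy_disk_um(f_number, wavelength_nm=550):
    """Airy-disk diameter in micrometres, green light by default."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for f in (4, 8, 16):
    print(f"f/{f}: Airy disk ~ {airy_disk_um(f):.1f} um")
```

With pixel pitches in the 5-8 µm range on the high-resolution bodies, the disk grows past a pixel somewhere around f/4-f/8, so stopping down much further starts to soften the image regardless of lens quality.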

18. Take a look here: http://www.photographical.net/canon_1ds_35mm.html. A comparison between the Canon 1Ds (11 MP), 35mm film and medium format. In short, 35mm < 1Ds < medium format in terms of detail. In terms of noise, the 1Ds beats medium format and, of course, 35mm.

The test was done with Provia 100; I wonder what the difference would be if it were done with Velvia or, for theoretical purposes, Kodak Technical Pan.

19. Originally Posted by Watcher
Let's say a sensor can detect properly (i.e. above the noise) at 0.001 lux (a unit of light intensity). For 10 f-stops, that would be 0.001 * 1024 = 1.024 lux. Assuming linear digitization using an 8-bit ADC, the system would be calibrated to (1.024 - 0.001)/256 ≈ 0.003996 lux per bit. Now, if the light falling on the sensor is between 0.001 lux and just under 0.004996 lux, it registers as "1" (00000001 binary); from 0.004996 lux to 0.008992 lux it registers as "2" (00000010 binary), and so on.

From your above example, the sensor can indeed detect anywhere in the range from 0.001 to 1.024 lux. That is for analog output only.

However, after you run it through an 8-bit ADC, the output will only be from 0.004996 to 1.024000 lux. Anything from 0.001 to 0.004996 lux is just black; no info can be recorded. Thus, the lowest 2 "stops" here cannot be recorded.

If you're talking about 10 stops directly from the analog output, yes, I can agree with you. But if you're saying that an 8-bit value can hold 10 stops of info, that is just impossible (for a linear ADC, at least).

Want to give another example?
I'll let you have another try.
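A minimal sketch of the numbers both posts are arguing over, assuming the same linear 8-bit mapping of the 0.001-1.024 lux range used above:

```python
# Linear 8-bit quantization of the 0.001-1.024 lux range from the posts above.
floor_lux, top_lux, levels = 0.001, 1.024, 256
step = (top_lux - floor_lux) / levels  # ~0.003996 lux per code

def code(lux):
    """ADC output code for a given illuminance (clamped to the top code)."""
    return min(int((lux - floor_lux) / step), levels - 1)

# How many distinct codes does each successive stop above the floor get?
for stop in range(10):
    lo, hi = floor_lux * 2 ** stop, floor_lux * 2 ** (stop + 1)
    print(f"stop {stop}: {lo:.3f}-{hi:.3f} lux -> codes {code(lo)}..{code(hi)}")
```

The top stop gets roughly half of all 256 codes while the bottom stops are squeezed into code 0, which is AReality's point; yet the converter's input still spans the whole 10-stop intensity range, which is Watcher's. Both are properties of the same linear mapping.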


20. Originally Posted by AReality
From your above example, the sensor can indeed detect anywhere in the range from 0.001 to 1.024 lux. That is for analog output only.
Assuming that the output from the sensor is an analog signal from 0 to 8 V, this voltage is then digitized. Thus we will capture the full range of 10 stops of light.

Originally Posted by AReality
However, after you run it through an 8-bit ADC, the output will only be from 0.004996 to 1.024000 lux. Anything from 0.001 to 0.004996 lux is just black; no info can be recorded. Thus, the lowest 2 "stops" here cannot be recorded.

If you're talking about 10 stops directly from the analog output, yes, i can agree with you. But if you're saying that an 8-bit value can hold 10 stops of info, that is just impossible (For linear ADC only).
This is quantization error. You would need an infinite number of bits to capture an analog signal completely. However, the S/N ratio imposes a limit: beyond a point, additional bits are just fitting the noise. Also, with non-linear/adaptive encoding, fewer bits are needed to encode with less loss/error. In my example, < 0.001 gives "0", from 0.001 to 0.004996 gives a "1", and from 0.004996 to 0.008992 gives a "2". The inability to differentiate between, say, 0.002 and 0.003, which both give an output of "1", is the error.

See this example for a dynamic illustration of A-to-D conversion. You can see that it captures the entire range, but with fewer bits of linear encoding the error (shown at the bottom) increases. The error is not the range; the topic here is dynamic range, not error.

This is the same as with sound and other analog signals, which you have avoided explaining. Say a sound with a dynamic range of 10 bels (100 decibels) is converted into a linear signal from 0 to 10 V. With a 4-bit ADC, the resolution is 10/16 V = 0.625 V, so if the signal rises by only 0.05 V, you would not record the difference. That is an error, but the range is still 10 bels.
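The 4-bit example can be sketched directly, with the step computed as full scale over 2^4, i.e. 10 V/16 = 0.625 V:

```python
# 4-bit linear ADC over a 0-10 V range: 2**4 = 16 codes, step = 0.625 V.
full_scale, bits = 10.0, 4
levels = 2 ** bits
step = full_scale / levels

def quantize(volts):
    """Output code for an input voltage, clamped to the top code."""
    return min(int(volts / step), levels - 1)

# The full 0-10 V range maps onto codes 0..15, so the *range* is preserved;
# only a change smaller than one step, such as a 0.05 V rise, is lost.
print(quantize(0.0), quantize(10.0))   # bottom and top codes
print(quantize(5.0), quantize(5.05))   # sub-step change: identical codes
```

This is exactly the distinction in dispute: the converter spans the whole range while failing to resolve sub-step differences within it.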

Read a simple but detailed explanation here. They even say, "Quantization error is due to the finite resolution of the ADC, and is an unavoidable imperfection in all types of ADC. The magnitude of the quantization error at the sampling instant is between zero and half of one LSB." So even if you have a 24-bit ADC and zero noise (practically impossible) with a 7-stop sensor, you will still have an error. The range, btw, is still 7 stops. The RMS error should be 0.289 LSB (least significant bit), as seen from the line, "In the general case, the sampled signal is larger than one LSB, and the quantization error is not correlated with the signal. Its RMS value is then 1/sqrt(12) LSB = 0.289 LSB. In the eight-bit ADC example, this represents 0.113% of the full signal range."
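The quoted 1/sqrt(12) figure is easy to verify numerically: when the signal is large compared with one LSB, the quantization error is roughly uniform over (-0.5, 0.5] LSB, and the RMS of a width-1 uniform distribution is 1/sqrt(12). A quick Monte-Carlo check:

```python
import random

# Simulate quantization error as uniform noise over (-0.5, 0.5) LSB and
# compare its RMS value against the theoretical 1/sqrt(12) ~ 0.289 LSB.
random.seed(0)
n = 200_000
total = sum(random.uniform(-0.5, 0.5) ** 2 for _ in range(n))
rms = (total / n) ** 0.5
print(f"simulated RMS error: {rms:.3f} LSB, theory: {(1 / 12) ** 0.5:.3f} LSB")
```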

BTW, your sarcasm and condescension do not add any value to this discussion; leave your attitude at the door.
