Digital Camera Sensor Technologies



espion · Aug 25, 2005
This is just for information: there are three types of sensor technology in use for digital imaging, namely Bayer, Foveon and 3CCD. The last is used almost exclusively in video cameras, perhaps because the algorithmic processing required by the other two introduces too much lag for real-time imaging.

Almost all digital cameras use the Bayer sensor, a technology patented by Kodak, while Foveon is, for the moment, used only in SIGMA digital cameras.

The sensor closest to reality is of course the 3CCD, subject to the number of pixels on the sensor, followed by the Foveon. The Bayer can be thought of as only about 33% "real"; the rest is interpolated, i.e. a mathematical guess at what "reality" is.

But of course, film was not "real" either.
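As a rough back-of-the-envelope sketch of that 33% figure (assuming the common RGGB filter layout, not any particular camera's design): a Bayer sensor records one colour value per photosite, while a full RGB image needs three values per pixel.

```python
import numpy as np

# Hypothetical 4x4 sensor with an RGGB Bayer layout: each photosite
# measures only one of the three colour channels.
h, w = 4, 4
cfa = np.full((h, w), "G", dtype="<U1")
cfa[0::2, 0::2] = "R"   # red filters on even rows, even columns
cfa[1::2, 1::2] = "B"   # blue filters on odd rows, odd columns
print(cfa)

needed   = h * w * 3    # an RGB image needs 3 channel values per pixel
measured = h * w        # the sensor directly records only 1 per pixel
print(f"directly measured: {measured / needed:.0%}")   # ~33%; the rest is interpolated
```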
 

Hmm. Then why aren't 3CCD sensors being used in cameras?

It wouldn't really make sense if the other two sensors require algorithmic processing just to make things look more real (I assume), which would also make them less efficient than the 3CCD sensors... unless they offer something that the 3CCDs can't provide.

Sorry, I don't know anything about all this sensor stuff. :bsmilie:
 

Hmm. Then why aren't 3CCD sensors being used in cameras?
I do not know either, but I can guess at a couple of reasons. One is cost: instead of the one sensor you have in your camera today, you would need three. The second could be form factor, as the optics may require more space; I think this is why video cameras generally appear bulkier than DSLR cameras. Having said that, there are digital cameras, such as those from Fujifilm, Toshiba, Hamamatsu, etc., that use 3CCD, but apparently only for scientific, medical, technical and industrial applications where minimal compromise of reality is important.

... to make things look more real ...
Nothing can be "more real" than the real itself? What the algorithms do is not make things real, but only make them seem real.
 

I do not know either, but I can guess at a couple of reasons. One is cost: instead of the one sensor you have in your camera today, you would need three. The second could be form factor, as the optics may require more space; I think this is why video cameras generally appear bulkier than DSLR cameras. Having said that, there are digital cameras using 3CCD, but apparently not for the consumer market.



Nothing can be "more real" than the real itself? What the algorithms do is not make things real, but only make them seem real.

That doesn't look very camera-like to me :bsmilie:

Ok ok, wrong choice of words: "make things seem more real". But then again, sometimes something realistic seems less real than something simplified, because of what our minds already perceive as "real".
 

I think you are right. Cost and size should be the main concerns. You will realize that even in professional video cameras, the CCD sizes used are very much smaller compared to DSLRs.

For example, the Canon GL2's CCD size is only 1/4", with 410,000 pixels.

The CCD is a major part of a DSLR's cost. Imagine if you needed three of them...

BC
 

If I'm not wrong, the main point of 3CCD is to bring out better colours, and it also doesn't need as much noise control since the sensors are pretty small in dimension... SD resolution isn't even close to a megapixel...
 

I am not sure what you mean by "making it more real" or "faking reality". They are all just image-capturing devices.

Interpolation, etc., is just image processing, translating light information captured on the sensor into digital data. It has nothing to do with real or unreal.

BC
 

I am not sure what you mean by "making it more real" or "faking reality". They are all just image-capturing devices.

Interpolation, etc., is just image processing, translating light information captured on the sensor into digital data. It has nothing to do with real or unreal.

BC

I didn't say "making it more real". On the contrary, I am saying precisely the opposite, namely that any image is always less real.

What "image-capturing devices" do is capture images, i.e. a sampling or representation of reality, but never reality itself, even for 3CCD devices.

For example, in the Bayer sensor, for the red and blue data, out of every four pixels only one is real, i.e. directly sensed from the scene; the other three are mathematically guessed at, i.e. interpolated. For the green pixels, 50% are guessed at.

And that is the reason why the Bayer sensor is not used for scientific and technical applications. A Bayer image is "unreal" in the sense that the bulk of the digital data it generates was not sensed but guessed at. The guess may be correct, but it may not be, and we will never know.

What then is real or unreal to you?
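To make the "guessing" concrete, here is a toy bilinear-style demosaic sketch (an illustration only, not any real camera's pipeline): for each colour channel, photosites that did not measure that channel are filled with the average of the measured neighbours.

```python
import numpy as np

def bilinear_demosaic(raw):
    """Toy demosaic of an RGGB Bayer mosaic (edges wrap, for simplicity).

    raw: 2-D array of sensor readings laid out as
         R G R G ...
         G B G B ...
    Returns an (H, W, 3) RGB image where every value a photosite did not
    measure is the average of the measured values in its 3x3 neighbourhood.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)

    # Which photosites actually measured each colour.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0)
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(plane, dy, 0), dx, 1)
                count += np.roll(np.roll(mask, dy, 0), dx, 1)
        guessed = total / np.maximum(count, 1)        # interpolated estimate
        rgb[..., ch] = np.where(mask, raw, guessed)   # keep real values where measured

    return rgb

# Per 2x2 block, 3 of 4 red, 3 of 4 blue and 2 of 4 green output values are guesses.
```

Whether those averages match the scene depends on how smoothly the colours actually vary between neighbouring photosites, which is exactly the "may guess correctly, may not" point above.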
 

I didn't say "making it more real". On the contrary, I am saying precisely the opposite, namely that any image is always less real.

What "image-capturing devices" do is capture images, i.e. a sampling or representation of reality, but never reality itself, even for 3CCD devices.

For example, in the Bayer sensor, for the red and blue data, out of every four pixels only one is real, i.e. directly sensed from the scene; the other three are mathematically guessed at, i.e. interpolated. For the green pixels, 50% are guessed at.

And that is the reason why the Bayer sensor is not used for scientific and technical applications. A Bayer image is "unreal" in the sense that the bulk of the digital data it generates was not sensed but guessed at. The guess may be correct, but it may not be, and we will never know.

What then is real or unreal to you?
Doesn't that sound like a normal CCD or CMOS sensor? One blue, one red and two green filters in every four pixels? :dunno:
Information from the pixels beside each individual pixel is used to create the array of RGB values.
 

Doesn't that sound like a normal CCD or CMOS sensor? One blue, one red and two green filters in every four pixels? :dunno:
Information from the pixels beside each individual pixel is used to create the array of RGB values.
Yes it is; that's the "normal" Bayer sensor in your camera.
 

Come on... what's the deal with "real" and "unreal" now? If that's the case, what we're seeing isn't fully "real" either, since our eyes can't capture the full spectrum of light and colour.

I think the main thing is, as long as the captured image looks OK to the photographer and a higher standard isn't being demanded, that's really all that matters? :dunno:

I foresee another big debate... :sweat:
 

Come on... what's the deal with "real" and "unreal" now? If that's the case, what we're seeing isn't fully "real" either, since our eyes can't capture the full spectrum of light and colour.

I think the main thing is, as long as the captured image looks OK to the photographer and a higher standard isn't being demanded, that's really all that matters? :dunno:

I foresee another big debate... :sweat:
hahaha ... if "real" is no big "deal" to you, no problem; if it is, then it is. But facts are facts, ironic as it may be.
 

hahaha ... if "real" is no big "deal" to you, no problem; if it is, then it is. But facts are facts, ironic as it may be.

Haha, not trying to step on anyone's tail... just trying to avert any possible arguments :bsmilie:
 

Haha, not trying to step on anyone's tail... just trying to avert any possible arguments :bsmilie:
Why would anyone's tail be stepped on? Since when has someone's tail been on the line for knowing that the Bayer sensor is partly a guess at reality? Anyway, as I said, facts are facts, right?

And there is no problem with arguments. In fact, I think arguments are good if we learn something. Bad arguments are those where people refuse to learn.
 

hahaha ... if "real" is no big "deal" to you, no problem; if it is, then it is. But facts are facts, ironic as it may be.


Actually, what do you think about "realness", in the sense of data integrity, being important to you? Why is it important, and in what way?

Actually, I think photography is not supposed to totally represent our vision. Photography relies on receiving an image over time, whereas our vision works in video mode (we don't accumulate an exposure over two seconds; things come in and get sent off to the brain constantly). Our angle of vision does not vary the way lenses do, our perspective is almost always different from that of most focal lengths, and our perception of tonal range differs from that of the sensor. And last of all, there is no creative manipulation of our vision except for the imagination in our minds.
 

Actually, I think photography is not supposed to totally represent our vision ...
So what is photography "supposed" to be?
 

I think we need more than eyes to "see" ...

And that is imagination in the mind, not vision in the eyes. That imagination is used to spot interesting objects, moments and compositions through the eyes, and is applied via the camera to produce pictures the way the imagination sees them. Through the lens and viewfinder we see the picture to a certain degree of that imagination, and with further post-processing the picture is completed to satisfaction. Without the lens and viewfinder, our retina cannot see what the mind's imagination does.

And I don't get what the article you posted has to do with the retina being manipulated, the way photography is, to produce the images the mind thinks about. When there is no meaningful input through vision and yet images come entirely from the brain, it would probably be dreams during sleep, or perhaps in the blind.

When I talk about vision, seeing and eyes, I mean them as they are, not in extended senses, as is already acknowledged in the last sentence: the brain is important, even in sports. All I'm saying above is pretty straightforward: the way the eyes capture images and the way the sensor and camera capture images are not the same, basically for the reasons already given.

Anyway, I think I have gone off-topic too much. Sorry to the others.
 

So what is photography "supposed" to be?

It is the art of taking images through a camera (which does not show the same object the same way we see it with our eyes), after which we view that altered image on a screen or printout with our eyes.

That is the technical part. The non-technical part is subjective, and should be.
 
