Will resizing a photo reduce noise?


maisatomai

I noticed that when viewed at web size, my images look good, but when viewed at a larger size the flaws start to show. I was wondering: can resizing a photo reduce noise?
 

It just makes noise less obvious, but the noise is still there.

When there is less fine detail to distinguish the noise from, it naturally stands out less.
 

edutilos- said:
It just makes noise less obvious, but the noise is still there.

When there is less fine detail to distinguish the noise from, it naturally stands out less.

I don't fully agree. Resizing, if it is a reduction, does reduce noise, provided the algorithm uses an averaging filter, or the resizing kernel consults surrounding pixels to obtain each output pixel.

Noise is mostly inaccuracy in representation. Averaging over a larger sample is the same idea as repeating the same laboratory experiment many times and taking the average of all the outcomes: it helps to cancel out the errors contained in the sampling process.

However, in a 2D image, plain averaging damages edges, where there is a steep change in colour values. Hence slightly better algorithms that are polynomial in nature, such as bicubic resampling, are used: they approximate the rate of change of the gradient and consult a larger 2D neighbourhood for a better approximation.

That there is less detail is certainly true, but the downsized image contains more information than one captured at the lower resolution in the first place. This is fundamentally the same reason the larger pixels found in full-frame sensors are advantageous: the larger surface area acts as a larger sampling area that is consolidated optically into a single pixel (a simplified scenario, not considering the RGB array arrangement).
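To make the averaging claim concrete, here is a minimal Python sketch (my own illustration using numpy and a synthetic flat grey patch, not anything from an actual camera). Halving a noisy image with a 2x2 box filter should cut the random-noise standard deviation roughly in half, since each output pixel averages four samples:

```python
# Minimal sketch: downsizing with an averaging kernel reduces noise.
import numpy as np

rng = np.random.default_rng(0)

# A 2048x2048 "capture" of a flat grey wall (true value 128) plus random noise.
true_value = 128.0
noisy = true_value + rng.normal(0.0, 4.0, size=(2048, 2048))

# Downsize by 2: average each non-overlapping 2x2 block (a box filter).
h, w = noisy.shape
small = noisy.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging 4 samples should cut the noise std by about sqrt(4) = 2.
print("noise std before:", (noisy - true_value).std())   # ~4.0
print("noise std after: ", (small - true_value).std())   # ~2.0
```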
 

maisatomai said:
I noticed that when viewed at web size, my images look good, but when viewed at a larger size the flaws start to show. I was wondering: can resizing a photo reduce noise?

Is it noise or pixelation? If you display an image at a size larger than its resolution can support, the picture will look pixelated.

Pixelation - Wikipedia
 

David Kwok said:
I don't fully agree. Resizing, if it is a reduction, does reduce noise, provided the algorithm uses an averaging filter, or the resizing kernel consults surrounding pixels to obtain each output pixel.

Noise is mostly inaccuracy in representation. Averaging over a larger sample is the same idea as repeating the same laboratory experiment many times and taking the average of all the outcomes: it helps to cancel out the errors contained in the sampling process.

However, in a 2D image, plain averaging damages edges, where there is a steep change in colour values. Hence slightly better algorithms that are polynomial in nature, such as bicubic resampling, are used: they approximate the rate of change of the gradient and consult a larger 2D neighbourhood for a better approximation.

That there is less detail is certainly true, but the downsized image contains more information than one captured at the lower resolution in the first place. This is fundamentally the same reason the larger pixels found in full-frame sensors are advantageous: the larger surface area acts as a larger sampling area that is consolidated optically into a single pixel (a simplified scenario, not considering the RGB array arrangement).

I'm afraid you lost me after the "I don't fully agree".

I just presented my view as a photographer, without any technical knowledge of how resizing algorithms work. In any case, I hope you're right, though I don't quite understand what is being explained here, nor do I really care how it is done. What matters to me is how to get what I want, and to me the presence of noise in an image relates to the initial output, not the final one. If I may speak my mind, I don't think many people here really care either. Cheers.
 

Wah.... I'm dead~ Sounds 'chim' (deep) to the max, Edu.... :'(

My brain cannot take it... Aargh~ Can you summarize? :p
 

I think David has already summarised it for our benefit, lol. A layman's explanation may be: if the picture gets small enough, it's definitely harder to detect noise; but if the image is run through a computer, the computer may detect that there is indeed less noise.

Basically it's what our (untrained) eyes see as opposed to what a (trained) computer sees. If you can see the differences in noise the way a computer can, then it matters to you; the rest of us with untrained eyes won't notice the difference. Neither view is right or wrong, just a matter of what you believe and what you want to do about it.

I believe the common position is that we don't bother with it, because there's little point in shrinking an image down so small that you can't enjoy looking at it. If noise is an issue for you, then correct for it when shooting.
 

David Kwok said:
I don't fully agree. Resizing, if it is a reduction, does reduce noise, provided the algorithm uses an averaging filter, or the resizing kernel consults surrounding pixels to obtain each output pixel.

Noise is mostly inaccuracy in representation. Averaging over a larger sample is the same idea as repeating the same laboratory experiment many times and taking the average of all the outcomes: it helps to cancel out the errors contained in the sampling process.

However, in a 2D image, plain averaging damages edges, where there is a steep change in colour values. Hence slightly better algorithms that are polynomial in nature, such as bicubic resampling, are used: they approximate the rate of change of the gradient and consult a larger 2D neighbourhood for a better approximation.

That there is less detail is certainly true, but the downsized image contains more information than one captured at the lower resolution in the first place. This is fundamentally the same reason the larger pixels found in full-frame sensors are advantageous: the larger surface area acts as a larger sampling area that is consolidated optically into a single pixel (a simplified scenario, not considering the RGB array arrangement).

Wow... super technical, loads of information... but I like it. Thanks for the detailed explanation.
 

I think, in layman's terms:

1. When you resize your image, the noise also becomes smaller.
2. Smaller noise is harder to see than bigger noise.
3. The act of resizing removes data and generates new, averaged pixels, so some noise gets averaged away (see the sketch below).
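Point 3 is the crucial one: a resize only averages noise away if the kernel actually averages. A small numpy sketch (synthetic data, purely illustrative) comparing plain pixel-dropping, which leaves the noise level untouched, with a 2x2 box filter:

```python
# Sketch: why the resampling kernel matters (synthetic flat grey patch).
# Nearest-neighbour decimation just drops pixels, so the noise level is
# untouched; a box (averaging) filter actually reduces it.
import numpy as np

rng = np.random.default_rng(1)
noisy = 128.0 + rng.normal(0.0, 4.0, size=(1024, 1024))

nearest = noisy[::2, ::2]                              # keep every 2nd pixel
box = noisy.reshape(512, 2, 512, 2).mean(axis=(1, 3))  # average 2x2 blocks

print("nearest-neighbour noise std:", (nearest - 128.0).std())  # still ~4.0
print("box-filter noise std:      ", (box - 128.0).std())       # ~2.0
```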
 

Thumbs up!!! I didn't know you could explain at such a 'chim' (deep) level... Power!

David Kwok said:
I don't fully agree. Resizing, if it is a reduction, does reduce noise, provided the algorithm uses an averaging filter, or the resizing kernel consults surrounding pixels to obtain each output pixel.

Noise is mostly inaccuracy in representation. Averaging over a larger sample is the same idea as repeating the same laboratory experiment many times and taking the average of all the outcomes: it helps to cancel out the errors contained in the sampling process.

However, in a 2D image, plain averaging damages edges, where there is a steep change in colour values. Hence slightly better algorithms that are polynomial in nature, such as bicubic resampling, are used: they approximate the rate of change of the gradient and consult a larger 2D neighbourhood for a better approximation.

That there is less detail is certainly true, but the downsized image contains more information than one captured at the lower resolution in the first place. This is fundamentally the same reason the larger pixels found in full-frame sensors are advantageous: the larger surface area acts as a larger sampling area that is consolidated optically into a single pixel (a simplified scenario, not considering the RGB array arrangement).
 

I don't see why a photographer should care less about the technicalities. All over the Internet we see not a few but numerous photographers, from casual shooters to hobbyists to professionals, discussing the fundamentals of how cameras are made and how photographers depend on these technicalities, either to exploit them or to adapt to them, to bring the best out of these tools. I have no intention of undermining what you know about photography; in any case, you may well have a much better appreciation of photography than I do. What I have explained is what I understand about the effect of sensor size on the noise, or errors, that we observe.

In fact, more than once I have run personal experiments on these phenomena before writing about them. I don't fancy writing for the sake of writing and misleading the community here. What the community chooses to accept or dismiss is entirely up to individual discretion. I observed inaccuracies in the information presented; I stand to correct them to the best of my ability.

It has nothing to do with who you are or what you do; that matters neither to me nor to the explanation I presented earlier. It's for the community to absorb, should it make sense to them, and to verify for themselves whether it's right or wrong.
 

Wah.... I'm dead~ Sounds 'chim' (deep) to the max, Edu.... :'(

My brain cannot take it... Aargh~ Can you summarize? :p

To elaborate further on what I explained earlier: noise in a sensor is erratic and normally doesn't have a fixed pattern. This is true both spatially and temporally. Spatially means, for example: I use a 2x2 block of sensor pixels to capture a flat grey surface (assume the grey is exactly RGB(128,128,128)). Noise exists in the capture, so you get pixel (1) with values (127,128,128), pixel (2) with (129,128,128), pixel (3) with (128,125,128) and pixel (4) with (128,130,128). These pixel values are hypothetical but entirely plausible. Now perform a simple downsize, halving the image along its width and height, so a 2048x2048 image becomes 1024x1024. Each 2x2 block of pixels becomes one pixel value, right?

Do the simple averaging maths and the single pixel value comes out as (128,128,128). That is, even though noise exists, it is possible to get back the original value. Is this too good to be true? No, it isn't. But will you always get back the exact intended value? No, you won't. Based on mathematical probability, though, you will likely get nearer to the original value than if you had captured a 1024x1024 RAW image in the first place, because that is one sample per pixel versus four samples reduced into one pixel value. This is also why, if you go to a casino and bet on BIG and SMALL an infinite number of times, your chance of winning gets nearer and nearer to 50%. In reality you will never reach 50%, because you have less money and time than the casino, so in the end you always LOSE if you stay long enough. (The casino example presumes that luck is not involved and the process is purely mathematical; those still in school can verify it with a mathematician there if you are not convinced.)
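For anyone who wants to check the 2x2 grey-patch arithmetic above, a few lines of Python (note the green channel averages to 127.75, which only rounds back to 128; the recovery is probabilistic, not exact):

```python
# Checking the 2x2 grey-patch averaging example from the post above.
import numpy as np

block = np.array([
    [127, 128, 128],   # pixel (1)
    [129, 128, 128],   # pixel (2)
    [128, 125, 128],   # pixel (3)
    [128, 130, 128],   # pixel (4)
], dtype=np.float64)

print(block.mean(axis=0))           # [128.   127.75 128.  ]
print(np.rint(block.mean(axis=0)))  # [128. 128. 128.] after rounding
```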

The above is known as spatial noise reduction. How about temporal noise reduction? Temporal noise reduction is widely used in video post-processing; the samples span time instead of space. For still photography this is possible too. I have done it before.

Place your camera on a tripod, use mirror lock-up (M-UP) mode or the timer, and take around 10 images of the same still scene using a high ISO setting such as ISO 3200. Then stack the photographs as layers in Photoshop and apply the Mean operation across the stack, and you will find the output image less noisy. Follow the instructions here to understand how the operation works: Adobe Photoshop CS4 * Creating an image stack (Photoshop Extended)
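For those without Photoshop Extended, the same Mean-stack operation can be sketched in a few lines of Python (assuming Pillow and numpy are installed; the file names here are hypothetical, and the frames are assumed to be already aligned, since they came off a tripod):

```python
# Temporal noise reduction: per-pixel mean across a stack of frames.
import numpy as np
from PIL import Image

paths = [f"frame_{i:02d}.jpg" for i in range(10)]  # your 10 high-ISO shots

# Stack the frames (time axis first) and take the per-pixel mean.
stack = np.stack([np.asarray(Image.open(p), dtype=np.float32) for p in paths])
mean = stack.mean(axis=0)

# Averaging 10 frames cuts random noise by about sqrt(10) ~= 3.2x.
Image.fromarray(np.uint8(np.clip(mean, 0, 255))).save("stacked_mean.jpg")
```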

If you are interested in learning more about noise-reduction techniques, there is some good reading material at:
Noise Reduction By Image Averaging

There is no rocket science to this; you learned it in school. The issue here is working out how to apply the technique to real-life scenarios.

There is no line drawn between a scientist and an artist. The line is often drawn by yourself.

A piece of extra information for those reading this explanation of temporal noise reduction, and a place in Singapore where you can definitely use it: I have come across multiple threads on ClubSNAP asking how to take a long exposure of the Singapore Flyer at night. A partial answer is as follows. If you are using a long exposure to get smooth water, then I'm afraid the Flyer's rotation doesn't quite allow it; you get smearing. If your intention is to get clean images, the alternative is the temporal noise-reduction technique at a high ISO setting, which permits a fast shutter speed.

Now, the following assumptions are required for the best output:
1) The time between shots is as short as possible, ideally under 1 s, though I guess 5 s works too.
2) You are shooting on a tripod.
3) The images are realigned afterwards (easily done in Photoshop using auto-alignment).

Use the lowest ISO you can get away with and take around 5 to 10 images of the same scene using continuous shooting mode. The shots should be taken as fast as possible, at 1/30 s or faster. You will want a relatively high ISO, but go as low as possible, so long as your exposure time stays fast enough to prevent smearing.

Use the image-stacking technique proposed earlier to reduce the noise, but you will need to align the images first: across 5 to 10 s there will certainly be movement, though since it is minor, the motion can be treated as almost linear. This will not help recover lost colour, but the noise will certainly be much reduced.
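As a sketch of this align-then-average workflow in Python rather than Photoshop (my own illustration using OpenCV's ECC alignment with a translation-only model, which assumes the inter-frame motion really is small; the file names are hypothetical):

```python
# Align each frame to the first with ECC, then average the stack.
import cv2
import numpy as np

paths = [f"flyer_{i:02d}.jpg" for i in range(8)]
frames = [cv2.imread(p).astype(np.float32) for p in paths]

ref = frames[0]
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)

aligned = [ref]
for frame in frames[1:]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)  # start from the identity transform
    # Estimate the translation mapping this frame onto the reference.
    # (Some OpenCV 4.1 builds also require inputMask and gaussFiltSize args.)
    cc, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                    cv2.MOTION_TRANSLATION, criteria)
    h, w = ref_gray.shape
    aligned.append(cv2.warpAffine(frame, warp, (w, h),
                                  flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))

# Per-pixel mean of the aligned frames: random noise drops ~ sqrt(N).
mean = np.mean(aligned, axis=0)
cv2.imwrite("flyer_stacked.jpg", np.uint8(np.clip(mean, 0, 255)))
```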
 

David Kwok said:
To elaborate further on what I explained earlier: noise in a sensor is erratic and normally doesn't have a fixed pattern. This is true both spatially and temporally. Spatially means, for example: I use a 2x2 block of sensor pixels to capture a flat grey surface (assume the grey is exactly RGB(128,128,128)). Noise exists in the capture, so you get pixel (1) with values (127,128,128), pixel (2) with (129,128,128), pixel (3) with (128,125,128) and pixel (4) with (128,130,128). These pixel values are hypothetical but entirely plausible. Now perform a simple downsize, halving the image along its width and height, so a 2048x2048 image becomes 1024x1024. Each 2x2 block of pixels becomes one pixel value, right?

Do the simple averaging maths and the single pixel value comes out as (128,128,128). That is, even though noise exists, it is possible to get back the original value. Is this too good to be true? No, it isn't. But will you always get back the exact intended value? No, you won't. Based on mathematical probability, though, you will likely get nearer to the original value than if you had captured a 1024x1024 RAW image in the first place, because that is one sample per pixel versus four samples reduced into one pixel value. This is also why, if you go to a casino and bet on BIG and SMALL an infinite number of times, your chance of winning gets nearer and nearer to 50%. In reality you will never reach 50%, because you have less money and time than the casino, so in the end you always LOSE if you stay long enough. (The casino example presumes that luck is not involved and the process is purely mathematical; those still in school can verify it with a mathematician there if you are not convinced.)

The above is known as spatial noise reduction. How about temporal noise reduction? Temporal noise reduction is widely used in video post-processing; the samples span time instead of space. For still photography this is possible too. I have done it before.

Place your camera on a tripod, use mirror lock-up (M-UP) mode or the timer, and take around 10 images of the same still scene using a high ISO setting such as ISO 3200. Then stack the photographs as layers in Photoshop and apply the Mean operation across the stack, and you will find the output image less noisy. Follow the instructions here to understand how the operation works: Adobe Photoshop CS4 * Creating an image stack (Photoshop Extended)

If you are interested in learning more about noise-reduction techniques, there is some good reading material at:
Noise Reduction By Image Averaging

There is no rocket science to this; you learned it in school. The issue here is working out how to apply the technique to real-life scenarios.

There is no line drawn between a scientist and an artist. The line is often drawn by yourself.

Ah!!! Now THAT explains why some people shoot 10 similar shots and then stack them up! Even at the lowest ISO and on a tripod... for the "best" possible IQ, right? :D
 

Ah!!! Now THAT explains why some people shoot 10 similar shots and then stack them up! Even at the lowest ISO and on a tripod... for the "best" possible IQ, right? :D

They could be doing HDR with manual or automatic bracketing. But I wouldn't recommend this method for best IQ when you can afford the lowest ISO and a tripod: a long exposure gives you better output than the method I proposed earlier. You won't want to use both at the same time either; it introduces too many errors, which can drift the result.
 

David, you're going into the realm of programming, digital imaging and mathematics. Even with my background in programming, videography and maths, I'm having a very hard time just reading it (but yes, I get your message). I don't know if it's just me, but your past few posts have been getting into extremely technical territory, which can end up confusing more people than helping them.

In short: Alex Ortega summed it all up nicely, so for the less tech-savvy (most of us), I feel that will be good enough.

When you resize smaller, you take averages. When noise gets averaged, it can become less significant, so smaller images can show less "visible" noise. On the downside, fine detail is also lost to the same averaging.

Does this make sense? Let me know if I'm mistaken.
 

David, you're going into the realm of programming, digital imaging and mathematics. Even with my background in programming, videography and maths, I'm having a very hard time just reading it (but yes, I get your message). I don't know if it's just me, but your past few posts have been getting into extremely technical territory, which can end up confusing more people than helping them.

In short: Alex Ortega summed it all up nicely, so for the less tech-savvy (most of us), I feel that will be good enough.

When you resize smaller, you take averages. When noise gets averaged, it can become less significant, so smaller images can show less "visible" noise. On the downside, fine detail is also lost to the same averaging.

Does this make sense? Let me know if I'm mistaken.

I agree these discussions are relatively technical, and I apologize if I have confused the community. I believe a couple of posts have summarized it pretty well.

For anyone interested in understanding it in depth, feel free to read further into the technicalities. Simply put, the averaging performed during image downsizing reduces errors, as I mentioned right at the beginning, and you can use it as one approach to reducing noise. Nonetheless, the usual craft of choosing the ISO setting, aperture and exposure time for optically optimal quality is still the best way to go. The digital approach can only do so much; a clean source is still your best bet for the best output quality.

Downsizing works because most photographers don't need 12 MP, let alone the 36 MP of the upcoming D800; that much resolution is simply overkill for your workflow when you are only outputting to the web or 4R prints.