Natural looking HDR, why not stack instead?



ArchRival

Dudes, newbie to HDR here.

I was writing my own program to do HDRI and was wondering why hardly anyone uses stacking as an alternative. Here's a comparison between stacking and merging:

The base images used for test are http://i286.photobucket.com/albums/ll89/ArchRival_2008/Photos for Web/O3.jpg and http://i286.photobucket.com/albums/ll89/ArchRival_2008/Photos for Web/O1.jpg.

The program uses a hybrid stack/merge method with global tone mapping. Comparing a stack-only result to a merge-only result, straight from the program without further post-processing:

ArtGallery6.jpg


Left is stack only, right is merge only. Note that the aim of the program is not to get a good image right away, but to get a maximum dynamic range image for further post-processing.

The results after further post processing with curves and saturation are:

ArtGallery7.jpg


Both stacking and merging by themselves will give a maximum dynamic range image as afforded by the base images.

The merged result is in the end slightly more pleasing, but the stacking result is not far behind.
Stacking is faster and easier to use, and it preserves the relative brightness of the parts of the image. Perhaps most importantly, it is impossible to get haloing with stacking.

So, for natural HDRIs, why are so few people using stacking?
 

What is this Stacking you speak of and how does it work?
 

By merging I mean opening your images in Photoshop, taking the bright sky from your -2 EV shot and the dark foreground from your +2 EV shot, and using a soft brush to blend them.

By stacking I mean cutting and pasting your -2 EV shot on top of your +2 EV shot and changing its opacity to 50%.

Or their algorithmic or mathematical equivalents.
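Roughly, in code (a toy numpy sketch of what I mean; the file names are placeholders, and a soft-edged luminance threshold stands in for the soft brush):

```python
import numpy as np
from PIL import Image

# Two aligned exposures as floats in [0, 1]; file names are placeholders.
under = np.asarray(Image.open("minus2ev.jpg"), dtype=np.float32) / 255.0
over  = np.asarray(Image.open("plus2ev.jpg"),  dtype=np.float32) / 255.0

# "Stacking": the -2 EV shot over the +2 EV shot at 50% opacity,
# i.e. a plain per-pixel average.
stacked = 0.5 * under + 0.5 * over

# "Merging": sky from the dark shot, foreground from the bright shot.
# A soft-clipped luminance threshold stands in for the soft brush here.
lum = over.mean(axis=2, keepdims=True)        # brightness of the +2 EV shot
mask = np.clip((lum - 0.8) / 0.2, 0.0, 1.0)   # ~1 where the +2 EV shot clips
merged = mask * under + (1.0 - mask) * over

Image.fromarray((stacked * 255).astype(np.uint8)).save("stacked.jpg")
Image.fromarray((merged * 255).astype(np.uint8)).save("merged.jpg")
```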
 

I believe that both techniques are used, depending on the situation. If the foreground-background separation is quite distinct, "merging" is easy. It eliminates movement between shots too.

But if separation is messy, 'stacking/layering' seems easier.

I suppose Photoshop's HDR and Photomatix use stacking, merging all shots into 32-bit data and doing local contrast?
 

Yeah, people in the know do use both.
It's just that the HDR bandwagon is huge now, so I think a lot of people may be jumping on it blindly.
 

I tried stacking photos in PS CS3 and the result turned out quite good... it's a noob attempt! haha

What I did was adjust a picture in terms of (1) levels, (2) curves, and (3) saturation or contrast, save each setting individually, and stack them. Voilà! Compare to the original picture and you will see the difference!
 

Is this method known as DRI?
 

By merging I mean opening your images in Photoshop, taking the bright sky from your -2 EV shot and the dark foreground from your +2 EV shot, and using a soft brush to blend them.

By stacking I mean cutting and pasting your -2 EV shot on top of your +2 EV shot and changing its opacity to 50%.

Or their algorithmic or mathematical equivalents.

Err, don't mind me, but this is not HDRI at all.

In the first place, if you created an HDR image, you would not be able to view it at all (without really sophisticated HW). It would not be viewable here on the CS forums, for sure.

HDRIs are what the name says: high dynamic range. All normal digital images (generated by digital cameras and scanners) are low dynamic range images. By using multiple exposures of the same scene, software can combine them into a single image that has a larger dynamic range than any one image.

For example (a highly simplified one): if you shot a pic three times, at -2, 0 and +2 EV, to create an HDRI, then the dynamic range of the HDRI actually spans -2 to +2. As you know, an image is normally either -2, 0 or +2, but it can't be all three at the same time. So the image cannot be viewed normally.
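If you want that combining step in code, here's a toy sketch (my own illustration, not any particular tool's algorithm; it assumes aligned shots already in linear [0, 1] floats and skips recovering the camera response curve):

```python
import numpy as np

def merge_to_hdr(imgs, evs):
    """Toy HDR merge: imgs are aligned float arrays in [0, 1], evs their
    exposure offsets (e.g. [-2, 0, +2]). Assumes the data is already
    linear; real code would undo the camera response curve first."""
    acc = np.zeros_like(imgs[0], dtype=np.float64)
    wacc = np.zeros_like(acc)
    for img, ev in zip(imgs, evs):
        # Trust mid-tones; near-black/near-white pixels say little here.
        w = 1.0 - 2.0 * np.abs(img - 0.5)
        acc += w * img / (2.0 ** ev)   # rescale to a common radiance scale
        wacc += w
    # The result is an unbounded float radiance map: the "HDRI".
    return acc / np.maximum(wacc, 1e-6)
```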

After creating an HDRI, you usually need to use tone mapping to map the high dynamic range back into low dynamic range, so that it can be viewed on your monitor. It is the tone mapping (which has many algos and settings) that actually generates the many very nice HDRIs you see around the web. This means that if you take one HDRI and use different tone mapping, you will get very different-looking images (in LDR).
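As a toy example of the tone mapping step, here's the classic Reinhard global curve; swap the curve or its settings and the same HDR data gives a very different LDR image:

```python
import numpy as np

def reinhard_global(hdr, exposure=1.0):
    """Squeeze an unbounded float radiance map into [0, 1] with the
    simple global curve L / (1 + L), then quantise for display.
    Different curves/settings give very different LDR 'looks'."""
    l = hdr * exposure
    return (np.clip(l / (1.0 + l), 0.0, 1.0) * 255.0).astype(np.uint8)
```

Feed it the float radiance map from the previous sketch and you get a viewable 8-bit image.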

The big debate in HDRI at the moment (IMHO) is what counts as a "real" HDRI: shooting one pic in raw and using software to push and pull +/-2 EV, versus shooting 3 actual pics at -2/0/+2. Personally, I think you really need to shoot 3 or 5 shots, because a software push/pull from raw is still slightly different from really capturing light at a specific EV.

So I hope you now have a better understanding of what HDRI really is, and you will see that this "merging" and "stacking" in Photoshop is just another digital darkroom technique and not really HDRI.
 

Dude, if I understand correctly, stacking and merging are forms of tone mapping.
 

Dude, if I understand correctly, stacking and merging are forms of tone mapping.

Yes, but they are tone mapping of multiple LDR images; still nothing to do with HDRI.

OK, imagine this example.

I have three 8-bit GIFs. I decide to "stack" them and I end up with some new 8-bit GIF. Clearly, this is different from using the three 8-bit GIFs to create a special 16-bit GIF and then, with some tone mapping method, converting that 16-bit GIF into a new 8-bit GIF. The "HDRI" would be the special 16-bit GIF, while the normal 8-bit GIFs are LDRIs.

How about another example: imagine an image that has the full AdobeRGB AND sRGB gamuts at the same time. Could you view it properly? No, because the colour gamut cannot be properly represented. Which is why, to really view an HDRI, you need special SW/HW that can display far greater gamuts than just our standard 16 million colours, etc.

Similarly, if we are used to seeing 16-bit images, how about viewing native 32-bit or 48-bit images? In fact, a normal LDR image (like a 16-bit one) is based on integer arithmetic, but in HDRI, we are using 32-bit floating point numbers to represent a single bit.

In HDR, we're not talking simply about bits or gamuts, of course, but about the tonal range. In a normal LDR image, the tonal range that can be represented is much smaller than in an HDR one. Why don't you check out OpenEXR from ILM? I mostly use that format myself. The 16-bit floating point representation can (with extra SW) be displayed using nVidia hardware. But even this is called a "half", with "full" meaning 32-bit floating point.
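You can poke at the half/full distinction with plain numpy, which has the same 16-bit half-float type that OpenEXR's "half" uses:

```python
import numpy as np

# Float channels can hold values far above "white"; that headroom is
# where the extra dynamic range lives.
radiance = np.array([0.25, 1.0, 180.0, 65504.0], dtype=np.float32)

half = radiance.astype(np.float16)   # OpenEXR's "half": 16-bit float
print(half)                          # 65504 is the largest finite half
print(np.finfo(np.float16).max)      # 65504.0
print(np.finfo(np.float32).max)      # ~3.4e38 for "full" 32-bit float
```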

So the use of digital darkroom techniques or tone mapping techniques is not by itself equivalent to HDRI. They are more like techniques to make HDRIs viewable on traditional (common) LDR HW and SW.

When I first looked at HDRI, I had the same ideas you have, but I've got a much better understanding now. There are plenty of websites dedicated to HDRI (and I don't mean just to view the pix); you should take a look at them to better understand what HDRI is all about.
 

Yes, but they are tone mapping of multiple LDR images; still nothing to do with HDRI.

OK, imagine this example.

I have three 8-bit GIFs. I decide to "stack" them and I end up with some new 8-bit GIF. Clearly, this is different from using the three 8-bit GIFs to create a special 16-bit GIF and then, with some tone mapping method, converting that 16-bit GIF into a new 8-bit GIF. The "HDRI" would be the special 16-bit GIF, while the normal 8-bit GIFs are LDRIs...

Ahhh... dude, thanks for the clarification, the info, and the link. It looks pretty useful.

So merging and stacking = some form of tone mapping.
And HDR must be = 32-bit = 2^32 = 4,294,967,296 levels.

In that case I must apologise for not making myself clear. The Photoshop example was highly simplified.

A proper stack is not just adding the pixels. Stacking is adding, then averaging, which means that if the originals are 8-bit, we have to go to at least 16-bit for the final image.
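A quick numpy illustration with made-up pixel values:

```python
import numpy as np

a = np.array([200], dtype=np.uint8)   # made-up pixel values
b = np.array([180], dtype=np.uint8)

print(a + b)   # [124]: 200 + 180 = 380 wraps around in 8 bits

# Widen first, then add and average.
print((a.astype(np.uint16) + b.astype(np.uint16)) / 2)   # [190.]
```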

but in HDRI, we are using 32-bit floating point numbers to represent a single bit.

Impressive. So each of your 2^32 levels is represented by 2^32 levels = 2^64??

In any case I use floats for all image processing, so I guess I'm okay.
 

Ahhh... dude, thanks for the clarification, the info, and the link. It looks pretty useful.

So merging and stacking = some form of tone mapping.
And HDR must be = 32-bit = 2^32 = 4,294,967,296 levels.

Actually, it is not quite like that. You also need to consider floating point; the example you are talking about is still an integer representation of the image. Imagine if we use 1 bit to represent a low dynamic range: 0 is black and 1 is white. How would you define grey? A floating point 0.5 would give you 50% grey. So now we use 32 bits in floating point to define the different shades of colour, hence high dynamic range.


In any case I use floats for all image processing, so I guess I'm okay.

Let me put this forth again. If you hit "Save" on your stacked "HDRI" and get file "X", does your file have graphic information encoded in 32-bit floating point, with a single "R" pixel value represented in 32-bit floating point? Or is it still a 16-bit integer file of some type?

Finally, I'm not the one that defined HDRI. Look it up on Wikipedia, or read up on ILM's OpenEXR. ILM is in this business for real, and OpenEXR has been used on Men in Black II, Harry Potter, etc. So they are the ones that defined HDRI, not me.
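For what it's worth, one way to settle that question for a given file is to write it out, read it back, and check the data type. A sketch using the tifffile package and a float32 TIFF (file name made up; any float-capable format would do):

```python
import numpy as np
import tifffile

# A made-up float32 "radiance" image with values above 1.0.
hdr = (np.random.rand(4, 4, 3) * 100.0).astype(np.float32)

tifffile.imwrite("stacked_hdr.tif", hdr)   # writes a float32 TIFF
back = tifffile.imread("stacked_hdr.tif")

print(back.dtype)          # float32: the pixels really are 32-bit floats
print(float(back.max()))   # values above 1.0 survive the round trip
```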
 

Actually, it is not quite like that. You also need to consider floating point; the example you are talking about is still an integer representation of the image. Imagine if we use 1 bit to represent a low dynamic range: 0 is black and 1 is white. How would you define grey? A floating point 0.5 would give you 50% grey. So now we use 32 bits in floating point to define the different shades of colour, hence high dynamic range.

What the.......?? Dude, you are all mixed up.
A bit is only 1 or 0; there is no such thing as 0.5 bits.
32-bit floating point simply means the values are represented as single-precision real numbers. The range used can be anything.

Let me put this forth again. If you hit "Save" on your stacked "HDRI" and get file "X", does your file have graphic information encoded in 32-bit floating point, with a single "R" pixel value represented in 32-bit floating point? Or is it still a 16-bit integer file of some type?

Yes. The data types are all doubles, i.e. 64-bit floating point values, which more than cover your "32-bit floating point".
 

What the.......?? Dude, you are all mixed up.
A bit is only 1 or 0; there is no such thing as 0.5 bits.
32-bit floating point simply means the values are represented as single-precision real numbers. The range used can be anything.

Yes, you are right. You would need more than one bit to represent 0.5, whereas a 1 or a 0 is a single bit. Hence the higher dynamic range. However, I was merely giving a highly simplified example, like my earlier example of how an image can be -2, 0 and +2 EV at the same time, which of course sounds ridiculous in normal terms, but that's the higher dynamic range in action.

I think this all came about primarily because right in your very first post:

Dudes, newbie to HDR here.

I was writing my own program to do HDRI and was wondering why hardly anyone uses stacking as an alternative. Here's a comparison between stacking and merging:

...

So, for natural HDRIs, why are so few people using stacking?

I was trying to illustrate that stacking and merging are tone mapping techniques, not HDR. Hence, there is no comparison between stacking/merging and creating HDRIs (which is why people don't use stacking/merging techniques when creating HDRIs).

OTOH, if you say (for example) that you use an averaging technique over three different images, weighted with a 10 px radius, to generate an HDRI, and then use a form of merging/stacking (based on additional information from the three images again) to (better) tone map the resultant HDRI into LDR, OK, now that makes more sense.
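Something like this, say (a rough sketch of what I mean; the mid-tone weighting and the Gaussian smoothing at your 10 px radius are illustrative guesses, not anyone's actual algorithm):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(imgs, evs, radius=10):
    """Rough sketch: spatially weighted average of aligned [0, 1] float
    exposures, weights smoothed over ~radius px, then a global curve
    back to 8-bit. All weights/parameters are illustrative guesses."""
    acc = np.zeros_like(imgs[0], dtype=np.float64)
    wacc = np.zeros_like(acc)
    for img, ev in zip(imgs, evs):
        w = 1.0 - 2.0 * np.abs(img - 0.5)                  # favour mid-tones
        w = gaussian_filter(w, sigma=(radius, radius, 0))  # smooth spatially
        acc += w * img / (2.0 ** ev)                       # common radiance scale
        wacc += w
    hdr = acc / np.maximum(wacc, 1e-6)
    return (255.0 * hdr / (1.0 + hdr)).astype(np.uint8)    # Reinhard-ish curve
```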

Anyway, since you are writing your own program, I wish you the best of luck, and I look forward to your success. May you show those Photomatix, ILM et al. people a thing or two, OK?
 

Anyway, since you are writing your own program, I wish you the best of luck, and I look forward to your success. May you show those Photomatix, ILM et al. people a thing or two, OK?

Thanks for the encouragement, dude, but I'm only one person and the program is only one week of effort.

In any case, while I don't claim my methods are as good as the pros', here's a comparison with Photomatix's Grand Canal images.

Own program, merging with 32-bit floating point precision:
http://i286.photobucket.com/albums/ll89/ArchRival_2008/Photos for Web/RGB2-1.jpg

Own program, stacking with 32-bit floating point precision:
http://i286.photobucket.com/albums/ll89/ArchRival_2008/Photos for Web/RGB_Stacked.jpg

Manual stack in PS2, 8-bit image:
http://i286.photobucket.com/albums/ll89/ArchRival_2008/Photos for Web/Photoshop_Stacked.jpg

The Photomatix result:
http://www.hdrsoft.com/images/grandcanal/tm.html

In all cases details have been recovered throughout the images.

I know CS3 has a stacking function. I'd appreciate it if anybody with CS3 could run a benchmark to show how it compares.
 

I'll use stacking whenever necessary. Sometimes it doesn't end there: I'll take the exposure from stacking and blend it again with another exposure, maybe from a tone-mapped image.

All these techniques are there and available to use. I find them great with certain types of shots. There's no "barrier" to techniques in photography: either you have the time to process, or you leave it as it is :cool:

Congrats on your new program :)
 

Tried some old code on a single image: a hybrid of stacking with some local tone mapping. The local tone mapping technique does a great job of enhancing the noise and is still kinda crappy, but it's fun to see what the code can do.

The original image:
Original.jpg


The post processed image:
RGB_Pyramid_3.jpg
 
