8-bit, 16-bit & colour gamut question



Veronica Choo
New Member
Feb 26, 2004

I'm new here... this forum was recommended to me by word of mouth after I got a second-hand Canon D30. I have a question which I hope the experienced and kind 'brothers and sisters' here can help me with! Thanks!

Before I convert my RAW images, there is an option in the software that allows me to select either 8-bit or 16-bit TIFF conversion. From what little I understand, more 'shades' of colour can be represented in 16-bit.

1. Does that mean that if I want an image (whether for on-screen display or for printouts) that fully utilizes a much wider range of colours, I should select 16-bit instead of 8-bit?

2. If the above makes sense, does that mean using an 8-bit representation causes colours to be 'clipped'? Is that the same as saying the colour gamut (say I attach Adobe RGB (1998) as my ICC profile to the image) is reduced as well?

Ok, pardon me if the above sounds basic or inaccurate. I sincerely appreciate any help!!!
 

Pardon me for asking, but what software are you using that lets you select between 8-bit and 16-bit TIFF conversion?
 

Veronica Choo said:
1. Does that mean that if I want an image (whether for on-screen display or for printouts) that fully utilizes a much wider range of colours, I should select 16-bit instead of 8-bit?
No. The number of bits affects the number of gradations between the brightest and darkest pixel values for each of the individual RGB channels.


Veronica Choo said:
2. If the above makes sense, does that mean using an 8-bit representation causes colours to be 'clipped'? Is that the same as saying the colour gamut (say I attach Adobe RGB (1998) as my ICC profile to the image) is reduced as well?
With 8 bits, there are 256 gradations for each of the RGB colour channels; with 16 bits, there are 65,536. The actual gamut is set by the colour space used, i.e. the Adobe RGB (1998) colourspace spreads the colours out more than the sRGB colourspace does. If you save an image using a narrower-gamut colourspace, you might 'clip' some of the colours. Whether it is 8-bit or 16-bit, the absolute maximum pixel value is identical if the same colourspace is used; only the number of steps in between differs.
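
To put rough numbers on that, here's a quick Python sketch (my own illustration, nothing to do with Canon's software):

Code:
# Gradations per channel, and total representable colours, at each bit depth
for bits in (8, 16):
    levels = 2 ** bits       # 256 for 8-bit, 65536 for 16-bit
    total = levels ** 3      # three channels: R, G and B
    print(f"{bits}-bit: {levels} levels per channel, {total:,} colours in total")

Either way, the brightest value (255 or 65535) lands on the same colour if the same colourspace is attached; the 16-bit file just has finer steps in between.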
 

I'm using Canon's software...

The gradations you're referring to are not the same as 'shades' of colour (more colours), then? I might have gotten confused. :embrass:

So if 8-bit or 16-bit affects the brightness gradations of pixels, does that mean using 16-bit conversion will give my image better quality? I tried both but can't quite see any difference on screen with my own eyes.

Thanks again!
 

Veronica Choo said:
The gradations you're referring to are not the same as 'shades' of colour (more colours), then? I might have gotten confused. :embrass:
Gradations = shades of colour. However, all the extra shades of colour sit in between the brightest and darkest pixel values.

For example, with 1-bit encoding you can have either black or white. With 2 bits you can have black, dark grey, light grey and white.
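
If it helps to see those levels written out, here's a tiny sketch (hypothetical Python of mine, scaling every encoding onto the usual 0-255 display range):

Code:
# Evenly spaced grey levels for an n-bit channel, scaled onto 0-255
def grey_levels(bits):
    steps = 2 ** bits
    return [round(i * 255 / (steps - 1)) for i in range(steps)]

print(grey_levels(1))  # [0, 255]          -> black, white
print(grey_levels(2))  # [0, 85, 170, 255] -> black, dark grey, light grey, white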


Veronica Choo said:
So if 8-bit or 16-bit affects the brightness gradations of pixels, does that mean using 16-bit conversion will give my image better quality? I tried both but can't quite see any difference on screen with my own eyes.
16-bit conversion gives better image quality if you are adjusting the curves, contrast, colours, etc. of the image. The more you adjust, the more obvious the difference in quality becomes. If it is just a straight conversion without any adjustments, there will not be any discernible difference in image quality.
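
If you want to see that for yourself, here's a rough Python sketch (a made-up gamma curve standing in for a heavy adjustment; it is not what Canon's converter actually does). It pushes every source level through a strong curve and counts how many distinct display values survive:

Code:
# How many distinct 8-bit display levels survive a strong curve,
# starting from an 8-bit versus a 16-bit source?
def levels_after_curve(source_levels, gamma=3.0):
    max_in = source_levels - 1
    out = set()
    for v in range(source_levels):
        adjusted = (v / max_in) ** gamma   # heavy non-linear "curve" adjustment
        out.add(round(adjusted * 255))     # quantise back to 8-bit for display
    return len(out)

print("8-bit source: ", levels_after_curve(2 ** 8), "distinct levels")   # noticeably under 256
print("16-bit source:", levels_after_curve(2 ** 16), "distinct levels")  # all 256

The 8-bit source loses shadow levels to rounding, which is what shows up as banding after heavy edits, while the 16-bit source still fills every display level.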
 
