Guide: About digital aspects of DSLR


Posted by zoossh, Senior Member (Singapore). Thread status: not open for further replies.
Please do not post in this thread in order to maintain a clean and efficient platform.

Status: 2009 Jan 03, still editing and re-verifying old information

Summary: About digital input and output

1. Light conversion into data: The sensor
1.1 What newbies need to know about the sensor
1.2 Types of sensors


2. Sensor size, frame and focal length
2.1 Concept of diagonal measurement of the rectangular sensor
2.2 Angle and field of view: frame contents, subject magnification - and what is the lens conversion factor?
2.3 Sensor size and resultant focal length, the relationship in crop factor/conversion factor
2.4 Change of sensor size in changing angle of view
2.5 Change of focal length in changing angle of view
2.6 Compensation of focal length to different sensor size


3. Sensor size
3.1 Why is the 35mm equivalent not 35mm wide?
3.2 APS sizes of film and sensor
3.3 APS size sensors to 35mm equivalents
3.4 True sensor size and aspect ratio
3.5 The new 4/3 systems for DSLR
3.6 The larger formats


4. Sensor and image resolution
4.1 Resolution, pixels and megapixel
4.2 Do megapixels matter?
4.3 Can megapixel values always be compared?


5. Image quality
5.1 Image quality in general
5.2 Picture formats: RAW, jpeg and other formats
5.3 What jpeg compression should I use?


6. Image output size and aspect ratio
6.1 DPI
6.2 Printout size formats
6.3 Aspect ratio, cropping and composition


7. File size, transfer, storage
7.1 Keeping your digital photos


8. Digital effects, defects and applications
8.1 Processing: In-camera and post-camera processing
8.2 Concept of processing and realism


1. Light conversion into data: The sensor


1.1 What newbies need to know about the sensor.

Essentially, the sensor replaces film as the light receptacle, and it changes some characteristics inherent in film: ISO is no longer fixed per roll, and there is no reciprocity law failure. The sensor is independent of the rest of the camera body, but the ability to change ISO without changing the receptacle gives far more latitude in exposure.


Unlike the standard 35mm film frame, modern small-format DSLRs (as opposed to medium and large format) have sensors of varying sizes and aspect ratios, and compact cameras have smaller sensors still. The DSLR sizes are commonly known as APS sizes, discussed below.

Changes in sensor size are measured by diagonal length (because aspect ratios differ, and because the diagonal relates to the image circle of the lens) and expressed as crop factors. A crop factor converts the physical focal length into a 35mm-equivalent focal length: the focal length that would give the same frame on a 36x24mm film frame. This is critical because it affects which lenses you buy, and which focal lengths suit your sensor size and your shooting distance from the subject.

Next, one needs to understand the difference between physical size and pixel count, especially when weighing a body upgrade and whether the new sensor makes it worthwhile. In general, though, there is little to worry about with a sensor above 6MP if you print at A5 size or smaller. Anyone who prints or resizes photos should understand DPI, covered below.



1.2 Types of sensors

In digital cameras at the moment, sensors are either charge-coupled devices (CCD) or complementary metal oxide semiconductors (CMOS). The practical point is that CMOS has lower noise and appears in higher-end bodies, yet most manufacturers use CCD and keep improving the CCD itself, probably because of cheaper manufacturing costs or other technical factors.

At the moment, most sensors use what is called a Bayer array to collect red, green and blue (RGB) light and derive the eventual colour; each dot formed from the red, green and blue samples is a pixel. In contrast, the Sigma Foveon sensor measures red, green and blue light separately at each location, counting three pixels from that. Some newer sensors also have pixels that do not record RGB data at all but collect and process other data. Megapixel counts (numbers of pixels) therefore cannot be taken at face value across different sensor types, only within the same group.

Of special interest for good reproduction of skin tones, Fujifilm also has a SuperCCD sensor that gives a higher dynamic range and more vibrant colours. Discussion on current sensor types.

In June 2006, Kodak announced an image sensor technology featuring panchromatic pixels. Sensor technology keeps advancing with each new generation of bodies, as rival manufacturers try to surpass each other.


2. Sensor size, frame and focal length


2.1 Concept of diagonal measurement of the rectangular sensor

As you may notice in the diagram below, the measurement taken is the diagonal. This is a concept that needs to be grasped when comparing focal lengths. The diagonal is the largest dimension of the sensor and corresponds to the image circle that surrounds the rectangular sensor.

A question arises: how do we compare sensors of different aspect ratios, other than by the overall area of the rectangle? Remember that the lens projects light as a near-circular patch, the image circle, onto a four-cornered (rectangular or square) sensor that must fit within it, with the four corners just touching the circle's outline. Those corners can sit at any points on the circle, giving different aspect ratios, but one feature stays constant: the diagonal measurement, which is literally the diameter of the image circle.

By using this value, we can ignore the aspect ratio, compare sensor sizes quickly, and determine the so-called lens conversion factor.
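The diagonal idea above can be sketched in a few lines of Python (a minimal illustration, not any camera maker's formula): the diagonal is just Pythagoras on the sensor's width and height, and the conversion factor is the ratio of diagonals against the 36x24mm full frame.

```python
import math

# Diagonal of the 36 x 24 mm full frame: the diameter of its image circle.
FULL_FRAME_DIAGONAL = math.hypot(36, 24)  # ~43.3 mm

def sensor_diagonal(width_mm, height_mm):
    """Diagonal of a rectangular sensor, via Pythagoras."""
    return math.hypot(width_mm, height_mm)

def crop_factor(width_mm, height_mm):
    """How many times smaller this sensor's diagonal is than full frame's."""
    return FULL_FRAME_DIAGONAL / sensor_diagonal(width_mm, height_mm)
```

For example, the 18mm x 12mm sensor used later in this section has a diagonal of about 21.6mm, giving a crop factor of 2.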

Another important point is that the sensor must fit inside the image circle. If the image circle is too small, as with digital-only lenses designed for APS-C size sensors only (e.g. the Nikkor DX format) mounted on a full frame sensor, no light falls on the outer parts of the sensor, leaving totally black corners (looking similar to heavy physical vignetting).



2.2 Angle and field of view: frame contents, subject magnification - and what is the lens conversion factor?

Pending.


2.3 Sensor size and resultant focal length, the relationship in crop factor/conversion factor

[Image: a1.jpg]


In the same article where Ken Rockwell wrote about CCD sensor sizes, he also discusses focal length and crop factor.

Essentially, the diagram above shows that a smaller sensor changes the angle of view and field of view at the same focal length. Since different manufacturers release different sensor sizes in different models, the old film/full frame 35mm standard is used as the point of comparison: every sensor, regardless of size, is described by the focal length at which the 35mm frame would give the same view.

Hence, as above, an 18mm x 12mm sensor (diagonal 21.6mm), being half the linear size of full frame, has a conversion factor of two.
At a 26mm focal length, it gives an angle of view of 45 degrees.
A full frame sensor gives the same 45 degree angle of view at double the focal length, 52mm.
Hence, the smaller sensor at 26mm gives a similar picture to what a full frame sensor gives at 52mm.

Effectively, although the focal length is physically fixed at 26mm, the picture output matches that of a 35mm equivalent at double the focal length, 52mm. This conversion factor of two applies at every focal length: whatever focal length this sensor shoots at, the picture, with the same angle and field of view and the same composition and perspective, matches a 35mm equivalent at double that focal length. A 10mm focal length has a 35mm equivalent of 20mm, a 150mm focal length becomes 300mm, and so on.
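The arithmetic of the worked example above is a single multiplication; a quick sketch (numbers taken from the example, not from any particular camera):

```python
def equivalent_focal_length(physical_focal_mm, crop_factor):
    """35mm-equivalent focal length: physical focal length times crop factor."""
    return physical_focal_mm * crop_factor

# The example's 18 x 12 mm sensor has a crop factor of 2:
examples = [equivalent_focal_length(f, 2) for f in (10, 26, 150)]
# 10mm behaves like 20mm, 26mm like 52mm, 150mm like 300mm on full frame
```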



2.4 Change of sensor size in changing angle of view

[Images: Clipboard02a.jpg, Clipboard03a.jpg]




2.5 Change of focal length in changing angle of view

[Images: Clipboard04a.jpg, Clipboard05a.jpg]




2.6 Compensation of focal length to different sensor size

[Images: Clipboard06a.jpg, Clipboard07a.jpg]



3. Sensor size


3.1 Why is the 35mm equivalent not 35mm wide?

The prevalent use of the 35mm SLR film format over the past few decades forms the basis for comparing sensor sizes in modern digital photography.

We commonly call comparisons to this film format "35mm equivalents", but as you read on you may wonder why a frame of 36mm x 24mm, with a diagonal of 43mm, ended up being referred to as 35mm. Where does this value come from?

The value "35" carries no real mathematical meaning. It is merely the name of the traditional 35mm film, describing the width of the film strip (the height of the image) including the sprocket holes. Each frame actually measures 36mm (width) x 24mm (height): exclude the sprockets, and that 35mm strip width becomes the 24mm image height in landscape orientation.

Sounds confusing in words? Just look at this image from the Panorama Factory.

Hence, whenever we talk about a full frame sensor giving 35mm-equivalent focal lengths, we mean a sensor close to the film frame: a rectangle of 36mm (width) x 24mm (height), with a 43mm diagonal. This diagonal length is also the diameter of the image circle.



3.2 APS sizes of film and sensor

We often see "APS" used to describe sensor sizes; the same terms exist for film formats too. From Wikipedia, APS stands for Advanced Photo System, a film standard whose names were later applied to digital sensors. It simply means a film size smaller than 35mm film (135 film), and it comes in three image formats.

What we almost always see is the APS-C (Classic) size, which has a very similar aspect ratio to 35mm film: APS-C film is 25.1 x 16.7mm, ratio 1.503 (about 3:2), while 35mm film is 36mm x 24mm, ratio 1.500 (3:2). As a note, the two other APS formats, not seen in conventional DSLR bodies at the moment, are H for "High Definition" (30.2 x 16.7mm; aspect ratio 16:9; 4x7" print) and P for "Panoramic" (30.2 x 9.5mm; aspect ratio 3:1; 4x12" print). The "C" and "P" formats are crops of "H".

When the APS-C label is applied to a sensor, the aspect ratio is indeed about 3:2, similar to APS-C film and 35mm film, but such sensors are usually smaller than APS-C film (25.1 x 16.7mm, as above). A Nikon APS-C sensor comes close, measuring 23.6 x 15.7mm, also at an aspect ratio of 1.503. A Canon APS-C sensor measures 22.2 x 14.8mm, at a ratio of 1.500, the same as 35mm film.
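Plugging the dimensions quoted above into the diagonal comparison reproduces the familiar marketed crop factors (a quick sketch using only the figures in the text):

```python
import math

# Diagonal of the 36 x 24 mm full frame reference.
FULL_FRAME_DIAGONAL = math.hypot(36, 24)  # ~43.3 mm

def crop_factor(width_mm, height_mm):
    """Full-frame diagonal divided by this sensor's diagonal."""
    return FULL_FRAME_DIAGONAL / math.hypot(width_mm, height_mm)

nikon = crop_factor(23.6, 15.7)  # Nikon APS-C sensor -> about 1.5x
canon = crop_factor(22.2, 14.8)  # Canon APS-C sensor -> about 1.6x
```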

Most current DSLRs come with an (approximately) APS-C size sensor, and some lenses meant for digital cameras have an image circle that only covers this smaller sensor size, e.g. Nikon DX.



3.3 APS size sensors to 35mm equivalents

Film is our baseline for comparing picture dimensions and sensor size, in relation to the focal length and the distance of the subject from the camera, or more precisely from the optical centre.
[Image: d2.jpg]

Likewise, most DSLR sensors are smaller, coming close to APS-C size, which is approximately 24mm (width) x 16mm (height), giving a 29mm diagonal. For example, my Nikon D50 has a conversion factor of 1.5, with a specified sensor of 23.7mm x 15.6mm.

Read more in Bob Atkins' article comparing a full frame sensor (Canon EOS 5D) versus an APS-C size sensor (Canon EOS 20D).

Ken Rockwell feels the smaller APS-C sensor will be the future standard because it makes designs smaller, lighter, cheaper and faster. Bob Atkins feels the cost of making full frame sensors will eventually become competitive, and since full frame controls noise better at a physical size not much bigger, full frame cameras will dominate; still, he says he would buy the cheaper APS-C lenses for current use.



3.4 True sensor size and aspect ratio

While the physical size of the sensor is quoted, the true size of the area that actually captures image data may be smaller. This figure is often not published by the manufacturer and can lead to spurious comparisons.

This image from wrotniak shows the 35mm-equivalent full frame (the so-called 135 full frame), Canon APS-C, Sigma Foveon and 4/3 system sensors.

Aspect ratio is the width versus height of a landscape-orientation image. It is easiest to compare as a decimal value, though it is most commonly written as a ratio of two whole numbers.

Common aspect ratios (as a decimal value of width relative to height):
1. 1.00: the square format of traditional medium format
2. 1.33: the 4:3 format of traditional TV screens, most compact camera output and the new Olympus/Panasonic systems
3. 1.50: the 3:2 format of 35mm film and most DSLR systems
4. 1.78: widescreen 16:9 TVs
5. 1.85: most anamorphic widescreen DVD formats
6. 2.35: CinemaScope, for some widescreen movies
Read more on Wikipedia and widescreen.org if you are interested, though they cover video more than photography.



3.5 The new 4/3 systems for DSLR

I tried to raise my queries about the value of the 4/3 system in this thread. My derived opinion is that 4:3 takes up more of the circular area through which light enters the lens: for the same image circle, a shorter width gives a similar overall sensor area while allowing a more compact setup within the body. But by the same logic, shouldn't a 1:1 square format be the most optimal, with 4:3 (AR 1.33) merely a compromise between 3:2 (AR 1.50) and 1:1 (AR 1.00)?

More about sensor size and the 4/3 sensor is discussed at wrotniak and, of course, in the relatively new 4/3 forum.



3.6 The larger formats: Medium and large formats

This is for interest's sake: the larger formats are naturally bulkier and heavier, and less popular with the current photographic community given the relative portability of smaller systems. The digital models are usually expensive and not really affordable for most of us. However, the larger size also naturally means better resolution at the same output size, and these formats are still used by landscape photographers.

Current digital cameras in these larger formats generally come as separate digital backs attached to a camera body.

As quoted from wikipedia, "Large format describes large photographic films, large cameras, view cameras (including pinhole cameras) and processes that use a film or digital sensor, generally 4 x 5 inches or larger. The most common large formats are 4×5 and 8×10 inches. Less common formats include quarter-plate, 5×7 inches, 11×14 inches, 16x20 inches, 20x24 inches, various panoramic or "banquet" formats (such as 4x10 and 8x20 inches), as well as metric formats, including 9x12 cm, 10x13 cm, and 13x18 cm."

And for the medium format, as per wikipedia, "the term applies to any film size in-between 35 mm and large format (4"×5" or more) sheet film and to the type of camera that uses the format. Due to the higher image resolution offered by the larger film size, the majority of medium-format users are professional photographers who often require fine image detail, but the format is also favoured by many amateur enthusiasts.

In digital photography, medium format refers to the use of cameras adapted from medium format film gear, fitted with digital backs incorporating sensors larger than 24 by 36mm (the typical frame size used on 35mm film). As of 2006, medium format digital photography peaks at sensors of 36 by 48 mm, with 39 million pixels. These new high resolution sensors bring the feedback and greater shooting speed of digital to the medium format world."

For simplicity, I use the long-axis length as the comparison parameter and remember 35mm film as 36mm wide, the medium formats as less than 100mm wide, and the large formats as more than 100mm wide.


4. Sensor and image resolution


4.1 Resolution - pixels and megapixel

Resolution, more specifically pixel (matrix) resolution or spatial (line pair) resolution, refers in layman's terms to how clear and sharp the picture looks from the little dots of data that make it up, whether that is 100 large dots or 1,000,000,000 much smaller dots covering the same area. Smaller dots in the same area give finer detail, a wider tonal range, a wider colour spectrum, and a closer representation of the true colour and intensity. Assuming no other defects, the higher the resolution, the clearer and sharper the picture.

Each of these dots of information, carrying colour and tone, is known as a pixel (px); together they give rise to the detail in a picture.

Very often manufacturers do not give us the number of pixels per unit area of the sensor, but just the total number of pixels, conventionally quoted in megapixels (MP). A megapixel is 1 million pixels: a sensor of 2048px (width) x 1536px (height) gives 3,145,728px or 3.1MP, whereas 3008px (width) x 2000px (height) gives 6,016,000px or 6.0MP.
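The megapixel figure is nothing more than width times height; a one-line sketch that reproduces the two examples above:

```python
def megapixels(width_px, height_px):
    """Total pixel count, in millions."""
    return width_px * height_px / 1_000_000

# 2048 x 1536 -> ~3.1 MP; 3008 x 2000 -> ~6.0 MP
```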



4.2 Do megapixels matter?

Higher resolution gives a better representation of the actual light collected by the sensor, but many other factors affect the optical and digital quality of the resulting picture. The eventual sharpness also depends on the output medium, whether a computer screen or a printout. You can print the same data from the same sensor as a small 3R picture or blow it up to A3: it will look sharp on one and blurred or pixellated on the other.

However, higher resolution gives you more data per area to work with, which matters for post-processing; the downside is a larger file size per picture. I do not agree with those who claim that megapixels do not matter at all, but it is true that a large increase in megapixels translates to only a small increase in perceived size, especially at higher pixel counts. Read Ken Rockwell's article on that.
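The "large MP jump, small size gain" point follows from simple geometry: print or screen dimensions grow only with the square root of the pixel count. A quick sketch:

```python
import math

def linear_gain(old_mp, new_mp):
    """Factor by which each image side grows when megapixels increase."""
    return math.sqrt(new_mp / old_mp)

# Doubling 6MP to 12MP stretches each side by only ~41%;
# to double both sides you need four times the megapixels.
```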

The lack of resolution in lower-megapixel cameras is most visible between the 3MP generation and the following generation at about 6MP, whereas 10MP versus 6MP is far less obvious, especially viewed on screen. Correct colour and brightness calibration of your monitor probably makes a bigger difference than the true difference between 6MP and 10MP; in print, the differences may be more obvious depending on what DPI you print at.

Ken Rockwell has said that 3MP pictures will not differ much from 6MP pictures. As an upgrader from a 3.2MP Konica Minolta Xt ultracompact to a 6.3MP Nikon D50 entry-level DSLR, I have noticed how subtly picture quality changes, especially in post-processing. The lower-quality pictures can still be very satisfactory, although the post-processing latitude may be limited and you may not always reach your desired effect easily. Out of all my travel galleries, only one thread was shot on the compact (in/before 2004) and the rest on the DSLR, starting from Australia and a few other countries; yet some of my favourite pictures are from the compact, taken in Taiwan, the first country where I truly started travelling and shooting at the same time.


For example, this picture, taken on Hohuan San, is a simple patch of grassland under very good lighting. I had no problems with exposure and was granted a bright blue sky. Very little post-processing was needed, and the 3.2MP jpeg output from the camera was enough for me.

[Image: 07-305anw.jpg]



The next picture was difficult to edit: a large area of bland sky, and very little colour in the foreground. Fortunately there was still a little colour in the sky's highlights, and the ice frozen on the twigs gave plenty of fine detail. The initial post-processing attempts proved difficult, but with further experience it has become one of the most refreshing results I have achieved. Despite its refreshing post-processed appearance, it is evident that this processing renders the picture nice but not as realistic as the one above.

[Image: 13-14-011arw.jpg]


P.S. The picture above was taken with an Olympus camera, which I strongly suspect belonged to one of the friends on the same trip. We shared our photos, so I can't be absolutely sure who took it, and I can't remember who owned which camera. At the time we were all snappers who just wanted some memories of our tour, so to my surprise I could edit and improve what an ordinary snapper (likely my friend) shot at an even lower resolution than my own camera: this picture's original resolution is only 1600x1200 pixels. Of course the editing options were limited, and I had to make do with what I could.

And where detail resolution and colour accuracy are not demanding for the theme (hues in the sky and a silhouette rather than a face), a lower resolution can achieve good effects even with post-processing.

feimao07 used his Sony Ericsson W700i's built-in panorama mode, which stitches a maximum of three landscape shots before auto-output, to shoot this at about 7pm; the result is 1664 pixels wide by 416 pixels high. With his permission, it is resized and posted here.

[Image: feimaose.jpg]




4.3 Can megapixel values always be compared?

For most of us using the typical Bayer RGB sensor, yes: there is a real difference from 3 to 6 to 9MP, although it is most prominent at the lower end and less so as the MP value increases.

However, other sensor types that use a different array to collect light of different wavelengths may have a lower MP value yet a similar or higher effective amount of data. So far this is true of Fujifilm's and Sigma's sensors.


5. Image quality


5.1 Image quality in general

Basically, how good an image's quality is depends on a few processes:

1. data input (entry of light)
2. data collection (sensor)
3. in camera processing
4. post camera processing
5. medium output (print or computer screen)

And yes, it really is this wordy and this complicated, and that is why poor image quality is so easy to get. Every step from taking the photograph to viewing the final output contributes to image quality; miss a step, or do one badly, and everything after it suffers cumulatively.

The entry of light, as mentioned on page 10 (Focusing and sharpness, 1. Camera-Subject stillness: Sharpness and Clarity), relies on those two factors as well as exposure. Getting the desired focus and planned motion is the first step to good image quality, since they determine the basic integrity of the data before anything else. Erratic handshake and unwanted out-of-focus areas will remain even if steps 2 to 5 are perfect; no amount of unsharp mask will return the image quality achieved by reducing or eliminating handshake and nailing focus.

Clarity depends on atmospheric clarity plus pre-lens, intra-lens and post-lens clarity; apart from the atmosphere, the rest comes down to how good, and how clean, your lens, filters and camera body are.

Exposure is important, but its importance depends on sensor quality and on how far you need to correct the exposure or other data in post-processing. Obviously, if you get the desired exposure in an easy contrast condition, you need not worry much about amplifying signal in the shadows or suppressing it in the highlights. Once an exposure is taken in a tough contrast condition, where you must do selective-area post-processing or where the exposure is off by some amount, you must rely on the best possible preservation of data through steps 2 to 5.

Sensor quality depends on many factors. As with lens glass and loudspeakers, in general the bigger the better. The type of sensor, the sensor array, the size of each pixel and the total number of pixels (the megapixel value) all determine the eventual quality; the first three affect quality directly, whereas the megapixel value only improves quality when the same image is compared at the same output dimensions. Please read the DPI section to understand output resolution: packing more pixels into the same print area raises the DPI, and hence the resolution, up to the point where the eye can no longer tell the difference. In general, 300DPI is an appropriate output, and the megapixel count then determines how large your photo can be printed at that DPI. Likewise, spreading more pixels over a proportionally larger area keeps the DPI, the resolution and the image quality the same, but changes the print size.

Miles Hecker's Luminous Landscape article on digital camera image quality is a good read. It covers the sensor types (CCD and CMOS), the sensor arrays (the Sigma Foveon X3 sensor and the Fuji honeycomb SuperCCD sensor), and sensor-versus-film "Normalized Image Quality".

About pixel size and signal to noise ratio

Experienced digital camera users know pixel count isn't everything. Pixel quality matters a great deal. Generally speaking, bigger pixels are better pixels because they have more signal and less noise..... For a given semiconductor process, the noise stays the same regardless of pixel size and the signal increases with pixel size.

As quoted from Miles Hecker wrote in the Luminous landscape.

In-camera processing basically comes back to the choice of RAW versus jpeg (or, in the old days and on some cameras, TIFF). The space saved by jpeg is not achieved by simple compression (rearranging data into a more compact form) but by discarding data. Imagine the data as a thick, malleable metal sheet serving as the roof of your house. If the sheet is thick, you can redistribute its thickness, giving more to certain areas, without losing its integrity. But if you wanted a thin, light roof and shaved the sheet paper-thin, then whenever you need to thicken one area you must scrape the surroundings, leaving patches of extremely thin metal or even holes. In jpeg output, data judged not immediately perceptible may be discarded, yet that data can be important for filling in gaps when photo-enhancement is performed. The raw format gives you the optimum data storage allowed by the quality of the light input and the sensor. More follows in 5.2 Picture formats: RAW, jpeg and other formats, a few posts below.

Post-camera processing depends on the user and is highly variable. The most destructive operations, such as sharpening, are best done as the last step of photo-enhancement. Files should also be opened and saved in the best available format along the way, not via lossy formats such as jpeg.

Output medium has been partially covered above under megapixels and resolution: the size of your printout. Too big a size with limited data, and each pixel is blown up until you see pixelation. Too small a size limits the assessment of available detail; if a person's face is as small as 1cm², you will have difficulty telling whether there is noise until you magnify it. Size also matters relative to viewing distance: the nearer the better, but not too near, of course, since our two separate eyes must converge and accommodate (the medical term) to see close objects. And of course, how good the paper is, how good the dye or ink is, how good your monitor is and whether it is calibrated all decide how good the result looks.



5.2 Picture formats: RAW, jpeg and other formats

Jpeg, as we all know, is the standard format for most images, and that helps: it shows reasonable detail at a small compressed size and, being a standard, can be read by all programs and viewed universally on the net.

However, jpeg is a lossy compressed format that discards detail every time the file is saved. Although it discards the detail our eyes are least likely to notice, once you edit and post-process, the lack of that "less visible" detail undermines smooth transitions and leads to defects or a general impression of poor image quality.

DSLRs therefore offer a RAW format, which records the original data captured on the sensor, leaving the user to convert it to other formats later, do any editing on the retained data, and finally save to an end format, usually jpeg. The undiscarded data allows smoother transitions when editing, such as changing white balance or salvaging detail from shadows and highlights. Raw formats are developed by individual companies rather than being a universal standard like jpeg, so they differ from each other and can only be read by software programmed for that specific format. Nikon's is called NEF; on the Nikon D50 it is compressed roughly four times in a lossless form (not the same as jpeg compression).
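One concrete reason raw edits hold up better is bit depth: raw files typically keep more tonal levels per channel than 8-bit jpeg. A tiny numeric sketch (12-bit is assumed here as a typical DSLR raw depth of this era; the exact figure varies by camera):

```python
# Tonal levels per colour channel at a given bit depth.
jpeg_levels = 2 ** 8    # 8-bit jpeg: 256 levels per channel
raw_levels = 2 ** 12    # a typical 12-bit raw file: 4096 levels per channel

# The raw file has 16x finer gradation to draw on when editing,
# which is why pushed shadows band less than in an edited jpeg.
headroom = raw_levels // jpeg_levels
```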


Despite its advantages, the raw format has a few issues:

1. larger file size means
- your memory card fills up faster, with fewer shots
- transfer to the computer takes longer
- you need a larger harddisk to store them
- processing on the computer takes longer

2. the need to do conversion
- conversion takes time
- requires a program able to read the particular raw format
- older raw formats may not be supported in newer programs (my greatest fear)
- may not be "viewable" before conversion; no immediate results


So the proposal is: shoot raw if you want the best quality out of pictures that cannot be reshot (e.g. travel), if you are not tight on storage space, and if you want to do some editing, such as changing white balance. But if you are happy with your jpeg quality, are tight on storage and won't do any editing (or are sure your jpeg output is spot-on and needs no correction), jpeg is fine, at whatever degree of compression you are happy with. Ironically, professionals often shoot jpeg while some amateurs stick to raw, and some newbies shoot jpeg simply because they don't know how to do raw conversion.

Ken Rockwell proposes, in short, to shoot jpeg instead of raw, because he prefers to get everything right within the settings of the camera itself instead of doing finer adjustments and enhancements later, and for the same reasons stated above. Despite agreeing with the contents of his article, I, in my known capacity as a newbie, still propose shooting raw with jpeg. It is sort of a kiasu mentality to keep the negative and maintain it for post processing, of course with additional storage and time to convert/process. I believe the value lies in a subtle superiority over jpeg fine, which to some is absolutely negligible now (as Ken Rockwell has written quite comprehensively in his article), and the gap may narrow further in time as compression technology improves.

Equally well written is a RAW advocate's piece by Petteri Sulonen, who offers balanced opinions on when not to use RAW, the benefits of RAW in having wider latitude and fewer jpeg artifacts, why the RAW limitations are not as bad as they sound, and what you need for a RAW workflow.

There are advocates on both sides, hence it is up to you to see what suits you best.


Other file formats include jpeg2000, tiff and dng. They are less commonly used; if interested, do google them, otherwise leave them alone.



5.3 What jpeg compression should I use?

As mentioned above, raw is preferred by most users, while jpeg may be used by very confident or undemanding users, or by those with size limitations. Given the pricing and capacity of modern memory cards, even the largest and best quality jpeg files save a lot of space relative to raw. Jpeg large/fine is preferred if raw is not used; downgrade according to how short of memory you are.

.
 

6. Image output size and aspect ratio


6.1 DPI

DPI means dots (pixels) per inch, measured along a linear inch (not an inch square).

It essentially describes whether the same amount of data is compressed over a smaller area or spread across a larger area. It is not a measure of input resolution: whatever number of pixels (amount of data) was collected on the sensor is already fixed. It is a measure of output resolution, the so-called printing resolution. On web browsers, DPI usually does not matter, as the picture's pixels are displayed one-for-one against the pixels of the screen (i.e. if the screen is 1000 x 600 pixels, and your picture is 500 x 300 pixels, no matter what the DPI is, your picture will fill up a quarter of the screen).

DPI alone does not measure the quality of an image, as perceived quality also depends on the viewing distance from the picture.

One can read more on wikipedia or google. Good information is in rideau.

In short, you need not worry about DPI when you shoot or when you post on the net. You do, however, need to know it well when you resize or print.

1. When resizing to reduce file size, make sure it is the total number of pixels that is reduced. (Note: file size can also be reduced via higher compression and lower quality jpeg output, without changing the number of pixels.)

2. When resizing to change the print area without loss of data integrity, make sure the total number of pixels is not reduced; only the DPI should change.

3. The market standard for printing is usually 300 DPI.
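The relationship between pixels, DPI and print size is simple arithmetic. A quick sketch (Python); the 3008 x 2000 pixel count used below is an assumed 6 MP frame for illustration:

```python
# Print dimensions (inches) from pixel dimensions and DPI,
# and the pixel count needed for a target print at a given DPI.

def print_size_inches(width_px: int, height_px: int, dpi: int):
    """Physical print size when output at the given printing resolution."""
    return width_px / dpi, height_px / dpi

def pixels_needed(width_in: float, height_in: float, dpi: int = 300):
    """Pixels required to print at the market-standard 300 DPI."""
    return round(width_in * dpi), round(height_in * dpi)

# An assumed 3008 x 2000 (6 MP) image printed at 300 DPI:
print(print_size_inches(3008, 2000, 300))   # about 10 x 6.7 inches
# Pixels needed for a 6 x 4 inch (4R) print at 300 DPI:
print(pixels_needed(6, 4))                  # (1800, 1200)
```

Note that changing the DPI tag of a file changes neither the pixels nor the file size, only how large the same pixels print.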



6.2 Printout size formats

Sizes for Documents: Littlebit, Cl Cam

[image: papersizes.jpg]


.
 



6.3 Aspect ratio, cropping and composition

Aspect ratio (AR) as described above is the long axis length versus the short axis length. Its minimum is thus 1.00, or 1:1, in a square sensor, as adopted in some medium format digital backs not usually used by DSLR users. Do not confuse this ratio with the lens conversion factor, which can also be described as a ratio at times. For a rectangular format it is expressed either as a ratio of integers or, in my preference, a decimal value; hence it is usually called 3:2 or 4:3, or in my preference 1.50 or 1.33. I prefer the decimal form because it gives an immediate perception of how panoramic the rectangle extends.

Whichever aspect ratio you prefer, it can always be achieved by cropping in whatever manner you deem fit. However, most people may prefer a 1.50 AR sensor, which is similar to film and is the main sensor AR commonly used in most DSLR brands now, because it gives you the wider AR immediately, with less or no need to crop towards a panoramic picture, saving time and effort in post processing (of course it depends on your preferred AR format for most of your outputs). Doing less cropping also means that less sensor input is wasted. Take an arbitrary example: if the original picture on the sensor is 3cm by 2cm (3 x 2 = 6 area units) and you crop it to 1.5cm by 1.0cm (1.5 x 1 = 1.5 area units), you can still output both to the same 12cm by 8cm printout, with a difference in content of course, as the latter shows only the central components; but one makes use of 6 area units of the sensor, say perhaps 6MP of resolution, whereas the other makes use of only 1.5 area units, say perhaps 1.5MP of resolution. The more you crop, the more resolution you lose when compared at equivalent output size; and if you do not resize, or resize both by the same magnitude, then you merely get a reduction of relative size at the same resolution.
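The arithmetic of that example can be sketched as follows (Python); the numbers simply follow the arbitrary 3cm x 2cm, 6 MP frame above:

```python
# Resolution retained after a crop: proportional to the retained sensor area.
# Numbers follow the arbitrary 3 x 2 cm / 6 MP example above.

def cropped_megapixels(sensor_mp: float,
                       sensor_w: float, sensor_h: float,
                       crop_w: float, crop_h: float) -> float:
    """Megapixels left after cropping, scaling by the fraction of area kept."""
    return sensor_mp * (crop_w * crop_h) / (sensor_w * sensor_h)

# Crop a 3 x 2 (6 MP) frame down to 1.5 x 1.0:
print(cropped_megapixels(6, 3, 2, 1.5, 1.0))  # 1.5 (MP)

# The aspect ratio is unchanged: 3:2 and 1.5:1 are both 1.50
print(3 / 2, 1.5 / 1.0)
```

Only a quarter of the sensor area survives the crop, so only a quarter of the resolution is available at any given output size.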

We tend to prefer panoramic pictures because of the natural tendency to crop off featureless sky and include more detail close to the horizon. Comparatively, vertical shots tend to look better at 1.33 AR. Well, as I said, it depends totally on your own preference, but there should be reasons for your preference; I have stated mine. Some purists of photography have separate reasons: they want to get everything perfect out of the camera at shooting, hence they stick to their aspect ratio without cropping to another, and zoom in/out or walk nearer/further to get the desired magnification without having to crop even at the same aspect ratio.

I'm a partial purist in my works: I will stick to the same aspect ratio and crop only to the same aspect ratio, but this is just a personal preference for identical dimensions when I put pictures in ordered sequences. As such I crop fewer than 5% of my shots. Michael Reichmann, who writes the Luminous Landscape website, holds a differing view that is also valid: crop as necessary once the desired contents are included. His articles are Understanding Aspect Ratios and The Art of Cropping - A Bandage for Poor Composition, or a Creative Tool?, with pictures that show his thought process on cropping.

.
 

7. File size, transfer, storage


7.1 Keeping your digital photos

For storage of digital photos on the go, please refer to the Centralised thread for backpacking photographers (05).

For storage of your large volume of photos and other files, currently the safest method is regular backup to multiple hard disks, both inside and outside the computer. A thread on RAID has a discussion on it.



8. Digital effects, defects and applications


8.2 Concept of processing and realism

David Norton said in his article in Practical Photography, Apr 2007, The Camera never Lies?, "In what way is a shot with a burnt out sky and a black foreground more real than one using a filter and post-production techniques to record the full range of tones in the scene, which is, after all, closer to what our eyes actually see?"

.
 
