By compressing the image, you can reduce the amount of data it contains, creating a smaller file and therefore reducing the overall space required to store the image on your memory card/disk.
Another way to reduce the file size would be to crop your image, cutting off the edges of your picture to make it smaller.
Now although this may still look good on a computer or even a mobile screen, it will mean some heavy degradation when you look a little closer.
An image taken straight from the camera in RAW could be 9216 x 6912px – Straight Out Of Camera (SOOC) – 100 MB.
Part 2 – RAW vs JPG
So when it comes to IQ, should we shoot in RAW or JPG?
To be honest, shooting in RAW or shooting in JPG really has nothing to do with IQ... let me explain.
You may have noticed when shooting images in both RAW & JPG (if your camera does this) that the JPG image is clearer and sharper, with better colour than the RAW image.
This is true of any camera; but why? Well...
A computer screen is built in a way that fails to show the image in its entirety. Most screens display a limited number of colours and only a fraction of the captured data. Even a high-definition rendering of a RAW file is still only a JPG image.
A 4K screen is only 8.3 megapixels, whilst the majority of us still using HD are only seeing 2.1MP.
That in itself is a lot of wasted information.
The same goes for the LCD screen on the back of the camera. They’re all JPEGs.
For now, we’ll just say that a PC Screen and mobile device is only giving you a small JPG version of what the camera took.
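The screen figures above are simple arithmetic: megapixels are just width times height in millions. A quick check (the 4K/Full HD pixel dimensions are the standard values; the sensor size is the example from earlier):

```python
# Megapixels of common screens versus a high-resolution sensor.
def megapixels(width, height):
    """Return resolution in megapixels (millions of pixels)."""
    return width * height / 1_000_000

print(megapixels(3840, 2160))  # 4K UHD  -> ~8.3 MP
print(megapixels(1920, 1080))  # Full HD -> ~2.1 MP
print(megapixels(9216, 6912))  # example sensor from earlier -> ~63.7 MP
```

Even a 4K screen can show only a fraction of what a high-resolution sensor records, which is the point being made above.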
Sharpness and resolution go hand in hand; however, the human eye is very weak when it comes to determining how sharp something is.
We judge sharpness by looking at the contrast between edges in an image, whereas in a digital image resolution also plays a part in how sharp it appears.
The Oxford English Dictionary gives the meaning of Acutance as “The sharpness of a photographic or printed image”; however, in reality acutance describes how rapidly brightness changes with respect to space – the steeper the change across an edge, the sharper that edge appears.
Now you can see why the Oxford Dictionary explains it this way… but it’s actually very simple.
If you look at the images, you’ll see an example of how sharpness is perceived by the human eye (in basic form).
It is very difficult to show digitally how the eyes use contrast in space to define sharpness however you’ll get a basic understanding from this.
In this illustration, you can see the normal visual of how an image may appear in RAW format.
I have used a simple form to show this clearly.
The JPG version of this image will have some sharpening added to it for screen purposes. This will show around the contrasting edges.
You’ll notice a slight change in the appearance up close; however, if you zoom out, it will look far clearer.
To do this, the camera, or processing software, will create something called an unsharp mask.
The unsharp mask will look similar to the illustration here.
You’ll notice the contrast has been reversed, like a negative, to provide a sharp edge to the unsharp areas of the original image; hence the name ‘Unsharp Mask’.
If the masking is overdone and the sharpening too much, you’ll lose the natural look and possibly some of the quality from the original image.
This is often used for artistic effect or during HDR processing.
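As a rough sketch of the idea (not what any particular camera or editor does internally), here is unsharp masking on a one-dimensional strip of pixels in Python/NumPy; the 3-tap blur and the `amount` value are illustrative choices:

```python
import numpy as np

def unsharp_mask(row, amount=1.0):
    """Sharpen a 1-D strip of pixel values by adding back the difference
    between the original and a blurred copy (the 'unsharp mask')."""
    # Simple 3-tap box blur; edge pixels are handled by repeating the ends.
    padded = np.pad(row, 1, mode='edge')
    blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    mask = row - blurred               # high-frequency detail only
    return np.clip(row + amount * mask, 0, 255)

edge = np.array([50, 50, 50, 200, 200, 200], dtype=float)
# The dark side of the edge dips and the bright side overshoots,
# creating the contrast 'halo' that the eye reads as sharpness.
print(unsharp_mask(edge))
```

Push `amount` too high and the overshoot becomes the visible halo mentioned above, losing the natural look.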
So what is Noise and what are Artefacts?
Whilst both affect the quality of an image, they appear for very different reasons.
Speaking only of digital photography, noise is usually the result of electrical distortion on the image during creation. That said, it can also be the result of pushing processing too far with insufficient data in the image to give reasonable results; like trying to recover detail from shadows or dark areas.
See image to the left...
There are many ways to reduce noise and many specific programs available to control the amount of noise reduction during post processing.
The easiest way to reduce or remove noise in an image is to ensure you have adequate lighting and the correct exposure is used.
Most cameras will have some sort of Noise Reduction built in, and although some are very aggressive, reducing the fine detail, most will process the JPG image to very acceptable standards.
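A toy sketch of the averaging idea behind many noise reducers follows; real in-camera noise reduction is far more sophisticated (edge-aware, often multi-frame), so treat this purely as an illustration of the principle:

```python
import numpy as np

def mean_denoise(strip):
    """Replace each pixel with the average of itself and its two
    neighbours: random noise cancels out, at the cost of fine detail."""
    padded = np.pad(strip, 1, mode='edge')
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

# A flat grey area (true value 100) with random-looking noise on top.
noisy = np.array([100, 130, 70, 100, 130, 70], dtype=float)
print(mean_denoise(noisy))  # values pulled back towards 100
```

The same averaging that smooths the noise would also smooth a genuine fine texture, which is why aggressive noise reduction costs detail.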
Artefacts are usually the result of how an image is processed or compressed.
See image on the right...
Artefacts are slightly different; they can appear for a multitude of processing reasons.
These can range from, but are not limited to, compression, sharpening, colour processing, contrast adjustments and resizing.
The easiest way to reduce or remove artefacts is to lower or reverse the processing you have just completed, then re-apply the adjustments in smaller amounts.
If you are able to create a blurred mask and paint over the artefact, it can also work.
Another way is to ensure the way in which you are processing your image is correct.
For example: if you are processing an image for viewing online, you do not want to create a JPG image in Adobe RGB; it should be converted to sRGB first.
If the artefacts show after resizing, try using the High Quality Bicubic option or see if you have used too much Gaussian Blur.
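High Quality Bicubic is a Photoshop option; as a rough stand-in for why the resampling method matters, here is naive pixel-skipping versus area averaging on a fine one-pixel pattern (pure NumPy, illustrative only – real bicubic resampling is more involved):

```python
import numpy as np

stripes = np.array([0, 255] * 4, dtype=float)   # fine 1-pixel on/off pattern

# Naive decimation: keep every other pixel - the pattern aliases
# to solid black and the fine detail becomes a false artefact.
decimated = stripes[::2]

# Area averaging: each output pixel averages the pixels it covers,
# so the too-fine pattern correctly becomes mid-grey.
averaged = stripes.reshape(-1, 2).mean(axis=1)

print(decimated)  # [0. 0. 0. 0.]
print(averaged)   # [127.5 127.5 127.5 127.5]
```

A cheap resampler behaves like the first case, which is one source of moiré-like resize artefacts; quality resamplers behave more like the second.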
Part 5 – Colour
Because colour is very subjective, I will explain a little about perception below.
The eye and the brain determine colour in very different ways, giving us what we know today as the visible light spectrum.
The eye collects light with three types of cone; Blue (Short-Wave), Green (Medium-Wave) and Red (Long-Wave), known to us as RGB.
The brain will then calculate the amount of light falling across each cone type on 3 different channels; Red-Green, Blue-Yellow and Black-White – the last also known to us as luminance.
If light is poor and not enough collected information is received by the cones, another area of the eye will act like an amplifier to provide slightly better vision.
I say slightly because this does not provide a very good colour rendition (more like shadows and similar to increasing the ISO).
We only see 3 colours, however, the number of perceived colours will be considerably more due to the various intensities of these colours.
So we see in three channels at different intensities giving us colours.
The average human can actually determine ‘just’ 10 million colours, whereas the modern sensor can capture millions more.
When we talk about colour in digital form, we use ‘bits’ but here it gets a little confusing.
Bits are just the smallest piece of information available in digital form, in binary it is either 0 or 1.
When it comes to photography 0 is assigned to black and 1 to white.
Therefore, a 12-bit image simply means that each colour channel is recorded with 12 bits of colour:
12 bits of Red, 12 of Green and 12 of Blue per pixel
Each one of these bits has a possible 0 or 1 value (one of 2 values), so:
- 2 to the power of 12 = 4096 tones per channel.
- 4096 (R) x 4096 (G) x 4096 (B) = 68.7 billion colours (Intensity/Bit Depth)
Now, I’m going to address the fact some of you will now say “but my TV/PC shows 24-bit True Colour” (also known as Direct Colour).
Whilst this may be true, what they are referring to is slightly different.
24-bit in this case refers to the total bits per pixel, not the bits per channel.
A camera’s ‘12-bit’ rating counts the bits in each colour channel separately, whereas ‘True Colour’ counts all three channels added together.
So a 24-bit ‘True Colour’ image has the same technical make-up as an 8-bit-per-channel image.
8bits (R) + 8bits (G) + 8bits (B) = 24-Bit True Colour
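The arithmetic in both cases can be checked directly; the same formula covers a camera’s 12-bit RAW and the screen’s 8-bit-per-channel True Colour:

```python
# Tones per channel and total colours for a given bits-per-channel depth.
def colour_counts(bits_per_channel):
    levels = 2 ** bits_per_channel     # tones per channel (0/1 per bit)
    total = levels ** 3                # R x G x B combinations
    return levels, total

print(colour_counts(12))  # 12-bit RAW: 4096 tones, ~68.7 billion colours
print(colour_counts(8))   # 24-bit True Colour: 256 tones, ~16.8 million colours
```

The gap between 68.7 billion and 16.8 million colours is exactly the “wasted information” mentioned earlier when a RAW file is shown on screen.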
Next to light, colour can be one of the most important parts of photography.
Simply put, colour accuracy is the similarity of the captured image to the original scene.
Since the first colour image was captured in the mid-1800s, people have strived to achieve bigger, brighter, clearer colours.
With the birth of digital imaging colour control moved from the darkroom, to the computer and instead of studying for years, it has become ever easier to control the overall appearance of light in an image.
Because colour control is still as important as ever, you will see colour charts in use by the more serious photographer.
This allows you to control everything from White Balance to Saturation and Vibrance.
“But what about black and white?”
Well even black and white images rely on colour for tone and fine detail.
Gamma and/or Contrast
Some people have difficulty differentiating between gamma and contrast. Whilst they may be similar, they are in fact very different.
If you are anything like me, the mathematical explanation of the two is far too advanced and so I like to think of it more this way.
Contrast is the linear control of the ratio between the lightest and darkest areas of the image.
Think of a see-saw: as you raise one side, the other lowers, and so the ratio is pushed further apart. Push it too far and you will end up with a 1970s-style music album cover.
Gamma is non-linear; raising the gamma darkens the dark areas, lowering the gamma lightens the dark areas.
This is more of a balance ball: when you stand with your feet together, one point goes down and the darkest of darks goes very black. As you move your feet apart, different areas of the ball go down by varying, smaller amounts; however, nothing moves up – the light areas remain as light as they were.
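The see-saw versus balance-ball behaviour can be sketched in NumPy on a 0-1 tone scale. The exact curves vary between editors; this follows the convention used above, where raising gamma darkens the dark areas:

```python
import numpy as np

tones = np.array([0.1, 0.5, 0.9])   # shadow, mid-tone, highlight (0-1 scale)

# Contrast: the linear see-saw around the middle - shadows fall AND
# highlights rise by the same ratio.
contrast = np.clip((tones - 0.5) * 1.5 + 0.5, 0, 1)

# Gamma (output = input ** gamma, gamma > 1): shadows sink a lot,
# highlights barely move - and pure white (1.0) would not move at all.
gamma = tones ** 2.2

print(contrast)  # shadow darker AND highlight brighter
print(gamma)     # shadow much darker, highlight nearly unchanged
```

Note the asymmetry: the contrast curve moves both ends of the see-saw, while the gamma curve only pushes the dark end of the ball down.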
Moiré, pronounced “mou-ray”, is a weird pattern that may appear in some of your images.
This normally happens when shooting synthetic fabrics, fine suits and shirts with thin patterns, or tableware.
It can also happen in street photography when shooting brickwork or buildings with patterns in the windows or stonework.
It is caused when the pattern in your frame matches or conflicts with the pixel pattern of your sensor.
Whilst this can be annoying, if you know what to watch out for and check your images at magnification when shooting, you can overcome it by changing your aperture and/or zoom slightly.
You can also correct this post process, to a degree. Check out this tutorial by Gavin Hoey for more on this:
Dynamic range is the distance between the highest and lowest signals, both in sound and light.
For the purposes of photography, we measure this in stops (or EVs) between the lightest and darkest areas of the image – the ability to retain detail in both the light and dark areas of the same image.
- Older 35mm film cameras had a Dynamic Range of between 11-13 stops
- A modern Olympus MFT camera will capture 12-13 stops.
- Most high-end digital cameras will capture between 12-15 stops.
With the super-fast processing of information between our eyes and our brain, the average human can detect changes in light very quickly, giving us around double that of any consumer camera – an equivalent dynamic range of 20-30 stops (although this is subjective).
Poor dynamic range could result in bleached highlights or underexposed areas that require advanced post-processing to recover, usually at the cost of increased noise and decreasing detail.
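Because each stop is a doubling of light, the range in stops follows directly from the ratio between the brightest and darkest recordable signals; a quick check with illustrative figures:

```python
import math

def stops(brightest, darkest):
    """Dynamic range in stops (EV): each stop is a doubling of light."""
    return math.log2(brightest / darkest)

# A sensor recording signals from 1 up to 4096 units spans 12 stops.
print(stops(4096, 1))   # 12.0
# Doubling the usable ratio adds exactly one stop.
print(stops(8192, 1))   # 13.0
```

This is why a one-stop improvement in dynamic range is a bigger deal than it sounds: it doubles the brightness ratio the camera can hold detail across.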
Resolution is commonly measured in lp/mm (line pairs per millimetre) and describes the minimum distance between two lines at which they can still be distinguished separately.
When thinking about resolution in photography, you need to consider two things.
- Your equipment
- Your finished product
If your equipment can only provide you with low-resolution images, your finished product will be low resolution, regardless of the quality of print or screen.
You will need to think about this before creating your images. Who are your audience?
At the same time, if your finished image is only for social media or mobile viewing, you are unlikely to need high-resolution equipment and so may not need to spend so much on high-end gear.
Because the lens resolution is often more restrictive than the camera, companies will release MTF charts (Modulation Transfer Function) to compare the quality of optical performance.
Whilst I will not go deeply into MTF charts (for a later tutorial), I will say they compare the optical ability of a lens using various methods and techniques.
Every lens will have some form of distortion, but what is acceptable and what may be required are up to the user.
Fisheye lenses are a good example of this.
Many use them specifically for their distortion, as they can give an angle of view of 180 degrees or more. This can be seen most obviously when watching action-sports fanatics with cameras strapped to their heads.
Flare is usually apparent when shooting directly into a bright light source (like the sun) and you receive a wash of white across your image, dulling the contrast and bleaching the overall look.