
IQ Learning with MUNNS - February 2020

There are a great number of aspects to consider when we talk about IQ (Image Quality).

Generally, when speaking about IQ we refer to the perceived image degradation in comparison to a ‘perfect image’.

Sharpness, colour, contrast and the detail a camera-lens combination ‘can’ achieve are all factors.

Because IQ can also cover the quality of a single image and changes depending on camera and lens settings, we will cover a few topics.

Let’s start by remembering that IQ alone does not determine how good or bad the final image is as a photograph.

A really good image can still have low IQ and vice-versa.

Part 1 – Compression vs Size vs File Size

Compression, image size and file size are some of the most basic settings we change.

Even purist photographers will adjust these, and they can have a direct effect on the final IQ.

Whilst compression is used on most PCs to reduce the size of a file, compression in photography can reduce the quality of your images.

Generally, it is used to make images more suitable for things like publishing and sharing online or for saving your storage space on hard drives.

Different file formats and different cameras use different levels of compression.

Lossless Compression: This is where the image is compressed to make it smaller, however no data or information is lost. Similar to ZIP files on the PC, a TIFF (.tif) or Digital Negative (.dng) will compress the file size without losing any information from the image.

Although compressed, these files are still quite large in comparison to those used for sharing online.
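To make the idea concrete, here is a minimal Python sketch using the Pillow library (the file names are hypothetical, and an 8-bit RGB image is assumed): a lossless format returns every pixel exactly as it went in.

```python
from PIL import Image
import numpy as np

img = Image.open("photo.tif")                    # hypothetical 8-bit RGB source
img.save("photo_lossless.png", optimize=True)    # PNG compression is always lossless

reopened = Image.open("photo_lossless.png")
# every pixel value survives the lossless round trip unchanged
assert np.array_equal(np.array(img), np.array(reopened))
```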

Lossy Compression: Because most image sensors now capture far more information than the human eye (and most PC screens) can use, files can be heavily compressed to remove ‘useless’ information.

This reduces the file size quite dramatically and makes it easier for sharing online.

The most commonly used file format for compressed images is JPEG (.jpg).

JPG compression is usually around a 10:1 ratio, and is noticeable when zooming in compared to an uncompressed version of the image.
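You can see this trade-off for yourself with a few lines of Python (again a Pillow sketch with hypothetical file names): the same image saved at decreasing JPG quality shrinks dramatically on disk.

```python
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")     # hypothetical uncompressed source

# save the same image at decreasing JPEG quality and compare the file sizes
for quality in (95, 75, 50):
    name = f"photo_q{quality}.jpg"
    img.save(name, "JPEG", quality=quality)
    print(name, os.path.getsize(name) // 1024, "KB")
```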

Although you can get JPG files that still carry metadata (EXIF), it is usual for images compressed to JPG to lose the majority of the underlying image data.

Whilst compression reduces the file size and degrades the image quality, simply reducing the image dimensions (pixel size) can also reduce the quality, but in different ways.

By resizing the image, you reduce the pixel dimensions, creating a smaller overall image and therefore reducing the space required to store it on your memory card/disk.

Another way to reduce the file size is to crop your image, digitally cutting off the edges of the picture to make it smaller.
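In code, the two approaches look like this (a Pillow sketch, hypothetical file name): resizing throws away pixels across the whole frame, while cropping keeps full pixel density but discards the edges.

```python
from PIL import Image

img = Image.open("photo.jpg")                    # hypothetical file

# resizing: keeps the whole frame but reduces pixel density everywhere
small = img.resize((1280, 960), Image.Resampling.LANCZOS)

# cropping: keeps full pixel density but cuts off the edges of the frame
w, h = img.size
centre = img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))
```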

Now although this may still look good on a computer or even a mobile screen, it will mean some heavy degradation when you look a little closer.

An image taken straight from the camera in RAW could be 9216 x 6912px and around 100 MB. The examples shown here are:

  • Large – Super Fine Compression: 9216 x 6912px – Straight Out Of Camera (SOOC) – 100 MB
  • Small – Super Fine Compression (S-SF): 1280 x 960px – 0.3 MB
  • Small – Normal Compression (S-N): 1280 x 960px – 0.1 MB

As you can see in the images, there is no discernible quality difference when looking at your screen. This is why compression is good for people only using their images on the computer, mobile devices or for sharing with others over digital media.

However, when we zoom in to 100%, and then to 200%, you can see why we do not compress images that will be used for print.

[Images: 100% and 200% crops of the Large Super Fine, Small Super Fine and Small Normal versions]

With the advances in technology and digital photography as a whole, enthusiasts are always looking for the cameras with the biggest sensors and most megapixels. They want the latest cameras and look to continually upgrade.

However, if these same people don’t pay attention to compression and to the way they process or publish their images, the vast amount of money spent on new gear is wasted and never used to its full potential.

I’ll talk a little more about resolution later.

Part 2 – RAW vs JPG

So when it comes to IQ, should we shoot in RAW or JPG?

To be very honest, shooting in RAW or shooting in JPG has nothing really to do with IQ... I will explain.

You may have noticed when shooting images in both RAW & JPG (if your camera does this) that the JPG image is clearer, sharper and has better colour than the RAW image.

This is true of any camera; but why? Well...

A RAW image contains more data, more information and more colours, shades and details than the JPG, however, this information is unprocessed.

When I say unprocessed, I mean it is like a puzzle, or a code, for the imaging software/camera to correctly arrange. Software like Adobe, Capture One or Affinity can be used to process RAW images.

It requires some help, adjusting everything from White Balance and Sharpness, to Vibrance and Saturation.

You greatly reduce the restrictions when editing RAW images.
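If you like to tinker, here is a minimal sketch of that ‘arranging’ step in Python using the rawpy library (a wrapper around LibRaw); the file names are hypothetical.

```python
import rawpy
import imageio

with rawpy.imread("photo.dng") as raw:           # hypothetical RAW file
    # postprocess() demosaics the sensor data and applies white balance,
    # colour conversion and gamma: the 'help' a RAW file is waiting for
    rgb = raw.postprocess(use_camera_wb=True)

imageio.imwrite("photo_processed.tiff", rgb)     # a TIFF ready for further editing
```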

You will also notice that when you print from the processed image (usually a TIFF), that the image can be slightly different to what you see on the computer screen.

“Why” I hear you ask again, well…

A computer screen is built in a way that fails to show the image in its entirety. Most screens have a limited number of colours and display a limited amount of data. Even a High Definition Rendering of a RAW image is still only a JPG image.

A 4K screen (3840 x 2160) is only 8.3 megapixels, whilst the majority of us still using HD (1920 x 1080) are seeing just 2.1 MP.

That in itself is a lot of wasted information.

The same goes for the LCD screen on the back of the camera; the previews it shows are all JPEGs.

For now, we’ll just say that a PC screen or mobile device is only giving you a small JPG version of what the camera took.

There are three main questions to consider when deciding whether to shoot in JPG or RAW.

1. Will you be using your images only online, digitally?
  • Yes – JPG
  • No – Possibly RAW
2. Will you want to process all your images on the computer?
  • No – JPG
  • Yes – Possibly RAW
3. Will you be selling images, printing images or providing them to other people?
  • No – Possibly JPG
  • Yes – RAW

So now, when someone tells you that you’re shooting in the wrong format, you can correct them.

Part 3 – Sharpness (Acutance)

Sharpness and resolution go hand in hand; however, the human eye is surprisingly poor at judging how sharp something actually is.

We judge sharpness by the contrast between edges in an image, whereas a digital measure of sharpness will also take resolution into account.

The Oxford English Dictionary gives the meaning of Acutance as “The sharpness of a photographic or printed image”; in practice, acutance measures how quickly brightness changes with respect to space across an edge.

Now you can see why the Oxford Dictionary explains it this way… but it’s actually very simple.

If you look at the images, you’ll see an example of how sharpness is perceived by the human eye (in basic form).

It is very difficult to show digitally how the eyes use contrast in space to define sharpness; however, you’ll get a basic understanding from this.

In the next couple of images, you’ll see how digital sharpening uses a similar technique to define and sharpen edges to provide a clearer image.

In this illustration, you can see how an image may normally appear in RAW format.

I have used a simple form to show this clearly.

The JPG version of this image will have some sharpening added to it for screen purposes. This will show around the contrasting edges.

You’ll notice a slight change in the appearance up close; however, if you zoom out, it will look far clearer.

To do this, the camera, or processing software, will create something called an unsharp mask.

The unsharp mask will look similar to the illustration here.

You’ll notice the contrast has been reversed, like a negative, to provide a sharp edge to the unsharp areas of the original image; hence the name ‘Unsharp Mask’.
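For the curious, here is a minimal numpy/Pillow sketch of the same idea (not how any particular camera or program implements it): blur a copy of the image, take the difference as the mask, then add the mask back to strengthen the edges.

```python
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("L")       # hypothetical file, greyscale for simplicity
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))

original = np.asarray(img, dtype=float)
soft = np.asarray(blurred, dtype=float)

mask = original - soft                           # the 'unsharp mask': strongest along edges
sharpened = np.clip(original + 1.0 * mask, 0, 255).astype(np.uint8)
Image.fromarray(sharpened).save("photo_sharp.jpg")
```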

If the masking is overdone and the sharpening too much, you’ll lose the natural look and possibly some of the quality from the original image.

This is often used for artistic effect or during HDR processing.

The illustration here shows an overly sharpened image, and below you can see an example of this in an actual photo.

One of the drawbacks of digital sharpening, or sharpening in post process, is the increase in noise or artefacts that may appear in an image.

Whilst these can also be adjusted, it is something to keep in mind, as the quality of an image can be dramatically reduced if processed incorrectly.

Because you may not want to sharpen every element of an image, many processing programs will allow you to control the mask by way of Amount (%), Radius (Pixels) and Threshold (levels).
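Pillow, for example, exposes exactly these three controls in its UnsharpMask filter; a minimal sketch with a hypothetical file name:

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")                    # hypothetical file
# radius is in pixels, percent corresponds to Amount (%), threshold to levels (0-255)
sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharp.save("photo_sharpened.jpg", quality=95)
```

Raising the threshold protects smooth areas (like skin or sky) from being sharpened along with the real edges.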

I will cover this in more detail in a later tutorial.

Part 4 – Noise and Artefacts

So what is Noise and what are Artefacts?

Whilst both affect the quality of an image, they appear for very different reasons.

Speaking only of digital photography, noise is usually the result of electrical distortion introduced while the image is created. That said, it can also result from pushing processing too far when there is insufficient data in the image to give reasonable results; for example, trying to recover detail from shadows or dark areas.

See image to the left...

There are many ways to reduce noise and many specific programs available to control the amount of noise reduction during post processing.

The easiest way to reduce or remove noise in an image is to ensure you have adequate lighting and the correct exposure is used.

Most cameras will have some sort of Noise Reduction built in, and although some are very aggressive, reducing fine detail, most will process the JPG image to a very acceptable standard.
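If you want to experiment with noise reduction in post, OpenCV’s non-local means denoiser is one common option; a minimal sketch with hypothetical file names (the strength values are arbitrary starting points):

```python
import cv2

img = cv2.imread("noisy.jpg")                    # hypothetical noisy image
# non-local means denoising; the strength values (10, 10) behave like an
# aggressiveness control: push them too high and fine detail gets smeared,
# much like over-aggressive in-camera noise reduction
clean = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
cv2.imwrite("denoised.jpg", clean)
```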

Artefacts are usually the result of how an image is processed or compressed.

See image on the right...

Artefacts are slightly different from noise; they can appear for a multitude of processing reasons.

These can range from, but are not limited to, compression, sharpening, colour processing, contrast adjustments and resizing.

The easiest way to reduce or remove artefacts is to lower or reverse the processing you have just completed, then re-apply the adjustments in smaller amounts.

If you are able to create a blurred mask and paint over the artefact, that can also work.

Another way is to ensure the way in which you are processing your image is correct.

For example, if you are processing an image for viewing online, you do not want to create a JPG in the Adobe RGB colour space; it should be converted to sRGB first.
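That conversion can be scripted; here is a minimal Pillow sketch (the image name and the ICC profile path are hypothetical, and you need an Adobe RGB profile file on disk):

```python
from PIL import Image, ImageCms

img = Image.open("photo_adobergb.jpg")               # hypothetical Adobe RGB image
adobe = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # hypothetical path to an ICC profile
srgb = ImageCms.createProfile("sRGB")

# remap the pixel values from Adobe RGB to sRGB before publishing online
converted = ImageCms.profileToProfile(img, adobe, srgb)
converted.save("photo_srgb.jpg", quality=90)
```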

If the artefacts show after resizing, try using a high-quality bicubic resampling option, or check whether you have used too much Gaussian Blur.

Although artefacts are generally a bad thing, some use the effect intentionally for artistic purposes calling it ‘Glitch Art’, ‘Datamoshing’ or ‘Pixel Bleed’.

Part 5 – Colour

Because colour is very subjective, I will explain a little about perception below.

The eye and the brain determine colour in very different ways, giving us what we know today as the visible light spectrum.

The eye collects light with three types of cone: Blue (Short-Wave), Green (Medium-Wave) and Red (Long-Wave), known to us as RGB.

The brain will then calculate the amount of light falling across each cone type on 3 different channels: Red-Green, Blue-Yellow and Black-White, the last of which is known to us as luminance.

If light is poor and not enough information is received by the cones, another area of the eye (the rods) will act like an amplifier to provide slightly better vision.

I say slightly because this does not provide a very good colour rendition (more like shadows and similar to increasing the ISO).

We only see 3 colours; however, the number of perceived colours is considerably greater due to the varying intensities of those colours.

So we see in three channels at different intensities giving us colours.

The average human can actually determine ‘just’ 10 million colours, whereas the modern sensor can capture millions more.

When we talk about colour in digital form, we use ‘bits’ but here it gets a little confusing.

Bits are just the smallest piece of information available in digital form, in binary it is either 0 or 1.

When it comes to photography, 0 is assigned to black and 1 to white.

Therefore, a 12-bit image simply means that each pixel records 12 bits per colour channel.

Each of these bits has a possible 0 or 1 value (one of 2 values), so:

  • 2 to the power of 12 = 4,096 intensity levels per colour channel.
  • 4,096 (R) x 4,096 (G) x 4,096 (B) ≈ 68.7 billion colours (Intensity/Bit Depth)
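If you want to sanity-check that arithmetic, it is a couple of lines in Python:

```python
levels_per_channel = 2 ** 12         # 4,096 intensity levels in a 12-bit channel
total_colours = levels_per_channel ** 3
print(f"{total_colours:,}")          # 68,719,476,736, roughly 68.7 billion
```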
Now, I’m going to address the fact that some of you will say “but my TV/PC shows 24-bit True Colour” (also known as Direct Colour).

Whilst this may be true, what they are referring to is slightly different.

24-bit in this case refers to the total bit depth across all three channels combined, not the bits per channel.

Bit depth refers to the number of possible colour intensities when all three colour channels are combined.

So a 24-bit ‘True Colour’ image has the same technical make-up as an 8-bit-per-channel image.

8bits (R) + 8bits (G) + 8bits (B) = 24-Bit True Colour

8 Bit vs 10 Bit vs 16 Bit
There are a few areas you should consider for good quality colour.

Next to light, colour can be one of the most important parts of photography.

Simply put, colour accuracy is the similarity of the captured image to the original scene.

Since the first colour image was captured in the mid-1800s, people have strived to achieve bigger, brighter, clearer colours.

With the birth of digital imaging, colour control moved from the darkroom to the computer, and instead of requiring years of study, it has become ever easier to control the overall appearance of light in an image.

Because colour control is still as important as ever, you will see colour charts in use by more serious photographers.

This allows you to control everything from White Balance to Saturation and Vibrance.

“But what about Black and White?”

Well, even black and white images rely on colour for tone and fine detail.

Gamma vs Contrast

Some people have difficulty differentiating between gamma and contrast. Whilst they may be similar, they are in fact very different.

If you are anything like me, the mathematical explanation of the two is far too advanced and so I like to think of it more this way.

Contrast is the linear control of the ratio between the lightest and darkest areas of the image.

Think of a see-saw: as you raise one side, the other lowers, and the ratio grows further apart. Push the contrast too far and you will end up with a 1970s-style music album cover.

Gamma is non-linear; raising the gamma darkens the dark areas, lowering the gamma lightens the dark areas.

This is more like a balance ball: when you stand with your feet together, that one point goes down and the darkest of darks goes very black. As you move your feet, different areas of the ball go down, decreasing the darkness by varying amounts; however, nothing moves up, and the light areas remain as light as they were.
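If you prefer numbers to balance balls, here is a minimal numpy sketch of the difference (the 1.5 contrast factor and 1.8 gamma are arbitrary example values):

```python
import numpy as np

v = np.linspace(0.0, 1.0, 5)     # normalized pixel values: 0 = black, 1 = white

# contrast: a linear stretch about mid-grey; both ends of the see-saw move
contrast = np.clip((v - 0.5) * 1.5 + 0.5, 0.0, 1.0)

# gamma: a non-linear power curve; raising gamma above 1 darkens the shadows
# most, while pure black (0) and pure white (1) stay exactly where they are
gamma = v ** 1.8
```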

Moiré

Pronounced “mwa-ray”, this is a strange interference pattern that may appear in some of your images.

It normally appears when shooting synthetic fabrics, fine suits and shirts with thin patterns, or tableware.

It can also happen in street photography when shooting brickwork or buildings with patterns in the windows or stonework.

It is caused when a fine pattern in your frame interferes with the pixel grid of your sensor.

Whilst this can be annoying, if you know what to watch out for and check your images at magnification when shooting, you can overcome it by changing your aperture and/or zoom slightly.

You can also correct this in post process, to a degree. Check out this tutorial by Gavin Hoey for more on this:

https://www.youtube.com/watch?v=ZFteoCptzvM

HDR Film Camera - NASA

Part 6 – Dynamic Range

Dynamic range is the distance between the strongest and weakest of signals, in both sound and light.

For the purposes of photography, we measure this in stops, or EVs, between the lightest and darkest areas of the image: the ability to retain detail in both the light and dark areas of the same frame.

  • Older 35mm film cameras had a dynamic range of 11-13 stops.
  • A modern Olympus MFT camera will capture 12-13 stops.
  • Most high-end digital cameras will capture between 12-15 stops.
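Since one stop is one doubling of light, dynamic range in stops is simply the base-2 logarithm of the ratio between the strongest and weakest usable signal; a minimal sketch:

```python
import math

def dynamic_range_stops(max_signal: float, min_signal: float) -> float:
    # stops (EV) between the strongest and weakest usable signal
    return math.log2(max_signal / min_signal)

print(dynamic_range_stops(4096, 1))   # an idealised 12-bit sensor: 12.0 stops
```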

With the super-fast processing of information between our eyes and our brain, the average human can detect changes in light very quickly, giving us around double the range of any consumer camera: an equivalent dynamic range of 20-30 stops (although this is subjective).

Poor dynamic range can result in bleached highlights or underexposed shadows that require either advanced post processing or an increase in noise with decreasing detail.

Part 7 – Resolution

Resolution is commonly measured in lp/mm (line pairs per millimetre) and describes the minimum distance at which two lines can still be distinguished as separate.

When thinking about resolution in photography, you need to consider two things.

  1. Your equipment
  2. Your finished product

If your equipment can only provide you with low-resolution images, your finished product will be low resolution, regardless of the quality of print or screen.

You will need to think about this before creating your images. Who is your audience?

At the same time, if your finished image is only for social media or mobile viewing, you are unlikely to need high-resolution equipment and so may not need to spend so much on high-end gear.

Because the lens resolution is often more restrictive than the camera, companies will release MTF charts (Modulation Transfer Function) to compare the quality of optical performance.

Whilst I will not go deeply into MTF charts (for a later tutorial), I will say they compare the optical ability of a lens using various methods and techniques.

Part 8 – Lens Distortion

Optical, or lens, distortion can play havoc with your finished image and can take hours of painstakingly boring adjustments in post process to correct.

Whilst perspective distortion can be corrected by changing your angle, distance or focal length, optical distortion can only be changed in post and is considered to be ‘lens error’.

Straight lines will become wavy and/or bent in various ways (a form of aberration).

The two main types of lens distortion are Barrel (Convex) and Pincushion (Concave) whilst sometimes you can find mixtures of them both (Moustache).
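For the mathematically curious, both barrel and pincushion are often approximated with a simple radial model; a minimal sketch (the sign convention is illustrative, as different tools define it differently):

```python
import numpy as np

def radial_distortion(x, y, k1):
    # x and y are coordinates normalized to the image centre; a negative k1
    # pulls points inward (barrel-like), a positive k1 pushes them outward
    # towards the corners (pincushion-like)
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2
    return x * factor, y * factor
```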

Every lens will have some form of distortion, but what is acceptable and what may be required are up to the user.

Fisheye lenses are a good example of this.

Many use them specifically for their distortion, as they can give an angle of view greater than 180 degrees. This can be seen most obviously when watching action sports fanatics with cameras strapped to their heads.

Other types of aberration can include Chromatic, Spherical, Astigmatism and Coma.

Flare is usually apparent when shooting directly into a bright light source (like the sun): you get a wash of white across your image, dulling the contrast and bleaching the overall look.

Depending on the design of your lens, you may also find various colours and shapes, or ghost spots (ghosting), across the image: small round dots where the light has bounced back and forth off the elements inside the lens.

These will usually take the shape of the aperture due to the nature of the light movement through the lens.

Most lenses these days have specialist coatings and designs to avoid the more harsh and colourful flare.

So, this has been a long one, with lots to read. I will leave you with it for now and add some more tutorials later on for the bits in between.

As always, these are only guidelines. Now that you have a basic understanding of what IQ is, you can experiment yourself and have some fun!

Look out for more tutorials coming soon.
Created By
David Munns