Bit-depth is one of the more esoteric and misunderstood concepts in digital photography. At my camera club, I’ve often been asked whether images should be edited in 8- or 16-bit mode in Photoshop. The idea of bit-depth is closely related to the ideas underpinning RGB colour and JPEG compression, so it’s well worth taking a little time to consider.
Let’s start with the bit itself. The word is short for binary digit, and it is the fundamental unit of data storage in digital systems like computers and modern cameras. Put simply, a bit can be either a 0 or a 1. Thus, a single bit gives us two possible values. You can, in theory, interpret these any way you like; for example, you might interpret 0 as false and 1 as true, or, for that matter, 0 as gold and 1 as silver. But a single bit provides for only two distinct values.
Adding a second bit doubles the number of possible values to four: 00, 01, 10 and 11. If we convert these values from binary into our familiar decimal numbering, they translate to 0, 1, 2 and 3. Every additional bit we add doubles the number of values available: three bits give eight values, four give 16 values, five give 32 values and so on. In digital imaging, we use bits to represent the shades a pixel can be. Here is an image that uses just one bit per pixel. If that bit is a 0, the pixel is black; if it is a 1 the pixel is white:
Clearly, one bit-per-pixel doesn’t allow for much tonal variation. What if each pixel were represented by two bits? This would allow four possible tones for each pixel. The image below uses two bits-per-pixel, giving us black and white, with two additional tones of dark and light grey:
A little better, but not much. A classic bit-depth over the past several decades has been eight. With eight bits, we can have 256 possible shades per pixel, numbered from 0, black, to 255, white. Most modern histograms are based on 8-bit imagery, and give a horizontal scale of 256 levels. Here is the same picture rendered with eight bits per pixel:
Now the detail is much better. JPEGs are 8-bit images, so this bit depth is exceedingly common. But raw files use greater bit depths. Many modern raw files are 12-bit, meaning that each pixel can have 4,096 potential shades, giving even greater fidelity of capture. If bit depths were ladders, then 12-bit ladders would have 4,096 rungs, whereas 8-bit ladders would have just 256 to cover the same span. A good number of modern sensors can capture 14-bit data, with each pixel having 16,384 shades, or levels, available.
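All of these shade counts follow from the same rule: n bits give 2ⁿ distinct values. A quick sketch in Python (my own illustration; the numbers are the ones quoted above) makes the doubling concrete:

```python
# Each additional bit doubles the number of distinct values: n bits -> 2**n.
for bits in (1, 2, 8, 12, 14):
    print(f"{bits:>2} bits: {2 ** bits:,} shades per pixel")

# The values themselves are just binary numbers; the four 2-bit
# patterns, for example, translate to familiar decimals:
print([int(pattern, 2) for pattern in ("00", "01", "10", "11")])  # [0, 1, 2, 3]
```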
So far, however, we’ve only been considering greyscale images. How does colour factor into this? When a raw file is processed by software such as Adobe’s Camera Raw, a process known as interpolation constructs the full colour of each and every pixel. You can read more about this process in my introduction to demosaicing. The upshot is that every pixel has not just one value, but three: an amount of red, of green and of blue, which combine to form the colour of the pixel.
In a classic 8-bit colour image, what we really have, in each pixel, are eight bits of red data, along with eight bits of green data and eight bits of blue data. Recall that eight bits give 256 possible values, so the total number of colours theoretically available for any pixel is:
256 shades of red × 256 shades of green × 256 shades of blue = 16,777,216
That’s a little shy of 17 million possible colours. Of course, it doesn’t mean an 8-bit image will have this many colours in it (it may not even have 17 million pixels), but it could. Also note this doesn’t tell us what these colours actually are: that depends upon the colour space used, about which I will write a future article. The bit-depth simply tells us how many unique colours are available to a pixel, in theory. Sometimes, 8-bit imagery is called 24-bit, since a colour pixel has eight bits for each of the three colour channels, red, green and blue.
Let’s consider higher bit-depth raw files. Earlier, I mentioned that 12-bit raw images have 4,096 tonal levels per pixel: but when the demosaicing is done you end up with 4,096 levels for each colour channel. Thus, the total number of colours available to each pixel is, in theory,
4,096 shades of red × 4,096 shades of green × 4,096 shades of blue = 68,719,476,736
That’s very nearly 70 billion colours. A pretty large number, I think you’ll agree. But when we consider 14-bit raw files, we go down Alice’s rabbit hole. Each colour channel has 16,384 levels, so the arithmetic is:
16,384 shades of red × 16,384 shades of green × 16,384 shades of blue = 4,398,046,511,104
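The three totals above all follow the same pattern: an n-bit-per-channel image offers (2ⁿ)³ theoretical colours. A short Python check (my own sketch) reproduces them:

```python
# Theoretical colours per pixel: 2**bits shades per channel, cubed for RGB.
for bits in (8, 12, 14):
    shades = 2 ** bits
    print(f"{bits}-bit: {shades:,}^3 = {shades ** 3:,} colours")
```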
That’s getting on towards four and a half trillion theoretical colours—an unimaginable number. Let’s spend a few moments reflecting on these theoretical colours. As detailed above, the pixels in an 8-bit image can have any one of nearly 17 million colours. Each colour is represented by the bits that make up the red, green and blue data for a given pixel. In other words, it’s all a numbers game, and computers are highly adept at manipulating numbers. Thus, your software can easily distinguish between all of these different colours. The question is, can you?
Think about a device that reproduces these colours, such as your screen. Screens are designed to take a set of red, green and blue values for each pixel and generate the corresponding colours by emitting light. However, a screen cannot render 17 million visibly distinct colours. Imagine you have, in your image file, two shades of red that are almost (but not quite) identical. The computer sees them as different, because their numbers are different; thus, software like Photoshop can happily work with them. But when these two colours are sent to your screen, the distinction is too fine, so the screen simply emits the same colour for both. Could we not improve the technology of our screens? Well, some are indeed better than others at showing larger gamuts (or ranges) of colour, but human vision itself imposes a limit.
It’s reckoned that the human visual system can distinguish about ten million colours. Beyond this, we just can’t tell the difference—even if one exists—and therefore screens can go only so far before the law of diminishing returns kicks in. A similar problem occurs with printers, which mix inks to recreate these digital colours. There is a limit to the number of colours they can reproduce: a limit far exceeded by the number of theoretical colours available in the image file. At this point, we can ask a sensible question: if these theoretical colours are mostly unreproducible, why bother to record them? If the 17 million colours in 8-bit files are overkill for our output devices and eyes, why would we need the seemingly insane numbers of colours available in 12- or 14-bit raw files?
There are a couple of answers. In the remainder of this article, I will cover the idea of losing tonal variations due to post-production. (In a following article, I will discuss the non-linearity of camera sensors and how bit-depth drops away in the shadow regions of images.) One very good reason for having all of these colours available is that manipulation of images in post-production causes colours to be lost. How does this happen? It occurs when adjustments, like Photoshop’s Levels and Curves, cause distinct colours to merge, and become the same colour. Imagine having a collection of small squares of Plasticine with which you’re making a mosaic. These squares are of many different colours, allowing you to create gradients and other sophisticated effects. The more you work them, and press them together, the more they start to join. When two squares fuse, their colours merge into a single shade and, over time, the variety of colour is lost. A conceptually similar process happens to pixels in post-production. Consider these crops of a sunset sky:
Above is the original, out-of-camera image. Due to the millions of colours available, the gradients in the sky can be nicely rendered as smooth transitions, just as the scene would appear to your eyes. In a modern image, there would likely be millions of pixels to create this area, and each one could be a subtly different shade. Below is the same image after a ton of adjustments in Photoshop. You might like to imagine a stack of adjustment layers heaped upon the original base. Some of these adjustments might cause a compression, where multiple colours become the same; others might then try to expand the data, trying to pull the colours apart. But, as with the Plasticine, once colours have merged, they cannot be separated. As more adjustments are added, the total number of colours is reduced. The gradients of the sky become distinct and obvious bands of solid colour, an effect called posterization, after the small range of colours used to render poster graphics.
It is for this reason that having all those millions, billions, or trillions of theoretical colours is a boon. The 17 million colours in an 8-bit file are quickly and easily reduced to posterization by zealous and careless post-production. I see such problems as the banded sunset quite regularly in camera club competitions and exhibitions. This is because a lot of people edit in Photoshop in the standard 8-bit mode, even when pushing their image to extremes.
At the end of this article is a link to a Photoshop document I created to exemplify this problem. I’ve exaggerated the effect for the sake of instruction, but if you want to follow the effect in a live environment, then download the file and open it up. The base layer, Background, is nothing more than a simple black-to-white gradient from left to right, created with the Gradient tool. To see the tonal levels in this greyscale image, choose Window > Histogram to open its panel. Click the panel’s fly-out menu in its top right corner and choose Expanded View, which shows a wider histogram.
Histograms are based on 8-bit data, and show 256 levels of tone across the horizontal axis. On the left is black, at level 0, and on the right is white, at level 255. The height of each bar shows how many pixels in the image have the corresponding level. Since we have a full gradient from black to white, we have a full histogram, with plenty of pixels at every level. Above the base layer are two Levels adjustment layers. Turn on the layer named Reduce colours. It deliberately chops off every level below 100 and above 155. All that remain are the tones from level 100 to 155. In other words, the full range of 256 levels has been reduced to just 56. We no longer have black, or dark tones, or white, or bright tones. Only the mid-level tones remain. The histogram will reflect this butchery, showing bars of data in the centre, but none towards the edges:
Now turn on the Expand tones layer. This is another Levels adjustment, heaped upon the first, which attempts to restore the full range of tones. It takes that clump of mid-level data, and stretches it out to cover the full 256 levels once more. The trouble is, there wasn’t enough data left to cover the range fully and smoothly. The histogram shows toothless gaps where tonal levels are missing. We have black and white again, and light and dark tones, but the sudden leaps from one tone to the next are now very visible. It’s a bit like taking a ladder with 256 rungs, reducing it to 56 rungs with a band-saw, and then somehow stretching those 56 rungs out to cover the original length of the ladder. The gaps between the rungs would make it useless. To fully appreciate the posterization, zoom the image to 100%.
The problem here is caused by editing the image in Photoshop’s 8-bit mode, which provides only 256 levels. I’m using a greyscale image here for the sake of simplicity, but a colour image, recall, has 256 levels per colour channel. Nevertheless, the effect of compression and expansion is the same. Since adjustment layers are applied on the fly, we can simply upgrade this document to 16-bit mode to improve the situation. Leave the two adjustments in play, and choose Image > Mode > 16 Bits/Channel. Now, refresh the histogram by clicking the circular refresh button towards the top right of its panel. Those toothless gaps are gone and we once again have a smooth and continuous histogram. The reason is that 16-bit mode offers far more than 256 levels: even after discarding a great number of them (nearly four-fifths in our case), enough remain to be happily stretched back out to cover the original range.
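The whole experiment can be approximated in a few lines of Python. The Levels maths below is my own simplification (linear remaps with rounding; Photoshop’s exact arithmetic may differ slightly), but it shows why 8-bit mode loses levels while 16-bit mode shrugs off the same abuse:

```python
# A rough simulation of the two Levels layers described above, applied to a
# full black-to-white gradient in 8-bit and then 16-bit mode. The Levels
# maths is my own approximation; Photoshop's exact rounding may differ.

LO, HI = 100, 155  # the clip points used by the "Reduce colours" layer

def adjust(gradient, depth_max):
    """Compress the gradient into LO..HI, then stretch it back out."""
    lo = round(LO / 255 * depth_max)   # scale clip points to this bit depth
    hi = round(HI / 255 * depth_max)
    # "Reduce colours": squash the full range into lo..hi.
    squashed = [round(lo + v * (hi - lo) / depth_max) for v in gradient]
    # "Expand tones": stretch lo..hi back over the full range.
    return [min(depth_max, max(0, round((v - lo) * depth_max / (hi - lo))))
            for v in squashed]

def occupied_histogram_bins(values, depth_max):
    """Histograms show 256 bins whatever the depth; count non-empty ones."""
    return len({v * 255 // depth_max for v in values})

for depth_max in (255, 65535):
    gradient = range(depth_max + 1)  # every level, black to white
    result = adjust(gradient, depth_max)
    bins = occupied_histogram_bins(result, depth_max)
    print(f"{depth_max + 1:>6}-level mode: {bins} of 256 histogram bins occupied")
```

Rounding to whole levels is where the damage happens: once two levels collapse into one, no later stretch can separate them again. In the 8-bit run only 56 of the 256 histogram bins survive (the toothless gaps), whereas in the 16-bit run every bin still contains data.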
So, where does this leave us regarding raw capture? If you edit in Camera Raw or Lightroom, you effectively have a 16-bit space to work in. All of the billions or trillions of theoretical colours from your raw files are present, and you can push edits without worrying unduly about posterization. But when you click Open in Photoshop, the default behaviour is to transfer a copy of the image into an 8-bit document, which offers 17 million colours at most. If you’re only intending a little light work, perhaps a few adjustments and some sharpening, this won’t be a problem. (In truth, I’ve done extensive editing in 8-bit mode and, provided you’re careful, it’s not a problem.) However, if you intend to do some extreme editing, or must ensure that colours are preserved wherever possible, then 16-bit mode is the better option. Camera Raw allows you to copy the image into a 16-bit document simply by tweaking the Workflow Options, available via the link at the bottom centre of its dialog.
There is more to the story of bit depth, and why higher depths are desirable, and I’ll cover it in an article on non-linear capture.