Overview 4 – Photography E – Resolution

The Choices

  1. Flow phenomenon: Water boiling? Faucet dripping? Why does it look like that?
  2. Visualization technique: Add dye? See light distorted by the air/water surface?
  3. Lighting: Continuous? Strobe? Sheet?
  4. Photography
    A: Framing and Composition
    B: Cameras
    C: Lenses
    D: Exposure
    E: Resolution
  5. Post processing: Creating the final output. Editing: at least cropping the image and setting contrast.

A flow visualization – photo or video – is really a measurement of light. Like all measurements, it can be evaluated based on its resolution. Here, resolution means the ability to discern small differences. We’ll discuss three types of resolution: spatial, temporal, and measurand. Spatial resolution is the smallest difference in space that can be seen: how close can two objects be in the image before they become indistinguishable, or are seen as a single object? Temporal resolution is how close in time two events can be before they are indistinguishable. Measurand resolution is how small a difference in light or color can be detected. We’ll unpack each of these concepts and see how they impact the choices of visualizations, and vice versa.

A word about describing resolution: the best resolution means that very small differences can be determined. It can be described as ‘high resolution’, ‘fine resolution’, or ‘well-resolved’; saying ‘large resolution’ is confusing.

Figure 1: Recreation of a video test pattern from the Philips PM5540, circa 1970. 4throck, CC0, via Wikimedia Commons.

Spatial Resolution

Based on the definition above, in digital photography the resolution is generally determined by the number of pixels in the image; the size of a pixel determines how close together two objects in the image can be. If there is no blank pixel separating the two objects, they will look like a single object. Figure 1 is an example of a video test pattern used to analyze analog video; here it has a maximum of 768 pixels across. The finest lines may be blurred, depending on the resolution of your viewing device.

Figure 2: Water-based paint is thrown upwards by a loudspeaker, forming ligaments. Jonathan Severns, Team First Spring 2013.

How Much Spatial Resolution is Needed?

Consider an image, say, 4000 pixels wide. This means we can image an object that takes up that whole 4000 pixels. In the language of fluid mechanics, we would say that we can resolve a flow feature of that scale, meaning size. Maybe it’s a vortex, or the width of a boundary layer, or the thickness of a flame. The smallest scale that we can resolve is on the order (within a factor of 10) of a few pixels. So the ratio between the largest and smallest scale is 1:1000, or 1:10³ in orders of magnitude. We can then say that we can resolve three decades of scales. Figure 2 is an example of such an image. The whole phenomenon can be seen, taking up more than 1000 pixels, while the smallest detail of interest might be the width of the thinnest ligament, a few pixels wide.
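
To make the scale bookkeeping concrete, here is a minimal Python sketch of the ratio argument above. The 4000 px width and the roughly 4 px smallest feature are the values from this paragraph, not properties of any particular camera.

    import math

    def decades_of_scales(largest_scale_px, smallest_scale_px):
        # Number of decades (powers of ten) spanned between the largest
        # and smallest resolvable scales in an image.
        return math.log10(largest_scale_px / smallest_scale_px)

    # Values from the example above: a 4000 px wide image whose smallest
    # feature of interest spans about 4 px.
    print(round(decades_of_scales(4000, 4), 1))  # -> 3.0 decades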

So, how many decades do we need in a typical flow visualization? Small, simple laminar flows can be well-resolved with three decades, but large-scale or turbulent flows require five decades or more. When can you be sure you have enough? One answer is when you get no new information as resolution is increased, similar to a strategy used in computational fluid dynamics. Five decades is beyond the limits of current sensor technology; no consumer-level camera sensors are 100,000 pixels wide yet. Four decades, however, is nearly here: that’s roughly a 10,000 px wide image, or a 100 Mpx professional medium-format camera, currently starting at around $6000. Such a sensor will probably be more affordable in a few years. Until then, you’ll have to choose which scales to visualize and which to leave out. For comparison, the human eye has an estimated resolution of 576 Mpx, depending on how you count; vision is complicated.

Loss of Spatial Resolution

Spatial resolution can be degraded by a number of factors: pixelation, bad focus, low contrast, ISO noise, motion blur, compression artifacts (in JPEGs), and diffraction effects. Pixelation is caused by not having sufficient pixels, so that a single pixel must represent a multitude of details. Bad focus smears object information over a number of adjacent pixels, allowing the circles of confusion of nearby object images to overlap. Low contrast makes it difficult to tell the difference between nearby objects. Noise from a high ISO can also reduce the visibility of details. Motion blur is another factor that reduces spatial resolution, although it can provide additional information at the same time. Motion blur can be distinguished from focus blur by its sharp edges parallel to the motion direction; more on this below. We’ll see compression artifacts from lossy file formats a bit later.

Diffraction effects are a fairly new problem that reduces sharpness in digital photography and thus reduces spatial resolution. Whenever anything in the optical train approaches the size of light’s wavelength, light, in its manifestation as a wave, will bend around a corner; this is diffraction. We’ll look at diffraction in depth later on, or you can skip ahead to it now. When the aperture in a lens (particularly a small-diameter lens) becomes very small, diffraction makes single points generate circles of confusion all across the image, even if perfectly focused. If these circles are smaller than your pixel size, you won’t notice the diffraction effects. But if you have a tiny sensor – like a phone camera – with correspondingly tiny pixels, the circles can cover several pixels and will be quite noticeable. In this case, the system is ‘diffraction limited.’ Thus, while you will get the best depth of field at high f/, you won’t get the best overall sharpness. Instead, many lenses are optimized to give best sharpness at a moderate f/ in the middle of the lens’ range.
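
A rough way to check whether a setup is diffraction limited is to compare the Airy disk diameter (about 2.44 times the wavelength times the f-number) with the pixel pitch. The pixel pitches and f-numbers below are illustrative values I have assumed for a typical full-frame camera and a phone camera, not measurements of any specific model.

    def airy_disk_diameter_um(f_number, wavelength_um=0.55):
        # First-minimum diameter of the Airy diffraction pattern, for green light.
        return 2.44 * wavelength_um * f_number

    # Assumed example sensors: ~4.3 um pixels (full frame), ~1.0 um pixels (phone).
    for label, f_number, pixel_pitch_um in [
            ("full-frame, f/2.8", 2.8, 4.3),
            ("full-frame, f/16", 16, 4.3),
            ("phone, f/1.8", 1.8, 1.0)]:
        spot = airy_disk_diameter_um(f_number)
        verdict = "diffraction noticeable" if spot > pixel_pitch_um else "smaller than a pixel"
        print(f"{label}: Airy disk ~{spot:.1f} um vs {pixel_pitch_um} um pixels -> {verdict}")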

Hands on! Do this now!

Make three test shots of a ruler, with the best possible focus, at 1) wide open aperture, 2) moderate aperture, and 3) smallest aperture. Upload the images to your computer. Disable any post-processing sharpness improvements and compare the three images. What f/ gives the best sharpness?

Figure 3: Dry ice placed underwater forms bubbles of CO2 filled with water fog. The bubbles are illuminated by a flash. The room lights made the red smear while the shutter was open. Amanda Barnes, Sean Hulings, Mu Hong Lin, Vanessa Ready, and Brian Roche, Team First, Fall 2007.

Temporal Resolution

If a shutter speed is short enough to ‘freeze’ the flow with no noticeable motion blur, we call the image ‘time-resolved’; we’ve captured an instant in time, relatively speaking. When shooting video or image bursts, if the flow changes smoothly from one frame to the next, that is also considered time-resolved. If there are large spatial motions from one frame to the next – more than a few pixels – then you are missing information between frames, and the clip is considered poorly resolved. However, motion blur is not necessarily bad; it can be used to estimate velocity in particle tracking.
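
One way to check whether a video clip is time-resolved in this sense is to estimate how many pixels the flow moves between frames. The velocity, frame rate, field of view, and image width below are placeholder values chosen only to show the arithmetic.

    def displacement_per_frame_px(velocity_m_s, fps, field_of_view_m, image_width_px):
        # Pixels the flow moves between consecutive frames.
        meters_per_px = field_of_view_m / image_width_px
        return velocity_m_s * (1.0 / fps) / meters_per_px

    # Assumed example: 0.5 m/s flow, 30 fps video, 0.2 m field of view, 1920 px wide.
    print(round(displacement_per_frame_px(0.5, 30, 0.2, 1920)))  # ~160 px: not time-resolved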

As we saw in the section on shutters, using a short shutter time to achieve time resolution can result in rolling shutter artifacts. Instead, consider using a flash. With the shutter fully open, the flash will provide the time resolution; typical durations are 10 to 100 μs (1/100,000 to 1/10,000 second), much shorter than a typical shutter can achieve. If you didn’t read the section on flash and strobe, now would be a good time. The room should be dark when using this method; otherwise you’ll get ghosting from the continuous light. Of course, you might like this effect; Figure 3 illustrates one possibility. Unfortunately, flash won’t work with light-emitting fluids such as flames. Instead you’ll have to open the aperture and use a high ISO to compensate for short shutter times.

Figure 4: Payette River by Charles Knowles from Meridian Idaho, USA, CC BY 2.0, via Wikimedia Commons.

The opposite of time-resolved is time averaging. A long shutter time can produce beautiful flowing fog effects from natural streams or shores, as shown in Figure 4. The photographer writes:

“I have been waiting for a chance to get to the Payette River while the flow was still high to try some long exposure shots. I stopped last weekend. In full daylight, in fact around noon, I took this 2 minute exposure. I stacked a 4 stop and a 10 stop ND filter to cut the light. A couple of kayakers went through the middle of this shot. They were too fast to see though.”

A two-minute exposure at noon is a lot of light; the minimum aperture and lowest ISO won’t be sufficient to prevent saturating the sensor. Instead, neutral density (ND) filters were used. ND filters reduce light equally across the color spectrum, preserving the color balance. They used to be specified as optical densities, in factors of 10: an ND of 2 transmits 1/100 of the light, which is about 6.6 stops. They are now commonly specified in stops as well.
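
Here is a sketch of the bookkeeping for this shot: converting an optical-density ND rating to stops, and estimating what shutter time would have given the same exposure without the filters. The 4 + 10 stop stack and the 2-minute exposure come from the caption and quote above; everything else follows from that arithmetic.

    import math

    def density_to_stops(optical_density):
        # ND 'density' ratings are powers of ten: ND 2.0 transmits 10**-2 of the light.
        return math.log2(10 ** optical_density)

    print(round(density_to_stops(2.0), 1))       # ND 2.0 -> ~6.6 stops

    # Payette River example: 120 s exposure behind 4 + 10 = 14 stops of ND.
    stops = 4 + 10
    unfiltered_shutter_s = 120 / 2 ** stops
    print(f"~1/{1 / unfiltered_shutter_s:.0f} s without the filters")  # ~1/137 s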

Try This Motion Blur Exercise

Let’s say you have an 18 Mpx image with a field of view of 10 cm, and a feature of the flow in the image is smeared over 25 pixels. Would you call this time-resolved? The shutter speed was 1/1000 sec. How fast was the flow moving? You should get 0.48 m/s. A worked version of the calculation follows below, but try it yourself first.
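
Here is one way to set up the calculation, assuming a standard 3:2 sensor so that 18 Mpx works out to roughly 5200 pixels across; the aspect ratio is an assumption, since the exercise doesn’t state it.

    import math

    megapixels = 18e6
    aspect = 3 / 2                      # assumed 3:2 sensor
    field_of_view_m = 0.10              # 10 cm across the image
    blur_px = 25
    shutter_s = 1 / 1000

    image_width_px = math.sqrt(megapixels * aspect)   # ~5196 px
    meters_per_px = field_of_view_m / image_width_px
    velocity = blur_px * meters_per_px / shutter_s
    print(f"{velocity:.2f} m/s")        # -> 0.48 m/s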

Resolution in the Measurand: Light

So, how well can the sensor tell the difference between two levels of light? How well can humans do this, for that matter? Can we expect a camera to do as well as our eyes? Let’s think about the maximum range first, and then see how well we can do with subtle differences in light level.

Figure 5: High dynamic range image of altocumulus and wave clouds, January 29th, 2013 at 9:12 AM in Boulder Colorado. Kelsey Spurr, Clouds First, 2013

Dynamic Range

The range of light levels (luminosity) that we can measure in a scene, either with eyes or a camera, is the dynamic range. The human eye can see about 14 to 24 EV, but this number varies depending on the average brightness. Our devices cannot match this. For comparison, a sheet of paper can span about 7 EV from white to black. Typical computer monitors can display about 6 EV. Surprised? Check it out: hold a sheet of white paper up to your device and compare how bright its white is to the screen’s white, then do the same for a good printed black. Projectors are even worse.
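
EV differences are just base-2 logarithms of luminance ratios. As a rough sanity check on the ‘about 7 EV’ figure for paper, here is a sketch assuming a white reflectance of about 90% and a printed black of about 0.7%; those reflectance values are assumptions for illustration, not measurements.

    import math

    def ev_range(bright, dark):
        # Dynamic range in EV (stops) between two luminance or reflectance values.
        return math.log2(bright / dark)

    print(round(ev_range(0.90, 0.007), 1))   # ~7 EV: white paper vs. a good printed black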

The range of light levels, of EV, in the world around us is much bigger than our eyes can see, and much bigger than our devices can represent. Enter high dynamic range, HDR. The idea behind HDR is to compress the range of light levels in the real world into an image in a functional way, so that we can see both highlight and shadow detail in the representation. Many cameras can now quickly take a handful of images while bracketing the nominal exposure by several stops. These photos are then synthesized into a single HDR image, either in-camera or in post-processing.
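
If your camera only saves the bracketed frames, one common route in post-processing is exposure fusion with OpenCV. This is a minimal sketch, not the method used for Figure 5, and the three file names are placeholders for your own bracketed shots.

    import cv2

    # Bracketed exposures of the same scene (placeholder file names).
    frames = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

    # Mertens exposure fusion: blends the frames without needing exposure metadata.
    fused = cv2.createMergeMertens().process(frames)

    # The result is a float image in roughly [0, 1]; scale to 8 bits for saving.
    cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))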

Hands on! Try This Now!

Image a gray card. At low ISO, see how many stops of underexposure will make it black, and how many of overexposure will make it white. You’ll probably find a total range of 6-9 EV. The best cameras can do 14.

Figure 6: Light is quantized into pixel values by the sensor system.

Bit Depth

The dynamic range of an image then gets quantized, or digitized into steps, as illustrated in Figure 6. (The reality is more nonlinear and complicated than this example, of course.) If an accurate quantitative representation of light is required, you’d need to calibrate. For now, notice that the highest light level is represented as (255, 255, 255), or (FF, FF, FF) if you are counting in hexadecimal (base 16). Remember there are three numbers – one each for the red, green, and blue color channels – with 255 being the highest you can count to in 8 binary digits, or bits (11111111). This standard of 8 bits (one byte, or two nibbles) per color channel is called 24-bit or True Color. Your camera might actually have higher color resolution, 12 or 14 bits per channel, but the data will likely be up- or down-sampled to fit into 8- or 16-bit file formats. Most post-processing software and output devices still work best with 8 bits per channel, i.e., True Color.
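
A small sketch of the quantization step, under the simplifying assumption of a linear mapping: a normalized light level between 0 and 1 is placed onto the 256 available codes of one 8-bit channel, and the brightest level comes out as (255, 255, 255), or FF FF FF in hexadecimal.

    def quantize_8bit(level):
        # Map a normalized light level (0.0-1.0) onto the 256 steps of one 8-bit channel.
        return max(0, min(255, round(level * 255)))

    white = tuple(quantize_8bit(1.0) for _ in "rgb")
    print(white)                                   # (255, 255, 255)
    print(" ".join(f"{v:02X}" for v in white))     # FF FF FF
    print(quantize_8bit(0.5))                      # 128: mid-level before any gamma correction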

Figure 7: Michael Gäbler modified the image by AzaToth, CC BY 3.0, via Wikimedia Commons, to demonstrate the effect of increasing jpeg compression from right to left.

Camera File Formats

A poorly chosen file format can cause loss of both spatial resolution and color depth. Cameras usually offer more than one file format to store your images in, and you must select the format you want before shooting. When storage space is at a premium, you may be tempted to choose the format that compresses the data to take up the least amount of storage space. Don’t! Flow vis images contain valuable information, and you want to keep all of it. Almost all cameras offer the JPEG (jpg) format, and it is often the default. Unfortunately, this is the worst format to use, because it uses a ‘lossy’ algorithm; in other words, it loses information every time an image is stored in this format. Figure 7 illustrates this progressive loss of information with increasing or repeated compression.
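
To see the lossy behavior for yourself, here is a sketch using the Pillow library: it re-saves an image as a JPEG repeatedly and measures how far the pixels drift from the original. The file names and the quality setting are placeholders; substitute one of your own images.

    from PIL import Image
    import numpy as np

    original = Image.open("original.png").convert("RGB")   # placeholder file name
    reference = np.asarray(original, dtype=np.int16)

    current = original
    for generation in range(1, 11):
        current.save("resaved.jpg", quality=75)            # lossy save
        current = Image.open("resaved.jpg").convert("RGB")
        drift = np.abs(np.asarray(current, dtype=np.int16) - reference).mean()
        print(f"generation {generation}: mean pixel error {drift:.2f}")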

Instead, choose a ‘raw’ format if your camera offers one. This format retains the maximum possible information. Larger cameras all offer it, and even phone cameras are beginning to offer the option. You may need to use the manufacturer’s software to convert the raw format to a loss-free format, although some editing programs can handle raw files directly. During the editing process, you can certainly use the editor’s native format; it should retain all the original information in the image, and possibly some information about the editing workflow. For output to other programs I recommend PNG. This format has lossless compression and is widely accepted by web services such as WordPress and Facebook.


References

Overview 4 – Photography D – Exposure
Overview 5 – Post-Processing