Monday, October 3, 2011

High Fidelity Pictures

Today, I will discuss some exciting innovations in the world of photography. My interest was sparked by a recent article in The Economist titled Cameras Get Clever, which described some of the leading-edge developments in the field. Among them was a technology called high dynamic range (HDR), which can enhance your everyday pictures by overlaying three separate shots and then using the processing power of the camera to take the best exposure from each of them.

Basically, a normal shot will have areas that are either too dark or too bright because the camera has to set the exposure as a compromise across the entire image or a selected area. With HDR, the camera takes three separate shots within fractions of a second - one exposed for the high tones, one for the low tones, and one for the middle tones. Then, the camera overlays these three pictures using algorithms that combine the best exposure from all three shots. The result is an image with a much better balance of tones and colors than a single shot based on a compromise. Or is it?
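The underlying idea is often called exposure fusion, and it can be sketched in a few lines. The toy example below is my own illustration, not Apple's actual pipeline: it weights each pixel of each bracketed frame by how close it is to mid-gray, so that well-exposed regions dominate the blend.

```python
import numpy as np

def fuse_exposures(shots, sigma=0.2):
    """Blend bracketed exposures by weighting each pixel toward
    whichever frame exposed it best (closest to mid-gray, 0.5).
    A toy sketch of exposure fusion, not Apple's actual HDR code."""
    shots = [np.asarray(s, dtype=float) for s in shots]
    # Gaussian weight: 1.0 at mid-gray, near 0 at pure black/white.
    weights = [np.exp(-((s - 0.5) ** 2) / (2 * sigma ** 2)) for s in shots]
    total = np.sum(weights, axis=0) + 1e-12  # guard against division by zero
    return np.sum([w * s for w, s in zip(weights, shots)], axis=0) / total

# Three fake "bracketed" grayscale frames of the same 2x2 scene,
# with pixel values normalized to the 0.0-1.0 range:
under = np.array([[0.05, 0.10], [0.02, 0.08]])  # exposed for highlights
mid   = np.array([[0.40, 0.55], [0.20, 0.60]])  # compromise exposure
over  = np.array([[0.90, 0.98], [0.70, 0.95]])  # exposed for shadows
result = fuse_exposures([under, mid, over])
print(result)  # each pixel is pulled toward its best-exposed frame
```

Real implementations are far more sophisticated (they fuse at multiple scales and also weight by contrast and saturation), but the core trade-off is already visible here: the fused image sits between the extremes, which is exactly why it can look flatter than any single well-chosen exposure.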

Well, it was easy to test, since the standard camera app built into the iPhone has the HDR feature today. Not many people know it, but there is a button at the top center of the screen that turns on HDR, which results in two saved pictures for each shot. One is a “normal” picture based on the compromise exposure, while the other is improved by HDR. Here is my test, using our dog as a model:

This is a 'normal' picture with a compromise-based exposure.
This is the same picture taken with high dynamic range (HDR).

Now, which shot is better? The HDR shot clearly has a better balance. The dog’s face is not just black and white; it has some tonal depth (a range of shades). That comes at the cost of contrast and color richness, which we can see in the normal picture. The problem is that on a bright sunny day, the colors really were very bright and the contrast was very strong. And the dog is black and white, not gray. I checked.

That leads me to my point. With all the power of modern cameras and post-processing software such as Photoshop, Aperture, iPhoto, or Picasa, to name just a few, who is to decide that the picture I take should have more depth in the low or high tones? Why are the colors all wrong in artificial light? Why are shots on a beach often overexposed? I understand all the technical issues behind it, but what I want is a picture that looks exactly the way reality did.

Back in the late 60s, the electronics industry came up with the notion of high fidelity (hi-fi), which meant that the music you heard from your record, tape, radio, or CD was supposed to sound exactly as it did in the studio. What I want is hi-fi for pictures: the assurance that my picture will look the way I see the real world in that very moment.

Sure, there will be artists - and consumers - who want to distort reality for special effect, just like there are artists who feel they have to add a dramatic sky to every picture. That is the right of any author, and it should always remain that way. But 99% of all pictures taken are not meant to be art. They are meant to capture reality visually. The real reality. My tiny little (yet incredibly powerful) camera is full of features that can alter the picture - from color accent to a fish-eye effect. But there is no button for an “authentic picture”.

Go ahead and test the HDR functionality on your iPhone. Features like HDR are important because they allow us to push the boundaries. Some of the innovations in the world of photography are just incredible and I can’t wait to use them. When I read the article mentioned above, I got very excited about Lytro and other technologies. The impact of these technologies could be amazing. Just like the change that digital cameras brought upon us when they replaced film.

As for my test above, I prefer the “normal” picture because it looks more like the actual scene I remember.

1 comment:

  1. HDR is just the start, especially on the iPhone. To do HDR right, you really need three or more images overlaid on each other to get a full tonal range. Two images don't get it all, so you don't have the opportunity to get closer to that "authentic picture". I have a friend who shoots 7 images when doing HDR and then puts them together with Photomatix. The results can be stunning.

    Still, the real answer is for one image to capture ALL available data, which can then be automatically combined into an "average" image that can still be manipulated as needed. Today, if the highlights are washed out or an area is too dark, there is no real way to salvage it. There are such plans in the works, and hopefully at some point the technology will get cheap enough to make its way into a cell phone camera. Until then, HDR is about it.
    P.S. Check out Photomatix, pretty cheap and does a very nice job combining multiple bracketed images into one.
