How Much Dynamic Range Do You Need?

If you thought last week's discussion was heated, this one will probably contribute to global warming. This week's image, by the way, was taken with a Nikon V1.

By my calculations, I needed a minimum of three more stops of dynamic range to fully open up the shadows on the left side, even more to get fine detail on the shady right side. Even then, given the low light and the small sensor, I'd be fighting with not enough electrons.

Or maybe not. I've deliberately left this rendering dark for a couple of reasons. First, I want to provoke a discussion of dynamic range;~). Second, it was dark. Not only is there no internal lighting during the day in most of this Quito church, but it was late in the day and the sky was full of menacing rain clouds. The feeling you got inside the church was pretty much what the designers probably intended: look up at the stained glass and the light from above. Thus, I've tried to express that in my post processing.

But let's say I had a camera that gave me unlimited dynamic range. What would happen then? I'd still have to post process, because our output media (prints, displays) have limited dynamic range capability. Anyone who talks about wanting more dynamic range also has to confess that they're a tonal rearranger. 14 stops of dynamic range rendered straight will look very flat in a print. Post processed correctly, it will look, well, dynamic.

So that brings me to this: the V1 is a small-sensor camera with little dynamic range, right? This image, though, proves it has more than enough for very tough subjects. Say what? Well, that wall on the right? It's post processed +3 stops and then carefully brought down in tonal value to where I want it. The stained glass you're looking at? Post processed -2 stops with 100% recovery. So whatever the base dynamic range of the V1 in a regular conversion, I've actually extended it 5 stops here via my post processing. And then, in order to recreate the visual impact of a church interior lit only by ambient light coming through stained glass, I actually took a lot of that dynamic range and either removed it in places, or moved it more than a stop in tonal position.
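The stop arithmetic involved is easy to sketch: on linear raw data, one stop is a factor of two, so a +3 push in the shadows and a -2 pull in the highlights together span 5 stops of tonal repositioning. A minimal illustration (the numbers are hypothetical, not taken from this image):

```python
import numpy as np

def push_stops(linear, stops):
    """Apply an exposure move of `stops` EV to linear image data.
    One stop is a factor of two in linear light; values are clipped
    back into the displayable 0-1 range."""
    return np.clip(linear * (2.0 ** stops), 0.0, 1.0)

# Hypothetical linear values: a deep-shadow wall and bright stained glass.
shadow_wall = np.array([0.01, 0.02])    # near-black in a straight render
stained_glass = np.array([0.90, 0.95])  # close to the clipping ceiling

lifted = push_stops(shadow_wall, +3)    # +3 EV: 0.01 -> 0.08
pulled = push_stops(stained_glass, -2)  # -2 EV: 0.90 -> 0.225
```

The lifted and pulled regions can then be blended back at whatever tonal positions the final rendering calls for.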

Is there noise? Surprisingly, not really. I didn't use any noise reduction at all in the conversion or post processing. There is a slight blockiness to the deep shadow detail that I've had to spend some extra time working on, but in a 24" print of this I doubt anyone would really expect it was an image taken with a small sensor camera.
Great stuff +Thom Hogan. All about how you use what you've got. You tried the F-mount for the V1 yet?
Looks like you are standing there in the shadows - nice. Just to clarify: is this an HDR composite, or did you get the dynamic range you needed out of a single shot?
Great shot with good exposure. So much for HDR. Well, I think an HDR should have as much dynamic range as the photographer saw in the scene, or as much as lets the scene be rendered "natural plus". As in a room shot with the windows exposed "right", or the like...
Only just discovered you on Google+ and here I am straight away reading a very useful discussion. Thank you - I look forward to reading much more of your stream. I have never yet felt the urge to meddle with HDR - I've found most of what I need in one raw image.
Personally, I think that the present crop of digital cameras captures the right amount of DR, around 12 stops. However, I think the real discussion point that Thom alludes to, or skirts around, has to do with composition and contrast. Having a good tonal range in an image, darkness to light, helps make it interesting. (It is what makes Thom's image here interesting.) Sometimes it helps to have subtle detail in the shadows, but too many images these days are flattened out via HDR, making them less than appealing.

What I am waiting for from digital camera makers is a sensor that responds to light in a non-linear, film-like manner, one that holds the bright or dark elements of a scene better. I don't think this is a DR issue.
In the extreme example of "unlimited" dynamic range, I would be incredibly happy- not so that I could use the full dynamic range of a scene, but so that I could use any shutter speed and aperture combination for creative effect and combine that with whatever exposure information supported my creative vision. I would definitely be willing to push tones around in a situation like that.

On a more practical level, I don't really have a specific number in mind for the "ideal" amount of DR, but (at least on m4/3) I would still like more. In particular, I find highlight clipping much harsher on the m4/3 sensor compared to my Canon bodies. As Pablo says, "enough to match human vision" would be a nice target.
Your question was: how much dynamic range do you need? Well, clearly much more than what your camera originally captured... I mean, the result from the single exposure frame you captured, without advanced PP, wouldn't look as good as your final picture. Yes, the Quito Basilica is dark, but I bet your eyes didn't see the church as dark as your final image (where you already expanded DR).
+Thom Hogan I'd have to say how much you need depends on what you are trying to accomplish. HDR does have its place and can be extremely natural looking. Pushing the contrast on purpose to focus everything on the windows as shapes and colors could work too. As +Pablo García says, are you documenting or interpreting?
+Earl Robicheaux "Flattened out via HDR." Not typically. Most HDR users typically just go too far with micro contrast. Their dial goes to 11 so they put it there ;~). One of the things that might not be obvious from this image at this size is that I've done the opposite of what most HDR users do: the mid-tones are not really changed. I made only adjustments in the highlights and shadows, and even then, not the typical "squeeze out all bits of contrast" that is typical of many HDR uses.
+Pablo García "Match human vision"? Boring. We didn't do that in film, and I don't see any reason to do it with digital. This is a different discussion ("can you capture reality?"; the answer is no). Moreover, the human "dynamic range system" is an adjust-on-demand system. While we can resolve a sixth-magnitude star and a bright sunny beach in Cancun, we don't do so simultaneously. Our brain processes and makes those adjustments "simultaneous" in appearance, but we're really just fooling ourselves ;~).
The image has a powerful, brooding feel that the dark areas really accentuate. However -- and this is very subjective -- I think the far lower left side is distracting. The featureless black strip just takes up space. For me, it's not required for balance, the column alone would anchor that side. A little cropping and maybe a little perspective correction would improve the image.

Once I push through the black hole, the image is wonderful. The darkness within the arches adds mystery. The eye is then drawn up and across the stone work and down the hall to the round stained glass.

So how much dynamic range is needed? It depends. As input for post processing, I want as much as I can get, but I won't use it all. A 14-stop image with excellent detail from highlight to shadow is going to be a super boring mix of flat midtones fading softly into mediocre highlights and dull shadows. Dynamic, yes, but boring nonetheless. The advantage is not that you can show all the tones; it's that you can choose which tones to use, and how.

I often compress and distort an image's tones by using a levels adjustment and pushing more tones toward the edges. Often this results in losing detail in the shadows and highlights, but it makes the image pop by stretching the midtones.

If I really need more DR, I'll shoot 2-3 shots and combine them.
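A levels move of the kind described above amounts to remapping the input black and white points; anything outside the new points clips, which is exactly the deliberate trade of shadow/highlight detail for midtone contrast. A hypothetical sketch, assuming tones normalized to 0-1:

```python
import numpy as np

def levels(x, black=0.0, white=1.0, gamma=1.0):
    """Remap the input range [black, white] onto [0, 1], with an optional
    midtone gamma. Tones outside the new points clip: shadow and highlight
    detail is traded away for midtone contrast."""
    y = np.clip((x - black) / (white - black), 0.0, 1.0)
    return y ** (1.0 / gamma)

tones = np.linspace(0.0, 1.0, 5)             # 0, 0.25, 0.5, 0.75, 1
punchy = levels(tones, black=0.1, white=0.9)
# The midtone (0.5) stays put; 0.25 and 0.75 are pushed toward the edges,
# and the extremes clip to pure black and white.
```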
+Bram Singleton "Highlight clipping on m4/3." From a technical standpoint, you should be able to capture highlights the same. Perhaps you need to expose differently. Highlights are rarely the problem in digital. Basically you just have to be sure to keep under the ceiling. Of course, when you do, you may find that you don't have the shadows you thought you did. However, that's one thing that's changed a lot in the past decade. You didn't dare underexpose shadows with a D1x or else you got mush. With a D7000 you can underexpose to your heart's content (well, not really, but you can severely underexpose and recover) and keep reasonable looking shadows. That's one of the benefits that lowering read noise via on-board ADC is giving us.
+Paulo Serrão Well, it was indeed that dark, and it looks like that when you walk in off the street ;~). The light level difference--at least on this day at that time--was more than enough that my eyes had to switch between cones and rods, and that takes time. And when the adjustment is done, looking directly at the stained glass will cause your pupils to dilate. It's not that your eyes are actually "seeing" all that dynamic range, it's that your eye mechanism is pulling off rapid switching that the brain "stitches" into the perception of more dynamic range.
+Thom Hogan Do you mind posting the "original" image, resized at the same resolution (OOC JPEG if you have it, otherwise some reasonable default from raw)? I would like to better appreciate what you did in post-processing.
Love what the 'small sensor' cam can capture.

Thank you for the tip on the D1x, +Thom Hogan.
Been wanting to upgrade from the D1h for a while.
I don't get the feeling of this being overly processed... I like it just as it is here. In fact, the way you've got it does not draw the eye to the glass, but more to the arch, both wall and ceiling. The natural tendency, I think, would be for some to focus on the stained glass... but I really like the shadows and depth... playing with that light! ;)
+Thom Hogan "The feeling you got inside the church was pretty much what the designers probably intended: look up at the stained glass and the light from above." Well said and thanks for sharing your thoughts.

The Italian photographer Luigi Ghirri (1943-1992) wrote:

"You see many things. The fascination of the image also lies in finding a balance between what is seen and what should not be seen. It should not be a photocopy of reality. The problem is always the same. There is also an important current of research photography that pursues this extreme definition and precision - seeing everything, absolutely everything, in a homogeneous way, all well balanced, all well graduated; that is its poetics and its line of work. I prefer this continuous questioning of what is seen and what should not be seen, showing how in reality there is always an area of mystery, an area that remains unfathomable to me and that determines the interest of the photographic image. I do not like to see all this as a synonym for depth of vision. I think it is a mechanism of surface structures, and that depth must be sought in other values - values that then also pose the problem of giving space to things."

Finally, a photo I took some time ago: the interior of a church, no stained glass window, no HDR. In post-production I too asked myself what to show and what not:
I wonder how many of you are looking at the image on a calibrated monitor with a color-managed web browser, and assuming that there is an embedded or assumed profile that matches what +Thom Hogan intended when he exported and uploaded. It's pretty much a given that what +Thom Hogan sees in the version he exported and what we see aren't the same thing.
Great image, great post, +Thom Hogan. I would think that since all of this we do is subjective and artistic, "want" is a better word than "need" in the title.
When one incorporates HDR... I feel the mere usage of HDR insinuates that there's a desired effect you're going for that editing other than HDR won't accomplish. To me the word "need" is in line with that desired effect you want to achieve, versus a simple edit. Perhaps? Therein lies the question: at what point does one stop? Well, that all depends.
What do you think of the little V1 so far? I hear it's a lovely camera, even with its small sensor. Great image.

I do notice the difference in dynamic range between my Sony NEX 5 (or Nikon D90) and the Canon S95 or Leica D-Lux 4. The Canon seems to have the least amount. I love the sensor in the Sony (and Nikon), but hate Sony lenses. I love the portability of the Canon S95, and the rendering of the Leica D-Lux 4, but the ISO can only be set so high before the dreaded noise comes in.

I don't know the answer, but it goes along with your post about getting a compact that does what we want with a quality sensor and the ability to shoot in low light without a lot of noise.
Enough DR to shoot directly into a mountain sunrise without blowing out the highlights or crushing the shadows, thus using the post-processing capabilities of a RAW capture to eliminate the need for graduated ND filters? But we've already got great dynamic range on several platforms.

On the D700 that I'm most fond of taking up into the mountains, I often find myself wishing for just one or two more stops, but when I get into post it seems like there's always something to work with. In the few cases where I am absolutely beyond what RAW is going to allow me to work with, two or three frames worth of exposure fusion tuned to fully render both highlights and shadows is an extremely simple solution, and it can be done without horrible midtone mangling.

The limitations of the output media can't be overstated. Wet process, Lightjet, Ultrachrome HDR, all of these tools are working with less dynamic range than the camera. I'm getting set up to work with both carbon transfer and photogravure, and neither of those will give me 14 or 15 stops.
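The exposure-fusion approach mentioned above (distinct from tonemapped HDR) can be sketched as a per-pixel weighted average that favors the best-exposed pixel from each bracketed frame. A simplified, hypothetical single-scale version; production implementations blend across an image pyramid to avoid seams:

```python
import numpy as np

def fuse(frames, sigma=0.2):
    """Naive single-scale exposure fusion: weight each pixel by how close
    it is to mid-gray, normalize the weights across frames, then blend.
    Real implementations blend across a Laplacian pyramid to avoid seams."""
    frames = np.stack(frames)                              # (n, H, W)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * frames).sum(axis=0)

# Two hypothetical exposures of the same scene, one stop apart.
dark = np.array([[0.05, 0.4]])
bright = np.array([[0.10, 0.8]])
fused = fuse([dark, bright])   # each pixel leans toward the better exposure
```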
Ok; I have my own corollary question for Thom and the rest. What is more important to photographers, megapixels or dynamic range? If there were two cameras with the same feature set and the same sensor size, at approximately the same price, but one had substantially more megapixels and the other had approximately 50% fewer megapixels with larger pixels, producing better dynamic range and ISO response, which one would people buy?
+Earl Robicheaux You assume that they will buy only one of them and not both, and that they won't have both with them at the same time. One size doesn't fit all. The average camera buyer won't care; the discerning one will want both and use the one necessary for the situation at hand. Maybe that is out of reach price-wise for an individual, but I think that anyone who progresses to the point where the distinction between the two matters is going to want both.
Depends on the number of megapixels. If it's 12MP vs 24MP, I'll take 12MP with better dynamic range and high ISO capability. If it's 5MP vs 10MP, I'd choose the latter. Once I have "enough" pixels, more doesn't help me much. 10-12 is enough.
+Dan Rode There's actually detail in the area you don't like. In reducing the image for Web display, it's lost a very faint, subtle thing in that area. I suppose if I were processing solely for the Web, I'd bring that up a bit.
+Thom Hogan If you have detail there, I can see why you choose not to jam the column up against the side of the frame. The web can be a nightmare. There is so little control and so much constriction. I wonder how different it will look tonight on a calibrated monitor.
No less than 10, I'd say. I'm not a limit person... haha...
RE: dynamic range. The more the merrier. One can always eliminate data, but it's hard to create data that isn't there. This image is an interesting choice for this question since the image doesn't need that much DR to be successful. If one filled in the shadows it would (imho) ruin the beautifully soft line of detail leading to the stained glass window, which is of course the focal point of the image. In an image like this more DR would actually reduce depth in the image, whereas in other images (your previous post for example), more DR might have enhanced depth. I don't feel going one way or the other is "right" or "wrong" just two different artistic visions. That said, given the choice I'd rather have the option to bring out detail if I wanted it vs being handcuffed to bracketed images which may, or may not, work for a given subject matter.
+Earl Robicheaux My answer has always been "capture optimal data." Optimal will mean different things to different people. To me, pixel integrity is most important, thus DR over pixel count.
Sure, it's adaptive, but I bet your eye saw a lot more than the picture you took, even looking up at the stained glass. What I really meant is: if you can capture all the DR the eye/brain is able to get, then you can do whatever you feel in the final product, but you start with more choices.
How the image is going to be displayed also matters - some mediums can display a higher contrast ratio than others, which affects how much DR you "need" before you need to spend significant time in post.

Also, while I'm happy to spend hours in post for the small percentage of great images, I'd like my filler images to be good enough that I don't need to spend much time massaging them into shape - I'd like to shoot them so that they're as good as they can reasonably be, and a high DR helps with this (as, of course, do many other things). ND grads are a significant faff when shooting, and I wouldn't like to have to use them more because of the limitations of my system.
+Pablo García It might be less than you think, and in some places and times you'd lose your bet. The max dynamic range of the eye is usually stated as 10,000 to 1. That's for high contrast, bright scenes measured in the middle receptors.
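Converting that contrast ratio into photographic stops is just a base-2 logarithm, since each stop is a doubling of light:

```python
import math

# A contrast ratio of N:1 corresponds to log2(N) stops (doublings of light).
eye_static = math.log2(10_000)
print(round(eye_static, 1))   # ~13.3 stops
```

So the oft-quoted 10,000:1 figure for the eye's static range works out to roughly 13.3 stops, in the same neighborhood as current sensors rather than far beyond them.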
+James Findley Uh, no. You have something wrong. If you had a camera that captured 20 stops of dynamic range, you'd be fiddling with EVERY image. This is the thing about HDR: when you've got your multiple image 20-stop 32-bit data set, it looks flat when rendered on your display (and definitely flat in a print). You now have to shape the contrast, both overall and at micro levels.
Well, I love the image as shown. It captures the darkness I've experienced in similar spaces, and it also matches the way I would (try to) finish the image if it were mine. Does it look like there was a lot of PP on my screen? No, as it should be. Do I care, as a viewer, whether it has a lot/little of PP? Not in the slightest. The image is the product.

Reality as a goal is a myth; we've never had it in photography and never will. For most of photography's history, it was a two-dimensional B&W medium representing a 3D color world. Now it's a color medium, but still 2D. The objective is to construct an image with meets some need or goal: "accuracy" in order to impart information about the scene to the viewer; "mood" in order to let the viewer appreciate what the photographer was perhaps feeling at the time. But reality? Reality is so overrated.
+Thom Hogan I already play with the curves of almost every image I process. Tonal rearranging is what gives me an edge over the camera's JPEG engine. More DR (and more bits to capture that DR) means I would have more control. Even if I throw those bits out for many of my images, it's worth it for other images that I get to keep that would otherwise have blown highlights or crushed shadows.
+Thom Hogan Reminds me a little of Ralph Gibson, though I don't recall any church pictures from him. I like the contrast in this image, but then again I still choose to shoot transparency film at some locations. I have yet to have a client want an HDR image in my commercial work. Control of DR is important to me, but not so much a wide DR. Nice image.
+Thom Hogan Er, perhaps I wasn't clear enough? I didn't at any point suggest that 20 stops of DR would be a generally useful feature for most images. It was written as an argument for the levels of DR found in typical DSLRs over the levels of DR typically found in even the better compacts, although you don't always need it for the final image - with the corollary that you need to take into account what medium you're shooting for, as the effective DR of the medium is, in my view, important - which is a point that seems to have been missed.
Don't lenses impose a significant limit on just how much DR is attainable?
Very good point. I started out doing a lot of HDR, wanting to capture all the fine details... And looking back over the past few months, I realized I didn't do a single HDR. One reason is that it's damn difficult to do it right. 99% of the HDR I see has that "HDR look" - and when you have a "look" that is immediately identifiable, that look becomes boring fast ;) There are a handful of photographers who do it well, like Trey Ratcliff, but that takes a lot of skill.

Another reason is that I realized that I don't care about all those little shadow details. I still post-process heavily, but it's not about details, it's more about balance (colours, lighting, impressions I wanna convey). I am somewhat amused by this obsession with dynamic range, and when I see people doing 9+ shots HDR to "capture all the details" - for what? This photo you posted is a good example - I love the mystery in those deep shadows, the general atmosphere of the shot and the framing, the blacks on the left and right edges (though I'd crop a few pixels from the right edge).
Look at any single point for a minute, then stare at a white wall. You'll see the negative, because our eyes increase and decrease the sensitivity of rods and cones according to the hue and luminosity of each element; so, in theory, if you stared at something long enough your receptors would calibrate so that the image was gray. HDR allows greater sensitivity in dark areas and less in bright ones to mimic how we naturally see things, but for the purpose of making a monitor or print with limited range keep both contrast and highlight and lowlight detail. It's cheating and disingenuous, but your retinas do it. Depending on its application and what you're going for, it can flatten an image, destroy the mood, or make it pop.
+Thom Hogan "The light level difference was more than enough that my eyes had to switch between cones and rods, and that takes time. And when the adjustment is done, looking directly at the stained glass will cause your pupils to dilate".
Your eyes' view angle is wider than the camera's: you can see the stained glass and the dark areas at the same time, so your eyes adjust to both and can capture both light situations at once. Your "HDR" still can't capture that dynamic range.
PS: When you look at light (the stained glass) your pupils contract; they dilate when it's dark.
PS2: And yes, I know it's dark; I was there 2 weeks ago and got the same crappy cloudy weather :) Did you climb to the tower? A bit dangerous, but a nice view from there...
How long does it usually take to post-process like this?
+harold yun Took me less than 5 minutes to get my initial version. Later, I went back and spent another 10 minutes because there were some small things I wanted to work on. If I were doing this for gallery view or selling it as a print, I'd probably spend at least 30 minutes on it.
I see your point (we have enough tools to take great photos) and raise it (we always want more, at least in theory). Sure, if you capture 20 stops of data and mush it together, it looks flat - but there's no reason why your camera couldn't capture that much raw data and then render it to JPG with appropriate curves to make it look natural. "Optimal data capture" should include the idea that sometimes, you have a creative goal in mind that exceeds what the "normal" rendering should be. (You've given us examples where you shot with a specific post-processing in mind, such as B&W, local contrast adjustments, even IR photography.) In order to accommodate that, we need cameras that sample enough digital data to allow tweaking without overly visible artifacting.
More pragmatically: Sure, I'd much rather shoot great shots OOC 99.99% of the time and not spend time post-processing later, but while I'm doing that, I'd also love the flexibility to potentially recover highlights and shadows on the 0.01% of the shots that I don't expose optimally ;-). Also, if I'm shooting moving subjects, multi-image HDR is not an option. Nothing can replace the skill of composition, but isn't exposure forgiveness something that even skilled shooters can (and should) use?
+Yugo Nakai True, we could apply some curves and create JPEGs in camera, much as we do with Active D-Lighting and the new in-camera HDR facility. But now we've stuck 32-bit data into 8-bit results. You'd better hope that the curves were EXACTLY what you wanted.
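The squeeze described here - many stops of scene data collapsed into 8 output bits - is essentially a fixed tone curve baked in at conversion time. A hypothetical sketch of why the curve choice matters (the values are illustrative, not from any real camera):

```python
import numpy as np

def to_8bit(linear, curve):
    """Collapse linear scene data to 8 bits through a tone curve.
    Whatever tonal distinctions the curve discards are gone for good
    in the resulting JPEG."""
    return np.round(np.clip(curve(linear), 0.0, 1.0) * 255).astype(np.uint8)

# Four hypothetical linear values spanning roughly ten stops.
scene = np.array([0.001, 0.01, 0.1, 1.0])

flat = to_8bit(scene, lambda x: x)                 # straight linear render
gamma8 = to_8bit(scene, lambda x: x ** (1 / 2.2))  # gamma-encoded render
# The linear curve maps the deepest value to 0 (shadow detail lost);
# the gamma curve keeps it distinct from black.
```

If the baked-in curve crushes a tone you cared about, no amount of work on the 8-bit result brings it back, hence the "you'd better hope the curves were exactly what you wanted" problem.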
+Anthony Beach A lens might change DR from edge-to-edge due to vignetting in the corners, but they'd do that regardless of the underlying DR of the sensor. A few very old lenses may have issues with telecentricity, but then it's going to be more about microlens and sensel design than underlying DR.
I have encountered veiling flare even when not shooting into the sun or a direct light source (bright skies and reflections off the water, for instance) that, when overexposed by two or three stops, destroyed shadow detail.
Excellent discussion, thanks to all. It does make me want to up my post processing game (I've learned CNX and PS CS5 layers by T&E!!). Are there any recommended books, websites or blogs that have best practices for handling dynamic range?
+Thom Hogan Actually, I meant shooting RAW+JPEG, which is what I usually do. I mostly use the JPEGs but if I need post-processing (occasionally to rescue an exposure mistake, but more often because colors need significant tweaking, or more importantly, because the in-camera JPG processing guessed poorly), I really want as much RAW data as possible. And wouldn't it be great to have more clean stops of DR in your RAW files?

Now, I expect you're going to analogize to amateurs wanting 36 MP so they have the option to occasionally crop small and retain detail. ;-) But exposure is different, because there isn't that perspective issue, and also because with moving subjects, you only get one frame, often hastily shot. Yes, maybe there are some high contrast moving scenes where an experienced shooter won't even attempt to capture the DR visible to her brain+eye, but a high-DR camera could allow those shots to succeed with a single frame. Even if the JPEG isn't set to do a single-frame HDR, at least you could have the RAW file available to create it after the fact.
+Yugo Nakai This is another thing that the camera makers just don't get. The problem is that if you optimize for the JPEG, you deoptimize the NEF. And vice versa. Why can't we have both?
Will we get high-end cameras with 16-bit RAW? That would allow us to push shadows and achieve greater dynamic range. I'd prefer a 16-bit 12MP camera to a 14-bit 16MP or 36MP one.
+Jose Vigano You can put as many bits into a system as you'd like; the question is whether that gains you anything. As it currently stands, there'd be no real advantage in creating 16-bit raw files with current DSLR sensors. Right now 14 bits is about right for the sensors. Perhaps if we increased efficiency or well size, or had spillover wells, or something new happened technically, it would be worth adding bits.
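The bits-versus-sensor point can be made concrete: a sensor's engineering dynamic range is roughly the base-2 log of full-well capacity over read noise, and raw bits beyond that mostly encode noise. With hypothetical but plausible numbers:

```python
import math

def sensor_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: the ratio of the largest
    recordable signal (full-well capacity, in electrons) to the read
    noise floor, expressed as a base-2 log."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical DSLR-class sensel: 60,000 e- full well, 4 e- read noise.
dr = sensor_dr_stops(60_000, 4)   # just under 14 stops
```

On numbers like these, a 14-bit raw file already covers the usable range; a 16-bit container would add two bits of noise, not two stops of dynamic range.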
My view is that an image should look the way you want it to look. Dynamic range, in and of itself, is a good thing, but it must take a back seat to the look you're after.