Quote Originally Posted by DrPablo
Bit depth has nothing to do with dynamic range. It only determines the number of intermediate tones between the extremes.

If a sensor can record detail over a 1000:1 brightness range (approx 10 stops), then increasing the bit depth from 24 to 32 to 48 to 96 bits is not going to change the upper and lower limits of that range -- it will only change the number of intermediates (see the sketch below).

To get a higher dynamic range, you need a physical sensor that is responsive over a greater range of light. Once you've accomplished that, you'll need higher bit depth to accommodate all the data -- but it's the sensor and not the bit encoding that makes this possible.

The 32-bit files in HDR are necessary because each pixel holds the combined R/G/B information from 5, 8, 10, or more individual captures. So you need the extra bit depth just to hold all that data. But it's not the bit depth per se that creates the higher dynamic range -- it's the content of that pixel information that does.
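
To put the ratio-to-stops arithmetic and the "more bits, same endpoints" point above in concrete terms, here is a rough Python sketch (the 1000:1 figure comes from the example in the quote; the function names are just mine for illustration):

[CODE]
import math

def ratio_to_stops(contrast_ratio):
    """Convert a brightness ratio (e.g. 1000:1) to photographic stops."""
    return math.log2(contrast_ratio)

def encodable_levels(bits_per_channel):
    """Number of distinct tonal levels a given bit depth can represent."""
    return 2 ** bits_per_channel

print(ratio_to_stops(1000))   # ~9.97, i.e. roughly 10 stops -- fixed by the sensor
for bits in (8, 12, 16):
    # More bits means finer steps between the same two endpoints,
    # not a wider range of recordable light.
    print(bits, "bits ->", encodable_levels(bits), "levels within those ~10 stops")
[/CODE]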
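
And since the 32-bit point is really about the merge, here is a very simplified sketch of how several bracketed captures can end up in one floating-point pixel. Everything here is an assumption for illustration: the frames are treated as already aligned and linear (no tone curve), the data is synthetic, and real merge algorithms (Debevec-style weighting, for example) also handle clipped pixels and the camera response curve:

[CODE]
import numpy as np

def merge_exposures(frames, exposure_times):
    """Crude HDR merge: scale each linear frame by its exposure time and
    average, giving a 32-bit float radiance estimate per pixel."""
    acc = np.zeros_like(frames[0], dtype=np.float32)
    for frame, t in zip(frames, exposure_times):
        acc += frame.astype(np.float32) / t   # longer exposure -> divide by a larger time
    return acc / len(frames)                  # float32: the range comes from the captures

# Toy usage: synthetic data standing in for three bracketed shots.
rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3)).astype(np.float32) * 100.0   # "true" scene radiance
times = [1/100, 1/25, 1/6]                                  # shutter speeds in seconds
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]      # each capture clips differently
hdr = merge_exposures(frames, times)
print(hdr.dtype)   # float32 -- the "32-bit" HDR file holds what the captures recorded
[/CODE]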
I am well aware of this. However, I think it is pretty obvious that digital sensor manufacturers wouldn't create a 32-bit chip unless their sensors had the dynamic range to take advantage of it.