Sensitometry and fog
This might just be a question for PE, but figured I'd throw it out there anyway. Given a specific film, hold time, developer, CI etc. etc., how do we know base fog (measured at zero exposure) is constant/fixed as exposure/density increases?
Because anytime you develop unexposed film for a specified period of time, the base + fog doesn't change.
It's when you develop less/more time, or expose the film in a way that can affect the entire sheet, that you start to see changes in base + fog, right?
Hi, I'm not sure that it makes any difference, since it's the total density that matters for any use I can see.
I think it's clear that a sub-visible exposure WILL affect the result of any following exposure, as this IS equivalent to "flashing." I don't think there is any simple way to know that this is going on, so the rule is to be sure no significant light is present during the entire process.
I know that you must have had a reason for asking, if you care to share.
I wonder too... Maybe my answer didn't help your question.
Perhaps you are wondering if ALL the density that accumulates for exposures above zero is above your baseline base+fog solely because of the exposure. (All image-forming exposure)
Or maybe you are wondering if the base+fog might grow due to increased developer activity because the exposure caused increased local development. (Some infectious development effect)
This could be the start of a long thread. My empirical data suggest that base fog in old film does not improve sensitivity (the way flashing or pre-exposure might). In fact, in all the expired/fogged film I have tested, the fog seems to eat away at the toe and make the film slower.
Originally Posted by Michael R 1974
However, fog level is also dependent on degree and type of development. So, one might wonder about a very active developer used to produce super speed but which also gives a high dense fog level. Would the high fog negate any speed benefit of super-active development? Apparently one CAN have very high fog and increased speed as in this example:
Dotted vertical line is the fractional gradient speed point. Delay is the time from mixing the developer; it lost activity quickly after mixing, so time-zero gave both the highest speed and highest fog.
JOURNAL OF THE OPTICAL SOCIETY OF AMERICA
An Evaluation of Film Speeds Obtained with Kodak SD-19A Developer
Last edited by ic-racer; 09-11-2013 at 11:50 PM.
An interesting question. Of course the model sets emulsion fog as a constant. This is in contrast to developer fog, which increases with development. If the film is exposed to create an image and is not developed, the activation sites (latent image specks) will migrate about, destroying the image and producing an overall increase in fog. This is ensured by the Second Law of Thermodynamics. But this would take a long time, and the time would be shortened by higher temperatures. That is why exposed film should be kept cool before development.
A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral.
~Antoine de Saint-Exupery
Thanks everyone for responding so far. Interesting points. Where I'm coming from on this: any sensitometric study of a film/developer combination, whether ISO, Zone System, etc., carries an implicit (or explicit) assumption that whatever we measure as base+fog density can be subtracted from all exposure densities to determine net density (or image density). In other words, suppose you plot a characteristic curve for a film. Base+fog density is assumed to be constant all the way across. In a Zone System-style test, we are usually told explicitly to subtract this base+fog density from all exposure densities. In the ISO triangle the subtraction is not an explicit procedure, because we work with densities above base+fog: the speed point sits 0.10 above fb+fog, and the criterion calls for a rise of 0.80 above the speed point at an exposure 1.30 log H higher. But there is still an implicit assumption that the delta D of 0.80 is "usable" or "imagewise" (pick your term) density, meaning fog is assumed to be constant.
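To make that implicit subtraction concrete, here is a minimal Python sketch. All the curve numbers are invented for illustration; it just reads fb+fog at the zero-exposure end, subtracts it everywhere as a constant, then locates the ISO-style speed point and applies the contrast check.

```python
import numpy as np

# Invented characteristic-curve data: relative log exposure vs gross density.
log_h = np.array([-3.0, -2.7, -2.4, -2.1, -1.8, -1.5, -1.2, -0.9])
gross_d = np.array([0.30, 0.32, 0.40, 0.55, 0.75, 0.95, 1.15, 1.35])

fb_fog = gross_d[0]          # reading at (effectively) zero exposure
net_d = gross_d - fb_fog     # the assumption: fog is constant everywhere

# ISO-style speed point: the exposure where net density first reaches 0.10.
h_m = np.interp(0.10, net_d, log_h)

# Contrast criterion: 1.30 log H above the speed point, the curve
# should have risen a further ~0.80 in net density.
d_hi = np.interp(h_m + 1.3, log_h, net_d)
delta = d_hi - 0.10

# ISO arithmetic speed is 0.8 / Hm, with Hm in lux-seconds.
iso_speed = 0.8 / 10**h_m
```

Note that every net figure here is only as good as the constant-fog assumption: if fog varied with exposure, `net_d` would silently absorb that variation.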
Said another way, we know base fog (zero exposure) for a given film type can sometimes vary with age, hold time, developer and development CI. Can it also vary with exposure?
Example - suppose you do a sensitometric study of FP4 developed in ID-11. You measure fb+fog density to be 0.30. Choose any other exposure value on the curve, say one that produced a gross density of 1.50. Is the base+fog component still 0.30 at that gross density? Are we correct in assuming the net density at that point is 1.20? Then change the developer to something with different developing agents and/or a stronger alkali and repeat. What then?
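The arithmetic behind the FP4/ID-11 example is trivial, but spelling it out shows exactly where the assumption bites (the 0.35 figure below is purely invented to represent the hypothetical exposure-dependent fog):

```python
# Hypothetical numbers from the FP4/ID-11 example: fb+fog reads 0.30,
# and a patch on the curve reads 1.50 gross.
fb_fog = 0.30
gross_d = 1.50

net_d_assumed = gross_d - fb_fog     # 1.20 -- the usual assumption

# The question in the thread: if fog itself grew with exposure
# (0.35 here is invented for illustration), the assumed net density
# would overstate the true image-forming density.
fog_at_exposure = 0.35
net_d_actual = gross_d - fog_at_exposure
```

A densitometer can only report the 1.50; nothing in the single reading distinguishes the two decompositions.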
The emulsion could be a variable, and the developer could be also. We know certain developing agents and/or formulas have a tendency to "discriminate" better than others between exposed and unexposed silver halide grains. Is it possible that in areas of high exposure, where a lot of reduction is occurring, more unexposed grains might be "infectiously" developed?
Granted, even if this does happen, the differences are probably very small, and I don't know how you could measure it anyway. So yes this is probably unimportant in the grand scheme of things, but I'm still curious. PE once told me (if I'm remembering this correctly) that when they tested films etc. at Kodak they always plotted gross density.
Does it matter? If the fog somehow contributed to an increase or decrease in a way proportional to exposure it's just another factor in the curve. In effect by measuring the curve you are taking all such effects into account (if they exist at all).
It doesn't matter because the growth of fog with increasing density is probably small (if it exists at all). But I don't agree this effect (again, if it even exists) is taken into account when the curve is plotted. If the plotted curve represents gross density, then fog is included everywhere. No problem there. But characteristic curves are very commonly plotted showing net density, where a constant fb+fog density is subtracted from all the measured densities. People calculate contrast, etc. this way.
But if you subtract fb+fog based on a reading of the clear area from all the densities, any variation of fog with density will still be left in the curve. So I don't see how it matters whether you use net or gross density as far as any theoretical fog variation with exposure is concerned. In fact, I'm not sure how you'd ever be able to figure out whether this effect exists, since all we can measure is the final density in response to exposure.
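That last point can be illustrated with a toy calculation (all numbers invented): if some fog grew with exposure, subtracting a constant fb+fog reading would leave that growth in the "net" curve, where it shows up only as a slightly inflated gradient rather than as anything separately measurable.

```python
import numpy as np

log_h = np.array([-3.0, -2.0, -1.0])     # relative log exposure
image_d = np.array([0.00, 0.60, 1.20])   # "true" image-forming density
base_fog = 0.30                          # constant fb+fog component
extra_fog = np.array([0.00, 0.02, 0.05]) # invented exposure-dependent fog

gross = base_fog + image_d + extra_fog
net = gross - base_fog       # the usual constant-fog subtraction

# The exposure-dependent part survives the subtraction and slightly
# inflates the measured average gradient:
grad_image = (image_d[-1] - image_d[0]) / (log_h[-1] - log_h[0])  # 0.600
grad_net = (net[-1] - net[0]) / (log_h[-1] - log_h[0])            # 0.625
```

The densitometer sees only `gross`, so the `image_d`/`extra_fog` split is unrecoverable from the curve alone, which is exactly the measurement problem raised above.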