Average Gradient Methods
Average gradient can be a helpful tool. A single number describes the contrast of the film processed for a specific time and how it will respond to exposure: it is a description of the film's exposure input to density output. A collection of these gradient values obtained from a few development tests can be applied again and again to any photographic process, from traditional silver and platinum printing to digital scanning and printing. All that is required is to determine the conditions under which the film will be used and plug them into a simple equation. Then find the processing time from the film test that produced the required gradient and you're good to go. There is no need to run further film tests if the printing or shooting conditions change; all that is required is to recalculate.
There are a number of different techniques and variations used to find a film's average gradient. The three principal methods are Gamma, Average Gradient or G Bar, and Contrast Index. The attached article, “Contrast Measurement of Black and White Materials,” and the attached paper, “Contrast Index,” discuss the different methods as well as their strengths and weaknesses.
Gamma uses the straight line portion of the film. Fuji and Agfa use Gamma. Ilford’s Average Gradient, sometimes written as a capital G with a bar over it, measures 1.50 log-H units from 0.10 Fb+f. And Kodak’s CI uses two arcs 2.0 log-H units apart (see attachment).
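As a rough sketch of how one of these definitions turns curve data into a single number, here is the Ilford-style G bar measurement in Python. The curve data and the 0.20 Fb+f are invented for illustration; only the 0.10 over Fb+f start point and the 1.50 log-H interval come from the definition above.

```python
import numpy as np

def g_bar(log_h, density, fbf, interval=1.50):
    """Ilford-style average gradient: slope of the chord from the point
    0.10 above Fb+f to the point `interval` log-H units further right."""
    d0 = fbf + 0.10
    h0 = np.interp(d0, density, log_h)             # log-H where curve reaches d0
    d1 = np.interp(h0 + interval, log_h, density)  # density 1.50 logs to the right
    return (d1 - d0) / interval

# Invented curve: Fb+f of 0.20 with a straight-line slope of 0.6 above the toe
log_h = np.linspace(0.0, 3.0, 301)
density = 0.20 + np.maximum(0.0, 0.6 * (log_h - 0.5))

print(round(g_bar(log_h, density, fbf=0.20), 2))  # → 0.6 for this ideal curve
```

On a perfectly straight curve the chord slope equals the true slope; the method only starts telling you something different on long-toed or shouldered curves.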
What it all comes down to is which method produces the best results in the greatest number of situations with the greatest number of film types. Since each method is an average of the film's gradient, how it averages is key: what part of the film curve should and shouldn't be measured? For example, would a gradient measured from the lowest point on the toe to high up into the shoulder produce an accurate picture of the film because it is representative of the whole range of the curve, or would measuring an area that isn't going to be used in most shooting situations produce unrealistic results?
The attached Graph A is of two film curves – a long toed curve (A) and a medium toed curve (B). The point where each film is measured will produce a different value. My programs use two different methods automatically and have an option for a third. According to a variation of the Contrast Index method that I use, the two curves in Graph A have identical CIs.
Graph B shows a number of the different methods, including the Zone System’s Negative Density Range method.
So what would be the best method and why? Is there one? Does each method work equally well at different levels of contrast? What are the factors that must be considered?
I use the Contrast Index meter on the left, printed on overhead transparency material, and lay it on top of my paper charts.
One advantage of this meter is the wide range of films accommodated without changing the setup of your test equipment.
At speeds of 400 or more, my thinnest densities are over 0.1 when I use the 10^-2 exposure setting.
The left-hand arc hits part of the curve anyway, so I can get a reading almost all the time.
The other methods, which require a reading at 0.1, will force you to adapt your equipment.
I would imagine that, for a normal user who mostly uses one process, the best method is not to use these average gradient methods at all. Better to just consider SBR as a function of development time, as Phil Davis suggests. All the average gradient methods will produce errors in extreme situations, either by neglecting the toe characteristics or through premature shouldering on low contrast negatives.
Beyond the Zone System uses an average gradient method. As this thread is about analyzing the different methods, how does the BTZS method work and what are its strengths? It's been a while since I've read the BTZS specifics; I have a copy of the fourth edition open now.
In BTZS you start, as always, by determining your desired negative density range (DR) and finding the appropriate subject brightness range (SBR) for your film curve. Usually you will convert the SBR to stops to facilitate your metering. Then, as you gather data for many different development times for the same film, you plot SBR as a function of development time. The strength of this method is that you don't do any straight-line fitting, as you would when using an average gradient method for an a priori given DR (according to some standard). The obvious weakness is that the process is designed for one specific DR, so if you work with vastly different printing methods you have to analyze your core data more often. The average gradient methods (CI, G bar) are, to my knowledge, really designed for data sheets and the like, not for the individual user.
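The plotting step can be sketched in a few lines. All the numbers below are made up for illustration; the point is just that once SBR versus development time is tabulated, the time for any metered SBR falls out by interpolation.

```python
import numpy as np

# Hypothetical test results: each development time accommodates a certain
# SBR (in stops) for the one target DR -- the numbers are invented.
dev_times = np.array([5.0, 6.5, 8.0, 10.0, 13.0])   # minutes
sbr_stops = np.array([9.5, 8.3, 7.4, 6.6, 5.9])     # longer dev -> shorter SBR

def time_for_sbr(target):
    """np.interp needs increasing x, so flip the decreasing SBR axis."""
    return float(np.interp(target, sbr_stops[::-1], dev_times[::-1]))

print(round(time_for_sbr(7.0), 2))  # → 9.0 minutes with this made-up data
```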
I'll have to pay my library fines before I can check out BTZS again, but I recall it uses the same data set as I use now: sensitometer-exposed films developed at various intervals and read on a densitometer.
Then the Wonder Wheel or the software determines the development time based on the paper LER and the SBR. Maybe Fred Newman can chime in, but it's not that BTZS avoids using average gradient or CI; it just provides a practical interface that makes it easy for the user to apply the data in the field and in the darkroom.
I didn't really mention BTZS in my original post, and my point was not about metering technique, which is what BTZS really addresses. I was just saying that using average gradients is not the most accurate approach for a common user. Just to be clear, BTZS does not really use average gradient. The disadvantage of the average gradient method (whatever type) is that it is a straight line approximation of a much more complicated curve.
Now that I know you aren't talking about a method specific to BTZS, I don't have to bone up on BTZS and can address the general concept. Basically, what you're talking about can be considered an average gradient method. You have rise and run. Just because you might decide not to calculate slope doesn't mean the principles or concerns are any different. Nor does it mean that you shouldn't be asking a few basic questions to confirm how well any technique works because no method is perfect. So, let me just state before going any further, pointing out any pros and cons of a methodology shouldn't be construed as an attack on any particular method or suggesting that any method is a failure.
For a short toed or medium toed curve, any average gradient method will work fine. Long toed films are another matter: how much and which parts of the curve get measured need to be considered. Take Curve/Graph B from the examples in post #1. The method of using the desired density range and the subject luminance range would be like the Zone System example. For a seven stop subject luminance range, the Δlog-H is 2.10 logs to the right of the 0.10 over Fb+f point. A vertical line is drawn there until it intersects the film curve. The density of the curve at that point, minus the 0.10 over Fb+f density, is then divided by the Δlog-H value to produce the average gradient for that film. In this example, the vertical line intersects the three curves at three different densities, which would indicate the three curves all have different average gradients. However, a different average gradient method concluded that two of the curves had the same average gradient value.
So, where the curve is measured is just as important with the density range method as with any other. The density range method can be very accurate if used properly; that means considering all the variables involved. In Curve/Graph B, there's an additional factor that needs to be considered for the method to reach its full potential. While the subject luminance range may be 7 stops, the Δlog-H range isn't 2.10 logs: camera flare reduces the illuminance range at the film plane. The testing method is flare free, which means an adjustment to the Δlog-H range is required to make the test conform to the type of range encountered in use. For the Curve/Graph B example, with a 1 1/3 stop flare factor, that means measuring not for a Δ2.10, but for a Δ1.70. At that point, Curves A and C have the same NDR.
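To make the flare adjustment concrete, here is a sketch in Python with an invented shouldered curve; the Δ2.10 and Δ1.70 ranges and the 0.10 over Fb+f start point are the values from the example above. On this particular curve the flare-free Δ2.10 reading runs into the shoulder while the flare-adjusted Δ1.70 reading does not, so the two produce different average gradients.

```python
import numpy as np

def ndr_gradient(log_h, density, fbf, delta_h):
    """Average gradient by the density-range method: the density reached
    delta_h log-H units right of the 0.10-over-Fb+f point, over delta_h."""
    d0 = fbf + 0.10
    h0 = np.interp(d0, density, log_h)            # where the curve hits d0
    d1 = np.interp(h0 + delta_h, log_h, density)  # density delta_h logs later
    return (d1 - d0) / delta_h

# Invented curve: straight line of slope 0.55 that shoulders at D = 1.35
log_h = np.linspace(0.0, 3.0, 301)
density = 0.20 + np.minimum(0.55 * np.maximum(0.0, log_h - 0.4), 1.15)

print(round(ndr_gradient(log_h, density, 0.20, 2.10), 2))  # flare free → 0.5
print(round(ndr_gradient(log_h, density, 0.20, 1.70), 2))  # 1 1/3 stop flare → 0.55
```

The shorter, flare-corrected run stays on the straight-line portion and recovers the true 0.55 slope, while the flare-free run is dragged down by the shoulder.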
This is basically the third method I use in the programs. It's a good method when doing detailed analysis. I've found that with short to medium toed curves, the projected negative density ranges from the other average gradient methods are spot on with the negative density range method. The disadvantage of the negative density range method is the log-H range adjustments needed when working with curve families under the more realistic variable flare models (the amount of flare changes with the luminance range, so the steps between each log-H reading won't be consistent).
You are right, these methods are totally equivalent, because you can specify the NDR you want to calculate the average gradient with; these are just two numbers that are the inverse of each other. There is no approximation involved, and hence you are using the data you have worked hard to produce to its fullest. However, when you use some standard others have created, for instance an average gradient with a specific NDR not connected to your process, or CI, you *may* be making inaccurate assumptions about the film curve, the same film curve you have in front of you and really don't need to make any assumptions about. I hope you understand what I mean.
I must admit that I didn't read your previous post "zone system placement" in full detail, but I have a question about the flare. Do you add the flare (fixed or practical model) to the film curves before you analyze them or do you compensate afterwards?
Second question, how do you use your average gradient numbers in practice (in the studio or in the field) ?
We seem to be in agreement.
Flare isn't part of the determination of the average gradient of a particular film curve. The flare models are a way to determine the aim average gradient value to process the film to. For example, an average grade 2 LER for a diffusion enlarger is 1.05 (rise). The average scene luminance range is 2.20 minus 0.40 flare, for a run of 2.20 - 0.40 = 1.80, and 1.05 / 1.80 = 0.58. After processing your family of curves, determining their average gradients, and plotting those values on a Time/Gradient curve, you simply find the time for 0.58. If you decide to print on a condenser enlarger instead, you just change the rise (LER) value to what you've got from your personal paper test (0.95 for an average grade 2 with a condenser), plug it into the equation, and find the new processing time on the Time/Gradient curve. The gradients of the film haven't changed and you don't have to retest the film; only the aim gradient has changed.
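The worked example reduces to a one-line equation; here it is as a tiny helper, using only the numbers from the paragraph above (1.05 LER, 2.20 scene range, 0.40 flare):

```python
def aim_gradient(ler, scene_range=2.20, flare=0.40):
    """Aim gradient = paper LER (rise) / flare-corrected log-H range (run)."""
    return ler / (scene_range - flare)

print(round(aim_gradient(1.05), 2))  # diffusion enlarger, grade 2 → 0.58
```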
Originally Posted by ffg
For the pluses and minuses, you add or subtract 0.30 for each stop change from the scene's luminance range: e.g. 1.30 = +3, 1.60 = +2, 1.90 = +1, 2.20 = N, 2.50 = -1, 2.80 = -2, 3.10 = -3.
The negative is the intermediary step between the subject and the print. The gradient tells you how the luminance range will translate into the density range. As this is done with processing, average gradient is used in the darkroom. However, you still decide in the field how the film is going to be processed: N, N+1, etc. It's when you get back to the darkroom that N becomes CI 0.58, and according to the Time/CI curve example for Tri-X in Xtol that means a processing time of 6:15. For a condenser enlarger, N becomes 0.95 / (2.20 - 0.40) ≈ 0.53 and the processing time becomes 5:30.
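That field-to-darkroom handoff can be sketched as well: the N designation shifts the run by 0.30 logs per stop, and the processing time comes from interpolating the Time/CI curve. The Time/CI points below are invented except for the N pair quoted above (CI 0.58 at 6:15 for Tri-X in Xtol).

```python
import numpy as np

def aim_ci(n, ler=1.05, normal_range=2.20, flare=0.40):
    """Aim CI for an N designation: each +1 shortens the run by 0.30 logs."""
    return ler / (normal_range - n * 0.30 - flare)

# Time/CI curve -- only the (6.25 min, 0.58) point is taken from the post;
# the rest are invented so the interpolation has something to work on.
times = np.array([4.5, 5.5, 6.25, 7.5, 9.0])      # minutes
cis   = np.array([0.45, 0.52, 0.58, 0.65, 0.72])  # CI reached at each time

def dev_time(ci):
    return float(np.interp(ci, cis, times))

print(round(aim_ci(0), 2))       # → 0.58 (N)
print(round(dev_time(0.58), 2))  # → 6.25 minutes, i.e. 6:15
```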