There's basically one speed for a given film / developer combination at any given contrast. Different developers will produce different speeds with different films at a given contrast.
I remember using a seasoned T-Max R/S in a dip and dunk Refrema. It produced close to the ISO speeds at normal development with TMX and TMY, but for many of the conventional grain films, it was too active. It tended to reach normal contrast before the film speed had a chance to increase to a desirable level. HP5-P was a good example of this. For a contrast index of 0.58, the processing time was 3 minutes 30 seconds (very short). The effective film speed only reached around 100 (for a 400 speed film). The tests were made using a calibrated sensitometer, so the results are relatively accurate. The way we fixed the low speed was to reduce the activity of the developer by adding a small amount of acetic acid, which lengthened the development time needed to achieve CI 0.58 and gave the lower densities more time to build. I was eventually able to bring the HP5-P to within a third of a stop of the ISO speed. With a few additional chemicals, I was able to bring it up slightly past ISO speed. In essence, the variance in film speed at the same contrast index was the result of basically three different film / developer combinations.
Most general purpose developers used as one shot developers are designed to produce film speeds very close to the ISO speed.
The best way to test depends on the extent you wish to take it. There's the Zone System approach, but the resulting speeds aren't comparable with the ISO film speeds and the aim negative density range is conceptually misleading. There's the BTZS approach that will only give you relative film speeds, but it's far superior to the Zone System method for making comparisons. There's the trial and error method where you first find the right contrast and then see what EI works best for you. And then there's strict sensitometry using a sensitometer and densitometer, and the correct methodology in the determination of the film speed and film contrast.
If you are really interested in comparing film / developer combinations, my suggestion is to use the approach with the strictest controls and best scientific methodology possible. Otherwise you won't be sure what factors you are actually comparing. This is the approach I took with lab work and research. For personal work, I test a film I like in a general purpose developer I like using a calibrated sensitometer. I determine the range of contrasts, and use the film speed that accompanies them.
"Development does not change the film speed"
That is true if we are talking about using the SAME developer. About 20 years ago the ISO film speed standard was revised to reflect this. You can look up the exact text of the standard if you choose to.
In short, film speed depends on the film AND the developer. Some developers give less speed but also less grain. Other developers like DDX or Xtol can give a true speed increase. For example you can shoot Tri-X at EI 500 and get the same shadow detail and contrast as you would in D-76. It is not a big difference but it is what you get.
This is a very complex question really outside of the scope of APUG to answer. Read a book on photographic chemistry such as Mason, "Photographic Process Chemistry" or Grant Haist's magnum opus.
The choice of developer can determine how well the latent image is developed. Low pH, high sulfite PQ developers give a modest speed increase of 1/2 to 2/3 of a stop. In contrast, paraphenylenediamine developers cause a severe loss in film speed.
A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral.
~Antoine de Saint-Exupery
Sometimes the effects can be due to interaction of the solvent sodium sulfite with latent image specks.
At moderate concentrations it uncovers them, e.g. when added to Rodinal or when present in Microdol 1+3 or Perceptol 1+3, giving a speed increase. At high concentrations and long development times (Microdol or Perceptol 1+0) it may dissolve some of them, leading to a reduction in speed.
But as mentioned before there are many other different effects.
My foremost reason to ask about this is very practical: how to know what speed my current combination of developer and film is best at. And I guess the only answer to that is to do some testing. I think what Matt suggests doing is doable.
FYI my current developer is a home made PaRodinal (other easily attainable developers are a local clone of D76 and privately imported ID11) and usually I use TMax 400, Lucky 100, and when it is available Tri-X.
Just be careful to control your variables when testing or the data could be worthless and you would never know the difference. Also, don't forget to include the testing of the film contrast. In my opinion, there tends to be an overemphasis on testing for film speed and not enough emphasis on determining the film's contrast.
I reiterate that this is a very complex subject. But the choice of developer can (but does not always) affect the effective speed of the film. Consider that film speed is judged by what exposure, given standard development, produces a certain effect on the negative (like giving a density of 0.1 above base and fog). Changing the conditions of development can change the characteristic curve, which can change the exposure needed to produce the condition. Usually the change isn't great. But in some cases it can be very great. The most notable effect is that of silver solvents and developing agents which are silver solvents. They usually produce finer grain, but at the expense of a significant loss in film speed. Sodium sulfite is an interesting exception. In some developers, additional sulfite will reduce the effective film speed. But in developers like D-76, the higher sulfite concentration actually increases film speed compared to development in some of the older standard developers. I'm not sure why, but it probably has to do with a combination of the complex sulfite reactions in MQ developers and the physical development that takes place while using these developers. There are other possible reasons for a change in speed with some more exotic developers. They may react with the emulsion compounds in ways that change the development process a little bit. I can imagine that a high fog level could mask the toe of the characteristic curve and result in a lower film speed, for instance.
When following this discussion, one question comes to my mind... Many of us have done a lot of curve drawing/plotting, but how many of us have checked the variation of the normal developing process?
Say, developing a test film to 'normal' contrast and then another film after a week or a month or a couple of months. And after four or five films are ready, analyze them all and draw the curves. I really mean that the whole analyzing part is done after the test is over.
This should rule out all unintentional corrections that might be done if one knows that there is already difference between results.
How much variation will there be?
This variation will be the real world limit that tells how precise the tests and curves are.
(my guess is that many will make too-exact calculations and assumptions based on the results of film testing. Is the CI 0.57 or does it vary between 0.55 and 0.59? Does the shape of the curve vary? Is the H&D curve toe always exactly the same?)
I agree. How many people do the testing only once, or until they get a result that sounds reasonable and then stop? Of course, there's variance in all systems, which is why scientific testing uses multiple samples which are usually averaged (as with ISO film speed testing). Add to this the tendency for testing errors amongst amateurs. A second test isn't necessarily enough to confirm the accuracy of the testing procedure, but it beats doing it only once.
Something else that most people don't really consider is the accuracy of the testing methodology. For instance, film contrast. Plotting the film and paper curves is just the beginning. There are a number of different methods to define film contrast and film speed. Which method of evaluating the data is most accurate and best represents the type of results to be encountered in normal use? What's the point of testing if the answers produced are wrong?
A key question with testing for contrast is if the portion of the curve being evaluated represents the portion that will be utilized.
Gamma is no longer considered to be an accurate method to define the film contrast. Gamma is derived from only the straight-line portion of the curve, so it doesn't represent the entire area used by the photographer.
Ilford's Average Gradient uses a fixed log-H range of Δ1.50 starting from 0.10 over Fb+f. The question this brings up is whether the Δ1.50 incorporates enough of the film curve to represent working conditions. I believe the 1.50 comes from Jones' use of it with the fractional gradient method. The uncoated lenses back then had higher flare values which would have brought the average luminance range of 2.20 down to around a log-H range of 1.60 or 1.50. This means that under those conditions this method did accurately represent usage, but it isn't valid for the lenses of today. Some use a variation of Average Gradient with an extended log-H range of 1.80 which more accurately represents modern coated lenses. The problem with any fixed log-H length is that it can only accurately represent one set of conditions (usually normal). For longer luminance ranges, it doesn't cover enough of the curve; and for shorter luminance ranges, it covers too much. So even with the extended range, the fixed length limits its accuracy to normal conditions.
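To make the fixed log-H idea concrete, here is a minimal sketch of the Average Gradient calculation in Python. The curve data is hypothetical (a straight line of slope 0.6 above a base+fog of 0.10, so the expected answer is exactly 0.6), and the function name is my own:

```python
import numpy as np

# Hypothetical characteristic-curve samples (log exposure, density).
# The "curve" here is a straight line of slope 0.6 above Fb+f = 0.10,
# so the computed average gradient should come out to 0.6.
log_h = np.linspace(-3.0, 0.0, 61)
density = 0.10 + 0.6 * (log_h + 3.0)

def avg_gradient(log_h, density, fb_f=0.10, span=1.50):
    """Average gradient over a fixed log-H span that starts where
    density first reaches 0.10 above Fb+f (Ilford-style)."""
    d0 = fb_f + 0.10                      # density at the start point
    h0 = np.interp(d0, density, log_h)    # log-H where that density occurs
    d1 = np.interp(h0 + span, log_h, density)
    return (d1 - d0) / span

print(round(avg_gradient(log_h, density), 2))  # → 0.6
```

Changing `span` to 1.80 gives the extended-range variant mentioned above; with a real (toed and shouldered) curve the two spans would no longer agree, which is exactly the limitation being discussed.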
Kodak’s Contrast Index solves this problem by using an arc at a Δ 2.20. As it arcs up, it crosses the film curve further in as the contrast of the film increases. This way it better represents usage as shorter scenes require higher contrast. For a normal scene luminance range of 2.20 and normal flare of 0.40, the CI arc crosses the film curve at approximately the 1.80 range.
Using the negative density range approach might at first seem logical because it represents the negative’s aim results that match the paper’s LER, but the method doesn’t say anything about the film. You have to bring additional factors with it in order for it to make sense, like the assumed luminance range of the subject. Using any of the average gradient methods, it’s the other way around. You know what the film is and then you can apply the specific conditions. It's much more flexible.
Let’s say that a film has an average gradient / CI of 0.58. I can then calculate the resulting negative density range for a number of conditions, including different luminance ranges and different flare factors, and apply it to different papers.
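That calculation is simple arithmetic: gradient times the effective log-H range after flare is subtracted. A quick sketch (the function name is mine; the 2.2 / 0.40 figures are the normal-scene values used throughout this post):

```python
def negative_density_range(avg_gradient, scene_log_range, flare_log=0.0):
    """NDR = average gradient x effective log-H range, where camera
    flare is first subtracted from the scene's log luminance range."""
    return avg_gradient * (scene_log_range - flare_log)

# A CI 0.58 film, normal 2.2 log luminance range, 0.40 flare:
print(round(negative_density_range(0.58, 2.2, 0.4), 2))  # → 1.04
```

Plugging in other luminance ranges or flare values shows how one measured gradient yields many aim density ranges, which is the flexibility being claimed.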
Using a fixed negative density range across the film curve makes it hard to factor in flare and almost impossible to take into account the way flare changes based on luminance range. Average flare increases with higher luminance ranges and decreases with lower luminance ranges, which results in different effective log-H ranges for the pluses and minuses. The fixed density range method also makes it harder to extrapolate results when they fall between curves. With gradients, a Time / Gradient curve can be used for the extrapolation.
The following example helps to illustrate the shortcoming of the negative density method. The Zone System uses a negative density range of 1.25 for a normal negative from a seven stop luminance range that will print on a grade two paper using a diffusion enlarger, while in sensitometry, the NDR is 1.05 for a 7 1/3 stop range. This sounds like the two methods have two different aims, so how can both negatives print on the same grade two paper? The answer comes from understanding the set of conditions and the resulting average film gradient. If you calculate the average gradient (slope) for these two sets of conditions, the gradients are almost identical.
ZS: 1.25 / 2.1 = 0.59
Sensitometry (includes flare): 1.05 / (2.2 - 0.40) = 0.58
In reality, the sensitometric log-H range is not 2.20 but 2.20 - 0.40 (from flare) for a log-H range of 1.80. While both methods will result in identical film contrast because they are developed to the same gradient, only one method represents actual conditions and only one of the density ranges defined by the two methods will be the actual one produced. A grade two paper is defined as having the LER falling within a range from LER 0.95 to LER 1.14 for a diffusion enlarger. This points to the sensitometric values as being the ones that most closely represent conditions of use and the value most likely to be produced by a film with an average gradient of 0.58 / 0.59. Perhaps most importantly, this evaluation wouldn't even be possible using the negative density method.
Now think about how problematic the negative density range approach becomes with approaches that use NDR without the benefit of curve plotting (like the traditional Zone System method).
Film speed methodology is the same way. There are a number of methods yet they can't all be completely accurate. Only one of them can produce the highest accuracy with the widest range of film types. And most people don’t use it or even know it exists (hint: it's not the Zone System method). How many people do you suppose use a given speed testing method without even questioning its accuracy?
Practically any method of film speed determination, if it is meticulously followed, will be sufficient for comparing relative film speeds. It's a completely different matter if you want to produce precise and accurate speeds. Throughout photographic history film speed determination has changed both in method and concept. Attached is a paper, "Emulsion Speed Rating Systems", that has a nice historical overview of monochrome speeds.
Today, there remain two camps. In their purest forms there are the popular amateur method of in camera testing, such as the Zone System method, and the sensitometric method of the ISO standard. Obviously, the more scientific ISO standard has to be considered "the standard" from which to compare, but few people have the equipment capable of adhering to it. This raises the question of whether it is even possible, using what is available on a commercial level, to achieve anything close enough to actual speeds to even make speed testing worthwhile. Also note that film speed, EI, and exposure are not the same thing.
Let's look at some of the technical issues and assumptions that should be considered.
Concepts and Assumptions
Film speed is determined by the exposure required to produce a point of density on the film curve that is then divided into a constant. The use of a constant means that the actual point of density or exposure value isn't intrinsically important in and of itself. One consideration for the determination of a specific aim point of density is for it to be an easily found point (ever wonder why 0.10 just happens to be such a round number and in logarithmic thirds?). But more importantly it has to have a relationship with the important defining aspects of the material and how they relate to the averaged meter exposure. With black and white film, it's the shadow. With transparency film, it's the mid-point.
What is the assumption of the shadow exposure? All film speeds are based on a statistically average set of conditions. Basically, it's Sunny 16 and a 7 1/3 stop luminance range. As the film speed has to relate to the exposure meter and its placement of the exposure, it's necessary to first know what that is and its relationship to the speed point and shadow placement (which aren't necessarily the same thing). The exposure meter will want to make a midtone exposure of 8 / ISO. The constant used in the B&W film speed equation is 0.8. You can calculate the value of the exposure required for a given film speed as 0.8 / ISO.
As you can see, the difference between the metered exposure and the exposure at the speed point is 10x or 1.0 log-H. That's 3 1/3 stops. The highlight falls just a hair over 3 stops above the metered exposure. That only comes to 6 1/3 stops, which leaves an extra stop left over. That stop goes on the shadow side. While the speed point falls 3 1/3 stops below the metered exposure, the average shadow falls 4 1/3 stops below. Why are they not the same?
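The 10x / 1.0 log-H / 3 1/3 stop relationship is just logs and ratios, and can be checked in a couple of lines (ISO 400 is an arbitrary choice; any speed cancels out):

```python
import math

iso = 400                    # hypothetical film speed; cancels out below
metered_h = 8.0 / iso        # exposure the meter aims to place (lux-seconds)
speed_point_h = 0.8 / iso    # exposure at the 0.10-over-Fb+f speed point

log_h_diff = math.log10(metered_h / speed_point_h)  # difference in log-H
stops = log_h_diff / math.log10(2.0)                # same difference in stops

print(round(log_h_diff, 2))  # → 1.0
print(round(stops, 1))       # → 3.3  (about 3 1/3 stops)
```

Since both exposures are the same constant divided by ISO, the 1.0 log-H gap holds for every film speed, which is why the standard can use a single constant.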
Two reasons. The density of 0.10 is only a point of reference. Under the processing parameters stated in the film standard, the speed point of 0.10 falls almost exactly one stop above the minimal point of exposure that will produce a quality print. This point is known as the first excellent print point. From the standpoint of quality, it's important to know where the minimum is, and by defining the absolute minimum point you know where the base is to work from. The actual density isn't important, as density was found not to be the determining factor in the perception of image quality; the gradient of the shadow area was. This point is the minimum useful gradient and is part of the Fractional Gradient Method which is described in the attached paper. The minimum gradient as defined in the Fractional Gradient Method has been found to be the best method of film speed determination because it produces the highest rate of quality images over the greatest range of film types.
The second reason is that while film testing is done using a flare free method, flare exists with the use of a camera. Average flare is around 1 stop to 1 1/3 stops. Flare effectively increases the exposure to the shadows by at least one stop, making the film one stop faster than it would be without flare. Flare brings the shadow exposure from 4 1/3 stops below the metered exposure up to 3 1/3 stops below, which brings it up around the 0.10 density point. Flare also gives a one stop safety factor, not only against errors in the camera exposure, but it guarantees that even under low flare conditions, the exposure won't drop below the minimum gradient point.
To be able to accurately determine the film speed, you need to know the actual value of the exposure and that's not easy. The only way is to use a sensitometer, and even then there are issues as to the best type (I once had a loud disagreement with a low level tech at Kodak because I was using an intermittent sensitometer and he refused to accept the results). A sensitometer has a known repeatable exposure. Film speed can be calculated from that.
Testing with a camera has a number of issues. The f/stop is a mathematical value and does not take into account light loss due to absorption and reflection by the lens elements. True stops or T/stops are determined using an optical bench. It's standard for motion picture lenses to be calibrated in T/stops, not so much for accuracy but for consistency between shots. Shutter speeds can vary between settings as well as between different shutter types. In Zone System testing, one approach is to use a single shutter speed and then make a series of exposures changing the f/stop in 1/3 stop increments. How accurate can those increments be when most 35mm and many medium format cameras don't have 1/3 stop indications? Even if they do, how precise are they, or can the operator be?
According to Zone System testing, Zone I is four stops down from Zone V. As meters don't see in percentages, I'm not going to claim the meter sees 18% or any percentage. We know the meter wants to make an exposure on the film plane of 8 / ISO. We know the statistically average scene falls 4 1/3 stops, and not 4 stops, below the metered exposure, or in this instance Zone V. The 1/3 stop difference already makes it difficult to compare film speeds resulting from Zone System testing with ISO speeds. I tend to believe the scientifically derived 4 1/3 stops is the more accurate figure. Then there's the added problem that the speed point is only 3 1/3 stops below the metered exposure, while Zone I, which is both where the shadow falls and where the speed point sits, is 4 stops below. That's a 2/3 stop discrepancy. The potential inaccuracies of f/stops, stop increments, shutter speeds, and a few other variables give most Zone System practitioners speed results that generally vary between 1/2 to 1 stop below ISO speeds. Let's not forget also that in 1960 film speeds changed. They increased a full stop. This was basically the result of reducing the safety factor by changing the constant which the exposure value is divided into. While ASA / ISO film speeds changed because of a change in methodology, Zone System methodology didn't change and neither did the speeds that result from its testing methods. So you really can't compare the speeds obtained from the Zone System / in camera method and the sensitometric / ISO method. And if you consider the above concept of speed as having a known relationship with the minimum useful point and the metered exposure, then you can't consider the in camera method a way to determine true film speed. It's more a way to find a workable EI for exposure determination.
Isn't flare automatically incorporated into Zone System testing, since it uses an optical system, whereas the ISO method contacts the film and has to factor in the flare? Not really. While an in camera test does have an optical system, there is minimal flare from this kind of test. Most flare comes from the subject and is dependent on the range of the subject. The longer the range, the higher the flare. In camera tests shoot a card with a single tone. That's as short a range as one can possibly get. In addition, even under average flare conditions, little flare reaches the metered exposure point, Zone V. With a single toned subject, such a test can be considered practically flare free.
While the in camera test can give a working EI for the conditions of the exposure system being used, it doesn't produce an accurate film speed or even reliable repeatability.
What about the method of contacting a step tablet under an enlarger? The range of the steps in a step tablet is known, but there is always variation in any system. The actual differences in each step of density can vary slightly. In order to be able to accurately determine the exposure, the actual density of each step must be known. Most consumer densitometers are only accurate to about ±0.01 or 0.02. This can make a difference when calculating film speed, especially if the speed falls around the break point between two different speed ratings. The error can be compounded when you consider it's not only about the readings from the step tablet but also the readings from the film test made from the step tablet. One way to minimize this is to purchase a calibrated step tablet (if they still make them).
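To get a feel for how those reading errors stack up, here is a rough sketch. The ±0.02 density figure comes from the post; the local gradient of 0.4 near the speed point is my own hypothetical value, not a measured one:

```python
import math

def speed_shift_stops(density_error, local_gradient):
    """Convert a density reading error into the log-H (exposure) shift
    it implies, expressed in stops, using the local curve slope."""
    log_h_shift = density_error / local_gradient
    return log_h_shift / math.log10(2.0)

# Tablet reading and film reading can each be off by 0.02 density,
# so in the worst case the errors stack to 0.04:
print(round(speed_shift_stops(0.04, 0.4), 2))  # → 0.33
```

A third of a stop is exactly one step in the standard speed-rating series, which is why a calibrated step tablet can matter at the break points.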
Still, the biggest problem has to do with not being able to accurately measure the incident light from the enlarger. Exposure meters aren't designed to be precise enough for the job, and the enlarger bulb and timer don't produce dependably repeatable results. The best you can get from this method is relative speeds.
Even if it's possible to get hold of a sensitometer with a known exposure value, there's one more thing to consider when determining film speed. The ISO standard has a set of contrast conditions the film must have before speed can be determined. The reason for this parameter is frequently misinterpreted. Its incorporation into the standard is that under those parameters (a density difference of 0.80 over the 0.10 over Fb+f point within a log-H range of 1.30) there is a known relationship between 0.10 over Fb+f and the minimum gradient point, which is where the most accurate speed is calculated from (the original Fractional Gradient Method). That means if film speed is calculated at any contrast other than what is in the ISO standard, the relationship between the two methods no longer exists, resulting in a film speed that isn't accurate. And even if you properly calculate the film speed under the ISO conditions, continuing to calculate film speed using the density of 0.10 simply isn't accurate for extended and contracted development. So while the film speed for normal is accurate, the film speeds for every other contrast aren't. You have to use a different method to calculate speeds for extended or contracted development. For that you have to resort back to the Fractional Gradient methodology or a modified version of it called the Delta-X Criterion method (or a method called the w-speed method).
While none of this means you can't get good exposures using any method you want, as that is about exposure and exposure index and not film speed, these are some of the things to consider when pondering how confident you are in the results from your recent film test.