You can go back and read post #36. There's also a paper I wrote that spells it out in more detail.
Originally Posted by Rafal Lukawiecki
What is Normal.pdf
Contrast Index (CI) was designed to pick points that will be useful on paper. So it "kind of" picks Zone I to Zone VIII.
Originally Posted by Michael R 1974
Using the chart from Stephen and choosing 6, 7 and 8 stops is similar to finding where Zones VII, VIII and IX hit the paper (giving CIs for N+1, N and N-1).
I double-checked by looking at the graph (which includes a Time/CI curve), and it looks like it might be right.
I think we're kind of saying the same thing. CI works fine within a "normal" exposure range. For high-contrast scenes it doesn't tell me enough about the shape of the shoulder, for example, particularly when applying minus development. This is where targeting the paper range can be dangerous.
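For readers following along, CI is Kodak's formally defined average gradient, measured with a specific geometric construction. As a rough sketch only (not the formal Kodak meter), the idea reduces to a density span divided by a log-exposure span between two points on the curve; the numbers below are purely illustrative:

```python
# Rough average-gradient sketch (NOT Kodak's formal CI construction,
# which uses a geometric meter; this is the simple two-point slope).
def average_gradient(d_lo, d_hi, loge_lo, loge_hi):
    """Density span over log-exposure span between two curve points."""
    return (d_hi - d_lo) / (loge_hi - loge_lo)

# Example with hypothetical numbers: a 1.05 density span recorded
# over a 1.80 log-E span gives a gradient of about 0.58.
g = average_gradient(0.10, 1.15, 0.30, 2.10)
print(round(g, 2))  # → 0.58
```

The shoulder problem Michael raises is exactly what this two-point slope hides: two curves with the same endpoints, and hence the same average gradient, can have very different shapes between them.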
Thank you for sharing, Stephen. I have been re-reading this entire thread over the past few days, including your detailed posts and attachments, and the linked articles.
Originally Posted by Stephen Benskin
I think I am trying to solve too many things at the same time: figuring out N and N±1 development times for the negatives in hand, learning about the relationship of CIs to curves and how hard finding the numbers may be, figuring out flare as a very real but debatable factor, reconciling WBM and AA ZS testing approaches with the criticism of them that I found here, fashioning a makeshift sensitometer, making it repeatable enough for now and good enough for a future EI test, and trying to follow it all in my mind with general equations of exposure. I'm sure, with time, I will get there, as that is how I think and process information; in my non-photographic professional life, I have managed to comprehend, and pass on to others, fairly complex abstractions. I like dealing with complexity in a systematic and logical way. In the meantime, thank you for your patience, and I apologise if I have asked about something that you have already explained—I would hate to make you feel that I was wasting your time, as, on the contrary, I am very grateful for it. How I wish these discussions could be more face-to-face and interactive.
Fortunately in Rafal's case, we are sticking to N-1 at most. His current curve crosses the "paper" right at the end of the data and there isn't a shoulder on the graph. The danger in this case is going to be uneven development...
Originally Posted by Michael R 1974
But if that is the danger, your advice to look at the "greater picture" is still good. It may be better to give normal development instead of N-1 and expect to print on a No. 1 paper than to risk uneven development. Or repeat the test with a compensating developer to see the possibilities. Depending on the developer and technique chosen, the shape of the curve may change.
Thank you, gentlemen, for continuing to explain the complexities of tone reproduction to me. Stephen, I have now reviewed your paper "What is Normal" and the other paper that you recommended, "Exposure-Speed Relations and Tone Reproductions", and I have also re-read parts of BTZS and WBM. I plan on finishing BTZS and the other suggested items, and I have ordered a copy of "Sensitometry for Photographers" by Eggleston.
I am puzzled by how the flare and non-flare computations arrive at almost identical aim CIs (Stephen's practical flare model, and the WBM gradients mentioned on page 212), especially as there is, I think, one other major difference in the approaches. Stephen uses the generally accepted LER of 1.05 for Grade 2 paper. This, by ISO definition, uses only a portion of the available scale of a paper, notably not exceeding 90% of the available DMax. WBM carefully points out that the "log exposure range of grade-2 paper is limited to 1.05, but this ignores extreme low and high reflection densities", and it suggests: "We have no problem fitting a negative density range of 1.20 onto grade-2 paper, if we allow the low end of Zone II and the high end of VIII to occupy these paper extremes." WBM uses the figure of 1.2 in the calculation of aim average gradients on page 213.
Perhaps I am wrong in thinking so, but my observations of my own and of other people's prints suggest that the full range of paper DMax is often used and aimed for, and even enhanced further with Se toning. If we took the longer LER suggested by WBM into the calculation of a contrast gradient using Stephen's approach, surely we would come up with a different set of aim CIs. Should we follow the WBM advice, or is flare somehow part of the explanation?
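To make the LER question concrete, here is a hedged sketch of the usual aim-gradient arithmetic: aim gradient ≈ paper LER divided by the log-exposure range reaching the film (subject range minus a flare allowance). The 7-stop scene and the 0.40 flare figure below are illustrative assumptions of mine, not values taken from Stephen's paper or from WBM:

```python
# Hedged sketch of the common aim-gradient relation:
#   aim_gradient ≈ LER / (log subject luminance range - flare)
# The 7-stop scene and 0.40 flare are illustrative assumptions.
def aim_gradient(ler, subject_stops, flare_log=0.0):
    subject_range = subject_stops * 0.30  # one stop ≈ 0.30 log units
    return ler / (subject_range - flare_log)

print(round(aim_gradient(1.05, 7, 0.40), 2))  # ISO-style LER → 0.62
print(round(aim_gradient(1.20, 7, 0.40), 2))  # WBM's longer LER → 0.71
```

Under these assumed inputs the two LER choices differ by nearly 0.1 in aim gradient, which is why Rafal's question matters: whether that gap survives in practice depends on what flare figure, if any, each method folds into its own calculation.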
I am confused by the apparent lack of consistency between testing based on old school, pre-ISO-change-in-1960 ZS, later improvements to ZS tests, WBM, and even Stephen's carefully thought through explanations, and BTZS. And I have not even thought much yet about the impact of enlarger flare.
Noticing the apparent disagreement between experts, in a discipline as old as photography, helps me understand why many experienced photographers and printers recommend not to go too deep into the logic of this matter, and just to rely on the graceful forgiveness of the process. Still, there is a part of me that would like to get my mind around it. Perhaps, in time, I will, or perhaps I was born too late to have a chance.
Last edited by Rafal Lukawiecki; 09-19-2012 at 08:36 AM.
From Minor White's Zone System Manual, 1961/63 edition...
Originally Posted by Rafal Lukawiecki
"The high values of a scene must be distinguished from the "highlights" or White Keys...very tiny spots in the print... require no detail... The presence of highlights or white keys in a photograph with a long series of grays gives richness; or in a long series of textured whites the white keys give vitality."
So it's important to have Black Keys and White Keys on the print; they lie outside the aim points.
Whether you take 1.05 or 1.20 is a matter between you and your "teacher." It may be an important part of the system in Way Beyond Monochrome, so I would hesitate to steer you away from it if you follow Ralph Lambrecht's teaching. I found my LER by serendipity: I had one negative that was hard to print on Grade 2 and another that was hard to print on Grade 3, so I decided my aim was going to be right between these two negatives' characteristics. I didn't want any negatives to be harder to print than those two.
Rafal, there is no right or wrong answer here, and as with CI and other measures, I believe the LER is something to be considered part of the exposure/development decision, not the sole determinant. Said another way, it should be somewhat flexible. Since I'm at least as interested in local contrast as total contrast, I don't target paper grades. I include expected print manipulations (burning, dodging etc) in my negative exposure/development decisions. Simply put, it depends on the scene. All these methods work fine for subject brightness conditions within a range. Outside that range, some flexibility and deviation is often helpful.
Ultimately I'm not sure the various writings/methods are really in conflict. Not only is the context important, it also depends when things were written. Keep in mind materials have changed over time. Films can record longer luminance ranges, and variable contrast papers were not really taken seriously until well into the 1980s or even 1990s. The limitations have changed.
Bill, you make a great point about serendipity helping you find your LER from a couple of negatives. Though I have been printing for a fairly long time, I have only owned a densitometer for just about a month. I bought it with the purpose of doing these tests, specifically to learn more about 320TXP, and longer term, to build a logical understanding of all of my materials, to have more control over them. I like your experience, and I think I will go back to some of the negatives I know well, and I will analyse them with the densitometer, to learn more about what I like, and what I do not like printing.
Feel free to steer me! My only teachers of photography taught me when I was 7 years old. It was very enjoyable, but since I got my darkroom when I was 9, I have had no other teachers, except for one amazing master workshop recently, a great many books, and trial and error. Almost 40 years later, WBM finally pushed me to do the testing, and I am very grateful for that. It must have been the line "The final results are well worth the time commitment of about 8 hours" on page 217. I am, however, a little disconcerted by: reading more than 300 posts about the importance of flare, and the lack of account for it in the WBM method; the lack of precision in the recommendation for making the exposure, and my experiences of that; and the other points raised, such as the choice of a different speed point (as opposed to a different constant) and the less common choice of LER. All of this makes discussing my—arguably less compatible—findings with others harder. I am sure this system works well for those who get to know it well, but I would rather follow a more trodden path, for which I am likely to get more support from those who have tried it successfully. WBM is a wonderful book, and I value it very highly; perhaps the "Customizing Film Speed and Development" part is just one I might have to rely on a little less, but I am glad to have tried it. Besides, I feel it is the best, and the most up-to-date, compendium of practical analogue monochrome.
Originally Posted by Bill Burk
Nonetheless, at present, I am rereading BTZS, I am half-way through chapter 8. At least I will be able to plot my curves, I hope, without having to beg you, Bill, for your services, though I might use R for that purpose.
I don't think I will give up on the Zone System, as I like it very much, but I hope the BTZS lesson of practical sensitometry will help me use ZS in a more measured way, especially for materials that are new to me. Who knows, we might be faced with a need to work with constantly changing materials over the coming years, as the players change, or even exit the market. Any direction you can steer me in will be good learning to help me cope with the new world of analogue photography.
Also, as I read between Michael's various comments, I gather that a more holistic approach to all the components of the tone reproduction system, once assimilated, ought to help me achieve my goals.
For what it's worth, as I am re-reading BTZS, I have just decided to plot my own curves (using R) from the flawed test data which I enclosed in the first post of this thread. Thanks to your comments, especially Stephen's patient demystifying of the concepts, I feel more able to do it without having to ask Bill for his services. This is the resulting set of curves; I smoothed them using LOESS before plotting, and I am glad to say the result looks almost identical to Bill's hand-made plot:
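For anyone wanting to reproduce the smoothing step without R, here is a minimal pure-Python local-regression smoother in the spirit of LOESS (a degree-1 weighted fit with tricube weights at each point). It is a sketch under my own simplifying assumptions; R's `loess()` differs in detail, and the `frac` parameter plays the role of R's `span`:

```python
# Minimal LOESS-style smoother: at each x, fit a weighted straight
# line to the nearest frac*n points using tricube weights.
def tricube(u):
    return (1 - abs(u) ** 3) ** 3 if abs(u) < 1 else 0.0

def loess_point(x0, xs, ys, frac=0.5):
    n = len(xs)
    k = max(2, int(frac * n))
    h = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-9  # bandwidth
    w = [tricube((x - x0) / h) for x in xs]
    # Weighted least squares for the line y = a + b*x.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, xs))
    swy = sum(wi * yi for wi, yi in zip(w, ys))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    denom = sw * swxx - swx * swx
    if abs(denom) < 1e-12:
        return swy / sw  # degenerate: fall back to weighted mean
    b = (sw * swxy - swx * swy) / denom
    a = (swy - b * swx) / sw
    return a + b * x0

def loess(xs, ys, frac=0.5):
    return [loess_point(x0, xs, ys, frac) for x0 in xs]
```

Because the local fit is a straight line, data that already lie on a line pass through unchanged, while densitometer noise on a real H&D curve is damped; a larger `frac` smooths more aggressively, at the cost of flattening the toe and shoulder.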
Reading page 91 of BTZS, the "Troubleshooting the Film Test" section, Figure 7-6d shows a similar situation, where the curves' toes stack up on top of each other this way. Mr Davis helpfully comments: "You'll get curves like this if you test films in the camera (by photographing the step tablet on a light box, for example). These curves are not usable." I suppose I know from you, gentlemen, that the key reason for the curve stacking is flare. I wish I had read BTZS before the test, but at least I have learned something useful my usual way: trial and error.
I am looking forward to retesting, most likely using the flash exposure technique. I hope I get repeatable exposures, but just in case, I will shoot 25 or so sheets, so I can average the readings from each of the five development batches. I need to do that, as I usually develop 5 at a time in my HP CombiPlan. I wonder, however, if I should agitate a little less than usual, perhaps down to only 3–4 inversions every 30s.
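The averaging step is simple enough to sketch. Assuming five sheets per development batch and one density reading per step-tablet step (all the density figures below are hypothetical), averaging across sheets step by step damps sheet-to-sheet exposure variation:

```python
from statistics import mean

# Hypothetical densities: 5 sheets from one development batch,
# each read at the same 4 step-tablet steps.
batch = [
    [0.12, 0.45, 0.88, 1.21],
    [0.11, 0.47, 0.90, 1.19],
    [0.13, 0.44, 0.87, 1.22],
    [0.12, 0.46, 0.89, 1.20],
    [0.12, 0.46, 0.89, 1.20],
]

# Average across sheets, step by step.
avg = [round(mean(step), 3) for step in zip(*batch)]
print(avg)  # → [0.12, 0.456, 0.886, 1.204]
```

One averaged curve per development time then goes into the plotting, rather than 25 individual, noisier curves.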