Before I even joined Kodak, the standard method consisted of slits, not knife edges. The slits were used to create images with X-rays and with visible light as a measure of turbidity in the emulsions.
The light exposures were both negative and positive, and we used several slit widths. In the example they are 10, 100 and 1000 microns, but I have used 1 micron as well. The heights of these at different exposures give the relative contrasts, as seen in the second image. This difference represents, say, 35mm vs 4x5 images, and thus you "see" the image differently.
In 35mm, the 10 micron line may represent a telephone line, but in 4x5 that might be a 1000 micron line.
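To make the "trace height gives relative contrast" idea concrete, here is a minimal numerical sketch. This is my own illustration, not Kodak's actual analysis: the slit widths match the example above, but the peak densities are invented stand-ins for measured trace heights, normalized against the widest slit, which reaches the full density swing.

```python
# Sketch only: relative contrast from slit-trace peak heights.
# Slit widths follow the example above; peak densities are invented.
slit_widths_um = [1, 10, 100, 1000]
peak_density = [0.22, 0.65, 1.18, 1.30]  # hypothetical trace peaks

full = peak_density[-1]  # widest slit reaches the full density swing
relative_contrast = [d / full for d in peak_density]

for w, c in zip(slit_widths_um, relative_contrast):
    print(f"{w:>5} micron slit: relative contrast {c:.2f}")
```

The narrow slits come out with low relative contrast, which is exactly the "you see the image differently at different scales" point: a detail the format renders at 10 microns is reproduced at much lower contrast than the same detail rendered at 1000 microns.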
So, we never (AFAIK) used knife edges because they were not very revealing in many ways.
Pictures courtesy of Mike Kriss.
In 2003 Geoffrey Crawley was publishing film test results using a microdensitometer to scan negatives of lines of decreasing separation; see the attachment of my tracing of one.
On the LHS of this diagram apparently the amplitude (height) of the trace is proportional to the ability of the film to resolve what he calls "overall main subject outlines".
I believe the little peaks at the top corners of the curve on the first cycle are some kind of measure of adjacency effects.
On the RHS of this diagram the amplitude is apparently higher with films having superior fine detail definition.
So on one diagram Crawley could evaluate both acutance and resolution, but with no numbers for either.
IDK if this method is used by film manufacturers more recently.
This graph confirms that you must expose the film for the shadows and develop for the highlights.
Since apparent sharpness is linked to the ability to resolve items, and this is linked to contrast, then the Crawley method will miss any image effects linked to contrast.
There are many ways to measure apparent sharpness, but it must be done at the scale at which you intend to work, and at the contrast you are using. The technical data are hard to interpret: one image may appear sharper than another when the data say the opposite must be true. Nothing beats the eye for telling what is "right", and if your test prints of your test subjects lead you to pick one over another, then that is the way to go.
Ok, so, Kriss points out that the grain (or noise in the measurements) also contributes to image sharpness or overall quality and he uses the term "Film Information Capacity" to express all aspects of the quality of a film. In his article, he cites 103 references.
Do you know what method Kodak used when, IIRC, they put the words "World's Sharpest" on TMY2, the new version of the 400 ISO TMY film?
Crawley's method (Amateur Photographer, 10 May 2008) confirmed that TMY2 gave a higher amplitude at both high and low frequencies, agreeing with this, although as you indicate other factors are involved (probably outside the scope of this discussion, but interesting to know what Kodak did).
He noted other 400 ISO films have different advantages.
I got to see some prints of the Degas made from cancelled (scored) plates a month or so ago. Very interesting indeed.
Alan, AFAIK, the method Kodak is using now is the same as what we used as far back as the 60s. The algorithms have improved, and thus data gathering and presentation are far superior.
So, on a 35 mm x 12" strip, the 1000 micron down to 1 micron slit exposures appear as roughly 1/2" square "boxes" containing all 4 exposures, and there are 11 of these per strip, each box having 1 stop more exposure. They are all plotted as I have shown above in an exposure series, and they are treated mathematically to give a large set of data presentations, some of which are shown on the Kodak web site. Grain is introduced, and this is then treated to give the result Kriss describes as "Film Information Capacity".
With color, it is done with R/G/B/N, and of course a set of X-Ray data accompanies all of this so that we can determine turbidity and the proper level of acutance dyes.
PE, in the first graph you posted, would it be fair to say the different slit widths also illustrate the Eberhard effect?
I’d like to look a little at the math now, in the context of the acutance formula (and variations) from Higgins/Jones and Higgins/Perrin/Wolfe.
The basic formula discussed by Perrin, and used by Richard Henry in his book, is the mean square gradient for the transition from high to low density. I'm assuming the basic math would be the same whether the exposure is made with X-rays or visible light, through a slit or using a knife edge (Perrin, Henry). So, G2x. Henry then explains that Higgins and Jones thought the total change in density should also be a factor, so they proposed G2x * DS, where DS is Dhi - Dlo. Apparently based on experimental data, Higgins/Perrin/Wolfe later modified the formula to G2x / DS, but that's a separate issue I'm unclear on.
Here is what puzzles me. Perrin (and later Henry, in his tests) says that edge effects need to be accounted for by modifying the formula, but that nobody has figured out how to do it. I don't understand why this is so difficult, and it is even odder that as late as 1986 Henry would say it still had not been done. By introducing DS, weren't they almost there? While Jones, Higgins, Perrin etc. were undoubtedly a lot smarter than I am, why didn't they just convert DS into some sort of "factor"? For example, suppose we have a given acutance experiment (comparing two films, or the same film with different developers, or different exposures, etc.); for each trace, why couldn't we multiply G2x by something like C (for contrast), where:
Acutance = G2x * C
C = delta Dedge / delta Dexp
delta Dedge = (max Dhi – min Dlo) across the transition
delta Dexp = (Dhi – Dlo) away from the transition based on the high and low exposures given.
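The proposal above can be sketched numerically. This is my own toy illustration: the edge-trace values are invented (with a small dip and an "ear" added to mimic adjacency effects), and only the formulas, G2x, the Higgins/Jones and Higgins/Perrin/Wolfe variants, and the suggested C factor, come from the discussion.

```python
# Synthetic microdensitometer edge trace (invented values).
# The 0.15 dip and the 1.45 "ear" mimic adjacency effects at the edge.
positions = [float(i) for i in range(9)]  # hypothetical sample spacing
density = [0.20, 0.20, 0.15, 0.60, 1.10, 1.45, 1.40, 1.40, 1.40]

# Mean square gradient across the transition (Perrin / Henry): G2x
grads = [(density[i + 1] - density[i]) / (positions[i + 1] - positions[i])
         for i in range(len(density) - 1)]
G2x = sum(g * g for g in grads) / len(grads)

DS = 1.40 - 0.20          # Dhi - Dlo, measured away from the transition
acutance_HJ = G2x * DS    # Higgins/Jones form
acutance_HPW = G2x / DS   # later Higgins/Perrin/Wolfe form

# The suggested contrast factor: edge effects push delta_Dedge above DS.
delta_Dedge = max(density) - min(density)  # spans the dip and the "ear"
C = delta_Dedge / DS
acutance_C = G2x * C

print(f"G2x={G2x:.4f}  C={C:.3f}  G2x*C={acutance_C:.4f}")
```

With this toy trace C comes out greater than 1, so the edge effects raise the acutance figure relative to plain G2x, which is the direction you would expect if the "ears" contribute to perceived sharpness.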
Michael, this work moved far beyond Higgins' work. There was the work of Bartleson and Breneman, the work of DeMarsh, and the work of Kriss. And much of this was never published outside of EK. So, yes, the slit exposures show what we now call "edge effects". These are the "ears" on the exposure. Those who worked with knife edge exposures missed a lot of information, such as fill-in and bloom.
Kriss' method of defining "image content" codifies all of these into an appropriate equation, which I would have to scan in; the explanation is nearly 30 pages long in his article. It can be found in the book "Color Theory and Imaging Systems", published by the SPSE.
I'll have to look into that. When was this work done?
What sort of techniques and calculations would Altman/Henn have been using in their famous developer study (1950s)?
Of course I suspected much more had been done on this after the initial work. The problem is that, for the most part, none of us know of any of that stuff, nor is anyone in a position to run such tests at this point. So I figured I would tackle the traditional or "classical" :laugh: notions of acutance, edge effects, resolution, and granularity without a unified field theory of image structure. The reason is that these are the ways formulators and laymen have referred to the working characteristics of developers and developed negatives, and still do (rightly or wrongly).
Where I'm ultimately going with all this is to suggest that acutance as it has traditionally been characterized by people (i.e. edge sharpness of "grains") is virtually meaningless in the selection of a developer. There are a variety of reasons for this. I just don't think it makes much sense when people talk about a certain developer giving a sharp edge to a grain vs an etched edge. Among other things, Haist's discussion of the formation of metallic silver during development, and the microscopic images of developed silver, would seem to suggest this notion of etching = unsharp image is rather silly.