We have computers nowadays, and it's easy to fit a cubic spline (or some such) to the toe region of the characteristic curve and get accurate measurements of its gradients. So I'd like to compute speeds directly with the fractional gradient method.
I'm familiar with the fractional gradient criterion Gmin = 0.30*Gbar(1.5), where Gbar(1.5) is the mean gradient over a log-E range of 1.5 extending from the speed point toward higher exposures.
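Here's a minimal sketch of what I have in mind, assuming hypothetical D/log-E data (the numbers below are illustrative, not from a real wedge): fit a cubic spline to the curve, then scan up the toe for the first point whose local gradient reaches 0.30 times the mean gradient over the next 1.5 log-E.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical densitometer readings: log exposure (log10 lux-seconds)
# and density above base+fog. Illustrative values only.
log_e = np.array([-3.0, -2.7, -2.4, -2.1, -1.8, -1.5, -1.2, -0.9, -0.6, -0.3])
density = np.array([0.02, 0.04, 0.08, 0.15, 0.26, 0.42, 0.60, 0.80, 1.00, 1.20])

spline = CubicSpline(log_e, density)

def mean_gradient(x0, span=1.5):
    """Gbar(span): density rise over a log-E interval of `span`
    starting at x0, divided by the span."""
    return (spline(x0 + span) - spline(x0)) / span

def find_speed_point(span=1.5, k=0.30, step=0.001):
    """Scan up the toe for the first log-E where the local gradient
    reaches k * Gbar(span) -- the fractional gradient criterion."""
    deriv = spline.derivative()
    for x in np.arange(log_e[0], log_e[-1] - span, step):
        if deriv(x) >= k * mean_gradient(x, span):
            return float(x)
    return None
```

With data like this the speed point lands partway up the toe; changing `span` from 1.5 to 1.8 is then a one-argument change, which is what makes the question below worth settling.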
But I question that 1.5. Page 440 of The Theory of the Photographic Process (3rd ed., Mees & James) states:
"The scene chosen was an average one giving an image illuminance range on the negative material of 32, which is very nearly the statistical average of a large number of scenes when photographed with a camera system having average flare characteristics."
Note the 32. Log base 2 of 32 is 5, so this is a 5-stop range on the negative, and each stop is 0.30 in log10 exposure, so 5 * 0.30 = 1.5. That is the source of the log-E range of 1.5.
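Just to make the arithmetic explicit, here's the whole derivation of the 1.5 in two lines:

```python
import math

# A 32:1 illuminance range is log2(32) = 5 stops, and each stop
# is log10(2) ~= 0.30 in log exposure, giving the 1.5 log-E range.
stops = math.log2(32)          # 5.0
log_e_range = stops * 0.30     # 1.5
```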
Note the "average flare". That figure dates from cameras of roughly 1935-1945, which had uncoated lenses. Modern multicoated lenses have less flare, so the image illuminance range on the negative is larger, and it seems to me the log-E range of 1.5 should be higher nowadays.
Stephen has said that after accounting for flare, the average range is about 1.8 (i.e., 1.8 / 0.30 = 6 stops, an illuminance range of 64:1). So if we were to use the fractional gradient method with modern cameras, should we change the formula to this?: Gmin = 0.30*Gbar(1.8)
If so, do you think the 0.30 would also change?