Okay, first we have to make sure we are referencing the same material. I believe you are using the PDF from the Beyond Monochrome website entitled "Testing Film Speed and Development." If so, you must also be referring to fig. 1 on page one of the document, and in that case, you are mistaking the log-H range for the negative density range. Your use of the term "nomograph" for figure 1 also confused me, because I didn't think the term applies to that type of graph. It may, but in my earlier response I was referencing the figure on page 140 of the first edition of Ralph's book. So I hope we are now talking about the same thing. BTW, figure 1 is from the ISO standard for determining black and white negative film speeds: ISO 6:1993.
Figure 1 defines the contrast parameters under the ISO standard to which the film has to be developed before film speed can be determined. The reason for this isn't to reflect "normal" processing conditions, but because under this set of conditions there is agreement between the fractional gradient method of film speed determination and a fixed density method. I've attached a rather technical paper that explains it all, along with something I've written that isn't as technical. Another paper, "Simple Methods for Approximating the Fractional Gradient Speed of Photographic Materials," is too big to upload, but I am willing to email it to anybody interested.
Let's take a look at the quote: "In general, advertised ISO film speeds are too optimistic and suggested development times are too long." Why are the ISO film speeds too optimistic? Based on what? What quality standard is the ISO speed being compared to? Ralph mentions the Zone System. The problem is that the testing parameters and assumptions of the two methods are different, so it's like comparing apples to oranges. The ISO film speeds might be considered optimistic compared to the Zone System speeds, but to conclude the ISO speeds are wrong, you'd have to assume that the Zone System method somehow produces more accurate results. In reality, there is only one method to determine film speeds, and that is the ISO method (see the safety factors paper). All else is more about preferred exposure. You can call it EI if you want.
Part of the purpose of film speed is to define the exposure boundaries that will produce a quality image. In determining color reversal film speed, the high and low points are defined first. The speed point is then the mean of the two. Why? Because with transparencies, quality is determined by how the middle tones are reproduced. With black and white negative film, it's the shadows that are critical, so the minimum gradient (not density) needs to be found. Once this point is found, any exposure increase, within limits, over this point will produce a quality print. This point is known as the fractional gradient point and is found 0.29 log-H units below the 0.10 fixed density point when using the ISO contrast parameters.
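To make the relationship concrete, here is a minimal sketch in Python. The speed formula S = 0.8 / Hm is from ISO 6 (Hm being the exposure in lux-seconds producing 0.10 over Fb+f); the 0.0064 lux-second example value is my own illustrative number, not from any datasheet:

```python
# Sketch of how the fixed density speed point relates to the fractional
# gradient point, assuming the film meets the ISO contrast parameters.
# Function and variable names are my own.

def iso_speed(h_m):
    """ISO speed from H_m, the exposure (lux-seconds) that produces
    a density of 0.10 over Fb+f: S = 0.8 / H_m."""
    return 0.8 / h_m

def fractional_gradient_log_h(log_h_at_010):
    """Under the ISO contrast parameters, the fractional gradient point
    sits 0.29 log-H units below the 0.10 fixed density point."""
    return log_h_at_010 - 0.29

# Hypothetical film needing 0.0064 lux-seconds to reach 0.10 over Fb+f:
print(round(iso_speed(0.0064)))                    # 125
print(round(fractional_gradient_log_h(-2.19), 2))  # -2.48
```

The point the sketch makes is that the 0.10 density point is only a convenient stand-in: the 0.29 log-H offset to the fractional gradient point holds because the ISO contrast parameters guarantee it.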
What about the second part of the quote? Are the suggested development times too long? They are if you assume the film speed is determined simply by a fixed density method of 0.10 over Fb+f, except it's not. There is a hidden equation behind the contrast parameters of the ISO standard. Only under these conditions is the 0.29 log-H relationship between 0.10 and the fractional gradient point certain. And if you are assuming the speed point of 0.10 also necessarily defines where the shadow exposure is supposed to fall, you'd be wrong. The speed point may or may not define where the shadow exposure falls, but that's not its main purpose. Its purpose is to calculate a film speed. According to the scientific papers, if the contrast of the film is less than or greater than the ISO contrast parameters, the use of a fixed density method isn't recommended, as it will give inaccurate results. A different method must be used.
The negative is the intermediary step between the subject and the print. Its purpose is to take the luminance range of the subject and reproduce it in a reasonably accurate way on the print. In order to determine the negative contrast required to achieve this goal, you need to know the log exposure range of the paper (LER) and the luminance range of the subject (LSLR). The LER of a paper is determined from the density points 0.04 and 90% of the paper's D-max. The mean LER associated with a grade 2 paper is 1.05. As for your calculating 0.58 as normal when using a 1 1/3 stop flare factor and an SBR of 7 stops, you have used a target CI of 1.05, and I don't know where you got that from. If instead we assume a target of 1.30 as I mention above, and convert the 7 stops to the correct units by multiplying by 0.3, then the adjusted CI = 1.30 / (2.1 - 0.4) = 0.76, not 0.58.
The LSLR used for a "normal" negative is based on the statistically average scene, which is 7 1/3 stops or 2.20 logs. While this may not seem like a big difference from the often used 7 stops, it can make a difference when making sense of the calculations: 1.05 / (2.2 - 0.4) = 0.58. Anybody remember when Kodak used CI 0.56 as normal? That was because coated large format lenses have slightly less flare than 35mm lenses with their greater number of elements: 1.05 / (2.2 - 0.34) = 0.56.
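The aim CI arithmetic above can be wrapped in a small helper; this is just my own sketch of the relation CI = LER / (scene log range - flare), with stops converted to log-H at 0.3 per stop:

```python
def aim_ci(ler, scene_stops, flare_stops):
    """Aim contrast index: paper log exposure range divided by the
    scene's log-H range after subtracting flare (1 stop = 0.3 log-H)."""
    return ler / (scene_stops * 0.3 - flare_stops * 0.3)

# Statistically average scene (7 1/3 stops), 1 1/3 stops flare, grade 2 paper:
print(round(aim_ci(1.05, 22/3, 4/3), 2))   # 0.58
# Kodak's old CI 0.56 normal: slightly less flare (0.34 logs):
print(round(1.05 / (2.2 - 0.34), 2))       # 0.56
```

Note how sensitive the aim CI is to the flare assumption: a change of only 0.06 logs in flare moves "normal" from 0.58 to 0.56.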
Ralph likes to use a negative density range of 1.20 for normal. His normal CI (average gradient) is also 0.58. As both aim contrast indexes are identical, any scene photographed and developed to a CI of 0.58 is going to produce the same negative. How is this possible with different aim negative density ranges?
Ralph's model - 1.20 / 2.1 = 0.57
Flare model - 1.05 / (2.2 - 0.40) = 0.58
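One way to see the apparent contradiction in numbers is that the two models assume different log-H ranges reaching the film, so the same CI implies different negative density ranges (DR = CI x log-H range). A quick sketch of that arithmetic, with my own variable names:

```python
# Both models arrive at (nearly) the same aim CI from different assumptions.
ralph_ci = 1.20 / 2.1            # aim DR 1.20 over a 7-stop range, no flare term
flare_ci = 1.05 / (2.2 - 0.40)   # LER 1.05, 7 1/3 stop scene, 0.40 logs of flare

print(round(ralph_ci, 2), round(flare_ci, 2))   # 0.57 0.58

# But a scene developed to CI 0.58 yields DR = CI x (log-H range at the film),
# which differs depending on whether flare has compressed that range:
print(round(0.58 * 2.1, 2))           # 1.22 (no-flare model's range)
print(round(0.58 * (2.2 - 0.40), 2))  # 1.04 (flare model's range)
```

So the aim CIs agree almost exactly, while the density range each model predicts depends on the range assumed at the film plane.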