ISO film speed testing follows a rigorously specified set of parameters. It yields one film speed for that particular set of variables (exposure, developer, agitation, temperature, contrast index, etc.).

Different developers and other variables yield different working E.I.s. The largest of these adjustments comes from changing contrast index to adapt to the subject brightness range (the entire reason for the Zone System, BTZS, etc.).

The relationship between development time and effective E.I. is well-known. Generally speaking, less development results in a lower E.I. For example, I rate my film differently for each Zone System scheme (e.g., N+1 has a higher E.I. than N). I generally just use exposure adjustments to do this, but the result is the same: different E.I. for different development times for the same film in the same developer.
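As a rough sketch of that bookkeeping, the stop adjustments can be turned into working E.I.s with simple arithmetic. The base ISO and the per-scheme stop offsets below are hypothetical placeholders, not test results; only your own film/developer tests supply the real numbers.

```python
# Sketch: derive working E.I.s from per-scheme exposure adjustments in stops.
# Base ISO and stop offsets are hypothetical -- personal testing determines
# the actual values for a given film/developer combination.

def working_ei(base_iso, stop_offset):
    """E.I. after shifting exposure by `stop_offset` stops.

    A negative offset means more exposure, hence a lower E.I.
    """
    return round(base_iso * 2 ** stop_offset)

# Hypothetical offsets: less development (N-1) needs more exposure,
# so it gets a lower E.I.; N+1 needs less, so a higher one.
offsets = {"N-1": -1.0, "N": -2 / 3, "N+1": -1 / 3}

for scheme, off in offsets.items():
    print(scheme, working_ei(400, off))
# N-1 -> 200, N -> 252, N+1 -> 317 (all hypothetical)
```

Note how this matches the pattern above: N+1 ends up with a higher E.I. than N, and N-1 with a lower one.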

If you have a contrasty scene and need to reduce development time to compensate, then your E.I. will be lower, assuming the developer is the same. The opposite is true if you need to develop longer to increase contrast. This is what Ilford et al. are talking about when they mention using different E.I.s for different lighting conditions.

No film manufacturer is going to do a lot of testing with other developers and give you the entire spectrum of film speed/development time possibilities. We photographers need to do those tests ourselves. However, some give a little. Ilford gives "starting point" suggestions. If I'm not mistaken, Kodak supplies contrast index curves for different development times in their film data sheets. Of course, those will likely not be for DDX.

Theoretically, effective E.I. varies continuously with development time. You should be able to run tests at a few development times and plot the results on a graph.
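To sketch that idea in code, a few measured (development time, E.I.) pairs can be linearly interpolated to estimate the E.I. at an untested time. The data points below are made up purely for illustration; your own tests supply the real curve.

```python
# Sketch: estimate effective E.I. at an intermediate development time by
# linear interpolation between a few measured test points.
# The (minutes, E.I.) pairs are hypothetical -- substitute your own results.

def interpolate_ei(points, minutes):
    """Linearly interpolate E.I. from (dev_time_min, ei) test points.

    Clamps to the endpoints outside the tested range.
    """
    pts = sorted(points)
    if minutes <= pts[0][0]:
        return pts[0][1]
    if minutes >= pts[-1][0]:
        return pts[-1][1]
    for (t0, e0), (t1, e1) in zip(pts, pts[1:]):
        if t0 <= minutes <= t1:
            frac = (minutes - t0) / (t1 - t0)
            return e0 + frac * (e1 - e0)

tests = [(5.0, 160), (7.0, 250), (9.0, 320)]  # hypothetical test data
print(interpolate_ei(tests, 6.0))  # midway between 160 and 250 -> 205.0
```

Since E.I. is effectively a logarithmic quantity (stops), interpolating on the log of E.I. would arguably be more faithful; the linear version above is just the simplest sketch of plotting and reading off the curve.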

Have fun,

Doremus Scudder