Mark, this is a really important insight. I've been toying with the idea of starting a thread to pose just such a question. What if some of the assumptions in a testing method are simply wrong, or are only accurate or acceptable under limited conditions, or contain justifications that require clarification? Wouldn't this lead to erroneous results? Sure, probably not disastrously so, but what's the point of putting in the effort if the result is only going to be little better than a rule of thumb or guesswork?
It's always a bit of a surprise when people accept a testing method without first questioning its legitimacy. Let's take as an example the Steve Simmons article that kbrede linked to. Within the first couple of paragraphs, Simmons introduces three concepts that are key to the methodology: a just-black printing time, a wall for a target, and stopping down four stops from the metered exposure. As they are stated in the article, all three are flawed. How will this affect the testing results? Without the potential problems being described in the procedures, the uninitiated would never be the wiser and would confidently accept any results.
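For readers unfamiliar with the "four stops down" step: a reflected-light meter reading places the target at Zone V, so reducing exposure by four full stops places it at Zone I, the conventional "just black" placement used in film speed tests. Here is a minimal sketch of that arithmetic (the function name and the example f-numbers are mine, purely for illustration):

```python
# Full-stop aperture sequence; each step halves the light.
STOPS = [1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0, 32.0]

def stop_down(f_number: float, stops: int) -> float:
    """Return the f-number reached by closing down `stops` full stops."""
    i = STOPS.index(f_number)
    return STOPS[i + stops]

# Meter says f/5.6: four stops down lands on f/22,
# i.e. 1/16 of the metered light (Zone V -> Zone I).
print(stop_down(5.6, 4))   # -> 22.0
print(2 ** -4)             # exposure factor -> 0.0625 (1/16)
```

The point of the thread stands, of course: this arithmetic is only as good as the assumptions behind it (meter calibration, target uniformity, what "just black" is taken to mean).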
Go for it Stephen.
I think being "right in limited conditions" is a big trap when that info is passed on because we each have differing sensibilities about the expected result. It is surely a trap that I fell into.
Mark Barendt, Beaverton, OR
"We do not see things the way they are. We see things the way we are." Ana´s Nin