Mark, this is a really important insight. I've been toying with the idea of starting a thread to pose just such a question: what if some of the assumptions in a testing method are simply wrong, are accurate or acceptable only under limited conditions, or rest on justifications that need clarification? Wouldn't this lead to erroneous results? Sure, probably not disastrously so, but what's the point of putting in the effort if the outcome is only a little better than rules of thumb or guesswork?
Originally Posted by markbarendt
It's always a bit of a surprise when people accept a testing method without first questioning its legitimacy. Take, for example, the Steve Simmons article that kbrede linked to. Within the first couple of paragraphs, Simmons introduces three concepts that are key to the methodology: a just-black printing time, a wall as a target, and stopping down four stops from the metered exposure. As stated in the article, all three are flawed. How will this affect the testing results? Unless the potential problems are described in the procedure, the uninitiated would be none the wiser and would confidently accept any results.
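As an aside on the "stopping down four stops" step: the arithmetic behind it is simple, and a quick sketch makes it easy to check. This is only an illustration, not anything from the Simmons article; the reading of the step as a Zone System placement (a Zone V meter reading moved to Zone I) is my assumption about the usual rationale, since each stop halves the exposure.

```python
# Illustrative sketch, not from the article: each stop halves the exposure,
# so closing down four stops from the metered reading gives 2**-4 = 1/16
# of the metered exposure. In Zone System terms (my assumption about the
# rationale), that moves a Zone V reading to Zone I.

def stopped_down_exposure(metered_exposure, stops):
    """Relative exposure after closing down the given number of stops."""
    return metered_exposure * 2 ** -stops

print(stopped_down_exposure(1.0, 4))  # 0.0625, i.e. 1/16 of the metered value
```

Whether 1/16 of the metered exposure actually lands on the film-speed test point is exactly the kind of assumption the method should justify rather than merely state.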
Last edited by Stephen Benskin; 01-13-2013 at 10:01 PM.