In a recent thread, there was a beginner who generated a gray scale on his computer and was planning on using it for speed and development testing. To this person, the black-to-white scale represented the full black-to-white range he needed for the test. What he didn't realize was that the average scene's luminance range is greater than what can generally be reproduced on paper, so his target didn't represent actual shooting conditions. The results of his test would therefore be in question, but since he wasn't aware of this, he could easily have been satisfied with whatever came from the tests.
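To put rough numbers on that mismatch, here's a minimal sketch. The figures are assumptions drawn from common sensitometry rules of thumb (an "average" outdoor scene spanning roughly 160:1 in luminance, a glossy print reflecting at most around 100:1), not measurements from the thread:

```python
import math

# Assumed, illustrative values -- not measured data.
scene_ratio = 160.0    # typical average scene luminance ratio
print_ratio = 100.0    # rough maximum reflectance ratio of a glossy print

scene_log_range = math.log10(scene_ratio)   # ~2.20 log units
print_log_range = math.log10(print_ratio)   # ~2.00 log units

print(f"average scene: {scene_log_range:.2f} log units "
      f"(~{scene_log_range / math.log10(2):.1f} stops)")
print(f"printed gray scale: {print_log_range:.2f} log units "
      f"(~{print_log_range / math.log10(2):.1f} stops)")
# The printed target comes up short of the range a real scene presents,
# so a test exposed to it doesn't reproduce actual shooting conditions.
```

Even with these generous numbers for the print, the paper target falls a third of a stop or more short of the scene, and most papers do worse.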
It seems to me that in order to engage in reliable testing, it's important to have an understanding of the conditions of the test and an idea of the expected results. Because I had an example of the exposure, with and without flare, for a typical scene, I was able to compare the conditions and results of Bill's recent flare test to what is expected. The results Bill experienced didn't appear to match what the model predicts: his shadow density came from a patch whose reflectance should have produced a much higher density. Even without knowing all the details of the testing procedure, this type of discrepancy could indicate a problem with the test.
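The expectation I'm checking against can be sketched with a simple one-parameter flare model: flare adds a roughly uniform veiling exposure across the frame, which lifts the shadows far more, in log terms, than the highlights. The 7-stop range and 1% flare figure below are assumptions for illustration, not Bill's numbers:

```python
import math

def log_exposure_with_flare(rel_exposure, flare_fraction):
    """Log exposure of a patch after adding a uniform veiling exposure.
    rel_exposure: patch exposure relative to the highlight (= 1.0).
    flare_fraction: veiling exposure as a fraction of the highlight
    exposure (an assumed, illustrative parameter)."""
    return math.log10(rel_exposure + flare_fraction)

highlight = 1.0
shadow = 1.0 / 128.0          # assumed 7-stop (128:1) subject range
flare = 0.01                  # assumed 1% veiling flare

no_flare_range = math.log10(highlight) - math.log10(shadow)
with_flare_range = (log_exposure_with_flare(highlight, flare)
                    - log_exposure_with_flare(shadow, flare))

print(f"log-E range without flare: {no_flare_range:.2f}")
print(f"log-E range with 1% flare: {with_flare_range:.2f}")
# The shadow patch receives noticeably more exposure than its
# reflectance alone predicts, compressing the range at the toe.
```

Under these assumptions the shadow end gains about a third of a log unit from flare alone, so a measured shadow density well away from what the patch reflectance plus flare predicts is a red flag for the test itself.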
When I was setting up my sensitometer, I had to determine the amount of exposure I needed for a given film speed. I chose a density from the step tablet that I wanted to fall around the speed point. The first thing I needed to know was what the exposure had to be for the different film speeds. I wanted the speed point to fall around four steps down from the highest density on the step tablet. The instructions with the sensitometer had a chart of the average exposure expected at the different settings. From that I was able to determine the filtration I needed, which I included when I had the sensitometer calibrated. Then I ran some tests to determine whether the exposures at the different settings created the densities I wanted. So I knew what I had and what the results should be before I actually did any testing. That way, if any results were off, I could look for unwanted variables instead of accepting bad results as representing an accurate test.
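For anyone wanting to run the same numbers, here's a minimal sketch of the arithmetic. It uses the ISO black-and-white speed relation S = 0.8 / Hm (Hm in lux-seconds at the speed point) and assumes a standard 21-step tablet with 0.15-density increments and a maximum density of 3.0; your tablet's actual values may differ:

```python
def speed_point_exposure(iso_speed):
    """Exposure (lux-seconds) needed at the speed point, per S = 0.8/Hm."""
    return 0.8 / iso_speed

def sensitometer_exposure(iso_speed, steps_from_max_density,
                          max_tablet_density=3.0, step_increment=0.15):
    """Total exposure the sensitometer must deliver so the speed point
    falls N steps down from the densest tablet step.
    max_tablet_density and step_increment are assumed tablet values."""
    tablet_density = max_tablet_density - steps_from_max_density * step_increment
    # The tablet step attenuates light by a factor of 10**density,
    # so the lamp has to supply that much more at the tablet face.
    return speed_point_exposure(iso_speed) * (10 ** tablet_density)

for iso in (100, 400):
    print(f"ISO {iso}: speed point needs {speed_point_exposure(iso):.4f} lx-s; "
          f"sensitometer must deliver {sensitometer_exposure(iso, 4):.2f} lx-s")
```

With those assumed tablet values, the required lamp exposure drops by a factor of four going from ISO 100 to ISO 400, which is exactly the kind of predicted-versus-measured check I'm describing.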
I think when testing for an unknown variable, like flare, it's important to know what the results should be without the variable, and also to know approximately what to expect from the variable being tested, so that if the results fall outside a certain range, the test can be called into question. In other words, know the theory and control the variables.