I came late to this thread. Glad the OP's problem is solved, but the discussion of test strips has been fascinating. You would think they'd be simple.

At present I have only a clockwork timer, and I have always done test strips on an arithmetic scale centred around a guess at the right exposure. I have been considering building myself a digital timer with an Arduino so that I could develop the functionality further at a later stage; but while working out the code for 1/3-stop increments I began to wonder how accurate it actually needed to be.

The contrast between Smiegltiz's covering method and Thomas's uncovering method set me thinking about the errors involved in each. It occurred to me that in the covering method any timing errors accumulate and affect all subsequent strips. However, if you start at 8 sec as described and your timer can be set to, say, the nearest second, the accumulated error in the covering method never exceeds about 2% of the exposure. In the uncovering method the errors do not accumulate, and the accuracy of each strip is only as good as your timing of the card against the seconds counter, except for the strip that should be 5.6 sec: round that to 6 sec and you have incurred a 7% error. I have seen 5% quoted as the visible difference between a wet and a dried print, so an error of that size would be quite significant.
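For anyone curious, here is a quick sketch of the arithmetic I was doing (my own illustration, not either posted method; the 8 s base and the choice of stop fraction are just the figures from this thread). It lists the target times spaced a fraction of a stop apart and the relative error you incur when a timer only accepts whole seconds:

```python
def stop_series(base=8.0, fraction=1/3, steps_below=3, steps_above=3):
    """Exposure times spaced `fraction` of a stop apart around `base` seconds."""
    return [base * 2 ** (fraction * k) for k in range(-steps_below, steps_above + 1)]

def rounding_error(t, resolution=1.0):
    """Relative error from setting a timer that only accepts multiples of `resolution`."""
    rounded = round(t / resolution) * resolution
    return (rounded - t) / t

# 1/3-stop steps around 8 s: 4.00, 5.04, 6.35, 8.00, 10.08, 12.70, 16.00
for t in stop_series():
    print(f"target {t:6.2f} s -> set {round(t):2d} s, error {rounding_error(t):+6.1%}")
```

The worst offender in the 1/3-stop series is the 6.35 s step (about -5.5% when set to 6 s); the 5.6 s case I mentioned is the half-stop value 8/sqrt(2) = 5.66 s, which rounds to 6 s for roughly a +6-7% error. Either way the error from one-second resolution is comparable to the quoted wet/dry difference, which is what made me wonder whether the timer needs finer resolution.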

Would be glad to hear your thoughts.