I'm glad that at least one person agrees with me!
I've also noticed that the base adds a uniform difference between the UV and visual readings (UV density being higher than visual, as in Jorge's readings) - but this does not, of course, alter the relative exposure values. So where the UV and visual relative densities (step to step) are the same, it should not matter whether UV or visual is used - the curve is simply shifted along the x-axis. (I couldn't think of a concise way to describe this 'relatively neutral' aspect of base plus silver when I wrote my previous posts, so I lazily used 'neutral'.)
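To make that concrete, here's a quick sketch of the idea with made-up numbers (the step densities and the 0.12 base offset are purely illustrative, not real measurements):

```python
import math

# Hypothetical visual-mode step-wedge readings, in density units.
visual = [0.05, 0.20, 0.35, 0.50, 0.65]

# Assume the base adds a uniform offset to the UV readings.
base_offset = 0.12
uv = [d + base_offset for d in visual]

# Step-to-step (relative) densities in each mode:
rel_vis = [b - a for a, b in zip(visual, visual[1:])]
rel_uv = [b - a for a, b in zip(uv, uv[1:])]

# The relative values are identical - the curve is only shifted along one axis.
print(all(math.isclose(a, b) for a, b in zip(rel_vis, rel_uv)))  # True
```

A constant offset drops out of every step-to-step difference, which is why the choice of mode doesn't matter in this case.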
In that case (the UV and visual densities of the step wedge being uniformly different) it would not matter which mode was used, unless the densitometer was out of calibration. If it was, it may be better to measure the step wedge in the same mode in which you are going to measure the film density, because the calibration error would then partially cancel - the curve would be stretched or compressed diagonally, but would keep the same general shape. Note that the error would not be cancelled entirely.
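Here's a toy illustration of that cancellation, under the simplifying assumption that the calibration error is a pure proportional (gain) error - all the densities and the 8% error figure are invented for the example:

```python
import math

g = 1.08  # hypothetical densitometer reading 8% high in this mode
wedge_true = [0.0, 0.3, 0.6, 0.9, 1.2]       # true step-wedge densities
film_true = [0.10, 0.35, 0.70, 1.00, 1.15]   # true film densities (made up)

# Measure BOTH the step wedge and the film in the same miscalibrated mode,
# so both axes of the curve are scaled by g.
wedge_meas = [g * d for d in wedge_true]
film_meas = [g * d for d in film_true]

def gradients(xs, ys):
    """Local slopes of the curve of film density vs step-wedge density."""
    return [(y2 - y1) / (x2 - x1)
            for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))]

# The g factors cancel in every slope, so the shape of the curve is preserved,
# even though the curve itself is stretched diagonally by the factor g.
same_shape = all(math.isclose(a, b) for a, b in
                 zip(gradients(wedge_true, film_true),
                     gradients(wedge_meas, film_meas)))
print(same_shape)  # True
```

With a pure gain error the slopes cancel exactly; real calibration errors usually aren't pure gain (there may be offset or nonlinear components), which is why in practice the cancellation is only partial.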
So, if your UV and visual step wedge measurements differ by a non-constant amount, either something is wrong with your densitometer or the density steps in your step wedge are not neutral. It is acceptable for the base not to be neutral, as long as the difference is uniform.
The preferable way to avoid errors caused by possible non-neutrality of the step wedge is to use a calibrated densitometer, measure the step wedge in the mode that most closely resembles the way the film will be exposed (usually visual), and then measure the image of the step wedge on film in the mode that most closely resembles the way in which the paper (or the next stage in the process) will be exposed.
How does that sound?