This is the idea I'm trying to get across. http://www.apug.org/forums/viewpost.php?p=1166813
Yes, I know it's specifically about B&W, but the theory is sound. The significant differences between B&W and color are simply three curves instead of one, and dyes replacing the silver.
The boxes in Ralph's diagrams define the portion of the curve that will actually print to paper in a straight print. The mid-tones from the scene can be placed properly and as expected on paper using various negs that have been shot anywhere from N to maybe N+3.
This is scene dependent. It assumes scenes/subjects with normal or narrow brightness ranges.
This is not some off-the-wall thought; Jose Villa, Jonathan Canalas and others make a nice living shooting weddings in exactly this manner. Landscape shooters who intend to burn and dodge a bunch to put more of the film's curve on the paper may want to be a bit more picky about placement.
In practical application, the scene's relationship to paper is maintained and the negative and enlarger exposures are simply allowed to float in the middle somewhere.
Then stop using the term saturation to describe hue accuracy.
In practical application, saturation drops. Hue accuracy also decreases.
Neither of your posts is actually a response to the central point being discussed.
OK, let's assume this is true to some extent, but up until this point all discussion of why it would be so has been based on densitometry, using false assertions and unneeded complications. What I think is that it isn't just a densitometry (amount of dye per amount of exposure) issue.
Originally Posted by Athiril
The "compression" you describe is just a description of the basic densitometry, namely the "shoulder". If you hit the shoulder, you are exactly right, but I claim that you usually do not hit the shoulder with today's color neg film and most subjects. The sunset/sunrise is one of the few exceptions. Granted, the curve isn't perfectly linear, but we are not talking about big differences here. There must be some other reason for your experience.
When we are talking about color saturation, we usually talk about "perceived saturation" which is just contrast, discussed above. Increasing contrast naturally increases perceived saturation and sharpness and vice-versa, but if we are not affecting contrast, then this is not what we are interested in.
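The contrast/perceived-saturation link is easy to demonstrate numerically. A minimal sketch in plain Python (the patch values are made up for illustration): boosting contrast about a mid-grey pivot raises the HSV saturation of a muted colour even though no explicit saturation control is touched.

```python
import colorsys

def boost_contrast(rgb, gain, pivot=0.5):
    """Scale each channel about a mid-grey pivot; clip to [0, 1]."""
    return tuple(min(1.0, max(0.0, pivot + gain * (c - pivot))) for c in rgb)

# A muted reddish patch (channel values in 0..1, purely illustrative).
patch = (0.60, 0.45, 0.40)

before = colorsys.rgb_to_hsv(*patch)[1]                       # HSV saturation as shot
after = colorsys.rgb_to_hsv(*boost_contrast(patch, 1.5))[1]   # after a 1.5x contrast gain

print(round(before, 3), round(after, 3))
```

The spread between the channels grows with the contrast gain, so the colour reads as more saturated; that is all "perceived saturation" means here.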
However, the real color saturation is defined otherwise. AFAIK, it is dependent on how wide the absorption peaks of the sensitization dyes are. If they overlap much, a large number of hues can be reproduced life-like because there are no deep "valleys" in the sum of the three spectral curves between the primary colors. But at the same time, "color saturation" is somewhat subdued. OTOH, if the dyes have narrower absorption spectra, color saturation can be increased at the cost of "reality" of the hues. Some hues are now reproduced too light and some hues too dark, but that is exactly one of the reasons why the image looks more vivid.
But is this connected to exposure or densitometry in any way? Or is it just a film design parameter that stays constant regardless of exposure? How is color saturation controlled when designing a film? Can we affect the real color saturation, not contrast, by exposure choice and/or processing? This is a question that always goes unanswered.
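One way to see the overlap argument is a toy model: three Gaussian spectral sensitivities with hypothetical 450/550/650 nm peaks (invented numbers, not from any datasheet), probed with monochromatic green light. Widening the curves increases crosstalk and shrinks the separation between the dominant channel and the other two, which is the "subdued saturation" described above.

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def channel_responses(wavelength_nm, sigma):
    # Hypothetical peak sensitivities for the blue/green/red-sensitive layers.
    peaks = (450.0, 550.0, 650.0)
    return [gaussian(wavelength_nm, mu, sigma) for mu in peaks]

def separation(responses):
    """Crude saturation proxy: dominant channel minus the mean of the others."""
    r = sorted(responses, reverse=True)
    return r[0] - (r[1] + r[2]) / 2

# Pure 550 nm green light hitting narrow vs broad sensitization spectra.
narrow = separation(channel_responses(550, sigma=30))
broad = separation(channel_responses(550, sigma=70))
print(round(narrow, 3), round(broad, 3))
```

Under this toy model the broad-overlap film records the same green with noticeably more response in the other two layers, so the resulting dye image is less "clean" for that hue; the trade-off against hue accuracy is exactly as described in the paragraph above.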
Each layer in color film only makes one color and it is always fully saturated because there is no other choice. The density is the only variable I see.
Balancing the exposure of the layers is another story, because each layer responds differently and has its own EI needs, dependent upon the lighting. If there are two subjects lit by differing sources, correcting/filtering for balance on one will skew the balance for the other.
It is also quite easy to underexpose one layer just in the shadows while shooting at box speed. This makes for a nasty printing problem.
LOL my first question as well. Because Rolleis work miracles with shitty light, don't ya know ;)
Originally Posted by 2F/2F
Mustafa, please clarify what you are trying to achieve. Are you trying to put in colour that you didn't see with your eyes at the time? Not trying to be flippant but that's what it sounds like to me.
Or: is the issue that your film isn't handling the colour temp in low light / shade? That is a common issue with colour film.
If you want more saturation in flat, low light then you might try slide film, but just be aware that colour temperature can shift wildly from 5000K in low light/shade. You're not going to get daylight tones from any colour film unless you take that into account and filter appropriately. Colours will wash out quickly if you don't.... no matter how you expose.
That said, I do sometimes mildly overexpose colour neg film to enhance the primary colours and give more saturation. But again, if colour temp is off... is that your issue?
Rollei has zero to do with it. Not even sure the film has much to do with it, really. This is probably a colour temp issue, but why not show us an example. Maybe I am misunderstanding your issue, sorry if I am.
hrst: Contrast decreases with increased exposure too. What other reason? The best saturation and contrast is obtained with the midtones placed where the manufacturer defines them at box speed (unless you want to sacrifice it by exposing less and pushing, or other non-standard things). People not seeing this, I would suggest, haven't experienced it, and there is some other reason for their improper exposure.
marK: No, the layers affect the other layers, on top of which increased exposure raises the minimum density of the complementary colour of the subject/area/point/etc. There'd also be greater stray absorption.
I love Rollei cameras, but they are certainly capable of being used for terrible pix. And this I know quite well from personal experience.........
Originally Posted by keithwms
I don't want to argue with you on this, but I want you to revise your thinking a bit. You are pissed about the net myth, which is actually a faulty oversimplification, but you are making the same kind of faulty oversimplification yourself. Please.
Originally Posted by Athiril
Both are problematic assumptions that cannot be used as rules of thumb.
I summarize once again:
In cases where shadows are exceptionally low and hold important detail, overexposure increases contrast in them and underexposure decreases contrast in them.
In cases where highlights are exceptionally high and hold important detail, underexposure increases contrast in them and overexposure decreases contrast in them (this is what you are talking about). This can happen with clouds, especially with sunsets and similar conditions.
The first of these is probably more frequent, because the linear part of the film's curve is long and manufacturers want to take most of the speed available, thus rating the films at the highest EI that still leads to "good" results. (Which is the basis of the ISO standard.)
In sum, in most "typical" conditions, giving a little bit of extra exposure as a precaution will result in as much contrast and saturation as possible. Box speed is fine if you are carefully metering from "a little bit shadowy side of the midtones" or something like that. Or you can overexpose a stop "just in case" and then be slack in the metering. This will not lead you to the shoulder.
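The shadow case above can be sketched with a toy S-shaped D-logE curve, here a logistic (all numbers are illustrative, not any real film's data): the local slope, i.e. local contrast, rises for a deep shadow when extra exposure pushes it up off the toe.

```python
import math

def density(log_e, d_min=0.2, d_max=3.2, gamma=0.6, pivot=0.0):
    """Toy S-shaped D-logE curve: logistic toe and shoulder,
    with slope ~gamma at the middle of the curve."""
    span = d_max - d_min
    k = 4 * gamma / span  # logistic steepness that yields the target mid-slope
    return d_min + span / (1 + math.exp(-k * (log_e - pivot)))

def local_contrast(log_e, h=0.01):
    """Numerical slope of the curve, i.e. local contrast at that placement."""
    return (density(log_e + h) - density(log_e - h)) / (2 * h)

shadow = -2.5                              # a deep shadow, well down the toe
normal = local_contrast(shadow)
plus_one = local_contrast(shadow + 0.3)    # +1 stop is ~0.3 log exposure units

print(round(normal, 3), round(plus_one, 3))
```

On this toy curve the shadow's local contrast is higher after the extra stop, matching the first case in the summary; placing highlights near the shoulder would show the mirror-image effect.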
If you are seeing a change in contrast, that may be due to many reasons:
(1) we really are off from the linear part of the film
(2) you have some postprocessing problem or feature. For example, scanner CCDs have a small toe and shoulder; furthermore, you never really are scanning "raw", since there is always some piece of firmware and software that can do something unexpected. I have measured the characteristic curve of an Epson V700, for example, and it is not linear: it clearly starts shouldering at D=2.4, thus giving lower contrast when scanning dense negs! You may be seeing this. This does not happen that easily in RA-4 printing.
(3) you cannot evaluate the neg for contrast with your eyes. Your vision system does not have a linear response.
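The scanner effect in (2) is easy to model. A sketch with made-up numbers (the D=2.4 knee comes from the post above; the rolloff factor is purely illustrative): the same 0.3 log-density step records smaller, i.e. at lower contrast, once it sits above the knee.

```python
def scanner_reading(true_density, knee=2.4, rolloff=0.5):
    """Toy model of a scanner CCD: linear response up to a knee density,
    then compression, where each unit of density above the knee records
    as only `rolloff` units."""
    if true_density <= knee:
        return true_density
    return knee + rolloff * (true_density - knee)

# A 0.3-density step (about one stop on the neg), midtones vs dense end:
mid_step = scanner_reading(1.5) - scanner_reading(1.2)
dense_step = scanner_reading(2.9) - scanner_reading(2.6)
print(round(mid_step, 3), round(dense_step, 3))
```

The dense step comes out at half the recorded contrast of the midtone step, which would read exactly like "overexposure lowers contrast" even when the negative itself is still on the straight line.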
So, in those cases where contrast really changes, the perceived saturation changes accordingly. I think everything has been said regarding contrast, the cases where it really changes, and the fact that a big part of perceived saturation is directly related to contrast. But saturation can also change without changing the overall contrast. For example, Portra 160 VC and Ektar 100 have very similar contrast, but Ektar has more saturated colors.
"Each layer in color film only makes one color and it is always fully saturated because there is no other choice. The density is the only variable I see."
That's true, but the densities resulting from a given scene are not only dependent on exposure. They are also dependent on the spectral sensitivities of the sensitization dyes in the film. AFAIK, this is the variable that controls saturation, along with the image-forming dye absorption spectra. But I would expect this to be constant regardless of exposure. I might be wrong. At least we can control this by selecting a different film. And I would like to hear if there is more to this.
For example, it is known that when printing with sharp-cut RGB filters instead of CMY filtered white light source, saturation is increased because of less crosstalk between the color channels, and this is different from contrast. So, there are more variables in the play.
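The crosstalk point can be sketched with a simple mixing model (the leak fractions are invented, chosen only to contrast sharp-cut RGB against broadband CMY filtration): more leak between channels shrinks the max-minus-min chroma of a coloured patch while leaving greys untouched.

```python
def apply_crosstalk(rgb, leak):
    """Mix a fraction `leak` of each channel into the other two,
    weighted so that a neutral grey stays neutral."""
    r, g, b = rgb
    keep = 1 - 2 * leak
    return (keep * r + leak * (g + b),
            keep * g + leak * (r + b),
            keep * b + leak * (r + g))

def chroma(rgb):
    """Max-minus-min channel spread, a crude saturation measure."""
    return max(rgb) - min(rgb)

patch = (0.7, 0.4, 0.3)                          # an illustrative warm patch
sharp = chroma(apply_crosstalk(patch, 0.02))     # sharp-cut RGB: little leak
broad = chroma(apply_crosstalk(patch, 0.15))     # broadband CMY: more leak
print(round(sharp, 3), round(broad, 3))
```

Same patch, same overall contrast, but the high-crosstalk path delivers less chroma, which is consistent with the RGB-vs-CMY printing observation and with saturation being a separate variable from contrast.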
This is the key: your eyes respond in a very different way to colour temp variations than film. Our eyes and brains do a pretty impressive adjustment to ensure that colours stay more or less true in shadows, and that works at least until the light is low enough to switch from cone to rod vision; then the colours almost completely drain away.
Originally Posted by Mustafa Umut Sarac
But let's assume you have enough light that you're not seeing that nighttime monochrome effect...
In low light and shadow, you can have substantial changes from 5000K. This site seems to have a pretty good chart...
Some films will respond very poorly to deviations from 5000K. Others will do a bit better, but for the most part, if you really want to address this issue you need to get yourself a basic filter set and play around, and try some colour metering too if you can (n.b. a lot of digicams can act as colour meters and they are typically less expensive than a colour meter!).
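Filter strength for colour-temperature correction is usually worked in mireds, since a given filter shifts light by a roughly constant mired amount regardless of the starting temperature. A quick sketch (the 7500K open-shade figure is illustrative): shade light shot on 5000K-balanced film calls for roughly +67 mireds of warming, which you would then match against a filter maker's published mired table.

```python
def mired(kelvin):
    """Mired value of a colour temperature: 1,000,000 / K."""
    return 1_000_000 / kelvin

def filter_shift(source_k, film_k):
    """Mired shift a filter must supply so light at `source_k` matches the
    film's balance. Positive = warming filter, negative = cooling."""
    return mired(film_k) - mired(source_k)

# Open shade (~7500K, illustrative) on 5000K-balanced film:
shift = filter_shift(7500, 5000)
print(round(shift))  # about +67 mireds of warming needed
```

Working in mireds rather than kelvins is why the same filter "works" across different lighting: the kelvin gap it corrects varies, but its mired shift does not.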
Now, some films can handle mixed colour temps better than others. So as not to ruffle feathers, I won't mention specific brands that I didn't care for. There was one which reminded me of that terrible movie Ishtar. Anyway, I'll just say that I've had pretty good luck with fuji pro s and pro h in mixed / non-optimal light, and I thought reala was pretty good too.
The big problem with filtering is that it's very easy to overdo it. What drives me nuts is a landscape scene shot at sunrise or sunset which is nevertheless filtered to look like something else. One has to use restraint, of course!
And again, this issue is quite separate from all the exposure stuff. Well... okay, it's related in the sense that shooting colour film is kinda analogous to shooting several stacked, colour-filtered black & white films together, simultaneously. If you want to think that way, then to get the colours right, you will need to apply zone principles to expose each separate layer correctly. But the only sensible way to do that is to compensate for the colour temp as a whole. At least that way you aren't attempting to solve three or four coupled zone-system problems :)
In other words, adjusting the overall exposure to all layers does not solve the problem of whether each layer is getting optimal exposure. That's why we filter.
I do agree that the layers affect each other; that's how intermediate hues are created. So what? That doesn't change the film's response curve.
No offense intended here but essentially you are asking us to believe that both Kodak's and Fuji's published technical data is wrong, in that the curves don't really have a straight line portion. You are also asking me to ignore the results of my own print results from bracketed tests.