OK, let's assume this is true to some extent. But up to this point, all discussion of why it would be so has rested on densitometry, with false assertions and needless complications. What I think is that it isn't just a densitometry issue (amount of dye per amount of exposure).
Originally Posted by Athiril
The "compression" you describe is just a feature of basic densitometry, namely the "shoulder" of the characteristic curve. If you hit the shoulder, you are exactly right, but I claim that with today's color negative films you usually do not hit the shoulder with most subjects. A sunset or sunrise is one of the few exceptions. The curve isn't perfectly linear, of course, but we are not talking about big differences here. There must be some other reason for your experience.
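To make the shoulder argument concrete, here is a toy density-vs-log-exposure curve, a logistic shape with made-up Dmin/Dmax values, not data for any real film. The point is only that the local gradient (local contrast) collapses in the shoulder, which is the "compression", while on the straight-line portion it stays near the nominal gamma:

```python
import math

# Toy H&D (density vs. log exposure) curve for illustration only:
# a logistic shape with illustrative Dmin 0.2 and Dmax 2.6, not real film data.
def density(log_e, d_min=0.2, d_max=2.6, gamma=0.6, mid=0.0):
    span = d_max - d_min
    return d_min + span / (1.0 + math.exp(-gamma * 4.0 * (log_e - mid) / span))

def local_gradient(log_e, h=0.01):
    """Slope of the curve, i.e. local contrast, at a given log exposure."""
    return (density(log_e + h) - density(log_e - h)) / (2 * h)

straight = local_gradient(0.0)   # mid-scale, straight-line portion
shoulder = local_gradient(3.0)   # far up the curve, in the shoulder

print(f"mid-scale gradient: {straight:.2f}")   # about 0.60
print(f"shoulder gradient:  {shoulder:.2f}")   # about 0.11
```

With most subjects the highlights never climb that far up the curve, which is exactly the claim above: unless you are several stops into the shoulder, tones still sit on the near-linear portion.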
When we talk about color saturation, we usually mean "perceived saturation", which is just contrast, as discussed above. Increasing contrast naturally increases perceived saturation and sharpness, and vice versa; but if we are not changing contrast, that is not what we are interested in.
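The contrast-saturation link is easy to demonstrate numerically. A minimal sketch, using a per-channel contrast stretch around mid-grey (the gain and pivot values are arbitrary) and HSV-style saturation as the metric:

```python
def hsv_saturation(rgb):
    """HSV-style saturation: (max - min) / max, defined as 0 for black."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

def boost_contrast(rgb, gain=1.5, pivot=0.5):
    """Per-channel contrast stretch around a mid-grey pivot, clipped to [0, 1]."""
    return tuple(min(1.0, max(0.0, pivot + gain * (c - pivot))) for c in rgb)

muted = (0.6, 0.4, 0.3)          # a dull brownish tone
punchy = boost_contrast(muted)   # (0.65, 0.35, 0.2)

print(round(hsv_saturation(muted), 3))   # 0.5
print(round(hsv_saturation(punchy), 3))  # 0.692
```

Nothing touched the "color" per se; pushing the channels apart with plain contrast is enough to raise measured saturation, which is why contrast and perceived saturation travel together.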
However, real color saturation is defined differently. AFAIK, it depends on how wide the absorption peaks of the sensitizing dyes are. If they overlap a lot, a large number of hues can be reproduced life-like, because there are no deep "valleys" in the sum of the three spectral curves between the primary colors; but at the same time, color saturation is somewhat subdued. OTOH, if the dyes have narrower absorption spectra, color saturation can be increased at the cost of hue fidelity: some hues are then reproduced too light and some too dark, but that is exactly one of the reasons why the image looks more vivid.
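The "valleys" idea can be sketched with a toy model: stand in for the three layers' spectral responses with Gaussians (the peak wavelengths and widths here are illustrative, not measured dye data) and compare the summed response between two peaks for broad vs. narrow curves:

```python
import math

def peak(wl, center, width):
    """Gaussian stand-in for one layer's spectral response (toy model)."""
    return math.exp(-((wl - center) / width) ** 2)

def summed_response(wl, width):
    # Three layers peaking in the blue, green and red; centers are illustrative.
    return sum(peak(wl, c, width) for c in (450, 550, 650))

for width in (60, 25):                     # broad vs. narrow sensitization
    valley = summed_response(500, width)   # between the blue and green peaks
    top = summed_response(550, width)      # at the green peak
    print(f"width {width}: valley/peak = {valley / top:.2f}")
```

With the broad curves the valley at 500 nm is nearly as high as the peaks, so an in-between hue still registers; with the narrow curves the valley is close to zero, so such hues get pushed toward the nearest primary, rendered too light or too dark, which is the vivid-but-less-faithful trade-off described above.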
But is this connected to exposure or densitometry in any way, or is it just a film design parameter that stays constant regardless of exposure? How is color saturation controlled when a film is designed? Can we affect real color saturation, as opposed to contrast, by our choice of exposure and/or processing? This question always goes unanswered.