# Algorithm/Numerical Approach for Computing CI (Contrast Index)

Discussion in 'B&W: Film, Paper, Chemistry' started by Rafal Lukawiecki, Aug 30, 2013.

1. ### Rafal Lukawiecki, Subscriber

Messages: 809 | Joined: Feb 23, 2006 | Location: Co. Wicklow | Shooter: Multi Format

Recently I've been plotting my film test curves using R (sorry Bill), which I also use for smoothing them. I can calculate a myriad of gradients by getting a linear regression of any section of the fitted smooth curves, but I would like to code a procedure for calculating CI. I have used the hand-held technique, using the CI "ruler" with points marked off at 0.2 and 2.2 log E, and I think I understand the concept of the two concentric arcs described here. I have also seen discussions about automating curve-related plotting in several threads.

While there is a lot of discussion about curve fitting, that is not one of my issues—I find that R fits a good enough curve for me. It usually does it using LOESS, and the fitted curve is a little odd just to the left of the point where the toe takes off, but this is not my concern, as the search for CI takes place further to the right of that point anyway, in the section where the curve seems to look good. Besides, I am sure it wouldn't be too hard to fix the wonky section if needed; after all, the smooth fit uses splines.

What I would like to ask for, please, is a pointer towards a suitable algorithm that uses the just-found curve to calculate the CI. I assume no weird curve shapes, and generally good inputs. If nothing can be found, then I will be resigned to writing a short loop that "walks" the curve looking for three points on a common straight line: the first at the density of FB+F, the second lying on the curve at a distance of 0.2 from the first, and the third, also on the curve, at a distance of 2.2 from the first. But I am sure someone must have already coded that in a more elegant way than just doing a brute search along the curve.

FYI, here is an example of the plots; I'd be happy to share the few lines of R code if anyone cares.

Many thanks.

2. ### Bill Burk, Subscriber

Messages: 4,984 | Joined: Feb 9, 2010 | Shooter: 4x5 Format

If I were programming this, I'd mimic what I already do: treat the curves as triangles and solve for intersections. Since I already physically slide the meter back and forth to find where the curve crosses the two arcs at the same angle, I'd do the same thing in code, recursively trying different points until I get within 0.01 of the answer.
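A minimal Python sketch of that slide-and-check search (my own illustration, not Bill's code; all helper names are made up, and it assumes a density function `D` that is zeroed at base+fog and non-decreasing over the search range):

```python
import math

def cross_arc(D, x1, radius, hi=6.0, iters=80):
    """Bisect for the x > x1 where the curve (x, D(x)) crosses the arc
    of the given radius centred at (x1, 0) on the baseline."""
    lo = x1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.hypot(mid - x1, D(mid)) < radius:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def collinearity_error(D, x1):
    """Score a candidate centre (x1, 0): find the crossings of the 0.2
    and 2.2 arcs; the chord between them is exactly 2.0 when the three
    points are collinear, and longer when the curve bows away."""
    if D(x1) >= 0.2:                 # the 0.2 arc no longer reaches the curve
        return float("inf"), None, None
    x2 = cross_arc(D, x1, 0.2)
    x3 = cross_arc(D, x1, 2.2)
    chord = math.hypot(x3 - x2, D(x3) - D(x2))
    return abs(chord - 2.0), x2, x3

def contrast_index(D, lo=0.0, hi=1.5):
    """Golden-section search for the centre that minimises the
    collinearity error, i.e. where the CI meter would come to rest."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    # pull the upper bound back until the 0.2 arc can reach the curve
    # (assumes D(lo) < 0.2, i.e. the interval starts below the toe)
    while hi - lo > 1e-9 and collinearity_error(D, hi)[0] == float("inf"):
        hi = lo + 0.9 * (hi - lo)
    while hi - lo > 1e-9:
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        if collinearity_error(D, c)[0] < collinearity_error(D, d)[0]:
            hi = d
        else:
            lo = c
    _, x2, x3 = collinearity_error(D, 0.5 * (lo + hi))
    return (D(x3) - D(x2)) / (x3 - x2)
```

Golden-section works here because, for well-behaved curves, the collinearity error has a single minimum of zero at the meter's resting position.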

3. ### cliveh, Subscriber

Messages: 4,717 | Joined: Oct 9, 2010 | Shooter: 35mm RF

What on earth are you talking about?

4. ### ic-racer, Member

Messages: 7,473 | Joined: Feb 25, 2007 | Location: Midwest USA | Shooter: Multi Format

Do you know Microsoft Excel very well? I made two spreadsheets that could be modified to do what you want. One walks along the data points using an "IF-THEN" algorithm to find the correct points to create an ASA triangle and find the 0.1 point.
The other spreadsheet finds a few points on the curve to calculate "W" speed.
Let me know if you want the spreadsheets. http://www.apug.org/forums/forum37/...preadsheet-easily-calculatre-0-3g-speeds.html

You would be free to experiment with them to make the needed changes to do a proper contrast index test, assuming this type of nomenclature makes any sense to you:
`=IF(AND(B8>=$E$35,B7<$E$35),(((C7-C8)/(B7-B8))),0)`

PS: Your links above reminded me that I have changed my viewpoint on curve fitting. As may not be evident, my spreadsheets for EI and contrast calculations DON'T curve fit. I use simple triangulation to interpolate between the dots. That simple compromise has saved me years of worrying about solving polynomial film curve equations for unknown variables, and allowed me to take more photographs.
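The same bracket-and-interpolate idea, sketched in Python rather than a spreadsheet (my illustration, not the actual workbook): scan adjacent data points for the pair that brackets the target density, just as the `IF(AND(...))` formula brackets the crossing row, then interpolate linearly.

```python
def crossing(log_e, density, level):
    """Walk adjacent data-point pairs; where the density readings first
    bracket `level`, linearly interpolate the log E of the crossing."""
    for (x0, y0), (x1, y1) in zip(zip(log_e, density),
                                  zip(log_e[1:], density[1:])):
        if y0 < level <= y1:
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)
    return None  # level never reached

# e.g. find the 0.1-over-base+fog point from zeroed readings
print(crossing([0.0, 0.3, 0.6, 0.9], [0.0, 0.05, 0.25, 0.55], 0.1))
```

No curve fitting at all — exactly the triangulation-between-the-dots compromise described above.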

Last edited by a moderator: Aug 30, 2013
5. ### Bill Burk, Subscriber

One measure of a film's characteristic curve is how steep it is. Gamma is one name for the slope of the curve. Think of 45-degrees as 1.0 because for every unit of exposure difference on the film you get one unit of density difference on the negative. 0.0 would be if you forgot to develop the film. Infinity would be where it goes instantly from clear to black. (Actually that is unlikely to ever happen, and a special meaning is given to the term: "Gamma Infinity" - which is the "most contrast you will ever get" in that developer, no matter how long you develop it, even if it's only 1.5 or 2.0.)

Contrast Index is another term used to describe the slope of the curve. Contrast Index and Gamma both just take two points on the curve and calculate the slope between them; they don't use curve fitting to describe the curve, it's just a straight line between two points. The two differ from each other in the choice of endpoints you measure.

Contrast Index uses a couple of points spread out about as far apart as the part of the film you would normally use to print from. So it has the advantage of relating development to printing needs. Think of a protractor with an inner arc and an outer arc that you slide left and right until both arcs touch the line at the same angle (marked as slope to the center of the arcs). That's all Rafal wants to do, but he wants to do it with a computer program instead of on paper.

Contrast Index Meter
Here is a graph of Rafal's film tests, with "C" marks on each curve showing the two endpoints used to determine Contrast Index.

The ASA triangle is also shown; it is very specific, and Rafal luckily hit it at 11 minutes. Note that even though the ASA triangle literally measures a 0.62 slope by the points that define it, the Contrast Index of that same curve is 0.75, which is significantly different...

If I had a Normal scene, I would develop it 7 minutes, because I aim for a specific CI and for Normal I would aim for 0.62 CI. If I used Gamma instead, I might arrive at a different recommended developing time for Normal, and if I were to develop to the ASA standard, I'd possibly develop 11 minutes.

Last edited by a moderator: Aug 30, 2013
6. ### Rafal Lukawiecki, Subscriber

Thank you, Bill, for the suggestion of solving triangles. Dale, I'd love to have a look at those spreadsheets; I suppose I know enough Excel. I'd still plan to lift the logic into R, as I find it easier and faster to manage the data+code+plotting (ggplot2) in it for my film tests. I'd be happy to share the resulting code. Having said that, it seems like you're "walking" along the axis, similar to what I thought of doing. I wonder if there is some more holistic approach (solving geometric inequalities?) that doesn't involve checking all the points until a certain precision of the solution has been reached. Many thanks, everyone.

7. ### Gerald C Koch, Member

Messages: 6,248 | Joined: Jul 12, 2010 | Location: Southern USA | Shooter: Multi Format

Contrast Index is something that Kodak came up with to replace the traditional Gamma for their films. It never really caught on with other manufacturers.

8. ### dpgoldenberg, Member

Messages: 51 | Joined: Feb 18, 2009 | Shooter: Med. Format RF

Rafal,
I am the fool who started the first of the APUG threads you listed, in which I suggested a simple mathematical function for fitting densitometric data. Suffice it to say that my methods were not instantly adopted by thousands of photographers, leading to the fame and fortune that I had imagined for myself. But, I have used that function to calculate the Kodak contrast index, and I think that I can suggest a general way to formulate the problem.

The key thing that is needed is a relationship between the exposure and density, which is presumably based on your measured data. I used the function I described, with fit values for the 3 parameters that define that function. Having an analytical function makes things a bit easier, but one could also use a table of closely spaced points.

Let's call the exposure values "x" and the density at a given value of x "D(x)". For the Kodak definition, the exposure values should be given in log exposure (base 10) units, the same logarithmic scale as the density axis, so that the 0.2 and 2.2 arc distances are measured consistently. The CI definition specifies 3 points that lie on a common line:

1. A point on the baseline, where density - (base + fog) = 0.
2. A point to the right of point 1 that lies on the curve and is 0.2 units away from point 1.
3. A point to the right of point 2 that lies on the curve and is 2 units away from point 2.

Call these points:
(x1, D(x1))
(x2, D(x2))
(x3, D(x3))

From Pythagoras, the relationships among the points can be written as:

(x2 - x1)^2 + (D(x2)-D(x1))^2 = 0.2^2
(x3 - x2)^2 + (D(x3)-D(x2))^2 = 2.0^2
(x3 - x1)^2 + (D(x3)-D(x1))^2 = 2.2^2

This gives three equations in three unknowns. Once the values for the three x values are known, then the contrast index can be calculated from the slope of the line, using any pair of points:

CI = (D(x3)-D(x2))/(x3-x2)

Solving the equations has to be done numerically, i.e. iteratively, at least for my analytical function or a table of data points. This requires a starting guess for the three x values, but that isn't too difficult.

I have done this in Python, using the optimize.fsolve function in the scipy package, but I'm sure that similar functionality can be found in R.
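For anyone without scipy to hand, the same geometry can also be solved with plain Newton iteration; here is a hedged Python sketch (mine, not David's actual code). Instead of the three distance equations it uses an equivalent two-unknown form — the baseline point x1 and the line angle theta — and requires that the points at distances 0.2 and 2.2 along the line both land on the curve; `D` and the starting guesses are assumptions:

```python
import math

def ci_solve(D, x1=0.9, theta=0.55, iters=60, h=1e-7):
    """Solve an equivalent two-unknown form of the system: p1 = (x1, 0)
    sits on the baseline, and a straight line through p1 at angle theta
    must meet the curve D at distances 0.2 and 2.2 from p1.  When both
    residuals vanish the three CI points are collinear by construction,
    and the Contrast Index is simply tan(theta)."""
    def G(x, t):
        c, s = math.cos(t), math.sin(t)
        return (D(x + 0.2 * c) - 0.2 * s,
                D(x + 2.2 * c) - 2.2 * s)

    for _ in range(iters):
        g1, g2 = G(x1, theta)
        if abs(g1) < 1e-12 and abs(g2) < 1e-12:
            break
        # finite-difference Jacobian, then a Newton step via Cramer's rule
        a1, a2 = G(x1 + h, theta)
        b1, b2 = G(x1, theta + h)
        j11, j21 = (a1 - g1) / h, (a2 - g2) / h
        j12, j22 = (b1 - g1) / h, (b2 - g2) / h
        det = j11 * j22 - j12 * j21
        x1 += (-g1 * j22 + g2 * j12) / det
        theta += (-g2 * j11 + g1 * j21) / det
    return math.tan(theta), x1
```

On well-behaved curves this converges in a handful of steps from a rough starting guess.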

I hope that this is a bit of help. I'm happy to send you the Python code if that would be useful.

David

9. ### Bill Burk, Subscriber

Hi David,

The clinker in the formula is that the point is not 0.2 units to the right...

You have a baseline zero at B+F and two arcs, one with a radius of 0.2 units and the next with a radius of 2.2 units

You look for a condition where the curve, zeroed at B+F, crosses the two arcs at the same angle.

10. ### dpgoldenberg, Member

Bill,
My words may not have been as clear as I intended, but I think that the equations are correct. The equations define the distances between the points in two dimensions, not just the distance along the horizontal axis. (That's where Pythagoras comes in.) The first two equations define the distances between the first and second points (0.2) and the second and third points (2). The third equation ensures that all three lie on the same line. Also, D(x1) should be specified to be b+f.

David

Last edited by a moderator: Aug 31, 2013
11. ### Rafal Lukawiecki, Subscriber

David, your equations look good. My data is zeroed at fb+f, having subtracted the y-axis minimum of the points, so D(x1) = 0, which makes it even simpler. As the fitted smooth spline model seems to be of high quality (p < 0.01) for my current data, and I expect it should be so for most successful film tests, the R predict(x) function will safely give me the interpolated value of D(x), using the spline, for any point I need along the range of the rel log E axis. So it looks like I'm now in a position to use the curve to solve your equations for the CI condition, looking for its first occurrence from the left, I suppose, in case the condition could be fulfilled more than once on a strange curve. Thanks for the suggestion of optimize.fsolve in scipy, that's a great tip. I will try uniroot or rootSolve/multiroot in R. If I can't make it work, I'll ask for your Python.

Last edited by a moderator: Aug 31, 2013
12. ### Bill Burk, Subscriber

Oh, I see. Sounds good

13. ### ic-racer, Member

Though a little off topic, here is the "W-speed" meter for which I made the Excel spreadsheet to imitate its function. As you can see, it is more complicated than the Contrast Index meter; however, my solution to the problem is simpler, because I use linear interpolation between the data points rather than a complex function for the film curve.

My Excel-calculated approximation for "E", used to calculate the W-speed, is the red dot. I think it is a pretty good approximation:

Last edited by a moderator: Aug 31, 2013
15. ### Rudeofus, Subscriber

Messages: 2,570 | Joined: Aug 13, 2009 | Shooter: Medium Format

Rafal, the CI sort of assumes that you have a straight line as characteristic curve, while your LOESS approach most likely won't give you a straight line but some spline curve with variable slope over its range. I would therefore suggest a slightly different approach: create a model for what you expect as characteristic curve, e.g. a flat line in the b+f region, a straight line in the linear region and some form of smooth transition between these two straight lines. Once you have a model that covers all characteristic curves you are likely going to encounter, you can do an LMS fit to get the parameters for each set of measurement data. It should then be simple to associate a CI with the parameters you obtain for your model.

16. ### ic-racer, Member

Seems like CI was created for films that don't have a straight line.

17. ### Rafal Lukawiecki, Subscriber

R Code for Computing CI

Mission accomplished, thanks to everyone, especially David for pointing out the system of non-linear equations based on the Pythagorean triangle relationship. If you run this, this is what you get:

And the data you feed into it comes from a text file that looks like this:

If you want to play with it, you need my code (attached) and a copy of R, which is a free programming language and environment for statistics and numerical analysis, and which does nice plots. It runs on Windows, Macs, and Linux; get it from here if you feel geeky, or read about it on Wikipedia. My code is in the attachment; all you need to do is load it into R using 'source("CI_plotter.R.txt")', having placed it in the current directory. If you open that file, you can see how to use it; I gave some examples. Let me know if anyone would like any pointers. I don't mind plotting your data, if you wish, to find the CIs. I suppose this won't be a very popular request, anyway.

The R code, which I wrote, works for the various curve data I fed into it. You need to give it a series of numbers representing your step tablet exposures (so from 0 on the left, increasing towards the right) in a column called "He", and any number of columns containing your densitometer readings. R will plot the points and it will try fitting curves, using Bézier splines with 6 degrees of freedom by default. You can decrease that number for more smoothing (the left-of-toe region will look odd, but the calculations will work) or increase it for less smoothing. You can also play with other smoothers, including LOESS, if you uncomment the right lines in the code. In either case, the CI computations should work.

I use nleqslv, which is pretty much as close to SciPy's fsolve, which David mentioned, as I could find today. The alternative I have tested, multiroot (from rootSolve), was less forgiving of bad choices of the "starting point". This is a weakness of my code (I am sure one of many!). Sometimes, for some curves, it will not compute CI; it will display "NA" on the plot and suggest that you change the "starting.point". This is a parameter at the top of the program, which denotes the best guess of x[1], x[2], and x[3] as per David's explanations, that is, the rel log E values that are supposed to correspond to the projections onto the x axis of the 3 key CI determination points. In other words, those are: the centre of the two arcs, and the intersections of the 0.2 and 2.2 arcs with the curve. I defaulted it to (0.9, 1, 2) by trial and error, but if it does not work for you, nudge them a bit either way, especially the first one.

Anyone who would like to contribute a good way to guess those automatically, please let me know. I have tried interpolating them from a straight-line regression of the data points, etc., but that was worse than using my best manual guesses. Some other time I will add a function to plot the CI ~ dev time curve...

OK, back to the darkroom tomorrow. I miss a low-contrast test in this data; I will try a 4.5 min time to complete the exercise.

(Attachments: 112 KB, 640 bytes, 6.9 KB)
18. ### Rafal Lukawiecki, Subscriber

Delta 100 4x5 DDX 1+4 Tank Extended Results

...and in case anyone is interested in the actual subject of my testing, Delta 100 4x5 developed in DDX 1+4 at 20C in a tank (CombiPlan) with fairly vigorous agitation (30 sec initially, followed by 3 inversions lasting about 5 seconds in total every 30 sec), I attach "extended" results. I obtained those by exposing two strips with the Eseco SL-2, both using green light. The second strip got three times the exposures of the first, but it does not actually amount to a triple exposure, it seems, and I am in reciprocity territory, which this film seems to be good at. Undoubtedly, this is a questionable way of doing this, as I am numerically combining two separate tests into a single "extended" one, as if it came from one, longer step tablet, unlike in my previous post, where the tests were "normal". Nonetheless, the data seems OKish to my untrained eyes, and I welcome your feedback about this 9-stop range behaviour.

PS. To compute the first of the curves, I needed to adjust the "starting.points" to (0.95, 1, 2). Perhaps this should be the default.

19. ### Bill Burk, Subscriber

Nice Rafal!

Now all you need is a Time/CI curve

20. ### Rudeofus, Subscriber

The author who proposed CI did not have Rafal's jiggly spline curves in mind, though. Niederpruem et al. tried to account for toe shape and possible upswept/downswept curves, not for randomly swept spline segments that pointlessly try to fit noisy measurement data. It speaks volumes that Rafal's curve fits look, let's call it euphemistically, "a bit odd" in the toe region. Oh well ...

21. ### Rafal Lukawiecki, Subscriber

Indeed, Rudi, the upsweep towards less-than-none exposure looked humorous. Bear in mind, however, that if you increase the number of degrees of freedom of the Bézier spline to 7 or more, the toe looks just fine, yet the calculation still gets the CI; what we lose then is the smoothing that neatly takes care of the statistical measurement error. Having said that, I have yet to play with other types of splines, including log-influenced ones, which might have that "perfect" look.

But the next thing I plan to do is what Bill has suggested, that is the Time/CI curve. By the way, if anyone has a curve data series with their own, known CI, I'd be happy to check the algorithm against those numbers.

22. ### ic-racer, Member

Does the program you wrote for R let you solve the resulting spline equation easily for various x values, like x = 0.1, and give the resulting y value? This has been the stumbling block with the software I had been using, hence my conversion to point-to-point interpolation.

Last edited by a moderator: Sep 1, 2013
23. ### Rudeofus, Subscriber

Splines are commonly used to describe arbitrarily curved shapes, because they can be made to fit almost any point set with a smooth looking curve. In the same fashion you can make them LMS fit a point cloud, and that's what you did with your LOESS approach, or to say it more accurately: that's what the LOESS algorithm did for you. The more degrees of freedom you allow the LOESS algorithm, the closer the result will fit your point cloud, but remember that your point cloud is still noisy data!

What you really want is the following:
1. Find a model that fits the characteristic curves you expect. Allow this model as few parameters as possible, because the more parameters you allow, the more data points you need to get meaningful and reliable numbers for each parameter. Look at curves of the log(1+exp(a*x)) or log(1+exp(a*x + b*x^2)) type; with these you can describe most characteristic curves that a reasonable developer will give you. The first type describes straight slopes, while the second type also allows for upswept (b>0) and downswept (b<0) curves. Use shifting to place the toe where you need it, and use scaling to match the size of the toe area.
2. Use an LMS fit to fit that model to your noisy data set. Since you have very few parameters with very distinct effects, numerical stability should be fine, so no more NaN results.
3. Extract CI, or whatever you were looking for, from the best-fit model. Since all data points have an impact on the final result, you should get very reliable results even with very noisy data. And yes, densitometer readings can be quite noisy.
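As a hedged illustration of such a model (my own sketch; `slope`, `toe`, and `sharp` are made-up parameter names, and the fitting step itself is left to whatever optimizer you prefer), a shifted and scaled log(1+exp(...)) curve is flat at the base, has a smooth toe, and approaches a straight line whose gradient the fitted parameters hand you directly:

```python
import math

def model(x, slope, toe, sharp=8.0):
    """Characteristic-curve model: density ~ 0 for x well below `toe`,
    asymptotic gradient `slope` well above it, and a smooth toe of
    width roughly 1/sharp in between."""
    return (slope / sharp) * math.log(1.0 + math.exp(sharp * (x - toe)))

# far above the toe the numerical gradient approaches `slope`, so a
# fitted model yields the straight-line gradient with no point-picking
g = (model(3.0, 0.62, 0.4) - model(2.9, 0.62, 0.4)) / 0.1
```

Once the few parameters are fitted, the CI follows from the model analytically rather than from any individual noisy reading.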

24. ### RalphLambrecht, Subscriber

Messages: 8,214 | Joined: Sep 19, 2003 | Location: Florida | Shooter: Multi Format

That's rather complicated. That's why I use a simple average gradient, as Phil Davis shows in his book 'Beyond the Zone System'. That's all you need, in my opinion.

25. ### Rafal Lukawiecki, Subscriber

Dale, indeed you can do it both ways. If you know x (say, log rel E) and want to get y (say, density), you just use predict(), which takes the curve model as its first parameter and a vector of x values for which you would like to find y. It can be just a single-valued vector, as in:

predict(my.curve, data.frame(He=1.4))

which will compute density at a log E of 1.4. Since I use this a few times, I define a function D that takes an x and the curve and returns the y (see around line 130 in the code). However, to do it the other way, that is, to find the x for which y has a certain value, say to find the log E at which density is 0.1 over fb+f, you use uniroot(), which takes a function, including non-linear ones that may result from using splines, and looks for an x at which it is 0. So if you want to find the x for y = 0.1, you just subtract 0.1 inside the function. The call would be something like:

uniroot(function(x) D(x, curve.model) - 0.1, c(0, 3))

assuming D is the function mentioned earlier. Let me know if you would like me to add code for this into the script. I might extend the code, at a later point, to look for ISO triangles, and possibly the other gradient calculations.
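For readers outside R, uniroot() here is doing ordinary root bracketing on a monotone function; a Python analogue of the density-to-exposure inversion (with a made-up, idealized `D` in the example) looks like this:

```python
def invert(D, target, lo, hi, iters=60):
    """Find x in [lo, hi] with D(x) == target by bisection; assumes D
    is non-decreasing and that [lo, hi] brackets the target level."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if D(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. the log E where density reaches 0.1 over base+fog, for an
# idealized zeroed curve D(x) = 0.6 * (x - 0.3)
x = invert(lambda x: 0.6 * (x - 0.3), 0.1, 0.0, 3.0)
```

Subtracting the target inside the function, exactly as in the uniroot() call above, turns "find where y = 0.1" into "find where f(x) = 0".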

26. ### Rafal Lukawiecki, Subscriber

I would not disagree with you, Ralph. After all, it was your great book that got me onto the path of material testing, and many thanks for your very useful Excel spreadsheets.

However, I have found that I wanted to understand the various ways of parametrising the curves, and their gradients, as I was getting into it all. I suppose it is personal. Also, at this stage, I am finding that, for my work, CI matches my needs a little better than other gradients. Since there were no easily available tools for computing it, I just wrote one yesterday, so it is quite easy, for me at least, to get a CI at the push of a button.