The contour plot above maps a real production process where the desired product immediately decomposes into a pesky, tarry byproduct that is difficult and expensive to remove. The
process is currently averaging approximately 15 percent tar, and each achievable percent reduction equates to $1 million (1970 dollars) in annual savings.
Process history has determined three variables to be crucial for process control: temperature, copper sulfate concentration, and excess nitrite. Any combination of these three variables within the ranges of temperature, 55-65° C; copper sulfate concentration, 26-31 percent; and excess
nitrite, 0-12 percent would represent a safe and economical operating condition. The current operating condition is the midpoint of these ranges.
For purposes of experimentation only, the equipment is capable of operating in the following ranges if necessary: temperature, 50-70° C; copper sulfate concentration, 20-35 percent; and excess nitrite, 0-20 percent.
Suppose you had a budget of 25 (expensive) experiments with which to answer these questions:
- Where should the three variables be set to minimize tar production?
- What percent tar would be expected?
- What's the best estimate of the process variation (i.e., tar ± x percent)?
This is the scenario I use to introduce my experimental design seminars. I form groups of three to four people and give them a process simulator where they can enter any condition and get the resulting percent tar.
It almost never fails: I get as many different answers for optimum settings, resulting tar, and variation as there are groups in the room – and just as many strategies (and number of experiments run) for reaching their conclusions. Human variation!
I have each group present its results to me, and I act like the many mercilessly tough managers to whom I have made similar presentations.
General observations:
- Most use a one-factor-at-a-time approach: they hold two of the variables constant while varying the third, then try to further optimize by varying the other two around their best result.
- Each experiment seems to be run based only on the previous result.
- Some look at me smugly and run the cube of a two-level, three-variable factorial design (often getting the worst answers in the room; a sketch of such a design follows this list).
- Some run more than the allotted 25 experiments.
- Some go outside of the established safe variable ranges.
- Most find a good result and then try a finer and finer grid to further optimize it.
- There is always one group that claims to have optimized in fewer than 10 experiments, and they (and everyone else) look at me like I'm nuts when I tell them:
- They should repeat their alleged optimum (and it uses up an experiment).
- Repeating any condition uses up an experiment.
- I'm accused of horrible things when the repeated condition gets a different answer (sometimes differing by as much as 11-14). I simply ask, "If you run a process at the same conditions on two different days, do you get the same results?"
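For anyone who hasn't seen one, here is a minimal sketch (in Python) of what that cube looks like, padded with a few replicated center points. The number of replicates and the commented-out run_process call are illustrative assumptions on my part, not the strategy the class eventually converges on.

```python
# Minimal sketch: the eight corners of a two-level, three-variable factorial
# design ("the cube"), plus replicated center points. The low/high settings
# are the safe operating ranges from the text; everything else is illustrative.
from itertools import product
from statistics import stdev

levels = {
    "temp_C":      (55.0, 65.0),
    "cuso4_pct":   (26.0, 31.0),
    "nitrite_pct": (0.0, 12.0),
}

# The eight corner runs of the cube: every combination of low/high settings
corners = [dict(zip(levels, combo)) for combo in product(*levels.values())]

# A center point, replicated several times; repeating the same condition is
# what buys an honest estimate of run-to-run variation
center = {name: (lo + hi) / 2 for name, (lo, hi) in levels.items()}
center_replicates = [dict(center) for _ in range(4)]

design = corners + center_replicates
print(f"{len(design)} runs: {len(corners)} corners + {len(center_replicates)} center replicates")

# If run_process(settings) returned the measured percent tar (a hypothetical
# simulator call), the replicates alone would estimate the noise:
# tars = [run_process(s) for s in center_replicates]
# print("estimated run-to-run standard deviation:", stdev(tars))
```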
What usually happens as a result:
- I'm often told the "process is out of control," so there's no use experimenting.
- Most estimates of process variability are naively low.
- Groups have no idea how to present results in a way that would sell them to a tough manager.
- The suggested optimal excess nitrite settings are all over the 0-12 percent range, even though nitrite is modeled to have no effect and should be set to zero.
My simulator generates the true number from the actual process map above and adds random, normally distributed variation with a standard deviation of four (the actual process had a standard deviation of eight!). Looking at the contour plot, tar is minimized at 65° C and approximately 28.8 percent CuSO4, resulting in six to eight percent tar ± roughly 8-10 percent on any individual production run (with a run-to-run standard deviation of four in the simulator, and eight in the actual process, single results can easily land that far from the underlying value). Relative to the current 15 percent average, that minimum is worth roughly $7 million to $9 million per year.
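To make the mechanics concrete, here is a minimal Python sketch of a simulator of this kind. The quadratic true_tar surface is a stand-in chosen to roughly match the numbers above (minimum near 65° C and 28.8 percent CuSO4, about 15 percent tar at the current midpoint); it is not the actual process map, but the noise term is the advertised normal variation with a standard deviation of four.

```python
# Minimal sketch of how the classroom simulator behaves: a "true" percent tar
# from a response surface, plus independent normal noise on every run.
import random

def true_tar(temp_c: float, cuso4_pct: float, nitrite_pct: float) -> float:
    """Hypothetical stand-in for the real contour map: a quadratic bowl with
    its minimum near 65 C and 28.8 percent CuSO4, and no nitrite effect."""
    return 7.0 + 0.32 * (temp_c - 65.0) ** 2 + 1.5 * (cuso4_pct - 28.8) ** 2

def run_process(temp_c: float, cuso4_pct: float, nitrite_pct: float) -> float:
    """One simulated experiment: true value plus N(0, 4) run-to-run noise."""
    return true_tar(temp_c, cuso4_pct, nitrite_pct) + random.gauss(0.0, 4.0)

# The same condition on two different "days" rarely gives the same answer:
current = (60.0, 28.5, 6.0)   # midpoint of the safe operating ranges
print(run_process(*current))
print(run_process(*current))
```

Running the same condition twice, as in the last two lines, is exactly the repeat that draws the accusations above.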
In 1983, I heard the wonderfully practical C.D. Hendrix say, "People tend to invest too many experiments in the wrong place!"
As it turns out, by the end of the class, human variation is minimized when every group independently agrees on the same 15-experiment strategy (a few choose an alternative, equally effective 20-experiment strategy). When they see quite different numerical results from each individual design, they are initially leery, but then pleasantly surprised when they all get pretty close to the real answer.
Reduced human variation = higher-quality, more consistent results in only 15-20 experiments! They are now in "the right place" and have 5-10 more experiments left to refine their optimum.
More next time...
Kind regards,
Davis