Tag Archives: corrective action

Retraining Can’t Fix This

In the course of an average workday we make hundreds of decisions. Some of those decisions require engaging our conscious awareness. In my previous post I described how the quality of those decisions deteriorates as that awareness, or willpower, fatigues with use.

However, there are decisions where human error occurs with certainty even if our attention is totally focused on the task. Consider the Muller-Lyer[1] illusion below:

The two vertical lines are of the same length. Even knowing this, we continue to perceive the line on the left as longer than the line on the right. The “fact” that the two lines are of different lengths is simply obvious to us. Because of its obviousness we don’t stop to check our judgment before acting on it. Such actions, based on erroneous perception, are likely to produce faulty outcomes.

This error in our human perception/cognition system is hard-wired into our brains. No amount of retraining or conscious effort will correct it. So corrective actions that identify retraining as the way to prevent recurrence of this type of error won’t be effective; they will only serve to demoralize the worker. What, then, is an effective corrective action for such errors?

We can develop and use tools and methods that circumvent the brain’s perception/cognition system, for example by overlaying reference marks (the red lines in the figure below) or by actually measuring each line and comparing the values. This does add a step to the evaluation process: an after-the-fact fix for a faulty design. Ideally, though, we would want our designs to take human limitations into account and avoid creating such illusions in the first place.

Links
[1] Muller-Lyer illusion. https://en.wikipedia.org/wiki/Muller-Lyer_illusion Retrieved 2017-06-22

Human Error

Often, investigators identify the root cause of a problem as human error. But what exactly is human error?

An action may be judged as an error only in relation to a reference or standard. So first a standard on how to perform the task must exist. Sometimes such a standard is defined in a documented procedure. On occasion it may also be taught by a master to an apprentice on the job. Most times we just figure it out through a combination of past experience, current observations, and some fiddling. Human error, then, is action by a human that deviates from the standard.

When we judge the root cause of a problem as human error we’re making certain assumptions: 1] that a standard exists, and 2] that the standard is adequate, i.e. mindfully following it produces the expected outcome.

Let’s grant that both the above assumptions are true, and even grant that the root cause of a problem was the failure of the worker to follow the standard. What, then, should the corrective action be to prevent recurrence of the problem? In my experience it has almost always been defined as “retraining”. But such a corrective action assumes that the worker failed to follow the standard because they don’t know it. Is this true? If not, retraining is pure waste and won’t do a damn thing to prevent the recurrence of the problem.

If a proper standard exists and the worker has been trained to it, then there must be some other reason for their failure to follow it. Skill-based errors (i.e. slips and lapses)[1] can occur when the worker is unable to pay attention to or focus on a task they are otherwise familiar with. So it’s not a training issue. In my previous post I wrote about how willpower, our conscious awareness, is like a muscle: it can fatigue from use. As willpower is depleted the mind resorts to mental shortcuts or habits. This is how errors creep in.

We should not rely only on our ability to remain attentive and focused to ensure that a task is performed without failure. Instead, we must design tasks in such a way that failure is unlikely, if not impossible. Through design thinking we can develop tools, methods, and systems that help us perform better.

Links
[1] Understanding human failure. http://www.hse.gov.uk/construction/lwit/assets/downloads/human-failure.pdf Retrieved 2017-06-15

Dealing with Nonconforming Product

A particular process makes parts of diameter D. There are 10 parts produced per batch. The batches are sampled periodically and the diameter of all the parts from the sampled batch is measured. Data, representing deviation from the target, for the first 6 sampled batches is shown in Table 1. The graph of the data is shown in Figure 1. Positive numbers indicate the measured diameter was larger than the target while negative numbers indicate the measured diameter was smaller than the target. The upper and lower specification limits for acceptable deviation are given as +/- 3.

Table 1. Data for six batches of 10 parts each. The numbers represent the deviation from the target.

Figure 1. Graph of the data from the table above. The most recent batch, batch number six, shows one part was nonconforming.

The most recent batch, sampled batch number six, shows one of the 10 parts with a diameter smaller than the lower specification limit. As such, it is a nonconforming part.
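As a minimal sketch of this check, the code below flags any part whose deviation from target falls outside the +/- 3 specification limits. The batch values shown are hypothetical stand-ins for illustration, not the actual Table 1 data.

```python
# Minimal sketch: flag parts whose deviation from the target diameter falls
# outside the +/- 3 specification limits given in the text.
LSL, USL = -3.0, 3.0  # lower and upper specification limits on deviation

def nonconforming_parts(batch):
    """Return (part number, deviation) pairs for parts outside the spec limits."""
    return [(i, d) for i, d in enumerate(batch, start=1) if d < LSL or d > USL]

# Hypothetical batch of 10 deviations (not the actual Table 1 values):
example_batch = [0.4, -1.2, 2.1, -0.3, 0.8, -3.4, 1.5, -0.9, 0.2, 1.1]
print(nonconforming_parts(example_batch))  # -> [(6, -3.4)]
```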

The discovery of a nonconforming product triggers two parallel activities: i) figuring out what to do with the nonconforming product, and ii) addressing the cause of the nonconformance so that it does not occur again.

PRODUCT DISPOSITION

Nonconforming product may be repaired or reworked when possible, but it can always be scrapped. Each of these three options has its own set of complications and costs.

Repairing a nonconforming product involves additional steps beyond those usually needed to make the product. This additional processing has the potential to create previously unknown weaknesses in the product, e.g. stress concentrations. So repaired product will need to be subjected to testing that verifies it still satisfies its intended use. For this particular case, repair is not possible because the diameter is smaller than the target; material cannot be added back. Repair would have been possible if the diameter had been larger than the target, since excess material could be removed.

Reworking a nonconforming product involves undoing the results of the previous process steps, then sending the product through the standard process steps a second time. Undoing those results requires additional process steps, just as repair does, and this additional processing has the potential to create previously unknown weaknesses in the product. Reworked product will therefore need to be subjected to testing that verifies it still satisfies its intended use. For this particular case, reworking is not possible for the same reason repair is not.

Scrapping a nonconforming product means destroying it so that it cannot be used accidentally. For this particular case, scrapping the nonconforming part is the only option available.

ADDRESSING THE CAUSE

In order to determine the cause of the nonconformity we first have to determine the state of the process, i.e. whether or not the process is stable. The type of action we take depends on the answer.

A control chart provides a straightforward way to answer this question. Figure 2 shows an Xbar-R chart for this process. Neither the Xbar chart (top) nor the R chart (bottom) shows uncontrolled variation. There is no indication of a special cause affecting the process. This is a stable process in the threshold state: while it is operating on target, i.e. its mean is approximately equal to the target, its within-batch variation is greater than we would like. Therefore, there is no point trying to hunt down a specific cause for the nonconforming part identified above. It is most likely the product of the chance variation that affects this process; a result of the process’s present design.
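As a rough sketch of how the limits behind such a chart might be computed, the code below uses the standard Xbar-R control chart constants for subgroups of size 10 (A2 = 0.308, D3 = 0.223, D4 = 1.777). The `batches` argument is a placeholder for the sampled data; this illustrates the calculation, not the exact software used for Figure 2.

```python
# Sketch: Xbar-R control limits for subgroups of size n = 10.
# Standard control chart constants for n = 10: A2 = 0.308, D3 = 0.223, D4 = 1.777.
A2, D3, D4 = 0.308, 0.223, 1.777

def xbar_r_limits(batches):
    """batches: list of lists, each holding the 10 measured deviations of one sampled batch."""
    xbars = [sum(b) / len(b) for b in batches]    # subgroup means
    ranges = [max(b) - min(b) for b in batches]   # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)             # grand average
    rbar = sum(ranges) / len(ranges)              # average range
    return {
        "xbar_center": xbarbar,
        "xbar_ucl": xbarbar + A2 * rbar,
        "xbar_lcl": xbarbar - A2 * rbar,
        "r_center": rbar,
        "r_ucl": D4 * rbar,
        "r_lcl": D3 * rbar,
    }

# A subgroup mean or range falling outside these limits would signal a special
# cause; none appears in the charts discussed here.
```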

Figure 2. Xbar-R chart built using the first six sampled batches. Neither the Xbar chart nor the R chart show uncontrolled variation. There is no indication of a special cause affecting the process.

In fact, the process was left alone and more data was collected (Figure 3). The Xbar-R charts do not show any unusual variation that would indicate external disturbances affecting the process. Its behavior is predictable.

Figure 3. More data was collected and the control limits were recalculated using the first 15 sampled batches. The process continues to look stable with no signs of external disturbance.

But even though the process is stable, it does produce nonconforming parts from time to time. Figure 4 shows that a nonconforming part was produced in sampled batch number 22 and another in sampled batch number 23. Still, it would be wasted effort to hunt down specific causes for these nonconforming parts. They are the result of chance variation that is a property of the present process design.

Figure 4. Even though the process is stable it still occasionally produces nonconforming parts. Sampled batch number 22 shows a nonconforming part with a larger than tolerable diameter while sampled batch number 23 shows one with a smaller than tolerable diameter.

Because this process is stable, we can estimate the mean and standard deviation of the distribution of individual parts; they were calculated to be -0.0114 and 0.9281, respectively. Assuming that the individual parts are normally distributed, we can estimate that this process will produce about 0.12% nonconforming product if left to run as is. Some of these parts will be smaller than the lower specification limit for the diameter; others will be larger than the upper specification limit. That is, about 12 nonconforming parts will be created per 10,000 parts produced. Is this acceptable?
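The 0.12% figure can be reproduced from the quoted estimates: with a mean of -0.0114, a standard deviation of 0.9281, and specification limits of +/- 3, the normal tail areas beyond each limit sum to roughly 0.0012. A small sketch, assuming scipy as the tool (the original does not name one):

```python
# Sketch: expected fraction nonconforming for this stable process, assuming
# individual deviations are normally distributed. The mean and standard
# deviation are the estimates quoted in the text; scipy is an assumed tool.
from scipy.stats import norm

mean, sigma = -0.0114, 0.9281
LSL, USL = -3.0, 3.0

p_low = norm.cdf(LSL, loc=mean, scale=sigma)   # area below the lower spec limit
p_high = norm.sf(USL, loc=mean, scale=sigma)   # area above the upper spec limit
p_nc = p_low + p_high

print(f"fraction nonconforming: {p_nc:.4%}")      # ~0.12%
print(f"per 10,000 parts: {p_nc * 10_000:.1f}")   # ~12 parts
```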

If the calculated nonconforming rate is not acceptable, then this process must be modified in some fundamental way. This would involve structured experimentation, using methods from design of experiments, to reduce variation. New settings for factors like RPM or blade type, among others, will need to be determined.
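As a hedged illustration of what that structured experimentation might start from, the sketch below lays out a simple two-factor full-factorial design for the factors named above (RPM and blade type). The specific levels are hypothetical, chosen only to show the layout.

```python
# Sketch: a two-factor full-factorial design for the factors mentioned in the
# text (RPM and blade type). The levels below are hypothetical examples.
from itertools import product

rpm_levels = [1200, 1500]            # hypothetical low/high spindle speeds
blade_types = ["standard", "fine"]   # hypothetical blade options

for run, (rpm, blade) in enumerate(product(rpm_levels, blade_types), start=1):
    print(f"run {run}: RPM = {rpm}, blade = {blade}")

# Each run would be replicated and randomized, and the within-batch variation
# of the diameter compared across runs to find settings that reduce it.
```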