Tag Archives: nonconforming

Whose Measurement is Right?

Every company I’ve worked for inspects the product it receives from its suppliers to determine conformance to requirements. The process is variously referred to as incoming inspection or receiving inspection.

Sometimes the receiving inspection process identifies a lot of product that fails to conform to requirements. That lot is subsequently classified as nonconforming material and quarantined for further review. There are many reasons why a lot of product may be classified as nonconforming. Here I wish to focus just on reasons having to do with measurement.

Once a company discovers nonconforming parts, it usually contacts its supplier to share that information. It is not unusual, however, for the supplier to push back when their data for the lot of product shows it to be conforming. So, how can a given lot of product be both conforming and nonconforming? Who is right?

We need to recognize that measurement is a process. The measured value is an outcome of this process. It depends on the measurement tool used, the skill of the person making the measurement and the steps of the measurement operation. A difference in any of these factors will show up as a difference in the measured value.

It is rare that a measurement process is the same between a customer and its supplier. A customer may use different measurement tools than its supplier. For example, where the customer might have used a caliper or micrometer, the supplier may have used an optical comparator or CMM. Even if both the customer and the supplier use the same measurement tool, the workers using that tool are unlikely to have been trained in its use in the same way. Finally, the steps used to make the measurement, such as fixturing, lighting and handling the part, which often depend on the measurement tool used, will likely be different, too.

Thus, more often than not, a measurement process between a supplier and a customer will be different. Each measured value is correct in its context—the supplier’s measurement is correct in its context, as is the customer’s measurement in its context. But because the measurement process is different between the two contexts, the measured values cannot be compared directly with one another. So it is possible that the same lot of product may be conforming per the supplier’s measurements and nonconforming per the customer’s measurements.
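As a toy illustration (all numbers here are made up), consider a part whose true diameter sits just inside the specification limit. Two measurement processes with different biases – say, the supplier's micrometer setup and the customer's CMM setup – can both be operating correctly and still return opposite verdicts:

```python
SPEC_LO, SPEC_HI = 9.95, 10.05   # hypothetical diameter spec limits (mm)

def measure(true_value, bias):
    # A measurement process reports the true value shifted by its own bias.
    # (Repeatability scatter is omitted to keep the sketch deterministic.)
    return true_value + bias

true_diameter = 10.048           # a part sitting just inside the upper spec limit

supplier_reading = measure(true_diameter, bias=-0.005)  # e.g. supplier's tool/fixturing
customer_reading = measure(true_diameter, bias=+0.005)  # e.g. customer's tool/fixturing

def verdict(x):
    return "conforming" if SPEC_LO <= x <= SPEC_HI else "nonconforming"

print("supplier:", round(supplier_reading, 3), verdict(supplier_reading))
print("customer:", round(customer_reading, 3), verdict(customer_reading))
```

Each reading is "correct" for its own process, yet the lot is conforming in one context and nonconforming in the other.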

But why are we measuring product twice: once by the supplier and again by the customer? Measurement is a costly, non-value-adding operation, and doing it twice is excess processing – waste. One reason I’ve been told is that it is done to confirm the data provided by the supplier. But confirmation is possible only if the measurement process used by the customer matches the one used by the supplier.

Besides, if we are worried about the quality of supplier data, we should then focus efforts on deeply understanding their measurement process, monitoring its stability and capability, and working with the supplier to improve it if necessary. With that we should trust the measurement data the supplier provides and base our decisions on it, and eliminate the duplicate measurement step during receiving inspection.


Quality People, Stop Holding Product Hostage

Any process will inevitably generate nonconforming product at some point in its operation. Companies typically define the method for handling nonconforming product in a formal procedure that is part of their quality management system. As part of such a procedure, when nonconforming product is discovered, usually during the inspection step, it is quarantined. Two separate but related questions must then be answered: 1] What do we do with the nonconforming product? and 2] How do we prevent it from recurring?

I have observed a troubling pattern across multiple companies in how their quality control professionals are managing nonconforming product. They are holding it hostage—not allowing it to flow after it has been properly dispositioned—in order to compel others to comply with the other requirements of the formal procedure for handling nonconforming product. Specifically, the requirement to determine the root cause of the nonconformity and put in place countermeasures to prevent its recurrence.

I have also identified several reasons why quality control personnel are taking this counterproductive approach. Almost all feel, with good reason, that without it they cannot comply with all the requirements of the formal procedure or the standards and regulations they are designed to meet. They don’t believe that their coworkers are intrinsically motivated to take ownership for investigating the cause of the nonconformity and putting in place the appropriate countermeasures. Nor do they believe that they are externally incentivized to do so. And, of course, there are some quality control personnel who use this tactic to assert themselves in an environment that they feel otherwise does not respect them.

The effect of such behavior, though, is to reinforce the perception non-quality people have that quality professionals create blocks and bureaucratic hurdles instead of working with others to support the company’s objectives by helping to improve process and product quality. I wonder whether quality control personnel are aware that nonconforming product is still counted as inventory, and that holding properly dispositioned nonconforming product hostage has myriad unintended consequences: lost sales, inaccurate accounting of assets, consumed storage space, manpower wasted monitoring and managing the material, etc.

When people do give in to such arm-twisting, they do it with resentment, to meet a quality function demand, and not because they see the value in fixing the process. This is a pyrrhic victory. Pressured, resentful and motivated by the wrong goal, how thorough or accurate can their root cause investigation be? Countermeasures developed in response to sloppy cause analysis will at best address the symptoms of the nonconforming product. So recurrence is all but assured! And it’s quite possible that these countermeasures will destabilize the process, increasing its variation and leading to the creation of even more nonconforming product.

Holding properly dispositioned nonconforming product hostage is not the right way to improve the performance of the nonconforming product handling process. Not only does it add no value, it costs the company. So please stop doing it. There are better ways to improve the performance of the quality system processes that support the company’s objectives.

Targets Deconstructed

“Eliminate numerical goals, posters, and slogans for the work force, asking for new levels of productivity without providing methods.”

— Point No. 10 in Dr. W. E. Deming’s 14 points for management as written in “Quality, Productivity, and Competitive Position.”

A few weeks ago I had an excellent exchange on Twitter with UK Police Inspector Simon Guilfoyle on the topic of setting numerical targets. He asked “How do you set a numerical target without it being arbitrary? By what method?” Unfortunately, Twitter’s 140-character limit isn’t sufficient for adequately answering his question. I promised him I would write a post that explained my thinking.

When I was working for Samsung Austin Semiconductor (SAS) as a quality assurance engineer, one of my assigned responsibilities was to manage the factory’s overall nonconforming material rate. Over the course of my second year, the factory averaged a four percent* nonconforming material rate. The run chart for the monthly nonconforming material rate showed a stable system of variation.

As the year drew to a close, I began thinking about my goals for the following year. I knew I would continue to be responsible for managing the factory’s overall nonconforming material rate. What should I set as my target for it? Not knowing any better, I set it to be the rate we achieved for the current year: four percent. If nothing else, it was based on data. But my manager at the time, a Korean professional on assignment to the factory, mockingly asked me if I wasn’t motivated to do better. He set my target at two percent*: a fifty percent reduction.

What was the two percent number based on? How did he arrive at it? I had no insight, and he didn’t bother to explain it either. From my perspective, it was an arbitrary numerical target, plucked out of thin air. I remember how incredibly nervous I felt about it. How was I going to achieve it? I had no clue and no guidance. I also remember how anxiety-filled and frustrating the following year turned out to be for me. I watched the rate like a hawk. I hounded process engineers to do something whenever their process created a nonconforming lot. It was not a pleasant time for anyone.

Since then I’ve worked at several other companies in different industries. Nevertheless, my experience at SAS seems to be the norm when it comes to setting targets, regardless of the role, the industry or the culture. And, as far as I’ve been able to figure out, this approach to setting targets is driven more by tradition and arrogance than by any objective, thoughtful method. “Improve performance by 50% over last year!”, so the mantra goes. Worse still, no method is provided for achieving such arbitrary improvement targets. I’ve been told “You’re smart. You’ll figure out how to do it.”

So it’s not a surprise for me that folks like the good Inspector have become convinced all numerical targets are inherently arbitrary; that there is no objective and justifiable way to set them. Having been on the receiving end of such targets many times, I used to think the same, too. But just because people don’t know of a different way to set a target, one that is objective and can be justified, doesn’t mean there isn’t one. I believe numerical targets can be set in an objective fashion. It, however, requires thoughtfulness, great effort and understanding on the part of the person setting the target.

One way to set a target is to use the performance of a reference for comparison. In my case, the SAS factory I worked at had a sister facility in Korea. It would have been reasonable, albeit crude, to set my target for the nonconforming material rate to that achieved by the sister facility (if it was better**). An argument could have been made that the target was achieved elsewhere, so it can be reached.

As part of our Twitter exchange, the Inspector made the point that regardless of whether these factories were defined to be sisters, there would still be differences between them. Therefore, they will generate a nonconforming material rate that is a function of their present system architecture. He is absolutely right! Setting a target for my factory based on the performance achieved by its sister facility alone will do nothing to improve the performance of my factory. It’s already doing the best it can.

But the point of setting the target is not to operate the same system and expect improved performance. The point of setting the target is to trigger a change in the system – a redesign – such that it achieves a level of performance that, in this case, has been achieved elsewhere. The sister system can be treated as a reference and studied. Differences between the systems may be identified and eliminated. Along the way we may find that some differences cannot be eliminated. Nevertheless, by eliminating the differences where possible, the two systems are made more similar to one another, and we will have improved the performance.

In the absence of a reference, simulations may be used to objectively define a target. The factory’s overall nonconforming material rate is the combined result of the nonconforming material rates of its individual processes. Investigating the performance of these inputs can help identify opportunities for improvement for each: stabilizing unstable processes, running stable processes on target, reducing the variability of stable on-target processes. All of this can be simulated to determine what is ideally possible. A justifiable target for the nonconforming material rate can then be set with the results. Best of all, the method by which it can be achieved gets defined as part of the exercise.
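As a rough sketch of the simulation idea (the process names and rates below are hypothetical), the factory's overall nonconforming rate can be modeled from the rates of its individual processes, and a justifiable target falls out of simulating a proposed improvement to one of them:

```python
# Hypothetical per-process nonconforming rates; a lot must clear every process.
current = {"etch": 0.015, "deposition": 0.010, "litho": 0.012, "implant": 0.005}

def overall_rate(process_rates):
    """Combined nonconforming rate when a part must pass every process in series."""
    good = 1.0
    for p in process_rates.values():
        good *= (1.0 - p)
    return 1.0 - good

baseline = overall_rate(current)

# Suppose study of the litho process shows its variation can be halved;
# the resulting overall rate is a target with a method attached to it.
improved = dict(current, litho=0.006)
target = overall_rate(improved)

print(f"baseline: {baseline:.4f}, justifiable target: {target:.4f}")
```

The target is no longer arbitrary: it is the predicted result of a specific, simulated change to a specific process.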

Finally, targets may be set by the state of the greater environment within which a system operates. All systems operate in a greater environment (e.g. national or global economy); one that is continuously changing in unpredictable ways. City populations grow or shrink. Markets grow or shrink. Polities combine or fragment. What we once produced to meet a demand will in a new environment prove to be too little or too much. A change in state of the external environment should trigger a change in the target of the system. A change in the target of the system should trigger a redesign of the system to achieve it. In Systems lingo, this is a tracking problem.

Targets are essential. They help guide the design or redesign of the system. They can be defined objectively in several different ways. I’ve outlined three above. They do not have to be set in the arbitrary way they currently are. But setting targets isn’t enough. Methods by which to achieve them must be defined. Targets, even objective ones, are meaningless and destructive without the means of achieving them. Failure to achieve targets should trigger an analysis into why the system failed. They should not be used to judge and blame workers within the system.

Sadly, people are like water, finding and using the path of least resistance. Setting arbitrary improvement targets is easier than doing all the work required to set objective ones. Such targets get justified on the grounds of ambition alone. No one questions the approach, out of fear or ignorance. Positional authority is often used to mock or belittle the worker for not being motivated enough when the truth is something else: managerial ignorance and laziness.

* I’ve changed the numbers for obvious reasons. However, the message remains the same.

** As it turned out, the nonconforming material rate achieved at my factory was the best ever in all of Samsung!


Dealing with Nonconforming Product

A particular process makes parts of diameter D. There are 10 parts produced per batch. The batches are sampled periodically and the diameter of all the parts from the sampled batch is measured. Data, representing deviation from the target, for the first 6 sampled batches is shown in Table 1. The graph of the data is shown in Figure 1. Positive numbers indicate the measured diameter was larger than the target while negative numbers indicate the measured diameter was smaller than the target. The upper and lower specification limits for acceptable deviation are given as +/- 3.


Table 1. Data for six batches of 10 parts each. The numbers represent the deviation from the target.


Figure 1. Graph of the data from the table above. The most recent batch, batch number six, shows one part was nonconforming.

The most recent batch, sample batch number six, shows one of the 10 parts having a diameter smaller than the lower specification limit. As such, it is a nonconforming part.

The discovery of a nonconforming product triggers two parallel activities: i) figuring out what to do with the nonconforming product, and ii) addressing the cause of the nonconformance to prevent it from occurring again.


Nonconforming product may be repaired or reworked when possible, but it can always be scrapped. Each one of these three options has its own set of complications and cost.

Repairing a nonconforming product involves additional steps beyond what are usually needed to make the product. This additional processing has the potential to create previously unknown weaknesses in the product, e.g. stress concentrations. So repaired product will need to be subjected to testing that verifies it still satisfies its intended use. For this particular case, repair is not possible: the diameter is smaller than the target, and material cannot be added back. Repair would have been possible if the diameter had been larger than the target, since material could then have been removed.

Reworking a nonconforming product involves undoing the results of the previous process steps, then sending the product through the standard process steps a second time. Undoing the results of the previous process steps involves additional process steps, just as were required to repair a nonconforming product. This additional processing has the potential to create previously unknown weaknesses in the product. Reworked product will need to be subjected to testing that verifies it still satisfies its intended use. For this particular case, reworking is not possible for the same reason repair is not: material that has been removed cannot be restored.

Scrapping a nonconforming product means to destroy it so that it cannot be accidentally used. For this particular case, scrapping the nonconforming part is the only option available.


In order to determine the cause of the nonconformity we have to first determine the state of the process i.e. whether the process is stable or not. The type of action we take depends on it.

A control chart provides a straightforward way to answer this question. Figure 2 shows an Xbar-R chart for this process. Neither the Xbar chart (top) nor the R chart (bottom) shows uncontrolled variation. There is no indication of a special cause affecting the process. This is a stable process in the threshold state: while it is operating on target, i.e. its mean is approximately the same as the target, its within-batch variation is more than we would like. Therefore, there is no point trying to hunt down a specific cause for the nonconforming part identified above. It is most likely the product of the chance variation that affects this process; a result of the process’s present design.


Figure 2. Xbar-R chart built using the first six sampled batches. Neither the Xbar chart nor the R chart show uncontrolled variation. There is no indication of a special cause affecting the process.
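For readers who want to reproduce the limit calculations behind such a chart, here is a minimal sketch. The table's actual values aren't reproduced in this post, so the batch deviations below are illustrative stand-ins; the constants A2, D3 and D4 come from the standard control chart tables for subgroups of size 10:

```python
# Illustrative deviations for six batches of 10 parts each (not the table's data).
batches = [
    [ 0.3, -1.1,  0.8,  0.2, -0.5,  1.4, -0.9,  0.6, -0.2,  0.1],
    [-0.7,  0.9, -1.3,  0.4,  1.1, -0.2,  0.5, -0.8,  0.3,  0.0],
    [ 1.2, -0.4,  0.7, -1.0,  0.2,  0.9, -0.6,  0.1, -1.2,  0.5],
    [-0.3,  0.6, -0.9,  1.0, -0.1,  0.4, -1.4,  0.8,  0.2, -0.5],
    [ 0.5, -0.7,  1.1, -0.3,  0.8, -1.1,  0.0,  0.6, -0.4,  0.9],
    [-1.0,  0.3, -0.6,  1.3,  0.1, -0.8,  0.7, -0.2,  0.4, -2.9],
]

# Control chart constants for subgroup size n = 10 (standard tables).
A2, D3, D4 = 0.308, 0.223, 1.777

xbars = [sum(b) / len(b) for b in batches]       # subgroup averages
ranges = [max(b) - min(b) for b in batches]      # subgroup ranges

xbarbar = sum(xbars) / len(xbars)                # grand average
rbar = sum(ranges) / len(ranges)                 # average range

limits = {
    "xbar": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
    "range": (D3 * rbar, D4 * rbar),
}
print(limits)
```

Points on the Xbar and R charts are then compared against these limits; a point outside them signals a special cause.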

In fact, the process was left alone to collect more data (Figure 3). The Xbar-R charts do not show any unusual variation that would indicate external disturbances affecting the process. Its behavior is predictable.


Figure 3. More data was collected and the control limits were recalculated using the first 15 sampled batches. The process continues to look stable with no signs of external disturbance.

But, even though the process is stable, it does produce nonconforming parts from time to time. Figure 4 shows that a nonconforming part was produced in sampled batch number 22 and another in sampled batch number 23. Still, it would be wasted effort to hunt down specific causes for the creation of these nonconforming parts. They are the result of chance variation that is a property of the present process design.


Figure 4. Even though the process is stable it still occasionally produces nonconforming parts. Sampled batch number 22 shows a nonconforming part with a larger than tolerable diameter while sampled batch number 23 shows one with a smaller than tolerable diameter.

Because this process is stable, we can estimate the mean and standard deviation of the distribution of individual parts. They were calculated to be -0.0114 and 0.9281, respectively. Assuming that the individual parts are normally distributed, we can estimate that this process will produce about 0.12% nonconforming product if left to run as is. Some of these parts will be smaller than the lower specification limit for the diameter. Others will be larger than the upper specification limit. That is, about 12 nonconforming pieces will be created per 10,000 parts produced. Is this acceptable?
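The 0.12% figure can be checked directly from the normal distribution, using the estimated mean and standard deviation and the specification limits of +/- 3:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = -0.0114, 0.9281     # estimates from the stable process
lsl, usl = -3.0, 3.0            # specification limits on the deviation

p_low = norm_cdf((lsl - mu) / sigma)         # fraction below the LSL
p_high = 1.0 - norm_cdf((usl - mu) / sigma)  # fraction above the USL
p_total = p_low + p_high

print(f"{p_total:.4%} nonconforming, ~{p_total * 10000:.0f} per 10,000 parts")
```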

If the calculated nonconforming rate is not acceptable, then this process must be modified in some fundamental way. This would involve some sort of structured experimentation, using methods from design of experiments, to reduce variation. New settings for factors like RPM or blade type, among others, will need to be determined.

The State of Chaos

When a process is out of control and it is producing nonconforming product it is in a state of chaos. The State of Chaos is one of the four states a process can be in as shown in “What State Is Your Process In?“. The manufacturer cannot predict how much nonconforming product his process will produce in any given hour or day. At times the process will produce nothing but conforming product. Then without warning it will produce nothing but nonconforming product. It might seem as if there were ghosts in the machine.

A process in such a state is affected by assignable causes that are easily identified through the use of process control charts. The effects of these assignable causes have to be eliminated one at a time. Patience and perseverance are necessary. It is essential that the process be brought under statistical control and made predictable. Once the process has achieved stability further improvement efforts can be made to reach the ideal state.


Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


The Brink of Chaos

Of the four states a process can be in (see “What State Is Your Process In?“) the most sinister state is the one where it is producing 100 percent conforming product but is operating in an unpredictable way. That is, the process is not under statistical control. Such a process is, in fact, on the brink of chaos. But, hold on. There is no nonconforming product, therefore there is no problem, right? It is easy to get lulled into complacency by this happy circumstance.


But because the process is not under statistical control, it is impossible to predict what it will do in the next instant. Various assignable causes are affecting the process in an unpredictable fashion. The effect of these causes could very well be the production of nonconforming product without any warning. When that happens, the process has moved into the state of chaos.

The only way to address a process on the brink of chaos is to use process control charts to identify assignable causes and eliminate their effects one-by-one and bring the process under statistical control. You can then start other improvement efforts like moving the process mean to the process aim and reducing the process variation by minimizing the influence of common causes affecting the process.
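A minimal sketch of the detection step, using an individuals (XmR) chart with made-up measurements: points falling outside the natural process limits (mean +/- 2.66 times the average moving range) flag assignable causes to investigate.

```python
# Made-up measurements; one value (12.6) reflects an assignable cause.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 12.6, 10.0, 9.7, 10.1, 10.3, 9.9]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mean = sum(data) / len(data)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits for an individuals chart: mean +/- 2.66 * mR-bar.
unpl = mean + 2.66 * mr_bar
lnpl = mean - 2.66 * mr_bar

# Any point outside the natural limits is a signal of an assignable cause.
signals = [(i, x) for i, x in enumerate(data) if x > unpl or x < lnpl]
print(f"limits: ({lnpl:.2f}, {unpl:.2f}), signals: {signals}")
```

Each signal is then investigated, its cause removed, and the limits recalculated; the process is under statistical control when no signals remain.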

Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


The Threshold State

A process that is predictable or in a state of statistical control, but producing nonconforming product can be described as being in the Threshold State. This is one of the four possible states that a process can be in as noted in “What State Is Your Process In?“. But, what might such a process look like?

A process in the threshold state might be operating with its mean higher than the process aim,


or it might be operating with its mean lower than the process aim,


or it might be operating with a process dispersion greater than the product specification window,


or it may be operating with some combination of a shift in its mean and breadth of its dispersion.

Nevertheless, the fact that such a process is in statistical control means that it will continue to produce consistent product so long as it stays in control. This in turn means that the producer can expect to produce a consistent amount of nonconforming product hour after hour, day after day, until a change is made to the process or to the specifications.

It is important to say here that exhorting your workers to work harder, or to “Do it right the first time”, or showing them examples of nonconforming product from a process in the threshold state, will not lead to improvements. They are not the cause of the failures. The causes of the nonconforming product are systemic and must be dealt with at the system level. Focusing on the worker will only serve to demoralize and frustrate them. It may also lead to tampering with the process, turning a bad situation into a worse one.

You can always share your process data with your customer to demonstrate its stability and ask for a change in the product specifications. However, if specifications cannot be changed your only recourse is to modify your process to shift it from the threshold state into the ideal state. Adjusting the process mean to match the aim is usually relatively simple. In comparison, reducing the process variation requires an understanding of the common causes affecting the process and their respective effects – a much more involved activity.

While you are working on improving your process, you are still producing nonconforming product. Until such time as you achieve the ideal state for your process, you must screen every unit or lot before shipping product to your customer – a 100 percent inspection and sort. This should be treated as a temporary stop-gap measure. You must recognize it as an imperfect quality control method and be mindful that some defectives will escape.

Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


The Ideal State

In “What State Is Your Process In?” I noted that a process can be in one of four possible states. Here I write about the Ideal State wherein a process is predictable and is producing 100 percent conforming product.

A process that is predictable is one that is in a state of statistical control. The variability of the product from one unit to the next is randomly distributed about the average and bounded within statistically established limits – its natural limits (red solid lines in the figure below). So long as the process remains “in control”, it will continue to produce units within these limits.


Complete product conformity comes about when the process’s natural limits fall within the product’s specification limits (blue solid lines in the figure below). This depicts the ideal state for a process.


In order for a process to achieve this ideal state:

  • The process must be inherently stable over time. This means that in the absence of external disturbances – what Dr. Shewhart referred to as assignable causes – the process’s natural variability does not change over time. (Note: There are processes that are inherently chaotic. An excellent reference to such processes is “Nonlinear Dynamics And Chaos” by Professor Steven Strogatz)
  • The process must be operated in a stable and consistent manner. The operating conditions cannot be selected or changed arbitrarily. Often machine parameters are tweaked in response to natural fluctuations in the process’s output. These actions add to the process’s natural variation disrupting its stability. Dr. Deming demonstrated the effects of such tampering through the “Nelson Funnel Experiment”. (Bill Scherkenbach has an excellent discussion of it in “Deming’s Road to Continual Improvement”.)
  • The process average must be set and maintained at an appropriate level. If you refer to the charts above, you can imagine the consequence of moving the process average up or down from its aim. The result is the production of nonconforming product either on the high or low side.
  • The natural tolerance of the process must be less than the specified tolerance for the product. This is obvious upon a first glance at the second chart above.
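The fourth condition can be checked with simple arithmetic. Here is a sketch with hypothetical numbers; the ratio computed at the end is the familiar Cp capability index, which must exceed 1 for the natural limits to fit inside the specification limits:

```python
# Hypothetical process and specification values.
sigma = 0.4                 # estimated process standard deviation
mean = 10.0                 # process average, set at the aim
lsl, usl = 7.0, 13.0        # product specification limits

# Natural limits: mean +/- 3 sigma, spanning the natural tolerance of 6 sigma.
natural_lo = mean - 3 * sigma
natural_hi = mean + 3 * sigma

inside = lsl <= natural_lo and natural_hi <= usl
cp = (usl - lsl) / (6 * sigma)   # specified tolerance over natural tolerance

print(f"natural limits: ({natural_lo}, {natural_hi}), Cp = {cp:.2f}, condition met: {inside}")
```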

A process that satisfies these four conditions will be in the ideal state and the manufacturer can be confident that he is shipping only conforming product. In order to maintain the process in the ideal state he must use process control charts. He must act on the signals they provide to promptly identify assignable causes and eliminate their effects.

Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.