
Book Review: “The Toyota Product Development System”

Writers and readers are in a relationship. Each has responsibilities. The writer is responsible for the structural quality of his writing: spelling, grammar, punctuation, and vocabulary. (The language should be invisible in high quality writing so that the reader can focus on the content, i.e., the writer’s message.) The reader is responsible for being fluent in the language so he can understand high quality writing.

This book should never have been published, if only for its atrocious quality of writing. It is filled with spelling mistakes, terrible grammar, and horrid punctuation. These issues, in addition to needless Japanese jargon, car jargon, and undefined acronyms, interrupted my reading so often that I had to put the book away every few pages. This is especially frustrating as the book is fundamentally about building quality into a product! There is no indication that this was done in the production of this book.

I am deeply interested in the product development process. Experience with several companies has shown me that their respective product development process, if it exists as such, is poorly designed, poorly defined, and not effective in operation. So I have been studying—I’ve read Stuart Pugh’s “Total Design”[1], Don Clausing’s “Total Quality Development”[2], and Ulrich/Eppinger’s “Product Design and Development”[3], among other books and papers. Given that Toyota excels at bringing great products to market quickly, I really wanted to learn and understand its approach. So it was with this intent, and Jeffrey Liker’s reputation, that I picked up “The Toyota Product Development System”[4].

The book does not deliver what its title promises. The authors do not provide a model of the product development process, instead discussing the sociotechnical system (STS) at Toyota, the V-Comm communication system, and PDVSM—product development value stream mapping—to improve the product development process. This material is already superbly detailed in Jeffrey Liker’s “The Toyota Way”[5]. We get it—the product development process at Toyota is grounded in its world-leading STS—but what is the process specifically? The authors don’t detail the product development process as I’ve come to expect from reading Pugh, Clausing, and Ulrich/Eppinger. Perhaps that is a failure on my part.

How are design inputs collected and/or developed? How are those inputs converted into engineering terminology? If Toyota doesn’t use the House of Quality, what does it use? How are engineering requirements converted into sets of concepts? There is no usable explanation of set-based concurrent engineering. For crying out loud, Jeffrey Liker wrote several papers on this! What is the method for vetting the various concepts? How does detailed engineering happen, i.e., converting requirements into drawings? How are those concepts verified and/or validated? What type of testing is performed or skipped? When is it done? None of the things that would help a design and development engineer understand the design and development process at Toyota is covered in any useful way.

When these questions are touched upon, they are addressed piecemeal and superficially, disconnected from one another. The authors make the reader work very hard to extract nuggets from their writing. The discussion often happens in the context of an example, but the examples require you to know car terminology! So if you don’t have experience in the automobile industry, good luck figuring out what the authors are trying to communicate. (Thank you, Google and Wikipedia, for helping me see what was meant.) The matter is made worse by the fact that an example doesn’t carry through between discussions of topics.

One final note: there is a ridiculous amount of adoration of Toyota’s results that borders on worship. I didn’t care for that, especially when what I was looking for—the description of the process—was missing. I didn’t buy the book for the authors to tell me how good Toyota is and how bad everybody else is. I already know this. It is unfortunate that several masters of lean wrote rave reviews for the book. I wonder if they bothered to actually read it. I am now less inclined to be guided by their reviews and recommendations. My suggestion is that you skip this book. It isn’t worth anyone’s time.

[1] Pugh, Stuart. Total Design. Addison-Wesley Publishers Ltd. 1991. ISBN 0-201-41639-5

[2] Clausing, Don. Total Quality Development. ASME Press: New York, NY. 1993. ISBN 0-7918-0035-0

[3] Ulrich, Karl T., and Steven D. Eppinger. Product Design and Development. New York, NY: McGraw Hill Education. 2012. Print. ISBN 978-93-5260-185-1

[4] Morgan, James M., and Jeffrey K. Liker. The Toyota Product Development System. New York, NY: Productivity Press. 2006. Print. ISBN 1-56327-282-2

[5] Liker, Jeffrey K. The Toyota Way. New York, NY: McGraw Hill. 2004. Print. ISBN 0-07-139231-9

The State of Chaos

When a process is out of control and producing nonconforming product, it is in a state of chaos. The State of Chaos is one of the four states a process can be in, as shown in “What State Is Your Process In?”. The manufacturer cannot predict how much nonconforming product his process will produce in any given hour or day. At times the process will produce nothing but conforming product. Then, without warning, it will produce nothing but nonconforming product. It might seem as if there were ghosts in the machine.

A process in such a state is affected by assignable causes that are easily identified through the use of process control charts. The effects of these assignable causes have to be eliminated one at a time; patience and perseverance are necessary. It is essential that the process be brought under statistical control and made predictable. Once the process has achieved stability, further improvement efforts can be made to reach the ideal state.


Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


The Brink of Chaos

Of the four states a process can be in (see “What State Is Your Process In?”), the most sinister is the one where the process is producing 100 percent conforming product but operating in an unpredictable way. That is, the process is not under statistical control. Such a process is, in fact, on the brink of chaos. But, hold on. There is no nonconforming product, therefore there is no problem, right? It is easy to get lulled into complacency by this happy circumstance.


But because the process is not under statistical control, it is impossible to predict what it will do in the next instant. Various assignable causes are affecting the process in an unpredictable fashion. The effect of these causes could very well be the production of nonconforming product without any warning. When that happens, the process has moved into the state of chaos.

The only way to address a process on the brink of chaos is to use process control charts to identify assignable causes, eliminate their effects one by one, and bring the process under statistical control. You can then start other improvement efforts, like moving the process mean to the process aim and reducing the process variation by minimizing the influence of the common causes affecting the process.
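The chart mechanics behind this can be made concrete. Below is a minimal sketch, not from the post, of an XmR (individuals and moving range) chart of the kind Dr. Wheeler describes; the scaling constants 2.66 and 3.268 are the standard XmR factors, and the sample data are hypothetical.

```python
def xmr_limits(values):
    """Return the center line, natural process limits, and moving-range
    upper limit for a series of individual measurements (XmR chart)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mean = sum(values) / len(values)
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    lower = mean - 2.66 * avg_mr   # lower natural process limit
    upper = mean + 2.66 * avg_mr   # upper natural process limit
    mr_upper = 3.268 * avg_mr      # upper limit for the moving-range chart
    return mean, lower, upper, mr_upper

def assignable_cause_signals(values):
    """Flag points outside the natural process limits (Shewhart's rule one)."""
    _, lower, upper, _ = xmr_limits(values)
    return [i for i, x in enumerate(values) if x < lower or x > upper]

# Hypothetical measurements with one obvious upset at index 7:
data = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 15.0, 10.0, 10.1]
signals = assignable_cause_signals(data)  # flags index 7
```

Each flagged point is an invitation to hunt down an assignable cause; the chart does not name the cause, it only tells you when and where to look.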

Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


The Threshold State

A process that is predictable, or in a state of statistical control, but producing nonconforming product can be described as being in the Threshold State. This is one of the four possible states a process can be in, as noted in “What State Is Your Process In?”. But what might such a process look like?

A process in the threshold state might be operating with its mean higher than the process aim, or with its mean lower than the process aim, or with a process dispersion greater than the product specification window, or with some combination of a shift in its mean and a broadening of its dispersion.

Nevertheless, the fact that such a process is in statistical control means that it will continue to produce consistent product so long as it stays in control. This in turn means that the producer can expect to produce a consistent amount of nonconforming product hour after hour, day after day, until a change is made to the process or to the specifications.
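That consistent nonconforming fraction can be computed. Here is a minimal sketch assuming a normally distributed, predictable process; all the numbers are hypothetical, chosen only to illustrate the effect of running off-aim.

```python
from statistics import NormalDist

def fraction_nonconforming(mean, sigma, lsl, usl):
    """Expected fraction of product outside the specification limits
    for a predictable, normally distributed process."""
    d = NormalDist(mean, sigma)
    return d.cdf(lsl) + (1.0 - d.cdf(usl))

# Hypothetical process with sigma 2 and specs 94-106.
# On aim at 100, the natural limits just fit the specs:
on_aim = fraction_nonconforming(100, 2, 94, 106)   # ~0.0027 (2700 ppm)
# The same process running predictably off-aim at 103:
off_aim = fraction_nonconforming(103, 2, 94, 106)  # ~0.067 (about 6.7%)
```

The off-aim process is still perfectly predictable; it just predictably ships about 6.7 percent nonconforming product, hour after hour.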

It is important to say here that exhorting your workers to work harder or to “do it right the first time,” or showing them examples of nonconforming product from a process in the threshold state, will not lead to improvements. They are not the cause of the failures. The causes of the nonconforming product are systemic and must be dealt with at the system level. Focusing on the workers will only serve to demoralize and frustrate them. It may also lead to tampering with the process, turning a bad situation into a worse one.

You can always share your process data with your customer to demonstrate its stability and ask for a change in the product specifications. However, if specifications cannot be changed your only recourse is to modify your process to shift it from the threshold state into the ideal state. Adjusting the process mean to match the aim is usually relatively simple. In comparison, reducing the process variation requires an understanding of the common causes affecting the process and their respective effects – a much more involved activity.

While you are working on improving your process you are still producing nonconforming product. Until such time as you achieve the ideal state for your process, you must screen every unit or lot before shipping product to your customer – a 100 percent inspection and sort. This should be treated as a temporary stop-gap measure. You must recognize it as an imperfect quality control method and be mindful that defectives will escape.
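The imperfection of a 100 percent sort can be put in numbers. A tiny sketch, with hypothetical rates (the 20 percent miss rate below is an assumption for illustration, not a measured figure):

```python
def escaped_defect_rate(incoming_defect_rate, inspection_miss_rate):
    """Defect rate reaching the customer after a 100 percent
    inspect-and-sort, given an imperfect inspection."""
    return incoming_defect_rate * inspection_miss_rate

# A threshold-state process making 5% nonconforming product, screened by
# an inspection that misses 20% of the defects presented to it, still
# ships 1% defective product:
rate = escaped_defect_rate(0.05, 0.20)  # 0.01
```

This is why the sort is only a stop-gap: the escape rate falls to zero only when the process itself stops producing defectives.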

Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


The Ideal State

In “What State Is Your Process In?” I noted that a process can be in one of four possible states. Here I write about the Ideal State wherein a process is predictable and is producing 100 percent conforming product.

A process that is predictable is one that is in a state of statistical control. The variability of the product from one unit to the next is randomly distributed about the average and bounded within statistically established limits – its natural limits (red solid lines in the figure below). So long as the process remains “in control”, it will continue to produce units within these limits.


Complete product conformity comes about when the process’s natural limits fall within the product’s specification limits (blue solid lines in the figure below). This depicts the ideal state for a process.


In order for a process to achieve this ideal state:

  • The process must be inherently stable over time. This means that in the absence of external disturbances – what Dr. Shewhart referred to as assignable causes – the process’s natural variability does not change over time. (Note: there are processes that are inherently chaotic. An excellent reference on such processes is “Nonlinear Dynamics and Chaos” by Professor Steven Strogatz.)
  • The process must be operated in a stable and consistent manner. The operating conditions cannot be selected or changed arbitrarily. Often, machine parameters are tweaked in response to natural fluctuations in the process’s output. These actions add to the process’s natural variation, disrupting its stability. Dr. Deming demonstrated the effects of such tampering through the “Nelson Funnel Experiment”. (Bill Scherkenbach has an excellent discussion of it in “Deming’s Road to Continual Improvement”.)
  • The process average must be set and maintained at an appropriate level. If you refer to the charts above, you can imagine the consequence of moving the process average up or down from its aim. The result is the production of nonconforming product either on the high or low side.
  • The natural tolerance of the process must be less than the specified tolerance for the product. This is obvious upon a first glance at the second chart above.

A process that satisfies these four conditions will be in the ideal state and the manufacturer can be confident that he is shipping only conforming product. In order to maintain the process in the ideal state he must use process control charts. He must act on the signals they provide to promptly identify assignable causes and eliminate their effects.
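The third and fourth conditions can be expressed as a quick numerical check. A sketch with hypothetical values; note that it checks only the natural-limits geometry, and says nothing about whether the process is actually in statistical control (conditions one and two):

```python
def natural_limits(mean, sigma):
    """Natural process limits: mean +/- 3 sigma."""
    return mean - 3 * sigma, mean + 3 * sigma

def within_specs(mean, sigma, lsl, usl):
    """True when the natural limits fall inside the specification limits,
    the geometric condition for the ideal state (assuming the process is
    already predictable)."""
    lo, hi = natural_limits(mean, sigma)
    return lsl <= lo and hi <= usl

def cp(sigma, lsl, usl):
    """Capability ratio: specified tolerance over natural tolerance."""
    return (usl - lsl) / (6 * sigma)

# Hypothetical: mean 100, sigma 1.5, specs 94-106.
# Natural limits 95.5-104.5 sit inside the specs; Cp = 12/9 ~ 1.33.
ok = within_specs(100, 1.5, 94, 106)        # True
shifted = within_specs(103, 1.5, 94, 106)   # False: 103 + 4.5 > 106
```

A Cp above 1 means the fourth condition holds; a mean drift can still violate the third condition, as the shifted case shows.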

Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


What State Is Your Process In?

You can take one of two approaches to controlling the quality of your product. Once manufactured, you can compare it against its specifications and sort it as either conforming or nonconforming. This, however, will guarantee the greatest degree of variability between units as anything within specifications is considered acceptable. Alternatively, you can work to run the manufacturing process as consistently as possible to produce units that vary as little as possible from one another.

There is no path to improving performance with the first approach where you sort manufactured product as conforming or nonconforming. However, there is a clear path to improving performance through the use of process control techniques introduced by Dr. Walter Shewhart. But, how do you gauge improvement? One measure is achieving a state of statistical control for the process so that its behavior is predictable. Another measure is the manufacture of 100 percent conforming product by the process.

When these two measures are taken together, there are four possible states a process can be in (see figure below): the process is predictable and producing 100 percent conforming product (the Ideal State); predictable but producing nonconforming product (the Threshold State); currently producing 100 percent conforming product but unpredictable (the Brink of Chaos); or unpredictable and producing nonconforming product (the State of Chaos).


Note: I learned this material from reading Dr. Wheeler’s writings. My post is intended to reflect my current understanding. None of the ideas herein are original to me. Any errors are my failures alone.


A Short Introduction to the Philosophy of W. Edwards Deming

These three videos provide a short introduction to the philosophy of Dr. W. Edwards Deming.

A [Breakdown in] Validation of Quality

Recently MD+DI (Medical Device and Diagnostic Industry) published “A Validation of Quality”. I would like to think it was motivated by a desire to inform and educate readers on the value of conducting proper process validations. Instead, the author, Jean Mattar, misinforms readers and perpetuates validation mythology. And it seems no one at MD+DI bothered to do a spelling or grammar check, much less fact-check what they were publishing. This is particularly troublesome as many medical device industry professionals use such articles in MD+DI as references when setting up their quality systems or performing various quality assurance activities like process validation.

My aim is to march through the article, quoting the author and pointing out the misinformation. Hopefully this will help you understand process validation correctly or at least prevent you from learning the wrong things.

Because it is a repeatable process, laser welding can be statistically proven and easily validated.

Nothing about the laser welding process suggests that it is inherently a repeatable process. Like any process, if the inputs to the laser welding process vary, so will its output. That is why you need to perform a process validation: to demonstrate that the expected variation in the inputs to the process will still yield an output that meets performance requirements. There is nothing easy about this effort. It must be properly planned, executed with the greatest care, and the resultant data analyzed with the objectivity of statistical tools. Even then, nothing is proved statistically. Statistical analysis merely details probabilities of occurrence. Those probabilities must then be evaluated in the context of business goals as either acceptable or not.

In a perfect world, you’d have the time to validate 100% of your samples.

It is obvious that the author does not understand the concept of sampling or sample size, or the difference between verification and validation. If you’re “validating 100% of your samples”, i.e., inspecting every piece, you are performing verification, and you’re not sampling.

When your company is faced with a validation issue, you’ve got a host of problems to deal with, from embarrassing to expensive.

Seriously? Embarrassment is the problem? The expense of the fallout from a validation issue is the problem? How about patient safety? Where does that fall in the spectrum of problems? Failure to validate a process that cannot be verified means you have no idea whether your product will perform as expected or whether it will fail at the most inopportune time, e.g., in the middle of surgery. Just as the primary duty of a physician is to do no harm, the primary motivation of a medical device manufacturer should be to ensure that its product will perform as expected. That assurance is obtained partly through performing a manufacturing process validation and partly through continuing quality control activities. Embarrassment can be overcome and reputations can be rebuilt. Patients’ lives remain forever changed.

In the best-case scenario, your customers will require validation as part of the complete manufacturing process and will audit it closely.

If Tegra Medical truly believes this, then its customers need to reevaluate their relationship with the company. It is not your customers’ responsibility to ensure you are manufacturing good product. That responsibility is entirely yours. It should be your company’s culture to ensure that production is done properly such that your product performs as expected. So, the best-case scenario is that regardless of customer requirements you should perform process validation as part of your quality assurance activities.

When processes or parts are especially complex, validation provides a way to help control them. It enables real-time monitoring and process adjustments so you can improve processes statistically and evaluate your performance daily.

What nonsense! Process validation has nothing to do with the complexity of a part. The benefit of process validation is identical whether you are manufacturing a simple part or a complex one: assurance that it will function as expected.

Just what exactly is the author referring to when saying “validation provides a way to help control them”? Control what? If we’re talking about controlling process parameters, then quality control tools such as statistical process control (SPC) and run rules are necessary.

Validation does not “enable real-time monitoring and process adjustments”. More importantly, when you’re conducting a process validation, process parameters should not be adjusted at all or you will contaminate the result you’re trying to validate. Process parameters’ operating windows should be established during process design; not during process validation.

The Quality System Regulation (QSR) known as 21 CFR Part 820 and ISO 13485:2003 require that validation include installation qualification (IQ), operational qualification (OQ), and process qualification (PQ).

No, they don’t. Neither 21 CFR Part 820 nor ISO 13485:2003 requires that validations include IQ, OQ, and PQ. The regulation states:

Sec. 820.75 Process validation.

(a) Where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures. The validation activities and results, including the date and signature of the individual(s) approving the validation and where appropriate the major equipment validated, shall be documented.

(b) Each manufacturer shall establish and maintain procedures for monitoring and control of process parameters for validated processes to ensure that the specified requirements continue to be met.

(1) Each manufacturer shall ensure that validated processes are performed by qualified individual(s).

(2) For validated processes, the monitoring and control methods and data, the date performed, and, where appropriate, the individual(s) performing the process or the major equipment used shall be documented.

(c) When changes or process deviations occur, the manufacturer shall review and evaluate the process and perform revalidation where appropriate. These activities shall be documented.

ISO 13485:2003 states:

7.5.2 Validation of processes for production and service provision

7.5.2.1 General requirements

The organization shall validate any processes for production and service provision where the resulting output cannot be verified by subsequent monitoring or measurement. This includes any processes where deficiencies become apparent only after the product is in use or the service has been delivered. Validation shall demonstrate the ability of these processes to achieve planned results. The organization shall establish arrangements for these processes including, as applicable

a) defined criteria for review and approval of the processes,

b) approval of equipment and qualification of personnel,

c) use of specific methods and procedures,

d) requirements for records (see 4.2.4), and

e) revalidation.

The organization shall establish documented procedures for the validation of the application of computer software (and changes to such software and/or its application) for production and service provision that affect the ability of the product to conform to specified requirements. Such software applications shall be validated prior to initial use.

Records of validation shall be maintained (see 4.2.4).

7.5.2.2 Particular requirements for sterile medical devices

The organization shall establish documented procedures for the validation of sterilization processes. Sterilization processes shall be validated prior to initial use. Records of validation of each sterilization process shall be maintained (see 4.2.4).

The Global Harmonization Task Force (GHTF) does recommend that validation activities be broken up into installation qualification, operational qualification, and performance qualification. But it is not required. And, yes, PQ stands for performance qualification, not process qualification as the author writes.

…all data are maintained in the company’s design history record (DHR)…

Obviously, the author has confused a device history record (DHR) – a compilation of records containing the production history of a finished device – with a design history file (DHF) – a compilation of records that describes the design history of a finished device.

The entire section on laser welding IQ talks about equipment IQ. It does not address process IQ. An equipment qualification (equipment IQ, OQ, and PQ) is a portion of a process IQ, which also includes, among other things, operator training for running the process, standard operating procedures for the process, a process risk analysis (FMEA), ensuring all process equipment is laid out properly, etc. A discussion of a proper process IQ is beyond the scope of this essay.

The author completely mischaracterizes the activity performed during process OQ. Suffice it to say that you shouldn’t be developing or designing your process (i.e., establishing operating windows) during validation. The process OQ is where you challenge the process by operating it at its inputs’ maximum and minimum values. These runs, performed most efficiently through the use of a properly designed experiment, should demonstrate that the process will yield an output that meets expectations even when the inputs are off their nominal values and at their extremes. Again, a discussion of a proper process OQ is beyond the scope of this essay.

…process capability studies (known as gage repeatability and reproducibility (GR&R) or measurement systems evaluation…

Process capability studies are not GR&R studies. Process capability studies typically reflect the manufacturing process’s ability to make product within specification limits. GR&R studies are done on the measurement process and do not have a direct relationship to the manufacture of any given product.

This test uses a statistically significant sample plan such as a size of 60 parts based on a reliability and confidence level of 95%

Where does this sample size of 60 come from? For the life of me, I can’t figure it out.

The outcome of the test is torque or tensile data that shows with 100% accuracy how the material will hold up under different conditions.

There is no such thing as 100% certainty in the real world. Only degrees of confidence.

The author does a similar hack job discussing laser welding PQ as he did with the IQ and OQ sections. Process performance qualification should be a final check of the process, running it at the nominal levels of the process inputs. That is, during PQ you’re running production! Consider the initial batches as “risk production” if you will. The process performance from here on out is monitored using statistical process control tools. Alas, a discussion of proper process PQ is also beyond the scope of this essay.

…run your laser welding parameters at the nominal condition three times in a row…

What is the statistical basis for running the process three times, or for requiring that those runs be consecutive? This is an industry myth that keeps being perpetuated over and over with no basis in fact.

I hope that I’ve successfully detailed where this article misinforms. The misinformation runs throughout the entire body. Based on its type and scope, I can only infer that the author does not have a good understanding of process validation, how to perform it, or what it is intended to achieve. Worse, his focus does not appear to be aligned with that of the FDA: patient safety. The fact that he is the vice president of quality assurance and regulatory affairs at Tegra Medical should give all of us pause. Additionally, we should all wonder about the vetting process MD+DI uses in deciding what to publish.



Correction: The gender of the author was corrected in the sentence “The author does a similar hack job in discussing laser welding PQ as she did with the IQ and OQ sections.” from “she” to “he”. The author is a man, not a woman. (11:10 AM, 25 Oct, 2011)

On the Purpose of Performing the Operational Qualification (OQ) of Equipment

A company I worked for believes that the purpose of performing an operational qualification of a piece of equipment is to demonstrate that the equipment functions at the settings expected to be used in production. As such, it does not feel it is necessary to validate the manufacturer’s claims about the equipment’s greater functional range. I wholly disagree with this limited perception.

Consider a simple hypothetical example wherein I intend to make bread. The recipe calls for the bread to be baked in an oven at 400 F for 25 minutes. Applying this company’s point of view, the operational qualification of my oven would require only demonstrating that it functions at 400 F for 25 minutes. There is no need to validate that the oven is capable of functioning to its manufacturer’s claim of 170 F to 550 F. I believe this model for conducting an equipment operational qualification to be shortsighted at best and a total misunderstanding of what an operational qualification is at worst.

While such an approach fulfills the immediate need to demonstrate that my oven is capable of generating 400 F, such a limited test does not take into account my future potential needs. Perhaps better bread might be made by baking it at 350 F for 35 minutes. Or, I might want to make pizza – that requires my oven to operate at 550 F, the hottest it gets. Or, I might want to keep food warm in the oven at 170 F. An operational qualification of this type precludes me from using my oven at temperatures that haven’t been tested. I would have to repeat the operational qualification at other temperatures before use.

But I believe that this approach to performing an operational qualification is actually more than just shortsighted. It is wrong. It is an incorrect interpretation of what a proper operational qualification is intended to demonstrate: that the equipment is capable of performing to its manufacturer’s claims. A proper operational qualification should evaluate the functionality of the equipment at, at a minimum, the high and low settings of all its critical input controls. The design of experiments provides a statistically robust and cost-efficient method to do this.
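The simplest such design is a two-level full factorial, which exercises every combination of high and low settings. A sketch below; the oven factors and their ranges are hypothetical illustrations, not from any manufacturer's specification.

```python
from itertools import product

def two_level_factorial(factor_ranges):
    """Return every combination of the (low, high) settings supplied for
    each factor: a 2^k full-factorial run plan."""
    names = list(factor_ranges)
    levels = [factor_ranges[n] for n in names]
    return [dict(zip(names, combo)) for combo in product(*levels)]

# A hypothetical oven OQ over the manufacturer-claimed operating envelope:
runs = two_level_factorial({
    "temperature_F": (170, 550),
    "time_min": (5, 120),
    "fan": ("off", "on"),
})
# 2^3 = 8 runs covering every corner of the claimed envelope
```

With three critical controls this is only eight runs; fractional-factorial designs keep the run count manageable as the number of controls grows.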

In an abstract sense an operational qualification is part of the receiving inspection process, albeit a complex one. And just as in the typical receiving inspection process, it is better to find defects or deficiencies in the equipment at this stage (even those that don’t directly impact our immediate application) than to discover them during production. Defects, like cracks, tend to migrate into the production operating zone over time.

At its most basic level an operational qualification has very little, if anything, to do with using the equipment for any given application. The operational qualification of my oven has very little to do with my using the oven to bake bread, make pizza, or keep food warm. Validating that the equipment will perform as expected for a given application is done as part of a performance qualification (PQ) of the equipment. So, when my former employer performed an operational qualification of a tool at an application-specific setting, it was actually performing a performance qualification, bypassing the operational qualification.

When a proper operational qualification has been performed for a piece of equipment, it does not need to be repeated until the equipment undergoes some sort of preventive or breakdown maintenance, when consumable or broken parts are replaced. As such, a company that intends to extend the use of a piece of equipment to multiple applications needs only to perform a performance qualification of the equipment at a given application’s settings before releasing the tool to run that new application.

It is disturbing when quality managers do not take the time to understand the purpose of a particular assurance activity. As you can imagine, this leads to the wrong tests being run, the wrong data being collected, and the wrong conclusions being drawn. The problem is compounded by their willingness to take shortcuts. Is getting to the wrong place faster better? It doesn’t make any sense to me why a company would not perform a check of the manufacturer’s claims when the effort required isn’t any greater and the long-term benefits are significant.