
A Simple Process Validation Example

Consider the bread-making process shown in the figure below. The bread made using this process has various characteristics that consumers find desirable: look, feel, taste, etc. Each of these characteristics can be measured and will have some target value (based on consumer research), such that if a loaf is made with all its characteristics on target, there is a high probability that it will meet the consumer's expectations and the consumer will enjoy eating it. The question that process validation seeks to answer is: will this process consistently produce bread loaves of the specified quality?

A fundamental assumption in manufacturing is that if the inputs to the process remain constant (e.g. you use exactly 6 cups of bread flour each time) and the process itself is constant (e.g. the oven generates 350 °F of heat every time), then each output of the process will be the same as the previous one, with no discernible difference. However, nothing is constant: there is natural variability in the quantity of flour used; sometimes you might use as little as 5.5 cups, other times as much as 6.5 cups. Even the oven periodically turns its heating mechanism on and off to maintain a mean temperature of 350 °F, but the actual temperature at any given instant is more likely than not to be above or below that mean. So in the physical world, each output of the process, i.e. each loaf of bread, will be different from the previous one.

The question then becomes: is the loaf-to-loaf variability in the output, which results from the variability in the inputs and the process, noticeable to the consumer? Each characteristic of the output has not only a target value but also a range about the target that is considered acceptable. The bread may be okay if its crust is slightly more or less brown, but rejected if it is significantly dark (suggesting burnt) or light (suggesting underdone). What exactly are the limits of acceptability for each characteristic? That is decided through consumer research. Assuming, for our purposes, that these limits are already specified, then if the measured value of a particular characteristic for a given loaf of bread falls between its upper and lower specification limits, it is considered acceptable.
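The acceptance logic described above can be sketched in a few lines. This is a minimal illustration, not part of the original example; the crust-color scale and its limits are hypothetical stand-ins for whatever a real consumer study would specify.

```python
# Minimal sketch: accept or reject a measured characteristic against its
# specification limits. The scale and limits here are hypothetical.

def within_spec(measured: float, lsl: float, usl: float) -> bool:
    """True if the measured value falls inside [LSL, USL]."""
    return lsl <= measured <= usl

# Hypothetical crust-color score: target 50, acceptable range 40-60.
LSL, USL = 40.0, 60.0

for loaf_score in (52.3, 47.8, 61.4):
    verdict = "accept" if within_spec(loaf_score, LSL, USL) else "reject"
    print(loaf_score, verdict)
```

The same check would be repeated for every measured characteristic of every loaf; a loaf is acceptable only if all of its characteristics pass.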

During process validation the process is kept constant (i.e. step sequence, parameter settings, etc. are fixed) while its inputs are varied between their extreme possible conditions. The idea is that if the output of a process subjected to such extreme conditions of its inputs is within acceptable limits, then the output of the process under normal conditions of its inputs will also be acceptable. The intent of this exercise is to demonstrate the robustness of the process to the natural variation in its inputs.

The design of experiments provides an efficient way to vary every input between its extremes simultaneously. For the bread-making process in this example, there are 6 inputs: amount of bread flour, salt, vegetable oil, active dry yeast, white sugar, and water. If we assume that each of these inputs will vary from its specified quantity as shown in the table below, then we can construct a two-level, six-factor experiment for the process validation study.

 

 

Factor   Input                           Low (-)   High (+)
A        Bread flour (cups)              5.75      6.25
B        Salt (teaspoon)                 1.25      1.75
C        Vegetable oil (cups)            3/16      5/16
D        Active dry yeast (tablespoon)   1.25      1.75
E        White sugar                     5/9       7/9
F        Warm water, 100 F               1.75      2.25

Such an experiment is referred to as a full factorial experiment, i.e. one in which every combination of the high and low values of every factor is run. The combinations are run through the process in randomized order, and each resulting loaf of bread has its quality characteristics measured, e.g. look (I), feel (II), and taste (III). These measured values are then plotted on separate run charts with their respective specification limits drawn in. The expectation is that the actual values will all fall within the spec limits. If that is the case, we can state with confidence that as long as the input variables remain within the upper and lower limits of their respective specifications, the quality characteristics of the resulting output will also be within their respective specification limits. And thus we can conclude that the process is validated… for the defined set of input specifications.
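Generating the 64 run combinations and randomizing their order is straightforward to script. The sketch below uses only the Python standard library and the factor levels from the table above; the dictionary key names are my own shorthand, not part of the original example.

```python
# Sketch: build the 2^6 full factorial design for the bread-making study and
# randomize the run order. Factor levels are taken from the table above.
import itertools
import random

factors = {                          # name: (low, high)
    "bread_flour_cups":  (5.75, 6.25),
    "salt_tsp":          (1.25, 1.75),
    "veg_oil_cups":      (3 / 16, 5 / 16),
    "yeast_tbsp":        (1.25, 1.75),
    "white_sugar":       (5 / 9, 7 / 9),
    "warm_water":        (1.75, 2.25),
}

# Every +/- combination of the six factors: 2**6 = 64 runs.
design = list(itertools.product((-1, +1), repeat=len(factors)))

random.shuffle(design)               # bake the recipes in randomized order


def uncode(run):
    """Translate a coded run (tuple of -1/+1) into actual quantities."""
    return {name: levels[0] if code == -1 else levels[1]
            for code, (name, levels) in zip(run, factors.items())}

print(len(design), "runs; first randomized run:", uncode(design[0]))
```

Each of the 64 recipes is then baked, and the look, feel, and taste of each loaf are measured and checked against their specification limits.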

Links
[1] Guidance for Industry — Process Validation: General Principles and Practices. U.S. Department of Health and Human Services, Food and Drug Administration, CDER/CBER/CVM. 2011. Web.

Appendix – Full factorial experiment design (order not randomized)

(+ = high level of the factor, - = low level; columns I, II, and III record the measured quality characteristics.)

Run  A  B  C  D  E  F  I  II  III
  1  +  +  +  +  +  +
  2  +  +  +  +  +  -
  3  +  +  +  +  -  +
  4  +  +  +  +  -  -
  5  +  +  +  -  +  +
  6  +  +  +  -  +  -
  7  +  +  +  -  -  +
  8  +  +  +  -  -  -
  9  +  +  -  +  +  +
 10  +  +  -  +  +  -
 11  +  +  -  +  -  +
 12  +  +  -  +  -  -
 13  +  +  -  -  +  +
 14  +  +  -  -  +  -
 15  +  +  -  -  -  +
 16  +  +  -  -  -  -
 17  +  -  +  +  +  +
 18  +  -  +  +  +  -
 19  +  -  +  +  -  +
 20  +  -  +  +  -  -
 21  +  -  +  -  +  +
 22  +  -  +  -  +  -
 23  +  -  +  -  -  +
 24  +  -  +  -  -  -
 25  +  -  -  +  +  +
 26  +  -  -  +  +  -
 27  +  -  -  +  -  +
 28  +  -  -  +  -  -
 29  +  -  -  -  +  +
 30  +  -  -  -  +  -
 31  +  -  -  -  -  +
 32  +  -  -  -  -  -
 33  -  +  +  +  +  +
 34  -  +  +  +  +  -
 35  -  +  +  +  -  +
 36  -  +  +  +  -  -
 37  -  +  +  -  +  +
 38  -  +  +  -  +  -
 39  -  +  +  -  -  +
 40  -  +  +  -  -  -
 41  -  +  -  +  +  +
 42  -  +  -  +  +  -
 43  -  +  -  +  -  +
 44  -  +  -  +  -  -
 45  -  +  -  -  +  +
 46  -  +  -  -  +  -
 47  -  +  -  -  -  +
 48  -  +  -  -  -  -
 49  -  -  +  +  +  +
 50  -  -  +  +  +  -
 51  -  -  +  +  -  +
 52  -  -  +  +  -  -
 53  -  -  +  -  +  +
 54  -  -  +  -  +  -
 55  -  -  +  -  -  +
 56  -  -  +  -  -  -
 57  -  -  -  +  +  +
 58  -  -  -  +  +  -
 59  -  -  -  +  -  +
 60  -  -  -  +  -  -
 61  -  -  -  -  +  +
 62  -  -  -  -  +  -
 63  -  -  -  -  -  +
 64  -  -  -  -  -  -


On Validation and Verification

The intent underlying validation or verification activities is to answer the question “How do you know?”. How do you know the product you designed meets the requirements for its intended use? How do you know a given unit you manufactured, based on that design, will perform as expected in the field? Objective evidence is needed to demonstrate that requirements of a given product that define its fitness for a specific purpose will be consistently fulfilled. For industries whose products can have a harmful impact on life, the requirement to perform validation and verification activities is codified in regulations.

For the purposes of this discussion, let’s assume that the product’s design fulfills the requirements of its intended use. This will allow us to focus on just its manufacturing process. How can you show that an output of the manufacturing process meets the requirements that define its fitness for use? One mechanism is to inspect and test every manufactured unit. And, so long as such activities do not destroy the manufactured unit in the process, it is a perfectly acceptable method to show its fitness for use. This sort of check is referred to as product verification.

However, inspection and testing of certain performance characteristics of a manufactured unit do destroy the unit in the process. Verifying such characteristics of each manufactured unit would not leave any units for use or sale. To address this issue, we have to look at data from samples of manufactured units, viewed through the lens of statistical theory, to draw conclusions about their overall population. This is the basis of process validation.

As long as the distribution of data points describing a particular performance characteristic of the product, collected from samples of manufactured units, falls within the limits that define the performance requirements for that particular product characteristic, we can be confident that the rest of the population of manufactured product meets those performance requirements as well. The theory of statistics provides us with a mechanism by which to quantitatively express the degree of our confidence that untested units will perform as expected.
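One common way to express that confidence quantitatively is a process performance index such as Ppk, which relates the observed sample distribution to the spec limits. The data and limits below are hypothetical, and Ppk is only one of several indices that could be used; this is a sketch, not a prescription.

```python
# Sketch: relating a sample's distribution to the spec limits via Ppk.
# Measurements and spec limits are hypothetical.
import statistics

measurements = [49.8, 50.6, 51.1, 48.9, 50.2, 49.5, 50.9, 50.0, 49.2, 50.4]
LSL, USL = 45.0, 55.0

mean = statistics.mean(measurements)
s = statistics.stdev(measurements)       # sample standard deviation

# Ppk: distance from the mean to the NEARER spec limit, in units of 3 sigma.
# Larger is better; values well above 1 suggest the population comfortably
# fits inside the spec limits.
ppk = min(USL - mean, mean - LSL) / (3 * s)
print(f"mean={mean:.2f}  s={s:.3f}  Ppk={ppk:.2f}")
```

In practice the required index value and sample size would be set in the validation protocol, often alongside tolerance intervals that state confidence and coverage explicitly.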

An implicit point in my assertion is that the manufacturing process is subjected to the full range of variability present in its inputs. Each input to the manufacturing process has a distribution that describes its center and variation. These inputs interact with the manufacturing process, and with each other, as they are transformed into the output, which has a distribution of its own. However, the shape, location, and spread of the output distribution are only revealed after significant data has been collected over time. And it is the boundaries of this distribution, compared against requirements, that demonstrate whether the manufacturing process can consistently produce units that are fit for use.
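This propagation of input distributions into an output distribution can be illustrated with a quick Monte Carlo sketch. The process model and the input distributions below are hypothetical toys, chosen only to show the mechanism: vary the inputs according to their distributions, and the output acquires a distribution of its own.

```python
# Sketch: propagating input variability through a toy process model to
# observe the output distribution. Model and distributions are hypothetical.
import random
import statistics

random.seed(0)


def bake(flour, oven_temp):
    """Toy output model: crust-color score as a function of two inputs."""
    return 50 + 8 * (flour - 6.0) + 0.05 * (oven_temp - 350)


scores = []
for _ in range(10_000):
    flour = random.gauss(6.0, 0.15)       # cups; varies run to run
    oven_temp = random.gauss(350, 5)      # deg F; cycles about its setpoint
    scores.append(bake(flour, oven_temp))

print(f"output mean={statistics.mean(scores):.2f}  "
      f"spread (s)={statistics.stdev(scores):.2f}")
```

The simulated spread of the output is what, in a real plant, would only emerge from many production runs; this is exactly the data collection the next paragraph argues is impractical at scale.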

Companies do not have unlimited time or money to collect such data by conducting large numbers of manufacturing runs. And, such a large data set isn’t necessary either. Subjecting the manufacturing process to extreme values of each input will yield output values that represent the boundaries of the manufactured product. It is reasonable to expect that if the inputs are within their extreme values, the output will be within its boundary values. If the boundary values of the output are within the limits that define performance requirements, we can rest assured that the manufacturing process will produce units that are fit for the intended purpose. And, this manufacturing process can then be said to have been validated to produce the particular part.

A final thought: manufacturing processes have several inputs, and it is not efficient to vary them one at a time. In fact, varying them one at a time doesn't give you the complete picture of how they interact with the manufacturing process or with each other. Properly designed, controlled experiments paint a fuller picture. The science of the design of experiments should be the tool of choice when validating a process.

Fork in the Road

Excepting in the presence of active research in a pure science, the applications of the science tend to drop into a deadly rut of unthinking routine, incapable of progress beyond a limited range predetermined by accomplishments of pure science, and are in constant danger of falling into the hands of people who do not really understand the tools that they are working with and who are out of touch with those that do…

Harold Hotelling, Memorandum to the Government of India, 24 February 1940

That is the predicament I found myself in.

I was hired to support the company’s efforts to validate their processes in preparation for registering their manufacturing facility with the FDA. The FDA defines process validation as:

…establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications.

— 21CFR820.3(z)(1)

While the definition is succinct, process validation is not a trivial task. And, contrary to the belief of management that is not literate in the quality system regulations or the subject of quality assurance, it is certainly not something you can "whip out". It requires an understanding of the process in question (its key inputs and the key attributes of its output) coupled with an understanding of statistical principles, such as the design of experiments (DOE) and analysis of variance (ANOVA), necessary to generate the objective evidence that will establish for the company and the FDA that the "process consistently produces a result meeting its predetermined specifications". It also helps to have a proper plan that allows management to identify and allocate the resources required to successfully meet its objectives. At a minimum, a basic project plan should include a detailed checklist of action items with clearly defined owners and due dates.
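To make the DOE/ANOVA step less abstract: the basic analysis of a two-level factorial estimates each factor's main effect as the difference between the average response at its high level and at its low level. The sketch below uses three hypothetical factors and a made-up response purely to show the arithmetic; it is not data from any real validation study.

```python
# Sketch: estimating main effects from a two-level full factorial.
# Factors, runs, and responses here are hypothetical.
import itertools
import statistics

factor_names = ["A", "B", "C"]                 # three factors for brevity
runs = list(itertools.product((-1, +1), repeat=3))

# Hypothetical measured response for each run (e.g. a taste score).
response = {run: 50 + 4 * run[0] - 2 * run[1] + 0.5 * run[2] for run in runs}

for i, name in enumerate(factor_names):
    hi = statistics.mean(response[r] for r in runs if r[i] == +1)
    lo = statistics.mean(response[r] for r in runs if r[i] == -1)
    print(f"main effect of {name}: {hi - lo:+.1f}")
```

ANOVA then asks which of these effects are large relative to the experimental noise; effects that are not distinguishable from noise are treated as negligible.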

But when management does not understand or trivializes these requirements, it makes decisions that endanger the best interests of the company. Unnecessary risk is assumed. Resources are wasted. Workers are put in a chaotic situation that becomes the primary source of much of their frustration and fear. So it should not be a surprise when "new" action items "pop up" at crunch time, when there is confusion around the ownership of a task, or when deadlines are missed again and again, causing tensions to flare. And while these gaffes in project management might be overcome by working harder (translated as management by proclamation, "because I said so", and long hours), there is no hope of compensating through brute force for poorly designed experiments with insufficiently identified process parameters. Without the right data there is no way to show the process's capability, or even that it is in statistical control.

So there it is, a fork in the road. A choice that we are all confronted with more often than we would like: follow the whack-a-mole tactics of a management team without a strategy, or, as Dr. Hotelling put it, "people who do not really understand the tools that they are working with and who are out of touch with those that do", or make a swift exit to focus on developing your personal knowledge and skills while searching for a better opportunity. As scary as it seems, the latter choice will always lead to a better outcome. At least that has been my experience.