
Process Capability Study


What is a Process Capability Study?

A Process Capability Study is the direct comparison of the voice-of-the-process (VOP) to the voice-of-the-customer (VOC). It measures the ability of a process to meet requirements, either internal or external, without process adjustment.

Its primary metrics are Cpk and Ppk.



VOC and VOP

[Figure: Voice of the Customer & Voice of the Process]

A Process Capability Study is a key quality improvement tool. It tells you:

  • If your process has a targeting problem.
  • If your process has a variation problem.
  • If your process has both targeting and variation problems.
  • If your specification limits are not appropriate.
  • What level of performance the process is capable of.

Process Capability studies have traditionally been performed on process outputs (Y's) but are even more important for process inputs (X's).

Control the Inputs (X's) ... Monitor the Outputs (Y's)

Data Types


Attribute Data

With attribute data a process's capability is defined in terms of pass/fail.

[Figure: Process Capability for Attribute Data]
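
To make pass/fail capability concrete, here is a minimal Python sketch (not from this article's templates; the inspected and failed counts are made-up numbers) expressing attribute capability as proportion defective and DPMO:

    # Minimal sketch: attribute (pass/fail) capability expressed as
    # proportion defective and defects-per-million-opportunities (DPMO).
    inspected = 5000   # hypothetical number of units inspected
    failed = 23        # hypothetical number of failing units

    p_defective = failed / inspected
    dpmo = p_defective * 1_000_000
    first_pass_yield = (1 - p_defective) * 100

    print(f"Proportion defective: {p_defective:.4f}")    # 0.0046
    print(f"DPMO: {dpmo:.0f}")                           # 4600
    print(f"First-pass yield: {first_pass_yield:.2f}%")  # 99.54%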

Continuous Data

With continuous data a process's capability is defined in terms of the area under the curve that falls outside of the specification limits - the defects.

[Figure: Continuous Data Process Capability]




Process Capability Study Steps


1. Select Study Target
2. Verify Requirement
3. Validate Specification Limits
4. Collect the Data
5. Determine Data Type (short-term or long-term)
6. Check Data Normality
7. Calculate Cp, Cpk, Pp, Ppk


Step 1 - Select Study Target


There should be a good reason to perform a Process Capability Study, and that reason must be that some parameter's performance needs to be understood in relation to its specification requirements.

Steps 2 & 3 - Verify Requirement & Validate Specification Limits


Specifications or requirements should be verified and validated before starting the Process Capability Study.

I can't tell you how many times I've seen situations where the specification was not what folks thought it was. In the majority of these instances the misconception about what the specification was had been in place for years.

What's the source of the specification(s)?

  • Customer requirement?
  • Business requirement?
  • Regulation requirement?
  • Design requirements?
  • Is the specification current? 
  • Has the specification changed?
  • Is the specification understood and agreed upon?
  • Is the specification clear?

Step 4 - Collect The Data


If you're going to go through the time, effort and expense to perform capability analysis it's important to pause and really consider data collection.

How you collect the data will determine how much performance information can be extracted from it. The most information is available when the data is collected in rational subgroups.

Rational subgroups are simply items that are alike. They are an attempt to separate common-cause variation from special-cause variation.


A Rational Subgroup Could Be:

  • Close in time
  • Made within the same set-up
  • Done by the same people
  • Processed using the same method
  • Using the same material batch
  • etc.

The goal is to have a "rationale" for collecting the data that will minimize the variation within each subgroup.

By collecting data this way we force a theoretical condition where only common-cause (normal) variation exists within each subgroup. All of the special-cause variation therefore lies between the subgroups. 

If you can collect the Process Capability Study data in rational subgroups, not only will you get the overall capability data, you'll also be able to understand the process potential (what is possible).
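
To make that distinction concrete, here is a minimal Python sketch (synthetic data, not this article's study) that estimates short-term variation from the within-subgroup ranges and long-term variation from all of the data pooled:

    import numpy as np

    # Minimal sketch with synthetic data (20 rational subgroups of 5).
    # Short-term sigma comes from the average within-subgroup range
    # (R-bar / d2); long-term sigma comes from all the data pooled.
    # d2 = 2.326 is the published SPC constant for subgroups of size 5.
    rng = np.random.default_rng(1)
    subgroups = rng.normal(60.5, 0.2, size=(20, 5))

    d2 = 2.326
    r_bar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
    sigma_within = r_bar / d2               # short-term (potential)
    sigma_overall = subgroups.std(ddof=1)   # long-term (overall)

    print(f"sigma within (short-term): {sigma_within:.4f}")
    print(f"sigma overall (long-term): {sigma_overall:.4f}")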

[Figure: Process Variation Between Lots]

The Real World

Some process capability study targets, whether inputs or outputs, do not naturally lend themselves to rational sub-grouping.

  • A transactional metric that is only calculated once a day.
  • A test that is performed once a shift.
  • Any production, transactional or service process with very low volume.


In these instances, try to create a rational subgroup of two or more by combining the closest consecutive data points. These consecutive points are the best estimate of short-term, common-cause variation that you'll be able to get.

If we do something on Monday and don't do it again until Wednesday, combine those two consecutive data points into one subgroup.

IMPORTANT - The goal is to separate special-cause variation from common-cause variation where practically possible. If rational sub-grouping just doesn't make sense due to your volume, forget about it and use single sample data points.

Process Control and Process Capability

The validity of the results of a Process Capability Study depends upon the overall long term stability of the process. In Quality terms "the process must be in a state of statistical control."

When a process is in a state of statistical control it is predictable. Control Charts should be used to assess stability and control. Control Charts separate common-cause variation from special-cause variation.

A process is said to be in a state of "statistical control" when no special-cause variation exists within that process. Only common-cause variation remains.

The control chart below shows a process parameter in a "state of statistical control". The parameter is stable over time. None of the data points fall outside of its calculated +/- 3 sigma control limits.

[Figure: Control Chart Showing Only Common-Cause Variation]

The probability of getting a data point outside of these limits, when only common-cause variation exists, is approximately 0.3%. It's not very likely!

So with roughly 99.7% confidence we can conclude that any data point that falls outside of these limits is an indication of special-cause variation. In other words, something meaningful has occurred to cause the point to go outside of the calculated control limit.

It's important to know that these +/- 3 sigma control limits have nothing to do with the specification limits. Think of control limits as the mathematical boundaries of the Voice-of-the-Process.

The actual specification limits are used during the process capability study itself.

Takeaway - for the results of the Process Capability Study to be predictable long term, the process must first be stable.

Processes that are not in statistical control are unpredictable. No projections of long term capability should be made until you know that the process is stable.

Step 5 - Determine Data Type - S/T or L/T


Customers are typically interested in long-term data (L/T) or performance over time. And as you would imagine, variation is at its maximum long term.

Short-term data (S/T) is also important. With short-term data we can learn what is possible for our process with existing resources. Variation's impact on a process is typically at its minimum short term.

Short-term performance is called "entitlement". Entitlement is the best that the process can do with no changes. Entitlement is represented by the capability index Cp. More on Cp later.

Taking measurements for a day is short-term data. Taking measurements over the course of a few weeks is most likely still short-term data. Taking measurements for an entire calendar quarter or more is typically long-term data.

[Figure: Short-Term and Long-Term Data Examples]


Step 6 - Check Data Normality


Common-Cause Variation (Normal Variation) is inherent in the process/system itself and can only be reduced by changes to the system.

It's a direct result of the way the system operates. It usually requires management action, due to management's control over the system, such as changing a process or upgrading equipment.

Special-Cause Variation is directly assignable and can often be tracked down and fixed without extensive changes to the system: broken or worn equipment, wrong materials, etc.

A process that is free from special causes of variation (only common causes exist) is said to be "in statistical control" and stable. An "in-control" process experiences only normal variation.


When a process is in a state of statistical control, a fundamental rule of statistics applies: the empirical rule.

The rule states that for a normal distribution (bell shaped):

  • 99.7% of the measurements will fall within +/- 3 standard deviations of the mean.
  • 95% will fall within +/- 2 standard deviations of the mean.
  • 68% will fall within +/- 1 standard deviation of the mean.
[Figure: Area Under the Normal Curve]
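
If you'd like to verify these percentages yourself, a few lines of Python with the normal CDF will do it (this is standard statistics, not specific to any SPC package):

    from scipy.stats import norm

    # The empirical rule, verified with the normal CDF.
    for k in (1, 2, 3):
        coverage = norm.cdf(k) - norm.cdf(-k)
        print(f"+/- {k} sigma covers {coverage:.1%}")
    # +/- 1 sigma covers 68.3%
    # +/- 2 sigma covers 95.4%
    # +/- 3 sigma covers 99.7%

    # Chance of a point beyond +/- 3 sigma from common cause alone:
    print(f"Outside +/- 3 sigma: {2 * norm.sf(3):.2%}")  # about 0.27%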

Step 7 - Calculate Cp, Cpk, Pp, Ppk


Process Capability Measures - Cp, Cpk, Pp, Ppk

Cp is Process Capability: Cp compares the width of the tolerance or specification (USL-LSL) to the width of the short term process variation. Cp is entitlement which is the best the process could possibly do with no changes.

Cpk is Process Capability Index: Cpk is almost the same as Cp but it imposes a "k factor" penalty for not being centered on the target. It adjusts Cp for the effect of the non-centered distribution within the specification limits. Like Cp it uses the short term standard deviation in its calculation.

[Figure: Cp and Cpk Capability Calculation]
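
For reference, the standard formulas, where sigma(within) is the short-term standard deviation and xbar is the process mean, are:

    Cp  = (USL - LSL) / (6 x sigma(within))
    Cpk = min(USL - xbar, xbar - LSL) / (3 x sigma(within))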

Pp is Process Performance: Pp compares the width of the tolerance or specification (USL-LSL) to the width of the long term process variation.

Ppk is Process Performance Index: Ppk is almost the same as Pp but it imposes a "k factor" penalty for not being centered on the target. It adjusts Pp for the effect of a non-centered distribution within the specification limits. Like Pp it uses the long term standard deviation in its calculation.

[Figure: Pp and Ppk Capability Calculation]
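
The long-term counterparts use the overall standard deviation:

    Pp  = (USL - LSL) / (6 x sigma(overall))
    Ppk = min(USL - xbar, xbar - LSL) / (3 x sigma(overall))

Putting all four together, here is a minimal Python sketch (synthetic data; the range-based sigma(within) and the d2 = 2.326 constant assume subgroups of size 5):

    import numpy as np

    def capability_indices(subgroups, lsl, usl, d2=2.326):
        # Minimal sketch, not the article's template. Cp/Cpk use the
        # within-subgroup (short-term) sigma estimated from the average
        # range; Pp/Ppk use the overall (long-term) sigma. d2 = 2.326
        # is the published SPC constant for subgroups of size 5.
        data = np.asarray(subgroups, dtype=float)
        mean = data.mean()
        sigma_within = np.mean(data.max(axis=1) - data.min(axis=1)) / d2
        sigma_overall = data.std(ddof=1)

        cp = (usl - lsl) / (6 * sigma_within)
        cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)
        pp = (usl - lsl) / (6 * sigma_overall)
        ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
        return cp, cpk, pp, ppk

    # Example with the motor specification from the study below
    # (60.5 +/- 1.0 RPM) and synthetic data:
    rng = np.random.default_rng(0)
    data = rng.normal(60.0, 0.18, size=(20, 5))
    print(capability_indices(data, lsl=59.5, usl=61.5))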

Capability Analysis Example


A precision motor manufacturer produces a motor model whose specification is 60.5 +/- 1.0 RPM. The company has been producing this model for some time with varying results and feedback from its customers.

In order to understand the behavior of the process the company decides to perform a Process Capability Study.

The target for the study is the requirement of 60.5 RPM - Step 1.

This requirement and its tolerance of +/- 1.0 RPM are verified and validated against customer requirements and the design drawings and check out - Steps 2 and 3.

A Data Collection Plan is created to collect and format the existing data - Step 4.

Checking Data Normality (Step 6)


Looking at the Histogram below we can see that our RPM data resembles the general shape of a bell-curve. This is our first strong hint that our data is normally distributed and can be modeled by the normal distribution.

Secondly, the P-value of 0.611 is greater than our level of significance of 0.05. This tells us that we cannot reject the hypothesis that the data is normally distributed.

All is okay here! Now let's take a look to see if our RPM data is stable over time and in a state of statistical control.


[Figure: Histogram Showing a Normal Distribution]

Checking Process Stability


The X-bar & R Chart below of the last 20 weeks of data shows that all of our rational subgroup averages (sample means) and ranges are falling within the mathematically calculated upper and lower control limits (UCL/LCL). These limits are mathematically set at +/- 3 standard deviations from the mean (average).

99.7% of our plotted data points will fall between these limits unless something in the process changes.

Our process is in statistical control - it's stable. Only common-cause, or inherent and natural, variation is acting upon it.

If special-cause variation was in the process you would see it in the control chart. One or more of the plotted data points would fall outside of the +/- 3 standard deviation (sigma) control limits.

Our process can be modeled using the normal distribution. And, because our data shows statistical control, our performance is very predictable long term.
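
For readers who want to reproduce chart limits like these, here is a minimal sketch (synthetic data, not the motor study itself) using the standard A2, D3, and D4 constants for subgroups of size 5:

    import numpy as np

    # Minimal sketch with synthetic data, not the motor study itself.
    # A2, D3 and D4 are the published SPC constants for subgroups of
    # size 5; A2 x R-bar approximates 3 standard deviations of the
    # subgroup means.
    A2, D3, D4 = 0.577, 0.0, 2.114

    subgroups = np.random.default_rng(7).normal(60.0, 0.18, size=(20, 5))
    xbar = subgroups.mean(axis=1)
    ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
    xbar_bar, r_bar = xbar.mean(), ranges.mean()

    print(f"Xbar chart: LCL={xbar_bar - A2 * r_bar:.3f}  "
          f"CL={xbar_bar:.3f}  UCL={xbar_bar + A2 * r_bar:.3f}")
    print(f"R chart:    LCL={D3 * r_bar:.3f}  "
          f"CL={r_bar:.3f}  UCL={D4 * r_bar:.3f}")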

Now it's finally time to see how we're performing against the RPM specification of 60.5 +/- 1.0 RPM.

[Figure: Xbar and R Control Chart]

Check Process Capability (Step 7)


In the Xbar & R Chart above take note that there is no mention of the specification.

The graph below is the actual Process Capability Study. It's the direct comparison of the voice-of-the-process (the data) to the specification, which is the voice-of-the-customer.

The study in the graph below shows an issue. 

The motor RPM is not centered on the target of 60.5 RPM. The data is to the left and is centered around 60.0 RPM. The actual mean or average RPM is 60.01.

Our Cp and Pp values are 1.86 and 1.81. Cp and Pp compare the width of the tolerance band, which in this case is 2 RPM, to the width of the process output.

Cp uses the within subgroup variation in its calculation while Pp uses the overall variation.


Our Cp and Pp values show that the tolerance width is almost twice the size of the process output width. This is great news and is clearly visible in the capability study chart below.

Our Cpk and Ppk values are less than 1.0, which means that we will produce motors outside of the RPM specification. By viewing the PPM (parts per million) data we can estimate our performance over the long term.
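
As a rough back-calculation from the figures above (treating the reported Cp of 1.86 and the mean of 60.01 RPM as given):

    sigma(within) = (USL - LSL) / (6 x Cp) = 2.0 / (6 x 1.86) = 0.179 RPM (approx.)
    Cpk = min(61.5 - 60.01, 60.01 - 59.5) / (3 x 0.179) = 0.51 / 0.54 = 0.95 (approx.)

which agrees with the Cpk below 1.0 described above.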

[Figure: Capability Study with Calculated Statistics]

Handling Non-normally Distributed Data


A Process Capability Study assumes that the data comes from a normal, or near-normal, distribution. Process Capability Analysis loses accuracy, estimating either too high or too low, as the data departs from a normal distribution.

With a normal distribution the data falls symmetrically about the mean or average. The distribution is bell-shaped with approximately 50% of the data falling on each side of the mean.


Knowing that a normal distribution is bell-shaped, we can use this criterion to give our data the "eyeball test" to assess whether or not it is normally distributed.

Looking at the Histogram below we can clearly see that the data is symmetric about the mean or average value of approximately 50. The data output is bell-shaped with approximately 50% of the data falling on each side of the mean.

[Figure: Normal "Bell-Shaped" Distribution]

Another technique used to assess whether data is normally distributed is probability plotting. If the data is normally distributed the plotted data will approximate a straight line.

The data in the Histogram above is shown below in a Probability Plot. In general it follows a straight line. Pretty solid evidence that the data is normally distributed.


P-Value Method: Advanced Topic

Another technique is the P-value approach which is also widely used to assess whether data is normally distributed.

If the calculated P-value exceeds the significance level of the test, you cannot reject the hypothesis that the data is normally distributed. The significance level, or alpha risk, is usually set between 0.05 (5%) and 0.10 (10%).

In the Probability Plot below you can see that the calculated P-value is 0.716. This is much greater than our alpha risk of 0.05 (5%). Further evidence that the above distribution of data is normally distributed.

Note: The p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
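
If you have a statistics package handy, the test takes only a few lines. Here is a minimal sketch using the Shapiro-Wilk test from scipy (synthetic data; the software used for the plots in this article may use a different test, such as Anderson-Darling):

    import numpy as np
    from scipy.stats import shapiro

    # Minimal sketch with synthetic data. Shapiro-Wilk is one common
    # P-value-based normality test.
    data = np.random.default_rng(3).normal(50, 5, size=100)

    stat, p_value = shapiro(data)
    if p_value > 0.05:
        print(f"p = {p_value:.3f} > 0.05: cannot reject normality")
    else:
        print(f"p = {p_value:.3f} <= 0.05: data looks non-normal")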

[Figure: Probability Plot Following a Straight Line]

An "eyeball test" of the data's Histogram  (shape) and Probability Plot is all that is typically needed to judge a data sets distribution normality.

But, as distributions depart from this perfect bell-shape it becomes more difficult to judge normality and the probability plot and P-value is needed.

Process Capability - Non-normally Distributed Data


Many processes do not output data that follows a normal distribution.

Cycle-time data from transactional processes is rarely normally distributed. Luckily though there are methods to perform Process Capability Studies on non-normally distributed data.

To perform the analysis you first must transform your non-normal data distribution to a normal distribution.

With transformation magic you force your non-normal data to become normal...

Here's an example...

Non Normal Data Example


A custom engineering company has determined that responding to its customers' Requests-for-Quote (RFQs) within 5 business days is critical-to-satisfaction.

They decide to perform a Process Capability Study to quantify their performance against this 5 day requirement.

They work through Steps 1-5 above without any problems.

Then comes Step 6 - Check Data Normality. Looking at the Graphical Summary below, they see that the RFQ cycle-time data is not symmetrical or bell-shaped.

It does not pass the "eyeball test" for a normal distribution.

[Figure: Non-normal Distribution - Not Bell-Shaped]

The probability plot confirms it. The RFQ cycle-time data does not follow a straight line. The P-value is also below the alpha risk of 0.05 (5%). No doubt about it - this is not a normal distribution!

[Figure: Non-normal Data Probability Plot]

With non-normally distributed data you need to find a statistical transformation function to transform the non-normal distribution into a normal distribution.

This is best accomplished with a good statistical software package; there are many available. Data transformation can also be done with Excel or any standard spreadsheet software package.

Transformation Functions

Two of the most popular transformation functions used with Process Capability Analysis are the:

  • Box-Cox Transformation: transforms the data by raising it to the power lambda, where lambda is any number between -5 and 5 (see the sketch after this list).
  • Johnson Transformation: optimally selects a function from three families of distributions of a variable, which are then easily transformed into a normal distribution.
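
Here is a minimal Box-Cox sketch using scipy (synthetic, skewed cycle-time data; note that Box-Cox requires strictly positive data, and scipy has no built-in Johnson transformation, which is one reason dedicated statistical packages are often used for that one):

    import numpy as np
    from scipy.stats import boxcox, shapiro

    # Minimal sketch with synthetic, skewed cycle-time data. scipy's
    # boxcox() finds the optimal lambda automatically.
    cycle_times = np.random.default_rng(5).lognormal(1.2, 0.6, size=100)

    transformed, lam = boxcox(cycle_times)
    print(f"Optimal lambda: {lam:.3f}")
    print(f"P-value before: {shapiro(cycle_times).pvalue:.4f}")
    print(f"P-value after:  {shapiro(transformed).pvalue:.4f}")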

Between them, the Box-Cox and Johnson Transformations will handle almost any data set. Using statistical software to evaluate the candidate transformations produces the Probability Plot graphs below.

With these plots you're looking for the Goodness-of-Fit to a straight line. 


To determine the best data transformation choice or "best fit" use the P-value. The higher the P-value the better the fit of the data to the model.

In this case we choose the Johnson Transformation Function (second graph, bottom right corner). Its P-value is 0.95. The Box-Cox P-value is 0.769, which is second best. Many other transformation functions are available, but the Johnson or Box-Cox will typically work.

[Figure: Probability Plots of the Candidate Transformations]

Once you know which transformation function to use, transform the non-normal data and perform the Process Capability Study on the transformed data. Remember that the specification limits must be transformed with the same function so that the data and the specification are compared on the same scale.

In the analysis below, notice that the observed performance is that approximately 17% of the RFQs exceeded the 5-day requirement.

The long-term expected overall performance is that approximately 19% of RFQs will exceed the 5-day turnaround limit.

[Figure: Process Capability Study on Transformed Data]
