Analysing Portfolios of Lean Six Sigma Projects

Written By

Theodore T. Allen, James E. Brady and Jason Schenk

Submitted: October 14th, 2010 Published: July 14th, 2011

DOI: 10.5772/16732


1. Introduction

The widespread acceptance of Six Sigma as a systematic program of process control, planning, and improvement has led to the creation of many databases describing the performance of individual projects, timing, and the techniques used. These databases provide resources for the analysis of quality management practices. Specifically, there are three levels at which analysis can occur in this context:

Micro level – lowest level dealing with individual tools and statistical methods

Meso level – mid level dealing with groups of individual tools and supervisor level decision-making about method selection and timing

Macro level – highest level dealing with organizations and institutions, related to overall quality programs and stock performance

Reviewing the literature reveals a large portion concerning macro-level decision-making, particularly the decision of whether to implement a Six Sigma program at a company, e.g., Yu and Popplewell (1994), Yacout and Hall (1997), Bisgaard and Freiesleben (2000), Yacout and Gautreau (2000), and Chan and Spedding (2001). Most of this research is based on individual case studies and anecdotal evidence. A second large grouping of studies deals with the micro level, investigating component tools and techniques for green and black belts (Hoerl 2001a). Little published work relates to the meso level of mid-level management and operational decision-making (Linderman, Schroeder, Zaheer, and Choo 2003). These databases are likely going unused for such investigations at most companies for at least two reasons. First, there has traditionally been little assistance from academics on how to make sense of them. Second, the people with the most statistical expertise are involved in the individual projects, not in cross-project evaluation. Most managers are not statisticians and need help in making sense of the data now available to them. The growing databases of project-related quality improvement activities could be useful in the empirical study of important meso-level research and real-world questions, including determining the health of a given company’s quality system, modeling Six Sigma as a process, and optimizing the selection and ordering of component methods.

According to Juran and Gryna (1980), the activities that assure quality in companies can be grouped into three processes: quality planning, quality control, and quality improvement. Policies, standard practices, and philosophy make up the quality planning of a system. A good quality system is proactive, not reactive. Quality improvement consists of the systematic and proactive pursuit of improvement opportunities in production processes to increase quality levels. Typically, quality improvement activities are conducted as projects. This proactive and project-based nature distinguishes improvement from quality control, which is an on-line process that is reactive in nature. According to Harry (1994), all things are processes. A central belief of Six Sigma is that the product is a function of the design and of the manufacturing process that must produce it.

With Juran and Harry in mind, Six Sigma can be viewed as a process and subject to the same controls and improvement objectives of other processes. Determining what methods to use, when to transition to different phases of the project, and under what circumstances to terminate a project could conceivably make the difference between a healthy and profitable program and a failed one. Against this background, the purpose of this study was to look at this growing database in a way that could help management better run improvement projects.


2. Methods

The many databases of project-related quality improvement activities could be useful in the empirical study of important research questions. As stated earlier, potential research topics include the health of a given company’s quality system, modeling Six Sigma, and the optimality of selecting and ordering the component methods associated with Six Sigma. Researchers focus on what they have data and tools for. Martin (1982) pointed out that the availability of certain types of data might disproportionately influence the problems investigated and the conclusions drawn. Now, new data sources and the associated ability to ask and answer new types of questions are more readily available, for example: “Is my quality system out of control?” “Which method would lead to the greatest expected profits in my case?” “Under what circumstances does it make business sense to terminate a project?” If these kinds of questions can be systematically explored in the Six Sigma discourse, then important lessons can be learned regarding investment decisions.

This paper discusses two analysis methods designed for meso-level analysis: exponentially weighted moving average (EWMA) statistical process control (SPC) charting and regression. Since its introduction by Shewhart in the 1930s, the control chart has been one of the primary techniques of statistical process control (Shewhart 1931). Considering how important individual projects can be and that they require months or even years, the logical subgroup size is n = 1 project. With only one measurement per subgroup (a project), a subgroup range cannot be calculated. The data consist of a small number of non-normal observations. The EWMA control chart is typically used with individual observations (Montgomery 2004). The exponentially weighted moving average is defined as:

Z_i = λx_i + (1 − λ)Z_{i−1},  i = 1, 2, …, n
The constant λ takes on the values 0 < λ ≤ 1. The process target value or the average of the preliminary data can be used as the starting value so that

Z_0 = μ_0 (the process target) or Z_0 = x̄ (the average of the preliminary data).
The EWMA control chart has the following control limits and center line and is constructed by plotting Zi versus the sample number, i :

UCL = μ_0 + Lσ √( λ/(2 − λ) [1 − (1 − λ)^{2i}] )

Center line = μ_0

LCL = μ_0 − Lσ √( λ/(2 − λ) [1 − (1 − λ)^{2i}] )
According to Montgomery (1997), values of λ in the interval 0.05 ≤ λ ≤ 0.25 work well, with λ = 0.05, λ = 0.10, and λ = 0.25 being popular choices. Values of L between 2.6 and 3.0 also work reasonably well. Hunter (1989) suggested λ = 0.40 and L = 3.054 to match as closely as possible the performance of a standard Shewhart control chart with the Western Electric rules.
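As an illustration of the formulas above, the following is a minimal Python/NumPy sketch (not the authors' code) that computes the EWMA statistics and the time-varying control limits using Hunter's λ = 0.40 and L = 3.054. The moving-range estimate of σ and the illustrative profit figures are assumptions made only for this example.

```python
import numpy as np

def ewma_chart(x, lam=0.40, L=3.054, mu0=None, sigma=None):
    """EWMA statistics Z_i and time-varying control limits.

    lam and L follow Hunter (1989). When no process target or sigma is
    supplied (an assumption for this sketch), mu0 defaults to the sample
    mean and sigma to the moving-range estimate for individuals data.
    """
    x = np.asarray(x, dtype=float)
    if mu0 is None:
        mu0 = x.mean()
    if sigma is None:
        # moving-range estimate: mean(|x_i - x_{i-1}|) / d2, with d2 = 1.128
        sigma = np.abs(np.diff(x)).mean() / 1.128
    z = np.empty(len(x))
    prev = mu0                      # Z_0 = mu_0
    for i, xi in enumerate(x):
        z[i] = lam * xi + (1 - lam) * prev
        prev = z[i]
    i = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 + width, mu0 - width

# hypothetical project profits (in $1000s) with one unusually large project
profits = [12, 15, 9, 14, 80, 11, 13, 10, 16, 12]
z, ucl, lcl = ewma_chart(profits)
out_of_control = [k for k in range(len(z)) if z[k] > ucl[k] or z[k] < lcl[k]]
```

Plotting z against the sample number, with ucl and lcl overlaid, reproduces the chart construction described above.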

Regression is another tool that may be employed to model and predict a Six Sigma program. The familiar regression equation is represented by equation 7 below:

y = f(x)′β + ε  (7)
where f(x) is a vector of functions only of the system inputs x, β is a vector of coefficients estimated from the data, and ε is a random error term. Much of the literature on Six Sigma implementation converges on factors such as the importance of management commitment, employee involvement, teamwork, training, and customer expectations. A number of research papers have been published suggesting key Six Sigma elements and ways to improve the management of the total quality of the product, process, corporation, and customer-supplier chain. Most of the available literature treats the different factors as independent entities affecting the Six Sigma environment. But the extent to which one factor is present may affect the others. Estimating the net effect of these interacting factors is assumed to be partly responsible for the success of the Six Sigma philosophy. Quantifying Six Sigma factors and their interdependencies would permit estimating the net effect of the Six Sigma environment. The authors are not aware of any publication in this direction.
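Since equation 7 is the generic linear model, a short sketch may make the interaction idea concrete. The factor names, design matrix, and synthetic data below are illustrative assumptions, not the study's data; the point is that f(x) can include cross-product terms, so interacting factors are estimated jointly rather than as independent entities.

```python
import numpy as np

def fit_linear_model(F, y):
    """Least-squares estimate of beta in y ~ F beta (F: n x p design matrix)."""
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    return beta

# Hypothetical factors: x1 = training intensity, x2 = management commitment.
# The x1*x2 column lets the model capture their interdependence.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, 50)
x2 = rng.uniform(0, 1, 50)
F = np.column_stack([np.ones(50), x1, x2, x1 * x2])
y = F @ np.array([1.0, 2.0, -1.0, 3.0])   # noiseless synthetic response
beta_hat = fit_linear_model(F, y)          # recovers the coefficients
```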


3. Database example: Midwest manufacturer

The company used for study is a U.S.-based Midwestern manufacturing company which manufactures components for the aerospace, industrial, and defense industries. It has approximately 1,000 employees, annual sales of $170 million, and six factories located in five states. The data are all derived from one of its six manufacturing sites. This site has 250 employees with sales of $40 million. Quality improvement and cost reduction are important competitive strategies for this company. The ability to predict project savings and how best to manage project activities would be advantageous to the future competitiveness of the company.

Table 1.

Definition of Variables

Over the course of this study data was collected on 20 variables and two derived variables: Profit (Actual Savings minus cost), and a Boolean variable, Formal Methods (FM) which is “true” if any combination of Charter, Process Mapping, Cause & Effect, Gauge R&R, DOE, or SPC is used and false otherwise (see Table 1). Thirty-nine improvement projects were included in this study, which generated a total of $4,385,099 in net savings (profit).

Data was collected on each project by direct observation and interviews with team members to determine the use of a variable such as DOE or Team Forming. No attempt was made to measure the degree of use or the successfulness of the use of any variable. We were interested only in whether the activity took place during the project. A count was maintained if an activity was used multiple times, such as multiple DOE runs (i.e., a screening DOE and an optimization DOE would be recorded as 2 under the variable heading).

Expected Savings and Actual Savings are based on an 18-month period after implementation. The products and processes change fairly rapidly in this industry, and it is standard company policy to only look at an 18-month horizon to evaluate projects, based on a monthly production forecast. Costs were tracked with existing company accounting procedures. All projects were assigned a work order for the charging of direct and non-direct time spent on a specific improvement activity. Direct and non-direct labor was charged at the average loaded rate. All direct materials and outside fees (e.g., laboratory analysis) were charged to the same work order to capture total cost.

One of the main principles of Six Sigma is the emphasis placed on attention to the bottom line (Harry 2001 and Montgomery 2000). In the literature reviewed, bottom-line focus was mentioned by 24% of relevant articles as a critical success factor. Profit, therefore, is used as the dependent variable, with the other 18 variables constituting the independent variables.

3.1. EWMA

A common first step in deriving the process control chart is to check the assumption of normality. Figure 1 is a normal probability plot of the profits from the projects. The obvious conclusion is that project 5 is an outlier. There is also a possible indication that the other data divide into two populations.
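A normal probability plot of this kind can be built from plotting positions and inverse-normal quantiles. The following Python sketch (an implementation assumption; the chapter gives no code) returns the plot coordinates; a point far off the straight line, like project 5 here, indicates an outlier. The data values are illustrative only.

```python
import numpy as np
from statistics import NormalDist

def normal_probability_coords(x):
    """Coordinates for a normal probability plot: theoretical normal
    quantiles versus the ordered data, using the common plotting
    positions (i - 0.375) / (n + 0.25)."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    probs = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    q = np.array([NormalDist().inv_cdf(p) for p in probs])
    return q, xs

# with roughly normal data the points fall on a line; a gross outlier
# (like the DFSS project's profit) plots far above the line at the right
q, xs = normal_probability_coords([3.1, 2.9, 3.4, 2.7, 3.0, 25.0])
```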

Next, we constructed an EWMA chart of the profit data. We started by plotting the first 25 points to obtain the control limits, as shown in Figure 2. One out-of-control point was found and discarded after the derivation of this chart; it was the same project as the outlier on the normal probability plot (number 5). This was the sole DFSS (Design for Six Sigma) project in the database. The others were process improvement projects without design control. A second chart was developed without the DFSS project point, shown in Figure 3. These charts were constructed based on Hunter (1989) with λ = 0.40 and L = 3.054.

Of special interest are the last seven projects, which took place after a significant Six Sigma training program. The chart provides strong statistical evidence that the training improved the bottom line of subsequent projects. Such information supports decisions to invest in training other divisions. Similar studies with the same technique could be used to verify whether training contributed to a fundamental change in the process.

Figure 1.

Normal Probability Chart for Six Sigma Projects.

Figure 2.

EWMA Control Chart for first 25 Six Sigma Projects.

Figure 3.

EWMA Control Chart for Six Sigma Projects.

3.2. Regression

Many hypotheses can be investigated using regression. Somewhat arbitrarily, we focus on two types of questions. First, we investigate the appropriateness of applying any type of method as a function of the expected savings. Therefore, regressors include the expected savings, the total number of formal methods (FM) applied, and whether engineering analysis (EA) was used. Second, we investigate the effects of training and of how projects were selected. In fitting all models, project 5 caused outliers on the residual plots. Therefore, all models in this section are based on fits with that (DFSS) project removed.

The following model resulted in an R-squared adjusted equal to 0.88:


Figure 4 is based on predictions from equation (8). It provides quantitative evidence for the common-sense realization that applying many methods when engineers do not predict much savings is a losing proposition.

The model and its predictions can be used to set limits on how many methods should be applied for a project with a given expected savings. For example, unless the project is expected to save $50,000, it likely makes little sense to apply multiple formal methods. The model also suggests that relying heavily on engineering analysis for large projects is likely a poor choice: if the expected savings are higher than $100,000, it is likely not advisable to rely solely on engineering analysis.
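The thresholds described above can be read as a simple decision rule. The sketch below is purely illustrative: the $50,000 and $100,000 cut-offs come from the discussion, but the function itself is a hypothetical policy, not the fitted model of equation (8).

```python
def recommend_methods(expected_savings):
    """Hypothetical policy using the thresholds discussed in the text:
    below $50,000 expected savings, multiple formal methods rarely pay off;
    above $100,000, engineering analysis alone is likely insufficient."""
    if expected_savings < 50_000:
        return "engineering analysis only"
    if expected_savings > 100_000:
        return "multiple formal methods"
    return "one or two formal methods"
```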

Figure 4.

Surface Plot of the Regression Model in Equation (8)

Figure 5.

Main Effects Plot of Predictions of the Simple Regression Model.

A second regression model was created using two indicator variables: I = 1 if the project was influenced by training and I = 0 otherwise, and J = 1 if the project was management initiated and J = 0 otherwise. This model is represented by equation 9 and shows a positive correlation of profit with both training and non-management initiation:


This model has an adjusted R-squared of only 0.15, presumably because most of the variation was explained by the variables in equation 8. Note that multicollinearity prevents accurately fitting a single model with the regressors of both equations. The predictions for the model in equation (9) are shown in Figure 5.
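The adjusted R-squared values quoted for equations 8 and 9 follow the standard definition. A minimal sketch, assuming p counts all fitted parameters including the intercept:

```python
import numpy as np

def adjusted_r2(y, yhat, p):
    """Adjusted R-squared for a fit with p parameters (incl. intercept)."""
    y = np.asarray(y, dtype=float)
    yhat = np.asarray(yhat, dtype=float)
    n = len(y)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)
```

Unlike plain R-squared, this statistic penalizes extra regressors, which matters when comparing models like equations 8 and 9 that use different numbers of terms.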


4. Discussion

The ability to estimate the potential effects of changes on the profitability of projects is valuable information for policymakers in the decision-making process. This study demonstrated that applying existing data analysis tools to this new management data source provides useful knowledge that can help guide project management. Findings included:

  • Design for Six Sigma (DFSS) projects can be significantly more profitable than process improvement projects. Therefore, permitting design control can be advisable. In our study, probability plotting, EWMA charting, and regression all established this result independently.

  • Training can significantly improve project performance, and the improvement can be observed using EWMA charts.

  • Regression can create data-driven standards establishing criteria for how many methods should be applied as a function of the expected savings.

Also, in our study we compared the results of various sized projects and the use of formal tools. We found that estimating a project’s economic value is important for guiding the degree to which formal tools are used. Based on the results of this study, when the predicted impact is small, a rapid implementation based on engineering analysis is best. As projects’ predicted impact grows, formal methods can play a larger role.

The simple model also tends to show a strong benefit from training. This model has good variance inflation factor (VIF) values and supports the findings from the SPC analysis. Of interest is the negative correlation associated with management initiation of projects. In this regard, there is still ambiguity in the results. For example, it is not known whether people worked harder on projects they initiated or whether they picked more promising projects.

The research also suggests several topics for future research. Replication of the value of the methods in the context of other companies and industries could be valuable and might lead to different conclusions for different databases. Many other methods could be relevant for meso-analysis, and the effects of sites and the nature of the industry could be investigated. Many companies have a portfolio of business units, and tailoring how Six Sigma is applied could be of interest. In addition, the relationship between meso-analysis and organizational “resilience” could be studied. These concepts are related in part because, through applying techniques such as control charting, organizations might avoid over-control while reacting promptly and appropriately to large unexpected events, i.e., be more resilient. Finally, it is hypothetically possible that expert systems could be developed for data-driven prescription of specific methods for specific types of problems. Such systems could aid in training and help organizations develop and maintain a method-oriented competitive advantage.



Acknowledgements

We thank Clark Mount-Campbell, Joseph Fiksel, Allen Miller, and William Notz for helpful discussions and encouragement. Also, we thank David Woods for many forms of support.


References

  1. Bisgaard, S. & Freiesleben, J. (2000). Quality Quandaries: Economics of Six Sigma Program. Quality Engineering, 13(2), 325-331.
  2. Chan, K. K. & Spedding, T. A. (2001). On-line Optimization of Quality in a Manufacturing System. International Journal of Production Research, 1127-1145.
  3. Gautreau, N., Yacout, S. & Hall, R. (1997). Simulation of Partially Observed Markov Decision Process and Dynamic Quality Improvement. Computers & Industrial Engineering, 691-700.
  4. Harry, M. J. (2001). A new definition aims to connect quality with financial performance. Quality Progress, 33(1), 64-66.
  5. Harry, M. J. (1994). The Vision of Six Sigma: A Roadmap for Breakthrough. Phoenix: Sigma Publishing Company.
  6. Hoerl, R. W. (2001a). Six Sigma Black Belts: What Do They Need to Know? Journal of Quality Technology, 33(4), 391-406.
  7. Hunter, J. S. (1989). A One Point Plot Equivalent to the Shewhart Chart with Western Electric Rules. Quality Engineering, 2.
  8. Juran, J. M. & Gryna, F. (1980). Quality Planning and Analysis. New York: McGraw-Hill.
  9. Linderman, K., Schroeder, R. G., Zaheer, S. & Choo, A. S. (2003). Six Sigma: A goal-theoretic perspective. Journal of Operations Management, 193-203.
  10. Martin, J. (1982). A garbage can model of the research process. In J. E. McGrath (Ed.), Judgment Calls in Research. Beverly Hills, CA: Sage.
  11. Montgomery, D. (2000). Editorial: Beyond Six Sigma. Quality and Reliability Engineering International, 17(4), iii-iv.
  12. Montgomery, D. C. (2004). Introduction to Statistical Quality Control. New York: John Wiley & Sons, Inc.
  13. Shewhart, W. A. (1931). Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand.
  14. Yacout, S. & Gautreau, N. (2000). A Partially Observable Simulation Model for Quality Assurance Policies. International Journal of Production Research, 38(2), 253-267.
  15. Yu, B. & Popplewell, K. (1994). Metamodels in Manufacturing: A Review. International Journal of Production Research, 32, 787-796.
