Thursday, 04 November 2010

Top 10 Improvement Tools Named After Lean Sensei




By Jon Miller | Post Date: July 9, 2007 5:41 PM | Comments: 11

1. Ohno Circle
Taiichi Ohno was the Toyota executive largely responsible for structuring and implementing the system known today as the Toyota Production System over the decades following World War II. Ohno was known for drawing a chalk circle around managers and making them stand in the circle until they had seen and documented all of the problems in a particular area.
[Figure: Ohno circle]
Today the "stand in a circle" exercise is a great way to train one's eyes to see waste and to provide structure for the team leader to do daily improvement or for the busy executive with limited time to go to gemba.
When you spend time on the gemba standing in the Ohno Circle, you will see the gap between the target condition and the actual condition. It's time to decide where to start first in closing this gap, using the Pareto principle.
2. Pareto Chart
In 1906 Italian economist Vilfredo Pareto simplified the world for us with his 80/20 rule, or what is known as the Pareto principle. This is most often expressed in a Pareto chart.
[Figure: Pareto chart]
Identify the vital few that will give you the biggest impact towards closing the gap between current condition and target condition, and when that's done, move onto the next tallest bar in the Pareto.
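A minimal sketch of how such a chart can be built, assuming Python with matplotlib is available; the defect categories and counts below are invented purely for illustration.

# Pareto chart sketch: sort categories by count, overlay cumulative %.
# Categories and counts are invented, not taken from the article.
import matplotlib.pyplot as plt

defects = {"Scratches": 48, "Misalignment": 27, "Missing parts": 12,
           "Wrong label": 8, "Other": 5}

items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)  # tallest bar first
labels = [name for name, _ in items]
counts = [count for _, count in items]
total = sum(counts)

cumulative = []   # running cumulative percentage exposes the "vital few"
running = 0
for c in counts:
    running += c
    cumulative.append(100 * running / total)

fig, ax = plt.subplots()
ax.bar(labels, counts)
ax2 = ax.twinx()
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylim(0, 110)
ax.set_ylabel("Defect count")
ax2.set_ylabel("Cumulative %")
plt.tight_layout()
plt.show()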
To address the root causes of the top 20% of factors that are keeping you from hitting the target, the next step is to dig deeper into those causes using the Ishikawa Diagram.
3. Ishikawa Diagram
The Ishikawa Diagram (also called the fishbone diagram or cause and effect diagram) was introduced in the 1960s by Kaoru Ishikawa. Ishikawa pioneered quality management processes at the Kawasaki shipyards, and became one of the founding fathers of modern management. The diagram shows the causes of a certain event or condition. The Ishikawa Diagram is one of the seven QC tools, along with the histogram, Pareto chart, check sheet, control chart, flowchart, and scatter diagram.
[Figure: Ishikawa diagram]
It is quite a flexible tool. Root cause analysis can be conducted for manufacturing or production-type processes using the 4M (man, material, machine, methods) or sometimes up to 6M (add mother nature, measurement) as well as 4P (price, promotion, place, product) for a marketing and sales kaizen.
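Before it is drawn, a fishbone can be captured as nothing more than a mapping from the 4M categories to brainstormed candidate causes. A minimal Python sketch follows; the effect and the causes listed are hypothetical placeholders.

# Represent an Ishikawa (fishbone) diagram as 4M categories -> candidate causes.
# The effect and all causes listed here are hypothetical.
effect = "Late shipments"

fishbone = {
    "Man":      ["unclear pick instructions", "new operators not yet trained"],
    "Machine":  ["label printer jams", "conveyor downtime"],
    "Material": ["supplier cartons out of spec"],
    "Method":   ["batch picking instead of single-piece flow"],
}

def print_fishbone(effect, bones):
    # Dump the diagram as indented text for a quick team review.
    print("Effect:", effect)
    for category, causes in bones.items():
        print(" ", category)
        for cause in causes:
            print("    -", cause)

print_fishbone(effect, fishbone)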
Now that you have identified the root causes of your problem, you are ready to implement countermeasures. For that, you'll need an action plan.
4. Gantt Chart
Henry Gantt was a management consultant who popularized the project management tool known as the Gantt Chart some time around 1910.
[Figure: Gantt chart]
Anyone who has used Microsoft Project, or this classic project management tool in any other form, has Mr. Gantt to thank. He revolutionized the management of large, complex projects worldwide, such as construction, when he introduced his Gantt Chart.
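For illustration only, a Gantt-style schedule can be sketched in a few lines; the tasks, start days and durations below are invented, and matplotlib's broken_barh is just one convenient way to draw the bars.

# Gantt chart sketch: each task is a horizontal bar at (start_day, duration).
# Task names and durations are invented.
import matplotlib.pyplot as plt

tasks = [("Design", 0, 5), ("Procurement", 3, 7), ("Build", 10, 8), ("Test", 18, 4)]

fig, ax = plt.subplots()
for row, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (row - 0.4, 0.8))
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()            # first task at the top, as on a classic Gantt chart
ax.set_xlabel("Project day")
plt.tight_layout()
plt.show()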
Gantt was a very early Lean sensei in that he set the foundation for later developments such as the standard work combination sheet, scheduling a day's work, and work balancing. The action plan must not be limited to "plan and do" but must also include "check and act / adjust" according to the PDCA Cycle, also known as the Deming Wheel.
5. Deming Wheel
The Deming Wheel is also known as the PDCA Wheel. W. Edwards Deming is credited with teaching PDCA to the Japanese, but proper credit should be given to Walter Shewhart, the pioneering statistician and teacher of Deming, who originated the PDCA notion.
[Figure: PDCA cycle]
A fuller explanation of the Plan, Do, Check and Act steps can be found on the Gemba Research website. One of the more powerful ways to test out your ideas through experiments is the Taguchi Method.
6. Taguchi Method
Genichi Taguchi took the notion of R.A. Fisher's Design of Experiments and sought to understand the influence of parameters on variation, not only on the mean. In conventional DOE, variation between experimental replications is considered a nuisance that experimenters would rather eliminate, whereas in Taguchi's mind, variation is a central point of investigation.
The diagram below shows the Taguchi Loss Function. Ron Pereira at the Lean Six Sigma Academy explains the workings of the Taguchi Method in a series of informative articles.
[Figure: Taguchi loss function]
Using these tools, you will have the data to prove that your experiment is a success! But how do you motivate people to come around to your way of thinking and adopt a new way? It might be helpful to know something about human motivation and Maslow's Hierarchy of Needs.
7. Maslow's Hierarchy of Needs
Abraham Maslow was an American psychologist who is most famous for the hierarchy of human needs. Maslow's model gives us the foundation for understanding how to motivate people to change, which is a topic of great interest to us, addressed in part 1, part 2 and part 3 of a series of previous posts.
[Figure: Maslow's hierarchy of needs]
Improvements made, you now need a way to check and audit the process regularly so that it does not revert to the old way and so that new problems are discovered quickly. The Oba Gauge is a useful means of enabling a visual workplace for abnormality management.
8. Oba Gauge
A four-foot-tall Japanese Lean sensei named Mr. Oba was notorious for insisting that nothing in the factory be taller than his eye level. This resulted in the "Oba Gauge" for a visual workplace. The idea is to avoid creating view-blockers in your workplace whenever possible. It is also called the "4-foot rule" or the "1.3 meter rule".
[Figure: Oba gauge]
The workplace is more visual, many large problems have been solved and the process is stable. But how can we avoid complacency and keep continuous improvement going?
9. Heinrich Principle
H.W. Heinrich taught us through his Heinrich Principle that we must pay attention to even the smallest of safety incidents or so-called "near misses" if we want to find the root causes of what could become larger safety accidents. The same principle applies to 5S, the elimination of waste, and awareness of quality problems. Lean management means everyone is vigilant about even the smallest problems. This requires constant education and attention to maintain a heightened sensitivity and avoid habituation to the warusa kagen (condition of badness).
[Figure: Heinrich principle]
The first nine tools used properly will result in improved safety, quality, cost and delivery. This will also open up capacity in your company to develop and deliver new products and services. But which products and services will give you a market advantage? The Kano Model helps you answer this question.
10. Kano Model
When we go back to the beginning in the cycle of continuous improvement, we have to ask again "What does the customer want?" Professor Noriaki Kano gave us a model to answer this question more effectively. The chart below illustrates how there is the Voice of the Customer (spoken needs) as well as what is sometimes called Mind of the Customer (latent or unspoken needs).
[Figure: Kano model]
Quality Function Deployment (QFD) makes effective use of the Kano Model, as does fact-based Hoshin Kanri (policy management or Lean strategic planning). C2C Solutions offers a Flash tutorial of the Kano Model, about 8 minutes long.
[Figure: Kano model (2)]
You might ask why a professor who developed a model largely used for product development and strategic planning is included on this list of improvement tools named after Lean sensei. If we follow Pareto's Law, 80% of the waste in a product is in the design phase, and likewise 80% of the waste in management effort is probably in misdirected or unaligned strategy. So although the Kano Model ranks #10 on the list because it is far less practical and hands-on in daily use than the other nine, one could say that it has the biggest potential impact on the overall system.
There are many tools in the world. Knowing how to use them is important, but even more important is knowing how to put them to use as an overall system in such a way that helps people see things in a new way, to change how they think and work.


Source :
http://www.gembapantarei.com/2007/07/top_10_improvement_tools_named_after_lean_sensei.html
November 4th, 4:57PM

Wednesday, 13 October 2010

Kla Project - Menjemput Impian

Saturday, 09 October 2010

TAGUCHI METHODS

There has been a great deal of controversy about Genichi Taguchi's methodology since it was first introduced in the United States. This controversy has lessened considerably in recent years due to modifications and extensions of his methodology. The main controversy, however, is still about Taguchi's statistical methods, not about his philosophical concepts concerning quality or robust design. Furthermore, it is generally accepted that Taguchi's philosophy has promoted, on a worldwide scale, the design of experiments for quality improvement upstream, or at the product and process design stage.
Taguchi's philosophy and methods support, and are consistent with, the Japanese quality control approach that asserts that higher quality generally results in lower cost. This is in contrast to the widely prevailing view in the United States that asserts that quality improvement is associated with higher cost. Furthermore, Taguchi's philosophy and methods support the Japanese approach to move quality improvement upstream. Taguchi's methods help design engineers build quality into products and processes. As George Box, Soren Bisgaard, and Conrad Fung observed: "Today the ultimate goal of quality improvement is to design quality into every product and process and to follow up at every stage from design to final manufacture and sale. An important element is the extensive and innovative use of statistically designed experiments."

TAGUCHI'S DEFINITION OF QUALITY

The old traditional definition of quality states that quality is conformance to specifications. This definition was expanded by Joseph M. Juran (1904-) in 1974 and then by the American Society for Quality Control (ASQC) in 1983. Juran observed that "quality is fitness for use." The ASQC defined quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs."
Taguchi presented another definition of quality. His definition stressed the losses associated with a product. Taguchi stated that "quality is the loss a product causes to society after being shipped, other than losses caused by its intrinsic functions." Taguchi asserted that losses in his definition "should be restricted to two categories: (1) loss caused by variability of function, and (2) loss caused by harmful side effects." Taguchi is saying that a product or service has good quality if it "performs its intended functions without variability, and causes little loss through harmful side effects, including the cost of using it."
It must be kept in mind here that "society" includes both the manufacturer and the customer. Loss associated with function variability includes, for example, energy and time (problem fixing), and money (replacement cost of parts). Losses associated with harmful side effects could include lost market share for the manufacturer and/or physical effects, such as those of the drug thalidomide, for the consumer.
Consequently, a company should provide products and services such that possible losses to society are minimized, or, "the purpose of quality improvement … is to discover innovative ways of designing products and processes that will save society more than they cost in the long run." The concept of reliability is appropriate here. The next section will clearly show that Taguchi's loss function yields an operational definition of the term "loss to society" in his definition of quality.

TAGUCHI'S LOSS FUNCTION

We have seen that Taguchi's quality philosophy strongly emphasizes losses or costs. W. H. Moore asserted that this is an "enlightened approach" that embodies "three important premises: for every product quality characteristic there is a target value which results in the smallest loss; deviations from target value always results in increased loss to society; [and] loss should be measured in monetary units (dollars, pesos, francs, etc.)."
Figure 1 depicts Taguchi's typical loss function. The figure also contrasts Taguchi's function with the traditional view that states there are no losses if specifications are met.

Figure 1: Taguchi's Loss Function

It can be seen that small deviations from the target value result in small losses. These losses, however, increase in a nonlinear fashion as deviations from the target value increase. The function shown above is a simple quadratic equation that compares the measured value, Y, of a unit of output to the target value, T:

L(Y) = k(Y − T)²

where L(Y) is the expected loss associated with the specific value of Y and k is a constant.
Essentially, this equation states that the loss is proportional to the square of the deviation of the measured value, Y, from the target value, T. This implies that any deviation from the target (based on customers' desires and needs) will diminish customer satisfaction. This is in contrast to the traditional definition of quality that states that quality is conformance to specifications. It should be recognized that the constant k can be determined once the value of L(Y) associated with some particular value of Y is known. Of course, under many circumstances a quadratic function is only an approximation.
Since Taguchi's loss function is presented in monetary terms, it provides a common language for all the departments or components within a company. Finally, the loss function can be used to define performance measures of a quality characteristic of a product or service. This property of Taguchi's loss function will be taken up in the next section. But to anticipate the discussion of this property, Taguchi's quadratic function can be converted to:

E[L(Y)] = k[σ² + (μ − T)²]

This can be accomplished by assuming that Y has some probability distribution with mean μ and variance σ². This second mathematical expression states that average or expected loss is due either to process variation or to being off target (called "bias"), or both.
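As a rough numerical sketch of the two expressions above: the constant k can be fixed once the loss at one known deviation from target is given, and the expected loss then splits into a variation term and an off-target ("bias") term. All numbers below are invented for illustration.

# Taguchi's quadratic loss and its expected value, numerically.
#   L(y) = k * (y - T)^2
#   E[L] = k * (sigma^2 + (mu - T)^2)   variance term + bias term
import statistics

T = 10.0                # target value of the quality characteristic (invented)
delta = 0.5             # deviation at which a known loss is incurred (invented)
loss_at_delta = 20.0    # e.g. a $20 repair cost at that deviation (invented)
k = loss_at_delta / delta ** 2

measurements = [10.1, 9.8, 10.3, 10.0, 9.7, 10.2]   # invented sample
mu = statistics.mean(measurements)
sigma2 = statistics.pvariance(measurements)         # population variance

variance_part = k * sigma2
bias_part = k * (mu - T) ** 2
print("k =", k)
print("expected loss per unit =", round(variance_part + bias_part, 2))
print("  from variation:", round(variance_part, 2), " from being off target:", round(bias_part, 2))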

TAGUCHI, ROBUST DESIGN, AND THE
DESIGN OF EXPERIMENTS

Taguchi asserted that the development of his methods of experimental design started in Japan about 1948. These methods were then refined over the next several decades. They were introduced in the United States around 1980. Although Taguchi's approach was built on traditional concepts of design of experiments (DOE), such as factorial and fractional factorial designs and orthogonal arrays, he created and promoted some new DOE techniques such as signal-to-noise ratios, robust designs, and parameter and tolerance designs. Some experts in the field have shown that some of these techniques, especially signal-to-noise ratios, are not optimal under certain conditions. Nonetheless, Taguchi's ideas concerning robust design and the design of experiments will now be discussed.
DOE is a body of statistical techniques for the effective and efficient collection of data for a number of purposes. Two significant ones are the investigation of research hypotheses and the accurate determination of the relative effects of the many different factors that influence the quality of a product or process. DOE can be employed in both the product design phase and production phase.
A crucial component of quality is a product's ability to perform its tasks under a variety of conditions. Furthermore, the operating environmental conditions are usually beyond the control of the product designers, and, therefore robust designs are essential. Robust designs are based on the use of DOE techniques for finding product parameter settings (e.g., temperature settings or drill speeds), which enable products to be resilient to changes and variations in working environments.
It is generally recognized that Taguchi deserves much of the credit for introducing the statistical study of robust design. We have seen how Taguchi's loss function sets variation reduction as a primary goal for quality improvement. Taguchi's DOE techniques employ the loss function concept to investigate both product parameters and key environmental factors. His DOE techniques are part of his philosophy of achieving economical quality design.
To achieve economical product quality design, Taguchi proposed three phases: system design, parameter design, and tolerance design. In the first phase, system design, design engineers use their practical experience, along with scientific and engineering principles, to create a viably functional design. To elaborate, system design uses current technology, processes, materials, and engineering methods to define and construct a new "system." The system can be a new product or process, or an improved modification of an existing product or process.
The parameter design phase determines the optimal settings for the product or process parameters. These parameters have been identified during the system design phase. DOE methods are applied here to determine the optimal parameter settings. Taguchi constructed a limited number of experimental designs, from which U.S. engineers have found it easy to select and apply in their manufacturing environments.
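One of those standard designs is the commonly published L8 orthogonal array, which accommodates up to seven two-level factors in eight runs. The sketch below lists the array and checks its balancing property; the array itself is standard, while the surrounding code is only illustrative.

# The L8 (2^7) orthogonal array: 8 runs, up to 7 two-level factors (columns).
# In every pair of columns, each level combination (1,1), (1,2), (2,1), (2,2)
# appears exactly twice, which lets main effects be estimated independently.
from itertools import combinations
from collections import Counter

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

for i, j in combinations(range(7), 2):
    pairs = Counter((row[i], row[j]) for row in L8)
    assert all(count == 2 for count in pairs.values()), (i, j)

print("L8 is orthogonal: every pair of columns is balanced.")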
The goal of the parameter design is to design a robust product or process, which, as a result of minimizing performance variation, minimizes manufacturing and product lifetime costs. Robust design means that the performance of the product or process is insensitive to noise factors such as variation in environmental conditions, machine wear, or product-to-product variation due to raw material differences. Taguchi's DOE parameter design techniques are used to determine which controllable factors and which noise factors are the significant variables. The aim is to set the controllable factors at those levels that will result in a product or process being robust with respect to the noise factors.
In our previous discussion of Taguchi's loss function, two equations were discussed. It was observed that the second equation could be used to establish quality performance measures that permit the optimization of a given product's quality characteristic. In improving quality, both the average response of a quality characteristic and its variation are important. The second equation suggests that it may be advantageous to combine both the average response and variation into a single measure. And Taguchi did this with his signal-to-noise ratios (S/N). Consequently, Taguchi's approach is to select design parameter levels that will maximize the appropriate S/N ratio.
These S/N ratios can be used to get closer to a given target value (such as tensile strength or baked tile dimensions), or to reduce variation in the product's quality characteristic(s). For example, one S/N ratio corresponds to what Taguchi called "nominal is best." Such a ratio is selected when a specific target value, such as tensile strength, is the design goal.
For the "nominal is best" case, Taguchi recommended finding an adjustment factor (some parameter setting) that will eliminate the bias discussed in the second equation. Sometimes a factor can be found that will control the average response without affecting the variance. If this is the case, our second equation tells us that the expected loss becomes:
Consequently, the aim now is to reduce the variation. Therefore, Taguchi's S/N ratio is:

where is the sample's standard deviation.
In this formula, by minimizing , − 10 log 10 , is maximized. Recall that all of Taguchi's S/N ratios are to be maximized.
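A small sketch of how the "nominal is best" ratio above can be computed for the replicates of a single experimental run, assuming the bias has already been removed by an adjustment factor; the replicate values are invented.

# "Nominal is best" signal-to-noise ratio after the mean is adjusted onto target:
#   S/N = -10 * log10(s^2), where s is the sample standard deviation.
# A larger S/N means less variation. Replicate values are invented.
import math
import statistics

replicates = [10.1, 9.9, 10.2, 10.0, 9.8]
s2 = statistics.variance(replicates)      # sample variance
sn_ratio = -10 * math.log10(s2)

print("sample variance =", round(s2, 4))
print("S/N (nominal is best) =", round(sn_ratio, 2), "dB")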
Finally, a few brief comments concerning the tolerance design phase. This phase establishes tolerances, or specification limits, for either the product or process parameters that have been identified as critical during the second phase, the parameter design phase. The goal here is to establish tolerances wide enough to reduce manufacturing costs, while at the same time assuring that the product or process characteristics are within certain bounds.

EXAMPLES AND CONCLUSIONS

As Thomas P. Ryan has stated, Taguchi, at the very least, has focused "our attention on new objectives in achieving quality improvement. The statistical tools for accomplishing these objectives will likely continue to be developed." Quality management "gurus," such as W. Edwards Deming (1900-1993) and Kaoru Ishikawa (1915-1989), have stressed the importance of continuous quality improvement by concentrating on processes upstream. This is a fundamental break with the traditional practice of relying on inspection downstream. Taguchi emphasized the importance of DOE in improving the quality of the engineering design of products and processes. As previously mentioned, however, "his methods are frequently statistically inefficient and cumbersome." Nonetheless, Taguchi's designs of experiments have been widely applied and theoretically refined and extended. Two application cases and one refinement example will now be discussed.
K. N. Anand, in an article in Quality Engineering, discussed a welding problem. Welding was performed to repair cracks and blown holes on the cast-iron housing of an assembled electrical machine. Customers wanted a defect-free quality weld; however, the welding process had resulted in a fairly high percentage of welding defects. Management and welders identified five variables and two interactions that were considered the key factors in improving quality. A Taguchi orthogonal design was performed, resulting in the identification of two highly significant interactions and a defect-free welding process.
The second application, presented by M. W. Sonius and B. W. Tew in a Quality Engineering article, involved reducing stress components in the connection between a composite component and a metallic end fitting for a composite structure. Bonding, pinning, or riveting the fitting in place traditionally made the connections. Nine significant variables that could affect the performance of the entrapped fiber connections were identified and a Taguchi experimental design was performed. The experiment identified two of the nine factors and their respective optimal settings. Therefore, stress levels were significantly reduced.
The theoretical refinement example involves Taguchi robust designs. We have seen where such a design can result in products and processes that are insensitive to noise factors. Using Taguchi's quadratic loss function, however, may provide a poor approximation of true loss and suboptimal product or process quality. John F. Kros and Christina M. Mastrangelo established relationships between nonquadratic loss functions and Taguchi's signal-to-noise ratios. Applying these relationships in an experimental design can change the recommended selection of the respective settings of the key parameters and result in smaller losses.
Peter B. Webb, Ph.D.

FURTHER READING:

American Society for Quality Control, Statistics Division. Glossary and Tables for Statistical Quality Control. Milwaukee, WI: American Society for Quality Control, 1983.
Anand, K. N. "Development of Process Specification for Radiographic Quality Welding." Quality Engineering, June 1997, 597-601.
Barker, T. B. "Quality Engineering by Design: Taguchi's Philosophy." Quality Progress 19, no. 12 (1986), 32-42.
Box, G. E. P., and others. "Quality Practices in Japan." Quality Progress, March 1988, 37-41.
Byrne, Diane M., and Shin Taguchi. "The Taguchi Approach to Parameter Design." ASQC Quality Congress Transaction, 1986, 168-77.
Daniel, Cuthbert. Applications of Statistics to Industrial Experimentation. New York: Wiley, 1976.
Farnum, Nicholas R. Modern Statistical Quality Control and Improvement. New York: Duxbury Press, 1994.
Kackar, R. N. "Off-Line Quality Control, Parameter Design, and the Taguchi Method." Journal of Quality Technology 17, no. 4 (1985): 176-88.
Kros, John F., and Christina M. Mastrangelo. "Impact of Nonquadratic Loss in the Taguchi Design Methodology." Quality Engineering 10, no. 3 (1998): 509-19.
Lochner, Robert H., and Joseph E. Matar. Designing for Quality: An Introduction to the Best of Taguchi and Western Methods of Statistical Experimental Design. Milwaukee, WI: ASQC Quality Press, 1990.
Phadke, M. S. Quality Engineering Using Robust Design. New York: Prentice Hall, 1989.
Quinlan, J. "Product Improvement by Application of Taguchi Method." In Third Supplier Symposium on Taguchi Methods. Dearborn, MI: American Supplier Institute, 1985.
Ross, P. J. Taguchi Techniques for Quality Engineering. New York: McGraw-Hill, 1988.
Ryan, T. P. "Taguchi's Approach to Experimental Design: Some Concerns." Quality Progress, May 1988, 34-36.
Sonius, M. W., and B. W. Tew. "Design Optimization of Metal to Composite Connections Using Orthogonal Arrays." Quality Engineering 9, no. 3 (1997): 479-87.
Taguchi, Genichi. Introduction to Quality Engineering. White Plains, NY: Asian Productivity Organization, UNIPUB, 1986.
——. "The Development of Quality Engineering." ASI Journal 1, no. 1 (1988): 1-4.


Tuesday, 21 September 2010

Plan–Do–Check–Act Cycle
Also called: PDCA, plan–do–study–act (PDSA) cycle, Deming cycle, Shewhart cycle
Description
The plan–do–check–act cycle (Figure 1) is a four-step model for carrying out change. Just as a circle has no end, the PDCA cycle should be repeated again and again for continuous improvement.
Figure 1: Plan-do-check-act cycle
When to Use Plan-Do-Check-Act
  • As a model for continuous improvement.
  • When starting a new improvement project.
  • When developing a new or improved design of a process, product or service.
  • When defining a repetitive work process.
  • When planning data collection and analysis in order to verify and prioritize problems or root causes.
  • When implementing any change.
Plan-Do-Check-Act Procedure
  1. Plan. Recognize an opportunity and plan a change.
  2. Do. Test the change. Carry out a small-scale study.
  3. Study. Review the test, analyze the results and identify what you’ve learned.
  4. Act. Take action based on what you learned in the study step: If the change did not work, go through the cycle again with a different plan. If you were successful, incorporate what you learned from the test into wider changes. Use what you learned to plan new improvements, beginning the cycle again.
Plan-Do-Check-Act Example
The Pearl River, NY School District, a 2001 recipient of the Malcolm Baldrige National Quality Award, uses the PDCA cycle as a model for defining most of their work processes, from the boardroom to the classroom.
PDCA is the basic structure for the district’s overall strategic planning, needs-analysis, curriculum design and delivery, staff goal-setting and evaluation, provision of student services and support services, and classroom instruction.
Figure 2 shows their “A+ Approach to Classroom Success.” This is a continuous cycle of designing curriculum and delivering classroom instruction. Improvement is not a separate activity: It is built into the work process.

Figure 2: Plan-do-check-act example
Plan. The A+ Approach begins with a “plan” step called “analyze.” In this step, students’ needs are analyzed by examining a range of data available in Pearl River’s electronic data “warehouse,” from grades to performance on standardized tests. Data can be analyzed for individual students or stratified by grade, gender or any other subgroup. Because PDCA does not specify how to analyze data, a separate data analysis process (Figure 3) is used here as well as in other processes throughout the organization.
Figure 3: Pearl River analysis process
Do. The A+ Approach continues with two “do” steps:
  1. “Align” asks what national and state standards require and how they will be assessed. Teaching staff also plans curriculum by looking at what is taught at earlier and later grade levels and in other disciplines to assure a clear continuity of instruction throughout the student’s schooling.

    Teachers develop individual goals to improve their instruction where the “analyze” step showed any gaps. 
  2. The second “do” step is, in this example, called “act.” This is where instruction is actually provided, following the curriculum and teaching goals. Within set parameters, teachers vary the delivery of instruction based on each student’s learning rates and styles and varying teaching methods.
Check. The “check” step is called “assess” in this example. Formal and informal assessments take place continually, from daily teacher “dipstick” assessments to every-six-weeks progress reports to annual standardized tests. Teachers also can access comparative data on the electronic database to identify trends. High-need students are monitored by a special child study team.
Throughout the school year, if assessments show students are not learning as expected, mid-course corrections are made such as re-instruction, changing teaching methods and more direct teacher mentoring. Assessment data become input for the next step in the cycle.
Act. In this example the “act” step is called “standardize.” When goals are met, the curriculum design and teaching methods are considered standardized. Teachers share best practices in formal and informal settings. Results from this cycle become input for the “analyze” phase of the next A+ cycle.
Excerpted from Nancy R. Tague’s The Quality Toolbox, Second Edition, ASQ Quality Press, 2004, pages 390-392.

The seven Basic tools of Quality

The seven basic tools of quality

   

Introduction

In 1950, the Japanese Union of Scientists and Engineers (JUSE) invited legendary quality guru W. Edwards Deming to go to Japan and train hundreds of Japanese engineers, managers and scholars in statistical process control. Deming also delivered a series of lectures to Japanese business managers on the subject, and during his lectures, he would emphasise the importance of what he called the "basic tools" that were available to use in quality control.
One of the members of the JUSE was Kaoru Ishikawa, at the time an associate professor at the University of Tokyo. Ishikawa had a desire to 'democratise quality': that is to say, he wanted to make quality control comprehensible to all workers, and inspired by Deming’s lectures, he formalised the Seven Basic Tools of Quality Control.
Ishikawa believed that 90% of a company’s problems could be improved using these seven tools, and that, with the exception of Control Charts, they could easily be taught to any member of the organisation. This ease of use, combined with their graphical nature, makes statistical analysis easier for all.
The seven tools are:
  • Cause and Effect Diagrams
  • Pareto Charts
  • Flow Charts
  • Check sheet
  • Scatter Plots
  • Control (Run) Charts
  • Histograms
What follows is a brief overview of each tool.

Cause and Effect Diagrams

Also known as Ishikawa and Fishbone Diagrams
First used by Ishikawa in the 1940s, they are employed to identify the underlying symptoms of a problem or "effect" as a means of finding the root cause. The structured nature of the method forces the user to consider all the likely causes of a problem, not just the obvious ones, by combining brainstorming techniques with graphical analysis. It is also useful in unraveling the convoluted relationships that may, in combination, drive the problem.
The basic Cause and Effect Diagram places the effect at one end. The causes feeding into it are then identified, via brainstorming, by working backwards along the "spines" (sometimes referred to as "vertebrae"), as in the diagram below:
For more complex process problems, the spines can be allocated a category and then the causes/inputs of each identified. There are several standard sets of categorisations that can be used, but the most common is Material, Machine/Plant, Measurement/Policies, Methods/Procedures, Men/People and Environment, easily remembered as the "5M's and an E", as shown below:
Each spine can then be further sub-divided, as necessary, until all the inputs are identified. The diagram is then used to highlight the causes that are most likely a contributory factor to the problem/effect, and these can be investigated for inefficiencies/optimization.

Control (Run) Charts

Dating back to the work of Shewhart and Deming, there are several types of Control Chart. They are reasonably complex statistical tools that measure how a process changes over time. By plotting this data against pre-defined upper and lower control limits, it can be determined whether the process is consistent and under control, or if it is unpredictable and therefore out of control.
The type of chart to use depends upon the type of data to be measured; i.e. whether it is attributable or variable data. The most frequently used Control Chart is a Run Chart, which is suitable for both types of data. They are useful in identifying trends in data over long periods of time, thus identifying variation.
Data is collected and plotted over time with the upper and lower limits set (from past performance or statistical analysis), and the average identified, as in the diagram below.
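A minimal sketch of such a chart, assuming Python with matplotlib; the control limits here are simply the mean plus or minus three standard deviations of the (invented) historical data, and a real application would choose the chart type appropriate to the data.

# Run chart sketch: measurements in time order against a centre line and
# upper/lower control limits computed from historical data (invented values).
import statistics
import matplotlib.pyplot as plt

data = [10.2, 9.9, 10.1, 10.4, 9.8, 10.0, 10.3, 9.7, 10.1, 10.2,
        9.9, 10.5, 10.0, 9.8, 10.1]

mean = statistics.mean(data)
sd = statistics.pstdev(data)
ucl = mean + 3 * sd          # upper control limit
lcl = mean - 3 * sd          # lower control limit

fig, ax = plt.subplots()
ax.plot(range(1, len(data) + 1), data, marker="o")
ax.axhline(mean, label="average")
ax.axhline(ucl, linestyle="--", label="UCL")
ax.axhline(lcl, linestyle="--", label="LCL")
ax.set_xlabel("Sample number")
ax.set_ylabel("Measured value")
ax.legend()
plt.tight_layout()
plt.show()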

Pareto Charts

Based upon the Pareto Principle that states that 80% of a problem is attributable to 20% of its causes, or inputs, a Pareto Chart organises and displays information in order to show the relative importance of various problems or causes of problems. It is a vertical bar chart with items organised in order from the highest to the lowest, relative to a measurable effect: i.e. frequency, cost, time.
A Pareto Chart makes it easier to identify where the greatest possible improvement gains can be achieved. By showing the highest incidences or frequencies first and relating them to the overall percentage for the samples, it highlights what is known as the "vital few". Factors are then prioritized, and effort focused upon them.

Scatter Diagrams

A Scatter Diagram, or Chart, is used to identify whether there is a relationship between two variables. It does not prove that one variable directly affects the other, but is highly effective in confirming that a relationship exists between the two.
It is a graphical more than statistical tool. Points are plotted on a graph with the two variables as the axes. If the points form a narrow "cloud", then there is a direct correlation. If there is no discernible pattern or a wide spread, then there is no or little correlation.
If one variable increases as the other increases (i.e. the cloud extends at roughly 45 degrees from the point where the x and y axes cross), then they are said to be positively correlated. If one variable decreases as the other increases, then they are said to be negatively correlated. These are linear correlations; the variables may also be non-linearly correlated.
Below is an example of a Scatter Diagram where the two variables have a positive linear correlation.
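A complementary sketch in code, assuming Python 3.10+ for the statistics.correlation helper; the paired observations are invented and show a positive linear relationship.

# Scatter diagram sketch plus a simple correlation check (invented data).
import statistics
import matplotlib.pyplot as plt

cure_time = [10, 15, 20, 25, 30, 35, 40, 45]                        # variable x
bond_strength = [12.1, 13.0, 14.2, 14.8, 15.9, 16.5, 17.8, 18.4]    # variable y

# A Pearson correlation close to +1 or -1 suggests a linear relationship,
# but it does not prove that one variable causes the other.
r = statistics.correlation(cure_time, bond_strength)    # requires Python 3.10+
print("correlation coefficient r =", round(r, 2))

plt.scatter(cure_time, bond_strength)
plt.xlabel("Cure time (min)")
plt.ylabel("Bond strength")
plt.tight_layout()
plt.show()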

Histogram

Like Pareto Charts, Histograms are a form of bar chart. They are used to measure the frequency distribution of data that is commonly grouped together in ranges or "bins". Most commonly they are used to discern frequency of occurrence in long lists of data. For instance, in the list 2, 2, 3, 3, 3, 3, 4, 4, 5, 6, the number 3 occurs the most frequently. However, if that list comprises several hundred data points, or more, it would be difficult to ascertain the frequency. Histograms provide an effective visual means of doing so.
"Bins" are used when the data is spread over a wide range. For example, in the list 3, 5, 9, 12, 14, 17, 20, 24, 29, 31, 45, 49, instead of looking for the occurrence of each number from 1 to 49, which would be meaningless, it is more useful to group them such that the frequency of occurrence of the ranges 1-10, 11-20, 21-30, 31-40 and 41-50 are measured. These are called bins.
Histograms are very useful in discerning the distribution of data and therefore patterns of variation. They monitor the performance of a system and present it in a graphical way which is far easier to understand and read than a table of data. Once a problem has been identified, they can then also be used to check that the solution has worked.

Flow Chart

A flow chart is a visual representation of a process. It is not statistical, but is used to piece together the actual process as it is carried out, which quite often varies from how the process owner imagines it is. Seeing it visually makes identifying both inefficiencies and potential improvements easier.
A series of shapes are used to depict every step of the process; mental decisions are captured as well as physical actions and activities. Arrows depict the movement through the process. Flow charts vary in complexity, but when used properly can prove useful for identifying non-value-adding or redundant steps, the key parts of a process, as well as the interfaces between other processes.
Problems with flow charts occur when the desired process is depicted instead of the actual one. For this reason, it is better to brainstorm the process with a group to make sure everything is captured.

An example of a Flow Chart

Check sheet

Also known as Data Collection sheets and Tally charts
Like flow charts, check sheets are non-statistical and relatively simple. They are used to capture data in a manual, reliable, formalised way so that decisions can be made based on facts. As the data is collected, it becomes a graphical representation of itself. Areas for improvement can then be identified, either directly from the check sheet, or by feeding the data into one of the other seven basic tools.
Simply, a table is designed to capture the incidences of the variable(s) to be measured. Tick marks are then manually put in the relevant boxes. As the ticks build up, they give a graphical representation of the frequency of incidences. Below is a typical example.
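A minimal sketch of the same idea kept in code rather than on paper; the categories and observations are invented, and Counter simply plays the role of the tally marks.

# Check sheet as a simple tally: each observed incidence increments its category.
from collections import Counter

observations = ["scratch", "dent", "scratch", "misprint", "scratch", "dent"]
tally = Counter(observations)

for category, count in tally.most_common():
    print(f"{category:<10} {'|' * count}  ({count})")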

Summary

The seven basic tools of quality can be used singularly or in tandem to investigate a process and identify areas for improvement, although they do not all necessarily need to be used. If a process is simple enough – or the solution obvious enough – any one may be all that is needed for improvement. They provide a means for doing so based on facts, not just personal knowledge, which of course can be tainted or inaccurate. Ishikawa advocated teaching these seven basic tools to every member of a company as a means to making quality endemic throughout the organisation.

References
Ishikawa, Kaoru. Guide to Quality Control. Kraus International Publications, White Plains, New York, 1982.
Tague, Nancy R. The Quality Toolbox, Second Edition, ASQ Quality Press, 2004.


Thursday, 16 September 2010

Balanced scorecard basics

Balanced Scorecard Basics

The balanced scorecard is a strategic planning and management system that is used extensively in business and industry, government, and nonprofit organizations worldwide to align business activities to the vision and strategy of the organization, improve internal and external communications, and monitor organization performance against strategic goals. It was originated by Drs. Robert Kaplan (Harvard Business School) and David Norton as a performance measurement framework that added strategic non-financial performance measures to traditional financial metrics to give managers and executives a more 'balanced' view of organizational performance. While the phrase balanced scorecard was coined in the early 1990s, the roots of this type of approach are deep, and include the pioneering work of General Electric on performance measurement reporting in the 1950s and the work of French process engineers (who created the Tableau de Bord, literally a "dashboard" of performance measures) in the early part of the 20th century.
The balanced scorecard has evolved from its early use as a simple performance measurement framework to a full strategic planning and management system. The “new” balanced scorecard transforms an organization’s strategic plan from an attractive but passive document into the "marching orders" for the organization on a daily basis. It provides a framework that not only provides performance measurements, but helps planners identify what should be done and measured. It enables executives to truly execute their strategies.
This new approach to strategic management was first detailed in a series of articles and books by Drs. Kaplan and Norton. Recognizing some of the weaknesses and vagueness of previous management approaches, the balanced scorecard approach provides a clear prescription as to what companies should measure in order to 'balance' the financial perspective. The balanced scorecard is a management system (not only a measurement system) that enables organizations to clarify their vision and strategy and translate them into action. It provides feedback around both the internal business processes and external outcomes in order to continuously improve strategic performance and results. When fully deployed, the balanced scorecard transforms strategic planning from an academic exercise into the nerve center of an enterprise.
Kaplan and Norton describe the innovation of the balanced scorecard as follows:
"The balanced scorecard retains traditional financial measures. But financial measures tell the story of past events, an adequate story for industrial age companies for which investments in long-term capabilities and customer relationships were not critical for success. These financial measures are inadequate, however, for guiding and evaluating the journey that information age companies must make to create future value through investment in customers, suppliers, employees, processes, technology, and innovation." 

[Figure: The balanced scorecard]
Adapted from Robert S. Kaplan and David P. Norton, “Using the Balanced Scorecard as a Strategic Management System,” Harvard Business Review (January-February 1996): 76.

Perspectives

The balanced scorecard suggests that we view the organization from four perspectives, and to develop metrics, collect data and analyze it relative to each of these perspectives:
The Learning & Growth PerspectiveThis perspective includes employee training and corporate cultural attitudes related to both individual and corporate self-improvement. In a knowledge-worker organization, people -- the only repository of knowledge -- are the main resource. In the current climate of rapid technological change, it is becoming necessary for knowledge workers to be in a continuous learning mode. Metrics can be put into place to guide managers in focusing training funds where they can help the most. In any case, learning and growth constitute the essential foundation for success of any knowledge-worker organization.
Kaplan and Norton emphasize that 'learning' is more than 'training'; it also includes things like mentors and tutors within the organization, as well as that ease of communication among workers that allows them to readily get help on a problem when it is needed. It also includes technological tools; what the Baldrige criteria call "high performance work systems."
The Business Process Perspective
This perspective refers to internal business processes. Metrics based on this perspective allow the managers to know how well their business is running, and whether its products and services conform to customer requirements (the mission). These metrics have to be carefully designed by those who know these processes most intimately; with our unique missions these are not something that can be developed by outside consultants.
The Customer Perspective
Recent management philosophy has shown an increasing realization of the importance of customer focus and customer satisfaction in any business. These are leading indicators: if customers are not satisfied, they will eventually find other suppliers that will meet their needs. Poor performance from this perspective is thus a leading indicator of future decline, even though the current financial picture may look good.
In developing metrics for satisfaction, customers should be analyzed in terms of kinds of customers and the kinds of processes for which we are providing a product or service to those customer groups.
The Financial Perspective
Kaplan and Norton do not disregard the traditional need for financial data. Timely and accurate funding data will always be a priority, and managers will do whatever necessary to provide it. In fact, often there is more than enough handling and processing of financial data. With the implementation of a corporate database, it is hoped that more of the processing can be centralized and automated. But the point is that the current emphasis on financials leads to the "unbalanced" situation with regard to other perspectives.  
There is perhaps a need to include additional financial-related data, such as risk assessment and cost-benefit data, in this category.

Strategy Mapping

Strategy maps are communication tools used to tell a story of how value is created for the organization.  They show a logical, step-by-step connection between strategic objectives (shown as ovals on the map) in the form of a cause-and-effect chain.  Generally speaking, improving performance in the objectives found in the Learning & Growth perspective (the bottom row) enables the organization to improve its Internal Process perspective Objectives (the next row up), which in turn enables the organization to create desirable results in the Customer and Financial perspectives (the top two rows).

Balanced Scorecard Software

The balanced scorecard is not a piece of software. Unfortunately, many people believe that implementing software amounts to implementing a balanced scorecard. Once a scorecard has been developed and implemented, however, performance management software can be used to get the right performance information to the right people at the right time. Automation adds structure and discipline to implementing the Balanced Scorecard system, helps transform disparate corporate data into information and knowledge, and helps communicate performance information. The Balanced Scorecard Institute formally recommends the QuickScore Performance Information System™ developed by Spider Strategies and co-marketed by the Institute.