Tuesday, 21 September 2010

Plan–Do–Check–Act Cycle
Also called: PDCA, plan–do–study–act (PDSA) cycle, Deming cycle, Shewhart cycle
Description
The plan–do–check–act cycle (Figure 1) is a four-step model for carrying out change. Just as a circle has no end, the PDCA cycle should be repeated again and again for continuous improvement.
Figure 1: Plan-do-check-act cycle
When to Use Plan-Do-Check-Act
  • As a model for continuous improvement.
  • When starting a new improvement project.
  • When developing a new or improved design of a process, product or service.
  • When defining a repetitive work process.
  • When planning data collection and analysis in order to verify and prioritize problems or root causes.
  • When implementing any change.
Plan-Do-Check-Act Procedure
  1. Plan. Recognize an opportunity and plan a change.
  2. Do. Test the change. Carry out a small-scale study.
  3. Check. Review the test, analyze the results, and identify what you’ve learned.
  4. Act. Take action based on what you learned in the check step: If the change did not work, go through the cycle again with a different plan. If you were successful, incorporate what you learned from the test into wider changes. Use what you learned to plan new improvements, beginning the cycle again.
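The cycle can also be read as a simple loop. Below is a minimal Python sketch of that reading; the helper functions and the success criterion are hypothetical stand-ins, not part of the PDCA method itself.

    # A rough sketch of PDCA as an iterative loop. The helper functions
    # and the defect-rate threshold are invented placeholders for the
    # real planning, testing, and review work described above.

    def plan_change(cycle):
        return f"change proposal #{cycle}"

    def run_small_scale_test(change):
        # Do: carry out a small-scale study (result is simulated here)
        return {"change": change, "defect_rate": 0.03}

    def review_results(results):
        # Check: analyze the results and judge success (invented criterion)
        return results["defect_rate"] < 0.05

    for cycle in range(1, 6):
        change = plan_change(cycle)             # Plan
        results = run_small_scale_test(change)  # Do
        if review_results(results):             # Check
            print(f"Act: standardize '{change}' and plan wider rollout")
            break
        print(f"Act: '{change}' did not work; re-plan and repeat")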
Plan-Do-Check-Act Example
The Pearl River, NY School District, a 2001 recipient of the Malcolm Baldrige National Quality Award, uses the PDCA cycle as a model for defining most of its work processes, from the boardroom to the classroom.
PDCA is the basic structure for the district’s overall strategic planning, needs-analysis, curriculum design and delivery, staff goal-setting and evaluation, provision of student services and support services, and classroom instruction.
Figure 2 shows their “A+ Approach to Classroom Success.” This is a continuous cycle of designing curriculum and delivering classroom instruction. Improvement is not a separate activity: It is built into the work process.

Figure 2: Plan-do-check-act example
Plan. The A+ Approach begins with a “plan” step called “analyze.” In this step, students’ needs are analyzed by examining a range of data available in Pearl River’s electronic data “warehouse,” from grades to performance on standardized tests. Data can be analyzed for individual students or stratified by grade, gender or any other subgroup. Because PDCA does not specify how to analyze data, a separate data analysis process (Figure 3) is used here as well as in other processes throughout the organization.
Figure 3: Pearl River analysis process
Do. The A+ Approach continues with two “do” steps:
  1. “Align” asks what national and state standards require and how they will be assessed. Teaching staff also plans curriculum by looking at what is taught at earlier and later grade levels and in other disciplines to assure a clear continuity of instruction throughout the student’s schooling.

    Teachers develop individual goals to improve their instruction where the “analyze” step showed any gaps. 
  2. The second “do” step is, in this example, called “act.” This is where instruction is actually provided, following the curriculum and teaching goals. Within set parameters, teachers vary the delivery of instruction based on each student’s learning rate and style.
Check. The “check” step is called “assess” in this example. Formal and informal assessments take place continually, from daily teacher “dipstick” assessments to every-six-weeks progress reports to annual standardized tests. Teachers also can access comparative data on the electronic database to identify trends. High-need students are monitored by a special child study team.
Throughout the school year, if assessments show students are not learning as expected, mid-course corrections are made, such as re-instruction, changing teaching methods, and more direct teacher mentoring. Assessment data become input for the next step in the cycle.
Act. In this example the “act” step is called “standardize.” When goals are met, the curriculum design and teaching methods are considered standardized. Teachers share best practices in formal and informal settings. Results from this cycle become input for the “analyze” phase of the next A+ cycle.
Excerpted from Nancy R. Tague’s The Quality Toolbox, Second Edition, ASQ Quality Press, 2004, pages 390-392.

The Seven Basic Tools of Quality

Introduction

In 1950, the Japanese Union of Scientists and Engineers (JUSE) invited the legendary quality guru W. Edwards Deming to Japan to train hundreds of Japanese engineers, managers and scholars in statistical process control. Deming also delivered a series of lectures to Japanese business managers on the subject, in which he emphasised the importance of what he called the "basic tools" available for use in quality control.
One of the members of the JUSE was Kaoru Ishikawa, at the time an associate professor at the University of Tokyo. Ishikawa wanted to 'democratise quality': that is, to make quality control comprehensible to all workers. Inspired by Deming's lectures, he formalised the Seven Basic Tools of Quality Control.
Ishikawa believed that 90% of a company's problems could be improved using these seven tools, and that, with the exception of Control Charts, they could easily be taught to any member of the organisation. This ease of use, combined with their graphical nature, makes statistical analysis accessible to all.
The seven tools are:
  • Cause and Effect Diagrams
  • Pareto Charts
  • Flow Charts
  • Check Sheets
  • Scatter Plots
  • Control (Run) Charts
  • Histograms
What follows is a brief overview of each tool.

Cause and Effect Diagrams

Also known as Ishikawa and Fishbone Diagrams
First used by Ishikawa in the 1940s, they are employed to identify the possible causes of a problem or "effect" as a means of finding its root cause. The structured nature of the method forces the user to consider all the likely causes of a problem, not just the obvious ones, by combining brainstorming techniques with graphical analysis. It is also useful in unravelling the convoluted relationships that may, in combination, drive the problem.
The basic Cause and Effect Diagram places the effect at one end. The causes feeding into it are then identified, via brainstorming, by working backwards along the "spines" (sometimes referred to as "vertebrae"), as in the diagram below:
For more complex process problems, the spines can be allocated a category and the causes/inputs of each then identified. There are several standard sets of categorisations that can be used, but the most common is Material, Machine/Plant, Measurement/Policies, Methods/Procedures, Men/People and Environment, easily remembered as the "5 M's and an E", as shown below:
Each spine can then be further sub-divided, as necessary, until all the inputs are identified. The diagram is then used to highlight the causes most likely to be contributing to the problem/effect, and these can be investigated for inefficiencies and opportunities for optimisation.
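To show how the structure might be captured once the brainstorming is done, here is a small Python sketch that records a diagram as plain data; the effect, spines and causes are all invented for illustration.

    # A hypothetical fishbone recorded as a nested dictionary: each key
    # under "spines" is one of the "5 M's and an E", and each list holds
    # the brainstormed causes feeding that spine. All entries are invented.
    fishbone = {
        "effect": "Late deliveries",
        "spines": {
            "Material":    ["Supplier stock-outs", "Damaged packaging"],
            "Machine":     ["Conveyor downtime"],
            "Measurement": ["Inaccurate order counts"],
            "Methods":     ["No standard picking route"],
            "People":      ["Understaffed night shift"],
            "Environment": ["Seasonal demand spikes"],
        },
    }

    # Walk the spines to list every candidate cause for investigation.
    for spine, causes in fishbone["spines"].items():
        for cause in causes:
            print(f"{spine}: {cause}")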

Control (Run) Charts

Dating back to the work of Shewhart and Deming, there are several types of Control Chart. They are reasonably complex statistical tools that measure how a process changes over time. By plotting this data against pre-defined upper and lower control limits, it can be determined whether the process is consistent and under control, or if it is unpredictable and therefore out of control.
The type of chart to use depends upon the type of data to be measured, i.e. whether it is attribute or variable data. The most frequently used Control Chart is the Run Chart, which is suitable for both types. Run Charts are useful in identifying trends in data over long periods of time, and thus in identifying variation.
Data is collected and plotted over time with the upper and lower limits set (from past performance or statistical analysis), and the average identified, as in the diagram below.
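As a rough illustration of the mechanics, the sketch below sets the limits at the mean plus or minus three standard deviations, one common convention, and flags points that fall outside them; all the measurements are invented.

    import statistics

    # Control limits derived from past performance (mean +/- 3 standard
    # deviations, a common convention); the sample data is invented.
    history = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    ucl = mean + 3 * sigma  # upper control limit
    lcl = mean - 3 * sigma  # lower control limit

    # Flag new observations that fall outside the limits.
    for i, x in enumerate([10.0, 10.4, 11.9, 9.9], start=1):
        status = "OUT OF CONTROL" if (x > ucl or x < lcl) else "in control"
        print(f"point {i}: {x:.1f} -> {status}")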

Pareto Charts

Based upon the Pareto Principle, which states that 80% of a problem is attributable to 20% of its causes or inputs, a Pareto Chart organises and displays information in order to show the relative importance of various problems or causes of problems. It is a vertical bar chart with items organised in order from the highest to the lowest, relative to a measurable effect, e.g. frequency, cost or time.
A Pareto Chart makes it easier to identify where the greatest possible improvement gains can be achieved. By showing the highest incidences or frequencies first and relating them to the overall percentage for the samples, it highlights what is known as the "vital few". Factors are then prioritized, and effort focused upon them.
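The sketch below orders a set of invented defect counts and accumulates their percentages, marking the categories that together account for roughly 80% of the total, i.e. the "vital few".

    # Ordering defect counts as a Pareto chart would, highest first,
    # and accumulating percentages to find the "vital few". The counts
    # are invented for illustration.
    defects = {"Scratches": 95, "Dents": 60, "Misalignment": 20,
               "Discoloration": 15, "Other": 10}

    total = sum(defects.values())
    cumulative = 0.0
    for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += 100.0 * count / total
        marker = "  <- vital few" if cumulative <= 80.0 else ""
        print(f"{cause:<14}{count:>4}  cumulative {cumulative:5.1f}%{marker}")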

Scatter Diagrams

A Scatter Diagram, or Chart, is used to identify whether there is a relationship between two variables. It does not prove that one variable directly affects the other, but is highly effective in confirming that a relationship exists between the two.
It is a graphical rather than a statistical tool. Points are plotted on a graph with the two variables as the axes. If the points form a narrow "cloud", then there is a direct correlation. If there is no discernible pattern, or a wide spread, then there is little or no correlation.
If one variable increases as the other increases (the cloud extends at roughly 45 degrees from the point where the x and y axes cross), the variables are said to be positively correlated. If one variable decreases as the other increases, they are said to be negatively correlated. These are linear correlations; variables may also be non-linearly correlated.
Below is an example of a Scatter Diagram where the two variables have a positive linear correlation.
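Where a numerical summary of the cloud is wanted, Pearson's correlation coefficient can be computed directly, as in the short sketch below; the paired data is invented, and the coefficient runs from -1 (strong negative) through 0 (none) to +1 (strong positive).

    # Pearson's correlation coefficient computed from first principles;
    # the paired data below is invented for illustration.
    def pearson(x, y):
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
        sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
        return cov / (sd_x * sd_y)

    hours_training = [1, 2, 3, 4, 5, 6]
    quality_score  = [52, 58, 65, 68, 77, 81]
    print(f"r = {pearson(hours_training, quality_score):.2f}")  # near +1: positive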

Histograms

Like Pareto Charts, Histograms are a form of bar chart. They are used to measure the frequency distribution of data that is commonly grouped together in ranges or "bins". Most commonly they are used to discern frequency of occurrence in long lists of data. For instance, in the list 2, 2, 3, 3, 3, 3, 4, 4, 5, 6, the number 3 occurs the most frequently. However, if that list comprises several hundred data points, or more, it would be difficult to ascertain the frequency. Histograms provide an effective visual means of doing so.
"Bins" are used when the data is spread over a wide range. For example, in the list 3, 5, 9, 12, 14, 17, 20, 24, 29, 31, 45, 49, instead of looking for the occurrence of each number from 1 to 49, which would be meaningless, it is more useful to group them such that the frequency of occurrence of the ranges 1-10, 11-20, 21-30, 31-40 and 41-50 are measured. These are called bins.
Histograms are very useful in discerning the distribution of data and therefore patterns of variation. They monitor the performance of a system and present it in a graphical way which is far easier to understand and read than a table of data. Once a problem has been identified, they can then also be used to check that the solution has worked.
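The binning just described can be reproduced in a few lines; the sketch below counts the same twelve values into ranges of ten and prints a crude text histogram.

    # Counting the example values into bins of ten (1-10, 11-20, ... 41-50)
    # and printing a crude text histogram of the frequencies.
    data = [3, 5, 9, 12, 14, 17, 20, 24, 29, 31, 45, 49]
    bins = {(lo, lo + 9): 0 for lo in range(1, 50, 10)}

    for value in data:
        for (lo, hi) in bins:
            if lo <= value <= hi:
                bins[(lo, hi)] += 1
                break

    for (lo, hi), count in bins.items():
        print(f"{lo:2d}-{hi:2d}: {'#' * count}  ({count})")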

Flow Charts

A flow chart is a visual representation of a process. It is not statistical, but is used to piece together the actual process as it is carried out, which quite often varies from how the process owner imagines it is. Seeing it visually makes identifying both inefficiencies and potential improvements easier.
A series of shapes are used to depict every step of the process; mental decisions are captured as well as physical actions and activities. Arrows depict the movement through the process. Flow charts vary in complexity, but when used properly can prove useful for identifying non-value-adding or redundant steps, the key parts of a process, as well as the interfaces between other processes.
Problems with flow charts occur when the desired process is depicted instead of the actual one. For this reason, it is better to brainstorm the process with a group to make sure everything is captured.
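A process can also be captured as data before it is drawn. The sketch below records a small, entirely hypothetical order-handling process, with one decision step, and walks a single path through it.

    # A hypothetical process recorded as data: ordinary steps name their
    # successor under "next"; decision steps branch under "yes"/"no".
    process = {
        "Receive order":    {"next": "Stock available?"},
        "Stock available?": {"yes": "Pick items", "no": "Back-order"},
        "Pick items":       {"next": "Ship"},
        "Back-order":       {"next": "Ship"},
        "Ship":             {"next": None},
    }

    # Walk one path through the process, answering "yes" at the decision.
    step = "Receive order"
    while step is not None:
        print(step)
        branches = process[step]
        step = branches.get("next", branches.get("yes"))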

An example of a Flow Chart

Check Sheets

Also known as Data Collection Sheets and Tally Charts
Like flow charts, check sheets are non-statistical and relatively simple. They are used to capture data in a manual, reliable, formalised way so that decisions can be made based on facts. As the data is collected, it becomes a graphical representation of itself. Areas for improvement can then be identified, either directly from the check sheet, or by feeding the data into one of the other seven basic tools.
Simply, a table is designed to capture the incidences of the variable(s) to be measured. Tick marks are then manually put in the relevant boxes. As the ticks build up, they give a graphical representation of the frequency of incidences. Below is a typical example.
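In code, the same tallying can be done with a counter, as in the sketch below; the defect names and observations are invented.

    from collections import Counter

    # Each observation adds one tick to its category, exactly as a
    # manual check sheet would; data is invented for illustration.
    observations = ["Scratch", "Dent", "Scratch", "Stain", "Scratch", "Dent"]
    tally = Counter(observations)

    for defect, ticks in tally.most_common():
        print(f"{defect:<8} {'|' * ticks}  ({ticks})")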

Summary

The seven basic tools of quality can be used singly or in combination to investigate a process and identify areas for improvement, although they do not all necessarily need to be used. If a process is simple enough, or the solution obvious enough, any one of them may be all that is needed. They provide a means of improving based on facts rather than personal knowledge alone, which can of course be biased or inaccurate. Ishikawa advocated teaching these seven basic tools to every member of a company as a means of making quality endemic throughout the organisation.

References
Ishikawa, Kaoru. Guide to Quality Control. Kraus International Publications, White Plains, New York, 1982.
Tague, Nancy R. The Quality Toolbox, Second Edition, ASQ Quality Press, 2004.


Thursday, 16 September 2010

Balanced Scorecard Basics

The balanced scorecard is a strategic planning and management system that is used extensively in business and industry, government, and nonprofit organizations worldwide to align business activities to the vision and strategy of the organization, improve internal and external communications, and monitor organization performance against strategic goals. It was originated by Drs. Robert Kaplan (Harvard Business School) and David Norton as a performance measurement framework that added strategic non-financial performance measures to traditional financial metrics to give managers and executives a more 'balanced' view of organizational performance. While the phrase balanced scorecard was coined in the early 1990s, the roots of this type of approach are deep, and include the pioneering work of General Electric on performance measurement reporting in the 1950s and the work of French process engineers (who created the Tableau de Bord, literally a "dashboard" of performance measures) in the early part of the 20th century.
The balanced scorecard has evolved from its early use as a simple performance measurement framework to a full strategic planning and management system. The “new” balanced scorecard transforms an organization’s strategic plan from an attractive but passive document into the "marching orders" for the organization on a daily basis. It offers a framework that not only provides performance measurements but also helps planners identify what should be done and measured. It enables executives to truly execute their strategies.
This new approach to strategic management was first detailed in a series of articles and books by Drs. Kaplan and Norton. Recognizing some of the weaknesses and vagueness of previous management approaches, the balanced scorecard approach provides a clear prescription as to what companies should measure in order to 'balance' the financial perspective. The balanced scorecard is a management system (not only a measurement system) that enables organizations to clarify their vision and strategy and translate them into action. It provides feedback around both the internal business processes and external outcomes in order to continuously improve strategic performance and results. When fully deployed, the balanced scorecard transforms strategic planning from an academic exercise into the nerve center of an enterprise.
Kaplan and Norton describe the innovation of the balanced scorecard as follows:
"The balanced scorecard retains traditional financial measures. But financial measures tell the story of past events, an adequate story for industrial age companies for which investments in long-term capabilities and customer relationships were not critical for success. These financial measures are inadequate, however, for guiding and evaluating the journey that information age companies must make to create future value through investment in customers, suppliers, employees, processes, technology, and innovation." 

Figure: The balanced scorecard
Adapted from Robert S. Kaplan and David P. Norton, “Using the Balanced Scorecard as a Strategic Management System,” Harvard Business Review (January-February 1996): 76.

Perspectives

The balanced scorecard suggests that we view the organization from four perspectives, and that we develop metrics, collect data, and analyze the data relative to each of these perspectives:
The Learning & Growth Perspective
This perspective includes employee training and corporate cultural attitudes related to both individual and corporate self-improvement. In a knowledge-worker organization, people, the only repository of knowledge, are the main resource. In the current climate of rapid technological change, it is becoming necessary for knowledge workers to be in a continuous learning mode. Metrics can be put into place to guide managers in focusing training funds where they can help the most. In any case, learning and growth constitute the essential foundation for success of any knowledge-worker organization.
Kaplan and Norton emphasize that 'learning' is more than 'training'; it also includes things like mentors and tutors within the organization, as well as that ease of communication among workers that allows them to readily get help on a problem when it is needed. It also includes technological tools; what the Baldrige criteria call "high performance work systems."
The Business Process Perspective
This perspective refers to internal business processes. Metrics based on this perspective allow the managers to know how well their business is running, and whether its products and services conform to customer requirements (the mission). These metrics have to be carefully designed by those who know these processes most intimately; with our unique missions these are not something that can be developed by outside consultants.
The Customer Perspective
Recent management philosophy has shown an increasing realization of the importance of customer focus and customer satisfaction in any business. These are leading indicators: if customers are not satisfied, they will eventually find other suppliers that will meet their needs. Poor performance from this perspective is thus a leading indicator of future decline, even though the current financial picture may look good.
In developing metrics for satisfaction, customers should be analyzed in terms of kinds of customers and the kinds of processes for which we are providing a product or service to those customer groups.
The Financial Perspective
Kaplan and Norton do not disregard the traditional need for financial data. Timely and accurate funding data will always be a priority, and managers will do whatever is necessary to provide it. In fact, there is often more than enough handling and processing of financial data. With the implementation of a corporate database, it is hoped that more of the processing can be centralized and automated. But the point is that the current emphasis on financials leads to an "unbalanced" situation with regard to the other perspectives.
There is perhaps a need to include additional financial-related data, such as risk assessment and cost-benefit data, in this category.
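As one possible way to hold the four perspectives side by side, the sketch below records a scorecard as a small data structure and flags measures that miss their targets; every measure name and number is hypothetical.

    from dataclasses import dataclass

    # A minimal scorecard as data: one measure per perspective, each
    # with a target and an actual value. All names/numbers are invented.
    @dataclass
    class Measure:
        perspective: str
        name: str
        target: float
        actual: float

        def on_track(self) -> bool:
            return self.actual >= self.target

    scorecard = [
        Measure("Financial",         "Revenue growth (%)",       8.0,  9.1),
        Measure("Customer",          "Satisfaction score",      90.0, 86.5),
        Measure("Business Process",  "On-time delivery (%)",    95.0, 97.2),
        Measure("Learning & Growth", "Training hours/employee", 20.0, 14.0),
    ]

    for m in scorecard:
        flag = "on track" if m.on_track() else "needs attention"
        print(f"{m.perspective:<17} {m.name:<25} {flag}")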

Strategy Mapping

Strategy maps are communication tools used to tell a story of how value is created for the organization. They show a logical, step-by-step connection between strategic objectives (shown as ovals on the map) in the form of a cause-and-effect chain. Generally speaking, improving performance in the objectives found in the Learning & Growth perspective (the bottom row) enables the organization to improve its Internal Process perspective objectives (the next row up), which in turn enables the organization to create desirable results in the Customer and Financial perspectives (the top two rows).

Balanced Scorecard Software

The balanced scorecard is not a piece of software. Unfortunately, many people believe that implementing software amounts to implementing a balanced scorecard. Once a scorecard has been developed and implemented, however, performance management software can be used to get the right performance information to the right people at the right time. Automation adds structure and discipline to implementing the Balanced Scorecard system, helps transform disparate corporate data into information and knowledge, and helps communicate performance information. The Balanced Scorecard Institute formally recommends the QuickScore Performance Information System™ developed by Spider Strategies and co-marketed by the Institute.