Decoding Operations Improvement

I like to tell people that I specialise in Operations Improvement.

Some people just nod and think, “improving operations, must be a good thing.”

Others will nod and secretly think, “wishy washy term that doesn’t mean anything!”

Others may want more information and may question me: “What type of operations improvement? What’s your flavour? Six Sigma, Lean, Operations Research or Total Quality Control?” Or they may just ask “How do you know that what you are doing is working?”

So, what do I mean exactly? And how do I know if I’m doing any good?

Well, the first thing to say is that Operations Improvement focuses on improving the output of a process or operation without major capital expenditure. That is, without building more production lines, increasing equipment size or physically rebuilding processes. We focus on improving and enabling the people, procedures, control, automation and the systems they work in. That is not to say that we never recommend spending capital or upgrading equipment; it may be that the existing equipment has reached its end of life, or that previous operations improvement has resulted in an equipment bottleneck. It is just not our first avenue for solutions. We can often get better bang for our buck just by improving the way we utilise our existing plant.

 

Where did it come from?

Operations Improvement is the term that is used in my experience working in Australia but it actually doesn’t return much when you Google it.

What I recognise as the field of Operations Improvement came out of the fields of Operations Management and Scientific Management.

These terms are almost synonymous, but the American Frederick Taylor coined the term Scientific Management in the very early 20th century, publishing his book “The Principles of Scientific Management” in 1911. “Frederick W. Taylor was the first man in recorded history who deemed work deserving of systematic observation and study” (Drucker). A man of his time, his attitude to the actual worker would be considered quite backward now: “the man who is … physically able to handle pig-iron and is sufficiently phlegmatic and stupid to choose this for his occupation is rarely able to comprehend the science of handling pig-iron” (Taylor, p. 59). Taylor focused on there being one best way of doing something, finding it, and enforcing it as standard practice.

 

“Frederick W. Taylor was the first man in recorded history who deemed work deserving of systematic observation and study” (Drucker)

 

After Taylor, another important figure was Walter Shewhart, who invented Control Charts and Statistical Process Control (SPC) in 1924. These tools and their principles are still used to this day, with the term Statistical Quality Control (SQC) used virtually interchangeably with SPC. Then, after the Second World War, another American, W. Edwards Deming, was sent to Japan by the U.S. Army to help Japanese industry get back on its feet during the rebuilding period. He gave lectures to key figures and consulted for major companies. He stressed the importance of managers having a system of profound knowledge, consisting of four parts:

  1.    Appreciation of a system: understanding the overall processes involving suppliers, producers, and customers (or recipients) of goods and services;
  2.    Knowledge of variation: the range and causes of variation in quality, and use of statistical sampling in measurements;
  3.    Theory of knowledge: the concepts explaining knowledge and the limits of what can be known;
  4.    Knowledge of psychology: concepts of human nature.

Out of these he developed his 14 points for management (see inset). Later, he summarised his life’s work up until that point and is credited with helping launch the “Total Quality Control” movement. His lectures strongly influenced the Japanese, including Ishikawa (of fishbone diagram fame), who was, in turn, later influential back in America. Closely related to Shewhart and Deming is the emergence of the Continuous Improvement philosophy and the Plan-Do-Check/Study-Act cycle (Deming argued there was an important difference between “Check” and “Study”). It is out of Deming’s work, and that of the others I’ve mentioned, that the currently practised methodologies of Lean and Six Sigma emerged.
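To make Shewhart’s control chart idea concrete, here is a minimal sketch in Python. The throughput figures are invented for illustration, and the chart is simplified: a real individuals chart would typically estimate sigma from the moving range rather than the plain sample standard deviation.

```python
import statistics

def control_limits(samples, sigmas=3):
    """Shewhart-style limits: centre line +/- 3 sigma.

    Simplified: uses the sample standard deviation as the sigma
    estimate, rather than the moving-range estimate used in
    standard SPC practice.
    """
    centre = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return centre - sigmas * sigma, centre, centre + sigmas * sigma

def out_of_control(samples, lcl, ucl):
    """Return indices of points falling outside the control limits."""
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Hypothetical hourly throughput readings (tonnes per hour)
baseline = [98, 101, 99, 100, 102, 97, 100, 101, 99, 103]
lcl, cl, ucl = control_limits(baseline)

# A new reading well above the baseline pattern gets flagged as a
# likely "special cause" worth investigating, rather than routine noise.
new_readings = baseline + [115]
flagged = out_of_control(new_readings, lcl, ucl)
```

The point of the chart is exactly Deming’s “knowledge of variation”: it separates routine common-cause scatter (inside the limits) from special causes (outside them) that warrant investigation.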

 

Lean emerged at Toyota as the Toyota Production System, relying heavily on Deming’s principles. And they continue to pay homage to Deming there: “There is not a day I don’t think about what Dr. Deming meant to us. Deming is the core of our management.” (Shoichiro Toyoda).

Six Sigma was developed by Bill Smith and Mikel J. Harry at Motorola in 1986. Jack Welch then made it central to General Electric’s business strategy in 1995.

In general, Lean focuses on the reduction of waste and Six Sigma focuses on the reduction of variability. Both encompass many tools, along with key methods for executing projects that achieve those goals and for structuring the business so the concepts are integrated into everything it does. In recent years, Six Sigma has sought to incorporate Lean into the Lean Six Sigma philosophy.

Proponents of Lean would argue that the Lean methodology already encompasses a goal of reducing variation, via ‘defects’ being one of its seven wastes.

Another field that has something to offer is Operations Research, also called Management Science or Decision Science. Operations Research is defined as the “discipline that deals with the application of advanced analytical methods to help make better decisions.” There are many models, requiring various levels of mathematical proficiency, that address categories of problems such as transportation (freight and delivery systems), scheduling and critical path analysis (applied in Microsoft Project), and decision analysis. I don’t pretend to be an expert on (or even proficient in) all aspects of Operations Research, but I have studied and practised a very useful subset championed by Douglas Hubbard. He calls it Applied Information Economics (AIE) and describes it in his book, “How to Measure Anything: Finding the Value of Intangibles in Business”.

 

What is the Best Methodology?

Here is where I’m not going to give you an answer.

Critics of Lean, Total Quality Management or Six Sigma will usually point to failings in the other methodologies, using examples of implementations that haven’t worked. A lot of these criticisms boil down to an incomplete implementation: managers grab one or two tools, use them on the shop floor, and ignore (or don’t even understand) the philosophy’s prescription for their own management activities. So, a manager must understand the philosophy and practise it themselves for it to be effective.

Deming: “It is not enough that top management commit themselves for life to quality and productivity. They must know what it is that they are committed to – that is, what they must do. These obligations cannot be delegated. Support is not enough; action is required.”

A similar manifestation of this issue is letting a specialist team do the 6σ or Lean project but then giving the critical tasks to another group who are free to ignore the philosophy or make decisions without evidence. This sends the signal that the Operations Improvement method is OK for cute little projects but real work gets done with old-fashioned grunt, and 6σ (for instance) will only get in the way. So implementation is one issue, but which philosophy has the best set of tools?

From that perspective, Lean proponents have the best idea, in my opinion. That is because Lean is not a static set of tools: if someone develops research and practice that makes sense, Lean practitioners will incorporate it into their toolkit rather than treat it as something separate. One such concept is the Theory of Constraints (TOC), developed by Eli Goldratt. While many people debate whether Lean or TOC is better, forward-thinking proponents of Lean just say: let’s use TOC as one of the techniques in our Lean toolkit. Even Goldratt has said: “TOC tells you where to look and what to change, while lean tells you how to change.”

One thing to note: while I’ve attempted to paint the picture that most of these philosophies and methodologies do not clash or contradict, they do on some points. One that stands out is Deming’s call, in his 14 points, to eliminate management by objectives, which conflicts with Hubbard’s goal of being able to measure everything. In practice, though, you don’t have to throw one philosophy out completely in order to use methods from another.

 

How do we know if it works?

There is nothing worse than an improvement expert coming in and claiming to have improved something or made a process more efficient, when the end user or process owner knows that it is either not true, or that some other important, interacting measure has been negatively affected and not taken into account. One particular example I remember: a plant I worked at in a tropical region had trouble with overly wet feed stock during the tropical wet season, so we normally built a large stockpile before the wet season to reduce the surface area exposed to the rains.

An operations/business improvement initiative was created with the sole objective of reducing the inventory carrying cost, without considering any other parameters. The project was “completed” in the months leading up to the wet season, with savings proudly proclaimed. Then, when the wet season arrived, wet ore made drying a bottleneck; at times the ore was so wet that it wouldn’t grip the conveyor belt and would slide back down. The solution was a bust, but the official project benefits were never revised. The loss in throughput was put down to an “exceptionally” heavy wet season (even though it was well within the expected range of possibilities, and the long-range forecast had predicted a heavier-than-average season).

The example above is Operations Improvement virtually at its worst: considering only a single target parameter without thinking about risks or adverse outcomes, and not consulting widely with subject matter experts or even bothering to review the reports that led to the previous strategy. Best practice is to monitor and measure the process before and after the change. It is also to conduct a risk assessment to see what could be adversely impacted by the change, and to monitor each potential risk outcome for any delayed effect. Better still is to hold a trial or a period of experimentation, where as many other factors as feasible are held constant and people are aware that the new practice is being evaluated.

The option of a trial, and how to evaluate it, is covered extensively in Hubbard’s Applied Information Economics (and his book “How to Measure Anything”) and also in the Six Sigma Hypothesis Roadmap and Design of Experiments. These methods share some of the same underlying statistical principles and will tell you to what degree of certainty your change or trial has been successful. You don’t have to have a master’s in statistics, but you do have to know which particular statistical test to apply to your problem. A software package like Minitab can help you do the analysis efficiently, once you understand which test to complete.
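As a sketch of the kind of before-and-after comparison those methods prescribe, here is Welch’s two-sample t-test in plain Python. The throughput figures are invented for illustration; a full analysis would convert the statistic into a p-value using the t distribution (which Minitab does for you), but the statistic and degrees of freedom alone already show whether a difference is large relative to the noise.

```python
import math
import statistics

def welch_t(before, after):
    """Welch's two-sample t statistic and approximate degrees of
    freedom (Welch-Satterthwaite equation).

    Rough rule of thumb: with reasonable sample sizes, |t| greater
    than about 2 suggests the difference is unlikely to be chance at
    the 5% level; a proper test looks up the p-value for (t, df).
    """
    m1, m2 = statistics.mean(before), statistics.mean(after)
    v1, v2 = statistics.variance(before), statistics.variance(after)
    n1, n2 = len(before), len(after)
    se2 = v1 / n1 + v2 / n2          # squared standard error of the difference
    t = (m2 - m1) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical daily throughput before and after a process change
before = [96, 98, 97, 99, 95, 97, 98, 96]
after = [101, 103, 100, 102, 104, 101, 102, 103]
t, df = welch_t(before, after)
```

With these invented figures, the after-period mean is several standard errors above the before-period mean, so the improvement is very unlikely to be an artefact of day-to-day variation; had t landed near zero, the “savings” would be indistinguishable from noise.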

 

Stay Tuned

So there’s a little bit of background, a bit of the history of this field, plus a few anecdotes about how not to implement philosophies and solutions. In my next post (because I may have reached the end of your attention span), I’ll give a flavour of the approaches we use where I work and run through a few examples of applying a particular philosophy to good effect.

 

Rob Lyon

Senior Process Engineer

Key Engineering Solutions