Change Management Using PDCA – Understanding the Impact of Underperforming Processes | Operational Excellence Quick Hits
Quick Hits share weekly tips and techniques on topics related to Operational Excellence. This week’s theme relates to underperforming processes. We hope you enjoy the information presented!
Speaker 2: (00:05)
In this session, we’re going to talk about the issues of underperforming processes, and specifically processes that are dealing with speed loss. We’re working on a process and it’s running at a speed less than standard, so what do we do in those cases, and how do we prevent these things from happening?
Speaker 2: (00:28)
Again, of all our issues in terms of quality, performance, and availability, this section is specifically looking at speed loss. And when we talk about speed loss, we're looking at a standard that requires running at a certain speed, but the process won't hold the specified tolerances at that speed. So one of the reasons for speed loss is that if I run at standard speed, the product goes out of specification. The process isn't capable when it's running at the standard speed.
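One common way to quantify "the process isn't capable at standard speed" is a capability index such as Cpk. Here is a minimal sketch; the tolerance limits, sample measurements, and the 1.33 threshold are illustrative assumptions, not values from this session.

```python
# Sketch: check whether a process is capable at standard speed using Cpk.
# Tolerance limits and sample data below are hypothetical.
from statistics import mean, stdev

def cpk(samples, lsl, usl):
    """Capability index: distance from the mean to the nearest spec
    limit, in units of three standard deviations."""
    mu, sigma = mean(samples), stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical part dimensions (mm) collected while running at standard speed.
at_standard_speed = [10.02, 9.97, 10.11, 9.88, 10.06, 9.93, 10.09, 9.95]
LSL, USL = 9.80, 10.20  # assumed lower/upper specification limits

index = cpk(at_standard_speed, LSL, USL)
print(f"Cpk at standard speed: {index:.2f}")
if index < 1.33:  # a common rule-of-thumb minimum for a capable process
    print("Process not capable at standard speed -> expect speed loss")
```

A Cpk below the target tells you the speed loss is a capability problem, not an operator problem, so the countermeasure is process improvement rather than simply mandating standard speed.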
Speaker 2: (00:59)
Next is equipment wear that contributes to abnormalities when we run at optimal speed. So if the equipment is worn out, when we try to run at standard speed, again we get issues with the process. And then also, we've seen cases where operators slow down because of past issues: "Oh, if I run it at optimal speed, then I run into these issues." Whether those issues are still relevant or not is another thing, but a lot of times we see behaviors driven by past experience.
Speaker 2: (01:29)
So the past experience is like, "Oh, when I do these activities, these are the results." It's cause-and-effect thinking that the operators have formed from issues in the past that might not be relevant currently. How do we deal with these issues? One of the best ways to deal with any of these abnormalities, unplanned downtime, speed loss, or minor stoppages, is to have an active signaling system such as an Andon. An Andon is a system that notifies management, maintenance, or other workers of a quality or processing problem.
Speaker 2: (02:07)
Our objective is to prevent processing issues that arise from the abnormalities of unplanned downtime, speed loss, or minor stoppages. Typically, it's a lighting system, and you can program the lights to act as different signals. So you can have green, meaning the process is operating normally; a blinking green might mean I'm running at a rate less than standard; a yellow might come on when there are minor stoppages; and you can program a red for unplanned downtime.
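The light logic described above can be sketched as a simple mapping from process conditions to Andon states. The condition names and the example rates are illustrative assumptions; a real Andon would be driven by PLC or sensor signals.

```python
# Sketch of the Andon signal logic: map process conditions to light states
# so anyone on the floor can read the line at a glance.
from enum import Enum

class Andon(Enum):
    GREEN = "operating normally"
    BLINKING_GREEN = "running below standard rate"
    YELLOW = "minor stoppage"
    RED = "unplanned downtime"

def andon_state(running: bool, rate: float, standard_rate: float,
                minor_stoppage: bool) -> Andon:
    """Pick the most severe applicable signal first."""
    if not running:
        return Andon.RED
    if minor_stoppage:
        return Andon.YELLOW
    if rate < standard_rate:
        return Andon.BLINKING_GREEN
    return Andon.GREEN

# Example: line is up but running at 85 units/hr against a 100 units/hr standard.
print(andon_state(running=True, rate=85, standard_rate=100,
                  minor_stoppage=False))  # Andon.BLINKING_GREEN
```

Ordering the checks from most to least severe guarantees the light never understates the problem when multiple conditions overlap.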
Speaker 2: (02:40)
So at any point in time, I can go out and see: is the process operating normally? If not, and one of the lights is on, that gives me a signal of what the problem is, and I can call the appropriate resources to deal with that issue. Again, we want to reduce the management window here. The management window is the time from when the issue occurs until we take active measures, and we want to reduce it as much as possible. These simple techniques will significantly improve uptime by fixing issues faster and getting the process back up to speed and operating normally.
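Before you can reduce the management window, you have to measure it. A minimal sketch, assuming you can timestamp the Andon event and the start of the response (the timestamps below are hypothetical):

```python
# Sketch: measure the "management window" -- the time from when an issue
# occurs until active measures begin. Event timestamps are hypothetical.
from datetime import datetime, timedelta

def management_window(issue_occurred: datetime,
                      response_started: datetime) -> timedelta:
    """Elapsed time between the abnormality and the active response."""
    return response_started - issue_occurred

# Hypothetical Andon events: red light at 09:14, maintenance on site at 09:26.
occurred = datetime(2024, 5, 6, 9, 14)
responded = datetime(2024, 5, 6, 9, 26)
window = management_window(occurred, responded)
print(f"Management window: {window.total_seconds() / 60:.0f} minutes")
```

Tracking this value per event lets you see whether the Andon system is actually shortening response times over weeks of operation.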
Speaker 2: (03:19)
Again, when we're looking at all these issues, what we've got to do is apply the Pareto Principle, looking at the reasons for the different losses. Of course, we've got to be able to capture that data, and the data needs to be accurate. So we want to make sure we have a data collection system that's giving us accurate data, so we can use it to understand which 20% of the loss categories are producing 80% of the losses. Then we can focus our efforts, get to the root cause of those, put permanent countermeasures in place, and prevent those problems from reoccurring.
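The Pareto step above can be sketched in a few lines: rank the loss categories by lost time and walk down the list until roughly 80% of the total is covered. The category names and minutes here are hypothetical examples of collected downtime data, not figures from this session.

```python
# Sketch of a Pareto analysis on loss data: find the vital few loss
# categories that account for ~80% of total lost time.
# Categories and minutes are hypothetical.
loss_minutes = {
    "speed loss": 420,
    "minor stoppages": 310,
    "changeover": 150,
    "material jams": 60,
    "sensor faults": 40,
    "operator breaks": 20,
}

total = sum(loss_minutes.values())
ranked = sorted(loss_minutes.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
vital_few = []
for category, minutes in ranked:
    cumulative += minutes
    vital_few.append(category)
    if cumulative / total >= 0.80:
        break  # these categories cover ~80% of the losses

print("Focus root-cause efforts on:", vital_few)
```

The point of the exercise is prioritization: root-cause analysis and permanent countermeasures go to the vital few categories first, not to every loss reason at once.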