- EnerSys Corporation is an oil & gas software and services company focused on delivering operational excellence for oil & gas pipeline control rooms. EnerSys is the provider of POEMS (Pipeline Operations Excellence Management System), compliance, and operation software for the pipeline control center.
This week’s Pipeliners Podcast episode features pipeline control room expert Charles Alday discussing the important elements of Workload Analysis and how to perform a workload assessment.
In this episode, you will learn about the various methods used to analyze workload and fatigue, how to set benchmarks and use data to analyze objective and subjective factors, and how to blend software with human analysis of workload to ensure pipeline safety.
Also, the episode concludes with important information for pipeline operators and control room managers on PHMSA performing audits in 2020. Find out when your operation can expect an audit.
Control Room Workload Analysis: Show Notes, Links, and Insider Terms
- Charles Alday is a principal control room management consultant for Pipeline Performance Group. Connect with Charles on LinkedIn.
- Workload Analysis is a requirement in pipeline control rooms to record and balance controller workload to ensure adequate time and vigilance to respond to alarms, ensuring safe pipeline operations.
- EnerSys, the sponsor of the podcast, offers a Workload Analysis software tool called WLAnalysis that creates a real-time understanding of workload and determines whether controllers have adequate vigilance time.
- The NASA Task Load Method (or NASA Task Load Index – TLX) is a multi-dimensional rating procedure that provides an overall workload score based on a weighted average of ratings on six subscales: Mental Demands, Physical Demands, Temporal Demands, Own Performance, Effort, and Frustration.
- The Karolinska Sleepiness Scale (KSS) measures the subjective level of sleepiness at a particular time during the day. The KSS is a measure of situational sleepiness, and it is sensitive to fluctuations.
- The CRM Rule (Control Room Management Rule as defined by 49 CFR Parts 192 and 195) introduced by PHMSA provides regulations and guidelines for control room managers to safely operate a pipeline. PHMSA’s pipeline safety regulations prescribe safety requirements for controllers, control rooms, and SCADA systems used to remotely monitor and control pipeline operations.
- Situational awareness is the controller’s ability to perceive environmental elements and events, comprehend their meaning, and project their status after a variable has changed.
- Alarm management is the process of managing the alarming system in a pipeline operation by documenting the alarm rationalization process, assisting controller alarm response, and generating alarm reports that comply with the CRM Rule for control room management. [Read about the ALMgr software analysis capabilities offered by EnerSys]
- Alarm rationalization is a component of the Alarm Management process of analyzing configured alarms to determine causes and consequences so that alarm priorities can be determined to adhere to API 1167. Additionally, this information is documented and made available to the controller to improve responses to uncommon alarm conditions.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations.
- Fatigue Mitigation, as outlined by PHMSA, requires operators to implement fatigue mitigation methods to reduce the risk associated with controller fatigue that could inhibit a controller’s ability to carry out the roles and responsibilities the operator has defined.
- OPID or OpID (Operator Identification Number) is assigned by PHMSA to each pipeline operator in their system to perform safety checks and audits.
Control Room Workload Analysis: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 103, sponsored by EnerSys Corporation, provider of POEMS, the Pipeline Operations Excellence Management System, SCADA compliance, and operations software for the pipeline control center. Find out more about POEMS at enersyscorp.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time. To show that appreciation, we are giving away a customized YETI tumbler to one listener each episode. This week our winner is Tyler Wingard with Southeast Gas. Congratulations, Tyler, your YETI is on its way. To learn how you can win this signature prize pack, stick around until the end of the episode.
This week, Charles Alday with Pipeline Performance Group is coming back to talk to us about workload analysis. Charles, welcome back to the Pipeliners Podcast.
Charles Alday: I appreciate the opportunity to be with you again, Russel, it’s been a while.
Russel: Yeah, it has. You were last on Episode 21 when we talked about team training, and we just recently crossed over our hundredth episode, so that means it’s been about a year-and-a-half since I’ve had you on the podcast last. I’m surprised I’ve allowed that much time to pass.
Charles: [laughs] Well, I appreciate you welcoming me back, I’ve looked forward to having a discussion.
Russel: I asked you on to talk about workload assessment, and maybe we’ll just dive in to that, I’ll ask what is workload assessment, and why is it important?
Charles: Workload assessment in the pipeline control room world came about with the control room management regulation, and it became important because companies needed to figure out whether or not controllers had sufficient time to respond to alarms and handle abnormal operations situations while doing all the other tasks required of them.
Russel: Maybe you talk a little about how many of these studies you’ve done, just to give the listeners a little bit of context?
Charles: We’ve done 254, and we’ve got 8 or 10 others in progress right now. When we first came across this requirement of making sure that you monitor the content and volume of the work being directed to controllers, Dr. Michelle Terranova, one of the principals, and I came up with a method where we wanted to measure time and task and use the NASA task load method. Then, compare that to all the different tasks that are being done so that operators would have a measure of whether or not controllers had time to respond to alarms or not.
Russel: The NASA task load method, I’ve heard that talked about but frankly, I’ll be honest with you, I’ve never actually looked at that. What is the NASA task load method?
Charles: Back in the 1960s and 1970s, NASA was trying to figure out the workload of astronauts, aerospace pilots, and jobs like that that they were concerned about. They tried all kinds of methods. They tried probes, and sensors, and everything hooked up to people. That just didn’t seem to work. Finally, one of them said, “Why don’t we just ask them about their workload?”
Russel: [laughs] What a radical idea, right?
Charles: Yeah, I know. They came up with a scale that had six dimensions. The mental demand, physical demand, time demand, effort, how hard they’re having to work, frustration level, and then their performance satisfaction.
It’s been used in all kinds of industries, all over the world, since they came up with the method. We use a 10-point scale for the NASA task load index method. The process we use when we go on-site to do a workload assessment is that we work with the controllers, and we have 10 task categories. Then, we also look at the task load index.
We ask the controllers for every hour of two day shifts and two night shifts to report the amount of time they spend doing discrete tasks, the NASA task load index, and controller alertness levels.
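The scoring arithmetic behind the task load index can be sketched in a few lines of Python. This is a generic illustration of the standard NASA-TLX weighting procedure, not Pipeline Performance Group's actual implementation; the subscale ratings and pairwise-comparison weights below are invented for the example.

```python
# Illustrative NASA-TLX scoring: a weighted average of six subscale ratings.
# Ratings here use a 0-10 scale (the episode mentions a 10-point scale);
# the standard instrument uses 0-100, but the arithmetic is identical.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings: dict, weights: dict) -> float:
    """Weighted TLX workload score.

    ratings -- subscale -> rating on the chosen scale
    weights -- subscale -> number of times that subscale was chosen
               in the 15 pairwise comparisons (weights sum to 15)
    """
    assert sum(weights.values()) == 15, "pairwise weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Hypothetical ratings and weights for one controller-hour.
ratings = {"mental": 7, "physical": 2, "temporal": 6,
           "performance": 3, "effort": 5, "frustration": 4}
weights = {"mental": 4, "physical": 1, "temporal": 4,
           "performance": 2, "effort": 3, "frustration": 1}
print(round(tlx_score(ratings, weights), 2))  # -> 5.27
```

In the full procedure the weights come from asking the rater which of each pair of subscales contributed more to workload, so the score reflects what mattered for that particular job.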
Russel: On the face of it, that sounds kind of abstract.
Charles: All right, so let me break it down a little bit to you.
Russel: Yeah, if you would, that’d be great.
Charles: We have these 10 large task categories. Operations, in other words, active control tasks. The amount of time they spend monitoring when they’re not doing active tasks. For liquids, it’s sampling and proving tasks.
How much time they spend on log sheet and paperwork. How much time they spend on phone and radio calls and whom they’re talking with. How much time they spend in face to face talks. How much time they spend on administrative tasks, all the other stuff that people have to do at companies.
Then we get into the more important areas from a regulatory perspective. How much time do they spend responding to abnormal events. How much time they spend responding to emergency events. Then the 10th category is how much time they get for breaks during the shift.
Russel: Interesting. I guess there’s two aspects of this. It’s kind of helpful, the idea of capturing what they’re doing. But some of the data that you’re gathering here, some of the information, goes to the complexity of the task and the frustration experienced in engaging the task. How do you get to a point where you can analyze that to get to a scientific result?
Charles: Well, we probably should have gotten the scientist, Dr. Terranova, to participate with us; maybe that could be a follow-up podcast. But the way we do it is, like I said, we collect that data from all of the controllers.
All of the controllers participate because one of the ways I interpreted the regulation related to workload is that it’s the workload directed and expected of each controller. So the method that we use, we want every controller to participate.
Once we get this data — every hour from every controller for two day shifts and two night shifts — we’ve got some statistical processing software. Dr. Terranova takes all this data and analyzes it in that software, and we come up with the percentage of time that each controller spends doing those discrete tasks. We combine all of that data for each controller into the percentage of time the controllers in that control room spend on those discrete tasks, but also those 10 categories that I named.
And then we take the NASA Task Load Index for day shift, night shift, weekends, and so forth, and we include that as part of the data. Then we take the controller alertness ratings that we’ve collected — in other words, from extremely alert to extremely sleepy — and we include that as part of the data, too.
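The aggregation step described here, turning hourly self-reports into per-category time percentages, can be sketched roughly as follows. The category names and minute counts are invented for illustration; the actual analysis uses dedicated statistical software rather than this simplified tally.

```python
# Hypothetical hourly self-reports: minutes spent per task category.
# Each record is one controller-hour; the categories shown are an
# abbreviated subset of the ten named in the episode.
hourly_logs = [
    {"operations": 20, "monitoring": 25, "paperwork": 5, "alarms": 6, "breaks": 4},
    {"operations": 10, "monitoring": 30, "paperwork": 10, "alarms": 2, "breaks": 8},
    {"operations": 15, "monitoring": 28, "paperwork": 8, "alarms": 4, "breaks": 5},
]

def category_percentages(logs):
    """Percent of total reported time spent in each task category."""
    totals = {}
    for hour in logs:
        for cat, minutes in hour.items():
            totals[cat] = totals.get(cat, 0) + minutes
    grand_total = sum(totals.values())
    return {cat: 100 * m / grand_total for cat, m in totals.items()}

pcts = category_percentages(hourly_logs)
for cat, pct in sorted(pcts.items(), key=lambda kv: -kv[1]):
    print(f"{cat:12s} {pct:5.1f}%")
```

The same roll-up can be run per controller first and then across the whole control room, which is how individual workload and room-level workload can both be reported from one data set.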
Russel: How do you get to the alertness level?
Charles: Well, again, that’s a self-measure based on the Karolinska Sleepiness Scale, which was developed by an organization in Sweden. I think it’s got seven points to it, from extremely alert to extremely sleepy. Each hour of the shift, the controller selects one of the ratings, and then we look at all those ratings and see how many there were across the population.
Russel: Right. A lot of this is information provided by the controller, not stuff that you’re measuring through instrumentation or that type of thing.
Charles: It is all information provided by the controller.
Russel: Yeah, it’s what I would call a clipboard study. I think a lot of times it’s challenging to get consistency when you do it that way. What it raises for me is what’s the challenge of getting to a good result doing this kind of work?
Charles: Well, it certainly is challenging because you’re working with a group of people. I know when we first started doing this method, a lot of the companies that we talked with about it said it was too subjective. They wanted objective data. They only wanted to look at alarm rates, phone calls, all that kind of thing.
You mentioned clipboard study. We had one client where we did a clipboard study two years in a row, which is pretty labor intensive, pretty expensive, and also intrusive on the individuals who are being measured. I suggested the third year to this client, “Why don’t we do it both ways? Since you’re stuck on the clipboard method, why don’t we do it both ways this year and see if these results correlate?”
We did it both ways the third year and statistically the results correlated, and therefore, we were able to move forward with them ever since then using this method that I’m describing for you. At first, I had the same thoughts you did. This is not going to work, people will overestimate, people won’t participate, all that kind of thing.
As the years went by and we collected more and more data from more and more control rooms — natural gas, liquids, refineries, gas plants, and other operations like that, anywhere somebody is sitting at a panel or a schedule display doing these tasks — we found that it correlated very well across all of the control rooms that we’ve done this in.
Russel: So no statistical difference between doing it as a third party with the clipboard versus asking the controllers to self-report?
Charles: None that we saw. As we collect more and more data — you know scientists love data, right?
Russel: Oh, absolutely. That’s all anybody talks about anymore is data and data analytics.
Charles: The more data we’ve collected, the more assurance we have that it’s statistically valid.
Russel: That’s very interesting. How frequently do you recommend that somebody do this kind of analysis or study?
Charles: Well, you’ve got to do something each calendar year not to exceed 15 months. It doesn’t specify what you’ve got to do. The way it works for us is we’ve got clients where we do it every year with them.
We’ve got clients where we do it one year and they do something else for a year or two. I have no idea sometimes what they’re doing, but then they ask us to come back every third or fourth year and do the method again.
Russel: Charles, that actually kind of answers one of the questions I had, which is can you do this yourself? And if you try to do it yourself, what are the challenges?
Charles: You can do workload measurements yourself because the method we use is not the only method that can be used. I’m aware of companies that use the NASA Task Load Index as part of their workload measurement method, so you can certainly do that.
The other thing you can do is if you want to use a clipboard method or a self-reporting method, you can do that yourself. I think what we’ve found in working with some companies is the challenge comes from how to analyze all that data.
That’s what we run across every now and then. Somebody has done something themselves and then they can’t take that data and reach a conclusion about whether or not controllers do have time to respond to alarms. That’s what the gist of the regulation is for me. Do controllers have time to respond to alarms?
What happens when companies use a method is that they get a bunch of data — objective or subjective, or some combination thereof — and then they can’t reach a conclusion. Now that we’ve done so many of these, we have benchmarks for what’s an acceptable amount of time to spend on those 10 task categories that I talked about.
The benchmark we use is if in a control room, the controllers are spending more than 10 percent of their time responding to alarms and/or abnormal operating conditions, they’ve got a problem they need to address.
Russel: 10 percent. That seems to me like a small number.
Charles: Well, our benchmark is somewhere around 5.1 percent responding to abnormal events.

What we’ve found in 8 or 10 control rooms is that controllers were having to contend with abnormal operating conditions caused by poor communication equipment, and they were spending a lot of time on that, so one of our recommendations was you need to fix your equipment.
This doesn’t occur very often anymore, but when control room management first came about, a lot of control rooms still had controllers doing a lot of administrative tasks because, guess what, the control room is always occupied, so we can get them to do this stuff.
Russel: It’s like every little job that nobody else wants to do that needs to happen overnight, they just throw it into the control room.
Charles: Yeah, so that has changed a lot in our experience.
Russel: Right. I think the other thing that’s changed, too, at least in my experience, is that before the control room management rule came out, pretty much everybody was in a perpetual alarm flood.
Charles: Yeah, that’s another thing.
Russel: They had so many alarms coming in, and I would also characterize that the mindset was I’m going to make sure the controller has situational awareness by making sure he has an alarm for that versus focusing on how do I build the graphics and how do I display the information in a way that I can see abnormal before I get an alarm.
Charles: The other thing we do as part of our workload assessment method, is we take the alarm data that’s provided to us by the client and we look at the number of alarm occurrences against the metrics that are in API 1167.
We’ve analyzed that too, and we say, well, if your controllers are reporting they spend X percentage of time responding to alarms, and then you’re presenting them a number of alarms you’re telling us, it may be an indication they’re not really doing an adequate alarm response. They may be doing more of an alarm acknowledgement, so that’s another part of where we do take the objective data and include that as part of the analysis.
Russel: That’s the kind of thing where a third party looking in and seeing a lot of control rooms can add some substantial value, I think.
Charles: The other thing we do when we do our first on site assessment with a company is we do a human factors assessment. We look at the control room environment, the ergonomics, the training program, the procedures, the fatigue management program. We basically look at the whole thing.
We always have a pipeliner consultant and a human factors consultant as part of the team and we make recommendations for improving those areas. We capture best practices they have, and then we also as part of the process, administer a controller human factors survey to collect data on all of those subjects in our human factors assessment.
Then the client gets a separate report, a very lengthy report, that contains all these findings. Plus, it also compares to the benchmarks of other control rooms where we’ve done these human factors assessments.
Russel: To me, that’s almost a baseline prerequisite before you start doing workload analysis. The level of complexity of the task and the level of effort to perform the task is impacted by the environment that you’re performing the task in.
Charles: That’s why when we came up with the method years ago, we decided to do both. And to tell you the truth, the first one of these I ever did, before control room management ever came about, I spent two weeks in a control room — day shift, night shift — making observations and interviewing every controller in those two control rooms. That’s one of the ways we used to see if what we were doing was effective.
Russel: I want to ask you a question. I didn’t really tee this up for you, but it’s a question that’s kind of on my mind. We have a software tool that we call Workload Analysis (WLAnalysis). What it does is it monitors the data being put into the logbook and the data that’s being captured in the SCADA system and uses that as a way to determine what workload is.
I’m curious what you think the value of that would be versus the value of what you’re doing in the study?
Charles: Well, the value of that is it does capture consistently what information is being entered into the system or to the software, and then you can compare that to other tasks the controllers are doing.
Russel: I think one of the things that we’ve seen, particularly in an environment where the workload is changing — you tend to see this a lot in midstream, where they’re adding wells and taking wells off, and they’re adding systems and systems are maturing — is that the workload changes and moves around the system, if you will. It gives people an indication of how the workload is changing over time.
Charles: Yes. I think the great advantage is that it’s a consistent approach that takes place throughout the year, right?
Russel: Exactly. Basically, you can get that report anytime you want to see it. I can do comparisons this week versus last week, this controller versus another controller, this operating area versus another operating area. I can do a more detailed analysis, but one of the prerequisites to do this is you’ve got to know how much time is placed on these tasks, and that comes from the kind of work that you’re doing. You’ve got to establish a baseline first.
Charles: I think one of the things that has occurred to me through the years in seeing different methods and everything, it shouldn’t be either/or. It should probably be both. There should be something like what we do at some point in time to establish that baseline and to figure out what all these tasks are, and probably then smart companies would take some software like what your company provides or some of the other vendors provide and then use that on an ongoing basis to see what’s changing.
Russel: I’m thinking this more programmatically. Again, this is kind of idealistic, I guess, but if you’ve got a good, solid baseline that looks at what is the environment, the human factors analysis, then I do some kind of baseline study and say here are the key tasks that are occurring and here’s the baseline time required to take care of one of these tasks.
You do ongoing capture to see if things are changing over time. And then on some periodic basis — a year, three years, whatever — you do some kind of periodic analysis to see if your baseline has changed.
Charles: I think that’s worthwhile, particularly for those companies that have analysts within the company that have time to do that. What you said made me think of a time we went on-site in 2012 to do a control room management audit before PHMSA came and did an audit. This was a large control center; it had a lot of support staff.
This analyst was showing me how they were measuring workload, and he was telling me he was spending about 12 hours a week analyzing this data. I said, “Well, buddy, if you’ve got 12 hours a week to do that then more power to you.” Most companies don’t have time to do that.
Russel: Yeah, most companies don’t have 12 hours in a year to do that.
Charles: That’s the advantage of what we do. It’s a once-a-year thing, and if you do it over time in different seasons, then you’ve got some outside help. Of course, with outside help there’s additional expense.
I like the fact that there are software packages that can capture data on an ongoing basis, and it probably could work pretty well, as long as either the vendor or the company itself has got somebody that can look at it and say, “Okay, if this is telling me the workload is X, then I can conclude that my controllers have time to respond to alarms and handle any abnormal or emergency situations that might come up.” In other words, they’ve got some spare time there that can be used.
Russel: This is maybe a simplistic approach, but the way we do that is we perform this data capture and do some math, and we calculate and we can look at this hour by hour, or day by day, or shift by shift.
We know that there was this much time spent doing work that we tracked and this much time left over. And that time left over, we call that vigilance time, and what we try to do is set a target for vigilance time. We think a good practice would be 30 to 35 percent.
So the vigilance time would be time spent going through screens, looking at what’s happening, understanding what’s going on, seeing if you see anything that’s an abnormal condition, and responding to alarms that would be beyond what the normal level of alarm activity is.
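The vigilance-time calculation described here can be sketched as tracked time subtracted from shift time, with the remainder compared to the 30 to 35 percent target mentioned above. The shift length and minute counts below are invented for illustration.

```python
SHIFT_MINUTES = 12 * 60  # assume a 12-hour shift for the example

# Minutes of tracked work over the shift (hypothetical values).
tracked = {"active_control": 180, "paperwork": 90, "calls": 120,
           "admin": 60, "alarms": 45, "breaks": 60}

def vigilance_pct(tracked, shift_minutes=SHIFT_MINUTES):
    """Untracked time as a percent of the shift: the 'vigilance time'."""
    leftover = shift_minutes - sum(tracked.values())
    return 100 * leftover / shift_minutes

pct = vigilance_pct(tracked)
print(f"{pct:.1f}% vigilance time; in 30-35% target range: {30 <= pct <= 35}")
```

The same calculation can be run hour by hour, day by day, or shift by shift, which is what makes the week-over-week and controller-versus-controller comparisons discussed next possible.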
Charles: Okay, so that’s very interesting because we would call that same thing the amount of time that controllers have for monitoring. Our benchmark based on 254 studies is 38.2 percent.
Russel: Well, I was off by three percent, three point something. That is interesting. We’ve never had this conversation before and it’s really interesting to see how that correlates.
Charles: It is, and that’s amazing to me.
Russel: Yeah, me too, really. You made me a little nervous there when you were teeing that up. I was like, uh oh, where’s this going?
Charles: When you said 35, I thought, I think that’s pretty dad gum close to what our benchmark is for monitoring.
Russel: Right. So that’s interesting. It’s just a little bit different semantics, a little bit different thinking, but coming to the same conclusion.
Charles: There is one thing that I heard about this week that I’d like to mention, if it’s okay.
There are some companies that use the NASA Task Load Index not related to workload, but they’re using it to say whether or not controllers are fatigued, and those are two different things to me.
If you’re going to measure workload, then the NASA Task Load Index works fine and it’s an important part of it. If you’re going to measure fatigue levels, you need to use some kind of alertness rating or something like that rather than trying to include the NASA Task Load Index as part of your fatigue management measures.
Russel: Yeah, I don’t remember the episode. I’d have to look it up. I’ll link it up in the show notes for anybody who’s interested. But we did an episode with a PhD engineer, a human factors person from Denmark, who had developed a camera-based technology that would monitor the face, and the eye in particular, and use that to set a fatigue level from a direct monitoring perspective.
We talked about that at some length. I find that kind of technology fascinating. I had looked at an eyewear technology many years ago and talked to some control rooms about that. Consistently, the answer I got was, “Yeah, we don’t think anybody’s actually going to do that. A camera’s less intrusive.”
I think it’d be very interesting if you could do a study and figure out how do controllers perform given different levels of complexity, different levels of environment, and different levels of fatigue. Now I don’t know if there’s an appetite for anybody to fund a study like that, but I think we would learn a lot if you were able to do it.
Charles: Yes, I think that would be very interesting, Russel, if we could incorporate all of these things into a comprehensive look at control rooms and controllers. I’m not sure we’re there yet with the use of technology but it would be worthwhile.
Russel: I don’t know if there’s an appetite to fund that type of thing within the industry. Nor do I have any idea what level of funding would be required to do it.
Charles: I think that’s the challenge. I talked to one vendor that supplied equipment like that, and it would cost $3,000 a person to have the equipment. I said, “I don’t know any pipeline company that’s going to go for that.”
Russel: Yeah, no doubt. No doubt at all. Look, this has been awesome. I wanted to ask you one other question, kind of a different topic, and that’s what are you hearing? What’s the scuttlebutt around the industry around the control room? What is going on that you think people ought to know about?
Charles: Well, I think the most important thing is that PHMSA says they’re going to start repeating all the control room management audits in early 2020. They’re getting all their inspection questions together, including adding some about cybersecurity to be used for informational purposes. If you’ve got a control room management plan, even if you’ve been audited before, you need to get ready for another comprehensive control room management audit.
Russel: Yeah, we’re hearing exactly the same thing, Charles. What we hear is that PHMSA’s been chartered to get back and reinspect everybody within the next three years. Some of our customers are telling us they’re getting phone calls and people are gathering information. PHMSA’s trying to identify every control room for every OPID that they have in their database.
Charles: I was talking to a control room manager in the Midwest about a month ago, and he said they had already been contacted with possible dates in the first part of 2020 for auditors to come on-site.
Russel: I guess that means we’ve got work to do.
Charles: Happy New Year, pipeline control rooms!
Russel: Yeah, exactly. Well, Charles, as always, it’s a pleasure. Thank you for coming back. We need to make sure we don’t wait so long before we do this again.
Charles: Yeah, I know. I’m getting older and older. There’s no telling how much longer I got.
Russel: Well, that’s not where I was going but sure enough. All right, take care, my friend.
Charles: All right, thank you, Russel. I appreciate it.
Russel: I hope you enjoyed this week’s episode of The Pipeliners Podcast and our conversation with Charles Alday. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinerspodcast.com/win to enter yourself in the drawing.
Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords