In this episode of the Pipeliners Podcast, host Russel Treat provides a recap of the API Pipeline Conference and Cybernetics Symposium. Listen to Russel share his perspective on the various sessions and conversations related to pipeline safety and compliance.
Also, Russel provides an important report on the regulations and recommended practices that were discussed at the conference. Pipeline operators can learn more about what is currently in place, what was tabled, and what is coming down the pipe later this year.
API Pipeline Conference: Show Notes, Links, and Insider Terms
- The API Pipeline Conference and Cybernetics Symposium took place April 24-26 in St. Louis. The conference featured dozens of important sessions on pipeline safety, CRM Rule compliance, leak detection programs, and the latest technology for pipeliners.
- Tisha Schuller, the keynote speaker at the conference, is a Principal at Adamantine Energy. Tisha’s unique story is that she shifted from an adversarial position on energy to a leading advocate for energy policy.
- Listen to Pipeliners Podcast Episode #18 on the Politics of Pipelines with guest David Holt of the Consumer Energy Alliance.
- Doug Sauer is the Manager of Midstream Logistics with Phillips 66. During the conference, Doug discussed the latest developments in Pipeline Leak Detection Program Management, per API RP 1175, that should be part of an overall safety management system.
- API Recommended Practice 1175 establishes a framework for Leak Detection Program management for hazardous liquid pipelines within the jurisdiction of the U.S. DOT (specifically, 49 CFR Part 195). API RP 1175 is specifically designed to provide pipeline operators with a description of industry practices in risk-based pipeline LDP management and to provide the framework to develop sound program management practices within pipeline operators’ individual companies.
- API 1164 outlines SCADA cybersecurity processes for pipeline operators that could take years to implement correctly depending on the complexity of a SCADA system.
- Concerning rules for underground storage, the PIPES Act of 2016 defines the “minimum safety standards for underground natural gas storage facilities” and imposes a user fee for entities that operate underground storage facilities.
- The Operator Qualification Rule (OQ Rule) establishes the list of requirements for pipeline operators to verify the qualifications of each person to perform specific tasks in their pipeline facility.
- MAOP (maximum allowable operating pressure) was included in a bulletin issued by PHMSA informing owners and operators of gas transmission pipelines that if the pipeline pressure exceeds MAOP plus the build-up allowed for operation of pressure-limiting or control devices, the owner or operator must report the exceedance to PHMSA on or before the fifth day following the date on which the exceedance occurs. If the pipeline is subject to the regulatory authority of one of PHMSA’s State Pipeline Safety Partners, the exceedance must also be reported to the applicable state agency.
- Title 41 of the FAST Act (FAST-41) is an inter-agency program that provides a single point of contact to streamline federal permitting of large infrastructure projects.
- FLIR cameras are used for thermal imaging, night vision, and infrared, typically in industrial settings.
- Green code refers to using an off-the-shelf SCADA system as delivered, without modification, which simplifies implementation compared to building out a SCADA system from scratch.
- Read these EnerSys Corp. blogs on the latest PHMSA FAQs regarding Team Training and the impact of Hurricane Harvey on pipeliners.
API Pipeline Conference: Full Episode Transcript
Russel Treat: Welcome to the “Pipeliners Podcast,” Episode 23.
Russel: Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time. To show that appreciation, we’re giving away a customized YETI tumbler to one listener each episode. This week, our winner is Bill Telzerow. Bill, I hope I pronounced that name correctly.
Bill is with Colonial Pipeline Company, and his YETI is in the mail. To learn how you can win this signature prize pack, stick around for the announcement at the end of the episode. This week, what I’m going to do is try to provide a summary of the API Pipeline and Cybernetics Conference that was conducted in St. Louis about a week ago.
I don’t know if you’ve had this experience, but a lot of times you go to a conference and you take away one or two things that are new. I always find them valuable, but sometimes they’re more valuable than others. In particular, I thought that this year’s pipeline conference was jam-packed full of really good and valuable information.
I wasn’t able to attend all the sessions. We actually had three people there from our company (EnerSys Corporation). Some of what I’m going to talk about is my notes and talking to them about sessions they went to and I didn’t go to. I’ll try to provide you an update of key information.
Then, for the things that I mentioned that you might have interest in, we’ll get all those linked up in the show notes page. From there you should be able to have a lot of resources. Let’s just start by talking a little bit about the keynote. The keynote was given by Tisha Schuller, who’s a principal with Adamantine Energy.
Tisha is an energy policy thought leader. Her story’s interesting. She started out as an opponent to energy and energy projects. Over the course of her career, she actually ended up in leadership positions with the State of Colorado and now works with industry as a consultant on energy policy. It was very interesting.
I don’t know if you recall, but just a few weeks ago, we had David Holt on the show with Consumer Energy Alliance. We were talking about the politics of pipelines. Really, the summary here is that Tisha was talking about many of the same things David talked about.
She really spoke to the need for conversation and relationship, and she talked about her own personal journey: how she learned about energy and all the various things that energy is used for, and how that changed her position from opponent to what she would call, I think, a responsible advocate.
Not unlike the story that David Holt tells — not in terms of his personal background, but in terms of just the politics of pipelines, and the need for us to advocate, and really educate the opposition to the extent that’s possible. I thought that was interesting, because for me at least, it was somewhat thematic.
The next session that I attended was on cybernetics and the need to up your game. I found that conversation quite compelling. In particular, Doug Sauer, who’s with Phillips 66, talked about pipeline safety management and the status of that program.
He mentioned some of the things that the industry’s trying to do to create materials to support the implementation of safety management programs and so forth. I found that really interesting. There was a lot of interesting conversation around that subject.
I think the thing that I find compelling about the whole safety management effort is just the fact that industry’s collaborating. This is not a regulatory requirement; it’s something that’s being advocated.
Really, there’s a lot of collaboration going on between companies to create resources, and tools, and so forth to enable companies to implement this kind of program.
One of the things I thought was really interesting in the material that Doug was sharing is the need to understand KPIs and the process you have to go through to get to the point where you’re beginning to share KPIs.
One of the points he made is that just by measuring, sometimes you find out you’re not doing as well as what you thought or as well as what you think you should be doing.
The value of measuring against a standard is that you have a better idea of where you’re really at. That allows you to define actions, programs, and such that you can undertake in order to improve your overall safety. I thought that was a really interesting conversation.
I’ll also say that for me it’d been a while since I’ve been to an API pipeline conference, probably three years, maybe four. It certainly seems that the industry’s come a very long way as it relates to cybernetics and issues around the control room.
There was a lot of material, even in the pipeline portion of the conference before the cybernetics conference even started, that related to control room and so forth. I thought that was really interesting and compelling. We’ll try and link up some of the materials that Doug Sauer talked about. The other thing I went to was a regulatory update.
Obviously, I spend a fair amount of time and effort trying to stay current, and that is always a challenge. I’m not going to spend a whole lot of time talking about the details of the regulatory update. I’m just going to fly through that and hit the high points. Some of the high points included the following.
First off, as of December, the underground storage rule is out. Obviously, we’ve got a stay of enforcement on that, but that rule is out and active. In January 2017, we had a freezing of regulatory requirements, meaning no new regulations until a review was conducted. That review is ongoing.
In January 2017, the new OQ rule came out. The hazardous liquid final rule came out and then was withdrawn, but it’s still in the pipeline. It’s expected that that’s going to be coming down the line soon enough. Of course, the gas final rule is in process.
What’s anticipated is that it’s actually going to be broken up into a couple of separate rulemakings to finalize: the first being the MAOP requirements and other congressional mandates, and then all of the elements around gathering being pushed into a second rule, with September 2018 being a date that’s been put forward to finalize that.
There’s still some conversation going on about that in terms of is that timeline reasonable, but that’s certainly the goal that’s being put forward. The other thing that was put forward was a presentation by Karen Hanley.
Karen was talking about the Title 41 FAST Act, also referred to as FAST-41, which is a program that is interagency and allows for a single point of contact, and an advocate to streamline and oversee federal permitting of these large projects.
Of course, that’s not just limited to pipelines, but certainly it’s a big issue for pipelines, particularly where there’s a complicated permitting process. She presented a lot of information about FAST-41, some benefits it’s having, and the level to which it’s been able to accelerate permitting.
For anyone working a project through the federal permitting process, Title 41 is probably something you want to look into if you’re not already aware of it and making use of it to streamline your approval process. Those are the highlights for the regulatory update as it was presented at the conference.
One of the things I did as well is walk around and visit with the various vendors. There’s a lot of interesting information, particularly around advanced camera capability and around drones. Certainly, there was some conversation about the combination of those technologies.
In particular, Southwest Research Institute had a booth there. They were talking about a camera they had where they were taking a FLIR camera, a camera that reads infrared, and doing advanced image processing on the images to enhance and improve the ability of somebody looking at those images to actually see a hydrocarbon release.
They had some comparisons in their booth between standard FLIR imagery and their enhanced imagery. Frankly, it’s quite compelling. I actually think that we’re going to see these camera technologies, and the image processing around these technologies, advance a lot in the next several years.
In a previous life, I did some work around software with a company that was doing what was called chromosome karyotyping. They used electron microscopes, and they applied software to eliminate visual inspection. The long and the short of this is it’s interesting technology.
It’s not yet commercial, but it’s probably soon going to become commercial. That was an interesting conversation I had while I was walking around the booths talking to people. Wednesday morning, there were a number of presentations. The one I chose to go to was on training.
Training is a subject that’s near and dear to my heart. There’s a lot of conversation about methods of training and cost of training. There’s also an ongoing conversation in our industry about the big crew change, meaning we have people with several decades of experience who are retiring.
Then we have other people who are young and don’t have that experience. The whole way they engage with technology and the way they learn is really quite different than the older folks. I just find this whole topic really compelling. There were several presenters employing technology in the quest to transfer knowledge and teach skills.
One of the presentations was all around the various tools and technologies for doing this. Of course, anybody who plays on computers, and surfs the web, and looks at YouTube videos knows there’s a ton of training content out there.
The conversation was first about content delivery, but ultimately this is about developing capability, what you might call developing competency in the various tasks that are required. There were a number of tools mentioned for doing this, both for delivering the training and for keeping the records.
I think probably the most compelling conversation for me was a session a little later in the cybernetics conference that included Charles Alday, who was a guest a couple weeks ago, and others talking about team training and how team training needs to be accomplished.
That conversation generally went like this: can’t I just do computer-based training [CBT]? The answer is no. As it relates to the team training stuff in the control room management rules, the frequently asked question response from PHMSA directly addresses that and says that CBT by itself is not adequate.
I think people would agree that classroom training, where there’s engagement and conversation with an instructor, and where the training is interactive and even hands-on, is by far the most powerful and effective kind of training. The challenge is it’s also the most expensive to deliver.
The question becomes: how do I create a training program that’s going to educate people and build competency? If I were to summarize the conversation, what I would say is this: “You need to think about training in terms of the jobs to be performed and the competencies required, and then what is the training program to build those competencies?”
Certainly, computer-based training can be very effective as a way to provide reminders and as a way to provide consistency to reinforce things that have already been trained. However, Ardis Bartle, who’s also been a guest and with whom I had a conversation recently, is working on a whitepaper about some of the deficiencies and limitations of computer-based training. Having said all this, if you were going to boil it down, what becomes important is that I need to understand the jobs to be performed and the competencies required.
Then I need to understand the policies, the procedures, the safety, the operator qualification, and the required skills necessary to perform that job. Then I’ve got to build a program where training develops that competency in a way that’s as efficient, and affordable, and effective as possible.
Of course, affordability and effectiveness, those two things are kind of in conflict with one another because they operate against very different constraints. Having said that, this also ties back to the whole conversation about safety management systems, because a big part of a safety management system is the training and the competency development.
As the pipeline industry moves toward safety management systems, if you think about this, to have a safety system, I have to have well-defined policy and procedure. Then, I have to have well-defined training.
One of the things I’m going to be looking at in a safety management program is what are the competencies I require throughout the organization? In what roles in the organization do I require these competencies? Then, how do I build those competencies? Very interesting conversation to me.
I think that certainly technology’s going to have a big impact on this. If you recall last week’s episode with Clint Bodungen, we had a conversation with Clint about a game that he has built that’s designed to provide training for people who work in the role of cybersecurity.
Anyway, it was an interesting conversation. I thought that was extremely valuable, and there were certainly some good takeaways for me in that conversation. In one of the sessions that I was not able to attend, but one of my co-workers did attend, they talked about something that I thought was really interesting.
This was the idea of what they called green codes. I heard this from Dale Schafer. Before Dale brought that term up, I’d never heard the term green code before. Being a technology guy and a geek, I pride myself on being up on all the buzzwords, and this was not a buzzword that I was up on.
Anyways, this came up, I believe, in the SCADA Replacement session. One of the presenters talked about green code. Basically, what they were saying is: in my SCADA system, I already have everything in it that I need, all the features, the functions, the graphics, the alarming, the animations, etc., all fully built out.
By not making any modifications to that, it becomes much easier, much simpler, to implement new systems. They were talking about this particularly in the context of acquiring assets. What’s easier to do?
If I buy an asset, is it easier to build a SCADA system from scratch, or is it easier to have something and just implement that green code on that asset? To me, that makes a huge amount of sense, particularly with all these requirements that are in the control room management rule, and the best practices around human factors and high-performance HMI, and so forth.
To me it makes a lot of sense to take a state-of-the-art tool and use it out of the box. In my vernacular, I would call that using off-the-shelf software versus custom-built software. Most SCADA tools are platform tools, and those platform tools are used for multiple purposes.
They’ll be used for water/wastewater, for traffic control, exploration, pipelining, etc. Each of these domains has unique, specific requirements. If you can build out a tool that already addresses all those unique and specific requirements, then that becomes easier to implement.
That was interesting to me, because that’s not a conversation I have heard before in any kind of pipeline SCADA discussion. There tends to be, at least in my experience, a desire to build things up from scratch. To the extent I can find materials here that are public domain, I’ll link those up in the show notes.
Hopefully, you find this helpful. If you have the opportunity to go to one of these pipeline conferences, I’d recommend it. I found it extremely valuable for myself. I should probably talk about some of the sessions that I would’ve liked to have gone to, [laughs] but I couldn’t get to because, well, you just can’t be in two places at once.
In particular, I wanted to learn more about the safety management system. The whole conversation around pipeline safety management, I think it’s a big one. I think this is something that’s going to become more and more of an issue and an opportunity for the industry. It’s one of those things I’m trying to learn about.
I did have the opportunity to have a couple of sidebar conversations at the conference with people who have been involved since inception with the building of the SMS program. I’m going to try to get one or more of those guys on as a guest so that we can share with the listeners some more information about pipeline safety.
There was also a session on pipeline security. I’m certain they talked a little bit about API 1164, which is cybersecurity for pipeline operations. It is a standard that’s currently under update. It’s interesting. I’m mentioning a lot of names here of people who have been guests. I ran into many of them, if not all of them, at the conference.
One of which was Dan Nagala, and I asked him specifically about the pipeline security presentation. He said he thought it was one of the best sessions he’d ever attended. There was a lot of really good information in it. Anyways, it’s good to know that we as an industry are really leaning into that in a big way. I think that’s important.
I also went to the presentation on natural disasters. In particular, there were a couple of operators that were talking about the response to Harvey in Houston. Harvey was a unique event just because of the scale of it.
If you think about the amount of real estate that got gobbled up in that storm, and the number of pipeline assets that are in the greater Houston area that were impacted by that flood, it’s a big impact. There was a good presentation talking about collaboration. You tend to think of emergency response.
In the pipeline world, you think of a rupture, or a leak, or something. It’s one point on the pipeline, and you have one team that’s addressing that.
In the case of Harvey, it was really a very different situation, particularly early on as the operators were just trying to understand the scale of the impact and were having to use boats to get to places where they would normally drive trucks.
I thought that was very interesting, and there were a lot of lessons learned that could be taken out of that conversation for sure. I did not attend any of the sessions on pipeline integrity. I need to piggyback on some other expert [laughs] in that particular technology and domain. Also, just given my schedule, I had to leave early.
There was a leak detection program management session on Recommended Practice 1175 that neither I nor any of the folks on my team were able to attend. That was really something I was very interested in attending. There was also a presentation on column separation.
Again, that’s something else I would like to have gotten to, but I just couldn’t get to everything. All in all, I thought it was an awesome conference with a lot of material. I’ve got a little homework myself.
That homework is I’ve got to figure out how to get my hands on all the presentations from the sessions I wasn’t able to attend and find out to what degree I might be able to share them. I’ll take that as a homework project and will have that question answered before this episode goes live.
With that, I hope you found this episode engaging and the summary of the API Pipeline Conference useful. I would very much appreciate some listener feedback on this episode, in particular.
When I do this kind of thing where I go to an event and try to do a summary, it’s really quite different than when we’re doing an interview with a subject matter expert and listening to them talk. Please, give me some feedback, if you would. Let me know if you think this is something that’s valuable and we should continue doing.
As always, we’re interested in your ideas. Just a reminder before you go. You should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit the Pipeliners Podcast website at PipelinersPodcast.com/win to enter yourself in the drawing.
Russel: If you have ideas, questions, or topics you would be interested in, please let me know on the contact us page at PipelinersPodcast.com, or reach out to me directly on LinkedIn. My profile name is Russel Treat.
Thanks again for listening, and I’ll talk to you next week.
Transcription by CastingWords