In this month’s edition of the Pipeline Technology Podcast sponsored by Pipeline & Gas Journal, Corie Allemand of Stratus Technologies discusses his recent article on how edge computing optimizes pipeline management.
Listen to this episode to learn more about why edge computing is a game-changer for the pipeline industry, how edge computing focuses on collecting the appropriate data to support decision-making, the difference between edge devices and edge services, how to use predictive analytics to support critical areas of pipeline management, the future of edge computing in the IoT era, and more.
Edge Computing: Show Notes, Links, and Insider Terms
- Corie Allemand is Director, Oil and Gas, at Stratus Technologies. Connect with Corie on LinkedIn.
- Stratus Technologies provides reliable and redundant zero-touch computing, enabling global Fortune 500 companies and small-to-medium sized businesses to securely and remotely turn data into actionable intelligence at the edge, cloud, and data center — driving uptime and efficiency. Find out more at Stratus.com.
- Pipeline & Gas Journal is the essential resource for technology, industry information, and analytical trends in the midstream oil and gas industry. For more information on how to become a subscriber, visit pgjonline.com/subscribe.
- Read Corie’s April 2021 article, “Edge Computing Optimizes Pipeline Management.”
- Edge Communications is a method of building out the architecture for structured communication from edge devices in the field to a host server using connectivity to transmit the data. When evaluating the capabilities of your SCADA systems for transporting data, it’s best to consider which method of communication fits your operation.
- Edge Computing differs from traditional SCADA in that compute workloads can be shifted from the cloud to the edge, optimizing bandwidth and functional efficiency.
- Edge Devices are pieces of hardware that provide an entry point into a connected network. The devices serve as gateways, moving data between networks.
- Internet 4.0 (a/k/a Fourth Industrial Revolution) refers to the modern connection of people, systems, and devices as part of the Internet of Things (IoT) and Industrial Internet of Things (IIoT).
- PLCs (Programmable Logic Controllers) are programmable devices placed in the field that take action when certain conditions are met in a pipeline program.
- EFM (Electronic Flow Meter) measures the amount of substance flowing in a pipeline and performs other calculations that are communicated back to the system.
- SCADA (Supervisory Control and Data Acquisition), though evolving quickly, is generally software used to visualize process data, alarms, and analytic results, and lately to integrate video surveillance and artificial intelligence.
- HMI (Human Machine Interface) is the user interface that connects an operator to the controller in pipeline operations. High-performance HMI is the next level of taking available data and presenting it as information that is helpful to the controller to understand the present and future activity in the pipeline.
- Node is the term for an IoT device that consists of several parts and many functions around signal interface, processing, and transmitting data. It can be quite simple or very extensive.
- Virtualization is a method to better utilize hardware resources by creating virtual copies of the hardware and OS (virtual machines). In this environment, applications can go back and forth between the physical and virtual OS to perform actions.
- Redundancy is the duplication of critical aspects in a system, which is designed to increase the reliability of the system. This helps prevent any disruption of system operations, reducing the risk of downtime.
- Cloud environment (or cloud computing) refers to an off-site hosting location where applications, data, and business processes are stored, owned, and managed by a cloud provider. “Moving to the cloud” is the process of moving information away from on-site or on-premise hosting environments to the cloud environment.
Edge Computing: Full Episode Transcript
Announcer: The Pipeline Technology Podcast, brought to you by Pipeline & Gas Journal, the decision-making resource for pipeline and midstream professionals. Now your host, Russel Treat.
Russel Treat: Welcome to the Pipeline Technology Podcast, episode nine. On this episode, our guest is Corie Allemand, global oil and gas leader with Stratus Technologies. We’re going to be talking to Corie about his article published in the April 2021 Pipeline & Gas Journal titled “Edge Computing Optimizes Pipeline Management.” Corie, welcome to the Pipeline Technology Podcast.
Corie Allemand: Thanks for having me, Russel. I really appreciate the opportunity to come here and speak with you.
Russel: Listen, I think we should just relax because we know one another and it could get really geeky really fast here. What do you think?
Corie: Absolutely. Why not?
Russel: Before we dive in, why don’t you tell us a little bit about your background, where you come from, and how you got involved in pipelining?
Corie: Sure. My foundation in electronics comes from the Marine Corps. From the Marine Corps, I went into offshore telecommunications, from telecommunications into networking and communications, into automation, into an electrical and engineering assistant role. I’ve done a couple of different things along the way, working for folks like a company called Datacom, a company called Shell, and a company called Texaco.
I’ve been in the pipeline industry for a few years. I’ve done a few things around the automation and analytics, but all based on a foundation coming from networking and telecommunications and then evolving into the automation and electrical piece.
Russel: We’re cut from the same cloth, my friend. We’re going to talk about your article on optimizing pipeline operation through edge services. Let’s start with a couple of definitions. Let’s start with, what is the edge?
Corie: The edge is obviously different things to different people. I mean, it depends on what your description of the edge is. It depends on what your operating area is. If I’m a pipeliner, and my operations area is a pump station, then that’s probably my edge. If I’m an upstream operator, and I’m dealing with wellheads, then obviously that may be my edge.
It’s just a different definition for different people, but I think the main focus is, where’s the data collected? Let’s put the edge at where the data is collected, and then what can we do to use that data to make intelligent decisions?
Russel: What is an edge service? I’m somewhat familiar with edge devices, and we’ve done some podcasts on edge devices, but what’s an edge service?
Corie: When I think about edge service, it’s obviously an environment. We’ve got the device. We’ve got the operating — or the system — on the device. We’ve got applications. We’ve got connected IoT pieces. There’s a lot of different pieces that would go into what I would call that edge service.
Russel: I’m processing that, Corie, as I’m thinking through that answer. When I think about the edge, I think about right next to the machine, right next to the instrumentation. That’s the edge, right?
Russel: If I’m talking about a compressor, then the edge is between the PLC and the network. If I’m talking about a meter, it’s between the EFM and the network. That’s where the edge is. When you talk about edge service, that blows up in my mind; it gets a little muddy and murky, if you will, at least for me, because I don’t think it’s commonly understood or well-defined in our business.
How would you address it? How would you clarify it for me? What is service in more specific terms?
Corie: You listed three items that you think are edge. If we’re going to provide an edge service, how do we do that with a unified platform? How do we do that in a smart way so that we’re not building siloed equipment to run all these applications in basically different silos, removed from each other?
How do we build that into an edge service where these machines can be integrated, or the data can be consolidated, in a way that makes it a service on a platform for the company responsible for servicing it?
Russel: Or, rather than being a platform that allows you to do your own thing, it’s the platform, the software, the support, everything to get to a particular value proposition. Did I get that right?
Russel: I think there’s a lot of value in that. That’s the trend where software is going these days. It’s becoming less Software as a Service. It’s becoming more Solutions as a Service because people don’t really buy software. What they do is they solve problems.
Corie: If you look at where the last few years we’ve gone as an industry, back when I worked for Texaco, we had engineers. We had technicians. We had time to do our own R&D. We could buy a piece from this person, a part from that person, and we could build our own solution because we had time to do it.
In today’s environment, I don’t think a lot of companies have that kind of staff availability. Everybody’s either putting out fires, or they’re doing their day-to-day tasks.
Russel: They don’t need another thing to support and maintain either. That’s the thing about automation. It creates more things to support and maintain.
Corie: At the end of the day, what the end user’s looking for is a solution. They don’t want pieces and parts that they have to build on their own. They want something that you can take to them and say, “If you plug this in, this is how it works, and this is what it does, and it’ll function.”
If you can demonstrate that and you can show them how it will take care of whatever that problem is they’re trying to solve for, I think that’s when we get to where we’d like to be as a provider, actually helping solve a problem.
Russel: That’s very well-stated. I want to talk a little bit about edge devices because I know that’s the world in which you play. My data is now 12 to 24 months old, which in that space is antique.
Russel: Let me queue up my question this way. The last time I looked, I saw a lot of really cool edge devices. The prices were in the $1,500 to $2,000 range. They were beginning to get some environmental spec to them that would make sense in oil and gas, but they didn’t have any of the classified area stuff, and they didn’t have any of the low-power stuff that you really need to proliferate.
So, what’s the state of the edge device market? Have they gotten any environmental spec? Have they got the explosion-proof, classified-area ratings? Have they got the low power?
Corie: Great question. Edge devices come in various flavors, sizes, shapes, forms. I do know certain edge devices have Class 1/Div 2 certifications associated with them. They’re not on the low end of that price spectrum, but they are available. You can find them.
It depends on what your goal is from the edge. I had a conversation recently where we talked about the edge, what the edge means to different folks, and what that edge is. If the edge is just simply adding an additional measurement into your system, then obviously a low-end device can accomplish that.
For something that operates outside of your SCADA system, something basically tied in through a cell service or the like that you want to add in to the control center, some of those low-end edge devices are great at that.
You can take that all the way to probably the third level of edge, where you’re actually using that local edge device to run your SCADA HMI as a local resource and then tying it back in through a network.
In that case of edge, then you’re talking about a whole different level of edge because you’re talking about a device that is running and allowing you to operate that system or that station locally. You start talking about things like virtualization and redundancy, which are available in the edge as well. Now, you’ve gone from things that used to exist only in a data center or a control center and taking that out to a pump station.
Edge has various flavors, various sizes. That price range, that bottom point might be $1,500, but on that top end, you might be looking at a totally different number when you start talking about what is possible to do at the edge and when we start talking about virtualization, redundancy, virtual machines, running analytics. There’s a whole lot of things that can be done depending on what your goal is at the end of the day or what the edge looks like in your environment.
Russel: [laughs] I could geek out on this conversation, Corie. I really can. What’s interesting about that to me is that, and I’m going to reframe what you said a little bit, just to see if I’m thinking along the same lines as you. These edge devices, really simply stated, are just highly reliable, hardened PCs?
Russel: They can be built more fit-for-purpose in terms of the OS you deploy on these PCs?
Russel: If you think about that and you say, “Instead of having a PLC running a machine, I’m going to have this edge device and put software on it, and it’s going to run the machine,” that’s a completely different architecture.
What that also does is you start to future-think this. As the need for specialized control and automation devices begins to diminish because everything’s an edge device and everything’s software, which is going to drive the cost of automation down and the value up, I’ll be able to do more, and I’ll be able to support it more easily because now I just get another computer and put the app back on the computer.
Corie: I think that’s the direction we’re headed in, and that is the opportunity in front of the industry, yes. There’s still some things from a compute — when you really get into the geeky, the silicon side. [laughs]
Russel: When you really get deep under the hood. What I just said is wavy hands magic happens here kind of stuff, and we’re not quite there yet.
Corie: We’re not quite there yet.
Russel: I do think it helps people that are wondering why the edge is such a big deal understand why it’s such a big deal because it’s a game-changer. It’s a game-changer.
Corie: Just to finish that, I’d say we’re not quite there yet, but we are on the doorstep. There’s some things being done on the silicon side that will perpetuate the ability to replace those types of devices in the near future, absolutely.
Russel: There’s also going to be a big adoption issue. It’s not going to be unlike when the PLC first came out and you were replacing old-style electrical control panels. There were a whole lot of issues around market adoption, the operator’s ability to support the equipment, and all that.
You’re going to have a lot of those kinds of issues that we really haven’t touched yet because we’re still doing proof of concepts and such. We’re not doing things at scale.
Corie: You’re absolutely correct.
Russel: Let me ask this question. What’s the value proposition? Another way to ask this is, why do I care? What’s the value proposition of the edge?
Corie: When you start thinking about what’s possible, where can we go, then you really start thinking what are the major problems that we need to overcome with where we are today.
One of the things that you’ll read about very often in most industry pubs is downtime. Unplanned downtime is a killer in the midstream market, and in the upstream market. Any unplanned downtime just costs beaucoup dollars depending on the size of the pipe you’re putting the product through.
Advanced analytics, maintenance — so maintenance is a big cost. What do we do with maintenance? People are talking about, how do we do predictive maintenance? The edge is a lever to get you to predictive maintenance. Really, to run predictive maintenance analytics, you need to be able to run that data in real-time, capture it locally, and then be able to analyze what’s happening with your devices.
When you start thinking about that, you can’t do that in your automation infrastructure today. Either you’re comms-restricted or you’re locally restricted by your controllers; the amount of data is just going to be too massive to put into an existing SCADA system. That’s where the edge really allows you the opportunity to run real-time analytics on a mission-critical device where it’s located.
Obviously, you can start looking at what are the trends, what’s the behavior of that asset, what does it look like when it’s failed in the past, are there any signatures that are verifiable through correlation of events?
We’re not talking about one or two events, or just the pressure, or just the temperature. We’re talking about correlation of what could be in some cases 100 different events if you have the ability to log those. Depending on what you’ve captured in the past and what we have for historical, you can look at correlation of events. Correlation of events will lead you to behavior.
Once you can begin to predict that behavior, now we’re not talking about recovering from downtime. We’re talking about preventing downtime because we can make decisions before things go wrong. That’s when we really get to that predictive kind of environment.
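The correlation-of-events idea Corie describes can be sketched in a few lines. This is an illustrative example only, not code from any real system; the channel names, thresholds, and readings below are hypothetical, and the point is simply that several channels crossing warning limits at once can raise a flag well before any single shutdown limit is hit.

```python
# Hypothetical sketch: flag a correlated event when several sensor
# channels exceed their warning limits at the same time. Warning limits
# sit below the hard shutdown limits, so the flag fires early.

def correlated_warning(readings, warn_limits, min_hits=3):
    """Return (flagged, channels) where flagged is True when at least
    min_hits channels are past their warning limits simultaneously."""
    hits = [ch for ch, limit in warn_limits.items()
            if readings.get(ch, 0.0) > limit]
    return len(hits) >= min_hits, hits

# Latest values from a (hypothetical) compressor package.
readings = {"discharge_temp": 212.0, "vibration": 0.31,
            "current_draw": 118.0, "bearing_temp": 142.0}
limits = {"discharge_temp": 205.0, "vibration": 0.25,
          "current_draw": 110.0, "bearing_temp": 150.0}

flagged, channels = correlated_warning(readings, limits)
# Three channels are past their warning limits, so a correlated event
# is raised before any shutdown threshold is reached.
```

A real implementation would correlate time-aligned histories, not single snapshots, but the decision shape is the same: many weak signals together mean more than any one alone.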
Russel: Let’s unpack that a little bit, drive into a specific example. I know a little bit about compression, and I know a little bit about machinery analysis around compression.
Machine analytics around compression have been done for a long time, but historically somebody had some kind of data capture package. Once a month, or once a quarter, or once a year, they would go, and they’d hook all this stuff up to a machine, and they’d capture data, and they’d give you a report, say here’s what you need to do.
One of the things that the edge offers is now I can just do that all the time.
Russel: Which has got some value. I can see things that are not happening all the time but happen occasionally, and that’s got some value.
The other thing I would say is we have every piece of — every compressor of any size has a Murphy panel or something like that on it that is a shutdown controller that is looking for things that get out of spec, and it shuts the machine down to prevent damage. Everybody’s familiar with that. How would an edge device work as opposed to a Murphy panel?
Corie: What I would say is as we’re collecting the data and building the profile of the device, we don’t need to necessarily shut down the machine. We look for thresholds before that shutdown happens to power down a device before failure or to change what’s being fed into that device before we get to those kinds of critical measurements. We can change our method of operations based on the current performance of the compressor.
If that compressor’s starting to move off of best efficiency points or if we see a weird temperature, or weird pressure, or something, we can make a decision before we get to a shutdown point — whether we have to start up another unit, move product in a different direction, or make an operations decision before we get to failure of a device.
Russel: Again, there’s a lot bound up in that. I think about something as simple as monitoring the pressure and watching the pump curve on a compressor side, on the prime mover side, and watching what’s going on with the pressure curve.
If I’m just getting pressure, and I’m getting it once a second, that doesn’t really tell me much. But if I’m getting pressure every millisecond, and I’m looking at that pattern, that tells me something really important. Right?
Corie: Yeah. I take this back to something we’ve looked at on a liquid side. If I’m looking at the current draw versus the pump efficiency curve, and I know when I’m operating at my best efficiency point and I start to see no longer am I operating on my best efficiency point, and my current draw is going up, there’s something going awry here. That machine is no longer efficient.
Can I move it to a different pump so that I’m not wasting money powering a unit that’s less efficient than unit two or unit three? Those are just simple examples where if you’re watching this stuff in real-time, you can make those kinds of efficiency decisions.
Russel: Again, if you take the pump example, if you’ve got vibration analysis on that pump, and you’re looking at that current draw, and you can correlate a rise in the current draw to a change in one of the vibration sensors, you can go, “That bearing’s starting to go bad.” It’s not bad enough to replace, but this is not the pump I should turn on first.
Russel: Which means you might get more life out of your equipment between maintenance cycles.
Corie: Right. We’re talking about being able to understand the operations of the devices that are critical to the mission and being able to decide, “Okay, it’s off of efficiency, but it’s not at such a point that I need to do maintenance yet, but I’m going to put that third in the rotation. I’m going to run units two and three, and I’m not going to use one unless I have to because next time we go to the station we’re going to perform maintenance on that guy to get it back where it needs to be.” Now, we’re making decisions about efficiency.
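The dispatch decision described above can be sketched simply. This is a hypothetical illustration, not Stratus code; the unit names, flows, and best efficiency point (BEP) are made up, and a real system would fold in current draw and vibration as discussed, not flow alone.

```python
# Hypothetical sketch: rank pump units by how far their measured flow
# has drifted from the best efficiency point (BEP), so the least
# degraded unit is started first and the worst is held for maintenance.

def dispatch_order(units, bep_flow):
    """Sort units by relative deviation of measured flow from BEP flow."""
    def deviation(unit):
        return abs(unit["flow"] - bep_flow) / bep_flow
    return sorted(units, key=deviation)

units = [
    {"name": "unit1", "flow": 780.0},   # drifting well off BEP
    {"name": "unit2", "flow": 995.0},
    {"name": "unit3", "flow": 960.0},
]
order = [u["name"] for u in dispatch_order(units, bep_flow=1000.0)]
# unit2 runs first, unit3 second; unit1 drops to third in the rotation
# until maintenance can be performed.
```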
Russel: The other thing that’s interesting about this conversation, Corie, is it illustrates a point that doesn’t get talked about a lot. We tend to talk about the value of the automation, and then people tend to get concerned if we keep automating things I’m not going to have a job. The reality is you’re going to have a job. It’s just going to be different.
We’re going to still need the human to look at that and make a decision about how are we going to operate and when are we going to perform maintenance.
Corie: Absolutely. The job is still there. It just changes the way it’s done. Instead of doing maintenance based on a calendar, now we’re doing maintenance based on the asset health.
When the machine tells us it’s time to do maintenance because it’s decreased efficiency, or we’ve got something just weird happening, then it’s time to go check that out and figure out what’s going on. The machine through the use of analytics will be able to tell us when it’s time to do that kind of health check.
Russel: It’ll be able to tell us when, but it will also be able to tell us why. I actually think the “why” is more valuable than the “when.”
Corie: Yeah, so you show up with the right parts and pieces.
Russel: You can make decisions about how to operate until you’re able to pull maintenance, right?
Corie: Absolutely. I had the opportunity to be part of a team where we did some analytics on some failure analysis for some vertical pumps. We were able to predict through signatures of this one pump that when these events correlate, this thing is going to fail within the next 12 to 14 days.
The consensus from the customers was, “Well, great, we can change our operating procedure, and it’ll last longer than 12 to 14 days because we’re not going to run it as hard or whatever.” I had the same conversation with a different operator at the same company, who said, “Well, hell no, I’m just going to run it 100 percent and plan to replace it in 12 days.” [laughs]
You can make your decision based on the way you want to operate your equipment. But now, you’re not stuck with something that’s down and you’re making phone calls at 2:00 a.m. to try to find somebody to fix them.
Russel: There are valid reasons why you would do both of those things, right?
Russel: They go to, “Well, what’s the current operating objective for this device? What are the constraints I have for my decision-making, and what’s the opportunity I have for my decision-making?”
Corie: Both decisions are valid. It just depends, like you said, on that area of operations in that asset and what’s the plan. If the plan is to run it as much as I can and get as much out of it, “Now I’ve got a scheduled replacement,” then great.
The other guy maybe doesn’t have all that money to pay for the replacement at the moment. He’s going to lean out his usage of that unit so it’ll last longer. It just depends on, like I said, what you’ve got available to you and what you’re capable of.
Russel: Exactly. You mentioned earlier. You were talking about virtualization and redundancies. Let’s spend a little time talking about that. First, for people that don’t know, what is virtualization?
Corie: The way I describe virtualization, and I’m not a data scientist, so I’ll just leave it at this: you’ve got an application that exists in a virtual environment, and that environment exists on multiple machines or multiple computers. That virtual machine can operate in both environments at the same time and can basically fluctuate between those environments depending on what computer resources are available to it.
One would be a primary node, and one might be a secondary node. That virtual machine can operate on either the primary or the secondary. Depending on alarms, or failures, or basically predicting failures with the units, or things of that nature, the virtual machine can float back and forth.
Russel: Certainly for those guys that work in data centers. I’m like you. I’m not a data scientist. I’ve not done very much direct work with virtual machines. I have other people that I work with who do that. I let them handle that. That’s something I choose to not know.
The idea is I have a machine, and it’s available to run any virtual machine — one or more virtual machines. If I have a machine failure, I don’t have to reinstall and reconfigure. I just pick that virtual machine up, drop it someplace else, and it continues to run.
Russel: It’s basically taking the machine and representing it in software, and machine in this case being the computer.
Corie: Right. That computer exists in a virtual environment. It can be in an environment with more than one computer and can operate on whatever assets are available to it, or it can exist on only one machine but can be removed and put on a different machine to run at any time. It’s basically taking your computer and putting it in its own packaged environment so that you can move it around if necessary.
In the world of virtualization and redundancy, that’s what allows you to take that computer that’s running on node A, or node B, or whatever you want to call it. When that unit has an issue, it can swap virtually into the other unit and keep running. Depending on the nature of your architecture, if it’s fault-tolerant, it could be a seamless, bumpless transfer of data that continues to operate. You wouldn’t know other than the alarm telling you that you had a hard drive failure.
Russel: Right. You had a hard drive failure, and you’re now running instead of virtual machine one, you’re running over here on virtual machine six.
Russel: You continue to operate?
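The failover behavior being discussed can be modeled in a few lines. This is a toy sketch with invented node names: real virtualization platforms handle placement in the hypervisor, but the decision it makes on a node failure looks like this.

```python
# Hypothetical sketch: an active/standby placement decision for a
# virtual machine running across two edge nodes. On a node failure,
# the VM "floats" to the surviving node and keeps running.

def place_vm(active, standby):
    """Pick which node should host the VM based on node health."""
    if active["healthy"]:
        return active["name"]
    if standby["healthy"]:
        return standby["name"]   # VM moves to the surviving node
    raise RuntimeError("no healthy node available")

node_a = {"name": "node_a", "healthy": False}  # e.g. hard drive failure
node_b = {"name": "node_b", "healthy": True}

host = place_vm(node_a, node_b)
# The VM continues on node_b; the operator only sees the drive alarm.
```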
Russel: When I start doing virtualization at the edge, what the heck does that look like? I could visualize that back in the data center, but I can’t visualize at the edge because at the edge, the I/O’s such a big deal. How do I virtualize at the edge?
Corie: When we look at virtualization at the edge, it’s obviously, once again, one of those things that’s different to different people. What I would say is, if you’re running SCADA at the edge, now, in the virtual environment, you can run your SCADA, HMI, and historian all locally, basically for local control and operations of that resource.
Then, you can take additional virtual machines and look at doing things like compressor analytics, or some type of predictive maintenance application. You’re building virtual machines to support the various applications that are integrated into supporting whatever it is you’re trying to accomplish, like analytics.
Russel: I tend to do better with specific examples. Let’s say I had a compressor package. I had a gas engine as the prime mover. They’re sitting there. They’ve got the PLC and such. I could put a set of virtualized edge devices there.
One could be running the data capture and analytics on the data capture. Another one could be historizing that. Another one could be running the HMI application and providing the presentation to a local HMI. Now I’ve got in effect the SCADA system but connected directly to that machine?
Russel: If I wanted redundancy so I kept it running, then maybe I need three virtual machines to do this task, and I put in six?
Russel: Or I put in a system big enough to run six. If one fails, it just drops over, and I do what I need to do that. Have I got it?
Corie: Yeah, you’re pretty much hitting it on the head there. I mean, you can do any number of things depending on what your goal is at that point in time and what you’re trying to accomplish. The thing about having the right machine at the right location, we’ve got…
I can think of examples where customers are running 10 virtual machines at a compressor station. When you think about the various things that they’re capturing and the things that they’re accomplishing in those virtual machines, maybe at one point, they either could not run, or they had to run all through the control center. You’re just talking about a different way of operating a system.
Russel: Well, it’s a different architecture.
Corie: Instead of having six to eight computers at each compressor station where each one is running, that machine is running that application. Through virtualization, you can have one computer running all of those virtual machines. Then, of course, because you’re putting all your eggs in one basket, you want that basket to be really hard.
Russel: Yeah, right. [laughs]
Corie: You want that one computer to be virtualized, redundant, and have a really good uptime. [laughs]
Russel: Of course. Let me ask this question. For people that are listening to this conversation and maybe who are not automation types, but they’re pipeline operations type, what should they know about the edge?
Corie: I can get specific on this. The edge is the next level in the evolution of computing, and I know that sounds maybe a little weird. When we think about the compute environment, cloud has been talked about for a number of years, and everyone thought that cloud was where everything was going to go. Now, we realize that in our distributed architecture, everything can’t be in the cloud. That’s where the edge really comes in and takes up the slack, I would say, in getting everything into the cloud. The edge can help with that.
That’s where I see the edge from a simplistic view: handling all this data that’s going to be accumulated from all these devices, where we want to know what’s happening and when it’s happening, as close to real time as possible.
We can’t put all that data into the cloud. How do we get access to that data, and how do we make that data usable so we can make smart decisions based on what that data is telling us? That’s where the edge comes in.
The edge is that thing that sits near that asset and is able to collect the behavior, analyze it, and help us figure out what the next steps need to be or what things we need to do to make sure that we’re operating the most efficiently and the most proactively so that we can really understand the operation of our assets.
Russel: I think you make another really interesting point here, Corie. This is something I have said in other podcasts. Analytics is great, but analytics only work if you have a well-structured dataset and clean data. The cloud is ultimately going to be the enterprise repository for all of the data that you want to retain.
There’s a decision that needs to be made about gathering millisecond data at the edge, but I’m not going to take millisecond data and store it on the cloud. That’d be ridiculous.
The edge becomes a place where I collect the data, I normalize the data, structure it. I clean it, and then I summarize it in some way to provide it to others that they need it so they can use it for decision-making and other kinds of analysis.
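That collect-then-summarize step can be sketched concretely. This is a hypothetical illustration, not a product feature: the interval, timestamps, and statistics chosen here are made up, but they show how millisecond data stays at the edge while only compact aggregates go to the cloud.

```python
# Hypothetical sketch: reduce high-rate edge samples to per-interval
# min/max/mean before forwarding, so the cloud stores summaries
# rather than raw millisecond data.

def summarize(samples, interval_ms=1000):
    """Group (timestamp_ms, value) samples into per-interval statistics."""
    buckets = {}
    for t_ms, value in samples:
        buckets.setdefault(t_ms // interval_ms, []).append(value)
    return {
        bucket: {"min": min(vals), "max": max(vals),
                 "mean": sum(vals) / len(vals)}
        for bucket, vals in buckets.items()
    }

# Five raw samples collapse into two one-second summaries.
samples = [(0, 50.0), (250, 52.0), (900, 54.0), (1100, 51.0), (1800, 49.0)]
stats = summarize(samples)
```

In practice the raw samples would also be kept in a local historian for the analytics running at the edge; only the summaries cross the bandwidth-constrained link.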
Corie: I think of a simple depiction of video analytics. If we’re going to use video for security at a pump station or compressor station, and we want to know who’s coming through that gate, that camera’s on 24/7. That means it’s generating data, but do we need to see the 23 hours and 58 minutes when no one actually approached the gate?
Russel: Yeah. [laughs]
Corie: That data doesn’t need to be sent anywhere. That can be retained locally because nothing has happened that is really anomalous or interesting to anyone at the control center. When that truck drives up to the gate, now we’ve got an activity, an event, and that event-driven correlation means something to someone. That data can be sent, and we can use some of that bandwidth that we’ve got going through that station. But we didn’t need the rest of that data.
That’s a simple use case I think of when you think about the edge and what the edge does for you, the ability to take all that data that you need to capture because you don’t know when that truck’s coming up, but you need to capture it. You don’t really need to see that empty gate all day long.
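The gate-camera use case boils down to a simple filter. This is a hypothetical sketch; in a real deployment the activity flag would come from motion detection or video analytics, and the forwarded clip would include frames around the event, not just the flagged one.

```python
# Hypothetical sketch: event-driven filtering at the edge. All frames
# are retained locally; only frames flagged with activity are forwarded
# over the station's limited uplink.

def frames_to_forward(frames):
    """Keep only frames where activity was detected."""
    return [f for f in frames if f["activity"]]

# 100 frames of an empty gate, then a truck arrives on frame 42.
frames = [{"id": i, "activity": False} for i in range(100)]
frames[42]["activity"] = True

forwarded = frames_to_forward(frames)
# 1 of 100 frames uses uplink bandwidth; the rest stay on local storage.
```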
Russel: It’s a really good analogy. Corie, this has been fun. I love having these kinds of conversations. I’ve actually learned some new stuff. Thank you. I appreciate it. [laughs]
Corie: My pleasure.
Russel: Hopefully, the listeners learned as well.
Corie: I really appreciate the opportunity to come here and talk. This is fun for me. I am a Bubba geek. I told you, when I used to work for Texaco Pipeline, I had that fishing line in my truck all the time when I was running between pump stations. This is something that I’ve done for a long time, something I really enjoy, and something I enjoy talking about. Anytime, my pleasure.
Russel: Awesome. Thanks for coming on board. We appreciate it. I hope you enjoyed this month’s episode of the Pipeline Technology Podcast and our conversation with Corie. If you would like to support this podcast, the best thing to do is to leave us a review on Apple Podcasts, Google Play, or on your smart device podcast app. You can find instructions at pipelinerspodcast.com.
If there is a Pipeline & Gas Journal article where you’d like to hear from the author, please let me know either on the Contact Us page of pipelinerspodcast.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next month.
Transcription by CastingWords