Pipeliners Podcast

Description

In this episode of the Pipeliners Podcast, Russel Treat welcomes Daniel Nagala of UTSI International Corporation to discuss the latest cybersecurity threats and awareness issues facing the pipeline industry.

The conversation focuses on what Mr. Nagala has seen in the field — both domestically and internationally — to identify the various challenges facing pipeline operators. Included is how to integrate new system capabilities with legacy systems without exposing operators to security risks.

Also included in this episode is a discussion of what was learned from the TRISIS malware attack and how to address vulnerabilities. Download this episode to become more aware of cybersecurity issues facing operators!

Cybersecurity Threats: Show Notes, Links, and Insider Terms

Cybersecurity Threats: Full Episode Transcript

Russel Treat:  Welcome to the “Pipeliners Podcast,” episode 16.

[music]

Announcer:  The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.

Russel:  Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time to listen. To show our appreciation, we’re giving away a customized YETI tumbler to one listener each episode. This week, our winner is Jack Burns with Blue Racer Midstream. Jack, your YETI is on its way.

This week on the Pipeliners Podcast, we have with us Dan Nagala. Dan has over 40 years' experience with pipeline automation and controls. He's recognized as one of the industry's leading consultants. He's also an expert in industrial control system cybersecurity and holds both Global Industrial Cyber Security Professional (GICSP) and Certified Ethical Hacker (CEH) certifications.

With that, let’s welcome Dan Nagala. Dan, welcome to the Pipeliners Podcast.

Dan Nagala:  Thank you, Russel.

Russel:  So glad to have you. I think probably the best thing to do maybe as a way to start is to ask you to tell the listeners a little bit about who you are and your background and how you got into cybersecurity.

Dan:  I’m Dan Nagala. I graduated from Northern Arizona University with a Bachelor’s degree in Computer Science Engineering in 1976. I moved to Houston and took a job with a little company that was building a pipeline SCADA system for Conoco Pipeline.

Ever since then, I've worked in the pipeline, SCADA, and automation industry, first as a software developer in my early career, and later as a consultant and engineering advisor, including on cybersecurity.

During my career, cybersecurity has been an integral part of pretty much every project we’ve done since, gosh, the beginning, even though we didn’t call it cybersecurity in the beginning. We called it access control or security. In essence, it was the beginning of the evolution of cybersecurity for industrial control systems.

Russel:  I always say this. There’s nothing new in this. We just rename it and change the technology a little bit.

Dan:  Exactly. I gave my first formal presentation on security for SCADA at an API conference in 2000 or 2001 [actually 2002]. I went back and looked at that presentation a couple of months ago. It's amazing that everything I talked about then is still a topic we talk about today, although the technology and the form of implementation are a little bit different.

Russel:  I'd be really interested to know, when you gave that presentation in 2000, if anybody believed what you were saying. Did they think, "Is this a real problem?"

Dan:  Amazingly, I believe there were only about 15 people in the audience. It wasn’t a very well attended paper. It was called “Threats, Vulnerabilities, and Mitigation Strategies.” There just wasn’t much interest in it at the time.

But of course back then, we still had a lot of people with systems that were not connected to any network or external systems. The control centers were still largely air gapped. People didn't perceive as big a threat as they do today, where everything's becoming widely interconnected.

Russel:  I think that’s a real good point. I think that’s a great tee-up and segue for my first question, which is what do you think the current state of cybersecurity is in pipeline automation?

Dan:  Certainly in North America and among the large operators, we see a lot of attention, from the top down to the lower technical levels, including the operator level, on awareness and strategies for protecting their systems at a cybersecurity level. We see much more effort going into the control centers themselves than we do in the field.

By and large as an industry, I think we’re doing a pretty good job at the control centers in putting in layered networks, especially with companies that have new systems. Most everyone is doing layered networks, pretty much following the Purdue model or some variant thereof. They’re putting in DMZs and firewalls and all the things that they should be doing at that level.

But I see the field still being largely neglected in a number of companies, including some large operators. There are a few exceptions to that. But by and large, the field is a lot harder to secure.

Yet, we have networks that stretch thousands of miles from our primary control centers out to our field locations. Those are vulnerable to physical breaches, and a physical breach can lead to a cybersecurity breach at those locations, as well. In our view, operators have to treat those holistically. Physical security and cybersecurity at a remote location have to be treated together.

Russel:  Dan, as you well know, that's easy to say. But actually accomplishing that is quite different.

There’s been a lot of value created by extending these IP networks all the way out to the remote sites, just in terms of how it enables diagnostics and data sharing and remote access, and so forth. The flip side of that is when you have a switch sitting in a control panel out at a remote site and that switch is in a simple network stack, that becomes a key point of entry for a nefarious actor potentially.

Dan:  Absolutely. In fact, one of the guys in my company recently did a review of a field communications architecture for a company. They had recently changed their methodology a little bit. It turns out their original methodology was pretty secure because everything concentrated through a single device that only had one connection point and then communication to its associated field devices.

But in their new architecture, they’d actually put that device on a switch. The switch had open ports on it. Actually, while everything used to go through that one box, now if somebody wanted to bypass that, all they’ve got to do is connect into the switch.

This was something that was corrected in that case, of course. But it’s an easy trap to fall into when you’re trying to gain value from the availability of a network and do more things. Sometimes, if you don’t think about the architecture just right, you can actually induce new vulnerabilities and new avenues where someone can get into your systems and cause damage.

Russel:  I know of at least one company that we have done some work with where all of the telemetry termination at a remote site is physically locked behind bars basically. You just cannot get into it. They are super rigid about who gets access. In general, they don’t let any one person have access at any time. It’s always two people having access to that telemetry endpoint.

You think about that in terms of the logistics and the cost associated with that. But the flip side is, in particular for the larger energy companies, this kind of thing is a big concern.

Dan:  Absolutely. That’s a good strategy. Another one is when you put those switches out in the field, or even in your primary control center, if you’ve got ports that aren’t used, you should use managed switches so you can lock down those ports. Until the port’s opened and authorized, even if somebody plugs into it, nothing’s going to happen.
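[Editor's note: On most managed switches, "locking down" unused ports as Dan describes means administratively disabling them, so that even a physically connected device gets no link until someone deliberately enables the port. The sketch below uses Cisco IOS-style syntax purely as an illustration; the interface range and VLAN number are hypothetical examples, not from the episode, and other vendors' commands will differ.]

```
! Illustrative Cisco IOS-style sketch (hypothetical port range and VLAN).
! Disable all unused access ports and park them on an unused VLAN.
interface range GigabitEthernet0/5 - 24
 shutdown                        ! port stays dead until explicitly re-enabled
 switchport mode access
 switchport access vlan 999     ! unused "black hole" VLAN, not routed anywhere
```

Re-enabling a port then becomes a deliberate, auditable change rather than something an intruder can do by simply plugging in a cable.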

Russel:  A simple little detail, and there are very many of them.

Dan:  Another point on the state of our industry is we still see a lot of companies that are running legacy software, legacy systems, for one reason or another. Maybe they were developed in-house and are so integrated into their operations that they can't change them very easily. Or maybe it's financial, or maybe they're just happy with what they have. They don't want to change it.

But with those legacy systems come vulnerabilities that may or may not be able to be patched — and it really depends on how old the legacy system is and what kind of protection methodologies are available inside that system. But by and large, they don't go in and re-architect their networks to implement best practice because it's just not practical with those systems.

We end up with systems that can't be protected well. They don't support encryption if you wanted to use encryption. Third-party access for maintenance support and remote users isn't nearly as robust as it might be in a newer system; access control, just everything. They aren't able to lock down their HMIs as well, either.

There’s a lot of things that exist in legacy systems that still create vulnerabilities if someone is able to gain access to those systems either through a network or by physical access.

Russel:  One specific example I know of is some of the older PLCs that had originally started putting IP ports on them.

Simple things like an IP flood would cause the whole PLC to lock up or die, or maybe even just dump its entire program. If you're using Windows and staying on the current release, those things pretty much don't pose much risk.

But in these older systems, old-school hacks that are elemental can be quite devastating. I think that's one of the things that people don't understand about legacy systems is you have…

They may be harder to gain access to just because of their, I'm going to use the word, clunkiness. They don't have all the more elegant tools for providing access, and that might provide you some level of security. But if you do get access, the kinds of things you can do to cause interruption are significant.

Dan:  Exactly. One thing I see a lot — and we tend to talk a lot about — are centralized control systems with a big SCADA that’s supervising the network. But, even though an operating company might have those, out in the field at sophisticated stations, be it a compressor station or large pump station or a terminal, you also have a local DCS.

Many times, the upper level SCADA system will communicate to that DCS to get pertinent values and also to send set points and pertinent controls to those devices. But at the DCS location, we see a lot less security than we see in a control room at the SCADA end.

Maybe because they’re in a plant – the plant’s locked down; they have physical access control. But still, I walk into [some] control rooms, and I see the usernames and passwords taped to the front of the HMI. [laughs]

Russel:  It is a little compelling. I think the other thing, too, is where they're actually doing the work, they generally don't have access to the enterprise IT resources that manage this stuff. Oftentimes, those enterprise IT resources don't understand the control systems. I think that's changing quickly in our business, but we've got a ways to go.

I want to change and pivot a little bit. One of the things that you do that's different than me…I've done some international work, but primarily, our work is U.S. and Canada. I know you do a great deal of work internationally. I'm curious: how is the threat, or the state of cybersecurity, different in the U.S. versus what you see internationally?

Dan:  In Europe, at the control center level, we see pretty much equivalent attention being placed on it. We still have issues with customers that have old legacy systems. Those are just things they have to work through.

But, by and large, the large pipeline control centers that I’m aware of and have had some interactions with over the last 20 years have taken the recommendations for ICS security very strongly and have done really well with it in Europe.

In the Middle East where I do a lot of work, as well, we see a lot less emphasis on the architecture. But there’s a lot more emphasis on physical security, emphasis to the degree that they might even have manned security checkpoints at the entry to their stations, entry to the control rooms, and whatnot.

But, at the cyber side, they’re not quite as advanced as we are. Especially the further out in the field you get and the smaller and less developed facilities that you get into, the lesser they are.

Certainly in the very large and developed third world countries, like the Emirates and Saudi Arabia, you see very sophisticated systems there, as well. But in smaller countries where I've worked, it's more of a challenge for them. Partially, that's because some of those countries don't have the developed networks that we have.

Russel:  The threat is different, too. I’ve done work in Africa. Same thing, you’d go to a very remote station, and they’d have 24/7 guards sitting at the station. That was more about just making sure people didn’t pilfer the copper and such than it was about security or risk to the control system.

Dan:  They’re also concerned about attacks where they’ve got a large facility. Like it’s happened in Algeria a couple of times where a terrorist group has come in and taken over a facility physically and held people captive, and also shut down the production from that facility. That’s money to the government. Those threats are a lot different than what we think about in our remote facilities.

I doubt that many people in North America worry about that sort of a problem, because they don’t have large facilities where there are lots of people living and that would have that much impact if somebody were to come in and say, “I’m going to take over your pipeline pump station or terminal.” That’s not going to be an issue.

Russel:  Sure. Coming back to focus on the U.S. again, cybersecurity’s such a broad and complex topic. I think technology and approaches, all that’s evolving very quickly as the threat does. What area might give you the most concern? What is it that’s going on right now that you think the pipeline operators aren’t adequately addressing?

Dan:  I think that there are still a lot of risks in remote access. We haven’t had any issues that I know of from third party remote access into a pipeline control center. But I believe that risk is still there.

I believe it’s there just because of things that I’ve seen where a vendor gains access to an operator’s control center systems. He’s on the network, authenticated with full administrator privileges, and can get to everything. He’s coming in from a VPN from his offices. If somebody can exploit his security in his facility and get in through that same route, all of a sudden, now, they’ve got access to everything in your site.

I think there’s some work that we could do on managing and maybe developing better third party access. I know there are exceptional people out there that do this very well. But there are still a lot of maybe not so very good approaches to it.

I also still see control centers — actually not in North America, with CRM [control room management], but in other places — that have one operator username and password for everybody.

They don’t do shift changes. They don’t keep logs of what individual operators are doing. Even if he’s a remotely connected operator, everything [that] goes in the logs looks just like everybody else. They don’t know where it comes from or anything like that.

I think that’s something that might be looked at. Then probably one of the biggest things that’s recently happened is the advent of this TRISIS malware that showed up in Saudi Arabia.

Russel:  For listeners that might not know what that is, can you break that down for us a little bit?

Dan:  TRISIS is a very targeted and specific malware that was built to modify safety instrumented systems, the SIS systems, running on a Triconex 3008 SIS system in refineries in Saudi Arabia. It was made and architected to specifically attack those PLCs at that site. It's a Stuxnet-like evolutionary malware.

It was designed to make those safety instrumented systems fail and potentially cause significant damage to a plant, and perhaps loss of life. Some people are saying, "Yeah, this malware was designed to kill people," but I don't think we can make that leap just yet. But, certainly, the consequence was there if in fact it had gotten to its final stage.

The only reason it failed was because there was a coding error, as I understand it from the documents that I've read. This is the fifth such attack that's caused something to worry about in an ICS system.

We had Stuxnet in the beginning, going down through Crash Override, and now this TRISIS malware. This is an evolution that we’re seeing of malware that’s being developed by well-funded, most likely state actors that want to do damage to the infrastructure or operations in a country or a large corporation.

Russel:  I participate in InfraGard. I read the reports about what's going on with malware. Some of that stuff, you're encouraged to share in a controlled way, so I want to be a little bit careful about what I say. There is a concern that there is sleeping malware in control systems that could be woken up and exploited.

Dan:  There is.

Russel:  That’s a troubling concern. How was TRISIS found? Do you know?

Dan:  I don't know exactly what the mechanism was that led to its discovery, except perhaps a malfunction of the SIS system that led them to investigate. Then they uncovered this. But I don't know the exact mechanism, and I'd rather not try to speculate on that.

Russel:  Sure. That’s the thing that gives me concern is what things might have been put in a control system before you got it more effectively locked down? How do you guard against those things? That’s the thing that would concern me.

Dan:  One of the fears with this malware is that its tradecraft is very good and that that tradecraft could now be used as a blueprint to leverage that same strategy into other targeted equipment and operational technology devices. That’s a big fear, especially as we talked about earlier where we’ve got remote sites that aren’t very well monitored or managed.

If someone gains access, if they have physical access, which I believe this malware had to be introduced by physical access, that’s when you can start seeing these sorts of things happen. Locking down the remote site, as well as keeping your vigilance up are highly recommended.

Russel:  That’s the thing. Once you gain access to the network, you can get anywhere on the network. That’s the thing you have to keep in mind when you’re talking about this.

Given what we've been talking about, and given the fact that pipeline operators, like all of us, have limited resources and it's all about establishing priorities and sequences, what would you recommend that an operator be looking at or expending resources on right now, given this conversation we've been having?

Dan:  I would recommend looking harder at your field infrastructure and how your field infrastructure’s locked down, both from a physical and a cyber access point of view. I would also look heavily at your remote access.

I’d continue to enforce cybersecurity throughout all levels of the organization, from operations all the way down through engineering and field support, through policies and procedures that are well documented and followed up on.

I know we’ve talked about process and procedure and management of change for years and years. But I think there are still areas where it’s lacking, especially when it comes to cybersecurity and keeping everything protected and not making changes that cause new problems like the instance I mentioned where they’d made an architectural change which seemed like a good thing, but by doing that, they introduced a vulnerability.

Russel:  I think a couple things that I would take away from this conversation, one is look at the network and where people may be able to gain access. That’s certainly key.

Then, how do I keep people from gaining access, both physical access and cyber access? We've mentioned some things through the conversation. I think that's one key thing. What you're saying about policy and procedure is huge. You have to design the policy and procedure to be appropriate for the threat.

I drive a Ford F-150 pickup truck. I live in Houston. I frequently carry a briefcase that’s got at least one, sometimes two laptops in it. I have had my briefcase stolen out of my truck twice in the 30-plus years I’ve lived in Houston.

Dan:  Ooo!

Russel:  Twice is not as bad as some.

There’s certain parts of town where I will not go into that part of town and park my truck with my laptop in it. I just won’t do it. There’s other parts where I’ll go, but I’ll lock it up in the back. I’ve got a locking cover to the bed. I’ll pull it out of the truck and put it in the locking cover. There’s other places I’ll go and I’ll leave it in the back seat. I won’t worry about it.

But what I’m driving at here is that’s me understanding what part of town I’m in and what the threat is in that part of town about my laptop getting ripped off. Likewise, I think you have to look at your overall cybersecurity approach and try to get a sense of if somebody were going to do something, where would they most likely attempt to do it?

That can be a little difficult. But there’s certainly experts that can help you walk through those conversations.

Dan:  I completely agree. I think that’s a risk analysis effort. It’s risk analysis based on location.

Russel:  Location and what it has proximity to and how many people are going by that location.

If I’ve got a mainline valve site and it’s on the end of a narrow band Ethernet network and that mainline valve’s in the middle of a pasture and you’ve got to drive through 15 miles of mud to get to it, I probably don’t have a big threat. But if I’ve got an unmanned compressor station that’s operated remotely 24/7, I may have a fairly substantial threat. It’s looking at it that way.

Dan:  When I was working in Spain early in the '90s, those were precisely the kinds of sites we worried about, because we'd have ETA terrorists break into our remote block valve sites and blow up the pipeline. That happened two or three times while I was there.

Russel:  Wow.

Dan:  Fortunately, they don’t have that problem anymore. I went to a number of sites where that had happened. Like I said, it’s location-oriented. You have to determine what the location is, what the threat is, who you’re protecting against, and what measures you have to take for those sites.

Russel:  The other thing is the threat changes. You have to adapt.

Dan:  Exactly.

Russel:  We’re getting to a point where it’s probably a good place we can wrap up this conversation. I want to give you a plug. I know that you are involved with ENTELEC.

In fact, the listeners might be interested to know that you’re the co-chair of the ENTELEC Cybersecurity Committee. The ENTELEC Conference is coming up here in Houston. It’s May 15th through 17th. I know that you’ve got some things planned. As we’re closing out, why don’t you tell us what you’ve got planned for ENTELEC.

Dan:  We’ll be having a Cybersecurity Committee meeting and probably a short presentation on some topics yet to be determined during that committee meeting. Everybody’s welcome to attend that of course.

We're also going to be hosting a cybersecurity roundtable. I believe it's on the last day, in the early afternoon, 1:00 to 2:30. That cybersecurity roundtable will just be an open forum where we'll probably list a couple of topics of general interest that people might want to comment on.

We’ll go around the table, let people talk. We’ll add whatever topics people are interested in and try to get opinions and commentary from those in attendance. [It will be] pretty open, no pressure, no formal presentations or anything in particular.

Russel:  Great. If you have an interest in meeting Dan, learning more about cybersecurity, or, if you don't already know about it, ENTELEC and the conference, we'll put all of that into the show notes. Just go to the podcast website, and you'll find it all linked up there.

Dan, I'd like to say thank you so very much. I really appreciate you bringing yet another perspective to cybersecurity and what we all need to be aware of in that domain. I'd like to have you back at some point. On another episode, maybe, I'm curious to ask you about your most challenging SCADA project.

Dan:  Oh my gosh.

Russel:  That might be a fun conversation to have.

Dan:  [laughs] There’s been a few of those.

Russel:  It might be fun for me, maybe not fun for you. Is that what you’re trying to say?

Dan:  They’re over now. They’re fun to talk about now.

Russel:  They were excellent learning opportunities. We all have some of them.

Dan:  Most definitely.

Russel:  Thanks again. We look forward to having you back.

Dan:  Thank you very much, Russel. I appreciate you asking me and enjoyed the discussion.

Russel:  Hope you enjoyed this week’s episode. I certainly enjoyed the opportunity to catch up with Dan and learn about the state of cybersecurity in the pipeline industry. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinerspodcast.com/win to enter yourself in the drawing.

[background music]

Russel:  If you have ideas, questions, or topics you’d be interested in, please let us know either on the contact us page at the pipelinerspodcast.com website, or you can reach out to me on LinkedIn. It’s Russel with one “l” Treat, just like it sounds, T-R-E-A-T. Thanks again for listening. I’ll talk to you next week.

Transcription by CastingWords

Pipeliners Podcast © 2019