Pipeliners Podcast


Pipeliners Podcast host Russel Treat welcomes new guest Stephen Sponseller from PTC to discuss the important topic of SCADA communications software and where the technology is going.

In this episode, you will learn about edge computing and the importance of capturing data for real-time analysis, how the devices used in the field continue to change and improve because of new technology, and the next big changes coming from AI over the next 3-10 years.

“I think three years from now, we’re going to see this whole thing about edge computing beginning to proliferate, and it’s going to change the way people think about how they build up their automation, communications, and control systems. I think the first earthquake is in the very near term.”

Download and listen to the episode to hear more!

Future of SCADA Communications Software: Show Notes, Links, and Insider Terms

  • Stephen Sponseller is the Director of Oil & Gas Market Insights for PTC, which acquired Kepware in January 2016.
  • SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations. SCADA breaks down into two key functions: supervisory control and data acquisition. This includes managing the field, communication, and control room technology components that send and receive valuable data, allowing users to respond to it.
  • HMI (Human Machine Interface) is the user interface that connects an operator to the controller in pipeline operations. High-performance HMI is the next level of taking available data and presenting it as information that is helpful to the controller to understand present and future activity in the pipeline.
  • OPC (originally OLE for Process Control, now known as Open Platform Communications) is a software interface standard that allows many different programs to communicate with industrial hardware devices such as PLCs. The original standard was dependent on Microsoft Windows before shifting to an open platform.
    • PLC (Programmable Logic Controller) is a computerized system in operations that automates processes that require reliability within a given time period. PLCs are especially useful for pipeliners to automate difficult tasks in the field.
  • Edge Communications is a method of building out the architecture for structured communication from edge devices in the field to a host server using connectivity to transmit the data.
    • MQTT (Message Queuing Telemetry Transport) is a publish-subscribe protocol that allows data to move quickly through the system and does not bog down the system with unnecessary requests.
    • Poll Response is a communication method in which a device only sends data when requested by the host. This creates limitations because you may need to wait up to 15 minutes for the full data package to be received.
    • Modbus is an older protocol that enables communication among many devices connected to the same network. The drawback is delays in the communication, oftentimes creating timestamp discrepancies.
  • IIoT (Industrial Internet of Things) is the use of sensors and connected devices for industrial purposes, such as communication between network devices in the field and a pipeline system.
  • IT/OT convergence is the integration of IT (Information Technology) systems with OT (Operational Technology) systems used to monitor events, processes, and devices and make adjustments in enterprise and industrial operations.
  • Raspberry Pi is an ultra-small and affordable computer that runs on the Linux operating system. Its main industrial use is as an edge device attached to field equipment for more efficient, reliable, and cost-effective data collection.
  • KEPServerEX enables users to connect, manage, monitor, and control diverse automation devices and software applications through a single-server interface.
  • Hysteresis refers to the phenomenon of an output value not directly tracking the corresponding input because of a lag or delay in delivery. In industrial terms, this is a measurement of how much the signal changes between the time an event occurs and the time the data is received for analysis.

Future of SCADA Communications Software: Full Episode Transcript

Russel Treat:  Welcome to the Pipeliners Podcast, Episode 36.

[intro music]

Announcer:  The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations.

Now, your host, Russel Treat.

Russel:  Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time, and to show that appreciation, we are giving away a customized YETI tumbler to one listener each episode. This week, our winner is Jeff Wiese with TRC Solutions.

Jeff, I hope I got your name right. In any case, your YETI is on its way. To learn more about how you can win this signature prize pack, stick around to the end of the episode.

This week, we are very lucky to have with us, Steve Sponseller. I met Steve at a measurement school quite a number of years ago. We’re going to talk to Steve about the future of communications.

Steve, welcome to the Pipeliners Podcast.

Steve Sponseller:  Thank you, Russel. Thanks for having me.

Russel:  You’re with Kepware and have been for some time. While a lot of guys in automation controls know about Kepware, maybe many of our listeners do not. Why don’t you tell us who Kepware is and a little bit about what you do there?

Steve:  Sure thing. Kepware, I guess you could say, started out as an OPC server software company a little over 20 years ago. We grew up in the discrete manufacturing space with our software being used to communicate to automation equipment like PLCs and provide that data to applications above us — as we would say — like SCADA, HMI, and historians.

All along, we have been used in the oil and gas space because there are PLCs throughout the industry: on drilling rigs, wellheads, pipeline stations, terminals, and so forth. We didn’t really market to the industry until the industry came to us.

People like yourself started visiting with us at trade shows and up here in the Portland, Maine, office where we’re located, encouraging us to basically provide more support to the industry. That really started about, I’m going to say, 8 to 10 years ago.

That additional support is communicating to other devices that are specific to the oil and gas industry — like flow computers — and capturing not just the real-time data that we normally would capture in the OPC world, but also capturing the measurement data that those flow computers are calculating and storing locally.

Russel:  You’ve used a couple of buzzwords, so I want to help the listeners out. One of them is OPC. What is OPC?

Steve:  It is a standard that came out from the automation industry many years ago, probably about 20 years ago. It originally stood for OLE for Process Control. Whoever understands exactly what that is gets a free YETI, right? [laughs]

Russel:  Actually I can answer that question for you. OLE stands for Object Linking and Embedding.

Steve:  Right.

Russel:  It’s a technology that was built by Microsoft. It’s the technology that allows you to take an Excel spreadsheet and embed it into a Microsoft Word document.

Steve:  Very well said. Yeah, so it’s that type of functionality in the automation space. The problem was that all these different applications — the historians, the SCADA applications, and so forth — if they wanted to communicate to someone else’s piece of equipment, they had to basically write the drivers themselves, the proprietary drivers to communicate.

That’s just something that really bogged them down. The hardware wasn’t necessarily open. They didn’t really want to give out that proprietary information, the protocol specifications.

OPC now stands for Open Platform Communications. In the end, OPC is a generic standard that allows these applications to worry about supporting one protocol — basically this one standard — and allows them then to communicate to all of these different types of data sources or devices.

That’s where Kepware would be in the middle, writing the drivers to communicate to all the different types of hardware and then providing that interface up to the applications via OPC.

Russel:  That’s actually how we originally got involved. We were looking for a tool to do what’s called measurement data collection. It’s basically collecting the audit trail from a flow computer and bringing it back to the measurement accounting group.

That’s very specialized. Every one of the manufacturers has a different, unique protocol for supporting that. That’s how we originally got involved with you guys. You guys speak that kind of geekdom.

Steve:  Thankfully, our engineers, that’s what gets them up every morning.

Russel:  I know. I’ll say this. It’s rare I find somebody that can go really, really deep in talking about communications protocols, but I have found them in abundance at Kepware.

The reason I asked you on is I wanted to talk about what you guys are doing that’s pushing the state of the art around communications, in particular, around something that we did a number of years ago where we wrote a white paper on what we were calling then “distributed communications.”

The industry word now for that’s “edge communications.” Maybe you can talk a little bit about what is edge communications and why does that matter.

Steve:  Sure. Let’s start by talking about what are communications today, the traditional architecture of communications. You talked about polling that measurement data and the operational data from all these PLCs and flow computers out there in the field.

Again, going back to Kepware’s traditional world where we’re in the four walls of a factory, to me, that’s almost easy. You have Ethernet communications within a factory, no wireless communications, at least not traditionally, and pretty easy at the end of the day to get your data into your applications.

Flip over to oil and gas and other types of industries that are really spread out. Not only do you have different types of telemetry — radio, or cellular, or even satellite communications that traditionally have pretty low bandwidth, some latency, and other challenges like weather to deal with — that’s the network you get to deal with.

Then you have sometimes hundreds or even thousands of these devices spread out across your fields. That is almost like an inverse model of that manufacturing factory example, and it’s a real challenge.

You come across a SCADA guy in the industry, and it’s almost like there’s an art to his job of fine-tuning the communications to make the traditional architecture work. That architecture is a centralized host — that’s what we call the SCADA system — plus the polling engine that goes with it, out there trying to communicate to all those thousands of devices. At the end of the day, it has to reach out to each device and say, “Hey, for this particular data point, what’s the latest?” A lot of times, the data point comes back as not really having changed much.

You could almost think of that as being like a wasted communication on your bandwidth there in the network, but you still had to make that poll response in order to find out that the data point hasn’t changed.

That’s the traditional architecture. This is what we were talking about that one day, Russel, when we started working on this outline of the white paper that we eventually wrote where we talked about distributed communications.

Instead of having your polling engine be located in Houston, or Denver, or wherever your enterprise is located, pushing that polling engine, or pushing the communications out closer to the devices — and this is where the term “edge” comes from — you’re pushing it to the edge, so that you are more directly connected to those devices. You have a much better path to the communications.

Now that you have that communications server out at the edge, it can change from being a poll/response type communications architecture to more of an unsolicited or publish/subscribe type of communications where the server will just broadcast out any updates.

If the point hasn’t changed, there’s no update to provide, and the applications above can just subscribe to the updates that do come.

There’s a message broker in between. MQTT is one of these types of protocols the industry is really turning towards that has this publish-subscribe way of communicating. Also, we didn’t really talk about security yet, but MQTT is a more purpose-built protocol when you consider security, so you’re able to broadcast your information, your data, across these networks in a secure fashion.
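
To make the contrast concrete, here is a minimal sketch of the publish/subscribe, report-by-exception pattern being described. This is plain Python with an in-memory stand-in for the broker rather than the actual MQTT protocol; the topic name and the Broker and EdgeServer classes are illustrative.

```python
class Broker:
    """Routes published messages to topic subscribers (a stand-in for an MQTT broker)."""
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(topic, payload)


class EdgeServer:
    """Publishes a point only when its value actually changes (report by exception)."""
    def __init__(self, broker):
        self.broker = broker
        self.last_values = {}

    def report(self, topic, value):
        if self.last_values.get(topic) != value:
            self.last_values[topic] = value
            self.broker.publish(topic, value)


broker = Broker()
received = []
broker.subscribe("station1/pressure", lambda topic, value: received.append(value))

edge = EdgeServer(broker)
for reading in [101.2, 101.2, 101.2, 101.5, 101.5]:
    edge.report("station1/pressure", reading)

print(received)  # only the changed values crossed the "network": [101.2, 101.5]
```

Five readings came in, but only two messages were published; the steady readings never consumed bandwidth, which is the whole point of the pattern.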

Russel:  One of the things that probably people don’t realize is that these technologies when they’re originally built, they’re built with an idea that it’s going to have a life cycle of 25 or 30 years. When I first entered the industry, Modbus — it’s a poll response protocol — it was everywhere.

Modbus is a very simple protocol. Modbus was built for people running two- or three-wire copper between a device and whatever it was talking to. There was no way, unless you physically got on the wire and electrically spliced into it, that you’d ever get at that communication.

Now that we’re moving that stuff through the air and encapsulating it inside TCP/IP network protocols, all of a sudden you realize nobody ever contemplated security in Modbus. The newer protocols like MQTT have security contemplated from the time they’re created, so they’re different in their approach to communication.

Without getting down super deep into the technical weeds, it raises the question, why would I want to move my communications server to the edge beyond just the ability to just push back changes to the host? What’s the other value of doing that communications at the edge?

Steve:  We talk about the challenges of the centralized host approach, again, talking about the thousands of devices that are out there. There just simply isn’t enough bandwidth to be able to communicate to all of them and bring back a lot of the “SCADA data” and measurement data that everyone has been struggling to collect over the years.

We haven’t really talked about it yet, but there’s IoT out there, the Internet of Things. That could mean many things to many people, but companies that want better overall insight into what’s going on in their field operations require more data collection.

You might be using that data for machine learning to do predictive maintenance. You might be wanting to increase your efficiencies of your operations and your production.

The point is that it was challenging enough to get data just for SCADA purposes, for supervisory control and data acquisition. Now we need even finer-grained data, more data, to be able to do some of these awesome applications out there that, again, just require a lot more data.

The requirements for data continue to grow, and the networks are getting better, but it’s still a challenge. However, being able to push that data collection to the edge, along with some analytics, can definitely help. When we talk about analytics, that can mean many different things.

Even if you’re doing some data reduction, doing some computations, even just real simple computations out at the edge, and then passing the results across the network with this better, more secure protocol approach, that’s going to allow the industry to get to where they’re able to collect the amount of data required for IoT, or whatever you want to call it.
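
As a rough illustration of this kind of edge-side data reduction, the sketch below averages raw samples over a window and applies a deadband so that only meaningful changes are forwarded across the network. The window size and deadband values are illustrative assumptions, not anything Kepware-specific.

```python
def reduce_at_edge(samples, window=4, deadband=0.5):
    """Average each window of raw samples, then forward a result only when it
    moves more than `deadband` away from the last value that was forwarded."""
    forwarded = []
    last_sent = None
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        avg = sum(chunk) / len(chunk)
        if last_sent is None or abs(avg - last_sent) > deadband:
            forwarded.append(round(avg, 2))
            last_sent = avg
    return forwarded


raw = [50.0, 50.1, 50.0, 49.9,   # steady
       50.1, 50.0, 50.2, 50.1,   # still steady: suppressed by the deadband
       52.0, 52.1, 52.2, 52.1]   # a real change: forwarded
print(reduce_at_edge(raw))  # [50.0, 52.1]
```

Twelve raw samples become two forwarded values, and the applications upstream still see every meaningful change.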

Russel:  Steve, you and I have obviously talked about this a lot over the years that we’ve known one another, so I talk about it this way.

I’ve talked about this on other podcasts about how much data do you need to collect and that kind of stuff. Bringing the data back to the host, and what I need is different than what I might need in the field, I’ll try to talk about this in terms of what do I need.

At the machine, I need the information about the machine that’s telling me how it’s working. If I’m hooked up to a compressor, I’m going to want RPM. I probably want to pick up some firing information on the actual engine.

I probably want to pick up pressure at the millisecond level, so I can see the pressure curve inside the cylinder. All that information is going to tell me about the machine health, but I don’t need that to run the business.

If I’ve got a compressor station, I probably need things like suction and discharge pressures. I probably need that maybe on a one-minute basis. I don’t need all that other information to operate the station.

When I come back to the business, I probably need hourly average suction and discharge and some totalized flows. The nature of the data I need is different, but the ability to get that really high concentration data at the edge gives me the ability to do these advanced analytics.

I think one of the places we’re going to go is I’m going to send the pressure back to the host for the operators in the control room. I’m going to send information about the pressure, like how much it has changed in the last minute, what is its hysteresis — which is a fancy word for how much the signal is changing in small units of time — which gives me an idea of the quality of the instrument data.

I think what we’re going to do is we’re going to get a lot more data at the edge. We’re going to process that data and apply math to it and analytics. Then we’re going to give you the pressure and then a few things about the pressure that are going to tell me other things.
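
The idea of sending a value plus information about the value could be sketched like this. The field names are hypothetical, and "spread" stands in for the loose sense of hysteresis used above: how much the signal moves within a short window of time.

```python
def describe_signal(samples):
    """Summarize one minute of high-rate pressure samples into the latest value
    plus a few derived numbers about how the signal is behaving."""
    return {
        "value": samples[-1],                     # latest reading for the operator
        "average": sum(samples) / len(samples),   # one-minute average
        "delta": samples[-1] - samples[0],        # net change over the window
        "spread": max(samples) - min(samples),    # signal movement ("hysteresis" above)
    }


minute_of_pressure = [830.0, 830.4, 829.8, 830.1, 831.0, 830.9]
summary = describe_signal(minute_of_pressure)
print(summary)
```

A spread near zero on a live process can hint at a stuck instrument, while a large one can hint at noise, which is the instrument-quality signal being described here.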

Steve:  Russel, that was a great point that you had made about the different forms of the data, or the different information that a various role or a persona would want. The technician there at the machine itself, he wants various types of data and ways of looking at that data.

At the compressor station level, they want to be looking at all of their machines and the overall efficiencies of their operations. You keep walking it back to the enterprise and they are certainly going to be looking at different KPIs of their overall operation.

At the end of the day, the data is still out there in that compressor. It’s just that traditionally it was pretty difficult to get that data to the various folks in the enterprise, again, these different “personas” as we’re calling them these days.

They used to have to pull it out of the SCADA system to try to get that SCADA data out to the other parts of the organization. Sometimes it would be in spreadsheets and delayed. This is where we start talking about the traditional IT/OT separation.

We’re trying to bridge the OT world and the IT world so that you can get data throughout your organization. It’s not just getting raw data. We want to have this data be contextualized so that it’s more meaningful to the people, to the personas that need that data.

The CEO is going to have certain KPIs he’s looking for. The financial guys, the production managers, the facility operators, down to that technician, they all need access to that local data. They don’t just want the raw data. They want it to be contextualized.

Russel:  Let’s talk a little bit about edge computing. From there, we’ll lead into what Kepware is doing in that domain.

We’ve talked a lot about data collection at the edge, but there’s also these devices that are called edge computers. Maybe you could tell us what an edge computer is, who’s making them, and what’s the state of the art with those things?

Steve:  Everyone knows about the Raspberry Pi, or at least a lot of people do. That’s just a great example of a very inexpensive, small-footprint device that has some compute power on it. Moore’s Law tells us that computing power keeps doubling every couple of years even as the chips keep shrinking, and the price gets less and less expensive as well.

The idea: instead of having this high-powered, centralized server in your data center hosting your polling engine and your SCADA host system, you push the data collection out into these smaller devices that are less expensive.

You put perhaps hundreds of these devices out there, directly connected to the equipment, so that you no longer have those challenges of dealing with somewhat unreliable, low-bandwidth networks. The data collectors are now directly attached to the equipment, so you have a much better chance of not losing any of your data.

Russel:  You mentioned Raspberry Pi. Anybody who’s an automation geek is going to be familiar with Raspberry Pi. Who are the other people that are making these edge devices?

Steve:  A lot of your traditional server manufacturers are out there — HPE, Dell, etc. It doesn’t have to be a server-type device. It could be a router or a modem from Cisco. Basically, a lot of these devices out there have compute power embedded in them and, like I said, sit right next to the equipment, and that’s what we’re now considering the edge.

Russel:  What operating systems are these devices running?

Steve:  Traditionally, it’s not going to be your Microsoft operating system, even though historically many applications, including KEPServerEX, were written for the Microsoft OS. These devices that are out there are now coming with a different operating system, usually some form of Linux.

Russel:  To me, this is an interesting question. It’s hard to say where the future is going to go. If you think about PLCs, RTUs, and the devices we’ve typically used in the field, they’re not really computers in the sense of a Windows machine or a Macintosh running a general-purpose operating system.

They’re chipset-based. Whatever the chipset is, you program to the chipset. To say that another way, it’s a really low level of programming. That means all the automation devices have their own proprietary development tools and approaches. When I go to an edge device, though, an edge device runs a general-purpose operating system.

The thing that’s interesting about Linux is that it’s an open-source operating system. You can run it on server-class machines, and there are people that do. One of the advantages of Linux is that, because it’s open source and I can make my own modifications to it, I can do things to lock it down from a security standpoint that I can’t do if I’m running a Macintosh or Windows operating system.

The other thing is the power of the Macintosh or Windows operating systems: I can write code, and when I write code for that operating system, it runs on any computer that runs that operating system. This whole edge computing thing will take some time to play out, and it’s hard to say exactly where it will go.

It’s a big threat to the automation devices. Steve, you might have a different take on this, but I think automation devices were basically developed for electricians to replace electrical control panels, where you wired up the controls and physically put the relays in. They were originally built to do that function. The programming environments look like electrical wiring diagrams.

Nowadays, with everybody coming up through school, you don’t go through even high school these days, take any advanced class, and not learn how to write code. Everybody knows how to write code, at least at some level.

If I can put a device in the field that runs a generic operating system, I can now write software to do all the logic. That’s a big deal. That’s a game-changer, in my opinion.

There’s not a lot of people doing that for process control yet. Mostly, they’re doing that for analytics, but I certainly think that’s coming.

Linux tends to be proliferating because you can make it a much smaller platform, a smaller instance of the operating system, which makes sense for these edge devices. There’s Windows edge devices. There’s micro-OS edge devices. But Linux certainly seems to be where the future is going at the edge.

Steve:  Yeah, and at Kepware specifically, we have fielded many, many inquiries, people pushing us, basically, to provide a platform-independent version of our software, so that it can run on any type of platform. That is actually something we have been working on for quite some time. Hopefully, by the end of the year, we’ll be able to start putting some of this new version of our software out in the field and get it into people’s hands to start playing with it.

Russel:  I wish I could go back and listen to the conversation you and I originally had about distributed computing, me pontificating about what I thought the future was going to be, and why I thought that future made sense.

I remember going to Kepware, whiteboarding out what I thought this was, and everybody tilting their heads like, “Who’s this weirdo, and where is he coming from?” [laughs] Here we are, six years later. Of course, to some degree, it’s different.

I find it really interesting how these things play out. Now, here we are, and you guys are about to bring out a new version of Kepware that’s going to run on Linux at the edge. I think that’s a game-changing development in our industry.

I have no idea how people use that, or where it’ll go in the future. It certainly is very compelling.

Steve:  The future is definitely interesting. My seven-plus years here at Kepware have been nothing but interesting, and that just doesn’t stop. This is a really exciting time for us. We talked about some edge-based analytics. Again, I’m not 100 percent sure what the future holds, but we will be providing some form of analytics at the edge as well, so that we can do things like data reduction, so we’re not passing data points across the network that aren’t really necessary in their raw form.

Russel:  That’s a big conversation. [laughs]

Steve:  Yeah, and like I said, it’s tough to look in the crystal ball, and be able to tell you exactly what things are going to look like. We’re getting approached from many different angles, people wanting to do many different things.

That’s the interesting part of Kepware: we’re used in so many different industries and so many different applications that it takes a really strong product management team to keep the course and not build what we would call Frankenstein products, but more general products that people can take and make their own solutions with.

Russel:  That actually brings up an interesting conversation. What would be your definition of a Frankenstein product?

Steve:  If someone came to us and said, “Hey, Kepware, I want to put your software out at the edge, and I want it to do specifically this, so I can get this specific data into my specific application. It’s going to require some enhancements on your part, and doing things a little bit differently than you normally do. I’m going to give you a large order for it and get you all excited. Basically, stop doing what you’re doing and work on this.” That’s what we call a Frankenstein project. If that were our product management approach, it would be driven by whoever placed the last big order.

You’re like, “Okay, we’ll go do that for you.” Then another big order comes along that wants us to do something totally different. “Okay, we’ll go do that.” Then five years later, we have all these different capabilities, but no real cohesiveness.

We don’t want that type of experience for our users. We want to give them a tool, something that they can use to build their solutions. That’s been the Kepware philosophy all along.

Russel:  Having been in those kinds of conversations with you guys, and fairly deeply, it’s really interesting to me. We were talking earlier about this concept of architecture. Architecture in information systems is how I put things together.

Whenever I build an architecture, until I build a new architecture, I become a slave to it. It creates possibilities, but it also creates constraints. I remember having the conversation with you guys way back about moving to the edge.

I’m like, “Why can’t you just do that now?” You guys tried to explain that to me, and I think I eventually got it. I think you guys do a really good job of that. One of the things that people will say about Kepware is, “You just put it in, it just works.”

Steve:  Exactly. That’s because it is just a general, all-purpose tool, as opposed to a tool customized for a few customers. You look at Kepware’s customer database, and it’s awesome. One time, we ran a report of the top 100 companies by size and found out that we were in maybe 60 percent of those companies. Then we realized those companies included banking, healthcare, and other non-industrial types, so we looked at just the IndustryWeek 500 list and took their top 100 companies.

It turned out we were in 96 percent of them. It’s pretty incredible when you think about how spread out our user base is. It’s also very challenging to make sure that, at the end of the day, when they download our software from our website, they’re able to understand it and use it.

We can’t possibly, from a technical support standpoint, speak to all of these customers using our software. We had to make it very easy to use, with great documentation, and again, something they can take and carry further.

Russel:  Where do you think this future is headed? If you were going to try and stare into the crystal ball, where do you think we’re going to be in 3 years, and then in 10 years, related to this edge computing?

Steve:  [laughs] Well, our very first release of the Linux version of Kepware, specifically built for the edge, will just be data collection for our top three selling drivers. That’s going to be our Modbus driver, like you had mentioned Modbus before, our Allen-Bradley suite, and our Siemens suite.

Those are our top selling drivers. I’m sorry that this is a Kepware specific answer, but aside from that, we’re just going to be starting out with data collection. That’s what I’m getting at.

Then we’ll be moving to more edge based analytics. Hopefully, three years from now, some decisions are being made locally.

AI-type decisions, so not necessarily a human being involved in every decision. Then even further out from that, at what point do the machines start taking over? We’ve put so much intelligence out there, and again, think about how much compute power you can put in a small footprint.

The applications themselves, how they’re coming along, AI in general, to the point where these machines are learning about themselves as they operate and fine tuning themselves. Like I said, that’s a whole ‘nother discussion.

Russel:  People will talk about advanced AI, what that means for the human race, and all that kind of stuff. I am certainly not a fatalist around all that. I’ve been around software too long. I understand its limitations.

I think you’re right, that initially, it’s going to be about just doing more kinds of data collection. Then it’s going to be about generic data analytics. What I mean by that is simple statistics, doing averages, means, deviations, and some simple statistical analysis to see how the number’s changing.

Now, I’m going to send you a number, and then I want to send you information about the number. I think the more interesting question is, how is all this data going to get contextualized for the different users?

That’s a very different problem than what we’ve historically done with understanding, managing, and utilizing data. Until we get the data models fairly rigorous and very well understood, it’s going to be hard to do a whole lot.
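
A minimal sketch of that generic analytics step, a number plus simple statistics about the number, using only Python’s standard library. The noise threshold here is an illustrative assumption for flagging a questionable instrument.

```python
import statistics


def number_with_context(readings, noise_threshold=2.0):
    """Return the latest reading plus basic statistics, flagging a noisy signal."""
    stdev = statistics.stdev(readings)
    return {
        "latest": readings[-1],
        "mean": statistics.mean(readings),
        "stdev": stdev,
        "noisy": stdev > noise_threshold,  # crude instrument-quality hint
    }


steady = [600.0, 600.2, 599.9, 600.1, 600.0]
jumpy = [600.0, 604.5, 597.2, 603.8, 598.9]
print(number_with_context(steady)["noisy"])  # False
print(number_with_context(jumpy)["noisy"])   # True
```

The host still gets one number to display, but the "information about the number" rides along so different personas can judge its quality.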

Steve:  Something we haven’t mentioned, just to put it out there in case people haven’t already realized it or heard, is that Kepware did get acquired about two-and-a-half years ago. We were acquired by another software company called PTC.

They have gotten into the IoT business as well. They also acquired an IoT platform called ThingWorx. Now, Kepware is part of the ThingWorx platform. Thankfully, we still have our own branding and the standalone application we’ve always had.

We’re now also part of this ThingWorx platform. It is trying to solve those problems that you just talked about, Russel. You know, being able to contextualize the data for many different types of personas, with rapid application enablement, so that you can quickly build these KPIs or applications depending on, again, what your persona is and what you want to do with that data.

It is a big problem, like you had said.

Russel:  There are some tools out there. PTC and ThingWorx is certainly one of the big ones, but there are many others. There are probably 15 or 20 notable ones, not to mention all the other smaller ones that are out there doing similar things.

I did a deep dive. It’s been almost a year and a half now since I did that deep dive into IoT and data analytics. I think we’re still very much in the infancy of this technology, but I do think that we’re going to start doing some really interesting stuff, and very soon, at the edge.

I think three years from now, we’re going to see this whole thing about edge computing beginning to proliferate, and it’s going to change the way people think about how they build up their automation, communications, and control systems. I think the first earthquake is in the very near term.

Steve:  The pace of technology has just been amazing.

Russel:  It’s mind-boggling. It’s just absolutely mind-boggling.

Steve:  Hold on. It’s a wild ride.

Russel:  Right. We mentioned earlier that we wrote this white paper about six years ago. Here you guys are, rolling this product out. Even from concept to implementation, it takes a while. If you look at something of that big a change, 30 years ago, it would have taken a company 15 years to get that far.

Now, it’s six. It’s just going to continue to accelerate. Look, one of the things I like to do, and I don’t do this with every episode, is sometimes try to say, what are my three key takeaways? I’d like to do that.

I’m going to try and summarize the episode with three key takeaways, and I’m going to see what you think about them. The first thing is this whole thing with edge computing. The only thing that’s really new about edge computing is that I’m going to have the ability to deal with a whole lot more data.

Instead of dealing with data I collect once a minute back to the host, or once every five seconds back to the host, I’m going to be able to collect data at the hundred millisecond, or even more frequent, level. I’m going to get a lot more data.

That’s going to lead to the ability to do analytics and processing on that data to understand how machines are performing, and to drive maintenance activities. Take things that we’ve done at the host. Those are going to migrate out to the field. We’re going to use that high-concentration data.
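The edge pattern Russel describes, sampling at 100 ms locally while the host still only polls once a minute, can be sketched as downsampling: the edge device keeps the high-rate readings to itself and reports only a summary upstream. The rates, signal values, and function names here are illustrative assumptions, not from the episode:

```python
import random
import statistics

# Hypothetical edge-side downsampling: sample every 100 ms locally,
# report one summary per minute back to the SCADA host.
SAMPLES_PER_REPORT = 600  # 600 samples x 100 ms = one minute

def summarize(samples):
    """Reduce a minute of high-rate readings to what a host would poll."""
    return {
        "mean": statistics.mean(samples),
        "stdev": statistics.pstdev(samples),
        "max": max(samples),
        "count": len(samples),
    }

# Simulate one minute of noisy 100 ms readings around a 50.0 setpoint
random.seed(0)
batch = [50.0 + random.gauss(0, 0.5) for _ in range(SAMPLES_PER_REPORT)]

# Only this small summary crosses the (often bandwidth-limited) link
report = summarize(batch)
```

The point of the sketch is the asymmetry: the edge sees 600 readings per minute and can run analytics on all of them, while the link back to the host carries only a handful of numbers, which is why work that used to live at the host can migrate to the field.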

Two, there’s going to be a big evolution in the machines themselves, the edge computers. Then three, other than knowing it’s going to be a tectonic change, it’s very difficult right now to understand exactly what that means in our business.

That’d be my three takeaways. What do you think? Do you think that that’s a good summary?

Steve:  I think you’ve done a great job of wrapping up this entire conversation.

Russel:  Look, I want to say thank you for coming on and being our guest.

Steve:  Oh, my pleasure.

Russel:  I’m certain there are other extremely geeky conversations we can have in the future. We’ll capture those and share them with the listeners as well. Thanks again, and we look forward to having you back in the future.

Steve:  That sounds great, Russel. Thanks again for inviting me.

Russel:  I hope you enjoyed this week’s episode of the Pipeliners Podcast, and our conversation with Steve Sponseller. I certainly found it informative, and I’m looking forward to seeing what other cool stuff these guys come up with.

Just a reminder before you go, you should register to win our Pipeliners Podcast customized YETI tumbler. Simply visit pipelinerspodcast.com/win, and enter yourself in the drawing.

[background music]

Russel:  If you have ideas, questions, or topics you’d be interested in, please let me know, either on the Contact Us page at pipelinerspodcast.com, or you can reach out to me directly on LinkedIn. My profile is Russel Treat.

Thanks for listening. I’ll talk to you next week.

Transcription by CastingWords

Pipeliners Podcast © 2020