This week’s Pipeliners Podcast episode features first-time guest Doug Fisher of Rogue7 discussing the key technology elements of using analytics and machine learning to support the pipeline control room.
In this episode, you will learn about how analytics and machine learning are applied to pipeline control, how they can add value and improve your ability to operate a pipeline, and how to separate safety data from operations data. You will also hear Doug’s prediction on the future of analytics and machine learning in the pipeline industry over the next 5-10 years.
Pipeline Control Room Technology: Show Notes, Links, and Insider Terms
- Doug Fisher is the president of Rogue7. Connect with Doug on LinkedIn.
- Rogue7 is a machine-learning company focused on improving pipeline operations that will lead to safer, more efficient utilization of pipelines around the world.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at a remote location.
- HMI (Human Machine Interface) is the user interface that connects an operator to the controller in pipeline operations. High-performance HMI is the next level of taking available data and presenting it as information that is helpful to the controller to understand present and future action.
- Machine Learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
- TensorFlow is a free and open-source software library for machine learning. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
- Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.
- Python is an interpreted, high-level, and general-purpose programming language.
- PLCs (Programmable Logic Controllers) are programmable devices placed in the field that take action when certain conditions are met in a pipeline program.
- The Gas Law is defined by the fundamental equation of PV = nRT, where pressure (P) times volume (V) equals moles of gas (n) times gas constant (R) times temperature (T). The units are arbitrary and are accommodated by the value of the gas constant R, which is different for every set of units.
- PID (Proportional-Integral-Derivative) controller is a control loop mechanism that continuously calculates the error between a desired set point and a measured process variable, and applies a correction based on proportional, integral, and derivative terms. PID loops are widely used in pipeline operations to hold pressures and flow rates at their set points.
- Digital Twin is a digital replica of a physical asset, process, or system. In the pipeline industry, the machine learning makes a model of the pipeline.
- Data Wrangling refers to how data is prepared during data analysis and model building.
- Leak Detection is the process of monitoring, diagnosing, and addressing a leak in a pipeline to mitigate risks.
- Leak Detection Systems (LDS) include external and internal methods of leak detection. External methods are based on observing external factors within the pipeline to see if any product is released outside the line. Internal methods are based on measuring parameters of the hydraulics of the pipeline such as flow rate, pressure, density, or temperature. The information is placed in a computational algorithm to determine whether there is a leak.
- Leak Prevention is the study and practice of reducing the number of incidents that release oil or hazardous substances into the environment and limiting the amount released during those incidents.
- Stress Corrosion Cracking (SCC) is the growth of cracks in a corrosive environment. The presence of SCC can lead to the failure of a pipeline under stress, especially in extreme temperatures.
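As a quick illustration of the Gas Law entry above, here is a minimal Python check of PV = nRT (the numbers are illustrative, not from the episode):

```python
# Ideal gas law: P * V = n * R * T
# Using SI units: P in pascals, V in cubic meters, T in kelvin.
R = 8.314  # gas constant, J/(mol*K) for this unit system

def pressure(n_moles, volume_m3, temp_k):
    """Solve PV = nRT for P."""
    return n_moles * R * temp_k / volume_m3

# 1 mole at 273.15 K in 0.0224 m^3 comes out near 1 atm (~101,325 Pa),
# which is the familiar molar-volume sanity check.
p = pressure(1.0, 0.0224, 273.15)
```

As the glossary notes, the units are arbitrary as long as R is chosen to match.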
Pipeline Control Room Technology: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 154, sponsored by Gas Certification Institute, providing training and standard operating procedures for custody transfer measurement professionals, now offering online interactive and instructor-led training. Find out more about GCI at gascertification.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time. To show that appreciation, we give away a customized YETI tumbler to one listener each episode. This week, our winner is Kyle Freeman with Cimarex Energy. To learn how you can win this signature prize pack, stick around till the end of the episode.
This week, we have Doug Fisher, President of Rogue7, joining us to talk about analytics, machine learning, and digital twins for the pipeline control room. This is right up my alley. I warn you, it could get geeky. Doug, welcome to the Pipeliners Podcast.
Doug Fisher: Thank you, Russel, for having me. This is quite an honor.
Russel: I appreciate that. I’m looking forward to this conversation. I think we may have an opportunity to redline our geek-o-meter. I always look forward to those opportunities. Before we dive in, let me ask you to do this. Could you tell us a little bit about yourself and your background and how you got into pipelining?
Doug: Okay. My name is Doug Fisher. I’ve been involved in SCADA for 35 years. Some of that has been architecting the software architecture of the SCADA systems themselves. Some of it has been in installations. I have experience around the world. I’ve been to Singapore, China, Taiwan, the Middle East, as well, of course, as Canada and the U.S.
With that experience is really where I found that there’s a need for applying analytics to SCADA systems, but SCADA systems really aren’t ready for it. That’s why I started up Rogue7 to apply machine learning to the oil and gas space.
Russel: Let’s unpack that a little bit. First, let’s talk about what is analytics. That’s the buzzword du jour around technology these days. What is analytics?
Doug: Really, analytics can be such a wide variety of things, but there’s a bunch of data that SCADA systems are gathering on your pipeline. There’s things like the pressures and the temperatures, and it’s called time-series data, because it’s all coming in over time, at various speeds.

That data is just waiting to be analyzed, waiting to produce some insights that a human has a lot of trouble condensing down into something that’s useful. You can apply statistics, some machine learning, or simple math.
Really, analytics, where it gets fun is where you can apply machine learning and actually make some predictions of the future or detection of things that are abnormal. That’s really where I think analytics come into play.
Russel: Analytics, I’m going to try and restate this in a little bit simpler language. I encourage you to not get simple. Just this is me trying to learn it. Analytics is basically doing math and statistics on a time series dataset.
Doug: Yes, for sure.
Russel: I’m looking at what are the averages, what are the highs, what are the lows, what are the standard deviations? Then I’m doing other kinds of more advanced analysis of that series of numbers to try and learn things.
Doug: For sure. There’s the simple case of sometimes an operator is tasked with maintaining an average flow rate on the system. They don’t have an average flow rate. All they’ve got is the current flow rate. Of course, you want to produce a calculated average for them, so they can see it and tell how well they’ve been doing over the last several hours.
Then there’s the question of the future. Can you apply machine learning and actually tell them what the flow rate is going to be in a little while? That’s where the advanced analytics are going to come into play.
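The calculated average Doug describes is essentially a rolling mean over the incoming time series. A minimal sketch (the window size and readings are hypothetical):

```python
from collections import deque

class RollingAverage:
    """Maintain an average of the last `window` flow-rate samples."""
    def __init__(self, window):
        self.samples = deque(maxlen=window)  # old samples fall off the back

    def update(self, flow_rate):
        self.samples.append(flow_rate)
        return sum(self.samples) / len(self.samples)

# Feed in periodic flow readings; the operator sees the trailing average.
avg = RollingAverage(window=4)
readings = [100.0, 110.0, 90.0, 100.0, 120.0]
trailing = [avg.update(r) for r in readings]
```

The SCADA system only shows the current reading; this derived value is the kind of simple analytic the operator can't easily compute in their head over several hours.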
Russel: I’m going to ask you to define for me machine learning. What is machine learning, distinct from analytics?
Doug: Machine learning is really using tools like TensorFlow, Jupyter Notebooks, and Python to analyze the data and create a neural network that really has some intelligence. It’s based on the intelligence that the human brain has.

Can you apply it to the data that you have? That’s where I think the big difference is. Machine learning is really applying intelligence instead of just statistics.
Russel: Right, yeah. That’s a mouthful. I’m a math geek, and I’m one of these guys that loves statistics. If I had unlimited free time, I would be doing machine learning and trying to uniquely apply it to stocks, trading, and that kind of thing.
Basically, this is the same kind of thing that people have been doing for a long time around trying to forecast the price of a commodity or forecast the price of a stock based on past data.
Doug: Yep, that is. Certainly, machine learning has been applied to the stock market. The stock market isn’t nearly as predictable as a pipeline is, because a pipeline — you put more stuff in, more stuff comes out. You increase the pressure, more flow rate occurs. There’s definite physics involved that can be used to predict.
Russel: Yeah, exactly. The technical term being it’s constrained, right? It’s an equation that’s constrained.
Doug: It is. Sometimes there’s PLCs in there. It may not be physics, really, but maybe the PLC opens the valve when the pressure gets higher than a certain limit. Machine learning can learn that just as well as it can learn PV equals nRT.
Russel: Right. Machine learning is basically taking the analysis and using it to predict the future state. You also use neural networks, so maybe we should talk a little bit about what a neural network is.
Doug: A neural network is modeled on the neurons in your brain. They take thousands of inputs, and they apply a weight to each one of them, saying, “I’ll take a whole bunch of this, a little bit of that, and some of that. I’m going to add them all together and produce one output.”
Now, that by itself is simplified, but that’s how your brain works. As you get a network of several things all working in parallel, you can produce amazing results based on just adjusting the weights for each of these neurons, each of these inputs.
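The neuron Doug describes, with a weight applied to each input and everything summed into one output, can be sketched in a few lines (the weights and inputs here are arbitrary):

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, squashed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# "A whole bunch of this, a little bit of that": the weights set the mix.
out = neuron([1.0, 0.5, 2.0], weights=[0.8, 0.1, -0.3], bias=0.0)
```

A network is just many of these wired together in layers; the "amazing results" come from adjusting the weights, which is what training does.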
Russel: That’s the learning part of machine learning, is knowing what the inputs ought to be and what the weights ought to be to apply to those inputs, right?
Doug: Yeah, it’s very much a training process. This is where machine learning, and the hardware to do machine learning, is changing every month. It’s becoming better. This is where the training happens.
You have to basically reward the algorithm when it’s doing a good job and punish it when it’s not doing a good job. It will learn and adjust the weights and biases as it gets more data, until it gets a good answer.
Russel: Doug, how do you reward and punish an algorithm? This is fascinating. How do you actually do that?
Doug: In the end, it’s called a loss function. It’s all mathematics in the end. You basically take all your inputs, run it through the neural network, and you get an output. You run it through a loss function and say, “That one is really bad, or it’s just a little bit bad.” It’s a number. What is the loss? Some number between zero and a million.
Then the software for TensorFlow, for machine learning, basically takes that number and says, “Oh, I’m going in the wrong direction. Let me adjust all my weights just a little bit down a bit, and I’ll see if that makes it better.”
They say, “Oh, it’s only 900,000 now. I’m going in the right direction.” That sense of going in the right direction is really where the learning speeds up, because there are so many numbers. We’re talking 10,000 numbers whose right values have to be figured out.
By doing a little bit at a time, and you have enough data for it to learn from, it gets you answers, and it gets you pretty good answers.
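The reward-and-punish loop Doug outlines is, in practice, gradient descent on a loss function. A toy version with a single weight (the data and learning rate are made up for illustration):

```python
# Toy training loop: learn w so that the prediction w*x matches the data.
# The "loss function" scores how bad the current guess is; each step
# nudges w in the direction that makes the loss smaller.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y (y = 2x)

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data)

w = 0.0
learning_rate = 0.01
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data)  # dloss/dw
    w -= learning_rate * grad  # "adjust all my weights just a little bit"
```

A real model does the same thing with thousands of weights at once, which is why TensorFlow and modern hardware matter.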
Russel: This is a great, great, great definition, Doug. It’s bringing to mind for me a PID controller.
Doug: Yeah, well…
Russel: If you think about a neural network, is it basically a PID controller on a series of numbers?
Doug: I know what you’re saying.
Russel: Is that an appropriate analogy?
Doug: Yeah, it is, pretty much, but it’s like multiplied by 10,000 times. Yeah, it’s basically when the output, when the error gets too big, play with the numbers to make it better.
Russel: Right. If you think about how a PID controller works, if I’m well away from the number I want to get to, then I make a different kind of adjustment than if I’m close.
Doug: That’s right. When you’ve got hundreds of set points, hundreds of PIDs together, you’re stuck with making a little adjustment to one and going, “Well, that improved it overall.” Then you make a little adjustment to another one.
“Well, that made it worse, so I’ll go back the other way on that one.” It becomes an iterative process for a bunch of PID loops.
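The one-loop-at-a-time tuning in this analogy is essentially coordinate descent: adjust one parameter, keep the change if the overall error improves, revert if not. A sketch with made-up targets:

```python
# Coordinate-wise tuning, as in the PID analogy: try a small adjustment
# to one "set point" at a time, keeping only changes that improve things.
def overall_error(params):
    # Hypothetical stand-in for "how far the system is from where we want it."
    targets = [3.0, -1.0, 5.0]
    return sum((p - t) ** 2 for p, t in zip(params, targets))

params = [0.0, 0.0, 0.0]
step = 0.5
for _ in range(100):                    # sweep over the parameters repeatedly
    for i in range(len(params)):
        for delta in (step, -step):
            trial = params.copy()
            trial[i] += delta
            if overall_error(trial) < overall_error(params):
                params = trial          # "that improved it overall"
                break                   # else: "go back the other way"
```

Gradient-based training replaces this trial-and-error with a direct calculation of which way each number should move, which is what makes 10,000-parameter problems tractable.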
Russel: When you think about that from a machine learning standpoint, if I scale that way up, and I have a lot more inputs and a lot more decision points that are distilling this thing down, all of a sudden, I get it to one thing that I care about.
Doug: That’s right. You’re actually not limited to one thing, but that’s the simple case: getting one thing. “What will the pressure be in an hour?” or, “What will the flow rate be in an hour?” You could do one model for pressure, one model for flow rate.

Some of the advanced techniques allow you to predict several variables at the same time, using the same model. The interesting thing about this model is it’s taking only data in. You’re not putting in anything about the engineering of the pipeline.

You’ve got no pipeline elevations, no pump curves. None of that difficult stuff needs to be put in.
Russel: That’s compelling to me when you think about how analytics might be applied to pipeline control. If I now don’t have to have a hydraulic model to forecast my future state, and I’m going to do all that with just statistics and analytics, that means the model can learn this particular pipeline.
Russel: I don’t have to configure in a hydraulic model the pipeline, the fluid properties, and everything else. What I can do is just have the model learn those things.
Doug: That’s the benefit of it. Instead of handing all of that engineering data to a hydraulics engineer, and him going away for six months to write up a model, you hook up the machine learning system, let it read data from your live system for about a month or so, and let it calculate. Let it build its own model inside, and you get pretty much the same accuracy.
Russel: Yeah, no, that’s really interesting. That’s a nice segue. I wanted to talk about, so how does all this analytics and machine learning apply to the pipeline control? How would you use it to add value or improve our ability to operate a pipeline?
Doug: That’s certainly important. Why would you apply machine learning unless you’re going to add some value, or you’re going to reduce costs? One of the big costs in a pipeline is electricity, or whatever power it is being used to pump the product through the pipeline.
You want to be able to reduce the costs, because sometimes, increasing the power to the pump, you increase it 20 percent, and the flow rate increases by 0.5 percent. There’s no real reason to waste all that energy doing that.
Can you cut down the energy consumption and still meet your goals of getting the product through the pipeline? The other thing that’s really important is throughput. For an oil pipeline, there’s a lot of pipelines out there that are throughput limited.
If they could get more product from the field to the refinery, the refinery gets to make more product and make more revenue. How do you get more product through the pipeline? That could be done as well. You optimize your set points for getting the most product through, essentially, but not overpressuring the pipeline.
Of course, you don’t want to go past any design limits, so what’s the closest you can get to the design limits and increase your flow rate?
Russel: This is actually a really interesting point. On a previous podcast, I was talking about one of the challenges is that, whenever you optimize one thing in a pipeline system, you frequently de-optimize others.
If I’m optimizing for pressure cycles, I could be de-optimizing for flow rate, right? If I’m optimizing for flow rate, I might be de-optimizing for pressure cycles. When you apply something like this kind of tool, I can now start looking at how I optimize across a broader set of things that matter.
Doug: That’s the interesting thing about machine learning. It’s math. Essentially, we’re optimizing for the result of a formula. If this hour, flow rate, throughput is the most important thing, then you optimize for throughput and forget everything else, and that’s fine.
Other times, you can say, “Well, throughput is 75 percent important now, and the pressure fluctuations are the next 25 percent,” so I apply my formula differently, and you get different suggestions out of machine learning for how to optimize it.
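The weighted blend Doug describes (say, throughput 75 percent important and pressure fluctuations 25 percent) is just a weighted objective. A sketch with hypothetical candidate set points:

```python
# Weighted multi-objective scoring: blend throughput against pressure
# fluctuation. The candidate set points and numbers are hypothetical.
def score(throughput, fluctuation, w_throughput=0.75):
    # Higher throughput is rewarded; fluctuation is penalized.
    return w_throughput * throughput - (1 - w_throughput) * fluctuation

candidates = {
    "setpoint_a": (100.0, 40.0),  # (throughput, pressure fluctuation)
    "setpoint_b": (90.0, 5.0),
    "setpoint_c": (95.0, 24.0),
}
best = max(candidates, key=lambda k: score(*candidates[k]))

# Shift the weights and the recommendation changes:
best_throughput_only = max(
    candidates, key=lambda k: score(*candidates[k], w_throughput=1.0))
```

This is the point Doug makes: the same machinery produces different suggestions as the operating priorities change, just by reweighting the formula.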
You can do pressure fluctuations, like you said. You can do throughput. You can reduce power consumption, if your SCADA system, if we’ve got data for power consumption. Of course, that’s one of the things that analytics, machine learning really needs.
If you want to optimize for power consumption, and you don’t have power consumption data, then that can’t be done. You’ve got to be able to find the way to get that data in, and that’s not always in a SCADA system. Sometimes, you have to bring in it from outside systems.
Russel: That’s actually a whole ‘nother conversation, because we tend to think, in the pipeline world, as all the data coming through SCADA. I actually have a theory that we’re going to see the SCA and the DA separate as we move forward with technology, because there’s all kinds of data that we want that’s real-time data feeds but are not necessary for safe operations of the pipeline.
Doug: I agree completely. I’ve talked to SCADA operators or SCADA IT people that are basically trying to reduce the number of points going into their SCADA system, because it basically results in regulatory overhead.
They’ve got more work to do. They’ve got to prove it. They’ve got to regulate it. You go, “Well, in the 21st century, we shouldn’t be looking for less data. We should be looking for more.” How do we do something with more data? How do we bring in the currents feeding the pumps and actually do something useful with them?
Russel: How do you separate the safety data from the operations data, the data I need to optimize versus the data I need to operate safely?
Doug: I think that’s really where putting a system adjacent to the SCADA system… Sure, you get the data out of the SCADA system, but you also get the data out of maybe a PLC that’s monitoring the electricity going to the pump. Maybe you download weather data. Maybe you’ve got forecasting data, whatever. You get the data into the system, let the machine learning calculate it. You can still present the information to the operator, but on an adjacent screen. It’s not part of the SCADA system.
Russel: Right. It’s part of the operator’s information construct, but it’s not part of the critical, safe operations of the pipeline system.
Doug: Right, and that’s what’s in a control room already.
Russel: Exactly. How does this work in practice? You’re talking about putting the data separate from the SCADA screen, or the screens I use to monitor and operate the pipeline. You’re not implying that I’m going to take analytics and change set points using the analytics engine.
What you’re saying is, “I’m going to take analytics, and I’m going to present information to the controller that they can use to support their decision making.” Is that correct?
Doug: That is correct, although I can see the future coming, too. I don’t know if it’s 10 years down the road or 5 years down the road, but somewhere down the road, the analytics will be fed straight into the SCADA system, and we’ll get results.
Right now, there’s regulatory limitations that limit what data can go into SCADA systems for safety purposes. That’s where, at this point in time, I think the sweet spot is really providing the feedback, saying, “This is the suggested set point that we’re suggesting right now. If you like it, you put it into the SCADA system.”
Keep the operator in the loop, but just give him more tools so that he can make the choices. If the circumstances change, and they’re no longer going for throughput optimization, give him tools to get power optimization instead, but give him tools so that he can do his job better. That’s instead of taking over his job.
Russel: That’s right. Take the things that machines do very well, and could create a lot of workload, and get them off the controller. Let the controller do the things they do very well that allow somebody to identify abnormal, to figure out where I need to be. Then let the machine help me figure out how to get there.
Doug: Yeah. The new generation of people that’ll become control room operators, they’ve been growing up on Xboxes. Xboxes don’t just say, “Adjust this,” and give you no information to help you. They give you advice and suggestions and potentials for the future. There’s lots of information that helps them, so why can’t we give that to the control room operator?
Russel: I think you make a very good point, Doug. I want to shift a little bit, because before we got on the microphone, we were talking about digital twins. Could you give me a definition of what is a digital twin, and then we’ll talk about how that applies to this general topic here.
Doug: A few people have used different meanings, but I like to use the word digital twin, which is essentially the machine learning is making a model of the pipeline. It knows — based on the other sensors in the pipeline — what a particular pressure sensor is supposed to be reading.
Looking at the upstream pressure, downstream pressure, and the current valve positions or pump settings, it knows what the pressure should be. Based on that model, you get, it’s basically a twin of the real sensor.
Now, the beauty of the twin of the real sensor now is that, what happens if that sensor fails? The wire falls off it. Maybe the value freezes. It’s a reasonable value, but it’s not the right value. Let’s say it’s the limit of the pressure on your pipeline.
It’s running at 300 right now, and that’s normal. Then you up the set point, it stays at 300, and you think this is fine. It’s really, the digital twin is saying, “Well, really, it should be reading 500 right now.” You’ve got a real problem. Unless you’re told that the real sensor is faulty, how are you going to detect that?
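The frozen-sensor scenario can be sketched as a residual check: compare live readings against the twin's predictions and flag the sensor when they diverge (all values are illustrative):

```python
# Digital-twin fault check: compare each live reading with the model's
# prediction and flag the sensor when the residual exceeds a threshold.
def flag_faults(actual, predicted, threshold):
    """Return indices where |actual - predicted| exceeds the threshold."""
    return [i for i, (a, p) in enumerate(zip(actual, predicted))
            if abs(a - p) > threshold]

# A frozen sensor: it keeps reporting 300 while the twin says the
# pressure should have climbed toward 500.
actual    = [300.0, 300.0, 300.0, 300.0]
predicted = [300.0, 360.0, 440.0, 500.0]
suspect = flag_faults(actual, predicted, threshold=50.0)
```

The key point from the episode: 300 is a perfectly reasonable value on its own, so only the comparison against the twin reveals that anything is wrong.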
The digital twin is used to detect faulty equipment, equipment out of calibration, and really, it offloads work. Part of what happens with machine learning is, when we start looking at the data coming from a system, we find out that 10 percent of the sensors are really malfunctioning half the time.

No one’s noticed that they’re malfunctioning, because they don’t serve a day-to-day purpose, but there is a purpose for that sensor to be there. When the operator needs it and it’s not working, that’s a big deal. Detecting it early is an important piece.
Russel: There’s other potential applications for that as well, because there’s reasons other than a bad transmitter that the number you’re seeing on your screen doesn’t match what the analytics engine or the machine learning engine is telling you what you should be seeing.
There could be other kinds of process upsets or abnormal operations that would cause the number I’m seeing to be real on the live transmitter, but it wouldn’t necessarily point me to an abnormal operation as quickly if I didn’t have the digital twin.
Doug: For example, it might not be the sensor itself that’s wrong, but maybe there’s a valve that’s saying it’s closed, but it’s not. The valve didn’t seat properly, and so it’s leaking. The pressure is bleeding through the valve. That shouldn’t be happening. The digital twin would indicate there’s a problem and something to look at.
Russel: I think this goes to what you were getting at earlier about the nature of what you’re asking the controller to do. I would rather have my controller trying to figure out, “Do I have an abnormal, and what’s the root cause?” than just tweaking buttons to meet a delivery.
Doug: For sure.
Russel: That’s an over-simplification, right, but that’s where we’re talking about heading.
Doug: One of the things that’s incredibly important for ecology, for green goals, is leak detection. Leak detection systems are in place right now, and leak detection really is based on analytics, as well.
When they produce an alarm saying, “We think there might be a leak,” the control room operators have a very limited amount of time — maybe 10 minutes — to make a decision about whether that leak is real or it’s caused by some fault of some equipment.
This meter is faulty. This pressure sensor’s wrong. It’s not really a leak, it’s just that there’s faulty equipment. Why not tell them it’s faulty even before the leak alarm happens? That way, they don’t have this 10 minutes of panic as their supervisor’s looking over their shoulder, going, “Do we have to shut down the pipeline? Do we have to shut it down?”
Think of the costs involved to shut a pipeline down and start it up again. That’s where the digital twin can pay off, as well.
Russel: I think that’s a very good point. I think there’s probably other applications for a digital twin around alarm response. Not just leak alarms, but other kinds of alarms. You could use a digital twin as a mechanism for determining if that alarm is valid or not.
Russel: You likewise could use machine learning to apply it to the workflow to do a validation around an alarm to figure a root cause. It gives you a way to automate the analysis behind the alarm enunciation. I think there’s a lot of potential applications here that make the controller’s job better.
Doug: Oh, yeah. I’m looking forward to the future of what machine learning can do.
Russel: Yeah, exactly. I think we’re in a very interesting time in the pipeline business, where there’s a lot of change going to happen around what we do. What we’ve historically thought of as SCADA is going to evolve pretty radically. If you think about it in terms of functionality, SCADA hasn’t evolved a lot in about 25 years.
We’ve had big improvements in how the HMIs — how we can illustrate things — but in terms of just the basic functionality, it hasn’t really evolved much.
Doug: No, and actually, the regulations seem to be putting a big force on preventing it from evolving much. It’s not uncommon to have a SCADA system that’s been in for 30 years.
Russel: That’s not a factor of regulations. That’s just a factor of making a SCADA change is a very expensive, painful process.
Doug: I’m blaming regulations for making it a painful process, but I think we’re agreeing, really.
Russel: Yeah, no, it’d be painful even if it wasn’t for the regulations, trust me.
Russel: Doug, let’s try to talk a little bit about where we think this is headed. Obviously, you guys are working on some technology and working on some applications. Where do you think this is headed? Where do you think we’re going to be in 5 or 10 years around this whole idea of analytics, machine learning, and digital twins in the control room?
Doug: Providing some assistance to the operator, I think, is going to be a brand-new thing. They’ve got very few tools. Leak detection’s the only real tool they have right now of any substantial note. Adding more and more tools and abilities is just going to increase what they can do.
Actually, what I see in the future is part of what we looked at when we talked about the stress on the pipeline from pressure fluctuations. A lot of pipeline companies have got a group that, once a month, does a report on how the pressure fluctuations affected the stress on the pipeline. SCC, or stress corrosion cracking, is something they calculate on a monthly basis.

We can do that on a by-the-minute basis. We can provide feedback to the operators saying, “If you want to reduce the stress on your pipeline, then these are the recommended set points.” It’s an optimization.
Instead of throughput, maybe you say, “I want to reduce SCC.” We’ve got that capability, but really, pipeline operators aren’t ready for it. They’ve got the wrong department involved. The operators are not really part of that operation yet.
That’s part of the future as well, is these capabilities are there. Can we actually work through the process of the business requirements, the different department responsibilities, to actually make it useful, and will the operators have enough time to do something with the information?
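The by-the-minute fluctuation monitoring Doug mentions could be approximated with a rolling standard deviation of pressure. A real SCC assessment would use proper fatigue cycle counting, so this is only an illustrative proxy with made-up readings:

```python
import statistics

# Rolling standard deviation of pressure over a short window, as a
# simple stand-in for a by-the-minute fluctuation metric.
def rolling_fluctuation(pressures, window=5):
    return [statistics.stdev(pressures[i - window + 1:i + 1])
            for i in range(window - 1, len(pressures))]

steady  = [300.0, 301.0, 299.0, 300.0, 300.0, 301.0]  # gentle operation
cycling = [300.0, 340.0, 280.0, 350.0, 270.0, 330.0]  # heavy pressure cycling

f_steady = rolling_fluctuation(steady)
f_cycling = rolling_fluctuation(cycling)
```

An operator display could surface this number continuously instead of waiting for a monthly report, which is the shift Doug is describing.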
Russel: Yeah, said another way, Doug, I think what you’re leading to is how are we going to operationalize these things? The other thing you’re saying implicitly is that the way that pipelines are organized around their people structures is closely tied to the tools and technologies they have available to them.
As these tools and technologies become available that are different, there’s a rethink around how you organize your pipeline organization to make use of these things.
Doug: I agree completely. Will pipeline companies be able to adapt? I don’t know. My job is to give them the tools so they have a choice. I think that they don’t know the capabilities of machine learning. They don’t know the capabilities of analytics. They don’t even know what to ask for.
That’s like one of my favorite quotes, attributed to Henry Ford: “If you ask the customer what they want, they’ll say a faster horse.” That’s not really the answer that will make them happy.
Russel: That’s right. That’s the fun part of what guys like you and I do. We get to work on trying to move the needle and challenge people to think about things different than maybe they’ve done before.
Doug: For sure, yeah.
Russel: That’s what makes it interesting, right? That’s how you have an impact. That’s how you help the industry get better. Ultimately, that’s what we’re talking about doing. Look, this has been awesome. Anything you want to add to put a capper on this conversation, Doug?
Doug: I think that machine learning is primed and ready to go. It’s being improved for many other reasons; companies like Walmart and Google are doing a lot of the advancement. It’s about time we applied it to the pipeline space, because the pipeline space can take such big advantage of it, but I think you really need a bridge.

What I’ve seen is a lot of pipeline companies have had failures, essentially, where the machine learning people don’t understand the pipeline problem. I think that it’s necessary for a company like Rogue7 to be the bridge between the two.

You need somebody that knows pipelines and knows machine learning, who can work with the subject matter expert on the pipeline, communicate with the machine learning people, and create solutions that are useful and have a business payback. That’s really where I think the future is headed.
Russel: I think you’re absolutely right. It’s going to take people like yourselves that are facilitators that can take and bridge the gap between all the complexities of the technology and the knowledge of the actual pipeline operations.
I would also say this — this would be the part I would add to this conversation in terms of the future, and I’ll talk about what I think the challenge is — for this to really work, you’ve got to get a firm handle on all your real-time data feeds.
You’ve got to have accurate, consistently-named, well-organized data streams for this to work. That’s going to be a challenge for a lot of folks.
Doug: That’s one of the things that we want to take on ourselves. When we put in a box that’s doing the machine learning, we’re going to gather our own copy of the history and store it in a format the machine learning can do a good job with.

Data wrangling (a term we haven’t brought up yet) can be an incredibly expensive problem with any machine learning solution, whereas if we take it over and give a complete solution, we can solve that problem. We’ll get our own data. Then we don’t have to wrangle it.
Russel: That right there is a great place to stop because that, Doug, is a podcast for another day. [laughs]
Doug: There you go, sounds good. This has been awesome. I’m really glad we had this opportunity to talk.
Russel: Thanks for coming on. This has been fun. I appreciate it. We’ll look forward to having you back in the future.
Doug: Thanks a lot, Russel.
Russel: Hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Doug. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinerspodcast.com/win to enter yourself in the drawing.
If you’d like to support the podcast, the best way to do that is to leave us a review. You can do that on iTunes/Apple Podcast, Google Play, or wherever you listen. You can find instructions at pipelinerspodcast.com.
Russel: If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page of pipelinerspodcast.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords