Pipeliners Podcast


The Pipeliners Podcast is excited to deliver a series of episodes with Giancarlo Milano of Atmos International. In the second episode of the series on leak detection, Russel Treat and Giancarlo discuss the statistical volume balance model for leak detection.

In this episode, you will learn how the SVB model uses statistics to calculate the probability of a leak in a pipeline using key factors. You will also learn about the importance of the Sequential Probability Ratio Test to deliver the calculation necessary for this model. Is this an ideal model for pipeliners? Listen to find out.

In the next episode of the series, Russel and Giancarlo will discuss the difference between leak detection and rupture detection, including the role of Negative Pressure Wave technology.

Statistical Volume Balance Model for Leak Detection: Show Notes, Links, and Insider Terms

  • Giancarlo Milano is the Senior Simulation Support Engineer at Atmos International. Connect with Giancarlo on LinkedIn.
  • As part of this series with Giancarlo, enter to win our book giveaway contest for the “Introduction to Pipeline Leak Detection” by Atmos founders Michael Twomey and Jun Zhang.
  • Leak detection systems include external and internal methods.
    • External methods are based on observing factors outside the pipeline to determine whether any product has been released from the line.
    • Internal methods are based on measuring hydraulic parameters of the pipeline such as flow rate, pressure, density, or temperature. The information is fed into a computational algorithm that determines whether there is a leak.
  • Statistical Volume Balance is a method using the volume in and out of a pipeline, along with pressure changes to account for the pipeline inventory in real-time. This method is also capable of detecting smaller leaks while coping with transient conditions. A statistical approach using Sequential Probability Ratio Test (SPRT) evaluates the probability of a leak in the pipeline.
  • The nominal flow rate is the volume of a substance passing through the pipeline under specific pressure conditions during normal operation.
  • The leak rate is the estimated rate at which product is escaping the pipeline, calculated by comparing how much product is coming into the pipeline versus what is going out of the pipeline.
  • Pressure drop analysis allows a pipeline operator to determine the location of a leak by comparing the data recorded from adjacent pressure sensors.
  • Ambient temperature is the temperature surrounding a piece of equipment. The equipment typically includes sensors to recognize changes to the temperature and send the data to personnel monitoring the temperature.
  • The enhanced real-time transient model is an advanced version of the real-time transient model by using advanced data collection capabilities to reduce the occurrences of false alarms in a system.
  • API 1130 is a recommended practice published by the American Petroleum Institute and incorporated by reference into the U.S. pipeline regulations in 49 CFR 195.134 and 49 CFR 195.444 for how pipeline operators should design, operate, and maintain their computational pipeline monitoring (CPM) systems. While this standard was not discussed during the podcast, it is a critical document for any pipeline operator with CPM-based pipeline leak detection.
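The SPRT approach described above can be sketched in a few lines of Python. This is a textbook illustration, not Atmos's implementation: the clipping of the ratio at zero, the conversion of the ratio to a probability, and all of the numbers are simplifying assumptions for demonstration only.

```python
import math

def sprt_leak_probability(corrected_diffs, leak_size, sigma):
    """Sequential Probability Ratio Test, simplified illustration.

    corrected_diffs: per-scan corrected volume differences (in minus out,
        plus inventory compensation), in the same units as leak_size.
    leak_size: the leak rate being looked for (hypothesis H1); H0 is
        "no leak", i.e. a mean difference of zero.
    sigma: standard deviation of the corrected difference under no-leak
        conditions, learned from historical data.
    Returns the running leak probability after each scan.
    """
    llr = 0.0  # cumulative log-likelihood ratio, H1 versus H0
    probabilities = []
    for x in corrected_diffs:
        # Gaussian log-likelihood ratio increment for one scan.
        llr += (leak_size * x - 0.5 * leak_size ** 2) / sigma ** 2
        # Clip at zero (CUSUM-style) so accumulated no-leak evidence
        # does not delay detection of a leak that starts later.
        llr = max(0.0, llr)
        # Convert the ratio to a probability, assuming equal prior odds.
        probabilities.append(1.0 / (1.0 + math.exp(-llr)))
    return probabilities

no_leak = [0.1, -0.2, 0.05, -0.1]   # differences hover near zero
leak = [9.8, 10.3, 9.9, 10.1]       # ~10 bbl/h goes unaccounted for
probs = sprt_leak_probability(no_leak + leak, leak_size=10.0, sigma=2.0)
```

With these made-up figures, the probability stays flat while the corrected difference hovers near zero, then climbs rapidly once a persistent ~10 bbl/h imbalance appears, which is the ramp-to-alarm behavior described in the episode.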

Statistical Volume Balance Model for Leak Detection: Full Episode Transcript

Russel Treat:  Welcome to the “Pipeliners Podcast,” episode 25.

[background music]

Announcer:  The Pipeliners Podcast where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.

Russel:  Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time. To show our appreciation, we’re giving away a customized YETI tumbler to one listener each episode. This week, our winner is Jonathan Tindall with Atmos International. Congratulations, Jonathan, and enjoy your YETI.

To learn how you can win this signature prize pack, just stick around to the end of the episode. This week, we have Giancarlo Milano with Atmos International returning for the second in our series on “Leak Detection.”

This week, we are going to be talking about Statistical Volume Balance as a method of leak detection. Welcome to the podcast, Giancarlo.

Giancarlo Milano:  Hey, Russel. Good to be back.

Russel:  Last week, we talked about Real Time Transient Models. This week, we’re going to talk about Statistical Volume Balance. Probably the best way to start is to ask you: why don’t you tell us what Statistical Volume Balance is as a computational pipeline monitoring approach?

Giancarlo:  Right. Let’s start with that. Statistical Volume Balance is a method for leak detection where the system uses a computed volume balance for the pipeline.

Russel:  Why don’t you explain to me in layman’s terms what is a Statistical Volume Balance?

Giancarlo:  A Statistical Volume Balance, or SVB, is a method for leak detection where the corrected volume balance of the pipeline is used in conjunction with a statistical approach in order to determine whether there’s a leak in the pipeline or not.

Now, the statistical approach that is used is known as SPRT, which stands for Sequential Probability Ratio Test. The idea behind this statistical approach is to determine the probability of there being a leak in the pipeline by comparing two scenarios, or rather two hypotheses.

One hypothesis is that there is a leak in the line; the other is that there is no leak in the line. The approach uses Gaussian theory within the SPRT.

What it does is this: the corrected volume difference is calculated from what’s coming into the pipeline, versus what’s leaving the pipeline, plus an inventory compensation factor. When that difference approaches the value of the leak size that we’re looking for, the probability of that leak starts to increase.

When it reaches a certain threshold, a probability of about 99 percent, that’s when the system goes ahead and raises a leak alarm to the operator.

Russel:  We didn’t talk about this before. We kicked off the episode, but you just threw out a whole bunch of math that some of the listeners might not get.

I want to try and make this a bit more simplistic. When you say statistics, what we are talking about is the calculation of a probability, and probably the thing that most people are familiar with in that term is gambling — flipping a coin, playing cards, and knowing what the possibility of a certain outcome is.

There’s two things that factor into that — it’s what is my current situation. If you think about that in terms of playing 21 or blackjack, and somebody counting cards, every card that comes out of the deck impacts what’s left in the deck. It’s the same thing here. Instead of looking at the cards coming out of the deck, what you’re looking at is…

I’m comparing two possibilities. One possibility is there is a leak. One possibility is there isn’t. I am applying math, and then applying statistics, which is the SPRT thing, to determine which of those is more likely to be true — the leak or not the leak.

What that means is that thing has to have data to learn from. It’s got to compare with the history.

Giancarlo:  Correct. The way that the system is implemented is by us analyzing the difference of the cards. In this case, the cards are going to be the flow at every injection and every delivery point, plus the pressure, and maybe temperature variation inside the pipeline, which are used for compensation purposes, or rather to calculate the inventory of the pipeline.

Every time that you’re taking a scan, every time you’re taking a card, as you pointed out in your analogy, what we’re doing is comparing what’s called the normal flow difference of the pipeline, which is the no leak scenario, versus the leak size that we’re looking for, which is going to be the leak scenario.

The difference between these two is going to give us the increase of the probability of there being a leak in the pipeline.

Russel:  This is interesting. If we compare this a little bit to what we talked about last week, which was the Real Time Transient Model. The Real Time Transient Model relies on creating a mathematical representation of the pipeline. This doesn’t rely on that. This relies on knowledge about the pipeline behavior in terms of pressures and volumes.

Giancarlo:  That is correct. We are not trying to simulate what’s hydraulically happening inside the pipeline. What’s actually being done is purely comparing the natural volume balance — how much product is coming in, versus how much product is coming out.

On top of that, in a leak free scenario, ideally in theory, everything that comes into the pipeline should be coming out of the pipeline. In theory, the pressures in the pipeline should be nice and steady. There should be no changes in the inventory.

When there’s a packing situation, that’s going to change. When there’s a pressure transient, that’s going to change. The inventory term is used to compensate for the difference that’s seen between injections and deliveries during those transient conditions.

The system calculates this difference every scan. It’s monitoring what’s coming in, what’s coming out, and what the difference is, and every scan it compares that to the leak scenario, or leak hypothesis, in order to determine the probability of there being a leak in the pipeline.

When there’s a leak, what’s coming into the line is not going to be the same as what’s coming out, based on the flow instrumentation, and also on the pressures in the pipeline, due to the inventory of the line. That difference in corrected flow is going to increase above the threshold of the leak size that the system is looking for.

During that situation, the probability is going to start to ramp up, increasing from 0 toward 99.99 percent. When it gets to that threshold, that’s when the leak alarm is raised and the operator can take action.

Now, one of the interesting things about Statistical Volume Balance is that the operator actually has the ability to visualize what the probability of the leak is at any given time. If he’s operating the pipeline and there’s zero probability, he’s not doing anything, everything is great.

Now, if he sees that all of the sudden the probability starts to increase, and he hasn’t touched the pipeline, he can start to wonder if something has happened that might have caused the probability to start increasing.

He can pick up the phone and inquire with the field operator to see if there’s something happening on the pipeline that may be causing the probability to increase. In some instances, operators have identified an issue and shut down the pipeline before that probability got to 99 percent, because they had already identified something in the field.

They don’t need to wait for that leak alarm to come in at 99 percent; they can start making those inquiries as the probability increases.

Russel:  That certainly requires an operator that has some education about the model and how it operates, so that they can make those decisions. I think there are a couple of things that the listeners might want to know as we are talking about what this is.

It’s easy to oversimplify, I think, what a statistical volume approach is doing, because you’re doing a live, fully corrected volume balance, which means correcting for volume in the pipe based on pressures, temperatures, and things of that nature.

Consequently, it’s not just a matter of comparing number A to number B; there are some computations I have to do to get to a number representing this probability.

Giancarlo:  Correct. That is right. When we are talking just about the compensation part, every scan you’re taking a sample of all the product that’s coming in, versus all the product that’s coming out. That’s going to tell you in basic terms whether you have a packing or unpacking situation, or whether your line is running nice and steady.

The next factor after that is calculating whether there has been a pressure change from the previous scan to the current scan. By analyzing that pressure change, you’re going to know whether a transient has been introduced into the pipeline. Now, the tuning of this inventory factor is going to vary from pipeline to pipeline and from operator to operator.

Whether we’re talking liquids or gas pipelines, the way that the inventory is calculated — and it’s used to compensate for the flow that is not being seen by the instrumentation in the presence of a leak — is tuned slightly differently.

It does take some time to analyze this data, and then tune the inventory in order to make sure that your inventory factor and your flow difference are as close to zero as possible. Let’s take an example: say you’re flowing a pipeline at 1,000 barrels an hour. That’s the volume that’s coming in. Then ideally, at the outlet, you should be seeing the same 1,000 barrels an hour. Now, when you start up a pump, or slightly shut down the control valve at the outlet to change the flow set point, you’re going to see a flow imbalance.

Let’s say that you change the flow set point at the outlet station to control at 200 barrels an hour less. Now, you’re injecting 1,000 barrels into the pipeline, and you’re taking out only 800. Based on the length of the pipeline and the product that you’re moving, that change is not going to be seen at the inlet right away.

It’s going to take some time. Now, if you’ll recall what we discussed last week, and also in our fundamentals chat, there’s going to be a pressure transient that’s going to move along the pipeline from the moment that change occurred.

Now, as that pressure wave travels through, there are going to be pressure meters along the line, and also at that station, that are going to capture that change. We can use those pressure changes in order to compensate for the flow change that has not yet been seen at the inlet of the station.

The pressure signal attenuates along the pipeline, so those changes are going to get smaller and smaller. Then eventually, the injection flow is going to decrease little by little. This is, of course, in a no leak scenario.

What you are doing is trying to balance by comparing the flows and the pressures to make sure that when you add the two together, the flow difference and the pressure compensation, in theory, that number should always be zero.

Of course, we know that theory and reality are always different. There are going to be some factors, and a little bit of room, that we need to tune for in order to make sure that we are compensating for them.
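The balance Giancarlo describes, the flow difference plus inventory compensation summing to zero in theory, reduces to a one-line calculation. A minimal sketch (the function name and barrel figures are illustrative, echoing the 1,000 barrels an hour example above):

```python
def corrected_volume_difference(flow_in, flow_out, inventory_change):
    """One scan's corrected volume difference (illustrative sketch).

    flow_in, flow_out: measured volume in and out over the scan, barrels.
    inventory_change: change in line pack over the same scan, barrels,
        estimated from the pressure (and temperature) changes.
    In theory this is zero with no leak; a value persistently near the
    target leak size drives the leak probability toward the alarm level.
    """
    return flow_in - flow_out - inventory_change

# Packing: 1,000 bbl in, 800 bbl out, and the pressure rise says
# 200 bbl went into line pack, so the corrected difference is zero.
print(corrected_volume_difference(1000.0, 800.0, 200.0))   # 0.0

# Leak: flows look steady, but 10 bbl never arrive at the outlet.
print(corrected_volume_difference(1000.0, 990.0, 0.0))     # 10.0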

Statistical Volume Balance systems can typically be tuned to about one percent of the nominal flow rate of the pipeline. What’s typically done is that a fixed nominal flow rate is picked around what’s the normal operation on the pipeline between low flow rates and high flow rates.

Based on that number, the system is typically tuned for one percent in about 60 minutes. That’s just a go-to number, a standard number, provided that the accuracy and the repeatability of the instrumentation are kept within certain specifications.
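To put those figures in concrete terms, here is a quick back-of-the-envelope calculation using the 1,000 barrels an hour example from earlier in the conversation (the numbers are purely illustrative):

```python
# Back-of-the-envelope sensitivity figures (illustrative only).
nominal_flow_bph = 1000.0    # example pipeline flow, barrels per hour
sensitivity = 0.01           # 1 percent of nominal flow
detection_time_h = 1.0       # alarm within about 60 minutes

detectable_leak_bph = nominal_flow_bph * sensitivity
volume_before_alarm_bbl = detectable_leak_bph * detection_time_h
print(detectable_leak_bph)       # 10.0 barrels per hour
print(volume_before_alarm_bbl)   # ~10 barrels released before the alarm
```

At the 0.5 percent / 30 minute figure Giancarlo mentions for a well tuned system, the same arithmetic gives a 5 bbl/h detectable leak and roughly 2.5 barrels released before the alarm.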

Now, one of the most important factors of the Statistical Volume Balance is not so much the accuracy of the instruments, but rather the repeatability of the instruments themselves. When we talk about repeatability, we’re not just talking about the repeatability that is provided by the instrumentation manufacturer.

We’re actually talking about the repeatability between every injection meter and every delivery meter: how repeatable one meter is compared to another meter.

If you think about it, when we’re looking at how much flow is coming in versus how much flow is coming out, ideally, during a no leak situation, when the pipeline is running nice and steady, everything that’s coming in should be coming out.

We know that in reality, that’s not always the case. Even if you take those instruments from the laboratory conditions to the field, there’s always going to be some difference between how much product’s coming in, versus how much product’s coming out.

That difference right there is going to be the natural difference of your pipeline and your instrumentation. What’s done first is that we learn this difference. Then, we base our no leak and leak scenarios on it.

Now, how much we can detect is going to be based on the performance of that instrumentation. Typically, the sensitivity of the system is directly based on the instrumentation repeatability. As I mentioned earlier, it can determine leak sizes in the range of one percent in 60 minutes.

We have seen systems where the instruments perform very well, and the systems are tuned and optimized very well, where those leak sizes drop to 0.5 percent of the nominal flow rate in under 30 to 45 minutes. It’s really going to be dependent on the tuning and the performance of the instrumentation.

Russel:  We made the same comment about Real Time Transient Models last week. However, the difference is that in that situation, you want the instrument to give you the actual number.

Giancarlo:  Correct.

Russel:  The difference is accuracy versus repeatability. Real Time Transient Models need accurate numbers.

Giancarlo:  Correct.

Russel:  Statistical Volume Balance models need repeatable numbers, meaning, if my number is off a little bit but it’s always off by that same amount, then the way I’m doing the math is going to wash that difference out. It needs to always repeat that same number.

The other thing that people that work with instruments know is that they tend to bounce.

Giancarlo:  They do.

Russel:  When you’re reading them very, very fast, you’re not getting exactly the number that you’re looking for. Anyway, I think that’s certainly helpful to understand that distinction. What about reliability, in terms of false alarm levels? What do you find there, as it relates to Statistical Volume Balance?

Giancarlo:  Statistical Volume Balance is one of the systems that has a very low false alarm ratio. This is because the systems are tuned and optimized for different operational conditions based on the data that’s collected during the optimization or tuning process.

Provided that we’re able to grab the information, tune it, and learn about the repeatability between these instruments on the pipeline, we’re able to keep the system highly sensitive with a low level of false alarm.

One of the features of a Statistical Volume Balance is that it actually uses pattern recognition in order to learn different operational conditions. During a leak situation, we’re going to be able to learn what a good probability for that leak looks like.

During operational changes, we’re also going to be able to identify that those changes in operations are not related to a leak, but rather to something that has happened at an injection or a delivery facility that has caused a transient.

Now, during the RTTM talk last week, I had mentioned that when there’s transient conditions in the pipeline, an RTTM system will typically increase the leak size that the system is looking for, or rather decrease its sensitivity in order to avoid false alarms.

Statistical Volume Balance, they don’t do that. What we actually do on a Statistical Volume Balance is learn about that operational transient condition, or recognize it by using patterns.

Then, rather than changing the leak size that we’re looking for, we extend the detection time just a little bit in order to cope with that pressure wave that is moving through the system. During these transient conditions, we will still be looking for that 0.5 or 1 percent of the nominal flow rate.

But what we’ll actually do is say that during this 5 or 15 minutes, depending on the pipeline, we’re going to use an extended time for that probability. Let’s say we’ll go from one hour to two or three hours, only for that short window, in order to cope with the difference that we’re not able to compensate for appropriately, due to the nature of the instrumentation.

During that time, what we’re doing is avoiding false alarms. If there is a flow difference, the system is still reacting, but it’s reacting just a little bit more slowly during this short period of time, until that transient has gone away.

Then, the probability continues to increase at its normal rate. When it comes to reliability, we’re talking about a very reliable system that’s able to give you a good, sensitive leak alarm with a low false alarm ratio.

Russel:  Then that leads to the other question about accuracy, that being the ability to identify the location of the leak and its size. How does a Statistical Volume Balance do that?

Giancarlo:  The main drivers for this system are going to be the flow and pressure. We talked about repeatability. We talked a little bit about accuracy. Accuracy, of course, is also very important for the system.

When we’re talking about accuracy, we need to make sure that when the leak alarm does come in, the leak rate it reports is accurate. Obviously, that leak rate is going to be based on volume balance: how much product is coming in, versus how much product is coming out of the line.

Because we have learned what the natural flow difference of the pipeline is during steady conditions, in a situation where there is a leak present, we can use this natural difference and calculate the difference between it and the new difference in the presence of a leak.

Based on that, we’re able to provide a very good, accurate leak rate for the pipeline in the presence of a leak. Based on the leak rate, the system is actually able to analyze, or determine rather, when the leak started.

This is determined with a linear calculation, based on the slope of the probability of the leak. By analyzing how that probability increases, whether it was a sharp increase or a slow increase, a linear approximation can be made in order to determine when that probability started to increase from a no leak toward a leak scenario.

Based on that time, we’re able to backtrack and calculate how much volume has left the pipeline from the moment that the leak started, based on the current leak rate that the system is calculating.
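The backtracking Giancarlo describes amounts to a linear extrapolation plus a multiplication. The function names and every figure below are hypothetical, for illustration only:

```python
def estimate_leak_start(t1, p1, t2, p2):
    """Extrapolate the rising probability curve linearly back to zero.

    (t1, p1) and (t2, p2) are two samples on the probability ramp,
    with times in hours and probabilities between 0 and 1.
    """
    slope = (p2 - p1) / (t2 - t1)
    return t1 - p1 / slope

def released_volume(leak_start_h, now_h, leak_rate_bph):
    """Volume released since the estimated leak start, in barrels."""
    return (now_h - leak_start_h) * leak_rate_bph

# The probability climbed from 20% at t=10.0h to 70% at t=10.5h,
# so the ramp extrapolates back to a start near t=9.8h.
start = estimate_leak_start(10.0, 0.2, 10.5, 0.7)

# At a 50 bbl/h estimated leak rate, roughly 60 bbl by t=11.0h.
volume = released_volume(start, 11.0, 50.0)
```

A steeper slope (a sharper probability increase) pushes the estimated start time closer to the alarm, matching the sharp-versus-slow distinction in the transcript.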

It’s also able to provide you with a good volume calculation of how much product has left the pipeline since the moment that the system estimated the leak has started. The next point is leak location. For leak location, what the system actually uses are the pressures in the pipeline, rather than the flows.

From the moment that the leak alarm comes in, what the system does is that it does a pressure drop analysis throughout the pipeline, and identifies the largest pressure drop. Based on that number, it’s able to identify the location between two pressure sensors by doing an analysis of how that pressure wave traveled through the pipeline.

Obviously, it’s able to provide you with a very good number of the location, as well. When we talk about the location, we have a rule of thumb. That is, the larger your leak size, the better your location, the faster your detection time.

What that tells us is that when you have a large leak, the signature of the pressure drop at the different instrumentation is going to be more identifiable. It’s going to be easy to identify. Even an operator could identify it, just by looking at the pressure meters on the SCADA screen, using the pressure drop method.

When we put that in a CPM system, we are able to identify the signals more easily, and we’re able to identify a better location based on that.

Russel:  Simply stated, you look at volume difference to get leak size, leak rate, and you look at pressure wave to get leak location, or pressure profile?

Giancarlo:  Pressure drops through the wave, and pressure profile, as well. There’s two methods for detecting the locations of a leak. One of them is the friction factor method, where we are analyzing the pressure profile on the pipeline, the pressure decay on the line during normal operations, and then the profile of the pipeline during a leak situation.

By comparing these two when there’s a leak present, we’re able to identify where that location is. The second approach for leak location is time of travel. We’re analyzing when the pressure dropped at one instrument, versus when it dropped at adjacent pressure meters.

Based on the timing of the drop, and also on the magnitude of that drop, we’re able to triangulate where that leak is.
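The time-of-travel method reduces to a textbook formula: if the wave from the leak reaches the upstream sensor at t1 and the downstream sensor at t2, the leak sits at x = (L + c * (t1 - t2)) / 2 from the upstream sensor. A sketch with made-up figures (real systems add the magnitude analysis and filtering Giancarlo mentions):

```python
def leak_location(sensor_spacing_m, wave_speed_mps, dt_seconds):
    """Time-of-travel leak location between two pressure sensors.

    sensor_spacing_m: distance between upstream and downstream sensors.
    wave_speed_mps: pressure wave speed in the product (around 1,000 m/s
        for many liquids; it depends on the fluid and pipe properties).
    dt_seconds: arrival time at the upstream sensor minus arrival time
        at the downstream sensor (negative if upstream sees it first).
    Returns the estimated distance from the upstream sensor.
    """
    return (sensor_spacing_m + wave_speed_mps * dt_seconds) / 2.0

# Sensors 10 km apart, wave at 1,000 m/s, upstream sensor sees the
# drop 4 s earlier: the leak is 3 km from the upstream sensor.
print(leak_location(10_000.0, 1000.0, -4.0))  # 3000.0
```

Note the rule of thumb from the transcript falls out of this: a larger leak makes a sharper, easier-to-time pressure drop, so dt_seconds is measured more precisely and the location estimate improves.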

Russel:  The last of the four criterion in API 1130 is robustness, which is the ability to detect the leak through a range of operating conditions. At least in my experience, statistical models have a weakness. It’s this robustness.

I’m not saying they’re weak in this area, but as compared to Real Time Transient Models, they may not be as responsive, because I’ve got to have the history of that operating condition in order to be able to tune the model.

If I see an operating condition I’ve never seen before, that can be a problem for a statistical approach.

Giancarlo:  I actually tend to disagree with that. When we’re talking about the Statistical Volume Balance, and then when I was mentioning earlier about pattern recognition, we’re able to identify patterns at every injection and every delivery. We treat these individually.

When we’re looking at the pattern recognitions in an injection or a delivery station, then we’re looking at its own pressure and its own flow, and how those two are interacting with each other. When we tune a station for pattern recognitions, we analyze how the pressure and the flow react to each other.

We’re able to set up thresholds based on these changes in order to recognize the transients that are introduced into the pipeline. By doing that, an engineer tuning one of these leak detection systems can analyze what the threshold is, and analyze the impact that the transient condition is having on the pipeline.

Then, rather than putting in a bare threshold for these parameters, maybe make it a little bit tighter, so that when conditions that have not yet been seen on the pipeline occur, the system is still able to detect that transient condition and avoid the false alarm.

Russel:  I think the point you’re making — and really, this is where I was going with the question — is that tuning is critical. That goes to the next question I wanted to ask about. When we talked about what’s required to put in a Real Time Transient Model, that was having a good mathematical representation of the pipeline.

In the case of a statistical model, because this is a learning algorithm, the system initially hasn’t had an opportunity to learn, so the system has to be tuned. That’s a very different approach to what it takes to put one of these kinds of leak detection systems in, versus what you’ve got to do with a Real Time Transient Model.

Giancarlo:  The minimum instrumentation that this type of system requires, just as a bare minimum, is flow and pressure. Many people ask, “Well, but you’re not taking temperature into consideration.” The simple answer to that is: if there’s a change in temperature, it’s going to be seen in the pressure.

It’s obviously going to depend on where the pipeline is in the world, what type of ambient temperature effect it sees, and on the fluid itself. Typically, when there’s a temperature change, the pressure is going to react accordingly.

As far as driving the Statistical Volume Balance equation to determine the probability of a leak in the pipeline, the minimum requirement is flows and pressures. Now, if there are additional systems or additional data that we can read, such as pump statuses, RPMs, set points, valve statuses, or tank levels, all of that information is more than welcome.

What that’s going to do is help the leak detection engineer tune and optimize the system for those operating conditions, during normal and transient operation. What’s typically required for the tuning and the optimization of the system is about 30 days’ worth of data, initially. We say 30 days because that’s a monthly cycle of operation, but every month, operations could be a little different. Operations can change from winter to fall due to ambient conditions, and maybe the requirements on the nominations at the delivery point. All of that can change.

It’s not really until you have a year’s worth of data and optimization that you can say you have a well tuned and optimized system. The initial tuning and optimization is going to give you very good, reliable leak detection with a low false alarm rate.

The person who’s monitoring the leak detection system, as far as the optimization goes, needs to be available in order to keep track of it. If there’s a new operation that has never been seen before, although those operational thresholds are not too tight, they might need to be improved when we have that data.

One of the things that I say about really all leak detection systems is, we can only tune for the data that we have gathered, collected, and analyzed. If you have never seen that data, you cannot tune for it. You can make an assumption, but then that’s all going to be up in the air as far as it’s not really what’s happening inside the pipeline.

When we talk about leak detection, we have to make sure that we take that data, we analyze it, we pass it through the system, and then we observe how, in this case, the probability of the leak increases or not.

Russel:  We could do a whole episode on that statement right there, Giancarlo, about you can’t really know what the system can do until you see the data. That is always, always a big challenge. I don’t care what you’re doing.

You need that information. You need it accurately. You need to have it well understood. It’s only over time, by living in that data, do you really begin to understand, “Here are my real limitations, constraints, and boundary conditions for what it is I’m trying to do.” That just takes time.

Giancarlo:  Let me throw something out there, just based on my experience. You have no idea how many times I’ve seen where a system is working great during the morning shift, but then the evening shift comes in, and then the system starts alarming all over the place.

Just by having two different operators that are using two different procedures for your pipeline operation. The way they start up…

Russel:  Wow. I never even thought about that, but that makes perfect sense.

Giancarlo:  You’re tuning for this data that you’re collecting, and then after the system is delivered, maybe a new operator is hired, or someone is doing something slightly different, for whatever reason. Even that could affect your system.

Russel:  Something as simple as a different pump start sequence.

Giancarlo:  Correct. A lot of things like that could affect it. It’s important to make sure that the operator has a good relationship with the vendor, in order to know how the pipeline works and what new operations are coming or have happened, so it can be tuned and optimized over that first year.

Russel:  I say that all the time — the operator knows their operation. What we know is the technology. What we’ve got to do is work together to figure out how to…Those are two different kinds of knowledge, and they’re both necessary to do this stuff well.

Giancarlo:  Correct. It might take an operator 30 seconds to answer a question about why that pressure dropped that much, while it might take the vendor or leak detection engineer two or three hours of analyzing the flows, the pressures, pump statuses, and set point changes to arrive at the same conclusion.

It’s very important to have a relationship between the vendor and the operator.

Russel:  I think this is one of the reasons that PHMSA pushes so hard to make sure that the control room is involved in decision making about changes to the pipeline, because these things get impacted. They need to be thought through.

Giancarlo:  Absolutely.

Russel:  I want to talk about one last thing. We’re running a bit long. That is the enhanced Real Time Transient Model, which is relatively new, at least in my experience. Why don’t you talk about what the enhanced Real Time Transient Model is?

Giancarlo:  The enhanced Real Time Transient Model uses the same principle as the Real Time Transient Model, but it doesn’t generate an alarm purely based on the difference between the measured and the simulated value right away.

It does some further analysis in order to recognize whether that difference, whether in the flow signal or in inventory, could be related to a leak or not. It’s not just alarming right away, but it’s doing some further analysis in order to determine what the probability of that leak is.

Russel:  My understanding of enhanced Real Time Transient is, to some degree, it’s combining the statistical approach and the Real Time Transient approach, using one to confirm the other. Am I understanding that correctly, or is that…?

Giancarlo:  That is an accurate statement. You’re not just basing the RTTM leak detection based on the difference. You’re doing something after that signal comes in. Don’t alarm right away, but use that leak alarm indication by the RTTM in order to determine what your probability of the leak is after that.

By doing that, you can have a more reliable RTTM type of system. You’re still obviously going to be subject to the accuracy of the instruments, the tuning, the optimizing, and possible errors introduced by the data that you’re putting into the model.

At least you will have a little bit of a buffer after that leak alarm signal comes in. Just don’t alarm based on that difference. Do a little bit of further analysis in order to determine whether that leak alarm is probable or not.
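[Editor’s note: The confirmation step Giancarlo describes — accumulating evidence from the RTTM residual instead of alarming on a single deviation — can be sketched with a Sequential Probability Ratio Test, the statistical test named in the show notes. This is a minimal illustration, not Atmos’s implementation; the leak size, noise level, and error rates below are hypothetical parameters.]

```python
import math

def sprt_leak_monitor(residuals, leak_mean=2.0, sigma=1.0,
                      alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test over RTTM residuals
    (measured flow minus simulated flow).

    H0: residual ~ N(0, sigma)          -> no leak
    H1: residual ~ N(leak_mean, sigma)  -> leak of an assumed size
    alpha/beta are the target false-alarm and missed-leak rates.
    """
    upper = math.log((1 - beta) / alpha)   # cross this: declare leak
    lower = math.log(beta / (1 - alpha))   # cross this: declare no leak
    llr = 0.0
    for x in residuals:
        # Log-likelihood ratio increment for a Gaussian mean shift.
        llr += (leak_mean / sigma ** 2) * (x - leak_mean / 2)
        if llr >= upper:
            return "leak"
        if llr <= lower:
            return "no-leak"
    return "undecided"   # keep accumulating evidence

# A persistent residual near the assumed leak size trips the alarm;
# residuals fluctuating around zero resolve to "no-leak".
print(sprt_leak_monitor([2.1, 1.9, 2.2, 2.0]))       # leak
print(sprt_leak_monitor([0.1, -0.2, 0.0, -0.1]))     # no-leak
```

The point of the test is exactly the “buffer” Giancarlo mentions: a single large residual moves the statistic but does not by itself cross the alarm threshold, so transient mismatches between model and measurement are tolerated while sustained imbalances are flagged.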

Russel:  It’s why we’re going to API 1175, because really, leak detection, it’s a program issue. It’s not a particular technology issue. I certainly think that’s thematic in both these first two conversations we’ve had.

Giancarlo:  Absolutely.

Russel:  Tell us a little bit about the book giveaway. I think the listeners will be interested in that.

Giancarlo:  I think on the notes for this episode, there is going to be a link to our website. The idea there is that you just go to that link, fill out some of your information — name, email address — and ask us whether you want us to email you a copy in PDF, or send you a physical book.

If you fill out that information, we’ll be more than happy to send you a book. The name of the book itself is “Introduction to Pipeline Leak Detection.” It was written by Jun Zhang, who is our director and CEO.

Dr. Jun is actually the person who worked with Shell in order to come up with the Statistical Volume Balance for their leak detection system. Last year, Jun and Michael [Twomey] wrote this book. It’s a non-commercial book, just to provide users with a very good introduction to leak detection.

It doesn’t go into the technical details in too much depth, just enough to cover the bases of every single methodology for doing leak detection to provide broad knowledge to the industry. I think it’s a great book for anyone who wants to know a little bit more about leak detection.

After that, if they want more, then we’ll be more than happy to be there for them.

Russel:  Hope you enjoyed this week’s episode of The Pipeliners Podcast. I enjoyed the conversation with Giancarlo, and we need to say thank you to Atmos International. Atmos is making a special offer to support the podcast.

They’re giving away a free copy of the book “Introduction to Pipeline Leak Detection” to the first five listeners that go to the Pipeliners Podcast site. Go to the show notes, and link through to the Atmos site, where you can register to get a copy of that book, but you got to act fast.

Finally, a reminder before you go that you should register to win our Pipeliners Podcast YETI tumbler. Simply visit pipelinerspodcast.com/win to enter yourself in the drawing.

[background music]

Russel:  Thanks for listening. If you have ideas, questions, or topics you’d be interested in, please let us know on the Contact Us page at pipelinerspodcast.com, or you can reach out to me directly on LinkedIn. Thanks again for listening. I’ll talk to you next week.

Transcription by CastingWords

Pipeliners Podcast © 2019