April 21, 2016
On data and pipeline integrity
By: Jeffrey Miers

Today in the pipeline and midstream industry, we’re hearing a lot about data.

The US Congress continues to revise the Pipeline Safety Act of 2011, extending the legislation through 2019 and increasing funds for pipeline safety programs and grants. In March, PHMSA announced proposed regulations to update critical safety requirements for natural gas transmission pipelines. Within these evolving regulations is a thematic undercurrent around data, particularly as it relates to the safe operation of assets in the field. Ultimately, that's the primary objective: to make certain that companies are gathering and applying data as effectively as they can to improve the safe operation of assets, be it pipe, compressors, or anything else out there.

Companies are taking stock of the quantity and variety of data they have, and trying to figure out what condition it’s in. Are the data sets current? Do they accurately reflect the state of the infrastructure? Are there processes in place to refresh the data sets and keep them current? Do they have all the data they need?

With respect to the management and integrity of pipelines, regulators want to know that the data is “traceable, verifiable and complete.” In other words, can you trace the data back to its source and document it? In some cases, that could mean going all the way back to the manufacturer of pipes that are more than a half-century old. Some 60 percent of all US pipelines were installed before 1970. But for many pipeline operators, the most basic challenge is compiling a complete set of data.
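To make that standard concrete, here is a minimal sketch of what a “traceable, verifiable and complete” screen over pipeline segment records could look like. The field names (segment_id, source_document, verified_by, and so on) are hypothetical illustrations for this post, not any PHMSA or operator schema:

```python
# Hypothetical audit of one pipeline segment record against the
# "traceable, verifiable and complete" criteria. All field names are
# illustrative assumptions, not a real regulatory schema.

REQUIRED_FIELDS = [
    "segment_id", "material", "diameter_in",
    "install_year", "max_operating_pressure_psi",
]

def audit_record(record: dict) -> dict:
    """Return a simple pass/fail audit for one segment record."""
    missing = [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
    return {
        "segment_id": record.get("segment_id"),
        # Complete: every required attribute is populated.
        "complete": not missing,
        "missing_fields": missing,
        # Traceable: is there a source document to follow back to?
        "traceable": bool(record.get("source_document")),
        # Verifiable: has someone confirmed the values against that source?
        "verifiable": bool(record.get("verified_by")),
    }

if __name__ == "__main__":
    sample = {
        "segment_id": "SEG-1947-0042",
        "material": "steel",
        "diameter_in": 26,
        "install_year": 1947,
        "max_operating_pressure_psi": None,       # gap: missing MAOP
        "source_document": "mill-cert-scan.pdf",  # traceable
        "verified_by": None,                      # not yet verified
    }
    print(audit_record(sample))
```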

For example, one of the things we are doing with Columbia Pipeline Group, through the Intelligent Pipeline Solution by GE and Accenture, is loading much of their data (they’ve got about 15,000 miles of pipe, a lot of it very old, built in the 1940s) into our tool set and generating what we describe as data completeness reports. We’re not yet trying to pass judgment on the quality of the data, but first asking, “Where are the gaps in the data sets you have?” You can then look at how those gaps overlay geographically across the pipeline.
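As a rough illustration of that kind of report (not Columbia’s data or the actual Intelligent Pipeline tooling), the sketch below counts missing attributes per record and rolls the gaps up by milepost so they can be overlaid on the line. The field names and the 100-mile bucketing are assumptions made for the example:

```python
# Sketch of a data completeness report: tally missing required
# attributes and group the gaps by 100-mile stretch of pipeline.
from collections import defaultdict

REQUIRED = ["material", "wall_thickness_in", "install_year", "coating", "seam_type"]

def completeness_report(records):
    """Summarize missing attributes per 100-mile stretch of pipeline."""
    gaps_by_stretch = defaultdict(lambda: defaultdict(int))
    for rec in records:
        stretch = int(rec["milepost"] // 100) * 100  # bucket by 100-mile stretch
        for field in REQUIRED:
            if rec.get(field) in (None, ""):
                gaps_by_stretch[stretch][field] += 1
    return gaps_by_stretch

# Two illustrative records, one from a 1940s-era stretch with sparse data.
records = [
    {"milepost": 12.4, "material": "steel", "install_year": 1947,
     "wall_thickness_in": None, "coating": "", "seam_type": "lap-weld"},
    {"milepost": 118.9, "material": None, "install_year": None,
     "wall_thickness_in": 0.312, "coating": "FBE", "seam_type": None},
]

for stretch, gaps in sorted(completeness_report(records).items()):
    print(f"Mileposts {stretch}-{stretch + 99}: {dict(gaps)}")
```

Bucketing by milepost is just one way to make the gaps geographic; the same tallies could be joined to GIS centerline data and mapped directly.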

Every operator is going to have a different plan for filling those gaps, but one of the ways we use the Intelligent Pipeline Solution is to help companies get started. Some of the tools we show them leverage advanced analytics, analytics that have the potential to yield insights into not only improving the safe operation of assets but ultimately optimizing their performance. The reality, though, is that some companies are not yet able to harness the full potential of their data because they simply don’t have a handle on it. Starting a program around Intelligent Pipeline gives companies an opportunity to use the tools to begin solving those basic data remediation problems and position themselves for a future of greater operational efficiency.
