Good Data Goes Beyond Sensors, with a Client Example

We recently had a client that was seeing odd fluctuations in their gas injection rates, and optimizing gas injection was proving difficult. We looked at the gas injection rate over about 20 days. The data displayed is down-sampled to show an average point every 4 hours, and the candlestick bars show the minimum and maximum value within each of those 4-hour windows. As you can see from the graph, there is a constant change in the injection rate.
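
As a rough illustration of that kind of down-sampling (not the client's actual pipeline), the sketch below resamples a raw injection-rate series into 4-hour mean/min/max points with pandas. The column name, sampling interval, and synthetic values are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical raw sensor readings: one gas-injection-rate value per minute
# over roughly 20 days (names, interval, and values are illustrative).
idx = pd.date_range("2020-01-01", periods=20 * 24 * 60, freq="min")
raw = pd.DataFrame(
    {"gas_injection_rate": 1000 + np.random.normal(0, 50, len(idx))},
    index=idx,
)

# Down-sample to one point every 4 hours: the plotted average plus the
# minimum and maximum that each candlestick bar would span.
candles = raw["gas_injection_rate"].resample("4h").agg(["mean", "min", "max"])

print(candles.head())
```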

During a well review for this cohort of wells, one of the Production Engineers asked why the gas injection rate appeared to be changing on his set of wells. He knew that he was not making these changes and that the compressors were operating correctly. Further review found more wells with similar fluctuating behavior. As the team dug deeper, it was determined that the proximity of the compressor to the injection sensor was causing interference. Acoustic filters were added to the compressors, and the resulting clear, consistent signal is shown in the lower view.

What is important to notice in this example? Don’t just accept that the source data you are looking at is correct. Had OspreyData used the highly variable gas injection rate to make recommendations, the projected injection rates would likely have been consistently wrong. In this case it was a small set of wells, but from an organizational perspective, it likely would have cast doubt on injection recommendations across all of the fields and plays.

This highlights how subtle some issues with source data can be.

Let’s dig a little deeper. What should we consider when evaluating and assessing the source data? Perhaps we should start with what types and sources of data are required to drive predictive models.

When we consider the data required to power machine learning, we must consider the full lifecycle of the well. Most use cases that we at OspreyData have built have moved beyond a set of time-series sensors alone. Well designs and completion reports provide insight into the physical aspects of the well and, in some cases, a measure of the performance expectations for the well. Understanding which maintenance or chemical treatments have or have not been completed also improves a model’s effectiveness.
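
One simple way to picture this is a feature table that joins sensor-derived statistics with static well attributes pulled from design and completion records. The sketch below is a minimal example of that join; every well ID, column name, and value is a hypothetical placeholder.

```python
import pandas as pd

# Hypothetical static attributes from well design / completion reports
# (all identifiers, columns, and values are illustrative assumptions).
well_attrs = pd.DataFrame({
    "well_id": ["W-001", "W-002"],
    "tubing_od_in": [2.875, 3.5],
    "completion_type": ["gas_lift", "gas_lift"],
    "expected_rate_mcfd": [450.0, 600.0],
})

# Hypothetical per-well features derived from the time-series sensors.
sensor_features = pd.DataFrame({
    "well_id": ["W-001", "W-002"],
    "inj_rate_mean": [512.0, 633.0],
    "inj_rate_std": [48.0, 12.0],
})

# The modeling table combines both views of the well, so a model sees the
# physical context alongside the sensor behavior.
features = sensor_features.merge(well_attrs, on="well_id", how="left")
print(features)
```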

For failure diagnostics and detection, detailed failure reports, including work-over and tear-down reports, are critical. How failures and their details are understood and categorized has a major bearing on how models are constructed. The more detailed the data picture we can provide, the more accurate the models can be.
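
One common way such reports feed a detection model is to turn recorded failure dates into labels on the sensor history, flagging the period leading up to each failure as a positive example. The sketch below shows that labeling step under assumed wells, dates, failure modes, and a 14-day lookback window, none of which come from the article.

```python
import pandas as pd

# Hypothetical failure events taken from work-over / tear-down reports
# (well IDs, dates, and failure categories are illustrative assumptions).
failures = pd.DataFrame({
    "well_id": ["W-001", "W-002"],
    "failure_date": pd.to_datetime(["2020-03-15", "2020-04-02"]),
    "failure_mode": ["tubing_leak", "pump_wear"],
})

# Hypothetical daily sensor summary per well.
days = pd.date_range("2020-03-01", "2020-04-10", freq="D")
history = pd.concat(
    [pd.DataFrame({"well_id": w, "day": days}) for w in ["W-001", "W-002"]],
    ignore_index=True,
)

# Label the 14 days leading up to each recorded failure as positive examples,
# so a classifier can learn the pre-failure signature.
lookback = pd.Timedelta(days=14)
history = history.merge(failures, on="well_id", how="left")
history["pre_failure"] = (
    (history["day"] >= history["failure_date"] - lookback)
    & (history["day"] < history["failure_date"])
)
print(history[history["pre_failure"]].head())
```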

We understand that many field-based priorities depend on the perceived production of a well. Developing a stronger understanding of production allocations allows systems to construct better interpretations of differences in production.

The concept of “more data is always better” must be tempered with a thoughtful assessment of how that source data provides additional information.

To hear more about the importance of data quality, access our webcast replay, “Don’t Lose the AI Race: Why You Need a Data Quality Strategy in Oil & Gas”, featuring Ron Frohock (OspreyData’s Chief Technology Officer) and Ken Collins (who has over 25 years of experience in oil and gas services and engineering).
