Achieving High Data Quality Is An Incremental Journey

As the adage goes, “a journey of a thousand miles begins with a single step.” Don’t be daunted by how far you and your team have to travel; focus instead on getting started and determining the steps you can take today. Recognize that these solutions will not be implemented overnight, but the impact on production and operations in your organization will be seen quickly, and it will be substantial enough to be worth the overall effort.

Let’s consider how our organizations might be impacted by an Artificial Intelligence journey and how that interacts with data quality.

An initial consideration is operational change. The impact of this change depends on where an organization is in its digital oilfield transformation. If the organization has yet to make strides toward a digital oilfield, this may be the first time these impacts have been considered.

Collection and storage of data such as sensor streams or well designs are well understood in organizations today, but keeping those records accurately updated may not have been treated as a primary concern. Likewise, collection of detailed maintenance and failure logs may not, historically, have been considered critical.

AI initiatives demand varied, high-quality data. Curating this data requires coordination and cooperation across many departments, along with a significant shift in how highly its maintenance is prioritized.

A second consideration is that the introduction of new techniques in artificial lift monitoring, surveillance, or optimization can be viewed skeptically by operational teams. It is important to reinforce that AI-enabled tools exist to focus scarce operational or production teams on the wells that are most critically impacting production. Predictive diagnostics provide indications of failures much earlier in the cycle, while preventative maintenance analytics can show where wells are not running optimally.

If an organization does not have a strong basis in operating from data, questions will likely arise about the validity of the source data. It is difficult for teams to trust the diagnostics or recommendations provided by a machine learning solution if they don’t first trust the source data on which it is based.

A third consideration is that an organization must genuinely assess where it is and where it must improve in its collection and curation of source data. In some cases, cataloging the sources of data is the natural first step. There may also be quick wins, such as consolidating the variety of names used for wells or locations across all of the systems.
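To make that “quick win” concrete, here is a minimal sketch of how name consolidation might begin. The well names and normalization rules below are hypothetical examples, not from any real system; a production effort would use a curated alias table and domain-specific rules.

```python
import re

# Hypothetical examples: the same physical wells recorded under
# different names in different systems (SCADA, accounting, maintenance).
RAW_NAMES = ["Smith 1-H", "SMITH #1H", "smith 1 h", "Jones No. 14-22", "JONES 14-22"]

def normalize(name: str) -> str:
    """Reduce a well name to a comparison key: uppercase, drop a
    'No.' prefix on well numbers, and strip punctuation and spaces."""
    key = name.upper()
    key = re.sub(r"\bNO\b\.?", "", key)    # drop 'No.' / 'NO'
    key = re.sub(r"[#.\-\s]+", "", key)    # drop '#', '.', '-', whitespace
    return key

# Group raw names by normalized key to reveal likely duplicates.
groups = {}
for raw in RAW_NAMES:
    groups.setdefault(normalize(raw), []).append(raw)

for key, variants in groups.items():
    if len(variants) > 1:
        print(key, "->", variants)
```

Even a simple grouping like this surfaces duplicates for a data steward to confirm and map to one canonical name per well.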

It is reasonable to question the source data being reviewed, because serious issues in collected data are easy to overlook. An anomaly in a signal might be just that, or it might be an indication of some other underlying issue.
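As a simple illustration of flagging such anomalies for review, the sketch below marks readings that deviate sharply from the recent median. The sensor values, window size, and threshold are hypothetical; the point is that a flag only says “look here,” not whether the cause is a bad sensor, a data-entry error, or a real change in well behavior.

```python
import statistics

def flag_anomalies(readings, window=12, threshold=3.0):
    """Flag readings that deviate sharply from the recent median,
    using the median absolute deviation (MAD) as a robust scale."""
    flags = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 3:
            flags.append(False)   # not enough history to judge
            continue
        med = statistics.median(history)
        mad = statistics.median(abs(x - med) for x in history)
        # 1.4826 * MAD approximates a standard deviation for normal data.
        scale = 1.4826 * mad or 1e-9   # guard against a perfectly flat signal
        flags.append(abs(value - med) / scale > threshold)
    return flags

# Hypothetical sensor stream with one obvious spike at index 9.
readings = [100.2, 99.8, 100.1, 100.4, 99.9, 100.0,
            100.3, 99.7, 100.1, 250.0, 100.2, 99.9]
flags = flag_anomalies(readings)   # only the spike is flagged for review
```

A robust statistic like the MAD is used here rather than a plain standard deviation so that the spike itself does not inflate the scale and hide later anomalies.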

As you can see, achieving the highest data quality standards is a process that takes time, an incremental journey, but the value is visible every step of the way.

Please contact us with any questions you may have about data quality. We would love to discuss our solutions for maximizing your oil and gas ROI.
