Increase Operational Efficiency with OspreyData

This week we’re sharing some slides that our Customer Engagement Manager, Jon Snyder, presented at a recent Professional Petroleum Data Management (PPDM) Association virtual luncheon. The slides and associated illustrations highlight the incremental, iterative process by which engineers, supervisors, and operators collaborate to turn data into actionable insights, execute automated workflows, and gain operational efficiencies that lower expenses.

The first step in this process is aligning data with the problems operators are trying to solve. We must ensure that data quality is sufficient to solve those problems and deliver a return on investment. At OspreyData, we evaluate several key areas, listed below, to ensure data quality is sufficient for analytical purposes; a simplified profiling sketch follows the list.

  • Coverage – Are all available data streams represented for each well / lift type across a diverse population of assets? Good data goes beyond sensors; it includes sensor readings from multiple devices, well design, maintenance history, production, and more.
  • Connected – Is the same vocabulary shared across devices? Varying naming conventions cause confusion as teams look in their respective systems for the “other” names of the wells. How easily can data be extracted from core systems and/or repositories for evaluation, profiling, transformation, and modeling?
  • History – Does historical data show changes in well behavior that correlate with problems, and does it cover a population of events with enough variation and examples?
  • Continuity – How much source data is available without gaps or lapses? Faulty sensors and communication failures lead to an incomplete picture.
  • Granularity – What is the sample rate of the data? Low-frequency data can mask problems; with higher-frequency data, we can diagnose earlier and take corrective action.
  • Latency – Is data recent and relevant for modeling? With fresh, near real-time data we can diagnose a problematic event earlier and take corrective action sooner.
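
To make the last three criteria concrete, here is a minimal profiling sketch in Python. It assumes a single sensor’s readings arrive as a timestamped pandas Series; the variable names, the expected sample interval, and the gap heuristic are illustrative assumptions, not OspreyData’s actual assessment tooling.

```python
import pandas as pd

def profile_signal(readings: pd.Series, expected_interval_s: float = 60.0) -> dict:
    """Estimate continuity, granularity, and latency for one sensor stream.

    readings: values indexed by a timezone-aware DatetimeIndex (assumed).
    """
    readings = readings.dropna().sort_index()
    deltas = readings.index.to_series().diff().dt.total_seconds().dropna()

    # Granularity: typical spacing between samples, in seconds.
    median_interval_s = deltas.median()

    # Continuity: fraction of intervals that far exceed the expected sample
    # rate -- a rough proxy for gaps caused by faulty sensors or comms failures.
    gap_fraction = (deltas > 3 * expected_interval_s).mean()

    # Latency: how stale the most recent reading is right now.
    latency_s = (pd.Timestamp.now(tz="UTC") - readings.index[-1]).total_seconds()

    return {
        "median_interval_s": float(median_interval_s),
        "gap_fraction": float(gap_fraction),
        "latency_s": float(latency_s),
    }
```

A profile like this can be run per well and per sensor, with the thresholds for “good enough” set by the problem being solved.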

After profiling the data and confirming it meets these quality thresholds, OspreyData begins tagging and labeling historical data. Using in-house petroleum and artificial-lift expertise, we mark events such as gas interference or gas locking. By analyzing these sub-optimal events, operators gain insight into which mechanisms lead to downtime. With historical data labeled, OspreyData can then use machine learning models and algorithms to detect similar events in near real time, allowing operators to manage or pump by exception.
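
As a rough illustration of that pattern (not OspreyData’s actual models), a labeled history of fixed-length sensor windows can be used to train a simple classifier that then scores incoming windows in near real time. The file names, feature layout, and gas-interference label below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: one row per time window, columns = summary features (e.g. mean intake
# pressure, pump fillage, current variance). y: expert-assigned labels,
# e.g. 0 = normal, 1 = gas interference. Both file names are placeholders.
X = np.load("labeled_windows.npy")
y = np.load("labels.npy")

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

def score_latest(window_features: np.ndarray) -> float:
    """Probability that the most recent window resembles a labeled gas-interference event."""
    return float(model.predict_proba(window_features.reshape(1, -1))[0, 1])
```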

Using the same set of event-detection models, we can create a series of automated workflows with prescribed procedures to mitigate sub-optimal events before they become detrimental. In addition, these automated processes allow control centers and/or supervisors to allocate and manage resources appropriately and efficiently to maintain uptime, which ultimately increases production, reduces costs, and adds valuable time back into our day.
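
A minimal sketch of such a workflow trigger, assuming the detection model emits a per-well event score, might look like the following; the threshold, event type, and recommended action are illustrative, and the real integration points would be the operator’s control-center or ticketing systems.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkItem:
    well_id: str
    event_type: str
    score: float
    recommended_action: str

def dispatch_if_needed(well_id: str, score: float, threshold: float = 0.8) -> Optional[WorkItem]:
    """Open a prescribed-action work item only when the model is confident enough."""
    if score < threshold:
        return None  # below threshold: no action, the well keeps running normally
    return WorkItem(
        well_id=well_id,
        event_type="gas_interference",
        score=score,
        recommended_action="Adjust pump speed per the SOP and verify fillage recovers",
    )
```

Routing these work items to the right team, instead of paging on every raw alarm, is what lets a control center manage by exception.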

The incorporation of advanced analytics and utilization of automated workflows helps operators recoup lost production opportunities, reduce remedial action costs, and extend equipment life with reduced wear and tear.

OspreyData’s Production Intelligence solutions build value into your processes step by step. We can guide your organization through Data Quality Assessments, Event Detection, and Workflow Integrations.

Together, these steps increase your operational efficiency. For more information on how Unified Monitoring and OspreyData can work for you, visit our Unified Monitoring Solutions Page and request an online demo to review at your convenience. If you would like to launch your digital oilfield today, contact us now!
