Imagine a task that once took 11 minutes now finishing in less time than the blink of an eye. Couple that speed with 97% accuracy, and you have the results researchers at Texas A&M University achieved when combining machine learning, neural networks and novel compression tactics in a new project advancing reservoir production forecasts.

Why is the research important? Because better forecasts improve the availability and accuracy of the information oil and gas companies use to make sound financial and operating decisions.

"Production forecasting can be done several ways, but everything depends on it," said researcher Mohammad Elkady. "Any decision – if you want to take out a loan, do an economic study, or make a development phase decision – depends on the forecast because it tells you how much oil, gas or water you're going to produce."

Elkady and fellow researcher Veena Kumar created a flexible method they say could apply to understanding and evaluating any complex reservoir. Both are graduate students in the Harold Vance Department of Petroleum Engineering and work under the guidance of their advisor, Dr. Siddharth Misra, the Ted H. Smith, Jr. '75 and Max R. Vordenbaum '73 DVG Associate Professor.

Misra and Elkady introduced the work in the paper, "Ultrafast Multiphase Production Forecasting for Large Gas Condensate Shale Reservoirs," during the Abu Dhabi International Progressive Energy Congress in October 2023. Misra also presented the work in the paper, "Rapid Production Forecasting for Hydraulically Fractured Wells in Large, Heterogeneous Shale Reservoir," at the Society of Petroleum Engineers' Workshop in New Orleans, Louisiana, in November 2023.

Production forecasting history

To make a profit, oil and gas companies weigh the costs of equipment, personnel and other resources against the hydrocarbons they can profitably produce from subsurface reservoirs. Such decisions would be easy if petroleum engineers could peek inside the earth and see what's there. Since the subsurface is opaque, it's tricky to accurately predict oil recovery from reservoirs as conditions change over time.

In the past, engineers relied on two main forecasting methods. The first and oldest plots a well's declining production rate over time; that decline curve analysis can then help estimate recoveries in nearby or similar wells. The second is newer and uses computational resources to create a visual map of a reservoir, compressing the massive amounts of data gathered from surface and subsurface sensors into simplified renderings that can then be simulated.
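Decline curve analysis is classically done with the Arps equations. As a minimal sketch of the idea – using the standard hyperbolic form and entirely hypothetical well data, not the study's – a fit-and-extrapolate workflow in Python might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: rate at time t from initial rate qi,
    initial decline rate di, and curvature exponent b."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Hypothetical rate history for one well (units arbitrary), with mild noise
t_obs = np.arange(36.0)                     # months on production
rng = np.random.default_rng(0)
q_obs = arps_hyperbolic(t_obs, 950.0, 0.08, 0.6) * rng.normal(1.0, 0.03, t_obs.size)

# Fit the three decline parameters to the observed history
(qi, di, b), _ = curve_fit(
    arps_hyperbolic, t_obs, q_obs,
    p0=(1000.0, 0.1, 0.5),
    bounds=([1.0, 1e-4, 0.01], [5000.0, 1.0, 1.5]),
)

# Extrapolating the fitted curve gives the forecast for future months
t_future = np.arange(36.0, 120.0)
q_forecast = arps_hyperbolic(t_future, qi, di, b)
print(f"qi={qi:.0f}, Di={di:.3f}/month, b={b:.2f}")
```

Fitting and extrapolating such a curve is nearly instantaneous, but it reflects only one well's history; the simulation-based approach trades that simplicity for a physics-informed picture of the whole reservoir.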

Refining data to enhance forecasting

This research is not the first to use machine learning or even neural networks to produce a more accurate forecast, but it is the first to combine those processes with geomodel data compression. Elkady said data compression itself isn't new; it has been used successfully in areas like image and video analysis. High-resolution videos must be compressed when shared around the world over limited bandwidth, yet the compression must be done selectively to keep the subjects true to the original imaging. Though the application in petroleum is novel, data compression makes sense there for similar reasons.

"Geological data usually has a lot of data, and some may not be as important as others," said Elkady. "That's the beauty of data compression; it's not just compression, it extracts the significant features out of these data."

Elkady and Kumar began by creating 4,000 virtual homogeneous, or uniformly structured, reservoirs, each containing 88,000 grid cells. Each cell carried three randomly sampled geological properties, drawn from characteristics such as permeability, porosity, water saturation and fracture spacing. The total data generated for each reservoir came to over a quarter of a million points. The researchers ran these original reservoirs through a simulator to obtain the most accurate production forecasts possible.
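A minimal sketch of that data-generation step might look like the following; the property ranges and distributions are hypothetical, since the article doesn't give the actual sampling bounds:

```python
import numpy as np

N_RESERVOIRS, N_CELLS = 4000, 88_000
rng = np.random.default_rng(42)

# Homogeneous case: one random value per property per reservoir, repeated in
# every cell. The ranges below are illustrative, not the study's bounds.
porosity = rng.uniform(0.05, 0.25, N_RESERVOIRS)                      # fraction
permeability = rng.lognormal(mean=2.0, sigma=1.0, size=N_RESERVOIRS)  # mD
water_saturation = rng.uniform(0.2, 0.6, N_RESERVOIRS)                # fraction

# Expanding one reservoir to its full grid yields 88,000 cells x 3 properties
# = 264,000 values -- the "over a quarter of a million points" per reservoir
grid_0 = np.tile(
    [porosity[0], permeability[0], water_saturation[0]], (N_CELLS, 1)
)
print(grid_0.shape)  # (88000, 3)
```

Because a homogeneous reservoir repeats the same values in every cell, the full grids only need to be materialized when feeding the simulator.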

The researchers then compressed the original data, applying two different methods simultaneously to crunch the numbers down at a ratio of 50,000 to 1. Working in Python, they trained a neural network to perform the production forecast workflow on the compressed reservoir models. Results showed the workflow was much faster than the simulator – under a second for every 700 seconds the simulator took – with an error rate of 3% when compared to the simulated forecasts.
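The paper's network architecture isn't described in the article. A rough, self-contained sketch of the surrogate-modeling idea – compressed geomodel features in, production curve out – using scikit-learn as a stand-in could be:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Stand-in data: 64 compressed geomodel features per reservoir (inputs) and
# 36 monthly production values from a "simulator" (targets). Shapes, sizes,
# and the synthetic relationship are illustrative only.
X = rng.normal(size=(4000, 64))
true_w = rng.normal(size=(64, 36))
y = np.tanh(X @ true_w / 8.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network maps compressed geology to a forecast curve
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Inference replaces a full simulation run -- that is where the speedup lives
pred = model.predict(X_test)
rel_err = np.abs(pred - y_test).mean() / np.abs(y_test).mean()
print(f"mean relative error vs. held-out 'simulated' forecasts: {rel_err:.1%}")
```

Once trained, a forward pass through such a network takes milliseconds, which is how a sub-second forecast can stand in for an 11-minute simulation.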

Next, the researchers used simulated data, generated under Aramco's guidance to mimic actual reservoir information, to create about 3,000 virtual reservoirs that were heterogeneous – complex in structure and content – yet still had the same number of grid cells.

“We used geostatistical tools to generate the geology to be as realistic as we could," said Elkady. "The Aramco team supervised the generation of these datasets to make sure that, from their experience, this looks like a real reservoir."

When that set of comparisons between workflow and simulator forecasts was complete, the researchers added more parameters, starting with six or seven. Eventually, they included human-related parameters, such as an operator choosing how much gas to produce regardless of what the reservoir is capable of delivering – a complex concept for neural networks to factor in, as sketched below.
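One way to picture why that complicates the learning problem: an operator-imposed rate target caps whatever the reservoir could otherwise deliver, so identical geology can produce different profiles. A toy illustration, not the paper's formulation:

```python
import numpy as np

def apply_rate_target(q_capability, q_target):
    """Operator control: the produced rate is the reservoir's deliverability,
    capped at whatever rate the operator chooses to take."""
    return np.minimum(q_capability, q_target)

months = np.arange(48)
q_capability = 900.0 * np.exp(-0.04 * months)        # what the rock could deliver
q_produced = apply_rate_target(q_capability, 500.0)  # plateau, then decline

# In the forecasting model, the chosen target becomes one more input feature:
# identical geology under different operating choices must map to different
# production profiles, which is what makes this hard for the network to learn
print(q_produced[:6])  # flat at 500 while deliverability exceeds the target
```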

"Initially, the amount of accuracy to error was very bad when we added human variations to the data set," Kumar said. "I am proud of getting the error percentage down, but we're looking into the reason for the initial higher percentages."

The model-building stage – including reservoir parameter definitions, geomodel compression and prediction parameter factoring – required about one hour, depending on the data's complexity. Once the models were trained and tuned, the researchers' workflow generated forecasts in under one second, compared with roughly 700 seconds, or about 11 minutes, for a commercial software simulator.

Future direction

Currently, the team is expanding the number of parameters, refining the techniques and input factors, and testing ways to reduce the training time the models require. After that, the researchers want to test the workflow on actual reservoir data, ideally from a reservoir where some production has already occurred.

They are optimistic that the work can be applied effectively in many different formations.

"The goal of refining the multivariate forecast model for long-term projections, potentially over a decade, is one of the approaches we're excited to try out," said Kumar. "Testing the model's effectiveness and precision on a variety of real-world shale gas reservoirs will undoubtedly demonstrate its potential and adaptability in diverse situations."