With advances in digital technology, the use of integrated reservoir models has become commonplace for making informed reservoir development and management decisions. Yet even with faster computers, one of the biggest challenges of such modeling is delivering a model appropriate for the business objective in time to inform the relevant decision.
Several factors contribute to this challenge. First, some models are better suited than others to particular business objectives. For example, a framework model complemented by benchmarking may suffice for early-life volumetric estimates used in screening, whereas a cellular full-field model may be better for longer-term depletion strategies. Second, the complexity of the reservoir itself drives the model design and the techniques used to characterize the reservoir; given typical time constraints, it is often important to include only the necessary detail. Third, the primary data sources used to construct a reservoir model, including core, wireline logs, outcrop analogues, and seismic data and interpretations, are all measured at different scales. Consequently, re-scaling is performed on the source data, and the model itself is often re-scaled (upscaled) for flow simulation. Finally, additional information is frequently acquired during model construction or shortly thereafter. This requires either a model flexible enough to incorporate the new data, or a freeze date after which no additional information is incorporated until after delivery of the model.
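To make the re-scaling point concrete, the following is a minimal sketch of the standard averaging rules commonly used when upscaling fine-scale permeability values into a single coarse-grid value. The function names and the illustrative layer values are hypothetical and not taken from the text; which average is appropriate depends on the flow direction relative to layering.

```python
def arithmetic_mean(ks):
    # Exact for flow parallel to homogeneous layers.
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    # Exact for flow in series (perpendicular to layers).
    return len(ks) / sum(1.0 / k for k in ks)

def geometric_mean(ks):
    # Often used for spatially uncorrelated heterogeneity.
    prod = 1.0
    for k in ks:
        prod *= k
    return prod ** (1.0 / len(ks))

# Hypothetical fine-scale permeabilities for three layers, in millidarcies.
layer_perms = [100.0, 10.0, 1.0]

print(arithmetic_mean(layer_perms))  # 37.0
print(harmonic_mean(layer_perms))    # ~2.70
print(geometric_mean(layer_perms))   # ~10.0
```

The wide spread between the three averages for the same input illustrates why the choice of upscaling rule, like the choice of model detail, should follow from the business question the model is meant to answer.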
Building useful models requires integration across a variety of disciplines, including geoscience, engineering, and petrophysics, throughout the entire life-cycle of the model. From initially framing the problem, through planning, building, and delivering the model, to iterating through updates or additional models, this presentation offers suggestions for delivering appropriate reservoir models as an integrated subsurface team.