Data-Driven Fracture Morphology Prognosis from High Pressured Modified Proppants Based on Stochastic-Adam-RMSprop Optimizers; tf.NNR Study

big data and cognitive computing
Article

Dennis Delali Kwesi Wayo 1, Sonny Irawan 1,*, Alfrendo Satyanaga 2,* and Jong Kim 2

1 Department of Petroleum Engineering, School of Mining and Geosciences, Nazarbayev University, Astana 010000, Kazakhstan; dennis.wayo@nu.edu.kz
2 Department of Civil and Environmental Engineering, School of Engineering and Digital Sciences, Nazarbayev University, Astana 010000, Kazakhstan
* Correspondence: irawan.sonny@nu.edu.kz (S.I.); alfrendo.satyanaga@nu.edu.kz (A.S.); Tel.: +7-7172705800 (S.I.); +7-7714912838 (A.S.)

Abstract: Data-driven models with evolutionary optimization algorithms, such as particle swarm optimization (PSO) and ant colony optimization (ACO) for hydraulic fracturing of shale reservoirs, have in recent times been validated as among the best-performing machine learning algorithms. Log data from well-logging tools and physics-driven models are difficult to collate and model to enhance decision-making processes. The study sought to train, test, and validate synthetic data emanating from CMG's numerically propped fracture morphology modeling to support and enhance productive hydrocarbon production and recovery. This data-driven numerical model was investigated for efficient hydraulic-induced fracturing by using machine learning, gradient descent, and adaptive optimizers. The online predictive analysis was conducted using the Google TensorFlow tool with the Tensor Processing Unit (TPU), focusing on linear and non-linear neural network regressions.
A multi-structured dense layer with 1000, 100, and 1 neurons was compiled with mean absolute error (MAE) as the loss function and evaluation metric, concentrating on stochastic gradient descent (SGD), Adam, and RMSprop optimizers at a learning rate of 0.01. The algorithm with the best overall optimization process was found to be Adam, whose error margin was 101.22 and whose accuracy was 80.24% over the entire set of 2000 synthetic data points it trained and tested. Based on fracture conductivity, the data indicate that there was a higher chance of hydrocarbon production recovery using this method.

Keywords: hydraulic fracturing; proppants; numerical modeling; data-driven; neural network optimizers

Citation: Wayo, D.D.K.; Irawan, S.; Satyanaga, A.; Kim, J. Data-Driven Fracture Morphology Prognosis from High Pressured Modified Proppants Based on Stochastic-Adam-RMSprop Optimizers; tf.NNR Study. Big Data Cogn. Comput. 2023, 7, 57. https://doi.org/10.3390/bdcc7020057

Academic Editors: Guarino Alfonso, Rocco Zaccagnino, Emiliano Del Gobbo and Moulay A. Akhloufi

Received: 9 February 2023; Revised: 10 March 2023; Accepted: 20 March 2023; Published: 24 March 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Hydrocarbon production decline [1,2] poses substantive threats to energy sustainability; hence, the demand for resolving this challenge in both conventional and unconventional wells is on a marathon course. The contemporary research community has enhanced the practical and technical hydraulic fracturing [3,4] means to intensify the recovery of oil and gas in unconventional reservoirs. This well stimulation [5,6] method could also be termed fracking [7], which consists of passing high-pressure fluids that are simply made of chemical additives, sand, and water (proppants) for opening and holding up channels for the production of excess stored hydrocarbons.
However, under this technique, it is often linear to investigate and initiate the fracturing process, observe the fluid flow from the fractured formation, and determine the fracture propagation. The primary purpose of hydraulic fracturing is to increase the hydrocarbon productivity index targeted at low-permeability formations, for instance, shale formations [8–10]. Hydrocarbon production decline is mostly attributed to formation damage, one of the problems emanating from poorly designed drilling and completion fluids [11,12]. The leak of these fluids seals off the formation pore throats and void spaces [13,14], preventing the flow of fluids from the formation to the wellbores.

Empirically, it is often prudent to study and design appropriate hydraulic fracturing methods before their inception. Researchers [15,16] have investigated several ways to stimulate non-productive wells coupled with effective predictive analysis by designing numerical models to counter poor flow regimes in the formation. Successful predictive studies are the result of the type and geometric structure of the formation morphology, the type of proppants [17,18] and their mechanical stress capabilities, fracture length, and infinite conductivity.

Based on physics-driven models, Suri-Islam-Hossain (SIH) [19] used an extended finite element method (XFEM) to simulate fluid leak-off effects under proppant transport for fracture propagation. Their hydrodynamic integrated model, as shown in Figure 1, demonstrated an XFEM initial pressure for fracturing set to 7497 psi. The results of their study indicate that the proppants' transport and their relative suspension are largely influenced by an increased rate of injection.

Figure 1. Extended finite element method (XFEM) showing its initial pressure for fracturing, adapted with permission from Ref. [19], 2020, Suri, Y.

However, subsequent physics-driven simulations [20], conducted by Wang et al.
[21], explain how permeability testing for coal bed methane deposits can be carried out safely and effectively without blow-ups. The authors further indicated that the direction of fracture can become uncertain, since fracture channels tend to expand in the direction of principal stress. In Figure 2, Wang et al., using PFC2D, modeled and simulated directional hydraulic fracturing (DHF), whose findings demonstrated that fracture propagation can be regulated using the DHF approach [22], as this overcomes the original or principal stress; for this reason, it is asserted that fracture propagation extends along and perpendicular to the slotting.

Figure 2. Fracture propagation perpendicular to the slotting in sample 4, adapted with permission from Ref. [21], 2022, Wang, K.

Martyushev et al. [23] expounded the use of machine learning (ML) for the predictive optimization of reservoir pressure in directional hydraulic fracturing (DHF) carbonate reservoirs. Their study considered hydraulically fractured Well 423 on the D3fm oil deposit site, as presented in Figure 3. The focus of ML modeling was based simply on the interactions and influences of the neighboring wells (9070, 430, 424, 427, 433) on Well 423, before and after DHF. The relationship for the model was referred to as the coefficient of correlation (r), as demonstrated in Figure 3. The result of their research indicates that the higher the correlation coefficient, the more accurate the reservoir pressure prediction; as demonstrated, Well 423, before and after DHF, presents increased pressure levels, indicating a red region, while low reservoir pressures correspond to a lower correlation coefficient, indicating a yellow and blue region.

Figure 3. Coefficient of correlation (a) before DHF, (b) after DHF, adapted with permission from Ref. [23], 2022, Martyushev, D.A.

However, reservoir pressures migrating from neighboring wells 429, 427, and 424 to Well 423 before DHF present a case where the tendency of a well blowout is obvious while drilling. The ML predictive analysis presented in the case of Martyushev et al. [23] supports managerial decision-making to optimize drilling operations.

Nonetheless, there has also been abundant research on data-driven models for the prediction of hydraulic fracturing well stimulation. Dong et al. [24] optimized fracture parameters using data-driven algorithms.
The authors explain that there is a high cost and driven uncertainty associated with fracture spacing and half-length. For this reason, the research expounded on the use of an evolutionary optimization algorithm (EOA) for parametric fracture optimization. Hence, their resulting numerical simulation, based on a gradient-boosted decision tree, random forest, support-vector machine, and multilayer perception (MLP), demonstrated in Figure 4, shows that among all four production-prediction models, one of the EOAs, i.e., particle swarm optimization (PSO), produced the highest net present value.

Figure 4. EOA-PSO with the highest NPV, adapted with permission from Ref. [24], 2022, Dong, Z.
In recent times and in this current study, the neural network prognosis architectures have not only looked at the deep neural network Keras architectures, such as sequential, functional, and subclassing API analysis, but there has also been an advanced investigation into the use of convolutional neural networks (CNN) and recurrent neural networks (LSTM), with a proposed extension of optimizers. However, the likes of Elbaz and Shen [25–27] have proven in their research the possibility of advancing the neural network architecture prognosis.

In other words, while maintaining the TensorFlow Keras Sequential API architecture, a synthetic dataset for training and testing using the most effective neural network optimizers from the current study is essential for reducing predicted errors in the petroleum fracking sector. The stochastic gradient descent [28,29] algorithm used is evaluated for big datasets, with the intention of selecting batches at random from the total dataset for each iteration. In order to roughly obtain a minimum, this optimizer resorts to shuffling the data at random for each iteration. Most importantly, plain gradient descent is not suitable for large datasets, as the convex algorithm does not randomly shuffle the entire dataset; instead, for every iteration, the whole dataset is used to find the approximate minimum. For this reason, SGD produces a lot of noise, based on the batches for each iteration, and to reach the desired approximate minimum, a higher number of iterations is needed, which brings the total time for computation to a record high. However, it is purported that SGD with higher iterations can optimize noise cancellation. Nonetheless, another means of countering noise production is the extension of SGD with momentum: imagine propping a naturally fractured and low-permeability formation, where the momentum of the proppants in the natural fracture formation gains maximum convergence.
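The minibatch behavior described above, random shuffling at each iteration, gradient noise, and the damping effect of momentum, can be sketched with plain NumPy on a toy least-squares problem. All data, sizes, and values here are illustrative assumptions, not quantities from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: find w minimizing mean((X w - y)^2).
X = rng.normal(size=(2000, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=2000)

def sgd(momentum=0.0, lr=0.01, epochs=20, batch=32):
    w = np.zeros(3)
    v = np.zeros(3)  # velocity term; zero momentum reduces this to plain SGD
    for _ in range(epochs):
        idx = rng.permutation(len(X))  # shuffle the batches each iteration
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            # Noisy minibatch gradient of the mean squared error.
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            v = momentum * v - lr * grad
            w = w + v
    return w

w_plain = sgd(momentum=0.0)
w_mom = sgd(momentum=0.9)
```

With `momentum=0.9` the velocity term averages successive noisy minibatch gradients, which is the noise-countering effect described above.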
Most of all, while considering the momentum, the likelihood that the desired minimum could be reached is high; hence, careful regulation of the number of iterations is needed for better optimization.

Adaptive moment estimation (Adam) [30] is an extension of SGD [31]; whereas the weights of the entire network under training are optimized by a single learning rate in SGD, Adam, on the other hand, concentrates on upgrading each network weight individually. Based on its wide usage, several researchers have indicated it as the benchmark for deep learning and standard optimization approaches, since it does not support overtime computation and requires less memory, thereby reducing the entire cost of computation.
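The contrast between a single global learning rate and the per-weight adaptation of Adam (and of RMSprop, discussed next) can be written as minimal update rules. This is a sketch of the standard textbook formulas with common default hyperparameters, not TensorFlow's internal implementation:

```python
import numpy as np

def rmsprop_step(w, g, state, lr=0.01, rho=0.9, eps=1e-8):
    # Keep a running average of squared gradients; each weight's step is
    # divided by the root of this average, i.e. the "mean square" scaling.
    state["s"] = rho * state["s"] + (1 - rho) * g**2
    return w - lr * g / (np.sqrt(state["s"]) + eps)

def adam_step(w, g, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # Adam tracks a first-moment (mean) and a second-moment (uncentered
    # variance) estimate per weight, bias-corrected by the step count t.
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g**2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)
```

Applying either step repeatedly to the gradient of a simple quadratic drives the weight toward its minimum; because each gradient is divided by its own root-mean-square, the step size stays near the learning rate regardless of the raw gradient's scale.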
While there has been overwhelming research curiosity for the better adaptation of deep learning optimizers in previous studies in the research community [32,33], few intelligent applications based on the root mean square propagation (RMSprop) optimizer have been published. This adaptive optimizer takes its roots from RProp, known as resilient backpropagation. Since RProp contradicts the theory behind stochastic gradient descent, RMSprop was developed as an extension of RProp. As a result, just as Adam focuses on each network weight, so does RMSprop. In this case, a weight's specified learning rate is gradually divided by the size of its most recent gradients, averaged over time and determined using the mean square method. Figure 5 presents an illustrative performance of the current study's choice of gradient and adaptive optimizers for the manipulation and tweaking of the synthetic fracking dataset to optimize the predictive petroleum industry. The previous neural network modeling, as conducted by [34], defined an input shape of (28, 28, 1) and, upon successfully splitting the improvised data at a dtype of float32, built its model using Keras Sequential; with activation functions set to ReLU and SoftMax, it introduced a cross-entropy loss function under the optimizers seen in Figure 5. It is empirically significant to note that the resulting experiment indicates that the Adam optimizer achieved the best-performing algorithm, followed by RMSprop and SGD.

Figure 5. Optimizer performances, adapted with permission from Ref. [34].

Another typical study on various optimizers based on different datasets, conducted by Mohapatra et al. [35], demonstrates the efficacy of AdaSwarm compared to SGD, AdaGrad, AdaDelta, RMSprop, AMSGrad, and Adam, emulating SGD with PSO parameters. The adaptive gradient-based optimizers under a series of compiled models were used for deep learning comparative mean squared and mean absolute error (MSE/MAE) loss function analysis.
The authors, while focusing on swarm intelligence, thus AdaSwarm and the exponentially weighted momentum particle swarm optimizer (EMPSO), whose various parameters were measured against gradient descent (GD), defined the capabilities of these optimizers to execute precise gradient approximations, which further exposes the novelty of their conducted research. Based on the neural network algorithms (EMPSO/AdaSwarm) and subsequent differential and non-differential models proposed by Mohapatra et al. [35], the gradient-free adaptive swarm intelligence algorithm (AdaSwarm) proved superior over other optimizers, such as RMSprop, SGD, and Adam.

In this current study, stochastic gradient descent (SGD), Adam, and RMSprop optimizers for hydrocarbon production recovery predictive analysis were modeled based on high-pressure hydraulic fracturing. Moreover, the gradient descent and adaptive optimizers are used to train and test hydraulic fracturing on numerically modeled datasets, based on the Google TensorFlow machine learning algorithms. A linear and non-linear neural network regression (NNR) based on these selected optimizers was used to optimize highly modified proppants [36] for effective fracture propagation and production recovery.

2. Methods

2.1. Data-Driven Modeling

Building up models generates data that emanates from intelligent tools. Being aware of the difficulty in reading log data, it is practical to use synthetic data for modeling, making a clear-cut validation with real data. However, in the absence of physics-driven simulations, as demonstrated in Figure 1, data-driven model analysis can be the easiest computationally intelligent tool at hand. Moreover, the holistic parameters involved in modeling data generation originate from the initial reservoir conditions, hydraulic fracture characteristics, and hydrocarbon production.
Figure 6 provides the detailed methods and flow chart for an effective propped fracture prognosis.

Figure 6. Flowchart for optimizing hydraulic fracturing.

2.2. Numerical Modeling

Based on a commercial black oil simulator, CMG's integrated third-party geomechanics-based hydraulic fracturing tools, shown in Figure 7, were used to numerically model the data, which generated input and output parameters, with concentrations on porosity (ϕ), height (h), fracture length (Lf), fracture width (wf), fracture permeability (kf), and a productivity index (flowing bottom-hole pressure, Pwf).

The 2000-dataset model was numerically focused on shale formations. The current study's 3D design [37–39] two-phase flow simulation in assumed vertical reservoirs was saturated with oil and gas. The striated vertical and transverse propped fracture propagation of the simulated reservoir obtained its operation perpendicular to its minimum principal stress, yet in the direction of its maximum principal stress. According to Ortiz et al. [40], their study initiated the most appreciable dual-permeability procedure for modeling two-phased shale plays and natural fractures [41]. Their applicable method for simulating naturally induced fractures and hydraulic fractures was made possible by CMG-IMEX. Notwithstanding, input parameters modeled with CMG by Kulga et al. [42] yielded promising hydraulic fracturing [43] parameters for numerically synthesizing the data in Table 1.

Figure 7. CMG hydraulic fracturing simulation in a shale reservoir, adapted with permission from Ref. [40], 2021, Arias Ortiz, D. A.

Table 1. Input Parameters for CMG Numerical Modeling.

Parameter (units)                 Min.      Max.
Reservoir conditions
  Pi [psi]                        500       5000
  T [°F]                          100       300
  Yg                              0.5       0.9
  A [acres]                       1000      2000
  h [ft]                          60        500
  k [md]                          0.00001   0.1
  ϕ [%]                           4         30
Hydraulic fracture parameters
  Lf [ft]                         500       1500
  kf [md]                         2000      100,000
  wf [in]                         0.01      0.4
FBHP
  Pwf [psi]                       510       756
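For illustration, a 2000-row synthetic table spanning Table 1's minimum and maximum bounds can be drawn by uniform sampling. This is only a sketch: the study generated its data with CMG's simulator, not by uniform sampling, and the column names below are shorthand assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000  # dataset size used in the study

# (min, max) bounds taken from Table 1; names abbreviate the table columns.
bounds = {
    "Pi_psi":  (500, 5000),
    "T_F":     (100, 300),
    "Yg":      (0.5, 0.9),
    "A_acres": (1000, 2000),
    "h_ft":    (60, 500),
    "k_md":    (0.00001, 0.1),
    "phi_pct": (4, 30),
    "Lf_ft":   (500, 1500),
    "kf_md":   (2000, 100000),
    "wf_in":   (0.01, 0.4),
    "Pwf_psi": (510, 756),
}

# One uniform column per parameter, each clipped to its Table 1 range.
data = {name: rng.uniform(lo, hi, size=n) for name, (lo, hi) in bounds.items()}
```

A table of this shape is also convenient for exercising the train/test split and the NNR pipeline before the CMG-generated data are available.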
According to the minimum and maximum synthetic data generated, the initiation of propped fractures in vertical shale reservoirs is mostly termed to have initial reservoir conditions with pressures (Pi) of about 500 psi, thermal conditions of 100 degrees Fahrenheit (T), and an area (A) of about 1000 acres. Additionally, while considering an efficient predictive analysis, the synthetic data for hydraulic fractures obtained a maximum fracture length (Lf) of 1500 ft at a height (h) of 500 ft, 30% porosity (ϕ), and a matrix permeability (k) of up to 0.1 mD. Nonetheless, the width of the fracture based on the data was from 0.01 to 0.4 in, to conductively expound the pore channels for higher productivity or to increase flowing bottom-hole pressure.

2.3. Fluid-Fracture Equations

Figure 8 schematically demonstrates a one-wing, infinite, homogeneous, two-dimensional formation hydraulic fracturing model that was originally proposed by Perkins and Kern, also known literally as the P-K equation [44]. This fracture flow diagram depicts how high-pressure proppants or fluids move in the direction of the x-axis with a constant height of h on the y-axis. The diameter of the fracture morphology on the z-axis remains the width. It is interesting to note that the fracture length is exponentially greater than the constant height and width. These are mathematically represented based on the following P-K assumptions:

(a) There is no storage effect nor fluid leak-off.
(b) At the tip, the net pressure remains zero.
(c) Fluids are Newtonian and incompressible.
(d) Fluid injection is assumed to be at a constant volumetric flow rate.
(e) Because much less energy is needed to propagate a fracture than to simply allow the fluid to flow along it, the toughness of the formation can be disregarded.

Figure 8. Illustrative Perkins–Kern–Nordgren one-wing fracture model.
Young modulus [45] (vertical plane), $E'$:

$$E' = \frac{2\sqrt{H^2 - 4y^2}}{w}\,P, \qquad (1)$$

Young modulus (plane strain), $E'$:

$$E' = \frac{E}{1 - v^2}, \qquad (2)$$

Maximum fracture width, $w_m$:

$$w_m = \frac{2H}{E'}\,P, \qquad (3)$$

Fluid continuity [46,47]:

$$\frac{\partial q}{\partial x} + q_L + \frac{\partial A}{\partial t} = 0, \qquad (4)$$

The continuity, based on assumption (a), would be further expressed as:

$$\frac{\partial q}{\partial x} = 0, \qquad (5)$$

Integrating the Newtonian laminar fluid where Equation (3) is at $x = 0$:

$$w_w = 0.38\left(\frac{Q\,\mu\,L}{E'}\right)^{1/4}, \qquad (6)$$

Since one of the P-K assumptions [48] is made on a constant flow rate along the fractured axis, attention is given to the fracture length ($L$), the maximum fracture width at the bottom-hole ($w_w$), and the net pressure at downhole ($P_w$), which are represented as:

$$L = \left(\frac{625\,Q^3 E'}{4096\,\pi^3 \mu H^4}\right)^{1/5} t^{4/5}, \qquad (7)$$

$$w_w = \left(\frac{640\,Q^2 \mu}{\pi^3 E' H}\right)^{1/5} t^{1/5}, \qquad (8)$$

$$P_w = \left(\frac{80}{4\pi^3}\cdot\frac{E'^4 Q^2 \mu}{H^6}\right)^{1/5} t^{1/5}, \qquad (9)$$
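The P-K growth laws of Equations (7)–(9) can be sketched as plain functions. This is a minimal illustration assuming a self-consistent unit system (e.g., SI), with Q taken as the one-wing injection rate and `Ep` standing in for the plane-strain modulus E'; the function names are illustrative, not from the paper.

```python
# Minimal sketch of the no-leak-off P-K growth laws (Equations (7)-(9)),
# assuming consistent units; Q injection rate, Ep plane-strain modulus E',
# mu fluid viscosity, H constant fracture height.
import math

def pk_length(Q, Ep, mu, H, t):
    """Fracture length L(t) ~ t^(4/5), Equation (7)."""
    return (625.0 * Q**3 * Ep / (4096.0 * math.pi**3 * mu * H**4)) ** 0.2 * t**0.8

def pk_width(Q, Ep, mu, H, t):
    """Maximum wellbore width w_w(t) ~ t^(1/5), Equation (8)."""
    return (640.0 * Q**2 * mu / (math.pi**3 * Ep * H)) ** 0.2 * t**0.2

def pk_net_pressure(Q, Ep, mu, H, t):
    """Net downhole pressure, consistent with w_m = 2HP/E' (Equation (3))."""
    return Ep * pk_width(Q, Ep, mu, H, t) / (2.0 * H)
```

Note the characteristic exponents: the length grows as t^(4/5) while the width and net pressure grow only as t^(1/5).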
Nordgren's improved model adds storage and leak-off effects to make the P-K equation more convincing and practical, as presented in Equations (10)–(12) [49,50]:

$$L = \frac{Q}{2\pi C H}\,t^{1/2}, \qquad (10)$$

$$w_w = 4\left(\frac{Q^2 \mu}{\pi^3 E' C H}\right)^{1/4} t^{1/8}, \qquad (11)$$

$$P_w = 2\left(\frac{E'^3 Q^2 \mu}{\pi^3 C H^5}\right)^{1/4} t^{1/8}, \qquad (12)$$

where $C$ is the fluid leak-off coefficient. For hydrocarbon production recovery in vertical shale wells, there is an inflow of the two-phase process.

Further Fracture Assumptions
1. The rate of flow is assumed.
2. Fracture is conducted in vertical wells.
3. Time for injection is considered.
4. Existing proppants at high pressure are included.

2.4. TensorFlow

TensorFlow is an all-inclusive open-source machine learning platform. Its large, versatile ecosystem of tools, libraries, and community resources enables academics to improve the state of the art of machine learning while simultaneously enabling developers to swiftly construct and deploy ML-powered products. TensorFlow was developed by engineers and researchers on the Google Brain team, a division of Google's Machine Intelligence Research department, for the purpose of conducting machine learning and deep neural network research. The technique is versatile enough to be applied in several other industries. TensorFlow offers non-guaranteed backward compatibility for various languages, in addition to the established Python and C++ APIs.

2.4.1. Data Pre-Processing and Splitting

The deep neural network analysis developed for the present study used a linear and a non-linear method while focusing on selected optimizers (SGD, Adam, and RMSprop) under the impact of learning rates, activation, and loss functions. Moreover, there was no need to pre-process or normalize the data because it had already been cleaned before importing, based on the Pandas library. The complete length of the 2000 synthetic data was trained at 80% and tested at 20%.
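The 80/20 partition described above can be sketched with Pandas. The stand-in frame and its column name are placeholders, not the study's confidential dataset; the shuffle-then-cut approach is one common way to realize such a split.

```python
# Sketch of an 80%/20% train/test split with Pandas, mirroring Section 2.4.1.
# The frame below is a placeholder, not the study's confidential data.
import numpy as np
import pandas as pd

def split_80_20(df: pd.DataFrame, seed: int = 0):
    """Shuffle, then return (train, test) as an 80%/20% partition."""
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    cut = int(0.8 * len(shuffled))
    return shuffled.iloc[:cut], shuffled.iloc[cut:]

# stand-in frame mirroring the 2000-row synthetic dataset
df = pd.DataFrame({"feature": np.linspace(0.0, 1.0, 2000)})
train, test = split_80_20(df)
```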
2.4.2. Deep Neural Network (Non-Linear Regression)

The study generated a model for non-linear regression [51–53] to determine the influence and prediction of various fractured input parameters over the production recovery, with Keras sequential dense layers of 1000 and 100 neurons and an output layer shape of 1, shown in Figure 9 and Table 2. Moreover, while maintaining a learning rate of about 0.01, the activation function for the input layer was set to a rectified linear unit (ReLU; thus, a non-linear activation function intended for deep neural networks).

The generated model for training the data was compiled, setting the loss function to the mean absolute error and the optimizers to stochastic gradient descent (SGD), Adam, and RMSprop, respectively, for each of the models' build-ups. Figures 10–15 demonstrate loss curves for various input parameters emanating from the fractured height, width, length, permeability, and porosity of the formation and the fracture-conductivity raw synthetic data, validated in Figure 9.
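A minimal sketch of how such a model could be assembled in Keras follows, using the layer widths of Table 2 (1000, 100, 1), the ReLU input activation, the MAE loss, and the 0.01 learning rate stated in the text. The single input feature and the layer names are assumptions made so that the parameter counts match Table 2.

```python
# Sketch of the non-linear regressor of Section 2.4.2 / Table 2.
# Single input feature assumed; layer names chosen to match Table 2.
import tensorflow as tf

def build_model(optimizer) -> tf.keras.Model:
    """Dense 1000 -> 100 -> 1 regressor, compiled with MAE loss."""
    model = tf.keras.Sequential(
        [
            tf.keras.Input(shape=(1,)),
            tf.keras.layers.Dense(1000, activation="relu", name="Input_layer"),
            tf.keras.layers.Dense(100, name="dense_6"),
            tf.keras.layers.Dense(1, name="output_layer"),
        ],
        name="Proppant_Fracturing_ML_Modeling",
    )
    model.compile(loss="mae", optimizer=optimizer, metrics=["mae"])
    return model

# one compiled model per optimizer, all at the stated 0.01 learning rate
models = {
    "SGD": build_model(tf.keras.optimizers.SGD(learning_rate=0.01)),
    "Adam": build_model(tf.keras.optimizers.Adam(learning_rate=0.01)),
    "RMSprop": build_model(tf.keras.optimizers.RMSprop(learning_rate=0.01)),
}
# models["Adam"].summary() reproduces Table 2: 2000 + 100100 + 101 params
```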
Figure 9. Scattered raw data validation for proppant propagation.

Table 2. Output screen; standard model summary for all training.

Model: "Proppant_Fracturing_ML_Modeling"
Layer (Type)           Output Shape    Param #
Input_layer (Dense)    (None, 1000)    2000
dense_6 (Dense)        (None, 100)     100100
output_layer (Dense)   (None, 1)       101
Total params: 102,201
Trainable params: 102,201
Non-trainable params: 0

Figure 10. Height/Pwf loss curves; (a) SGD final loss = 61.50, (b) Adam final loss = 125.86, (c) RMSprop final loss = 231.91.

Figure 11. Porosity/Pwf loss curves; (a) SGD final loss = 399.11, (b) Adam final loss = 65.46, (c) RMSprop final loss = 106.36.
Figure 12. Length/Pwf loss curves; (a) SGD final loss = 366.46, (b) Adam final loss = 155.50, (c) RMSprop final loss = 174.33.

Figure 13. Width/Pwf loss curves; (a) SGD final loss = 620.66, (b) Adam final loss = 93.45, (c) RMSprop final loss = 124.99.
Figure 14. Permeability/Pwf loss curves; (a) SGD final loss = 61.50, (b) Adam final loss = 72.6, (c) RMSprop final loss = 169.38.

Figure 15. Conductivity/Pwf loss curves; (a) SGD final loss = 61.72, (b) Adam final loss = 94.42, (c) RMSprop final loss = 177.10.
2.4.3. Neural Network (Linear Regression)

Production recovery prediction analysis engaged a linear neural network regressor utilizing the Keras sequential model for the synthetic data. The model was compiled with an input shape of 1 and a linear activation, unlike the ReLU indicated for the non-linear regressor mentioned earlier. However, while the study maintained its reliability, the model was compiled with the same loss function (MAE) and selected optimizers for training with a 0.01 learning rate and an epoch of 100. Figures 16–20 illustrate the predictive version of the linear regression.
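The linear variant can be sketched in the same way: a single dense unit with a linear activation, the MAE loss, a 0.01 learning rate, and 100 epochs. The stand-in data and the SGD choice below are illustrative only; the study ran all three optimizers on its own synthetic dataset.

```python
# Sketch of the linear regressor of Section 2.4.3: input shape of 1,
# linear activation, MAE loss, lr = 0.01, 100 epochs. Data is a stand-in.
import numpy as np
import tensorflow as tf

x = np.linspace(0.0, 1.0, 200).reshape(-1, 1).astype("float32")
y = (3.0 * x + 0.5).astype("float32")  # placeholder target

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1, activation="linear")]
)
model.compile(loss="mae", optimizer=tf.keras.optimizers.SGD(learning_rate=0.01))
history = model.fit(x, y, epochs=100, verbose=0)  # 100 epochs, as in the text
losses = history.history["loss"]
```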
Figure 16. Fracture height linear regressor model and optimizer performances; (a) SGD final loss = 347.88, (b) Adam final loss = 221.07, (c) RMSprop final loss = 220.25.

Figure 17. Porosity linear regressor model and optimizer performances; (a) SGD final loss = 225.58, (b) Adam final loss = 234.23, (c) RMSprop final loss = 225.56.
Figure 18. Permeability linear regressor model and optimizer performances; (a) SGD final loss = 260.98, (b) Adam final loss = 255.49, (c) RMSprop final loss = 253.81.

Figure 19. Fracture length linear regressor model and optimizer performances; (a) SGD final loss = 487.19, (b) Adam final loss = 157.91, (c) RMSprop final loss = 253.81.

Figure 20. Fracture width linear regressor model and optimizer performances; (a) SGD final loss = 255.78, (b) Adam final loss = 251.87, (c) RMSprop final loss = 249.91.
3. Results and Discussion

The performances of the optimizers for the various regressors used to predict hydraulic fracturing and production recovery are compiled in Table 3. For better lay understanding, the average of all optimizers based on the various input parameters was obtained. The stochastic gradient descent, Adam, and RMSprop optimizers indicated very good performance, just as other known optimizers have demonstrated the capability of molding and shaping a fitted model into an accurate form.

Table 3. Keras optimizers for production recovery prediction.
Keras optimizers, non-linear — loss functions/MAE
Parameters   h [ft]   ϕ [%]   Lf [ft]  wf [in]  k [md]  Conductivity [mD.in]  Average
SGD          61.50    399.11  366.46   620.66   61.50   61.72                 261.83
ADAM         125.86   65.46   155.50   93.45    72.6    94.42                 101.22
RMSprop      231.91   106.36  174.33   124.99   169.38  177.10                163.87

Keras optimizers, linear — loss functions/MAE
Parameters   h [ft]   ϕ [%]   Lf [ft]  wf [in]  k [md]  Conductivity [mD.in]  Average
SGD          347.88   225.58  487.19   255.78   260.98  198.30                295.95
ADAM         221.07   234.23  157.91   251.87   255.49  134.83                209.23
RMSprop      220.25   225.56  253.81   249.91   253.81  192.46                232.63

3.1. Non-Linear Optimizers

The performances of the Keras optimizers based on the neural network ReLU compiled the model into a non-linear form.
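The per-optimizer averages in Table 3 can be re-derived from the tabulated per-parameter MAE values; a quick check is sketched below. The SGD and Adam means reproduce the reported 261.83 and 101.22, while the RMSprop mean comes out to roughly 164.01 against the 163.87 printed in the table, presumably a rounding slip in the original.

```python
# Recomputing the Table 3 averages (non-linear case) from the tabulated
# per-parameter MAE values (order: h, phi, Lf, wf, k, conductivity).
NONLINEAR_MAE = {
    "SGD":     [61.50, 399.11, 366.46, 620.66, 61.50, 61.72],
    "Adam":    [125.86, 65.46, 155.50, 93.45, 72.6, 94.42],
    "RMSprop": [231.91, 106.36, 174.33, 124.99, 169.38, 177.10],
}

averages = {name: sum(v) / len(v) for name, v in NONLINEAR_MAE.items()}
best = min(averages, key=averages.get)  # lowest mean MAE wins
```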
The input data parameters, while considering SGD, obtained an average loss function, or mean absolute error, of 261.83; Adam's optimizer computed 101.22, and RMSprop 163.87. The final-loss comparative analysis indicates that the lower the loss, the more accurate the prediction.

3.2. Linear Optimizers

The model was compiled based on the linear activation function, and the input parameters fitted the models to obtain an average Adam optimizer loss function of 209.23. Generally, the linear optimizers for the fracture propagation data were determined to have performed inadequately, in contrast to the same optimizers for non-linear functions. This could be a result of an inadequate number of neurons used, unfavorable learning rates, and a limited number of iterations. However, Figure 21 draws out the entire computation for this study, demonstrating the best optimizer.
As mentioned earlier, the lower the loss, the higher the performance; hence, from the visual bar graph, Adam demonstrated the lowest loss for the synthetic fracture propagation data.

Figure 21. Optimizing Keras optimizers at different final losses.

3.3. Production Recovery Optimization

The synthetic data emanating from CMG modeling was shuffled and indexed to read production recovery, as demonstrated in Figure 21. However, the model used for the earlier prediction indicated that, out of the entire 2000 synthetic datasets replicated, the Adam optimizer prediction, with a 101.22 loss function, was at best 80.24% accurate. Production recovery was based on the flowing bottom-hole pressure; hence, a plot to determine the fracture conductivity, where the propped fracture tends to convey formation fluids into the wellbore, is demonstrated in Figure 22. The measure of permeability and fracture width from the data generated explains that the shale formation under review has a better chance of maximizing hydrocarbon production.

Figure 22. Fracture conductivity validation.
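The conductivity column of Table 3 is reported in mD·in, which suggests the product of fracture permeability and propped width; the helper below assumes that common definition and is illustrative, not taken verbatim from the paper.

```python
# Sketch of the fracture-conductivity measure behind Figure 22 and the
# Table 3 "Conductivity [mD.in]" column, assuming the common definition
# conductivity = fracture permeability x propped width.
def fracture_conductivity(kf_md: float, wf_in: float) -> float:
    """Fracture conductivity in mD-in."""
    return kf_md * wf_in

# Table 1 extremes as an illustration of the achievable range
low = fracture_conductivity(2000.0, 0.01)     # 20 mD-in
high = fracture_conductivity(100000.0, 0.4)   # 40,000 mD-in
```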
3.4. Validation and Limitations

The weight of this study was compared with the work of Dong et al. [24], who recently developed a machine learning algorithm coupled with other multilayer perceptron algorithms, with an emphasis on particle swarm optimization (PSO). After several tweaks, the authors noticed that PSO provided the best results, as indicated in Figure 3. Drawing an intense analysis between the PSO and Adam optimizers, the current study did not model using PSO, which is recommended for consideration in subsequent research, along with AdaSwarm [25], which is embedded in the NumPy and TensorFlow libraries.

Neural network optimizers, as researched by Mohapatra et al. [25], explicitly provide a comparative analysis of various optimizers, which throws enough light on the current study. In addition, while considering the efficacy of SGD, Adam, and RMSprop, it is quite fair to validate their performances with other relevant datasets. Furthermore, since some
Furthermore, since some idation of their optimizers is best for the current study’s non‑linear neural network re‑ of Mohapatra et al.’s derived models consisted of second-order differential equations, gressions. In so doing, it is worth noting that, when the average of the previous study’s validation of their optimizers is best for the current study’s non-linear neural network lossfunctionsfromSGD,Adam,andRMSpropwithholdingtolerancewasdulycompared, regressions. In so doing, it is worth noting that, when the average of the previous study’s Adamstillperformedmuchbetter, evenconsidering other datasets. loss functions from SGD, Adam, and RMSprop withholding tolerance was duly However, it is advisable to keep in mind that there was probably not enough model compared, Adam still performed much better, even considering other datasets. and other hyperparameter tweaking to enhance optimization. In the case of Adam, the However, it is advisable to keep in mind that there was probably not enough model algorithmwasabletoachieveanaccuracyof80.24%. Mostimportantly,Adam’sremaining and other hyperparameter tweaking to enhance optimization. In the case of Adam, the accuracy of 20% could be further optimized to 100% accuracy if long periods of iterations algorithm was able to achieve an accuracy of 80.24%. Most importantly, Adam’s atanepochof,i.e.,500to1000forbothlinearandnon‑linearneuralnetworktrainingcould remaining accuracy of 20% could be further optimized to 100% accuracy if long periods beconsidered,withadecreasinglearning rate of 0.001. of iterations at an epoch of, i.e., 500 to 1000 for both linear and non-linear neural network 4. Conclusions training could be considered, with a decreasing learning rate of 0.001. The data‑driven sensitive machine learning algorithms, with an emphasis on SGD, 4. 
4. Conclusions

The data-driven sensitive machine learning algorithms, with an emphasis on the SGD, Adam, and RMSprop optimizers, have indeed paved an additional artificially intelligent way to optimize synthetic data and its input parameters for fracturing hydrocarbon wells with low permeability indexes, and to optimize production recovery predictions. Moreover, based on high-pressure proppants, the novelty of the study was that it was able to identify, using the Google TensorFlow libraries, that:

• The linear function for the trained deep neural network on the synthetic dataset was not fully optimized, and the weakest optimizer among them was stochastic gradient descent (SGD), with a mean absolute error of 295.95.
• While iterating for a non-linear algorithm, Adam emerged as the best-performing optimizer, with a loss function of 101.22.
• The proliferation of non-linear neural network algorithms for the prediction and optimization of hydraulic fracture morphology is highly recommended.
• The synthetic data and other conventional data are both suitable for machine learning algorithms and for decisive decision-making procedures; the Google TensorFlow libraries present easy access to coding and validation.
• The overall novelty of the study is that it automates data-driven prognosis by optimizing the hydraulic fracture parameters, from complex CMG numerical modeling to using Keras Sequential API algorithms and several optimizer compilations for decision-making analysis.
• This study limits the complexity of physics-driven computational fracking analysis and provides an industrial automation means of predicting the expectations and remedies for fracking petroleum shale reservoirs.

Author Contributions: D.D.K.W. and S.I. designed the numerical models and generated the data. D.D.K.W. computed the machine learning workflow and wrote the manuscript. The methods and results of the manuscript were reviewed by S.I., A.S. and J.K. Project administration and funding acquisition were performed by A.S., J.K. and S.I. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by Nazarbayev University, grant number 11022021CRP1512, and the APC was funded by Nazarbayev University. The authors are grateful for this support. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Nazarbayev University.

Data Availability Statement: The data used is confidential.

Acknowledgments: We are grateful to Nazarbayev University for providing us with the opportunity to continue sharing our work as part of the Collaborative Research Program (CRP) for the periods of 2022–2024 with project number 11022021CRP1512. We again show appreciation for the support of the Faculty-Development Competitive Research Grant for 2020–2022 (batch 2) with project number 08042FD1911. In spite of these, we wholeheartedly thank the authors cited in this piece of writing for their extensive study that promotes knowledge sharing.

Conflicts of Interest: The authors hereby declare that the research presented in this paper was not impacted by any known conflicting financial interests or personal connections.

References
1. Irawan, S.; Kinif, B.I.; Bayuaji, R. Maximizing drilling performance through enhanced solid control system. IOP Conf. Ser. Mater. Sci. Eng. 2017, 267, 012038. [CrossRef]
2. Irawan, S.; Kinif, I.B. Solid Control System for Maximizing Drilling. Drill. InTech 2018, 1, 192. [CrossRef]
3. Gandossi, L.
An Overview of Hydraulic Fracturing and Other Formation Stimulation Technologies for Shale Gas Production; No. EUR 26347 EN, 2013; EU Publications: Luxembourg, 2015. [CrossRef]
4. Li, G.; Song, X.; Tian, S.; Zhu, Z. Intelligent Drilling and Completion: A Review. Engineering 2022, 18, 33–48. [CrossRef]
5. Kundert, D.; Mullen, M. Proper Evaluation of Shale Gas Reservoirs Leads to a More Effective Hydraulic-Fracture Stimulation. In Proceedings of the SPE Rocky Mountain Petroleum Technology Conference, Denver, CO, USA, 14–16 April 2009.
6. Liu, Y.; Zheng, X.; Peng, X.; Zhang, Y.; Chen, H.; He, J. Influence of natural fractures on propagation of hydraulic fractures in tight reservoirs during hydraulic fracturing. Mar. Pet. Geol. 2022, 138, 105505. [CrossRef]
7. Zhao, H.; Liu, C.; Xiong, Y.; Zhen, H.; Li, X. Experimental research on hydraulic fracture propagation in group of thin coal seams. J. Nat. Gas Sci. Eng. 2022, 103, 104614. [CrossRef]
8. Suo, Y.; Su, X.; Wang, Z.; He, W.; Fu, X.; Feng, F.; Pan, Z.; Xie, K.; Wang, G. A study of inter-stratum propagation of hydraulic fracture of sandstone-shale interbedded shale oil. Eng. Fract. Mech. 2022, 275, 108858. [CrossRef]
9. Yang, Y.; Li, X.; Yang, X.; Li, X. Influence of reservoirs/interlayers thickness on hydraulic fracture propagation laws in low-permeability layered rocks. J. Pet. Sci. Eng. 2022, 219, 111081. [CrossRef]
10. Xiong, D.; Ma, X. Influence of natural fractures on hydraulic fracture propagation behaviour. Eng. Fract. Mech. 2022, 276, 108932. [CrossRef]
11. Wayo, D.D.K.; Irawan, S.; Noor, M.Z.B.M.; Badrouchi, F.; Khan, J.A.; Duru, U.I. A CFD Validation Effect of YP/PV from Laboratory-Formulated SBMDIF for Productive Transport Load to the Surface. Symmetry 2022, 14, 17. [CrossRef]
12. Wayo, D.D.K.; Irawan, S.; Khan, J.A.; Fitrianti, F. CFD Validation for Assessing the Repercussions of Filter Cake Breakers; EDTA and SiO2 on Filter Cake Return Permeability. Appl. Artif. Intell. 2022, 36, 2112551. [CrossRef]
13.
Peng, X.; Rao, X.; Zhao, H.; Xu, Y.; Zhong, X.; Zhan, W.; Huang, L. A proxy model to predict reservoir dynamic pressure profile of fracture network based on deep convolutional generative adversarial networks (DCGAN). J. Pet. Sci. Eng. 2022, 208, 109577. [CrossRef]
14. Galkin, S.V.; Martyushev, D.A.; Osovetsky, B.M.; Kazymov, K.P.; Song, H. Evaluation of void space of complicated potentially oil-bearing carbonate formation using X-ray tomography and electron microscopy methods. Energy Rep. 2022, 8, 6245–6257. [CrossRef]
15. Ponomareva, I.N.; Martyushev, D.A.; Govindarajan, S.K. A new approach to predict the formation pressure using multiple regression analysis: Case study from Sukharev oil field reservoir, Russia. J. King Saud Univ.-Eng. Sci. 2022, in press. [CrossRef]
16. Wang, D.B.; Zhou, F.-J.; Li, Y.-P.; Yu, B.; Martyushev, D.; Liu, X.-F.; Wang, M.; He, C.-M.; Han, D.-X.; Sun, D.-L. Numerical simulation of fracture propagation in Russia carbonate reservoirs during refracturing. Pet. Sci. 2022, 19, 2781–2795. [CrossRef]
17. Bessmertnykh, A.; Dontsov, E.; Ballarini, R. The effects of proppant on the near-front behavior of a hydraulic fracture. Eng. Fract. Mech. 2020, 235, 107110. [CrossRef]
18. Yi, S.S.; Wu, C.H.; Sharma, M.M. Proppant distribution among multiple perforation clusters in plug-and-perforate stages. SPE Prod. Oper. 2018, 33, 654–665. [CrossRef]
19. Suri, Y.; Islam, S.Z.; Hossain, M. Proppant transport in dynamically propagating hydraulic fractures using CFD-XFEM approach. Int. J. Rock Mech. Min. Sci. 2020, 131, 104356. [CrossRef]
20. Wu, C.H.; Sharma, M.M. Modeling proppant transport through perforations in a horizontal wellbore. SPE J. 2019, 24, 1777–1789. [CrossRef]
21. Wang, K.; Zhang, G.; Du, F.; Wang, Y.; Yi, L.; Zhang, J. Simulation of directional propagation of hydraulic fractures induced by slotting based on discrete element method. Petroleum 2022, in press. [CrossRef]
22.
Luo, A.; Li, Y.; Wu, L.; Peng, Y.; Tang, W. Fractured horizontal well productivity model for shale gas considering stress sensitivity, hydraulic fracture azimuth, and interference between fractures. Nat. Gas Ind. B 2021, 8, 278–286. [CrossRef]
23. Martyushev, D.A.; Ponomareva, I.N.; Filippov, E.V. Studying the direction of hydraulic fracture in carbonate reservoirs: Using machine learning to determine reservoir pressure. Pet. Res. 2022, in press. [CrossRef]
24. Dong, Z.; Wu, L.; Wang, L.; Li, W.; Wang, Z.; Liu, Z. Optimization of Fracturing Parameters with Machine-Learning and Evolutionary Algorithm Methods. Energies 2022, 15, 6063. [CrossRef]
25. Elbaz, K.; Shen, S.L.; Zhou, A.; Yin, Z.Y.; Lyu, H.M. Prediction of Disc Cutter Life During Shield Tunneling with AI via the Incorporation of a Genetic Algorithm into a GMDH-Type Neural Network. Engineering 2021, 7, 238–251. [CrossRef]
26. Shen, S.L.; Elbaz, K.; Shaban, W.M.; Zhou, A. Real-time prediction of shield moving trajectory during tunnelling. Acta Geotech. 2022, 17, 1533–1549. [CrossRef]
27. Elbaz, K.; Yan, T.; Zhou, A.; Shen, S.L. Deep learning analysis for energy consumption of shield tunneling machine drive system. Tunn. Undergr. Space Technol. 2022, 123, 104405. [CrossRef]
28. Fang, J.; Gong, B.; Caers, J. Data-Driven Model Falsification and Uncertainty Quantification for Fractured Reservoirs. Engineering 2022, 18, 116–128. [CrossRef]
29. Aboosadi, Z.A.; Rooeentan, S.; Adibifard, M. Estimation of subsurface petrophysical properties using different stochastic algorithms in nonlinear regression analysis of pressure transients. J. Appl. Geophys. 2018, 154, 93–107. [CrossRef]
30. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. 2014. Available online: http://arxiv.org/abs/1412.6980 (accessed on 7 January 2023).
31. Kamrava, S.; Tahmasebi, P.; Sahimi, M. Enhancing images of shale formations by a hybrid stochastic and deep learning algorithm. Neural Netw. 2019, 118, 310–320. [CrossRef]
32. Wang, Q.; Song, Y.; Zhang, X.; Dong, L.; Xi, Y.; Zeng, D.; Liu, Q.; Zhang, H.; Zhang, Z.; Yan, R.; et al.
Evolution of corrosion prediction models for oil and gas pipelines: From empirical-driven to data-driven. Eng. Fail. Anal. 2023, 146, 107097. [CrossRef]
33. Liu, Y.Y.; Ma, X.H.; Zhang, X.W.; Guo, W.; Kang, L.X.; Yu, R.Z.; Sun, Y.P. A deep-learning-based prediction method of the estimated ultimate recovery (EUR) of shale gas wells. Pet. Sci. 2021, 18, 1450–1464. [CrossRef]
34. A Comprehensive Guide on Deep Learning Optimizers. Available online: https://www.analyticsvidhya.com/blog/2021/10/a-comprehensive-guide-on-deep-learning-optimizers/ (accessed on 10 February 2023).
35. Mohapatra, R.; Saha, S.; Coello, C.A.C.; Bhattacharya, A.; Dhavala, S.S.; Saha, S. AdaSwarm: Augmenting Gradient-Based Optimizers in Deep Learning with Swarm Intelligence. IEEE Trans. Emerg. Top Comput. Intell. 2022, 6, 329–340. [CrossRef]
36. Hou, L.; Elsworth, D.; Zhang, F.; Wang, Z.; Zhang, J. Evaluation of proppant injection based on a data-driven approach integrating numerical and ensemble learning models. Energy 2023, 264, 126122. [CrossRef]
37. Mukhtar, F.M.; Duarte, C.A. Coupled multiphysics 3-D generalized finite element method simulations of hydraulic fracture propagation experiments. Eng. Fract. Mech. 2022, 276, 108874. [CrossRef]
38. Pezzulli, E.; Nejati, M.; Salimzadeh, S.; Matthäi, S.K.; Driesner, T. Finite element simulations of hydraulic fracturing: A comparison of algorithms for extracting the propagation velocity of the fracture. Eng. Fract. Mech. 2022, 274, 108783. [CrossRef]
39. Ou, C.; Liang, C.; Li, Z.; Luo, L.; Yang, X. 3D visualization of hydraulic fractures using micro-seismic monitoring: Methodology and application. Petroleum 2022, 8, 92–101. [CrossRef]
40. Ortiz, D.A.A.; Klimkowski, L.; Finkbeiner, T.; Patzek, T.W. The effect of hydraulic fracture geometry on well productivity in shale oil plays with high pore pressure. Energies 2021, 14, 7727. [CrossRef]
41. Zhang, Y.; Liu, Z.; Han, B.; Zhu, S.; Zhang, X.
Numerical study of hydraulic fracture propagation in inherently laminated rocks accounting for bedding plane properties. J. Pet. Sci. Eng. 2022, 210, 109798. [CrossRef]
42. Kulga, B.; Artun, E.; Ertekin, T. Development of a data-driven forecasting tool for hydraulically fractured, horizontal wells in tight-gas sands. Comput. Geosci. 2017, 103, 99–110. [CrossRef]
43. Yusof, M.A.M.; Mahadzir, N.A. Development of mathematical model for hydraulic fracturing design. J. Pet. Explor. Prod. Technol. 2015, 5, 269–276. [CrossRef]
44. Nguyen, H.T.; Lee, J.H.; Elraies, K.A. A review of PKN-type modeling of hydraulic fractures. J. Pet. Sci. Eng. 2020, 195, 107607. [CrossRef]
45. Wypych, G. The Effect of Fillers on the Mechanical Properties of Filled Materials. In Handbook of Fillers, 5th ed.; ChemTech Publishing: Toronto, ON, Canada, 2021; pp. 525–608. [CrossRef]
46. Fanchi, J.R. Fluid Flow Equations. In Shared Earth Modeling; Gulf Professional Publishing: Houston, TX, USA, 2002; pp. 150–169. [CrossRef]
47. Fanchi, J.R. Reservoir Simulation. In Integrated Reservoir Asset Management; Elsevier: Amsterdam, The Netherlands, 2010; pp. 223–241. [CrossRef]
48. PKN Hydraulic Fracturing Model—FrackOptima Help. Available online: http://www.frackoptima.com/userguide/theory/pkn.html (accessed on 21 February 2023).
49. Nordgren, R.P. Propagation of a Vertical Hydraulic Fracture. Soc. Pet. Eng. J. 1972, 12, 306–314. [CrossRef]
50. Rahman, M.M.; Rahman, M.K. A review of hydraulic fracture models and development of an improved pseudo-3D model for stimulating tight oil/gas sand. Energy Sources Part A Recovery Util. Environ. Eff. 2010, 32, 1416–1436. [CrossRef]
51. Misra, S.; Li, H. Deep neural network architectures to approximate the fluid-filled pore size distributions of subsurface geological formations. In Machine Learning for Subsurface Characterization; Elsevier: Amsterdam, The Netherlands, 2019; pp. 183–217. [CrossRef]
52. Duru, U.I.; Wayo, D.D.K.; Oguh, R.; Cyril, C.; Nnani, H.
Computational Analysis for Optimum Multiphase Flowing Bottom-Hole Pressure Prediction. Transylv. Rev. 2022, 30, 16010–16023. Available online: http://transylvanianreviewjournal.com/index.php/TR/article/view/907 (accessed on 20 February 2023).
53. Kim, Y.; Satyanaga, A.; Rahardjo, H.; Park, H.; Sham, A.W.L. Estimation of effective cohesion using artificial neural networks based on index soil properties: A Singapore case. Eng. Geol. 2021, 289, 106163. [CrossRef]

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

ISSN: 2504-2289
DOI: 10.3390/bdcc7020057
However, under this technique, it is often linear to investigate and initiate the fracturing process, observe the fluid flow from the fractured formation, and determine the fracture propagation.
The primary purpose of hydraulic fracturing is to increase the hydrocarbon productive index targeted at low-permeable formations, for instance, shale formations [8–10]. Hydrocarbon production decline is mostly attributed to formation damage, which is one of the reasons emanating from poorly designed drilling and completion fluids [11,12]. The leak of these fluids seals off the formation's pore throats and void spaces [13,14], preventing the flow of fluids from the formation to the wellbores.

Empirically, it is often prudent to study and design appropriate hydraulic fracturing methods before their inception. Researchers [15,16] have investigated several ways to stimulate non-productive wells coupled with effective predictive analysis by designing numerical models to counter poor flow regimes in the formation. Successful predictive studies are the result of the type and geometric structure of the formation morphology, the type of proppants [17,18] and their mechanical stress capabilities, fracture length, and infinite conductivity.

Based on physics-driven models, Suri-Islam-Hossain (SIH) [19] used an extended finite element method (XFEM) to simulate fluid leak-off effects under proppant transport for fracture propagation. Their hydrodynamic integrated model, as shown in Figure 1, demonstrated an XFEM initial pressure for fracturing set to 7497 psi. The results of their study indicate that the proppants' transport and their relative suspension are largely influenced by an increased rate of injection.

Figure 1. Extended finite element method (XFEM) showing its initial pressure for fracturing, adapted with permission from Ref. [19], 2020, Suri, Y.

However, subsequent physics-driven simulations [20], conducted by Wang et al. [21], explain how permeability testing for coal bed methane deposits can be carried out safely and effectively without blow-ups. The authors further indicated that the direction of fracture can become uncertain, since fracture channels tend to expand in the direction of principal stress. In Figure 2, Wang et al., using PFC2D, modeled and simulated directional hydraulic fracturing (DHF), whose findings demonstrated that fracture propagation can be regulated using the DHF approach [22], as this overcomes its original or principal stress; for this reason, it is asserted that fracture propagation extends along and perpendicular to the slotting.

Figure 2. Fracture propagation perpendicular to the slotting in sample 4, adapted with permission from Ref. [21], 2022, Wang, K.

Martyushev et al. [23] expounded the use of machine learning (ML) for the predictive optimization of reservoir pressure in directional hydraulic fracturing (DHF) carbonate reservoirs. Their study considered hydraulically fractured Well 423 on the D3fm oil deposit site, as presented in Figure 3. The focus of the ML modeling was based simply on the interactions and influences of the neighboring wells (9070, 430, 424, 427, 433) on Well 423, before and after DHF. The relationship for the model was referred to as the coefficient of correlation (r), as demonstrated in Figure 3. The result of their research indicates that the higher the correlation coefficient, the more accurate the reservoir pressure prediction; as demonstrated, Well 423, before and after DHF, presents increased pressure levels, indicated by a red region, while low reservoir pressures correspond to a lower correlation coefficient, indicated by yellow and blue regions.

Figure 3. Coefficient of correlation (a) before DHF, (b) after DHF, adapted with permission from Ref. [23], 2022, Martyushev, D.A.
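The interwell relationship in Martyushev et al.'s workflow reduces to the coefficient of correlation (r). As a minimal sketch of how r is computed between a target well and a neighbor (the pressure values below are invented for illustration and are not taken from Ref. [23]):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson coefficient of correlation (r) between two pressure series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical monthly reservoir pressures (MPa) for the target well and a neighbor.
well_target   = [21.0, 20.6, 20.1, 19.8, 19.2, 18.9]
well_neighbor = [22.4, 22.1, 21.5, 21.3, 20.8, 20.4]

r = pearson_r(well_target, well_neighbor)
print(f"r = {r:.3f}")  # r near 1 -> strongly coupled pressure behavior
```

A high r between the series would correspond to the red regions in Figure 3, while weakly coupled wells would fall toward the yellow and blue regions.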
However, reservoir pressures migrating from neighboring wells 429, 427, and 424 to Well 423 before DHF present a case where the tendency of a well blowout is obvious while drilling. The ML predictive analysis presented in the case of Martyushev et al. [23] supports managerial decision-making to optimize drilling operations.

Nonetheless, there has also been abundant research on data-driven models for the prediction of hydraulic fracturing well stimulation. Dong et al. [24] optimized fracture parameters using data-driven algorithms. The authors explain that there is a high cost and uncertainty associated with fracture spacing and half-length. For this reason, their research expounded on the use of an evolutionary optimization algorithm (EOA) for parametric fracture optimization. Hence, their resulting numerical simulation, based on a gradient-boosted decision tree, random forest, support-vector machine, and multilayer perceptron (MLP), demonstrated in Figure 4, shows that among all four production-prediction models, one of the EOAs, i.e., particle swarm optimization (PSO), produced the highest net present value.

Figure 4. EOA-PSO with the highest NPV, adapted with permission from Ref. [24], 2022, Dong, Z.
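The EOA step described by Dong et al. can be illustrated with a bare-bones particle swarm optimization loop. The toy objective below merely stands in for a negative-NPV function of two fracture parameters (spacing, half-length); the bounds, swarm size, and coefficients are illustrative assumptions, not values from Ref. [24]:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy stand-in for "negative NPV" of (fracture spacing, half-length);
    # its minimum is placed at (30, 120) for the sketch.
    return (x[..., 0] - 30.0) ** 2 + (x[..., 1] - 120.0) ** 2

# Swarm initialization within illustrative parameter bounds.
lo, hi = np.array([10.0, 50.0]), np.array([60.0, 200.0])
pos = rng.uniform(lo, hi, size=(25, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social coefficients
for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)          # keep particles inside bounds
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()  # best parameters found so far

print(gbest)  # converges toward the toy optimum near (30, 120)
```

In Dong et al.'s workflow, the objective would instead be a trained production-prediction model (e.g., the MLP) evaluated for NPV, with PSO searching the fracture-parameter space.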
In recent times and in this current study, the neural network prognosis architectures In recent times and in this current study, the neural network prognosis architectures have not only looked at the deep neural network Keras architectures, such as sequen‑ have not only looked at the deep neural network Keras architectures, such as sequential, tial, functional, and subclassing API analysis, but there has also been an advance inves‑ functional, and subclassing API analysis, but there has also been an advance investigation tigation on the use of convolutional neural networks (CNN) and recurrent neural net‑ on the use of convolutional neural networks (CNN) and recurrent neural networks works (LSTM), with a proposed extension of optimizers. However, the likes of Elbaz and (LSTM), with a proposed extension of optimizers. However, the likes of Elbaz and Shen Shen[25–27]haveprovenintheirresearchthepossibilityofadvancingtheneuralnetwork [25–27] have proven in their research the possibility of advancing the neural network architectureprognosis. architecture prognosis. Inotherwords,whilemaintainingtheTensorFlowKerasSequentialAPIarchitecture, In other words, while maintaining the TensorFlow Keras Sequential API architecture, a synthetic dataset for training and testing using the most effective neural network opti‑ a synthetic dataset for training and testing using the most effective neural network mizers from the current study is essential for reducing predicted errors in the petroleum optimizers from the current study is essential for reducing predicted errors in the frackingsector. Thestochasticgradientdescent[28,29]algorithmusedisevaluatedforbig petroleum fracking sector. The stochastic gradient descent [28,29] algorithm used is datasets, with the intention of selecting batches at random from the total dataset for each evaluated for big datasets, with the intention of selecting batches at random from the total iteration. 
In order to roughly obtain a minimum, this optimizer sorts to shuffle the data dataset for each iteration. In order to roughly obtain a minimum, this optimizer sorts to at random for each iteration. Most importantly, in the case of gradient descent, it is not shuffle the data at random for each iteration. Most importantly, in the case of gradient suitable for large datasets, as the convex algorithm does not randomly shuffle the entire descent, it is not suitable for large datasets, as the convex algorithm does not randomly dataset, but instead, for every iteration, the whole data is focused on finding the approxi‑ shuffle the entire dataset, but instead, for every iteration, the whole data is focused on mateminimum. Forthisreason,SGDproducesalotofnoise,basedonthebatchesforeach finding the approximate minimum. For this reason, SGD produces a lot of noise, based on iteration, and to reach the desired approximate minimum, a higher number of iterations the batches for each iteration, and to reach the desired approximate minimum, a higher is needed, which brings the total time for computation to a record high. However, it is number of iterations is needed, which brings the total time for computation to a record purported that SGD with higher iterations can optimize noise cancellation. Nonetheless, high. However, it is purported that SGD with higher iterations can optimize noise another means of countering noise production is by the extension of SGD with momen‑ cancellation. Nonetheless, another means of countering noise production is by the tum,imagineproppinganaturallyfracturedandlowpermeableformation,wherethemo‑ extension of SGD with momentum, imagine propping a naturally fractured and low mentum of the proppants in the natural fracture formation gains maximum convergence. 
permeable formation, where the momentum of the proppants in the natural fracture Most of all, while considering the momentum, the likelihood that the desired minimum formation gains maximum convergence. Most of all, while considering the momentum, couldbereachedishigh;hence,carefulregulationofthenumberofiterationsisneededfor the likelihood that the desired minimum could be reached is high; hence, careful betteroptimization. regulation of the number of iterations is needed for better optimization. Adaptive moment estimation (Adam) [30] is an extension of SGD [31]; whereas, the Adaptive moment estimation (Adam) [30] is an extension of SGD [31]; whereas, the weightsoftheentirenetworkundertrainingareoptimizedbyasinglelearningrate,Adam, weights of the entire network under training are optimized by a single learning rate, on the other hand, concentrates on upgrading each network’s weights. Based on its wide Adam, on the other hand, concentrates on upgrading each network’s weights. Based on usage, several researchers have indicated it as the benchmark for deep learning and stan‑ its wid dard optimization e usage, severapproaches, al researchersince s haveit indic doesated not it as the benchmark support overtime computation for deep learn and ing re‑ and st quires anless dardmemory optimizafor tioncomputation, approaches, thereby since it does reducing not support overtime computa the entire cost of computation. tion Whiletherehasbeenoverwhelmingresearchcuriosityforbetteradaptationofdeeplearn‑ and requires less memory for computation, thereby reducing the entire cost of Big Data Cogn. Comput. 2023, 7, x FOR PEER REVIEW 5 of 18 Big DataCogn. Comput.2023,7,57 5 of18 computation. 
While there has been overwhelming research curiosity for better adaptation of deep learning optimizers in previous studies in the research community [32,33], few intelligent applications based on the root mean square propagation (RMSprop) optimizer have been published. This adaptive optimizer takes its roots from RProp, known as resilient backpropagation. Since RProp contradicts the theory behind stochastic gradient descent, RMSprop was developed as an extension of RProp. As a result, just as Adam focuses on each network weight, so does RMSprop. In this case, a weight's specified learning rate is gradually divided by the size of its most recent gradients, averaged over time and determined using the mean square method. Figure 5 presents an illustrative performance of the current study's choice of gradient and adaptive optimizers for the manipulation and tweaking of the synthetic fracking dataset to optimize the predictive petroleum industry.
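The per-weight scaling that RMSprop performs, dividing each step by a running root mean square of recent gradients, can be sketched as (illustrative toy example with an assumed objective, decay rate, and step count, not the study's code):

```python
# Illustrative sketch of RMSprop on f(w) = (w - 3)^2: an exponentially
# weighted moving average of squared gradients (cache) divides each weight's
# learning rate, so steep directions are damped and flat ones amplified.

def rmsprop(lr=0.01, rho=0.9, eps=1e-8, steps=600):
    w, cache = 0.0, 0.0
    for _ in range(steps):
        g = 2.0 * (w - 3.0)                      # gradient of (w - 3)^2
        cache = rho * cache + (1 - rho) * g * g  # running mean of g^2
        w -= lr * g / (cache ** 0.5 + eps)       # per-weight scaled step
    return w

print(rmsprop())  # approaches the minimizer w = 3
```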
The previous neural network modeling, as conducted by [34], defined an input shape of 28, 28, 1; upon successfully splitting the improvised data at a dtype of float32, it built its model using Keras Sequential, with activation functions set to ReLU and SoftMax, and introduced a cross-entropy loss function under the optimizers seen in Figure 5. It is empirically significant to note that the resulting experiment indicates that the Adam optimizer achieved the best-performing algorithm, followed by RMSprop and SGD.

Figure 5. Optimizers performances, adapted with permission from Ref. [34].

Another typical study on various optimizers based on different datasets, conducted by Mohapatra et al. [35], demonstrates the efficacy of AdaSwarm compared to SGD, AdaGrad, AdaDelta, RMSprop, AMSGrad, and Adam, emulating SGD with PSO parameters. The adaptive gradient-based optimizers under a series of compiled models were used for deep learning comparative mean squared and mean absolute error (MSE/MAE) loss function analysis.
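The two loss functions compared in that analysis behave quite differently on outliers; a small self-contained sketch (the sample values below are invented, not data from [35]):

```python
# Illustrative comparison of the MSE/MAE loss functions: MAE penalizes
# residuals linearly while MSE squares them, so one large miss dominates
# MSE far more than MAE. The sample values are invented for illustration.

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

y_true = [500.0, 520.0, 540.0, 560.0]
y_pred = [505.0, 515.0, 545.0, 660.0]  # three small misses, one large one

print(mae(y_true, y_pred))  # 28.75
print(mse(y_true, y_pred))  # 2518.75
```

This robustness to outliers is one common motivation for the MAE loss that the current study adopts.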
The authors, while focusing on swarm intelligence, thus AdaSwarm and the exponentially weighted momentum particle swarm optimizer (EMPSO), whose various parameters were measured against gradient descent (GD), defined the capabilities of these optimizers to execute precise gradient approximations, which further exposes the novelty of their conducted research. Based on the neural network algorithms (EMPSO/AdaSwarm) and subsequent differential and non-differential models proposed by Mohapatra et al. [35], it resulted that the gradient-free adaptive swarm intelligence algorithm (AdaSwarm) had proven superior over other optimizers, such as RMSprop, SGD, and Adam.

In this current study, stochastic gradient descent (SGD), Adam, and RMSprop optimizers for hydrocarbon production recovery predictive analysis were modeled based on high-pressure hydraulic fracturing. Moreover, the concentration of gradient descent and adaptive optimizers is used to train and test hydraulic fracturing on numerically modeled datasets, based on the Google TensorFlow machine learning algorithms.
A linear and adaptive optimizers is used to train and test hydraulic fracturing on numerically and non‑linear neural network regression (NNR) based on these selected optimizers was used to optimize highly modified proppants [ 36] for effective fracture propagation and productionrecovery. Big Data Cogn. Comput. 2023, 7, x FOR PEER REVIEW 6 of 18 modeled datasets, based on the Google TensorFlow machine learning algorithms. A linear and non-linear neural network regression (NNR) based on these selected optimizers was used to optimize highly modified proppants [36] for effective fracture propagation and production recovery. 2. Methods Big DataCogn. Comput.2023,7,57 6 of18 2.1. Data-Driven Modeling Building up models generates data that emanates from intelligent tools. Being aware of t2.heMethods difficulty in reading log data, it is practical to use synthetic data for modeling, 2.1. Data‑DrivenModeling making a clear-cut validation with real data. However, in the absence of physics-driven Buildingupmodelsgeneratesdatathatemanatesfromintelligenttools. Beingaware simulations, as demonstrated in Figure 1, data-driven model analysis can be the easiest ofthedifficultyinreadinglogdata,itispracticaltousesyntheticdataformodeling,mak‑ computationally intelligent tool at hand. Moreover, the wholistic parameters involved in ingaclear‑cutvalidationwithrealdata. However,intheabsenceofphysics‑drivensimula‑ modeling data generation originate from the initial reservoir conditions, hydraulic tions,asdemonstratedinFigure1,data‑drivenmodelanalysiscanbetheeasiestcomputa‑ fractionally ture chara intelligent cteristi toolcats, hand. and hydrocarbon Moreover,thewholistic production. Fig parametersinvolv ure ed6 provides the detaile inmodeling d data generation originate from the initial reservoir conditions, hydraulic fracture charac‑ methods and flow chart for an effective propped fracture prognosis. teristics, and hydrocarbon production. 
Figure 6 provides the detailed methods and flow chart for an effective propped fracture prognosis.

Figure 6. Flowchart for optimizing hydraulic fracturing.

2.2. Numerical Modeling
Based on a commercial black oil simulator, CMG's integrated third-party geomechanics-based hydraulic fracturing tools, shown in Figure 7, were used to numerically model the data, which generated input and output parameters, with concentrations on porosity (ϕ), height (h), fracture length (Lf), fracture width (wf), fracture permeability (kf), and a productivity index (flowing bottom-hole pressure, Pwf).

The 2000-dataset model was numerically focused on shale formations. The current study's 3D design [37–39] two-phase flow simulation in assumed vertical reservoirs was saturated with oil and gas.
The striated vertical and transverse propped fracture propagation of the simulated reservoir obtained its operation perpendicular to its minimum principal stress, yet in the direction of its maximum principal stress. According to Ortiz et al. [40], their study initiated the most appreciable dual-permeability procedure for modeling two-phased shale plays and natural fractures [41]. Their applicable method for simulating naturally induced fractures and hydraulic fractures was made possible by CMG-IMEX. Notwithstanding, input parameters modeled with CMG by Kulga et al. [42] yielded promising hydraulic fracturing [43] parameters for numerically synthesizing the data in Table 1.

Figure 7. CMG hydraulic fracturing simulation in shale reservoir, adapted with permission from Ref. [40], 2021, Arias Ortiz, D. A.

Table 1. Input Parameters for CMG Numerical Modeling.

      Reservoir Conditions                                   Hydraulic Fracture Parameters        FBHP
      Pi [psi]  T [°F]  Yg   A [acres]  h [ft]  k [md]    ϕ [%]  Lf [ft]  kf [md]   wf [in]   Pwf
Min.  500       100     0.5  1000       60      0.00001   4      500      2000      0.01      510
Max.  5000      300     0.9  2000       500     0.1       30     1500     100,000   0.4       756

According to the minimum and maximum synthetic data generated, the initiation of propping fractures in vertical shale reservoirs is mostly termed to have initial reservoir conditions with pressures (Pi) of about 500 psi, thermal conditions of 100 degrees Fahrenheit (T), and an area (A) of about 1000 acres. Additionally, while considering an efficient predictive analysis, the synthetic data for hydraulic fractures obtained a maximum fracture length (Lf) of 1500 ft at a height (h) of 500 ft, 30% porosity (ϕ), and a formation permeability (k) of 0.1 mD. Nonetheless, the width of the fracture based on the data ranged from 0.01 to 0.4 in, to conductively expound the pore channels for higher productivity or to increase flowing bottom-hole pressure.

2.3. Fluid-Fracture Equations
Figure 8 schematically demonstrates a one-wing infinite homogeneous two-dimensional formation hydraulic fracturing model that was originally proposed by Perkins and Kern, also known literally as the P-K equation [44]. This fracture flow diagram depicts how high-pressure proppants or fluids move in the direction of the x-axis with a constant height of h on the y-axis. The diameter of the fracture morphology on the z-axis remains the width. It is interesting to note that the fracture length grows to be far greater than the constant height and width. These are mathematically represented based on the following P-K assumptions:
(a) There is no storage effect nor fluid leak-off.
(b) At the tip, the net pressure remains zero.
(c) Fluids are Newtonian and incompressible.
(d) Fluid injection is assumed to be at a constant volumetric flow rate.
(e) Because much less energy is needed to propagate a fracture than to simply allow the fluid to flow along it, the toughness of the formation can be disregarded.

Figure 8. Illustrative Perkins–Kern–Nordgren one-wing fracture model.
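The claim that length outgrows width can be illustrated numerically: in the classical no-leak-off P-K solution (cf. Equations (7) and (8)), length scales as t^(4/5) while width scales only as t^(1/5). A minimal sketch with assumed reference values:

```python
# Illustrative P-K growth-rate sketch: fracture length ~ t^(4/5), maximum
# width ~ t^(1/5). The reference length and width at t = 1 are assumed
# values, used only to compare the two growth rates.

def pk_scaling(t, L_ref=100.0, w_ref=0.05):
    length = L_ref * t ** 0.8  # L ~ t^(4/5)
    width = w_ref * t ** 0.2   # w ~ t^(1/5)
    return length, width

L1, w1 = pk_scaling(1.0)
L10, w10 = pk_scaling(10.0)
# Over one decade of injection time, length grows ~6.3x, width only ~1.6x.
print(L10 / L1, w10 / w1)
```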
Young's modulus [45] (vertical plane), E':

    E' = \frac{2\sqrt{H^2 - 4y^2}}{w} P,    (1)

Young's modulus (plane strain), E':

    E' = \frac{E}{1 - v^2},    (2)

Maximum fracture width, w_m:

    w_m = \frac{2H}{E'} P,    (3)

Fluid continuity [46,47]:

    \frac{\partial q}{\partial x} + q_L + \frac{\partial A}{\partial t} = 0,    (4)

The continuity, based on assumption (a), would be further expressed as:

    \frac{\partial q}{\partial x} = 0,    (5)

Integrating the Newtonian laminar fluid where Equation (3) is at x = 0:

    w_0 = 0.38 \left( \frac{Q \mu L}{E'} \right)^{1/4},    (6)

Since one of the P-K assumptions [48] is made on a constant flow rate along the fractured axis, attention is given to the fracture length (L), the maximum fracture width at the bottom-hole (w_0), and the net pressure at downhole (P_0), which are represented as:

    L = \left( \frac{625\, Q^3 E'}{4096\, \pi^3 \mu H^4} \right)^{1/5} t^{4/5},    (7)

    w_0 = \left( \frac{640\, Q^2 \mu}{\pi^3 E' H} \right)^{1/5} t^{1/5},    (8)

    P_0 = \left( \frac{80}{4 \pi^3} \times \frac{E'^4 Q^2 \mu}{H^6} \right)^{1/5} t^{1/5},    (9)

Nordgren's improved model adds storage and leak-off effects to make the P-K equation more convincing and practical, as presented in Equations (10)–(12) [49,50]:

    L = \frac{Q}{2 \pi C H}\, t^{1/2},    (10)

    w_0 = 4 \left( \frac{Q^2 \mu}{\pi^3 E' C H} \right)^{1/4} t^{1/8},    (11)

    P_0 = 2 \left( \frac{E'^3 Q^2 \mu}{\pi^3 C H^5} \right)^{1/4} t^{1/8},    (12)

For hydrocarbon production recovery in vertical shale wells, there is an inflow of the two-phase process.

Further Fracture Assumptions
1. The rate of flow is assumed.
2. Fracture is conducted in vertical wells.
3. Time for injection is considered.
4. Existing proppants at high pressure are included.

2.4. TensorFlow
TensorFlow is an all-inclusive open-source machine learning platform. Its large, versatile ecosystem of tools, libraries, and community resources enables academics to improve the state of the art of machine learning while simultaneously enabling developers to swiftly construct and deploy ML-powered products. TensorFlow was developed by engineers and researchers on the Google Brain team, a division of Google's Machine Intelligence Research department, for the purpose of conducting machine learning and deep neural network research. The technique is versatile enough to be applied in several other industries. TensorFlow offers non-guaranteed backward compatibility for various languages, in addition to established Python and C++ APIs.

2.4.1. Data Pre-Processing and Splitting
In spite of this, the deep neural network analysis developed for the present study used a linear and a non-linear method while focusing on selected optimizers (SGD, Adam, and RMSprop) under the impact of learning rates, activation, and loss functions. Moreover, there was no need to pre-process or normalize the data because it had already been cleaned before importing, based on the Pandas library. The complete length of the 2000 synthetic data was trained at 80% and tested at 20%.

2.4.2.
Deep Neural Network (Non-Linear Regression)
The study generated a model for non-linear regression [51–53] to determine the influence and prediction of various fractured input parameters over the production recovery, with a Keras Sequential stack of stable input dense layers of 1000 and 100 neurons and an output layer shape of 1, shown in Figure 9 and Table 2. Moreover, while maintaining a learning rate of about 0.01, the activation function for the input layer was set to a rectified linear unit (ReLU; thus, a non-linear activation function intended for deep neural networks).

The generated model for training the data was compiled, setting the loss function to the mean absolute error and the optimizer to stochastic gradient descent (SGD), Adam, and RMSprop, respectively, for each of the models' build-ups. Figures 10–15 demonstrate loss curves for various input parameters emanating from the fractured height, width, length, permeability, and porosity of the formation and the fractured conductivity raw synthetic data, validated in Figure 9.
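Table 2's parameter counts can be reproduced directly from the layer sizes: a Dense layer with n_in inputs and n_out units holds (n_in + 1) × n_out weights and biases, and the 2000 parameters of the first layer imply a single input feature per model (each fracture parameter is regressed against Pwf separately). A quick check (illustrative, not the study's code):

```python
# Reproducing the Param # column of Table 2 from the dense-layer sizes
# (1 input feature -> 1000 -> 100 -> 1). A Dense layer with n_in inputs
# and n_out units has (n_in + 1) * n_out parameters (weights + biases).

def dense_params(n_in, n_out):
    return (n_in + 1) * n_out

layer_sizes = [(1, 1000), (1000, 100), (100, 1)]
param_counts = [dense_params(i, o) for i, o in layer_sizes]

print(param_counts)       # [2000, 100100, 101]
print(sum(param_counts))  # 102201, matching "Total params: 102,201"
```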
Figure 9. Scattered raw data validation for proppant propagation.

Table 2.
Output screen; standard model summary for all training.

Model: "Proppant_Fracturing_ML_Modeling"
Layer (Type)           Output Shape    Param #
Input_layer (Dense)    (None, 1000)    2000
dense_6 (Dense)        (None, 100)     100100
output_layer (Dense)   (None, 1)       101
Total params: 102,201
Trainable params: 102,201
Non-trainable params: 0

Figure 10. Height/Pwf loss curves; (a) SGD final loss = 61.50, (b) Adam final loss = 125.86, (c) RMSprop final loss = 231.91.

Figure 11. Porosity/Pwf loss curves; (a) SGD final loss = 399.11, (b) Adam final loss = 65.46, (c) RMSprop final loss = 106.36.

Figure 12. Length/Pwf loss curves; (a) SGD final loss = 366.46, (b) Adam final loss = 155.50, (c) RMSprop final loss = 174.33.

Figure 13. Width/Pwf loss curves; (a) SGD final loss = 620.66, (b) Adam final loss = 93.45, (c) RMSprop final loss = 124.99.

Figure 14. Permeability/Pwf loss curves; (a) SGD final loss = 61.50, (b) Adam final loss = 72.6, (c) RMSprop final loss = 169.38.

Figure 15. Conductivity/Pwf loss curves; (a) SGD final loss = 61.72, (b) Adam final loss = 94.42, (c) RMSprop final loss = 177.10.

2.4.3.
Neural Network (Linear Regression)
Production recovery prediction analysis engaged a linear neural network regressor utilizing the Keras Sequential model for the synthetic data. The model was compiled with an input shape of 1 and a linear activation, unlike the ReLU indicated for the non-linear regressor mentioned earlier. However, while the study maintained its reliability, the model was compiled with the same loss function (MAE) and selected optimizers for training the model with a 0.01 learning rate and an epoch of 100. Figures 16–20
illustrate the predictive version of the linear regression.

Figure 16. Fracture height linear regressor model and optimizer performances; (a) SGD final loss = 347.88, (b) Adam final loss = 221.07, (c) RMSprop final loss = 220.25.

Figure 17. Porosity linear regressor model and optimizer performances; (a) SGD final loss = 225.58, (b) Adam final loss = 234.23, (c) RMSprop final loss = 225.56.

Figure 18.
Permeability linear regressor model and optimizer performances; (a) SGD final loss = 260.98, (b) Adam final loss = 255.49, (c) RMSprop final loss = 253.81.

Figure 19. Fracture length linear regressor model and optimizer performances; (a) SGD final loss = 487.19, (b) Adam final loss = 157.91, (c) RMSprop final loss = 253.81.

Figure 20. Fracture width linear regressor model and optimizer performances; (a) SGD final loss = 255.78, (b) Adam final loss = 251.87, (c) RMSprop final loss = 249.91.

3.
Results and Discussion The performances of optimizers for the various regressors used to predict hydraulic The performances of optimizers for the various regressors used to predict hydraulic The performances of optimizers for the various regressors used to predict hydraulic fracturing and production recovery are compiled in Table 3. For better lay understanding, fracturing and production recovery are compiled in Table 3. For better lay understand‑ fracturing and production recovery are compiled in Table 3. For better lay understanding, the average of all optimizers based on the various input parameters was obtained. ing, the average of all optimizers based on the various input parameters was obtained. the average of all optimizers based on the various input parameters was obtained. How However, ever,Stochastic Stochasticdescent, descent, Ad Adam,am, andand RMSprop RMSpro optimizers p optimize indicated rs indicated verygood very goo perfor‑ d However, Stochastic descent, Adam, and RMSprop optimizers indicated very good mance performanc optimizers, e optimizer just as s, jother ust as ot known her knoptimizers own optimize demonstrated rs demonstthe rated t capabilities he capabof ilitmold‑ ies of performance optimizers, just as other known optimizers demonstrated the capabilities of ingandshapingthe fittedmodel into anaccurate form. molding and shaping the fitted model into an accurate form. molding and shaping the fitted model into an accurate form. T Table 3. able3. Keras Keras optimizers optimizers for for produc production tion recovery prediction recoveryprediction.. Table 3. Keras optimizers for production recovery prediction. 
Loss functions (MAE)

                         h [ft]   ϕ [%]    Lf [ft]   wf [in]   k [md]   Conductivity [mD.in]   Average
Non-Linear   SGD         61.50    399.11   366.46    620.66    61.50    61.72                  261.83
(Keras       ADAM        125.86   65.46    155.50    93.45     72.6     94.42                  101.22
optimizers)  RMSprop     231.91   106.36   174.33    124.99    169.38   177.10                 163.87
Linear       SGD         347.88   225.58   487.19    255.78    260.98   198.30                 295.95
(Keras       ADAM        221.07   234.23   157.91    251.87    255.49   134.83                 209.23
optimizers)  RMSprop     220.25   225.56   253.81    249.91    253.81   192.46                 232.63

3.1. Non-Linear Optimizers

The performances of the Keras optimizers were evaluated on a model compiled with the neural network ReLU activation, which gives the model its non-linear form.
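The compiled architecture described earlier (a Sequential network of dense layers with 1000, 100, and 1 neurons, mean absolute error as both loss and metric, and the SGD, Adam, and RMSprop optimizers at a 0.01 learning rate) can be sketched in TensorFlow/Keras roughly as follows. This is a minimal illustrative sketch, not the authors' code; in particular, the five-column input width (h, ϕ, Lf, wf, k) and the placement of the ReLU activations are assumptions:

```python
import tensorflow as tf

def build_model(optimizer_name="adam", activation="relu", n_features=5):
    """Dense 1000-100-1 regressor compiled with MAE, per the study's setup."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(1000, activation=activation),
        tf.keras.layers.Dense(100, activation=activation),
        tf.keras.layers.Dense(1),  # single regression output (e.g., recovery)
    ])
    optimizers = {  # the three optimizers compared, all at a 0.01 learning rate
        "sgd": tf.keras.optimizers.SGD(learning_rate=0.01),
        "adam": tf.keras.optimizers.Adam(learning_rate=0.01),
        "rmsprop": tf.keras.optimizers.RMSprop(learning_rate=0.01),
    }
    model.compile(optimizer=optimizers[optimizer_name],
                  loss="mean_absolute_error",
                  metrics=["mean_absolute_error"])
    return model

model = build_model("adam")
# model.fit(X_train, y_train, epochs=100) would then train as in the study.
```

Swapping activation="relu" for activation="linear" gives the linear-regression variant discussed in Section 3.2.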
The input data parameters, while considering SGD, obtained an average loss function, or mean absolute error, of 261.83; Adam's optimizer computed as 101.22, and RMSprop as 163.87. The final-loss comparative analysis indicates that the lower the loss, the more accurate the prediction.

3.2. Linear Optimizers

The model was compiled based on the linear activation function, and the input parameters fitted the models to obtain an average Adam optimizer loss function of 209.23. Generally, the linear optimizers for the fracture propagation data were determined to have performed inadequately in contrast to the same optimizers for non-linear functions. This could be a result of an inadequate number of neurons, unfavorable learning rates, and a limited number of iterations. However, Figure 21 draws out the entire computation for this study, demonstrating the best optimizer.
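For intuition, the three optimizers compared in Table 3 differ only in how each gradient step is scaled. The following self-contained toy sketch (standard textbook forms of the update rules with common default decay constants; not the study's code) minimizes a one-parameter MAE-style loss with each rule:

```python
import math

def train(optimizer, steps=200, lr=0.01):
    """Minimize the MAE-style loss f(w) = |w - 3| from w = 0."""
    w, m, v = 0.0, 0.0, 0.0
    b1, b2, rho, eps = 0.9, 0.999, 0.9, 1e-8   # common Keras-style defaults
    for t in range(1, steps + 1):
        g = 1.0 if w > 3.0 else -1.0            # subgradient of |w - 3|
        if optimizer == "sgd":                  # plain gradient step
            w -= lr * g
        elif optimizer == "rmsprop":            # divide by a running RMS of gradients
            v = rho * v + (1 - rho) * g * g
            w -= lr * g / (math.sqrt(v) + eps)
        elif optimizer == "adam":               # momentum plus RMS scaling, bias-corrected
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            m_hat = m / (1 - b1 ** t)
            v_hat = v / (1 - b2 ** t)
            w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return abs(w - 3.0)                         # final loss

for name in ("sgd", "adam", "rmsprop"):
    print(name, round(train(name), 4))
```

This miniature example only illustrates the mechanics of the update rules; the relative rankings in Table 3 come from the full multi-feature training runs.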
As mentioned earlier, the lower the loss, the higher the performance; hence, from the visual bar graph, Adam demonstrated the lowest loss for the synthetic fracture propagation data.

Figure 21. Optimizing Keras optimizers at different final losses.

3.3. Production Recovery Optimization

The synthetic data emanating from CMG modeling was shuffled and indexed to read production recovery, as demonstrated in Figure 21. However, the model used for the earlier prediction indicated that, out of the entire 2000 synthetic datasets replicated, the Adam optimizer prediction, with a 101.22 loss function, was at best 80.24% accurate. Production recovery was based on the flowing bottom-hole pressure; hence, a plot to determine the fracture conductivity, where the propped fracture tends to convey formation fluids into the wellbore, is presented in Figure 22. The measure of permeability and fracture width from the generated data indicates that the shale formation under review has a better chance of maximizing hydrocarbon production.

Figure 22.
Fracture conductivity validation.

3.4. Validation and Limitations

The weight of this study was compared with the work of Dong et al. [24], who recently developed a machine learning algorithm coupled with other multilayer perceptron algorithms, with an emphasis on particle swarm optimization (PSO). After several tweaks, the authors observed that PSO provided the best results, as indicated in Figure 3. Drawing an intense analysis between the PSO and Adam optimizers, the current study did not model using PSO, which is recommended for consideration in subsequent research, along with AdaSwarm [25], which is embedded in the NumPy and TensorFlow libraries.

Neural network optimizers, as researched by Mohapatra et al. [25], explicitly provide a comparative analysis of various optimizers, which throws enough light on the current study. In addition, while considering the efficacy of SGD, Adam, and RMSprop, it is quite fair to validate their performances with other relevant datasets.
Furthermore, since some of Mohapatra et al.'s derived models consisted of second-order differential equations, validation of their optimizers is best suited to the current study's non-linear neural network regressions. In so doing, it is worth noting that, when the average of the previous study's loss functions from SGD, Adam, and RMSprop (withholding tolerance) was duly compared, Adam still performed much better, even considering other datasets.

However, it is advisable to keep in mind that there was probably not enough model and hyperparameter tweaking to enhance optimization. In the case of Adam, the algorithm was able to achieve an accuracy of 80.24%. Most importantly, Adam's remaining 20% could be optimized further toward 100% accuracy if longer iterations, i.e., at an epoch of 500 to 1000 for both linear and non-linear neural network training, were considered, with a decreasing learning rate of 0.001.
4. Conclusions

The data-driven sensitive machine learning algorithms, with an emphasis on the SGD, Adam, and RMSprop optimizers, have indeed paved an additional artificially intelligent way to optimize synthetic data and its input parameters for fracturing hydrocarbon wells with low permeability indexes, and to optimize production recovery predictions. Moreover, based on high-pressure proppants, the study was able to identify, using Google TensorFlow libraries, that:

• The linear function for the trained deep neural network on the synthetic dataset was not fully optimized, and the weakest optimizer among them was stochastic gradient descent (SGD), with a mean absolute error of 295.95.
• While iterating for a non-linear algorithm, Adam emerged as the best-performing optimizer, with a loss function of 101.22.
• The proliferation of non-linear neural network algorithms for the prediction and optimization of hydraulic fracture morphology is highly recommended.
• Both the synthetic data and other conventional data are suitable for machine learning algorithms and for decisive decision-making procedures; Google TensorFlow libraries present easy access to coding and validation.
• The overall novelty of the study is that it automates data-driven prognosis by optimizing the hydraulic fracture parameters, from complex CMG numerical modeling to Keras Sequential API algorithms and several optimizer compilations for decision-making analysis.
• This study limits the complexity of physics-driven computational fracking analysis and provides an industrial automation means of predicting the expectations and remedies for fracking petroleum shale reservoirs.

Author Contributions: D.D.K.W. and S.I. designed the numerical models and generated the data. D.D.K.W. computed the machine learning workflow and wrote the manuscript. The methods and results of the manuscript were reviewed by S.I., A.S. and J.K. Project administration and funding acquisition were performed by A.S., J.K. and S.I. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by [Nazarbayev University], grant number [11022021CRP1512], and the APC was funded by [Nazarbayev University]. The authors are grateful for this support. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Nazarbayev University.

Data Availability Statement: The data used is confidential.

Acknowledgments: We are grateful to Nazarbayev University for providing us with the opportunity to continue sharing our work as part of the Collaborative Research Program (CRP) for the periods of 2022–2024 with project number 11022021CRP1512. We also appreciate the support of the Faculty-Development Competitive Research Grant for 2020–2022 (batch 2) with project number 08042FD1911. In spite of these, we wholeheartedly thank the authors cited in this piece of writing for their extensive study that promotes knowledge sharing.

Conflicts of Interest: The authors hereby declare that the research presented in this paper was not impacted by any known conflicting financial interests or personal connections.

References
1. Irawan, S.; Kinif, B.I.; Bayuaji, R. Maximizing drilling performance through enhanced solid control system. IOP Conf. Ser. Mater. Sci. Eng. 2017, 267, 012038. [CrossRef]
2. Irawan, S.; Kinif, I.B. Solid Control System for Maximizing Drilling. Drill. InTech 2018, 1, 192. [CrossRef]
3. Gandossi, L.
An Overview of Hydraulic Fracturing and Other Formation Stimulation Technologies for Shale Gas Production; No. EUR 26347 EN. 2013; EU Publications: Luxembourg, 2015. [CrossRef]
4. Li, G.; Song, X.; Tian, S.; Zhu, Z. Intelligent Drilling and Completion: A Review. Engineering 2022, 18, 33–48. [CrossRef]
5. Kundert, D.; Mullen, M. Proper Evaluation of Shale Gas Reservoirs Leads to a More Effective Hydraulic-Fracture Stimulation. In Proceedings of the SPE Rocky Mountain Petroleum Technology Conference, Denver, CO, USA, 14–16 April 2009.
6. Liu, Y.; Zheng, X.; Peng, X.; Zhang, Y.; Chen, H.; He, J. Influence of natural fractures on propagation of hydraulic fractures in tight reservoirs during hydraulic fracturing. Mar. Pet. Geol. 2022, 138, 105505. [CrossRef]
7. Zhao, H.; Liu, C.; Xiong, Y.; Zhen, H.; Li, X. Experimental research on hydraulic fracture propagation in group of thin coal seams. J. Nat. Gas Sci. Eng. 2022, 103, 104614. [CrossRef]
8. Suo, Y.; Su, X.; Wang, Z.; He, W.; Fu, X.; Feng, F.; Pan, Z.; Xie, K.; Wang, G. A study of inter-stratum propagation of hydraulic fracture of sandstone-shale interbedded shale oil. Eng. Fract. Mech. 2022, 275, 108858. [CrossRef]
9. Yang, Y.; Li, X.; Yang, X.; Li, X. Influence of reservoirs/interlayers thickness on hydraulic fracture propagation laws in low-permeability layered rocks. J. Pet. Sci. Eng. 2022, 219, 111081. [CrossRef]
10. Xiong, D.; Ma, X. Influence of natural fractures on hydraulic fracture propagation behaviour. Eng. Fract. Mech. 2022, 276, 108932. [CrossRef]
11. Wayo, D.D.K.; Irawan, S.; Noor, M.Z.B.M.; Badrouchi, F.; Khan, J.A.; Duru, U.I. A CFD Validation Effect of YP/PV from Laboratory-Formulated SBMDIF for Productive Transport Load to the Surface. Symmetry 2022, 14, 17. [CrossRef]
12. Wayo, D.D.K.; Irawan, S.; Khan, J.A.; Fitrianti, F. CFD Validation for Assessing the Repercussions of Filter Cake Breakers; EDTA and SiO2 on Filter Cake Return Permeability. Appl. Artif. Intell. 2022, 36, 2112551. [CrossRef]
13.
Peng, X.; Rao, X.; Zhao, H.; Xu, Y.; Zhong, X.; Zhan, W.; Huang, L. A proxy model to predict reservoir dynamic pressure profile of fracture network based on deep convolutional generative adversarial networks (DCGAN). J. Pet. Sci. Eng. 2022, 208, 109577. [CrossRef]
14. Galkin, S.V.; Martyushev, D.A.; Osovetsky, B.M.; Kazymov, K.P.; Song, H. Evaluation of void space of complicated potentially oil-bearing carbonate formation using X-ray tomography and electron microscopy methods. Energy Rep. 2022, 8, 6245–6257. [CrossRef]
15. Ponomareva, I.N.; Martyushev, D.A.; Govindarajan, S.K. A new approach to predict the formation pressure using multiple regression analysis: Case study from Sukharev oil field reservoir—Russia. J. King Saud Univ.-Eng. Sci. 2022, in press. [CrossRef]
16. Wang, D.B.; Zhou, F.-J.; Li, Y.-P.; Yu, B.; Martyushev, D.; Liu, X.-F.; Wang, M.; He, C.-M.; Han, D.-X.; Sun, D.-L. Numerical simulation of fracture propagation in Russia carbonate reservoirs during refracturing. Pet. Sci. 2022, 19, 2781–2795. [CrossRef]
17. Bessmertnykh, A.; Dontsov, E.; Ballarini, R. The effects of proppant on the near-front behavior of a hydraulic fracture. Eng. Fract. Mech. 2020, 235, 107110. [CrossRef]
18. Yi, S.S.; Wu, C.H.; Sharma, M.M. Proppant distribution among multiple perforation clusters in plug-and-perforate stages. SPE Prod. Oper. 2018, 33, 654–665. [CrossRef]
19. Suri, Y.; Islam, S.Z.; Hossain, M. Proppant transport in dynamically propagating hydraulic fractures using CFD-XFEM approach. Int. J. Rock Mech. Min. Sci. 2020, 131, 104356. [CrossRef]
20. Wu, C.H.; Sharma, M.M. Modeling proppant transport through perforations in a horizontal wellbore. SPE J. 2019, 24, 1777–1789. [CrossRef]
21. Wang, K.; Zhang, G.; Du, F.; Wang, Y.; Yi, L.; Zhang, J. Simulation of directional propagation of hydraulic fractures induced by slotting based on discrete element method. Petroleum 2022, in press. [CrossRef]
22.
Luo, A.; Li, Y.; Wu, L.; Peng, Y.; Tang, W. Fractured horizontal well productivity model for shale gas considering stress sensitivity, hydraulic fracture azimuth, and interference between fractures. Nat. Gas Ind. B 2021, 8, 278–286. [CrossRef]
23. Martyushev, D.A.; Ponomareva, I.N.; Filippov, E.V. Studying the direction of hydraulic fracture in carbonate reservoirs: Using machine learning to determine reservoir pressure. Pet. Res. 2022, in press. [CrossRef]
24. Dong, Z.; Wu, L.; Wang, L.; Li, W.; Wang, Z.; Liu, Z. Optimization of Fracturing Parameters with Machine-Learning and Evolutionary Algorithm Methods. Energies 2022, 15, 6063. [CrossRef]
25. Elbaz, K.; Shen, S.L.; Zhou, A.; Yin, Z.Y.; Lyu, H.M. Prediction of Disc Cutter Life During Shield Tunneling with AI via the Incorporation of a Genetic Algorithm into a GMDH-Type Neural Network. Engineering 2021, 7, 238–251. [CrossRef]
26. Shen, S.L.; Elbaz, K.; Shaban, W.M.; Zhou, A. Real-time prediction of shield moving trajectory during tunnelling. Acta Geotech. 2022, 17, 1533–1549. [CrossRef]
27. Elbaz, K.; Yan, T.; Zhou, A.; Shen, S.L. Deep learning analysis for energy consumption of shield tunneling machine drive system. Tunn. Undergr. Space Technol. 2022, 123, 104405. [CrossRef]
28. Fang, J.; Gong, B.; Caers, J. Data-Driven Model Falsification and Uncertainty Quantification for Fractured Reservoirs. Engineering 2022, 18, 116–128. [CrossRef]
29. Aboosadi, Z.A.; Rooeentan, S.; Adibifard, M. Estimation of subsurface petrophysical properties using different stochastic algorithms in nonlinear regression analysis of pressure transients. J. Appl. Geophy. 2018, 154, 93–107. [CrossRef]
30. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. 2014. Available online: http://arxiv.org/abs/1412.6980 (accessed on 7 January 2023).
31. Kamrava, S.; Tahmasebi, P.; Sahimi, M. Enhancing images of shale formations by a hybrid stochastic and deep learning algorithm. Neural Netw. 2019, 118, 310–320. [CrossRef]
32. Wang, Q.; Song, Y.; Zhang, X.; Dong, L.; Xi, Y.; Zeng, D.; Liu, Q.; Zhang, H.; Zhang, Z.; Yan, R.; et al.
Evolution of corrosion prediction models for oil and gas pipelines: From empirical-driven to data-driven. Eng. Fail. Anal. 2023, 146, 107097. [CrossRef]
33. Liu, Y.Y.; Ma, X.H.; Zhang, X.W.; Guo, W.; Kang, L.X.; Yu, R.Z.; Sun, Y.P. A deep-learning-based prediction method of the estimated ultimate recovery (EUR) of shale gas wells. Pet. Sci. 2021, 18, 1450–1464. [CrossRef]
34. A Comprehensive Guide on Deep Learning Optimizers. Available online: https://www.analyticsvidhya.com/blog/2021/10/a-comprehensive-guide-on-deep-learning-optimizers/ (accessed on 10 February 2023).
35. Mohapatra, R.; Saha, S.; Coello, C.A.C.; Bhattacharya, A.; Dhavala, S.S.; Saha, S. AdaSwarm: Augmenting Gradient-Based Optimizers in Deep Learning with Swarm Intelligence. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 329–340. [CrossRef]
36. Hou, L.; Elsworth, D.; Zhang, F.; Wang, Z.; Zhang, J. Evaluation of proppant injection based on a data-driven approach integrating numerical and ensemble learning models. Energy 2023, 264, 126122. [CrossRef]
37. Mukhtar, F.M.; Duarte, C.A. Coupled multiphysics 3-D generalized finite element method simulations of hydraulic fracture propagation experiments. Eng. Fract. Mech. 2022, 276, 108874. [CrossRef]
38. Pezzulli, E.; Nejati, M.; Salimzadeh, S.; Matthäi, S.K.; Driesner, T. Finite element simulations of hydraulic fracturing: A comparison of algorithms for extracting the propagation velocity of the fracture. Eng. Fract. Mech. 2022, 274, 108783. [CrossRef]
39. Ou, C.; Liang, C.; Li, Z.; Luo, L.; Yang, X. 3D visualization of hydraulic fractures using micro-seismic monitoring: Methodology and application. Petroleum 2022, 8, 92–101. [CrossRef]
40. Ortiz, D.A.A.; Klimkowski, L.; Finkbeiner, T.; Patzek, T.W. The effect of hydraulic fracture geometry on well productivity in shale oil plays with high pore pressure. Energies 2021, 14, 7727. [CrossRef]
41. Zhang, Y.; Liu, Z.; Han, B.; Zhu, S.; Zhang, X.
Numerical study of hydraulic fracture propagation in inherently laminated rocks accounting for bedding plane properties. J. Pet. Sci. Eng. 2022, 210, 109798. [CrossRef]
42. Kulga, B.; Artun, E.; Ertekin, T. Development of a data-driven forecasting tool for hydraulically fractured, horizontal wells in tight-gas sands. Comput. Geosci. 2017, 103, 99–110. [CrossRef]
43. Yusof, M.A.M.; Mahadzir, N.A. Development of mathematical model for hydraulic fracturing design. J. Pet. Explor. Prod. Technol. 2015, 5, 269–276. [CrossRef]
44. Nguyen, H.T.; Lee, J.H.; Elraies, K.A. A review of PKN-type modeling of hydraulic fractures. J. Pet. Sci. Eng. 2020, 195, 107607. [CrossRef]
45. Wypych, G. The Effect of Fillers on the Mechanical Properties of Filled Materials. In Handbook of Fillers, 5th ed.; ChemTech Publishing: Toronto, ON, Canada, 2021; pp. 525–608. [CrossRef]
46. Fanchi, J.R. Fluid Flow Equations. In Shared Earth Modeling; Gulf Professional Publishing: Houston, TX, USA, 2002; pp. 150–169. [CrossRef]
47. Fanchi, J.R. Reservoir Simulation. In Integrated Reservoir Asset Management; Elsevier: Amsterdam, The Netherlands, 2010; pp. 223–241. [CrossRef]
48. PKN Hydraulic Fracturing Model—FrackOptima Help. Available online: http://www.frackoptima.com/userguide/theory/pkn.html (accessed on 21 February 2023).
49. Nordgren, R.P. Propagation of a Vertical Hydraulic Fracture. Soc. Pet. Eng. J. 1972, 12, 306–314. [CrossRef]
50. Rahman, M.M.; Rahman, M.K. A review of hydraulic fracture models and development of an improved pseudo-3D model for stimulating tight oil/gas sand. Energy Sources Part A Recovery Util. Environ. Eff. 2010, 32, 1416–1436. [CrossRef]
51. Misra, S.; Li, H. Deep neural network architectures to approximate the fluid-filled pore size distributions of subsurface geological formations. In Machine Learning for Subsurface Characterization; Elsevier: Amsterdam, The Netherlands, 2019; pp. 183–217. [CrossRef]
52. Duru, U.I.; Wayo, D.D.K.; Oguh, R.; Cyril, C.; Nnani, H.
Computational Analysis for Optimum Multiphase Flowing Bottom-Hole Pressure Prediction. Transylv. Rev. 2022, 30, 16010–16023. Available online: http://transylvanianreviewjournal.com/index.php/TR/article/view/907 (accessed on 20 February 2023).
53. Kim, Y.; Satyanaga, A.; Rahardjo, H.; Park, H.; Sham, A.W.L. Estimation of effective cohesion using artificial neural networks based on index soil properties: A Singapore case. Eng. Geol. 2021, 289, 106163. [CrossRef]

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
