<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>International Conference on Digital Technologies in Education, Science and Industry, December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Parallel Implementation of Neural Networks for Solving the Problem of Oil Production</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aksultan A. Mukhanbet</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bazargul Matkerim</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Al-Farabi Kazakh National University</institution>
          ,
          <addr-line>71 Al-Farabi Avenue, Almaty, 050040</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>6</fpage>
      <lpage>07</lpage>
      <abstract>
        <p>Oil production is a pressing challenge in the modern energy sector. Artificial intelligence and neural networks are widely used to enhance the efficiency of oil and gas extraction processes. However, processing the large volumes of data related to oil production requires an efficient parallel implementation of machine learning algorithms. This research addresses the parallel implementation of neural networks for solving oil production tasks. We used data-level parallelism, splitting the data and processing the parts in parallel. In addition, parallelism was employed to distribute the training across multiple nodes (processes) and to gather the training results. For this purpose, a dataset was created using the Buckley-Leverett model, which allowed us to obtain extensive data on oil reservoirs. The parallel implementation of machine learning algorithms significantly accelerates the training of neural networks and enhances the accuracy of their data analysis in oil production. Our work contributes to optimizing the oil extraction industry and demonstrates the successful application of parallel data processing to complex tasks in this field. MPI technology was used for parallelization, yielding a twofold reduction in training time. The accuracy of the neural network is 98%.</p>
      </abstract>
      <kwd-group>
        <kwd>Oil production</kwd>
        <kwd>Neural networks</kwd>
        <kwd>Parallel implementation</kwd>
        <kwd>Machine learning</kwd>
        <kwd>Optimization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In recent decades, neural networks have been widely used in various scientific and industrial
fields. One such field is the oil industry, in which the application of neural networks can
significantly improve oilfield development and management. In this article, we explore the
parallel implementation of neural networks for solving oil production tasks and provide an
overview of scientific papers dedicated to this topic.</p>
      <p>Oil production, as a crucial component of modern energy, faces constant challenges and
optimization tasks related to the extraction of oil and gas from the Earth's depths. The volume of
data associated with oil production grows daily, and innovative methods are required for
effective industrial management. In this context, artificial intelligence and neural networks have
come to the rescue, offering powerful tools for analyzing and optimizing production processes.</p>
      <p>
        However, we face a complex task: processing vast amounts of oil production data in real time
while maintaining high analysis accuracy. The answer lies in the efficient parallel implementation
of machine learning algorithms designed specifically to work with oil production data. Several
studies have addressed the parallel implementation of such algorithms. Reference [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] proposes an edge computing health model using
P2P-based deep neural networks; the model connects multiple edge nodes to a deep neural
network to process health big data in parallel, reducing response time delays. In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
systematic methods based on a graph-theoretical approach were proposed for mapping neural
networks onto cellular SIMD arrays, and the authors achieved significant reductions in processing
time. Works [
        <xref ref-type="bibr" rid="ref3 ref4">3-4</xref>
        ] considered the full training procedure of artificial neural networks (ANN)
for speech recognition using the backpropagation algorithm in block mode, as well as an
intelligent classifier for two-dimensional objects. By training acoustic models for
large-vocabulary speech recognition, a 6-fold reduction in the time required to train large real
networks was achieved, along with significant reductions in both training and recognition times.
Reference [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] compared parallel and sequential implementations of feedforward neural
networks. The modeling results show that for small networks, the sequential implementation
outperforms the parallel one; however, as the network and training dataset sizes increase,
the parallel implementation yields shorter training times. The parallel implementation of
complex models such as deep neural networks (DNN) was evaluated in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], where distributed training on the Tianhe-3 prototype of an improved LeNet model and
the classic AlexNet, VGG16, and ResNet18 models in a ported distributed PyTorch environment,
together with a dynamic selection algorithm for the main communication mechanism, showed
significant time reductions.
      </p>
      <p>
        The application of neural networks to oil production tasks has also been demonstrated in
works [
        <xref ref-type="bibr" rid="ref7 ref8">7-8</xref>
        ]. These studies discuss various machine learning and artificial intelligence methods
that can be used for data processing and interpretation in different sectors of the oil and
gas industry. Works [
        <xref ref-type="bibr" rid="ref9">9-10</xref>
        ] consider convolutional neural networks (CNN) and long short-term
memory (LSTM) networks as a comparative approach to forecasting oil production rates in an
Iranian oil field; the best performance was achieved by the rough neural network, with a
coefficient of determination of 0.82 on the test data. In addition, [11] compared machine
learning classifiers, neural networks, and recurrent neural networks; the Gradient Boosting
classifier and the neural network demonstrated high accuracies of 99.99% and 97.4%,
respectively. Research [12] focused on predicting oil production rates using the
Levenberg-Marquardt backpropagation algorithm for training artificial neural networks
(BPANN) and decline curve analysis methods (DCAM). All of these studies report significant
improvements in both time and accuracy. However, parallel neural networks have not been
specifically applied to oil production tasks.
      </p>
      <p>In this study, we address one of the key challenges in the field of oil production: the parallel
implementation of neural networks. To achieve this goal, we compiled an extensive dataset using
the Buckley-Leverett model, which allowed us to obtain detailed information about the oilfields.
Parallel data processing combined with modern machine learning algorithms accelerates the
training of neural networks and improves their accuracy in oil production data analysis.</p>
      <p>This study represents a significant contribution to the optimization of the oil production
industry and demonstrates how parallel data processing can effectively solve complex tasks in
this strategically important field.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods and materials</title>
      <p>In this study, the following methods were used for the parallel implementation of neural networks to
solve oil production tasks:
• Data collection using the Buckley–Leverett model.
• Parallel data processing.
• Development of parallel training algorithms.
• Optimization.
• Evaluation and analysis of the results.</p>
      <p>These points were considered separately. The Buckley–Leverett model was selected for the
successful implementation of parallel data processing in oil production tasks. This model is a
mathematical tool widely used in geology and geophysics for modeling hydrocarbon reservoirs
such as oil and gas. The Buckley–Leverett model describes the physical and geological
characteristics of underground formations, which is necessary for understanding the behavior of
oil reservoirs. The formula and implementation of this model are presented in [11].</p>
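      <p>The exact formulation used here is given in [11]; as an illustrative sketch only, the classical water fractional-flow curve at the heart of the Buckley–Leverett model can be evaluated as below. The viscosities, residual saturations, and quadratic (Corey-type) relative permeabilities are assumptions for the example, not the paper's values.</p>

```python
import numpy as np

def fractional_flow(sw, mu_w=1.0, mu_o=5.0, swc=0.2, sor=0.2):
    """Water fractional flow f_w(S_w) for the Buckley-Leverett model,
    using simple quadratic (Corey-type) relative permeability curves."""
    # Normalized (effective) water saturation
    s = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)
    krw = s ** 2          # relative permeability to water
    kro = (1.0 - s) ** 2  # relative permeability to oil
    mobility_w = krw / mu_w
    mobility_o = kro / mu_o
    return mobility_w / (mobility_w + mobility_o)

# Sample f_w on a saturation grid, e.g. when building a training dataset
sw_grid = np.linspace(0.2, 0.8, 7)
fw = fractional_flow(sw_grid)
```

      <p>Sampling such curves over saturation, depth, and time grids is one way a simulation of this kind can produce the large tabular datasets used for training.</p>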
      <p>Data collection began with the creation of a reservoir model based on the known geological
and hydrodynamic parameters. These parameters included the geometry of the reservoir, rock
properties, permeability, and viscosity. The Buckley-Leverett model was used to simulate the
behavior of fluids in underground formations over time. During the modeling process, an
extensive database containing information on the physical properties and state of the reservoir
at different depths and times was created. The data collected were as follows:</p>
      <p>The final data were obtained from the Buckley–Leverett model, and the collected data provided
extensive information about the geological structure and dynamics of the deposits. The total
number of data points used was 403,439. This dataset became the key foundation for training neural
networks and analyzing data in the context of the parallel implementation of machine learning
algorithms.</p>
      <p>Parallel data processing. Parallel data processing is a method that allows the simultaneous
processing of large volumes of information by dividing it into smaller parts and processing
each part on a separate computing node or processor. In the context of oil production
tasks, this method plays an important role in data collection and preparation for training the
neural networks. First, data visualization is performed, as shown in Figure 2.</p>
      <p>Data Distribution. A large volume of data on oil fields obtained using the Buckley-Leverett
model was divided into multiple parts. These data segments can be evenly distributed among
different computational nodes and processors. Data parallelism is used in this study. Data
parallelism in neural networks is a method for optimizing the training and execution of neural
networks, in which data are divided into batches and processed in parallel on different
computational devices or processor cores. This speeds up training and execution because
different data segments can be processed independently. Data parallelism can be implemented at
the following levels.</p>
      <p>Data Parallelism: In this case, each data batch is sent to a separate computational node (e.g.,
a GPU), where it is processed independently. The gradients are then computed for each batch and
summed to update the neural network parameters.</p>
      <p>Model Parallelism: Here, the model is divided into several parts, and each part is processed on a
different computational node. This is particularly useful when the model is too large to fit in
the memory of a single device.</p>
      <p>Task Parallelism: In this case, different tasks related to neural networks are executed in
parallel. For example, one part may be responsible for training, whereas the other handles
inferences (predictions).</p>
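      <p>As a minimal, framework-free sketch of the first of these levels, data parallelism can be emulated by computing gradients on separate data shards and averaging them before a single shared parameter update. The linear model, loss, and shard count below are illustrative assumptions, not the paper's network.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a linear model y = X @ w
X = rng.normal(size=(1000, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def grad_mse(w, X_shard, y_shard):
    """Gradient of the MSE loss on one data shard."""
    err = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ err / len(X_shard)

w = np.zeros(4)
n_workers = 4
for step in range(200):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    # Each "worker" computes a gradient on its shard (done serially here);
    # the gradients are then averaged to update the shared parameters.
    g = np.mean([grad_mse(w, Xs, ys) for Xs, ys in shards], axis=0)
    w -= 0.05 * g

# w now approximates w_true
```

      <p>With equal shard sizes, the average of the shard gradients equals the full-batch gradient, which is why this scheme preserves the sequential training trajectory while letting the gradient computations run concurrently.</p>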
      <p>Data parallelism was used in this study. The main concept of data parallelism is that data are
divided into parts and each part is processed in parallel. This is particularly useful in situations
where there are large volumes of data to process, such as training neural networks, analyzing
large datasets, or performing distributed computing. An example could be the parallel processing
of images; if one has a set of images to process (e.g., apply a filter to each image), each image can
be processed on a separate processor or core. This allows for faster processing because multiple
images can be processed simultaneously. In this study, 403,440 data points were divided. The
architecture of a traditional neural network is as follows.</p>
      <p>The architecture of parallel data splitting can be represented as follows:</p>
      <p>In this architecture, the following stages are involved:
• Parallel Processing: Each computational node or processor operates independently to
process data in parallel, significantly increasing the efficiency of data collection and
preparation.
• Data Exchange: Information exchange between nodes may occur during processing and
can be performed in parallel to reduce processing time.
• Results Aggregation: After data processing is completed on each node or processor, the
results are aggregated into a unified solution.</p>
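      <p>The data-distribution stage above reduces to simple index arithmetic. The helper below is a hypothetical sketch that splits N samples as evenly as possible among the processes, handing any remainder to the lowest ranks:</p>

```python
def local_slice(n_samples, n_procs, rank):
    """Return the (start, stop) indices of the data slice owned by `rank`,
    giving one extra sample to the lowest ranks when there is a remainder."""
    base, rem = divmod(n_samples, n_procs)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop

# Example: 403,440 samples over 4 processes -> 100,860 samples each
parts = [local_slice(403_440, 4, r) for r in range(4)]
```

      <p>The slices are contiguous and non-overlapping, so each node can read its part of the dataset independently before the parallel processing stage begins.</p>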
      <p>The implementation of these methods allows for efficient training of neural networks on large
volumes of oil production data, reducing training time and increasing model accuracy, ultimately
contributing to process optimization in the oil industry.</p>
      <p>In this implementation, parallelism was used to distribute training across multiple nodes,
with the results collected in the main process. An MPI communicator is created to link
all the processes. During training, the training and test data are passed to all processes, and
each process receives its local training dataset using slices. A sequential neural network model
is then created, consisting of three layers: two layers with rectified linear unit (ReLU) activation
functions and one output layer without activation. The model is compiled with the
'mean_squared_error' loss function and the 'adam' optimizer and trained in every process.
At this stage, the results have not yet been gathered. This process is illustrated in Figure 5.</p>
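      <p>A condensed sketch of this broadcast-slice-train-gather pattern is shown below. The tiny least-squares "model" stands in for the paper's three-layer Keras network, and the fallback stub only lets the sketch run without an MPI installation; all names and data are illustrative assumptions.</p>

```python
import numpy as np

try:
    from mpi4py import MPI          # real MPI communicator when available
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:                 # single-process stand-in for the sketch
    class _FakeComm:
        def bcast(self, obj, root=0): return obj
        def gather(self, obj, root=0): return [obj]
    comm, rank, size = _FakeComm(), 0, 1

rng = np.random.default_rng(42)
X_train, y_train = rng.normal(size=(400, 3)), rng.normal(size=400)
X_test = rng.normal(size=(50, 3))

# Root broadcasts the full dataset; each rank keeps only its slice.
X_train = comm.bcast(X_train, root=0)
y_train = comm.bcast(y_train, root=0)
lo = rank * len(X_train) // size
hi = (rank + 1) * len(X_train) // size
X_loc, y_loc = X_train[lo:hi], y_train[lo:hi]

# Stand-in for model.fit(): least-squares fit of a linear model.
w, *_ = np.linalg.lstsq(X_loc, y_loc, rcond=None)

# Gather every rank's test predictions in the main process (rank 0)
# and average them into the final prediction.
preds = comm.gather(X_test @ w, root=0)
if rank == 0:
    final_pred = np.mean(preds, axis=0)
```

      <p>Run under `mpirun -n 4`, each rank would train on a quarter of the data and rank 0 would average the four prediction vectors, mirroring the procedure described above.</p>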
      <p>Next, the trained models are collected: the model from each process is gathered in the main
process (rank 0). In the main process, predictions are then made on the test data with each
model, and the prediction results are collected. Finally, the final prediction is computed as
the average of all the individual predictions.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>Parallelism in this code is achieved by distributing the training of the model among different
processes and then gathering the results in the main process for aggregation and model
evaluation. After training, the outcomes were as follows:</p>
      <p>Subsequently, a forecast of eta was made for a single iteration. The results are as follows:</p>
      <p>The figure shows that the forecasted values of eta have high accuracy and closely match the
real data. To compare the model with the parallelized networks, the accuracy and the speedup
had to be estimated.</p>
      <p>The speedup estimate is an important aspect of parallel implementation. It involves
comparing the time required to train the neural networks on parallel computing nodes with
the time required to train on a single node; the prediction times for parallel nodes and a
single node are compared in the same way. This shows how much faster and more efficient the
models have become owing to the parallel implementation.</p>
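      <p>The speedup estimate itself is a simple ratio of wall-clock times; a minimal sketch follows, where the timings are made-up placeholders rather than the paper's measurements:</p>

```python
def speedup(t_serial, t_parallel):
    """Classic speedup ratio: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Speedup normalized by the number of processes (1.0 = ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: 120 s serial training vs 60 s on 4 processes
s = speedup(120.0, 60.0)        # -> 2.0
e = efficiency(120.0, 60.0, 4)  # -> 0.5
```

      <p>An efficiency well below 1.0, as in this hypothetical case, usually points to communication overhead or load imbalance absorbing part of the parallel gain.</p>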
      <p>The test results for different batch sizes with a data size of 400,000 are as follows:</p>
      <p>The change in the R^2 score when distributing tasks among processes depends on how the
distribution is performed, as well as on the data and neural network architecture.</p>
      <p>Parallel Model Training. In this study, several neural networks were trained in parallel using
different subsets of data. This led to a change in the coefficient of determination (R^2) because
each neural network was trained on different data. Therefore, the R^2 score for each model, as
well as the final R^2 score after averaging the predictions, differed from the value obtained when
training on the entire dataset.</p>
      <p>R^2 evaluation results. The R^2 score shows how well the model fits the data. If the
distribution of data among processes leads to better generalization of the model, the R^2
score may increase. Figure 11 shows that, as the number of processes increases, the R^2 score
decreases. This can be explained as follows: if the number of processes is too large relative to
the size of the data, each model may overfit. Overfitting occurs when a model adjusts too closely
to the training data and loses its ability to generalize to new data, which reduces the
generalization ability of the models and the R^2 score.</p>
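      <p>For reference, the R^2 score used throughout this section can be computed directly; the small numpy sketch below uses illustrative sample arrays:</p>

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A perfect prediction scores 1.0; larger errors push the score down.
perfect = r2_score([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # -> 1.0
noisy = r2_score([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```

      <p>A score near 1.0 means the model explains almost all of the variance in the data; a model that predicts worse than the mean of the data can even score below zero.</p>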
      <p>Analysis of Model Accuracy. After completing the training process of the neural networks,
their accuracy and efficiency were analyzed. This includes evaluating the ability of the models to
predict various aspects of oil production, such as the production volume, oil quality, process
optimization, and other key parameters. Comparative analyses of the model accuracy before and
after parallel implementation were performed to determine the extent to which it improved
owing to the application of parallel methods. The figure shows that the accuracy after training the
neural network is 98%.</p>
      <p>However, the accuracy of the parallel training of the neural network was 94%. The
comparative statistics are presented in Table 1.</p>
      <p>The results indicate that there is a trade-off between model accuracy and training time when
using parallel computing. We now consider each aspect in greater detail.</p>
      <sec id="sec-3-1">
        <title>Model Accuracy:</title>
        <p>Without parallelization: A model trained without parallel computing achieves higher accuracy,
meaning that it fits the training data better and generalizes better to new data. This is reflected
in a lower error and a higher R^2 score.</p>
        <p>With parallelization: The models trained using parallel computing exhibited lower accuracy.
This may be attributed to differences in the data, overfitting on the smaller data subsets of each
process, or other aspects of the parallelization.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Model Errors:</title>
        <p>Without parallelization: The non-parallelized model has a lower error, indicating that its
predictions are closer to the actual data and that the model can make accurate forecasts.</p>
        <p>With parallelization: The models trained using parallel computing had higher errors, implying
that their predictions deviated more from the actual data. This could indicate overfitting or the
limited amount of data available to each process.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Training Time:</title>
        <p>Without parallelization: The training time was longer because the entire training process runs
sequentially on a single process or core, which can be inefficient for large volumes of data or
complex models.</p>
        <p>With parallelization: The training process is significantly accelerated. The training time was
reduced by a factor of five, making parallelization highly advantageous for large datasets or
complex models.</p>
        <p>In conclusion, parallelization reduces the training time, but models trained in parallel may
have slightly lower accuracy and higher errors compared to models trained sequentially.
However, this difference was not statistically significant. The final analysis of the results allows
us to draw conclusions about the success of the parallel implementation of neural networks for
oil production tasks and the practical benefits they can bring to the energy and oil industry.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>In this study, we focused on optimizing oil production processes using parallel implementation
of neural networks. Oil production continues to be one of the key industries in the energy sector,
and the effective use of modern technologies, such as artificial intelligence and machine learning,
plays an important role in increasing productivity and reducing costs. Our study demonstrated
that the parallel implementation of neural networks at the data level can significantly speed up
the learning process and improve the accuracy of oil production data analysis. We successfully
created a dataset using the Buckley-Leverett model, which allowed us to enrich our initial data
and improve the quality of training. The use of MPI for parallelization has led to a significant
increase in data processing speed, reducing the time required for analysis and decision-making.
The overall result of our study is an important step towards optimizing the oil industry. We
demonstrated that parallel data processing and machine learning can work in symbiosis to speed
up processes and improve outcomes. The proposed neural network achieved an accuracy of 98%,
thereby confirming the successful application of the proposed approach. Our study provides new
insights into the effective use of modern technologies to optimize oil production and create a
basis for future research and development in this area.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgements</title>
      <p>This research was funded by a grant from the Science Committee of the Ministry of Education
and Science of the Republic of Kazakhstan under project № BR18574136.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>[10] Sheikhoushaghi, A., Yarahmadi Gharaei, N., Nikoofard, A. (2022). Application of Rough
Neural Network to forecast oil production rate of an oil field in a comparative study. Journal of
Petroleum Science and Engineering, 209, 109935. https://doi.org/10.1016/j.petrol.2021.109935.</p>
      <p>[11] Daribayev, B., Mukhanbet, A., Nurakhov, Y., Imankulov, T. (2021). Implementation of the
solution to the oil displacement problem using machine learning classifiers and neural
networks. Eastern-European Journal of Enterprise Technologies, 5 (4 (113)), 55-63.
https://doi.org/10.15587/1729-4061.2021.241858.</p>
      <p>[12] Marfo, S.A., Kporxah, C. (2020). Predicting Oil Production Rate Using Artificial Neural
Network and Decline Curve Analytical Methods. Proceedings of the 6th UMaT Biennial
International Mining and Mineral Conference, Tarkwa, Ghana, pp. 43-50.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Chung, K., Yoo, H. (2020). Edge computing health model using P2P-based deep neural networks. Peer-to-Peer Networking and Applications, 13, 694-703. https://doi.org/10.1007/s12083-019-00738-y.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Wojtek Przytula, K., Prasanna, V.K., Lin, W.M. (1992). Parallel implementation of neural networks. Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, 4, 111-123. https://doi.org/10.1007/BF00925117.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Scanzio, S., Cumani, S., Gemello, R., Mana, F., Laface, P. (2010). Parallel implementation of artificial neural network training. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2010), March 14-19, Sheraton Dallas Hotel, Dallas, Texas, USA.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Carrera, E.V., Pereira, C. (1999). A parallel implementation of a neural-network based object classifier. 3rd Workshop on Cybernetic Vision, Campinas, SP, Brazil, Vol. 1.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Foo, S.K., Saratchandran, P., Sundararajan, N. (1995). Comparison of parallel and serial implementation of feedforward neural networks. Journal of Microcomputer Applications, 18(1), 83-94.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Wei, J., Zhang, X., Ji, Z., Li, J., Wei, Z. (2021). Deploying and scaling distributed parallel deep neural networks on the Tianhe-3 prototype system. Scientific Reports, 11(1), 20244. https://doi.org/10.1038/s41598-021-98794-z. PMID: 34642373; PMCID: PMC8511035.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Alkinani, H.H., Al-Hameedi, A.T.T., Dunn-Norman, S., Flori, R.E., Alsaba, M.T., Amer, A.S. (2019). Applications of Artificial Neural Networks in the Petroleum Industry: A Review. 18-21 March 2019.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Sircar, A., Yadav, K., Rayavarapu, K., Bist, N., Oza, H. (2021). Application of machine learning and artificial intelligence in oil and gas industry. Petroleum Research, 6(4), 379-391. https://doi.org/10.1016/j.ptlrs.2021.05.009.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Kwon, S., Park, G., Jang, Y., Cho, J., Chu, M., Min, B. (2021). Determination of oil-well placement using a convolutional neural network coupled with robust optimization under geological uncertainty. Journal of Petroleum Science and Engineering, 201, 108118. https://doi.org/10.1016/j.petrol.2020.108118.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>