<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An AI-based Android Application for Ancient Documents Text Recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tomoki Morioka</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aravinda C V</string-name>
          <email>aravinda.cv@nitte.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lin Meng</string-name>
          <email>menglin@fcg</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept of Computer Science and Engineering, NMAM Institute of Technology</institution>
          ,
          <addr-line>NITTE, Karkala UDUPI</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept.of Electronic and Computer Engineering, Ritsumeikan University. Kusatsu</institution>
          ,
          <addr-line>Shiga</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Advances in technology have led to the development and widespread use of high-performance mobile devices. One such device is the smartphone, a kind of cell phone that runs an operating system such as iOS or Android and has the highest penetration rate in the world. The use of deep learning in smartphones is spreading AI-based services. In addition, the demand for image recognition on edge and mobile devices is increasing, driven by applications such as face recognition. This paper proposes an ancient-document text recognition system for Android that communicates with a server equipped with a recognition AI model. For performance comparison, a standalone image recognition application that does not use a server is implemented with TensorFlow Lite. The results show that the recognition time of the TensorFlow Lite application is shorter than that of the server-based system. However, the CPU and memory usage of the proposal is lower, and its operation is more stable. Hence, considering hardware usage, the proposed system is more stable.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Recent technological advances have led to the development and widespread use of
high-performance mobile devices. One of them is the smartphone, a kind of cell phone
equipped with one of the world's common operating systems such as iOS or Android. Featuring touch and flick
operation on a touch panel, smartphones offer a variety of services through dedicated
applications.</p>
      <p>
        Furthermore, smartphones employ deep learning technologies, letting the application field
expand even further. In the field of image recognition, deep learning technologies use a
multilayered neural network for learning, which achieves higher accuracy than conventional image
processing methods, and various studies are being conducted [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. Usually, deep learning
models run on high-specification PCs with GPUs. Recently, through researchers'
efforts, some lightweight deep learning models have been made to run on resource-limited edge
devices, mobile devices, and so on [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ].
      </p>
      <p>
        Our team has also been studying ancient documents for some time. One subject is Oracle
Bone Inscriptions (OBIs), which were used in ancient China 3,000 years ago and became the origin of
Chinese characters. OBIs are carved on animal bones and turtle shells, which are cracked and
deteriorated, making them difficult to recognize. In our previous research, we succeeded in
achieving a high recognition rate by building a system using multiple deep learning models [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
However, since this system is published as a PC or web API, it requires a high-spec PC
and a browser environment. Hence, we consider that OBIs can be analyzed more easily by using
an Android application for these tasks. For example, an OBI image taken by a camera
can be recognized instantly. As there are no implementation examples of OBI recognition on
Android, we study the implementation of deep learning models and the use of deep learning
in smartphones to realize this application.
      </p>
      <p>
        Implementing deep learning on Android devices using frameworks such as TensorFlow [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
and PyTorch [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] makes it possible to create standalone applications that perform image recognition
using only the Android device.
      </p>
      <p>However, not all Android devices have identical specifications, and it is difficult to guarantee
that an application will work on all devices, because the application depends on those
specifications. Therefore, in this paper, we develop a recognition system that guarantees the operation
of the application by using a server to perform image recognition instead of the Android device.
Specifically, the trained deep learning models are deployed on the server beforehand.</p>
      <p>In the recognition processing, the target images on the Android device are first sent to the server.
Then image recognition is performed on the server using a trained model. Finally, the
Android device receives the recognition results. To prove the effectiveness of our proposal, we compare
the system performance with a standalone application that implements a trained model using
TensorFlow Lite.</p>
      <p>Section 2 describes related work. Section 3 introduces the recognition flow of the proposed
system. Section 4 reports and discusses the experimental results, and the paper is concluded
in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <sec id="sec-2-1">
        <title>TensorFlow Lite</title>
        <p>TensorFlow is an open-source software library for numerical computation released by Google. It is
a numerical library specialized for deep learning that can perform calculations on
multidimensional arrays called tensors. TensorFlow runs on almost all major operating systems,
such as Linux, macOS, and Windows, and offers an extensive
library and customization features. Currently, TensorFlow has one of the largest user bases
among deep learning frameworks.</p>
        <p>TensorFlow Lite is a toolset for running TensorFlow models on edge and mobile devices.
It provides APIs that enable inference on mobile devices, conversion of TensorFlow
models for mobile devices, and model optimization through quantization and other means.</p>
        <p>The flow of deployment to edge and mobile devices using TensorFlow Lite is shown in
Figure 1. First, the TensorFlow Lite Converter converts a model trained with the TensorFlow
APIs. Then, the converted model is deployed using the publicly available TensorFlow Lite API,
which also supports deployment to Android.</p>
      </sec>
      <sec id="sec-2-2">
        <title>System target of Oracle Bone Inscriptions Recognition</title>
        <p>
          Our system aims to recognize and organize ancient documents, including OBIs
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], rubbings [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], Japanese Kuzushiji documents [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], and so on. This paper focuses only on
OBI recognition and organization.
        </p>
        <p>
          OBIs are a kind of ancient character, used in China more than 3,000 years ago, and are
the origin of Chinese characters. They were carved on turtle shells and animal bones because
there were no ink and paper. Deciphering OBIs is very important for the study of history and
Chinese characters. However, they are difficult to decipher due to severe deterioration.
Hence, researchers have tried to recognize OBIs by image processing and deep learning
[
          <xref ref-type="bibr" rid="ref10 ref13 ref14">13, 10, 14</xref>
          ]. However, these methods require OBI images to be cropped manually. Furthermore, some
researchers have tried to detect OBIs on oracle bones and recognize them within
the same system [
          <xref ref-type="bibr" rid="ref14 ref7">14, 7</xref>
          ].
        </p>
        <p>
          To make it easy for researchers to recognize OBIs online, the goal of this paper is to build
an online OBI recognition system based on Android, using the two deep learning models presented in
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>System flow</title>
      <p>The system flow of the proposed method, a server-based image recognition system, is shown in
Figure 2.</p>
      <p>
        In the first step, YOLOv3 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], an object detection and recognition model, is applied to
recognize multiple characters simultaneously. However, detecting all characters without errors is
very difficult. In the second step, the user crops the undetected characters manually and applies
MobileNet to recognize them. The system is provided as an application
programming interface (API) and is open to researchers and other interested users. The Android
application in this paper builds on this API. Hence, only YOLOv3 is applied for
detecting and recognizing OBIs.
      </p>
      <p>The first step is to upload the image from the Android device to the server. We use the
POST method of the HTTP protocol to communicate, requesting the URL on the Android
side and writing the image data in the request body. The MIME type is
"multipart/form-data", which sends three pieces of data. The type attributes of the input elements are
"submit", "file", and "text".
"file" is an image file.
"text" contains a number indicating the threshold for the image recognition result.
"submit" is the trigger for the upload, and the server side starts processing when it receives
"submit".</p>
      <p>The second step is to perform image recognition on the server. YOLOv3 is applied in
this work; it runs on DarkNet, a framework written in the C programming language, to
recognize OBIs. DarkNet reads the image from its directory path on the server,
applies the received threshold value, and processes the image. The output images of the recognition
results are saved in the same directory on the server.</p>
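      <p>The server-side DarkNet invocation for one uploaded image can be sketched as below. The binary, data, config, and weight paths are hypothetical placeholders for the actual server setup; `-thresh` is DarkNet's standard flag for passing the detection threshold, and by default DarkNet writes the annotated result as predictions.jpg in its working directory.</p>

```python
import subprocess
from pathlib import Path

# Hypothetical paths; the real locations depend on the server setup.
DARKNET_BIN = "./darknet"
DATA_FILE = "cfg/obi.data"
CFG_FILE = "cfg/yolov3-obi.cfg"
WEIGHTS = "backup/yolov3-obi.weights"

def build_darknet_cmd(image_path, threshold):
    """Assemble the DarkNet detector command for one uploaded image."""
    return [DARKNET_BIN, "detector", "test",
            DATA_FILE, CFG_FILE, WEIGHTS,
            str(image_path), "-thresh", str(threshold)]

def recognize(image_path, threshold):
    """Run DarkNet on the stored image; the annotated result image is
    then available for the server to send back to the client."""
    subprocess.run(build_darknet_cmd(image_path, threshold), check=True)
    return Path("predictions.jpg")
```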
      <p>The third step is to download the image. As soon as the recognition
result is stored on the server, the Android side sends a request and starts downloading the
image using the GET method of the HTTP protocol.</p>
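      <p>The download step can be sketched as a simple GET poll: the client retries until the result image is available on the server. The retry counts and delay are illustrative assumptions; only the polling pattern mirrors the flow described above.</p>

```python
import time
import urllib.error
import urllib.request

def download_result(url, retries=10, delay=0.5):
    """Poll the server with HTTP GET until the recognition-result
    image is available, then return its bytes."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 404:
                raise
            time.sleep(delay)  # result not stored yet; retry
    raise TimeoutError("recognition result did not appear in time")
```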
      <p>The fourth step is to display the downloaded image of the recognition result in the
Android application.</p>
    </sec>
    <sec id="sec-4">
      <title>Evaluation</title>
      <p>
        We deploy the proposed system on an Android device, an Xperia XZ1 Compact with 4 GB of memory and
32 GB of storage. Five OBI rubbing images are employed as test images, selected
from the book [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>For performance comparison, a TensorFlow Lite application is also installed on the Android device.
The results of the proposal and of TensorFlow Lite are shown in Figure 3 and Figure 4,
respectively. The detected and recognized OBIs are marked with bounding boxes, which shows
that both systems work well.</p>
      <sec id="sec-4-1">
        <title>Recognition time</title>
        <p>In terms of recognition time, the TensorFlow Lite app is generally shorter than the
proposed system. However, the recognition times for 2426-36-1.jpg are almost the same,
indicating that the recognition time of the TensorFlow Lite application is highly dependent
on the file size.</p>
      </sec>
      <sec id="sec-4-2">
        <title>CPU usage and memory usage</title>
        <p>In terms of average CPU usage, the proposed system uses less than 10% of the CPU
for all five images, while the TensorFlow Lite app uses approximately 40-50%.
Regarding average memory usage, the TensorFlow Lite application uses about 530 MB,
while the proposed system uses about 50 MB, only about 10% of what the TensorFlow Lite
application uses.
(Test images: 00004.jpg, 2.jpg, 30.jpg, 00009.jpg, and 2426-36-1.jpg.)
The CPU and memory usage of the proposed system is lower because Android's only tasks
are communication and image display.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Application size and startup time</title>
        <p>In terms of application size, the TensorFlow Lite application occupies 289 MB, whereas the
proposed application occupies 13.72 MB, only 4.7% of the TensorFlow Lite application. Hence, the
proposal is a lightweight application with a great advantage in storage utilization.
In terms of startup time, the proposed system takes 0.572 s, while the TensorFlow Lite
application takes 3.147 s, which is 5.5 times longer than the proposal. This also proves the
advantage of our proposal in startup time.</p>
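        <p>As a sanity check, the reported ratios can be recomputed directly from the measured values given in this section:</p>

```python
# Measured values reported in this section.
tflite_app_mb, proposed_app_mb = 289.0, 13.72
tflite_start_s, proposed_start_s = 3.147, 0.572

size_ratio = proposed_app_mb / tflite_app_mb * 100   # percent of TFLite app size
startup_ratio = tflite_start_s / proposed_start_s    # how much longer TFLite takes

print(f"app size: {size_ratio:.1f}% of TensorFlow Lite")   # 4.7%
print(f"startup: {startup_ratio:.1f}x longer for TFLite")  # 5.5x
```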
      </sec>
      <sec id="sec-4-4">
        <title>Discussion in Performance Comparison</title>
        <p>In terms of recognition time, the TensorFlow Lite application is faster than the proposed
application. This is because the proposed method relies on communication: the
communication flow consists of establishing a connection, waiting for server processing, and receiving the
resulting image. The average waiting time for server processing was about 2 seconds. In addition,
the model in the TensorFlow Lite application starts running immediately after the
start of inference, while the model in the proposed method starts running only after the image
transmission is completed.</p>
        <p>In terms of CPU and memory usage, the TensorFlow Lite application is more expensive.
Furthermore, Android devices use about 1 to 1.5 GB of memory to run the OS, so the
system cannot provide much memory to other applications. In contrast, the proposed
system uses only a little memory and is more stable for users.</p>
        <p>
          Since the Android devices currently in widespread use range from high to low specifications,
we consider that the proposed system, with its stable operation, is
superior for providing a service. However, if the specifications of Android devices increase
with the development of technology and high-specification Android devices become
common, the operation of the TensorFlow Lite app, which is a standalone application, will be
guaranteed. It is also possible that research on the optimization and efficiency of deep learning
models will progress and applications will become even lighter. The stability of the
standalone application is an issue for the future, and we conclude that the proposed
system using communication, which is currently stable, is better.
        </p>
        <p>
          OBI recognition using deep learning has its own problems: the lack of datasets and the
resulting low character detection rate in the object detection model. In the previous study
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], another inference model was prepared as reinforcement to compensate for the low detection
rate. The datasets for the object detection model have to be created manually, which is not
an easy task because it requires knowledge of the oracle-bone characters, and errors occur. The
proposed method in this paper uses communication to recognize OBIs; OBI images are sent
from Android devices, so the images can be stored on the server. We can then use the images
and recognition results to create datasets that enhance the recognition model on the server.
This is an advantage of the proposed method for OBI recognition, because in a standalone
application it is very difficult to re-train the recognition model: the model is embedded in
the application, and the specification of the device is too low to train it. However, many
problems remain to be solved, such as how to curate the collected data and how to
retrain the model. These are our future tasks.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>Smartphones are cell phones equipped with an operating system and have the highest
penetration rate in the world. Therefore, the use of deep learning in smartphones will lead to the
further spread of AI-based services. In addition, the demand for image recognition on edge
and mobile devices has been rising in recent years. This paper proposes an ancient-document
text recognition system for Android that uses communication with a server. Its
performance is compared with an image recognition application using TensorFlow Lite.
The results show that the recognition time of the proposal is longer than that of the TensorFlow
Lite application. However, from the viewpoint of CPU and memory usage, the proposed system
can be guaranteed to work even on devices with low performance. Also, from the viewpoint of
OBI recognition, the proposed method was shown to be superior because of the possibility of
expanding the datasets. Our future work is to build a better system for re-training using the
proposed method and to improve the stability of the standalone application.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Part of this work was supported by the Art Research Center of Ritsumeikan University and Key
Laboratory of Oracle Bone Inscriptions Information Processing, Anyang Normal University.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Christian</given-names>
            <surname>Szegedy</surname>
          </string-name>
          , Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.
          <article-title>Rethinking the inception architecture for computer vision</article-title>
          .
          <source>2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Christian</given-names>
            <surname>Szegedy</surname>
          </string-name>
          , Sergey Ioffe, Vincent Vanhoucke,
          <string-name>
            <given-names>and Alex</given-names>
            <surname>Alemi</surname>
          </string-name>
          .
          <article-title>Inception-v4, inceptionresnet and the impact of residual connections on learning</article-title>
          .
          <source>Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence</source>
          , pages
          <fpage>4278</fpage>
          {
          <fpage>4284</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Gao</given-names>
            <surname>Huang</surname>
          </string-name>
          , Zhuang Liu, Laurens van der Maaten, and
          <string-name>
            <given-names>Kilian Q.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          .
          <article-title>Densely connected convolutional networks</article-title>
          .
          <source>2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Andrew</surname>
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Howard</surname>
            , Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and
            <given-names>Hartwig</given-names>
          </string-name>
          <string-name>
            <surname>Adam</surname>
          </string-name>
          .
          <article-title>MobileNets: Efficient convolutional neural networks for mobile vision applications</article-title>
          .
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Ziyi</given-names>
            <surname>Liu</surname>
          </string-name>
          , Junfeng Gao, Guoguo Yang, Huan Zhang, and
          <string-name>
            <given-names>Yong</given-names>
            <surname>He</surname>
          </string-name>
          .
          <article-title>Localization and classification of paddy field pests using a saliency map and deep convolutional neural network</article-title>
          .
          <source>Technical report</source>
          ,
          <year>02 2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Barret</given-names>
            <surname>Zoph</surname>
          </string-name>
          , Vijay Vasudevan, Jonathon Shlens, and
          <string-name>
            <surname>Quoc</surname>
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Le</surname>
          </string-name>
          .
          <article-title>Learning transferable architectures for scalable image recognition</article-title>
          .
          <source>CoRR, abs/1707.07012</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Yoshiyuki</given-names>
            <surname>Fujikawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Hengyi</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Xuebin</given-names>
            <surname>Yue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Lin</given-names>
            <surname>Meng</surname>
          </string-name>
          , et al.
          <article-title>Recognition of oracle bone inscriptions by using two deep learning models</article-title>
          .
          <source>arXiv preprint arXiv:2105.00777</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Martín</given-names>
            <surname>Abadi</surname>
          </string-name>
          , Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis,
          Jeffrey Dean
          , Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke,
          <string-name>
            <given-names>Yuan</given-names>
            <surname>Yu</surname>
          </string-name>
          , and Xiaoqiang Zheng.
          <source>TensorFlow: Large-scale machine learning on heterogeneous systems</source>
          ,
          <year>2015</year>
          .
          <article-title>Software available from tensorflow.org</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Adam</given-names>
            <surname>Paszke</surname>
          </string-name>
          , Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang,
          <string-name>
            <surname>Zachary</surname>
            <given-names>DeVito</given-names>
          </string-name>
          , Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and
          <string-name>
            <given-names>Soumith</given-names>
            <surname>Chintala</surname>
          </string-name>
          .
          <article-title>Pytorch: An imperative style, highperformance deep learning library</article-title>
          . In H. Wallach,
          <string-name>
            <given-names>H.</given-names>
            <surname>Larochelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Beygelzimer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>d'Alché-Buc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Fox</surname>
          </string-name>
          , and R. Garnett, editors,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>32</volume>
          , pages
          <fpage>8024</fpage>
          {
          <fpage>8035</fpage>
          . Curran Associates, Inc.,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Lin</given-names>
            <surname>Meng</surname>
          </string-name>
          .
          <article-title>Two-stage recognition for oracle bone inscriptions</article-title>
          .
          <source>In Image Analysis and Processing - ICIAP</source>
          <year>2017</year>
          , pages
          <fpage>672</fpage>
          {
          <fpage>682</fpage>
          . Springer International Publishing,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Zhiyu</given-names>
            <surname>Zhang</surname>
          </string-name>
          , Zhichen Wang, Hiroyuki Tomiyama, and
          <string-name>
            <given-names>Lin</given-names>
            <surname>Meng</surname>
          </string-name>
          .
          <article-title>Deep learning and lexical analysis combined rubbing character recognition</article-title>
          .
          <source>In The 2019 International Conference on Advanced Mechatronic Systems (ICAMechS</source>
          <year>2019</year>
          ),
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Lyu</surname>
            <given-names>Bing</given-names>
          </string-name>
          , Hiroyuki Tomiyama, and
          <string-name>
            <given-names>Lin</given-names>
            <surname>Meng</surname>
          </string-name>
          .
          <article-title>Frame detection and text line segmentation for early japanese books understanding</article-title>
          . volume
          <volume>1</volume>
          , pages
          <fpage>600</fpage>
          {
          <fpage>606</fpage>
          , 01
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Guoying</surname>
            <given-names>Liu</given-names>
          </string-name>
          , Jici Xing, and
          <string-name>
            <given-names>Jing</given-names>
            <surname>Xiong</surname>
          </string-name>
          .
          <article-title>Spatial pyramid block for oracle bone inscription detection</article-title>
          .
          <source>pages 133{140</source>
          , 02
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Lin</surname>
            <given-names>Meng</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bing Lyu</surname>
            , Zhiyu Zhang,
            <given-names>C. V.</given-names>
          </string-name>
          <string-name>
            <surname>Aravinda</surname>
            , Naoto Kamitoku, and
            <given-names>Katsuhiro</given-names>
          </string-name>
          <string-name>
            <surname>Yamazaki</surname>
          </string-name>
          .
          <article-title>Oracle bone inscription detector based on SSD</article-title>
          .
          <source>Proceedings of New Trends in Image Analysis and Processing - ICIAP</source>
          <year>2019</year>
          , pages
          <fpage>126</fpage>
          {
          <fpage>136</fpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Joseph</given-names>
            <surname>Redmon</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ali</given-names>
            <surname>Farhadi</surname>
          </string-name>
          .
          <article-title>Yolov3: An incremental improvement</article-title>
          .
          <source>In Computer Vision and Pattern Recognition</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>P.M Zuo</surname>
          </string-name>
          .
          <article-title>Shanghai bo wu guan cang jia gu wen zi</article-title>
          .
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>