<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Semantic analysis methods usage for the implementation of systems of graphic objects content analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr Tymchenko</string-name>
          <email>olexandr.tymchenko@uwm.edu.pl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Orest Khamula</string-name>
          <email>khamula@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bohdana Havrysh</string-name>
          <email>dana.havrysh@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Stepana Bandery Street, 12, Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>MoDaST-2024: 6th International Workshop on Modern Data Science Technologies</institution>
          ,
          <addr-line>May, 31 - June, 1, 2024, Lviv-Shatsk</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ukrainian Academy of Printing</institution>
          ,
          <addr-line>Pidholosko st., 19, Lviv, 79020</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Warmia and Mazury</institution>
          ,
          <addr-line>Ochapowskiego str,2, Olsztyn, 10-719</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The problem of reducing data flows about graphic objects has been addressed mainly by developing coding and compression formats. Among these, methods of statistical compression that consider the psychovisual and psychophysical properties of human perception of multimedia information are particularly effective. However, in terms of implementation, this is an extensive rather than intensive way of solving the problem of reducing data flows between individual nodes of the system. To make decisions, the operator needs those graphic objects that provide a solution to the given task, which requires organizing a search for the necessary graphic objects among the entire set in the data store. The user compares the given fragment with the obtained search result using a graphical representation of the data, which causes additional time costs. Special complications arise when graphic object data is being processed, since information about each image can be characterized by different criteria. In addition, the constant growth of the volume of information causes a decrease in the effectiveness of its processing. It is therefore appropriate to define criteria for data processing and retrieval and to outline approaches for evaluating the quality of retrieval. Existing methods and means of semantic description and analysis of a graphic object require the operator's participation at the pre-processing stage, so their implementation requires additional research to improve work efficiency. The analysis of existing methods shows that a significant simplification of individual storage and search problems can be achieved by structuring the description of data in repositories using a semantic description specially created for this purpose. However, at present these methods are practically not used to organize information search, and their implementation in search engines requires significant time and system resources.</p>
      </abstract>
      <kwd-group>
        <kwd>graphic objects</kwd>
        <kwd>semantic analysis</kwd>
        <kwd>semantic significance of elements</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The article considers approaches to creating an information technology for searching graphic data
by the content of graphic objects based on their semantic analysis. Before considering the
method, it is necessary to dwell on some conceptual provisions regarding semantics. First of all,
we note that the term "semantics" is used in different works in different aspects that determine
its meaning. In the theory of information, which investigates general scientific methodological
problems of the development of science, information and semantics are considered as concepts
that are close in essence [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. To implement semantic analysis, the concept of a semantic unit
is introduced, which is compared with elements of information, determined by them and serves
as a hypothetical unit of information measure [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Such a replacement of concepts makes it
possible to confirm, to a certain extent, the correctness of the known relations that were built to
describe the technical processes under study. At the same time, there was a need to carry out
quantitative assessments of parameters for their description. The most popular example is the
technical process associated with the transmission of digital data through communication
channels. The study of this process led to the creation of ideas about information and its
amount. In order to calculate the numerical values of information amount, appropriate
expressions were discussed. The proposed relations allowed a non-contradictory interpretation and
gained the greatest popularity thanks to the works of C. Shannon [
        <xref ref-type="bibr" rid="ref3 ref5">3, 5</xref>
        ]. Thus, the concepts of
semantics and information are related and closely intertwined.
      </p>
      <p>The purpose of the work is to develop and research the principles of semantic analysis
implementation and the use of its capabilities in the organization of searching by the graphic
object content.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
        Today, there is a fairly large number of algorithms and methods of graphic data analysis
that are used in various areas of digital information processing: from the fight against
spam to systems for passenger traffic tracking [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>We will consider the most common methods of image content analysis:
• frequency-spatial display of a graphic object;
• image histograms.</p>
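      <p>As a hedged illustration of the histogram approach, the sketch below compares two images by the distance between their normalized intensity histograms. The bin count and the L1 distance metric are our assumptions for illustration, not prescriptions from the text.</p>
      <preformat>
```python
def histogram(pixels, bins=16, max_val=256):
    """Build a normalized intensity histogram from a flat list of pixel values."""
    counts = [0] * bins
    width = max_val // bins
    for p in pixels:
        counts[min(p // width, bins - 1)] += 1
    return [c / len(pixels) for c in counts]

def histogram_distance(h1, h2):
    """L1 distance between normalized histograms; 0 means identical distributions."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Two toy "images" as flat pixel lists: one dark, one bright
dark = [10, 20, 30, 25, 15, 12]
bright = [200, 220, 240, 230, 210, 250]
print(histogram_distance(histogram(dark), histogram(bright)))
```
      </preformat>
      <p>Identical images give a distance of 0, while non-overlapping intensity distributions approach the maximum of 2, so the distance can serve as a crude content-similarity measure.</p>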
      <p>Image analysis can be interpreted as its recognition, i.e. the maximum compression of the
volume of information to eliminate the redundancy of processed images. Two kinds of difficulties
arise in solving this problem: algorithmic ones and those of technical implementation. The compression
algorithm should be sufficiently reliable and, at the same time, as economical as possible. The
search for effective algorithms is now carried out by researching the mechanisms of the brain,
as well as by developing and studying various recognition programs.</p>
      <p>The technical complexity is caused primarily by the relatively low speed and insufficient
memory capacity of computing systems. In addition, such systems are poorly adapted to the input
and processing of multidimensional arrays of information, which is what images are.</p>
      <p>
        The task of automatic analysis reduces to finding some function that maps a set of images
into a set whose elements are image classes. The process of determining such a function should
be carried out in three stages [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]:
1. Preliminary processing. A given image f(x, y) is transformed into one or more
new images {f1(x, y), ..., fn(x, y)} by some set or sequence of certain operations.
2. Feature selection. The transformed images are processed by functions that determine
the features, as a result of which the image is coded.
3. Classification. As a result of these stages, a set of data appears, which can be considered
the features of the initial image f(x, y).
      </p>
      <p>This set should be considered as a point in n-dimensional space. If the areas occupied by a
class in this space are indicated or the probability density for each class is specified on it, then,
using measures of geometric proximity and maximum likelihood, this image can be assigned to
a certain class, that is, "classified."</p>
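      <p>A minimal sketch of this classification step, using Euclidean distance as the measure of geometric proximity; the class names, centroids and feature vectors below are invented for illustration.</p>
      <preformat>
```python
import math

def classify(feature_vector, class_centroids):
    """Assign a feature point in n-dimensional space to the class whose
    centroid is geometrically closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_centroids, key=lambda c: dist(feature_vector, class_centroids[c]))

# Hypothetical 2-D feature space with two image classes
centroids = {"circle": (1.0, 0.9), "triangle": (0.2, 0.1)}
print(classify((0.9, 0.8), centroids))  # closest centroid wins
```
      </preformat>
      <p>In practice, the maximum-likelihood variant mentioned above would replace the distance with a per-class probability density.</p>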
      <p>The first two stages — image pre-processing and feature selection — are quite difficult. This
is caused by the redundancy of the image owing to the presence of an extra background, and by the
uncertainty of position, orientation, and scale; therefore, the correct choice of methods of image
filtering, normalization, obtaining invariants, etc., is of great importance.</p>
      <p>The procedure of forming the features or parameters of the primary description, which is a
transition from a two-dimensional function (the image) to a system of numbers (the features), can be
interpreted as the task of defining some functional.</p>
      <p>The third stage of solving the problem of analysis — classification, associated with the
performance of a complex of simple arithmetic or logical operations, can be successfully solved
with the use of computer technology.</p>
      <p>Therefore, in image analysis the main difficulties are associated with pre-processing
and the selection of features.</p>
      <p>In solving analysis tasks, two groups of tasks can be distinguished. The first is when
the information is contained in the integral properties of the entire image (textures — cloud
cover, interferograms and shadow pictures, etc.). Such tasks are solved by analysing the
statistical properties of a set of objects that are present simultaneously in the field of vision,
without considering the individual local properties of each of them separately. To compress the
information in such images, histograms of the distribution of the informative parameters most
related to image decomposition need to be used. The second group is the recognition of
individual objects (or fragments) according to given geometric parameters and their
selection against the background of a collection of objects, as well as their counting, which is another
method of abbreviated image description.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Semantic analysis implementation principles for searching by the content of a graphic object</title>
      <p>The basis of any semantic analysis are the following factors:
• availability of an interpretation system;
• rules of the interpretation system usage;
• homomorphism between the rules for using the interpretation system and the rules for building and presenting the objects to be interpreted;
• semantic consistency of the description system and presentation of the graphic objects interpretation system.</p>
      <p>First, it should be noted that in this case only graphic objects will be discussed and only
their semantic analysis will be investigated. The constituent parts of the interpretation system
are the following components:
• the semantic dictionary of the graphic objects system Sc;
• the semantic environment G(Sc);
• a system of environment rules, formally recorded in the form Σ = {ξ1, ξ2, ..., ξn}.</p>
      <p>A semantic dictionary Sc is an ordered set of attributes, primitives, and elements used to
describe graphical objects. Each of the attributes, primitives and elements is defined by a certain
identifier, which will later be used in the formal description as the value of a certain variable.
Each individual graphic object is considered as an object from which primary attributes or
graphic primitives can be isolated.</p>
      <p>
        Semantic dictionary Sc is a more complex structure than a traditional language dictionary,
as it is focused on solving problems related to the semantic analysis of the graphic object
content. Given that the attributes and graphic primitives that we will use to describe graphic
objects are a limited set, it is possible to structure a corresponding dictionary. If necessary, the
dictionary will only grow due to attributes or new primitives. For example, among the
dictionary elements, you can introduce a certain hierarchical dependence of the importance of
the elements in the description of the subject area. This principle is widely used in various
analysers and is known as the principle of determining key, or control, elements [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Such
structuring can be complicated and developed. The minimum level of the Sc dictionary structure
is the assignment of key elements in the dictionary, and the maximum level of the Sc
structure is the degeneration of the semantic dictionary into a set of fragments of graphic
objects that determine the only possible forms and ways of describing graphic objects.
      </p>
      <p>For the full operation of the system, it is necessary to investigate the possibility of creating
a system for evaluating search engine quality based on the use of semantic analysers. In this
case, the task of creating and researching search result evaluation models for a certain system
arises. To create them, the following approaches can be used, which make it possible to solve the
problem of determining the search evaluation in graphic objects at the semantic level:
• analysis, from the dictionary point of view, of the degree of reduction in the effectiveness of the system's functioning;
• system load measures;
• the strategic significance of the object and the search result.</p>
      <p>Let us consider in more detail the approaches to the implementation of semantic search of
graphic objects by content.</p>
      <p>The first approach consists of implementing the semantic loading of its carriers, which
for a graphic object are attributes, primitives, elements, their sets, fragments and the object as a
whole. The listed components are means that are directly placed in the graphic object and
actually form this object. Mediated carriers of semantics are the means by which objects are
created or analysed. These include the following components:
• the semantic dictionary structure G(Sc);
• a system of rules for building graphic objects ψ = {ψ1, ..., ψn};
• a system of parameters characterizing the object P = {Pc, ..., Pk}.</p>
      <p>Let's consider a possible formal description of the semantic structure of the dictionary.</p>
      <p>Assigning a key value to an attribute xi from the Sc dictionary means assigning the value of its semantic
significance for the entire environment of the subject area. It is obvious that the degree of such
significance can be different for different attributes and primitives. This means that the Sc
dictionary can be partitioned into subsets that combine sets of attributes or primitives with
equal or close values of semantic significance. Such subsets are independent of the specific
interpretation of a particular set of elements I(xi). Therefore, it is possible to formally
represent the simplest structure of the Sc dictionary in the form of a tree D(Sc). The semantic
criticality ki of an attribute xi of a set xi(ki) can have a certain level in the given context of
using other attributes and primitives with lower levels of criticality kj &lt; ki; such an element
xi(ki) is critical in Sc and will be called a key element. We will assume that critical
dependence on the context can be developed only at one level of the hierarchy of the degree of
significance of the elements of the context. Further measures of significance, although they
exist in other attributes or primitives, are not contextually determined for key or critical
elements. This is because the measure of semantic criticality of an element is that it has the maximum
value of its own semantic expressiveness or semantic value. In this case, the structure of the
dictionary can be formally represented in the form of a hierarchical tree, which will be
analytically written as follows:</p>
      <p>G(Sc) = Sik(xik,1, ..., xik,m) → Sjz(xjz,1, ..., xjz,m) &amp; Siz(z1) &amp; ... &amp; Siz(zm), (1)
where Sik — the set of key elements xik,j;
Sjz — the set of contextually determined elements xjz,i;
Siz(zi) — sets with different levels of semantic significance of the elements listed in the
dictionary Sc.</p>
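      <p>As an illustrative sketch only, such a hierarchical dictionary tree can be represented in code; the element names and significance levels below are assumptions, not part of the described method.</p>
      <preformat>
```python
# Sketch of a semantic dictionary S_c structured as a tree G(S_c):
# key elements S_ik point to contextually determined elements S_jz,
# which are grouped into sets S_iz by level of semantic significance.
semantic_dictionary = {
    "key_elements": ["contour", "vertex"],            # S_ik
    "contextual": {                                   # S_jz entries
        "contour": {"significance": 3, "elements": ["circle", "polygon"]},
        "vertex": {"significance": 2, "elements": ["corner", "endpoint"]},
    },
}

def elements_at_significance(dictionary, level):
    """Collect the set S_iz of elements whose semantic significance equals `level`."""
    out = []
    for entry in dictionary["contextual"].values():
        if entry["significance"] == level:
            out.extend(entry["elements"])
    return out

print(elements_at_significance(semantic_dictionary, 3))
```
      </preformat>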
      <sec id="sec-3-1">
        <title>3.1. Semantic significance of dictionary</title>
        <p>Consider the relationship between the semantic significance xi(ki) of an element with a key
value and the interpretation of the corresponding element I(xi). To ensure the
effectiveness of this approach, we will assume that the interpretation I(xi) depends on the
amount of semantic significance equally for key elements xi(ki) and for other elements xi(zi),
in relation to their semantic significance. For this, we will assume that the value
determined by the interpretation I(xi) depends on the number of attributes or primitives used
to describe this element. For example, an element of the Sc dictionary can have the following structure:
n := &lt;xi&gt; → &lt;xi1, ..., xi*k, ..., xin&gt;, (2)
where n — a numeric or any other identifier of the Sc dictionary element xi; xi*k — an element
that belongs to the subject area described by the Sc dictionary and has its own
interpretation I(xik).</p>
        <p>It can be seen from the given expression that the Sc dictionary need not contain all the
elements that exist in the description of the graphic object. The Sc dictionary contains only the
terminological base of the elements of the object description used in the format chosen
to represent such a graphic object. The number of elements implementing the I(xi)
interpretation is not only the number of elements of one line of the I(xi) interpretation but is also
summed with the number of elements of the I(xik) interpretation extension for the element
xik from the Sc dictionary. It can be written like this:
π(xi(ki)) = π(I(xi)) = xi1 + xi2 + ... + xi,k−1 + ... + xin + (xk1 + ... + xkm), (3)
where π — the function counting the number of elements of the interpretive description of xi, and
xik — the elements of the interpretive description of the element xi*k, which has its own I(xik)
interpretation within the Sc dictionary.
Thanks to this definition of the interpretive description I(xi), it is possible to determine the
degree of semantic significance of individual elements of the Sc dictionary. It is obvious that
for Sik(xik1, ..., xikm) the equality π(xik1) = π(xik2) = ... = π(xikm) does not necessarily hold.
The value, or semantic significance, of an element xi(ki) in a set Sik can be specified by a
certain interval of numerical values, or π(xi(ki)) = [m, n]. Similarly, values of semantic
significance are calculated for other elements that are not included in the set Sik of key
elements, which will be equal to π(xi(zi)). Ranges of values [α, β] are also specified for such
Siz sets, which determine whether each xiz belongs to one or another Siz set.
Such a structure of the semantic dictionary will be written in the form of a relation:</p>
        <p>S ik πi (x i (k i )) → {S j π j (x j (z j )) ⊕ ... ⊕S m πm (x m (z m ))},
where ⊕ — the symbol of the logical function is the sum modulo two. Let’s consider the
following statement.</p>
        <p>Statement. For a set of key elements Sik(xi), the contextually conditioned set of semantically
significant elements Sjz(xj) is the one for which the relation holds:</p>
        <p>πj(xj(zj)) = max{π1(x1(z1)), ..., πn(xn(zn))}. (5)</p>
        <p>Assume that the statement is not correct, i.e. that condition (5) does not hold. In this case, it
can be specified that Sik(xi(ki)) → Sjz(xj(zj)) and there is a relation:</p>
        <p>πj(xj(zj)) ≠ max{π1(x1(z1)), ..., πm(xm(zm))}. (6)</p>
        <p>It means that in Si(xi(zi)) there is an element xj*(zj*) for which:</p>
        <p>πj*(xj*(zj*)) = max{π1(x1(z1)), ..., πm(xm(zm))}. (7)</p>
        <p>As each xj(zj) with the value πj(xj(zj)) automatically defines the range for each xj(zj)
included in the corresponding set Sj(xj(zj)), and for all Si(xi) the ranges [α, β] of their values
are determined in the Sc dictionary, it can be claimed that there is an Sj(xj(zj)) in Sc for
which the relation holds:
πj1(xj1(zj1)) &amp;&amp; ... &amp;&amp; πjm(xjm(zjm)) = max{πn1(xn1(zn1)), ..., πnm(xnm(znm))} = Si(xi(zi)). (8)</p>
        <p>Then the equality holds:
πj*(xj*(zj*)) = πj1(xj1(zj1)) ∨ ... ∨ πjm(xjm(zjm)). (9)</p>
        <p>This equality indicates that the I(xi) interpretation in the Sc dictionary is formed in such a
way that all elements xi for which π(xi) falls into one range [α, β] defined by I(xi) are
combined into one set Sj(xj(zj)) of semantically significant elements. Therefore, the
assumption that the condition of statement (5) does not hold leads to a contradiction.</p>
        <p>We will prove its sufficiency. Let {Sik xi(ki) → Sj xj(zj)} &amp; {Sik xi(ki) → Sr xr(zr)};
then Sik xi(ki) allows an ambiguous interpretation:</p>
        <p>Ii*(xi(ki)) = Ii1(xi(ki)) &amp; Ii2(xi(ki)). (10)</p>
        <p>But, according to the statement, the following relations must hold for the components
πj xj(zj) ∈ πj Sj(xj) and πr xr(zr) ∈ πr Sr(xr):
πj xj(zj) = max{π1(x1(z1)), ..., πn(xn(zn))},
πr xr(zr) = max{π1(x1(z1)), ..., πn(xn(zn))},
where the maximum is taken over all elements of Sc except for the set Sik(xik), and hence
also over all Sj(xj(zj)). Then the ranges for πj(xj(zj)) and for πr(xr(zr)) intersect, or:
{πj(xj(zj)) ∈ [αj, βj]} ∩ {πr(xr(zr)) ⊂ [αr, βr]} ≠ 0. (11)</p>
        <p>And this is impossible according to the structure of Sj(xj(zj)) within the limits of Sc, which
is put in accordance with the interpretation Ii(xi(zi)) for all Sj(xj(zj)) ∈ Sc. This proves
the statement about the condition of existence of a contextually conditioned set for a set
Sik(xi(ki)).</p>
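        <p>The counting function π from expression (3) can be sketched recursively: the size of an element's interpretive description includes the expanded descriptions of constituent elements that have their own interpretation in Sc. The toy dictionary below is an assumption for illustration.</p>
        <preformat>
```python
# Toy interpretations I(x): each dictionary element maps to the list of
# elements of its interpretive description; parts that are themselves
# dictionary keys have their own interpretation and are expanded.
interpretations = {
    "house": ["roof", "wall", "wall", "door"],
    "roof": ["triangle", "line"],
    "door": ["rectangle"],
}

def pi(element, I):
    """Count the elements of the interpretive description of `element`,
    expanding sub-elements with their own interpretation, as in expr. (3)."""
    if element not in I:
        return 1  # a terminal attribute or primitive counts as one element
    return sum(pi(part, I) for part in I[element])

print(pi("house", interpretations))
```
        </preformat>
        <p>Comparing the resulting counts across elements then yields the ranges of semantic significance used to partition the dictionary.</p>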
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Semantic dictionary structure</title>
        <p>
          The structure of the Sc dictionary can change in the direction of increasing individual
components, and, accordingly, the number of components can change during the transition
from one structure to another. We will call the structure that operates on Sc dictionary
elements, or G(Sc), the lowest structure in the hierarchy, by which we will distinguish them.
We take the name of the lowest structure since it operates with dictionary elements, which are
the smallest elements of the accepted set used to describe graphic objects, from the
point of view of their value as the minimum possible element that defines semantics as such
[
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ].
        </p>
        <p>The next structure in the hierarchy is the structure that operates on sets of elements. An
element set is a collection of semantically defined attributes or primitives. Formally, a set of
elements can be written in the form of the following relation:</p>
        <p>V = {v1(xi1), ..., vm(xim)}, where vj(xij) = {x1j, ..., xkj}.</p>
        <p>
          It is obvious that each element of the expression receives and conveys a certain semantic
significance based on the expression that consists of these elements. It is clear that an expression
is not formed from an arbitrary set of elements; the rules for forming a set of elements are
based on the rules defined by the content and on the rules of semantic admissibility. The rules
of semantic admissibility are the result of the interpretation accepted in the subject area. In the
case of subject areas of interpretation that are artificial and man-made, such prerequisites
are quite few, and one or another semantic significance in many cases rests on generally
accepted and agreed conventions. A well-known example of such conventions are standards
or formats. Of course, any agreement about the semantic significance of a certain element or a
set of elements is justified by the nature of the object, which is defined by the corresponding
definitions, but the basis of such justification is the possibility of connecting the
new semantic significance with the semantic meanings of already defined objects or
elements [
          <xref ref-type="bibr" rid="ref15 ref16 ref17">15, 16, 17</xref>
          ].
        </p>
        <p>
          The subject area, in relation to which graphic objects are composed, is quite narrow and
quite precisely defined. This is due to the generally accepted practice of defining graphic
primitives, which originates from geometry. In addition, a significant part of graphic
objects is so highly specialized that Sc dictionaries can describe such sets of elements only if they
are structured at the level of sets of elements. If we are not talking about objects that are works
of artists, it can be assumed that in dictionaries structured at the level of sets of elements,
or V(Sc), each set has its own specific interpretation I(vi) := &lt;xi1 ∗ ... ∗ xin&gt;, not very large
in the number of elements, where the sign "∗" means the function of consistency
between the elements that make up the description. Obviously, such an interpretation is not
sufficient for a non-specialist in the relevant field, but we will not consider these aspects.
Therefore, during the formation of graphic objects, it should be possible, regardless of the
specifics of the object, to use separate sets of elements contained in the Sc dictionary [
          <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
          ].
        </p>
        <p>If an Sc dictionary with V(Sc) structure is created for the analysis of objects, then in this
case, as for a dictionary with G(Sc) structure, it is possible to form hierarchical
dependencies between sets of elements, depending on their semantic meanings. In many cases,
it is possible to define control sets by analogy with control elements, which are quite convenient
in solving the problems of analysing the content of a graphic object.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Application of neural networks for image contour selection</title>
        <p>
          The problem of extracting a graphic primitive from a graphic object can be formally
presented as a classification problem. If the results of the wave algorithm described
above are fed to the input of a neural network, we obtain a classification problem in which,
for a certain object P described by a set of features pi (the list of key points of the contour
returned by the wave algorithm), the classifier returns the value of the class ci — the type of
graphic primitive to which the considered contour refers [
          <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
          ].
        </p>
        <p>If we create a neural network with N inputs and M outputs and train it to return at the output
the vector ci ∈ C when P is fed to the input, we will solve the problem of determining graphic
primitives.</p>
        <p>To solve classification problems, it is advisable to use the Kohonen network. To determine the
class to which an object belongs, it is necessary to select one of the neurons of the Kohonen
layer with the maximum output. This is done by an interpreter, which is either a software
product that selects the neuron with the highest output, or an additional layer of neurons. So,
a Kohonen network will consist of 3-4 layers of neurons, the first of which is the input layer,
the next is the Kohonen layer, the additional layer and the output. In order to obtain the sum
of outputs approaching unity, it is advisable to apply the Softmax function, which can be
interpreted as the probability of the object belonging to a certain class.</p>
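        <p>A minimal sketch of this interpreter over a Kohonen layer with assumed, pre-trained prototype weights; only the winner selection and the Softmax normalization described above are shown, and all weights and class names are invented for illustration.</p>
        <preformat>
```python
import math

def kohonen_outputs(x, weights):
    """Kohonen-layer neuron outputs: negative squared distance to each
    prototype weight vector, so the closest prototype gives the largest output."""
    return [-sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]

def softmax(scores):
    """Normalize outputs to sum to one, readable as class probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def interpret(x, weights, classes):
    """The interpreter: select the neuron with the maximum output."""
    probs = softmax(kohonen_outputs(x, weights))
    best = max(range(len(probs)), key=probs.__getitem__)
    return classes[best], probs

# Illustrative prototypes for two primitive classes
weights = [(0.0, 0.0), (1.0, 1.0)]
classes = ["line", "circle"]
label, probs = interpret((0.9, 1.1), weights, classes)
print(label)
```
        </preformat>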
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental study of the semantic dictionary development methods based on graphic primitives</title>
      <p>To build an image model and its semantic description, it is necessary to use an alphabet of
conventional designations to describe graphic primitives. To do this, it is necessary to highlight
their main types that can be used in the image; the advantage of their use is that the image
model built on their basis is flexible and does not depend on scaling and positioning. In addition,
such a model does not depend on the colour in which the primitive is drawn (unless special conditions
are specified).</p>
      <p>Let's consider the proposed model of presentation of a conditional alphabet of graphic
primitives.</p>
      <p>
        First, let's build a dictionary of graphic primitives [
        <xref ref-type="bibr" rid="ref22 ref23 ref24">22, 23, 24</xref>
        ]. For this, it is necessary to
solve the problem of which graphic primitives to choose for the dictionary. Since the array of
graphic data mostly contains graphic objects processed or created with the help of automated
graphic systems, it is necessary to add the primitives used by such graphic systems to the
dictionary. Since the most popular graphic systems are the Adobe Photoshop, Xara Designer,
CorelDRAW and 3Ds Max software complexes, the main graphic primitives used by these
graphic systems were selected for the dictionary.
      </p>
      <p>In addition, graphic primitives that are used to create a semantic description of a graphic
image must meet the following requirements:
• sufficiency to fully convey the content of the graphic object;
• ease of formalization.</p>
      <p>Therefore, the contours of the image selected from the graphic object should correspond
as much as possible to a certain type of graphic primitives of the semantic dictionary. In order
to maximally reduce the time of the process of identifying the contour of the image and its
comparison with the elements of the semantic dictionary, it is necessary to choose such graphic
primitives as the elements of the semantic dictionary, which can be formalized most simply.</p>
      <p>
        To determine the set of elements of the semantic dictionary, we will classify the image
contours that may occur in a graphic object [
        <xref ref-type="bibr" rid="ref25 ref26 ref27">25, 26, 27</xref>
        ].
      </p>
      <p>By continuity, the contour of an image can be closed or open. We consider a contour
closed if a wave launched along the contour returns to the starting point. An open contour is
one in which the wave going around the contour fades out at two key points and does not
return to the initialization point. According to this feature, graphic primitives are divided into
lines and shapes: a line is an open contour, a shape is a closed one.</p>
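      <p>This closed/open test can be sketched with a simplified check that is equivalent to the wave behaviour for thin contours: the wave fades exactly at pixels that have a single contour neighbour. The 8-connected, one-pixel-wide contour encoding is our assumption.</p>
      <preformat>
```python
def is_closed_contour(contour_pixels):
    """Return True if a thin contour is closed: a wave launched along it
    returns to the start, which for an 8-connected one-pixel-wide contour
    means no pixel is an endpoint with a single neighbour."""
    pixels = set(contour_pixels)
    for (x, y) in pixels:
        neighbours = sum(
            (x + dx, y + dy) in pixels
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        if neighbours in (0, 1):  # the wave fades here: open contour
            return False
    return True

ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
segment = [(0, 0), (1, 0), (2, 0)]
print(is_closed_contour(ring), is_closed_contour(segment))
```
      </preformat>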
      <p>According to the presence of key points, there are contours with intermediate key points
(besides the two points where the wave fades out in open contours) and contours without
them.</p>
      <p>According to the segments connecting the key points, contours can have straight
segments, curved segments, or mixed segments.</p>
      <p>According to the given classification, the semantic dictionary should contain the following
graphic primitives.</p>
      <p>Graphical primitives describing open contours. We will call them graphic primitives of
the second type.</p>
      <p>Line — an open contour with two key points connected by a straight line segment.</p>
      <p>Curve — an open contour that does not correspond to a straight line in any way.</p>
      <p>Graphical primitives describing closed contours. We will call them graphic primitives
of the first type. Since most of the geometric shapes used to denote the first type of graphic
primitives consist of vertices connected by straight line segments, and perfect straight lines
are rarely found in a graphic object, it is necessary to introduce additional types of graphic
primitives that identify contours with curved segments. To designate such contours, we
introduce the word "fuzzy", meaning that the figure formally meets all the requirements of
the contour definition, but its key points are connected by curved or mixed segments.</p>
      <p>Circle — a closed contour without key points, all points of which are equidistant from
the center of gravity of the contour.</p>
      <p>"Fuzzy" circle — a closed contour without key points, not all points of which are
equidistant from the center of gravity of the contour.</p>
      <p>Triangle — a closed contour with three key points connected by three straight line
segments.</p>
      <p>"Fuzzy" triangle — a closed contour with three key points connected by three curved
or mixed segments.</p>
      <p>Quadrangle — a closed contour with four key points connected by four straight line
segments.</p>
      <p>"Fuzzy" quadrangle — a closed contour with four key points connected by four curved
or mixed segments.</p>
      <p>Polygon — a closed contour with five or more key points connected by straight line
segments.</p>
      <p>"Fuzzy" polygon — a closed contour with five or more key points connected by curved
or mixed segments.</p>
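The classification above can be condensed into a short sketch that maps the number of key points and the segment type to a primitive name (illustrative only; the function name and the `None` return for open contours are assumptions):

```python
# Sketch: name a closed-contour primitive from its key-point count;
# "fuzzy" marks contours whose key points are joined by curved/mixed segments.
def primitive_name(key_points, straight_segments):
    base = {0: "circle", 3: "triangle", 4: "quadrangle"}.get(key_points)
    if base is None:
        if key_points < 5:
            return None  # one or two key points: an open contour (line/curve)
        base = "polygon"
    return base if straight_segments else '"fuzzy" ' + base
```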
      <p>Primitives of the first type are the most significant elements and unambiguously
characterize the content of a graphic object. Other graphic primitives are less significant and
may simply be parts of first-type primitives, for example when one graphic primitive is
derived from another by cutting the graphic.</p>
      <p>Using the tools of the XML language, we will present the alphabet as follows.</p>
      <p>Having selected graphic primitives as elements, each of them, depending on its shape, can
be assigned a value and given a semantic description:
&lt;Element GroupID=""&gt;
&lt;ID&gt;1&lt;/ID&gt;
&lt;Name&gt;&lt;/Name&gt;
&lt;Value&gt;&lt;/Value&gt;
&lt;/Element&gt;</p>
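As a sketch, such a dictionary entry could be generated with Python's standard library (the GroupID value and the name "triangle" here are illustrative, not part of the dictionary definition):

```python
import xml.etree.ElementTree as ET

def make_entry(group_id, elem_id, name, value):
    # Build one dictionary entry in the <Element> format shown above.
    entry = ET.Element("Element", GroupID=str(group_id))
    ET.SubElement(entry, "ID").text = str(elem_id)
    ET.SubElement(entry, "Name").text = name
    ET.SubElement(entry, "Value").text = str(value)
    return entry

xml_text = ET.tostring(make_entry(1, 1, "triangle", 3), encoding="unicode")
```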
      <p>If necessary, the dictionary can be expanded by entering element attributes, such as fill
colour, line colour, etc. XML tools make it possible to implement any changes by changing the
format of the dictionary, introducing new elements into it, and reformatting existing
descriptions.</p>
      <p>For example, after expanding to include colour, fill, and line colour attributes, the dictionary
description would have the following form:
&lt;Element GroupID=""&gt;
&lt;ID&gt;1&lt;/ID&gt;
&lt;Name&gt;&lt;/Name&gt;
&lt;Value&gt;&lt;/Value&gt;
&lt;Attributes&gt;
&lt;Attribute Name="color"&gt;
&lt;Value&gt;&lt;/Value&gt;
&lt;/Attribute&gt;
&lt;Attribute Name="filling"&gt;
&lt;Value&gt;&lt;/Value&gt;
&lt;/Attribute&gt;
&lt;/Attributes&gt;
&lt;/Element&gt;</p>
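A sketch of reading such an extended entry back with the standard library (the entry values "triangle", "black" and "none" are illustrative):

```python
import xml.etree.ElementTree as ET

# Illustrative extended entry in the format listed above.
entry_xml = """
<Element GroupID="1">
  <ID>1</ID>
  <Name>triangle</Name>
  <Value>3</Value>
  <Attributes>
    <Attribute Name="color"><Value>black</Value></Attribute>
    <Attribute Name="filling"><Value>none</Value></Attribute>
  </Attributes>
</Element>
"""

def read_entry(text):
    # Parse one entry and collect its Attribute elements into a dict.
    root = ET.fromstring(text)
    attrs = {a.get("Name"): a.findtext("Value")
             for a in root.findall("./Attributes/Attribute")}
    return {"id": root.findtext("ID"), "name": root.findtext("Name"),
            "value": root.findtext("Value"), "attributes": attrs}
```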
      <p>The dictionary will consist of a set of elements with their meanings, and, accordingly, the
semantic description will consist of dictionary elements (Fig. 1).</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and discussion</title>
      <p>Graphical objects in their essence are most fully characterized by words of natural language,
therefore it is advisable to describe them based on methods of formal grammars and semantic
features.</p>
      <p>To determine the semantic significance of graphic objects and their fragments, it is necessary
to build a semantic dictionary for the subject area that includes the main features of graphic
objects, their attributes and their hierarchy. The completeness, semantic significance and
consistency of the attributes of graphic objects should be determined within the subject area,
which will reduce search time and improve search quality.</p>
      <p>The rules for forming a set of dictionary elements are based on rules defined by the
content and on rules of semantic admissibility. The rules of semantic admissibility result from
the interpretation accepted in the subject area. For artificial, man-made subject areas of
interpretation, such prerequisites are rather few, and in many cases a given semantic
significance rests on generally accepted conventions. Inference rules are proposed for the
semantic analyser of graphic objects using the methods of formal grammars, based on the
consistency and completeness of the semantic dictionary.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The authors are appreciative of colleagues for their support and appropriate suggestions, which
allowed them to improve the materials of the article.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mondal</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. V.</given-names>
            <surname>Jawahar</surname>
          </string-name>
          ,
          <article-title>"Graphical Object Detection in Document Images," 2019 International Conference on Document Analysis and Recognition (ICDAR), Sydney</article-title>
          , NSW, Australia,
          <year>2019</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>58</lpage>
          , doi: 10.1109/ICDAR.
          <year>2019</year>
          .
          <volume>00018</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Seel-audom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Naiyapo</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Chouvatut</surname>
          </string-name>
          ,
          <article-title>"A search for geometric -shape objects in a vector image: Scalable Vector Graphics (SVG) file format," 2017 9th International Conference on Knowledge and Smart Technology (KST), Chonburi</article-title>
          , Thailand,
          <year>2017</year>
          , pp.
          <fpage>305</fpage>
          -
          <lpage>310</lpage>
          , doi: 10.1109/KST.
          <year>2017</year>
          .
          <volume>7886098</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havrysh</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khamula</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lysenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>O.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havrysh</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Risks of loss of personal data in the process of sending and printing documents (Open Access) (</article-title>
          <year>2020</year>
          ) CEUR Workshop Proceedings,
          <volume>2805</volume>
          , pp.
          <fpage>373</fpage>
          -
          <lpage>384</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Adi</surname>
          </string-name>
          and T. M. Cheng,
          <article-title>"Analysis of Impact of Synthetic Image Data with Multiple Randomization Strategies on Object Detection Performance," 2022 IEEE 4th Eurasia Conference on IOT, Communication and Engineering (ECICE), Yunlin</article-title>
          , Taiwan,
          <year>2022</year>
          , pp.
          <fpage>206</fpage>
          -
          <lpage>210</lpage>
          , doi: 10.1109/ECICE55674.
          <year>2022</year>
          .
          <volume>10042887</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Roy</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Boppana</surname>
          </string-name>
          ,
          <article-title>"Interactive web-based image and graph analysis using Sonification for the Blind,"</article-title>
          <source>2022 IEEE Region 10 Symposium (TENSYMP)</source>
          , Mumbai, India,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/TENSYMP54529.
          <year>2022</year>
          .
          <volume>9864411</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bisikalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kovtun</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovtun</surname>
          </string-name>
          , “
          <article-title>The Concept of Automated Phonetic Analysis of a Speech with Asymptotic Adaptation to the Specifics of Phonation of Language Units,” 2022 12th International Conference on Advanced Computer Information Technologies (ACIT)</article-title>
          .
          <source>IEEE, Sep. 26</source>
          ,
          <year>2022</year>
          . doi:
          <volume>10</volume>
          .1109/acit54803.
          <year>2022</year>
          .
          <volume>9913100</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Villemoes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hirvonen</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Purnhagen</surname>
          </string-name>
          ,
          <article-title>"Decorrelation for audio object coding,"</article-title>
          <source>2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source>
          , New Orleans, LA, USA,
          <year>2017</year>
          , pp.
          <fpage>706</fpage>
          -
          <lpage>710</lpage>
          , doi: 10.1109/ICASSP.
          <year>2017</year>
          .7952247
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C. -F.</given-names>
            <surname>Chen</surname>
          </string-name>
          and
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Rosenberg</surname>
          </string-name>
          ,
          <article-title>"Virtual Content Creation Using Dynamic Omnidirectional Texture Synthesis," 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR</article-title>
          ), Tuebingen/Reutlingen, Germany,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          , doi: 10.1109/VR.
          <year>2018</year>
          .
          <volume>8446410</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nehmé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Abid</surname>
          </string-name>
          , G. Lavoué,
          <string-name>
            <given-names>M. P. D.</given-names>
            <surname>Silva</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Callet</surname>
          </string-name>
          ,
          <article-title>"Cmdm-Vac: Improving A Perceptual Quality Metric For 3D Graphics By Integrating A Visual Attention Complexity Measure,"</article-title>
          <source>2021 IEEE International Conference on Image Processing (ICIP)</source>
          , Anchorage, AK, USA,
          <year>2021</year>
          , pp.
          <fpage>3368</fpage>
          -
          <lpage>3372</lpage>
          , doi: 10.1109/ICIP42928.
          <year>2021</year>
          .
          <volume>9506662</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Durnyak</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>B.H.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anastasiya</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>Research of image processing methods in publishing output systems (Open Access) (</article-title>
          <year>2018</year>
          ) International Conference on Perspective Technologies and Methods in MEMS Design, pp.
          <fpage>178</fpage>
          -
          <lpage>181</lpage>
          . doi:
          <volume>10</volume>
          .1109/MEMSTECH.
          <year>2018</year>
          .8365728
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiaqi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Jian</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>"A UI LayoutGeneration Method Based on Shared Feature Extraction,"</article-title>
          <source>2023 International Conference on New Trends in Computational Intelligence (NTCI)</source>
          , Qingdao, China,
          <year>2023</year>
          , pp.
          <fpage>286</fpage>
          -
          <lpage>292</lpage>
          , doi: 10.1109/NTCI60157.
          <year>2023</year>
          .
          <volume>10403714</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Narmadha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ranjithapriya</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Kannaambaal</surname>
          </string-name>
          ,
          <article-title>"Survey on image processing under image restoration,"</article-title>
          2017 IEEE International Conference on Electrical,
          <article-title>Instrumentation and Communication Engineering (ICEICE), Karur</article-title>
          , India,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/ICEICE.
          <year>2017</year>
          .
          <volume>8191919</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <article-title>"Attention Transfer Network for Nature Image Matting," in IEEE Transactions on Circuits and Systems for Video Technology</article-title>
          , vol.
          <volume>31</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>2192</fpage>
          -
          <lpage>2205</lpage>
          ,
          <year>June 2021</year>
          , doi: 10.1109/TCSVT.
          <year>2020</year>
          .
          <volume>3024213</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>"Contact Part Detection From 3D Human Motion Data Using Manually Labeled Contact Data and Deep Learning,"</article-title>
          <source>in IEEE Access</source>
          , vol.
          <volume>11</volume>
          , pp.
          <fpage>127608</fpage>
          -
          <lpage>127618</lpage>
          ,
          <year>2023</year>
          , doi: 10.1109/ACCESS.
          <year>2023</year>
          .
          <volume>3331687</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H.</given-names>
            <surname>Abu-Rasheed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dornhöfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Kismihók</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Buchmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Fathi</surname>
          </string-name>
          ,
          <article-title>"Building Contextual Knowledge Graphs for Personalized Learning Recommendations Using Text Mining and Semantic Graph Completion,"</article-title>
          <source>2023 IEEE International Conference on Advanced Learning Technologies (ICALT)</source>
          , Orem, UT, USA,
          <year>2023</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>40</lpage>
          , doi: 10.1109/ICALT58122.
          <year>2023</year>
          .
          <volume>00016</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havrysh</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>O.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khamula</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalskyi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havrysh</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Person voice recognition methods (2020)</article-title>
          <source>Proceedings of the 2020 IEEE 3rd International Conference on Data Stream Mining and Processing</source>
          , DSMP
          <year>2020</year>
          , art. no.
          <issue>9204023</issue>
          , pp.
          <fpage>287</fpage>
          -
          <lpage>290</lpage>
          , doi: 10.1109/DSMP47368.
          <year>2020</year>
          .
          <volume>9204023</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mao</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>"Image Target Detection Algorithm of Smart City Management Cases,"</article-title>
          <source>in IEEE Access</source>
          , vol.
          <volume>8</volume>
          , pp.
          <fpage>163357</fpage>
          -
          <lpage>163364</lpage>
          ,
          <year>2020</year>
          , doi: 10.1109/ACCESS.
          <year>2020</year>
          .
          <volume>3021248</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>"Semantic Segmentation of Remote Sensing Images With Self-Supervised Semantic-Aware Inpainting,"</article-title>
          <source>in IEEE Geoscience and Remote Sensing Letters</source>
          , vol.
          <volume>19</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          ,
          <year>2022</year>
          , Art no.
          <issue>3513705</issue>
          , doi: 10.1109/LGRS.
          <year>2022</year>
          .
          <volume>3212795</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lin</surname>
          </string-name>
          et al.,
          <article-title>"Deep Learning-Based Image Analysis Framework for Hardware Assurance of Digital Integrated Circuits," 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA)</article-title>
          , Singapore,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/IPFA49335.
          <year>2020</year>
          .
          <volume>9261081</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Song</surname>
          </string-name>
          and
          <string-name>
            <surname>C. - I. Chang</surname>
          </string-name>
          ,
          <article-title>"A Semantic Feature Extraction Method For Hyperspectral Image Classification Based On Hashing Learning,"</article-title>
          <source>2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)</source>
          , Amsterdam, Netherlands,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/WHISPERS.
          <year>2018</year>
          .
          <volume>8747106</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bargavi</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>"Object-Based Image Analysis of Hyper Spectral Imagery Using Semantic Segmentation Techniques,"</article-title>
          <source>2024 International Conference on Optimization Computing and Wireless Communication (ICOCWC)</source>
          ,
          <source>Debre Tabor, Ethiopia</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          , doi: 10.1109/ICOCWC60930.
          <year>2024</year>
          .
          <volume>10470905</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zheng</surname>
          </string-name>
          and J. Cheng,
          <article-title>"Remote Sensing Image Scene Classification Method Based on Semantic and Spatial Interactive Information," 2022 7th International Conference on Image, Vision and Computing (ICIVC), Xi'an,</article-title>
          China,
          <year>2022</year>
          , pp.
          <fpage>436</fpage>
          -
          <lpage>441</lpage>
          , doi: 10.1109/ICIVC55077.
          <year>2022</year>
          .
          <volume>9887201</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>F.</given-names>
            <surname>Lahoud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. V. O.</given-names>
            <surname>Segovia</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Süsstrunk</surname>
          </string-name>
          ,
          <article-title>"Keyword-based image color rerendering with semantic segmentation,"</article-title>
          <source>2017 IEEE International Conference on Image Processing (ICIP)</source>
          , Beijing, China,
          <year>2017</year>
          , pp.
          <fpage>2936</fpage>
          -
          <lpage>2940</lpage>
          , doi: 10.1109/ICIP.
          <year>2017</year>
          .
          <volume>8296820</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Tymchenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havrysh</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khamula</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalskyi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vasiuta</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lyakh</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <article-title>Methods of Converting Weight Sequences in Digital Subtraction Filtration</article-title>
          (
          <year>2019</year>
          ) International Scientific and Technical Conference on Computer Sciences and Information Technologies, 2, art. no. 8929750, pp.
          <fpage>32</fpage>
          -
          <lpage>36</lpage>
          , doi: 10.1109/STC-CSIT.2019.8929750.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] Y. Wang, Z. Luo, D. Chen and Y. Li, "Semantic Segmentation of Fire and Smoke Images Based on Dual Attention Mechanism," 2022 4th International Conference on Frontiers Technology of Information and Computer (ICFTIC), Qingdao, China, 2022, pp. 185-190, doi: 10.1109/ICFTIC57696.2022.10075210.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] A. Ortis, G. M. Farinella, G. Torrisi and S. Battiato, "Visual Sentiment Analysis Based on Objective Text Description of Images," 2018 International Conference on Content-Based Multimedia Indexing (CBMI), La Rochelle, France, 2018, pp. 1-6, doi: 10.1109/CBMI.2018.8516481.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] W. Wang, Y. Ding and C. Tian, "A Novel Semantic Attribute-Based Feature for Image Caption Generation," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 2018, pp. 3081-3085, doi: 10.1109/ICASSP.2018.8461507.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>