<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Determining the Volume of Bodies of Revolution from Two-Dimensional Photographic Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ihor Konovalenko</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vadim Piscio</string-name>
          <email>pisciovp@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andriy Y. Hospodarskyy</string-name>
          <email>hospodarskyy@tdmu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdellah Menou</string-name>
          <email>a.menou@onda.ma</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Academie Internationale Mohammed VI de l'Aviation Civile (AIAC/ONDA)</institution>
          ,
          <addr-line>B.P. 005, Casablanca 8101</addr-line>
          ,
          <country country="MA">Morocco</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>I. Horbachevsky Ternopil National Medical University</institution>
          ,
          <addr-line>Maidan Voli, 1, Ternopil, 46002</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ternopil National Ivan Puluj Technical University</institution>
          ,
          <addr-line>Rus'ka str. 56, Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>In this paper, an algorithm is proposed for determining the geometric parameters and volume of bodies of revolution from two-dimensional photographic images. Unlike approaches based on contour extraction or the use of neural networks, the proposed method is robust against noise and requires minimal computational resources. The developed software provides object extraction, evaluation of object color characteristics, calculation of moments of different orders, and determination of volume in physical units.</p>
      </abstract>
      <kwd-group>
        <kwd>Computer vision</kwd>
        <kwd>evaluation of structures from images</kwd>
        <kwd>image analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the processing of photometric information, the task arises to determine the volume of
bodies and their main parameters based on a known photographic image. This task is relevant
in computer vision, industrial quality control systems, and medical imaging. In recent years,
considerable attention has been paid by researchers to computational geometry methods that
allow obtaining 3D models based on a limited amount of input data. However, most methods
require specialized equipment (for example, 3D scanners or stereo cameras).
Therefore, the problem arises of restoring the spatial parameters of bodies from a single
two-dimensional image. In this case, the image of the object to be evaluated is often noisy,
and the background of the image contains parts with the same color as the desired object.
There may also be a situation when several objects that need to be evaluated are present in
the photographic image at once, and the color of each object may differ from the others.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature review and problem statement</title>
      <p>© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>The most common approach to object extraction [1] is based on sequentially
tracing the contour of the object under study and approximating its parameters, for
example with the Hough transform [2]; the desired geometric parameters are then determined
from this approximation. The accuracy of measuring the geometric parameters therefore
depends, first of all, on the quality of the extraction of the object's boundaries, so the
methods applied to process contour images play an important role.
Traditional segmentation methods based on contour extraction (Sobel, Laplace, etc.) often
give significant errors in the case of noisy data or a heterogeneous background. The
uncertainty could be reduced by forming, at the image acquisition stage, a clean contour of
the desired object one pixel wide and without breaks in the contour line; however, in real
systems, due to irregularities on the surface of objects and light diffraction, the object
boundary is always blurred. In this case, the known differential contour-processing methods
do not give the desired results [3].</p>
      <p>In [4], instead of applying differential operators to the image, it is proposed to
extract the contour of an object using the wavelet transform, which has several advantages:
it increases the accuracy of determining the contour points and reduces noise, because the
"energy" of the object's contour is concentrated in the vicinity of intensity differences
when an appropriate processing scale is selected. However, increasing the scale of the
wavelet transform (the carrier length) expands the spatial localization of the object,
which degrades the accuracy of determining the boundaries.
In our opinion, the problem with the classical method that uses contour extraction to
calculate geometric parameters is the significant loss of information during processing: of
the entire image, only the information about the contour separating the object from the
background is used. In addition, algorithms of this class become quite complicated, as they
must remove the artifacts that arise during contour extraction.</p>
      <p>This paper proposes an algorithm that combines ease of implementation and high noise
immunity, since it does not require explicit boundary detection or other similar operations,
and allows the volume of bodies of revolution to be determined using only their
two-dimensional photographic images.</p>
    </sec>
    <sec id="sec-2-1">
      <title>2.1. Problem statement</title>
      <p>The aim of this work is to develop a method and software for determining the
volume of bodies of revolution from their two-dimensional images. To achieve this goal, the
following tasks must be performed:
1. Develop a segmentation method that allows separating objects from the background in
complex conditions of a noisy image.
2. Implement an algorithm for calculating the geometric parameters of selected objects.
3. Propose a method for determining the axis of rotation and calculating the object's volumes.
4. Create software to test the method's performance.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed methodology</title>
      <p>For simplicity, the properties of the source image points are stored as an array MapObj[i,j],
the dimensions of which correspond to the dimensions of the source image. The description
contains information about the point's belonging to a specific object or background, or that
the point has not been analyzed before. Since the number of objects is significantly less than
256, these properties are stored in one byte.</p>
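      <p>This storage scheme can be illustrated with a short Python sketch (not the authors' code; the encoding values and names are assumptions):</p>

```python
import array

# Assumed one-byte encoding for the MapObj point properties (illustrative):
UNKNOWN = 0       # point not analyzed yet
ANALYZED = 254    # point analyzed but not classified
BACKGROUND = 255  # point belongs to the background
# object number k (1..253) would be stored directly as k

width, height = 640, 480
map_obj = array.array("B", [UNKNOWN] * (width * height))  # "B" = one unsigned byte per point

def label(x, y):
    return map_obj[y * width + x]

def set_label(x, y, value):
    map_obj[y * width + x] = value

set_label(10, 20, BACKGROUND)
print(map_obj.itemsize)  # 1 -> one byte per point, as in the text
print(label(10, 20))     # 255
```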
      <p>The key stage of the work is image segmentation, on which the accuracy of further
calculations depends. The proposed method uses a combined approach: first, a small area is
determined that is guaranteed to belong to the object; then, based on information about its
color, points belonging to the background are determined, and based on the background
points, the points of the object itself are determined. Finally, based on the belonging of the
points, the parameters of the object are estimated.</p>
      <p>Briefly, the main process of the program can be described by the following algorithm:
1. Determining the position of a group of image points belonging to an object and estimating
the color of the object and the background.
2. Image segmentation:
2.1. Highlighting the background of an object, i.e., points whose color is far from the
color of the object.
2.2. Smoothing the background boundary by evaluating neighboring background points.
2.3. Highlighting the interior of an object using the background information.
3. Determining the color of the object based on the segmentation performed.
4. Geometric analysis based on the segmentation.</p>
      <p>The above steps are performed for each selected object, ignoring the background information
from the previous object.</p>
    </sec>
    <sec id="sec-4">
      <title>3.1. Determining the Position of a Group of Image Points</title>
    </sec>
    <sec id="sec-5">
      <title>Belonging to an Object and Estimating the Color of the Object</title>
      <p>Determining the position of a group of image points belonging to an object in a stationary
scene is not difficult and can be done using a priori assumptions about the probable position
of the object. In the program written to check the results, the position of the centers of the
sought objects is fixed and selected from several options.</p>
      <p>Then the average color of this area is calculated and taken as the initial average
color of the sought object:
R̄ = (1/N) Σ R(x,y),   Ḡ = (1/N) Σ G(x,y),   B̄ = (1/N) Σ B(x,y),   (1)
where N is the number of points in the selected area guaranteed to belong to the
analyzed object; R(x,y), G(x,y), B(x,y) are the corresponding color components of the
image point with coordinates (x,y); and R̄, Ḡ, B̄ are the average color components.
Based on this information about the brightness of the sought object, the hypothetical
minimum and maximum brightness values of the object points are calculated using heuristic
formulas (2). The resulting minimum and maximum components are checked for being within the
permissible colors (i.e., belonging to the interval from 0 to 240) and adjusted if necessary.</p>
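      <p>The color-averaging step (1) and the range clamping can be sketched in Python as follows (illustrative code, not the authors' implementation; since the heuristic formulas (2) are not reproduced here, the fixed margin below is an assumed placeholder):</p>

```python
def average_color(points, image):
    """Mean R, G, B over a set of (x, y) points assumed to lie inside the object (formula (1))."""
    n = len(points)
    r = sum(image[y][x][0] for x, y in points) / n
    g = sum(image[y][x][1] for x, y in points) / n
    b = sum(image[y][x][2] for x, y in points) / n
    return r, g, b

def brightness_bounds(avg, margin=40):
    """Hypothetical min/max object brightness; the fixed margin is an assumed
    stand-in for the paper's heuristic formulas (2). Each bound is clamped to
    the permissible color interval 0..240, as in the text."""
    lo = tuple(max(0, c - margin) for c in avg)
    hi = tuple(min(240, c + margin) for c in avg)
    return lo, hi

# toy 2x2 image of (R, G, B) pixels
img = [[(100, 120, 140), (110, 130, 150)],
       [(90, 110, 130), (220, 230, 235)]]
avg = average_color([(0, 0), (1, 0), (0, 1)], img)
print(avg)                 # (100.0, 120.0, 140.0)
print(brightness_bounds(avg))  # ((60.0, 80.0, 100.0), (140.0, 160.0, 180.0))
```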
    </sec>
    <sec id="sec-6">
      <title>3.2. Image segmentation</title>
      <p>Image segmentation is performed in three stages. Due to noise and heterogeneity of the
original image, direct segmentation of the object often becomes impossible: in the middle of
the object, there may be zones completely surrounded by the object, but not classified as
object points. Therefore, inverse segmentation is performed: first, what is not an object (the
background) is selected, and then the object is segmented based on information about the
background.</p>
    </sec>
    <sec id="sec-7">
      <title>3.2.1 Defining the background of an object</title>
      <p>The background color information is used to fill the background, allowing the object to be
cleanly separated from other areas of the image. A point is considered to belong to the
background if its color does not fall within the object's color range (from the minimum to
the maximum components estimated above) and it is adjacent to a previously defined
background point.</p>
      <p>Unlike classical boundary-extraction methods, this approach is more resistant to
interference, in particular to non-uniform illumination, noise, and different color
intensity in different parts of the object. To select the background of the object, we
therefore fill a region guaranteed to have a color far from the color of the object, using
a modified "flood fill with stack" algorithm, which uses a stack for intermediate storage
of information about the points being analyzed. Information about the properties of the
points is stored in the aforementioned MapObj[i,j] array.</p>
      <p>The background extraction algorithm starts by placing several different points in a stack,
which are forcibly marked as background and correspond to image elements that are
guaranteed not to be parts of the object being analyzed.</p>
      <p>Next, in the loop, one point is extracted from the top of the stack, and its four neighbors are
analyzed, whose coordinates differ by ±1. During the analysis, two conditions are checked:
1) the admissibility of the point coordinates, i.e., whether it has not gone beyond the
boundaries of the background definition area;
2) lack of a note about a previously performed analysis of the point or about the point
belonging to the background.</p>
      <p>If both conditions are met, the point is marked as analyzed, and its
color is tested for belonging to the background. If the point belongs to the
background, it is marked accordingly and placed on the top of the
stack. This cycle is repeated until the stack is exhausted. The additional marking of
points as analyzed avoids repeated color comparisons, which are a rather slow
procedure.</p>
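      <p>The loop described above can be sketched as follows (an illustrative reimplementation rather than the authors' code; the label encoding and the per-channel color test are assumptions):</p>

```python
def fill_background(image, map_obj, seeds, lo, hi):
    """Stack-based flood fill of the background, as described in the text.
    Assumed map_obj encoding: 0 = unknown, 1 = analyzed, 2 = background."""
    UNKNOWN, ANALYZED, BACKGROUND = 0, 1, 2
    height, width = len(image), len(image[0])

    def in_object_range(px):
        # a point belongs to the background when its color falls outside lo..hi
        return all(hi[k] >= px[k] >= lo[k] for k in range(3))

    stack = list(seeds)
    for x, y in seeds:
        map_obj[y][x] = BACKGROUND            # seeds are forcibly marked as background
    while stack:
        x, y = stack.pop()                    # take one point from the top of the stack
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nx in range(width) and ny in range(height) and map_obj[ny][nx] == UNKNOWN:
                map_obj[ny][nx] = ANALYZED    # never re-test this point's color
                if not in_object_range(image[ny][nx]):
                    map_obj[ny][nx] = BACKGROUND
                    stack.append((nx, ny))
    return map_obj

# toy 4x4 image: object color near (200, 0, 0) in the middle, dark background around it
O, B = (200, 0, 0), (10, 10, 10)
img = [[B, B, B, B], [B, O, O, B], [B, O, O, B], [B, B, B, B]]
m = [[0] * 4 for _ in range(4)]
fill_background(img, m, [(0, 0)], (160, 0, 0), (240, 40, 40))
print(m[0][0], m[1][1])   # 2 1 -> corner is background; object pixel only marked analyzed
```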
    </sec>
    <sec id="sec-8">
      <title>3.2.2 Smoothing background edges</title>
      <p>The next step is background smoothing to suppress artifacts that arise during the background
extraction process. In addition, the use of a boundary smoothing algorithm reduces the
number of false classifications. The background boundary smoothing and artifact removal are
performed by evaluating the classification results of neighboring points contained in the
MapObj[i,j] array. For each internal point of the bounds in the MapObj[i,j] array, 8
neighboring points are checked, and the number of points belonging to the background is
calculated. If the number of background points becomes less than the specified threshold
value, the point is considered not to belong to the background, and if the number of points
belonging to the background in the vicinity of the point exceeds the specified threshold, then
the central point is also considered to belong to the background.</p>
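      <p>A sketch of this neighbor-count smoothing (illustrative; the threshold value and label encoding are assumptions):</p>

```python
def smooth_background(map_obj, threshold=4):
    """Reclassify each interior point by counting background labels among its
    8 neighbours (the threshold is an assumed value; map value 2 = background)."""
    BACKGROUND = 2
    height, width = len(map_obj), len(map_obj[0])
    out = [row[:] for row in map_obj]
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            n = sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0) and map_obj[y + dy][x + dx] == BACKGROUND)
            if n > threshold:                 # mostly background around -> point is background
                out[y][x] = BACKGROUND
            elif threshold > n and out[y][x] == BACKGROUND:
                out[y][x] = 0                 # isolated background artifact removed
    return out

m = [[2] * 5 for _ in range(5)]
m[2][2] = 0                      # one-pixel hole in the background
print(smooth_background(m)[2][2])   # 2 -> the hole is absorbed into the background
```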
    </sec>
    <sec id="sec-9">
      <title>3.2.3 Selecting an object: defining object points</title>
      <p>Having defined and smoothed the background, we proceed to identify the object. Since the
color of the inner part of the object can differ significantly from the average due to random
processes, we select objects by analyzing the background. That is, we assume that the inner
border of the background is the outer border of the object. This assumption is in good
agreement with practice in cases where the contour of objects is limited, while the objects
themselves can have a non-uniform structure. The implementation of this stage uses a
filling algorithm that begins with points guaranteed to belong to the object. The
object detection process employs the same "flood fill" algorithm mentioned earlier; however,
instead of analyzing the original image, it operates on the MapObj[i,j] array produced
by background selection. Once the procedure is complete, the positions of all points
belonging to the identified object are recorded in the MapObj[i,j] array.</p>
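      <p>The same stack-based fill, now driven by the MapObj labels instead of the image colors, can be sketched as follows (illustrative; label encoding assumed): everything inside the background's inner border is claimed for the object.</p>

```python
def fill_object(map_obj, seed, obj_label):
    """Flood-fill the object from a seed known to lie inside it, using only the
    labels left by background extraction (assumed encoding: 2 = background)."""
    BACKGROUND = 2
    height, width = len(map_obj), len(map_obj[0])
    sx, sy = seed
    map_obj[sy][sx] = obj_label
    stack = [(sx, sy)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nx in range(width) and ny in range(height):
                if map_obj[ny][nx] != BACKGROUND and map_obj[ny][nx] != obj_label:
                    map_obj[ny][nx] = obj_label   # background's inner border bounds the object
                    stack.append((nx, ny))
    return map_obj

# background ring (2) with a mixed interior (0 = unknown, 1 = analyzed)
m = [[2, 2, 2, 2], [2, 1, 0, 2], [2, 0, 1, 2], [2, 2, 2, 2]]
fill_object(m, (1, 1), 5)
print(m[2][2])   # 5 -> the whole interior is claimed for object 5
```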
    </sec>
    <sec id="sec-10">
      <title>3.3. Determining the average color of an object</title>
      <p>After pinpointing all the points of an object, determining its average color becomes a
straightforward task and can be done by the same kind of formula as before:
R̄ = (1/N) Σ R(x,y),   Ḡ = (1/N) Σ G(x,y),   B̄ = (1/N) Σ B(x,y),   (3)
where N is now the number of points belonging to the object being analyzed; R(x,y), G(x,y),
B(x,y) are the corresponding color components of the image point with coordinates (x,y);
and R̄, Ḡ, B̄ are the average color components. Unlike the previous color estimate, the
summation is now performed over all the defined points of the object.</p>
    </sec>
    <sec id="sec-11">
      <title>3.4. Geometric analysis</title>
      <p>The last step of the algorithm is to determine the geometric parameters of the previously
obtained object:
- geometric center (using the first-order moments method),
- moments of inertia of the second order relative to the center of mass,
- orientation of the principal axes.</p>
      <p>First, the position of the geometric center (x_c, y_c) in screen coordinates is determined
using the method of moments, via the well-known first-order moment formulas [5]:
x_c = S_y / A,   y_c = S_x / A,
where S_x and S_y are the moments of the object found in the image with respect to the
horizontal axis x and the vertical axis y of the image, and A is the area occupied by the
selected object. As is known, in the continuous case the moments and the area can be found
by the formulas
S_x = ∬_P y f(x,y) dx dy,   S_y = ∬_P x f(x,y) dx dy,   A = ∬_P f(x,y) dx dy,
and the so-called centrifugal moment of inertia can be defined as
I_xy = ∬_P (x - x_c)(y - y_c) f(x,y) dx dy,
where f(x,y) is the function of belonging of a point to the corresponding image, equal to 1
if the point corresponds to the object and 0 if it does not, and P is the entire image being
analyzed. Moving to the discrete domain and taking the coordinate step equal to 1, we obtain
the following formulas for calculating the moments and the corresponding area:
S_x = Σ y f(x,y),   S_y = Σ x f(x,y),   A = Σ f(x,y),
where the sums run over all image points; the membership function for the object ObjNum is
f(x,y) = 1 if MapObj[x,y] = ObjNum, and f(x,y) = 0 otherwise.</p>
      <p>Knowing the center of the object, we can find the equation of the line that is the axis of
rotation of the corresponding body. To determine this equation, we first calculate the
second-order moments about the center of the figure (x_c, y_c). As is known, the axial
moment of a plane figure in discrete coordinates about an axis passing through the center
and parallel to the x-axis is determined by the formula
I_x = Σ (y - y_c)² f(x,y),
and similarly the moment about an axis parallel to the y-axis is, in the discrete case,
I_y = Σ (x - x_c)² f(x,y),
while the discrete centrifugal moment is
I_xy = Σ (x - x_c)(y - y_c) f(x,y).</p>
      <p>As is known, a principal axis of a plane figure passing through its center of mass is one
relative to which the centrifugal moment of inertia is zero [6]. There are at least two
principal axes of a plane section; they are perpendicular, pass through the center of mass,
and their angle of inclination to the x-axis is determined by the ratio between the moments
calculated above:
α₁ = (1/2) atan(2 I_xy / (I_y - I_x)),   α₂ = α₁ + π/2,   (11)
where atan() is the arctangent function.</p>
      <p>Now we can choose between the two values of the axis tilt angle by analyzing both of them.
The axial moment about an axis through the center at angle α is
I(α) = I_x cos²α + I_y sin²α - I_xy sin 2α.
It is easy to show that for a shape elongated along the axis of rotation, the moment of
inertia about the perpendicular principal axis reaches its maximum value. Therefore,
instead of calculating the volume for both angles, we can compare I(α) for the two obtained
values and choose as the axis of rotation the candidate whose perpendicular direction gives
the maximum of I (equivalently, the candidate with the smaller moment about itself); for a
flattened object the opposite choice applies. Knowing the position of the axis of rotation,
the volume of the body of revolution can be found by the formula
V = π K³ Σ d(x,y) f(x,y),
where d(x,y) is the distance between the point with coordinates (x, y) and the axis of
rotation, and K is a scale factor that gives the size of a single image pixel in physical
units. As is known [7], the distance between a point and a line can be defined using the
relation
d(x,y) = |a x + b y + c| / √(a² + b²),
where a, b, c are the parameters of the equation of the line a x + b y + c = 0. If we know a
point (x_c, y_c) on the line together with its angle of inclination α, its parameters can be
expressed as
a = sin α,   b = -cos α,   c = y_c cos α - x_c sin α.
Substituting these parameters (for which a² + b² = 1), we arrive at the final formula for
the volume of the body of revolution.</p>
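      <p>The whole geometric analysis above reduces to a few moment sums over the object mask. A compact illustrative sketch (not the authors' program; the synthetic bar mask below merely checks that an elongated shape yields an axis along its long side):</p>

```python
from math import atan, pi, sin, cos

def volume_of_revolution(mask, K=1.0):
    """Center, principal-axis angle and volume from a binary silhouette mask
    (mask[y][x] is 1 inside the object), following the moment formulas above."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    A = len(pts)                                    # area = number of object pixels
    xc = sum(x for x, y in pts) / A                 # first-order moments give the center
    yc = sum(y for x, y in pts) / A
    Ix = sum((y - yc) ** 2 for x, y in pts)         # axial moments about the center
    Iy = sum((x - xc) ** 2 for x, y in pts)
    Ixy = sum((x - xc) * (y - yc) for x, y in pts)  # centrifugal (product) moment
    a1 = 0.5 * atan(2 * Ixy / (Iy - Ix)) if Iy != Ix else pi / 4  # formula (11)
    a2 = a1 + pi / 2                                # the two principal directions

    def axial(t):  # moment about an axis through the center at angle t
        return Ix * cos(t) ** 2 + Iy * sin(t) ** 2 - Ixy * sin(2 * t)

    # keep the candidate whose perpendicular axis carries the larger moment,
    # i.e. the axis along which an elongated object is stretched
    alpha = a1 if axial(a2) > axial(a1) else a2
    a, b, c = sin(alpha), -cos(alpha), yc * cos(alpha) - xc * sin(alpha)
    # Pappus-style sum: a*a + b*b = 1, so no normalization of the distance is needed
    V = pi * K ** 3 * sum(abs(a * x + b * y + c) for x, y in pts)
    return (xc, yc), alpha, V

# a 7 x 3 horizontal bar: the detected axis should run along x (alpha = 0)
mask = [[0] * 7 for _ in range(5)]
for y in (1, 2, 3):
    for x in range(7):
        mask[y][x] = 1
center, alpha, volume = volume_of_revolution(mask)
print(center, alpha)   # (3.0, 2.0) 0.0
```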
    </sec>
    <sec id="sec-12">
      <title>4. Implementation</title>
      <p>A program was developed based on the algorithm described above. This program enables
users to analyze the structure of objects in a photograph, select specific objects, calculate the
geometric parameters of the image, and determine the volume in natural units.
The user program interface consists of tabs: displaying the initial image, displaying the
segmented image, and displaying the numerical analysis results. The program allows you to
load images in raster formats, run the analysis procedure, and save results and log files. The
addition of a manual color range adjustment option improves analysis accuracy, especially for
low-quality photographs. A scale factor was also introduced, which allows geometric
parameters and volumes to be obtained in real physical units.</p>
      <p>The main program window, illustrated in the following figures, features three tabs, a
corresponding menu, and a settings area (position A-D). Interaction with the program is
conducted in an interactive mode, controlled by a set of buttons and selectors (Figure 1).
The buttons (position A) provide basic program control as follows:
◊ The "Open" button allows you to select the initial image file, supporting the popular BMP
and JPEG formats.
◊ The "Analysis" button initiates the analysis of the loaded image.
◊ The "Save Image" button enables you to save the graphical results from the object selection
in the analyzed photo.
◊ The "Save Log" button allows you to save the numerical results, including calculations of
plane moments, colors, and volumes of the analyzed objects, for future reference and
processing.
◊ The "Clear" button erases all numerical and graphical results generated by the program.
The input fields (pos. B) "Manual correction" allow you to change the settings for selecting the
object's color range; in most cases, when working with photographs of acceptable quality,
there is no need to change the values of these fields.</p>
      <p>
        The input field (pos. C) "Pixels per mm" allows you to define the image scaling and switch
from the number of pixels in the image to the corresponding linear dimensions.
The input field (pos. D) "4 Eggs"/"1 Egg" allows the configuration of the image being analyzed,
the number of objects in the image, and the approximate position of the objects themselves.
The first tab (Fig. 1) is used to display the initial image. The image is loaded into the tab by
clicking the "Open" button and selecting the appropriate file. In this case, the loaded image of
the objects was obtained by back-lighting, that is, the light source is placed in
front of the camera and shines into it through the object under study. The image contains
typical defects:
◊ uneven color of an object, having light (1) and dark (2) spots;
◊ foreign defects that look like white lines (3);
◊ external illumination with direct light from a light source (4);
◊ the color of different objects in the same image differs (5) from one another.
The image in the second tab (Figure 2) appears after loading the initial image and clicking the
"Analysis" button. In the analysis, the background and objects of the image are highlighted in
different colors. Simultaneously with the formation of the image on the tab being described
(Figure 3), a textual result of the numerical analysis is also formed, which includes:
◊ The number of the selected object
◊ The color of the object in the vicinity of the starting point of the image, and its
dispersion.
◊ The color of the object in the entire selection and its dispersion
◊ The geometric center of the corresponding object.
◊ Moments of the image of the first order with respect to the coordinate axes and of the
second order with respect to the geometric center of the object.
◊ The angle of inclination of the object axis relative to the x-axis
◊ The volume of the corresponding solid of revolution, having an axis of rotation that
passes through the geometric center and is inclined to x with the angle of inclination given
above.
      </p>
    </sec>
    <sec id="sec-13">
      <title>5. Results</title>
      <p>The algorithm was tested on a set of photographs of various objects, including objects with
smooth and non-uniform surfaces. The results showed that the method is robust to noise,
illumination, and color variations. For most objects, the volume calculation error did not
exceed 5%, which is acceptable for practical applications.</p>
      <p>Comparison with methods based on contour extraction showed that the proposed approach
provides significantly higher robustness. In cases where contour analysis yielded significant
errors due to color heterogeneity, our method successfully performed segmentation and
subsequent analysis.</p>
    </sec>
    <sec id="sec-14">
      <title>6. Conclusions</title>
      <p>An algorithm and a program for its implementation were developed, which allows obtaining
information about the volume of objects in the image, without using third-party libraries. As
can be seen from Fig. 2, the selection of objects is relatively resistant to interference and
easily removes defects such as color inhomogeneity of objects, illumination, external noise,
and other similar defects from the analyzed image. Adaptive determination of the object's
axes and center effectively compensates for positioning errors of the analyzed objects.
The proposed method effectively calculates the volume of bodies of revolution using their
two-dimensional images. The main advantages of this approach are:
 - Resistance to noise and image defects.
 - No need for prior training or the use of complex models.</p>
      <p> - Ease of implementation and low computational costs.</p>
      <p>The developed software confirmed the effectiveness of the algorithm in practical technical
control tasks.</p>
      <p>The methods of moment analysis and segmentation are useful not only for calculating the
volume of bodies of revolution but also in related fields. These techniques can be applied in
biomedical visualization, machine vision for robotic systems, and quality control for industrial
products. The expansion of this approach can include the use of statistical filtering methods to
increase accuracy, as well as the use of hybrid methods that combine traditional algorithms
with neural networks. This will allow achieving a balance between speed and accuracy.</p>
    </sec>
    <sec id="sec-15">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <article-title>[1] Digital image processing</article-title>
          . Rafael C. Gonzalez, Richard E. Woods //Global 4th edition
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Pearson</given-names>
            <surname>Education Limited</surname>
          </string-name>
          ,
          <fpage>2017</fpage>
          -
          <lpage>976p</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [2]
          <string-name>
            <given-names> </given-names>
            <surname>Ballard</surname>
          </string-name>
          <string-name>
            <surname>DH</surname>
          </string-name>
          ,
          <article-title>Generalizing the Hough Transform to Detect Arbitrary Shapes</article-title>
          , Pattern
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Recognition</surname>
          </string-name>
          , Vol.
          <volume>13</volume>
          , No.
          <volume>2</volume>
          , p.
          <fpage>111</fpage>
          -
          <lpage>122</lpage>
          ,
          <year>1981</year>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [3]
          <string-name>
            <given-names> Babilunga</given-names>
            <surname>Yu</surname>
          </string-name>
          .
          <article-title>Measurement uncertainty of geometrical parameters of objects in optical</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [4] Polyakova,
          <string-name>
            <surname>MV</surname>
          </string-name>
          , and
          <string-name>
            <given-names>VN</given-names>
            <surname>Krylov</surname>
          </string-name>
          .
          <article-title>"Morphological method of contour segmentation of</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>vol 1</source>
          <volume>25</volume>
          ( (
          <year>2006</year>
          ) p.
          <fpage>98</fpage>
          -
          <lpage>103</lpage>
          . [5]
          <string-name>
            <surname>Gabrusiev</surname>
            <given-names>G. V. Higher</given-names>
          </string-name>
          <string-name>
            <surname>Mathematics</surname>
          </string-name>
          . Part 3: Multiple, Curvilinear and Surface Integrals /
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>G. V.</given-names>
            <surname>Gabrusiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Yu. Gabruseva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. G.</given-names>
            <surname>Shelestovsky -</surname>
          </string-name>
          Ternopil
          <source>: SMP "Type"</source>
          ,
          <year>2021</year>
          - 60 p. [
          <volume>6</volume>
          ] Resistance of materials: Textbook /
          <string-name>
            <given-names>G.S.</given-names>
            <surname>Pisarenko</surname>
          </string-name>
          .
          <string-name>
            <given-names>O.L.</given-names>
            <surname>Kvitka</surname>
          </string-name>
          , E.S. Umansky; Ed. G.S.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Pisarenko</surname>
          </string-name>
          .
          <article-title>-</article-title>
          K.: Higher school,
          <year>2004</year>
          . - 655 p.:
          <fpage>ill</fpage>
          . [7]
          <string-name>
            <surname>Dudkin</surname>
            ,
            <given-names>M. E.</given-names>
          </string-name>
          <string-name>
            <surname>Higher</surname>
          </string-name>
          <article-title>Mathematics: a textbook for bachelor's degree applicants in</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>Kyiv</given-names>
            <surname>Polytechnic Institute</surname>
          </string-name>
          . - Kyiv: Igor Sikorsky Kyiv Polytechnic Institute,
          <year>2022</year>
          . - 449 p. [
          <volume>8</volume>
          ]
          <string-name>
            <surname>Habrusiev</surname>
            <given-names>HV</given-names>
          </string-name>
          <source>Higher Mathematics. Part</source>
          <volume>1</volume>
          :
          <string-name>
            <surname>Linear</surname>
            <given-names>Algebra</given-names>
          </string-name>
          ,
          <source>Vector Algebra and Analytical</source>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <source>Geometry / HV Habrusiev, I. Yu. Habrusieva, BH Shelestovskyi - Ternopil : SMP "TAYP"</source>
          ,
          <year>2021</year>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>- 84 p.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>