<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Neural network-based automation of spatial solutions in modern interior design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pavlo Kruk</string-name>
          <email>kruk_pm-2023@knuba.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Matsiievskyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor Achkasov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITTAP'2025: 5th International Workshop on Information Technologies: Theoretical and Applied Problems</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Kyiv National University of Construction and Architecture</institution>
          ,
          <addr-line>31, Air Force Avenue, Kyiv, 03037</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>The work is devoted to the automation of spatial solutions in modern interior design using deep learning. An approach is proposed in which a neural network performs semantic segmentation of the room plan and predicts the placement of the main interior objects, taking into account planning constraints and stylistic preferences. Architecturally, the core of the system is a U-Net segmentation model with an extended input: in addition to the room mask, binary channels for doors and windows, a continuous distance map to the walls, and one-hot encoding of style (Minimalistic, Cozy, Modern) are provided. To enhance generalization ability, a combined loss function (weighted cross-entropy together with Soft Dice for the foreground) and augmentations relevant to the plans are used. The alternative DeepLabv3+ (ResNet-50) architecture is considered separately as a strong comparative basis for thin boundary structures.</p>
      </abstract>
      <kwd-group>
        <kwd>deep learning</kwd>
        <kwd>interior design</kwd>
        <kwd>semantic segmentation</kwd>
        <kwd>furniture layout</kwd>
        <kwd>generative design</kwd>
        <kwd>U-Net</kwd>
        <kwd>DeepLabv3+</kwd>
        <kwd>automation of spatial solutions</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Modern interior design [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is increasingly taking place under conditions of tight deadlines,
highly variable customer requirements, and the need to quickly review alternative planning
scenarios. Even for relatively simple spaces, designers must coordinate room geometry,
window and door placement, functional zoning, ergonomic standards, communications, and
stylistic preferences [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Traditional work processes — a combination of manual sketching,
CAD tools [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and a set of empirical rules — support individual craftsmanship well, but create
bottlenecks when it is necessary to quickly review dozens of layouts and justify the choice [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
In this context, methods that can automate routine steps, ensure consistency in decisions, and
at the same time leave room for authorial control become particularly relevant [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Deep learning offers a natural path to such automation, as it allows the task of furniture
placement to be interpreted as a class prediction task for each pixel of the plan [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Unlike
hard-coded heuristics, neural networks learn spatial patterns and relationships between
elements of the scene [
        <xref ref-type="bibr" rid="ref7">7-8</xref>
        ].
      </p>
      <p>At the same time, practical effectiveness depends not only on the choice of architecture,
but also on the correct setting of inputs: the model should receive exactly those signals that
the designer uses in their work. In this work, we formalize this approach and propose a
compact, reproducible system called “Neural Network-Based Automation of Spatial Solutions
in Modern Interior Design” [9-10]. The key idea is to use explanatory features at the model
input. In addition to the room mask, binary information about doors and windows, a
continuous map of distances to walls, and stylistic context in one-hot format (Minimalistic,
Cozy, Modern) are provided [11]. Thus, the neural network “sees” not just an image but a
structured set of clues that in practice shape the layout: where the openings are located, where
the periphery and center are, and what compositional scenario is expected. U-Net [12] with
three encoder/decoder levels and skip connections was chosen as the base architecture; for
finer boundary structures, a comparative baseline, DeepLabv3+ with ResNet-50 [13] and ASPP
[14], is considered. Training is conducted with a combined loss function: weighted
cross-entropy (background down-weighted, small classes amplified) combined with Soft-Dice
for the foreground, which increases sensitivity to thin and small objects [15-16].</p>
      <p>To reduce the burden of manual labeling and speed up iterations, we generate synthetic
examples with built-in semantic rules (table by the window, sofa and TV on opposite walls,
rug in the center, etc.) and anticipate further fine-tuning on real user plans. The system is
accompanied by a simple interactive interface that allows you to upload a plan, select a style,
and obtain a color map with a legend and basic metrics; all artifacts (convergence graphs,
confusion matrix, class-wise IoU) are automatically generated for inclusion in the publication.</p>
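      <p>As a rough illustration of such rule-based synthesis (the class ids, object sizes, and placement offsets below are invented for the example and are not the actual generator), a desk can be snapped to the window wall and a rug centered in the free middle of the room:</p>

```python
# Toy sketch of rule-based synthetic labeling: desk by the window wall,
# rug in the center. Class ids and sizes are hypothetical.
import numpy as np

RUG, DESK = 1, 2   # hypothetical class ids

def synth_scene(h=32, w=32):
    """Generate a toy label map encoding two of the semantic rules."""
    y = np.zeros((h, w), dtype=int)
    win_col = w - 2                          # assume a window on the right wall
    y[h // 2 - 2 : h // 2 + 2, win_col - 2 : win_col] = DESK
    rug = y[h // 2 - 4 : h // 2 + 4, w // 2 - 4 : w // 2 + 4]
    rug[rug == 0] = RUG                      # rug fills the free central area
    return y
```

      <p>A real generator would randomize room geometry, opening positions, and object sets per style; the point here is only that the "common sense" relationships are baked directly into the labels.</p>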
      <p>Our contribution lies in combining explanatory multi-channel input, compact
segmentation architecture, and a transparent training/evaluation pipeline focused on practical
applicability for early design stages. It has been demonstrated that even with moderate
computational resources, the system consistently “pulls” the scene skeleton (perimeter, doors,
windows, central dominants) and creates a basis for further enhancement of complex
furniture classes through data balancing and retraining on real drawings.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>The automation of spatial solutions in interior design has historically relied on rule-based
and parametric approaches in CAD/CAAD environments. In such systems, the layout is
determined by sets of deterministic constraints — passability, door opening areas, minimum
clearances, permissible furniture dimensions — and a search is performed in the space of
permissible configurations. The advantage is complete controllability and transparent
validation, but the complicated representation of “soft” criteria, stylistic preferences, and local
exceptions leads to uniformity of results and high costs for manual sorting of options.</p>
      <p>With the advent of deep learning, methods of semantic processing of plans have attracted
considerable attention, where the layout task is interpreted as pixel-wise class prediction.
Thanks to symmetrical encoder-decoder paths and skip connections, U-Net family
architectures have become the standard for marking up two-dimensional plans and drawings.
Further development is associated with multi-scale context aggregation, dilated (atrous)
convolutions, and the ASPP module, embodied in approaches such as DeepLabv3+.
Such models are better at capturing thin lines and boundaries, which is especially relevant for
doors and windows, but they require high-quality data and well-thought-out input features to
avoid semantic merging of objects with similar geometry.</p>
      <p>At the same time, scene-oriented representations are being developed: the interior is
described as a graph of objects with distance, orientation, and compatibility relationships.
These approaches are convenient for further parametric editing and generative synthesis, but
their practical effectiveness on “raw” plans often depends on reliable initial segmentation or
detection, which brings us back to the task of pixel interpretation.</p>
      <p>A key problem for all learning approaches is the lack of annotated data. Therefore,
synthetic datasets, domain adaptation, and targeted augmentations are actively used.
Synthetics allow us to embed “common sense” spatial relationships — the desk gravitating
toward the window, the sofa opposite the TV, the rug in the center — and quickly increase the
variety of scenarios. However, transfer to real plans usually requires additional training and
simple tools for manual validation of results.</p>
      <p>An important component of modern solutions is interactivity and transparency, where the
system not only outputs a mask, but also visualizes confidence, errors, and class legends. This
simplifies integration into the studio's workflow, where the designer, while maintaining
creative control, quickly weeds out unsuccessful configurations and fixes successful ones.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Formulation</title>
      <p>
        The task of semantic interior layout is considered as pixel-wise multi-class classification.
For a room of size H × W, an input tensor X ∈ R^(H×W×C) is formed with an
extended set of features: a binary mask of the room, binary channels for doors and windows,
a continuous map of distance to walls d_wall ∈ [0, 1], and a one-hot style encoding
s ∈ {0, 1}^S (Minimalistic, Cozy, Modern) repeated over the spatial coordinates. In total, C = 4 + S
channels. The model f_θ (U-Net or DeepLabv3+) maps X to logits Z = f_θ(X) ∈ R^(H×W×K), where K is the
number of classes (a background class and at least 12 furniture types). Probabilities are
computed as P = softmax(Z), and the final labeling as Ŷ = argmax_k P[:, :, k].
      </p>
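      <p>A minimal sketch of assembling such an input (illustrative names, not the authors' code; a naive per-pixel distance computation stands in for a proper distance transform and is adequate only for small plans):</p>

```python
# Illustrative sketch of assembling the extended input tensor X of shape
# (H, W, 4 + S): room mask, door and window channels, a wall-distance map
# normalized to [0, 1], and S spatially repeated one-hot style channels.
import numpy as np

STYLES = ("Minimalistic", "Cozy", "Modern")

def wall_distance(room):
    """Naive distance from each room pixel to the nearest wall pixel,
    normalized to [0, 1]."""
    H, W = room.shape
    ys, xs = np.nonzero(room == 0)           # wall / outside pixels
    if ys.size == 0:
        return np.zeros((H, W))
    gy, gx = np.mgrid[0:H, 0:W]
    d = np.sqrt((gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2)
    d = d.min(axis=-1) * room                # wall pixels keep distance 0
    return d / max(d.max(), 1e-8)

def build_input(room, doors, windows, style):
    """room, doors, windows: binary (H, W) arrays; style: one of STYLES."""
    onehot = np.zeros(len(STYLES))
    onehot[STYLES.index(style)] = 1.0
    style_maps = np.ones(room.shape + (len(STYLES),)) * onehot
    base = np.stack([room, doors, windows, wall_distance(room)], axis=-1)
    return np.concatenate([base, style_maps], axis=-1).astype(np.float32)
```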
      <p>Training is supervised with a reference mask Y ∈ {0, …, K−1}^(H×W). To
account for the imbalance in class areas and to enhance sensitivity to thin objects, a combined
loss function is optimized:</p>
      <p>L(θ) = α · CE_w(Y, Z) + (1 − α) · (1 − Dice_fg(P, onehot(Y)))   (1)</p>
      <p>where CE_w is weighted cross-entropy (weights are inverse-logarithmic to class
frequencies, with the background additionally down-weighted), Dice_fg is Soft-Dice without the background
class, and α ∈ [0, 1] (in practical settings α = 0.5). The parameters θ are optimized with AdamW under
moderate regularization and standard augmentations that are correct for plans (90° rotations,
flips, slight scaling, noise in d_wall).</p>
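      <p>Equation (1) can be sketched in PyTorch as follows (illustrative function names; the inverse-logarithmic class weights are assumed to be precomputed and passed in):</p>

```python
# Sketch of the combined objective: weighted cross-entropy plus
# (1 - soft Dice) restricted to the foreground classes.
import torch
import torch.nn.functional as F

def combined_loss(logits, target, class_weights, alpha=0.5, eps=1e-6):
    """logits: (B, K, H, W); target: (B, H, W) with class 0 = background."""
    ce = F.cross_entropy(logits, target, weight=class_weights)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1])
    onehot = onehot.permute(0, 3, 1, 2).float()
    # Soft Dice over foreground channels 1..K-1 only.
    p, t = probs[:, 1:], onehot[:, 1:]
    inter = (p * t).sum(dim=(0, 2, 3))
    union = p.sum(dim=(0, 2, 3)) + t.sum(dim=(0, 2, 3))
    dice_fg = ((2 * inter + eps) / (union + eps)).mean()
    return alpha * ce + (1 - alpha) * (1 - dice_fg)
```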
      <p>The evaluation is performed on held-out validation data. The main integral metric is the
mean intersection-over-union (mIoU), defined as:</p>
      <p>mIoU = (1/K) Σ_k TP_k / (TP_k + FP_k + FN_k)   (2)</p>
      <p>where TP_k, FP_k, and FN_k are the numbers of true positives, false positives, and false
negatives for class k, respectively. This metric evaluates segmentation quality by measuring the
overlap between predicted and ground-truth regions across the K classes, independent of class
size. Reporting is performed in two modes: “all classes” and “no background”. Additionally,
pixel accuracy is provided as:</p>
      <p>PixelAcc = Σ_k TP_k / Σ_k (TP_k + FN_k)   (3)</p>
      <p>where PixelAcc is the proportion of correctly classified pixels among all labeled pixels.
Pixel accuracy reflects overall prediction correctness without accounting for class imbalance.</p>
      <p>In addition, standard classification metrics are reported, including Precision, Recall, and
F1-score, as well as a row-normalized confusion matrix for all foreground classes. Confidence
maps, given by the maximum class probabilities max_k P[:, :, k], and the corresponding error
masks 1[Ŷ ≠ Y] are used to interpret the results. This approach ensures reproducibility, style
parameterization, and integration into automated sketching tools.</p>
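      <p>Equations (2) and (3) follow directly from per-class TP/FP/FN counts; a minimal illustrative sketch (not the evaluation code itself):</p>

```python
# Compute class-wise IoU, mIoU (with and without background, class 0),
# and pixel accuracy from integer label maps.
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """pred, gt: integer (H, W) label maps with class 0 as background."""
    ious, tp_total = [], 0
    for k in range(num_classes):
        tp = np.sum((pred == k) * (gt == k))
        fp = np.sum((pred == k) * (gt != k))
        fn = np.sum((pred != k) * (gt == k))
        denom = tp + fp + fn
        ious.append(tp / denom if denom else np.nan)  # class absent entirely
        tp_total += tp
    miou_all = np.nanmean(ious)
    miou_fg = np.nanmean(ious[1:])       # the "no background" reporting mode
    pixel_acc = tp_total / gt.size       # Eq. (3): sum TP_k over all pixels
    return miou_all, miou_fg, pixel_acc
```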
    </sec>
    <sec id="sec-4">
      <title>4. Neural network model</title>
      <p>At the heart of the proposed system is a U-Net-type segmentation network with an
extended input, shown in Figure 1, which encodes spatial features relevant to the layout.</p>
      <p>A seven-channel image is fed into the network: a room mask, binary maps of doors and
windows, a continuous map of distances to walls, and three one-hot style channels
(Minimalistic, Cozy, Modern). Thanks to this, the model receives not a “picture” but a
structured set of hints for planning.</p>
      <p>The architecture is U-Net in a symmetric configuration with three encoder/decoder levels
(64→128→256) and a “bottleneck” on 512 channels. The encoder extracts increasingly
abstract features (with max-pooling), while the decoder restores detail with transposed
convolutions and skip connections that return local contours. The final head converts the
features into class channels (background + 12 types of furniture).</p>
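      <p>The configuration described above can be sketched compactly (layer names are illustrative, not the authors' code):</p>

```python
# Compact U-Net sketch: three encoder/decoder levels (64, 128, 256),
# a 512-channel bottleneck, skip connections, 7 input channels,
# 13 output classes (background + 12 furniture types).
import torch
import torch.nn as nn

def double_conv(cin, cout):
    # Two conv-BatchNorm-ReLU pairs, as described for each block.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class PlanUNet(nn.Module):
    def __init__(self, in_ch=7, num_classes=13):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(256, 512)
        self.up3, self.dec3 = nn.ConvTranspose2d(512, 256, 2, stride=2), double_conv(512, 256)
        self.up2, self.dec2 = nn.ConvTranspose2d(256, 128, 2, stride=2), double_conv(256, 128)
        self.up1, self.dec1 = nn.ConvTranspose2d(128, 64, 2, stride=2), double_conv(128, 64)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        # Decoder: upsample, concatenate the skip feature, refine.
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)             # logits of shape (B, 13, H, W)
```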
      <p>Each block consists of two consecutive convolutions with BatchNorm and ReLU, which
ensures stability and sufficient expressiveness without unnecessary complexity. Skip
connections act as a “conveyor” of spatially accurate features: door/window contours, carpet
edges, straight table and cabinet fronts are correctly reproduced in the upper layers.</p>
      <p>The key to quality is correct features at the input: the distance map to the walls gives a
“center-periphery” hint, doors and windows fix landmarks, and the stylistic code allows you
to change compositional preferences without changing the architecture.</p>
      <p>Training is performed using a combination of weighted Cross-Entropy (less weight for the
background, more for small classes) and Soft-Dice for the foreground, which enhances
sensitivity to subtle boundaries. The optimizer is AdamW. Augmentations take into account
the nature of the plans: 90° rotations, horizontal/vertical flips, slight scaling, and minor noise
only in the dist2wall channel — this preserves the geometric meaning of the scene.</p>
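      <p>These plan-safe augmentations can be sketched as follows (the channel index 3 for dist2wall and the noise scale are illustrative assumptions):</p>

```python
# Plan-safe augmentations: 90-degree rotations, flips, and weak Gaussian
# noise applied only to the dist2wall channel, keeping binary channels intact.
import numpy as np

def augment_plan(x, y, rng):
    """x: (H, W, C) input tensor; y: (H, W) label map; rng: np.random.Generator."""
    k = rng.integers(0, 4)                     # random multiple of 90 degrees
    x, y = np.rot90(x, k, axes=(0, 1)), np.rot90(y, k)
    if rng.random() > 0.5:
        x, y = x[:, ::-1], y[:, ::-1]          # horizontal flip
    if rng.random() > 0.5:
        x, y = x[::-1], y[::-1]                # vertical flip
    x = x.copy()                               # materialize the views
    noise = rng.normal(0.0, 0.02, size=x[..., 3].shape)
    x[..., 3] = np.clip(x[..., 3] + noise, 0.0, 1.0)
    return x, np.ascontiguousarray(y)
```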
      <p>For comparison on fine structures, we consider DeepLabv3+ (ResNet-50, ASPP) with the
first layer adapted to 7 channels. In production scenarios, it is useful for critical boundaries,
but in most prototype cases, the compact and fast U-Net remains the baseline.</p>
    </sec>
    <sec id="sec-4a">
      <title>5. Software implementation</title>
      <p>The software implementation is built on Python 3.10+ / PyTorch, focused on
reproducibility and direct inclusion of artifacts in the article. The model input is formed as a
seven-channel image: a room mask, binary maps of doors and windows, a continuous map of
distances to walls, and three one-hot style channels (Minimalistic, Cozy, Modern). The base
architecture is U-Net with three encoder/decoder layers and skip connections; the
classification head returns 13 logits (background + 12 furniture types). The loss function
combines weighted cross-entropy (background underestimated; rare classes reinforced) and
Soft-Dice only for the foreground, which increases sensitivity to thin and small objects. The
optimizer is AdamW with moderate regularization; augmentations are adapted to the plans
(90° rotations, flips, slight scaling, weak noise only in the dist2wall channel).</p>
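      <p>A minimal sketch of one training epoch under these settings (plain cross-entropy stands in here for the combined loss described above; the model, batch shapes, and hyperparameters are illustrative):</p>

```python
# One training epoch with AdamW over (input, mask) batches.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_epoch(model, batches, lr=1e-3, weight_decay=1e-4):
    """batches: iterable of (x, y) with x: (B, 7, H, W), y: (B, H, W)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    model.train()
    total = 0.0
    for x, y in batches:
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)   # stand-in for the combined loss
        loss.backward()
        opt.step()
        total += loss.item()
    return total / max(len(batches), 1)       # mean loss over the epoch
```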
      <p>Figure 2 shows a monotonic decrease in both training and validation losses (from
approximately 1.75 to 1.30), with no noticeable gap between the curves. This indicates
stable convergence and no early overfitting under low-resolution (160×160), multi-class
labeling conditions. The aggregate metrics confirm this trend: mIoU (all classes) increases to ~0.20,
mIoU without background gradually improves, and pixel accuracy reaches ~0.7 by the end of the
trial cycle. For a prototype with a limited number of epochs, this is a typical scenario: the
model first learns the scene framework, then gradually improves per-class quality.</p>
      <p>The normalized confusion matrix for the foreground, shown in Figure 3, exhibits strong
diagonals for doors and windows (values close to 1.00), which is logical: these objects have
distinct cues at the input and clear geometry. The carpet also stands out with a relatively
high value (≈0.75). At the same time, there are characteristic cross-errors between rectangular
furniture of similar size and location: the bed is partially perceived as a wardrobe, the
bookshelf as a desk, and the TV spills over into adjacent rectangular classes. This pattern is
consistent with expectations for imbalanced data with background dominance and classes
with similar shapes.</p>
      <p>Figure 4 shows the IoU for each class without background in descending order with an
additional “support” curve (number of pixels). The best values are demonstrated by doors
(~0.59) and windows (~0.54), followed by carpets (~0.31) and desks (~0.31). Tables, chairs, and
beds have low IoUs (about 0.10, 0.08, and 0.03, respectively), and several classes with very
small areas have practically no scores — a typical consequence of imbalance and a small
number of epochs. The accompanying support curve indicates that some classes are
objectively present in the data much less frequently or occupy a very small area, which
increases the volatility of estimates.</p>
      <p>The grid in Figure 5 illustrates the model's behavior on a typical set of scenes: the “Input”
column with the selected style, the ground truth (GT) in the colors of the legend, the
prediction (Pred), the confidence map (Confidence), and the error map (Error). It is clearly
visible that the network “pulls” the perimeter of the room thanks to dist2wall, consistently
localizes doors and windows, as well as the central carpet. Errors are concentrated on the
edges of large rectangular furniture and in small classes — on the error maps, this is
manifested by bright “islands” in the corners and along the contours.</p>
      <p>The selected feature set and architecture correctly “extract” the scene's framework
(perimeter, doors, windows, central landmarks) and gradually improve the quality of furniture
classes. At the same time, cross-errors between similar rectangular silhouettes and low IoU of
small classes indicate the need for additional measures: balancing the sample, strengthening
the boundary-oriented component of losses (Lovász-Softmax/BCE-Boundary), soft
postprocessing (CRF/morphology) and, most importantly, retraining on real plans with a sufficient
number of examples for weak classes. In a production scenario, it is also appropriate to test
the alternative DeepLabv3+ head (ResNet-50, ASPP) — it usually works better with thin
door/window boundaries while maintaining compatibility with the seven-channel input.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>This paper proposes and experimentally substantiates an approach to automated support
for spatial solutions in modern interior design based on deep learning. The central elements
are a seven-channel explanatory input (room, door, window, dist2wall, style×3) and a compact
U-Net segmentation architecture with a combined loss function (weighted Cross-Entropy +
Soft-Dice for the foreground). The results obtained showed stable convergence and convincing
quality for the “anchor” classes (doors, windows, carpet), confirming the feasibility of
presenting precisely those features that guide practical planning. At the same time, expected
limitations were identified for classes with similar rectangular geometry and for small objects,
which necessitates data balancing and additional boundary enhancement mechanisms.</p>
      <p>The limitations of the study are related to moderate image resolution, a limited number of
epochs, class imbalance, and the proportion of synthetic examples. In line with this, steps are
planned to further improve quality: expanding the real sample, purposefully balancing “weak”
classes, testing boundary-aware loss functions (Tversky/focal-Tversky) and options for
multiscale context aggregation (in particular DeepLabv3+ as a comparative architecture). A
separate area of research is domain adaptation between synthetic plans and real drawings.</p>
      <p>In summary, the work demonstrates a viable approach to using deep learning as a tool to
support spatial decision-making in the early stages of design. The combination of explanatory
input, segmentation architecture, and transparent visualizations already provides consistent,
interpretable results. Further development in terms of data, losses, architectures, and,
especially, visualizations and UI/UX with customizable furniture selection and room
parameters has every reason to significantly increase the practical usefulness of the system for
a wide range of users.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used OpenAI GPT-5 Thinking as a generative
AI assistant for limited, non-substantive support. In line with the CEUR-WS activity
taxonomy, the usage falls under editorial assistance (grammar, punctuation, and style checks
for English text), programming assistance (help with code editing such as fixing minor errors,
small optimizations, clarifying inline comments, and light refactoring of the provided
snippets), and data support (suggestions for assembling and selecting synthetic test examples
and simple augmentation schemes for training/validation).</p>
      <p>After using the above tool, the author reviewed and edited all text, code, and figures as needed
and takes full responsibility for the publication’s content, methods, and results.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] E. B. Khalmurzaeva and M. O. Orozova, “FELT IN MODERN INTERIOR DESIGN”, Her. KSUCTA, no. 3, 2021, pp. 343-349. https://doi.org/10.35803/1694-5298.2021.3.343-349.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] I. Burchak and V. Shmelov, “MODERN TRENDS IN RESIDENTIAL INTERIOR DESIGN”, Theory pract. des., no. 26, 2022, pp. 133-139. https://doi.org/10.32782/2415-8151.2022.26.16.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] S. Kim, S. Shah, R. Wunderlich and J. Hasler, “CAD synthesis tools for floating-gate SoC FPAAs”, Des. Automat. Embedded Syst., vol. 25, no. 3, 2021, pp. 161-176. https://doi.org/10.1007/s10617-021-09247-9.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] S. M. Sari, P. F. Nilasari and P. E. D. Tedjokoesoemo, “Implementation of Interior Branding in Retail Interior Design”, GATR J. Manage. Marketing Rev., vol. 7, no. 1, 2022, pp. 13-22. https://doi.org/10.35609/jmmr.2022.7.1(2).</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] T. Hadjiyanni, “Decolonizing interior design education”, J. Interior Des., vol. 45, no. 2, Apr. 2020, pp. 3-9. https://doi.org/10.1111/joid.12170.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] O. Matsiievskyi, T. Honcharenko, O. Solovei, T. Liashchenko, I. Achkasov, V. Golenkov, “Using Artificial Intelligence to Convert Code to Another Programming Language”, in: 2024 IEEE 4th International Conference on Smart Information Systems and Technologies (SIST), Astana, Kazakhstan, 15-17 May 2024; IEEE, 2024; pp. 379-385. DOI: 10.1109/sist61555.2024.10629305.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] T. Honcharenko, G. Ryzhakova, Ye. Borodavka, D. Ryzhakov, V. Savenko, O. Polosenko, “Method for representing spatial information of topological relations based on a multidimensional data model”, ARPN Journal of Engineering and Applied Sciences, 16(7), 2021, pp. 802-809. https://www.arpnjournals.org/jeas/research_papers/rp_2021/jeas_0421_8555.pdf.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Y. Riabchun, T. Honcharenko, V. Honta, K. Chupryna, O. Fedusenko, “Methods and means of evaluation and development for prospective students’ spatial awareness”, International Journal of Innovative Technology and Exploring Engineering, 8(11), 2019, pp. 4050-4058. DOI: 10.35940/ijitee.K1532.0981119.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] O. Matsiievskyi, I. Achkasov, Y. Borodavka, R. Mazurenko, “Behavioral model of autonomous robotic systems using reinforcement learning methods”, CEUR Workshop Proceedings, vol. 3896, 2024, pp. 560-568. https://ceur-ws.org/Vol-3896/short14.pdf.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] O. Matsiievskyi, P. Kruk, V. Levytskyi, “Neural Network Model for Automating Programming Language Conversion”, 2024 IEEE AITU: Digital Generation Conference. https://ceur-ws.org/Vol-3966/W3Paper7.pdf.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] D. Chernyshev, G. Ryzhakova, T. Honcharenko, H. Petrenko, I. Chupryna, N. Reznik, “Digital Administration of the Project Based on the Concept of Smart Construction”, Lecture Notes in Networks and Systems, 495 LNNS, 2023, pp. 1316-1331. DOI: 10.1007/978-3-031-08954-1_114.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] X. Wu, D. Hong and J. Chanussot, “UIU-Net: U-Net in U-Net for Infrared Small Object Detection”, IEEE Trans. Image Process., 2022. https://doi.org/10.1109/tip.2022.3228497.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] B. Li and D. Lima, “Facial expression recognition via ResNet-50”, Int. J. Cogn. Comput. Eng., vol. 2, 2021, pp. 57-64. https://doi.org/10.1016/j.ijcce.2021.02.002.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Q. Zhu, “ACDNet with ASPP for Camouflaged Object Detection”, J. Phys.: Conf. Ser., vol. 1982, no. 1, 2021, p. 012082. https://doi.org/10.1088/1742-6596/1982/1/012082.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] D. Chernyshev, G. Ryzhakova, T. Honcharenko, H. Petrenko, I. Chupryna, N. Reznik, “Digital Administration of the Project Based on the Concept of Smart Construction”, Lecture Notes in Networks and Systems, 495 LNNS, 2023, pp. 1316-1331. https://doi.org/10.1007/978-3-031-08954-1_114.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] A. M. Rostami, M. M. Homayounpour and A. Nickabadi, “Efficient Attention Branch Network with Combined Loss Function for Automatic Speaker Verification Spoof Detection”, Circuits, Syst., Signal Process., 2023. https://doi.org/10.1007/s00034-023-02314-5.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>