<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Neural Architecture Search Case Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Petra Vidnerová</string-name>
          <email>petra@cs.cas.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Neruda</string-name>
          <email>roman@cs.cas.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITAT'25: Information Technologies - Applications and Theory</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>The Czech Academy of Sciences, Institute of Computer Science</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>Understanding the evolution of research trends is critical for navigating rapidly developing scientific literature. Large Language Models (LLMs) offer powerful tools for analysing scientific texts, enabling the extraction of key concepts and the construction of semantic networks. These capabilities can support the study of emerging ideas and research trends through graph-based representations. In this paper, we present a network-based text analysis framework designed to map the evolution of scientific knowledge. Our goal is to extract conceptual structures from research papers and construct graphs that represent both the occurrence of terms and their interrelationships. The integration of temporal information allows us to track the emergence and transformation of research themes.</p>
      </abstract>
      <kwd-group>
        <kwd>network text analysis</kwd>
        <kwd>semantic graphs</kwd>
        <kwd>large language models application</kwd>
        <kwd>neural architecture search</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>https://www.cs.cas.cz/~petra (P. Vidnerová)</p>
      <p>CEUR Workshop Proceedings (ISSN 1613-0073)</p>
      <p>the underlying content. By combining both qualitative insights and quantitative metrics, the framework
offers a comprehensive tool for analysing large volumes of text, making it applicable across a range of
domains where text-based information plays a central role.</p>
      <p>The designed paper networks enable better orientation among scientific papers as well as retrieval
mechanisms delivering papers based on topic similarity.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>In this section, we propose a methodology for analysing a collection of research papers using network
text analysis. We begin by outlining the process of constructing the dataset, which includes assembling
the papers and incorporating citation information. Next, we detail the procedure for selecting keywords
and generating network graphs.</p>
      <p><bold>Dataset</bold></p>
      <p>We construct a dataset of research papers using the arXiv repository. Specifically, we use the arXiv
API to download papers matching a search query related to the topic or field of interest, up to a specified
maximum number of results.</p>
      <p>In our case study, we retrieved 10,000 papers using the query ”neural architecture search.” For each
paper, we obtained both the PDF and metadata, including the title, authors, abstract, and other relevant
information.</p>
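      <p>In outline, the retrieval step might look like the following sketch. The helper name and parameters are illustrative, not the study's actual code; the query-string format follows the public arXiv API, which returns an Atom feed from which titles, authors, abstracts, and PDF links can be parsed.</p>
      <preformat>
```python
import urllib.parse

def build_arxiv_query_url(query, start=0, max_results=100):
    """Build an arXiv API query URL for a phrase search.

    The API answers with an Atom feed; paper metadata and PDF links
    are parsed from that feed in a subsequent step.
    """
    params = urllib.parse.urlencode({
        "search_query": f'all:"{query}"',
        "start": start,
        "max_results": max_results,
    })
    return "http://export.arxiv.org/api/query?" + params
```
      </preformat>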
      <p>However, not all papers retrieved through this method are truly relevant to the target
topic—particularly when the maximum number of results is set high. To address this, we apply a filtering step using an LLM. For each
paper, we provide the abstract and a prompt asking whether the content is related to the target topic.</p>
      <p>After filtering out irrelevant papers, we retained a final set of 2,423 arXiv papers that we believe to
be related to Neural Architecture Search.</p>
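      <p>The filtering step can be sketched as below. Here <italic>ask_llm</italic> stands in for whatever chat interface is used (a plain callable mapping a prompt to the model's reply), and the prompt wording is illustrative rather than the exact prompt used in the study.</p>
      <preformat>
```python
def filter_relevant(papers, topic, ask_llm):
    """Keep papers whose abstract the LLM judges relevant to `topic`.

    `papers` is a list of dicts with an "abstract" field; `ask_llm`
    is a callable wrapping the model (hypothetical interface).
    """
    prompt_tmpl = (
        "Is the following abstract related to the topic '{topic}'? "
        "Answer yes or no.\n\n{abstract}"
    )
    kept = []
    for paper in papers:
        reply = ask_llm(prompt_tmpl.format(topic=topic,
                                           abstract=paper["abstract"]))
        # treat any answer starting with "yes" as a positive verdict
        if reply.strip().lower().startswith("yes"):
            kept.append(paper)
    return kept
```
      </preformat>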
      <p><bold>Citations</bold></p>
      <p>
        The second step involves retrieving citation information from the OpenAlex [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] database. The
simplest approach is to locate each paper in OpenAlex using its arXiv ID and extract its citation data.
However, many papers also have peer-reviewed versions published in conferences or journals, which
are often cited more frequently than their arXiv preprints.
      </p>
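      <p>Matching preprint and published versions by title, and pooling their incoming citations, might be sketched as follows. The record layout is illustrative of data assembled from OpenAlex responses, not the service's literal schema.</p>
      <preformat>
```python
import re

def normalise_title(title):
    """Lowercase and drop punctuation/whitespace so an arXiv preprint
    and its published version compare equal."""
    return re.sub(r"[^a-z0-9]+", "", title.lower())

def aggregate_citing_ids(records):
    """Merge citing-paper ID sets over records sharing a normalised title.

    `records` is a list of (title, citing_ids) pairs; using sets avoids
    double-counting a citation that reaches both versions of a paper.
    """
    merged = {}
    for title, citing_ids in records:
        key = normalise_title(title)
        merged.setdefault(key, set()).update(citing_ids)
    return merged
```
      </preformat>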
      <p>To account for this, we search OpenAlex for all papers with identical titles (ignoring differences in
punctuation and whitespace) and aggregate the citations from these matched entries. We then consider
only those citations that originate from the set of arXiv papers compiled in the previous step.</p>
      <p><bold>Keywords</bold></p>
      <p>To enable the analysis of topic trends, we associate each paper with a set of keywords. We consider
two approaches for keyword generation. The first involves prompting a large language model (LLM)
with a paper’s abstract and directly requesting a list of keywords. The second approach starts with a
manually curated set of fixed keywords; for each abstract-keyword pair, the LLM is then asked whether
the keyword is relevant to the given abstract.</p>
      <p>The first approach can yield rich and diverse keywords, but it requires substantial postprocessing to
unify synonyms and eliminate overly general terms. The second approach produces a clean, boolean
keyword vector for each paper, which is easier to analyse, but it depends heavily on the quality and
scope of the initial keyword set, which may be limited or biased.</p>
      <p>In this study, we adopt a hybrid method. We begin by generating a set of keywords for each paper
using the first approach. We then aggregate all generated keywords, select the most frequent ones, and
filter out overly general terms (e.g., ”neural network” in the context of neural architecture search). The
resulting refined list serves as our fixed keyword set for further analysis.</p>
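      <p>A minimal sketch of this keyword refinement step; the function name, the stop set, and the cut-off are illustrative choices, not the exact ones used in the study.</p>
      <preformat>
```python
from collections import Counter

def refine_keywords(per_paper_keywords, too_general, top_k=50):
    """Pool LLM-generated keywords over all papers, keep the most
    frequent ones, and drop manually chosen overly general terms.

    `per_paper_keywords` is a list of keyword lists, one per paper.
    """
    counts = Counter(kw.lower() for kws in per_paper_keywords for kw in kws)
    for kw in too_general:
        counts.pop(kw.lower(), None)  # drop terms like "neural network"
    return [kw for kw, _ in counts.most_common(top_k)]
```
      </preformat>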
      <p><bold>Networks</bold></p>
      <p>A key advantage of network text analysis is its ability to reveal structural relationships between
concepts, keywords, or entities within a large text corpus. This is achieved by constructing graphs—sets
of vertices <italic>V</italic> and edges <italic>E</italic>, where each edge is a pair (<italic>u</italic>, <italic>v</italic>) such that <italic>u</italic> ∈ <italic>V</italic> and <italic>v</italic> ∈ <italic>V</italic>. Vertices represent
entities (e.g., keywords or papers), and edges represent the relationships between them.</p>
      <p>In our study, we construct three types of networks: a keyword network, a paper network, and a
citation network.</p>
      <p>• In the keyword network, vertices correspond to individual keywords. An edge is created between
two keywords if they co-occur in at least one paper. Each edge is weighted by the number of
shared papers in which the two keywords appear.
• In the paper network, vertices represent papers, and an edge is established between two papers if
they share one or more keywords. The edge weight reflects the number of shared keywords.
• In the citation network, vertices again represent papers, but edges are directed: an edge (<italic>u</italic>, <italic>v</italic>)
indicates that paper <italic>u</italic> cites paper <italic>v</italic>.</p>
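      <p>For example, the keyword network's weighted edge list can be computed in a library-agnostic way before loading the vertices and weights into a visualisation tool such as graph-tool; this sketch assumes each paper is given as its set of keywords.</p>
      <preformat>
```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence_edges(paper_keywords):
    """Weighted edge list for the keyword network: one edge per keyword
    pair, weighted by the number of papers in which both appear.

    `paper_keywords` is an iterable of keyword collections, one per paper.
    """
    weights = Counter()
    for kws in paper_keywords:
        # sorted() gives a canonical (u, v) ordering for undirected edges
        for u, v in combinations(sorted(set(kws)), 2):
            weights[(u, v)] += 1
    return weights
```
      </preformat>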
      <p>
        These networks serve both as analytical tools and visualisation aids, helping to map relationships and
navigate large collections of research papers. Additionally, we can compute various network properties
(such as centrality measures [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) to capture the broader structure of the network. These metrics help
identify key nodes based on their position within the overall graph topology.
      </p>
      <p>
        The first centrality measure we use is the PageRank [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], which evaluates the importance of a document
based on both the number and the quality of links pointing to it. In our context, the PageRank centrality
helps identify influential papers.
      </p>
      <p>
        Second, we use betweenness centrality [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a graph-based metric that reflects how often a
node appears on the shortest paths between other nodes. This measure highlights nodes that serve as
bridges within the network, such as interdisciplinary papers. In the context of a single research field,
high betweenness may indicate papers that connect distinct concepts or subtopics.
      </p>
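      <p>Libraries such as graph-tool provide these centrality measures directly; purely as a self-contained illustration of the idea, PageRank's power iteration on a citation graph can be sketched as follows (dangling-node handling is one common convention, not necessarily the library's).</p>
      <preformat>
```python
def pagerank(edges, n, damping=0.85, iters=100):
    """Power-iteration PageRank on a directed graph.

    `edges` is a list of (u, v) pairs meaning paper u cites paper v;
    vertices are integers 0..n-1. Mass from dangling nodes (no outgoing
    edges) is spread uniformly so that ranks always sum to 1.
    """
    out = [[] for _ in range(n)]
    for u, v in edges:
        out[u].append(v)
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - damping) / n] * n
        for u in range(n):
            if out[u]:
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:
                for v in range(n):
                    nxt[v] += damping * rank[u] / n
        rank = nxt
    return rank
```
      </preformat>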
    </sec>
    <sec id="sec-3">
      <title>3. Neural Architecture Search</title>
      <p>As a case study, we chose the field of Neural Architecture Search (NAS) to showcase the capabilities of
the proposed text analysis method.</p>
      <p>
        NAS is a subfield of automated machine learning (AutoML) focused on the automatic design of neural
network architectures. Traditional deep learning models require significant manual effort and expert
knowledge to design effective architectures, which may not generalise well across tasks or datasets.
NAS aims to automate this process by searching through a predefined space of possible architectures
to find models that achieve optimal performance for a given task. It typically involves three main
components: the search space (defining which architectures can be explored), the search strategy (how
architectures are sampled or generated), and the performance estimation strategy (how the quality of
each candidate architecture is evaluated) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Over the past few years, NAS has seen rapid advances in both efficiency and effectiveness. Early
approaches, such as those based on reinforcement learning or evolutionary algorithms, were
computationally expensive, often requiring thousands of GPU hours. More recent methods—like differentiable
NAS (e.g., DARTS [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]) or weight-sharing approaches—have significantly reduced the cost of the search
process, making NAS more accessible and practical. NAS has been applied successfully in areas such as
image classification, object detection, and natural language processing, and continues to evolve toward
broader applicability, better generalisation, and improved search efficiency.
      </p>
      <p>In the next section, we describe our findings on the field of NAS.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>
        We used a dataset of research papers retrieved from arXiv, following the procedure outlined in Section 2,
resulting in a collection of 2,423 papers. As the LLM, we used the tiger-gemma-9b-v3:fp16 model because
of its relatively small size and good performance. Visualisations were created using the graph-tool
library [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The code is publicly available at [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The temporal distribution of these papers is shown in Fig. 1. While the topic has appeared in the
literature since the late 1990s, a significant increase in research activity began after 2017. This rapid
growth was driven by the increased availability of high-performance hardware and the widespread
adoption of deep learning techniques.</p>
      <p>The following figure, Fig. 2, displays the proportion of papers associated with each keyword relative
to the total number of papers. Based on these keyword counts only, we can make several observations.</p>
      <p>First, the keywords include several objectives such as efficiency, accuracy, inference time or latency,
robustness, sparsity, adversarial robustness, and energy efficiency. It is evident that the proportion of
papers addressing secondary but still important objectives—like energy efficiency or robustness—is
much smaller compared to those focusing on accuracy or overall efficiency. Although multi-objective
optimisation is increasingly important today and solutions face multiple demands, many studies continue
to concentrate solely on accuracy.</p>
      <p>Objectives such as latency and energy efficiency correspond to hardware-aware NAS, which focuses
on optimising neural architectures not only for performance but also for practical deployment constraints
on specific hardware platforms. Fig. 3 shows the proportions of keywords over time, revealing that the
importance of hardware-aware NAS has increased.</p>
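      <p>The per-year keyword proportions behind such plots reduce to a simple count; the data layout below is illustrative of the dataset described in Section 2.</p>
      <preformat>
```python
from collections import defaultdict

def keyword_share_by_year(papers, keyword):
    """Fraction of papers per year tagged with `keyword`.

    `papers` is a list of (year, keyword_set) pairs.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for year, kws in papers:
        totals[year] += 1
        if keyword in kws:
            hits[year] += 1
    return {year: hits[year] / totals[year] for year in totals}
```
      </preformat>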
      <p>Another group of keywords relates to the optimisation techniques used in the NAS process. Notably,
multi-objective optimisation plays a significant role, appearing in nearly 30% of the papers. The most
prominent optimisation methods include evolutionary algorithms, differentiable architecture search,
reinforcement learning, and Bayesian optimisation.</p>
      <p>Fig. 4 shows the number of papers using different optimisation algorithms, with evolutionary
algorithms and differentiable architecture search clearly dominating, and Bayesian optimisation being the
least common. However, the trend over time, illustrated in Fig. 5, tells a different story: differentiable
architecture search methods have gradually emerged and become the dominant approach, whereas
evolutionary algorithms were more prominent in the early years of the NAS field.</p>
      <p>The next step is to move beyond individual keywords and examine the keyword graph. In Fig. 6,
keywords are represented as vertices, with the size of each circle proportional to the number of associated
papers. The complete graph is shown in Fig. 7 (left), while Fig. 7 (right) highlights the most significant
edges connecting the keywords.</p>
      <p>The strongest edges are:
• latency - inference time
• performance prediction - surrogate model
• multi-objective optimization - performance prediction
• model compression - multi-objective optimization
• hyperparameter optimization - performance prediction
• multi-objective optimization - inference time
• hardware aware - multi-objective optimization</p>
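      <p>Given the weighted edge dictionary of the keyword network, the strongest edges are simply the top entries by weight; a minimal sketch:</p>
      <preformat>
```python
def strongest_edges(weights, k=7):
    """Top-k edges of a weighted edge dict {(u, v): weight}, strongest first."""
    return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:k]
```
      </preformat>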
      <p>The strong connection between latency and inference time is expected, as the two terms are essentially
synonymous. Similarly, the prominent edge between performance prediction and surrogate model
highlights the central role of surrogate models in performance prediction tasks. Additionally, the
strong link between hardware-aware NAS and multi-objective optimization reflects the inherently
multi-objective nature of hardware-aware approaches.</p>
      <p>A different type of graph is presented in Fig. 8, where each node represents a paper and edges indicate
shared keywords. This graph includes only papers tagged with the ”evolutionary algorithms” keyword,
and outlier nodes have been filtered out. The resulting clusters reveal groups of papers that share
common concepts, potentially aiding in the identification of related work. Papers associated with the
”performance prediction” keyword are highlighted in red.</p>
      <p>
        We reach a deeper level of analysis by incorporating citation data. Fig. 9 (left) presents the citation
graph, excluding singleton nodes, while Fig. 9 (right) displays papers with more than five citations,
visualised using the algorithm from [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Nodes with a high number of outgoing edges—those citing
many other papers—are typically review or survey articles. For example, node 1192 in the figure
corresponds to the paper Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization
Gap [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], which provides a literature review of NAS, focusing specifically on weight-sharing methods.
      </p>
      <p>
        The three most cited papers (counting only citations within the dataset) are SNAS: Stochastic
Neural Architecture Search [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], ProxylessNAS: Direct Neural Architecture Search on Target Task and
Hardware [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ],
with an equal number of citations. NAS-Bench-201 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] is the most popular benchmark for testing NAS algorithms, so
it is unsurprisingly cited by many NAS papers.
      </p>
      <p>
        On the other hand, the papers with the most references are Weight-Sharing Neural Architecture Search: A
Battle to Shrink the Optimization Gap [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and A Comprehensive Survey on Hardware-Aware Neural
Architecture Search [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], both survey-type papers.
      </p>
      <p>Citation counts correspond only to the number of incoming and outgoing edges in the citation graph.
However, they do not capture the broader structure of the network. To address this limitation, we have
used the two centrality measures – PageRank and betweenness centrality – described in Section 2.</p>
      <p>
        The most influential papers according to PageRank are:
• Designing Neural Network Architectures using Reinforcement Learning [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
• Eficient Architecture Search by Network Transformation [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]
      </p>
      <p>
        • DeepArchitect: Automatically Designing and Training Deep Architectures [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]
• Progressive Neural Architecture Search [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]
      </p>
      <sec id="sec-4-1">
        <title>The most important papers according to betweenness centrality are:</title>
        <p>
          • Single Path One-Shot Neural Architecture Search with Uniform Sampling [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]
• Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization Gap [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]
• Evaluating the Search Phase of Neural Architecture Search [22]
• Random Search and Reproducibility for Neural Architecture Search [23]
• CARS: Continuous Evolution for Efficient Neural Architecture Search [24]
        </p>
        <p>
          • Neural Architecture Search: A Survey [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper, we introduced a network-based text analysis approach supported by large language
models (LLMs) to explore and better understand the structure of scientific literature. By applying our
methodology to the field of NAS, we demonstrated its ability to uncover key research trends, thematic
clusters, and the evolution of methods over time. Through keyword co-occurrence networks, citation
analysis, and centrality measures, our approach provided insights into dominant areas within the NAS
domain.</p>
      <p>The integration of LLMs enabled finer keyword extraction and classification, enhancing the quality of
the resulting networks. Our hybrid method offers a scalable and interpretable way to analyse complex
scientific landscapes and can be readily adapted to other research domains.</p>
      <p>Future work will focus on extending this methodology in several directions. One promising direction
is the incorporation of full-text analysis, which would allow for a deeper semantic understanding
beyond abstracts and keywords. However, our results so far indicate that, for keyword and topic
extraction, abstracts alone give sufficient results.</p>
      <p>Additionally, applying dynamic network analysis could help capture how research topics and their
interconnections evolve over time. Another direction involves refining the integration of LLMs—for
instance, by using them not only for keyword extraction but also for automated topic labeling, trend
detection, and hypothesis generation.</p>
      <p>Future work will also explore the use of our approach for novelty detection, aiming to identify
emerging topics, unconventional connections, or underexplored research directions within the scientific
literature.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgement</title>
      <p>This work has been funded by a grant from the Programme Johannes Amos Comenius under the
Ministry of Education, Youth and Sports of the Czech Republic ’Knowledge in the Age of Distrust’
(reg. no. CZ.02.01.01/00/23_025/0008711) and by the long-term strategic development financing of the
Institute of Computer Science (RVO:67985807).</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
      <p>[22] C. Sciuto, K. Yu, M. Jaggi, C. Musat, M. Salzmann, Evaluating the search phase of neural architecture
search, CoRR abs/1902.08142 (2019). URL: http://arxiv.org/abs/1902.08142. arXiv:1902.08142.
[23] L. Li, A. Talwalkar, Random search and reproducibility for neural architecture search, CoRR
abs/1902.07638 (2019). URL: http://arxiv.org/abs/1902.07638. arXiv:1902.07638.
[24] Z. Yang, Y. Wang, X. Chen, B. Shi, C. Xu, C. Xu, Q. Tian, C. Xu, CARS: continuous evolution for
efficient neural architecture search, CoRR abs/1909.04977 (2019). URL: http://arxiv.org/abs/1909.04977. arXiv:1909.04977.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schütze</surname>
          </string-name>
          ,
          <article-title>Foundations of statistical natural language processing</article-title>
          , MIT Press, Cambridge, MA, USA,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei, Language models are few-shot learners, CoRR abs/2005.14165 (2020). URL: https://arxiv.org/abs/2005.14165. arXiv:2005.14165.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] J. Priem, H. Piwowar, R. Orr, OpenAlex: A fully-open index of scholarly works, authors, venues, institutions, and concepts, 2022. URL: https://arxiv.org/abs/2205.01833. arXiv:2205.01833.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] M. Newman, Networks, Oxford University Press, 2018. URL: https://doi.org/10.1093/oso/9780198805090.001.0001. doi:10.1093/oso/9780198805090.001.0001.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] L. Page, S. Brin, R. Motwani, T. Winograd, The PageRank Citation Ranking: Bringing Order to the Web, Technical Report 1999-66, Stanford InfoLab, 1999. URL: http://ilpubs.stanford.edu:8090/422/, previous number = SIDL-WP-1999-0120.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Freeman</surname>
          </string-name>
          ,
          <article-title>A set of measures of centrality based on betweenness</article-title>
          ,
          <source>Sociometry</source>
          <volume>40</volume>
          (
          <year>1977</year>
          )
          <fpage>35</fpage>
          -
          <lpage>41</lpage>
          . URL: http://www.jstor.org/stable/3033543.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] T. Elsken, J. H. Metzen, F. Hutter, Neural architecture search: A survey, 2019. URL: https://arxiv.org/abs/1808.05377. arXiv:1808.05377.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] H. Liu, K. Simonyan, Y. Yang, DARTS: differentiable architecture search, CoRR abs/1806.09055 (2018). URL: http://arxiv.org/abs/1806.09055. arXiv:1806.09055.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] T. P. Peixoto, The graph-tool Python library, figshare (2014). URL: http://figshare.com/articles/graph_tool/1164194. doi:10.6084/m9.figshare.1164194.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Vidnerová</surname>
          </string-name>
          , Source code,
          <year>2025</year>
          . URL: https://github.com/PetraVidnerova/TRUST_codes.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Peixoto</surname>
          </string-name>
          ,
          <article-title>Hierarchical block structures and high-resolution model selection in large networks</article-title>
          ,
          <source>Phys. Rev. X 4</source>
          (
          <year>2014</year>
          )
          <article-title>011047</article-title>
          . URL: https://link.aps.org/doi/10.1103/PhysRevX.4.011047. doi:
          <volume>10</volume>
          . 1103/PhysRevX.4.011047.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <article-title>Weightsharing neural architecture search: A battle to shrink the optimization gap</article-title>
          , CoRR abs/
          <year>2008</year>
          .01475 (
          <year>2020</year>
          ). URL: https://arxiv.org/abs/
          <year>2008</year>
          .01475. arXiv:
          <year>2008</year>
          .01475.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>SNAS: stochastic neural architecture search</article-title>
          , CoRR abs/
          <year>1812</year>
          .09926 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/
          <year>1812</year>
          .09926. arXiv:
          <year>1812</year>
          .09926.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , S. Han,
          <article-title>ProxylessNAS: Direct neural architecture search on target task and hardware</article-title>
          , CoRR abs/
          <year>1812</year>
          .00332 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/
          <year>1812</year>
          .00332. arXiv:
          <year>1812</year>
          .00332.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>X.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          , NAS-Bench-
          <volume>201</volume>
          :
          <article-title>Extending the scope of reproducible neural architecture search</article-title>
          , CoRR abs/
          <year>2001</year>
          .00326 (
          <year>2020</year>
          ). URL: http://arxiv.org/abs/
          <year>2001</year>
          .00326. arXiv:
          <year>2001</year>
          .00326.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>H.</given-names>
            <surname>Benmeziane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Maghraoui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ouarnoughi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Niar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wistuba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A comprehensive survey on hardware-aware neural architecture search</article-title>
          ,
          <source>CoRR abs/2101</source>
          .09336 (
          <year>2021</year>
          ). URL: https: //arxiv.org/abs/2101.09336. arXiv:
          <volume>2101</volume>
          .
          <fpage>09336</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Naik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Raskar</surname>
          </string-name>
          ,
          <article-title>Designing neural network architectures using reinforcement learning</article-title>
          ,
          <source>CoRR abs/1611</source>
          .02167 (
          <year>2016</year>
          ). URL: http://arxiv.org/abs/1611.02167. arXiv:
          <volume>1611</volume>
          .
          <fpage>02167</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Reinforcement learning for architecture search by network transformation</article-title>
          ,
          <source>CoRR abs/1707</source>
          .04873 (
          <year>2017</year>
          ). URL: http://arxiv.org/abs/1707.04873. arXiv:
          <volume>1707</volume>
          .
          <fpage>04873</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Negrinho</surname>
          </string-name>
          , G. Gordon, Deeparchitect:
          <article-title>Automatically designing and training deep architectures</article-title>
          ,
          <year>2017</year>
          . URL: https://arxiv.org/abs/1704.08792. arXiv:
          <volume>1704</volume>
          .
          <fpage>08792</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shlens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fei-Fei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Yuille</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <article-title>Progressive neural architecture search</article-title>
          ,
          <source>CoRR abs/1712</source>
          .00559 (
          <year>2017</year>
          ). URL: http://arxiv.org/abs/1712.00559. arXiv:
          <volume>1712</volume>
          .
          <fpage>00559</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Heng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Single path one-shot neural architecture search with uniform sampling</article-title>
          , CoRR abs/
          <year>1904</year>
          .00420 (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/
          <year>1904</year>
          .00420.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>