<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>mendation Using Code Embeddings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Damian Garber</string-name>
          <email>damian.garber@tugraz.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Lubos</string-name>
          <email>sebastian.lubos@tugraz.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexander Felfernig</string-name>
          <email>alexander.felfernig@tugraz.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Compiler Autotuning</institution>
          ,
          <addr-line>Code Embeddings, Collaborative Filtering, Code Metrics</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ConfWS'25: 27th International Workshop on Configuration</institution>
          ,
          <addr-line>Oct 25-26, 2025, Bologna</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Graz University of Technology</institution>
          ,
          <addr-line>Infeldgasse 16b, Graz, 8010</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>We present a lightweight compiler autotuning approach that combines concepts from configuration space learning with recommender techniques. Our approach uses code embeddings generated by different large language models for data representation and the calculation of similarity scores. The best-performing code embedding approach produces, on average, 4.11% faster binaries than the best-performing code metric-based alternative. Compilers are powerful and highly configurable tools. The C compiler GCC has about 200 optimization options that can be activated or deactivated independently. Each option may positively or negatively impact different properties, such as the generated binary's runtime, size, or energy consumption. If these options are utilized correctly, the generated program binaries can be faster, smaller, or more energy-efficient without investing further resources into code refinement. However, choosing the correct options requires expertise in both compiler optimization and the program to be optimized. Compiler autotuning addresses this issue by recommending optimization options for a program without any expert involvement. Most approaches for compiler autotuning are computationally expensive and take days to continuously refine the recommended options [1, 2, 3, 4, 5, 6]. Alternative lightweight approaches for compiler autotuning, proposed by Burgstaller et al. [7] and Garber et al. [8], can reduce the time needed for a recommendation to milliseconds, allowing an interactive user experience. This lightweight approach is called Optimization Space Learning (OSL) [7] and relies on training data collected in advance, which is then used for recommendation via nearest-neighbor-based collaborative filtering [9] based on extracted code metrics. The major contributions of this paper are as follows: (1) We extend OSL by incorporating and comparing different code embeddings. (2) We show that the new embeddings significantly outperform the standard compiler optimization options in terms of the runtime performance of the generated program.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Compiler autotuning is the automated selection of advantageous compiler optimization options for a
program. It can be divided into the phase selection problem and the phase ordering problem [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Phase
ordering tries to find an optimal sequence in which to apply the options, while phase selection, the focus of this
work, tries to identify which optimizations should be applied. The optimality of options can be defined
with respect to different properties, the most common of which is runtime. However, space, energy, or similar
measurable properties could also be employed.</p>
      <p>
        The state-of-the-art in compiler autotuning is primarily dominated by iterative approaches [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6">1, 2, 3,
4, 5, 6</xref>
        ]. Bodin et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] propose one of the first compiler autotuning approaches. They generate an
initial set of optimization options to be activated, compile the program using these options, measure its
performance, and refine the configuration in a loop until achieving satisfactory results. Most newer
approaches build on this concept, like COBAYN [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], which uses Bayesian Networks to narrow the
search space. The current state-of-the-art method, BOCA [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], employs Bayesian Optimization to
identify key optimizations and streamline the search process. CompTuner [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] builds a prediction model
for the runtime of different optimization options and uses a particle swarm optimization algorithm [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
to improve the search performance. Cole [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] can perform multi-target optimization by iteratively
creating a Pareto front.
      </p>
      <p>
        Performance is a key challenge for the computationally intensive, iterative, state-of-the-art
approaches, as they require numerous compilations. As project sizes grow, this becomes a significant
issue. For instance, Cole must construct a Pareto front, which takes 50 days on a single machine [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. To
address these limitations, newer lightweight approaches like Optimization Space Learning (OSL) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
adopt alternative strategies to provide faster optimization recommendations, trading off a small degree
of recommendation quality for improved responsiveness.
      </p>
      <p>
        OSL achieves this by combining configuration space learning [
        <xref ref-type="bibr" rid="ref16">16, 17</xref>
        ] techniques like the t-wise
feature coverage heuristic [
        <xref ref-type="bibr" rid="ref16">16, 17, 18</xref>
        ] with collaborative filtering [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In this context, collaborative
filtering relies on code metrics (e.g., McCabe, Halstead, or counts of keywords) extracted from the
optimized programs. This paper presents an alternative collaborative filtering approach based on code
embeddings [19].
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. Recommendation Approach</title>
      <p>
        Optimization Space Learning (OSL) is a compiler autotuning approach introduced initially by Burgstaller
et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The approach combines concepts from configuration space learning [
        <xref ref-type="bibr" rid="ref16">16, 17</xref>
        ] for data generation
and collaborative filtering [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] for configuration recommendation. The key contribution of OSL is its
recommendation speed, which is achieved after a one-time collection of training data within tens of
milliseconds. Meanwhile, the iterative state-of-the-art compiler autotuning approaches [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6">1, 2, 3, 4, 5, 6</xref>
        ],
report computation times of several days. These differences are due to the iterative approaches requiring
a continuous refinement process of recommendation result testing, adaptation, and restarting.
      </p>
      <sec id="sec-4-1">
        <title>3.1. Data Collection</title>
        <p>OSL needs to collect initial training data to provide recommendations for a new hardware environment.
Two decisions have to be made to generate the training data. The first is which programs to use for
training. The second is the heuristic used for generating the sample configurations.</p>
        <p>
          The training approach is based on configuration space learning [
          <xref ref-type="bibr" rid="ref16">16, 20, 17</xref>
          ], motivated by the
infeasibility of exhaustively exploring configuration spaces due to their exponential size [21]. For example,
GCC includes around 200 options, yielding a configuration space of roughly 2^200 configurations. Even
assuming 1 ms per compilation and measurement, full exploration would take about 5 × 10^49 years. Therefore,
following Pereira et al. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], collecting a small, representative configuration subset is necessary.
        </p>
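          <p>The arithmetic behind this estimate can be checked directly; a minimal sketch (the 1 ms per compile-and-measure cycle is the assumption stated above):
```python
# Back-of-the-envelope check of the configuration-space size for ~200
# independent binary options, assuming 1 ms per compile-and-measure cycle.
n_options = 200
n_configs = 2 ** n_options                 # every option toggled independently

seconds_total = n_configs * 1 / 1000       # 1 ms per configuration
years_total = seconds_total / (365.25 * 24 * 3600)

print(f"{n_configs:.2e} configurations")   # 1.61e+60
print(f"{years_total:.2e} years")          # 5.09e+49
```
          </p>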
        <p>
          In order to collect such a small representative set of configurations, we use sampling approaches
discussed by Pereira et al. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] and Garber et al. [17]. Burgstaller et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] initially considered two
sampling approaches: Uniform Random Sampling (URS) and t-wise Feature Coverage Heuristics (FCH).
URS is well-established [
          <xref ref-type="bibr" rid="ref16">16, 17, 22, 23, 24</xref>
          ], but has drawbacks with scalability. The main drawback of
FCH, on the other hand, is its expensive computation, which is mitigated by the unconstrained nature
of the problem (options are independent of each other) and the fact that this needs to be performed
only once. Therefore, OSL ultimately relies on the t-wise FCH [
          <xref ref-type="bibr" rid="ref16">16, 17</xref>
          ] to generate the samples for the
programs’ runtimes [s] when compiled with the referenced compiler configurations.
        </p>
        <p>
          Burgstaller et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] report FCH with t = 3 to perform best for this task, which is confirmed by Garber et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and
our work presented in this paper. Next, we need a set of programs to synthesize the needed data. We
use the same benchmark used in the original work by Burgstaller et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and in the improved OSL by
Garber et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The PolyBench benchmarks [25] provide 30 programs written in C and are widely
used in related literature [
          <xref ref-type="bibr" rid="ref13 ref14 ref7 ref8">7, 13, 14, 8</xref>
          ].
        </p>
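        <p>For illustration, the t-wise coverage idea can be sketched as a greedy random cover; the sketch below uses t = 2 (pairwise) rather than the t = 3 applied in the experiments, and plain random search rather than a dedicated covering-array tool:
```python
import itertools
import random

def pairwise_cover(n_options, seed=0):
    """Greedy sketch of a t-wise feature-coverage heuristic for t = 2:
    keep random configurations that still add coverage until every pair of
    options has appeared in all four on/off combinations."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n_options), 2))
    uncovered = {(i, j, a, b) for i, j in pairs for a in (0, 1) for b in (0, 1)}
    sample = []
    while uncovered:
        cfg = tuple(rng.randint(0, 1) for _ in range(n_options))
        newly = {(i, j, cfg[i], cfg[j]) for i, j in pairs}
        if not uncovered.isdisjoint(newly):   # keep only configs that help
            sample.append(cfg)
            uncovered -= newly
    return sample

configs = pairwise_cover(8)
print(len(configs))   # a handful of configurations instead of 2**8 = 256
```
        </p>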
        <p>We construct the training data by compiling an executable for each configuration provided by
the sampling approach and each program in the benchmark. The performance properties of these
executables are then measured using perf-stat.</p>
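        <p>The collection loop amounts to one compiler invocation per (program, configuration) pair; a sketch with three illustrative GCC -f flags (the real option set is far larger, and the resulting binary would then be timed, e.g. with perf stat):
```python
# Sketch of the data-collection loop: build one gcc command line per
# (program, configuration) pair. The three option names are illustrative
# GCC -f flags, not the full set used in the paper.
OPTIONS = ["tree-vectorize", "unroll-loops", "inline-functions"]

def gcc_command(program, config):
    """config is a tuple of 0/1 values, one per option in OPTIONS."""
    flags = [f"-f{opt}" if on else f"-fno-{opt}"
             for opt, on in zip(OPTIONS, config)]
    return ["gcc", "-O2", *flags, "-o", f"{program}.bin", f"{program}.c"]

cmd = gcc_command("gemm", (1, 0, 1))
print(" ".join(cmd))
# gcc -O2 -ftree-vectorize -fno-unroll-loops -finline-functions -o gemm.bin gemm.c
```
        </p>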
        <p>At this point, OSL extracts a vector of 111 source code metrics, such as McCabe’s Cyclomatic
Complexity [26], Halstead Complexity [27], or simple counts like the number of times a particular keyword
occurs, using the CQMetrics tool [28]. OSL uses the first 66 of those source code metrics to calculate
program similarities during the recommendation process, since the latter metrics are primarily related
to coding style, e.g., indentation space counts. A complete list of the extracted metrics is provided in the
CQMetrics documentation. We compare the performance of this code metric-based similarity with our
approach of using code embeddings-based similarity. A description of the code embeddings used is
provided in Section 4.</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Recommendation</title>
        <p>
          Essentially, we apply nearest neighbor-based collaborative filtering [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] on synthesized data [29, 22],
which has been obtained using heuristics known from configuration space learning [
          <xref ref-type="bibr" rid="ref16">16, 17</xref>
          ].
        </p>
        <p>Our variant of user-based collaborative filtering differs slightly from the standard setting (see
Table 1).</p>
        <p>
          Here, programs act as users, configurations as items, and runtime serves as the rating (a lower runtime
is analogous to a higher rating). Unlike typical scenarios, we have complete performance data for all
program-configuration pairs generated by the data collection process, except for the target program.
Thus, we require an external metric to estimate program similarity. The version of OSL used by
Burgstaller et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and Garber et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] computes similarity using source code metrics and the
Euclidean distance [
          <xref ref-type="bibr" rid="ref7">30, 7</xref>
          ] (see Formula 1 and Formula 2). In this context, x and y are n-dimensional
vectors with components x1 to xn and y1 to yn, representing programs Px and Py. In OSL, these vectors
consist of n = 66 code metrics, while in our approach, they are the extracted fixed-size (n) embedding
vectors.
        </p>
        <p>dist(x, y) = sqrt( Σ_{i=1}^{n} (x_i − y_i)^2 )   (1)</p>
        <p>sim(x, y) = 1 / (1 + dist(x, y))   (2)</p>
        <p>In the example, the highest similarity is 16.7% between P1 and P3. Thus, P3 is the most
similar program to P1. We propose the use of code embeddings extracted from the programs instead. To
this end, we test the performance of two embeddings, shown in Table 4. After testing several common
ways of calculating the similarities of two embedding vectors, such as the cosine similarity, we use the
same Euclidean distance-based approach described earlier.</p>
        <p>The remaining process is identical to the typical user-based collaborative filtering procedure. The
best-rated (fastest-runtime) configuration of the most similar program is recommended as the configuration
for P4.</p>
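        <p>Formulas 1 and 2 together with the nearest-neighbor lookup can be sketched as follows (toy two-dimensional vectors with hypothetical values):
```python
import math

def dist(x, y):
    # Euclidean distance between two metric/embedding vectors (Formula 1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def sim(x, y):
    # Similarity derived from the distance (Formula 2)
    return 1 / (1 + dist(x, y))

def most_similar(target, programs):
    """Return the name of the training program closest to the target vector."""
    return max(programs, key=lambda name: sim(target, programs[name]))

# Toy vectors standing in for code metrics or embeddings (hypothetical values).
programs = {"P1": [1.0, 2.0], "P2": [4.0, 0.0], "P3": [1.5, 2.5]}
print(most_similar([1.4, 2.4], programs))  # P3
```
        </p>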
        <p>
          The final recommendation step aggregates the results. Since the FCH-collected configurations cover
only a small subset of all compiler settings, we generate multiple recommendations from the nearest
neighbors and combine them via majority vote (Table 5). Following Burgstaller et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], we set the
number of top configurations and nearest neighbors to 5, a choice we confirmed and applied to all
experiments. Thus, the final recommendation aggregates the 5 best configurations from the 5 nearest
neighbors.
        </p>
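        <p>The aggregation step can be sketched as a counting vote over the neighbors' fastest configurations (hypothetical runtimes; the function and variable names are illustrative):
```python
from collections import Counter

def recommend(neighbors, runtimes, k=5, top=5):
    """Majority-vote aggregation: take the `top` fastest configurations of the
    `k` nearest neighbor programs and recommend the most frequent one."""
    votes = Counter()
    for prog in neighbors[:k]:
        # sort this neighbor's configurations by measured runtime, ascending
        fastest = sorted(runtimes[prog], key=runtimes[prog].get)[:top]
        votes.update(fastest)
    return votes.most_common(1)[0][0]

# Hypothetical runtimes [s] of three configurations for two neighbor programs.
runtimes = {"P1": {"c1": 1.2, "c2": 0.9, "c3": 1.5},
            "P3": {"c1": 1.1, "c2": 0.8, "c3": 1.4}}
print(recommend(["P3", "P1"], runtimes, k=2, top=1))  # c2
```
        </p>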
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Evaluation</title>
      <p>
        In this section, we evaluate the use of code embeddings to recommend compiler optimization options
and whether they outperform code metric-based approaches like OSL [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] or its enhanced version, OSL
Normalized and Equalized (OSL N&amp;E) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Code embeddings represent code as fixed-sized numerical
vectors containing semantic and structural information [19]. They are usually employed by machine
learning or large language models when working with code. We test two embeddings: BGE
(https://huggingface.co/BAAI/bge-base-en-v1.5), a general text embedding, and RoBERTa
(https://huggingface.co/flax-sentence-embeddings/st-codesearch-distilroberta-base), a specialized code embedding (described in Table 4).
      </p>
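      <p>Extracting such an embedding with the sentence-transformers library might look like the following sketch (the model name is the BGE checkpoint referenced above; loading it downloads the weights):
```python
# Sketch of extracting a fixed-size code embedding with the
# sentence-transformers library.
def read_source(path):
    with open(path, encoding="utf-8") as f:
        return f.read()

def embed_program(path, model_name="BAAI/bge-base-en-v1.5"):
    """Return a fixed-size embedding vector for one source file.
    Loading the model downloads its weights on first use."""
    from sentence_transformers import SentenceTransformer  # third-party
    model = SentenceTransformer(model_name)
    return model.encode(read_source(path))

# Usage (requires the model download):
#   vec = embed_program("gemm.c")
#   vec has the same length regardless of the program's size
```
      </p>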
      <sec id="sec-5-1">
        <title>4.1. Experimental Setup</title>
        <p>
          Our evaluation uses GCC version 14.2.1 on a Lenovo ThinkPad P53s machine with an Intel i7-8665U
processor and 32GB memory running Linux 6.1.119-1-MANJARO. We use the PolyBench/C
Benchmark [25] for training and testing, which contains 30 programs written in C and is commonly used in
compiler autotuning evaluation settings [
          <xref ref-type="bibr" rid="ref13 ref14 ref7 ref8">7, 13, 14, 8</xref>
          ]. Due to the relatively small sample size, we apply
leave-one-out cross-validation [31]. Thus, each benchmark program in PolyBench was tested using a
model trained with the remaining 29. In order to visualize the performance more efficiently, we define
a speedup factor compared to GCC’s set of default optimizations O3 in Equation 3:
speedup = t_O3 / t_rec   (3)
        </p>
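        <p>Equation 3 and the leave-one-out splits can be sketched as:
```python
def speedup(t_o3, t_rec):
    # Equation 3: runtime under -O3 divided by runtime under the recommendation
    return t_o3 / t_rec

def leave_one_out(programs):
    """Yield (test_program, training_set) splits over the benchmark."""
    for p in programs:
        yield p, [q for q in programs if q != p]

# Hypothetical runtimes: 2.2 s under -O3 vs. 2.0 s with the recommended flags.
print(round(speedup(2.2, 2.0), 2))  # 1.1

splits = list(leave_one_out(["gemm", "atax", "bicg"]))
print(len(splits))  # 3
```
        </p>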
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Results</title>
        <p>t_O3 and t_rec represent the program’s runtime compiled using O3 and the recommended parameter
settings, respectively. A speedup of 1.1 indicates a 1.1 times faster runtime. Both
code embeddings outperformed the baseline OSL method, which achieved an average speedup of 1.126.
BGE reached 1.141, slightly below the enhanced OSL N&amp;E at 1.144. RoBERTa achieved the highest
average speedup of 1.191. Regarding frequency as a top performer, RoBERTa leads (Top 1 in 10/30 cases,
Top 2 in 19/30), followed by BGE narrowly outperforming OSL, while OSL N&amp;E comes last. These
results indicate that embeddings are efective for recommending compiler optimizations, especially
when using models like RoBERTa, which are specifically trained on code.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Future Work</title>
    </sec>
    <sec id="sec-7">
      <title>6. Conclusion</title>
      <p>The first results of using code embeddings in the context of lightweight compiler autotuning show
promise. However, in future work, we would like to expand the number of evaluated code embeddings,
especially further towards models specialized in coding or code manipulations, such as CodeBERT or
GraphCodeBERT, potentially improving our results further.</p>
      <p>In summary, we evaluated using code embeddings to recommend compiler optimizations. Our results
show that embeddings perform comparably to code metric-based approaches and surpass them in the
case of embeddings from models trained on code. The best-performing method leverages embeddings
from a RoBERTa model trained for code search, achieving an average runtime speedup factor of 1.191,
4.11% faster than the enhanced code metrics baseline. Major tasks of future work include the extension
of the dataset as well as the testing of additional embeddings.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This study was funded by GENRE, Austrian Research Promotion Agency (Grant No. 915086).</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>While preparing this work, the author(s) used ChatGPT-4 (GPT-4-turbo) and Grammarly to check
grammar and spelling and improve formulations. After using these tool(s)/service(s), the author(s)
reviewed and edited the content as needed and take(s) full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-10">
      <title>References (continued)</title>
      <p>[16] … software configuration spaces: A systematic literature review, Journal of Systems and Software
182 (2021) 111044.
[17] D. Garber, T. Burgstaller, A. Felfernig, V.-M. Le, S. Lubos, T. Tran, S. Polat-Erdeniz, Collaborative
recommendation of search heuristics for constraint solvers, in: ConfWS’23: 25th International
Workshop on Configuration, Sep 6–7, 2023, Málaga, Spain, 2023.
[18] J. Oh, P. Gazzillo, D. Batory, T-wise coverage by uniform sampling, in: Proceedings of the 23rd
International Systems and Software Product Line Conference - Volume A, SPLC ’19, Association for
Computing Machinery, New York, NY, USA, 2019, p. 84–87. URL: https://doi.org/10.1145/3336294.
3342359. doi:10.1145/3336294.3342359.
[19] Z. Chen, M. Monperrus, A literature study of embeddings on source code, arXiv preprint
arXiv:1904.03061 (2019).
[20] D. Benavides, P. Trinidad, A. Ruiz-Cortés, Automated reasoning on feature models, in: International</p>
      <p>Conference on Advanced Information Systems Engineering, Springer, 2005, pp. 491–503.
[21] M. Acher, H. Martin, J. A. Pereira, A. Blouin, J.-M. Jézéquel, D. E. Khelladi, L. Lesoil, O. Barais,
Learning very large configuration spaces: What matters for Linux kernel sizes, Ph.D. thesis, Inria
Rennes-Bretagne Atlantique, 2019.
[22] K. S. Meel, Counting, sampling, and synthesis: The quest for scalability., in: IJCAI, 2022, pp.</p>
      <p>5816–5820.
[23] J. Oh, D. Batory, R. Heradio, Finding near-optimal configurations in colossal spaces with statistical
guarantees, ACM Transactions on Software Engineering and Methodology 33 (2023) 1–36.
[24] Q. Plazar, M. Acher, G. Perrouin, X. Devroey, M. Cordy, Uniform sampling of sat solutions for
configurable systems: Are we there yet?, in: 2019 12th IEEE Conference on Software Testing,
Validation and Verification (ICST), IEEE, 2019, pp. 240–251.
[25] L.-N. Pouchet, Polybench: The polyhedral benchmark suite, http://www.cs.ucla.edu/~pouchet/
software/polybench/, 2012. Accessed: 2024.
[26] T. J. McCabe, A complexity measure, IEEE Transactions on software Engineering (1976) 308–320.
[27] M. H. Halstead, Elements of Software Science (Operating and programming systems series), Elsevier</p>
      <p>Science Inc., 1977.
[28] D. Spinellis, P. Louridas, M. Kechagia, The evolution of c programming practices: a study of
the unix operating system 1973–2015, in: Proceedings of the 38th International Conference on
Software Engineering, ICSE ’16, Association for Computing Machinery, New York, NY, USA, 2016,
p. 748–759. URL: https://doi.org/10.1145/2884781.2884799. doi:10.1145/2884781.2884799.
[29] J. Alves Pereira, M. Acher, H. Martin, J.-M. Jézéquel, Sampling effect on performance prediction of
configurable systems: A case study, in: Proceedings of the ACM/SPEC International Conference
on Performance Engineering, 2020, pp. 277–288.
[30] G. Jain, T. Mahara, K. N. Tripathi, A survey of similarity measures for collaborative filtering-based
recommender system, in: Soft Computing: Theories and Applications: Proceedings of SoCTA
2018, Springer, 2020, pp. 343–352.
[31] T.-T. Wong, Performance evaluation of classification algorithms by k-fold and leave-one-out cross
validation, Pattern recognition 48 (2015) 2839–2846.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Eeckhout</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fursin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Temam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>Deconstructing iterative optimization</article-title>
          ,
          <source>ACM Transactions on Architecture and Code Optimization (TACO) 9</source>
          (
          <issue>2012</issue>
          )
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>U.</given-names>
            <surname>Garciarena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Santana</surname>
          </string-name>
          ,
          <article-title>Evolutionary optimization of compiler flag selection by learning and exploiting flags interactions</article-title>
          ,
          <source>in: Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1159</fpage>
          -
          <lpage>1166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Gheorghita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Corporaal</surname>
          </string-name>
          , T. Basten,
          <article-title>Iterative compilation for energy reduction</article-title>
          ,
          <source>Journal of Embedded Computing</source>
          <volume>1</volume>
          (
          <year>2005</year>
          )
          <fpage>509</fpage>
          -
          <lpage>520</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Hoste</surname>
          </string-name>
          , L. Eeckhout,
          <article-title>Cole: compiler optimization level exploration</article-title>
          ,
          <source>in: Proceedings of the 6th annual IEEE/ACM international symposium on Code generation and optimization</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>165</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pérez Cáceres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pagnozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Franzin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Stützle</surname>
          </string-name>
          ,
          <article-title>Automatic configuration of gcc using irace</article-title>
          ,
          <source>in: Artificial Evolution: 13th International Conference</source>
          , Évolution Artificielle,
          <string-name>
            <surname>EA</surname>
          </string-name>
          <year>2017</year>
          , Paris, France,
          <source>October 25-27</source>
          ,
          <year>2017</year>
          ,
          <source>Revised Selected Papers 13</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>202</fpage>
          -
          <lpage>216</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Triantafyllis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vachharajani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vachharajani</surname>
          </string-name>
          ,
          <string-name>
            <surname>D. I. August</surname>
          </string-name>
          ,
          <article-title>Compiler optimization-space exploration</article-title>
          ,
          <source>in: International Symposium on Code Generation and Optimization</source>
          ,
          <year>2003</year>
          .
          <source>CGO</source>
          <year>2003</year>
          ., IEEE,
          <year>2003</year>
          , pp.
          <fpage>204</fpage>
          -
          <lpage>215</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Burgstaller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Garber</surname>
          </string-name>
          , V.
          <string-name>
            <surname>-M. Le</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Felfernig</surname>
          </string-name>
          ,
          <article-title>Optimization space learning: A lightweight, noniterative technique for compiler autotuning</article-title>
          ,
          <source>in: Proceedings of the 28th ACM International Systems and Software Product Line Conference</source>
          , SPLC '24,
          <string-name>
            <surname>Association</surname>
          </string-name>
          for Computing Machinery, New York, NY, USA,
          <year>2024</year>
          , p.
          <fpage>36</fpage>
          -
          <lpage>46</lpage>
          . URL: https://doi.org/10.1145/3646548.3672588. doi:
          <volume>10</volume>
          .1145/ 3646548.3672588.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Garber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lubos</surname>
          </string-name>
          , V.
          <string-name>
            <surname>-M. Le</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Felfernig</surname>
          </string-name>
          ,
          <article-title>Enhanced optimization space learning: Towards real-time compiler optimization</article-title>
          ,
          <source>in: 38th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems</source>
          ,
          <year>2025</year>
          . Accepted.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Ekstrand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Riedl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Konstan</surname>
          </string-name>
          , et al.,
          <article-title>Collaborative filtering recommender systems</article-title>
          ,
          <source>Foundations and Trends® in Human-Computer Interaction 4</source>
          (
          <year>2011</year>
          )
          <fpage>81</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Ashouri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Killian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cavazos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Palermo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Silvano</surname>
          </string-name>
          ,
          <article-title>A survey on compiler autotuning using machine learning</article-title>
          ,
          <source>ACM Computing Surveys (CSUR) 51</source>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bodin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kisuki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Knijnenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>O'Boyle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Rohou</surname>
          </string-name>
          ,
          <article-title>Iterative compilation in a non-linear optimisation space</article-title>
          ,
          <source>in: Workshop on profile and feedback-directed compilation</source>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Ashouri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mariani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Palermo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cavazos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Silvano</surname>
          </string-name>
          ,
          <article-title>COBAYN: Compiler autotuning framework using Bayesian networks</article-title>
          ,
          <source>ACM Transactions on Architecture and Code Optimization (TACO) 13</source>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Efficient compiler autotuning via Bayesian optimization</article-title>
          ,
          <source>in: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1198</fpage>
          -
          <lpage>1209</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Compiler autotuning through multiple phase learning</article-title>
          ,
          <source>ACM Trans. Softw. Eng. Methodol</source>
          . (
          <year>2024</year>
          ). URL: https://doi.org/10.1145/3640330. doi:10.1145/3640330. Just Accepted.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kennedy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Eberhart</surname>
          </string-name>
          ,
          <article-title>Particle swarm optimization</article-title>
          ,
          <source>in: Proceedings of ICNN'95-international conference on neural networks</source>
          , volume
          <volume>4</volume>
          , IEEE,
          <year>1995</year>
          , pp.
          <fpage>1942</fpage>
          -
          <lpage>1948</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Alves Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Acher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Jézéquel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Botterweck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ventresque</surname>
          </string-name>
          , Learning
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>