<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Improving LLM-based Code Completion Using LR Parsing-Based Candidates</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Md Monir Ahammod Bin Atique</string-name>
          <email>monir024@jnu.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kwanghoon Choi</string-name>
          <email>kwanghoon.choi@jnu.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Isao Sasano</string-name>
          <email>sasano@sic.shibaura-it.ac.jp</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hyeon-Ah Moon</string-name>
          <email>hamoon@sogang.ac.kr</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Chonnam National University</institution>
          ,
          <addr-line>Gwangju 61186</addr-line>
          ,
          <country country="KR">South Korea</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Shibaura Institute of Technology</institution>
          ,
          <addr-line>Tokyo</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Sogang University</institution>
          ,
          <addr-line>Seoul</addr-line>
          ,
          <country country="KR">South Korea</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Programmers often use syntax completion and code suggestion features. Our methodology enhances code completion by combining structural candidate information from LR parsing with LLMs. These structural candidates are used to compose prompts so that ChatGPT can predict actual code under the specified structure. Tested on Small Basic and C benchmarks, this approach offers textual suggestions rather than just structural ones, showing nearly 50% prediction accuracy for Small Basic programs. While effective for Small Basic, we report that challenges remain with C11 programs.</p>
      </abstract>
      <kwd-group>
<kwd>Syntax Completion</kwd>
        <kwd>Large Language Model</kwd>
        <kwd>LR parsing</kwd>
        <kwd>Integrated Development Environments</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        For example, on a request for code completion on (part of) the prefix ‘For i = 1’, the parsed symbol sequence is ‘For ID =
Expr’, a sequence of terminal and nonterminal symbols describing the beginning of the for loop.
Sasano and Choi’s method [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] can automatically uncover a candidate, ‘To Expr OptStep
CRStmtCRs EndFor’, that completes the rest of the for loop by the production ‘Stmt → For ID = Expr To Expr
OptStep CRStmtCRs EndFor’. Consequently, IDEs will respond with this candidate to the request
for code completion on ‘For i = 1’. Their continuing research [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] proposed a ranking method for choosing
the most likely candidate when more than one candidate is possible
for a prefix. It pre-investigates the frequencies of occurrences of the candidates in existing
open-source projects.
      </p>
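      <p>The candidate computation described above can be sketched as follows. This is an illustrative simplification, not the authors' LR parsing-based implementation: the single hard-coded production and the prefix-matching function are assumptions made only for this example.</p>

```python
# Illustrative sketch: given a grammar production and the symbol sequence
# already matched by the parsed prefix, the structural candidate is the
# unmatched remainder of the production's right-hand side.

# A production from the Small Basic grammar used in the paper:
# Stmt -> For ID = Expr To Expr OptStep CRStmtCRs EndFor
PRODUCTION = ["For", "ID", "=", "Expr",
              "To", "Expr", "OptStep", "CRStmtCRs", "EndFor"]

def candidate_for(parsed_prefix):
    """Return the rest of the production after the matched prefix symbols."""
    n = len(parsed_prefix)
    if PRODUCTION[:n] == parsed_prefix:
        return PRODUCTION[n:]
    return None  # this prefix does not match the production

# 'For i = 1' is parsed into the symbol sequence 'For ID = Expr'.
rest = candidate_for(["For", "ID", "=", "Expr"])
print(" ".join(rest))  # To Expr OptStep CRStmtCRs EndFor
```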
      <p>
        These methods [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] have clear advantages: the suggested candidates are guaranteed to be syntactically
correct, the ranking can be customized for an individual software project, and the method can be
implemented in a programming-language-agnostic way.
      </p>
      <p>However, the candidates suggested by these methods are limited to the form of terminal and nonterminal
symbols. After choosing a candidate, programmers must manually edit it into code text, which
diminishes productivity. Determining such a code text for a candidate is beyond LR parsing-based
syntax analysis.</p>
      <p>
        In this work, we study how a Large Language Model (LLM) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] can complement these methods. Given a prefix
text, the LR parsing-based method first suggests a candidate, and the LLM produces a code
completion satisfying the structure of the suggested candidate for the given prefix text. For example,
our system can automatically compose a prompt to the LLM as
      </p>
      <p>This is the incomplete Small Basic programming language code:
1: For i = 1 To 5
2: TextWindow.Write("User" + i + ", enter name: ")
3: name[i] = TextWindow.Read()
4: EndFor
5: TextWindow.Write("Hello ")
6: For i = 1 {To Expr OptStep CRStmtCRs EndFor}
Complete the {To Expr OptStep CRStmtCRs EndFor} part of the code.</p>
      <p>Just show your answer in place of {To Expr OptStep CRStmtCRs EndFor}.
where the suggested structural candidate is placed inside the braces. The LLM then successfully returned
exactly what we expected:</p>
      <p>6: To 5
7: TextWindow.Write(name[i] + ", ")
8: EndFor</p>
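      <p>The prompt composition step above can be sketched as follows. The template wording mirrors the example in this section; the function name and the exact string assembly are our own illustrative assumptions, not the exact implementation.</p>

```python
# Minimal sketch of assembling a prompt from a numbered code prefix and an
# LR-suggested structural candidate, in the format shown above.

def compose_prompt(prefix_lines, candidate):
    """Number the prefix lines and place the structural candidate in braces."""
    numbered = "\n".join(f"{i}: {line}" for i, line in enumerate(prefix_lines, 1))
    hole = "{" + candidate + "}"
    return (
        "This is the incomplete Small Basic programming language code:\n"
        f"{numbered} {hole}\n"
        f"Complete the {hole} part of the code.\n"
        f"Just show your answer in place of {hole}."
    )

prompt = compose_prompt(
    ["For i = 1 To 5",
     'TextWindow.Write("User" + i + ", enter name: ")',
     "name[i] = TextWindow.Read()",
     "EndFor",
     'TextWindow.Write("Hello ")',
     "For i = 1"],
    "To Expr OptStep CRStmtCRs EndFor",
)
print(prompt)
```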
      <p>
        Thus the two approaches complement each other. The LR parsing-based analytic approach can
precisely specify the syntactic code structure to complete, while the LLM-based statistical approach can
predict the code text under the specified structure. According to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the expected candidate was found, on average, within the top 1.8 suggested candidates in
the SmallBasic programs and within the top 3.15 suggested candidates in the C11 programs. These evaluation results imply that
the lower-ranked structural candidates need not be considered by the LLM: composing prompts using the top
suggested structural candidates will be effective in instructing the LLM to exclude the bottom-ranked ones for code
completion.
      </p>
      <p>To the best of our knowledge, this is the first attempt to guide an LLM using prompts that utilize
candidate structural information obtained from LR-parsing. We report ongoing work in this direction.</p>
      <p>Our contributions are as follows.</p>
      <p>Firstly, we propose a code completion prediction method that combines LR-parsing-based ranking of
candidate skeletons with Large Language Model (LLM)-based fleshing out of those skeletons.</p>
      <p>Secondly, we have set up an environment to evaluate the proposed method and report initial results
using SmallBasic and C11 benchmarks.</p>
      <p>Section 2 introduces our system and presents initial evaluation results. Section 3 compares our work
with existing research. Section 4 concludes the paper with future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. An Overview of Our System and Its Evaluation</title>
      <p>
        In this work, we focus on one aspect of this system: automatically composing prompts to the LLM
using structural candidates offered by the LR-based method is feasible and is an advancement over the
previous work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in that the system can now suggest textual candidates rather than structural ones.
      </p>
      <p>
        Under this goal, we design an experiment to evaluate the effectiveness of using ChatGPT for code
completion suggestions when structural candidates (composed of terminals and nonterminals) are offered to
guide these suggestions. Our proposed system can be assessed by addressing the following two research
questions (RQ):
• RQ1: Does the proposed system offer textual (actual) candidate suggestions, with the aid of an LLM
such as ChatGPT, that are beneficial in introductory programming?
• RQ2: Is it reasonable to implement the system as a language-parametric tool?
To address these research questions, our methodology includes the following steps.
Selection of Programming Languages: We selected two programming languages, Microsoft SmallBasic
(MSB) and C11, for our experiments, as in the previous work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. These languages are popular choices
for introductory programming. Data Collection: The testing set for SmallBasic was obtained from
its community. It consists of 27 programs totaling 155 lines taken from the well-known MSB tutorial.
The test set for C11 comprises 106 programs (11,218 lines in total) that are solutions from
the well-known book on the C programming language by Kernighan and Ritchie. Prefix Extraction:
Prefixes were collected from the source code files, along with cursor position information. This
information was obtained from the candidates’ database produced by the lexical analysis in the LR parsing-based method [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Prompt Engineering: Using the collected prefix and structural candidate data, we crafted prompts for
ChatGPT. In this experiment, we selected the ‘gpt-3.5-turbo-0125’ model for its better performance on
code completion. This information was then fed into the ChatGPT prompt to ask for actual candidates
to substitute for our structural candidates. We compared the answers provided by ChatGPT with
the correct answers from our database. Evaluation: The responses from ChatGPT were evaluated using
well-known techniques: SacreBLEU, a popular method for assessing large language
models, and SequenceMatcher similarity. The SacreBLEU score measures the n-gram (sequences of n
items, typically words or characters) similarity between the reference code sequence and the generated
code sequence. It counts how many n-grams in the generated code sequence match (token-by-token)
n-grams in the reference code sequence. SequenceMatcher is a class available in the Python module
named “difflib”. It compares the similarity between two sequences of strings (in terms of characters)
by identifying the best alignment between them: given two sequences, it finds the longest
matching subsequences present in both of them. Here, we used the parameter isjunk=None so that no elements
are ignored. The data set as well as the developed software are all available in the public repository at
https://github.com/monircse061/ChatGPT-Code-Completion-Work.
      </p>
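      <p>The two evaluation measures can be sketched as follows. SequenceMatcher is the actual difflib class used in the experiment; the n-gram precision function below is a simplified illustrative stand-in for SacreBLEU (which we used via its own package), shown only to make concrete what the score measures.</p>

```python
# Sketch of the two evaluation measures on a reference/generated pair.
from collections import Counter
from difflib import SequenceMatcher

def ngram_precision(reference, generated, n=1):
    """Clipped n-gram precision of `generated` against `reference`
    (a simplified stand-in for one component of the SacreBLEU score)."""
    tokens_ref = reference.split()
    tokens_gen = generated.split()
    ref = Counter(zip(*[tokens_ref[i:] for i in range(n)]))
    gen = Counter(zip(*[tokens_gen[i:] for i in range(n)]))
    if not gen:
        return 0.0
    overlap = sum(min(count, ref[g]) for g, count in gen.items())
    return overlap / sum(gen.values())

reference = 'To 5 TextWindow.Write(name[i] + ", ") EndFor'
generated = 'To 5 TextWindow.Write(name[i] + ", ") EndFor'

print(ngram_precision(reference, generated, n=1))           # 1.0 for an exact match
# isjunk=None: no characters are ignored during alignment.
print(SequenceMatcher(None, reference, generated).ratio())  # 1.0 for an exact match
```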
      <p>We present a summary of the experimental results for both MSB and C11, depicted in Table 1.
For MSB, we experimented with 27 programs: for each program, we ran our system on each
structural candidate, calculated the evaluation metric values, and then averaged the precision over the
whole program. This process was done for every program. Finally, we calculated the mean precision
over the 27 programs in terms of SacreBLEU and sequence matcher similarity. On average, our system
predicts the textual code suggestion with over 45% accuracy for each testing program when using
SacreBLEU as the evaluation metric. Precision is similar, at nearly 45%, when sequence matcher
similarity is taken into account. A similar process was used for C11. For the 106 C11 programs, the
average SacreBLEU score is 21.463%, considerably lower than for MSB; sequence matcher
similarity is nearly the same for C11.</p>
      <p>To show the effectiveness of guidance by a structural candidate, we discuss a case representing the
best prediction of our system. In the MSB experiment case depicted in Figure 3, line 2600 marks
the parse state and cursor position, followed by the next few lines (2602 to 2612), which provide the
prompt for ChatGPT. Lines 2603 to 2608 contain the prefix code. Subsequently, a
candidate structure appears in line 2609: ‘To Expression OptStep CRStmtCRs EndFor’. It means that
the actual candidate should be ‘To 5 \n TextWindow . Write ( name [ i ] + ", " ) \n EndFor’, which is
shown in line 2618. Line 2614 gives the time taken from the query to the ChatGPT response, which
is 0.6903 seconds. For this candidate structure, the response generated by ChatGPT is highly accurate.
The precision at the unigram level (1-gram) is 100%, as seen at line 2621, and the other metric also
shows a satisfactory result (line 2622). This example demonstrates that our candidate suggestion plays a
crucial role in guiding ChatGPT’s responses. Each terminal and nonterminal component contributes
to achieving an accurate result from ChatGPT.
Based on the evidence provided, we can answer Research Question 1 in the affirmative.
Using ChatGPT with LR parsing-based structural candidates is effective in providing code completion
suggestions for introductory programming languages, particularly for MSB. Our system gives correct
suggestions with minimal prefixes (hints), which is notable. This indicates that the system can be
beneficial in educational contexts where MSB is used. However, improvements are needed to increase
precision, especially for more complex languages like C11. The precision for C11 programs is
low due to short candidate structures like ’[’ and ’;’ and complex, hard-to-infer structures. Additionally,
predicting the next token or line of code with a minimal prefix is challenging, especially in long files.</p>
      <p>In answer to the second research question, based on the successful application of our system to
two programming languages, we can claim that our code completion system is language-agnostic: the
system can be instantiated for any programming language.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Related Work</title>
      <p>
        Various studies to date have applied large code bases and/or machine learning
to code completion. One is by Svyatkovskiy et al. from Microsoft, who introduced a system named
IntelliCode Compose [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This system leverages GPT-C, a variant of OpenAI’s GPT-2 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], trained on a vast
dataset of program source code. It is designed to generate sequences of tokens that form syntactically
correct language constructs, such as statements containing local variables, method names, and keywords,
for languages including C#. Another study [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] explored identifier completion with ranked candidates.
The authors sought to improve the efficiency of the completion process: rather than relying on prefix
matching, as used in many completion systems, they introduced subsequence matching, where user-input
sequences of characters are compared to names containing them, even if they are non-consecutive.
Recently, a study [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] delved into method invocation and field access completion. Nguyen et al.
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] combined program analysis and a statistical language model for completing a partially-input statement or
suggesting the statement that immediately follows the current statement if it is a complete one. Gabel
et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] first observed the regularity of software code mentioned above: there are infinitely many
syntactically valid statements, but a much smaller, possibly even finite, number of practically
useful statements. Liu et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] presented a non-autoregressive model for concurrently computing
candidates, each of which is a line of code starting at the cursor position. The 10 lines of code immediately
before the current empty line are given to the completion system when programmers write code, and,
as training data, the 10 lines of code immediately before every line are given together with the current
line. They also use information about tokens such as keywords, identifiers, and operators.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Future Work</title>
      <p>
        In this research, we introduced a method for automatically composing prompts to the LLM using
structural candidates offered by the LR-based method and assessed the method using two programming
languages. Compared to the previous work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], this system can now suggest textual candidates rather
than structural candidates. By using structural candidates in the prompts, the system can effectively
instruct the LLM to exclude the bottom-ranked structural candidates for code completion.
      </p>
      <p>There are many topics for future work. A few important ones are to build an IDE for usability
evaluation, to measure the effectiveness of structural candidates in the prompts to the LLM, and to
compare the prediction performance of our system with that of other systems, particularly those based on
Large Language Models.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was supported by Innovative Human Resource Development for Local Intellectualization
program through the Institute of Information &amp; Communications Technology Planning &amp; Evaluation
(IITP) grant funded by the Korea government (MSIT) (IITP-2023-RS-2023-00256629). This work was
partially supported by the Korea Internet &amp; Security Agency (KISA) - Information Security College
Support Project. Also, this work was partially supported by JSPS KAKENHI under Grant Number
23K11053.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Aho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sethi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Ullman</surname>
          </string-name>
          , Compilers - principles, techniques, and tools,
          <source>2nd edition</source>
          , Addison Wesley,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sasano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
<article-title>A text-based syntax completion method using LR parsing</article-title>
          ,
<source>in: Proceedings of the 2021 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation, PEPM 2021</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2021</year>
          , p.
          <fpage>32</fpage>
          -
          <lpage>43</lpage>
. URL: https://doi.org/10.1145/3441296.3441395. doi:10.1145/3441296.3441395.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Sasano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
<article-title>A text-based syntax completion method using LR parsing and its evaluation</article-title>
          ,
          <source>Science of Computer Programming</source>
          (
          <year>2023</year>
          ) 102957. URL: https://www.sciencedirect.com/science/article/pii/S0167642323000394. doi:10.1016/j.scico.2023.102957.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hwang</surname>
          </string-name>
          , H. Moon,
          <string-name>
            <surname>I. Sasano</surname>
          </string-name>
          ,
<article-title>Ranked syntax completion with LR parsing</article-title>
          ,
          <source>in: Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing</source>
, SAC '24, Association for Computing Machinery, New York, NY, USA,
          <year>2024</year>
          , p.
          <fpage>1242</fpage>
          -
          <lpage>1251</lpage>
. URL: https://doi.org/10.1145/3605098.3635944. doi:10.1145/3605098.3635944.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Child</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Luan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Amodei</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Sutskever</surname>
          </string-name>
          ,
<article-title>Language models are unsupervised multitask learners</article-title>
          , https://paperswithcode.com/paper/language-models-are-unsupervised-multitask,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Svyatkovskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sundaresan</surname>
          </string-name>
          , Intellicode compose:
          <article-title>Code generation using transformer</article-title>
          ,
          <source>in: Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering</source>
          , ESEC/FSE 2020,
          <article-title>Association for Computing Machinery</article-title>
          , New York, NY, USA,
          <year>2020</year>
          , p.
          <fpage>1433</fpage>
          -
          <lpage>1443</lpage>
. URL: https://doi.org/10.1145/3368089.3417058. doi:10.1145/3368089.3417058.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ishikawa</surname>
          </string-name>
          ,
          <article-title>Scope-aware code completion with discriminative modeling</article-title>
          ,
          <source>Journal of Information Processing</source>
          <volume>27</volume>
          (
          <year>2019</year>
          )
          <fpage>469</fpage>
          -
          <lpage>478</lpage>
. doi:10.2197/ipsjjip.27.469.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Jiang</surname>
          </string-name>
          , H. Liu,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , H. Mei,
          <article-title>Heuristic and neural network based prediction of project-specific api member access</article-title>
          ,
          <source>IEEE Transactions on Software Engineering</source>
          <volume>48</volume>
          (
          <year>2022</year>
          )
          <fpage>1249</fpage>
          -
          <lpage>1267</lpage>
. doi:10.1109/TSE.2020.3017794.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. N.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Combining program analysis and statistical language model for code statement completion</article-title>
          ,
          <source>in: Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering, ASE '19</source>
          , IEEE Press,
          <year>2020</year>
          , p.
          <fpage>710</fpage>
          -
          <lpage>721</lpage>
. URL: https://doi.org/10.1109/ASE.2019.00072. doi:10.1109/ASE.2019.00072.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gabel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <article-title>A study of the uniqueness of source code</article-title>
          ,
          <source>in: Proceedings of the Eighteenth ACM SIGSOFT International Symposium on Foundations of Software Engineering</source>
, FSE '10, Association for Computing Machinery, New York, NY, USA,
          <year>2010</year>
          , p.
          <fpage>147</fpage>
          -
          <lpage>156</lpage>
. URL: https://doi.org/10.1145/1882291.1882315. doi:10.1145/1882291.1882315.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jin</surname>
          </string-name>
          , H. Liu,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hao</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. Zhang,</surname>
          </string-name>
          <article-title>Non-autoregressive line-level code completion</article-title>
          ,
          <source>ACM Trans. Softw. Eng. Methodol</source>
          . (
          <year>2024</year>
). URL: https://doi.org/10.1145/3649594. doi:10.1145/3649594. Just accepted.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>