<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Tasks at the Scientific Document Understanding Workshop 2022</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amir Pouran Ben Veyseh</string-name>
          <email>apouranb@cs.uoregon.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicole Meister</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Franck Dernoncourt</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thien Huu Nguyen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer and Information Science, University of Oregon</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Electrical and Computer Engineering, Princeton University</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>Acronyms are short forms of longer phrases that facilitate communication, especially in technical domains that are replete with lengthy phrases. Due to the prevalence of acronyms in various types of documents, it is useful for document understanding systems to be capable of correctly processing acronyms in text. More specifically, a system should be capable of recognizing acronyms and their long-forms in text (i.e., acronym extraction) and of providing the correct meaning of an acronym in case its long-form is missing from the document (i.e., acronym disambiguation). Due to their importance, both acronym extraction (AE) and acronym disambiguation (AD) are studied in the literature. However, prior works are limited to English and to specific domains (e.g., biomedical). To address these limitations, we introduce new resources for AE and AD in multiple languages and domains. Moreover, we organized two shared tasks on multilingual and multi-domain AE and AD. This paper gives an overview of the proposed resources and the participating systems in both shared tasks.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>1. Introduction</title>
      <p>
        In technical documents, there are various domain-specific phrases that might be lengthy to repeat in every mention. As such, to facilitate communication, acronyms are used. More specifically, an acronym is defined as a shortened form of a longer phrase and consists of a few letters selected from the long phrase. Using acronyms saves space and could help the writers and readers; however, acronyms might also pose challenges for those who are not familiar with their meaning. Acronyms that are not defined in a technical document prevent the efficient communication of concepts due to a lack of clarity. Therefore, providing the meaning of acronyms is an important requirement for any technical document to avoid any confusion about the concepts mentioned in the document. Manual glossaries could be an option to provide these meanings; however, glossaries might be incomplete, and preparing them takes a considerable amount of time in case the number of acronyms in the document is huge. Thus, automatic processing of acronyms is highly demanded to facilitate writing and reading technical documents. (This paper describes the shared tasks of the second workshop on Scientific Document Understanding at AAAI.) Both AE and AD models could be used in downstream applications including information extraction [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] and question answering [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Beyond AE, other related work looked at definition extraction [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ].
      </p>
      <p>An automatic acronym understanding system should be able to recognize the mentions of acronyms and their long-forms in text, i.e., acronym extraction (AE). For instance, in the sentence “All input features are encoded by the Long Short-Term Memory (LSTM) network”, an acronym, i.e., “LSTM”, and a long-form, i.e., “Long Short-Term Memory”, are mentioned, and the system should be able to recognize both the acronym and the long-form in the sentence. This task is normally modeled as sequence classification. In particular, the input sentence is sent to a sequential model (e.g., a Recurrent Neural Network (RNN)) to predict the boundaries of the acronym and the long-form. Another task that an automatic acronym understanding system should be capable of is acronym disambiguation (AD). In this task, the goal is to find the correct meaning of an acronym in a sentence or paragraph while its long-form is missing from the context. For instance, in the sentence “The event is fully covered by CNN”, the meaning of the acronym “CNN” is not provided in the context; therefore, an AD system is needed to find the correct meaning. Note that an acronym might refer to multiple meanings. For instance, in the above-mentioned example, the acronym “CNN” can be expanded to “Cable News Network” or “Convolution Neural Network”. To correctly select the right meaning for an ambiguous acronym, an AD system should employ the context of the acronym and other information regarding the different meanings of the acronym.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Due to the importance of both AD and AE, there are various models proposed for each task in the literature. However, one limitation of the existing methods is that they are trained and evaluated on specific languages and domains. In particular, the majority of the existing AD and AE resources are limited to English and the biomedical or general domain. As such, the challenges of these tasks in other languages and domains are not adequately studied. To fill this gap, we present novel acronym extraction and disambiguation datasets that cover multiple languages and domains. In particular, for acronym extraction, we collect and manually annotate documents in the scientific and legal domains in six languages: English, Spanish, French, Danish, Persian, and Vietnamese. For the acronym disambiguation task, we collect and automatically annotate documents in the scientific and legal domains in three languages: English, Spanish, and French. We also conduct two shared tasks on the proposed datasets. In the Acronym Extraction shared task, 58 teams participated, and in the Acronym Disambiguation shared task, 44 teams participated. This paper presents the details of the datasets and an overview of the submitted systems for each task.</p>
      <p>Acronym Extraction and Disambiguation are well-known tasks for document understanding. In the last two decades, several methods have been proposed for AE or AD [11, 12, 13, 14, 15, 16, 17, 18]. Early works employed rule-based models. More specifically, a set of linguistic rules is defined to identify the acronyms and their long-forms in text. Schwartz and Hearst [13] proposed to identify the long-forms and their acronyms based on character match. That is, an acronym is labeled as the short-form of a phrase if there is a sequence of characters in the phrase that can form the acronym. Veyseh et al. [19] extended Schwartz's rules by identifying the acronyms that are not accompanied by their long-forms. Later, feature engineering methods and deep learning have also been employed for acronym extraction [20, 21]. Acronym disambiguation has also been extensively studied in the literature. This task can be modeled as a supervised classification task [22, 23, 24, 25, 26, 27]. Also, zero-shot models, in which the long-forms of the acronyms in the test set are not seen by the models, have been proposed [19]. Despite all progress so far on AD and AE, the majority of the prior works are trained and evaluated on limited domains and languages. In particular, English and biomedicine are the predominant language and domain for these tasks. This is a shortcoming, as the challenges for AD and AE in other domains and languages are not adequately studied. To address this limitation, in this work, we propose large-scale acronym extraction and disambiguation datasets in multiple languages and domains.</p>
      <p>3. Acronym Extraction</p>
      <p>We collect data in the two domains of legal and scientific documents for AE annotation. For each domain, documents in different languages are required. As such, for the legal domain, we use the United Nations Parallel Corpus (UNPC) [29] and the Europarl corpus [30]. The UNPC corpus contains official records in six languages, whereas the Europarl corpus comprises the proceedings of the European Parliament in European languages. To suit our annotation budget and diversify the resulting dataset, we select documents in four languages from the two corpora (i.e., English, French, and Spanish in UNPC, and Danish in Europarl) for our AE annotation. In addition, for the scientific domain, we use publicly available papers and M.S./Ph.D. theses in the field of computer science for AE annotation. Specifically, we collect the papers published in the ACL anthology of natural language processing research for English. Also, for typologically different languages, we crawl public computer science theses in Persian and Vietnamese.</p>
      <p>To annotate the data, we hire freelancers from Upwork. The workers are fluent in the target language and have experience in data annotation. For a sentence in a language, we only annotate long-forms that are in the same language as the sentence. Afterwards, for each language, we retain the two candidates who pass and achieve the highest results in our designed test for AE as our official annotators. Next, the two annotators in each language independently perform AE annotation for the sampled sentences of that language. Finally, the two annotators discuss to resolve any differences in the annotation, hence creating the final version of our MACRONYM dataset [31]. The dataset statistics and agreement scores are presented in Table 1.</p>
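      <p>The character-match rule of Schwartz and Hearst [13], discussed in the related work above, admits a compact sketch. The function below is a simplified reconstruction for illustration, not the authors' exact algorithm: it scans the candidate phrase right-to-left for the acronym's letters in order, requiring the first letter to start a word.

```python
def matches_long_form(acronym: str, candidate: str) -> bool:
    """Right-to-left character match in the spirit of Schwartz and
    Hearst: every letter of the acronym must appear, in order, in the
    candidate phrase, and the first letter must start a word."""
    chars = acronym.lower().replace(".", "")
    text = candidate.lower()
    i, j = len(chars) - 1, len(text) - 1
    while i >= 0:
        # Skip candidate characters until the current acronym letter
        # is found; the first letter must additionally be word-initial.
        while j >= 0 and (text[j] != chars[i]
                          or (i == 0 and j > 0 and text[j - 1].isalnum())):
            j -= 1
        if j < 0:
            return False
        i, j = i - 1, j - 1
    return True

print(matches_long_form("LSTM", "Long Short-Term Memory"))  # True
print(matches_long_form("CNN", "Cable News Network"))       # True
print(matches_long_form("CNN", "European Parliament"))      # False
```
      </p>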
      <p>Moreover, in addition to the shared tasks presented in this work, SDU@AAAI-21 also hosted two shared tasks on acronym identification and disambiguation. In those shared tasks, the winning solutions employed deep learning models based on the BERT transformer to encode the input sentence and identify the correct meaning of the ambiguous acronyms [28].</p>
      <p>We conduct a shared task on Acronym Extraction at the SDU@AAAI-22 workshop. In this shared task, 58 teams participated, among which 9 teams submitted their systems in the test phase. Table 2 shows the performance of the participating systems in the test phase.</p>
      <sec id="sec-2-1">
        <title>Domain &amp; Language</title>
        <p>
          Among all participating teams, “WENGSYX” achieved the highest score on four language-domain pairs (Spanish and Danish in the legal domain, and Persian and Vietnamese in the scientific domain). This model [
          <xref ref-type="bibr" rid="ref9">32, 33</xref>
          ] employs an adversarial training strategy. In particular, two methods are employed for extracting the acronyms and long-forms: (1) Sequence labeling: the task is modeled as sequence classification in the BIO format; to this end, a BiLSTM+CRF model is employed. (2) Span detection: in this method, the acronym and long-form spans are directly predicted by the transformer-based model. “shihanmax” achieved the best performance on the English test sets for both the scientific and legal domains, and “nithishkannen” has the highest score on the French legal domain. This model [
          <xref ref-type="bibr" rid="ref10">34</xref>
          ] employs a character-level BERT model to address the out-of-vocabulary issues that are restricting for acronym extraction.
        </p>
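        <p>The BIO formulation used by the sequence-labeling approach can be illustrated with the LSTM example from the introduction. The tokenization and the B-short/B-long label names below are assumptions for illustration, not the shared task's exact tag inventory:

```python
# BIO tags for the example sentence: the acronym span gets
# B-short/I-short tags and the long-form span gets B-long/I-long.
tokens = ["All", "input", "features", "are", "encoded", "by", "the",
          "Long", "Short-Term", "Memory", "(", "LSTM", ")", "network"]
tags   = ["O", "O", "O", "O", "O", "O", "O",
          "B-long", "I-long", "I-long", "O", "B-short", "O", "O"]

def decode_spans(tokens, tags):
    """Recover the annotated (label, text) spans from BIO tags."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], [tok]]
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]

print(decode_spans(tokens, tags))
# [('long', 'Long Short-Term Memory'), ('short', 'LSTM')]
```
        </p>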
        <p>From Table 2, it is evident that the performance of the models in the scientific domain is lower than their performance in the legal domain. This performance drop indicates the challenges of the scientific domain. Also, the lower performance of the models in non-English languages, specifically Persian and Vietnamese, reveals the challenging nature of AE in non-English languages.</p>
        <sec id="sec-2-1-1">
          <title>4. Acronym Disambiguation</title>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>-</title>
      <p>In addition to AE, an acronym understanding system should be able to find the correct meaning of the acronyms that are not accompanied by their long-forms. To evaluate the performance of systems on this task, we automatically construct a dataset for the acronym disambiguation task. More specifically, given the annotations of the AE dataset, for every acronym in a document that is expanded to a long-form, we employed the provided long-form as the label for any other mention of that acronym in the given document (i.e., the one-meaning-per-discourse assumption). Using this approach, we construct datasets for English (legal and scientific domains), French (legal), and Spanish (legal). The statistics of the dataset are presented in Table 4. In this shared task, “WENGSYX” achieves the highest score on all languages and domains.</p>
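      <p>The one-meaning-per-discourse construction described above can be sketched as follows. The record layout (text/acronym/long_form fields) and the helper name are hypothetical, chosen only to illustrate how an annotated long-form is propagated to the other mentions of the same acronym in a document:

```python
def build_ad_examples(documents):
    """Turn AE annotations into AD examples: propagate each acronym's
    annotated long-form to its unexpanded mentions in the same document
    (one meaning per discourse)."""
    examples = []
    for doc in documents:
        # Map acronym -> long-form from sentences where both are annotated.
        meanings = {s["acronym"]: s["long_form"]
                    for s in doc if s.get("long_form")}
        # Every other mention of a known acronym becomes an AD example.
        for s in doc:
            if not s.get("long_form") and s["acronym"] in meanings:
                examples.append({"sentence": s["text"],
                                 "acronym": s["acronym"],
                                 "label": meanings[s["acronym"]]})
    return examples

doc = [
    {"text": "We train a Convolutional Neural Network (CNN).",
     "acronym": "CNN", "long_form": "Convolutional Neural Network"},
    {"text": "The CNN outperforms the baseline.", "acronym": "CNN"},
]
print(build_ad_examples([doc]))
# [{'sentence': 'The CNN outperforms the baseline.', 'acronym': 'CNN',
#   'label': 'Convolutional Neural Network'}]
```
      </p>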
      <sec id="sec-3-1">
        <title>WENGSYX</title>
        <p>
          In this model [
          <xref ref-type="bibr" rid="ref15 ref16">39, 40</xref>
          ], a multi-choice approach is employed for acronym disambiguation. In particular, the input sentence containing the ambiguous acronym, along with all possible expansions, is provided to the model via different channels. Each expansion is scored separately. Finally, a unified model is employed to select the expansion with the highest score. From Table 4, it is evident that the models obtain a higher score on English Scientific compared to the other splits (i.e., the legal test sets). This higher performance indicates that acronyms are less ambiguous in the scientific domain than in the legal domain.
        </p>
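        <p>The multi-choice setup described above can be sketched with a toy stand-in: each candidate expansion is scored against the sentence and the best-scoring one is selected. The word-overlap scorer below is only a placeholder for the learned transformer scorers used by the actual system:

```python
def disambiguate(sentence, candidates):
    """Score every candidate expansion against the sentence context
    and return the best one. The score here is plain word overlap,
    a placeholder for a learned per-candidate scorer."""
    context = set(sentence.lower().split())

    def score(expansion):
        return len(context & set(expansion.lower().split()))

    return max(candidates, key=score)

sentence = "The convolution layers of the CNN are pretrained"
candidates = ["Cable News Network", "Convolution Neural Network"]
print(disambiguate(sentence, candidates))  # Convolution Neural Network
```
        </p>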
        <p>Using the prepared dataset, we conduct a shared task on acronym disambiguation at the SDU@AAAI-22 workshop. In this shared task, 44 teams participated, among which 11 teams submitted their systems in the test phase. Table 3 shows the performance of the participating teams in the test phase.</p>
        <sec id="sec-3-1-1">
          <title>5. Conclusion</title>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>-</title>
      <p>In this work, we presented two new acronym understanding resources in multiple languages and domains.</p>
      <p>In particular, we presented a manually annotated acronym extraction dataset in the two domains of scientific and legal documents and in six languages: English, Spanish, French, Danish, Persian, and Vietnamese. Moreover, we presented a novel automatically annotated dataset for acronym disambiguation in the scientific and legal domains and in English, Spanish, and French. Using the proposed datasets, we conducted two shared tasks on acronym extraction and disambiguation. For the two tasks, 9 and 11 teams, respectively, participated in the test phase across different domains and languages.</p>
      <p>The performance of the winning systems, especially in
non-English languages and legal domain, indicates the
necessity of further research on this task.
</p>
      <p>[8] L. Espinosa-Anke, S. Schockaert, Syntactically aware neural architectures for definition extraction, in: NAACL-HLT, 2018.</p>
      <p>[9] Y. Jin, M.-Y. Kan, J.-P. Ng, X. He, Mining scientific terms and their definitions: A study of the ACL Anthology, in: EMNLP, 2013.</p>
      <p>[10] V. D. Lai, A. P. B. Veyseh, F. Dernoncourt, T. H. Nguyen, SemEval-2022 task 13: Symlink: Linking mathematical symbols to their descriptions, in: Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2022.</p>
      <p>[11] Y. Park, R. J. Byrd, Hybrid text mining for finding abbreviations and their definitions, in: Proceedings of the 2001 conference on empirical methods in natural language processing, 2001.</p>
      <p>[12] J. D. Wren, H. R. Garner, Heuristics for identification of acronym-definition patterns within text: towards an automated construction of comprehensive acronym-definition dictionaries, Methods of information in medicine 41 (2002) 426–434.</p>
      <p>[13] A. S. Schwartz, M. A. Hearst, A simple algorithm for identifying abbreviation definitions in biomedical text, in: Biocomputing 2003, World Scientific, 2002, pp. 451–462.</p>
      <p>[14] E. Adar, Sarad: A simple and robust abbreviation dictionary, Bioinformatics 20 (2004) 527–533.</p>
      <p>[15] D. Nadeau, P. D. Turney, A supervised learning approach to acronym identification, in: Conference of the Canadian Society for Computational Studies of Intelligence, Springer, 2005, pp. 319–329.</p>
      <p>[16] H. Ao, T. Takagi, Alice: an algorithm to extract abbreviations from medline, Journal of the American Medical Informatics Association 12 (2005) 576–586.</p>
      <p>[17] K. Kirchhoff, A. M. Turner, Unsupervised resolution of acronyms and abbreviations in nursing notes using document-level context models, in: Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis, 2016, pp. 52–60.</p>
      <p>[18] A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran, T. H. Nguyen, What does this acronym mean? introducing a new dataset for acronym identification and disambiguation, arXiv preprint arXiv:2010.14678 (2020).</p>
      <p>[19] A. P. B. Veyseh, F. Dernoncourt, W. Chang, T. H. Nguyen, Maddog: A web-based system for acronym identification and disambiguation, arXiv preprint arXiv:2101.09893 (2021).</p>
      <p>[20] C.-J. Kuo, M. H. Ling, K.-T. Lin, C.-N. Hsu, Bioadi: a machine learning approach to identifying abbreviations and definitions in biological literature, in: BMC bioinformatics, volume 10, Springer, 2009, p. S7.</p>
      <p>[21] J. Liu, C. Liu, Y. Huang, Multi-granularity sequence labeling model for acronym expansion identification, Information Sciences 378 (2017) 462–474.</p>
      <p>[22] Y. Wang, K. Zheng, H. Xu, Q. Mei, Clinical word sense disambiguation with interactive search and classification, in: AMIA Annual Symposium Proceedings, volume 2016, American Medical Informatics Association, 2016, p. 2062.</p>
      <p>[23] Y. Li, B. Zhao, A. Fuxman, F. Tao, Guess me if you can: Acronym disambiguation for enterprises, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Melbourne, Australia, 2018, pp. 1308–1317. URL: https://www.aclweb.org/anthology/P18-1121. doi:10.18653/v1/P18-1121.</p>
      <p>[24] Y. Wu, J. Xu, Y. Zhang, H. Xu, Clinical abbreviation disambiguation using neural word embeddings, in: Proceedings of BioNLP 15, 2015, pp. 171–176.</p>
      <p>[25] R. Antunes, S. Matos, Biomedical word sense disambiguation with word embeddings, in: International Conference on Practical Applications of Computational Biology &amp; Bioinformatics, Springer, 2017, pp. 273–279.</p>
      <p>[26] J. Charbonnier, C. Wartena, Using word embeddings for unsupervised acronym disambiguation, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico, USA, 2018. URL: https://www.aclweb.org/anthology/C18-1221.</p>
      <p>[27] M. R. Ciosici, T. Sommer, I. Assent, Unsupervised abbreviation disambiguation, arXiv preprint arXiv:1904.00929 (2019).</p>
      <p>[28] A. P. B. Veyseh, F. Dernoncourt, T. H. Nguyen, W. Chang, L. A. Celi, Acronym identification and disambiguation shared tasks for scientific document understanding, arXiv preprint arXiv:2012.11760 (2020).</p>
      <p>[29] M. Ziemski, M. Junczys-Dowmunt, B. Pouliquen, The united nations parallel corpus v1.0, in: Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016, Portorož, Slovenia, May 23-28, 2016, 2016. URL: http://www.lrec-conf.org/proceedings/lrec2016/summaries/1195.html.</p>
      <p>[30] P. Koehn, Europarl: A parallel corpus for statistical machine translation, in: Proceedings of Machine Translation Summit X: Papers, MT Summit 2005, Phuket, Thailand, September 13-15, 2005, 2005. URL: https://aclanthology.org/2005.mtsummit-papers.11.</p>
      <p>[31] A. P. B. Veyseh, N. Meister, S. Yoon, R. Jain, F. Dernoncourt, T. H. Nguyen, Macronym: A large-scale dataset for multilingual and multi-domain acronym extraction, arXiv preprint arXiv:2202.09694 (2022).</p>
      <p>[32] X. Huang, B. Li, F. Xia, Y. Weng, A novel initial reminder framework for acronym extraction, in: SDU@AAAI-22, 2022.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Gcdt: A global context enhanced deep transition architecture for sequence labeling</article-title>
          ,
          <source>in: ACL</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pouran Ben Veyseh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dou</surname>
          </string-name>
          ,
          <article-title>Graph based neural networks for event factuality prediction using syntactic and semantic structures</article-title>
          ,
          <source>in: ACL</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C. F.</given-names>
            <surname>Ackermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Beller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Boxwell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. G.</given-names>
            <surname>Katz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Summers</surname>
          </string-name>
          ,
          <article-title>Resolution of acronyms in question answering systems</article-title>
          ,
          <year>2020</year>
          . US Patent
          <volume>10</volume>
          ,
          <issue>572</issue>
          ,
          <fpage>597</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. P. B.</given-names>
            <surname>Veyseh</surname>
          </string-name>
          ,
          <article-title>Cross-lingual question answering using common semantic space</article-title>
          ,
          <source>in: Proceedings of TextGraphs-10: the workshop on graph-based methods for natural language processing</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. P. B.</given-names>
            <surname>Veyseh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dernoncourt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <article-title>A joint model for definition extraction with syntactic connection and semantic consistency</article-title>
          .,
          <source>in: AAAI</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>9098</fpage>
          -
          <lpage>9105</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Spala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dernoncourt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dockhorn</surname>
          </string-name>
          ,
          <article-title>SemEval-2020 task 6: Definition extraction from free text with the DEFT corpus</article-title>
          ,
          <source>in: Proceedings of the Fourteenth Workshop on Semantic Evaluation</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Spala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dernoncourt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dockhorn</surname>
          </string-name>
          ,
          <article-title>DEFT: A corpus for definition extraction in free- and semi-structured text</article-title>
          ,
          <source>in: Proceedings of the 13th Linguistic Annotation Workshop</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Espinosa-Anke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schockaert</surname>
          </string-name>
          ,
          <article-title>Syntactically aware neural architectures for definition extraction</article-title>
          ,
          <source>in: NAACL-HLT</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Psg: Prompt-based sequence generation for acronym extraction</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kannen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sheth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chandra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          , Cabace:
          <article-title>Injecting character sequence information and domain knowledge for enhanced acronym and long-form extraction</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [35]
          <string-name>
            <surname>Balouchzahi</surname>
          </string-name>
          , Vitman, Shashirekha, Sidorov, Gelbukh,
          <article-title>Acronym identification using transformers and flair framework</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Acronym extraction with hybrid strategies</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Usama</surname>
          </string-name>
          <string-name>
            <surname>Yaseen</surname>
          </string-name>
          ,
          <article-title>Domain adaptive pretraining for multilingual acronym extraction</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Saadany</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zilio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kanojia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Orasan</surname>
          </string-name>
          ,
          <article-title>An ensemble approach to acronym extraction using transformers</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Adbcmm: Acronym disambiguation by building counterfactuals and multilingual mixing</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Simclad: A simple framework for contrastive learning of acronym disambiguation</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>G.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shim</surname>
          </string-name>
          ,
          <article-title>T5 encoder based acronym disambiguation with weak supervision</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Multilingual acronym disambiguation with multichoice classification</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Prompt-based model for acronym disambiguation via negative sampling</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Anaconda: Adversarial training with in-trust loss in acronym disambiguation</article-title>
          ,
          <source>in: SDU@AAAI-22</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>