<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Parlez-vous Picto? A Transformer-Based Approach for Text-to-Picto and Speech-to-Picto Translation in French</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maja J. Hjuler</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Indira Fabre</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science, Queensland University of Technology</institution>
          ,
          <addr-line>Brisbane QLD 4000</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Télécom Paris</institution>
          ,
          <addr-line>91120 Palaiseau</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University Grenoble Alpes</institution>
          ,
          <addr-line>CNRS, Grenoble INP, LIG, 38000 Grenoble</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This study was conducted in the context of the ToPicto task of ImageCLEF 2025. It investigates the performance of a Transformer-based approach for Text-to-Picto and Speech-to-Picto translation from French using the pre-trained Google-T5 model fine-tuned on the provided dataset. For the Text-to-Picto task, the T5-large version of the model achieved scores of 93.0, 95.7, and 3.4 for SacreBLEU, METEOR, and PictoER, respectively. To solve the Speech-to-Picto task, this model was combined with a pre-trained ASR model and gave promising results. These findings indicate potential for developing tools to facilitate communication between AAC users and others.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural Language Processing</kwd>
        <kwd>Transformer model</kwd>
        <kwd>Google-T5</kwd>
        <kwd>French text translation</kwd>
        <kwd>Pictogram generation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Large Language Models (LLMs) have revolutionized various text- and speech-based tasks, including
speech recognition, language translation, and augmentative communication systems [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ]. In the
context of AAC, LLMs based on the Transformer architecture enable more accurate and context-aware
language processing. Unlike traditional statistical models, Transformers utilize self-attention mechanisms
to capture long-range dependencies, improving speech-to-pictogram translation and next-pictogram
prediction. The first study on automatic translation of French speech into a sequence of pictograms
was presented by Vaschalde et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Their methodology adapts the Text-to-Picto [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] system by
integrating four modules: an ASR system, a simplification system, a word sense disambiguation model,
and a module to display the sequence of pictograms. The automatic translation of speech into pictogram
terms (Speech-to-Picto) has the potential to improve communication for individuals with language
impairments [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. For example, this technology can facilitate communication from a non-AAC user
to an AAC user, or it can help individuals with speech disabilities learn how to use pictograms for
self-expression.
      </p>
      <p>
        Macaire et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] investigated two approaches for Speech-to-Picto (S2P) translation: (1) the cascade
approach combines an Automatic Speech Recognition (ASR) system with a machine translation system,
and (2) the end-to-end approach, which tailors a speech translation system to perform direct translation
from an audio sequence. Propicto-orféo, described in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], is used for training after preprocessing by
splitting into training, validation, and test sets (80/10/10 split). Propicto-orféo [13] contains 230 hours
of French speech resources with speech units aligned to pictograms. The team created another dataset,
Propicto-eval, with speech transcriptions from 62 speakers, and used a subset of 100 sentences for
the final performance evaluation. Based on BLEU scores [14], the cascade approach outperforms the
end-to-end approach, achieving scores of 62.5 and 77.2 on the Propicto-orféo and Propicto-eval
datasets, respectively, compared to 60.2 and 54.5 for the end-to-end S2P approach.
      </p>
      <p>Previous years’ submissions to ImageCLEF have also explored the use of LLMs to solve the
Text-to-Picto task. Anand et al. [15] implemented a Transformer model utilizing embeddings from CamemBERT
[16], a French BERT model, fused with a contrastive learning technique. Elliah et al. [17] fine-tuned
pre-trained translation models (GPT-2 [18] and Helsinki-BERT [19]) for Text-to-Picto conversion, utilizing
tokenization and lexical simplification. Similarly, Koushik et al. [20] fine-tuned Google-T5 [21] for the
task of translating French text into pictogram sequences. Their proposed model obtained a PictoER
score of 13.9, a BLEU score of 74.4, and a METEOR score of 87.1.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset</title>
      <p>The dataset used in this study is sourced from the CommonVoice v.15 corpus [22] and the Orféo
corpus [23]. CommonVoice is a multilingual, publicly available voice dataset recorded by users on the
Common Voice platform (http://voice.mozilla.org/). It is intended for speech technology research and
development and is based on text from various public domain sources. Only French language data
were used from this dataset. Orféo is a corpus consisting of both spoken and written French samples.
It contains interactions between adults, adults and children, as well as between children. It has the
advantage of being representative of the interactions observed between caregivers and individuals who
rely on pictograms due to language impairments. Training, validation, and test splits consist of 20,177,
1,208, and 2,901 utterances, respectively. For the Speech-to-Picto task, a corresponding audio sequence
associated with a pictogram sequence is provided (S2P src in Table 1). For the Text-to-Picto translation
task, a corresponding sequence of terms associated with a pictogram sequence is provided, derived
from the speech transcription (T2P src in Table 1).
The target (tgt) of each utterance is the sequence of pictogram terms (tokens), together with a list of
pictogram identifiers linked to each pictogram term (the list has the same length as the target output).</p>
      <p>Example entry (Table 1): ID: common_voice_fr_21455110; audio: common_voice_fr_21455110.wav;
T2P src: “il a découvert deux astéroïdes et une comète” (“he discovered two asteroids and a comet”);
tgt: “passé il inventer deux pluton et une comète”; pictogram identifiers: [9839, 6480, 6531, 2628,
10299, 11399, 8474, 2711].</p>
      <p>ARASAAC pictograms are used as a reference for pictogram translation. Images can be obtained via
the ARASAAC API using https://api.arasaac.org/v1/pictograms/{pictogram_ref_number}.</p>
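      <p>As a small sketch of this, the following downloads one pictogram by identifier; it assumes, as the URL pattern above indicates, that the endpoint returns the image file directly.</p>
      <preformat>
import requests

def fetch_pictogram(ref_number, path):
    # Download the ARASAAC pictogram image for a given identifier.
    url = f"https://api.arasaac.org/v1/pictograms/{ref_number}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    with open(path, "wb") as f:
        f.write(response.content)

# e.g. the first identifier of the example entry above
fetch_pictogram(9839, "9839.png")
      </preformat>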
    </sec>
    <sec id="sec-4">
      <title>4. Approach</title>
      <sec id="sec-4-1">
        <title>4.1. Text-to-Picto</title>
        <p>This research addresses the Text-to-Picto task using a text-to-text approach based on the
pre-trained Google T5 model, which is fine-tuned on the provided corpus. The output sequence of French
terms must correspond to a sequence of French pictogram terms and comply with the specifications of
AAC. T5 is an encoder-decoder Transformer available in various sizes, ranging from 60 million to 11
billion parameters [21]; the checkpoints used here are google-t5/t5-small, google-t5/t5-base, and
google-t5/t5-large. Its ability to handle a wide range of NLP tasks by treating them all as text-to-text
problems makes it an attractive choice for Text-to-Picto translation. Unlike other SOTA models, such
as BERT [24] and GPT-2 [18], which are primarily designed for specific objectives such as masked or
causal language modeling, T5’s unified text-to-text framework allows for greater flexibility and
adaptability across tasks.</p>
        <p>
          Fine-tuning is performed using the Seq2SeqTrainer class from the HuggingFace Transformers framework [25],
with code adapted from Macaire et al. [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] (macairececile/speech-to-pictograms). Table 2 gives an overview of the model training and GPU
resources. Other hyperparameters for training include:
• Batch size: 8
• Learning rate: 2 · 10<sup>−5</sup>
• Weight decay: 0.01
        </p>
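        <p>A minimal sketch of this training setup is given below; the checkpoint name and output directory are illustrative, and train_tokenized and valid_tokenized are the tokenized splits prepared as sketched after the next paragraph.</p>
        <preformat>
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "google-t5/t5-large"  # or google-t5/t5-small, google-t5/t5-base
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

args = Seq2SeqTrainingArguments(
    output_dir="t5-topicto",          # illustrative output path
    per_device_train_batch_size=8,    # batch size: 8
    learning_rate=2e-5,               # 2 · 10^-5
    weight_decay=0.01,
    num_train_epochs=15,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_tokenized,    # tokenized splits, see below
    eval_dataset=valid_tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
        </preformat>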
        <p>Both the source data and target pictogram sequences are tokenized using the pre-trained tokenizer
corresponding to the size of the T5 model. Padding and truncation are used to enforce sequence
lengths of 128 tokens; the tokenizer itself is not fine-tuned.</p>
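        <p>Under these assumptions (a dataset dict with src and tgt text columns, as in Table 1), the preprocessing can be sketched as:</p>
        <preformat>
def preprocess(batch):
    # Tokenize the French source and the pictogram-term target together;
    # both are padded/truncated to 128 tokens by the pre-trained tokenizer.
    return tokenizer(
        batch["src"],               # French utterance
        text_target=batch["tgt"],   # pictogram-term sequence
        max_length=128,
        padding="max_length",
        truncation=True,
    )

train_tokenized = dataset["train"].map(preprocess, batched=True)
valid_tokenized = dataset["validation"].map(preprocess, batched=True)
        </preformat>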
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Speech-to-Picto</title>
        <p>
          For the Speech-to-Picto task, the speech is first converted to text using two models of the Whisper
family [26], openai/whisper-small (244 million parameters) and openai/whisper-large (1,550 million
parameters), before applying the same Text-to-Picto approach described above. We directly use
the Whisper models for inference; hence, no model training is involved in this process. The choice to
implement a cascade approach (Speech-to-Text followed by Text-to-Picto) rather than directly
fine-tuning on the audio was primarily dictated by the limited time available for the project. Furthermore,
the work by Macaire et al. [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] suggests that the cascade approach is superior to end-to-end models
that directly translate audio into pictogram tokens.
        </p>
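        <p>A sketch of the cascade at inference time, assuming tokenizer and model are the fine-tuned Text-to-Picto artifacts from Section 4.1 (the audio file name is illustrative):</p>
        <preformat>
import torch
from transformers import pipeline

# Step 1: transcribe the French audio with a pre-trained Whisper model.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
transcript = asr("common_voice_fr_21455110.wav")["text"]

# Step 2: translate the transcript into pictogram terms with fine-tuned T5.
inputs = tokenizer(transcript, return_tensors="pt",
                   max_length=128, truncation=True)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
        </preformat>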
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Evaluation Methodology</title>
      <p>The evaluation is conducted using SacreBLEU [14], METEOR [27], and the Picto-term Error Rate
(PictoER), derived from the Word Error Rate (WER) [28].</p>
      <p>SacreBLEU is a standardized version of the BLEU score, which measures the overlap of
n-grams between the candidate and reference sequences. METEOR (Metric for Evaluation of Translation with Explicit
ORdering) provides a more nuanced evaluation by incorporating synonymy and stemming, capturing
additional semantic information that is not encoded in the BLEU score. PictoER is tailored for evaluating
translations involving pictogram terms. Instead of counting errors at the word level, it
counts errors at the token level, where each token is linked to an ARASAAC pictogram.</p>
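      <p>As an illustration, the first two metrics can be computed with the evaluate library, and a PictoER-style score can be approximated as a word-error rate over pictogram-term tokens using jiwer; this is a sketch of our reading of the metrics, not the official task scorer.</p>
      <preformat>
import evaluate
from jiwer import wer

predictions = ["passé il inventer deux pluton et une comète"]
references = ["passé il inventer deux pluton et une comète"]

sacrebleu = evaluate.load("sacrebleu")
meteor = evaluate.load("meteor")
print(sacrebleu.compute(predictions=predictions,
                        references=[[r] for r in references])["score"])
print(meteor.compute(predictions=predictions,
                     references=references)["meteor"])

# PictoER: error rate over pictogram terms, analogous to WER over words.
print(100 * wer(references, predictions))
      </preformat>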
      <p>It is worth noting that this evaluation method does not account for cases where different words or
phrases correspond to the same pictogram. For instance, the French words “épuisé”, “exténué”, and
“fatigué” all convey similar meanings and are mapped to the same pictogram (displayed in Figure 1).
However, under the current evaluation approach, substituting one of these synonyms for another
would result in a lower score, despite semantic equivalence. The same limitation applies to numbers:
whether expressed as digits (e.g., “3”) or in written form (e.g., “trois”), they are represented by the same
pictogram, yet such variations are still penalized in the scoring.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Results and Discussion</title>
      <p>The model performance is evaluated using the three different metrics by comparing the predicted
pictogram sequence to the target (tgt).</p>
      <sec id="sec-6-1">
        <title>6.1. Text-to-Picto</title>
        <p>The results obtained by fine-tuning T5-base for 15 epochs are lower than those reported by Koushik
et al. [20], who achieved scores of 13.9, 74.4, and 87.1 for PictoER, BLEU, and METEOR, respectively,
after only 6 epochs. Comparable results are achieved after additional training up to 20 epochs. This
discrepancy may be attributed to differences in the data used. Using the T5-large version of the model
significantly improved performance, yielding scores of 93.0, 95.7, and 3.4 for SacreBLEU, METEOR, and
PictoER, respectively. However, the scores obtained on the validation and test sets are substantially
lower, indicating limited generalization to unseen data. The loss curves presented in Appendix A.3 show
a slight increase in validation loss after 10 epochs, a trend that continues until 20 epochs, potentially
indicating overfitting. Figure 2 visualizes the performance metrics evaluated on the training and
validation sets for each epoch during T5-large fine-tuning. Performance appears to saturate around
15 epochs, with little improvement in evaluation metrics thereafter. A comparison of checkpoints at
epochs 19 and 20 of the same training run (Table 2) supports this observation; however, the difference
is insignificant compared to the variation between different training runs. An analysis of uncertainty
in performance metrics, such as averaging over training runs, would be necessary to confirm this
observation.</p>
        <p>Some representative examples are selected for qualitative analysis to highlight behavioral differences
between model sizes and inherent challenges associated with text-to-pictogram translation for this
particular dataset. The pictogram sequences are generated from predicted tokens using the Hugging
Face platform (https://huggingface.co/spaces/ToPicto/Visualize-Pictograms). An extensive analysis can be found in Appendix B.</p>
        <p>Key improvements observed with the T5-large model, compared to smaller versions, include a reduced
tendency to generate words that do not correspond to any existing pictogram. Both T5-base and
T5-large demonstrate an enhanced ability to correctly translate past tense and proper nouns of names and
places, which are often translated by a generic pictogram in the target. Moreover, sentences containing
numbers are challenging for smaller models but are translated more accurately by T5-large.</p>
        <p>Furthermore, we investigate the models’ ability to adapt to the in-domain training vocabulary,
specifically the pictogram terms encountered during training. By "training vocabulary," we refer to
the unique words present in all sentences within the training set. We differentiate between the source
vocabulary (words) and the target vocabulary (pictogram terms). For instance, the training set contains
20,177 sentences with 23,731 words in the source vocabulary and 4,354 pictogram terms in the target
vocabulary. Similarly, the validation set includes 1,208 sentences with 3,558 words in the source
vocabulary and 1,502 pictogram terms in the target vocabulary. Notably, the target vocabularies for the
training and validation sets share 1,432 pictogram terms.</p>
        <p>The T5 models are pre-trained on a vast amount of diverse text data; therefore, we expect the models
to incorporate words seen during their pre-training in their predictions. During fine-tuning, the models
should learn a new vocabulary of pictogram terms. We estimate the model’s ability to do so by counting
the words in the mutual vocabulary between the target sentences and the model predictions. We
find that the T5-small and T5-base models include between 2,700 and 2,800 of the pictogram terms
encountered during training in their predictions. In comparison, the T5-large model appears to better
adopt the target vocabulary seen during training, with approximately 3,500 mutual pictogram terms.
This suggests that T5-large has a greater capacity to learn and utilize the target vocabulary effectively.</p>
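        <p>These overlap counts can be reproduced with a simple set intersection over whitespace-separated tokens; train_targets and predictions below are hypothetical lists of target and predicted sentences.</p>
        <preformat>
def vocabulary(sentences):
    # Unique whitespace-separated tokens across all sentences.
    return {token for sentence in sentences for token in sentence.split()}

train_vocab = vocabulary(train_targets)   # pictogram terms seen in training
pred_vocab = vocabulary(predictions)      # terms produced by the model
mutual = train_vocab.intersection(pred_vocab)
print(f"{len(mutual)} of {len(train_vocab)} training pictogram terms reused")
        </preformat>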
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Speech-to-Picto</title>
        <p>To solve the Speech-to-Picto task, we combine a pre-trained ASR model with the best model fine-tuned
for Text-to-Picto translation. No training is involved in this process; instead, we directly use the Whisper
models for inference and hence do not make use of the training and validation sets for this task. As
shown in Table 4, two different models from the Whisper family are used to produce transcripts from
the audio of the test data. As expected, the larger Whisper model outperforms the smaller one, most
likely due to higher-quality transcriptions.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion and Future Work</title>
      <p>In conclusion, the fine-tuned Google T5-large model exhibits strong performance in translating French
text into appropriate sequences of pictograms. These promising results contribute to efforts to bridge the
gap between AAC users and the broader society, facilitating effective communication. However, there
is still room to improve the model’s ability to generalize to unseen data and to reduce the generation of
non-pictogram words.</p>
      <p>Additionally, the Speech-to-Text-to-Picto solution, which utilizes Whisper to produce transcripts
and the fine-tuned T5 model for translation, shows potential. Further refinement is needed to ensure
accurate translations from spoken language to pictogram sequences.</p>
      <p>In this study, the maximum number of tokens generated by the model was set to 64, since the longest
sentences in the test set contained 62 words. Increasing this parameter could potentially improve
predictions, depending on how tokens are generated with the T5 tokenizer. To enhance generalization
on unseen data, techniques such as regularization or dropout could be employed, or the model could be
trained on more diverse datasets. Furthermore, models fine-tuned for Text-to-Picto translation must
adapt to a specialized vocabulary of pictogram terms. Future work could focus on further investigation
and optimization of this in-domain adaptation.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>Co-funded by the European Union under the Marie Skłodowska-Curie Grant Agreement No 101081465
(AUFRANDE). Views and opinions expressed are, however, those of the author(s) only and do not
necessarily reflect those of the European Union or the Research Executive Agency. Neither the European
Union nor the Research Executive Agency can be held responsible for them.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and Grammarly to check grammar and
spelling, paraphrase, and reword. After using these tools, the authors reviewed and edited the content
as needed and take full responsibility for the content of the publication.</p>
    </sec>
    <sec id="sec-10">
      <title>A. Loss Curves and T5-small and T5-base Model Performances</title>
      <sec id="sec-10-1">
        <title>A.1. T5-small</title>
        <p>A.2. T5-base</p>
      </sec>
      <sec id="sec-10-2">
        <title>A.3. Loss Curves</title>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>B. Detailed Analysis of Generated Pictogram Sequences</title>
      <p>This section presents the analysis of results obtained on the validation set with the different model sizes
across four cases of linguistic analysis.</p>
      <sec id="sec-11-1">
        <title>B.1. Case 1 - Generation of Non-pictogram Words</title>
        <p>Case 1 (Table 5 and Figure 6) demonstrates the tendency of models to generate words that do not
correspond to any existing pictogram. Although this tendency diminishes with increasing model size,
the example in Table 5 shows that the word "constater" is still produced by the T5-large model, despite
lacking an associated pictogram.</p>
        <p>Table 5 (Case 1). src: “non non ça non en france j’ai constaté à plusieurs reprises que on ne
savait même pas dire si on était belge”; tgt: “non celle-là non au france passé me à plusieurs
une_autre_fois prise_murale que nous même dire non si nous être belgique”; T5-small: “non celle-là
non au france passé me constater à plusieurs reprise que nous dire non si nous être”; T5-base: “non
celle-là non au france passé me constater à plusieurs reprise que nous avoir même dire non si nous
être belgique”; T5-large: “non celle-là non au france passé me constater à plusieurs une_autre_fois
prise_murale que nous savoir non dire si nous être belgique”.</p>
      </sec>
      <sec id="sec-11-2">
        <title>B.2. Case 2 - Handling Past Tense</title>
        <p>A limitation of the T5-small model was observed in its handling of past tense. As illustrated in Case 2
(Table 6 and Figure 7), although all generated pictograms are valid, the temporal aspect is lost in the
output of T5-small. Both T5-base and T5-large correctly retain this temporal information.</p>
      </sec>
      <sec id="sec-11-3">
        <title>B.3. Case 3 - Handling Names and Places</title>
        <p>A specific feature of pictogram translation is that only some cities and countries have their own
pictograms; otherwise, a city is translated by the generic pictogram "ville" and a person by the generic
pictogram "haut_du_corps", which represents the upper body of a person. This rule is
generally understood across all models. However, as illustrated in Case 3A (Table 7 and Figure 8),
T5-small incorrectly interprets the city name "Saint-Paul" as a person, resulting in the pictogram
"haut_du_corps". Both T5-base and T5-large provide the correct translation in this instance.</p>
        <p>Table 7 (Case 3A). src: “son président est joseph sinimalé maire de saint-paul”; tgt: “son président
être haut_du_corps maire de ville”; T5-small: “son président être haut_du_corps maire de
haut_du_corps”; T5-base: “son président être haut_du_corps maire de ville”; T5-large: “son président
être haut_du_corps maire de ville”.</p>
        <p>Case 3B (Table 8 and Figure 9) presents a more complex scenario involving the proper noun
"Musikhochschule", the German word for "music school". The correct translation corresponds to
the generic pictogram "association_à_but_non_lucratif" (non-profit organization). Both T5-small and
T5-base translate this term into the French "école_musicale", which, although semantically accurate,
lacks a corresponding pictogram and is thus not a valid output. In contrast, T5-large successfully
generates the appropriate pictogram. Nevertheless, the final two pictograms in T5-large’s output are
missing, indicating incomplete translation.</p>
      </sec>
      <sec id="sec-11-4">
        <title>B.4. Case 4 - Handling Numbers</title>
        <p>The final example, Case 4 (Table 9 and Figure 10), highlights a scenario in which even the T5-large model
struggles to produce an accurate translation, particularly in handling numerical data and addresses.
Although there is a noticeable improvement in translation quality with increasing model size in this
example, one pictogram remains incorrectly translated in the T5-large output.</p>
        <p>Table 9 (Case 4). src: “quatorze t square des tilleuls trente et un huit cent vingt pibrac”; tgt: “14
ville 30 et un 8 vingt ville”; T5-large: “14 de 30 et un 8 vingt ville”; the remaining outputs, “quatorze t
carré de ville trente et un huit cent vingt pibrac” and “quelqu’un toi carré de trente et un 8 20”, come
from the smaller models.</p>
        <p>Figures 8, 9, and 10 display the corresponding pictogram sequences for (a) tgt, (b) T5-small,
(c) T5-base, and (d) T5-large.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Syriopoulou-Delli</surname>
          </string-name>
          , E. Gkiolnta,
          <article-title>Effectiveness of different types of augmentative and alternative communication (AAC) in improving communication skills and in enhancing the vocabulary of children with ASD: a review</article-title>
          ,
          <source>Review Journal of Autism and Developmental Disorders</source>
          <volume>9</volume>
          (
          <year>2021</year>
          ).
          doi:10.1007/s40489-021-00269-4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          , L. Bloch,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Pakull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Damm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bracke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Prokopchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Karpenka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Macaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esperança-Rodier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kiesel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <article-title>Overview of ImageCLEF 2024: Multimedia retrieval in medical applications</article-title>
          , in:
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024)</source>
          , Lecture Notes in Computer Science, Springer, Grenoble, France,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Macaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fabre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <article-title>Overview of the 2025 imagecleftopicto task - investigating the generation of pictogram sequences from text and speech</article-title>
          , in: CLEF2025 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-C.</given-names>
            <surname>Stanciu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-G.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Prokopchuk</surname>
          </string-name>
          , L.-D. Ştefan, M.-G. Constantin,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Damm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M. G.</given-names>
            <surname>Pakull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bracke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Pelka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Eryilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Becker</surname>
          </string-name>
          , W.-W. Yim,
          <string-name>
            <given-names>N.</given-names>
            <surname>Codella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Novoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Malvehy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dimitrov</surname>
          </string-name>
          ,
          R. J. Das, Z. Xie, H. M. Shan, P. Nakov, I. Koychev, S. A. Hicks, S. Gautam, M. A. Riegler, V. Thambawita, P. Halvorsen, D. Fabre, C. Macaire, B. Lecouteux, D. Schwab, M. Potthast, M. Heinrich, J. Kiesel, M. Wolter, B. Stein,
          <article-title>Overview of ImageCLEF 2025: Multimedia retrieval in medical, social media and content recommendation applications</article-title>
          , in:
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the 16th International Conference of the CLEF Association (CLEF 2025)</source>
          , Lecture Notes in Computer Science, Springer, Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei
          ,
          <article-title>Language models are few-shot learners</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <article-title>Pre-training of deep bidirectional transformers for language understanding</article-title>
          , in:
          <source>NAACL-HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference</source>
          <volume>1</volume>
          (
          <year>2019</year>
          )
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baevski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Auli</surname>
          </string-name>
          ,
          <article-title>wav2vec 2.0: A framework for self-supervised learning of speech representations</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Vaschalde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Trial</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esperança-Rodier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <article-title>Automatic pictogram generation from speech to help the implementation of a mediated communication</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vandeghinste</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sevens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Schuurman</surname>
          </string-name>
          ,
          <string-name>
            <surname>F. Van Eynde</surname>
          </string-name>
          ,
          <article-title>Translating text into pictographs</article-title>
          ,
          <source>Natural Language Engineering</source>
          <volume>23</volume>
          (
          <year>2017</year>
          )
          <fpage>217</fpage>
          -
          <lpage>244</lpage>
          . doi:10.1017/S135132491500039X.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Macaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esperança-Rodier</surname>
          </string-name>
          ,
          <article-title>Towards Speech-to-Pictograms Translation</article-title>
          ,
          <source>in: Interspeech</source>
          <year>2024</year>
          , ISCA, Kos / Greece, Greece,
          <year>2024</year>
          , pp.
          <fpage>857</fpage>
          -
          <lpage>861</lpage>
          . URL: https://hal.science/hal-04687483. doi:10.21437/Interspeech.2024-490.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Macaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Arrigo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lemaire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Esperança-Rodier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lecouteux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwab</surname>
          </string-name>
          , A Multimodal French Corpus of Aligned Speech, Text, and Pictogram Sequences for Speech-to-Pictogram Machine Translation, in: LREC-COLING 2024, Turin, Italy, 2024. URL: https://hal.science/hal-04534234.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] C. Macaire, C. Dion, L. Ormaechea, J. Arrigo, C. Lemaire, E. Esperança-Rodier, B. Lecouteux, D. Schwab, Propicto, 2024. URL: https://hdl.handle.net/11403/propicto/v1.1, ORTOLANG (Open Resources and Tools for Language) – www.ortolang.fr.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] M. Post, A call for clarity in reporting BLEU scores, in: O. Bojar, R. Chatterjee, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, A. J. Yepes, P. Koehn, C. Monz, M. Negri, A. Névéol, M. Neves, M. Post, L. Specia, M. Turchi, K. Verspoor (Eds.), Proceedings of the Third Conference on Machine Translation: Research Papers, Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 186–191. URL: https://aclanthology.org/W18-6319/. doi:10.18653/v1/W18-6319.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] B. Anand, T. J, S. Sai R, C. P, M. TT, SSN-MLRG at Text to Picto 2024: A BERT-Based Approach for Mapping French Sentences to Pictogram Terms, in: Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum, volume 3740 of CEUR Workshop Proceedings, CEUR-WS.org, 2024. URL: https://ceur-ws.org/Vol-3740/paper-135.pdf. Notebook for the ImageCLEF Lab at CLEF 2024.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] L. Martin, B. Muller, P. J. Ortiz Suárez, Y. Dupont, L. Romary, É. de la Clergerie, D. Seddah, B. Sagot, CamemBERT: a tasty French language model, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 2020. URL: http://dx.doi.org/10.18653/v1/2020.acl-main.645. doi:10.18653/v1/2020.acl-main.645.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] A. Elliah, A. Narayanan P, B. S, P. Mirunalini, Text-to-picto using lexical simplification, in: Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum, volume 3740 of CEUR Workshop Proceedings, CEUR-WS.org, 2024. URL: https://ceur-ws.org/Vol-3740/paper-146.pdf. Notebook for the ImageCLEF Lab at CLEF 2024.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, Language models are unsupervised multitask learners, 2019.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] M. Zampieri, P. Nakov, Y. Scherrer, Natural language processing for similar languages, varieties, and dialects: A survey, Natural Language Engineering 26 (2020) 595–612. doi:10.1017/S1351324920000492.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] A. Koushik, J. Morrison S, P. Mirunalini, J. A. R K, Text-to-picto using lexical simplification, in: Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum, volume 3740 of CEUR Workshop Proceedings, CEUR-WS.org, 2024. URL: https://ceur-ws.org/Vol-3740/paper-146.pdf. Notebook for the ImageCLEF Lab at CLEF 2024.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu, Exploring the limits of transfer learning with a unified text-to-text transformer, 2023. URL: https://arxiv.org/abs/1910.10683. arXiv:1910.10683.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] R. Ardila, M. Branson, K. Davis, M. Kohler, J. Meyer, M. Henretty, R. Morais, L. Saunders, F. Tyers, G. Weber, Common voice: A massively-multilingual speech corpus, in: Proceedings of the Twelfth Language Resources and Evaluation Conference, 2020, pp. 4218–4222. URL: https://aclanthology.org/2020.lrec-1.520/.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] C. Benzitoun, J.-M. Debaisieux, H.-J. Deulofeu, Le projet ORFÉO : un corpus d’étude pour le français contemporain, Corpus (2016). URL: http://journals.openedition.org/corpus/2936. doi:10.4000/corpus.2936. Online since 15 January 2017, accessed 24 May 2025.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, 2019. URL: https://arxiv.org/abs/1810.04805. arXiv:1810.04805.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, A. M. Rush, Transformers: State-of-the-art natural language processing, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Association for Computational Linguistics, Online, 2020, pp. 38–45. URL: https://www.aclweb.org/anthology/2020.emnlp-demos.6.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, I. Sutskever, Robust speech recognition via large-scale weak supervision, 2022. URL: https://arxiv.org/abs/2212.04356. doi:10.48550/ARXIV.2212.04356.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] S. Banerjee, A. Lavie, METEOR: An automatic metric for MT evaluation with improved correlation with human judgments, in: J. Goldstein, A. Lavie, C.-Y. Lin, C. Voss (Eds.), Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, Association for Computational Linguistics, Ann Arbor, Michigan, 2005, pp. 65–72. URL: https://aclanthology.org/W05-0909/.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] J. Woodard, J. Nelson, An information theoretic measure of speech recognition performance, in: Workshop on Standardisation for Speech I/O Technology, Naval Air Development Center, Warminster, PA, 1982.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>