<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Limitations of Zero-Shot Classification of Causal Relations by LLMs (Work in Progress)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Vani Kanjirangat</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Antonucci</string-name>
        </contrib>
        <contrib contrib-type="author">
<string-name>Marco Zaffalon</string-name>
        </contrib>
      </contrib-group>
      <abstract>
        <p>We aim to explore and analyze the capabilities and limitations of large language models in understanding and distinguishing causal sentences in a zero-shot setting. We experiment on a multi-class dataset of direct causal, conditional causal, and correlational sentences. In the experiments, the GPT and Falcon models are validated against a fine-tuned BERT model under different settings to explore zero-shot capabilities in causality detection. Zero-shot approaches exhibit good performance in other classification tasks, such as sentiment analysis or question answering. Yet, for this task, the fine-tuned approach seems superior, and the situation does not change if language cues are added or a few-shot setting is considered. This is a preliminary analysis of a work in progress. Still, the results suggest that identifying causal relations is a particularly challenging task that is hard to address in a zero-shot setup.</p>
      </abstract>
      <kwd-group>
        <kwd>Large language models</kwd>
        <kwd>zero-shot classification</kwd>
        <kwd>few-shot classification</kwd>
        <kwd>causal inference</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The adoption of large language models (LLMs) [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] is rapidly growing, primarily because of the
zero-shot capabilities exhibited by these tools in a wide range of natural language processing
tasks, such as sentiment analysis or recommendations, and knowledge-intensive tasks, such as
question answering and domain-specific entity recognition [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6 ref7">3, 4, 5, 6, 7</xref>
        ]. Despite such popularity,
it is essential to understand their limitations and to address questions such as: where do these
models fail? How often do such failures occur? How can we improve their
performance beyond prompt engineering [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]?
      </p>
      <p>
        This paper is a preliminary report on our work (in progress) on evaluating the potential
of state-of-the-art LLMs in the field of causal inference. More specifically, we investigate the
performance of LLMs in a classification task with sentences possibly involving causal relations.
Our analysis focuses on zero- and few-shot capabilities of LLMs compared against a fine-tuning
setting with encoder-based BERT models, which are nowadays the most common choice for
classification tasks [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ]. Our tests show some limitations of LLM approaches in the causal
domain, both in zero-shot and few-shot setups. Notably, the situation
remains the same even if language cues are provided. Such negative results are in line with
some recent works presenting LLMs as causal parrots [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], not yet capable of genuine causal
reasoning [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], beyond just distinguishing between causes and effects [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Recently, a plethora of research has been going on in the direction of exploiting the
zeroshot and few-shot capabilities of LLMs. Because of the vast amount of pre-trained data they
have been exposed to, large (&gt;10B parameters) language models are considered to have an
inherent ability to generalise across unseen tasks [
        <xref ref-type="bibr" rid="ref18 ref19 ref20">18, 19, 20</xref>
        ]. For instance, the number of
parameters of the recent GPT-3 and GPT-4 models is about, respectively, 175B and 1.76T.
Zero-shot and few-shot techniques have been tried with different prompting strategies (e.g.,
chain of thought) for both classification and generation tasks. In many knowledge-intensive
tasks (e.g., question answering), translations, classification tasks (e.g., sentiment analysis) and
recommendations, those approaches seem compelling, provided that an adequate prompt
engineering effort is made [
        <xref ref-type="bibr" rid="ref21 ref22 ref23 ref24">21, 22, 23, 24</xref>
        ]. These techniques may be inaccurate for many
other tasks, especially when the complexity increases, as in multi-task classification and
hard sequence labelling tasks, and particularly in domain-specific problems [
        <xref ref-type="bibr" rid="ref25 ref8">25, 8</xref>
        ]. Researchers have
come up with soft prompting approaches and parameter-efficient tuning (PEFT) [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] approaches
such as P-tuning [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], prompt-tuning [
        <xref ref-type="bibr" rid="ref28 ref29">28, 29</xref>
        ] and variations of prompt infusion to overcome
these problems, while trying to achieve fine-tuning-based performance. The causal reasoning
ability of LLMs was initially investigated in [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. The authors observe good performance
on pairwise causal discovery, counterfactual reasoning and actual causality tasks by
conducting experiments on datasets of cause-effect pairs. A critical review of causality
inference and reasoning with LLMs on benchmark datasets is reported in [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]. The authors
specify the requirements of causal datasets and the problems of evaluating with LLMs, such as
memorisation (the dataset could be part of the LLM pre-training data). They also indicate that LLMs
can solve many benchmarks by simply computing similarities between options and questions in a
vector space. Further, they indicate that the good performance of LLMs can sometimes be
due to spurious language cues in the datasets. In the rest of the paper, we explore the capability
of LLMs with simple prompt-based approaches in identifying causality on a multi-class dataset.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset Settings</title>
      <p>
        For our analysis, we focus on the dataset from [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], developed to automate the identification of
causal language use in the scientific literature. The data source was a collection of PubMed
abstracts on five main health topics – nutrition, diabetes, obesity, breast cancer, and cholesterol.
Two domain experts were asked to annotate the sentences manually. A good agreement
(Cohen’s kappa = 0.98) was reported. The original dataset refers to a multi-class setup with
four options: correlational, direct causal, conditional causal, and one without any relations [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ].
The entities possibly involved in the causal and correlational relations are not provided. Thus,
the identification depends on the specific language patterns used in the input sentences. In
the correlational case, the sentence describes some association between variables. With direct
causal sentences, the cause and effect are directly mentioned, while in the conditional case, the
relation definition carries an element of doubt. Finally, there are sentences with neither
causation nor correlation.
      </p>
      <p>We use the original dataset in the native multi-class setting and in a binary classification task.
For the binary class, we drop the correlational sentences and combine the direct and conditional
causal sentences, thus having only two classes, one with no relations and the other with causal
relations. This is intended to allow for a focus on causal relation discrimination. The multi-class
dataset includes 1356 no-relation, 494 direct causal, 213 conditional and 998 correlational cases,
which makes up 3061 cases. In the binary class setting, we have 1356 no-relation cases and 707
cases of causal relations (which combines direct and conditional cases), with 2063 cases overall.</p>
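      <p>The derivation of the binary dataset from the multi-class one can be sketched as a simple filter; the label strings below are illustrative, as the paper does not specify the encoding:</p>
      <preformat>
```python
# Derive the binary dataset from the multi-class one, as described above:
# drop correlational sentences and merge the two causal classes into one.
# The placeholder sentences and label names are illustrative only.
multi_class = (
    [("s", "no-relation")] * 1356
    + [("s", "direct-causal")] * 494
    + [("s", "conditional-causal")] * 213
    + [("s", "correlational")] * 998
)

def to_binary(dataset):
    binary = []
    for text, label in dataset:
        if label == "correlational":
            continue  # dropped in the binary setting
        if label in ("direct-causal", "conditional-causal"):
            binary.append((text, "causal"))  # merged causal class
        else:
            binary.append((text, "no-relation"))
    return binary

binary = to_binary(multi_class)
# 1356 no-relation + 707 causal = 2063 cases overall
```
      </preformat>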
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>To test the capability of LLMs in classifying causal and non-causal sentences under
zero-shot settings, we initially design suitable prompts to tackle the task. We use
both binary and multi-class settings, with the prompt including the text input in Fig. 1 and some
variations. For the binary setting, we just need to change the classes in the prompt.
system_msg = You are a helpful assistant for causal reasoning and cause-and-effect relationship discovery.
Your aim is to identify the entities and to categorize the input sentences into either direct causal relation
or conditional causal relation or correlational relation or no relationship exist
intro_msg = You will be provided with a text. Text: &lt;Text&gt;{text}&lt;/Text&gt;
instructions_msg = Please read the provided text carefully to comprehend the context and content.</p>
      <p>Examine the roles, interactions, and details surrounding the entities within the text.</p>
      <p>Based only on the information in the text, categorize the causal relation as
0. no relation
1. direct causal
2. conditional causal
3. correlational
Your response should analyze the situation in a step-by-step manner, ensuring the correctness of the ultimate conclusion,
which should accurately reflect the likely causal connection based on the information presented in the text.
If no clear causal relationship is apparent,
select the appropriate option accordingly, i.e., ’no relation’.
option_choice_msg = Your response should analyze the situation in a step-by-step manner,
ensuring the correctness of the ultimate conclusion,
which should accurately reflect the likely causal connection between the two entities based on the information presented in the text.
If no clear causal relationship is apparent, select the appropriate option accordingly.</p>
      <p>Then provide your final answer within the tags &lt;Answer&gt;[answer]&lt;/Answer&gt;, (e.g. &lt;Answer&gt;1&lt;/Answer&gt;).</p>
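      <p>The prompt above asks the model to wrap its final label in Answer tags after a step-by-step analysis. A minimal sketch of how such a reply could be parsed; the regex and the discard-on-missing-tag behaviour are our assumptions, not the authors' code:</p>
      <preformat>
```python
import re

# OPEN and CLOSE are the Answer opening and closing tags; they are built with
# chr(60)/chr(62) (the angle brackets) only to keep this listing XML-safe.
OPEN = chr(60) + "Answer" + chr(62)
CLOSE = chr(60) + "/Answer" + chr(62)

def parse_answer(reply):
    # Return the digit in the last well-formed Answer tag, or None when the
    # reply has no such tag (such replies were discarded in the evaluation).
    matches = re.findall(re.escape(OPEN) + r"\s*(\d)\s*" + re.escape(CLOSE), reply)
    return int(matches[-1]) if matches else None
```
      </preformat>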
      <p>
        Following the indications from [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] and findings from [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], we create another prompt
including language cues intended to help the LLM provide more accurate classifications. In
the fine-tuning approach, we assume the model automatically captures these patterns from the
training data. In a zero-shot setting, in the absence of such training information, we want
to see the impact on model performance when some explicit domain knowledge is available.
We added the following cues – association, associated with, predictor for correlational; increase,
decrease, lead to, effective in, contribute to, reduce for causal; along with may, might, appear to,
probably for conditional causal. These cues were then added to the zero-shot prompt (ZS-Cues).
Further, we tried them in a few-shot setup (FS-Cues) with some examples from each class (e.g.,
two samples for each class). Finally, we also consider a 500-shot experiment with labelled
samples, also used to train the BERT model under the same settings (500 samples for training).
      </p>
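      <p>A sketch of how the cue lists could be appended to the zero-shot prompt to obtain ZS-Cues; the cue lists come from the text, but the wrapping template is an assumption:</p>
      <preformat>
```python
# Cue lists as reported in the text; the prompt template wrapping them is ours.
CUES = {
    "correlational": ["association", "associated with", "predictor"],
    "causal": ["increase", "decrease", "lead to", "effective in",
               "contribute to", "reduce"],
    "conditional causal": ["may", "might", "appear to", "probably"],
}

def add_cues(zero_shot_prompt):
    # Append one line per class listing its cue words (hypothetical phrasing).
    lines = ["Language cues that often signal each relation type:"]
    for label, cues in CUES.items():
        lines.append("- " + label + ": " + ", ".join(cues))
    return zero_shot_prompt + "\n\n" + "\n".join(lines)

zs_cues_prompt = add_cues("You will be provided with a text. ...")
```
      </preformat>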
    </sec>
    <sec id="sec-5">
      <title>5. Results and Discussion</title>
      <p>For the fine-tuning approach, we use the bert-base-cased model [34] in both binary and
multi-class settings with k-fold cross-validation. We use SimpleTransformers2 with four epochs and a
learning rate of 2E-5. We experiment with GPT (3.5 Turbo) and the open-source Falcon models
(falcon-7b-instruct 3 and falcon-40b-instruct) in zero-shot settings. Falcon-7b-instruct is a
7B-parameter causal decoder-only model fine-tuned on a mixture of chats and instructions, while
falcon-40b-instruct is a bigger model with 40B parameters. From Tab. 1, it can be observed
that Falcon-7b and 40b give inferior performance compared to GPT. Comparing the two Falcon
models, the 40b outperformed the 7b model in both multi-class and binary settings. This expected
result motivates us to focus our further experiments on GPT models only.</p>
      <p>For further experiments, we use GPT to analyse the performance under different prompt
settings and compare them with fine-tuned BERT-based models. Tab. 2 shows that, in both
settings, the performance of GPT under zero-shot settings is poor and the fine-tuned BERT model
performs better. In the multi-class case, many conditional causal relations are misclassified
as direct causal. Yet, the accuracy does not improve significantly in the binary setting. The
addition of cues (ZS-Cues) improves the performance, showing the importance of specific
patterns that help in classification, especially in multi-class settings. In both cases, ZS-Cues
performed better than FS-Cues. This could be because the sentences in this dataset are quite
varied (extracted from the scientific literature) and we cannot assume that the sentences
selected for the few-shot experiments are the best representatives of their classes.
2http://simpletransformers.ai.
3https://huggingface.co/tiiuae/falcon-7b-instruct.</p>
      <p>For a deeper comparison against the fine-tuned model, we prompt the GPT model with more
examples. An option would be fine-tuning the GPT model, but we keep this as a future study,
as here the focus is on prompting approaches. Further, there are restrictions on the number
of prompt tokens processed by the GPT 3.5 model. As a reasonable prompting solution, we use
500 samples (corresponding to a 1:4 train-test split). The same samples are used to train BERT
under multi-class settings. This proportion makes the BERT performance comparable with
the one with k-fold cross-validation (F1=0.81), while a drastic drop is obtained with a 1:9 split
(F1=0.36). For GPT, this setup requires a slight change in the prompt (Fig. 1), to include a list
of input texts and give the corresponding predictions as a list. We then chunked the remaining
2449 test samples into chunks of ten samples each, to be passed to the prompt. These steps are
intended to optimise prompt efficiency in terms of costs and time. The results are in Tab. 4.</p>
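      <p>The chunking step can be sketched as follows; the batch size of ten matches the text, while the helper itself is illustrative:</p>
      <preformat>
```python
# Split the held-out samples into chunks of ten, so each prompt carries a
# list of texts and asks for a list of predictions (one per text).
def chunk(samples, size=10):
    return [samples[i:i + size] for i in range(0, len(samples), size)]

test_samples = ["sentence " + str(i) for i in range(2449)]
batches = chunk(test_samples)
# 2449 samples give 245 prompts: 244 full chunks of ten plus one of nine.
```
      </preformat>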
      <p>It can be observed that, with 500 samples, the performance of the GPT model was better than its
zero-shot counterpart, but is not comparable with the BERT model fine-tuned on the same 500
training samples. This seems to confirm, in the causal domain, the general findings discussed in
[35]. At the same time, it is also notable that simply adding pattern information, as in ZS-Cues
and FS-Cues, makes the LLM performance better than the 500-shot model.</p>
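      <p>The F1 scores above aggregate over classes of very different sizes. Assuming macro-averaging over classes (an assumption on our part; the averaging is not stated here), the metric can be sketched in plain Python:</p>
      <preformat>
```python
# Macro-F1: per-class F1 averaged uniformly over classes, so the rare
# conditional-causal class weighs as much as the frequent no-relation class.
def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```
      </preformat>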
      <p>Moreover, in some cases, GPT gives predictions not explicitly mentioned in the prompts. For
instance, the prediction was multi-label (with neither of the labels being the true prediction) or the
prediction was categorised into a new class (not described in the prompt). For the evaluations,
we had to remove such samples. We were finally left with 3055 instances in the zero-shot
multi-class setting and 2016 instances in the binary one.</p>
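      <p>Discarding such out-of-scope replies before scoring can be sketched as a validity filter; the exact check is our reading of the procedure described above, not the authors' code:</p>
      <preformat>
```python
# The four labels offered in the multi-class prompt.
VALID_LABELS = {0, 1, 2, 3}  # no relation, direct, conditional, correlational

def filter_scorable(pairs):
    # pairs: (gold_label, predicted_label_set); a reply predicting several
    # labels at once, or a label outside the prompt's options, is discarded.
    kept = []
    for gold, preds in pairs:
        if len(preds) == 1 and preds.issubset(VALID_LABELS):
            kept.append((gold, next(iter(preds))))
    return kept

pairs = [(1, {1}), (0, {0, 2}), (3, {4}), (2, {2})]
scorable = filter_scorable(pairs)  # multi-label and unseen-class replies dropped
```
      </preformat>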
      <p>Some explicit results from the zero-shot experiments are in Figs. 2, 3 and 4. Tab. 5 reports
examples of misclassification from the zero-shot prompting of GPT. Some of these instances
are hard to classify even for human experts. GPT almost always classifies a negative causality
as a no-relation case (I2, 3, and 4). Other misclassifications are quite evident, e.g., I8 is
predicted as causal while being conditional in practice. Similar patterns are observed in correlational cases (I10).
It can also be observed that the addition of cues helped in the correct classification of some
instances. The correct predictions are in bold. E.g., I7 and I9 show that the cue may could have
helped in the correct classification as conditional causal, and for I12 the cue associated with.
text: However, obesity seems to be associated with more wound complications.</p>
      <p>The text states that "obesity seems to be associated with more wound complications."
This suggests a potential causal relationship between obesity and wound complications.</p>
      <p>To determine the type of causal relationship, we need to analyze the information provided.
1. Direct Causal Relation: This type of causal relation implies that one event directly causes another event.
In this case, if obesity directly causes wound complications, we would expect to see a clear cause-effect relationship between the two.
2. Conditional Causal Relation: This type of causal relation implies that one event causes another event
under certain conditions or circumstances. In this case, if obesity causes wound complications
only under specific conditions, we would need more information to determine the specific conditions.
3. Correlational Relation: This type of relation implies that two events are statistically related or occur together,
but one does not necessarily cause the other. In this case, if obesity and wound complications
are correlated but not causally related, we would expect to see a statistical association between the two.
Based on the information provided, it is not clear whether the relationship between obesity and wound complications
is a direct causal relation, a conditional causal relation, or a correlational relation.</p>
      <p>The text only states that obesity is associated with more wound complications,
but it does not provide information on the specific nature of the relationship or any conditions under which the relationship occurs.
Therefore, the appropriate categorization for the causal relationship in this text is: &lt;Answer&gt;3&lt;/Answer&gt; (correlational)
text: This study is one of the first proofs of concept showing that a nutrition-sensitive agriculture value chain approach
can contribute to improved child nutrition in a remote pastoralist population.</p>
      <p>To analyze the causal relationship in the provided text, we need to identify the entities involved and understand their interactions.
Entities:
1. Study
2. Nutrition-sensitive agriculture value chain approach
3. Improved child nutrition
4. Remote pastoralist population
Based on the information in the text, let’s analyze the causal relationship:
1. The study is one of the first proofs of concept.
2. The study shows that a nutrition-sensitive agriculture value chain approach can contribute to improved child nutrition.
3. The improved child nutrition is in a remote pastoralist population.</p>
      <p>From the above analysis, we can conclude that there is a direct causal relationship between the nutrition-sensitive agriculture
value chain approach and improved child nutrition in a remote pastoralist population.</p>
      <p>Therefore, the causal relationship in the text can be categorized as a direct causal relation. &lt;Answer&gt;1&lt;/Answer&gt;</p>
    </sec>
    <sec id="sec-6">
      <title>6. Limitations</title>
      <p>Finally, as our paper presents the results of a work in progress, let us discuss the limitations of
the present work and the possible enhancements we might consider for the ongoing work.</p>
      <p>We have used only one dataset; hence, whether our findings remain valid in the general case
might be questionable. The dataset facilitates understanding how well LLMs identify the causal
descriptions embedded in the scientific literature under a more challenging multi-class setting,
including correlational and causal relations. Distinguishing between direct and conditional
causation is especially difficult. To the best of our knowledge, there are no datasets with
analogous characteristics, at least for multi-class settings. Yet, manually annotating scientific
abstracts and creating new benchmarks for deeper validation is a realistic and necessary effort.</p>
      <p>
        Moreover, in the current paper, we have focused on the GPT model and compared it with
the open-source Falcon and BERT-based models. This can be enhanced by comparing with
different LLMs. Further, the focus was on prompt-based techniques, which have a broad
scope to be explored. Based on the findings from [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], we investigate techniques such as
incorporating language cues while prompting LLMs. One major problem is that LLMs can be
sensitive to manually engineered prompt designs; hence, automating prompts and using soft
prompt techniques would be the way forward.
      </p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusions and Outlooks</title>
      <p>This work is a preliminary exploration to understand the capabilities and limitations of the GPT
model in causality identification, specifically in multi-class settings. The experiments show
that GPT has limited zero-shot and few-shot capabilities in capturing such causal relations,
subject to the data in consideration. Focusing on the limitations, in the future, we would like to
extend our experiments to a range of causal data to draw conclusive generalisations on the
studied facts. Prompt engineering as such has a lot of potential to be explored, while hard-core
engineering of prompts may not always be beneficial. Hence, we also plan to explore PEFT
techniques such as soft prompting for causal detection and further extraction of causal graphs.
[34] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional
transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).
[35] T. Schick, H. Schütze, It’s not just size that matters: Small language models are also
few-shot learners, in: Proceedings of the 2021 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies, 2021,
pp. 2339–2352.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Narasimhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Salimans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <article-title>Improving language understanding by generative pre-training</article-title>
          ,
          <source>Technical Report, OpenAI</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kojima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Matsuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Iwasawa</surname>
          </string-name>
          ,
          <article-title>Large language models are zero-shot reasoners</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>35</volume>
          (
          <year>2022</year>
          )
          <fpage>22199</fpage>
          -
          <lpage>22213</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Zero-shot aspect-based sentiment analysis</article-title>
          ,
          <source>arXiv preprint arXiv:2202</source>
          .
          <year>01924</year>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tripathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>From fully supervised to zero shot settings for twitter hashtag recommendation</article-title>
          , arXiv preprint arXiv:
          <year>1906</year>
          .
          <volume>04914</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Teney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>van den Hengel</surname>
          </string-name>
          ,
          <article-title>Zero-shot visual question answering</article-title>
          ,
          <source>arXiv preprint arXiv:1611.05546</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Z.-Y.</given-names>
            <surname>Dou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <article-title>Zero-shot commonsense question answering with cloze translation and consistency optimization</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>36</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>10572</fpage>
          -
          <lpage>10580</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Katiyar</surname>
          </string-name>
          ,
          <article-title>Simple and effective few-shot named entity recognition with structured nearest neighbor learning</article-title>
          , arXiv preprint arXiv:
          <year>2010</year>
          .
          <volume>02405</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chiriatti</surname>
          </string-name>
          ,
          <article-title>GPT-3: Its nature, scope, limits, and consequences</article-title>
          ,
          <source>Minds and Machines</source>
          <volume>30</volume>
          (
          <year>2020</year>
          )
          <fpage>681</fpage>
          -
          <lpage>694</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>K.</given-names>
            <surname>Elkins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chun</surname>
          </string-name>
          ,
          <article-title>Can GPT-3 pass a writer's Turing test?</article-title>
          ,
          <source>Journal of Cultural Analytics</source>
          <volume>5</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. W.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Miao</surname>
          </string-name>
          ,
          <article-title>A survey of zero-shot learning: Settings, methods, and applications</article-title>
          ,
          <source>ACM Transactions on Intelligent Systems and Technology (TIST) 10</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          , G. Chen, G. Qian,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.-Y.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <article-title>Large-scale multi-modal pre-trained models: A comprehensive survey</article-title>
          ,
          <source>Machine Intelligence Research</source>
          <volume>20</volume>
          (
          <year>2023</year>
          )
          <fpage>447</fpage>
          -
          <lpage>482</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Khetan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramnani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Anand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sengupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Fano</surname>
          </string-name>
          ,
          <article-title>Causal BERT: Language models for causality detection between events expressed in text</article-title>
          ,
          <source>in: Intelligent Computing: Proceedings of the 2021 Computing Conference</source>
          , Volume
          <volume>1</volume>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>965</fpage>
          -
          <lpage>980</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Aftan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>A survey on BERT and its applications</article-title>
          ,
          <source>in: 2023 20th Learning and Technology Conference (L&amp;T)</source>
          , IEEE,
          <year>2023</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zečević</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Willig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Dhami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kersting</surname>
          </string-name>
          ,
          <article-title>Causal parrots: Large language models may talk causality but are not causal</article-title>
          ,
          <source>Transactions on Machine Learning Research</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Janzing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. van der</given-names>
            <surname>Schaar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Locatello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Spirtes</surname>
          </string-name>
          ,
          <article-title>Causality in the time of LLMs: Round table discussion results of CLeaR 2023</article-title>
          ,
          <source>Proceedings of Machine Learning Research</source>
          <volume>1</volume>
          (
          <year>2023</year>
          )
          <fpage>7</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhiheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mihalcea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sachan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schölkopf</surname>
          </string-name>
          ,
          <article-title>Can large language models distinguish cause from effect?</article-title>
          ,
          <source>in: UAI 2022 Workshop on Causal Representation Learning</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Antonucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Piqué</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaffalon</surname>
          </string-name>
          ,
          <article-title>Zero-shot causal graph extrapolation from text via LLMs</article-title>
          ,
          <source>arXiv preprint arXiv:2312.14670</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>E.</given-names>
            <surname>Almazrouei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Alobeidli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alshamsi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cappelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cojocaru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Debbah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>É.</given-names>
            <surname>Goffinet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hesslow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Launay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Malartic</surname>
          </string-name>
          , et al.,
          <article-title>The falcon series of open language models</article-title>
          ,
          <source>arXiv preprint arXiv:2311.16867</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Koopman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zuccon</surname>
          </string-name>
          ,
          <article-title>Open-source large language models are strong zero-shot query likelihood models for document ranking</article-title>
          ,
          <source>arXiv preprint arXiv:2310.13243</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Large language models are zero-shot text classifiers</article-title>
          ,
          <source>arXiv preprint arXiv:2312.01044</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>T.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          , et al.,
          <article-title>Language models are few-shot learners</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>1877</fpage>
          -
          <lpage>1901</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Joo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Seo</surname>
          </string-name>
          ,
          <article-title>The CoT collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning</article-title>
          ,
          <source>arXiv preprint arXiv:2305.14045</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>ReGen: Zero-shot text classification via training data generation with progressive dense retrieval</article-title>
          ,
          <source>arXiv preprint arXiv:2305.10703</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Gan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>An empirical study of GPT-3 for few-shot knowledge-based VQA</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>36</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>3081</fpage>
          -
          <lpage>3089</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moradi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Blagec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Haberl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Samwald</surname>
          </string-name>
          ,
          <article-title>GPT-3 models are poor few-shot learners in the biomedical domain</article-title>
          ,
          <source>arXiv preprint arXiv:2109.02555</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.-C.</given-names>
            <surname>So</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Collier</surname>
          </string-name>
          ,
          <article-title>On the effectiveness of parameter-efficient fine-tuning</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>37</volume>
          ,
          <year>2023</year>
          , pp.
          <fpage>12799</fpage>
          -
          <lpage>12807</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Tam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks</article-title>
          ,
          <source>in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>61</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>X. L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <article-title>Prefix-tuning: Optimizing continuous prompts for generation</article-title>
          ,
          <source>arXiv preprint arXiv:2101.00190</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>GPT understands, too</article-title>
          ,
          <source>AI Open</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kıcıman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>Causal reasoning and large language models: Opening a new frontier for causality</article-title>
          ,
          <source>arXiv preprint arXiv:2305.00050</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>L.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Clivio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shirvaikar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Falck</surname>
          </string-name>
          ,
          <article-title>A critical review of causal inference benchmarks for large language models</article-title>
          ,
          <source>in: AAAI 2024 Workshop on “Are Large Language Models Simply Causal Parrots?”</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>B.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Detecting causal language use in science findings</article-title>
          ,
          <source>in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>4664</fpage>
          -
          <lpage>4674</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sumner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vivian-Griffiths</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Boivin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Venetis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Davies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ogden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Whelan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hughes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dalton</surname>
          </string-name>
          , et al.,
          <article-title>The association between exaggeration in health related science news and academic press releases: retrospective observational study</article-title>
          ,
          <source>BMJ</source>
          <volume>349</volume>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>