2nd International Workshop on Rumours and Deception
               in Social Media: Preface

                     Ahmet Aker
       University of Duisburg-Essen, Germany and University of Sheffield, UK
       a.aker@is.inf.uni-due.de

                     Arkaitz Zubiaga
       Queen Mary University of London, UK
       a.zubiaga@qmul.ac.uk

                     Kalina Bontcheva
       University of Sheffield, UK
       k.bontcheva@sheffield.ac.uk

                     Maria Liakata and Rob Procter
       University of Warwick and Alan Turing Institute, UK
       {m.liakata,rob.procter}@warwick.ac.uk



                         Abstract

    This preface introduces the proceedings of the 2nd International
    Workshop on Rumours and Deception in Social Media (RDSM'18),
    co-located with CIKM 2018 in Turin, Italy.

Copyright © CIKM 2018 for the individual papers by the papers' authors.
Copyright © CIKM 2018 for the volume as a collection by its editors. This
volume and its papers are published under the Creative Commons License
Attribution 4.0 International (CC BY 4.0).

1    Introduction

Social media is an excellent resource for mining all kinds of information,
ranging from opinions to actual facts. However, not all information in
social media posts is reliable [ZAB+18], and its truth value can often be
questionable. One such category of information is rumours, whose veracity
is not known at the time of posting. Some rumours turn out to be true, but
many are false, and the deliberate fabrication and propagation of false
rumours can be a powerful tool for manipulating public opinion. It is
therefore important to be able to detect and verify false rumours before
they spread widely and influence public opinion. The aim of this workshop
is to bring together researchers and practitioners interested in social
media mining and analysis to address the emerging issues of rumour
veracity assessment and the use of rumours in the manipulation of public
opinion.
    The 2nd edition of the RDSM workshop took place in Turin, Italy in
October 2018, co-located with CIKM 2018. It was organised with the aim of
focusing particularly on online information disorder and its interplay
with public opinion formation. Information disorder has been categorised
into three types [WD17]: (1) misinformation, an honest mistake in
information sharing; (2) disinformation, the deliberate spreading of
inaccurate information; and (3) malinformation, accurate information that
is intended to harm others, such as leaks.

2    Accepted papers

The workshop received 17 submissions from multiple countries, of which 10
(58.8%) were accepted for inclusion in these proceedings and presentation
at the workshop:

  • Kefato et al. [KSB+18] propose a fully network-agnostic approach
    called CaTS that models the early spread of posts (i.e., cascades) as
    time series and predicts their virality.

  • Caled and Silva [CS18] describe ongoing work on the creation of
    FTR-18, a multilingual rumour dataset on football transfer news.

  • Yao and Hauptmann [YH18a] analyse the power of the crowd for checking
    the veracity of rumours, which they formulate as a reviewer selection
    problem. Their work aims to find reliable reviewers for a particular
    rumour.

  • Yang and Yu [YY18] propose a reinforcement learning framework that
    incorporates interpersonal deception theory to fight social
    engineering attacks.

  • Conforti et al. [CPC18] propose a simple architecture for stance
    detection based on conditional encoding, carefully designed to model
    the internal structure of a news article and its relations with a
    claim.

  • Roitero et al. [RDMS18] report on collecting truthfulness values (i)
    by means of crowdsourcing and (ii) using fine-grained scales. They
    collect truthfulness judgements on a bounded, discrete scale with 100
    levels as well as on a magnitude estimation scale, which is unbounded,
    continuous and has an infinite number of levels.

  • Skorniakov et al. [STZ18] describe an approach to the detection of
    social bots using a stacking-based ensemble that exploits text and
    graph features.

  • Caetano et al. [CMC+18] investigate the public perception of WhatsApp
    through the lens of the media. They analyse two large datasets of news
    and show the kind of content that is being associated with WhatsApp in
    different regions of the world and over time.

  • Pamungkas et al. [PBP18] describe an approach to stance classification
    that leverages conversation-based and affective-based features,
    covering different facets of affect.

  • Yao and Hauptmann [YH18b] analyse a publicly available dataset of
    Russian trolls. They examine tweeting patterns over time, revealing
    that these accounts differ from traditional bots and raise new
    challenges for bot detection methods.

Acknowledgments

We would like to thank the programme committee members for their support.

References

[CMC+18]  Josemar Alves Caetano, Gabriel Magno, Evandro Cunha, Wagner
          Meira Jr., Humberto T. Marques-Neto, and Virgilio Almeida.
          Characterizing the public perception of WhatsApp through the
          lens of media. In Proc. of 2nd RDSM, 2018.

[CPC18]   Costanza Conforti, Mohammad Taher Pilehvar, and Nigel Collier.
          Modeling the fake news challenge as a cross-level stance
          detection task. In Proc. of 2nd RDSM, 2018.

[CS18]    Danielle Caled and Mário J. Silva. FTR-18: Collecting rumours
          on football transfer news. In Proc. of 2nd RDSM, 2018.

[KSB+18]  Zekarias T. Kefato, Nasrullah Sheikh, Leila Bahri, Amira
          Soliman, Alberto Montresor, and Sarunas Girdzijauskas. CaTS:
          Network-agnostic virality prediction model to aid rumour
          detection. In Proc. of 2nd RDSM, 2018.

[PBP18]   Endang Wahyu Pamungkas, Valerio Basile, and Viviana Patti.
          Stance classification for rumour analysis in Twitter:
          Exploiting affective information and conversation structure.
          In Proc. of 2nd RDSM, 2018.

[RDMS18]  Kevin Roitero, Gianluca Demartini, Stefano Mizzaro, and Damiano
          Spina. How many truth levels? Six? One hundred? Even more?
          Validating truthfulness of statements via crowdsourcing. In
          Proc. of 2nd RDSM, 2018.

[STZ18]   Kirill Skorniakov, Denis Turdakov, and Andrey Zhabotinsky. Make
          social networks clean again: Graph embedding and stacking
          classifiers for bot detection. In Proc. of 2nd RDSM, 2018.

[WD17]    Claire Wardle and Hossein Derakhshan. Information disorder:
          Toward an interdisciplinary framework for research and
          policymaking. Council of Europe report, DGI (2017), 9, 2017.

[YH18a]   Jianan Yao and Alexander G. Hauptmann. Reviewer selection for
          rumor checking on social media. In Proc. of 2nd RDSM, 2018.

[YH18b]   Jianan Yao and Alexander G. Hauptmann. Temporal patterns of
          Russian trolls: A case study. In Proc. of 2nd RDSM, 2018.

[YY18]    Grace Hui Yang and Yue Yu. Use of interpersonal deception
          theory in counter social engineering. In Proc. of 2nd RDSM,
          2018.

[ZAB+18]  Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata,
          and Rob Procter. Detection and resolution of rumours in social
          media: A survey. ACM Computing Surveys (CSUR), 51(2):32, 2018.