         Proceedings of the First SICSA Workshop on
           Reasoning, Learning and Explainability:
                          ReaLX 18

                 Kyle Martin1 , Nirmalie Wiratunga1 , Leslie S. Smith2
                      1
                        Robert Gordon University, Aberdeen, Scotland
                      {k.martin, n.wiratunga}@rgu.ac.uk
                         2
                           University of Stirling, Stirling, Scotland
                            l.s.smith@cs.stir.ac.uk


Preface
Reasoning, Learning and Explainability are key to AI systems that must interact natu-
rally with users to support decision making. Such systems need to be capable of explain-
ing their output, and regulations increasingly support users' rights to fair and transparent
processing in automated decision-making systems. Addressing this challenge grows
steadily more urgent as the recent success of deep learning and other data-driven methods
drives an increasing reliance on learned models in deployed applications.
    Though models learned directly from data offer improved accuracy, mapping the
concepts they learn into forms that facilitate human reasoning is difficult. In contrast,
reasoning systems can offer transparency through the logical alignment of representation
and reasoning methods, allowing the necessary insight into the decision-making process.
This is a core principle behind explainability and is critical if we are to use AI with the
intent of improving user performance and experience.

The Event
The Reasoning, Learning and Explainability (ReaLX) Workshop was designed to present
a forum for the dissemination of ideas on AI methods. The event was organised into
several themed sessions.
 – Session 1 - Applications of AI
 – Session 2 - Learning to Recognise
 – Session 3 - Learning to Reason
    The SICSA ReaLX Workshop 2018 was an inspiring success. Although initially
planned as an event for Scottish universities, we were fortunate enough to welcome
approximately 50 attendees on the day, some of whom travelled from Europe. A total
of 12 papers were submitted (nine short papers, two position papers and one demo
paper) and peer-reviewed; of those 12, 10 appear in these proceedings. Two invited
talks were presented over the course of the event. The first, presented by Professor
Ehud Reiter (Aberdeen University), examined the terminology we use to communicate
data to people. The second, presented by Professor Helen Hastie (Heriot-Watt
University), discussed achieving trustworthy robotics through explainability.

Organising Committee

This workshop was only possible through the hard work and dedication of a number of
individuals.

Workshop Chairs

 – Prof Nirmalie Wiratunga (Robert Gordon University)
 – Prof Leslie Smith (University of Stirling)
 – Prof Emma Hart (Edinburgh Napier University)

Local Organisers

 – Mr Kyle Martin (Robert Gordon University)
 – Dr Sadiq Sani (Robert Gordon University)

Programme Committee

 – Dr David Corsar (Aberdeen University)
 – Dr Eyad Elyan (Robert Gordon University)
 – Dr Chenghua Lin (Aberdeen University)
 – Dr Stewart Massie (Robert Gordon University)
 – Dr Nir Oren (Aberdeen University)
 – Dr Andrei Petrovski (Robert Gordon University)
 – Miss Anjana Wijekoon (Robert Gordon University)

   The organising committee would also like to thank all of our authors and attendees.
Without you, this event could not have happened.