<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preface of the 2018 Symposium on Adversary Aware Learning Techniques and Trends in Cybersecurity (ALEC) (co-located with AAAI Fall Symposium Series 2018)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Prithviraj Dasgupta</string-name>
          <email>gupta@unomaha.edu</email>
          <xref ref-type="aff" rid="aff0">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joseph B. Collins</string-name>
          <xref ref-type="aff" rid="aff1">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ranjeev Mittu</string-name>
          <email>ranjeev.mittug@nrl.navy.mil</email>
          <xref ref-type="aff" rid="aff1">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amitabh Mishra</string-name>
          <xref ref-type="aff" rid="aff2">3</xref>
        </contrib>
        <aff id="aff0">
          <label>1</label>
          <institution>Computer Science Department, University of Nebraska</institution>
          ,
          <addr-line>Omaha, NE</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>2</label>
          <institution>U.S. Naval Research Laboratory</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>3</label>
          <institution>U.S. Army CERDEC</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Machine learning-based intelligent systems have experienced massive growth over the past few years and are close to becoming ubiquitous in the technology surrounding our daily lives. However, a critical challenge in machine learning-based systems is their vulnerability to security attacks from malicious adversaries. The vulnerability of these systems is further aggravated because it is non-trivial to establish the authenticity of the data used to train a system, and even innocuous perturbations to the training data can be used to manipulate the system's behavior in unintended ways. Recent reports about malicious manipulation of social media feeds masquerading as authentic news items provide a compelling case for developing stronger and more resilient measures for combating adversarial attacks on machine learning-based systems. The ALEC'18 symposium was organized to address the overarching need to make automated, machine learning-based systems more robust and resilient against adversarial attacks, so that humans can use them in a safe and sustained manner. The areas of interest of the symposium included the following topics: Generative Adversarial Networks; Security Threats and Vulnerabilities of Adversarial Learning; Adversary-aware Machine Learning, including Reinforcement Learning, Lifelong Learning, and Deep Learning; Adversary-aware Prediction, Forecasting, and Decision Making Techniques; Applications of Adversarial Learning; and Operations Research related to Adversarial Learning.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Symposium Overview</title>
      <p>The symposium's areas of interest also included human factors and adversarial learning with a human-in-the-loop.</p>
      <p>The symposium included two keynote talks and ten orally presented papers. The first keynote talk, titled "AI Canonical Architecture and Robust AI," given by David R. Martinez of MIT Lincoln Laboratory, discussed the performance assessment of AI-based systems and the need for robust AI. Xiaojin (Jerry) Zhu of the University of Wisconsin-Madison presented the second keynote, titled "An Optimal Control View of Adversarial Machine Learning," on a novel control-theory-based framework for representing various adversarial learning problems. The research papers presented at the symposium were grouped into three theme-based sessions: (1) Adversarial Data Generation and Adversarial Training, (2) Countering Adversarial Attacks in Cybersecurity, and (3) Novel Approaches in Adversarial Artificial Intelligence. The symposium concluded with a group discussion on the immediate technological enablers and hurdles in adversarial learning, as well as a roadmap for addressing longer-term problems and challenges in the field.</p>
      <p>Finally, we would like to thank the following ALEC’18 program committee members and reviewers for their support with reviewing papers and with various aspects of organizing the symposium: Abebaw Tadesse (Langston University, USA), Krishnendu Ghosh (Miami University of Ohio, USA), and Ying Zhao (Naval Postgraduate School, USA).</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>