<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a model of goal autonomous agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>M. Bonifacio</string-name>
          <email>bonifacio@itc.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>P. Bouquet</string-name>
          <email>bouquet@dit.unitn.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>R. Ferrario</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>D. Ponte</string-name>
          <email>ponte@itc.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Information and Communication Technologies University of Trento</institution>
          ,
          <addr-line>Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we sketch a model in which agents are autonomous not only because they can choose among alternative courses of action (executive autonomy), but also because they can select and pursue new goals on the basis of endogenously generated interests (goal autonomy). We use economic models to argue that goal autonomy can be defined by introducing a precise notion of an agent's identity.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>2 Identity and the principle of sunk costs</title>
      <p>March suggests that rational agents are entities that can not only set appropriate courses
of action (including sub-goals) to achieve a given goal, but can also change their minds
about their top-level goals and preferences when planned achievements become
unrealistic. The research question is then: is there any principled way in which we can explain
when and how agents should adopt new goals or change their preferences? Our research
theses are: that such a principled explanation is possible; that it is based on the notion of
an agent&#8217;s identity; and that an agent&#8217;s identity can be defined in terms of economic
principles, namely economies of scale and the irreversibility of investments.</p>
      <p>In short, the idea is the following. First of all, it is clear that agents incur costs
to acquire a capability or the right to use a resource. These costs are not always
(completely) reversible, and this generates sunk costs. Therefore, the more such a capability
(or resource) is used, the more its costs are amortized (an economies of scale effect). In
this respect, we believe that acquired capabilities (resources) are an essential part of
an agent&#8217;s identity (what an agent is), and play a crucial role in deciding which goals are
to be preferred and pursued on the basis of endogenously generated interests: not
using an available capability, especially when its acquisition is not reversible, implies a loss
of value generated by the lost opportunity of a cost saving.</p>
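      <p>The amortization effect can be illustrated with a minimal sketch; the function name and cost figures below are illustrative assumptions, not part of the model:</p>
      <preformat>
```python
def average_cost_per_use(sunk_cost, marginal_cost, uses):
    """Average cost of each use of a capability whose acquisition
    cost is sunk: the fixed cost is spread over all uses."""
    if uses < 1:
        raise ValueError("capability must be used at least once")
    return sunk_cost / uses + marginal_cost

# The more the capability is used, the more its sunk cost is
# amortized and the lower the per-use cost:
#   average_cost_per_use(100.0, 1.0, 1)   -> 101.0
#   average_cost_per_use(100.0, 1.0, 100) -> 2.0
```
      </preformat>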
      <p>The conclusion is that rational agents should consider not only the current costs of
achieving a goal, but also the losses generated by the non-use of sunk investments. The
point is that in an unpredictable environment, circumstances can lead an agent to
develop and acquire resources that, to some extent, are of no use in achieving the
current goal. Our thesis is that sometimes the cost of changing one&#8217;s mind about what
is desirable is lower than the cost of continuing to pursue current intentions. This
happens when, in the decision function, the weight of sunk costs exceeds the weight
of current opportunities. In such a situation, instead of reasoning about the means necessary
to achieve ends that turn out to be unattainable, rational agents may rationalize their
current state as an end that is appropriate to their means, and change their preferences
accordingly. In this sense the sunk cost effect is an attempt to demonstrate the
rationality of behaviors that are otherwise not explained and thus labelled as &#8220;irrational&#8221; by
traditional theories of rationality.</p>
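      <p>The decision rule just described can be sketched as a simple additive decision function; everything below (names, weights, figures) is an illustrative assumption, not the paper&#8217;s formal model:</p>
      <preformat>
```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    expected_value: float   # value of achieving this goal
    remaining_cost: float   # further investment still required
    sunk_reuse: float       # value recovered by exploiting sunk investments

def preference(goal, sunk_weight=1.0):
    # Net value of pursuing the goal, plus the loss avoided by
    # putting otherwise idle sunk investments to use.
    return goal.expected_value - goal.remaining_cost + sunk_weight * goal.sunk_reuse

def choose(goals, sunk_weight=1.0):
    # The agent "changes its mind" whenever a goal that exploits
    # sunk investments outscores the currently pursued one.
    return max(goals, key=lambda g: preference(g, sunk_weight))

current = Goal("current intention", expected_value=10.0,
               remaining_cost=9.0, sunk_reuse=0.0)
endogenous = Goal("rationalized state", expected_value=5.0,
                  remaining_cost=1.0, sunk_reuse=3.0)
# preference(current) = 1.0, preference(endogenous) = 7.0:
# the weight of sunk costs exceeds current opportunities, so the
# agent adopts the goal generated by its own acquired resources.
```
      </preformat>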
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          .
          <article-title>Guarantees for autonomy in cognitive agent architecture</article-title>
          . In M. Wooldridge and
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Jennings</surname>
          </string-name>
          , editors,
          <source>Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890)</source>
          , pages
          <fpage>56</fpage>
          -
          <lpage>70</lpage>
          . Springer-Verlag: Heidelberg, Germany,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. G.</given-names>
            <surname>March</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Olsen</surname>
          </string-name>
          .
          <article-title>A garbage can model of organizational choice</article-title>
          .
          <source>Administrative Science Quarterly</source>
          ,
          <volume>17</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          ,
          <year>1972</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>J. G.</given-names>
            <surname>March</surname>
          </string-name>
          .
          <article-title>How decisions happen in organizations</article-title>
          .
          <source>Human-Computer Interaction</source>
          ,
          <volume>6</volume>
          :
          <fpage>95</fpage>
          -
          <lpage>117</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J. G.</given-names>
            <surname>March</surname>
          </string-name>
          .
          <article-title>A primer on decision making: how decisions happen</article-title>
          . The Free Press,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Simon</surname>
          </string-name>
          .
          <article-title>Reason in human affairs</article-title>
          . Stanford University Press,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>