<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Mechanism Design: (Ir)Rationality and Obvious Strategyproofness</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Diodato Ferraioli</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carmine Ventre</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>King's College London</institution>
          ,
          <addr-line>London WC2R 2LS</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università degli Studi di Salerno</institution>
          ,
          <addr-line>Fisciano SA 84084</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Multi-agent systems (MAS) are composed of autonomous agents, each with a potentially specific goal that may be different from the objective of the system designer. MAS represent the perfect environment for the work in Algorithmic Mechanism Design (AMD), which seeks to design incentive-compatible mechanisms, the core idea being that agents maximise their profit when they behave honestly, thus preventing misbehaviour and allowing the designer to optimise her goal. AMD often assumes full rationality of agents, who are expected to know their full preferences (however complex they are) and to strategise optimally so that the mechanism is guided towards outcomes they prefer. However, in real MAS, this is too strong an assumption. Humans could interact with software agents and irrationally choose suboptimal strategies due to their cognitive biases and/or limitations [1]. Software agents themselves could be “irrational”, since they could have been “badly” programmed either because the programmer misunderstood the incentive structure in place or due to computational barriers [2].</p>
      </abstract>
      <kwd-group>
        <kwd>Mechanism Design</kwd>
        <kwd>Bounded Rationality</kwd>
        <kwd>Limited Contingent Reasoning</kwd>
        <kwd>Greedy Algorithms</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Mechanism Design provides tools for developing protocols that align the goals of a planner with the selfish interests of the participating agents. Indeed, agents may, in principle, have an advantage if they deviate from the protocol’s prescriptions. This could invalidate the guarantees of the protocol, such as the maximization of some social measure of welfare or of the revenue of the designer, which only hold under the assumption that agents behave as dictated. Hence, the goal is to design special protocols, named mechanisms, that make it possible to optimize the planner’s goals and, at the same time, incentivize agents to follow the protocol, a property called strategyproofness.</p>
      <p>In Artificial Intelligence, mechanism design has found applications in many settings: from allocation to facility location and matching problems [2].</p>
      <p>Recently, a lot of interest has been devoted to designing mechanisms that not only aim to maximize the goal of the planner and to incentivize the correct behaviour of agents, but are also simple. Simplicity is usually understood in terms of the ability of the agents to understand their incentives without the need to engage in complex case analyses. From this point of view, simplicity is related to the transparency and the accountability of the protocol, which are often desirable properties, especially for democratic institutions.</p>
      <p>This definition of simplicity has been recently formalized by Li [8] with the concept of Obviously Strategyproof (OSP) mechanisms. Roughly speaking, a mechanism is OSP if, whenever it requires an agent to take an action, the worst outcome that she can achieve by following the protocol is not worse than the best outcome that she can achieve by deviating. Unfortunately, it has been observed that designing efficient OSP mechanisms can be a hard task [9], and indeed most of the early works on the subject focus on special mechanism formats that are OSP, such as posted-price mechanisms [10, 11] and deferred acceptance auctions [12].</p>
      <p>In this paper we describe a characterization of OSP mechanisms for single-parameter problems, wherein agent behaviour depends on a single parameter, also known as type. Interestingly, this characterization relates OSP mechanisms to greedy and reverse greedy (a.k.a. deferred acceptance) algorithms, showing that algorithms of this format can be easily enriched with payments to guarantee obvious strategyproofness.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Notation</title>
      <p>We let N denote a set of n selfish agents and S a set of feasible outcomes. Each agent i has a type t_i ∈ D_i that we assume to be her private knowledge. We call D_i the domain of i. With t_i(X) ∈ R we denote the cost of agent i with type t_i for the outcome X ∈ S. When costs are negative, the agent has a profit from the solution, called valuation. We will be working with costs and use that terminology accordingly, but our results do not assume that costs are positive.</p>
      <p>A mechanism interacts with the agents in N to select an outcome X ∈ S. Specifically, agent i takes actions (e.g., saying yes/no) that may signal to the mechanism a type b_i ∈ D_i different from t_i (e.g., saying yes could signal that the type has some properties that b_i has but t_i does not). We then say that agent i takes actions compatible with (or according to) b_i and call b_i the presumed type.</p>
      <p>For a mechanism ℳ, ℳ(b) denotes the outcome returned by the mechanism when the agents take actions according to their presumed types b = (b_1, . . . , b_n) (i.e., each agent i takes actions compatible with the corresponding b_i). This outcome is computed by a pair (f, p), where f = f(b) = (f_1(b), . . . , f_n(b)) (termed social choice function or algorithm) maps the actions taken by the agents according to b to a feasible solution in S, and p(b) = (p_1(b), . . . , p_n(b)) ∈ R^n maps the actions taken by the agents according to b to payments.</p>
      <p>Each selfish agent is equipped with a quasi-linear utility function, i.e., agent i has utility function u_i : D_i × S → R: for t_i ∈ D_i and for an outcome X ∈ S returned by a mechanism ℳ, u_i(t_i, X) is the utility that agent i has for the implementation of outcome X when her type is t_i, i.e., u_i(t_i, ℳ(b_i, b_{−i})) = p_i(b_i, b_{−i}) − t_i(f(b_i, b_{−i})).</p>
      <p>A single-parameter agent i has as private information a single real number t_i, and t_i(X) can be expressed as t_i w_i(X) for some publicly known function w_i; note that w_i(X) is a non-negative real number (and D_i = R_{≥0}). Moreover, observe that the cost of player i is independent of what the outcome X prescribes for players different from i. We make no other assumption on f. To simplify the notation, we will write t_i(b) when we want to express the cost of a single-parameter agent i of type t_i for the output of the social choice function f on input the actions corresponding to a bid vector b.</p>
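      <p>The quasi-linear setup above can be sketched in a few lines of Python; this is our own minimal illustration, with a hypothetical workload function w_i (not an object from the paper):</p>

```python
# Minimal sketch of a single-parameter agent's quasi-linear utility:
# the cost for an outcome X is t_i * w_i(X); utility is payment minus cost.

def cost(t_i, outcome, w_i):
    """Cost of a single-parameter agent with type t_i for `outcome`."""
    return t_i * w_i(outcome)

def utility(t_i, outcome, payment, w_i):
    """Quasi-linear utility: u_i = p_i - t_i * w_i(outcome)."""
    return payment - cost(t_i, outcome, w_i)

# Hypothetical workload: 1 if the agent is selected, 0 otherwise.
w_i = lambda outcome: 1 if outcome == "selected" else 0
print(utility(2, "selected", 5, w_i))   # payment 5, cost 2*1, prints 3
```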
    </sec>
    <sec id="sec-3">
      <title>3. Obvious Strategyproofness</title>
      <p>We here introduce the concept of implementation tree to formally define (deterministic) OSP mechanisms. Our definition is, w.l.o.g., built on the one by Mackenzie [13] rather than the original definition by Li [8].</p>
      <p>An extensive-form mechanism ℳ is a triple (f, p, T) where, as from above, the pair (f, p) determines the outcome of the mechanism, and T is a tree, called implementation tree, such that:
• Every leaf ℓ of the tree is labeled with a possible outcome of the mechanism (f(ℓ), p(ℓ)), where f(ℓ) ∈ S and p(ℓ) ∈ R^n;
• Each node u in the implementation tree T defines the following:
– An agent i = i(u) to whom the mechanism makes a query. Each possible answer to this query leads to a different child of u.
– A subdomain D(u) = (D_i(u), D_{−i}(u)) containing all types that are compatible with u, i.e., compatible with all the answers to the queries from the root down to node u. Specifically, the query at node u defines a partition of the current domain of i = i(u), D_i(u), into k ≥ 2 subdomains, one for each of the k children of node u. Thus, each of these children will have, as the domain of i, the subdomain of D_i(u) corresponding to a different answer of i at u, and an unchanged domain for the other agents.</p>
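      <p>As a data structure, an implementation tree of this kind can be sketched as follows; the field names and the split helper are our own illustration, not the paper's formalism:</p>

```python
# Minimal data sketch of an implementation-tree node: the queried agent i(u),
# the compatible domains D(u), one child per answer, an outcome at leaves.

from dataclasses import dataclass, field

@dataclass
class Node:
    agent: int = None            # i(u): agent queried at this node
    domains: dict = None         # D(u): current domain of every agent
    children: list = field(default_factory=list)
    outcome: tuple = None        # (f(l), p(l)), set only at leaves

def split(node, parts):
    """The query at `node` partitions the queried agent's domain into
    `parts` (k of them, k at least 2): one child per answer, with the
    other agents' domains unchanged."""
    for part in parts:
        doms = dict(node.domains)
        doms[node.agent] = part
        node.children.append(Node(domains=doms))
    return node.children

root = Node(agent=0, domains={0: {1, 2}, 1: {1, 2}})
left, right = split(root, [{1}, {2}])
print(left.domains, right.domains)
```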
      <p>Observe that, according to the definition above, for every profile b there is only one leaf ℓ = ℓ(b) such that b belongs to D(ℓ). Similarly, to each leaf ℓ there corresponds at least one profile b that belongs to D(ℓ). For this reason, we say that ℳ(b) = (f(ℓ), p(ℓ)).</p>
      <p>Two profiles b, b′ are said to diverge at a node u of T if this node has two children v, v′ such that b ∈ D(v), whereas b′ ∈ D(v′). For every such node u, we say that i(u) is the divergent agent at u.</p>
      <p>We are now ready to define obvious strategyproofness. An extensive-form mechanism ℳ is obviously strategy-proof (OSP) if for every agent i with real type t_i, for every vertex u such that i = i(u), for every b_{−i}, b′_{−i} (with b′_{−i} not necessarily different from b_{−i}), and for every b_i ∈ D_i, with b_i ≠ t_i, such that (t_i, b_{−i}) and (b_i, b′_{−i}) are compatible with u but diverge at u, it holds that u_i(t_i, ℳ(t_i, b_{−i})) ≥ u_i(t_i, ℳ(b_i, b′_{−i})). Roughly speaking, an OSP mechanism requires that, whenever agent i is asked to take a decision that depends on her type, the worst utility that she can get by behaving according to her true type is at least the best utility she can get by behaving differently.</p>
    </sec>
    <sec id="sec-3a">
      <title>4. Cycle-monotonicity Characterizes OSP Mechanisms</title>
      <p>We next describe the main tools needed for our characterization: OSP can be characterized by the absence of negative-weight cycles in a suitable weighted graph over the possible strategy profiles. Specifically, we consider a mechanism ℳ with implementation tree T for a social choice function f, and define:
• Separating Node: A node u in the implementation tree T is (a, b)-separating for agent i = i(u) if a and b are compatible with u (that is, a, b ∈ D(u)), and the two types a_i and b_i belong to two different subdomains of the children of u (thus implying a_i ≠ b_i).
• OSP-graph: For every agent i, we define a directed weighted graph G_i having a node for each profile in D = D_1 × · · · × D_n. The graph contains the edge (a, b) if and only if T has some node u which is (a, b)-separating for i = i(u), and the weight of this edge is w_i(a, b) = a_i(b) − a_i(a). Throughout the paper, we will denote with a → b an edge (a, b) ∈ G_i, and with a ⇝ b a path between these two profiles in G_i.
• OSP Cycle Monotonicity (OSP CMON): OSP cycle monotonicity (OSP CMON) holds if, for all i, the graph G_i does not contain negative-weight cycles. Moreover, OSP two-cycle monotonicity (OSP 2CMON) holds if the same is true when considering cycles of length two only, i.e., cycles with only two edges. Sometimes, we will simply say CMON and 2CMON below.</p>
      <p>Theorem 1. A mechanism with implementation tree T for a social choice function f is OSP on finite domains if and only if OSP CMON holds. Moreover, for any OSP mechanism ℳ = (f, p, T) where T is not a binary tree, there is an OSP mechanism ℳ′ = (f, p, T′) where T′ is a binary tree.</p>
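      <p>To illustrate the machinery behind Theorem 1, the OSP-graph and the negative-cycle check can be sketched on a toy instance; the setting below (a two-agent procurement problem with type domains {1, 2} and a descending query tree) is our own assumption, not an example from the paper:</p>

```python
# Toy sketch: build the OSP-graph for a 2-agent procurement instance and
# look for negative-weight cycles with Bellman-Ford (the CMON check).

from itertools import product

D = (1, 2)
profiles = list(product(D, D))

def f(b):
    """Social choice: select a cheapest agent (ties go to agent 0)."""
    return 0 if b[1] >= b[0] else 1

def cost(i, t, b):
    """Cost of agent i with type t for outcome f(b): t if selected, else 0."""
    return t if f(b) == i else 0

def separating_pairs(i):
    """Profile pairs (a, b) separated at some node where i is queried:
    agent 0 is split at the root; agent 1 only after agent 0 answered 2."""
    for a, b in product(profiles, repeat=2):
        if i == 0 and a[0] != b[0]:
            yield a, b
        if i == 1 and a[0] == b[0] == 2 and a[1] != b[1]:
            yield a, b

def osp_graph(i):
    """Edge weights w_i(a, b): cost of type a_i on f(b) minus on f(a)."""
    return {(a, b): cost(i, a[i], b) - cost(i, a[i], a)
            for a, b in separating_pairs(i)}

def has_negative_cycle(edges, nodes):
    """Bellman-Ford from all nodes at once; a relaxation surviving
    len(nodes) full rounds witnesses a negative-weight cycle."""
    dist = {v: 0 for v in nodes}
    changed = False
    for _ in range(len(nodes)):
        changed = False
        for (u, v), w in edges.items():
            if dist[v] > dist[u] + w:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    return changed

for i in (0, 1):
    print("agent", i, "negative cycle:", has_negative_cycle(osp_graph(i), profiles))
```

<p>On this instance neither OSP-graph contains a negative-weight cycle, consistently with the descending (greedy) query tree being OSP.</p>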
      <p>Given the result above, we henceforth assume that the agents have finite domains and that the implementation trees of our mechanisms are binary.</p>
    </sec>
    <sec id="sec-3b">
      <title>5. Algorithmic Characterization of OSP Mechanisms</title>
      <p>We first observe that it is w.l.o.g. to restrict to a specific class of mechanisms, that we call ordered. Specifically, let ℳ = (f, p, T) be an extensive-form mechanism, and let u ∈ T be a node where i = i(u) and D_i(u) is separated into L and R. We say that the query at u is ordered if for all a, b ∈ D_i(u) with a in L and b in R, we have a &lt; b and ℒ(u)(a) ⪯ ℒ(u)(b). Then, we say that a mechanism is ordered if it only makes ordered queries. The next result shows that we can focus on ordered mechanisms without loss of generality as long as we are interested in OSP.</p>
      <p>Theorem 2. Any OSP mechanism ℳ = (f, p, T) can be transformed into an equivalent OSP mechanism ℳ′ = (f, p, T′) where all queries in T′ are ordered.</p>
      <p>In order to provide our algorithmic characterization, we begin by defining the concept of antimonotone types and of pivots for a pair of types. We say that two types t(1) &gt; t(2) of agent i for which there are profiles b(1) and b(2) such that f_i(b(1)) &gt; f_i(b(2)) are antimonotone, and we call the profiles b(1) and b(2) witnesses of antimonotonicity of t(1) and t(2). Given a node u and a pair of types t(1), t(2) ∈ D_i(u), we say that types t(α), t(β) are pivots for t(1) and t(2) if:
• they are separated from t(1), t(2) respectively at nodes u(α), u(β) ∈ T that are ancestors of u with i(u(α)) = i(u(β)) = i;
• for a ∈ ℒ(u)(t(1)) and b ∈ ℒ(u)(t(2)) with a &gt; b, there are c ∈ ℒ(u(α))(t(α)) with c ≥ a, and d ∈ ℒ(u(β))(t(β)) with d ≤ b.</p>
      <p>For a label a ∈ ℒ(u)(t) of some type t at node u ∈ T, we call b_{−i} an a-buddy for t at u if b_{−i} ∈ D_{−i}(u) and f_i(t, b_{−i}) = a. Given a node u and a pair of types t(1), t(2) ∈ D_i(u), we say that the pivots t(α), t(β) above are extreme pivots if they are pivots for t(1) and t(2) and, for all k, ℓ, h, m ≥ 1 and all paths P1, P2, P3, P4 in G_i defined as follows:
P1 := (t(1), b^a_{−i}) → a(1) ⇝ a(k) → (t(α), b^c_{−i})
P2 := (t(β), b^d_{−i}) → c(1) ⇝ c(ℓ) → (t(2), b^b_{−i})
P3 := (t(2), b^b_{−i}) → c(ℓ+1) ⇝ c(ℓ+h) → (t(β), b^d_{−i})
P4 := (t(α), b^c_{−i}) → a(k+1) ⇝ a(k+m) → (t(1), b^a_{−i}),
where b^a_{−i} is an a-buddy for t(1) at u, b^b_{−i} is a b-buddy for t(2) at u, b^c_{−i} is a c-buddy for t(α) at u(α), b^d_{−i} is a d-buddy for t(β) at u(β), and a(1), . . . , a(k+m) and c(1), . . . , c(ℓ+h) are profiles in D, it holds that w(P1) + w(P2) + w(P3) + w(P4) ≥ 0.</p>
      <p>We are now ready to provide the definition of the mechanism format that characterizes OSP: A mechanism ℳ = (f, p, T) is three-way greedy if all its queries are ordered and, for all internal nodes u ∈ T and all antimonotone types t(1) and t(2) in D_i(u), any pair of pivots t(α) and t(β) for t(1) and t(2) are extreme. We then have the following theorem:</p>
      <p>Theorem 3. An OSP mechanism ℳ implementing f exists if and only if a three-way greedy mechanism implementing f exists.</p>
      <p>To make sense of the notion (and the name) of three-way greedy mechanisms, we now explore a few of their properties. Let us first assume that we would like to avoid the introduction of pivots whenever there are antimonotone types (this surely satisfies the definition of three-way greedy mechanism). How can a mechanism avoid two pivots? Clearly, the mechanism can query an agent in a greedy fashion (i.e., by querying about the best type that has not yet been queried, and in case of positive answer, by guaranteeing her an outcome at least as good as the one she may achieve in case of negative answer) or in a reverse greedy fashion (i.e., by asking her whether her type is the worst that has not yet been queried, and in case of positive answer, by guaranteeing her an outcome at least as bad as the one she may achieve in case of negative answer). The third possibility is for the mechanism to first query an agent about whether her type is large or small (with the exact threshold defining large or small types depending on the problem at hand), whilst ensuring that a label for a large type is never better than the label of a small type, and then, in case of large types, proceeding by querying the agent greedily, whereas in the case of small types, the mechanism queries the agent in a reverse greedy fashion. These three ways of ensuring that no two pivots exist justify the name of our mechanism. Actually, the definition of three-way greedy mechanisms allows pivots to exist, as long as the one with the small (large) label is large (small) enough. Here, the thresholds for these pivots to be considered small/large enough depend on cycles that go through the four aforementioned points, cf. the definitions of the paths in the OSP-graph.</p>
      <p>Interestingly, this leads to an even simpler characterization in the case of binary allocation problems. Indeed, with only two outcomes available, the only pivots possible must have outcomes that are equal to those of the two antimonotone types. This implies that the existence of pivots leads to negative-weight cycles. Hence, the only way to satisfy the definition of three-way greedy mechanism is to avoid pivots, which in turn means that the mechanism has to interact with each agent either in a greedy fashion or in a reverse greedy fashion as long as there are still antimonotone types (for this reason, we term such mechanisms two-way greedy mechanisms).</p>
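      <p>The three query patterns just discussed (greedy, reverse greedy, and the large/small split) can be sketched as a toy simulation; the split threshold and the truthful answering strategy below are our own assumptions for illustration:</p>

```python
# Schematic sketch of the three query patterns of a three-way greedy
# mechanism, for one agent with a finite domain of cost types
# (lower cost = better type). `answer` plays a truthful agent.

def reveal(domain, true_type, mode="greedy", threshold=None):
    """Return the type the mechanism learns, and the query transcript."""
    answer = lambda t: t == true_type
    transcript = []

    def scan(types):                     # ask the given types in order
        for t in types:
            transcript.append(t)
            if answer(t):
                return t
        return types[-1]

    best_first = sorted(domain)          # greedy: best (cheapest) first
    worst_first = sorted(domain, reverse=True)

    if mode == "greedy":
        learned = scan(best_first)
    elif mode == "reverse":
        learned = scan(worst_first)
    else:                                # "three-way": split, then scan
        transcript.append(("large?", threshold))
        if true_type > threshold:        # large types: continue greedily
            learned = scan([t for t in best_first if t > threshold])
        else:                            # small types: reverse greedily
            learned = scan([t for t in worst_first if threshold >= t])
    return learned, transcript

print(reveal([1, 2, 3, 4], true_type=3, mode="three-way", threshold=2))
```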
    </sec>
    <sec id="sec-4">
      <title>6. Payments</title>
      <p>Theorem 3 is essentially existential, since it does not provide explicit payments. The existence of the payments follows from Theorem 1: the payments for a particular player are defined therein via shortest paths in the corresponding OSP-graph. However, these graphs have in general exponential size with respect to the description of the instance, meaning that this approach is infeasible from a computational point of view. Moreover, the implicit definition of payments “hides” the simplicity of the decision making of agents facing an OSP mechanism. We next show that these payments actually have a simple structure.</p>
      <p>To this aim, let ℳ be a mechanism with a three-way greedy implementation tree for a social choice function f. We say that the outcomes corresponding to bid profiles a and b are equivalent for agent i, denoted as a ≡_i b, whenever f_i(a) = f_i(b), and that agent i prefers a to b, denoted as a ≻_i b, whenever f_i(a) &gt; f_i(b). Hence, we can partition the type profiles into equivalence classes F_0, . . . , F_k, for some k ≥ 0, such that F_0 = {b : f_i(b) = min_a f_i(a)}, i.e., it contains all bid profiles returning the minimum outcome to i, and F_j = {b : f_i(b) = min_{a ∉ F_0 ∪ · · · ∪ F_{j−1}} f_i(a)}, i.e., it contains all bid profiles returning to i the smallest outcome larger than the one returned by profiles in the previous equivalence classes. We also define F_{&lt;j} = ⋃_{ℓ=0}^{j−1} F_ℓ and F_{&gt;j} = ⋃_{ℓ=j+1}^{k} F_ℓ. Moreover, for j = 0, . . . , k, we also let o_j = f_i(b) for some b ∈ F_j. Finally, given a profile b′, we will say that it is related to b if b and b′ are either not separated until agent i is queried about type t_i, or they have been separated by i. Now, for j = 0, . . . , k, and every b, let τ_b(j) = max {b′_i : b′ ∈ F_j, b′ related to b}. That is, τ_b(j) is the largest bid which may cause the assignment of outcome o_j to agent i on the path from the root of T until agent i is queried about t_i.</p>
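      <p>The partition into equivalence classes F_0, . . . , F_k can be sketched as follows; the toy social choice function below is our own example:</p>

```python
# Sketch: group bid profiles by the outcome f_i they yield for one agent,
# ordered from the smallest outcome (F_0) upwards (F_1, ..., F_k).

from itertools import product
from collections import defaultdict

def outcome_classes(profiles, f_i):
    by_outcome = defaultdict(list)
    for b in profiles:
        by_outcome[f_i(b)].append(b)
    ordered = sorted(by_outcome)            # outcomes o_0, ..., o_k ascending
    return [by_outcome[o] for o in ordered], ordered

profiles = list(product((1, 2), repeat=2))
f_i = lambda b: 1 if b[1] >= b[0] else 0    # toy: agent 0's workload (1 or 0)
classes, outs = outcome_classes(profiles, f_i)
print(outs, classes)
```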
      <p>We will start by defining the payment for an agent  that interacts with this mechanism in a
reverse greedy fashion (i.e., the agent is queried for the worst type not yet queried, and upon a
positive answer she receives an outcome not larger than the outcome received by declaring a
better type).</p>
      <p>Proposition 4. Let ℳ be a mechanism with a three-way greedy implementation tree and let i be an agent interacting with ℳ in a reverse greedy way. Then truthfulness is an obviously dominant strategy for i if for every b, p_i(b) = o_k τ_b(k) + ∑_{j=0}^{k−1} o_j (τ_b(j) − τ_b(j+1)).</p>
      <p>It is immediate to check that payments defined in Proposition 4 are essentially the same as
strategyproof payments as defined in [14].</p>
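      <p>As a numeric sanity check of the shape of these threshold-style payments (in our notation above; the outcome and threshold values below are hypothetical toy numbers):</p>

```python
# Sketch of the Proposition 4 payment shape, with hypothetical outcomes
# o_0, ..., o_k (ascending) and thresholds tau(0) >= ... >= tau(k).

def reverse_greedy_payment(o, tau):
    """p_i = o_k * tau[k] + sum over j of o_j * (tau[j] - tau[j+1])."""
    k = len(o) - 1
    p = o[k] * tau[k]
    for j in range(k):
        p += o[j] * (tau[j] - tau[j + 1])
    return p

# Binary allocation toy: o = [0, 1] (not selected / selected); the selected
# agent is paid her threshold bid tau(1), as in classic truthful payments.
print(reverse_greedy_payment([0, 1], [10, 4]))   # prints 4
```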
      <p>Let us now consider an agent i that interacts with the mechanism in a greedy fashion (i.e., the agent is queried for the best type not yet queried, and upon a positive answer she receives an outcome that is not smaller than the outcome received by declaring a worse type). To this aim we let t(j), for j = 1, . . . , k, be the type corresponding to the first query in the tree that, if positively answered, will assign to agent i the outcome o_j, that is, the smallest type on which a query is issued with promised outcome o_j. Moreover, we let σ(0) = max_b b_i and, for j = 1, . . . , k, σ(j) = min {b_i : b ∈ F_{&lt;j}, b_i &gt; t(j)}. That is, σ(j) is the smallest bid which may cause the assignment of an outcome worse than o_j to agent i after the query t(j). Observe that for each j such that there is b_{−i} with (t(j), b_{−i}) ∈ F_j, we have that σ(j) ≤ τ_b(j). We next show that payments in this case have a structure very similar to the one described above, but they fail to match SP payments.</p>
      <p>Proposition 5. Let ℳ be a mechanism with a three-way greedy implementation tree and let i be an agent interacting with ℳ in a greedy way. Let F_0, . . . , F_k be the partition of type profiles into equivalence classes for i as defined above. Then truthfulness is an obviously dominant strategy for i if for every b ∈ F_j, p_i(b) = min {0, min_{b′ : ∃ b′_{−i}, b′ ∈ F_{&gt;0}} (p_i(b′) − b′_i(b′))} if j = 0, and p_i(b) = o_j σ(j) + ∑_{ℓ=0}^{j−1} o_ℓ (σ(ℓ) − σ(ℓ+1)) otherwise.</p>
      <p>Note that there are two main differences between the payments defined in Proposition 5 and the payments provided in Proposition 4: first, we changed the threshold for outcome o_j from the SP threshold τ_b(j) to a smaller threshold σ(j); second, the payment associated with the lowest outcome depends not only on the outcome, but also on when this outcome is achieved.</p>
      <p>The third way our mechanism can interact with an agent consists in first asking her to separate the domain into good and bad types (with outcomes for good types being not worse than outcomes for bad types), and then proceeding greedily over bad types and reverse greedily over good types. Hence, it is not surprising that in this case payments are a composition of the ones described above.</p>
    </sec>
    <sec id="sec-5">
      <title>7. Applications</title>
      <p>Our algorithmic characterization of OSP mechanisms, showing a connection with a certain
family of greedy algorithms, allows us to show quite easily the existence of a host of new
mechanisms, and to provide a set of upper bounds on their approximation guarantee. We
summarize some of these results in Table 1.</p>
    </sec>
    <sec id="sec-6">
      <title>8. Conclusions</title>
      <p>We believe that our characterization helps in the construction of simple incentive-compatible mechanisms for real MAS, in which agents may have imperfect rationality. Clearly, there are many more settings than the ones shown in Table 1 in which it would be interesting to design these mechanisms, or to evaluate their approximation guarantee.</p>
      <p>Moreover, we believe that our approach can be extended to work also with other definitions of simple mechanisms based on extensive-form mechanisms, such as Expected OSP [16], OSP with lookahead [17], k-step OSP [18], and Non-Obvious Manipulations [19, 20].</p>
      <p>[8] S. Li, Obviously strategy-proof mechanisms, American Economic Review 107 (2017) 3257–3287.
[9] D. Ferraioli, C. Ventre, Obvious strategyproofness needs monitoring for good approximations, in: Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[10] M. Babaioff, N. Immorlica, B. Lucier, S. M. Weinberg, A simple and approximately optimal mechanism for an additive buyer, in: FOCS 2014, 2014, pp. 21–30.
[11] M. Adamczyk, A. Borodin, D. Ferraioli, B. de Keijzer, S. Leonardi, Sequential posted price mechanisms with correlated valuations, in: WINE 2015, 2015, pp. 1–15.
[12] P. Milgrom, I. Segal, Clock auctions and radio spectrum reallocation, Journal of Political Economy (2020).
[13] A. Mackenzie, A revelation principle for obviously strategy-proof implementation, Research Memorandum 014, Maastricht University, Graduate School of Business and Economics (GSBE), 2018.
[14] A. Archer, É. Tardos, Truthful mechanisms for one-parameter agents, in: FOCS 2001, IEEE Computer Society, 2001, pp. 482–491.
[15] D. Hausmann, B. Korte, T. A. Jenkyns, Worst case analysis of greedy type algorithms for independence systems, Combinatorial Optimization (1980) 120–131.
[16] D. Ferraioli, C. Ventre, Probabilistic verification for obviously strategyproof mechanisms, in: IJCAI 2018, 2018, pp. 240–246.
[17] D. Ferraioli, C. Ventre, Obvious strategyproofness, bounded rationality and approximation, Theory Comput. Syst. 66 (2022) 696–720.
[18] P. Troyan, T. Morrill, Obvious manipulations, Journal of Economic Theory 185 (2020) 104970.
[19] T. Archbold, B. De Keijzer, C. Ventre, Non-obvious manipulability for single-parameter agents and bilateral trade, in: AAMAS 2023, 2023.
[20] T. Archbold, B. De Keijzer, C. Ventre, Non-obvious manipulability in extensive-form mechanisms: the revelation principle for single-parameter agents, in: IJCAI 2023, 2023.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Ausubel</surname>
          </string-name>
          ,
          <article-title>An eficient ascending-bid auction for multiple objects</article-title>
          ,
          <source>American Economic Review</source>
          <volume>94</volume>
          (
          <year>2004</year>
          )
          <fpage>1452</fpage>
          -
          <lpage>1475</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Nisan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Roughgarden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tardos</surname>
          </string-name>
          , V. Vazirani (Eds.),
          <source>Algorithmic Game Theory</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferraioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ventre</surname>
          </string-name>
          ,
          <article-title>Approximation guarantee of OSP mechanisms: The case of machine scheduling and facility location</article-title>
          ,
          <source>Algorithmica</source>
          <volume>83</volume>
          (
          <year>2021</year>
          )
          <fpage>695</fpage>
          -
          <lpage>725</lpage>
          . URL: https://doi.org/10.1007/s00453-020-00771-x.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferraioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Meier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Penna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ventre</surname>
          </string-name>
          ,
          <article-title>New constructions of obviously strategyproof mechanisms</article-title>
          ,
          <source>Mathematics of Operations Research</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferraioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Penna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ventre</surname>
          </string-name>
          ,
          <article-title>Two-way greedy: Algorithms for imperfect rationality</article-title>
          ,
          <source>in: WINE</source>
          , volume
          <volume>13112</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2021</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferraioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ventre</surname>
          </string-name>
          ,
          <article-title>Explicit payments for obviously strategyproof mechanisms</article-title>
          , in: AAMAS, ACM,
          <year>2023</year>
          , pp.
          <fpage>2125</fpage>
          -
          <lpage>2133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferraioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ventre</surname>
          </string-name>
          ,
          <article-title>On the connection between greedy algorithms and imperfect rationality</article-title>
          , in: EC, ACM,
          <year>2023</year>
          , pp.
          <fpage>657</fpage>
          -
          <lpage>677</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>