<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Vocabulary Alignment for Agents with Flexible Protocols</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Paula</forename><surname>Chocron</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">IIIA-CSIC</orgName>
								<address>
									<settlement>Bellaterra</settlement>
									<region>Catalonia</region>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marco</forename><surname>Schorlemmer</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">IIIA-CSIC</orgName>
								<address>
									<settlement>Bellaterra</settlement>
									<region>Catalonia</region>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Vocabulary Alignment for Agents with Flexible Protocols</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F346B3B1926CB2AB7C056E02AA8A2891</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent work we presented an interaction-based alternative to the problem of vocabulary alignment in multi-agent systems. Previous approaches required external knowledge or a previously shared meta-language to find a translation between the vocabularies of heterogeneous agents. Instead, we assume that agents share the knowledge of how to perform the tasks for which they need to collaborate, and we show how they can learn alignments from repeated interaction. The result is an online, on-demand alignment method that uses only the information agents need in order to interact. Until now, we have always required agents to share the complete procedural knowledge of the task. In this paper we present an extension that allows us to consider, in a meaningful way, differences between the agents' specifications. To this aim, we propose a new kind of protocol whose constraints carry weights, representing the punishment received when they are violated. We adapt the learning techniques from previous work to these new protocols, and present preliminary experiments for two possible uses.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The problem of aligning vocabularies has been identified as particularly relevant in the context of dynamic and open environments such as multi-agent systems <ref type="bibr" target="#b4">[5]</ref>. Participants in such systems have different backgrounds and may therefore not share the same vocabulary, making alignments necessary for them to interact meaningfully with each other. This problem has been considered over the past decades from two perspectives. Some approaches <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b3">4]</ref> assume the existence of external elements, such as physical objects, that all agents perceive in common, and explore how those can be used to explain the meaning of words. A second group of techniques <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> provides explicit ways of learning or agreeing on an alignment, such as argumentation techniques, explanations, or definitions. These techniques always require agents to share a meta-language. The question of how to communicate with heterogeneous interlocutors when neither a physical context nor a meta-language is available remains practically unexplored.</p><p>In recent work <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3]</ref> we proposed a novel approach to the problem of multi-agent vocabulary heterogeneity. In our techniques, which we call interaction-based, the alignment is performed considering only the context given by the interactions in which agents engage. In this work we assume the interacting agents share the knowledge of how to perform a task or, more concretely, the specification of an interaction. As an example, consider an ordering-drinks interaction between an English-speaking customer and an Italian waiter. 
Since the agents share the dynamics of the conversation, they will, for example, both know that the waiter will ask for the colour if wine is ordered. However, the words used by each agent can be different (vino and colore instead of wine and colour).</p><p>In previous work, we developed techniques that agents can use to learn progressively, from the experience of interacting, which mappings lead to successful interactions. After several dialogues, agents learn an alignment that they can use to order and deliver drinks with that particular interlocutor.</p><p>An important restriction that we (until now) imposed on agents is that they have to share the entire structure of the interactions they perform. This means that they need to agree on exactly how each interaction should be performed, modulo an alignment. This was a reasonable decision considering that our goal was to study how agents can learn alignments from shared interaction specifications. However, it is in practice a strong restriction, and it raises the question of what agents can learn if the protocol is not completely shared. Our answer was that if the differences are small, they will be ignored by the learning methods, which work statistically. If, instead, there are significant differences, agents have nothing to learn, since there is no alignment that is useful for performing the tasks together. However, we had never performed a systematic analysis of how these differences affect the learning process.</p><p>In this paper we present an extension of the problem we studied in previous work that considers the case in which agents do not share the complete interaction specification. To this aim, we present a new kind of interaction specification, based on the LTL constraint protocols that we used in <ref type="bibr" target="#b2">[3]</ref>. These new protocols include a weight for each interaction rule, introducing a meaningful way to express differences between specifications. 
We provide an adaptation of our learning techniques to these protocols, as well as a way of measuring the quality of an alignment. These ideas can be seen from two points of view. One is to consider agents that use unrelated protocols, and to analyse how the techniques can find the alignment that minimizes the punishment. Alternatively, we can suppose agents use protocols that share a common portion and then diverge in some local rules, and use the method to find the alignment for the common part. In Section 4 we present preliminary experiments for these two applications.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Vocabulary Alignment in (Rigidly) Openly Specified Interactions</head><p>In this section we summarize the approach that we proposed in <ref type="bibr" target="#b2">[3]</ref> to let agents infer alignments from the experience of interacting. We consider two agents a 1 and a 2 that have different vocabularies V 1 and V 2 respectively. Agents interact repeatedly using, for each interaction, the specification defined in a protocol. In this work we used open specifications consisting of rules that constrain what can be said, without completely specifying the flow of the interaction. We chose to use a reduced version of the ConDec protocols <ref type="bibr" target="#b5">[6]</ref>, which define the constraints in linear temporal logic (LTL).</p><p>To define the protocols formally, let a vocabulary V be a set of words and consider a pair of agent IDs A = {a 1 , a 2 }. The set of messages M V,A is the set of all tuples a i , v where v ∈ V and i ∈ {1, 2}. An interaction protocol over V and A is a tuple M V,A ,C , where C is a set of constraints on how messages can be uttered. Constraints are conveniently renamed LTL sentences, shown in Table <ref type="table">1</ref>  <ref type="foot" target="#foot_0">1</ref> . In this table, n ∈ N, a, b ∈ M V,A , and ♦, □, ○ are the LTL operators that mean eventually, globally, and next respectively. These constraints are generally divided into two classes. Existential constraints (absence) predicate over how many times a message can be uttered, while relational constraints describe binary relations between two utterances. In a protocol M V,A ,C , the set of constraints C needs to be a subset of the possible constraints over M V,A . 
</p><formula xml:id="formula_0">!correlation(a, b): ♦a → ¬♦b; !response(a, b): □(a → ¬♦b); !before(a, b): □(♦b → ¬a); premise(a, b): □(○b → a); !premise(a, b): □(○b → ¬a); imm_after(a, b): □(a → ○b); !imm_after(a, b): □(a → ○¬b)</formula><p>During an interaction, a 1 and a 2 have protocols</p><formula xml:id="formula_1">P 1 = M A,V 1 ,C 1 and P 2 = M A,V 2 ,C<label>2</label></formula><p>respectively, and each agent sends messages to its interlocutor respecting the constraints in its protocol. As an example, consider again the ordering-drinks scenario, in which the waiter w speaks Italian and the customer c speaks English. In this situation, the agents could be using the following two protocols P w and P c . P w = {premise( w, da bere , c, birra ), premise( w, da bere , c, vino ), !response( c, birra , w, colore ), imm_after( c, vino , w, colore )} P c = {premise( w, to drink , c, beer ), premise( w, to drink , c, wine ), !response( c, beer , w, color ), imm_after( c, wine , w, color )}</p><p>A sequence of messages is called an interaction. We say an interaction is possible for a protocol if it does not violate any of its constraints; more formally, if it is an LTL-model of each of its constraints.</p><p>As we mentioned, until now we required agents to share the complete structure of the protocols they use for an interaction. To formalize this restriction, we defined the notion of compatibility between protocols. Intuitively, two protocols are compatible if the interactions that are possible for each protocol are equivalent under an alignment. Given two vocabularies V 1 and V 2 , an alignment from</p><formula xml:id="formula_2">V 2 to V 1 is a partial function α : V 2 → V 1 .</formula><p>An interaction in vocabulary V 2 whose words are all in the domain of α can be translated to an interaction in V 1 by translating each word.</p></div>
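Under the finite-trace reading used here, whether an interaction is possible for a protocol can be checked mechanically. The following Python sketch encodes the constraints above as checks over message sequences; it is an illustration only, assuming one message per step and a simple finite-trace semantics for ♦, □ and ○ (the class and function names are ours, not from the paper).

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Message = Tuple[str, str]  # (agent id, word), e.g. ("c", "vino")

@dataclass(frozen=True)
class Constraint:
    kind: str                      # "absence", "!correlation", "!response", ...
    a: Message
    b: Optional[Message] = None    # second message, for relational constraints
    weight: float = 1.0            # used only by flexible protocols (Section 3)

def violates(trace: List[Message], c: Constraint) -> bool:
    """True if the finite interaction `trace` falsifies constraint c."""
    pos_a = [i for i, m in enumerate(trace) if m == c.a]
    pos_b = [i for i, m in enumerate(trace) if m == c.b] if c.b else []
    if c.kind == "absence":        # ¬♦a: a never occurs
        return bool(pos_a)
    if c.kind == "!correlation":   # ♦a → ¬♦b: a and b never both occur
        return bool(pos_a) and bool(pos_b)
    if c.kind == "!response":      # □(a → ¬♦b): no b at or after an a
        return bool(pos_a) and any(j > min(pos_a) for j in pos_b)
    if c.kind == "premise":        # □(○b → a): a non-initial b only right after a
        return any(i > 0 and trace[i - 1] != c.a for i in pos_b)
    if c.kind == "!premise":       # □(○b → ¬a): b never right after a
        return any(i > 0 and trace[i - 1] == c.a for i in pos_b)
    if c.kind == "imm_after":      # □(a → ○b): every a immediately followed by b
        return any(i + 1 >= len(trace) or trace[i + 1] != c.b for i in pos_a)
    if c.kind == "!imm_after":     # □(a → ○¬b): no a immediately followed by b
        return any(i + 1 < len(trace) and trace[i + 1] == c.b for i in pos_a)
    raise ValueError(f"unknown constraint kind: {c.kind}")
```

Note that for imm_after this sketch treats an a at the final position as a violation, which is appropriate only for completed interactions; during an ongoing interaction a prefix check would be used instead.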
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 1 Consider two interaction protocols</head><formula xml:id="formula_3">P 1 = M A,V 1 ,C 1 and P 2 = M A,V 2 ,C 2 ,</formula><p>and let Int(P i ) be the set of possible interactions in V i . P 1 and P 2 are compatible if there exists a partial function α :</p><formula xml:id="formula_4">V 2 → V 1 such that Int(P 1 ) = α(Int(P 2 ))</formula><p>If we know the alignment α for which the condition holds, we say they are compatible under α.</p><p>Since agents send messages composed of their own words, when they receive a foreign message they need to interpret it in their local vocabulary to continue the interaction. The objective of the learning techniques we proposed in <ref type="bibr" target="#b2">[3]</ref> is to infer the alignment α under which the protocols are compatible, which would allow agents to correctly interpret the messages they receive.</p><p>The approach we present for learning alignments from interactions is simple. From now on we will use the point of view of a 1 , but everything is analogous for its interlocutor. The agent maintains a confidence distribution ω :</p><formula xml:id="formula_5">V 2 × V 1 → [0, 1]</formula><p>that assigns a value to each mapping between a known foreign word (from the subset of V 2 observed so far) and a word in its own vocabulary. These values are updated according to what agents observe in interactions. We present here only the simplest of the methods we proposed in <ref type="bibr" target="#b2">[3]</ref>. Briefly, when an agent receives a word, it punishes all interpretations that are not possible because they violate some constraint. Suppose a 1 receives v 2 after observing interaction i, and let r be a punishment parameter and i . m the operation of appending a message m to i. The update is described in Eq. <ref type="bibr" target="#b0">(1)</ref>.</p><formula xml:id="formula_6">ω(v 2 , v 1 ) := ω(v 2 , v 1 ) • (1 − r) if i . a 2 , v 1 is not possible for P 1 ; ω(v 2 , v 1 ) otherwise<label>(1)</label></formula><p>For example, if the customer receives colore right after saying wine, it infers that colore cannot mean to drink. By interacting repeatedly with different protocols, agents gradually learn an alignment between their vocabularies.</p></div>
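The update in Eq. (1) can be sketched in a few lines of Python. Here `possible` stands for the LTL possibility check of Section 2, and the function and parameter names are illustrative assumptions, not the authors' implementation; missing entries of ω are taken to default to 1.0.

```python
def update_rigid(omega, interaction, v2, vocab1, possible, protocol, r=0.3):
    """Eq. (1): multiply by (1 - r) the confidence of every interpretation
    v1 of the received foreign word v2 that would make the extended
    interaction impossible for our protocol."""
    for v1 in vocab1:
        extended = interaction + [("a2", v1)]   # append <a2, v1> to i
        if not possible(extended, protocol):
            omega[(v2, v1)] = omega.get((v2, v1), 1.0) * (1 - r)
    return omega
```

For instance, if interpreting a received word as "to drink" violates a constraint while "color" does not, only the first mapping's confidence is scaled down.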
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Flexible Protocols</head><p>We now propose an approach that considers more carefully the question of to what extent agents can align their vocabularies when their protocols are different. To this aim, we introduce flexible protocols, in which each constraint has a weight that represents the punishment received when that constraint is violated. This punishment can be interpreted, for example, as a way of expressing preferences (heavier constraints are those that agents prefer not to violate), or as a degree of confidence in a constraint, when there is uncertainty about the interaction context. Definition 2 A flexible protocol P f over a vocabulary V and a set of agents A is a tuple M V,A ,C f . The set C f is composed of pairs c, ρ , where c is one of the constraints in Table <ref type="table">1</ref> over M V,A , and ρ ∈ [0, 1].</p><p>As an example, consider again the ordering-drinks scenario. Assume the agents have the same constraints as before, now with high weight, but the waiter also believes that the customer should not order two different alcoholic beverages in one interaction. This constraint, however, is less strict than the others, since the waiter is willing to accept that behaviour sometimes. The protocols would look as follows.</p><p>P w = { premise( w, da bere , c, birra ), 1 , premise( w, da bere , c, vino ), 1 , !response( c, birra , w, colore ), 1 , imm_after( c, vino , w, colore ), 1 , !correlation( c, vino , c, birra ), 0.5 } P c = { premise( w, to drink , c, beer ), 1 , premise( w, to drink , c, wine ), 1 , !response( c, beer , w, color ), 1 , imm_after( c, wine , w, color ), 1 }</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Updating Technique</head><p>The core contribution of our previous work consists in the learning techniques that we proposed to update the agent's mapping confidence distribution. In rigid protocols the objective of the learning is to find a correct alignment α under which the protocols are compatible. In <ref type="bibr" target="#b2">[3]</ref> we showed that, to this aim, it is enough to consider whether a word is a possible interpretation for a received message. The notion of compatibility cannot be applied to a flexible protocol, since the question of whether an interaction is possible is no longer meaningful. When interacting with a flexible protocol, any sequence of messages can be accepted, but at some cost. This implies that there is no reference alignment α that agents can use to understand each other completely. Instead, each alignment will have an expected punishment that makes it better or worse. The goal now is to find an alignment that minimizes the punishment received while interacting.</p><p>To do so, agents need to take into account the weight of the rules that would be violated by each interpretation. We propose only one simple approach, which combines the punishment that would be received at a particular moment with the information that the agent already had.</p><p>As in the rigid case, agents that use flexible protocols have a confidence distribution ω that assigns a value to each possible mapping. Consider again agents a 1 , a 2 with flexible protocols P f j = M V j ,A ,C f j for j ∈ {1, 2}. Suppose a 1 receives a word v 2 from a 2 after interaction i. For a word v ∈ V 1 , let B(v, P f 1 ) be the set of all constraints c included in a tuple in C f 1 such that the interaction obtained by appending a 2 , v to i violates c, but i does not. Let ρ(P f , c) be the weight of constraint c in P f . 
Then, for each v ∈ V 1 and each br ∈ B(v, P f 1 ), agent a 1 iteratively updates:</p><formula xml:id="formula_7">ω(v 2 , v) := ω(v 2 , v) · (1 − ρ(P f 1 , br))</formula><p>After all updates are done, the values are normalized so that ∑ v∈V 1 ω(v 2 , v) = 1.</p></div>
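A Python sketch of this weighted update follows. The predicate `violates` stands for the finite-trace constraint check of Section 2, constraints are represented as (constraint, weight) pairs, and all names are illustrative assumptions; missing entries of ω default to 1.0.

```python
def update_flexible(omega, interaction, v2, vocab1, constraints, violates):
    """Flexible update of Section 3.1: each constraint newly violated by an
    interpretation v1 of the received word v2 scales omega(v2, v1) by
    (1 - weight); afterwards the row for v2 is renormalized to sum to 1."""
    for v1 in vocab1:
        extended = interaction + [("a2", v1)]   # append <a2, v1> to i
        for c, weight in constraints:
            # punish only constraints broken by the new message, not already by i
            if violates(extended, c) and not violates(interaction, c):
                omega[(v2, v1)] = omega.get((v2, v1), 1.0) * (1 - weight)
    total = sum(omega.get((v2, v1), 1.0) for v1 in vocab1)
    for v1 in vocab1:
        omega[(v2, v1)] = omega.get((v2, v1), 1.0) / total
    return omega
```

With a single constraint of weight 0.5 violated only by one interpretation, that interpretation keeps confidence 0.5 before normalization, i.e. one third of the mass after normalization over two candidates.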
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">A Distance Measure</head><p>An important question to address before adapting our techniques to the flexible case is when an alignment is good for these new protocols. In the rigid case we measured the quality of an alignment by computing its distance to the reference alignment α, using the standard precision and recall measures. In flexible protocols, where the objective is to minimize the received punishment, a quality measure should reflect how much agents would pay when using the alignments.</p><p>We propose a method that evaluates a pair of alignments for a given pair of protocols. Alignments are evaluated in pairs, and not by themselves, because the punishment that an agent receives depends on how its interlocutor interprets the words it utters. The objective is, therefore, to measure how well the agents would interact together when using the obtained alignments.</p><p>Our objective will be to measure the quality of the alignments that agents learn with our techniques. Since agents have only confidence distributions, we first need to explain how to obtain an alignment from them. For simplicity, in this explanation we will assume that these alignments are, for agent a 1 , a partial function α 1 : V 2 → V 1 . This alignment maps each foreign word known by the agent to the local word for which the mapping value ω is maximal. In practice, agents frequently do not have enough information to distinguish between possible mappings, so there are many mappings with maximal value. Therefore the alignment is actually a function α 1 : V 2 → P V 1 , where P S is the set of all subsets of a set S. 
To adapt the measure we present now to this kind of alignment, it is only necessary to consider α 1 and its analogue for a 2 , α 2 , as sets of possible alignments, and to compute the measure for each combination.</p><p>The straightforward way to compute the expected punishment for a pair of protocols and a pair of alignments consists in enumerating all interactions, filtering out the ones that could never actually occur, and computing the average of the punishment received for the rest. Even if the interactions have bounded length, this method is computationally very expensive, since the number of interactions grows exponentially with the length. Fortunately, for our particular set of constraints, it is enough to consider only the punishment received by interactions of length 2 to have a measure of the alignment's quality. This is because the constraints are unary or binary, so any violation that could happen would also occur in an interaction of length 2. In future work we plan to investigate in more depth the relation between the measure computed in this way and the actual expected punishment for longer interactions.</p><p>We consider only interactions that could be uttered, and discard all sequences of messages that would never occur in a real exchange. These impossible interactions are the ones in which an agent sends an utterance that causes an unnecessary punishment. The notion of utterable interactions is defined over abstract interactions. 
These are sequences as they are actually said, before any interpretation, where the messages uttered by a 1 have words in V 1 and the ones by a 2 have words in V 2 .</p><p>Definition 3 A two-message abstract interaction i = [ a j , w , a j ′ , w ′ ], for j, j ′ ∈ {1, 2}, is utterable for alignments α 1 , α 2 and protocols P 1 and P 2 if:</p><p>• [ a j , w ] does not violate any constraint in P j and • α j (i) does not violate more constraints for P j than α j ([ a j , w ]) does.</p><p>We call U P 1 ,P 2 ,α 1 ,α 2 the set of all utterable interactions for those protocols and alignments. Next we need to define the punishment for an interaction with a given alignment and protocol.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 4</head><p>The punishment for an interaction i and a protocol P (written ρ(P, i)) is the sum of the weights of all constraints that are violated by i in P.</p><p>The quality of alignments α 1 , α 2 for P 1 , P 2 is:</p><formula xml:id="formula_8">∑ i∈U P 1 ,P 2 ,α 1 ,α 2 ρ(P 1 , α 1 (i)) + ρ(P 2 , α 2 (i)) |U P 1 ,P 2 ,α 1 ,α 2 |</formula><p>This quality measure is bilateral: it measures the total amount paid by the two agents. It can also be made unilateral by considering the punishment for only one protocol, but it will still depend on both alignments. Note also that we defined the measure for a case in which all words have the same likelihood of being uttered, but it can easily be adapted to other cases. If we know the probability of uttering a given word, it is enough to multiply the punishment for an interaction i ∈ U P 1 ,P 2 ,α 1 ,α 2 by the likelihood of i. The extension is not so simple when agents have more complicated distributions for choosing an utterance, such as one in which the likelihood of sending a message depends on the previous interaction.</p></div>
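The quality measure of Definition 4 can be sketched directly, assuming single-valued alignments, a caller-supplied set of utterable abstract interactions, and a weighted-violation function ρ(P, i) as defined above; all names here are illustrative assumptions.

```python
def translate(interaction, alignment, own_agent):
    """Interpret the interlocutor's words through `alignment`; an agent's
    own words are left unchanged."""
    return [(ag, w) if ag == own_agent else (ag, alignment[w])
            for ag, w in interaction]

def quality(utterable, p1, p2, align1, align2, rho):
    """Average joint punishment over the utterable abstract interactions:
    each interaction is translated into each agent's vocabulary, and
    rho(P, i) sums the weights of the constraints of P that i violates.
    Lower values indicate better alignment pairs."""
    total = sum(rho(p1, translate(i, align1, "a1")) +
                rho(p2, translate(i, align2, "a2")) for i in utterable)
    return total / len(utterable)
```

The unilateral variant mentioned above would simply drop one of the two rho terms while keeping both translations.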
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental Evaluation</head><p>Flexible protocols, along with the updating technique that we proposed, can be analysed from two points of view. First, we can consider agents that have structurally different protocols for the same task, without requiring any a priori relation between them. In this case, the technique can be used as a tool to find an alignment that minimizes the punishment received by agents for that specific protocol. This can also be used for small sets of protocols.</p><p>A second point of view considers flexible protocols as local variations of a common compatible part. In this case we study how the technique lets agents find the correct alignment (the one that is useful for the common part) in spite of the differences. In the following sections we explain the experiments that we performed to study these two cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Learning an Alignment that Optimizes Punishment</head><p>In this set of experiments we analysed how our method can be used by agents that have different protocols, without any similarity assumption. The objective of the interacting agents is to find the alignment that would make them pay as little as possible. For each experiment we created a pair of flexible protocols, gave one to each agent, and let them interact for some time, observing the alignment that they obtained in the end. To assess the quality of the alignments, we used two different measures:</p><p>1. The real punishment received by agents when they used the alignment 2. The quality of the alignments with respect to the protocols, measured as explained in Section 3.2</p><p>Since we proposed the alignment quality measure as an expected value of the punishment received by agents, our hypothesis is that the curves obtained with it are similar to the ones that show the average received punishment. However, the distance measure only considers the expected punishment for interactions of length 2. Since agents use longer interactions, the value of the real punishment will be larger.</p><p>An experiment consists of two agents, each of them with a vocabulary and a protocol over it. In this preliminary experimentation the size of the vocabularies is 4 words. We performed experiments with different protocol sizes and total weights (the total weight is the sum of the weights of all constraints). Concretely, we used protocols with 6 and 8 rules, and total weights 3.0 and 5.0. In each experiment, both protocols have the same number of constraints and the same total weight. 
In an experiment, we let agents go through a learning phase in which they interacted n times, performing the experiment for n = [0, 2, 5, 10, 15, 20, 30]. During these interactions they updated their confidence distributions over mappings. All interactions were of length 6. After this training phase, we ran a test phase of 20 interactions, during which agents did not learn new information. Instead, they interpreted foreign words using the alignments obtained in the previous phase. For the second measure, we added up the total punishment that agents received during the test phase. We repeated the experimentation 10 times, for 10 different protocols. Figure <ref type="figure" target="#fig_0">1</ref> shows the results obtained for each experiment. We can see that with more training, agents find alignments that are better according to the measure, and that let them interact while receiving lower punishment. The distance measure seems to change in the same way as the actual received punishment, which indicates it is a good estimate. We would need a larger set of experiments to answer some interesting questions that are not reflected in this data, such as the relative importance of the number of constraints and the total weight.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Learning a Correct Alignment in Protocols with Variations</head><p>The objective of the second type of experiments is to analyse the learning techniques when applied by agents with protocols that diverge to some degree from a shared compatible part. To build this kind of protocol, we used four parameters:</p><p>r s : the number of shared constraints r d : the number of different constraints w s : the total weight for shared constraints w d : the total weight for different constraints We started from two vocabularies V 1 and V 2 and an alignment α between them, and created two protocols with r s constraints that were compatible under α. This was the shared part. We distributed the weight w s over those constraints (to simplify the experimentation, we did this uniformly). Then, we added r d different constraints to each protocol, and distributed the weight w d between them (not necessarily uniformly).</p><p>In each experiment, we let agents interact 100 times with different pairs of protocols, and measured when (and whether) they converged to α. It is important to make one comment here. In our previous work, it was enough to consider that agents had found α when their learned alignments were equivalent to it. This is because in rigid protocols, using α would always lead to correct interactions. This is no longer the case, since agents can receive punishments even when using α, which could make them change the alignment. For this reason, we considered that an agent had converged when it had spent 15 interactions without changing the alignment (but we recorded the interaction at which it first reached α, before those 15 interactions). The results can be seen in Figure <ref type="figure" target="#fig_1">2</ref>. We performed the experiment 10 times, with different protocols. The figure shows the number of converged experiments after each number of interactions. We used r d = {0, 1, 2} and w d = {0.0, 0.5, 1.0, 1.5, 2.0}. 
Some combinations were not performed, because the weight was impossible to distribute over that number of rules. We see that more, and heavier, diverging constraints make it more difficult to find α. We also observe an unexpected effect: even with the same total weight, more constraints slow down the convergence. We think this is because of how agents choose which word to utter. Now, they only utter words that do not violate any rule, so they are more restricted, which could make interactions end in failure earlier, or could make all interactions very similar to one another. Another point worth noting is that the difference between small weights cannot be seen clearly in these experiments (for example, see the case of r d = 2, w d = 0.0 vs r d = 2, w d = 0.5). We plan to study this effect more exhaustively in the future, with a larger set of experiments.</p><p>To gain a deeper understanding of how these techniques work, they should be compared to the performance of the original learning methods for rigid protocols when used with protocols that have some diverging constraints. In this way, we would be able to detect whether explicitly taking into account the weight of diverging constraints is better than treating them as noise.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>We introduced a new kind of protocol that allows us to investigate the problem of vocabulary alignment for agents that do not share exactly the same procedural knowledge of the tasks they perform together. This can mean either that they have unrelated specifications, or that they share a common part with a significant number of constraints while another portion differs. The technique that we propose to learn alignments in this case is a simple adaptation of the one we introduced in previous work, and is useful for both situations.</p><p>This work is still in a preliminary state, but we see it as a basis for many extensions. First of all, the experimental part needs to be improved, considering larger protocols and making a more accurate analysis. The current experiments should only be seen as preliminary tests that give an idea of how the method works and provide input for future experimentation. For example, in the first experiment it would be possible to study the optimality of the solutions the agents find, something that would be useful for understanding how the technique works. In addition, there are different questions to consider regarding the technique itself. We think it would be particularly interesting to consider interactions between agents with different kinds of preferences. For example, what kind of alignment would a pair of agents reach if one of them has very high preference values and the other very low ones?</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Results for a vocabulary of 4 words</figDesc><graphic coords="8,123.59,411.07,172.80,118.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Results for a vocabulary of 4 words and protocols with 8 common constraints</figDesc><graphic coords="9,123.59,387.46,172.80,118.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>LTL definitions of constraints</figDesc><table><row><cell>Constraint</cell><cell>LTL meaning</cell></row><row><cell>absence(a)</cell><cell>¬♦a</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The original paper listed more constraints; we include here only the ones we actually used in the learning method (the non-monotonic ones).</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">An interaction-based approach to semantic alignment</title>
		<author>
			<persName><forename type="first">Manuel</forename><surname>Atencia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Schorlemmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="131" to="147" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Attuning ontology alignments to semantically heterogeneous multi-agent interactions</title>
		<author>
			<persName><forename type="first">Paula</forename><surname>Chocron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Schorlemmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ECAI 2016 - 22nd European Conference on Artificial Intelligence, The Hague, The Netherlands - Including Prestigious Applications of Artificial Intelligence (PAIS 2016)</title>
				<meeting><address><addrLine>The Hague</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016-09-02">29 August - 2 September 2016</date>
			<biblScope unit="page" from="871" to="879" />
		</imprint>
	</monogr>
	<note>ECAI 2016 -22nd European Conference on Artificial Intelligence</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Vocabulary alignment in openly specified interactions</title>
		<author>
			<persName><forename type="first">Paula</forename><surname>Chocron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Schorlemmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017)</title>
				<meeting>the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note>To appear</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Ontology negotiation: Goals, requirements and implementation</title>
		<author>
			<persName><forename type="first">Jurriaan</forename><surname>Van Diggelen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robbertjan</forename><surname>Beun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frank</forename><surname>Dignum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rogier</forename><forename type="middle">M</forename><surname>Van Eijk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John-Jules</forename><surname>Meyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Agent-Oriented Softw. Eng</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="63" to="90" />
			<date type="published" when="2007-04">April 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Ontology Matching</title>
		<author>
			<persName><forename type="first">Jérôme</forename><surname>Euzenat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pavel</forename><surname>Shvaiko</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Springer-Verlag New York, Inc</publisher>
			<pubPlace>Secaucus, NJ, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A declarative approach for flexible business processes management</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pesic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M P</forename><surname>Van Der Aalst</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2006 International Conference on Business Process Management Workshops, BPM&apos;06</title>
				<meeting>the 2006 International Conference on Business Process Management Workshops, BPM&apos;06<address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="169" to="180" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A dialogue protocol to support meaning negotiation</title>
		<author>
			<persName><forename type="first">Gabrielle</forename><surname>Santos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Valentina</forename><surname>Tamma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Terry</forename><forename type="middle">R</forename><surname>Payne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Floriana</forename><surname>Grasso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;16</title>
				<meeting>the 2016 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;16<address><addrLine>Richland, SC</addrLine></address></meeting>
		<imprint>
			<publisher>International Foundation for Autonomous Agents and Multiagent Systems</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1367" to="1368" />
		</imprint>
	</monogr>
	<note>extended abstract</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">An approach to ontology mapping negotiation</title>
		<author>
			<persName><forename type="first">Nuno</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paulo</forename><surname>Maio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshop on Integrating Ontologies</title>
				<meeting>the Workshop on Integrating Ontologies</meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="54" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The Origins of Ontologies and Communication Conventions in Multi-Agent Systems</title>
		<author>
			<persName><forename type="first">Luc</forename><surname>Steels</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Autonomous Agents and MultiAgent Systems</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="169" to="194" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
