=Paper=
{{Paper
|id=Vol-2970/aspocppaper7
|storemode=property
|title=Asynchronous Forward-Bounding algorithm with Directional Arc Consistency
|pdfUrl=https://ceur-ws.org/Vol-2970/aspocppaper7.pdf
|volume=Vol-2970
|authors=Rachid Adrdor,Lahcen Koutti
|dblpUrl=https://dblp.org/rec/conf/iclp/AdrdorK21
}}
==Asynchronous Forward-Bounding algorithm with Directional Arc Consistency==
Rachid Adrdor, Lahcen Koutti
Ibn Zohr University, Faculty of Sciences, Department of Computer Science, Agadir, Morocco
ASPOCP 2021: Workshop on Answer Set Programming and Other Computing Paradigms, co-located with ICLP 2021, Porto, Portugal, September 21, 2021.
rachid.adrdor@edu.uiz.ac.ma (R. Adrdor); l.koutti@uiz.ac.ma (L. Koutti)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract

The AFB_BJ+-AC* algorithm is one of the latest algorithms used to solve Distributed Constraint Optimization Problems, known as DCOPs. It is based on soft arc consistency techniques (AC*) to speed up the solving process by permanently removing any value that does not belong to the optimal solution. These techniques have greatly contributed to improving the performance of the AFB_BJ+ algorithm on DCOPs, but there are cases in which they have no effect because of the limited number of deletions they produce. For that reason, we use in this paper a higher consistency level, directional arc consistency (DAC*). This level makes it possible to erase more values and thus to reach the optimal solution of a problem more quickly. Experiments on some benchmarks show that the new algorithm, AFB_BJ+-DAC*, is better in terms of communication load and computation effort.

Keywords: DCOP, AFB_BJ+, AC*, Directional Arc Consistency

1. Introduction

A large number of multi-agent problems can be modeled as DCOPs, such as meeting scheduling [1], sensor networks [2], and so on. In a DCOP, variables, domains, and constraints are distributed among a set of agents. Each agent has full control over a subset of variables and the constraints that involve them [3]. A DCOP is solved in a distributed manner via an algorithm allowing the agents to cooperate and coordinate with each other in order to find a solution with minimal cost. A solution to a DCOP is a set of value assignments, each representing the value assigned to one of the variables of that DCOP. Algorithms with various search strategies have been suggested to solve DCOPs, for example Adopt [4], BnB-Adopt [5], BnB-Adopt+ [6], SyncBB [7], AFB [3], AFB_BJ+ [8], AFB_BJ+-AC* [9, 10, 11], etc.

In AFB_BJ+-AC*, agents synchronously extend a current partial assignment (CPA) in order to find the optimal solution of the problem being solved. During this process, and in order to reduce the number of retries, each agent uses arc consistency (AC*) to remove suboptimal values from its domain. Sometimes, however, the number of deletions generated by AC* is insufficient, which negatively affects the performance of the algorithm. In this paper, instead of using the basic level of arc consistency (AC*), we use directional arc consistency (DAC*), which is the next higher level of AC*. DAC* allows AFB_BJ+ to generate more deletions and thus to reach the optimal solution of a problem more quickly. The new algorithm is called AFB_BJ+-DAC*. It uses DAC* to filter agent domains by performing a set of cost extensions from an agent to its neighbors and then executing AC*. Our experiments on different benchmarks show the superiority of the AFB_BJ+-DAC* algorithm in terms of communication load and computation effort.

The remainder of this paper comprises four sections.
Section 2 gives an overview of DCOPs, soft arc consistency rules, and the AFB_BJ+-AC* algorithm. Section 3 describes the AFB_BJ+-DAC* algorithm. Section 4 reports experiments performed on some benchmarks. The last section concludes.

2. Background

2.1. Distributed Constraint Optimization Problem (DCOP)

A DCOP [12, 13, 11] is defined by four sets: a set of agents A = {A_1, A_2, ..., A_n}, a set of variables X = {x_1, x_2, ..., x_n}, a set of finite domains D = {D_1, D_2, ..., D_n}, where each D_i in D contains the possible values of its associated variable x_i in X, and a set of soft constraints C = {c_ij : D_i × D_j → R+} ∪ {c_i : D_i → R+}. In a DCOP, each agent is fully responsible for a subset of variables and the constraints that involve them. In this paper, without loss of generality, each agent is responsible for a single variable and at most two variables are linked by a constraint (i.e., constraints are unary or binary) [14].

We use the following notation. A_i is an agent, where i is its level. (x_i, v_i) is an assignment of A_i, where v_i ∈ D_i and x_i ∈ X. C_ij is a binary constraint between x_i and x_j, and c_ij is its binary cost. C_ij^cp is an identical copy of the constraint C_ij, used in the AC* process. C_i is a unary constraint on x_i and c_i is its unary cost. Cφ is a zero-arity constraint that represents a lower bound on the cost of any solution of the problem, and Cφ_i is the contribution of A_i to Cφ. UB is the cost of the best solution found so far; it is also the lowest unacceptable cost used in the AC* process. [A_1, A_2, ..., A_n] is the lexicographic ordering of agents (the default ordering). Γ(x_i) = Γ−(x_i) ∪ Γ+(x_i) is the set of neighbors of A_i, where Γ−(x_i) = {x_j ∈ X | C_ji ∈ C, j < i} are the neighbors with higher priority and Γ+(x_i) = {x_j ∈ X | C_ij ∈ C, j > i} are the neighbors with lower priority. Y = Y^i = [(x_1, v_1), ..., (x_i, v_i)] is a current partial assignment (CPA). v_i* is the optimal value of A_i. LB_j[i][v_i] are the lower bounds of a lower-priority neighbor A_j obtained for Y^i. GC (resp. GC*) is the guaranteed cost of Y (resp. its guaranteed cost in AC*). Dellist is a list of n arrays containing deleted values; each array Dellist[i] contains two elements, listdel_i, the list of values deleted by A_i, and nvisit_i, a counter of the neighbors of A_i that have not yet processed listdel_i. Extlist is a list of arrays containing extension values.

The guaranteed cost of Y is the sum of the binary costs c_ij involved in Y (1):

GC(Y) = Σ_{c_ij ∈ Y} c_ij(v_i, v_j), where (x_i, v_i), (x_j, v_j) ∈ Y   (1)

A CPA Y is said to be a complete assignment (i.e., a solution) when it comprises a value assignment for each variable of the DCOP. Solving a DCOP consists in finding a solution that minimizes the sum of the costs of the constraints involved in it, i.e., Y* = argmin{GC(Y) | var(Y) = X}.
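To make these definitions concrete, here is a minimal Python sketch (not part of the original paper; variable names and costs are invented) that stores a tiny DCOP as dictionaries of cost tables and evaluates the guaranteed cost GC(Y) of eq. (1) for complete assignments:

# A tiny, invented DCOP: two variables x1, x2 with domains {a, b},
# a unary constraint c1 on x1 and a binary constraint c12 on (x1, x2).
from itertools import product

domains = {"x1": ["a", "b"], "x2": ["a", "b"]}
unary = {"x1": {"a": 1, "b": 0}}                         # c_i(v_i), used by AC*/DAC*, not by GC
binary = {("x1", "x2"): {("a", "a"): 2, ("a", "b"): 0,
                         ("b", "a"): 1, ("b", "b"): 3}}  # c_ij(v_i, v_j)

def guaranteed_cost(assignment):
    """GC(Y) of eq. (1): sum of the binary costs whose two variables are both assigned in Y."""
    return sum(table[(assignment[xi], assignment[xj])]
               for (xi, xj), table in binary.items()
               if xi in assignment and xj in assignment)

# Centralized enumeration, only to show what Y* = argmin GC(Y) refers to.
complete = [dict(zip(domains, values)) for values in product(*domains.values())]
best = min(complete, key=guaranteed_cost)
print(best, guaranteed_cost(best))   # {'x1': 'a', 'x2': 'b'} with cost 0

A real DCOP algorithm never enumerates assignments centrally like this; the loop is only meant to illustrate the objective that the distributed search optimizes.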
Proc. 1: ProjectUnary()
1  β ← min_{v_i ∈ D_i} {c_i(v_i)};
2  Cφ_i ← Cφ_i + β;
3  foreach (v_i ∈ D_i) do
4     c_i(v_i) ← c_i(v_i) − β;

Proc. 2: ProjectBinary(x_i, x_j)
1  foreach (v_i ∈ D_i) do
2     α ← min_{v_j ∈ D_j} {c_ij(v_i, v_j)};
3     foreach (v_j ∈ D_j) do
4        c_ij(v_i, v_j) ← c_ij(v_i, v_j) − α;
5     if (A_i is the current agent)
6        c_i(v_i) ← c_i(v_i) + α;

Proc. 3: Extend(x_i, x_j, E)
1  foreach (v_i ∈ D_i) do
2     foreach (v_j ∈ D_j) do
3        c_ij(v_i, v_j) ← c_ij(v_i, v_j) + E[v_i];
4     if (A_i is the current agent)
5        c_i(v_i) ← c_i(v_i) − E[v_i];

Proc. 4: CheckPruning()
1  foreach (a ∈ D_i) do
2     if (c_i(a) + Cφ ≥ UB)
3        D_i ← D_i − {a};
4        Dellist[i].listdel_i.add(a);
5        if (D_i has changed)
6           Dellist[i].nvisit_i ← A_i.nbrs;
7  if (D_i is empty)
8     broadcastMsg : stp(UB);
9     end ← true;

Proc. 5: DAC*()
1  foreach (A_j ∈ Γ+) do
2     foreach (v_i ∈ D_i) do
3        E[v_i] ← c_i(v_i);
4     Extend(x_i, x_j, E);
5     Extlist[ij].put(E);
6     ProjectBinary(x_j, x_i);

Proc. 6: ProcessPruning(msg)
1  Dellist ← msg.Dellist;
2  foreach (A_j ∈ Γ) do
3     foreach (d ∈ Dellist[j]) do
4        D_j ← D_j − {d};
5     if (D_j has changed)
6        Dellist[j].nvisit_j.incr(−1);
7     if (Dellist[j].nvisit_j = 0)
8        Dellist[j].listdel_j.clear;
9  if (msg.type = ok?)
10    Extlist ← msg.Extlist;
11    foreach (A_j ∈ Γ−) do
12       Extend(x_j, x_i, Extlist[ji]);
13       Extlist[ji].clear;
14    ProjectBinary(x_i, x_j);
15    ProjectUnary();
16    Cφ ← max{Cφ, msg.Cφ + Cφ_i};
17    Cφ_i ← 0;
18 if (Cφ ≥ UB)
19    broadcastMsg : stp(UB);
20    end ← true;
21 CheckPruning();
22 DAC*();
23 ExtendCPA();

2.2. Soft arc consistency

Soft arc consistency techniques are used when solving a problem to delete values that are not part of its optimal solution. To apply these techniques, a set of transformations known as equivalence-preserving transformations is used. They allow costs to be moved between the constraints of the problem in three ways: binary projection, unary projection, and extension. The binary projection (Proc. 2) is an operation that subtracts, for a value v_i of D_i, the smallest cost α of a binary constraint C_ij and adds it to the unary constraint C_i. The unary projection (Proc. 1) is an operation that subtracts the smallest cost β of a unary constraint C_i and adds it to the zero-arity constraint Cφ. The extension (Proc. 3) is an operation that subtracts, for a value v_i of D_i, the extension value E[v_i] of v_i from the unary constraint C_i and adds it to the binary constraint C_ij, with 0 < E[v_i] ≤ c_i(v_i). All of these transformations are applied to a problem under a set of conditions represented by soft arc consistency levels [15], namely:

Node Consistency (NC*): a variable x_i is NC* if each value v_i ∈ D_i satisfies Cφ + c_i(v_i) < UB and there is a value v_i ∈ D_i with c_i(v_i) = 0. A problem is NC* if each of its variables is NC*.

Arc Consistency (AC*): a variable x_i is AC* with respect to its neighbor x_j if x_i is NC* and there is, for each value v_i ∈ D_i, a value v_j ∈ D_j such that c_ij(v_i, v_j) = 0; v_j is called a simple support of v_i. A problem is AC* if each of its variables is AC*.

Directional Arc Consistency (DAC*): a variable x_i is DAC* with respect to its neighbor x_j (j > i) if x_i is NC* and there is, for each value v_i ∈ D_i, a value v_j ∈ D_j such that c_ij(v_i, v_j) + c_j(v_j) = 0; v_j is called a full support of v_i. A problem is DAC* if each variable x_i is DAC* with respect to its neighbors x_j (j > i).
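The three equivalence-preserving transformations can be sketched as follows in Python (a simplified, centralized illustration with explicit cost tables, not the authors' implementation); each operation only moves cost between c_ij, c_i and Cφ, so the cost of every complete assignment is preserved:

def project_binary(c_ij, c_i, D_i, D_j):
    """Binary projection (centralized sketch of Proc. 2): for each v_i, move
    alpha = min over v_j of c_ij(v_i, v_j) from the binary constraint into c_i."""
    for vi in D_i:
        alpha = min(c_ij[(vi, vj)] for vj in D_j)
        for vj in D_j:
            c_ij[(vi, vj)] -= alpha
        c_i[vi] += alpha

def project_unary(c_i, D_i, c_phi):
    """Unary projection (sketch of Proc. 1): move beta = min over v_i of c_i(v_i)
    from c_i into the zero-arity constraint C_phi (returned updated)."""
    beta = min(c_i[vi] for vi in D_i)
    for vi in D_i:
        c_i[vi] -= beta
    return c_phi + beta

def extend(c_i, c_ij, D_i, D_j, E):
    """Extension (sketch of Proc. 3): move E[v_i], with 0 < E[v_i] <= c_i(v_i)
    in the paper's definition, from c_i into the binary constraint c_ij."""
    for vi in D_i:
        for vj in D_j:
            c_ij[(vi, vj)] += E[vi]
        c_i[vi] -= E[vi]

# Invented example: after projecting, every value of x_i has a zero-cost
# binary support in D_j, and the minimum unary cost has moved into C_phi.
D_i, D_j = ["a", "b"], ["a", "b"]
c_i = {"a": 2, "b": 0}
c_ij = {("a", "a"): 1, ("a", "b"): 3, ("b", "a"): 0, ("b", "b"): 2}
c_phi = 0
project_binary(c_ij, c_i, D_i, D_j)        # c_i becomes {"a": 3, "b": 0}
c_phi = project_unary(c_i, D_i, c_phi)     # c_phi stays 0 because min c_i is 0

Applying project_binary then project_unary on each variable, and deleting the values that violate the NC* test, corresponds to the AC*-enforcing sequence referred to above.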
To make a given problem DAC*, we first compute, for each variable x_i and each of its lower-priority neighbors x_j (j > i), the extension values appropriate to the values of its domain D_i (Proc. 5, l. 3). Next, we perform the extension operation (Proc. 5, l. 4), subtracting the extension values from the unary constraint C_i and adding them to the binary constraints C_ij (Proc. 3). Then, each neighbor x_j successively performs a binary projection (Proc. 2), a unary projection (Proc. 1), and finally a deletion of non-NC* values. These last three instructions ensure the fulfillment of arc consistency (AC*).

In the distributed case, each agent A_i performs DAC* locally and shares the value of its zero-arity constraint Cφ_i with the other agents in order to compute the global Cφ (i.e., Cφ = Σ_{A_i ∈ A} Cφ_i) (Proc. 6, l. 16). Each agent A_i keeps locally, for each of its constraints C_ij, an identical copy denoted C_ij^cp and used in the DAC* procedure. During DAC*, the C_ij^cp constraints are changed. To keep these constraints symmetric across agents, each agent A_i applies, on its copy C_ij^cp, the same action as its neighbor A_j, and vice versa (Proc. 5, l. 6) [16].

2.3. The AFB_BJ+-AC* algorithm

Each agent A_i carries out AFB_BJ+-AC* [9] in three phases. First, A_i initializes its data structures and performs AC*, permanently deleting all suboptimal values from its domain D_i. Second, A_i chooses, for its variable x_i, a value from its previously filtered domain D_i in order to extend the CPA Y^i with its value assignment (x_i, v_i). If A_i has successfully extended the CPA, it sends an ok? message to the next agent asking it to continue the extension of the CPA Y^i. This message carries the extended CPA Y^i, its guaranteed cost (2), its guaranteed cost in AC* (3), the Cφ, and the list Dellist.

GC(Y^i)[i] = GC(Y^{i−1}) + Σ_{(x_j, v_j) ∈ Y^{i−1}, j < i} c_ji(v_j, v_i)   (2)

GC*(Y^i) = GC*(Y^{i−1}) + c_i(v_i) + Σ_{(x_j, v_j) ∈ Y^{i−1}, C_ji^cp ∈ C} c_ji^cp(v_j, v_i)   (3)

Otherwise, that is, if agent A_i fails to extend the CPA, either because it does not find a value that gives a valid CPA or because all the values in its domain are exhausted, it stops the CPA extension and sends a back message, containing the same data structures as an ok? message except GC and GC*, to the appropriate agent. If no such agent exists or the domain of A_i becomes empty, A_i stops its execution and informs the others via stp messages. A CPA Y^i is said to be valid if its lower bound (4) does not exceed the global upper bound UB, which is the cost of the best solution found so far.

LB(Y^i)[i] = GC(Y^i)[i] + Σ_{A_j > A_i} LB_j(Y^i)[i]   (4)

Third, A_i evaluates the extended CPA by sending fb? messages, which hold the same data structures except Cφ and Dellist, to the unassigned agents, asking them to evaluate the CPA and send back the result of the evaluation. When an agent has completed its evaluation, it sends the result directly to the sender agent via an lb message. The evaluation is based on the calculation of appropriate lower bounds for the received CPA Y^i. The lower bound computed by an evaluating agent A_k for Y^i (5) is the minimal lower bound over all values of D_k with respect to Y^i:

LB_k(Y^i)[h] = min_{v_k ∈ D_k} { Σ_{(x_j, v_j) ∈ Y^h, j ≤ h} c_jk(v_j, v_k) + Σ_{j = h+1}^{i−1} min_{v_j ∈ D_j} {c_jk(v_j, v_k)} + c_ik(v_i, v_k) + Σ_{x_j ∈ Γ+(x_k), j > i} min_{v_j ∈ D_j} {c_jk(v_j, v_k)} }   (5)
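As an illustration of how a CPA is costed and tested, the following sketch (hypothetical data structures, assuming binary cost tables indexed by variable pairs; not the authors' code) implements the incremental update of eq. (2) and the validity test based on eq. (4):

def extend_gc(gc_prev, cpa, x_i, v_i, binary):
    """Eq. (2): guaranteed cost of the CPA extended with (x_i, v_i), obtained from
    GC(Y^{i-1}) plus the binary costs linking the new assignment to earlier ones."""
    added = sum(binary.get((x_j, x_i), {}).get((v_j, v_i), 0) for x_j, v_j in cpa)
    return gc_prev + added

def cpa_is_valid(gc_i, reported_lbs, ub):
    """Eq. (4): the CPA stays valid while GC(Y^i) plus the lower bounds reported
    by the lower-priority agents remains below UB."""
    return gc_i + sum(reported_lbs) < ub

# Invented numbers: agent A_3 extends the CPA [(x1,'a'), (x2,'b')] with (x3,'a').
binary = {("x1", "x3"): {("a", "a"): 2}, ("x2", "x3"): {("b", "a"): 1}}
cpa = [("x1", "a"), ("x2", "b")]
gc3 = extend_gc(gc_prev=3, cpa=cpa, x_i="x3", v_i="a", binary=binary)   # 3 + 2 + 1 = 6
print(cpa_is_valid(gc3, reported_lbs=[1, 2], ub=12))                    # 9 < 12 -> True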
3. The AFB_BJ+-DAC* algorithm

The AFB_BJ+-DAC* algorithm uses a higher consistency level, directional arc consistency (DAC*). It improves the ability of the AFB_BJ+-AC* algorithm to generate more deletions. It is based on executing a set of cost extensions from unary constraints to binary ones, followed by the execution of AC*. DAC*() (Proc. 5) is the procedure responsible for calculating the extension values (i.e., the costs to be transferred), and Extend() (Proc. 3) is the one that performs the extension of costs from the unary constraints towards the binary ones (§2.2). All the extension values used by an agent are stored in a list, Extlist, and routed to its lower-priority neighbors via an ok? message in order to keep the C_ij^cp constraints symmetric between each agent and its neighbors. The list of extension values, Extlist, is processed in the procedure ProcessPruning() (Proc. 6, l. 9-13), in which DAC*() is also performed (Proc. 6, l. 22).

3.1. Description of AFB_BJ+-DAC*

AFB_BJ+-DAC* (Proc. 7) is performed by each agent A_i as follows. A_i starts with the initialization step (Proc. 7, l. 1-3). If A_i is the first agent (Proc. 7, l. 4), it filters its domain by calling CheckPruning() (Proc. 4), then performs the appropriate extensions through DAC*() (Proc. 5), and finally calls ExtendCPA() to generate a CPA Y. Next, A_i starts processing messages (Proc. 7, l. 9). First, it updates UB and v_i* (Proc. 7, l. 12). Then, A_i updates Y and GC and erases all irrelevant lower bounds if the received CPA (msg.Y) is more up to date (stronger) than the local one (Y) (Proc. 7, l. 13). Thereafter, A_i restores all temporarily deleted values (Proc. 7, l. 28).

Proc. 7: AFB_BJ+-DAC*()
1  Init. of data structures;
2  foreach (A_j ∈ Γ+) do
3     LB_j[0][v_i] ← min_{v_j ∈ D_j} {c_ij(v_i, v_j)}, ∀ v_i ∈ D_i;
4  if (A_i = A_1)
5     Cφ ← Cφ + Cφ_i; Cφ_i ← 0;
6     CheckPruning();
7     DAC*();
8     ExtendCPA();
9  while (¬end) do
10    msg ← getMsg();
11    if (msg.UB < UB)
12       UB ← msg.UB; v_i* ← v_i;
13    if (msg.Y is stronger than Y)
14       Y ← msg.Y; GC ← msg.GC;
15       clear irrelevant LB_j[][]; reset D_i;
16    if (msg.type = ok?)
17       mustSendFB ← true;
18       GC* ← msg.GC*;
19       ProcessPruning(msg);
20    if (msg.type = back)
21       Y ← Y^{i−1};
22       ProcessPruning(msg);
23    if (msg.type = fb?)
24       GC* ← msg.GC*;
25       foreach (v_i ∈ D_i) do
26          cost ← Cφ + GC*(Y^{i−1}) + c_i(v_i);
27          if (cost ≥ UB)
28             D_i ← D_i − v_i;
29       sendMsg : lb(LB_i(Y^i)[], msg.Y) to A_j;
30    if (msg.type = stp)
31       end ← true;
33    if (msg.type = lb)
34       LB_j(Y^i) ← msg.LB;
35       if (LB(Y^i) ≥ UB)
36          ExtendCPA();

Proc. 8: ExtendCPA()
1  v_i ← argmin_{v'_i ∈ D_i} {LB(Y ∪ (x_i, v'_i))};
2  if (LB(Y ∪ (x_i, v_i)) ≥ UB) ∨ (Cφ + GC*(Y^{i−1}) + c_i(v_i) ≥ UB)
3     for j ← i−1 to 1 do
4        if (LB(Y)[j−1] < UB)
5           sendMsg : back(Y^j, UB, Dellist, Cφ) to A_j;
           return;
6     broadcastMsg : stp(UB);
7     end ← true;
8  else
9     Y ← {Y ∪ (x_i, v_i)};
10    if (var(Y) = X)
11       UB ← GC(Y); v_i* ← v_i;
12       Y ← Y^{i−1};
13       CheckPruning();
14       ExtendCPA();
15    else
16       sendMsg : ok?(Y, GC, UB, Dellist, Extlist, Cφ, GC*) to A_{i+1};
17       Extlist.clear;
18       if (mustSendFB)
19          sendMsg : fb?(Y, GC, UB, GC*) to A_j, j > i;
20          mustSendFB ← false;

When receiving an ok? message (Proc. 7, l. 16), A_i authorizes the sending of fb? messages and calls ProcessPruning() (Proc. 6).
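The core DAC* step of Proc. 5 can be sketched as follows (a simplified, single-neighbor illustration in Python; the Extlist bookkeeping and the handling of the neighbor's unary costs are assumptions based on the description above, not the authors' code):

def dac_step_towards(c_i, c_ij, D_i, D_j, extlist, j):
    """Sketch of one iteration of Proc. 5 for a lower-priority neighbor A_j:
    extend the full unary costs of A_i into c_ij, record the extension values
    so A_j can replay them on its copy of the constraint, then project the
    binary constraint in A_j's direction."""
    E = {vi: c_i[vi] for vi in D_i}          # extension values E[v_i] = c_i(v_i)
    for vi in D_i:
        for vj in D_j:
            c_ij[(vi, vj)] += E[vi]
        c_i[vi] -= E[vi]                     # c_i(v_i) becomes 0
    extlist[j] = E                           # shipped later inside the ok? message
    # Projection towards x_j: the subtracted amounts would feed c_j on A_j's side.
    projected = {}
    for vj in D_j:
        alpha = min(c_ij[(vi, vj)] for vi in D_i)
        for vi in D_i:
            c_ij[(vi, vj)] -= alpha
        projected[vj] = alpha
    return projected

# Invented example: two values per domain, neighbor index j = 2.
c_i = {"a": 2, "b": 1}
c_ij = {("a", "a"): 0, ("a", "b"): 4, ("b", "a"): 3, ("b", "b"): 0}
extlist = {}
print(dac_step_towards(c_i, c_ij, ["a", "b"], ["a", "b"], extlist, j=2))   # {'a': 2, 'b': 1}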
When calling ProcessPruning() (Proc. 6), A_i first deals, for ok? messages only, with the extensions of its higher-priority neighbors (Proc. 6, l. 9-13). Afterward, it updates its Dellist, then its neighbors' domains separately in order to keep the same domains as these agents (Proc. 6, l. 1-4). After that, it performs the two projections fulfilling the AC* condition (Proc. 6, l. 14-15). Next, A_i decrements the counter of unvisited neighbors of A_j, Dellist[j].nvisit_j, and then checks whether it is the last visited neighbor of this agent A_j, in which case it resets its list of deleted values Dellist[j].listdel_j (Proc. 6, l. 5-8). Then, A_i updates its global Cφ (Proc. 6, l. 16). If Cφ exceeds UB, A_i stops its execution and notifies the others (Proc. 6, l. 18-20). Finally, A_i calls CheckPruning() to prune its domain, DAC*() (Proc. 5) to make the proper extensions, and ExtendCPA() to extend the received CPA (Proc. 6, l. 21-23).

When calling DAC*() (Proc. 5), A_i performs the proper extensions from C_i to each C_ij (Proc. 5, l. 4-5). To do so, A_i computes, for each value v_i of D_i, its extension value (Proc. 5, l. 2-3) based on the unary cost of this value (0 < E[v_i] ≤ c_i(v_i)). Once this is done, A_i performs a binary projection to keep the C_ij^cp constraints symmetric (Proc. 5, l. 6). It should be noted that the direction taken into account by each agent A_i for the extension of its costs is towards its lower-priority neighbors (Γ+(x_i)).

When calling CheckPruning() (Proc. 4), A_i deletes any value of its domain for which the sum of Cφ and the unary cost of this value reaches UB (Proc. 4, l. 2-3). With each new deletion, A_i initializes the counter of its neighbors not yet visited (Proc. 4, l. 5-6). If the domain of A_i becomes empty, A_i stops its execution and notifies the others (Proc. 4, l. 7-9).

When calling ExtendCPA() (Proc. 8), A_i looks for a value v_i for its variable x_i (Proc. 8, l. 1). If no such value exists, A_i returns to the higher-priority agents by sending a back message to the culprit agent (Proc. 8, l. 2-5). If no such agent exists, A_i stops its execution and notifies the others via stp messages (Proc. 8, l. 6-7). Otherwise, A_i extends Y by adding its assignment (Proc. 8, l. 9). If A_i is the last agent (Proc. 8, l. 10), a new solution is obtained and UB is updated, which obliges A_i to call CheckPruning() to filter its domain again and then ExtendCPA() to continue the search (Proc. 8, l. 11-14). Otherwise, A_i sends an ok? message loaded with the extended Y to the next agent (Proc. 8, l. 16) and fb? messages to the unassigned agents (Proc. 8, l. 19).

When A_i receives an fb? message, it filters its domain D_i with respect to the received Y (Proc. 7, l. 24-28), calculates the appropriate lower bounds (5), and immediately sends them to the sender via an lb message (Proc. 7, l. 29). When A_i receives an lb message, it stores the received lower bounds (Proc. 7, l. 34) and performs ExtendCPA() to modify its assignment if the lower bound computed from the cost of Y (4) reaches UB.
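For illustration, here is a minimal sketch of the pruning test of Proc. 4, with a simplified Dellist structure (names and structure are invented for the example):

def check_pruning(D_i, c_i, c_phi, ub, dellist, i, n_neighbors):
    """Sketch of Proc. 4 (CheckPruning): permanently delete every value whose
    unary cost plus C_phi already reaches UB, and record the deletions so that
    the agent's neighbors can later mirror them."""
    deleted = [v for v in D_i if c_i[v] + c_phi >= ub]
    for v in deleted:
        D_i.remove(v)
        dellist[i]["deletions"].append(v)
    if deleted:
        # Counter of neighbors that still have to process this deletion list.
        dellist[i]["pending_neighbors"] = n_neighbors
    if not D_i:
        # In the real procedure the agent broadcasts stp(UB) and terminates here.
        raise RuntimeError("empty domain: no solution with cost below UB")
    return deleted

# Invented example for agent A_1 with three values and two neighbors.
D_i = ["a", "b", "c"]
c_i = {"a": 0, "b": 4, "c": 7}
dellist = {1: {"deletions": [], "pending_neighbors": 0}}
print(check_pruning(D_i, c_i, c_phi=3, ub=10, dellist=dellist, i=1, n_neighbors=2))
# ['c'] is deleted because 7 + 3 >= 10; D_i is now ['a', 'b']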
3.2. Correctness of AFB_BJ+-DAC*

Theorem 1. AFB_BJ+-DAC* is guaranteed to compute the optimum and terminates.

Proof. The AFB_BJ+-DAC* algorithm differs from AFB_BJ+-AC* [9] only in the execution of a set of cost extensions. These extensions have already been proved correct in [15, 17], and they are executed by AFB_BJ+-DAC* without any cost redundancy (Proc. 3, l. 4), (Proc. 5, l. 6), and (Proc. 6, l. 9-13).

4. Experimental Results

In this section, we experimentally compare the AFB_BJ+-DAC* algorithm with its older versions [8, 9] and with the BnB-Adopt+-DP2 algorithm [18], its best-known competitor. Two benchmarks are used in these experiments: meeting scheduling and sensor networks.

Meeting scheduling [1]: instances are defined by (m, p, ts), which are respectively the number of meetings/variables, the number of participants, and the number of time slots for each meeting. Each participant has a private schedule of meetings, and each meeting takes place at a particular location and at a fixed time slot. The constraints are applied to meetings that share participants. We have evaluated the same cases presented in detail in [1]. These cases are A, B, C, and D, each representing a different scenario in terms of the number of meetings and the number of participants, who each have a different hierarchical level.

Sensor networks [2]: instances are defined by (t, s, n), which are respectively the number of targets/variables, the number of sensors, and the number of possible combinations of 3 sensors reserved for tracking each target. A sensor can track at most one target, and each combination of 3 sensors must track a target. The constraints are applied to adjacent targets. We have evaluated the same cases presented in detail in [1]. These cases are A, B, C, and D, each representing a different scenario in terms of the number of targets and the number of sensors, which are arranged in different topologies.

Figure 1: Total number of messages (msgs) sent and non-concurrent constraint checks (ncccs) for meeting scheduling (AFB_BJ+, AFB_BJ+-AC*, AFB_BJ+-DAC*, BnB-Adopt+-DP2; cases A-D).

Figure 2: Total number of messages (msgs) sent and non-concurrent constraint checks (ncccs) for sensor networks (AFB_BJ+, AFB_BJ+-AC*, AFB_BJ+-DAC*, BnB-Adopt+-DP2; cases A-D).

To compare the algorithms, we use two metrics: the total number of messages exchanged (msgs), which represents the communication load, and the total number of non-concurrent constraint checks (ncccs), which represents the computation effort.

Regarding the meeting scheduling problems (Fig. 1), the results show a clear improvement of AFB_BJ+-DAC* over the others, both for msgs and for ncccs. For the sensor network problems (Fig. 2), however, BnB-Adopt+-DP2 retains the leading role, despite the superiority of the AFB_BJ+-DAC* algorithm over its older versions. Analyzing the results, we can conclude that AFB_BJ+-DAC* is better than its earlier versions because of directional arc consistency (DAC*), which allows agents to remove more suboptimal values thanks to the set of cost extensions applied to the problem. The superiority of BnB-Adopt+-DP2 over AFB_BJ+-DAC* on sensor network problems is mainly due to the pseudo-tree arrangement used by this algorithm, which matches the structure of these problems, as well as to the DP2 heuristic, which facilitates the proper choice of values.

5. Conclusion

In this paper, we have introduced the AFB_BJ+-DAC* algorithm. It is an algorithm that relies on DAC* to generate more deletions and thus reach the optimal solution of a problem more quickly.
DAC* mainly relies on performing a set of cost extensions in one direction, from an agent to its lower-priority neighbors, in order to perform AC* multiple times, which increases the number of deletions made by each agent and thereby speeds up the process of solving a problem. Experiments on some benchmarks show that the AFB_BJ+-DAC* algorithm behaves better than its older versions. As future work, we propose to exploit the change in the size of the agent domains in variable ordering heuristics.

References

[1] R. T. Maheswaran, M. Tambe, E. Bowring, J. P. Pearce, P. Varakantham, Taking DCOP to the real world: Efficient complete solutions for distributed multi-event scheduling, in: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 1, IEEE Computer Society, 2004, pp. 310–317.
[2] R. Béjar, C. Domshlak, C. Fernández, C. Gomes, B. Krishnamachari, B. Selman, M. Valls, Sensor networks and distributed CSP: communication, computation and complexity, Artificial Intelligence 161 (2005) 117–147.
[3] A. Gershman, A. Meisels, R. Zivan, Asynchronous forward bounding for distributed COPs, Journal of Artificial Intelligence Research 34 (2009) 61–88.
[4] P. J. Modi, W.-M. Shen, M. Tambe, M. Yokoo, Adopt: Asynchronous distributed constraint optimization with quality guarantees, Artificial Intelligence 161 (2005) 149–180.
[5] W. Yeoh, A. Felner, S. Koenig, BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm, Journal of Artificial Intelligence Research 38 (2010) 85–133.
[6] P. Gutierrez, P. Meseguer, Saving messages in ADOPT-based algorithms, in: Proc. 12th DCR Workshop at AAMAS-10, Citeseer, 2010, pp. 53–64.
[7] K. Hirayama, M. Yokoo, Distributed partial constraint satisfaction problem, in: International Conference on Principles and Practice of Constraint Programming, Springer, 1997, pp. 222–236.
[8] M. Wahbi, R. Ezzahir, C. Bessiere, Asynchronous forward bounding revisited, in: International Conference on Principles and Practice of Constraint Programming, Springer, 2013, pp. 708–723.
[9] R. Adrdor, R. Ezzahir, L. Koutti, Connecting AFB_BJ+ with soft arc consistency, International Journal of Computing and Optimization 5, no. 1 (2018) 9–20.
[10] R. Adrdor, L. Koutti, Enhancing AFB_BJ+-AC* algorithm, in: 2019 International Conference of Computer Science and Renewable Energies (ICCSRE), IEEE, 2019, pp. 1–7.
[11] R. Adrdor, R. Ezzahir, L. Koutti, Consistance d'arc souple appliquée aux problèmes DCOP, Journées d'Intelligence Artificielle Fondamentale (JIAF) (2020) 63.
[12] T. Grinshpoun, T. Tassa, V. Levit, R. Zivan, Privacy preserving region optimal algorithms for symmetric and asymmetric DCOPs, Artificial Intelligence 266 (2019) 27–50.
[13] F. Fioretto, E. Pontelli, W. Yeoh, Distributed constraint optimization problems and applications: A survey, Journal of Artificial Intelligence Research 61 (2018) 623–698.
[14] D. T. Nguyen, W. Yeoh, H. C. Lau, R. Zivan, Distributed Gibbs: A linear-space sampling-based DCOP algorithm, Journal of Artificial Intelligence Research 64 (2019) 705–748.
[15] J. Larrosa, T. Schiex, In the quest of the best form of local consistency for weighted CSP, in: IJCAI, volume 3, 2003, pp. 239–244.
[16] P. Gutierrez, P. Meseguer, Improving BnB-ADOPT+-AC, in: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 1, International Foundation for Autonomous Agents and Multiagent Systems, 2012, pp. 273–280.
[17] M. C. Cooper, S. de Givry, M. Sánchez, T. Schiex, M. Zytnicki, T. Werner, Soft arc consistency revisited, Artificial Intelligence 174 (2010) 449–478.
[18] S. Ali, S. Koenig, M. Tambe, Preprocessing techniques for accelerating the DCOP algorithm ADOPT, in: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, ACM, 2005, pp. 1041–1048.