Asynchronous Forward-Bounding Algorithm with Directional Arc Consistency

Rachid Adrdor, Lahcen Koutti
Ibn Zohr University, Faculty of Sciences, Department of Computer Science, Agadir, Morocco
rachid.adrdor@edu.uiz.ac.ma (R. Adrdor); l.koutti@uiz.ac.ma (L. Koutti)
ASPOCP 2021: Workshop on Answer Set Programming and Other Computing Paradigms, co-located with ICLP 2021, Porto, Portugal, September 21, 2021

Abstract
The AFB_BJ+-AC* algorithm is one of the latest algorithms used to solve Distributed Constraint Optimization Problems (DCOPs). It relies on soft arc consistency techniques (AC*) to speed up the solving process by permanently removing any value that does not belong to the optimal solution. These techniques have greatly improved the performance of the AFB_BJ+ algorithm on DCOPs, but in some cases they have little effect because of the limited number of deletions they produce. For that reason, we use in this paper a higher consistency level, directional arc consistency (DAC*). This level makes it possible to erase more values and thus to reach the optimal solution of a problem more quickly. Experiments on some benchmarks show that the new algorithm, AFB_BJ+-DAC*, is better in terms of communication load and computation effort.

Keywords: DCOP, AFB_BJ+, AC*, Directional Arc Consistency

1. Introduction

A large number of multi-agent problems can be modeled as DCOPs, such as meeting scheduling [1], sensor networks [2], and so on. In a DCOP, variables, domains, and constraints are distributed among a set of agents. Each agent has full control over a subset of variables and the constraints that involve them [3]. A DCOP is solved in a distributed manner via an algorithm allowing the agents to cooperate and coordinate with each other to find a solution with minimal cost. A solution to a DCOP is a set of value assignments, each representing the value assigned to one of the variables of that DCOP.

Algorithms with various search strategies have been suggested to solve DCOPs, for example, Adopt [4], BnB-Adopt [5], BnB-Adopt+ [6], SyncBB [7], AFB [3], AFB_BJ+ [8], AFB_BJ+-AC* [9, 10, 11], etc. In AFB_BJ+-AC*, agents synchronously extend a current partial assignment (CPA) in order to find the optimal solution of the problem to be solved. During this process, and in order to reduce the number of retries, each agent uses arc consistency (AC*) to remove suboptimal values from its domain. But sometimes the number of deletions generated by AC* is insufficient, which negatively affects the performance of the algorithm.

In this paper, instead of using the basic level of arc consistency (AC*), we use directional arc consistency (DAC*), which is the next higher level of AC*. DAC* allows AFB_BJ+ to generate more deletions and thus to reach the optimal solution of a problem more quickly. The new algorithm is called AFB_BJ+-DAC*. It uses DAC* to filter agent domains by performing a set of cost extensions from an agent to its neighbors and then executing AC*. Our experiments on different benchmarks show the superiority of the AFB_BJ+-DAC* algorithm in terms of communication load and computation effort.

The rest of this paper comprises four sections.
Section 2 gives an overview of DCOPs, soft arc consistency rules, and the AFB_BJ+-AC* algorithm. Section 3 describes the AFB_BJ+-DAC* algorithm. Section 4 reports experiments carried out on some benchmarks. The last section concludes.

2. Background

2.1. Distributed Constraint Optimization Problem (DCOP)

A DCOP [12, 13, 11] is defined by four sets: a set of agents A = {A_1, A_2, ..., A_k}, a set of variables X = {x_1, x_2, ..., x_n}, a set of finite domains D = {D_1, D_2, ..., D_n}, where each D_i in D contains the possible values for its associated variable x_i in X, and a set of soft constraints C = {c_ij : D_i Γ— D_j β†’ R+} βˆͺ {c_i : D_i β†’ R+}. In a DCOP, each agent is fully responsible for a subset of variables and the constraints that involve them. In this paper, without loss of generality, we assume that each agent is responsible for a single variable and that a constraint links at most two variables (i.e., constraints are unary or binary) [14].

We use the following notations:
- A_j is an agent, where j is its level. (x_j, v_j) is an assignment of A_j, where v_j ∈ D_j and x_j ∈ X.
- C_ij is a binary constraint between x_i and x_j, and c_ij is its binary cost. C_ij^ab is an identical copy of the constraint C_ij, used in the AC* process.
- C_j is a unary constraint on x_j, and c_j is its unary cost.
- C_Ο† is a zero-arity constraint that represents a lower bound on the cost of any solution of the problem. C_Ο†^j is the contribution of A_j to C_Ο†.
- UB_j is the cost of the best solution reached so far; it is also the lowest unacceptable cost used in the AC* process.
- [A_1, A_2, ..., A_n] is the lexicographic ordering of agents (the default ordering).
- Ξ“(x_j) = {Ξ“βˆ’ : x_i ∈ X | C_ij βŠ† D_i Γ— D_j, i < j} βˆͺ {Ξ“+ : x_i ∈ X | C_ij βŠ† D_i Γ— D_j, i > j} is the set of neighbors of A_j. Ξ“βˆ’ (resp. Ξ“+) is the set of neighbors with higher (resp. lower) priority.
- Y = Y^j = [(x_1, v_1), ..., (x_j, v_j)] is a current partial assignment (CPA). v_j* is the optimal value of A_j.
- LB_k[i][v_j] are the lower bounds of a lower neighbor A_k obtained for Y^j.
- GC (resp. GC*) is the guaranteed cost of Y (resp. its guaranteed cost in AC*).
- DVals is a list of n arrays of deleted values; each array DVals[j] contains two elements: listVals, the list of values deleted by A_j, and UnvNbrs, a counter of the neighbors of A_j that have not yet processed listVals.
- EVals is a list of arrays containing extension values.

The guaranteed cost of Y is the sum of the binary costs c_ij involved in Y (1).

GC(Y) = \sum_{c_{ij} \in \mathcal{C}} c_{ij}(v_i, v_j), \quad (x_i, v_i), (x_j, v_j) \in Y \qquad (1)

A CPA Y is said to be a complete assignment (i.e., a solution) when it comprises a value assignment for each variable of the DCOP. Solving a DCOP means finding a solution that minimizes the sum of the costs of the constraints it involves, i.e., Y* = argmin_Y {GC(Y) | var(Y) = X}.
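To make the model concrete, the following minimal Python sketch represents a toy DCOP with unary and binary cost tables and computes the guaranteed cost GC(Y) of a complete assignment as in (1). All variable names, domains, and cost values are invented for the illustration; they are not taken from the paper.

# Minimal sketch of a toy DCOP (illustrative names and costs).
# Variables x1, x2, x3 with domains {a, b}; unary costs c_i and binary costs c_ij.
unary = {
    'x1': {'a': 1, 'b': 0},
    'x2': {'a': 0, 'b': 2},
    'x3': {'a': 0, 'b': 1},
}
binary = {
    ('x1', 'x2'): {('a', 'a'): 2, ('a', 'b'): 0, ('b', 'a'): 1, ('b', 'b'): 3},
    ('x2', 'x3'): {('a', 'a'): 0, ('a', 'b'): 1, ('b', 'a'): 2, ('b', 'b'): 0},
}

def guaranteed_cost(Y):
    # Sum of the binary costs c_ij(v_i, v_j) over pairs assigned in the CPA Y, as in (1).
    return sum(costs[(Y[xi], Y[xj])]
               for (xi, xj), costs in binary.items()
               if xi in Y and xj in Y)

# A complete assignment and its guaranteed cost.
Y = {'x1': 'b', 'x2': 'a', 'x3': 'a'}
print(guaranteed_cost(Y))  # 1 + 0 = 1

An exhaustive argmin over all complete assignments would yield Y*; the algorithms discussed below aim precisely at avoiding that enumeration.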
Proc. 1: ProjectUnary()
1  Ξ² ← min_{v_i ∈ D_i} {c_i(v_i)} ;
2  C_Ο†^i ← C_Ο†^i + Ξ² ;
3  foreach (v_i ∈ D_i) do
4      c_i(v_i) ← c_i(v_i) βˆ’ Ξ² ;

Proc. 2: ProjectBinary(x_i, x_j)
1  foreach (v_i ∈ D_i) do
2      Ξ± ← min_{v_j ∈ D_j} {c_ij(v_i, v_j)} ;
3      foreach (v_j ∈ D_j) do
4          c_ij(v_i, v_j) ← c_ij(v_i, v_j) βˆ’ Ξ± ;
5      if (A_i is the current agent)
6          c_i(v_i) ← c_i(v_i) + Ξ± ;

Proc. 3: Extend(x_i, x_j, E)
1  foreach (v_i ∈ D_i) do
2      foreach (v_j ∈ D_j) do
3          c_ij(v_i, v_j) ← c_ij(v_i, v_j) + E[v_i] ;
4      if (A_i is the current agent)
5          c_i(v_i) ← c_i(v_i) βˆ’ E[v_i] ;

Proc. 4: CheckPruning()
1  foreach (a ∈ D_j) do
2      if (c_j(a) + C_Ο† β‰₯ UB_j)
3          D_j ← D_j βˆ’ {a} ;
4          DVals[j].listVals.add(a) ;
5  if (D_j is changed)
6      DVals[j].UnvNbrs ← A_j.Nbrs ;
7  if (D_j is empty)
8      broadcastMsg : stp(UB_j) ;
9      end ← true ;

Proc. 5: DAC*()
1  foreach (A_k ∈ Ξ“+) do
2      foreach (v_j ∈ D_j) do
3          E[v_j] ← c_j(v_j) ;
4      Extend(x_j, x_k, E) ;
5      EVals[jk].put(E) ;
6      ProjectBinary(x_k, x_j) ;

Proc. 6: ProcessPruning(msg)
1  DVals ← msg.DVals ;
2  foreach (A_k ∈ Ξ“) do
3      foreach (a ∈ DVals[k]) do
4          D_k ← D_k βˆ’ {a} ;
5      if (D_k is changed)
6          DVals[k].UnvNbrs.decr(βˆ’1) ;
7      if (DVals[k].UnvNbrs = 0)
8          DVals[k].listVals.clear ;
9  if (msg.type = ok?)
10     EVals ← msg.EVals ;
11     foreach (A_k ∈ Ξ“βˆ’) do
12         Extend(x_k, x_j, EVals[kj]) ;
13         EVals[kj].clear ;
14         ProjectBinary(x_j, x_k) ;
15     ProjectUnary() ;
16 C_Ο† ← max{C_Ο†, msg.C_Ο†} + C_Ο†^j ;
17 C_Ο†^j ← 0 ;
18 if (C_Ο† β‰₯ UB_j)
19     broadcastMsg : stp(UB_j) ;
20     end ← true ;
21 CheckPruning() ;
22 DAC*() ;
23 ExtendCPA() ;

2.2. Soft arc consistency

Soft arc consistency techniques are used during search to delete values that are not part of the optimal solution of the problem. They are applied through a set of transformations known as equivalence-preserving transformations, which move costs between the constraints of the problem in three ways: binary projection, unary projection, and extension. The binary projection (Proc. 2) subtracts, for a value v_i of D_i, the smallest cost Ξ± of a binary constraint C_ij and adds it to the unary constraint C_i. The unary projection (Proc. 1) subtracts the smallest cost Ξ² of a unary constraint C_i and adds it to the zero-arity constraint C_Ο†. The extension (Proc. 3) subtracts, for a value v_i of D_i, the extension value E[v_i] of v_i from the unary constraint C_i and adds it to the binary constraint C_ij, with 0 < E[v_i] ≀ c_i(v_i).

All of these transformations are applied to a problem under a set of conditions represented by soft arc consistency levels [15], namely:

Node Consistency (NC*): a variable x_i is NC* if each value v_i ∈ D_i satisfies C_Ο† + c_i(v_i) < UB_i and there is a value v_i ∈ D_i with c_i(v_i) = 0. A problem is NC* if each of its variables is NC*.

Arc Consistency (AC*): a variable x_i is AC* with respect to its neighbor x_j if x_i is NC* and there is, for each value v_i ∈ D_i, a value v_j ∈ D_j such that c_ij(v_i, v_j) = 0. Such a v_j is called a simple support of v_i. A problem is AC* if each of its variables is AC*.

Directional Arc Consistency (DAC*): a variable x_i is DAC* with respect to its neighbor x_j (j > i) if x_i is NC* and there is, for each value v_i ∈ D_i, a value v_j ∈ D_j such that c_ij(v_i, v_j) + c_j(v_j) = 0. Such a v_j is called a full support of v_i. A problem is DAC* if each variable x_i is DAC* with respect to its neighbors x_j (j > i).
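As an illustration of these definitions (not code from the paper), the following Python sketch checks, on a toy pair of variables, whether each value of D_i has a simple support (AC*) and a full support (DAC*). The cost tables are invented for the example.

# Simple vs. full supports on a toy pair (x_i, x_j); invented cost tables.
D_i = ['a', 'b']
D_j = ['a', 'b']
c_j = {'a': 2, 'b': 0}                      # unary costs of x_j
c_ij = {('a', 'a'): 0, ('a', 'b'): 1,       # binary costs c_ij(v_i, v_j)
        ('b', 'a'): 0, ('b', 'b'): 2}

def has_simple_support(v_i):
    # AC*: some v_j with c_ij(v_i, v_j) = 0
    return any(c_ij[(v_i, v_j)] == 0 for v_j in D_j)

def has_full_support(v_i):
    # DAC*: some v_j with c_ij(v_i, v_j) + c_j(v_j) = 0
    return any(c_ij[(v_i, v_j)] + c_j[v_j] == 0 for v_j in D_j)

for v_i in D_i:
    print(v_i, has_simple_support(v_i), has_full_support(v_i))
# Both 'a' and 'b' have a simple support (via v_j = 'a'), but neither has a
# full support, since c_j('a') = 2; creating full supports is exactly what
# the cost extensions performed for DAC* aim at.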
To make a given problem DAC*, we first compute, for each variable x_i with respect to its lower-priority neighbors x_j (j > i), the extension values appropriate to the values of its domain D_i (Proc. 5, l. 3). Next, we perform the extension operation (Proc. 5, l. 4) by subtracting the extension values from the unary constraints C_i and adding them to the binary ones C_ij (Proc. 3). Then, each neighbor x_j performs, successively, a binary projection (Proc. 2), a unary projection (Proc. 1), and finally a deletion of non-NC* values. These last three steps ensure that arc consistency (AC*) holds.

In the distributed case, each agent A_i performs DAC* locally and shares the value of its zero-arity constraint C_Ο†^i with the other agents in order to compute the global C_Ο† (i.e., C_Ο† = \sum_{A_i ∈ A} C_Ο†^i) (Proc. 6, l. 16). Each agent A_i keeps locally, for each of its constraints C_ij, an identical copy denoted C_ij^ab and used in the DAC* procedure. During DAC*, the C_ij^ab constraints are modified. To keep these constraints symmetric across agents, each agent A_i applies to its copy C_ij^ab the same action as its neighbor A_j, and vice versa (Proc. 5, l. 6) [16].

2.3. The AFB_BJ+-AC* algorithm

Each agent A_j carries out AFB_BJ+-AC* [9] in three phases. First, A_j initializes its data structures and performs AC*, permanently deleting all suboptimal values from its domain D_j. Second, A_j chooses for its variable x_j a value from its previously filtered domain D_j in order to extend the CPA Y^j with its value assignment (x_j, v_j). If A_j has successfully extended the CPA, it sends an ok? message to the next agent, asking it to continue the extension of the CPA Y^j. This message carries the extended CPA Y^j, its guaranteed cost (2), its guaranteed cost in AC* (3), C_Ο†, and the list DVals.

GC(Y^j)[j] = GC(Y^{j-1}) + \sum_{(x_i, v_i) \in Y^{j-1},\ i < j} c_{ij}(v_i, v_j) \qquad (2)

GC*(Y^j) = GC*(Y^{j-1}) + c_j(v_j) + \sum_{(x_i, v_i) \in Y^{j-1},\ c_{ij}^{ab} \in \mathcal{C}} c_{ij}(v_i, v_j) \qquad (3)

Otherwise, that is, if agent A_j fails to extend the CPA, either because it does not find a value that gives a valid CPA or because all the values in its domain are exhausted, it stops the CPA extension and sends a back message, containing the same data structures as an ok? message except GC and GC*, to the appropriate agent. If no such agent exists, or if the domain of A_j becomes empty, A_j stops its execution and informs the others via stp messages. A CPA Y^j is said to be valid if its lower bound (4) remains below the global upper bound UB_j, which is the cost of the best solution found so far.

LB(Y^j)[i] = GC(Y^j)[i] + \sum_{A_k > A_j} LB_k(Y^j)[i] \qquad (4)
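As a small illustration (with invented numbers, not taken from the paper), the following sketch shows how an agent can maintain the guaranteed cost of a growing CPA incrementally as in (2) and reject an extension whose lower bound (4) reaches the current upper bound.

# Illustrative incremental bound check for a CPA extension (invented numbers).
UB = 10                      # cost of the best solution found so far
GC_prev = 4                  # GC(Y^{j-1}): guaranteed cost of the CPA so far

def gc_after_extension(gc_prev, link_costs):
    # Eq. (2): add the binary costs linking the new assignment (x_j, v_j)
    # to the assignments already in the CPA.
    return gc_prev + sum(link_costs)

def is_valid(gc, lb_lower_neighbors):
    # Eq. (4): the CPA stays valid while GC plus the lower bounds reported
    # by lower-priority agents stays below the upper bound UB.
    return gc + sum(lb_lower_neighbors) < UB

gc = gc_after_extension(GC_prev, link_costs=[1, 2])    # GC(Y^j) = 7
print(is_valid(gc, lb_lower_neighbors=[1, 1]))         # 7 + 2 = 9 < 10 -> True
print(is_valid(gc, lb_lower_neighbors=[2, 2]))         # 7 + 4 = 11 >= 10 -> False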
Third, A_j evaluates the extended CPA by sending fb? messages, which carry the same data structures except C_Ο† and DVals, to the unassigned agents, asking them to evaluate the CPA and return the result of the evaluation. When an agent has completed its evaluation, it sends the result directly to the sender agent via an lb message. The evaluation is based on the calculation of appropriate lower bounds for the received CPA Y^i. The lower bound of Y^i (5) is the minimal lower bound over all values of D_j with respect to Y^i.

LB_j(Y^i)[h]_{(h \le i < j)} = \min_{v_j \in D_j} \Big\{ \sum_{(x_k, v_k) \in Y^h,\ k \le h} c_{kj}(v_k, v_j) + \sum_{k = h+1,\ h < k < i}^{i-1} \min_{v_k \in D_k} \{ c_{kj}(v_k, v_j) \} + c_{ij}(v_i, v_j) + \sum_{x_k \in \Gamma^+(x_j),\ k > j} \min_{v_k \in D_k} \{ c_{jk}(v_j, v_k) \} \Big\} \qquad (5)

3. The AFB_BJ+-DAC* algorithm

The AFB_BJ+-DAC* algorithm uses a higher consistency level, directional arc consistency (DAC*). It improves the ability of the AFB_BJ+-AC* algorithm to generate more deletions. It is based on executing a set of cost extensions from unary constraints to binary ones, then on executing AC*. DAC*() (Proc. 5) is the procedure responsible for calculating the extension values (i.e., the costs to be transferred), and Extend() (Proc. 3) is the one that performs the extension of costs from the unary constraints towards the binary ones (Β§2.2). All the extension values used by an agent are stored in a list, EVals, and routed to its lower neighbors via an ok? message in order to keep the C_ij^ab constraints symmetric between each agent and its neighbors. The list of extension values, EVals, is processed in the procedure ProcessPruning() (Proc. 6, l. 9-13), in which DAC*() is also performed (Proc. 6, l. 22).

3.1. Description of AFB_BJ+-DAC*

AFB_BJ+-DAC* (Proc. 7) is performed by each agent A_j as follows. A_j starts with the initialization step (Proc. 7, l. 1-3). If A_j is the first agent (Proc. 7, l. 4), it filters its domain by calling CheckPruning() (Proc. 4), then performs the appropriate extensions through DAC*() (Proc. 5), and finally calls ExtendCPA() to generate a CPA Y. Next, A_j starts processing the messages (Proc. 7, l. 9). First, it updates UB_j and v_j* (Proc. 7, l. 12). Then, A_j updates Y and GC and erases all unrelated lower bounds if the received CPA (msg.Y) is fresher than the local one (Y) (Proc. 7, l. 13). Thereafter, A_j restores all temporarily deleted values (Proc. 7, l. 15).

Proc. 7: AFB_BJ+-DAC*()
1  Init. of data structures ;
2  foreach (A_k ∈ Ξ“+) do
3      LB_k[0][v_j] ← min_{v_k ∈ D_k} {c_jk(v_j, v_k)}, βˆ€ v_j ∈ D_j ;
4  if (A_j = A_1)
5      C_Ο† ← C_Ο† + C_Ο†^j ; C_Ο†^j ← 0 ;
6      CheckPruning() ;
7      DAC*() ;
8      ExtendCPA() ;
9  while (Β¬end) do
10     msg ← getMsg() ;
11     if (msg.UB < UB_j)
12         UB_j ← msg.UB ; v_j* ← v_j ;
13     if (msg.Y is stronger than Y)
14         Y ← msg.Y ; GC ← msg.GC ;
15         clear irrelevant LB_k[][] ; reset D_j ;
16     if (msg.type = ok?)
17         mustSendFB ← true ;
18         GC* ← msg.GC* ;
19         ProcessPruning(msg) ;
20     if (msg.type = back)
21         Y ← Y^{j-1} ;
22         ProcessPruning(msg) ;
23     if (msg.type = fb?)
24         GC* ← msg.GC* ;
25         foreach (v_j ∈ D_j) do
26             cost ← C_Ο† + GC*(Y^{j-1}) + c_j(v_j) ;
27             if (cost β‰₯ UB_j)
28                 D_j ← D_j βˆ’ v_j ;
29         sendMsg : lb(LB_j(Y^i)[], msg.Y) to A_i ;
30     if (msg.type = stp)
31         end ← true ;
33     if (msg.type = lb)
34         LB_k(Y^j) ← msg.LB ;
35         if (LB(Y^j) β‰₯ UB_j)
36             ExtendCPA() ;

Proc. 8: ExtendCPA()
1  v_j ← argmin_{v_j' ∈ D_j} {LB(Y βˆͺ (x_j, v_j'))} ;
2  if (LB(Y βˆͺ (x_j, v_j)) β‰₯ UB_j) ∨ (C_Ο† + GC*(Y^{j-1}) + c_j(v_j) β‰₯ UB_j)
3      for i ← j βˆ’ 1 to 1 do
4          if (LB(Y)[i βˆ’ 1] < UB_j)
5              sendMsg : back(Y^i, UB_j, DVals, C_Ο†) to A_i ; return ;
6      broadcastMsg : stp(UB_j) ;
7      end ← true ;
8  else
9      Y ← {Y βˆͺ (x_j, v_j)} ;
10     if (var(Y) = X)
11         UB_j ← GC(Y) ; v_j* ← v_j ;
12         Y ← Y^{j-1} ;
13         CheckPruning() ;
14         ExtendCPA() ;
15     else
16         sendMsg : ok?(Y, GC, UB_j, DVals, EVals, C_Ο†, GC*) to A_{j+1} ;
17         EVals.clear ;
18         if (mustSendFB)
19             sendMsg : fb?(Y, GC, UB_j, GC*) to A_k (k > j) ;
20             mustSendFB ← false ;
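To make the overall control flow of Proc. 7 easier to follow, here is a schematic Python sketch of the agent's message loop. It only mirrors the dispatch structure (ok?, back, fb?, lb, stp) and omits the bookkeeping; all attributes and methods on the agent object are placeholders, not the paper's exact interface.

def run_agent(agent):
    # Schematic view of the message loop in Proc. 7 (placeholder interface).
    if agent.is_first():
        agent.check_pruning()   # prune non-NC* values (Proc. 4)
        agent.dac_star()        # extend unary costs towards lower neighbors (Proc. 5)
        agent.extend_cpa()      # build the first CPA (Proc. 8)
    while not agent.end:
        msg = agent.get_msg()
        if msg.ub < agent.ub:            # adopt the best upper bound seen so far
            agent.ub = msg.ub
        if msg.type == 'ok?':            # continue extending the received CPA
            agent.process_pruning(msg)   # deletions, extensions, AC*, DAC*, ExtendCPA (Proc. 6)
        elif msg.type == 'back':         # backtrack: retract the local assignment
            agent.retract_assignment()   # Y <- Y^{j-1}
            agent.process_pruning(msg)
        elif msg.type == 'fb?':          # evaluate the CPA and report lower bounds
            agent.send_lower_bounds(msg)
        elif msg.type == 'lb':           # store bounds; change value if the CPA is too costly
            agent.store_lower_bounds(msg)
            if agent.lower_bound() >= agent.ub:
                agent.extend_cpa()
        elif msg.type == 'stp':          # the search is over
            agent.end = True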
When receiving an ok? message (Proc. 7, l. 16), A_j authorizes the sending of fb? messages and calls ProcessPruning() (Proc. 6). When calling ProcessPruning() (Proc. 6), A_j first deals, for ok? messages only, with the extensions of its higher-priority neighbors (Proc. 6, l. 9-13). Afterward, it updates its DVals and then its neighbors' domains separately, in order to keep the same domains as these agents (Proc. 6, l. 1-4). After that, it performs the two projections fulfilling the AC* condition (Proc. 6, l. 14-15). Next, A_j decrements the counter of unvisited neighbors of A_k, DVals[k].UnvNbrs, and checks whether it is the last visited neighbor of this agent A_k, in which case it resets A_k's list of deleted values DVals[k].listVals (Proc. 6, l. 5-8). Then, A_j updates the global C_Ο† (Proc. 6, l. 16). If C_Ο† reaches UB_j, A_j stops its execution and notifies the others (Proc. 6, l. 18-20). Finally, A_j calls CheckPruning() to prune its domain, DAC*() (Proc. 5) to make the proper extensions, and ExtendCPA() to extend the received CPA (Proc. 6, l. 21-23).

When calling DAC*() (Proc. 5), A_j performs the proper extensions from C_j to each C_ij (Proc. 5, l. 4-5). To do so, A_j computes, for each value v_j of D_j, its extension value (Proc. 5, l. 2-3) based on the unary cost of this value (0 < E[v_j] ≀ c_j(v_j)). Once completed, A_j performs a binary projection to keep the C_ij^ab constraints symmetric (Proc. 5, l. 6). It should be noted that the direction taken into account by each agent A_j for the extension of its costs is towards its lower-priority neighbors (Ξ“+(x_j)).
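The following Python fragment sketches this extension step for a single lower-priority neighbor, on invented cost tables: the full unary cost of each value of x_j is moved into the (copied) binary constraint, after which the neighbor's side can project. It is an illustration of the operation, not the paper's implementation.

# Sketch of the DAC* extension step towards one lower neighbor x_k
# (invented cost tables; not the paper's implementation).
D_j = ['a', 'b']
D_k = ['a', 'b']
c_j = {'a': 2, 'b': 1}                                  # unary costs of x_j
c_jk = {(vj, vk): 0 for vj in D_j for vk in D_k}        # copy C_jk^ab of the binary constraint

# Extension (Proc. 3 with E[v_j] = c_j(v_j), as computed in Proc. 5, l. 2-3):
E = {vj: c_j[vj] for vj in D_j}
for vj in D_j:
    for vk in D_k:
        c_jk[(vj, vk)] += E[vj]     # add the extension value to the binary copy
    c_j[vj] -= E[vj]                # and remove it from the unary constraint

# Binary projection onto x_k (Proc. 2), which the neighbor mirrors on its copy:
c_k = {vk: 0 for vk in D_k}
for vk in D_k:
    alpha = min(c_jk[(vj, vk)] for vj in D_j)
    for vj in D_j:
        c_jk[(vj, vk)] -= alpha
    c_k[vk] += alpha

print(c_j)   # {'a': 0, 'b': 0}  (unary costs fully extended)
print(c_k)   # {'a': 1, 'b': 1}  (cost projected onto the neighbor's unary costs)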
When calling CheckPruning() (Proc. 4), A_j deletes from its domain any value for which the sum of C_Ο† and the unary cost of this value reaches UB_j (Proc. 4, l. 2-3). With each new deletion, A_j resets the counter of its neighbors not yet visited (Proc. 4, l. 5-6). If the domain of A_j becomes empty, A_j stops its execution and notifies the others (Proc. 4, l. 7-9).

When calling ExtendCPA() (Proc. 8), A_j looks for a value v_j for its variable x_j (Proc. 8, l. 1). If no such value exists, A_j goes back to the higher-priority agents by sending a back message to the culprit agent (Proc. 8, l. 2-5). If no such agent exists, A_j stops its execution and notifies the others via stp messages (Proc. 8, l. 6-7). Otherwise, A_j extends Y by adding its assignment (Proc. 8, l. 9). If A_j is the last agent (Proc. 8, l. 10), a new solution has been obtained and UB_j is updated, which obliges A_j to call CheckPruning() to filter its domain again and then ExtendCPA() to continue the search (Proc. 8, l. 11-14). Otherwise, A_j sends an ok? message loaded with the extended Y to the next agent (Proc. 8, l. 16) and fb? messages to the unassigned agents (Proc. 8, l. 19).

When A_j receives an fb? message, it filters its domain D_j with respect to the received Y (Proc. 7, l. 24-28), calculates the appropriate lower bounds (5), and immediately sends them to the sender via an lb message (Proc. 7, l. 29). When A_j receives an lb message, it stores the received lower bounds (Proc. 7, l. 34) and performs ExtendCPA() to modify its assignment if the lower bound computed from the cost of Y (4) reaches UB_j.

3.2. Correctness of AFB_BJ+-DAC*

Theorem 1. AFB_BJ+-DAC* is guaranteed to compute the optimum and to terminate.

Proof. AFB_BJ+-DAC* extends AFB_BJ+-AC* [9] by executing a set of cost extensions. These extensions have been proved correct in [15, 17], and they are executed by AFB_BJ+-DAC* without any cost redundancy (Proc. 3, l. 4), (Proc. 5, l. 6), and (Proc. 6, l. 9-13).

4. Experimental Results

In this section, we experimentally compare the AFB_BJ+-DAC* algorithm with its older versions [8, 9] and with the BnB-Adopt+-DP2 algorithm [18], its best-known competitor. Two benchmarks are used in these experiments: meeting scheduling and sensor networks.

Meeting scheduling [1]: instances are defined by (m, p, ts), which are respectively the number of meetings/variables, the number of participants, and the number of time slots for each meeting. Each participant has a private schedule of meetings, and each meeting takes place at a particular location and at a fixed time slot. Constraints are applied to meetings that share participants. We evaluated the same cases presented in detail in [1]. These cases are A, B, C, and D, each representing a different scenario in terms of the number of meetings and the number of participants, who each have a different hierarchical level.
Sensor networks [2]: instances are defined by (t, s, d), which are respectively the number of targets/variables, the number of sensors, and the number of possible combinations of 3 sensors reserved for tracking each target. A sensor can track at most one target, and each combination of 3 sensors must track a target. Constraints are applied to adjacent targets. We evaluated the same cases presented in detail in [1]. These cases are A, B, C, and D, each representing a different scenario in terms of the number of targets and the number of sensors, which are arranged in different topologies.

To compare the algorithms, we use two metrics: the total number of messages exchanged (msgs), which represents the communication load, and the total number of non-concurrent constraint checks (ncccs), which represents the computation effort.

Figure 1: Total number of messages (msgs) sent and non-concurrent constraint checks (ncccs) for meeting scheduling (cases A-D), for AFB_BJ+, AFB_BJ+-AC*, AFB_BJ+-DAC*, and BnB-Adopt+-DP2.

Figure 2: Total number of messages (msgs) sent and non-concurrent constraint checks (ncccs) for sensor networks (cases A-D), for AFB_BJ+, AFB_BJ+-AC*, AFB_BJ+-DAC*, and BnB-Adopt+-DP2.

Regarding meeting scheduling problems (Fig. 1), the results show a clear improvement of AFB_BJ+-DAC* over the other algorithms, both in msgs and in ncccs. For sensor network problems (Fig. 2), however, BnB-Adopt+-DP2 retains the lead, despite the superiority of AFB_BJ+-DAC* over its older versions. Analyzing the results, we can conclude that AFB_BJ+-DAC* outperforms its earlier versions because directional arc consistency (DAC*) allows agents to remove more suboptimal values, thanks to the set of cost extensions applied to the problem. The superiority of BnB-Adopt+-DP2 over AFB_BJ+-DAC* on sensor network problems is mainly due to the arrangement of the pseudo-tree used by this algorithm, which matches the structure of these problems, as well as to the DP2 heuristic, which facilitates the proper choice of values.

5. Conclusion

In this paper, we have introduced the AFB_BJ+-DAC* algorithm. It relies on DAC* to generate more deletions and thus to reach the optimal solution of a problem more quickly. DAC* mainly relies on performing a set of cost extensions in one direction, from an agent to its lower-priority neighbors, in order to perform AC* multiple times, which increases the number of deletions made by each agent and thereby speeds up the solving process. Experiments on some benchmarks show that the AFB_BJ+-DAC* algorithm behaves better than its older versions. As future work, we propose to exploit the change in the size of the agent domains in variable ordering heuristics.

References

[1] R. T. Maheswaran, M. Tambe, E. Bowring, J. P. Pearce, P. Varakantham, Taking DCOP to the real world: Efficient complete solutions for distributed multi-event scheduling, in: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 1, IEEE Computer Society, 2004, pp. 310–317.
[2] R. BΓ©jar, C. Domshlak, C. FernΓ‘ndez, C. Gomes, B. Krishnamachari, B. Selman, M. Valls, Sensor networks and distributed CSP: communication, computation and complexity, Artificial Intelligence 161 (2005) 117–147.
[3] A. Gershman, A. Meisels, R. Zivan, Asynchronous forward bounding for distributed COPs, Journal of Artificial Intelligence Research 34 (2009) 61–88.
[4] P. J. Modi, W.-M. Shen, M. Tambe, M. Yokoo, Adopt: Asynchronous distributed constraint optimization with quality guarantees, Artificial Intelligence 161 (2005) 149–180.
[5] W. Yeoh, A. Felner, S. Koenig, BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm, Journal of Artificial Intelligence Research 38 (2010) 85–133.
[6] P. Gutierrez, P. Meseguer, Saving messages in ADOPT-based algorithms, in: Proc. 12th DCR Workshop in AAMAS-10, Citeseer, 2010, pp. 53–64.
[7] K. Hirayama, M. Yokoo, Distributed partial constraint satisfaction problem, in: International Conference on Principles and Practice of Constraint Programming, Springer, 1997, pp. 222–236.
[8] M. Wahbi, R. Ezzahir, C. Bessiere, Asynchronous forward bounding revisited, in: International Conference on Principles and Practice of Constraint Programming, Springer, 2013, pp. 708–723.
[9] R. Adrdor, R. Ezzahir, L. Koutti, Connecting AFB_BJ+ with soft arc consistency, International Journal of Computing and Optimization 5, no. 1 (2018) 9–20.
[10] R. Adrdor, L. Koutti, Enhancing AFB_BJ+-AC* algorithm, in: 2019 International Conference of Computer Science and Renewable Energies (ICCSRE), IEEE, 2019, pp. 1–7.
[11] R. Adrdor, R. Ezzahir, L. Koutti, Consistance d'arc souple appliquΓ©e aux problΓ¨mes DCOP, JournΓ©es d'Intelligence Artificielle Fondamentale (JIAF) (2020) 63.
[12] T. Grinshpoun, T. Tassa, V. Levit, R. Zivan, Privacy preserving region optimal algorithms for symmetric and asymmetric DCOPs, Artificial Intelligence 266 (2019) 27–50.
[13] F. Fioretto, E. Pontelli, W. Yeoh, Distributed constraint optimization problems and applications: A survey, Journal of Artificial Intelligence Research 61 (2018) 623–698.
[14] D. T. Nguyen, W. Yeoh, H. C. Lau, R. Zivan, Distributed Gibbs: A linear-space sampling-based DCOP algorithm, Journal of Artificial Intelligence Research 64 (2019) 705–748.
[15] J. Larrosa, T. Schiex, In the quest of the best form of local consistency for weighted CSP, in: IJCAI, volume 3, 2003, pp. 239–244.
[16] P. Gutierrez, P. Meseguer, Improving BnB-ADOPT+-AC, in: Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 1, International Foundation for Autonomous Agents and Multiagent Systems, 2012, pp. 273–280.
[17] M. C. Cooper, S. De Givry, M. SΓ‘nchez, T. Schiex, M. Zytnicki, T. Werner, Soft arc consistency revisited, Artificial Intelligence 174 (2010) 449–478.
[18] S. Ali, S. Koenig, M. Tambe, Preprocessing techniques for accelerating the DCOP algorithm ADOPT, in: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, ACM, 2005, pp. 1041–1048.