Artificial Intelligence applied on the Risk! game

Luca Fuligni (luca.fuligni@studio.unibo.it), Andrea Franzon (andrea.franzon@studio.unibo.it), Orfeo Ciano (orfeo.ciano@studio.unibo.it)

Alma Mater Studiorum - Università di Bologna

Abstract. Our goal is to create an application that uses artificial players for the game of Risk!. Some of the existing implementations of Risk! present customizable artificial players, but their behavior seems to be neither too smart nor too naive, and this can tire out the human player. We intend to achieve the goal in a different way: we chose to use Prolog, a declarative language, in contrast with the existing open-source implementations. We learnt that the usage of Prolog reduces development time and effort by offering a good abstraction.

Keywords: artificial intelligence, Prolog, Risk, game, dice.

1 Introduction

Our goal is to create an application that uses artificial players for the game of Risk!. The context is the application of Artificial Intelligence to this game: a strategic, turn-based, multiplayer game that contains some randomness due to the throwing of dice. More exactly, we use A.I. for the game strategy. We chose Risk! because, unlike the classical games of game theory, it presents many problems, for example the choice of the attacking territories or the prediction of which territory the enemy will attack.

Usually the existing open-source implementations of this game use imperative languages to describe the A.I. strategy. We, instead, propose to separate the A.I. aspect from the program by using a declarative logic language. We chose Prolog because it includes first-order logic and a backward-chaining mechanism natively. Moreover, we did not choose an imperative language, like Java, for the A.I. aspect, because the tree-search and inference functionality included in some libraries is less efficient than the Prolog engine.
As a declarative language we chose Prolog instead of other languages like Lisp because we have to infer more on records than on lists. Our intent is to use the Prolog language for describing the artificial player's decisions. The ability of each artificial player (and so the difficulty for a human player) is characterized by the set of rules that defines the inference engine of that player. The particularity of our solution is the connection between Prolog, which represents the A.I., and Java, with which we develop the rest of our application.

2 The game of Risk!

Risk! is a turn-based strategic board game for two to six players in which the goal is indicated by an objective card. The game is played on a board depicting a political map of the Earth that contains 42 territories distributed over 6 continents. There are several goals, but in our application, for simplicity and generality, we consider only the aim of owning 24 territories.

At start-up each player distributes a fixed number of armies on the board. Each turn consists of three phases: placing armies, attacking, and fortifying. In the first phase the player puts new armies into the owned territories. In the second the player attacks a nearby enemy territory; the success or failure of an attack is decided by the throw of dice. In the third phase the player can move some armies between two owned nearby territories, only once. The phases repeat until a player reaches his goal.

3 Related works

This approach can be used in games which present randomness and multiplayer issues, or in games where the artificial player's behavior is strictly connected with a logical description, like Risk!. Nowadays there are several implementations of Risk!. The most famous is Risk Digital by Hasbro, but there are a lot of Risk! clones, like Lux for Linux, Dominion for mobile phones, or other flash games that can be found over the Internet.
Some of those implementations present customizable artificial players, but their behavior seems to be neither too smart nor too naive, and this can tire out the human player. Our approach, besides focusing on efficiency, handles the logical aspect in a different and more suitable way in order to improve the game-play.

4 Methodology

Our first implementation is a Java model apt to represent the game board and all the information that it contains (territory borders, neighborhood relations, continents, territory ownership, ...). We also define the Prolog knowledge-base structure, paying attention to data consistency with the Java model. We propose three different A.I. difficulty levels: easy, medium and hard. The easy level performs random actions, while the hard level is an extension of the medium level, i.e. a more intelligent implementation of the same Prolog predicates. Each skill that distinguishes the medium and hard difficulties is mapped into Prolog predicates. We use "GNU Prolog for Java" to embed the Prolog interpreter into our Java application because it respects the ISO standards. We assume that the aim is to conquer 24 territories, to simplify the study of the game. We choose to guide the A.I. through some well-known cases in each phase of the player's turn.

4.1 Knowledge Base

The Risk! map can be considered as a set of countries (territories). For each territory we define:
- a relation of neighborhood between territories;
- the territory's ownership;
- the number of armies the territory holds.

We represented this knowledge in Prolog by defining the following set of facts:

player/1 represents a player, identified by his color (blue, red, green, pink, black, yellow).
territory/1 represents a territory, identified by its name.
owner/2 represents the ownership of a territory by the denoted player.
neighbor/2 represents the relation of neighborhood between two territories.
army/2 represents the army count for each territory.
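The paper does not list the Java model itself; the following is a minimal sketch of a board model that mirrors these facts and can emit them as Prolog text. All class, field, and method names here are our own assumptions, not taken from the paper's implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a board model mirroring the facts owner/2, army/2, neighbor/2.
class Territory {
    final String name;
    String owner;   // a player color atom, e.g. "red"
    int armies;
    final List<Territory> neighbors = new ArrayList<>();

    Territory(String name) { this.name = name; }
}

class Board {
    // Emits the Prolog facts describing one territory.
    static String toPrologFacts(Territory t) {
        StringBuilder sb = new StringBuilder();
        sb.append("territory(").append(t.name).append(").\n");
        sb.append("owner(").append(t.name).append(", ").append(t.owner).append(").\n");
        sb.append("army(").append(t.name).append(", ").append(t.armies).append(").\n");
        for (Territory n : t.neighbors)
            sb.append("neighbor(").append(t.name).append(", ").append(n.name).append(").\n");
        return sb.toString();
    }
}
```

A visitor over such objects (as in Section 4.4) can then write the generated facts to the knowledge-base file consumed by the Prolog engine.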
So this is the simplest knowledge base we can have:

    player(red).
    player(green).
    territory(siberia).
    territory(jacuzia).
    territory(cita).
    owner(siberia, red).
    owner(jacuzia, red).
    owner(cita, green).
    neighbor(siberia, jacuzia).
    neighbor(siberia, cita).
    neighbor(jacuzia, cita).
    neighbor(jacuzia, siberia).
    neighbor(cita, jacuzia).
    neighbor(cita, siberia).
    army(siberia, 3).
    army(jacuzia, 4).
    army(cita, 2).

4.2 Moves

Since a turn is made up of three phases, we define three separate predicates:

place_army/2 Given the player's color, it gives back the territory on which an army should be placed.
attack/3 Given a player's color, it gives back two territories: an attack source (a territory owned by the player) and an attack destination (a territory owned by an enemy player).
move/3 Given a player's color, it gives back two territories: the source territory from which the armies should be moved and the destination territory to which the armies should be moved.

Here are some sample queries:

    ?- place_army(red, Destination).
    Yes, Destination = siberia.

    ?- attack(red, Source, Destination).
    Yes, Source = jacuzia, Destination = cita.

    ?- move(red, Source, Destination).
    Yes, Source = siberia, Destination = jacuzia.

The implementation of those predicates varies according to the A.I. difficulty. In this way the responsibility of the Java engine is only to update the knowledge base and perform the move suggested by the Prolog engine.

4.3 Randomness representation in Prolog

Because of the randomness component of the game, exploring a decision tree to choose which territory to attack could lead to poor responsiveness. This is because we would have to insert two levels of randomness (attack and defense dice) into the decision tree, exponentially increasing the number of nodes at those levels.
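One way to obtain winning probabilities without expanding such a decision tree is to enumerate a single dice exchange exactly and then apply the Markov-chain recursion mentioned in Section 5. The sketch below assumes classic Risk rules (at most 3 attack dice and 2 defense dice, ties won by the defender; the Italian edition differs, allowing up to 3 defense dice), and all names are ours, not the paper's:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: exact per-round loss probabilities and the full victory table.
public class VictoryTable {

    /** result[k] = P(attacker loses k armies in one exchange), k = 0..min(aDice, dDice). */
    static double[] roundLossDist(int aDice, int dDice) {
        int pairs = Math.min(aDice, dDice);
        long[] count = new long[pairs + 1];
        int outcomes = (int) Math.pow(6, aDice + dDice);
        for (int code = 0; code < outcomes; code++) {
            // Decode one joint outcome of all dice from the base-6 code.
            int c = code;
            int[] a = new int[aDice], d = new int[dDice];
            for (int i = 0; i < aDice; i++) { a[i] = c % 6 + 1; c /= 6; }
            for (int i = 0; i < dDice; i++) { d[i] = c % 6 + 1; c /= 6; }
            java.util.Arrays.sort(a);
            java.util.Arrays.sort(d);
            int aLoss = 0;
            for (int i = 0; i < pairs; i++) {
                // Compare the highest remaining dice; the defender wins ties.
                if (a[aDice - 1 - i] <= d[dDice - 1 - i]) aLoss++;
            }
            count[aLoss]++;
        }
        double[] p = new double[pairs + 1];
        for (int k = 0; k <= pairs; k++) p[k] = (double) count[k] / outcomes;
        return p;
    }

    private static final Map<Long, Double> memo = new HashMap<>();

    /** P(att attacking armies eventually conquer def defenders), with the
     *  attacker always rolling the maximum allowed dice and never retreating. */
    static double victory(int att, int def) {
        if (def == 0) return 1.0;
        if (att == 0) return 0.0;
        long key = (long) att << 32 | def;
        Double cached = memo.get(key);
        if (cached != null) return cached;
        int aDice = Math.min(3, att), dDice = Math.min(2, def);
        double[] dist = roundLossDist(aDice, dDice);
        int pairs = Math.min(aDice, dDice);
        double p = 0;
        for (int aLoss = 0; aLoss <= pairs; aLoss++)
            p += dist[aLoss] * victory(att - aLoss, def - (pairs - aLoss));
        memo.put(key, p);
        return p;
    }
}
```

Dumping victory(A, D) for all army counts up to some bound yields exactly the victory/3 facts described next, and a threshold check against these values is a one-line comparison.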
Therefore we decided to represent the randomness component using a table that contains, given the number of attacker and defender armies, the probability of winning an attack. Inside the Prolog knowledge base we represent this information with victory/3:

    victory(AttackArmy, DefenseArmy, Probability).

In this way the attack predicate, given a fixed threshold value, knows which attacker-army/defender-army configuration two territories must have to perform an attack.

4.4 From Java to Prolog

Java has the responsibility of generating the knowledge base. In order to accomplish this task we use the Visitor pattern on the game table (map) object:

    PrologVisitor visitor = new PrologVisitor();
    for (Continent continent : continents)
    {
        for (Territory territory : continent.getTerritories())
            visitor.visit(territory);
    }

Once the knowledge base is updated, it is loaded by the Prolog engine:

    Environment env = new Environment();
    // Loading the knowledge base file and A.I. implementation
    env.ensureLoaded(AtomTerm.get("knowledge.pl"));
    env.ensureLoaded(AtomTerm.get("easy.pl"));
    interpreter = env.createInterpreter();
    env.runInitialization(interpreter);

The execution of a query:

    // Calling the query: ?- place_army(red, Territory).
    VariableTerm answerTerm = new VariableTerm("Territory");
    Term[] args = {AtomTerm.get(playerColor), answerTerm};
    CompoundTerm goalTerm = new CompoundTerm(AtomTerm.get("place_army"), args);

    // Executing the query and testing if it succeeded
    int rc = interpreter.runOnce(goalTerm);
    if (rc == PrologCode.SUCCESS || rc == PrologCode.SUCCESS_LAST)
    {
        // Getting the Prolog-indicated territory from the game table
        Term value = answerTerm.dereference();
        String name = TermWriter.toString(value);
        Territory destination = table.getTerritory(name);
    }

The result of this query is used by the Risk! game engine, which performs the corresponding move.

5 Case-study

During an attack, to handle the randomness of the dice, we generate a table (based on Markov chains [1]) which contains the probability of winning an attack given the number of attack and defense armies. Each player's difficulty is characterized by a threshold value: only if the probability of winning is above this value will the player attack. Due to this decision, the A.I. sometimes reaches a deadlock situation where no one either attacks or moves armies between territories. To break the deadlock we had to hard-code some specific Prolog conditions. After that, we tested the system again by letting two players with the same A.I. difficulty level fight each other; in this case the hard-coded conditions become noticeable. If instead there is at least one player with a different difficulty level, the specific conditions are not so evident, so the Turing test is passed.

To overcome these problems a finer solution could be to represent the model as a Constraint Linear Programming (CLP) problem in which every decision is driven by an objective function that is either minimized or maximized, depending on the action that the player has performed. The objective function varies according to the evaluation of different variables; this evaluation is either rough or refined depending on the player's difficulty level.

6 Conclusions

Our implementation grants a good level of play for every kind of player. The CLP solution, on the other hand, would require a fine tuning of the objective function for each difficulty in order to meet the optimal value for each A.I. level, and this would produce an unnatural playing style.
The Prolog interpreter is an optimal solution for these kinds of problems. By using Prolog it is easier to switch between different A.I. logics (and approaches), leaving the core application unchanged.

References

1. Osborne, Jason A., April 2003: "Markov Chains for the RISK Board Game Revisited", Mathematics Magazine 76.
2. S. Russell and P. Norvig, 2005: "Intelligenza artificiale. Un approccio moderno", volume 1, Seconda Edizione.
3. L. Console, E. Lamma, P. Mello, M. Milano, 1997: "Programmazione Logica e Prolog", Seconda Edizione, UTET.