5000 Characters at Video Frame Rate using Declarative Programming

Samuel Hill¹, Ian Horswill¹

¹ Northwestern University, 2233 Tech Drive, Evanston, IL, 60208, USA

Abstract
Video games have historically been limited in the number of characters under AI control. This is all the more true for games that use declarative systems, which are generally used for turn-based or otherwise non-real-time games. While a declarative system can obviously be used for a real-time simulation, we wanted to test the limits of how far it can be scaled for large numbers of characters. This paper shows how 5000 characters can run simultaneously and in real-time with 60fps performance on a modern laptop using a declarative logic programming language.

Keywords
Declarative programming, Logic programming, Needs-based AI, crowd simulations

11th Experimental Artificial Intelligence in Games Workshop, November 19, 2024, Lexington, Kentucky, USA. Copyright © 2024 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

1. Introduction

Performance considerations have traditionally restricted the number of simultaneous NPCs, particularly when those characters are driven by symbolic rules or other declarative methods. Many systems that do support this kind of character AI are also turn-based rather than real-time. In this paper, we show that 5000 The Sims-style characters can be run simultaneously on a modern laptop using a version of logic programming.

Although Sims-style "needs-based AI" is relatively simple, and the rendering of the characters in our demo is extremely simple (particle-like dots on the screen), this still demonstrates that symbolic, declarative techniques can be pushed well beyond what would have been thought possible. In this paper, we discuss the system and the techniques used to make it so efficient.

2. Related Work

Many AI-based games have used symbolic declarative systems - logic, symbolic rules, or some kind of rule engine - for social reasoning and character control. Façade [1] used a reactive planner, ABL [2], that incorporated a forward-chaining production system. Prom Week [3] also used a forward-chaining production system, Comme Il Faut [4]. CiF's successor, the Ensemble Engine [5], is similarly a forward-chaining production system. The Sims 3 [6] used a rule system to script the interactions between situations, personality traits, and actions available to a given character [7]. MKULTRA [8] and City of Gangsters [9] both used logic programming to implement social reasoning [10]. F.E.A.R. makes use of a planner, GOAP, that allows for declarative definitions of various AIs in the GDBEdit database tool [11].

One of the earliest and most influential symbolic systems for games is the Inform 7 language [12], [13], which allows designers to build interactive narrative systems using declarative statements. The Versu simulationist narrative system [14], [15] used a custom logic programming language, Praxis, based on an exotic modal logic called eremic logic (aka exclusion logic) [16]. The Lume system [17] made extensive use of Prolog's definite clause grammars [18], [19] for text generation. Lapeyrade has also used Prolog for better character decision making [20].

All these games and systems use declarative programming for some component of character control and/or social reasoning. Some are turn-based games, like those on the Versu platform; others are batch-like jobs, as used in Townlikes (which still need to run fast, but not at any particular frame rate); and others are real-time systems, like GOAP in F.E.A.R.. Even when declarative techniques are used for real-time character control, the number of NPCs is usually relatively low, and the portion of the game logic that is declarative is the relations between possible actions and the world state. The code for how a Goal executes is still usually written in an imperative manner [11].

There are also entire games that have been built using declarative languages, such as those built with the video game description language, VGDL [21]. Currently this technique is used to generate 2D tile arcade-style games with the help of ASP and evolutionary search [22], [23]. These games are playable by a human and are fully declarative - everything from character controls to game logic to the tilemap itself is declared in text files that are used to generate a game. The purpose of these games is largely to allow for testing of general video game playing (GVGP) algorithms on several different games [24].

In large part because of the expense of AI simulation, games involving large-scale character simulation are relatively rare. The best known is Dwarf Fortress [25], which supports real-time simulation of small hundreds of characters. Achieving this level of performance requires implementation in C++ and significant programmer effort to optimize cache locality and minimize pointer chasing. RimWorld is a very similar game that also involves social simulation for the purpose of storytelling [26]. Caves Of Qud is a game that is more like adventure mode in DF and, similarly, features hundreds of characters that all have various social relations and need to choose what action to take each tick [27]. Unlike DF's Fortress mode and RimWorld, this rogue-like experience is turn-based and as such doesn't need to run at a particular frame rate. While all these games feature large numbers of characters acting in a world, none use declarative systems for the control of these NPCs.

Some large-scale crowd simulations exist for use in film and other rendered media [28], [29]. Some of these tools even allow the behaviors of the agents to be stated in sentence form for specific decision nodes [30]. These tools target rendered scenes for film and television and as such do not need to run in real-time. Some crowd simulation techniques have been used to study crowd dynamics and the creation of autonomous agents for visualization [31], [32]. Shao and Terzopoulos claimed real-time performance at 30fps when animating up to 1400 characters moving and pathfinding around a 3D scene on 2007 hardware [31]. This was done with over 50,000 lines of C++ and uses a simple action selection method reminiscent of needs-based AI.

3. Needs-based AI

Needs-based AI is a famous technique for controlling characters in video games that was created for The Sims [33], [34], and has been used in other games like Roller Coaster Kingdom and an RPG for Lord Of the Rings [34].

Needs-based AI attempts to fulfill various needs for NPCs by searching over the actions of each object available to them (i.e. advertisements) and weighing the cost of completing the action against each character's needs with some scoring function [34]. Each character goes about completing the actions that have been selected/assigned to them, potentially getting rewarded for the various tasks. When no more actions remain in the queue of selected actions, the NPC searches for all the advertisements on all the objects around them, then scores each action based on their needs and selects the best scoring advertisement. The equation for selecting the best scoring advertisement is argmax(S), where S is the set of all scored advertisements for a given character. The complexity of scoring all advertisements is O(NLM), where N is the number of people, L is the number of objects nearby, and M is the number of advertisements at each location. Once all advertisements have been scored for all characters, the argmax can be run for each character. Since argmax operates on the set of all advertisements for each person, the complexity of selecting an action from the scored set is also O(NLM). Given that these two steps are of the same complexity, and that the two iterations are sequential, the total complexity of action selection is O(NLM).

The selected advertised action sequence is then added to the queue of actions to be completed. Needs-based AI has always had the ability to sequence actions, a technique called action chaining [34], but in The Sims 4 the developers wanted to have more interesting action sequences. To accomplish this, they allowed the system to simultaneously dispatch multiple actions from the queue when those actions would not interfere with one another [35].

The Sims 3 had a goal of creating a larger, more varied living world, and to do this while remaining playable, several optimizations were employed [36]: execution optimizations such as hierarchical planning and commodity-interaction maps, as well as level of detail (LoD) optimizations such as auto-satisfy curves and story progression [36]. Hierarchical planning takes the usual O(NLM) approach of searching for every person (N), for every location (L), and then for every advertised action (M), and makes it O(N + L + M) by instead selecting each step one at a time. Commodity-interaction maps are essentially dictionaries of needs and the actions that satisfy those needs; they trade storage cost for the compute time of searching over all advertisements on all objects to see if they satisfy a sim's needs. The LoD optimizations tick the actions of sims not in the currently focused level only when they come into focus (auto-satisfy), and tick certain subsystems, like the wants of the town itself, at a lower rate (story progression). As a result, although the simulation notionally supports hundreds of characters, the vast majority of them are effectively idle at any given time.
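The O(NLM) score-and-argmax procedure described in this section can be sketched as follows. This is a minimal illustrative sketch in Python, not the paper's C#/Simulog implementation; the `Advertisement` and `Location` types and the toy scoring function are our own inventions (Section 3.1 describes the attenuated scoring actually used in the demo).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Advertisement:
    need: str      # the need this advertised action satisfies
    delta: float   # advertised change to that need

@dataclass
class Location:
    ads: list      # advertisements offered at this location

def score(needs, ad):
    # Toy scoring function: weigh the advertised delta by how depleted
    # the need is (here 0 = fully satisfied, 100 = fully depleted).
    return ad.delta * needs[ad.need]

def select_actions(people, locations):
    """Naive O(N*L*M) needs-based selection: score every advertisement
    for every person, then take the argmax over each person's set S."""
    best = {}
    for name, needs in people.items():                  # N people
        scored = []
        for loc_id, loc in locations.items():           # L locations
            for ad in loc.ads:                          # M ads per location
                scored.append((score(needs, ad), loc_id, ad))
        best[name] = max(scored, key=lambda s: s[0])    # argmax(S)
    return best

people = {"ann": {"hunger": 80.0}, "bob": {"hunger": 10.0}}
locations = {
    "food_stand": Location([Advertisement("hunger", 30.0)]),
    "snack_cart": Location([Advertisement("hunger", 10.0)]),
}
choices = select_actions(people, locations)   # both pick "food_stand"
```

One scoring pass for all characters followed by a per-character argmax keeps the two O(NLM) steps sequential, matching the total O(NLM) given above.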
Figure 1: State Fair Sim running in Unity

As well, although The Sims 3 has many more possible motives (needs) than the 8 used in previous versions, any individual character has only a small number of those possible motives.

3.1. Scoring functions

The scoring function that we use here is the most sophisticated function listed in Zubek's description of needs-based AI: an attenuated need-delta scoring function that is additionally attenuated based on distance [34]. One difference in our version is that each action can only satisfy one need. In the version of action performance outlined by Zubek, some actions can be sequences, and this can potentially allow for multiple needs to be fulfilled by an advertisement. As such, the algorithm for calculating an advertised action's score must sum the attenuated scoring across all needs. Instead of summing across all needs, with each action only satisfying one need, we can simplify the attenuated need-delta scoring with distance attenuation to:

score = D(A_need(ν_current) − A_need(ν_current + δ))

where ν_current is the current value of a particular need type, δ is the advertised need delta, A_need is an attenuation function, and D is the distance attenuation function. A_need(x) = 10/x, as is discussed in Zubek, but instead of the mentioned D(x) = x/distance² function, ours is D(x) = x/(1 + distance²/2870), to tune down the degree to which NPCs attenuate distance. The value 2870 was not chosen at random: it relates to the size of the map that the characters move on, and effectively scales distance² to be between 1 and 30 (as opposed to a max of 82369).

One more simplification in the current demo is that all locations only advertise a single action. In effect, a location has a type that correlates to a need. As such, the complexity of this scoring algorithm can be given as O(NL), where N is the number of NPCs that need to select actions and L is the number of locations to search over.

4. Real-time logic programming

To test the scaling limits of declarative symbolic logical methods for real-time NPC control, we built a state-fair simulator, as one might find in a tycoon game, using a logic-programming implementation of needs-based AI. NPCs wander through the state fair eating, drinking, seeking entertainments, and so on. The simulation consists of a map with 367 locations, onto which we spawn a few thousand NPCs. Each character has 6 motivations (hunger, bathroom, games, amusement, music, and shopping) that influence their action selection.

4.1. Needs-based AI in logic programming

The logic for the core steps of scoring all available actions and then selecting the best action for each character is contained in the few lines of code found in Code Fragments 1 and 2. For a discussion of the programming language used, see [37], [38], but the following fragments can be explained thusly:

Code Fragment 1 defines a predicate that is true when a given person performing a given advertised action at a given location has a given score. It states that it has that score when the location advertises that action, the action fulfills some need, and the score is the result of the distance-based scoring above, along with some noise to add a small amount of randomness to the action selection:
Code Fragment 1
Action Scoring

PersonActionScoring = Predicate("…",
    person.Indexed, actionType.Indexed,
    location.Indexed, score).If(
        ReadyToSelectAction,
        CoordForPerson[person, coords],
        LocationAdvertisements,
        deltaOffset == delta + RandomFloat[-2, 2],
        ActionSatisfiesNeed,
        FairgroundLocations[location, __, otherCoords],
        NeedByType[person, needType, need],
        DistanceScoring[need, deltaOffset,
            coords, otherCoords, score]);

Given this, Code Fragment 2 defines a predicate stating that a given action/location pair is best for a given character (note that this is essentially the same as the argmax equation, just written in logic):

Code Fragment 2
Action Selection

PersonActionBestScore = Predicate("…",
    person.Key, actionType.Indexed,
    location.Indexed).If(
        ReadyToSelectAction[tempPerson],
        Maximal((person, actionType, location), score,
            PersonActionScoring[tempPerson, actionType,
                location, score]
            & person == tempPerson));

4.2. Implementation optimizations

The performance of the system depends on several optimizations. Many of these have to do with the specific programming language implementation: strong typing, bottom-up execution, parallel execution, and native code compilation. The details of the language implementation are outside the scope of this paper, but see [37], [38] for more information.

Another important optimization might be called implicit need decay. Normally, a needs-based AI system would update each need state of each character on every tick. For our system, this would be prohibitively expensive. Instead, we store timestamped need values; a need value is represented as a tuple (ν, t, ρ) stating that the need had value ν at time t, and was decaying at rate ρ. We can then compute the current need value on demand as:

ν_current = max(0, ν − (t_now − t) ∗ ρ)

with t_now being the current time. Since these values are read infrequently but would otherwise be updated frequently, this is much more efficient. As is recommended by Zubek, decay rates for a given need vary slightly from character to character to give them slightly different "personalities". These decay rates are chosen randomly (Appendix A, Code Fragment 3).
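The timestamped representation just described can be sketched as follows. This is an illustrative Python rendering under our own names (`Need`, `satisfy`), not the paper's implementation; the clamp to a 0-100 range is our assumption, while the decay-rate range mirrors the random rates in Appendix A, Code Fragment 3.

```python
class Need:
    """Implicit need decay: store (value, timestamp, rate) and compute
    the current value lazily instead of writing every need every tick."""
    def __init__(self, value, rate, stamp=0.0):
        self.value, self.rate, self.stamp = value, rate, stamp

    def current(self, now):
        # v_current = max(0, v - (now - t) * rho)
        return max(0.0, self.value - (now - self.stamp) * self.rate)

    def satisfy(self, delta, now):
        # Writes are rare (action completion): re-base value and timestamp.
        self.value = min(100.0, self.current(now) + delta)
        self.stamp = now

# Decay rates in the demo are drawn from [0.002, 0.007] per tick
# (Appendix A, Code Fragment 3); 0.005 is a representative value.
hunger = Need(value=50.0, rate=0.005)
v = hunger.current(now=1000.0)   # 50 - 1000 * 0.005, i.e. about 45
```

Reads evaluate the formula on demand; only the occasional write (an action completing) touches the stored tuple, which is what makes per-tick updates unnecessary.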
Figure 2: Performance of the State Fair Sim

4.3. Action States

For characters to perform a selected action, they must move to the location of the selected action and then take some steps to complete the action before the need is satisfied. While we could allow for instantaneous completion of actions, to be more in line with The Sims implementation as specified by Zubek, we instead have some action performance before a need is satisfied.

When an action is initiated, the system tells the Unity code to start moving the character, and then marks the character as being in progress towards that location. When the character arrives at the location, they are marked as no longer moving towards their destination. When the action is completed, Unity informs the system, and the system updates the need states and generates a new next action (Appendix A, Code Fragments 6, 7, 8, 11). In place of the animations that would normally be running when performing an action, we simulate an animation by starting a timer that indicates the selected action is being done (Appendix A, Code Fragment 9).

Unlike in The Sims, all actions in this simulation are individual; there is no chaining of actions, nor are there any concurrent interactions. This simplification is what allows us to use a timer for each person taking an action: when the timer is running, the "animation" is playing, and when the timer completes, the character has completed the action and finished its animation. While we do not have proper animations for each sim, when on a timer for an action the sim moves randomly around the location at which it has selected an action.
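The move/arrive/timer lifecycle above can be summarized as a small per-character state machine. The state names and methods here are our own illustrative framing; in the actual system these transitions are expressed as the Simulog rules of Appendix A, Code Fragments 6-11, with Unity reporting arrivals.

```python
from enum import Enum, auto

class Phase(Enum):
    SELECTING = auto()   # no queued action: run needs-based selection
    MOVING = auto()      # Unity is moving the character to the location
    PERFORMING = auto()  # "animation" timer running at the location

class Character:
    def __init__(self):
        self.phase = Phase.SELECTING
        self.timer = 0.0

    def assign(self, action_time):
        # Best-scoring action chosen: start moving, remember the
        # interaction time for the stand-in animation timer.
        self.phase = Phase.MOVING
        self.timer = action_time

    def arrived(self):
        # Unity reports arrival: start "performing" (timer counts down).
        self.phase = Phase.PERFORMING

    def tick(self, dt):
        if self.phase is Phase.PERFORMING:
            self.timer -= dt
            if self.timer <= 0.0:
                # Timer done: action complete, needs get updated and
                # the character is ready to select its next action.
                self.phase = Phase.SELECTING
```

There is no queue or concurrency here, reflecting the paper's simplification that every action is individual.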
4.4. Destinations & Unity Interface

For control of character movements, we have a simple interface between the action selection simulation and the Unity code that performs movements. The Simulog code provides a table of people and the destination each is moving to (Appendix A, Code Fragment 7) that the Unity code iterates over each tick to adjust NPC positions. The Unity code in turn provides a function that we can call to ask if someone has arrived at their destination. With this simple interface - action selection assigning destinations to the Unity code, and Unity only needing to report back when someone has arrived - we are able to change out the underlying movement implementation without any modifications to the action selection algorithm. Currently, the movement is all simple vector addition moving sims at a set speed straight towards their destination.

5. Performance

With 5000 NPCs running this simulation on an Apple M3 Pro, we are able to achieve 60fps or better for 95% of frames when running our performance tests. Our average inner-frame time was 13.87ms with a standard deviation of 2.077ms. Figure 3 shows our performance curve once stable (from 12500 to 25000 ticks), which is somewhat bell-curve shaped with a longer tail to the right. Most of that tail is likely garbage collection or some other Unity process, as the tail is not present in Figure 4, which shows the performance curve for just the rule execution times.

Figure 3: State Fair Sim Inter-Frame Intervals

Figure 4: Action Selection Rule Execution Times

The graph in Figure 2 shows the progression of the simulation over a 25,000-tick run time. The first 5000 ticks spawn one person every tick, causing the slow rise in compute time for both the action selection mechanism and the inner-frame updates. For the next 7500 ticks, characters begin to arrive at their first destinations and need to select their next actions, creating a tiered slope in the action selection performance as various numbers of characters need to select an action. From 12500 ticks on, we have a steady state where performance is consistent, with NPCs now moving around and selecting actions at a rate of 13.7 actions selected on average per tick for our sample, with a σ of 3.8 actions.

The cost of moving 5000 NPCs and rendering the scene (amongst other functions that Unity runs) is the difference between the action selection rule execution time and the inner-frame time. As can be seen in Figure 2, this difference (looking from 5000 ticks to 9000 ticks, where action selection has not yet taken off) is about 7ms. There is also a baseline cost to running the rest of the declarative simulation when not performing action selection. As can be seen in Figures 2 and 4, running our declarative simulation on 5000 characters without any action selection takes about 1ms.

After these baseline costs, the variation in performance largely comes down to the number of characters selecting actions on any given tick. The performance cost for one NPC selecting an action averages to just under 0.3ms. The gap between each number of agents selecting is clearly visible in Figures 2 and 5, with both the color-mapped data points and the green lines indicating the moving average for each tier. The same gaps are not visible in the inter-frame time data (grey dots in Figure 2), although there is clearly a correlation between the range of performance for rule execution and for total inter-frame time.

Figure 5: Performance of Action Selection

5.1. Pushing character counts

The performance shown above is only possible when we first compile our rules down to native C# code, the process for which is described in [37], [38], and then run the compiled code in parallel. The parallelism employed is per predicate, not per sim: using the dependency graph of all predicates, we spin up tasks that run the compiled rules for a given predicate only when its dependencies have finished. Without these optimizations, the maximum character count at nearly the same performance is 2000 NPCs.

One optimization that could be employed to further stabilize performance (and potentially allow for more NPCs) is to throttle the number of characters that select actions on a given tick. With a limit on the number of characters selecting actions that is slightly higher than the unthrottled average number selecting, you could effectively cap the cost of action selection while still eventually performing the same needs-based calculations. This technique was not employed in this demo, but a commercial system that cares more about never dropping below 60fps may want to implement such a throttle.
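As a rough back-of-envelope check, the averages reported in this section can be combined into a simple additive cost model (the model itself is our own simplifying assumption, not a measurement from the paper):

```python
# All numbers below are the averages reported in Section 5; the
# additive decomposition is our illustrative assumption.
baseline_ms = 1.0        # declarative simulation, no action selection
per_selection_ms = 0.3   # (just under) the cost of one NPC selecting
avg_selectors = 13.7     # average action selections per steady-state tick

rule_ms = baseline_ms + per_selection_ms * avg_selectors   # about 5.1 ms

unity_ms = 7.0           # movement/rendering gap reported above
frame_ms = rule_ms + unity_ms    # about 12.1 ms, in the ballpark of the
                                 # measured 13.87 ms average inner frame
budget_ms = 1000.0 / 60.0        # about 16.7 ms budget for 60fps
```

The model also suggests why a selection throttle stabilizes performance: capping the number of selectors caps the only term that varies from tick to tick.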
6. Conclusion

AI character control based on declarative programming can be surprisingly performant. Through a combination of compilation, parallel evaluation, and careful optimization, thousands of characters can be run at video frame rate. Although this demonstration involves simple needs-based AI, we intend to investigate more complicated techniques in the future.

References

[1] M. Mateas and A. Stern, Façade. (2005).
[2] M. Mateas and A. Stern, "A behavior language for story-based believable agents," IEEE Intell. Syst., vol. 17, no. 4, pp. 39–47, Jul. 2002, doi: 10.1109/MIS.2002.1024751.
[3] J. McCoy, M. Treanor, B. Samuel, A. A. Reed, N. Wardrip-Fruin, and M. Mateas, "Prom week," in Proceedings of the International Conference on the Foundations of Digital Games, FDG '12. New York, NY, USA: Association for Computing Machinery, May 2012, pp. 235–237. doi: 10.1145/2282338.2282384.
[4] J. McCoy, M. Treanor, B. Samuel, N. Wardrip-Fruin, and M. Mateas, "Comme il Faut: A System for Authoring Playable Social Models," Proc. AAAI Conf. Artif. Intell. Interact. Digit. Entertain., vol. 7, no. 1, pp. 158–163, Oct. 2011, doi: 10.1609/aiide.v7i1.12454.
[5] B. Samuel, A. A. Reed, P. Maddaloni, M. Mateas, and N. Wardrip-Fruin, "The Ensemble Engine: Next-Generation Social Physics".
[6] The Sims 3. (2009). Maxis.
[7] R. Evans, "AI challenges in Sims 3," Artif. Intell. Interact. Digit. Entertain., 2009.
[8] I. Horswill, "Postmortem: MKULTRA, An Experimental AI-Based Game," Proc. AAAI Conf. Artif. Intell. Interact. Digit. Entertain., vol. 14, no. 1, Sep. 2018, doi: 10.1609/aiide.v14i1.13027.
[9] City of Gangsters. (2021). SomaSim, Chicago.
[10] R. Zubek, I. Horswill, E. Robison, and M. Viglione, "Social Modeling via Logic Programming in City of Gangsters," Proc. AAAI Conf. Artif. Intell. Interact. Digit. Entertain., vol. 17, no. 1, pp. 220–226, Oct. 2021, doi: 10.1609/aiide.v17i1.18912.
[11] J. Orkin, "Three States and a Plan: The A.I. of F.E.A.R.," 2006.
[12] G. Nelson, Inform 7. (2006).
[13] G. Nelson, "Natural Language, Semantic Analysis and Interactive Fiction," 2006.
[14] R. Evans and E. Short, "Versu—A Simulationist Storytelling System," IEEE Trans. Comput. Intell. AI Games, vol. 6, no. 2, pp. 113–130, Jun. 2014, doi: 10.1109/TCIAIG.2013.2287297.
[15] R. P. Evans and E. Short, "The AI Architecture of Versu".
[16] R. Evans, "Introducing Exclusion Logic as a Deontic Logic," in Deontic Logic in Computer Science, Lecture Notes in Computer Science, vol. 6181, G. Governatori and G. Sartor, Eds. Berlin, Heidelberg: Springer, 2010, pp. 179–195. doi: 10.1007/978-3-642-14183-6_14.
[17] S. Mason, C. Stagg, and N. Wardrip-Fruin, "Lume: a system for procedural story generation," in Proceedings of the 14th International Conference on the Foundations of Digital Games, San Luis Obispo, California, USA: ACM, Aug. 2019, pp. 1–9. doi: 10.1145/3337722.3337759.
[18] F. C. N. Pereira and D. H. D. Warren, "Definite clause grammars for language analysis—A survey of the formalism and a comparison with augmented transition networks," Artif. Intell., vol. 13, no. 3, pp. 231–278, May 1980, doi: 10.1016/0004-3702(80)90003-X.
[19] F. Pereira and S. M. Shieber, "Prolog and Natural-Language Analysis," 1987. [Online]. Available: https://api.semanticscholar.org/CorpusID:264203475
[20] S. Lapeyrade, "Reasoning with Ontologies for Non-player Character's Decision-Making in Games," Proc. AAAI Conf. Artif. Intell. Interact. Digit. Entertain., vol. 18, no. 1, Oct. 2022, doi: 10.1609/aiide.v18i1.21980.
[21] T. Schaul, "A video game description language for model-based or interactive learning," in 2013 IEEE Conference on Computational Intelligence in Games (CIG), Niagara Falls, ON, Canada: IEEE, Aug. 2013, pp. 1–8. doi: 10.1109/CIG.2013.6633610.
[22] T. S. Nielsen, G. A. B. Barros, J. Togelius, and M. J. Nelson, "Towards generating arcade game rules with VGDL," in 2015 IEEE Conference on Computational Intelligence and Games (CIG), Tainan: IEEE, Aug. 2015, pp. 185–192. doi: 10.1109/CIG.2015.7317941.
[23] M. J. Nelson and M. Mateas, "Towards Automated Game Design," in AI*IA 2007: Artificial Intelligence and Human-Oriented Computing, Lecture Notes in Computer Science, vol. 4733, R. Basili and M. T. Pazienza, Eds. Berlin, Heidelberg: Springer, 2007, pp. 626–637. doi: 10.1007/978-3-540-74782-6_54.
[24] J. Levine et al., "General Video Game Playing," Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2013. doi: 10.4230/DFU.VOL6.12191.77.
[25] T. Adams and Z. Adams, Slaves to Armok: God of Blood Chapter II: Dwarf Fortress. (2006). Bay 12 Games.
[26] T. Sylvester, RimWorld. (Oct. 2018). Ludeon Studios.
[27] Caves of Qud. (Jul. 15, 2015). Freehold Games, South Bend, IN and Berkeley, CA.
[28] "MIARMY | Home." Accessed: Aug. 29, 2024. [Online]. Available: https://www.miarmy.com/#/
[29] "Massive Software." Accessed: Aug. 29, 2024. [Online]. Available: https://www.massivesoftware.com/
[30] "Defuzz Process - Feishu Docs." Accessed: Aug. 29, 2024. [Online]. Available: https://basefount.feishu.cn/wiki/GhgswVsrfidfOVkmFcgc1XoYnKA
[31] W. Shao and D. Terzopoulos, "Autonomous pedestrians," Graph. Models, vol. 69, no. 5–6, pp. 246–274, Sep. 2007, doi: 10.1016/j.gmod.2007.09.001.
[32] Q. Yu and D. Terzopoulos, "A Decision Network Framework for the Behavioral Animation of Virtual Humans".
[33] W. Wright, The Sims. (2000).
[34] R. Zubek, "Needs-based AI," in Game Programming Gems 8, A. Lake, Ed. Florence, KY: Cengage Learning, 2010.
[35] "Concurrent Interactions in The Sims 4." Accessed: Aug. 30, 2024. [Online]. Available: https://gdcvault.com/play/1020190/Concurrent-Interactions-in-The-Sims
[36] "Modeling Individual Personalities in The Sims 3." Accessed: Aug. 29, 2024. [Online]. Available: https://www.gdcvault.com/play/1012450/Modeling-Individual-Personalities-in-The
[37] I. Horswill and S. Hill, "Fast, Declarative, Character Simulation Using Bottom-Up Logic Programming," presented at the AIIDE Workshop on Experimental Artificial Intelligence in Games, University of Utah, Utah, USA, Oct. 2023.
[38] I. Horswill and S. Hill, "Fast, Declarative, Character Simulation Using Bottom-Up Logic Programming," in AIIDE 24.

A. Code Appendix

Other than the two code fragments inside the body of this document, the remaining fragments in this appendix contain the entirety of the simulation code, written in TED/Simulog.

Code Fragment 1
NPCs (people) and their Needs

People = Exists("People", person);
NeedByType = Predicate("NeedByType",
    person.JointKey, needType.JointKey, need);

Code Fragment 2
NPC start and end conditions

Unsatisfied = Definition("Unsatisfied", person)
    .Is(NeedByType[person, __, need],
        NeedValue[need] == Zero);
People.StartWhen(People.Population[count],
        count < 5000, RandomPerson)
    .EndWhen(People[person], Unsatisfied[person]);

Code Fragment 3
Need assignments

foreach (var needOfType in NeedTypeList)
    People.StartCauses(
        Add(NeedByType[person, needOfType, need]).If(
            NewNeed[RandomFloat[0.002f, 0.007f], need]));

Code Fragment 4
Locations and their Advertisements

FairgroundLocations = Parse(
    "FairgroundLocations", location.Key,
    locationType.Indexed, coords.Indexed,
    ParseFairgroundLocations());
LocationAdvertisements = Parse(
    "LocationAdvertisements",
    location.Indexed, actionType.Indexed,
    delta, ParseLocationAdvertisements());

Code Fragment 5
Action satisfies need table

ActionSatisfiesNeed = Parse("ActionSatisfi…",
    actionType, needType,
    DefaultActionSatisfiesNeed.Select(
        pair => (pair.Key, pair.Value)));

Code Fragment 6
Person action state table

PersonActionAt = Predicate("PersonActionAt",
    person.Key, actionType.Indexed,
    location.Indexed, moving.Indexed);
PersonActionAt.Overwrite = true;
PersonMovingTo = Definition("PersonMovingTo",
    person, location).Is(
        PersonActionAt[person, __, location, true]);

Code Fragment 7
Destinations and Arrivals

Destinations = Predicate("Destinations", person, coords)
    .If(People[person],
        PersonMovingTo[person, location],
        FairgroundLocations);
ArrivedAtDestination = Predicate("…", person)
    .If(People[person],
        PersonMovingTo[person, __],
        HasPersonArrived[person]);

Code Fragment 8
Arrival state update

PersonActionAt.Set(person, moving, false)
    .If(ArrivedAtDestination);

Code Fragment 9
Action timer

ActionTimer = Timer("Action", person);
ActionTimer.StartWhen(ArrivedAtDestination,
    PersonActionAt,
    ActionTypeInteractionTime[actionType, count]);

Code Fragment 10
Ready to select action

ReadyToSelectAction = Event("…", person)
    .OccursWhen(People[person],
        !People.Start[person],
        !PersonMovingTo[person, __],
        ActionTimer.NotOnTimer);

Code Fragment 11
Action assignment

PersonActionAt.Add[person, actionType, location, true]
    .If(PersonActionBestScore);

Code Fragment 12
Action completion and reward

CompletedAction = Definition("…",
    person, needType, delta).Is(
        ActionTimer.TimerFinished,
        PersonActionAt,
        ActionSatisfiesNeed[actionType, needType],
        LocationAdvertisements);
NeedByType.Set((person, needType), need).If(
    CompletedAction[person, needType, delta],
    NeedByType[person, needType, needCol],
    UpdateNeed[needCol, delta, need]);