AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI

Roman V. Yampolskiy
Computer Engineering and Computer Science
University of Louisville
Louisville, USA
roman.yampolskiy@louisville.edu

Abstract— The paper contributes to the development of the theory of AI-Completeness by formalizing the notions of AI-Complete and AI-Hard problems. The intended goal is to provide a classification of problems in the field of General Artificial Intelligence. We prove the Turing Test to be an instance of an AI-Complete problem and further show numerous AI problems to be AI-Complete or AI-Hard via polynomial time reductions. Finally, the paper suggests some directions for future work on the theory of AI-Completeness.

Keywords- AI-Complete, AI-Easy, AI-Hard, Human Oracle.

I. INTRODUCTION

Since its inception in the 1950s the field of Artificial Intelligence has produced some unparalleled accomplishments while at the same time failing to formalize the problem space it is concerned with. This paper proposes to address this shortcoming by contributing to the theory of AI-Completeness, a formalism designed to do for the field of AI what the notion of NP-Completeness did for computer science in general. It is our belief that such formalization will allow for even faster progress in solving the remaining problems in humankind’s quest to build an intelligent machine.

According to the encyclopedia Wikipedia, the term “AI-Complete” was proposed by Fanya Montalvo in the 1980s (Wikipedia Retrieved January 7, 2011). A somewhat general definition of the term included in the 1991 Jargon File (Raymond March 22, 1991) states:

“AI-complete: [MIT, Stanford, by analogy with `NP-complete'] adj. Used to describe problems or subproblems in AI, to indicate that the solution presupposes a solution to the `strong AI problem' (that is, the synthesis of a human-level intelligence). A problem that is AI-complete is, in other words, just too hard. Examples of AI-complete problems are `The Vision Problem', building a system that can see as well as a human, and `The Natural Language Problem', building a system that can understand and speak a natural language as well as a human. These may appear to be modular, but all attempts so far (1991) to solve them have foundered on the amount of context information and `intelligence' they seem to require.”

As such, the term “AI-Complete” (or sometimes AI-Hard) has been a part of the field for many years and has frequently been brought up to express the difficulty of a specific problem investigated by researchers (see (Mallery 1988; Ide and Véronis 1998; Gentry, Ramzan et al. 2005; Nejad April 2010; Bergmair December 2004; McIntire, Havig et al. July 21-23, 2009; Navigli and Velardi July 2005; Mueller March 1987; McIntire, McIntire et al. May 18-22, 2009; Chen, Liu et al. November 30, 2009; Mert and Dalkilic September 14-16, 2009; Leahu, Sengers et al. September 21-24, 2008; Phillips and Beveridge September 28-30, 2009; Hendler September 2008)). This informal use further encouraged similar concepts to be developed in other areas of science: Biometric-Completeness (Phillips and Beveridge September 28-30, 2009), ASR-Complete (Morgan, Baron et al. April 6-10, 2003).

Recently numerous attempts to formalize what it means to say that a problem is “AI-Complete” have been published (Ahn, Blum et al. 2003; Demasi, Szwarcfiter et al. March 5-8, 2010; Shahaf and Amir March 26-28, 2007). Even before such formalization attempts, systems which relied on humans to solve problems perceived to be AI-Complete were utilized:

• AntiCaptcha systems use humans to break the CAPTCHA security protocol (Ahn, et al. 2003; Yampolskiy 2007a, 2007b; Yampolskiy and Govindaraju 2007; McDaniel and Yampolskiy 2011), either by directly hiring cheap workers in developing countries (Bajaj April 25, 2010) or by rewarding correctly solved CAPTCHAs with the presentation of pornographic images (Vaas December 1, 2007).

• The Chinese Room philosophical argument by John Searle shows that including a human as a part of a computational system may actually reduce its perceived capabilities, such as understanding and consciousness (Searle 1980).

• Content Development online projects such as encyclopedias (Wikipedia, Conservapedia), libraries (Project Gutenberg), video collections (YouTube) and open source software (SourceForge) all rely on contributions from people for content production and quality assurance.

• Cyphermint, a check cashing system, relies on human workers to compare a snapshot of a person trying to perform a financial transaction to a picture of the person who initially enrolled with the system. The resulting accuracy outperforms any biometric system and is almost completely spoof proof (see cyphermint.com for more info).

• Data Tagging systems entice users into providing meta-data for images, sound or video files. A popular approach involves developing an online game which, as a byproduct of participation, produces a large amount of accurately labeled data (Ahn June 2006).

• Distributed Proofreaders employs a number of human volunteers to eliminate errors in books created by relying on an Optical Character Recognition process (see pgdp.net for more info).

• Interactive Evolutionary Computation algorithms use humans in place of a fitness function to make judgments regarding difficult-to-formalize concepts such as esthetic beauty or taste (Takagi 2001).

• Mechanical Turk is Amazon.com’s attempt at creating Artificial Artificial Intelligence. Humans are paid varying amounts for solving problems which are believed to be beyond the current abilities of AI programs (see mturk.com for more info). The general idea behind the Turk has a broad appeal, and researchers are currently attempting to bring it to the masses via Generalized Task Markets (GTM) (Horvitz 2007; Horvitz and Paek 2007; Kapoor, Tan et al. 2008; Shahaf and Horvitz July 2010).

• Spam Prevention is easy to accomplish by having humans vote on emails they receive as spam or not. If a certain threshold is reached, a particular piece of email can be said to be spam with a high degree of accuracy (Dimmock and Maddison December 2004).

Recent work has attempted to formalize the intuitive notion of AI-Completeness. In particular three such endeavors are worth reviewing.

In 2003 Ahn et al. (Ahn, et al. 2003) attempted to formalize the notion of an AI-Problem and the concept of AI-Hardness in the context of computer security. An AI-Problem was defined as a triple P = (S, D, f), “where S is a set of problem instances, D is a probability distribution over the problem set S, and f : S → {0,1}* answers the instances. Let δ ∈ (0,1]. We require that for an α > 0 fraction of the humans H, Pr_{x←D}[H(x) = f(x)] > δ… An AI problem P is said to be (δ, τ)-solved if there exists a program A, running in time at most τ on any input from S, such that Pr_{x←D,r}[A_r(x) = f(x)] ≥ δ. (A is said to be a (δ, τ) solution to P.) P is said to be a (δ, τ)-hard AI problem if no current program is a (δ, τ) solution to P, and the AI community agrees it is hard to find such a solution.” It is interesting to observe that the proposed definition is in terms of democratic consensus by the AI community: if researchers say the problem is hard, it must be so. Also, the time to solve the problem is not taken into account; the definition simply requires that some humans be able to solve the problem (Ahn, et al. 2003).

In 2007 Shahaf and Amir (Shahaf and Amir March 26-28, 2007) published their work on the Theory of AI-Completeness. Their paper presents the concept of the Human-Assisted Turing Machine and formalizes the notion of different Human Oracles (see the section on Human Oracles for technical details). The main contribution of the paper comes in the form of a method for classifying problems in terms of the human-versus-machine effort required to find a solution. For some common problems, such as Natural Language Understanding (NLU), the paper proposes a method of reductions allowing conversion from NLU to the problem of Speech Understanding via Text-To-Speech software.

In 2010 Demasi et al. (Demasi, et al. March 5-8, 2010) presented their work on problem classification for Artificial General Intelligence (AGI). The proposed framework groups the problem space into three sectors:

• Non AGI-Bound: problems that are of no interest to AGI researchers.
• AGI-Bound: problems that require human level intelligence to be solved.
• AGI-Hard: problems that are at least as hard as any AGI-Bound problem.

The paper also formalizes the notion of Human Oracles and provides a number of definitions regarding their properties and valid operations.

II. THE THEORY OF AI-COMPLETENESS

From people with mental disabilities to geniuses, human minds are cognitively diverse, and it is well known that different people exhibit different mental abilities. We define the notion of a Human Oracle (HO) function capable of computing any function computable by the union of all human minds. In other words, any cognitive ability of any human being is repeatable by our HO. To make our Human Oracle easier to understand we provide the following illustration of the Human function:

String Human(String input) {
    …
    return output;
}

Figure 1. Human oracle: HumanBest – a union of minds.

Such a function would be easy to integrate with any modern programming language and would require that the input to the function be provided as a single string of length N; the function would return a string of length M. No specific encoding is specified for the content of the strings, so they could be either binary representations of data or English language phrases, the two being computationally equivalent. As necessary, the Human function could call regular TM functions to help in the processing of data. For example, a simple computer program which would display the input string as a picture to make human comprehension easier could be executed. Humans could be assumed to be cooperating, perhaps because of a reward. Alternatively, one can construct a Human function which, instead of the union of all minds, computes the average decision of all human minds on a problem encoded by the input string as the number of such minds goes to infinity. To avoid any confusion we propose naming the first HO HumanBest and the second HO HumanAverage. Problems in the AI domain tend to have a large degree of ambiguity in terms of acceptable correct answers. Depending on the problem at hand, the simplistic notion of an average answer could be replaced with an aggregate answer as defined in the Wisdom of Crowds approach (Surowiecki 2004).
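The distinction between the two oracles can be sketched in executable form. The following Python sketch is illustrative only: real Human Oracles are people, so the list of polled “minds” and the helper names used here are hypothetical devices introduced for illustration, not part of the formal model.

```python
from collections import Counter

def human_best(responses):
    """HumanBest sketch: the union of minds -- accept the answer of
    whichever respondent solves the instance (here: the first answer
    that is not None)."""
    return next((r for r in responses if r is not None), None)

def human_average(responses):
    """HumanAverage sketch: the consensus answer as the number of
    polled minds grows -- modeled here as a simple majority vote."""
    answered = [r for r in responses if r is not None]
    if not answered:
        return None
    return Counter(answered).most_common(1)[0][0]

# Simulated poll of several "minds" on one problem instance;
# None stands for a respondent who could not solve it.
poll = ["4", "4", "5", None, "4"]
print(human_best(poll))     # first available answer
print(human_average(poll))  # majority answer
```

A Wisdom-of-Crowds aggregate, as mentioned above, would simply replace the majority vote with a problem-appropriate aggregation rule.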
Both functions could be formalized as Human-Assisted Turing Machines (Shahaf and Amir March 26-28, 2007).

The Human function is easy to understand, as it is a generalization of the Human Oracle: one can perceive it as a way to connect to and exchange information with a real human sitting at a computer terminal. While easy to understand intuitively, such a description is not sufficiently formal. Shahaf et al. have formalized the notion of the Human Oracle as an HTM (Shahaf and Amir March 26-28, 2007). In their model a human is an oracle machine that can decide a set of languages Li in constant time: H ⊆ {Li | Li ⊆ Σ*}. If time complexity is taken into account, answering a question might take a non-constant time: H ⊆ {⟨Li, fi⟩ | Li ⊆ Σ*, fi : ℕ → ℕ}, where fi is the time-complexity function for language Li, meaning the human can decide whether x ∈ Li in fi(|x|) time. In order to realistically address the capabilities of individual humans, a probabilistic oracle was also presented which provides correct answers with probability pi: H ⊆ {⟨Li, pi⟩ | Li ⊆ Σ*, 0 ≤ pi ≤ 1}. Finally, the notion of reward is introduced into the model to capture humans’ improved performance on “paid” tasks: H ⊆ {⟨Li, ui⟩ | Li ⊆ Σ*, ui : ℝ → ℝ}, where ui is the utility function (Shahaf and Amir March 26-28, 2007).

A. Definitions

Definition 1: A problem C is AI-Complete if it has two properties:
1. It is in the set of AI problems (Human Oracle solvable).
2. Any AI problem can be converted into C by some polynomial time algorithm.

Definition 2: AI-Hard: A problem H is AI-Hard if and only if there is an AI-Complete problem C that is polynomial time Turing-reducible to H.

Definition 3: AI-Easy: The complexity class AI-easy is the set of problems that are solvable in polynomial time by a deterministic Turing machine with an oracle for some AI problem. In other words, a problem X is AI-easy if and only if there exists some AI problem Y such that X is polynomial-time Turing reducible to Y. This means that given an oracle for Y, there exists an algorithm that solves X in polynomial time.

Figure 2 illustrates the relationship between the different AI complexity classes. The right side illustrates the situation if it is ever proven that AI-problems = AI-Complete problems; the left side shows the converse.

Figure 2. Relationship between AI complexity classes.

B. Turing Test as the First AI-Complete Problem

In this section we show that the Turing Test (Turing 1950) problem is AI-Complete. First we need to establish that the Turing Test is indeed an AI problem (HO solvable). This trivially follows from the definition of the test itself: the test measures whether human-like performance is demonstrated by the test taker, and Human Oracles are defined to produce human level performance. While both “human” and “intelligence test” are intuitively understood terms, we have already shown that Human Oracles can be expressed in strictly formal terms. The Turing Test itself can also be formalized as an interactive proof (Bradford and Wollowski 1995; Shieber December 2007, July 16-20, 2006).

The second requirement for a problem to be proven AI-Complete is that any other AI problem should be convertible into an instance of the problem under consideration in polynomial time via Turing reduction. Therefore we need to show how any problem solvable by the Human function can be encoded as an instance of a Turing Test. For any HO-solvable problem h we have a String input which encodes the problem and a String output which encodes the solution. By taking the input as a question to be used in the TT, and the output as the answer to be expected while administering the TT, we can see how any HO-solvable problem can be reduced in polynomial time to an instance of a Turing Test. Clearly the described process runs in polynomial time, and by the same algorithm any AI problem can be reduced to TT. It is even theoretically possible to construct a complete TT which utilizes all other problems solvable by HO by generating one question from each such problem.

C. Reducing Other Problems to TT

Having shown a first problem (Turing Test) to be AI-Complete, the next step is to see if any other well-known AI problems are also AI-Complete. This is an effort similar to the work of Richard Karp, who showed some 21 problems to be NP-Complete in his 1972 paper and by doing so started a new field of Computational Complexity (Karp 1972). According to the Encyclopedia of Artificial Intelligence (Shapiro 1992) published in 1992, the following problems are all believed to be AI-Complete and so constitute primary targets for our effort of proving formal AI-Completeness on them (Shapiro 1992):

• Natural Language Understanding – “Encyclopedic knowledge is required to understand natural language. Therefore, a complete Natural Language system will also be a complete Intelligent system.”

• Problem Solving – “Since any area investigated by AI researchers may be seen as consisting of problems to be solved, all of AI may be seen as involving Problem Solving and Search.”

• Knowledge Representation and Reasoning – “…the intended use is to use explicitly stored knowledge to produce additional explicit knowledge. This is what reasoning is. Together Knowledge representation and Reasoning can be seen to be both necessary and sufficient for producing general intelligence – it is another AI-complete area.”

• Vision or Image Understanding – “If we take “interpreting” broadly enough, it is clear that general intelligence may be needed to do this interpretation, and that correct interpretation implies general intelligence, so this is another AI-complete area.”

Now that the Turing Test has been proven AI-Complete we have an additional way of showing other problems to be AI-Complete. We can either show that a problem is in the set of AI problems and that all other AI problems can be converted into it by some polynomial time algorithm, or we can reduce any instance of the Turing Test problem (or of any other problem already proven to be AI-Complete) to an instance of the problem we are trying to show to be AI-Complete. This second approach seems particularly powerful. The general heuristic of our approach is to see whether all the information encoding a question that could be asked during the administering of a Turing Test could be encoded as an instance of the problem in question, and likewise whether any potential solution to that problem would constitute an answer to the relevant Turing Test question. Under this heuristic it is easy to see that, for example, Chess is not AI-Complete, as only limited information can be encoded as a starting position on a standard size chess board. Not surprisingly, Chess has been one of the greatest successes of AI, and currently Chess playing programs dominate all human players including world champions.

Question Answering (QA) (Hirschman and Gaizauskas 2001; Salloum November 30, 2009) is a sub-problem in Natural Language Processing. Answering questions at the level of a human is something HOs are particularly good at based on their definition. Consequently QA is an AI-Problem, which is one of the two requirements for showing it to be AI-Complete. Having access to an Oracle capable of solving QA allows us to solve TT via a simple reduction. For any statement S presented during administration of the TT, we can transform said statement into a question for the QA Oracle. The answers produced by the Oracle can be used as replies in the TT, allowing the program to pass the Turing Test. It is important to note that access to the QA oracle is sufficient to pass the Turing Test only if questions are not restricted to stand-alone queries, but can contain information from previous questions.
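The TT-from-QA reduction above can be sketched as a small wrapper around the oracle. The `qa_oracle` callable and its `question`/`context` parameters are hypothetical names introduced for illustration; the toy oracle merely stands in for a genuine QA solver.

```python
def make_tt_player(qa_oracle):
    """Sketch of the TT-from-QA reduction: every statement S presented
    during the test is forwarded as a question to the QA oracle, and the
    oracle's answer is used as the reply.  Because TT questions may refer
    to earlier exchanges, the transcript so far is passed along as context."""
    transcript = []
    def reply(statement):
        transcript.append(statement)
        # Recasting S as a question and supplying prior context is the
        # whole reduction; the oracle does all the intellectual work.
        answer = qa_oracle(question=statement, context=tuple(transcript[:-1]))
        transcript.append(answer)
        return answer
    return reply

# Toy stand-in oracle: reports how many prior exchanges it was given.
toy_oracle = lambda question, context: f"answer#{len(context) // 2 + 1}"
play = make_tt_player(toy_oracle)
print(play("Hello, who are you?"))   # answer#1
print(play("What did I just ask?"))  # answer#2
```

Dropping the `context` argument would model the restricted, stand-alone-query variant that, as noted above, today's machines can already handle.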
Otherwise the problem is readily solvable even by today’s machines, such as IBM’s Watson, which showed a remarkable performance against human Jeopardy champions (Pepitone Retrieved January 13, 2011).

Speech Understanding (SU) (Anusuya and Katti 2009) is another sub-problem in Natural Language Processing. Understanding speech at the level of a human is something HOs are particularly good at based on their definition. Consequently SU is an AI-Problem, which is one of the two requirements for showing it to be AI-Complete. Having access to an Oracle capable of solving SU allows us to solve QA via a simple reduction. We can reduce QA to SU by utilizing any Text-to-Speech software (Taylor and Black 1999; Chan 2003), which is both fast and accurate. This reduction effectively transforms written questions into spoken ones, making it possible to solve every instance of QA by referring to the SU oracle.

D. Other Probably AI-Complete Problems

Figure 3 shows the relationships, via reductions, between the problems shown to be AI-Complete in this paper. We hope that our work will challenge the AI community to prove other important problems as either belonging or not belonging to that class. While the following problems have not been explicitly shown to be AI-Complete, they are strong candidates for such classification and are also problems of great practical importance, making their classification a worthy endeavor. If a problem has been explicitly conjectured to be AI-Complete in a published paper we include a source of such speculation: Dreaming (Salloum November 30, 2009), Commonsense Planning (Shahaf and Amir March 26-28, 2007), Foreign Policy (Mallery 1988), Problem Solving (Shapiro 1992), Judging a Turing Test (Shahaf and Amir March 26-28, 2007), Common Sense Knowledge (Andrich, Novosel et al. 2009), Speech Understanding (Shahaf and Amir March 26-28, 2007), Knowledge Representation and Reasoning (Shapiro 1992), Word Sense Disambiguation (Navigli and Velardi July 2005; Chen, et al. November 30, 2009), Machine Translation (Wikipedia Retrieved January 7, 2011), Ubiquitous Computing (Leahu, et al. September 21-24, 2008), Change Management for Biomedical Ontologies (Nejad April 2010), Natural Language Understanding (Shapiro 1992), Software Brittleness (Wikipedia Retrieved January 7, 2011), Vision or Image Understanding (Shapiro 1992).

Figure 3. Reductions from the first AI-Complete problem.

E. 1st AI-Hard Problem: Programming

We define the problem of Programming as taking a natural language description of a program and producing source code which, when compiled on some readily available hardware/software, produces a computer program satisfying all implicit and explicit requirements provided in the natural language description of the programming problem assignment. Simple examples of Programming are typical assignments given to students in computer science classes, e.g. “Write a program to play Tic-Tac-Toe,” with successful students writing source code which, if correctly compiled, allows the grader to engage the computer in an instance of that game. Many requirements of such an assignment remain implicit, such as that the response time of the computer should be less than a minute. Such implicit requirements are usually easily inferred by students who have access to culture-instilled common sense. As of this writing no program is capable of solving Programming outside of strictly restricted domains.

Having access to an Oracle capable of solving Programming allows us to solve TT via a simple reduction. For any statement S presented during the TT we can transform said statement into a programming assignment of the form: “Write a program which would respond to S with a statement indistinguishable from a statement provided by an average human” (a full transcript of the TT may also be provided for disambiguation purposes). Applied to the set of all possible TT statements this procedure clearly allows us to pass the TT; however, Programming itself is not in the set of AI-Problems, as there are many instances of Programming which are not solvable by Human Oracles. For example, “Write a program to pass the Turing Test” is not known to be an AI-Problem under the proposed definition. Consequently, Programming is an AI-Hard problem.
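As a minimal sketch, the statement-to-assignment transformation used in the reduction above might look as follows; the function name and the exact wording of the generated assignment are illustrative assumptions, not part of the original construction.

```python
def tt_statement_to_assignment(statement, transcript=None):
    """Sketch of the TT-to-Programming reduction used to argue that
    Programming is AI-Hard: each test statement S becomes a natural
    language programming assignment whose solution would produce a
    human-indistinguishable reply to S."""
    task = ('Write a program which would respond to the statement '
            f'{statement!r} with a statement indistinguishable from one '
            'provided by an average human.')
    if transcript:
        # Optional disambiguation, as suggested in the text: attach the
        # full test transcript so far.
        task += f' Full test transcript so far: {transcript!r}.'
    return task

print(tt_statement_to_assignment("How was your weekend?"))
```

Note that the mapping itself is trivially polynomial time; all of the difficulty is pushed into the Programming oracle that must solve the generated assignment.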
III. BEYOND AI-COMPLETENESS

The Human Oracle function presented in this paper assumes that the human being behind it has some assistance from the computer in order to process certain human-unfriendly data formats. For example, a binary string representing a video is completely impossible for a human being to interpret, but could easily be played by a computer program in the intended format, making it possible for a human to solve a video-understanding-related AI-Complete problem. It is obvious that a human being provided with access to a computer (perhaps with an Internet connection) is more intelligent than a human unenhanced in such a way. Consequently, it is important to limit the help a computer gives to the human worker inside a Human Oracle function to assistance in the domain of input/output conversion, but not beyond, as the resulting function would be both AI-Complete and “Computer-Complete”.

Figure 4. Venn diagram for four different types of intelligence.

Figure 4 utilizes a Venn diagram to illustrate the subdivisions of the problem space produced by different types of intelligent computational devices. Region 1 represents what is known as Universal Intelligence (Legg and Hutter December 2007) or Super Intelligence (Yampolskiy 2011; Yampolskiy 2012; Yampolskiy and Fox 2012a, 2012b; Legg June 2008; Yampolskiy October 3-4, 2011a, October 3-4, 2011b), a computational agent which outperforms all other intelligent agents over all possible environments. Region 2 is the standard unenhanced human-level intelligence of the type capable of passing a Turing Test, but at the same time incapable of computation involving large numbers or a significant amount of memory. Region 3 is what is currently possible to accomplish via state-of-the-art AI programs. Finally, Region 4 represents an abstract view of animal intelligence.

AI researchers strive to produce Universal Intelligence, and it is certainly likely to happen given recent trends in both hardware and software developments and the theoretical underpinning of the Church/Turing Thesis (Turing 1936). It is also likely that if we are able to enhance human minds with additional memory and port them to higher speed hardware we will essentially obtain a Universal Intelligence (Sandberg and Boström 2008).

While Universal Intelligence incorporates the abilities of all the lower intelligences, it is interesting to observe that Human, AI and Animal intelligences have many interesting regions of intersection. For example, animal minds are as good as human minds at visual understanding of natural scenes. Regions 5, 6, and 7 illustrate common problem spaces between two different types of intelligent agents.
Region 8 represents the common problem solving abilities of humans, computers and animals. Understanding such regions of commonality may help us to better separate the involved computational classes, which are represented by the abilities of a specific computational agent minus the commonalities with the computational agent from which we are trying to distinguish it. For example, CAPTCHA-type tests (Ahn, et al. 2003) rely on the inability of computers to perform certain pattern recognition tasks with the same level of accuracy as humans in order to separate AI agents from human agents. Alternatively, a test could be devised to tell humans not armed with calculators from AIs by looking at the upper level of ability. Such a test should be easy to defeat once an effort is made to compile and formalize the limitations and biases of the human mind.

It is also interesting to consider the problem solving abilities of hybrid agents. We have already noted that a human being equipped with a computer is a lot more capable than an unaided person. Some recent research in Brain Computer Interfaces (Vidal 1973) provides a potential path for future developments in the area. Just as interestingly, combining the pattern recognition abilities of animals with the symbol processing abilities of AI could produce a computational agent with a large domain of human-like abilities (see work on RoboRats (Talwar, Xu et al. 2 May 2002) and on monkey controlled robots (Nicolelis, Wessberg et al. 2000)). It is very likely that in the near future the different types of intelligent agents will combine to an even greater extent. While such work is under way, we believe that it may be useful to introduce some additional terminology into the field of problem classification. For the complete space of problems we propose that the computational agents which are capable of solving a specific subset of such problems get to represent the set in question. Therefore we propose the additional terms “Computer-Complete” and “Animal-Complete” to represent the computational classes solvable by such agents. It is understood that just as humans differ in their abilities, so do animals and computers; the aggregation and averaging utilized in our Human function could be similarly applied to the definition of the respective oracles. As research progresses, common names may be needed for different combinations of regions from Figure 4, illustrating such concepts as a Human-AI hybrid or an Animal-Robot hybrid.

IV. CONCLUSIONS

Progress in the field of artificial intelligence requires access to well defined problems of measurable complexity. The theory of AI-Completeness aims to provide a base for such formalization. Showing certain problems to be AI-Complete/-Hard is useful for developing novel ways of telling computers from humans. Also, any problem shown to be AI-Complete would be a great alternative way of testing an artificially intelligent agent to see if it has attained human level intelligence (Shahaf and Amir March 26-28, 2007).

REFERENCES

Ahn, Lv. (June 2006). Games With A Purpose. IEEE Computer Magazine, 96-98.
Ahn, Lv, Blum, M, Hopper, N, and Langford, J. (2003). CAPTCHA: Using Hard AI Problems for Security. Eurocrypt.
Andrich, C, Novosel, L, and Hrnkas, B. (2009). Common Sense Knowledge. Paper presented at Information Search and Retrieval. Available at: http://www.iicm.tu-graz.ac.at/cguetl/courses/isr/uearchive/uews2009/Ue06-CommonSenseKnowledge.pdf.
Anusuya, MA, and Katti, SK. (2009). Speech Recognition by Machine: A Review. International Journal of Computer Science and Information Security (IJCSIS), 6(3), 181-205.
Bajaj, V. (April 25, 2010). Spammers Pay Others to Answer Security Tests. The New York Times.
Bergmair, R. (December 2004). Natural Language Steganography and an “AI-complete” Security Primitive. Paper presented at the 21st Chaos Communication Congress, Berlin.
Bradford, PG, and Wollowski, M. (1995). A formalization of the Turing Test. SIGART Bulletin, 6(4), 3-10.
Chan, T-Y. (2003). Using a Text-to-Speech Synthesizer to Generate a Reverse Turing Test. 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'03).
Chen, J, Liu, J, Yu, W, and Wu, P. (November 30, 2009). Combining Lexical Stability and Improved Lexical Chain for Unsupervised Word Sense Disambiguation. Paper presented at the Second International Symposium on Knowledge Acquisition and Modeling (KAM '09), Wuhan.
Demasi, P, Szwarcfiter, JL, and Cruz, AJO. (March 5-8, 2010). A Theoretical Framework to Formalize AGI-Hard Problems. Paper presented at The Third Conference on Artificial General Intelligence, Lugano, Switzerland.
Dimmock, N, and Maddison, I. (December 2004). Peer-to-peer collaborative spam detection. Crossroads, 11(2).
Gentry, C, Ramzan, Z, and Stubblebine, S. (2005). Secure distributed human computation. Paper presented at the 6th ACM Conference on Electronic Commerce.
Hendler, J. (September 2008). We've Come a Long Way, Maybe…. IEEE Intelligent Systems, 23(5), 2-3.
Hirschman, L, and Gaizauskas, R. (2001). Natural Language Question Answering: The View from Here. Natural Language Engineering, 7(4), 275-300.
Horvitz, E. (2007). Reflections on Challenges and Promises of Mixed-Initiative Interaction. AI Magazine – Special Issue on Mixed-Initiative Assistants, 28(2).
Horvitz, E, and Paek, T. (2007). Complementary Computing: Policies for Transferring Callers from Dialog Systems to Human Receptionists. User Modeling and User Adapted Interaction, 17(1), 159-182.
Ide, N, and Véronis, J. (1998). Introduction to the special issue on word sense disambiguation: the state of the art. Computational Linguistics, 24(1), 1-40.
Kapoor, A, Tan, D, Shenoy, P, and Horvitz, E. (2008). Complementary Computing for Visual Tasks: Meshing Computer Vision with Human Visual Processing. Paper presented at the IEEE International Conference on Automatic Face and Gesture Recognition.
Karp, RM. (1972). Reducibility Among Combinatorial Problems. In RE Miller & JW Thatcher (Eds.), Complexity of Computer Computations (pp. 85-103). New York: Plenum.
Leahu, L, Sengers, P, and Mateas, M. (September 21-24, 2008). Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box. Paper presented at the Tenth International Conference on Ubiquitous Computing, Seoul, South Korea.
Legg, S. (June 2008). Machine Super Intelligence. PhD Thesis, University of Lugano. Available at: http://www.vetta.org/documents/Machine_Super_Intelligence.pdf.
Legg, S, and Hutter, M. (December 2007). Universal Intelligence: A Definition of Machine Intelligence. Minds and Machines, 17(4), 391-444.
Mallery, JC. (1988). Thinking About Foreign Policy: Finding an Appropriate Role for Artificially Intelligent Computers. Paper presented at the Annual Meeting of the International Studies Association, St. Louis, MO.
McDaniel, R, and Yampolskiy, RV. (2011). Embedded non-interactive CAPTCHA for Fischer Random Chess. Paper presented at the 16th International Conference on Computer Games (CGAMES), Louisville, KY.
McIntire, JP, Havig, PR, and McIntire, LK. (July 21-23, 2009). Ideas on authenticating humanness in collaborative systems using AI-hard problems in perception and cognition. Paper presented at the IEEE National Aerospace & Electronics Conference (NAECON), Dayton, OH.
McIntire, JP, McIntire, LK, and Havig, PR. (May 18-22, 2009). A variety of automated turing tests for network security: Using AI-hard problems in perception and cognition to ensure secure collaborations. Paper presented at the International Symposium on Collaborative Technologies and Systems (CTS '09), Baltimore, MD.
Mert, E, and Dalkilic, C. (September 14-16, 2009). Word sense disambiguation for Turkish. Paper presented at the 24th International Symposium on Computer and Information Sciences (ISCIS 2009), Guzelyurt.
Morgan, N, Baron, D, Bhagat, S, Carvey, H, Dhillon, R, Edwards, J, . . . Wooters, C. (April 6-10, 2003). Meetings about meetings: research at ICSI on speech in multiparty conversations. Paper presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03).
Mueller, ET. (March 1987). Daydreaming and Computation. Ph.D. Dissertation, University of California, Los Angeles.
Navigli, R, and Velardi, P. (July 2005). Structural Semantic Interconnections: A Knowledge-Based Approach to Word Sense Disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7), 1075-1086.
Nejad, AS. (April 2010). A Framework for Analyzing Changes in Health Care Lexicons and Nomenclatures. PhD dissertation, Concordia University, Montreal, Quebec, Canada.
Nicolelis, MAL, Wessberg, J, Stambaugh, CR, Kralik, JD, Beck, PD, Laubach, M, . . . Kim, J. (2000). Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408(6810), 361.
Pepitone, J. (Retrieved January 13, 2011). IBM's Jeopardy supercomputer beats humans in practice bout. CNNMoney. Available at: http://money.cnn.com/2011/01/13/technology/ibm_jeopardy_watson.
Phillips, PJ, and Beveridge, JR. (September 28-30, 2009). An introduction to biometric-completeness: The equivalence of matching and quality. Paper presented at the IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems (BTAS '09), Washington, DC.
Raymond, ES. (March 22, 1991). Jargon File Version 2.8.1. Available at: http://catb.org/esr/jargon/oldversions/jarg282.txt.
Salloum, W. (November 30, 2009). A Question Answering System based on Conceptual Graph Formalism. Paper presented at the 2nd International Symposium on Knowledge Acquisition and Modeling (KAM 2009), China.
Sandberg, A, and Boström, N. (2008). Whole Brain Emulation: A Roadmap. Future of Humanity Institute, Oxford University, Technical Report #2008-3. Available at: http://www.fhi.ox.ac.uk/Reports/2008-3.pdf.
Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3), 417-457.
Shahaf, D, and Amir, E. (March 26-28, 2007). Towards a theory of AI completeness. Paper presented at the 8th
Talwar, SK, Xu, S, Hawley, ES, Weiss, SA, Moxon, KA, and Chapin, JK. (2 May 2002). Behavioural neuroscience: Rat navigation guided by remote control. Nature, 417, 37-38.
Taylor, P, and Black, A. (1999). Speech synthesis by phonological structure matching. In Eurospeech99, Budapest, Hungary.
Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460.
Turing, AM. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230-265.
Vaas, L. (December 1, 2007). Striptease Used to Recruit Help in Cracking Sites. PC Magazine.
Vidal, J. (1973). Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering, 2, 157-180.
Wikipedia. (Retrieved January 7, 2011). AI-Complete. Available at: http://en.wikipedia.org/wiki/AI-complete.
Yampolskiy, RV. (April 13, 2007a). Embedded CAPTCHA for Online Poker. 20th Annual CSE Graduate Conference (Grad-Conf2007), Buffalo, NY.
Yampolskiy, RV. (September 28, 2007b). Graphical CAPTCHA embedded in cards. Western New York Image Processing Workshop (WNYIPW) – IEEE Signal Processing Society, Rochester, NY.
Yampolskiy, RV. (2011). AI-Complete CAPTCHAs as Zero Knowledge Proofs of Access to an Artificially Intelligent System.
ISRN Artificial Intelligence, 271878. International Symposium on Logical Formalizations of Yampolskiy, RV. (2012). Leakproofing Singularity - Commonsense Reasoning (Commonsense 2007), Artificial Intelligence Confinement Problem. Journal of California. Consciousness Studies (JCS), 19(1-2), 194–214. Shahaf, D, and Horvitz, E. (July 2010). Generalized Task Yampolskiy, RV. (October 3-4, 2011a). Artificial Markets for Human and Machine Computation. Paper Intelligence Safety Engineering: Why Machine Ethics is presented at the Twenty-Fourth AAAI Conference on a Wrong Approach. Paper presented at the Philosophy Artificial Intelligence, Atlanta, GA. and Theory of Artificial Intelligence (PT-AI2011), Shapiro, SC. (1992). Artificial Intelligence. In SC Shapiro Thessaloniki, Greece. (Ed.), Encyclopedia of Artificial Intelligence (pp. 54- Yampolskiy, RV. (October 3-4, 2011b). What to Do with 57). New York: John Wiley. the Singularity Paradox? Paper presented at the Shieber, SM. (December 2007). The Turing Test as Philosophy and Theory of Artificial Intelligence (PT- Interactive Proof. Nous, 41(4), 686-713. AI2011), Thessaloniki, Greece. Shieber, SM. (July 16-20, 2006). Does the Turing Test Yampolskiy, RV, and Fox, J. (2012a). Artificial demonstrate intelligence or not. Paper presented at the Intelligence and the Human Mental Model. In A Eden, Twenty-First National Conference on Artificial J Moor, J Soraker & E Steinhart (Eds.), In the Intelligence (AAAI-06), Boston, MA. Singularity Hypothesis: a Scientific and Philosophical Surowiecki, J. (2004). The Wisdom of Crowds: Why the Assessment: Springer. Many Are Smarter Than the Few and How Collective Yampolskiy, RV, and Fox, J. (2012b). Safety Engineering Wisdom Shapes Business, Economies, Societies and for Artificial General Intelligence. Topoi Special Issue Nations: Little, Brown. on Machine Ethics & the Ethics of Building Intelligent Takagi, H. (2001). Interactive Evolutionary Computation: Machines, (In Press). 
Fusion of the Capacities of EC Optimization and Yampolskiy, RV, and Govindaraju, V. (2007). Embedded Human Evaluation. Proceesings of the IEEE 89, 9, Non-Interactive Continuous Bot Detection. ACM 1275-1296. Computers in Entertainment, 5(4), 1-11.