On The Scope of Mechanistic Explanation in Cognitive Sciences

Anna-Mari Rusanen (anna-mari.rusanen@helsinki.fi)
Department of Philosophy, History, Art and Culture Studies
PO Box 24, 00014 University of Helsinki, FINLAND

Otto Lappi (otto.lappi@helsinki.fi)
Institute of Behavioural Sciences
PO Box 9, 00014 University of Helsinki, FINLAND


Abstract

Computational explanations focus on the information processing tasks of specific cognitive capacities. In this paper, we argue that there are at least two different kinds of computational explanations: the interlevel and the intralevel ones. Moreover, it will be argued that neither interlevel nor intralevel computational explanations can be subsumed under the banner of standard mechanistic explanations. In the case of interlevel explanations, the problem is the direction of explanation; in the case of intralevel explanations, the problem lies in the dependencies that the explanations track. Finally, it is argued that in the context of explaining cognitive phenomena, it may be necessary to defend a more liberal and pluralistic view of explanation, which would allow that there are also some non-mechanistic forms of explanation.

Keywords: computational explanation; mechanistic explanation; computation; Marr

Introduction

Computational explanations focus on the information processing required in exhibiting specific cognitive capacities, such as perception, reasoning or decision making. At an abstract level, these computational tasks can be specified as mappings from one kind of information to another.

These explanations can increase our understanding of a cognitive process in at least three ways: (i) they can explain a certain cognitive phenomenon in terms of the fundamental rational or mathematical principles governing the information processing task faced by a system; (ii) they can explain by describing the formal dependencies between certain kinds of tasks and certain kinds of information processing requirements; and, in many computational accounts [1], it is often assumed that (iii) computational explanations can explain the phenomenon in terms of its implementation in more primitive constituent processes.

[1] For instance, Piccinini 2006a, 2006b; Kaplan 2011. See also Shagrir 2010 for discussion.

In recent years, a number of philosophers have proposed that computational explanations of cognitive phenomena could be seen as instances of mechanistic explanation (Piccinini 2004, 2006b; Sun 2008; Kaplan 2011; Piccinini & Craver 2011).

In what follows, we will argue that while fulfilling these epistemic needs is essential in computational explanation in the cognitive sciences, only the last of these modes of explanation conforms to the mechanists' conception of what genuine mechanistic explanation is.

Thus, we conclude that philosophers of cognitive science need either to embrace non-mechanistic computational explanations, or to extend the scope of what counts as "mechanistic" explanation in cognitive science.

Computational Explanations and Mechanistic Explanation

Within the last ten years, a growing number of philosophers have defended the view that computational explanations are mechanistic explanations (Piccinini 2004; Kaplan 2011; Piccinini & Craver 2011). For example, according to Piccinini (2004, 2006a, 2006b), computing mechanisms can be analyzed in terms of their component parts, their functions, and their organization. For Piccinini, a computational explanation is then "a mechanistic explanation that characterizes the inputs, outputs, and sometimes internal states of a mechanism as strings of symbols, and it provides a rule, defined over the inputs (and possibly the internal states), for generating the outputs" (Piccinini 2006b).

According to this mechanistic account, the goal of computational explanation is to characterize the functions that are being computed (the what) and to specify the algorithms by which the system computes those functions (the how). In other words, the idea is that an information processing phenomenon is explained by giving a sufficiently accurate model of how hierarchical causal systems, composed of component parts and their properties, sustain or produce the phenomenon [2].

[2] Constructing an explanatory mechanistic model thus involves mapping elements of a mechanistic model to the system of interest, so that the elements of the model correspond to identifiable constituent parts with the appropriate organization and causal powers to sustain that organization. These explanatory models should specify the initial and termination conditions for the mechanism, how it behaves under various kinds of interventions, how it is integrated with its environment, and so on.
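To make the what/how distinction concrete, consider a minimal illustration (a toy example of our own, not drawn from Piccinini): a mechanism that adds binary numerals. At the level of the what, it is characterized by the function it computes,

\[
f \colon \{0,1\}^{*} \times \{0,1\}^{*} \to \{0,1\}^{*}, \qquad f(x,y) = \mathrm{bin}\big(\mathrm{num}(x) + \mathrm{num}(y)\big),
\]

where \(\mathrm{num}\) maps a binary string to the number it denotes and \(\mathrm{bin}\) is its inverse. At the level of the how, it is characterized by a rule defined over the input digits and an internal carry state, e.g. ripple-carry addition:

\[
z_i = x_i \oplus y_i \oplus c_i, \qquad c_{i+1} = (x_i \wedge y_i) \vee \big(c_i \wedge (x_i \oplus y_i)\big), \qquad c_0 = 0.
\]
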



These kinds of mechanistic "computational" explanations track causal dependencies at the level of cognitive performances. They correspond to the explanations which David Marr (1982) called "algorithmic" explanations. However, as we have argued earlier (Rusanen & Lappi 2007; Lappi & Rusanen 2011), it is not obvious whether this mechanistic account can be extended to cover computational explanations in Marr's sense [3].

[3] Although Marr's notion of computational explanation is sometimes thought to be "outdated" and "old-fashioned", it still plays an important role in the cognitive sciences and cognitive neurosciences. For example, there is interesting work being done in theoretical neuroscience and cognitive modeling within this framework in the domains of vision, language, and the probabilistic approach to cognition (for overviews, see Anderson 1991; Chater 1996; Chater et al. 2006).

In Marr's trichotomy, computational explanations specify what the information processing tasks are, what is computed, and why. Computational explanations give an account of the tasks that the neurocognitive system performs, or of the problems that the cognitive system in question is thought to have the capacity to solve, as well as of the information requirements of those tasks (Marr 1982).

This level of explanation is also the level at which the appropriateness and adequacy (for the task) of mappings from one kind of representation to another are assessed (cf. Marr 1982). For example, in the case of human vision, one such task might be to faithfully construct 3D descriptions of the environment from two 2D projections. The task is specified by giving the abstract set of rules that tells us what the system does and when it performs a computation. This abstract computational theory characterizes the tasks as mappings: functions from one kind of information to another. It constitutes, in other words, a theory of competence for a specific cognitive capacity: vision, language, decision making, etc.
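
Schematically (our notation, as a gloss on Marr's scheme), such a competence-level theory specifies a mapping

\[
f_{\mathrm{task}} \colon \mathcal{R}_{\mathrm{in}} \to \mathcal{R}_{\mathrm{out}}, \qquad \text{for instance} \quad f_{\mathrm{stereo}} \colon (I_L, I_R) \mapsto D,
\]

where \(I_L\) and \(I_R\) are the two 2D retinal projections and \(D\) is a 3D description of the environment, together with an account of why computing \(f_{\mathrm{task}}\) constitutes a solution to the task.
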
                                                                            functions that is being computed. There are also some
         The Interlevel and The Intralevel                                  pluralistic views; for instance Shagrir (2010) defends the
           Computational Explanations                                       view that there are actually two different types of formal
                                                                            dependencies; the “inner” and the “outer” ones. According
It is important to distinguish two different types of
                                                                            to Shagrir (2010) the inner formal dependencies are formal
computational explanations. Firstly, there are interlevel
                                                                            relations between inputs and outputs, and the outer formal
computational explanations, which explain by describing,
                                                                            dependencies are mathematical relations between “what is
how the possible behavior or processes of a system is
                                                                            being represented by the inputs and outputs”. These formal
governed by certain information processing principles,
                                                                            dependencies are abstracted from representational contents,
rather than explain how certain algorithms compute certain
                                                                            which correspond for example certain features of physical
functions. These computational explanations display the
                                                                            environment.
function that the mechanism computes and they explain and
                                                                              So, there are at least two different kinds of computational
why this function is appropriate for a given cognitive task.
                                                                            explanations; the interlevel and the intralevel ones. In the
  Some of our critics, such as Milkowski, have claimed that
                                                                            following sections, we will argue that neither interlevel nor
we see these interlevel computational explanations as
                                                                            intralevel computational explanations can be subsumed
“systemic explanations that show how a cognitive system
                                                                            under the banner of standard mechanistic explanations. In
can have some capacities” (Milkowski 2013, p. 107).
                                                                            the case of interlevel explanations, the problem is the
However, we do not defend such a position. We do not
                                                                            direction of explanation (Rusanen & Lappi 2007), and in the
claim that computational explanations explain how a
                                                                            case of intralevel explanations, the problem are the
cognitive system can have some capacities. Instead, what
                                                                            dependencies that the explanations track (Rusanen 2014).
we claim is that interlevel computational explanations
                                                                             Inter-level Computational Explanations: The
  3
      Although Marr´s notion of computational explanation is                              Problem of Direction
sometimes thought to be “outdated” and “oldfashioned”, it still             In a nutshell, the problem for standard mechanistic accounts
plays an important role in cognitive and cognitive neurosciences.           of interlevel explanations goes as follows: In standard
For example, there is interesting work being done in theoretical            accounts (constitutive) mechanistic explanations are
neuroscience and cognitive modeling within this framework in the            characterized in such a way that in inter-level computational
domains of vision, language, and the probabilistic approach to cog-
                                                                            explanations, the explanans is at a lower level than the
nition (for overviews, see Anderson 1991; Chater 1996; Chater et
al. 2006).                                                                  explanandum. For example Craver (2001, p. 70, emphasis



For example, Craver (2001, p. 70, emphasis added) notes that "(Constitutive) explanations are inward and downward looking, looking within the boundaries of X to determine the lower level mechanisms by which it can Φ. The explanandum… is the Φ-ing of an X, and the explanans is a description of the organized σ-ing (activities) of Ps (still lower level mechanisms)."

In those explanations, phenomena at a higher level of hierarchical mechanistic organization are explained by their lower-level constitutive causal mechanisms, but not vice versa (Craver 2001, 2006; Machamer et al. 2000). For example, under this interpretation a cognitive capacity would be explained by describing implementing mechanisms at the algorithmic or implementation level. But in inter-level computational explanations, the competence explains the performance: the explanans is at the level of cognitive competences, and the explanandum is at the level of performances. In other words, these inter-level computational explanations proceed top-down, while constitutive mechanistic explanations are typically characterized in such a way that they seem always to be bottom-up explanations. Thus, computational explanations are not constitutive mechanistic explanations in the standard sense.
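
The contrast in the direction of explanation can be summarized schematically (a compact restatement of the point above, in our own notation):

\[
\text{constitutive mechanistic:} \quad \underbrace{\text{lower-level mechanisms}}_{\text{explanans}} \;\Longrightarrow\; \underbrace{\text{higher-level phenomenon}}_{\text{explanandum}}
\]
\[
\text{interlevel computational:} \quad \underbrace{\text{competence-level principles}}_{\text{explanans}} \;\Longrightarrow\; \underbrace{\text{performance-level organization}}_{\text{explanandum}}
\]
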
One might argue that this analysis ignores the possibility that computational explanations are contextual rather than constitutive mechanistic explanations. In mechanistic terminology, contextual explanations explain how the "higher-level" mechanism constrains what a lower-level mechanism does; one computational mechanism can be a component of a larger computational system, with the latter serving as the contextual level for the former. For example, Bechtel seems to accept this position when he remarks that "since (marrian) computational explanations address what mechanisms are doing they focus on mechanisms 'in context'" (Bechtel 2008, p. 26).

Now, if computational explanations were contextual explanations, then our argument would fail. Namely, if computational-level explanations were contextual explanations, and if contextual explanation is a subspecies of standard mechanistic explanation, then computational-level explanations would be a subspecies of mechanistic explanations.

However, it is possible to argue that computational explanations are not contextual explanations in the standard mechanistic sense. For instance, Craver characterizes contextual explanations as explanations which "refer to components outside of X" and are "upward looking because they contextualize X within a higher level mechanism". On this view, a description of how a cognitive system "behaves" in its environment, or of how the organization of a system constrains the behavior of its components, requires a spatiotemporal interpretation of the mechanisms. But, as we argued in 2011, computational explanations do not necessarily refer to spatiotemporally implemented higher-level mechanisms, and they do not involve spatiotemporally implemented components "outside of (spatiotemporally implemented) X". Instead, they refer to abstract "mechanisms", which are not causally or spatiotemporally implemented.

In other words, the problem is that in standard mechanistic accounts of contextual explanations, the "contexts" are expressed in causal and spatiotemporal terms, not in terms of information processing at the level of computational competences. Crucially, this kind of view conceives of contextual explanations as a kind of systemic explanation, in which the uppermost level of the larger mechanism still remains non-computational in character.

For this reason, computational explanations are not these "systemic" contextual explanations. In contrast, we claim, computational explanations involve abstract mechanisms which govern the behavior of the mechanisms at the lower levels not causally but logically.

Intra-level Computational Explanations: The Problem of Dependencies

Now, let us move to the intralevel computational explanations. Why can they not be seen as standard mechanistic explanations? The answer is that they simply track different kinds of dependencies. While algorithmic and implementation level explanations track causal or constitutive dependencies at the level of cognitive or neural performances, intra-level computational explanations track formal dependencies between certain kinds of information processing tasks at the level of cognitive competences.

Because of this, these different modes of explanation are not necessarily logically dependent on each other. Thus computational explanations at the highest level may be formulated independently of assumptions about the algorithmic or neural mechanisms which perform the computation.

Some of our critics, such as Kaplan (2011) and Piccinini (2009), remark that our position can be seen as a typical example of "computational chauvinism", according to which computational explanations of human cognitive capacities can be constructed and confirmed independently of the details of their implementation in the brain.

Indeed, we defend the view that computational explanations can in principle - if not in practice - be constructed largely autonomously with respect to the algorithmic or implementation levels below. That is: computational problems of the highest level may be formulated independently of assumptions about the algorithmic or neural mechanisms which perform the computation (Marr 1982; see also Shapiro 1997; Shagrir 2001). Because performance- and competence-level computational explanations track different kinds of dependencies, these different modes of explanation are not necessarily logically dependent on each other. Hence, if this is computational chauvinism, then we are computational chauvinists.
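
The independence claim can be put in one line (a schematic gloss, in our own notation). A competence-level theory fixes which function is computed; a performance-level theory fixes how:

\[
\text{competence level:}\; f \colon I \to O, \qquad \text{performance level:}\; \text{an algorithm } A \text{ with } \llbracket A \rrbracket = f.
\]

Since in general many distinct algorithms \(A_1, A_2, \dots\) satisfy \(\llbracket A_i \rrbracket = f\), a specification of \(f\) carries no commitment to any particular \(A_i\).
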
However, Kaplan (2011) claims that while we highlight the independence of computational explanations, we forget something important that Marr himself emphasized.



Namely, Kaplan remarks that even if Marr emphasized that the same computation might be performed by any number of algorithms and implemented in any number of diverse hardwares, Marr's position changes when he "addresses the key explanatory question of whether a given computational model or algorithmic description is appropriate for the specific target system under investigation" (Kaplan 2011, p. 343).

Is this really an argument against our position? As Kaplan himself remarks, Marr rejects "the idea that any computationally adequate algorithm (i.e., one that produces the same input-output transformation or computes the same function) is equally good as an explanation of how the computation is performed in that particular system" (Kaplan 2011, p. 343).

But then we are not talking about competence-level explanations anymore. When the issue is how the computation is performed in the particular system, such as in human brains, then the explanation is given in terms of algorithmic or neural processes, or mechanisms, if you will. Then, naturally, the crucial issue is what kinds of algorithms are possible for a certain kind of system, or whether the system has structural components that can sustain the information processing that the computational model posits at the neural level. If one aims to explain how our brains are able to perform some computations, then – of course – one should take the actual neural implementation and the constraints of the possible neurocognitive architecture into account as well.

But given this, these kinds of explanations are explanations at the algorithmic or performance level, not at the computational or competence level. Because of this, we also find the position defended by Piccinini and Craver (2011) problematic. Piccinini and Craver (ibid.) argue that insofar as computational explanations do not describe how the computational system "actually works", i.e. do not describe "how the information is encoded and manipulated" in the implementing system, they are mere how-possibly explanations. In our understanding, this depends on the explanatory question. If, for example, the aim is to explain how a certain kind of information processing task is actually solved in human brains, and the explanation does not describe how this actually happens, then it is a how-possibly explanation. But it is a how-possibly explanation at the performance level, not at the competence level.

For this reason, the remark that computational explanations do not describe how the computational system "actually works" is not an argument against the logical independence of computational level explanations.

The Explanatory Status of Computational Explanations

A more problematic issue is to what extent computational explanations are explanatory at all. Although Milkowski may partially misinterpret our position, he still raises an important question concerning the explanatory character of computational explanations (Milkowski 2012, 2013).

If computational explanations are characterized as explanations which answer questions such as "What is the goal of this computation?", it may be claimed that we fail to make a distinction between task analysis and genuine explanation.

A task analysis breaks a capacity of a system into a set of sub-capacities and specifies how the sub-capacities are (or may be) organized to yield the capacity to be explained. Obviously, if computational explanations are mere descriptions of computational tasks, then they are not explanations at all.

However, computational explanations are clearly more than mere descriptions of computational tasks, because they describe formal dependencies between certain kinds of tasks and certain kinds of information processing requirements. If these formal dependencies are such that descriptions of them offer the ability to say not only how the computational layout of the system actually is, but also how it would be under a variety of circumstances or interventions, then they can be counted as explanatory [4].

[4] This is a non-causal modification of Woodward's manipulationist account of explanation (Woodward 2003). For a similar treatment of Woodward, see Weiskopf 2011.

In other words, if these descriptions answer questions such as "Why does this kind of task create this kind of constraint rather than that kind of constraint?" by tracking formal dependencies which can explain what makes the difference, then these descriptions can be explanatory.

Obviously, computational explanations of this sort are not causal explanations. However, in the context of explaining cognitive phenomena, it may be necessary to defend a more liberal and pluralistic view of explanation, which would allow that there are also some non-causal forms of explanation.

We agree with the mechanists that when we are explaining how cognitive processing actually happens, for example in human brains, it is a matter of causal explanation to tell how the neuronal structures sustain or produce the information processing in question. However, we still defend the view that there are other modes of explanation in the cognitive sciences as well.

Discussion: The Scope of Mechanistic Explanation

Some explanations of cognitive phenomena can be subsumed under the banner of "mechanistic explanation". Typically those explanations are neurocognitive explanations of how certain neurocognitive mechanisms produce or sustain certain cognitive phenomena, but some psychological explanations can also be seen as instances of mechanistic explanation. Moreover, if a more liberal interpretation of the term "mechanism" is allowed, then some computational or competence-level explanations may also qualify as mechanistic explanations (Rusanen & Lappi 2007; Lappi & Rusanen 2011).



Nevertheless, we think that there are compelling reasons to doubt whether mechanistic explanation can be extended to cover all cognitive explanations. There are several reasons for this plea for explanatory pluralism. Firstly, it is not clear whether all cognitive systems or cognitive phenomena can be captured mechanistically. Mechanistic explanation requires that the system can be decomposed, i.e. analyzed into a set of possible component operations that would be sufficient to produce or sustain the phenomenon in question (Bechtel & Richardson 1993). Typically, a mechanism built in such a manner will work in a sequential order, so that the contributions of each component can be examined separately (Bechtel & Richardson 1993).

However, in the cognitive sciences there are examples of systems – such as certain neural nets – which are not organized in such a manner. As Bechtel and colleagues remark, the behavior of these kinds of systems cannot be explained by decomposing the systems into subsystems, because the parts of the networks do not individually perform any activities that could be characterized in terms of what the whole network does (Bechtel & Richardson 1993; Bechtel 2011, 2012). Hence, it is an open question to what extent the behavior of these kinds of systems can be explained mechanistically. At the very least, it will require adopting a framework of mechanistic explanation different from the one that assumes sequential operation of decomposable parts (Bechtel 2011, 2012; Bechtel & Abrahamsen 2011).

Secondly, Von Eckardt and Poland (2004) raise the question to what extent the mechanistic account is appropriate for those explanations which involve appeal to mental representations or to the normative features of certain psychopathological phenomena. Although we find Von Eckardt and Poland's argumentation slightly misguided, we still think that it is important to consider the normative aspects of cognitive phenomena. Cognitive systems are, after all, adaptive systems which have a tendency to seek "optimal", "rational" or "best possible" solutions to the information processing problems that they face. Because of this, cognitive processes are not only goal-directed, but also normative. It is not clear how well this normative aspect of cognitive systems can be captured by mechanistic explanations.

Thirdly, some philosophers have drawn attention to the fact that there are examples of explanatory computational models in the cognitive sciences which focus on the flow of information through a system rather than on the mechanisms that underlie the information processing (Shagrir 2006, 2010). Along similar lines, Weiskopf (2011) argues that there is a set of "functional" models of psychological capacities which are both explanatory and non-mechanistic.

Finally, in recent years cognitive scientists have raised the possibility that there are some universal, law-like principles of cognition, such as the "principle of simplicity", the "universal law of generalization" or the principle of scale-invariance (Chater et al. 2006; Chater & Vitanyi 2003). Chater and colleagues (ibid.) argue that it is possible to explain many cognitive phenomena, such as certain forms of linguistic patterns or certain types of inductive generalizations, by combining these principles.

These explanations are "principle based" rather than mechanistic explanations. Moreover, Chater and colleagues seem to suggest that the mechanistic models of these phenomena may actually be derived from these general principles, and that explanations that appeal to these general principles provide "deeper" explanations than the mechanistic ones (Chater & Brown 2008). It is possible that many of the so-called computational level explanations will turn out to be instances of these principle-based explanations rather than instances of mechanistic explanations.

In sum, taken together these diverse claims seem to imply that there is not a single, unified mode of explanation in the cognitive sciences. Instead, they suggest that the cognitive sciences are examples of those sciences which utilize several different modes of explanation, only some of which can be subsumed under the mechanistic account of explanation.

Obviously, mechanistic explanation is a powerful framework for explaining the behavior of complex systems, and it has demonstrated its usefulness in many scientific domains. Many successful theories and explanations in the cognitive sciences are also due to this mechanistic approach. However, this does not imply that it is the only way to explain complex cognitive phenomena.

Concluding Remarks

In this paper, we have argued that there are at least two different kinds of computational explanations: the interlevel and the intralevel ones. Moreover, we have argued that neither interlevel nor intralevel computational explanations can be subsumed under the banner of standard mechanistic explanations. In the case of interlevel explanations, the problem is the direction of explanation, and in the case of intralevel explanations, the problem lies in the dependencies that the explanations track.

Obviously, computational explanations of this sort are not causal explanations. However, in the context of explaining cognitive phenomena, it may be necessary to defend a more liberal and pluralistic view of explanation, which would allow that there are also some non-causal forms of explanation.

References

Anderson, J. 1991. Is Human Cognition Adaptive? Behavioral and Brain Sciences, 14: 471-485.
Bechtel, W. 2008. Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. London: Routledge.
Chater, N. 1996. Reconciling Simplicity and Likelihood Principles in Perceptual Organization. Psychological Review, 103(3): 566-581.



Chater, N., & Vitanyi, P. M. B. 2003. The Generalized Universal Law of Generalization. Journal of Mathematical Psychology, 47: 346-369.
Chater, N., Tenenbaum, J. B., & Yuille, A. 2006. Probabilistic Models of Cognition: Conceptual Foundations. Trends in Cognitive Sciences, 10: 287-291.
Chater, N., & Brown, G. 2008. From Universal Laws of Cognition to Specific Cognitive Models. Cognitive Science, 32: 36-67.
Craver, C. F. 2001. Role Functions, Mechanisms and Hierarchy. Philosophy of Science, 68: 53-74.
Craver, C. F. 2006. When Mechanistic Models Explain. Synthese, 153: 355-376.
Kaplan, D. 2011. Explanation and Description in Computational Neuroscience. Synthese, 183(3): 339-373.
Lappi, O., & Rusanen, A.-M. 2011. Turing Machines and Causal Mechanisms in Cognitive Sciences. In P. McKay Illari, F. Russo, & J. Williamson (Eds.), Causality in the Sciences. Oxford: Oxford University Press, 224-239.
Machamer, P. K., Darden, L., & Craver, C. F. 2000. Thinking About Mechanisms. Philosophy of Science, 67: 1-25.
Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman.
Milkowski, M. 2012. Limits of Computational Explanation of Cognition. In V. Müller (Ed.), Philosophy and Theory of Artificial Intelligence. Springer.
Milkowski, M. 2013. Explaining the Computational Mind. Cambridge, MA: MIT Press.
Newell, A., & Simon, H. A. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
Piccinini, G. 2004. Functionalism, Computationalism and Mental Contents. Canadian Journal of Philosophy, 34: 375-410.
Piccinini, G. 2006a. Computational Explanation and Mechanistic Explanation of Mind. In M. DeCaro, F. Ferretti, & M. Marraffa (Eds.), Cartographies of the Mind: The Interface Between Philosophy and Cognitive Science. Dordrecht: Kluwer.
Piccinini, G. 2006b. Computational Explanation in Neuroscience. Synthese, 153: 343-353.
Piccinini, G., & Craver, C. 2011. Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches. Synthese, 183(3): 283-311.
Piccinini, G. 2011. Computationalism. In E. Margolis, R. Samuels, & S. Stich (Eds.), Oxford Handbook of Philosophy of Cognitive Science. Oxford: Oxford University Press, 222-249.
Rusanen, A.-M., & Lappi, O. 2007. The Limits of Mechanistic Explanation in Neurocognitive Sciences. In S. Vosniadou, D. Kayser, & A. Protopapas (Eds.), Proceedings of the European Cognitive Science Conference 2007. Hove: Lawrence Erlbaum Associates, 284-289.
Shagrir, O. 2010. Brains as Analog-Model Computers. Studies in History and Philosophy of Science, 41(3): 271-279.
Shapiro, L. 1997. A Clearer Vision. Philosophy of Science, 64: 131-153.
Weiskopf, D. 2011. Models and Mechanisms in Psychological Explanation. Synthese, 183: 313-338.
Woodward, J. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.



