What might cognition be, if not computation? by Van Gelder (1995)

http://people.bu.edu/pbokulic/class/vanGelder-reading.pdf

I’ve been feeling ill at ease with some hints of computationalism in the work of Sperber et al., so I’m reading up on non-computationalist ways to understand cognition. Although computationalist interpretations of Relevance Theory seem common, in “Relevance Theory” (2002) Sperber and Wilson state:

“As noted in Relevance (124-32), it therefore seems preferable to treat effort and effect as non-representational dimensions of mental processes: they exist and play a role in cognition whether or not they are mentally represented; and when they are mentally represented, it is in the form of intuitive comparative judgements rather than absolute numerical ones.”

This article is really interesting. Van Gelder argues in favour of a dynamical view, and describes connectionist systems as a subset of dynamical systems.

The dynamical conception of cognition likewise involves interdependent commitments at three distinct levels, but stands opposed to the computational conception in almost every respect. The core dynamical hypothesis – that the best models of any given cognitive process will specify sequences, not of configurations of symbol types, but rather of numerical states – goes hand in hand with a conception of cognitive systems not as devices that transform symbolic inputs into symbolic outputs but rather as complexes of continuous, simultaneous, and mutually determining change, for which the tools of dynamical modeling and dynamical systems theory are most appropriate. In this vision, the cognitive system is not just the encapsulated brain; rather, since the nervous system, body, and environment are all constantly changing and simultaneously influencing each other, the true cognitive system is a single unified system embracing all three. The cognitive system does not interact with the body and the external world by means of the occasional static symbolic inputs and outputs; rather, interaction between the inner and the outer is best thought of as a matter of coupling, such that both sets of processes continually influence each other’s direction of change. p. 373
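To make the idea of coupling a bit more concrete for myself, here is a toy sketch (mine, not Van Gelder’s, with arbitrary made-up coefficients): two continuous variables, “inner” and “outer”, where neither is an input to the other – each one’s rate of change depends on the current state of both, and we just integrate the pair forward with Euler steps.

```python
# Toy sketch of two coupled processes: neither is "input" to the other;
# each continually shapes the other's direction of change.
# (My illustration, not from the paper; coefficients are arbitrary but
# chosen so the coupled system is stable.)

def simulate(steps=1000, dt=0.01):
    inner, outer = 1.0, -1.0  # arbitrary starting states
    trajectory = []
    for _ in range(steps):
        d_inner = -0.5 * inner + 0.3 * outer  # inner's change depends on outer
        d_outer = -0.4 * outer + 0.2 * inner  # outer's change depends on inner
        inner += dt * d_inner
        outer += dt * d_outer
        trajectory.append((inner, outer))
    return trajectory

traj = simulate()
```

There is no point in the loop where one process hands the other a finished symbolic “output”; the interaction is just the continuous mutual shaping of two trajectories.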

If we attempt to describe languages with this kind of complexity by means of a grammar (a finite set of rules for combining a finite set of primitive elements into complex structures), we find they can only be compactly specified by grammars more powerful than so-called “regular” or “phrase-structure” grammars. If we then ask what kind of computational device is capable of following the rules of these grammars to recognize or produce such sentences, the answer is that they can only be implemented on machines more powerful than finite-state machines, such as push-down automata or linear-bounded automata. Therefore, human cognitive systems must be one of these more powerful computational systems.

A crucial question, then, is whether there is reason to believe that dynamical systems, with their numerical states and rules of evolution defined over them, are capable of exhibiting this order of complexity in behavior. The investigation of the “computational” power of dynamical systems, especially in the form of neural networks, is a relatively new topic, but there is already a sizable literature and results available indicate a positive answer. p. 377

A well-formed sequence is regarded as successfully recognized if the system ends up in a particular region after exposure to the whole sentence, while ending up in some other region for non-well-formed sequences. Pollack (among others) has found that there are networks that can recognize nonregular languages, and in fact can learn to have this ability, via a novel form of induction in language learning, involving bifurcations in system dynamics which occur as the weights in the network gradually change. p. 378
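In the spirit of Pollack-style “dynamical recognizers” – though far simpler than his trained networks, and entirely my own toy construction – here is a recognizer whose memory is a single continuous state variable rather than any symbolic structure. Each symbol nudges the state by a numerical evolution rule, and acceptance is a matter of where the state ends up. It recognizes the (nonregular) language of strings with equally many a’s and b’s, in any order:

```python
# A toy "dynamical recognizer": one continuous state variable, an
# evolution rule defined over numbers, and acceptance defined by where
# the state ends up. (My construction, not Pollack's actual networks.)
# It recognizes the nonregular language of strings containing equally
# many a's and b's, regardless of order.

def dynamical_recognize(s):
    x = 1.0
    for ch in s:
        if ch == 'a':
            x = x / 2.0  # each 'a' contracts the state
        elif ch == 'b':
            x = x * 2.0  # each 'b' expands it
        else:
            return False
    # "Accept region": the state has returned exactly to where it started.
    return x == 1.0

print(dynamical_recognize("aabb"))  # True
print(dynamical_recognize("aab"))   # False
```

(Halving and doubling are exact operations on floating-point numbers, so the equality test is safe here for any string of realistic length.) The counting that the pushdown automaton does with a stack is here implicit in the geometry of a single trajectory – which is roughly the moral of the quoted passage.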

The Cartesian tradition is mistaken in supposing that mind is an inner entity of any kind, whether mind-stuff, brain states, or whatever. Ontologically, mind is much more a matter of what we do within environmental and social possibilities and bounds. Twentieth-century anti-Cartesianism thus draws much of mind out, and in particular outside the skull. The second component is a reconceiving of our fundamental relationship to the world around us. In the Cartesian framework, the basic stance of mind toward the world is one of representing and thinking about it, with occasional, peripheral, causal interaction via perception and action. It has been known since Bishop Berkeley that this framework had fundamental epistemological problems. It has been a more recent achievement to show that escaping these epistemological problems means reconceiving the human agent as essentially embedded in, and skillfully coping with, a changing world; and that representing and thinking about the world is secondary to and dependent upon such embeddedness. The third component is an attack on the supposition that the kind of behaviors we exhibit (such that we are embedded in our world and can be said to have minds) could ever be causally explained utilizing only the generically Cartesian resources of representations, rules, procedures, algorithms, and so on. A fundamental Cartesian mistake is, as Ryle variously put it, to suppose that practice is accounted for by theory; that knowledge how is explained in terms of knowledge that; or that skill is a matter of thought. That is, not only is mind not to be found wholly inside the skull; cognition, the inner causal underpinning of mind, is not to be explained in terms of the basic entities of the Cartesian conception of mind. p. 380-381

Edit: I had a go at thinking through a dynamical relevance theory. I’ve always been a bit uncomfortable with the idea of searching for interpretations until some ‘good enough’ value of effect/effort (or effect minus effort) is reached and then stopping, but maybe that works better from a dynamical perspective: the stopping point becomes a tipping point in the dynamics rather than a represented threshold value. I still have difficulty grasping what things like cognitive effects might look like without metaphors like storage. Re-calibrating responses, perhaps.

One thing I really like is the talk of manifestness as opposed to any notion of knowledge or information. I should probably read some Noë.
