A blog on advanced topics in mind, language and embodied cognition

Month: February, 2012

Objective Perception

by daaavve

Hello folks,

Next week we’ll start getting into the meatier parts of ‘Having Thought’ when we read chapter 10: Objective Perception, with Jules and Joey in the presenters’ chair(s). Post ruminations, reflections and reactions to the reading to the comment thread here!

THE INNER AND THE OUTER. The (promised) APPENDIX to my Embodied and Embedded Cognition presentation (just loosely and roughly inspired by Haugeland)

by alfredoms

It seems to me that this is a fundamental distinction in the way humans understand themselves (and to this extent anthropologically relevant). It is obviously related to issues about boundaries. I would point out two main boundaries here. Firstly, I think that the skin is a significant boundary, although not absolute, and it is a significant (wide-bandwidth) interface, even assuming that the mind is not limited by the skin. In many senses we start or we end with the skin.

[Of course I am not denying that there are other senses, extensively explored in our topic. On the other hand, how would the Inner/Outer structure be altered in cyborgs and AI? And what about a brain in a vat?]

But the main boundary marking the distinction between the inner and the outer is not just physical; in fact, I am reading the skin also, and especially, as phenomenologically significant: in some way an “incorporeal interface” (Haugeland) can match, in this case, the corporeal one. A phenomenologist perhaps would talk about the object-skin and the subject-skin.

In my opinion, literature (the history of literature in general) might be considered both as a source of anthropological material and as the largest piece of research ever done on folk psychology (and, on the other hand, a good reason to reject eliminativism). I want to illustrate my view through an example from a novel, The Metamorphosis by Franz Kafka. The story starts when one day, after waking, the main character, Gregor Samsa, realizes that his body has dramatically transformed into a sort of grotesque arthropod. The point is that the narrative force of the story, and even what makes the story meaningful, is the distinction between the inner and the outer: that is, the very idea that, somehow, inside the hideous body Gregor still remains. Thanks to the narrative technique we can witness both aspects of his life, the internal and the external.

Some people, Daniel Dennett for instance, maintain that the self is a fiction; likewise, it might be claimed that the distinction between the inner and the outer is a kind of fiction. It could be true. However, in the story about Gregor Samsa, I suggest, we can find two different sorts of fiction: one is Gregor himself, who has never existed, whose metamorphosis has never happened; the other is the distinction between the inner and the outer, something that happens every day to (or in) many people.

Everybody lives, somehow, a double life; not as dramatically as the perfect family man by day who becomes a killer by night, but as a normal person, both in moral and in social terms. Everybody has, in this sense and to different degrees, a double life: an inner life and an outer life.


If I had to guess the continuation of Gregor’s story (trying to keep the original drive) I would say that Gregor ends up vanishing within the alien body.

Some things of possible interest

by Mog Stapleton

Hello folks, I just want to give you a heads up about a couple of things I have come across recently.

First of all, there is currently an online consciousness conference going on. What this means is that they put up video presentations and pdfs and invite speakers and the general public to have discussions in the comments section. There are some great speakers and good talks and comments, and I really recommend checking it out and contributing to the discussion – it’s a great non-scary way to get to interact with some excellent philosophers. It’s only running for a few more days (till March 2nd):



Secondly, there is a nice 40-minute online interview on consciousness with Barry C Smith, Anil Seth, and Chris Frith that is worth having a listen to:



And finally, an autobiographical essay by Daniel Dennett in Philosophy Now which might be of interest:



Cheers, and if you come across anything else that might be of interest please do share!


Mind Embodied and Embedded post-presentation discussion

by Federal Shepard

First off I have to apologise for taking so long to put this up; it’s been a hectic week and my Google Mail account and WordPress didn’t seem to be getting on at first. Hopefully all this shouldn’t be taxing people too much this week and some interesting discussion can be generated.


In lieu of putting up my presentation (as honestly I don’t think it was that good) I thought I’d throw up the two issues that came up in discussion and have been rattling around my head since. Namely:

1) Is there a decent account of how intelligence could function in novel problem-solving situations (by ‘novel’ I mean situations which are new and original to the intelligent system, such that they cannot, at least easily, be accounted for by adaptations selected for by the environment it evolved in) without any representations whatsoever, but merely (for example) as a system coupled with its environment?

Or is such an account of ‘novel’ situations even coherent at all? I remember some dissent over this. To which the following (completely ridiculous) thought experiment arises. Imagine you (being a paradigm intelligent system, one would hope) suddenly find yourself transported (due to some quantum sci-fi nonsense perhaps) to a world that is completely outside of the earth’s light cone, placing you in its absolute elsewhere: there is absolutely no way any information could travel from earth to there or vice versa, as each will be outside the other’s light cone (for an explanation, as I’m lazy, see http://www.phy.syr.edu/courses/modules/LIGHTCONE/minkowski.html). Thus, presumably, without the ability to transfer information there would be no ability to transfer meaning. Also presume that the situation you’re now placed in is (other than the laws of physics) completely novel. Would you then be able to interact with this environment at all? Would you be able to develop concepts or inject meaning into it? Or would you just shut down, unable to comprehend what was going on?

2) I can’t remember the point Harry (I think, apologies if I’m wrong) made exactly, but insofar as I remember, the point was raised that there could be no intelligent system created in complete isolation from any meaning. The issue that has arisen out of this, and that has worried me, is that if this is the case then:

How did meaning get into the world in the first place? For if intelligence resides within meaning, did meaning have to be there first? If intelligence somehow created meaning (intuitively this has more of a pull for me), then what did that intelligence reside in? In short, how can we square the general account Haugeland has of intelligence with natural history as we best understand it, that is, stellar evolution, abiogenesis, micro-organisms and on up?

Of course I could be completely getting the wrong end of several sticks here, if so please feel free to rip this all apart.


Representational Genera

by daaavve

Hello folks,

No class next week, but on 28th February we’ll reconvene, rejuvenated, and Ashleigh and I will be talking about Representational Genera. Consider this an open thread to share any thoughts, comments or questions that arise whilst you’re reading through the paper!

There’ll be no additional reading for my presentation, but I’ll be trying to say some things about the open questions for J-Haug that have come up on the course so far. So have a think about what, in your opinion, are the most fiercely burning issues that have cropped up as part of the course thus far, and bring your thoughts along to the class.

And don’t forget to chip in on the comments to previous posts where relevant – I think some really good questions have been raised, and it’d be good to know what everyone thinks about them.

Intentionality. Discussion

by thebestialfloor

First, sorry for the delay in posting this. I will post here some of my thoughts about which of the three positions makes more sense, as we didn’t have enough time to discuss this during class.
I situate myself within second base, regard third base as true but as focusing on a rather different notion of intentionality than the one we normally use, and first base as unable to naturalize intentionality. I am going to be very partial about the issue to encourage discussion. So please post your comments to keep the dialectic rolling!

A good deal of Haugeland’s insight in this paper concerns the symbol grounding problem. First base has no answer “ready to hand”; second base treats symbolic intentional languages as instrumental (ontologically dependent on how and for what purpose we use those symbols); and third base grounds symbols in social practices. Thus, as far as first base is concerned, symbols are ungrounded and require interpretation, while in second base they require attribution, as they are grounded in their use value.

A good answer to the symbol grounding problem is the teleological functionalism that W. Lycan defends in the paper Dave Statham posted (The Continuity of Levels of Nature). There he defends the position that nature is genuinely divided into a hierarchy of functionally characterized levels, each defined functionally and maintaining coherence through internal laws. Each level, also, supervenes on the level below it. Typical examples are biological taxonomies like cells, organs, organisms, species, etc. In this theory the role/occupant functional distinction is relative to the levels of nature.

This teleological functionalism (which I think fits the second base position) has several advantages regarding intentionality. Most importantly, it sees intentionality as a personal (organism) level mechanism, and hence intentionality-talk goes off the rails if we cast it onto lower or higher levels of nature. First base is then mistaken because it casts organism-level language onto “cognition” as a whole. Dennett has put forward this argument (A Cure for the Common Code), saying that asking a calculator is not a good way of knowing how the calculator “really” computes. And third base is then talking mostly about social, species-level intentionality, whereas when we use intentional vocabulary we refer mostly to personal-level intentionality (beliefs and desires that are ascribed mostly to individuals). Third base is thus right in pointing out that there are social meanings, but that’s arguably another level. And first base casts the personal level onto almost everything.

Mind Embodied and Embedded – Open thread

by daaavve

Hello folks! This is an open thread for people to share any thoughts, resources, or anything else that crops up during our reading of Mind Embodied and Embedded in advance of next week’s class. Sorry I forgot to do one of these last week!

The paper is a classic, and should provide plenty of food for thought. Share yours in the comments!


Intentionality All Stars

by Mog Stapleton

So this week’s reading is quite dense and takes a while to read, so I’m not going to add an extra paper (though see below for a suggested reading to skim through). In class I will take 15 minutes or so to go over the paper, and then I want to discuss a couple of issues that kept jumping out at me, which I’d like us all to discuss:

(1) Is intentionality the same as mindedness/mentality? (i.e. is having original intentionality necessary and sufficient for being a minded creature?)

(2) What is the relation between intentionality and consciousness? Does either imply the other?

Other things to think about are where you think more current philosophers of mind and cog sci (e.g. Andy Clark) fit on Haugeland’s baseball pitch, and what their positions on the above questions are (e.g. is it different approaches to the above questions that motivate the debate between Andy Clark and critics of the extended mind such as Adams & Aizawa?).

To start to think about these topics it may help to read sections 9-11 of the Stanford Encyclopaedia entry on Intentionality by Pierre Jacob (9. Can intentionality be naturalised?; 10. Is intentionality exhibited by all mental phenomena?; 11. Externalism and the explanatory role of intentionality). It is nice and easy to read and gives a really helpful overview of the various positions on the above questions: http://www.seop.leeds.ac.uk/entries/intentionality/#9

I want at least half an hour of my time to be discussion based around these questions so please come with some ideas about what you think, and note down anything you found confusing or problematic.

Cheers, see you on Tuesday!

by dstatham

The main issues raised in the regrettably brief discussion session after I’d shut up last week concerned, I think, the levels of explanation involved in cognitivism.

As Haugeland makes clear, reduction in the sciences doesn’t supplant the level of explanation being reduced – there are plenty of cases from the more established sciences where explanations are still valid at a certain level, even if the regularities at this level can be explained in terms of (or mathematically derived from) further sets of regularities and laws.

Lycan [http://mugwump.pitzer.edu/~bkeeley/class/FoNS/Lycan.PDF – sorry, bit of a rubbish copy…] provides some nice further clarification of this idea. One thing he is very clear on is the fact that distinctions between function and structure (e.g. between a cognitive process and the (sub)components realising it) are relative to the level of explanation – nature is continuous, and whether we see something as an IBB composed of smaller functional subcomponents, or as one of the functional components comprising a larger IBB, depends on the level of explanation.

But this isn’t just an epistemological convenience. For Lycan, I think, nature is genuinely organised hierarchically, with the smaller simple bits making up ever larger, more complex bits – all the way up. And this bottom-up ‘aggregative ontology’ is best approached with the kind of top-down explanatory strategy that I was trying to clarify last week (in terms of Haugeland’s IPSs being decomposed into component IBBs and so on).

Thus, biological organisms are composed of atoms collected into molecules collected into macromolecules, organelles, cells, organs…and so on. And we understand these organisms at any given level by appeal to the components which realise the function we are interested in. So, an explanation of the heart proceeds by describing its blood-pumping function in terms of the muscular and neural cells composing the heart, which cells can in turn be decomposed further into their functional components, and so on… And at the same time, we can always go the other way and move up the hierarchy, explaining the heart’s place in the wider circulatory system and eventually the organism as a whole.

What was novel and exciting about cognitivism and homuncular functionalism (if I understand correctly) is that it promised a way of bringing meaning and intentionality within this kind of aggregative ontology. And thus the mind, like everything else in nature, could be understood by top-down decomposition of highly intentional and meaningful cognitive states and processes into gradually simpler, less intentional components, all the way down through neurobiology and eventually to the level of fundamental physics.

Anyway, this is a gross simplification on my part and it’s definitely worth reading the Lycan chapter (or various other papers in which he defends this kind of functional explanation) to get a better understanding of these ‘levels of nature’. Also if anyone has or knows of some other related reading, it’d be interesting to get some further clarification, and perhaps compare with Haugeland’s characterisation of cognitivism.

With regard to causation and how it fits into this kind of picture, I’m still pretty clueless. I know that amongst those interested in complex systems and dynamical systems theory as a way of explaining the mind, the idea of ‘circular’ or ‘reciprocal’ causality is pretty popular. The idea being that as well as upwards causation, from smaller subcomponents to higher-level global behaviour, there is also such a thing as downward causation, whereby behaviour which only emerges at this higher global level causally influences the behaviour of lower level components. This might be a way into understanding how cognitive states can be explained reductively but still retain their causal role (as well as just their explanatory role).

But as I say, I can barely understand, let alone explain this stuff. So if anyone can give a better/different/more convincing explanation of emergence, or of how and why cognitive states can preserve causal efficacy at their particular level of organisation, whilst simultaneously being reducible to lower-level components, that’d be good!

Cognitivism – not all bad?

by Robert OS

So in my talk I said that cognitivism assumes symbol manipulation (a device running an explicitly represented program on explicit representations).

I said it was highly likely that cognitivism was a non-starter as an explanation for all of our cognitive abilities. But that still left the possibility that the mind had at its disposal a symbol manipulator.

The key attraction for me, which I perhaps should have been clearer about getting across, was that symbol manipulation programs are built out of basic building blocks of commands such as IF, AND, OR, NOT, COPY, DELETE, ADD, COMPARE, etc. for operating on symbols, and commands such as FIXATE, PICKUP, PUTDOWN, MOVETO, OPEN, TURNON etc. for operating on the world. These commands are just as symbolic as ‘ordinary’ symbols (such as ANSWER or COLOUR etc.). This means that you can run programs that manipulate programs (so you can soft-assemble a more efficient set of commands or re-order them to improve performance). And in particular you can have a program that composes brand new programs out of existing building blocks.
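To make the “programs manipulating programs” point concrete, here is a minimal toy sketch (my own illustration, not anything from the presentation): programs are just nested lists of symbols, a tiny interpreter supplies a few of the primitive commands named above, and a `compose` routine soft-assembles a brand new runnable program out of existing blocks. All names here are hypothetical.

```python
import copy

def evaluate(expr, env):
    """Interpret a symbolic expression against an environment of bindings."""
    if isinstance(expr, str):                 # a bare symbol: look it up
        return env.get(expr, expr)
    op, *args = expr
    if op == "IF":                            # (IF test then else)
        return evaluate(args[1] if evaluate(args[0], env) else args[2], env)
    if op == "AND":                           # all sub-expressions must hold
        return all(evaluate(a, env) for a in args)
    if op == "NOT":
        return not evaluate(args[0], env)
    if op == "COMPARE":                       # symbol equality
        return evaluate(args[0], env) == evaluate(args[1], env)
    if op == "COPY":                          # duplicate a symbolic structure
        return copy.deepcopy(evaluate(args[0], env))
    raise ValueError(f"unknown command: {op}")

def compose(*blocks):
    """A program that builds a new program out of existing building blocks;
    the result is itself just symbols, so evaluate() can run it."""
    return ["AND", *blocks]

# Existing building blocks...
check_colour = ["COMPARE", "COLOUR", "red"]
check_answer = ["COMPARE", "ANSWER", "yes"]

# ...soft-assembled into a brand new program, then run.
new_program = compose(check_colour, check_answer)
print(evaluate(new_program, {"COLOUR": "red", "ANSWER": "yes"}))  # prints True
```

The point of the sketch is just that the composed program and the composing routine trade in exactly the same symbolic currency, which is what makes self-improvement by re-ordering or re-assembling commands possible.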

So, in addition to the well-known merits of symbol manipulation (Fodor’s compositionality and productivity arguments; the ability to reason using formal logic; and the ability to select the option with the highest-valued utility), I claimed that symbol manipulation might be incredibly useful for sequencing and organising actions (e.g. making a cup of coffee or making a copy of a pattern of blocks) and for higher-order thought (including Authentic Intentionality).

Many, particularly in embodied/situated cognition, however, are keen to leave no (or a very small) role for symbol manipulation. Why might that be?

  1. Fodor’s LOT assumes that for symbol manipulation to work there must be a set of primitive Mentalese atoms (just like a vocabulary of words in a public language). It is not clear how such symbols could acquire an initial intentionality. Indeed Fodor has (perhaps) argued that primitive symbols must have their meanings innately set, which nearly all agree is utterly implausible.
  2. Very much related to the problem above is the symbol grounding problem. You can’t have word-like symbols circularly defined solely in terms of each other; eventually the symbols must be grounded in perception, and how that is done is thought to be problematic.
  3. Symbol manipulation is not biologically plausible. I’ve seen this argument a few times, but it is easy enough to show how neurons could implement symbol manipulation (which is not, of course, to say that they actually do!).
  4. Homuncular problems/central executive objections. I believe these can only work against the argument that the mind is (only) a symbol manipulator, and do not work against the argument that just some of cognition is based on symbol manipulation.

1 and 2 are, I think, quite good reasons to be suspicious of symbol manipulation, but I am not convinced by 3 and 4.
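On point 3, the classic demonstration is the McCulloch–Pitts style threshold unit: simple binary “neurons” suffice for the basic logical operations, and hence, by composition, for the control logic of a symbol manipulator. A minimal sketch of the standard textbook construction (the weights and thresholds are the usual pedagogical choices, not a claim about real neurons):

```python
def neuron(weights, threshold):
    """A binary threshold unit: fires (1) iff the weighted sum of its
    binary inputs reaches the threshold."""
    def fire(*inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0
    return fire

AND = neuron([1, 1], 2)    # fires only when both inputs fire
OR  = neuron([1, 1], 1)    # fires when either input fires
NOT = neuron([-1], 0)      # fires when its input is silent

# Units compose into circuits, so any boolean function -- and hence the
# primitive command repertoire of a symbol manipulator -- can in principle
# be wired up from them.
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

Again, this shows only that neurons *could* implement symbol manipulation, which is exactly the (limited) point being made in 3 above.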

It would be of direct interest to me for the writing of my thesis to hear what people thought were the best arguments for getting rid of cognitivism/symbol manipulation altogether (or for retaining it in some form, for that matter) so I would be very pleased to get any comments.

As an aside I would just mention that in my thesis I attempt to defuse 1 and 2 above by arguing that it is wrong to assume that symbol manipulation requires a persisting vocabulary of primitive Mentalese atoms, as Fodor and many others assume. Instead, I say, pointers are used to manipulate and organise perceptual and sensorimotor knowledge contained in ‘files’.
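For what it’s worth, the pointers-and-files idea might be sketched like this (my own toy reading, with entirely hypothetical names; the thesis itself will of course differ): symbols are just handles minted on demand, and all the content lives in the files they point to, accumulated from perceptual and sensorimotor encounters rather than drawn from a fixed innate vocabulary.

```python
class File:
    """A bundle of perceptual and sensorimotor knowledge about one thing."""
    def __init__(self):
        self.contents = []          # e.g. features, affordances, episodes

    def add(self, item):
        self.contents.append(item)

class PointerSystem:
    """Mints and dereferences pointers to files; no persisting vocabulary
    of primitive atoms is assumed."""
    def __init__(self):
        self._files = {}

    def point(self, label):
        # Return a pointer (here, just the label) to a file, creating the
        # file on first use -- new 'symbols' appear on demand.
        self._files.setdefault(label, File())
        return label

    def deref(self, pointer):
        return self._files[pointer]

mind = PointerSystem()
cup = mind.point("cup")                      # a new handle, minted on demand
mind.deref(cup).add("graspable by handle")   # grounded in sensorimotor detail
```

The contrast with the Fodorian picture is that nothing in the pointer itself carries the meaning; the grounding work is done by the perceptual content filed under it.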