This is my much-delayed follow-up post to my little presentation last week. As I said in week 1, I’m hoping that one of the uses for this blog will be for presenters to post any thoughts they have about their presentation topics after the class, perhaps in light of the class discussion, or of material that’s cropped up later in the course. Ideally this post should go up fairly soon after the presentation, although I’m aware I’m setting a bad precedent here!
So, without further ado, here are a few bullet points about what I got out of my presentation, and the ensuing discussion:
- In future philosophy-of-cogsci work on the frame problem, it would be good to distinguish the various problems which have been tagged with that label. In particular, it would be good to note the formulations of the frame problem that have been solved (or dissolved), and to articulate why this doesn’t affect the prospects for a solution to the frame problem that interests philosophers like Haugeland.
- Having said that, I thought there seemed to be a consensus in the room that the problem with which Haugeland is concerned looks insuperable for AI – insofar as AI is committed to a methodology based around explicit representation of information and piecemeal application of inferential operations to that information.
- But, of course, AI since the late 80s appears to have no such commitment. So there are still plenty of other ways in which reflecting on the successes and failures of AI since then can inform cogsci theorizing.
- It also seems worth noting that there are possible, and presumably plenty of actual, AI systems that can solve the problem examples Haugeland discusses – even GOFAI systems. So the general problem isn’t tied to anything about those particular cases, and could, I think, do with a bit more articulation in Haugeland’s paper. Presumably the issue is that the success of any GOFAI system that can work around the frame problem in some domain or other will be tied to specific contextual factors – and ones that look arbitrary with respect to the project of trying to ‘engineer intelligence’. See Wheeler’s distinction between intra- and inter-context frame problems for a way of spelling this out.
- Having said that, it’s worth wondering how flexible and context-neutral our rational/adaptive competences really are – don’t we rely on favourable embedding contexts to solve problems set by the environment in ostensibly similar ways to the ‘dumb’ GOFAI systems Haugeland considers? If so, what does this show?
- One emerging theme to keep an eye on in future weeks – are we going to be sufficiently convinced by Haugeland’s positive account of mindedness to think that it yields materials to adjudicate between ‘extended’ and ‘embedded’ views of cognition’s relationship to the environment? That is, will he succeed in showing that the relationships between cognizer and environment to which he appeals are essential for cognition – required for its bare conceptual possibility? Or are they merely important parameters that we must factor in when studying cognition?
- Haugeland’s positive view will clearly be subjectivist/idealist to some degree – he endorses Dreyfus’s phenomenological claim that the world to which we have cognitive access is essentially our human world. Is this going to be a consequence Haugeland can convince us to swallow?
- Haugeland’s review of Dreyfus is awesome. And thus very hard to summarise without just transcribing the whole thing verbatim.
Post any thoughts on these bullets, or any important points from the discussion that I missed, in the comments, and I’ll see you in class tomorrow – really looking forward to it!