Intentional Grounding

One of the knottier topics in philosophy of mind is intentionality. The term refers to the way our thoughts are about their objects, and intentionality is often considered to be an exclusive hallmark of the mental. A thought can be “about” Paris, but a stone, or a lampshade, cannot be.

Well then, what about something like a map of Paris, or a book about Paris? The response usually given is that a book about Paris derives its “aboutness” from the minds of the person who wrote it and the person who reads it. In other words, only minds have intrinsic intentionality, and only minds can then bestow a second-order, derived intentionality on the artifacts they create.

So one of the problems (one of many!) for the philosopher who wants to develop a naturalistic account of mind is to root our “intrinsic” intentionality, somehow, in the material world. Some would have it, though, that the mental is irreducibly distinct from the physical, and that intentionality is a property of mind alone. This difficulty in seeing how any collection of matter, regardless of the complexity of its organization, can have this sort of “aboutness” is, to some, a fatal flaw in materialism.

I think, however, that a bridge can be built, if we are willing to accept the power of natural selection to create design. It seems to me that many highly intelligent people, despite having a fairly accurate intellectual overview of evolutionary theory, are still reluctant to embrace fully its most important implication, which is that sophisticated and purposeful design can enter the world without there being a mind to act as the designer. This “bootstrapping” is of course completely at odds with our ordinary experience, and is so deeply counterintuitive that I think even some very sophisticated thinkers stumble over it.

How can there be ‘purposeful’ design without a designer whose purpose it is? The answer is that the design accrues, quite naturally, to the reproductive benefit of material entities that are organized in such a way as to be able to make copies of themselves. The design elements that contribute to reproduction proliferate, and further improvements are gradually built upon them, simply because they are there for the next round. In this way, design emerges from a natural process that is itself quite mindless. The remarkable part is not the seemingly magical emergence of purposefulness, it’s the emergence of self-replicating arrangements of matter in the first place. Once that is in place, the “aboutness”, I am arguing, follows on quite naturally.

Before there were self-replicating organisms, there truly was no intentionality in the world. There were relationships between objects – for example, a river might have scooped a depression in its rocky bank – but such relationships are symmetrical, and intentionality of the sort we are seeking to account for is not. Bill Vallicella, the Maverick Philosopher, makes that point in this post. The rock is shaped by the river, but the river’s path is also confined by the rock. A lock and a key are symmetrically “about” each other. But when I think of Paris, there is no sense in which Paris is thinking about me. Intentionality is an asymmetrical relationship.

As soon as replicators arrived on the scene, however, a new game was in play. As these early entities reproduced, the natural variations among their offspring were tested against the environment, and those new structures or behaviors that increased the organism’s likelihood of generating offspring tended to increase in the population.

Now we can begin to take a new perspective in interpreting such an organism’s form and behavior. Because the types that fail to replicate simply vanish, while others become numerous, we can begin to ask what the salient differences are, and we can begin to assess the forms and behaviors of such replicators in terms of how they contribute to the organism’s chances of reproduction. In other words, why did type A die off, while type B proliferated? The difference will be explicable in terms of which features made it more likely that the organism acquired food, was not itself eaten, produced viable offspring, and so forth. In other words, even though the replicator itself (we are still speaking here of very simple organisms) has no idea whatsoever of any “aims” or “purposes”, a goal-oriented paradigm is still a useful explanatory stance for us to take when evaluating its form and behavior. This is what Daniel Dennett calls “the intentional stance”.

On the purely somatic level, consider the relationship between the morphology of an animal – say the beak of a finch – and its food. In the Galápagos Darwin observed a radiation of finch species among the several islands, and saw that each finch wore a beak that was adapted to the particular flora (and their seeds) of its local habitat. It is perfectly reasonable to view a particular beak as being “for”, or “about”, a particular food source, though the bird itself, we may assume, makes no inner representation of such purposefulness.

With due respect to Dr. Vallicella et al., I maintain that there is no reason not to say that the legs of an animal are “for” running, that its heart is “for” pumping blood, etc. Some will object that such a description implies a designer with a purpose in mind, but I believe that this is an unwarranted generalization from our ordinary human experience. There was a designer, yes, but no mind. A horse’s leg, designed by natural selection, is no less “for” running (an ability that benefits a horse’s reproductive fitness) than a hammer designed by a human is “for” driving nails.

Consider a frog, exquisitely designed for snatching flying insects from the air. This behavior is triggered visually; the passage of a fly across the frog’s visual field sets in motion a cascade of neural events that ends with the fly being zapped (this is already a hugely complex operation, and the end product of a tremendous amount of evolutionary R&D). This does not mean that the frog builds a little mental replica of the fly, or that it has any conscious perception whatsoever, for that matter. But it does mean that there are neurological patterns and structures in the frog that are “about” getting food, and that can distinguish something that is worth zapping from something that isn’t.

Next, consider an animal that differentiates between similar prey, for example a bird that shuns a brightly colored frog in favor of a drab one that happens not to be poisonous. Or a snake that predicts the course of its prey as it runs behind a boulder. In each case there are complex cognitive structures at work, recognizing prey as prey, making decisions about whether it is suitable, and if so, how to catch it. This is purposeful, intentional behavior.

In this way we can see that organisms of increasing neural complexity will exhibit more and more behavior, presumably driven by transitions between neurological states, that is chock-full of “aboutness”, of intentionality that is as “intrinsic” as it gets. Finally, after a great deal of design work, we get to animals like ourselves, with brains that can learn from their mistakes, that can use language, that can grieve, that can think of Paris.

Now all of this intentionality – this abundance of aboutness – is, it seems to me, completely independent of consciousness. The human brain has the additional ability, perhaps unique, to point the aboutness inward, to have thoughts about our thoughts, and this is indeed a great leap forward in cognitive design. And in doing this – in taking the intentional stance toward our own thoughts, from the unique viewpoint of introspection – we cannot see their physical underpinning, and so we think of them as “intrinsically” intentional. But this introspection is only the topmost story in a preexisting tower of intentionality that was built from the ground up by natural selection, and it is an error to mistake what it perceives for the whole show. Yes, a human artifact such as a book “derives” its intentionality from our own. But there is a special class of things in the world – living organisms – that get theirs from Nature.

Finally, there is one last pill to swallow in order to “see how deep the rabbit-hole goes”. Because our brains are, at bottom, physical systems, driven from one state to the next by physical causality, their semantic reliability – which is apparently quite high when we do things like mathematics or philosophy – is due entirely to the syntactic tuning that has been done over our long and harsh evolutionary history. As Dennett puts it:

The appreciation of meanings – their discrimination and delectation – is central to our vision of consciousness, but this conviction that I, on the inside, deal directly with meanings turns out to be something rather like a benign ‘user-illusion’.

This is a rather unsettling conclusion, as it undermines our certainty about the reliability of our reason. But if you read the newspapers, after all, that should come as no surprise.

  1. Bob Koepp says

    Hi Malcolm –
    Having just arrived here from the Maverick Philosopher’s blog, I guess I’m continuing a commentary begun there.

    Many years of practice thinking about natural selective processes leave me very comfortable with the notion that products of selection are “for” something. But that needs to be clearly distinguished from the notion that representations are “about” something. Being a representation is a particular sort of “being for” which is not directly comparable to other sorts of “being for” (e.g., being for representing is not comparable to being for digesting, or being for locomotion, etc.). So establishing that natural selection can generate things that are “for” something doesn’t allow an automatic inference that it can generate things that are “about” something. I think it can do the latter, but that depends entirely on how we articulate the notion of intentionality/aboutness.

    Regarding the putative independence of intentionality and consciousness, I worry that you might be generalizing from a too narrow base. As I said over at the Maverick’s place, it might well be that consciousness gives rise to new forms of intentionality, limiting the scope of the independence thesis. But only time, much careful investigation and even more critical thinking will settle these issues.

    In any case, thanks for blogging on this topic and prompting me to clarify, even a little, how I think the issues might connect to each other.

    Posted May 17, 2006 at 10:54 am | Permalink
  2. Malcolm says

    Hi Bob,

    Thanks, as always, for commenting.

    As far as the discussion over at Bill’s is concerned, I’d agree that representations such as pictures get all of their “aboutness” from the associative networks in the heads of the creator and the viewer. As for cognitive “aboutness”, I’d say that in the same way that I can make marks on a page that are “about” their intended subject, likewise I can make “marks” in my neural architecture that carry the same associative connections. All I am doing is using built-in, rather than external, memory.

    I think that if one is to accept that humans have unconscious wishes, beliefs, anger, etc., then one must grant those phenomena intentional content, and therefore intentionality is separate from consciousness. Some say that it is sufficient that an intentional state is one that merely could become conscious, but that seems rather weak to me. Worth a new post, perhaps. But what “new” forms of intentionality do you imagine? Can you say more about that?

    Posted May 17, 2006 at 12:19 pm | Permalink
  3. Celinda says

    The point about symmetry is something I missed over at Bill’s. My problem is that I do not yet have a good grasp on exactly what intentionality is. I am going to have to reread the posts and try to organize it.

    I do tend to agree with Spur’s latest comment that brain structure is an important element in intentionality. I wonder if the problem with the current theory comes down to an attempt to make the mental irreducibly distinct from the physical.

    I get lost in a number of the discussions because philosophy tends to lift a thing out of its real context and manipulate it, and when the discussion is done the thing is not quite the same as when it started.

    I too would be interested in hearing more about the different types of intentionality as it seems to me that the usage of the word gets slightly modified depending on the context.

    Posted May 17, 2006 at 4:20 pm | Permalink
  4. Bob Koepp says

    Malcolm and Celinda –
    The only sort of intentionality that I feel I have any grip on is what I’ve called the simplest sort of intentionality, i.e., simply being “about” something. In one of the threads at Bill’s place, I gave a couple examples of information conceived as an objective measure of dependency — planetary movements carry information about gravitational forces, and the way the light at a given point is structured carries information about the distribution of reflective surfaces in the environment. I think that information is “out there” even if no mind or cognitive system uses it. That’s what I mean by the simplest sort of intentionality (aboutness).

    Note how this objective information, and so, whatever intentionality it partakes of, is tied to actually existing states of affairs. But that makes it distinct from the sort of intentionality that first captured the imaginations of philosophers who emphasized that intentional objects need not be actual. What aroused their interest was the way we have thoughts about “mere” possibilities. I, too, find that very, very interesting. But I recognize immediately that something other than the aboutness of objective information is at work here, since objective information requires whatever it is about to actually exist.

    Posted May 17, 2006 at 5:28 pm | Permalink
  5. Kevin Kim says


    This is a fascinating discussion, both here and at Dr. V’s.

    If I’m reading you right, is it then proper to say a chess-playing computer exhibits intentionality insofar as its subroutines are “about” what’s happening on the chessboard, and insofar as the computer’s moves exhibit a “measure of dependency” on what the opponent decides to do?

    One of my own questions is whether there is a necessary link between intentionality and consciousness. If Daniel Dennett’s lock-and-key example is a type of intentionality, then it would seem that consciousness is not necessary for intentionality. The same would go for the chess-playing computer, I should think. Most would agree the computer isn’t conscious.

    If, however, we apply the word “intentionality” to things that aren’t conscious, are we misusing the word, or will “intentional” simply have to be redefined to include a wider semantic field? The same might be true for “information,” a word I usually take to imply the existence of the informer and the informed (i.e., a conscious mind capable of receiving and digesting data).


    Posted May 21, 2006 at 3:31 am | Permalink
  6. Bob Koepp says

    Hi Kevin –
    I think that the idea of objective information I advocate leaves open some of the questions you raise, but probably would require some redefinition/reconceptualization in philosophy of mind. My main point is that the idea of information has “aboutness” built-in — information is always and necessarily about something. If this aboutness is the same as the aboutness pointed to as the essence of intentionality, then it would seem that intentionality cannot be the mark of the mental, since objective information exists in the absence of minds to use it. On the other hand… if mind literally permeates the field of causes and effects, then perhaps the aboutness of objective information is the same as the intentionality of mind. But in any event, because of the definitional connection between objective information and actually existing states of affairs, getting from objective information to non-existing intentional objects will require some intricate theorizing about how information gets processed.

    Posted May 21, 2006 at 9:20 am | Permalink
  7. Malcolm says

    Hi Bob and Kevin,

    The hard part about the idea that everything carries objective information is that I don’t see how we parse the information into objective categories. You could say that the current state of the entire universe carries all the available information about the previous state; and I suspect that it takes minds to decide what that information is about.

    I think that there is NOT a necessary connection between intentionality and consciousness; I think a chess computer exhibits clearly intentional behavior.

    Posted May 21, 2006 at 8:19 pm | Permalink

4 Trackbacks

  1. […] A frequent topic at Bill’s website is philosophy of mind, and one of the more difficult areas of that subject is the matter of “intentionality”: the “aboutness” of mental acts and states. It is a conundrum that has interested me for quite a while, and in fact I wrote a little post about intentionality a short while back (I may repeat a few points here). […]

  2. […] In considering the question of how “mere” matter can exhibit intentionality, I argued in a previous post that living things have their purposefulness and “aboutness” by virtue of their being designed, just as our artifacts have. The designer, however, in the case of living things, is not a purposeful Mind, but the blind processes of evolution and natural selection. If we are willing to acknowledge that our complex intentionality might, as we look backward through our ancestral history, take simpler and simpler forms, all the way back to the simplest early replicators, we might have in hand the “gradualist bridge” that many dualist philosophers insist cannot be built. We still have a major problem to solve, however, which is the origin of life itself. There have been many promising suggestions as to how life might have got started, but one stumbling block for many models has been a sort of Catch-22 problem in which the machinery for RNA replication needed to be in place in order for the ball to get rolling. NYU researcher Robert Shapiro, however, has a new and promising model that bypasses the need for preexisting RNA, and relies only on simpler organic reactions. […]

  3. By waka waka waka » Blog Archive » Big Game on August 16, 2006 at 11:24 pm

    […] Wright argues that the mutual benefit of non-zero-sum relationships is so great that it is inevitable that as soon as agents appear that can be said to have “interests” – and as soon as natural selection got going, that was the case for all living things, a point I have made in these pages with regard to intentionality – then selection pressures are going to drive those agents reliably into such relationships wherever possible. This is true, he believes, not only for biological entities, but for social ones as well, and the first section of the book is a look at the arc of human history from this viewpoint. […]

  4. By waka waka waka » Blog Archive » Rings and Bridges on October 23, 2006 at 9:47 pm

    […] This all bears directly on the rancorous philosophical debate regarding “original” versus “derived” intentionality, which turns on the apparent discontinuity between the “aboutness” of human minds, and the inertness of inanimate matter. Some philosophers have flatly declared this to be an unbridgeable gap, but as I have argued here and elsewhere, I think there is no reason why meaning and intentionality cannot have entered the world gradually, as more complex organisms, with more complex interests and behaviors, came into being. Daniel Dennett has compared these sorts of discontinuity arguments to the notion of the Prime Mammal, an argument that goes like this (taken from Freedom Evolves, p.126): […]