Educational and psychological theories of the mid-20th century saw the human brain as an infinitely flexible learning machine, with no “factory presets”. But the picture that is now emerging of our mental apparatus is of a suite of prewired cognitive modules, located in various areas of the brain, each of which has been shaped by natural selection for some useful task. These modules provide us with a common a priori framework for organizing our model of the world, and each module contributes an intuitive description of some aspect of our environment.
Steven Pinker, in The Blank Slate (pp. 219-223), notes that we have preinstalled instincts and intuitions about physics, engineering, biology, probability, language, and number, to name just a few. Many of the cultural, mathematical, and scientific tools we have developed, however, are either beyond the inborn capability of these modules, or actually at odds with them. For example, we have no problem learning to walk, throw a stone, or speak – all of which are extraordinarily complex tasks – but reading, mathematics, and the like are too recent, in adaptive terms, to have entered our set of inherited and intuitive abilities, and must be painstakingly acquired in school. As Pinker points out,
Far from being empty receptacles or universal learners, then, children are equipped with a toolbox of implements for reasoning and learning in particular ways, and those implements must be cleverly recruited to master problems for which they were not designed. That requires not just inserting new facts and skills in children’s minds but debugging and disabling old ones. Students cannot learn Newtonian physics until they unlearn their intuitive, impetus-based physics. They cannot learn modern biology until they unlearn their intuitive biology, which thinks in terms of vital essences. And they cannot learn evolution until they unlearn their intuitive engineering, which attributes design to the intentions of a designer.
-The Blank Slate, p. 223 [emphasis mine]
Fifty-one percent of Americans believe God created humans in their present form.
18 Comments
I’m generally supportive of the research program of Pinker and various others developing broadly darwinian stories about the furniture of the mind. But Pinker probably goes too far out on the limb of modularity — we humans do seem to have some general-purpose cognitive strategies which, interestingly, seem to be implicated in linguistic, mathematical and logical practices. Further, while it is undoubtedly true that humans, like virtually all vertebrates, respond differently to animate and inanimate objects in their environments, to suggest that we possess intuitive (innate/heritable?) “theories” of physics, biology and engineering is, shall we say, over the top.
Hi Bob,
It all does sound over the top, but it is important to keep in mind that Pinker isn’t using the terms in the sense that one might imagine on first reading. For example, “intuitive physics” certainly doesn’t mean that we are born with an aptitude for string theory – quite the contrary. Nor does “intuitive biology” mean that we have any innate insights into the architecture of cell membranes. Here’s how Pinker describes the three traits you mentioned:
I don’t think anyone would disagree that we do indeed have some general-purpose and highly plastic cognitive abilities, but the preexisting modular architecture of the brain is hardly in doubt these days.
Hi Malcolm –
In each of the quotes from Pinker regarding our “intuitive understanding” he goes “over the top.” Objects with some continuity (i.e., coherence and localization) are OK with me; but following laws of motion and force? As for biology/natural history: hidden essences, powers, growth, and functions? As for engineering/tool-making: design and purpose? Pinker helps himself to some heavily freighted metaphysical notions here which I think are very unlikely to be innate. I’m happy to grant that innate/inherited dispositions could strongly channel our theoretical productions along the envisioned lines, but what’s implicit in this business of theorizing? This goes far beyond even the ability to imagine/generate alternative scenarios that can die in our stead, and makes sense only in a context where “understanding” has itself come to be valued.
Hi Bob,
I think it is important to make a distinction here between an innate cognitive apparatus that determines behavior according to implicit “laws of motion and force”, and an explicit conscious representation of those laws. Pinker is not saying that a Pleistocene man would be able to express a concept like f=ma; rather he is saying that such a man would have a brain module that would use an implicit model of physics in order to solve a complex problem like throwing a spear. I see no reason why such mental apparatus, which would be highly adaptive, should not be innate. The point is that our brains have specific prewired equipment to solve only those problems that were relevant to our survival during our long evolutionary ramp; however, we also have some very useful general-purpose equipment, which is what we must laboriously press into service for unfamiliar tasks like reading, writing, and tax preparation.
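To make the implicit/explicit contrast concrete, here is the kind of calculation a physicist would write down explicitly for the thrower’s problem. This is a hypothetical sketch of my own (the function name and the numbers are purely illustrative, and air resistance is ignored); the claim is precisely that the thrower’s brain solves an equivalent problem without ever representing the formula:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(target_range, angle_deg):
    """Speed needed to hit a target `target_range` meters away when
    launching at `angle_deg` degrees, with no air resistance.
    Rearranged from the projectile-range formula R = v^2 sin(2θ) / g."""
    theta = math.radians(angle_deg)
    return math.sqrt(G * target_range / math.sin(2 * theta))

# The spear-thrower's problem, stated explicitly: hit a target 20 m away.
v = launch_speed(20.0, 45.0)  # 45 degrees maximizes range for a given speed
print(round(v, 2))  # prints 14.01 (m/s)
```

The point of the example is the asymmetry: writing this down takes schooling, but a practiced thrower’s motor system arrives at an equivalent answer with no symbols at all.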
I’ll be glad to fetch some of the background material if you are curious about Pinker’s sources for all of this.
Interesting topic again, Malcolm. I’ve posted a note critical of so-called “Evolutionary Psychology” (of which I think Pinker might be seen as a general proponent) here, the gist of which was that there is some serious potential for, or sources of, error in its tendency to misunderstand and underestimate (or “misunderestimate” in Bush’s inspired coinage) the nature and function of culture in human adaptation.
Malcolm – No need to provide the background material since I have Pinker’s book. I’m familiar with how procedural knowledge (“knowing how” as contrasted with “knowing that”) is often explained in terms of “implicit knowledge” of various domain-specific things, including domain-specific laws of transformation. Pinker, along with the last two generations of cognitive scientists, has not yet come to terms with the basic distinction between following a rule/law and instantiating a rule/law. The problem is to explain implicit knowledge in a way that doesn’t imply that stones “implicitly know” a lot of abstruse physics — in all likelihood, more than you or I know explicitly!
Hi Bob, Ellis –
Bob, I’m not clear about what your objection is. There are important differences between:
A) a stone, which simply follows a trajectory based on its mass and momentum,
B) a brain that has a neural module that is adept at calculating what momentum to impart to that stone, given its heft, and
C) a brain that knows and can represent in consciousness what it is doing as it makes that calculation.
Pinker is talking about B), with the idea being that such skills were so useful that we evolved dedicated machinery for them, and that it is possible by observation to work out the implicit models that such apparatus implements.
Ellis, I agree that it is easy for evolutionary psychologists to push the line too far in their preferred direction, and I’ll be interested to read your post. I’m well aware that “to a man who only has a hammer, everything is a nail”. But I think Pinker is well aware of that intellectual pitfall, and is pretty careful to avoid it.
Malcolm –
I don’t believe that the implicit/explicit distinction as used by cognitive scientists corresponds to the unconscious/conscious distinction. Like Pinker, I’m talking about (B). In the stone throwing example, you and Pinker want to say that the stone thrower has implicit knowledge of physics allowing the calculation of necessary momentum given information about heft (and distance to the target). What I want is a story about how such implicit knowledge can be instantiated which does not lend itself to the presumably absurd claim that the stone has implicit knowledge of the momentum imparted to it by the thrower. This is not so easy as one might think.
Hi Bob,
I have to admit I don’t see the difficulty. A stone, being a physical object, will move along a spacetime geodesic when thrown (modified, of course, by the decelerating force of air friction, which will tend to decrease its momentum as kinetic energy is converted to heat).
The brain is a physical object too, and would behave just as the stone does if it were thrown through the air, but we are dealing here with its role as a controller for the musculoskeletal system. It is unproblematic to imagine that over evolutionary time the ability accurately to hurl a rock at one’s prey would confer an advantage (as would, also, being able to dodge an incoming projectile), and that neural architecture that enabled such an ability would be strongly selected for. So what we end up with is a system of neural circuits, modeling an implicit set of “rules”, that can do things like grab a rock, feel its weight, and calculate the bodily motions necessary to put it on an appropriate trajectory, or see a leaping tiger and get out of the way. This is a reactive, electromechanical system, shaped by the usual trial-and-error approach of evolution, in which each incremental lift in design space is built on the preceding results, and the “implicit rules” are no more than some pattern of neural circuitry that happened to get the job done. Such “rules”, having been instantiated for specific tasks, are only as general as the problem they are designed to solve, and attempts to extend them beyond their area of competence can lead to unreliable results, which is Pinker’s point.
Stones don’t do anything like this, as they don’t reproduce themselves, and so are not subject to evolution by selection.
Malcolm –
Even if one accepts the relativization to evolved, reactive systems, the problem doesn’t disappear. If the “implicit rules” are no more than patterns of neural circuitry that “happen to get the job done,” what are the grounds for saying that the system that contains these circuits implicitly knows the rules in question? The rules, not being explicitly represented in the system, cannot be accessed, compared to one another, refined, etc. by the system, so in what non-metaphorical sense are they known? The only reason I know for calling such patterns of causal interaction ‘implicitly represented rules’ is because of a prior commitment to viewing the causal interactions as calculations. But if that’s the case, why restrict the range of application of this locution to evolved, reactive systems?
I think you are zeroing in too much on the nomenclature, Bob. If you like, we don’t have to call them “rules”, but there are empirically observable regularities of behavior controlled by the neural modules in question, modules that are subunits of the system that contains them (i.e. the organism). All that we are trying to do is see what the behavior is that results from the operations of these systems (e.g. stone-throwing, tiger-trajectory-prediction), and to make a reasonable association between what the neural modules seem to be doing and broad categories of intellectual activity. If the brain-module seems to do the job of interpreting trajectories of things – the flight of a stone through the air, given its weight, for example – and controlling the body to take advantage of those interpretations, it is perfectly reasonable to say that it is doing something that corresponds to what we call “physics”. If the organism is capable of doing this with some regularity, what is wrong with saying it “knows” how? Many other organisms, by contrast, don’t know how to throw rocks, after all.

It is a very different thing from a stone simply falling; this can be seen by considering the fact that it is possible to fail. A stone will never fail to fall according to the laws of motion, but a stone thrown by a man will often miss the target. We don’t have to use the word “calculation”, either, if you would rather not, but it is not much of a reach to say that the organism has an interest in hitting the prey with the rock; that, in order to do so, it must move its body in a certain way that depends on the size and shape of the rock; and that it is able, by using available sensory data as “input”, to generate a bodily “output” that is useful, one that conveys to the rock a momentum that, given the laws of physics, gets it to the target.

I’m fine with calling that “calculation”; we certainly call it that when NASA scientists solve what is essentially the same problem. But we can call it something else if you like.
Another point is that the rules can be refined by the system; this is what happens when we practice.
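That refinement-through-practice can itself be sketched without any explicit physics. In this toy simulation (entirely my own construction, not Pinker’s), the “learner” never stores an equation of motion; it just nudges one parameter in proportion to how far each throw misses, while the physics lives only in the environment:

```python
G = 9.81  # gravitational acceleration, m/s^2

def simulate_range(speed):
    """The environment's physics, never represented inside the learner:
    range of a 45-degree throw with no air resistance, R = v^2 / g."""
    return speed * speed / G

def practice(target, throws=50, speed=5.0, gain=0.2):
    """Trial-and-error refinement: after each throw, adjust the speed in
    proportion to the observed miss. No law of motion is stored or
    consulted -- the 'rule' lives only in one tuned parameter."""
    for _ in range(throws):
        miss = target - simulate_range(speed)
        speed += gain * miss  # throw harder if short, softer if long
    return speed

tuned = practice(20.0)                  # practice on a target 20 m away
print(round(simulate_range(tuned), 2))  # prints 20.0
```

The tuned system reliably hits the target, and in that limited sense “knows how” to throw, yet there is nothing in it one could point to as a represented law of motion.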
If we are a little more flexible with terms, then, and focus instead on the idea being expressed, does that help us reconcile our disagreement?
To recap: Evolution has endowed us with various neural modules that were selected for because they enabled us intuitively to perform specific functions relating to interpreting the environment and adjusting behavior accordingly. The functions these modules perform correspond, more or less, to cognitive areas we now call “physics”, “biology”, etc., but problems can arise when we follow our natural tendency to rely on this intuitive machinery in areas for which it seems appropriate, but which are actually beyond the scope of the original design.
But what you’re talking about, Malcolm, would seem to be true of any sort of intentional, or even broadly purposive behavior — not just of the human dodging the tiger’s leap, but the tiger making the leap, and possibly of a plant struggling for sunlight. Certainly there’s a difference in the underlying phenomena, but to call any of these kinds of behavior instances of “knowing physics” involves us in metaphor, and then it’s just another metaphorical step to speak of a rock knowing physics as well. None of which is to deny that, as you say, “Evolution has endowed us with various neural modules that were selected for because they enabled us intuitively to perform specific functions relating to interpreting the environment and adjusting behavior accordingly”. But when such general and innocuous observations are turned into metaphors involving relatively advanced forms of culturally and socially derived knowledge, I at least start to feel that maybe there’s another agenda being advanced here.
Malcolm –
I don’t think we can view it as simply a matter of words whether we interpret a process as carrying information with particular content, even/especially if we assume a naturalistic frame of reference. My worries are not that physical processes are being assigned content, but how it is being done. What constraints are there on the postulation of implicit theories? It’s one thing to say that certain behaviors under cognitive control are successful because the environment in which they occur has various physical, biological, etc. features, and something else to claim that the cognitive processes in question represent, albeit “implicitly,” those environmental features. There’s not enough constraint on the attribution of content here to make for an interesting account of the behavior as _cognitive_, which must be our goal if talk of implicit theories is to be more than metaphorical.
From another perspective, while I’m willing to countenance a broadly darwinian theory of informational content, I don’t think we can safely assume that the content in question will be a representation of salient features of the darwinian process.
But I suspect we are coming at these matters with different sets of concerns.
Ok guys, I see what the problem is here, and I think we can clear this up.
Ellis, you said:
And you are quite right! I agree completely. Evolution will do exactly the same thing with any organism: build systems that help the organism survive. I absolutely agree that we can look at any evolved “intentional” system in this way. The tiger has just the same sort of modular structure in its head, used to work out how to make its leap, that the human has for predicting the pouncing predator’s path. Here’s the important difference, though: the tiger will never attempt to use this system for anything else, whereas humans, and only humans, do. The point here (the only point!) is not to assign the attribute “knows physics” to the tiger or the plant or even the protohuman, but rather to note the fact that when our species engages in the uniquely human practice of extending our preexisting intuitive grasp of the physical world into areas that our prewired modules have not prepared us for (e.g., special relativity, something plants and tigers never think about), we may find that these automatic systems overreach themselves, and may lead us astray. That’s all! Pinker et al. simply call these prebuilt systems “intuitive physics” and so forth because they are modules that do things like working out trajectories, etc. that are in human culture considered to be aspects of “physics”. But there is no other “agenda” here. I’m curious, actually, what agenda you thought I might have been trying to advance.
Bob, you are welcome to see the assignation of “theory of physics” to be as metaphorical as you like, if that eases the difficulty; the only point is that we have these units in our heads that do things like figure out what nerve impulses to generate to throw a ball, and so forth. There is no doubt that they exist, and that this is what they do. They are universal features of human beings, they work the same way in all of us, they cause the same deficits if they are damaged, and they can lead us to the same erroneous conclusions. Visual processing is the same thing – we’ve all seen the common optical illusions that play tricks with converging lines, light and shadow, etc., to make one thing look bigger or smaller or brighter or darker than another, and we all are fooled by them in exactly the same way (check this one out). The reason for that is that we have an innate spatial-geometry apparatus that works fine in the wild, where such false cues aren’t around, but which we find it impossible to switch off even when we know it is misleading us. The point is that for every obvious optical illusion or other bit of trickery, we can go wrong in lots of other, similar, ways without realizing it, whenever the context in question is not the context for which the system was designed. Pinker’s point is that if we are aware that this sort of thing can happen – if we increase our knowledge of our innate cognitive structure, and our understanding of the assumptions it pushes us toward – we can be more wary, and less prone to unconscious error.
You write: “I’m curious, actually, what agenda you thought I might have been trying to advance.”
I didn’t mean to imply that you were advancing an agenda, Malcolm, and I’m sorry if my words seemed to be implying some sort of broader Conspiracy Theory, which I don’t mean either. I did mean, though, that Evolutionary Psychology (and its older cousin, sociobiology), like its inverse, Social Constructionism, have some potential political/ideological associations, and that we should be cautious about using the theory to further such associations. With EP, that misuse is probably most commonly seen in an emphasis on our Pleistocene natures as a counter to schemes of social engineering (which are usually just as wrongly based); but that “stone-age nature” can also be used as a means of characterizing, and therefore denigrating, a particular side in modern political disputes.
On the other hand, I agree completely with your general point that prewired modules may lead us astray in our much changed modern environments.
Good point, Ellis. Certainly we see plenty of examples of models of human nature forming the basis of stifling ideologies. The Nazis were a horrifying example, and as a backlash to their hideous crimes the dogmatic view that there is simply no innate human nature became almost hegemonic in academic and political circles, and persists to this day. Both extreme positions on the question – “all nature” vs. “all nurture”, you might say – have led to unspeakably cruel social-engineering experiments in recent world history. I think Pinker is not advocating any particular political position in his book – he points out errors on both sides of the political spectrum – but is, rather, trying to bring to our attention the facts of the matter: we are predisposed to think and perceive and react in certain ways, but are free enough to avoid such traps if we are aware of them. His point is that in order to create a just society, we need to be as informed as possible about what we are, and why.
Malcolm –
I am happy to treat your references to implicit theories as metaphorical. I don’t think I can be quite so sanguine about Pinker’s references though, since he is working in an established tradition in cognitive science where “implicit information” is supposed to do non-metaphorical explanatory work. So be it.
Pax
Hi Bob,
Well, we can burrow down into what we mean by “theory” at this point. (Or not bother, if this is becoming a “dead horse”!)
If we say that the term “theory” refers to a system for organizing and predicting phenomena, it is fairly uncontroversial, I think, to say that the brain implements several such systems, as we can see them in action, and observe predictable errors when they fail. It is a mere convenience, but a helpful one, to refer to them, when they are being studied and discussed, as “theories” of “physics”, “biology”, and so forth, even though all will admit that we are just mapping the activity of these modules onto the human activities that go by these names.
They can be called “implicit”, I think, simply because there is not necessarily any awareness of their existence on the part of the organism that implements them.
I meant to thank both of you, by the way, for reading, and for offering such interesting comments.