In response to our Jeff Cooper quote a couple of posts ago, commenter Uriel Fiori linked to a 2013 post by Nick Land. That post, at his blog Outside in, is called “Monkey Business”, and it discusses a tension in neoreactionary thinking about something called “orthogonalism”.
Simply put, “orthogonalism” is a way of thinking about telos, and about the question of means and ends. The orthogonalist assumes the existence of ends that transcend the means by which they are pursued. We must eat, but not for the sake of eating; we eat to preserve life. The provision of food is not, therefore, an end in itself. This can be generalized to economics as a whole: the purpose of capitalism, for example, is not the maximization of resources per se, but rather to increase the availability of resources that are necessary preconditions for the satisfaction of other, more human ends, such as the mitigation of poverty and hunger, or the increase in human freedom that comes from eliminating the high-priority requirements that must be met before other uses of our time and attention become possible. In other words, there is a sort of triage that necessarily occurs in human life, with something, not always clearly defined, at the apex of the hierarchy of purpose.
This implies, for example, that when the capitalistic maximization of resources becomes an end in itself, and other human values are subverted or subordinated to that aim, the result is a “means-end reversal” that will be pointed out and resisted by orthogonalists, who believe that the ultimate end is indeed, and will remain, transcendent. The question for the orthogonalist then becomes: what is at the apex of the hierarchy of goals, and why?
The anti-orthogonalist disputes this traditionalist idea of telos. In another post, “Against Orthogonality”, Nick Land discusses telos in the context of artificial superintelligence. He refers to the idea (see the first link in the quoted passage below) that there are basic “drives” that any intelligence will be motivated by. (It is important to note that the idea of “drives” is not, and should not be, seen as limited to artificial intelligence; it is plausibly central to understanding human psychology as well.)
The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside in systematically opposes.
Even the orthogonalists admit that there are values immanent to advanced intelligence, most importantly, those described by Steve Omohundro as ‘basic AI drives’ — now terminologically fixed as ‘Omohundro drives’. These are sub-goals, instrumentally required by (almost) any terminal goals. They include such general presuppositions for practical achievement as self-preservation, efficiency, resource acquisition, and creativity.
This is simple, coherent, and in my opinion irrefutable; almost any conceivable goal contains implicit subordinate requirements that must, if the primary goal is to be accomplished, also become goals in themselves. If I want to read Hamlet, I have to learn to read. If I want to drive to Akron, I have to get a car. Etc.
Next, however, comes a more audacious passage that expresses the axiom at the heart of the anti-orthogonalist position.
At the most simple, and in the grain of the existing debate, the anti-orthogonalist position is therefore that Omohundro drives exhaust the domain of real purposes. Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.
This contains a plausible assertion, a statement of faith, and a legitimate challenge to orthogonalists. In order:
The plausible assertion is that “Nature has never generated a terminal value except through hypertrophy of an instrumental value.” This may be true. On a purely naturalist and materialist foundation, all of our supposedly transcendent values — altruism, goodness, wisdom, justice, loyalty, honor, love, and so on — are evolutionary adaptations. We feel their pull in our consciences, and imagine them to be abstract truths, simply because they have been, throughout our evolution as social mammals, necessary conditions for the satisfaction of the real, and quite mindless, natural “goal”: the successful propagation of our genes. Our belief that they are anything more than this is simply an illusion: a trick that our wiring plays on us for our own good (or, more precisely, for our genes’ own “good”).
The statement of faith is here: “To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.” I do not presume to rule on what may or may not be “compatible with techno-scientific integrity”, but I do think it is beyond Nick Land’s competence to know — not to believe, but to know — whether or not a search outside Nature for sovereign purposes has any prospect of success. What Nick is doing here is truncating any possible teleological hierarchy at the level of our current understanding of the natural world; in doing so he slices off the apex that entire civilizations, and uncountable multitudes of human lives, have oriented themselves toward. He is perfectly entitled to do so, of course, and he may, for all I know, even be right; but he cannot know that he is right, and he may well be wrong.
Here, then, is the challenge this presents to orthogonalists: you must define your hierarchy of aims. There are two choices. If you have nothing more to offer than a naturalistic and scientistic framework for this, then you have to swallow the “black pill”: there is in fact no transcendent telos in the world at all, and so we are radically free to choose whatever goals we like. If you want something better, or at least less subjective, than that, then you need to put something actually transcendent at the apex of your teleological hierarchy. That something — I don’t see any way around it — must be God.
Transhumanists like Nick Land prefer the black pill. They seem, rather puzzlingly to me, to welcome a future in which the maximization of intelligence is the summum bonum, even if it means that humans, and their purely human priorities, affinities and aversions — all of which Nick refers to as “monkey business” — fall by the wayside.
I take the opposite view. If naturalism is wrong, then the possibility of transcendent goals exists, and our sense of their presence may be more than just an evolutionary illusion. On the other hand, if naturalism is true, and we must swallow the black pill, then I get to choose whatever goals I like, for any reason I like. I might decide, for example, that my goal is simply to understand what maximizes the health and harmony and happiness of human societies, as consistent with human nature, and to exert what influence I can toward that end. Or I might choose a more inwardly directed aim: the unification of my inner multitude, mastery of the Self, and a more conscious life. Another goal might be simply to love my neighbor. Or all of the above.
Either way, though, I’m with the monkeys.
3 Comments
Odd that Land would hold that position when he has written so much on Gnon.
It is a fair point that the transcendent goals would themselves have formed within the mind, if such a thing were reasonably possible. However, *something* clearly shaped the ‘Omohundro drives’, as they are the product of evolution and not purely random, so the claim that these goals are neither within the mind nor external is absurd.
Isn’t this exactly the thing the Gnon hypothesis was made to address? Our transcendental goal, external, is the Will of God.
ikacer,
The distinction here is between goals rooted only in nature (which we internalize as “our” goals, but which are really simply contingent expressions of a mindless and ultimately purposeless process of evolution), and a teleology that transcends the natural world (and which, presumably, both precedes nature and stands above it).
Both of these are equally “external”.
there is quite a lineage of posts on this theme (and arguably Land’s entire career is about this single anti-orthogonality thesis). I think the post that best addresses “why intelligence” is this: http://www.xenosystems.net/will-to-think/
I guess the argument there also says a lot about how that relates to Gnon (namely, intelligence survives), and why anyone not aligned with intelligence maximization per se would do well to favor it anyway.