Hard Questions About AI

Over at Maverick Philosopher, Bill Vallicella has posted an essay examining some of the most difficult questions we are about to face as artificial intelligence advances, in particular the question of whether the AI systems we develop (or which, as may happen in the very near future, develop themselves) will qualify for moral and legal personhood. This question immediately drops us into a nearly impenetrable forest of philosophical riddles, most of which we have, at least at our present level of understanding, no way of answering.

Bill argues against the personhood of AI systems. He proposes a syllogism:

P1: All sentient beings are biologically alive.  
P2: No AI system is or could be biologically alive. So:
C: No AI system is or could be sentient.

Bill argues also that rights imply duties, which in turn presupposes free will — which AI systems, being built on entirely deterministic computers, cannot possibly have:

A duty is a moral obligation to do X or refrain from doing Y.  Any being for whom this is true is morally responsible for his actions and omissions.  Moral responsibility presupposes freedom of the will, which robots lack, being mere deterministic systems. Any quantum indeterminacy that percolates up into their mechanical brains cannot bestow upon them freedom of the will since a free action is not a random or undetermined action. A free action is one caused by the agent.

This is an interesting angle, and one I hadn’t thought much about; when thinking about this issue I’ve generally viewed moral/legal personhood as depending almost entirely on whether or not the entity in question is conscious (and therefore able actually to experience harm and suffering) or just a “zombie”. By focusing instead on the freedom of the will, Bill opens the debate to the technical questions surrounding free will even in ourselves (whom we know beyond doubt to be capable of conscious suffering). And those are knotty questions indeed, and far from settled. Bill acknowledges this:

But now we approach the mysteries of Kant’s noumenal agency.

How about that syllogism, though? I think P1 is problematic, and that in this context, it begs the question. While I agree that at the moment “All sentient beings are biologically alive”, what is precisely at issue here is whether we are on the threshold of ushering in a brand-new kind of being, one that has never existed before. (I can hear Homer Simpson correcting his son Bart: “You mean all sentient beings are biologically alive so far.”) My intuition strongly inclines me to agree with P1, which I suspect is rooted in some deep metaphysical truth, but until we have finally unraveled the mysteries of consciousness it isn’t a thing we can consider known.

A comment thread ensued, joined by the Purdue University philosopher Elliott Crozat, who offered a link to a fine essay of his own on the moral status of AI systems. (That lucid and thought-provoking essay is here, and I urge you all to go and read it.)

In his article, Dr. Crozat offers a strong, “abductive” argument for the view that despite their remarkable behavior, we are right to conclude, at least as a default position, that AI systems are not conscious persons, but are simply fancy machines. (I won’t post it here; please visit the link and read it.) But he follows this by arguing that, because it would be a graver moral error wrongly to deny personhood to AIs than wrongly to ascribe it, we ought to err on the side of caution and ascribe them moral and legal rights.

This in turn raises the question: what are the possible downsides of that? Can we reliably assess the risks of assigning legal personhood and rights to increasingly powerful agents that might act in ways we can’t predict, or even imagine?

Finally, Dr. Crozat introduces a term I hadn’t heard before: “Conceptual Disruption”:

AI technology is new, and new technology can disrupt our use of concepts, thus introducing confusion into culture and its standards of communication. According to Marchiori and Scharp (2024, Introduction), in the newly emerging literature on conceptual disruption, a conceptual disruption is a challenge or obstacle to the normal course of ideational activity. The literature indicates three kinds of such disruption: gaps, overlaps, and misalignments. A conceptual gap occurs when no concept exists which can be applied to a novel phenomenon. A conceptual overlap holds when multiple incompatible concepts apply to the same item. A conceptual misalignment is constituted by conceptual activity not aligned with our norms and values.

The creation of AI androids likely would cause conceptual disruption, thereby changing our culture and language in various ways, some of which might be undesirable. Consider conceptual gaps and overlaps. The creation of AI androids arguably involves the generation of a new entity in human history. No clear and precise concept exists with corresponding terms to refer to such referents, thereby producing a conceptual gap and motivating what I will call a reverse conceptual overlap, which occurs when the same concept is applied to multiple entities. In this case, since there is no pre-existing concept and term for AI androids, folks might be inclined to think of such entities as persons and refer to them as ‘persons,’ hence generating a reverse conceptual overlap insofar as the same concept and term, namely ‘person’, is used for persons (traditionally understood) and non-personal androids. Such a situation could lead to confusions in the realms of culture, language, thought, morality, and law. This is a rich topic in the philosophy of technology that is open for exploration, particularly concerning the areas of conceptual analysis and conceptual engineering.

We have an awful lot to figure out, and soon. And we’d better get it right.

4 Comments

  1. Hank Johnson says

    We cannot copy a person (in the “CTRL+C then CTRL+V” sense), not just because we can’t currently figure out how to clone a human and get the exact same neuron-for-neuron arrangement of an existing brain. More profoundly, we can’t copy a human person – even if we could duplicate a brain – because we people are embodied souls rather than mere moist automatons.

    The imago dei is real.

    Provided we use enough computer processing power and drive storage, an AI can be copied. That right there shows that even the most capable AI fails the personhood test.

    Posted May 23, 2025 at 9:01 pm | Permalink
  2. Malcolm says

    Hi Hank, and thanks for coming by.

    Provided we use enough computer processing power and drive storage, an AI can be copied. That right there shows that even the most capable AI fails the personhood test.

    Aren’t you assuming what you’re setting out to prove? AI “persons”, if they actually do exist, are a brand-new feature of the world. How do you know that they aren’t the first sort of persons that can be copied?

    Posted May 23, 2025 at 11:01 pm | Permalink
  3. Bill V says

    Malcolm,

    I said that my syllogism didn’t settle the matter. I too question (P1): “All sentient beings are biologically alive.”

    Does God have sensory knowledge of the material universe he created? Is God the subject of sensory states? Does he take pleasure in the beauty of his creation? Does he feel wrath? Similar questions can be raised about disembodied pre-resurrection souls in heaven, hell, or purgatory. Some of these souls will be in painful psychological states.

    >>My intuition strongly inclines me to agree with P1, which I suspect is rooted in some deep metaphysical truth, but until we have finally unraveled the mysteries of consciousness it isn’t a thing we can consider known.<<

    Agreed. Cs. is mysterious. If we don't know what it is in itself, we don't know whether it can emerge from sufficiently complex systems.

    Posted May 28, 2025 at 7:02 pm | Permalink
  4. TexMex says

    AI is not a living being. Never will be. It is not conscious.

    Sorry but I stopped reading after the first two paragraphs. This is not something I need to overthink and debate.

    I wish people would just stop with the AI BS but I know they won’t. Welcome to what in the near future will become for many the new God and Master.

    Something wicked this way comes.

    Posted July 3, 2025 at 12:38 am | Permalink
