By Richard Polt, The New York Times, August 5, 2012
Wherever I turn, the popular media,
scientists and even fellow philosophers are telling me that I’m a machine or a
beast. My ethics can be illuminated by the behavior of termites. My brain is a
sloppy computer with a flicker of consciousness and the illusion of free will.
I’m anything but human.
While it would take more time and
space than I have here to refute these views, I’d like to suggest why I
stubbornly continue to believe that I’m a human being — something more than
other animals, and essentially more than any computer.
Let’s begin with ethics. Many
organisms carry genes that promote behavior that benefits other organisms. The
classic example is ants: every individual insect is ready to sacrifice itself
for the colony. As Edward O. Wilson explained in a recent essay
for The Stone, some biologists account for self-sacrificing behavior by the
theory of kin selection, while Wilson and others favor group selection.
Selection also operates between individuals: “within groups selfish individuals
beat altruistic individuals, but groups of altruists beat groups of selfish
individuals. Or, risking oversimplification, individual selection promoted sin,
while group selection promoted virtue.” Wilson is cautious here, but some “evolutionary
ethicists” don’t hesitate to claim that all we need in order to understand
human virtue is the right explanation — whatever it may be — of how altruistic
behavior evolved.
I have no beef with entomology or
evolution, but I refuse to admit that they teach me much about ethics. Consider
the fact that human action ranges to the extremes. People can perform
extraordinary acts of altruism, including kindness toward other species — or
they can utterly fail to be altruistic, even toward their own children. So
whatever tendencies we may have inherited leave ample room for variation; our
choices will determine which end of the spectrum we approach. This is where
ethical discourse comes in — not in explaining how we’re “built,” but in
deliberating on our own future acts. Should I cheat on this test? Should I
give this stranger a ride? Knowing how my selfish and altruistic feelings evolved
doesn’t help me decide at all. Most, though not all, moral codes advise me to
cultivate altruism. But since the human race has evolved to be capable of a
wide range of both selfish and altruistic behavior, there is no reason to say
that altruism is superior to selfishness in any biological sense.
In fact, the very idea of an “ought”
is foreign to evolutionary theory. It makes no sense for a biologist to say
that some particular animal should be more cooperative, much less to claim
that an entire species ought to aim for some degree of altruism. If we
decide that we should neither “dissolve society” through extreme selfishness,
as Wilson puts it, nor become “angelic robots” like ants, we are making an
ethical judgment, not a biological one. Likewise, from a biological perspective
it has no significance to claim that I should be more generous
than I usually am, or that a tyrant ought to be deposed and tried. In short, a
purely evolutionary ethics makes ethical discourse meaningless.
Some might draw the
self-contradictory conclusion that we ought to drop the word “ought.” I prefer
to conclude that ants are anything but human. They may feel pain and pleasure,
which are the first glimmerings of purpose, but they’re nowhere near human
(much less angelic) goodness. Whether we’re talking about ants, wolves, or
naked mole rats, cooperative animal behavior is not human virtue. Any
understanding of human good and evil has to deal with phenomena that biology
ignores or tries to explain away — such as decency, self-respect, integrity,
honor, loyalty or justice. These matters are debatable and uncertain — maybe
permanently so. But that’s a far cry from being meaningless.
Next they tell me that my brain and
the ant’s brain are just wet computers. “Evolution equipped us … with a neural
computer,” as Steven Pinker put it in “How the Mind Works.” “Human thought and
behavior, no matter how subtle and flexible, could be the product of a very
complicated program.” The computer analogy has been attacked by many a
philosopher before me, but it has staying power in our culture, and it works in
both directions: we talk about computers that “know,” “remember,” and “decide,”
and people who “get input” and “process information.”
So are you and I essentially no
different from the machines on which I’m writing this essay and you may be
reading it? Google’s servers can comb through billions of Web sites in a split
second, but they’re indifferent to what those sites say, and can’t understand a
word of them. Siri may find the nearest bar for you, but “she” neither approves
nor disapproves of drinking. The word “bar” doesn’t actually mean anything to a
computer: it’s a set of electrical impulses that represent nothing except to
some human being who may interpret them. Today’s “artificial intelligence” is
cleverly designed, but it’s no closer to real intelligence than the
letter-writing automatons of the 18th century. None of these devices can think,
because none of them can care; as far as we know there is no program, no
matter how complicated, that can make the world matter to a machine. So
computers are anything but human — in fact, they’re well below the level of an
ant. Show me the computer that can feel the slightest twinge of pain or burst
of pleasure; only then will I believe that our machines have started down the
long road to thought.
The temptation to reduce the human to
the subhuman has been around for a long time. In Plato’s “Phaedo,” Socrates
says that some philosophers would explain his presence in prison by describing
the state of his bones and sinews, but would say nothing about his own
decisions and his views of what was best — the real reasons he ended up on
death row. “They can’t tell the difference between the cause and that without
which the cause couldn’t be a cause,” he says. Without a brain or DNA, I
couldn’t write an essay, drive my daughter to school or go to the movies with
my wife. But that doesn’t mean that my genes and brain structure can explain
why I choose to do these things — why I affirm them as meaningful and valuable.
Aristotle resisted reductionism, too:
in his “Politics,” he wrote that bees aren’t political in the human sense,
because they can’t discuss what is good and just. People are constantly arguing
about what would benefit their country most, or which arrangement is fairest,
but bees don’t start Occupy the Hive movements or call for a flat tax on
pollen. Certainly other animals have complex social arrangements; but they
can’t envision alternative arrangements, consider them with at least the
aspiration to impartiality, and provide reasons on their behalf.
So why have we been tempted for
millenniums to explain humanity away? The culprit, I suggest, is our tendency
to forget what Edmund Husserl called the “lifeworld” — the pre-scientific world
of normal human experience, where science has its roots. In the lifeworld we
are surrounded by valuable opportunities, good and bad choices, meaningful
goals, and possibilities that we care about. Here, concepts such as virtue and
vice make sense. Among our opportunities are the scientific study of ants or
the construction of calculating machines. Once we’ve embraced such a
possibility, it’s easy to get so absorbed in it that we try to interpret
everything in terms of it — even if that approach leaves no room for value and
meaning. Then we have forgotten the real-life roots of the very activity we’re
pursuing. We try to explain the whole in terms of a part.
For instance, one factor that makes
the computer-brain analogy seem so plausible is the ubiquitous talk of
“information.” The word is often thrown around with total disregard for its
roots in the lifeworld — specifically, the world of mid-20th-century
communications. The seminal work in information theory is Claude Shannon’s 1948
paper “A Mathematical Theory of Communication,” which is mainly about the
efficiency with which a certain sequence (say, a set of dots and dashes) can be
transmitted and reproduced. There is no reference here to truth, awareness or
understanding. As Shannon puts it, the “semantic aspects of communication are
irrelevant to the engineering problem.” But concepts from information theory,
in this restricted sense, have come to influence our notions of “information”
in the broader sense, where the word suggests significance and learning. This
may be deeply misleading. Why should we assume that thinking and perceiving are
essentially information processing? Our communication devices are an important
part of our lifeworld, but we can’t understand the whole in terms of the part.
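To see how little Shannon’s notion has to do with meaning, it helps to recall his definition of the information content of a source as its entropy (the formula below is a standard restatement following his 1948 definitions, not a quotation from that paper):

$$H = -\sum_{i} p_i \log_2 p_i \ \text{bits},$$

where $p_i$ is simply the probability of the $i$-th symbol. Nothing in the formula refers to truth, awareness or understanding; it measures only how unpredictable the next symbol is.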
By now, naturalist philosophers will
suspect that there is something mystical or “spooky” about what I’m proposing.
In fact, religion has survived the assaults of reductionism because religions
address distinctively human concerns, concerns that ants and computers can’t
have: Who am I? What is my place? What is the point of my life? But in order to
reject reductionism, we don’t necessarily have to embrace religion or the
supernatural. We need to recognize that nature, including human nature, is far
richer than what so-called naturalism chooses to admit as natural. Nature
includes the panoply of the lifeworld.
The call to remember the lifeworld is part
of the ancient Greek counsel: “Know yourself.” The same scientist who claims
that behavior is a function of genes can’t give a genetic explanation of why
she chose to become a scientist in the first place. The same philosopher who
denies freedom freely chooses to present conference papers defending this view.
People forget their own lifeworld every day. It’s only human — all too human.
Richard Polt is a professor of philosophy at Xavier University in Cincinnati. His books include “Heidegger: An Introduction.”