An offshoot of effective altruism is longtermism, which is basically the idea that we have as much ethical responsibility to address threats to humanity far off in the future as we do threats in the present. I’m curious what you make of that. I accept the idea that when suffering occurs is not affected by time. If I could be certain that something I did now would do more to reduce suffering in 100 or even 1,000 years than anything I could do to relieve suffering in the present, then sure, I would think that would be the right thing to do. But we don’t have that certainty about the future. That’s a big barrier to making it a real priority to think about the future as more important than the present. The other question that needs to be raised is a deep philosophical question about the risk of extinction of our species. That’s what a lot of longtermists are focused on. They’re saying: “If our species gets through the next century or two, then it’s likely that humans will be around not just for thousands but for millions of years, because by then we’ll be able to colonize other planets. If we become extinct, none of that will happen, so we must give a very high priority to reducing the risk of extinction of our species.” And that raises the question of, is it as bad that beings do not come into existence and therefore do not have happy lives as it is that an already-existing being, who could have a happy life, is prevented from having a happy life?
What’s the answer? I think it’s still an open question. It would be a tragic loss if our species became extinct, but how do we compare that tragedy with tragedies that might occur now to a billion people? I can’t give a good answer to that. I think it could be reasonable to say, “No, we should focus on the present, where we’re going to have greater confidence in what we’re doing,” than focus on the long-term distant future.
I’m just a ding-dong, but it seems that there are common-sense objections to longtermism. It’s like, if I see a fire in my yard that I could put out and save some people, shouldn’t I obviously do that rather than say, “Well, I’m working on a fire-retardant system that could save millions of lives at some undefined point in the future”? It runs into what appears to be a common-sense problem, because our intuitions obviously are to help the people now. We’ve evolved to deal with problems that are right there and now. Our ancestors survived because they dealt with those problems. They didn’t survive because they had strong intuitions that we ought to act for the distant future — because there was nothing they could do about the distant future. We are now in a position where we have more influence on whether there will be a human future, so I’m inclined not to trust those common-sense intuitions. But my answer would be: Sure, you should put out the fire. Not because that’s just your common-sense intuition but because you can be highly confident that you can do a lot of good there, and anyway, you can put out the fire and go back to your work on the fire retardant tomorrow.
Not trusting common-sense intuitions is sort of Peter Singer’s whole bag. I think that’s right. Don’t trust your intuitions to think that you ought to help your neighbors in your affluent community rather than distant people elsewhere in the world that you can’t relate to; don’t trust your intuitions in thinking that it’s only humans that matter or that human suffering is always a higher priority than nonhuman animal suffering. I’m somewhat skeptical about trusting those moral intuitions.
You take these moral intuitions about things that people hold closely, like what we eat or how we spend our money, or even the notion that we’re good, and you say, “Hold on a second, are you really behaving ethically?” Where do you think your impulse to do that comes from? It came gradually. I started thinking about particular issues where it was obvious that you could reduce suffering but people had intuitive reasons for not doing so. One of those was the area of biomedical ethics. I was interested in issues about death and dying, and I’ve for a long time been a supporter of medical assistance in dying. When I started talking to people about that, especially doctors, they would say, “Look, it’s all right for us to allow people who are suffering to die by not treating them, but we can’t cross that line and actually assist them in dying.” And I would say, “Well, why?” That example is one where I was critical of intuitions. They were perhaps religiously based intuitions. The fact that I wasn’t religious may have led me to challenge those intuitions. But then I started thinking about a whole range of other intuitions that were probably not religious but may be based in what helped our ancestors to survive.
I was reading the academic journal that you coedit, The Journal of Controversial Ideas. The idea behind the journal is to give rigorous academic treatment and a platform to ideas that might be seen as beyond the pale. There are plenty of what seem to me to be relevant arguments in the journal to do with things like public health and academia. And then there are pieces about when blackface should be allowed or arguing for zoophilia — bestiality. Who’s clamoring for deeper arguments in support of these things? What is the point other than provocation? I think both those issues, although they’re certainly far less significant than many of the issues that articles in the journal discuss, have some significance. The question about blackface is relevant to drawing lines about what people are going to get criticized for. The article takes a nuanced approach to that. It acknowledges that there would be cases in which using blackface would be offensive and inappropriate, but it also refers to other cases in which it’s not objectionable. So if people are going to be outed in some way for doing this, and it happened to Justin Trudeau, then you do need to say, what are the cases in which this is not such a bad thing to do? And the case of zoophilia —
Yeah, tell me that one. Well, people go to jail for this, and they may not be causing any harm. It’s reasonable, if somebody’s going to be sent to prison, to ask: “Have you harmed any sentient being? Should this be a crime? Why should it be a crime?” Now, there’s maybe a very small number of cases that would get prosecuted, but I think that’s enough justification for airing the issue.
People have criticized you for not taking into account aspects of personal experience about which you might be ignorant. The example I’m thinking of is your idea that parents should have the right to terminate babies born with severe disabilities that might cause them to suffer terribly. Critics say that you can’t wrap your head around the fact that lives very different from your own might be just as valuable or involve as much happiness, and that some of your ideas also might be stigmatizing or objectifying to nonnormative bodies. Is there anything to the criticism that rationally theorizing from a distance is missing something essential? Rationally theorizing from a distance easily can miss something essential, certainly, but I don’t think that applies to my views about these cases. I formed those views after having discussions not only with doctors in charge of treating infants born with severe disabilities but also some of the parents of those infants, of those children who were no longer infants. I had discussed this with a number of people, both in person and in letters that I had from people. I remember one who said, “The doctors got to play with their toys” — meaning their surgical equipment and their skills — “at helping my son to survive, and then they handed the baby over to us, and the result has been that my child has suffered for nine years.” I do find it strange that some people in the disability movement who are mentally as gifted as anyone but happen to be in wheelchairs think that the fact that they are in a wheelchair gives them greater insight into what it’s like to be a child with severe disabilities that are not just physical but also mental. Or what it’s like to be the parents of children like that.
I don’t know that they’re saying that it gives them particular insights into that specific example. I think they’re saying they might have specific insights into what it’s like to live a different kind of life that you don’t have and can’t access. That’s true, but that’s generally not the kind of case that I’m talking about in suggesting that parents ought to have the option of euthanasia in cases of very severe disabilities.
Is there any way in which airing your more controversial philosophical views has been detrimental to your larger project? This is the idea that people might be turned off by what Peter Singer has to say about people with disabilities, and therefore they’re not going to pay attention to what he has to say about animal rights. There is a possible trade-off, yes, but it’s difficult as a philosopher, because I get asked these questions, and if I start to prevaricate or be fuzzy about the answer, my reputation as a philosopher falls. It’s important to try to follow the argument wherever it goes. There may be some cost to it, but it’s hard to balance those costs against the fact that you’re regarded as a rigorous, clear-thinking philosopher, and people pay more attention to what you say for that reason.
I read your memoir [“Pushing Time Away”]: Three of your four grandparents died at the hands of the Nazis in the Holocaust. And about your grandfather, David Oppenheim, who was a collaborator of Freud’s, you wrote that he spent his life trying to understand his fellow human beings yet seems to have failed to take the Nazi threat to the Jews seriously enough; maybe he had “too much confidence in human reason.” Do you see your grandfather’s work and your work as interacting with or paralleling each other in any way? Possibly paralleling, but not really interacting, because I didn’t read my grandfather’s work until the late 1990s, and I’d already written “Animal Liberation,” I’d already written “Practical Ethics,” I’d already written “Rethinking Life and Death.” What you could point to, I suppose, would be that some of my grandfather’s general attitudes were passed down to me by my mother. That would include the fact that I’m not religious. Some of that did get passed down to me. But not in terms of my specific views about suffering. Now, were they influenced by the knowledge of the suffering that the Nazis inflicted on my grandparents and other members of my extended family and on my parents by driving them out of their home in Vienna? That might have led to why trying to prevent unnecessary suffering has been a leading impulse in the work that I’ve written.
But do you think it did lead to that? I honestly don’t know. I don’t have the self-awareness to say to what extent knowledge of the Holocaust background in my family was decisive in leading me in that direction.
This is a self-awareness question: When your mom was dying from Alzheimer’s — It was some form of dementia. I don’t know if it was Alzheimer’s.
You spent a fair amount of money on providing care toward the end of her life. Which is obviously completely understandable. But was that the most utilitarian use of your money? And if not, did that teach you something about the limits of rational thinking when it comes to helping people? It was probably not the most utilitarian thing to do with those resources, but there would have been personal cost to me. Both in thinking I hadn’t looked after my mother, and also, I had a sister — if I had said, “You can pay for our mother’s care, but I’m not going to,” that would have totally disrupted the warm relationship that I had with my sister. Now, you could argue the money could have helped many people in important ways, and therefore I was being, in a sense, self-interested in not wanting to cause that family rupture. That gets to your second question: Does it show there are limits? Yes, I think there are. Certainly, I’m aware that there are limits to things that I am prepared to do in order to produce the greatest good.
Are those limits a version of common sense? I think they’re a version of what we can reasonably expect people to do. Maybe it’s not good to ask people to do more than we can reasonably expect them to do. There’s a distinction between what would be the right thing to do to the extent that we act in a perfectly ethical way and what is the right thing to ask others to do, perhaps even to do yourself, that might take more account of the fact that we are not perfectly rational, perfectly ethical beings.
One of the things I find myself struggling with about your ideas relates to what the philosopher Derek Parfit called “the repugnant conclusion”: If you follow some of your ideas through to their logical conclusions, you can wind up in morally disturbing places. An example would be — tell me if I’m wrong — that according to your thinking a large number of people with lives barely worth living could be considered better than a smaller number of people living great lives. Your response to that is what? That it’s unlikely we’d ever follow these ideas through to their logical conclusions? My response on that particular case is that that’s not my clear view. I’m still somewhat open minded on that issue. But maybe you’re asking a broader question about whether I hold views that leave me uncomfortable in some way. And, yes, there are views that I hold that leave me quite uncomfortable.
Like what? Views about distribution of well-being. Suppose that you have the choice of helping people who are very badly off by a small amount or helping people who are reasonably well off by a much larger amount, and you can’t do both. You can imagine cases where you spend a vast amount of resources making a small number of people who are really badly off slightly better off, or you make, let’s say, 95 percent of the population significantly better off. I think the right thing to do is to make 95 percent of the population significantly better off. But I’m uncomfortable about the thought that there are people who are worse off, and you could help them, but you don’t.
Not in a thought-experiment way. In a practical, real-life way. In a practical, real-life way, punishing people who are really evil and have done horrible things — using the death penalty. I can feel the pull of that. I feel the retributive sense of that. But I’m not a retributivist, and I think the punishment has to be justified in terms of its consequences, not just the fact that a bad person is now dead.
I suspect most people see themselves as, on balance, a net good for the world. But how does someone know? Very few people are doing enough to make the world a better place. They’re probably not. I don’t think that I’m doing enough to make the world a better place. But how would you know? You would look around for other ways of doing more to make the world a better place, and you would say, “There aren’t any.” That’s the extreme position.
Where’s the line short of that? The line short of that is to say: “I’m doing a lot more than the current social standard is. I’m trying to raise that standard. I’m setting an example of doing more than the current standard.” If you can say those things, you can be content with what you’re doing.