Eva Dale 0:01 From the heart of the Ohio State University on the Oval, this is Voices of Excellence from the College of Arts and Sciences, with your host, David Staley. Voices focuses on the innovative work being done by faculty and staff in the College of Arts and Sciences at The Ohio State University. From departments as wide ranging as art, astronomy, chemistry and biochemistry, physics, emergent materials, mathematics and languages, among many others, the college always has something great happening. Join us to find out what's new now. David Staley 0:32 Joining me today in the studio is Justin D'Arms, who is Professor of Philosophy at The Ohio State University College of the Arts and Sciences. He has been a recipient of grants from the National Endowment for the Humanities and the John Templeton Foundation, and fellowships from the Charlotte Newcomb Foundation and the Princeton University Center for Ethics and Human Values. His research interests include moral theory, metaethics, reason and rationality, evolutionary theory and the philosophy of emotion, which we'll be talking about in some detail, I hope. Welcome to Voices, Dr. D'Arms. Justin D'Arms 1:08 Thank you very much. It's nice to be here. David Staley 1:10 So, I want to start by discussing your upcoming book "Rational Sentimentalism", which you have described as, and these are your words, an articulation and defense of a sentimentalist theory of value. But let's start, first of all: tell us what you mean by sentimental values, or sentimentalist values? Justin D'Arms 1:28 Sure, yeah. So in philosophy, there are a number of ways that people have thought about the idea of something being a value, or being valuable, not in an economic sense, but in a sense of being sort of worthy of pursuit or worthy of concern, being something that actually matters, right?
And so when we think about morality, that's usually assumed to be associated with a certain range of values: values of acting rightly, values of being a virtuous person. One question is sort of how to understand the nature and basis of those values. One substantial tradition in philosophy is a kind of rationalist tradition, which tries to understand value in terms of something like reasons that a rational being has to engage with something. And the thought is that morality might somehow be founded in that. The sentimentalist tradition, and that terminology gets its start in a way from a contrast with rationalism, suggests that rather than thinking that the source of moral values is in our rational nature, and that what it is to be a being that's concerned with moral values is to be a special kind of rational being, which might be equally true of humans and various possible aliens, angels, whatever you like, no matter how they're embodied, no matter what their feelings are, the sentimentalist approach has tried to understand morality as a product somehow of our sentiments or emotions. And so the idea has been to try to give an account of how we might understand concerns with things like right and wrong by appealing to emotional responses. In our book, which I'm writing with a colleague, Daniel Jacobson, we've been especially focused not so much on right and wrong, though we talk about those, but on various values that seem to be more obviously dependent on emotions, but which people have not thought carefully about how to understand. And I have in mind values like something being funny, or disgusting, or shameful; each of those is tied to some particular sort of human emotional response. But they're not just names for things we in fact react to. They are something like standards for what to be amused by and what to be disgusted by and what to be ashamed of.
David Staley 4:02 So what are some of the findings, then, of the book, especially around these issues of things that are funny or disgusting? Justin D'Arms 4:08 Yeah, so one kind of argument that we make is that people can have very different senses of humor and very different judgments about what things are funny, and yet, if, as we assume, amusement is a pan-cultural human emotional response, there's a way of understanding disagreements over what's funny as disagreements about a common question we're all trying to work out, as it were: what are the things in the world that deserve this sort of response, that merit it? And this idea seems, I think, quite odd to some people when they think about funniness, because they think, oh, you know, different people just laugh at different things, and what would it be to disagree about this? But I think actually we do kind of negotiate disagreements over comedy. We may not fight about them as much as we might fight about moral questions or political questions, because there's perhaps less at stake. But then, when you move from funny to, say, shameful, we're thinking about what it takes for a circumstance to be such that one ought to be ashamed of it, that it is fitting to be ashamed of. And we argue that values like that deserve a lot more attention than they've tended to get in the philosophical discussions where morality has been sort of at the center of the stage, because values that are tied to our pan-cultural human emotional responses are values that are going to be shaping a lot of the way that we navigate the world and a lot of the way that we think about how to act. And so the shameful is a very important value, even if it's not necessarily about morality.
And one of the things that we try to do in the book is look carefully at different discrete emotional types, and talk about how the differences between them make a difference to the standards that we can sensibly hold about when they are fitting or appropriate to have. David Staley 6:15 And when you say emotional types, to what are you referring there? Justin D'Arms 6:19 There are controversies about this, as about much of what I'm saying. So I believe that there are certain categories of emotion that are what philosophers sometimes call natural kinds. They have a nature. They are psychological structures of a certain sort that human beings are prone to the world over. Why, exactly, is open to debate; I incline to some evolutionary hypotheses about this. But for instance, shame, anger, fear, disgust, guilt, envy, pride: these are types of emotions, and while instances of them can vary, and expression rules can vary, and what particular things elicit them in different cultural contexts or from different individuals can vary, there are things in common to humanity about these kinds which end up making a big difference to what matters to us. So for example, take envy. As I think about it, envy is an emotion that involves being pained at a difference between yourself and another person, in something we might call a difference in possession or position, being bothered by the difference itself, being bothered that the rival has the thing, where it's not simply a matter of, here's a thing I don't have, and I sure would like to have it. From envy's point of view, it's worse that the rival has it than that no one has it, if you follow me. So I think there's reason to believe that that pattern of response, that way of feeling, is something that is experienced by human beings, and perhaps other animals, I don't know, but certainly by human beings in a variety of different cultures, for a common reason: it's part of our fund of psychological response.
David Staley 8:26 So that's what I wanted to dive into. So yes, envy is something that many different cultures feel; they feel it differently, presumably, or is there this sort of, what, platonic ideal of envy? Justin D'Arms 8:40 Yeah. So I think that envy is a certain kind of psychological structure which human nature comes equipped with, such that a well-functioning human being raised in reasonably effective conditions of ordinary, normal enculturation, no matter whether that's in the United States, in China, in Papua New Guinea, is going to emerge with this pattern of response. And what the commonalities are is a sort of functional syndrome of being disposed to code certain people as rivals, being bothered by what rivals have that one does not oneself have, or advantages rivals are gaining that one is not oneself gaining, in a way that's not just continuous with a general desire to be better off, but distinct from that. So it involves a kind of positionality, and I think a tendency to see things positionally, and to be motivated by seeing things positionally in this way, is a feature of a common human nature. David Staley 9:54 So, and I don't know if you address this in the book, if this is your intention, but what is it that makes something funny? Justin D'Arms 9:59 Yeah, so I don't think that one can give a good theory of sort of the necessary and sufficient conditions something has to meet in order to be funny. I think being funny is being a proper object of amusement. There are some things we can say about the kinds of things that tend to be funny: certain kinds of crossing of expectations, certain kinds of norm violations, certain kinds of incongruities. These are sort of standard categories of funny things. And at various times when philosophers or others try to give a sort of theory of the funny, they usually highlight one or more of those kinds of things. I think that funny is a response-dependent concept.
So what you have to understand in understanding the idea that something is funny is not that it's a member of some class, like being a benign norm violation. What you have to understand is that it's a proper object of this response, one that you and I are both acquainted with, because we have the human repertoire. And so, you know, if beings came down from another planet and tried to kind of give us a science of funniness, and tell us that it turned out that we were wrong about the funny in certain ways, that some things are funny that we aren't amused by, and other things that we are amused by aren't funny, there's a way in which they would just be missing the point. If they don't have a funny bone, so to speak, they're not in a position to have opinions about which things are and are not funny, because having an opinion about that is having an opinion about what things merit this response. David Staley 11:33 These values, funny, shameful, envy, disgust, these sorts of things; you've indicated that they've been sort of under-theorized by philosophers. Why is that? What explains that? Justin D'Arms 11:43 Well, I think there are a couple things. One is that philosophers in general have tended to be impressed with rationality as the fundamental tool of intellectual inquiry, and so they've looked for theories of values that they could argue were grounded in these very general, abstract capacities that we have to be rational. And part of what they've liked about that, and this idea goes back quite a long way, is the idea that there's a kind of universality to these values: they can be sort of demonstrated to be standards that you reject on pain of a kind of inconsistency, whereas values that are tied to our emotions are much more contingent, much less guaranteed to be universal.
I'm interested in the ones that are universal to human beings, in the sense that I think they're tied to universal emotions, but other beings might have a different set of emotional responses, and maybe some of the emotional responses that I've been asserting are human-wide are actually more parochial. Maybe, if I'm wrong, envy is like that, you know. So I think some people think that envy is a product of a certain kind of competitive culture, and that it is not part of the ordinary architecture of the mind. So I think part of the answer to your question of why philosophy has under-investigated these things is that philosophy has looked for values that could be understood by rationality alone. And maybe another part of it is that philosophers tend to be moralists, and they tend to think that values that compete with morality are not really valuable: whenever some putative value is something that it would be morally better not to care about, then it's not a value at all. So for example, caring about positional goods, the root of envy, puts you in a position where you are motivated in ways that, from the point of view of morality and justice, don't make any sense, because you'd like to see somebody lose something even if you don't gain anything. And how could that be reasonable or rational, right? So then they want to go from that thought to the thought that therefore envy is a kind of mistake. And similarly, I think they want to think that shame is a kind of mistake, at least when it attaches to things for which we are not responsible. Philosophers are okay with guilt, because guilt is an emotion that is thought to be, I think plausibly, tied to something about what we've done, to our actions, over which we suppose ourselves to have a kind of responsibility. And so guilt can sort of enforce moral conduct by being something that you should feel if you've acted in a way you morally ought not to have acted.
But shame attaches to things that aren't necessarily connected to your motives and to your actions. It can attach to your appearance. It can attach to the family you were born into, or the things that your president is doing. And when that kind of thing happens, it's easy for a certain kind of moralistic sensibility to think, well, it's a mistake to be ashamed of that, because it's not your fault what they did. But I think that's actually a misunderstanding, in a way. I think that shame is a really important value. It's got to do with what reflects well and badly on us, and as human beings that's important to us. And here's the bad, but very human, news: things can reflect badly on you even if you're not responsible for them. David Staley 15:36 Well, given that, I'm struck by the title you gave to the book, "Rational Sentimentalism". Why that title? Why is "rational" in the title? Justin D'Arms 15:45 Yeah, good. So it's a little bit ironic. The idea is that we want a sentimentalism that is capable of taking seriously the idea that we're not stuck with whatever actual emotional responses we happen to have, and we don't have to just reify, as it were, whatever we're prone to be ashamed of as shameful, and whatever we're prone to be angry at as a transgression. There's room for thinking to affect what we become ashamed of and what we regard as a proper object of anger. And by thinking in these terms, we can change, to some degree, what we're ashamed of and what we're not. So, for example, somebody might begin by being sort of ashamed of their sexuality, and then, through thinking and discussion, come to be able to recognize that this doesn't actually reflect badly on them. Now that, I take it, is a kind of rational role, a role of reasoning or rationality, in trying to shape what standards of emotions we have. It's sentimentalist, though, because I think that these emotions are not just completely up for grabs for reconfiguration by critical thinking.
We can't just become ashamed of whatever we like and not ashamed of whatever we like. There are some tracks laid down by the nature of these responses which make it impossible to internalize a kind of sensibility that's never ashamed of anything unless you're responsible for it, for example. That's just not the way that shame works. Gretchen Ritter 17:29 Hi, I'm Gretchen Ritter, Executive Dean and Vice Provost for the Ohio State University's College of Arts and Sciences. Did you know 16 of our programs are ranked in the top 25 in U.S. News & World Report, with nine of those 16 in the top 10? That's why we say that the College of Arts and Sciences is the intellectual and academic core of the Ohio State University. Learn more about the college at artsandsciences.osu.edu. David Staley 18:09 So, in studying emotions, you seem to be treading on terrain that's usually examined by psychology, so what is it that distinguishes what a philosopher does from, say, what a psychologist does? Justin D'Arms 18:23 Yeah, good. I mean, one thing to say about this is that, for a long time, psychology was just a branch of philosophy. The great philosophers had theories of the mind and theories of human nature. They also had physical theories and biological theories. But now those have been sort of sent off into various kinds of disciplinary niches, and yet, to some extent, the questions that psychology is interested in remain philosophical questions. They're questions about why people do what they do, and to what extent, when people do what they do, we can regard it as a kind of performance error, and to what extent we should regard it as things working right, the system thinking as it should. Those are kinds of normative appraisals that get slipped into psychological thinking. I think that psychology departments are largely separated from philosophy by their methods of inquiry, more than by the questions they're most interested in.
Now, of course, once you start working with a certain set of investigative tools, that also shapes the questions you want to ask, and so some of the things that end up getting studied in psychology are not things that philosophers are prone to do. We don't collect data. I read the work of psychologists. Or actually, when I say we don't collect data: few of us collect data. There is a movement called experimental philosophy, in which people are beginning to do some experiments, or have been for a while now, doing experiments that try to put old philosophical questions to the test in new ways. But for the most part, I think of the stuff that I'm doing that's closest to psychology as being sort of broad-level theory construction and conceptual cleanup, of a sort that a psychologist could recognize and potentially agree with or take issue with. In fact, I've published some of my work in psychology journals, and I've taken issue with some views of some psychologists, defending views of other psychologists. So I see myself as engaged in a set of questions that are sort of broadly continuous with psychology. I'm more upfront about being willing to say I'm interested not just in when we do feel shame and what feeling shame is, but also in when we should feel shame, and psychologists tend to be allergic to that sort of question. David Staley 20:48 Well, as I was introducing you, I noted that one of your research interests is metaethics. I'm just curious to know: what is metaethics? Justin D'Arms 20:55 Yeah, metaethics is an area of philosophy that tries to treat ordinary, first-order ethical thinking as a kind of phenomenon, and to think about what it is doing. So think of disputes between utilitarians and Kantians over whether or not it's okay to treat someone badly in order to produce a better outcome for others, with utilitarians classically saying yes, that can sometimes be permissible, the greatest good for the greatest number, right?
And Kantians saying no, it's important to treat every individual member of humanity with respect. And part of respect is not treating them as a mere means, which entails that I cannot, as it were, use you as a tool to create more benefit elsewhere. That's a first-order ethical dispute: they're fighting over whether or not this particular conduct is right or wrong. Metaethics thinks about how we should understand the character of disputes like that. For example, should we think of them as disputes about a matter of fact, or should we think of them as competing expressions of attitudes of some kind, where they're quite unlike disagreements over whether or not the earth is warming? Disagreements over whether or not the earth is warming are disagreements over a matter of fact, but on one metaethical view, disagreements about whether or not it's okay to perform this action that uses one person for the benefit of others aren't disagreements about a matter of fact. They are disagreements in attitude. A different view holds that they are disagreements about a matter of fact, and it tries to give an account of the nature of the facts in question. Metaethics is also interested in thinking about what we call the epistemology of such facts. So the question is like, okay, suppose that there are facts about moral questions like this; what does the epistemology of morality look like? How do we come to know these moral facts? Are these things that we know by means of deductive reasoning? Are they things that we know by means of intuition? Are they things that we know by means of sentimental response? Right? So questions about what the nature of moral properties is, and about what we are doing when we engage in moral disagreement, or when we have a moral thought: is that something like attributing some feature to reality, or is it something more like taking a stance or having an attitude? These are the questions of metaethics. David Staley 23:28 That's fascinating.
So how did you end up as a philosopher? I'm interested in the journey that you took. Were you always going to be a philosopher? Justin D'Arms 23:35 No, I didn't know what philosophy was. Like a lot of people, there was no philosophy in my high school, right? So I got lucky and visited a friend in college when I was still in high school, and went to his philosophy class, and thought it was incredible and exciting. I couldn't believe that people were actually spending time trying to think carefully through, like, how you knew that you were in a room rather than a brain in a vat, and thinking about it in what seemed to me to be genuinely insightful and productive ways. So I went to college with an inkling that philosophy was something that I would want to study, but I wasn't thinking that I was going to be a philosopher at all. I sort of assumed that I would go to law school. I felt like that's what people with philosophical minds mostly did. And then I went and worked for a couple of years. I worked in a law firm as a paralegal, and I found out what at least that part of the law was like. And I missed being around people who were thinking about other kinds of questions. I missed the kind of flexibility to get interested in a topic and think about it and have that be the work that you were doing. So after a couple years of that, I sort of thought, hey, maybe I should actually be thinking about whether I could do this. And then I got lucky. David Staley 24:56 And graduate school at the University of Michigan, I believe. Justin D'Arms 24:59 Yes, that's right. David Staley 25:00 We're not going to hold that against you. So, you say the book is about ready to be published, "Rational Sentimentalism" is about ready to publish. So what's next after this book? What's next for your research? Justin D'Arms 25:09 Yeah, so I have a couple of different sorts of things that I'm interested in thinking about.
There are some questions that I feel like come out of the way that I've been thinking in the book, which the book isn't going to get around to answering. A big one is how to think about standards of virtue, or what kind of person to be. So I've been focused on the idea that certain emotional responses can be fitting or apt even if there's something morally ugly about them. I think that's true about envy, for instance. And they can be fitting or apt even if it would be sort of morally nicer if they weren't. I think that's true about shame, for instance. Then ask: well, okay, so we've got all these different values, and some of them compete with morality, and others of them sort of work with morality. Is there a way of understanding what living a good human life looks like that pays its respects to morality but doesn't try to deny the significance of these non-moral and, in some cases, contra-moral values? Is there a way of articulating a vision of a virtuous character that is responsive to the diverse and plural and conflicting nature of these values? So that's one thing I'm interested in. I'm also a bit interested in some questions about philosophy and technology. For a while now, I have thought that maybe I could have something to say about ethical questions about autonomous vehicles. There's a way in which our moral system, our moral thinking, is built for human beings. Our moral expectations, what we take to be right and wrong, are guided by the fact that we use these standards on human beings, and human beings are not, say, animals, and also not, say, machines. So the moral system, as we understand it, takes its shape in part because of some facts about what we are like, and one of those things is that most people think that morality provides a kind of tolerance for taking a special interest in ourselves.
Some moral philosophers would disagree, but ordinary thinkers tend to think we're not expected to be, as it were, fully impartial between ourselves and our family, on the one hand, and people that we don't know around the world, on the other. And I think part of the reason why it seems reasonable to have a moral system that's like that, even though, of course, everybody around the world is equally a person and equally entitled to respect, and their pain matters just as much as mine does, is something about human nature. There's something a little puzzling about the license we give ourselves to be self-interested, but it's often thought to be something we're not going to get past. We're not going to become truly impartial, so morality shouldn't expect us to. But what about automated systems that we can create? Right? We can think about what the standards of morality for automated systems ought to look like, and I bet we see no special reason to think that they should privilege themselves. So think about your automated car, right? An automated car that, as it were, took humanity as its model would have a moral system that said, okay, well, I don't want there to be crashes, and I don't want crashes to hurt other people, but in a given crash that I might have, I'd much rather that other people get hurt than that I get hurt. Now, when it comes to the car, of course, we don't want the car to be concerned with whether it gets hurt. But what about your car? Should it be especially concerned with whether you get hurt? Or should it be, as it were, impartial? If there's going to be a collision and it's trying to avoid it, should it be trying to minimize damage, or should it be trying in the first place to protect David?
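[Editor's note: the two policies contrasted here, a car that impartially minimizes total damage versus one that weights harm to its own occupant more heavily, can be sketched as simple cost functions. This is only a toy illustration of the thought experiment; the harm numbers, option names, and bias weight are all hypothetical, not anything a real manufacturer uses.]

```python
def expected_harm(option, occupant_weight=1.0):
    """Total expected harm of a maneuver, optionally weighting
    harm to the car's own occupant more heavily."""
    return occupant_weight * option["occupant_harm"] + option["others_harm"]

def choose(options, occupant_weight=1.0):
    """Pick the maneuver that minimizes (possibly weighted) harm."""
    return min(options, key=lambda o: expected_harm(o, occupant_weight))

# Hypothetical unavoidable-collision scenario: swerving protects
# the occupant but shifts greater harm onto others.
options = [
    {"name": "brake straight", "occupant_harm": 4, "others_harm": 1},
    {"name": "swerve",         "occupant_harm": 1, "others_harm": 6},
]

impartial = choose(options)                       # minimize total damage
self_biased = choose(options, occupant_weight=3)  # privilege the occupant
```

With these made-up numbers the two policies diverge: the impartial car brakes straight (total harm 5 rather than 7), while the occupant-biased car swerves, which is exactly the divergence at issue in the question above.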
I think there are interesting questions here. Of course, there are questions about, like, what cars would sell, and maybe we have to have the cars have a little bit of a bias toward their occupants in order to get anybody to buy those cars and put them on the road. But maybe we don't. Maybe if we point out that you're safer in that car, even if it's fully impartial in ways you aren't, you're still safer in that car than you would be driving around on your own, maybe that would be enough to justify a set of cars that were genuinely impartial. I think there are interesting ethical questions here that people haven't thought a lot about, and... David Staley 29:59 Well, and they're not just academic questions; these are the kinds of questions that the autonomous vehicle makers at least claim to be dealing with, yes? Justin D'Arms 30:07 Yes, that's absolutely right. And so this is a small area where I think my own thinking about philosophical questions of ethics, and sort of what the source of our moral convictions is, and this idea that the source of our moral convictions is this feature of humanity that's quite contingent, might actually prove to be interesting and relevant to some societal issues. David Staley 30:28 Justin D'Arms, thank you. Justin D'Arms 30:30 Thank you very much. Eva Dale 30:32 Voices from the Arts and Sciences is produced and recorded at The Ohio State University, College of Arts and Sciences, Technology Services Studio. Sound engineering by Paul Kotheimer, produced by Doug Dangler. I'm Eva Dale. Transcribed by https://otter.ai