Eva Dale 0:00 From the heart of the Ohio State University on the Oval, this is Voices of Excellence from the College of Arts and Sciences, with your host, David Staley. Voices focuses on the innovative work being done by faculty and staff in the College of Arts and Sciences at The Ohio State University. From departments as wide ranging as art, astronomy, chemistry and biochemistry, physics, emergent materials, mathematics, and languages, among many others, the college always has something great happening. Join us to find out what's new now.

David Staley 0:33 Robert Fox is Professor and Chair of the Department of Speech and Hearing Science at The Ohio State University College of Arts and Sciences. His research and teaching involve sociophonetics, speech perception, speech acoustics, and second language acquisition. He serves as Co-Director of the Speech, Perception, and Acoustics Laboratory, of which we're gonna learn more in just a moment. He is the author, or co-author, of numerous publications and presentations, and was elected a fellow of the Acoustical Society of America in 1996, and a fellow of the American Speech-Language-Hearing Association in 2012. Welcome to Voices, Dr. Fox.

Robert Fox 1:11 Oh, thank you very much.

David Staley 1:13 I note that you arrived here at Ohio State initially in the Department of Linguistics, and then switched to the Department of Speech and Hearing Science. And I wonder, first of all, is that uncommon, is that a typical sort of move for a linguist?

Robert Fox 1:26 It's not that uncommon, because speech and hearing deals a lot with speech, and sometimes syntax, and certainly with phonetics, which is what I am. I was a phonetician in the Department of Linguistics, but when I came to Speech and Hearing, I could call myself a speech scientist and do just about the same thing.

David Staley 1:45 So tell me about the, well, the similarities and the differences between, say, linguistics and speech and hearing science?
Robert Fox 1:51 Linguistics in general is much more concerned with theories, and supporting one theory or developing one theory or another, about -

David Staley 2:00 Theories of language.

Robert Fox 2:00 Theories of language, right. In speech and hearing, particularly with acoustic phonetics, we're much more interested in a lower level, specifically in the sounds themselves. And again, we do a lot of research that is somewhat directed toward individuals with communication disorders. So for example, one of our most recent areas of research is looking at speech processing problems that individuals with dyslexia might have. And dyslexia, of course, is, you know, reading problems, but they stem from phonological processing disorders, and phonological memory, which is just a fancy way of talking about the nature of sounds and decoding letters, etc., and how they fit to the sound patterns that people hear or produce. And we found out that individuals with dyslexia, both children and adults, have a difficult time with identifying dialects, and with the intelligibility of the speech if you degrade it using something which is called vocoding.

David Staley 3:10 Vocoding.

Robert Fox 3:11 Vocoding is when you divide up the frequency range of the speech into, say, four, eight, twelve different channels. You take that and you put noise into those channels. *mouth noises* Sounds like that.

David Staley 3:27 And that was Dr. Fox, putting in the noise, by the way.

Robert Fox 3:31 So what happens is that that mimics the kind of auditory input that cochlear implant patients have, and we find out that individuals with dyslexia do very poorly at intelligibility, listening to vocoded speech, much worse than individuals without dyslexia.
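[Editor's note: for readers curious what the vocoding Dr. Fox describes looks like computationally, here is a minimal illustrative sketch of a noise vocoder, the technique used in cochlear-implant simulations. This is not code from his lab; it assumes NumPy and SciPy, and the channel count and frequency range are arbitrary choices for the demo.]

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, lo=100.0, hi=4000.0):
    """Noise-vocode a signal: split it into n_channels frequency bands,
    extract each band's amplitude envelope, and use the envelopes to
    modulate band-limited noise in the same bands."""
    # Channel edges spaced logarithmically across the speech range
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal), dtype=float)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        # Amplitude envelope via the analytic signal
        env = np.abs(hilbert(band))
        # Noise carrier limited to the same band, modulated by the envelope
        carrier = sosfilt(sos, noise)
        out += env * carrier
    # Match the input's overall RMS level
    out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
    return out

# Demo on a synthetic vowel-like signal (a sum of harmonics)
fs = 16000
t = np.arange(fs) / fs
speech = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660, 880))
vocoded = noise_vocode(speech, fs, n_channels=4)
```

With four channels the output preserves the coarse spectral and temporal envelope of the speech but replaces its fine structure with noise, which is why it sounds like the hiss Dr. Fox imitates while remaining partially intelligible to typical listeners.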
David Staley 3:52 And so what is the relationship, because I associate, as you say, dyslexia with reading, what's the association between the auditory issue and the visual issue?

Robert Fox 4:00 Oh, well -

David Staley 4:01 Or is it all part of the same?

Robert Fox 4:03 It's probably all part of the same because, with the reading, they're decoding and they're trying to understand, you know, how the sounds go together. But when they're listening, they're also doing phonological processing, understanding what is being said, and then later on, what distinctive sounds are being used.

David Staley 4:23 So you used the term acoustic phonetics earlier, could you just give us a broad definition of what that might mean?

Robert Fox 4:28 Acoustic phonetics is actually doing acoustic analysis of a speech sound or a series of speech sounds, and just understanding the nature of those speech sounds. For example, I've done a lot of work all my career with vowels. So a lot of people say, oh, yeah, A, E, I, O, U, and I say no, there are at least 15, if not more, vowels in English. So you have "ee", "ih", "eh", "a", "ah", "oh", "oo", "ow", "ay", "oy", for example, and we look at how different dialects of American English differ in terms of the vowel sounds, particularly the acoustics of the vowel sounds, because vowels can be identified quite well in terms of their basic acoustic characteristics.

David Staley 5:18 So I want to ask you more about your study of dialects in a moment, but is it an oversimplification to say that speech and hearing science is more clinical, say, than linguistics, or has a clinical component?

Robert Fox 5:29 It has a clinical component, but a number of people who work in speech and hearing departments don't concentrate on communication disorders, but concentrate on basic acoustic processing, just basic science, because the basic science forms the foundation for students when they're learning to be speech-language pathologists or audiologists.
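[Editor's note: Dr. Fox's point that vowels are identifiable from their basic acoustic characteristics can be sketched very simply. The illustration below is not from his work: it classifies a vowel by comparing its first two formant frequencies, F1 and F2, against approximate reference averages in the spirit of Peterson and Barney's classic measurements for adult male speakers.]

```python
# Approximate reference F1/F2 values (Hz) for a few American English
# vowels, roughly following Peterson & Barney (1952) male averages.
REFERENCE = {
    "i (ee)": (270, 2290),
    "ɪ (ih)": (390, 1990),
    "ɛ (eh)": (530, 1840),
    "æ (a)":  (660, 1720),
    "ɑ (ah)": (730, 1090),
    "u (oo)": (300, 870),
}

def classify_vowel(f1, f2):
    """Return the reference vowel whose (F1, F2) pair is nearest to the
    measured formants, by squared Euclidean distance in formant space."""
    return min(
        REFERENCE,
        key=lambda v: (REFERENCE[v][0] - f1) ** 2 + (REFERENCE[v][1] - f2) ** 2,
    )

print(classify_vowel(280, 2250))  # a high front vowel, near "ee"
print(classify_vowel(700, 1100))  # a low back vowel, near "ah"
```

Dialect work of the kind described here compares where speakers' measured formant values sit relative to such reference points, since regional varieties shift those vowel positions systematically.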
David Staley 5:52 I'm interested in your research and I'm going to talk here in a moment about the laboratory, but I'm interested in your research in forensic phonetics. Can you tell us a little more about that really interesting-sounding term?

Robert Fox 6:03 Okay, forensic phonetics basically is the use of acoustic phonetics, primarily, although I do some transcription as well, to address issues in the courts. So, the very first time I did a case was in 2003, and this was a murder case. I suppose I can use his name because it's been on "Forensic Files".

David Staley 6:29 On television?

Robert Fox 6:30 Yes, on TV, it was an episode called "Chief Suspect". This individual, who was a policeman, came to his home and found his wife raped and killed, shot to death. And he got on 911, was describing the scene, and you could hear him walking around the house. And at one point, he says something like, "I gotta call Phelps, man." Now, this was not heard originally when they listened to the 911 tapes; this case actually became a cold case. But someone listened to the 911 tape again, several years later, and they came up with that, and there was no reason for him to say, as he later claimed, "I gotta call for help," because he was already calling for help, and people were already coming there. And so my job was to listen to that and to determine if he said, "call Phelps," or "call for help." And it constituted a major bit of evidence in the case; he was found guilty and sentenced to 11 years.

David Staley 7:35 Now, when you say analyze this, is this simply your ear listening to it, or is there technological assistance?

Robert Fox 7:41 Technological assistance. There are lots of speech analysis programs out there that allow you to do acoustic analysis of speech. It used to be that you had to use old spectrograph machines, and they would generate a piece of paper on which they would burn the information. Now we just have it on the screen.
As a PhD student at the University of Chicago, I probably burned a thousand of them. But even in the cold of the winter of Chicago, I had to keep the window open, because it's somewhat poisonous.

David Staley 8:12 Oh, dear. And are these classes that you teach, forensic phonetics? Is this something that students might be interested in, because I know there are a lot of students interested in forensic science?

Robert Fox 8:22 Yes, actually, I would love to create something more permanent. Around three or four years ago, I actually taught a freshman seminar on forensic phonetics, and I teach some independent studies in my lab.

David Staley 8:37 Speaking of your lab, I'm very interested in the work that you conduct in the Speech, Perception, and Acoustics Laboratory. Tell us about this research, and I know you do a lot with American English dialects -

Robert Fox 8:46 Right.

David Staley 8:47 In this lab.

Robert Fox 8:47 Well, in the lab we have probably seven computers, we have a large sound booth where we do recordings, and we also use the sound booth to play stimuli to subjects when we're doing perception tests. We also have laptop computers that we take out into the field and collect data that way. And the three dialects we're most comfortable with and have worked the most with are from Wisconsin, close to Madison and Milwaukee, central Ohio, and Western North Carolina. Western North Carolina is a regional dialect. There are two kinds of dialects: a regional dialect is that version, or variety, of English that's spoken in a fairly specific area, and then you have other things called social dialects; for example, African American English would be considered a social dialect of American English. So, we wanted to record individuals who had been born and lived practically all of their lives in a very narrow area of these three locations.
I really wanted to do Western North Carolina because I was born in North Carolina; all of my relatives come from that same area that we used. As soon as I got interested in phonetics, that's what I wanted to do, was to study the dialectal variation in Appalachian English. And another thing we're looking at is, we have recorded individuals ranging in age from 8 to 95, and you see that there's a difference as a function of generation, which likely shows that there's something we call sound change going on in a language. Sound change means that the sounds that people produce, particularly the vowels, are different, and they've changed over time. And one of the things we're very interested in is determining whether the youngest speakers in, for example, Western North Carolina, are adapting their speech to be closer to a mainstream American dialect rather than the Appalachian dialect that their parents or grandparents speak.

David Staley 10:56 How is a dialect different, say, from an accent? Is that the same thing?

Robert Fox 11:00 Some people will say accent, and other people will use the term variety, so there's a lot of variation in the terms. A lot of people will use the term accent, or accented English, to refer to the speech of someone whose native language is not English, but who is learning English, or vice versa. So, that often shows up in the vowels; for example, if you have a Spanish speaker, Spanish has many fewer vowels than does English. And so these speakers will often make errors between "ee" and "ih", for example, or "eh" and "ay". So that's what we look at.

Janet Box-Steffensmeier 11:00 I'm Janet Box-Steffensmeier, interim Executive Dean and Vice Provost for the Ohio State University College of Arts and Sciences. Did you know that 23 of our programs are nationally ranked as top 25 programs, with more than ten of them in the top ten?
That's why we say the College of Arts and Sciences is the intellectual and academic core of the Ohio State University. Learn more about the college at artsandsciences.osu.edu.

David Staley 12:03 You talk about how an important part of what you're studying is sound change, which is why you're studying people of different ages.

Robert Fox 12:08 Right.

David Staley 12:09 So I'm interested, is part of what you're examining sort of the speed of that change?

Robert Fox 12:13 No, it's actually trying to determine the nature of the change.

David Staley 12:18 So, why is there speech change?

Robert Fox 12:20 Right, the thing with studying sound change is that there are two ways of doing it, and one is much more difficult than the other. One, for example, would be to record someone as they grow older, you know, through a number of years, but that's going to be hard. There's one person who's done this by looking at Queen Elizabeth's Christmas speech, since she gives it every year, and you can see whether it's changing in her speech.

David Staley 12:49 Is it?

Robert Fox 12:49 A little bit. And then there's something called apparent time. Apparent time is when you have a speaker who's very old, probably speaking a version of the dialect that was used when they were young and hasn't changed, and then you have the children. And so you compare the two and see how different they are. If there's a consistent pattern of differences between the older speakers and the younger speakers, then in apparent time it seems as though there has been sound change.

David Staley 13:19 And what are the factors that lead to sound change?

Robert Fox 13:19 It's a bit unknown, but it is true that all languages undergo sound change at one time or another. If you look across centuries, you'll find that, I mean, well, English has changed tremendously from Old English through Middle English to Modern English.
So, part of that was a function of the invasion of the Normans, when they took over Britain, or England then, and that meant that the entire English court spoke French, and that affected the nature of English.

David Staley 13:25 But you know, as a historian, I'll watch films from the early part of the 20th century, you know, Franklin Roosevelt or others, and not having any sort of expertise in this, their speech just sounds different from the sort of speech that I would hear today from a politician or someone like that.

Robert Fox 14:12 Well, it's probably also the case that they were speaking a bit more formally than a lot of politicians do today, and we won't bring up any names. In England, the one dialect that's dominant among the upper class, and certainly in the royal family, is something called "RP", which is Received Pronunciation. And so, that's very different from, say, Cockney.

David Staley 14:40 Okay, yes. And you said that the Queen's Christmas speech has exhibited a little bit of change, right? Of what type?

Robert Fox 14:48 Just in the use of sounds, though that speech is always going to be quite formal.

David Staley 14:54 So is that stability itself interesting, that stability over, what, 50-some years, more than 50 years?

Robert Fox 15:02 I think it accurately reflects what many of us do. Some sound change will come about; for example, I was born not in Western North Carolina, but in Eastern North Carolina, and my first few years were spent in Eastern North Carolina and South Carolina. So then, when I eventually ended up in Maryland, in the second grade, third grade, I'm sure I had a very strong dialect, or accent. My second grade teacher told my mother that she was amazed how well I read, and that's because she was biased against little boys who have Southern accents, assuming that they weren't very bright, but I showed her.
David Staley 15:46 So you use the term perception, that you study dialects both in terms of acoustics and perception. Can you sort of distinguish those two, acoustics and perception?

Robert Fox 15:54 Okay, acoustics is the true physical characteristics of the sounds, what frequencies of energy there are in a particular stretch of speech, etc. It's those acoustic patterns, those acoustic cues that are in the speech stream, that the ear, and then later the brain, pick up on to figure out what you're saying. So, on the perception side, we're looking at, well, what acoustic cues is this individual using? Are they different from that individual? Can they pick up on the same cues? For example, one of the early analyses of the perception of individuals with dyslexia taught two groups of listeners, one with dyslexia, the other without, to identify these sort of characters on a screen by listening to voices; they taught both groups to identify them. And if the speech was in English, the individuals with dyslexia simply had a very poor time identifying the voices, they just couldn't process it. And they taught the same thing for these figures on the computer screen in Mandarin, and they were equally poor at that, the individuals with dyslexia and without dyslexia. So it's not an auditory problem, it's a perceptual problem.

David Staley: So I have to ask this, I was interviewing a scholar of film and television just recently and I asked her, are you able just to watch television, or are you always sort of thinking of it as a scholar? Is it the same with you and dialects, I mean, I wonder right now, are you listening to me thinking, hmm, his dialect must be such and such?

Robert Fox: It's the problem a phonetician has. So, I will listen to people and note when they're doing something that's different from other individuals. So for example, how would you say "O, F, T, E, N"?

David Staley 17:51 Often.
Robert Fox 17:52 Many people will say often, and that's the kind of thing I listen for.

David Staley 17:57 So what does that signal? Should I ask about my dialect, or maybe I should leave that for off the interview? Whenever I interview a lab scientist, I ask them to describe the organization and workings of their lab. Tell us how yours is organized.

Robert Fox 18:11 Okay, well, there are two of us; Dr. Ewa Jacewicz, who's a Research Associate Professor, is the other Co-Director of the lab. And we usually have one or two, sometimes more, PhD students who are working on their dissertations as well as working on research with us. We often have an AuD student, which is an audiology student, or an M.A. SLP student, which is someone in speech-language pathology.

David Staley 18:39 And audiology is the study of hearing...?

Robert Fox 18:41 Right, particularly disordered hearing. And eventually, they'll go out and prescribe hearing aids, etc., or work with individuals with cochlear implants, etc. And then we usually have one or two undergraduate students, so we've had probably ten, twelve in the past eight years who do undergraduate research theses with us.

David Staley 19:05 So, I'm interested to know how you became interested in this field, and I noticed that you have a very interesting and peripatetic journey to speech and hearing science behind you, yes?

Robert Fox 19:15 Yes. When I was a young boy, I decided - I was probably eight or nine years old - that I wanted to become a particle physicist.

David Staley 19:23 Okay.

Robert Fox 19:24 So I had that ambition through high school and actually went to the University of Maryland as an Honors Physics major my first semester, and I decided it wasn't the Flash Gordon whiz-bang stuff that I was expecting, and eventually decided to switch out, and I became an English Literature major.

David Staley 19:45 That's a switch.

Robert Fox 19:45 That's a switch, because I'd always liked English in high school.
But then I found, you know, I'd read so many books, but I found that there wasn't anything you could do about it. You could either write, which I was not going to do, or be a critic, which I didn't want to do either. So, there's no science to it. So, I first discovered phonetics when I took an anthropology course, and became very interested in phonetics and phonology, and decided to go to graduate school in Linguistics, and I went to the University of Chicago. And that's how I got there.

David Staley 20:21 We had mentioned one of the classes you were teaching; I'm interested in the other classes that you teach.

Robert Fox 20:26 Okay, most recently, I'm finishing up right now an advanced phonetics course at the graduate level. And I've developed and have taught a multicultural class, on multicultural aspects of communication science and disorders, mainly because that fits quite well with my interest in varying dialects and different accented Englishes. I teach mainly our speech-language pathology students, so they can understand what to expect, what kinds of common errors they will find. Because in communication disorders and assessment, what you want to figure out is, when you hear something that's not quite, you know, acceptable English and is missing some sounds, you have to decide whether it's just a communication difference or it's a communication disorder. And so, that class helps them make that determination; you have to do a fair and unbiased and legal assessment.

David Staley 21:25 Tell us what's next for your research.

Robert Fox 21:27 Right now, Dr. Jacewicz and I, and another faculty member, Dr. Lee, are probably going to be submitting a grant proposal to NIH to use fNIRS. fNIRS, little "f" and then capital N, I, R, S, stands for functional Near-Infrared Spectroscopy.
And using this, you put on a cap with lots of electrodes, but you actually are shining near-infrared light through the skull, and you can find some change in oxygenated versus deoxygenated blood as it flows in the head, which will change as a function of what the brain is doing. So basically, it's a form of brain imaging, and Ohio State is, frankly, a real center for brain imaging across a number of fields.

David Staley 22:18 Do you use fMRIs at all in your work?

Robert Fox 22:21 Dr. Lee does.

David Staley 22:22 So, functional MRI machines.

Robert Fox 22:24 Right, right. But the fNIRS machine is basically brand new, and this is a new one; it's the only one currently on campus, so we're really looking forward to working with it.

David Staley 22:37 Robert Fox. Thank you.

Robert Fox 22:39 Thank you for asking me.

Eva Dale 22:40 Voices from the Arts and Sciences is produced and recorded at The Ohio State University College of Arts and Sciences Technology Services Studio. Sound engineering by Paul Kotheimer, produced by Doug Dangler. I'm Eva Dale.

Transcribed by https://otter.ai