Posts that are hearing-ish

ASL lector notes for the Easter Vigil Mass – 1st reading (Genesis 1-2, Creation)


It is Holy Week, one of my favorite weeks of the year. I have the privilege of signing the 1st reading for the Saturday Vigil Mass this year in Rochester, and I’ve posted my translation and performance notes in case it might be useful to someone who wonders about the translation process (which I’ve written about elsewhere: part 1 and part 2).

The first reading (long version) is most of Genesis 1-2, or the (Priestly) Creation story. I inadvertently wrote my notes so that they will (hopefully) make sense to both signers and non-signers — I hope this will be useful to my non-signing friends as an explanation of what it’s like (for me) to think in ASL. Basically, the left column is the English translation, and then the middle column is me trying to describe the images that come to mind when I read it.

This isn’t analysis of any sort, it’s not translation, it’s… what is the movie in my mind, right now, when I read through these words? The short version is that God is a lot like a really excited 5-year-old, because… I’m the one signing this, and I’m a lot like a really excited 5-year-old.

After the imagery description in the middle column, another round through the reading follows on the right, with the gloss (as best as I can capture it) for what I sign during the Vigil Mass. I wrote most of this post while I was preparing to lector for the Boston Deaf Catholic Vigil Mass last year (2017). At the time, I still felt really awkward, shy, and hesitant while signing; my expressive usage of the language was very new and limited, and I’d never worked or lived among other Deaf people or otherwise had much of a cultural/linguistic immersion. Vigil Mass 2017 was a linguistic/spiritual/identity landmark for me; it was the first time I felt like I was expressing exactly what I was trying to express in ASL. Which… was a huge deal for me, as a hesitant new signer (thanks, growing up oral).

Thanks to Deacon Patrick Graybill for last-minute feedback on Holy Thursday 2017, and to God for… well, basically… everything, right? That’s what this reading is all about.


The doors we leave open


I’ve been thinking about the doors we leave open, even if they don’t look like they’ll be taken at the time.

One version of this, for me, is that I grew up deaf and oral in the mainstream (local public school with hearing kids). I grew up with speaking and listening as doors that were flung wide open with flashing neon signs and adults hurrying me towards them — but the doors of ASL and Deaf culture were also there, in ways that were important to how I engage with them now, as an adult trying to learn.

There was the itinerant Teacher of the Deaf who visited my elementary school and (briefly) showed 7-year-old Mel a few signs before her parents put a stop to it. I don’t have clear memories of this, but discovering that IEP note as a graduate student was a jolt: my younger self had shown promise for learning how to sign at a remarkable rate, and seemed to enjoy it? Signing was a thing that I had… and maybe could… enjoy, not only fear? These were doors it took me twenty years to walk through.

Even if my parents stopped me from learning ASL (or whatever variant of contact sign people were going to use with me), they did bring me to watch the local children’s theatre, which had Deaf performers. As a slightly older child, I wanted nothing to do with ASL or the Deaf community; it was foreign to me, and everyone kept telling me I was so smart precisely because I could act so much like a hearing kid. I loved music (“like a hearing kid,” I thought, not knowing that Deaf people could also love music). I loved musicals. So my parents brought me to Oliver, and Joseph and the Amazing Technicolor Dreamcoat, and there was signing on the stage… which I couldn’t understand. But later, I could look back and think: there was art there, dancing, theatre, music… and there was ASL there, blended in with them. Exploring this strange new Deaf world wouldn’t mean giving up these things I loved; it might even expand what I could imagine in those spaces. These were doors that took me fifteen years to walk through.

There were the educational interpreters who were assigned to me for a few years, after my parents stopped the ToD from teaching me to sign. (Yeah, I’m not sure what the logic behind this was either.) I had already learned how to learn everything from books, and didn’t know this strange new language they were using with me, so I resented and mostly tried to ignore their presence as much as a lonely child could. As soon as I was able to formulate the argument that I didn’t “need” interpreting, I did — and breathed a sigh of middle-school relief that these people wouldn’t follow me through all my teenage years. But a few years ago, when I started thinking about (willingly) learning ASL and (willingly) seeing what this whole “interpreted access” thing was about, I had two people to reach out to. And they responded! (Thanks, Jamie and Christine… and further back, though I couldn’t find her, Francesca.) These were doors that took me thirteen years to walk through.

There were the folks who were (ex-)interpreters, or captioners, or signers, and kept being those things while we were friends and colleagues in the spaces I already worked in and wanted to be in (which is to say, tech spaces – not Deaf spaces). Who kept being adjacent to both worlds, who kept reminding me that trying these things out might be easier than I thought. Who reminded me that trying it wasn’t a permanent commitment; who walked me through how I could ask for things and set them up, when it was time. (Thank you, Steve and Patti and Mirabai.) Took me… seven years to walk through some of those doors. Or five. But I walked through them all, eventually.

So yeah, those doors. Important things. We don’t know when people will take them, but… even if it’s “not now,” even if it might well be “never,” we… just never know. Open the doors and keep them open, even when it seems completely useless. Wait, and wait, and wait. It’s important that these doors be open, because we never know who’ll come through them, at the most surprising times.


Why Deafening Engineering? Because onto(ethico)epistemologies.


Continuing to write my way through things I’m finding/reading/sorting that help me think about some of the scholarship I want to do.

While we were roommates for the CUR Dialogues conference, Corrine Occhino introduced me to the work of Julie Hochgesang, who does sign language linguistics: phonology, documentation, and tons of other things. I’d been trying to figure out analysis tools for video data, as opposed to turning everything into a text transcript and analyzing from that. Unsurprisingly, sign language linguistics does that kind of thing, and Julie is the author of a guide for using ELAN – which is itself a FOSS (GPL2/GPL3) project for annotating audio and/or video data. Chaaaaaaamp.

And then there’s Georgetown’s recent edX release of a course on sign language linguistics (structure, learning, and change).

And then there’s Allan Parsons’ notes on Karen Barad’s work on ontoepistemology. (Or onto-ethico-epistemology, I suppose, since the ethical dimension is inextricable, at least according to Barad.) And Annemarie Mol’s brief but reference-dense guide to the ontological turn.

“What the…” you say. “Mel, these have nothing to do with each other. I thought you were doing Deaf Engineering stuff, so what’s with all the weird philosophical…”

“On the contrary,” I say. “Deaf Engineering is a case study; it’s an example of the kind of work I want to do — not the end goal of all my research.”

I’m interested in engineering and computing education ontologies. (Okay, fine, ontoepistemologies.) (Okay, fine, onto-ethico-epistemologies. Happy now?)

See, the reason I’m interested in Deaf Engineering Education — or perhaps the more active verb form, “Deafening Engineering Education” — is because of what it can help us make visible about onto(ethico-epistemo)logies of engineering (education). The phrase “Deafening Engineering (Education),” by the way, takes after Rebecca Sanchez’s book title, “Deafening Modernism,” where she does the same thing to modernist literature, exploring it “from the perspective of Deaf critical insight.”

It doesn’t have to be Deaf engineering (and computing) education. It could be FOSS/hacker/maker engineering and computing education, a space I’ve also published and worked in. It could be feminist engineering (and computing) education, as Smith College, SWE, Grace Hopper, Anita Borg, the Ada Initiative, and others have explored. It could be engineering education as a liberal (and fine!) arts approach, which is how I’d describe some (but not all!) of Olin College’s take on it. It could be Black engineering education, which I’m curious about as it’s brought forth in HBCUs as well as NSBE (but know very little about myself). It could be Native/indigenous engineering education, which Michele Yatchmeneff and others are exploring. It could be queering engineering education, cripping engineering education, Blinding engineering (and specifically computing) education; it could be…

Here’s the thing about all of these approaches, all of these worlds: by bringing to light other ways we could or might have conceived of engineering, brought it into being, engaged it as a practice, they make us aware of all of the assumptions we’ve embedded in the discipline thus far. Why do we typically assume that engineers are White (or can act White)? Why do we (again, typically) assume that engineers are hearing (or can interface with the hearing world)? Why do we assume… what do we assume? What else might we assume?

I am so glad for the recent widespread success of the Black Panther film, because the wide-eyed audience reaction to Shuri’s lab and Wakanda’s technology is such a great example of what I’m aiming for. That look into a different world; that plunge into a universe of possibilities, that opening-up. I want to do… not quite science-fiction, but engineering fiction, or things that start as engineering fiction, so that we might make those into engineering not-fiction. To look at these worlds and learn from them and learn how it is that they understand and articulate themselves.

Ontologies. Plural. What is, what might have been, what might yet be. This is a pretty stark contrast to ontology engineering, which is a different (and more engineering/computing-native) approach to the notion of ontology. Ontology engineering is an attempt to document the singular, rather than embrace the tensions of the multiple. Both have their place, but one has been more dominant in engineering/computing thought than the other, and unconsciously so — the same way most STEM researchers are working within a post-positivist paradigm, but don’t (yet) know it.

So why all the Deaf/ASL resources?

Well… it’s a rethinking of the world, and one that’s taken place within a lot of living memory (and one that happens to be extraordinarily accessible to me). The past several decades have seen an explosion into the public sphere of a radical rethinking of what ASL is, what Deafness is, and what all these things could be. We’ve gone from “it’s not really a language, it’s a system of crude gestures” and “what a terrible disability” to… something that’s exploded our notions of what language is and how it works. And linguistics had to figure out and build analysis tools and systems that could work with signed languages. A rapid turn-about from “what would this even look like?” to “maybe it looks like this, or this, or… this?” because… people… made it.

And then came the (again, radical!) idea that ASL could be used as an academic language, just like one might use English (or earlier, French… or German… or Latin…) as an academic language of instruction — and then publication. What does it mean to publish in a signed language? Again, there was no existing answer. So people made one. And then things like: what would an ASL-based software interface look like? We didn’t know. And then ASLClear came out as one answer.

That’s why I’m looking at these resources. Because I see in them a making of a world; the figuring-out and birthing of things that have never existed before. They happen to be Deaf; it happens to be a very, very good example for me to look at right now — but it’s the process of the birth of worlds and universes that thrills me, and I want to look across worlds at the process of that birthing.

You see that? Do you see why I’m excited by this, why I love it, why I see it as so much bigger than just “Deaf Stuff In Engineering?” It’s what Deaf Engineering (and queer engineering, and Hispanic engineering, and…) points to. We don’t know, it doesn’t exist… (see the ontoepistemology in there? the knowing, and the being?) – and then we make it. And we find out what things might be possible. And the ethics inherent in that (re)creation of the world — what and who does our making and remaking let in, who does it keep out? — that’s where it gets ontoethicoepistemological. Nothing is value-neutral; nothing is apolitical. And nothing on this earth is going to be perfectly fair and universal and utopian; let’s not pretend it is; let’s be aware of our own footfalls in these spaces that we share.

I am so afraid of writing about this, thinking about it, letting it be known I’m interested in things that include the words “Deaf” and “ASL” and “engineering” in it, because — as I mentioned in a previous blog post — these kinds of things can be oversimplified and totalizing to one’s scholarly identity, to how others describe and understand one’s work. It’s really important to me that I not get pigeonholed into “just” doing Deaf Engineering Things. Because there’s so much more out there. There’s so much, and I want to see and play within it, too.

But this is where I want to play, and this is where I want to learn and create things and be challenged and in dialogue. And I need access to these first few worlds I play in, so that I can spend my energies on playing and figuring out the mechanics of how world-building works, rather than on hard labor trying to glimpse the snatches of it that I can. And so my first two are open source (since so much of that world takes place in text, where I am about as native as anyone can get) and then Deafness (since I can learn my way into a strange new world where things are visually accessible by default).

I’m hoping that those two will teach me enough between them (or across them) that I’ll be able to branch out to others, someday. Maybe years from now. Probably years. The other spaces will likely be less accessible to me in terms of communication, but I’ll have learned; just as I’m trailing open source practices and philosophies into Deaf Engineering (and computing) spaces with me (see: this blog post, wherein I think out loud / release earlier and more often), I will probably trail Deaf communication and accessibility practices into whatever world I go into after that.

But there will be worlds after that. This isn’t my final one.

Okay. Onwards. Again. Keep thinking and keep writing. I feel so hesitant doing this, but also brave in ways I haven’t felt in a long while.


APA style and qualitative research methods resources in ASL


My friend Anna Murphy recently sent me St. Catherine University’s library resources on APA style — and they have ASL versions! Actual ASL with nice translations, not “we signed the English word for word” versions. I think these are a nice high school or early-college intro for ASL users, maybe good for a first-year college seminar course. (I’ll ask Corrine Occhino about using them for ours, since this is a lovely set of matched bilingual resources.)

Joan Naturale also pointed me to an ASL companion to an introductory qualitative research methods textbook (Research and Evaluation in Education and Psychology (REEP): Integrating diversity with quantitative, qualitative, and mixed methods). “ASL Companion,” in this case, means there are well-done chapter summaries in ASL with the blessing of the original author (Dr. Donna Mertens). This is a nice textbook, in its 4th edition from Sage, not some hastily cobbled together thing for the sake of having something signed. Good scholarship in good ASL is, sadly, scarce stuff.

This stuff is important; not only does it make these materials more accessible to those who are native users of ASL, it gives us a glimpse of what scholarship in ASL might look like. And yes, there have been Deaf (and hearing!) researchers working on “academic ASL” for a while (and what that means is still up for debate). I’m new to the conversation and feeling my way into a world that people far smarter and wiser (and familiar with ASL and Deaf culture!) have created before me, with the hopes of contributing to it as well.

My question is: what would it look like to do this in engineering, computing, and in engineering/computing education? I’ve seen scholarship in ASL, but only for clearly ASL/Deafness related fields… signed linguistics, Deaf education, Deaf history and rights, and so on. I’ve seen stuff about ASL in other fields, but it was written in English. What does it look like to do engineering and computing work in ASL and/or in a culturally Deaf manner? What would culturally Deaf engineering look like?

And I’m pretty sure that look is a key operative word here, but it’s also going to sound like something — Deafness doesn’t mean the absence of auditory information! — and it’s also going to be a host of other things, because Deafness isn’t just about visuals; consider the DeafBlind community, consider all the tactile/kinesthetic richness of the world, consider — but I digress.

But what will Deaf Engineering (and Computing) be like? I don’t know. I’m aware that I’m continuing to write these blog posts in English, and I’m okay with that right now, so long as my actual published/presented outcomes on this front come out bilingually. In part I’m writing in English because this is my scribble pad and I’m a native English writer, and it’s what my thoughts come out most fluidly in (if I thought best in Spanish, I’d be writing in Spanish). But these kinds of resources are not just examples and resources for my future students; they’re building blocks for me of what might be, what things might look like. And I can also tell from watching them that they took tremendous amounts of work to create, so…

…examples. I leave them here as exercises for the reader.


Seeing myself in the (literal) mirror at NTID’s IT office


Some of you already know (and my previous blog post has hinted) that I’m working in a Deaf environment for the first time in my life — the Center on Access Technology (CAT, pronounced like the animal and signed as an acronym) in Rochester, NY. There’s far too much to say about this — I am glad to be here, it’s an incredible learning experience, and I often feel like a stranger in a strange land… but if there’s anything my training in writing and qualitative research has taught me, it’s the power of vignettes and thick descriptions of small moments. So that’s what I’ll start to share. This one is a very small moment, but it was one of the first things that struck me.

So I’m a new faculty member, trying to figure out how one connects to the internet, printers, and so forth, as one does. I’m hitting snags, so I walk over to the IT office inside NTID (basically, the Deaf college within RIT). As I’m waiting for the IT staffer to fiddle with my laptop and fix my connectivity issues, I look around. It’s an IT office, full of familiar-looking cords and bins and tables of acronyms pinned to the walls. I see the student workers perched in front of monitors, typing into a ticketing system.

And then I notice that all of the desks facing the wall have mirrors on that wall, behind the monitors. And my first thought is “oh, that’s nice – I guess it makes the room look bigger.” And then one student walks up behind another and begins to sign, and the second student turns around to smoothly engage them. And I suddenly remember: they’re all Deaf, too.

Like me, they can’t hear footfalls from behind. Like me, they would startle from their monitors with a sudden touch on the shoulder. The mirrors let you see someone approaching from behind, a gentle nudge of motion in your periphery, the visual equivalent of footsteps walking up. And all of this is set up so matter-of-factly, just… how it is, of course we put mirrors behind our monitors! and not as some odd flustered accommodation that treats me as a conundrum in the hearing world (“well, Mel can’t hear footsteps, because she’s deaf, so what do we do?”).

I’m used to having my existence in hearing spaces not forethought (“it never occurred to us that a deaf person might be interested in this event, so we didn’t make it accessible”). I’m used to having laborious forethought be the best-case scenario, where I’m a solitary trailblazing oddity (“we’re open to setting up captions for this; can you do the setting-up in your copious amounts of free time?”). It is strange to be in a place where my individual existence doesn’t need to be forethought, because the space has already been created and inhabited by — and expects to see more of — people like me. It is strange to, at least in this one significant way, not be the Other.

Of course, it’s more complex than that. Even NTID is by no means fully accessible (likewise with Gallaudet). The Deaf (and hard-of-hearing) communities are not homogeneous; not everything meets everybody’s needs. I’m not just Deaf, I’m lots of other things as well, and many of those things are still unexpected, unanticipated, not-forethought. There’s a lot of solitary trailblazing work to do here still.

But dang. A world that is accessible to me regardless of whether I’m there or not? A space that stays Deaf-friendly without me, whose Deaf-friendliness is not dependent on my constant nudging and performance of my life as a reminder that people like me exist? Approaches and solutions that go beyond the things my friends and I can think of on our own?

Whoa.


Talk notes: “Technologies that wake you up” from a DHH perspective


Today’s accomplishment: giving part of a (group) talk in my 4th language, and making people laugh both directly and through an interpreter. Watching the audience grin and nod and crack up in two waves was just this… super-gratifying experience — first the audience members who knew ASL, then the ones who were listening to the interpreter translate my signing into English, and I could just… track that.

Sure, I know there are still all these dysfluencies in my sign production. I’m not fully fluent yet, and I’m incredibly aware of that, and working hard on it. But to know that my personality, my sense of humor, can come through in ASL even to people who don’t sign — that’s a tremendous milestone I was afraid I might never actually reach. It’s difficult to overstate how personally significant this accomplishment is for me — I’ve gone from “I will never learn sign language! I’m not one of those Deaf people!” to “I mean, okay, I guess I could learn it as… another language, because interpreting gives me so much that I just miss, but… I’m always going to speak for myself, especially in a work context with hearing people around,” to… well… this.

My talk notes follow. I wrote them, memorized them, and then deviated from them (as one does). The larger context is that my lab (which is basically a Deaf engineering design firm) is doing a series of consumer technology reviews. These aren’t technologies specifically designed for DHH people, but rather everyday technologies reviewed from a DHH perspective. For instance, other colleagues looked at various items from Nest, Alexa, etc. — and did you know lots of these devices, even the visual ones, have an audio-only setup process? Annoyance. Folks had to keep calling over their hearing spouses, asking their kids to come over and put on their CIs, etc., just to get through installation.

Anyway, my segment was on “technologies that wake you up,” because… well, I don’t own a house. And a substantial portion of our community is made of students. And I sleep super deeply, and get uber-grumpy when I’m woken up against my will — just ask my parents; this is a lifelong known cause of Grouchy Mel.

  • most alarm systems are designed for hearing people and are based on sound
  • obviously doesn’t work so well for DHH
  • known problem: historically, all kinds of solutions – Rube Goldberg contraptions that drop heavy things, hearing humans (hi mom!) who will wake you up at the appointed time, praying that you’ll wake up before X and not be late
  • but now we have TECHNOLOGY!
  • I’ll examine several more modern systems for waking up DHH sleepers
  • First: Can I use “hearing” alarms and somehow make them better?
  • Residual hearing: amplify! plug into speaker system… okay, maybe this isn’t so great for hearing housemates, and it still doesn’t wake me up all the time.
  • Mechanical-only solutions: put phones inside convex objects to concentrate/amplify the sound. Definitely not loud enough for me.
  • Okay, another mechanical solution: set a phone alarm to vibration mode, put the phone on a thin, hard-walled, hollow, clattery object, close to the edge, above stuff that makes noise when things fall on it. Yeah, terrible idea. Not the most reliable solution, good luck getting up in the middle of the night without wrecking everything, and an alarm that relies on literally dropping your multi-hundred-dollar phone on the floor every day is maybe not the wisest.
  • Enter: specific devices! This is an alarm designed for DHH folks… how many of you have the Sonic Alert alarm clock? (hands go up)
  • Wakes people up in three ways. First, audio: the sound is customizable (frequency-set knob, volume-set knob)
  • Second, a “light flasher,” which is an on/off outlet flasher; you could plug anything in there
  • Third, a “bed shaker,” which is an off-center load on a motor in a case (like cell phone vibrators)
  • It’s definitely effective at waking you up. Abruptly. Might not be the best for your mood for the rest of the day, but it works. (Insert explanation of sleep cycles here, with a lot of hamming it up)
  • Okay, but how about stuff that isn’t DHH-specific? Sound aside and vibration/tactile aside, what’s left as a way to wake folks up?
  • Smell and taste might not be useful for alarms (although the smell of tea makes me super happy when I wake up)
  • What’s left is sight
  • Did you know: most deaf people can see
  • Did you know: most hearing people can also see
  • Did you know: although sound might not work for both hearing and DHH folks, light might work for both
  • This is the idea behind the Philips Wake-up Light
  • Idea: you know how the Sonic Alert wakes you up abruptly? This wakes you gently, like the sun coming through the windows
  • You set the time you want to be awake, and for a period of time before that, the lights will gradually turn on so that you’re sleeping more lightly and close to waking by the time the alarm rings (with the lamp at full brightness)
  • Gentle light wakeup is amazing (display, in contrast, the book cover of Alexander and the Terrible, Horrible, No Good, Very Bad Day)
  • Except that it doesn’t always wake you up all the way, so you need a last-minute push-over into full consciousness
  • Alas, the pre-recorded audio settings on this alarm consist mostly of birdsong (from my perspective, “silence 1,” “silence 2,” “silence 3,” and “silence 4”)
  • I personally need a separate alarm to make the startle sound/vibration/light at the appointed time, but the wake-up light does get me to the point where being woken up by something else is pretty pleasant
  • Not a DHH-specific access issue, but the UI for button placement stinks
  • Alternative, if you already have Philips Hue lights: hack the Hue to be a wake-up light
  • Program the Hue! Set something to turn on gradually at an appointed time (a rough code sketch follows these notes)
  • Not as smooth as the Wake-up light, which starts from zero and smoothly goes up; definitely turns on abruptly and is a more jarring wake-up
  • For me: solves the problem of “the Wake-up light needs a tip-over”
  • And then Sonic Alert for mega-uber backup.
  • End the talk somehow and turn the floor back over to Brian.
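
For the curious: the Hue hack really is just a few lines of scripting. Here’s a minimal sketch of the idea in Python against the Hue bridge’s local REST API. This is a sketch, not our lab’s actual setup; the bridge IP, app key, and light id below are placeholders for whatever your own bridge gives you.

```python
# Minimal "sunrise" sketch for a Philips Hue light. Placeholder values
# throughout; a real setup gets these from registering with your bridge.
import time
from datetime import datetime, timedelta

import requests

BRIDGE_IP = "192.168.1.2"  # placeholder: your bridge's address
APP_KEY = "your-app-key"   # placeholder: key issued by the bridge
LIGHT_ID = 1               # placeholder: the bedroom lamp
STATE_URL = f"http://{BRIDGE_IP}/api/{APP_KEY}/lights/{LIGHT_ID}/state"

def sunrise(wake_at: datetime, ramp_minutes: int = 30) -> None:
    """Fade the light from minimum to full brightness, ending at wake_at."""
    start = wake_at - timedelta(minutes=ramp_minutes)
    while datetime.now() < start:  # crude scheduler; cron would also work
        time.sleep(10)
    # Turn on at minimum brightness, then ask the bridge to fade to full.
    # transitiontime counts in deciseconds (100 ms steps).
    requests.put(STATE_URL, json={"on": True, "bri": 1})
    requests.put(STATE_URL, json={"bri": 254, "transitiontime": ramp_minutes * 600})

if __name__ == "__main__":
    wake = (datetime.now() + timedelta(days=1)).replace(
        hour=7, minute=0, second=0, microsecond=0
    )
    sunrise(wake)
```

The single long fade is also part of why it’s more abrupt than the dedicated Wake-up Light: the lamp’s minimum brightness is still visibly on, so no matter how gentle the ramp, there’s a jump from darkness at the start.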

Things that have made me happy lately: qual methods companion resource in ASL, my upcoming review of wake-up systems


These are random things that have made me happy today.

The first is that there is an ASL companion to a qualitative research methods textbook (focused on education and psychology, to boot!). I am already fascinated by the design and translation choices they have made in figuring out what it even means to have an ASL qual methods textbook… how multiple signers in the introduction switch between freezing in black and white when it’s not their turn, and becoming full-color and in-motion when it is, so your eye immediately knows who it’s following. How they’ve translated the phrase “chapter author” not as [chapter write-person], but rather as [chapter sign-person] — “they who have signed the chapters” rather than “they who have written down text for the chapters,” because the “text” is in ASL. These little subtle things that tell you that… yes, this is another culture; this is a different world. (Or in my framing: this is an alternate ontology.)

Second is that I am giving my portion of a technology review lecture series (1) in ASL and (2) with a fairly decent dose of snarky humor. My topic? “Wake-up systems for DHH sleepers.” I plan to cover…

  • Cheap Hacks for People With Residual Hearing: makeshift and wholly mechanical scoop and rattle amplifiers for phones (put them on big hard hollow things or in cones made of hard materials… like hotel ice buckets!) Also, reasons why these setups may not be the greatest for smartphone users and/or profoundly deaf deep sleepers like myself.
  • Sonic Alert’s Sonic Boom, which emits ear-splitting shrieks at modifiable frequencies, flashes lights (or rather, intermittently turns on and off power to an electrical outlet embedded into its side), and rumbles a bed-shaker. (And, in high school when I had it close to my CRT monitor, it degaussed my monitor. Anyone want to check out a cute little EMP source?) Also, a brief overview of the sleep cycle, and how this device, while highly effective at actually waking one up, is terrible for waking one up pleasantly.
  • Philips Wake-Up Light: awesome, but expensive-ish, and… let’s talk about the usability of the physical design, shall we? (And the choice of bird sounds as the wake-up recording, which… to me, are setting options of “silence,” “other silence,” and “more different silence.”)
  • Philips Hue system as a cheaper and more hack-ish way to replicate some of the functionality of the wake-up light

Gotta work on my content, draft, translate, and rehearse this. It’ll be fun.


Gallaudet Peer Mentoring Certificate Program: first impressions


Some of you already know this, but I’m participating in Gallaudet’s Peer Mentoring Certificate Program, which trains adults with hearing loss on mentoring others with hearing loss. The original idea was for mentoring adults with acquired hearing loss (i.e. people who grew up hearing, and then became… not hearing). However, as someone who grew up oral deaf and knows how complex it can be to figure out the whole d/Deaf/HoH identity thing as a young, early-career adult… I also hope to work with folks like me.

And honestly, part of the reason I’m doing this is that I need this too. I do not have this figured out. Physiology does not come with a cultural/linguistic instruction manual. And if I’m going to explore this with my students and in my research, I darn well better prepare to explore this in ways that might go beyond… um… the usual professional/scholarly boundaries. We don’t ever fully separate our studies from ourselves — we just sometimes pretend we do. In this case, the professional and personal are so obviously interlinked that I need to be extremely thoughtful about how I do and don’t do them. Boundaries. They’re gonna happen.

So far, we’ve had a weekend at Gallaudet getting to meet each other in person — and then we meet in text chat once a week to discuss readings. The weekend meeting was super fun. The other members of my (tiny!) cohort are from all over the place, with lots of diversity of experience — all of us are really good at getting through the hearing world, and have varying sorts of involvement in the HoH and Deaf worlds. Academics, engineers, doctors, HLAA officers, fluent signers, teachers of the Deaf, careers completely not-related to ASL/hearing/Deafness, curious non-signers, FM users, CI users, hearing aid users, people who prefer captions, people who prefer lipreading, people who prefer interpreting… so much fluidity! To my surprise, I found that I can codeswitch and mediate (read: “informally interpret”) way more fluently than I’d thought… turns out that when I’m not incredibly anxious about signing (and I’m anxious almost every single time I sign), my language skills increase considerably. (The anxiety bit is very much its own post; I may write it someday, I may not.)

As someone who is used to being the only non-hearing person in the room, it was definitely very, very weird (in a good way) to be in a room where there were people using so many different kinds of access. I do wish the quality of captions had been better; I was thankful for the great interpreters we had, and noticed a clear discrepancy between the quality of access provided by the two modalities (because of provider skill — we could have had lousy terps and a great captioner, and the situation would have been the other way around). I wonder what it was like for my classmates who don’t understand ASL and who were relying on captions. We all had to learn and practice advocating for our needs as the weekend went along, which — seriously, good skill to practice, especially in the context of mentoring other people with hearing loss (we’ll have to model this sort of behavior, and it starts with being able to do it ourselves).

Another good thing: when communication wrinkles came up — which they did, because the captioners dropped things, and the interpreters got tired, and the T-coil loop didn’t always work — we stopped, we worked to fix it, we didn’t just keep going and leave people out. We tried really, really hard to not just quietly tolerate it… we thanked each other for noticing, for asking. For some of us, it was a profound experience — some people had never been thanked for that before, especially in a world where asking people to repeat, etc. is often framed as “why are you so bothersome, you annoying deaf person, asking for things?” It was a good learning opportunity for all of us. A good chance for us to practice what we preach, with all the awkwardness and “but how do we account for this delay in what we’d planned to do?” that it entails.

Our first class this fall (it feels more like a lightweight reading group — compared to grad school, super chill!) is on hearing loss in America — lots of historical/cultural/legal overviews. I’m going to get caught up with those readings now, since it’s Sunday afternoon and I’m tired and want something light and fun to do. So we’ll see where this goes! I make no promises about regular updates, but if people ask, I’m more likely to blog about the program.


Oral deaf audio MacGyver: identifying speakers


Being oral deaf is like being MacGyver with audio data, except that the constant MacGyvering is normal since you do it for every interaction of every day. Posting because this seems interesting/useful to other people, although I’m personally still in the “wait, why are people so amused/surprised by this… does not everyone do this, is this not perfectly logical?” stage.

I was explaining how I use my residual hearing to sort-of identify speakers, using faculty meetings as an example. The very short version is that it’s like constructing and doing logic grid puzzles constantly. Logic grid puzzles are ones where you get clues like…

  1. There are five houses.
  2. The Englishman lives in the red house.
  3. The Spaniard owns the dog.
  4. Coffee is drunk in the green house.
  5. The Ukrainian drinks tea.
  6. The green house is immediately to the right of the ivory house.

…and so forth, and you have to figure out what’s going on by making a grid and working out that, for example, the Ukrainian can’t possibly live in the green house because they drink tea and the green-house person drinks coffee, and so forth.

Now the long explanation, in the context of being oral deaf. Some background: I’m profoundly deaf, with some low-frequency hearing; I use hearing aids and a hybrid CI (typically the CI plus one hearing aid). Generally speaking, I can’t actually hear enough to identify people through voice alone — but I can say some things about some attributes of their voice. For instance, I can tell (to some approximation) if a singer is in-tune, in-rhythm, and in control of their voice, and I can tell the difference between a low bass and a first soprano… but I wouldn’t be able to listen to a strange song and go “oh, that’s Michael Buble!” (My hearing friends assure me that his voice is quite distinctive.)

However! When I’ve known people and heard their voices (along with lipreading and context) for a while, I do learn which perceptible attributes their voices do and don’t have. And even if I’m not using my residual hearing/audio-related gadgetry to get semantic information (i.e. the words someone is saying) because I have better alternatives in that context (interpretation, captioning), I will still want audio…

…and I will pause for a short sidebar right now, because it might seem, to hearing people, that this is the only logical course of action — that hearing more is always good for understanding more. It isn’t. Extra information is only information if it’s worth the mental effort tradeoff to turn it into useful data; otherwise, it’s noise. It’s the same reason you would probably be happy if the background noise in a loud bar went away while you were talking to your friend. That background noise is “extra data,” but it’s not informative to you and just takes more effort to process it away.

In my case — and the case of my deaf friends who prefer to not use residual hearing when there’s another access option available — we’re patching across multiple languages/modalities on a time delay, and that triggers two competing thought streams. If you want to know what that feels like, try to fluently type a letter to one friend while speaking to another on a different topic. Physically, you can do it — your eyeballs and hands are on the written letter, your ears and mouth are in the spoken conversation — but your brain will struggle. Don’t switch back and forth between them (which is what most people will immediately start to do) — actually do both tasks in parallel. It’s very, very hard. In our case, one stream is lossy auditory English as the speaker utters something, and the other is clear written English or clear ASL visuals some seconds behind it. (Assuming your provider is good. Sometimes this data stream is… less clear and accurate than one might like.) Merging/reconciling the two streams is one heck of a mental load… and since we *can* shut off the lossy auditory English as “noise” rather than “signal,” sometimes we do.

Anyway, back to the main point. Sometimes I don’t want the audio data for semantic purposes — but I want it for some other purposes, so I’ll leave my devices on. Oftentimes, this reason is “I’d like to identify who’s speaking.” Knowing who said what is often just as important as what’s being said, and this is often not information available through that other, more accessible data stream — for instance, a random local interpreter who shows up at your out-of-state conference will have no idea who your long-time cross-institutional colleagues are, so you’ll get something like “MAN OVER THERE [is saying these things]” and then “WOMAN OVER THERE [is saying these things]” and then try to look in that direction yourself for a split-second to see which WOMAN OVER THERE is actually talking.

This is where the auditory data sometimes comes in. I can sometimes logic out some things about speaker identity using my fuzzy auditory sense along with other visually-based data, both in-the-moment and short-term-memorized.

By “fuzzy sense,” I mean that auditorily — sometimes, in good listening conditions — I can tell things like “it’s a man’s voice, almost certainly… or rather, it is probably not a high soprano woman.” By in-the-moment visual data, I mean things like “the person speaking is not in my line of sight right now” and “the interpreter / the few people who are in my line of sight right now are looking, generally, in this direction.” By short-term-memorized visual data, I mean things like “I memorized roughly who was sitting where during the few seconds when I was walking into the room, but not in great detail because I was also waving to a colleague and grabbing coffee at the same time… nevertheless, I have a rough idea of some aspects of who might be where.”

So then I think — automatically — something like this. “Oh, it’s a man now, and not in my line of sight right now, and that has two possibilities because I’ve quasi-memorized where everyone was sitting when I walked into the room, so using the process of elimination…”

Again, the auditory part is mostly about gross differences like bass voices vs sopranos in no background noise. Sometimes it’s not about what I can identify about voice attributes, but also about what I can’t — “I don’t know if this is a man or a woman, but this person is not a high soprano… also, they are not speaking super fast based on the rhythm I can catch. Must not be persons X or Y.”

For instance, at work, I have colleagues whose patterns are…

  1. Slow sounds, many pauses, not a soprano
  2. Super fast, not a bass, no pauses, machine gun syllable patterns
  3. Incredibly varied prosody, probably not a woman but not obviously a bass
  4. Slower cadence and more rolling prosody with pauses that feel like completions of thoughts rather than mid-thought processing (clear dips and stresses at the ends of sentences)
  5. Almost identical to the above, except the sentences often haven’t ended: pauses occur anyway, and prosodic patterns repeat, halt, and repeat

These are all distinctive fingerprints, to me — combine them with knowing where people are sitting, and I have decently high confidence in most of my guesses. And then there are people who won’t speak unless I’m actually looking at them or the interpreter or the captioning, and that’s data too. (“Why is it quiet? Oh! Person A is going to talk, and is waiting for me to be ready for them to speak.”)
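
If I had to write that elimination down as code, it would look something like this toy sketch. All names, attributes, and seats here are invented for illustration; the real thing runs fuzzier, faster, and entirely in my head.

```python
# Toy version of the in-my-head elimination: invented colleagues with
# invented attribute "fingerprints."
colleagues = {
    "A": {"pitch": "low",  "rate": "slow", "seat": "left"},
    "B": {"pitch": "high", "rate": "fast", "seat": "right"},
    "C": {"pitch": "mid",  "rate": "slow", "seat": "behind"},
}

def possible_speakers(observations):
    """Keep only the colleagues consistent with every fuzzy observation.

    Each observation maps an attribute to the *set* of values I can't
    rule out, e.g. {"pitch": {"low", "mid"}} means "not a high soprano."
    """
    return [
        name
        for name, traits in colleagues.items()
        if all(traits[attr] in allowed for attr, allowed in observations.items())
    ]

# "Not a soprano, not talking fast, and not in my line of sight
# (so seated behind me or off to my left)..."
print(possible_speakers({
    "pitch": {"low", "mid"},
    "rate": {"slow"},
    "seat": {"behind", "left"},
}))  # -> ['A', 'C']; seating memory then narrows it the rest of the way
```

Notice that no single fuzzy observation identifies anyone on its own; the point is that intersecting a few of them narrows things down fast.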

There’s more to this. Sometimes I’ll look away and guess at what they’re saying because I know their personalities, their interests, what they’re likely to say and talk about, opinions they’re likely to hold… I build Markov models for their sentence structures and vocabularies, and I’m pretty good at prediction… there’s a lot more here, but this is a breakdown of one specific aspect of the constant logic puzzles I solve in my head as a deaf person.
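
(If the Markov model bit sounds abstract: a toy bigram version, which just counts which words tend to follow which, shows the flavor of that prediction. The example sentences are invented; my mental version is unconscious and far messier.)

```python
# Toy bigram "Markov model": count word-to-word transitions in a
# colleague's past sentences, then guess their most likely next word.
from collections import Counter, defaultdict

def train(sentences):
    model = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def predict_next(model, word):
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical colleague who talks about budgets a lot:
model = train([
    "we need to revisit the budget",
    "the budget needs to cover the new lab",
    "we should plan the budget review today",
])
print(predict_next(model, "budget"))  # -> "needs" (a plausible guess)
```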

In terms of my pure-tone audiogram, I shouldn’t be able to do what I do — and it’s true, I can’t from in-the-moment audio alone. But combined with a lot of other things, including a tolerance of extreme cognitive fatigue? Maybe. In the “zebra puzzle,” which is where I drew the example logic puzzle clues from at the beginning, there is a long series of clues that goes on and on… and then the questions at the end are “who drinks water?” and “who owns the zebra?” Neither water nor a zebra is mentioned in any of the clues above, so the first response might be “what the… you never said anything about… what zebra?” But you can figure it out with logic. Lots of logic. And you have the advantage of knowing that the puzzle is a logic puzzle and that it ought to be solvable, meaning that with logic, you can figure out who owns the zebra. In the real world… nobody tells you when something could become a logic puzzle, and you never know whether it’s solvable. But I try anyway.


Some thoughts that I don’t want to have, regarding people getting shot


This post could be written by a lot of people who belong to a lot of groups. This post has been written by a lot of people who belong to a lot of groups, and you should find and read those things too. This just happens to be the post that I can write, about a group that I belong to also.

Trigger warnings: audism, racism, discussions of police-related violence/shooting, probably some other stuff.

A number of (hearing) friends from a bunch of my (different) social circles recently sent me — almost simultaneously — links to news stories about Deaf people getting killed by cops who couldn’t communicate with them.

This is nothing new. It’s been happening for ages. Someone with a gun gets scared and pulls the trigger, and someone else is dead. Maybe that person is Deaf. Maybe that person is Black. In any case, that person is now dead, and that’s not okay. (Maybe that person is both Deaf and Black, and we mention the second part but not the first. That’s disability erasure that, statistically, correlates highly with race; that’s also not okay.)

I’ve been deaf as long as I can remember, and I’ve known these stories happened for a long, long time. But this is the first time I’ve watched them from inside the conversations of a Deaf community — for some definition of “inside” that includes confused mainstreamed-oral youngsters like me who are struggling to learn ASL and figure out where they fit.

I’m a geek, a scholar, and an academic. My last long string of blog posts is part of a draft chapter on postmodernist philosophy as a theoretical language for describing maker/hacker/open-source culture within engineering education, and honestly… that’s what I’d rather write about. That’s what I’d rather think about. That’s what I’d rather sign about. Not people getting shot. A large portion of my Deaf friends are also geeks and scholars — older and more experienced than me, with tips on how to request ASL interpreting for doctoral defenses and faculty meetings, how to use FM units to teach class, how to navigate accessibility negotiations when your book wins awards and you get international speaking invitations. They are kind and brilliant and passionate and wonderful; I love them, and I want to be one of them when I grow up.

And we are geeks when we talk about these deaths, too. Kind and brilliant and passionate and wonderful. And my heart bursts with gratitude that I know these people, because it’s such a thoughtful and complex discussion, from so many perspectives, drawing on so many historical, theoretical, personal, etc. threads… the narratives I love, the sorts of tricky complexity that brought me back to graduate school and sent me hurtling down years of studying intricate threads of thought so I could better appreciate the mysteries that people and their stories are.

And I can’t stop thinking that any of us — any of these kind and brilliant and passionate and wonderful geeks in the middle of these great and rather hopeful discussions about complex societal dynamics and how to improve them — we could be taken out by a single bullet from a cop who doesn’t know.

I’ve learned a lot of things about being a deaf woman of color in the past year. I’m lucky; I look like a “good” minority, a white-skinned Asian who can play to stereotypes of quiet submission — but even then. And I know lots of people who can’t. And one of the first things I learned was how to stop pretending to be hearing all the time — especially in any interaction involving someone with a badge or guns (airports, traffic stops, anything). This isn’t just because it’s exhausting to lipread, but because it can be dangerous to piss off someone who thinks you’re ignoring them out of malice or attitude rather than the truth that you simply didn’t hear them shouting.

I first learned this sort of thing in undergrad, when some of my engineering college friends were horrified by stories of some other student from some other engineering college arrested by panicky cops for carrying around an electronics project. I thought they were upset for the same reasons I was — because it was a stupendous overreaction on the part of the cops and the school. And it was. But they were also worried because — what if that had been me? And the cops had shouted stop, and turn around, and put down the device — and I didn’t hear them?

“It’s fine. I mean, I’m deaf, but I can talk — I would explain things. I would figure it out,” I told them at the time. “I’m smart, you know.” As if that would protect me, as if I could compensate that way — because I’d compensated that way for so much, for all my life.

But being smart doesn’t make you more hearing — to hear shouts from people pointing guns at you — or less dead, once they fire them. And being smart doesn’t spare you from assumptions people make because of how you’re navigating tradeoffs. If you’re a PhD who decides to go voice-off while getting through airport security because it means you’re less likely to get shot, you’re going to get treated like a very small and stupid child. Maybe not every time, and not by everyone, but enough that swallowing your pride becomes a normal part of flying. No written note, no typed message, no outward display of intelligence that I’ve been able to figure out has made someone recognize the intellectual identity I’m trying to communicate when they’ve already assumed it isn’t there.

And being smart doesn’t mean you can think your way out of other people’s assumptions and their ignorance and their inability to see who you are. And being smart isn’t what gives your life its value; being human does. (Being smart doesn’t make you more special than people who don’t rank as high on whatever flawed metric of smartness you or the world decide to use.) And being kind and brilliant and passionate and wonderful does not exempt you from being heartbroken when the world is broken, and afraid because it hurts you, and your friends, and people like you, and people like your friends, for a lot of different reasons that shouldn’t matter in the world, but do.

I wish I were more eloquent, but I can’t think about this too much and still do things like finish my doctoral dissertation this week. I wish I could speak to how this isn’t just about violence against Deaf and disabled people, how I’m not just speaking up right now because I happen to belong to those groups too — this breaks my heart when it’s Black people and queer people and Christian people and female people and trans people and… people. It’s mostly that I can speak a little bit more readily from inside groups I’m in, and that I have a little bit of time to vent this out right now, between writing a section on “postmodern narrative sensemaking as plural” and another on “narrative accruals as co-constructing communities of practice.”

Back to the world, I guess. Back to writing my stories of the gorgeousness and complexity and hope that always lives inside the world that wins my heart and breaks it all at the same time.