
Seeing myself in the (literal) mirror at NTID’s IT office


Some of you already know (and my previous blog post has hinted) that I’m working in a Deaf environment for the first time in my life — the Center on Access Technology (CAT, pronounced like the animal and signed as an acronym) in Rochester, NY. There’s far too much to say about this — I am glad to be here, it’s an incredible learning experience, and I often feel like a stranger in a strange land… but if there’s anything my training in writing and qualitative research has taught me, it’s the power of vignettes and thick descriptions of small moments. So that’s what I’ll start to share. This one is a very small moment, but it was one of the first things that struck me.

So I’m a new faculty member, trying to figure out how one connects to the internet, printers, and so forth, as one does. I’m hitting snags, so I walk over to the IT office inside NTID (basically, the Deaf college within RIT). As I’m waiting for the IT staffer to fiddle with my laptop and fix my connectivity issues, I look around. It’s an IT office, full of familiar-looking cords and bins and tables of acronyms pinned to the walls. I see the student workers perched in front of monitors, typing into a ticketing system.

And then I notice that all of the desks facing the wall have mirrors on that wall, behind the monitors. And my first thought is “oh, that’s nice – I guess it makes the room look bigger.” And then one student walks up behind another and begins to sign, and the second student turns around to smoothly engage them. And I suddenly remember: they’re all Deaf, too.

Like me, they can’t hear footfalls from behind. Like me, they would startle from their monitors with a sudden touch on the shoulder. The mirrors let you see someone approaching from behind, a gentle nudge of motion in your periphery, the visual equivalent of footsteps walking up. And all of this is set up so matter-of-factly, just… how it is, of course we put mirrors behind our monitors! and not as some odd flustered accommodation that treats me as a conundrum in the hearing world (“well, Mel can’t hear footsteps, because she’s deaf, so what do we do?”).

I’m used to having my existence in hearing spaces not forethought (“it never occurred to us that a deaf person might be interested in this event, so we didn’t make it accessible”). I’m used to having laborious forethought be the best-case scenario, where I’m a solitary trailblazing oddity (“we’re open to setting up captions for this; can you do the setting-up in your copious amounts of free time?”). It is strange to be in a place where my individual existence doesn’t need to be forethought, because the space has already been created and inhabited by — and expects to see more of — people like me. It is strange to, at least in this one significant way, not be the Other.

Of course, it’s more complex than that. Even NTID is by no means fully accessible (likewise with Gallaudet). The Deaf (and hard-of-hearing) communities are not homogeneous; not everything meets everybody’s needs. I’m not just Deaf, I’m lots of other things as well, and many of those things are still unexpected, unanticipated, not-forethought. There’s a lot of solitary trailblazing work to do here still.

But dang. A world that is accessible to me regardless of whether I’m there or not? A space that stays Deaf-friendly without me, whose Deaf-friendliness is not dependent on my constant nudging and performance of my life as a reminder that people like me exist? Approaches and solutions that go beyond the things my friends and I can think of on our own?

Whoa.


Talk notes: “Technologies that wake you up” from a DHH perspective


Today’s accomplishment: giving part of a (group) talk in my 4th language, and making people laugh both directly and through an interpreter. Watching the audience grin and nod and crack up in two waves was just this… super-gratifying experience — first the audience members who knew ASL, then the ones who were listening to the interpreter translate my signing into English, and I could just… track that.

Sure, I know there are still all these dysfluencies in my sign production. I’m not fully fluent yet, and I’m incredibly aware of that, and working hard on it. But to know that my personality, my sense of humor, can come through in ASL even to people who don’t sign — that’s a tremendous milestone I was afraid that I might never actually reach. It’s difficult to overstate how personally significant this accomplishment is for me — I’ve gone from “I will never learn sign language! I’m not one of those Deaf people!” to “I mean, okay, I guess I could learn it as… another language, because interpreting gives me so much that I just miss, but… I’m always going to speak for myself, especially in a work context with hearing people around,” to… well… this.

My talk notes follow. I wrote them, memorized them, and then deviated from them (as one does). The larger context is that my lab (which is basically a Deaf engineering design firm) is doing a series of consumer technology reviews. These aren’t technologies specifically designed for DHH people, but rather everyday technologies examined from a DHH perspective. For instance, other colleagues looked at various items from Nest, Alexa, etc. — and did you know lots of these devices, even the visual ones, have an audio-only setup process? Annoyance. Folks had to keep calling over their hearing spouses, asking their kids to come over and put on their CIs, etc., just to get through installation.

Anyway, my segment was on “technologies that wake you up,” because… well, I don’t own a house. And a substantial portion of our community is made up of students. And I sleep super deeply, and get uber-grumpy when I’m woken up against my will — just ask my parents; this is a lifelong known cause of Grouchy Mel.

  • most alarm systems are designed for hearing people and are based on sound
  • obviously doesn’t work so well for DHH
  • known problem: historically, all kinds of solutions – Rube Goldberg contraptions that drop heavy things, hearing humans (hi mom!) who will wake you up at the appointed time, praying that you’ll wake up before X and not be late
  • but now we have TECHNOLOGY!
  • I’ll examine several more modern systems for waking up DHH sleepers
  • First: Can I use “hearing” alarms and somehow make them better?
  • Residual hearing: amplify! plug into speaker system… okay, maybe this isn’t so great for hearing housemates, and it still doesn’t wake me up all the time.
  • Mechanical-only solutions: put phones inside convex objects to concentrate/amplify the sound. Definitely not loud enough for me.
  • Okay, another mechanical solution: set a phone alarm to vibration mode, put the phone on a thin, hard-walled, hollow, clattery object, close to the edge, above stuff that makes noise when things fall on it. Yeah, terrible idea. Not the most reliable solution, good luck getting up in the middle of the night without wrecking everything, and an alarm that relies on literally dropping your multi-hundred-dollar phone on the floor every day is maybe not the wisest.
  • Enter: specific devices! This is an alarm designed for DHH folks… how many of you have the Sonic Alert alarm clock? (hands go up)
  • Wakes people up in three ways. First: audio — the sound is customizable (frequency-set knob, volume-set knob)
  • “light flasher” which is an on/off outlet flasher, could plug anything in there
  • “bed shaker” which is an off-center load on a motor in a case (like cell phone vibrators)
  • It’s definitely effective at waking you up. Abruptly. Might not be the best for your mood for the rest of the day, but it works. (Insert explanation of sleep cycles here, with a lot of hamming it up)
  • Okay, but how about stuff that isn’t DHH-specific? Sound aside and vibration/tactile aside, what’s left as a way to wake folks up?
  • Smell and taste might not be useful for alarms (although the smell of tea makes me super happy when I wake up)
  • What’s left is sight
  • Did you know: most deaf people can see
  • Did you know: most hearing people can also see
  • Did you know: although sound might not work for both hearing and DHH folks, light might work for both
  • This is the idea behind the Philips Wake-up Light
  • Idea: you know how the Sonic Alert wakes you up abruptly? This one wakes you gently, like the sun coming through the windows
  • You set the time you want to be awake, and for a period of time before that, the lights will gradually turn on so that you’re sleeping more lightly and close to waking by the time the alarm rings (with the lamp at full brightness)
  • Gentle light wakeup is amazing (display, in contrast, the book cover of Alexander and the Terrible Horrible No Good Very Bad Day)
  • Except that it doesn’t always wake you up all the way, so you need a last-minute push-over into full consciousness
  • Alas, the pre-recorded audio settings on this alarm consist mostly of birdsong (from my perspective, “silence 1,” “silence 2,” “silence 3,” and “silence 4”)
  • I personally need a separate alarm to make the startle sound/vibration/light at the appointed time, but the wake-up light does get me to the point where being woken up by something else is pretty pleasant
  • Not a DHH-specific access issue, but the UI for button placement stinks
  • Alternative, if you already have Philips Hue lights: hack the Hue to be a wake-up light
  • Program the Hue! set something to turn on gradually at an appointed time (see the sketch after these notes)
  • Not as smooth as the Wake-up Light, which starts from zero and ramps up smoothly; the Hue turns on abruptly at its lowest brightness, which makes for a more jarring wake-up
  • For me: solves the problem of “the Wake-up light needs a tip-over”
  • And then Sonic Alert for mega-uber backup.
  • End the talk somehow and turn the floor back over to Brian.
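
For the curious: the Hue hack boils down to two API calls against the bridge on your local network. Here’s a minimal sketch, not the exact code I demoed — the bridge address, API username, and light ID below are placeholders you’d swap for your own:

```python
# Hypothetical sketch: fade a Philips Hue bulb from its dimmest
# setting to full brightness over 30 minutes, as a DIY wake-up light.
# HUE_BRIDGE_IP, HUE_USERNAME, and LIGHT_ID are placeholders; you get
# a username by pressing the bridge's link button and registering via
# its local API.
import requests

HUE_BRIDGE_IP = "192.168.1.2"       # your bridge's local address
HUE_USERNAME = "your-api-username"  # an authorized bridge user
LIGHT_ID = 1                        # the bedroom bulb

state_url = f"http://{HUE_BRIDGE_IP}/api/{HUE_USERNAME}/lights/{LIGHT_ID}/state"

# Start from the dimmest the bulb can go (bri ranges from 1 to 254).
requests.put(state_url, json={"on": True, "bri": 1})

# Ask the bulb to ramp to full brightness over 30 minutes.
# transitiontime is in deciseconds: 30 min * 60 s * 10 = 18000.
requests.put(state_url, json={"bri": 254, "transitiontime": 18000})
```

Fire that from anything that runs at the appointed time (cron, a scheduled task, or the bridge’s own schedules feature) and you have a poor-student’s Wake-up Light.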

Things that have made me happy lately: qual methods companion resource in ASL, my upcoming review of wake-up systems


These are random things that have made me happy today.

The first is that there is an ASL companion to a qualitative research methods textbook (focused on education and psychology, to boot!). I am already fascinated by the design and translation choices they have made in figuring out what it even means to have an ASL qual methods textbook… how multiple signers in the introduction switch between freezing in black and white when it’s not their turn, and becoming full-color and in-motion when it is, so your eye immediately knows who it’s following. How they’ve translated the phrase “chapter author” not as [chapter write-person], but rather as [chapter sign-person] — “they who have signed the chapters” rather than “they who have written down text for the chapters,” because the “text” is in ASL. These little subtle things that tell you that… yes, this is another culture; this is a different world. (Or in my framing: this is an alternate ontology.)

Second is that I am giving my portion of a technology review lecture series (1) in ASL and (2) with a fairly decent dose of snarky humor. My topic? “Wake-up systems for DHH sleepers.” I plan to cover…

  • Cheap Hacks for People With Residual Hearing: makeshift and wholly mechanical scoop and rattle amplifiers for phones (put them on big hard hollow things or in cones made of hard materials… like hotel ice buckets!) Also, reasons why these setups may not be the greatest for smartphone users and/or profoundly deaf deep sleepers like myself.
  • Sonic Alert’s Sonic Boom, which emits ear-splitting shrieks at modifiable frequencies, flashes lights (or rather, intermittently turns on and off power to an electrical outlet embedded into its side), and rumbles a bed-shaker. (And, in high school when I had it close to my CRT monitor, it degaussed my monitor. Anyone want to check out a cute little EMP source?) Also, a brief overview of the sleep cycle, and how this device, while highly effective at actually waking one up, is terrible for waking one up pleasantly.
  • Philips Wake-Up Light: awesome, but expensive-ish, and… let’s talk about the usability of the physical design, shall we? (And the choice of bird sounds as the wake-up recording, which… to me, are setting options of “silence,” “other silence,” and “more different silence.”)
  • Philips Hue system as a cheaper and more hack-ish way to replicate some of the functionality of the wake-up light

Gotta work on my content, draft, translate, and rehearse this. It’ll be fun.


Gallaudet Peer Mentoring Certificate Program: first impressions


Some of you already know this, but I’m participating in Gallaudet’s Peer Mentoring Certificate Program, which trains adults with hearing loss on mentoring others with hearing loss. The original idea was for mentoring adults with acquired hearing loss (i.e. people who grew up hearing, and then became… not hearing). However, as someone who grew up oral deaf and knows how complex it can be to figure out the whole d/Deaf/HoH identity thing as a young, early-career adult… I also hope to work with folks like me.

And honestly, part of the reason I’m doing this is that I need this too. I do not have this figured out. Physiology does not come with a cultural/linguistic instruction manual. And if I’m going to explore this with my students and in my research, I darn well better prepare to explore this in ways that might go beyond… um… the usual professional/scholarly boundaries. We don’t ever fully separate our studies from ourselves — we just sometimes pretend we do. In this case, the professional and personal are so obviously interlinked that I need to be extremely thoughtful about how I do and don’t do them. Boundaries. They’re gonna happen.

So far, we’ve had a weekend at Gallaudet getting to meet each other in person — and then we meet in text chat once a week to discuss readings. The weekend meeting was super fun. The other members of my (tiny!) cohort are from all over the place, lots of diversity of experience — all of us are really good at getting through the hearing world, and have varying sorts of involvement in the HoH and Deaf worlds. Academics, engineers, doctors, HLAA officers, fluent signers, teachers of the Deaf, careers completely not-related to ASL/hearing/Deafness, curious non-signers, FM users, CI users, hearing aid users, people who prefer captions, people who prefer lipreading, people who prefer interpreting… so much fluidity! To my surprise, I found that I can codeswitch and mediate (read: “informally interpret”) way more fluently than I’d thought… turns out that when I’m not incredibly anxious about signing (and I’m anxious almost every single time I sign), my language skills increase considerably. (The anxiety bit is very much its own post; I may write it someday, I may not.)

As someone who is used to being the only non-hearing person in the room, it was definitely very, very weird (in a good way) to be in a room where there were people using so many different kinds of access. I do wish the quality of captions had been better; I was thankful for the great interpreters we had, and noticed a clear discrepancy between the quality of access provided by the two modalities (because of provider skill — we could have had lousy terps and a great captioner, and the situation would have been the other way around). I wonder what it was like for my classmates who don’t understand ASL and who were relying on captions. We all had to learn and practice advocating for our needs as the weekend went along, which — seriously, good skill to practice, especially in the context of mentoring other people with hearing loss (we’ll have to model this sort of behavior, and it starts with being able to do it ourselves).

Another good thing: when communication wrinkles came up — which they did, because the captioners dropped things, and the interpreters got tired, and the T-coil loop didn’t always work — we stopped, we worked to fix it, we didn’t just keep going and leave people out. We tried really, really hard to not just quietly tolerate it… we thanked each other for noticing, for asking. For some of us, it was a profound experience — some people had never been thanked for that before, especially in a world where asking people to repeat, etc. is often framed as “why are you so bothersome, you annoying deaf person, asking for things?” It was a good learning opportunity for all of us. A good chance for us to practice what we preach, with all the awkwardness and “but how do we account for this delay in what we’d planned to do?” that it entails.

Our first class this fall (it feels more like a lightweight reading group — compared to grad school, super chill!) is on hearing loss in America — lots of historical/cultural/legal overviews. I’m going to get caught up with those readings now, since it’s Sunday afternoon and I’m tired and want something light and fun to do. So we’ll see where this goes! I make no promises about regular updates, but if people ask, I’m more likely to blog about the program.


Oral deaf audio MacGyver: identifying speakers


Being oral deaf is like being MacGyver with audio data, except that the constant MacGyvering is normal since you do it for every interaction of every day. Posting because this seems interesting/useful to other people, although I’m personally still in the “wait, why are people so amused/surprised by this… does not everyone do this, is this not perfectly logical?” stage.

I was explaining how I use my residual hearing to sort-of identify speakers, using faculty meetings as an example. The very short version is that it’s like constructing and doing logic grid puzzles constantly. Logic grid puzzles are ones where you get clues like…

  1. There are five houses.
  2. The Englishman lives in the red house.
  3. The Spaniard owns the dog.
  4. Coffee is drunk in the green house.
  5. The Ukrainian drinks tea.
  6. The green house is immediately to the right of the ivory house.

…and so forth, and you have to figure out what’s going on by making a grid and working out that the Ukrainian can’t possibly live in the green house because they drink tea and the green house person drinks coffee, and so forth.
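
For fun, here’s what that elimination looks like in code — a toy, hypothetical sketch using a shrunken three-house puzzle rather than the real five-house one. The deduction above (“the Ukrainian can’t be in the green house”) falls straight out of the constraint checks:

```python
# Toy three-house version of the puzzle above, solved by brute force.
# Illustrative only -- the real puzzle has five houses and many more
# clues, but the elimination logic works the same way.
from itertools import permutations

houses = ["red", "green", "ivory"]

for people in permutations(["Englishman", "Spaniard", "Ukrainian"]):
    for drinks in permutations(["coffee", "tea", "milk"]):
        person_in = dict(zip(houses, people))
        drink_in = dict(zip(houses, drinks))
        if person_in["red"] != "Englishman":
            continue  # clue: the Englishman lives in the red house
        if drink_in["green"] != "coffee":
            continue  # clue: coffee is drunk in the green house
        if any(person_in[h] == "Ukrainian" and drink_in[h] != "tea"
               for h in houses):
            continue  # clue: the Ukrainian drinks tea
        # Anything that survives has the Ukrainian outside the green
        # house -- the elimination described above happens automatically.
        print(person_in, drink_in)
```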

Now the long explanation, in the context of being oral deaf. Some background: I’m profoundly deaf, with some low-frequency hearing; I use hearing aids and a hybrid CI (typically the CI plus one hearing aid). Generally speaking, I can’t actually hear enough to identify people through voice alone — but I can say some things about some attributes of their voice. For instance, I can tell (to some approximation) if a singer is in-tune, in-rhythm, and in control of their voice, and I can tell the difference between a low bass and a first soprano… but I wouldn’t be able to listen to a strange song and go “oh, that’s Michael Bublé!” (My hearing friends assure me that his voice is quite distinctive.)

However! When I know people and have heard their voice (along with lipreading and context) for a while, I do learn which perceivable attributes their voices do and don’t have. And even if I’m not using my residual hearing/audio-related gadgetry to get semantic information (i.e. the words someone is saying) because I have better alternatives in that context (interpretation, captioning), I will still want audio…

…and I will pause for a short sidebar right now, because it might seem, to hearing people, that this is the only logical course of action — that hearing more is always good for understanding more. It isn’t. Extra information is only information if it’s worth the mental effort tradeoff to turn it into useful data; otherwise, it’s noise. It’s the same reason you would probably be happy if the background noise in a loud bar went away while you were talking to your friend. That background noise is “extra data,” but it’s not informative to you and just takes more effort to process it away.

In my case — and the case of my deaf friends who prefer to not use residual hearing when there’s another access option available — we’re patching across multiple languages/modalities on a time delay, and that triggers two competing thought streams. If you want to know what that feels like, try to fluently type a letter to one friend while speaking to another on a different topic. Physically, you can do it — your eyeballs and hands are on the written letter, your ears and mouth are in the spoken conversation — but your brain will struggle. Don’t switch back and forth between them (which is what most people will immediately start to do) — actually do both tasks in parallel. It’s very, very hard. In our case, one stream is lossy auditory English as the speaker utters something, and the other is clear written English or clear ASL visuals some seconds behind it. (Assuming your provider is good. Sometimes this data stream is… less clear and accurate than one might like.) Merging/reconciling the two streams is one heck of a mental load… and since we *can* shut off the lossy auditory English as “noise” rather than “signal,” sometimes we do.

Anyway, back to the main point. Sometimes I don’t want the audio data for semantic purposes — but I want it for some other purposes, so I’ll leave my devices on. Oftentimes, this reason is “I’d like to identify who’s speaking.” Knowing who said what is often just as important as what’s being said, and this is often not information available through that other, more accessible data stream — for instance, a random local interpreter who shows up at your out-of-state conference will have no idea who your long-time cross-institutional colleagues are, so you’ll get something like “MAN OVER THERE [is saying these things]” and then “WOMAN OVER THERE [is saying these things]” and then try to look in that direction yourself for a split-second to see which WOMAN OVER THERE is actually talking.

This is where the auditory data sometimes comes in. I can sometimes logic out some things about speaker identity using my fuzzy auditory sense along with other visually-based data, both in-the-moment and short-term-memorized.

By “fuzzy sense,” I mean that auditorily — sometimes, in good listening conditions — I can tell things like “it’s a man’s voice, almost certainly… or rather, it is probably not a high soprano woman.” By in-the-moment visual data, I mean things like “the person speaking is not in my line of sight right now” and “the interpreter / the few people who are in my line of sight right now are looking, generally, in this direction.” By short-term-memorized visual data, I mean things like “I memorized roughly who was sitting where during the few seconds when I was walking into the room, but not in great detail because I was also waving to a colleague and grabbing coffee at the same time… nevertheless, I have a rough idea of some aspects of who might be where.”

So then I think — automatically — something like this. “Oh, it’s a man now, and not in my line of sight right now, and that has two possibilities because I’ve quasi-memorized where everyone is sitting when I walked into the room, so using the process of elimination…”

Again, the auditory part is mostly about gross differences, like bass voices vs. sopranos with no background noise. Sometimes it’s not only about what I can identify about voice attributes, but also about what I can’t — “I don’t know if this is a man or a woman, but this person is not a high soprano… also, they are not speaking super fast, based on the rhythm I can catch. Must not be persons X or Y.”

For instance, at work, I have colleagues whose patterns are…

  1. Slow sounds, many pauses, not a soprano
  2. Super fast, not a bass, no pauses, machine gun syllable patterns
  3. Incredibly variant prosody, probably not a woman but not obviously a bass
  4. Slower cadence and more rolling prosody with pauses that feel like completions of thoughts rather than mid-thought processing (clear dips and stresses at the ends of sentences)
  5. Almost identical to the above, but with sentences that have often not ended, but pauses are occurring and prosodic patterns are repeating and halting and repeating

These are all distinctive fingerprints, to me — combined with knowing where people are sitting, they give me decently high confidence in most of my guesses. And then there are people who won’t speak unless I’m actually looking at them or the interpreter or the captioning, and that’s data too. (“Why is it quiet? Oh! Person A is going to talk, and is waiting for me to be ready for them to speak.”)
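
If I had to write the elimination down, it would look something like this — entirely hypothetical names and attributes, and the real version runs fuzzier and much faster in my head:

```python
# Hypothetical sketch of speaker elimination: each colleague gets a
# "fingerprint" of the coarse voice attributes I *can* perceive, and
# each observation prunes the candidate list.
colleagues = {
    "colleague 1": {"pace": "slow", "register": "not-soprano", "pauses": "many"},
    "colleague 2": {"pace": "fast", "register": "not-bass", "pauses": "none"},
    "colleague 3": {"pace": "medium", "register": "not-bass", "pauses": "some"},
}

def eliminate(candidates, attribute, observed):
    """Keep only candidates whose fingerprint matches what I perceived."""
    return {name: fp for name, fp in candidates.items()
            if fp[attribute] == observed}

# Perceived: machine-gun syllables, no pauses. Process of elimination:
candidates = eliminate(dict(colleagues), "pace", "fast")
candidates = eliminate(candidates, "pauses", "none")
print(list(candidates))  # ['colleague 2']
```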

There’s more to this. Sometimes I’ll look away and guess at what they’re saying because I know their personalities, their interests, what they’re likely to say and talk about, opinions they’re likely to hold… I build Markov models for their sentence structures and vocabularies, and I’m pretty good at prediction… there’s a lot more here, but this is a breakdown of one specific aspect of the constant logic puzzles I solve in my head as a deaf person.

In terms of my pure-tone audiogram, I shouldn’t be able to do what I do — and it’s true, I can’t from in-the-moment audio alone. But combined with a lot of other things, including a tolerance of extreme cognitive fatigue? Maybe. In the “zebra puzzle,” from which I drew the example logic puzzle clues at the beginning, there is a series of clues that goes on and on… and then the questions at the end are “who drinks water?” and “who owns the zebra?” Neither water nor zebra is mentioned in any of the clues above, so the first response might be “what the… you never said anything about… what zebra?” But you can figure it out with logic. Lots of logic. And you have the advantage of knowing that the puzzle is a logic puzzle and that it ought to be solvable, meaning that with logic, you can figure out who owns the zebra. In the real world… nobody tells you something could become a logic puzzle, and you never know if it’s solvable. But I try anyway.


Some thoughts that I don’t want to have, regarding people getting shot


This post could be written by a lot of people who belong to a lot of groups. This post has been written by a lot of people who belong to a lot of groups, and you should find and read those things too. This just happens to be the post that I can write, about a group that I belong to also.

Trigger warnings: audism, racism, discussions of police-related violence/shooting, probably some other stuff.

A number of (hearing) friends from a bunch of my (different) social circles recently sent me — almost simultaneously — links to news stories about Deaf people getting killed by cops who couldn’t communicate with them.

This is nothing new. It’s been happening for ages. Someone with a gun gets scared and pulls the trigger, and someone else is dead. Maybe that person is Deaf. Maybe that person is Black. In any case, that person is now dead, and that’s not okay. (Maybe that person is both Deaf and Black, and we mention the second part but not the first. That’s disability erasure that, statistically, correlates highly with race; that’s also not okay.)

I’ve been deaf as long as I can remember, and I’ve known these stories happened for a long, long time. But this is the first time I’ve watched them from inside the conversations of a Deaf community — for some definition of “inside” that includes confused mainstreamed-oral youngsters like me who are struggling to learn ASL and figure out where they fit.

I’m a geek, a scholar, and an academic. My last long string of blog posts is part of a draft chapter on postmodernist philosophy as a theoretical language for describing maker/hacker/open-source culture within engineering education, and honestly… that’s what I’d rather write about. That’s what I’d rather think about. That’s what I’d rather sign about. Not people getting shot. A large portion of my Deaf friends are also geeks and scholars — older and more experienced than me, with tips on how to request ASL interpreting for doctoral defenses and faculty meetings, how to use FM units to teach class, how to navigate accessibility negotiations when your book wins awards and you get international speaking invitations. They are kind and brilliant and passionate and wonderful; I love them, and I want to be one of them when I grow up.

And we are geeks when we talk about these deaths, too. Kind and brilliant and passionate and wonderful. And my heart bursts with gratitude that I know these people, because it’s such a thoughtful and complex discussion, from so many perspectives, drawing on so many historical, theoretical, personal, etc. threads… the narratives I love, the sorts of tricky complexity that brought me back to graduate school and sent me hurtling down years of studying intricate threads of thought so I could better appreciate the mysteries that people and their stories are.

And I can’t stop thinking that any of us — any of these kind and brilliant and passionate and wonderful geeks in the middle of these great and rather hopeful discussions about complex societal dynamics and how to improve them — we could be taken out by a single bullet from a cop who doesn’t know.

I’ve learned a lot of things about being a deaf woman of color in the past year. I’m lucky; I look like a “good” minority, a white-skinned Asian who can play to stereotypes of quiet submission — but even then. And I know lots of people who can’t. And one of the first things I learned was how to stop pretending to be hearing all the time — especially in any interaction involving someone with a badge or guns (airports, traffic stops, anything). This isn’t just because it’s exhausting to lipread, but because it can be dangerous to piss off someone who thinks you’re ignoring them out of malice or attitude rather than the truth that you simply didn’t hear them shouting.

I first learned this sort of thing in undergrad, when some of my engineering college friends were horrified by stories of some other student from some other engineering college arrested by panicky cops for carrying around an electronics project. I thought they were upset for the same reasons I was — because it was a stupendous overreaction on the part of the cops and the school. And it was. But they were also worried because — what if that had been me? And the cops had shouted stop, and turn around, and put down the device — and I didn’t hear them?

“It’s fine. I mean, I’m deaf, but I can talk — I would explain things. I would figure it out,” I told them at the time. “I’m smart, you know.” As if that would protect me, as if I could compensate that way — because I’d compensated that way for so much, for all my life.

But being smart doesn’t make you more hearing — to hear shouts from people pointing guns at you — or less dead, once they fire them. And being smart doesn’t spare you from assumptions people make because of how you’re navigating tradeoffs. If you’re a PhD who decides to go voice-off while getting through airport security because it means you’re less likely to get shot, you’re going to get treated like a very small and stupid child. Maybe not every time, and not by everyone, but enough that swallowing your pride becomes a normal part of flying. No written note, no typed message, no outward display of intelligence that I’ve been able to figure out has made someone recognize the intellectual identity I’m trying to communicate when they’ve already assumed it isn’t there.

And being smart doesn’t mean you can think your way out of other people’s assumptions and their ignorance and their inability to see who you are. And being smart isn’t what gives your life its value; being human does. (Being smart doesn’t make you more special than people who don’t rank as high on whatever flawed metric of smartness you or the world decide to use.) And being kind and brilliant and passionate and wonderful does not exempt you from being heartbroken when the world is broken, and afraid because it hurts you, and your friends, and people like you, and people like your friends, for a lot of different reasons that shouldn’t matter in the world, but do.

I wish I were more eloquent, but I can’t think about this too much and still do things like finish my doctoral dissertation this week. I wish I could speak to how this isn’t just about violence against Deaf and disabled people, how I’m not just speaking up right now because I happen to belong to those groups too — this breaks my heart when it’s Black people and queer people and Christian people and female people and trans people and… people. It’s mostly that I can speak a little bit more readily from inside groups I’m in, and that I have a little bit of time to vent this out right now, between writing a section on “postmodern narrative sensemaking as plural” and another on “narrative accruals as co-constructing communities of practice.”

Back to the world, I guess. Back to writing my stories of the gorgeousness and complexity and hope that always lives inside the world that wins my heart and breaks it all at the same time.


Grandparent communications


From the category of “thoughts that won’t leave your mind until you write them down”: I’m taking a brief writing break from my thesis to get some thoughts out, and then… back to it.

When I was little, my grandparents were largely Phones To Shout Into. They lived in the Philippines (later, my mom’s parents moved to Seattle). I was growing up in Chicago. We called each other on special occasions — Christmas, New Year’s, maybe birthdays — and it was always short, because long distance calls were pricey.

There’s no way to lipread on a phone call, so my general impression of my grandparents came from shooting bewildered looks at nearby parents, who would explain the blurry audio and prompt me with the proper answer.

“Hello, merry Christmas! (Mom: “They’re asking how is school.”) Uh, school is good! Uh, yeah! I love you too. Here’s mom! Bye!”

Not much in the way of conversation. More like hoping I could guess the right phrase to say into the phone, successfully enough and long enough that they would let me go. I knew they loved me, and they knew I loved them, but it’s hard to get to know someone like that.

Fast forward ten years later. It was my last semester of college, and it had been a good day. After spending hours volunteering at the tech nonprofit that would later become my first job after college, I had reluctantly logged out of an office flooded with rapid-fire English text conversations — computing discussions, made accessible to me for the first time by a distributed international group of contributors who happened to choose text chat as their collaboration medium. Warmed by the unfamiliar fuzzy feeling of full-throttle, large-scale communication, I was walking to the train on rain-slicked Boston cobblestones. It was a warm night.

My phone rang. I recognized my cousin’s name and was momentarily disgruntled at my family. “They know I don’t do phone calls, I can’t hear them.” And then: “Oh crap, I don’t do phone calls. Maybe something is wrong.”

“Hello?”

My cousin said something on the other side. I knew he would be speaking English, but the words didn’t make…

“What did you say?”

He said something again. He sounded serious — his prosody was far slower and more somber than I was used to.

“I’m sorry, I don’t…”

This time, I thought he might have said our grandfather — our Chinese dialect’s word for grandfather. I wasn’t sure. I said the word, hoping I’d guessed correctly. He repeated… something that was also probably that word. I thought.

I don’t know how many times I made my cousin repeat it over and over: our grandfather was dead. (“What?”) Our grandfather had died. (“I didn’t catch that last…”) He had a massive heart attack. (“Something about our grandfather?”) It was sudden and unexpected. (“Can you repeat…”) There was nothing anyone could do even once the ambulance arrived. (“Hang on, can you back up? Are we talking about our grandfather?”)

We gave up, hung up, and I made the long transit trek back to my suburban college dorm, wondering if our grandfather was dead, hoping I’d parsed the phone audio incorrectly, deciding whether I wanted to email my parents and ask if he was alive and risk looking like an idiot.

Eventually, I reached my parents over email. He had died. I was to fly home for his funeral and sit while people mourned around me in languages I didn’t understand. Sometimes it was in English, but it’s hard to lipread people when they’re crying.

Fast forward a decade later. My grandmothers both live in the Philippines again. This time, we have Skype. I’m sitting beside my youngest cousin, and she’s the one relaying phrases, prompting my answers.

“Hello! (Cousin: “She’s asking how is school.”) Uh, school is good! Uh, yeah! I love you too.”

This time, I could be more eloquent about school; at the age of thirty, far more so than at the age of ten, I’ve learned to use my hyper-fluency in spoken English to cover for my inability to hear it. But our grandmother is not a native English speaker, and that language has grown harder for her over time — so I need to dial my language to a different setting than when I am sparring verbally in academia — and the awkward 10-year-old comes out.

I’m the canary in the coal mine for my family’s intergenerational communications, or at least that’s what it often feels like. When my grandmother’s English grammar started to slip due to the mental vagaries of age, I started straining more and more to understand her — without clear sentence structures to guess at, the clues I could glean from lipreading ceased to make sense, and at some point a wall slammed shut before me. In contrast, my cousins and my aunts and mother, brother, father, uncles… they get her words, unscrambling them so slightly and so fast that they barely notice they’re doing it. There are conversations I can’t be in anymore; there are thickets I cannot, with all my intellect and skill with language, force my way through.

They say she’s still quite clear in our Chinese dialect, her native language, and I believe them. But I can’t lipread that. I’m only oral deaf in English, and in German, and a little bit in Mandarin and Spanish… languages with books, languages with grammars and phonologies I can learn in clear text first, and the fuzzy, lossy mouths of speakers second. And my family is made of people, not of books.

Sometimes — often — I can’t speak to my grandparents. But I can write — and so I write. Not so much to them now, but sometimes for them.

Hello! School… school is hard now. Hard in ways I never thought it would be hard. But I know how much it means to you that someone in the family will get a Ph.D. You might not understand the words I’m writing, but you do understand that part of why I’m writing them is in appreciation of the generations’ worth of sacrifice and planning that it took to get us here.

I wish that there was more that I could say to you directly. I wish there was more of your world that I could understand, and vice versa. I wish it didn’t cost so much for me to try with spoken language, but it does, so I will do it indirectly with a written one.

Uh, yeah! I love you too. So yeah, here’s… back to my dissertation.

Bye.


Reading the labels of canned beans


My friend Sheila recently shared this article about two (hypothetical) deaf kids of hearing families at the dinner table. It’s absolutely worth a read. Both children in the story are about 8 years old, and go to a school where they’re taught in ASL; both are bilingual in spoken English and ASL, and both have hearing parents who care for them greatly and want only what will give their children a better life. There are no bad guys here.

In this fictional story, the parents of “Sophia” sign, and use ASL with her at the dinner table; family mealtimes are full of learning and interaction for her, active participation, question-asking, learning more about the world, about her parents’ lives, telling them about hers. The parents of “Caleb” don’t, because they think it’s important that he learn to interact with the hearing world. Caleb learns to keep his CI on to keep his parents happy, even if he doesn’t understand. He learns how to pretend. He loves them. He knows they love him. It’s not a bad childhood, honestly.

And yet.

“Over time, Caleb has learned that it’s best to pretend to understand more than he does, so he will annoy them less… [at dinner, when his parents smile,] Caleb smiles as well, because he likes to see his parents happy, even if he knows nothing about what they’re saying. He has not learned anything from this dinnertime, but he doesn’t usually, so he does not think anything of it…  Caleb clears his plate and leaves the room to brush and ready himself for bed. He is not unhappy, and is in fact mostly fine, but there is a subtle quietness in his heart that he doesn’t completely understand. He can’t identify it yet.”

I grew up closer to Caleb, without the CI, without other d/Deaf/HoH kids around, without ASL exposure, and with a family that regularly creamed-up English sentences into a creole’d rush of Southeast Asian languages. I know Caleb is a fictional character, but his experience hits close to mine in many ways, although I exhibited no visible academic delays (plenty of social ones, though — and while I was always at the top of my classes as a kid, I wonder what sort of learner I might have been with full access to the world… but that’s a complex experiment that can’t be re-run in any case, and I could have turned into a hypersocial party girl who thought studying was boring, too).

When I was a kid, one of the running family jokes was that I would read anything, anytime. Literally. Anything. I’d grab a can of beans out of the pantry and read the nutritional labels, and I honestly would find it fascinating (“whoa, ascorbic acid is in everything!”). Everybody found it weird and hilarious and cute; I thought it was pretty funny, too. I didn’t know why I kept wanting to read at dinner — and really, all the time — but I just did. It felt like I always had to, like the books were food and I was always starving.

The joke’s still funny, but now it’s also sad — looking at that family joke now, the books were food, and I was always starving. I look back now and see a little kid so ravenous for information that she scavenged the best of what was available to her, which was… ingredient labels. On canned beans. In hindsight, I understand this as tiny-Mel’s attempt to make family mealtime (and all times, for that matter) an information acquisition opportunity, since most of the discussion was… not entirely a closed book, but a heavily blacked-out, liquid-smeared, highly effortful one to read. In many ways, I made my own learning experiences at dinner, got my own content to the table when I was allowed or was able to sneak it.

Sometimes that content was a book I’d try to hide under the table and read until my parents scolded me for not “being present with the family” at dinner, which I could only do through lipreading. Lipreading is exhausting and inaccurate — I say this now as an adult with advanced degrees and a high degree of metalinguistic fluency and topical knowledge with which to guess, so it was probably even worse for a small child who was still developing language skills and vocabulary, and had less knowledge of the world to guess with.

Books are hard to hide under the edges of the table, so it wasn’t usually books. It was typically ingredients. Cereal boxes. The aforementioned cans of beans. Or advertising catalogues that had arrived in the mail. (I became hyper-aware of what I’d now call a typology of the rhetoric of bulk mailings.) This was the information about the world that I could make sense of as a child.

This is not too different from the information I can usually make sense of during hearing dinners now… the difference is that I have more coping strategies and use my speaking privilege like a powerfully wielded machete to get myself into discussions; I have more capacity to moderate and strategize my use of energy and brainpower to focus on important cues and topics; and I have a far richer mental model of the world and all of the ideas in it that I can use to make sense of the spots of information I am able to extract. The information, though… it’s still a crawl, a drip, a broken stream.

I remember this past fall when I was invited to the house of a Deaf family I’ve come to know in town, along with a bunch of other Deaf folks who were mutual friends of ours from church. My ASL receptive skills, at that point, were enough to make sense of most conversation — not to understand it perfectly, but it had surpassed lipreading in terms of cost/benefit (energy expenditure vs accuracy) tradeoff. I wasn’t really signing much myself, yet. I was a linguistic toddler.

I remember sitting in their kitchen and just watching… people… talk. About… local restaurants. Their jobs. Their kids. The snacks. Picking up their kids from school. Job hunting. Whether a kid was allowed to have another piece of chocolate. Topics shifted, nothing was particularly important, nothing was… it was… the most insignificant conversational content ever. And I sat there, wide-eyed, thinking: oh, this is how it is — this is a type of conversation I have never seen — this is what people talk about after meals, this is…

This is the rhetoric of everyday life, the stuff I kept on getting error pages for during my childhood attempts to access it — the “oh, it’s not important” response, or the classic of “I’ll tell you later” with a later that never came. This is the experience of an ethnographer plunged into a foreign culture, but the culture I was plunged into was actually… my own, except with (partial) access to the language for the first time.

“Making the familiar strange” is a common phrase used in training qualitative research students, but I think I might always live inside a world that’s somehow strange to me — as do we all, but I am very much aware of this particular way in which the world is strange to me because of how I grew up with communication.

That’s all I’ve got for now.


Notes from a DeafSpace talk by Hansel Bauman, plus going voice-off


I went to Hansel Bauman’s talk on DeafSpace in Boston last Wednesday. Here are a few of my notes, lightly edited.

First, I was struck by Bauman’s presentation/interaction choices; they were good reminders that the medium is so inextricably part of the message. He started his presentation in ASL, directly addressing the deaf folks in the audience and letting us know that he would be voicing most of the presentation, then switched modalities. After the talk and Q&A (all interpreted), he came out to the cluster of signers that had formed at that point, and joined us in conversation. It saddens me that this is so rare as to be delightfully surprising, but it was nice to be acknowledged in a non-othering way.

I also enjoyed their starting question (which I paraphrase here since I didn’t catch the exact wording): If there were no people on Gallaudet’s campus, how would we tell that it was a Deaf Space just by walking around? (Starting answer: “Huh. We couldn’t.”) This came after some discussion on how taking up space is the first proof of existence (that’s a quote from someone whose name I didn’t catch), and having to constantly adapt the world is a material dialogue of “you’re not supposed to be here.”

On this note, I also appreciated the subtlety of observations the architects made about usage of space, backed in obvious, concrete ways with film data. For instance, they showed how people shuffled tables/chairs into a circle, dealt with chairs with arms, looked at each other while walking down a street engaged in conversation, shifted out of direct lighting, and so on. These are largely adaptations so commonplace that one might not think to address them; it’s just what we do when we’re used to worlds never quite fitting us. Their effective use of film made me think about my intermittent hopes to use video to back up my own research-related observations; lightweight documentary filmmaking may be a skill to develop more later.

There are two things ongoing in DeafSpace work that I’d love to keep an eye open for. First is the pedagogy used to bootstrap the d/Deaf/HoH users from Gallaudet into engagement with the design process, which feeds into my interest in teaching human-centered design in general. The second is the pattern language they’re developing from the DeafSpace projects that have gone up and are going up. (Plus: using the term “pattern language” correctly already earns bonus points in my book — but these are architects, so they would use that term correctly, if anyone would.)

As a side note, this event was also one of my first experiences choosing to stay voice-off in a mixed group of signers and non-signers, instead of simcomming, asynchronously translating myself into voice, or some other English-dominant modality that refuses the possibility of another person voicing me. I’m used to speaking my own English, and I’m not (yet?) a fluent signer, so even the thought of someone else voicing me is unnerving and distracting. Plus, if I’m in a conversation that is fundamentally in English… I’m going to be in English too, because that is my native language, and… why wouldn’t I?

But this time, the signed conversation was way more interesting to me than the spoken one — which is a rarity for me. And I could join it directly, just as I usually join English conversations as directly as I can. So… I did. I threw my CI and hearing aid in my pockets, threw my attention as far away from auditory channels as possible, and dove into the conversation with Bauman. I was vaguely aware that, at times, different people intermittently and spontaneously voiced me as needed for non-signing hearing people to understand the conversation (which I was pretty quiet for, because the other signers had far more interesting things to say). I had to very, very actively try not to look at them to lipread how they were voicing me (I can tell when people are talking, but not what they’re saying). It was a good experience; it was also a growth experience, but it was a comfortable sort of discomfort because of the dynamics and who was around (a few other Deaf people I already knew).

Turning my voice on again afterwards took… a surprising amount of effort, which is an effect that still makes me pause and ponder. The different kinds of effort it takes to exist in different ways of being are… intriguing. I will use the word intriguing here, for now.


Thoughts on my family’s language


A recent Facebook thread had me thinking about my relationship with the languages spoken by my family. Almost all my relatives speak English to some degree, with the native/fluent proportion increasing with later generations, as immigrant generations tend to go. But we have others, including the regional Chinese-Filipino dialect I would identify as “my family’s language.”

My family’s language, but not mine. Probably never mine. In some ways, I have a heritage language I may never speak. I still can’t successfully lipread my family’s language, and can only speak a few childish words of it — brush your teeth, time to eat, go to bed. English had far more resources to learn with: libraries full of books I could read, drills on vocabulary and grammar so I had patterns I could guess at, speech therapists trained for the phonemes of that tongue. And so that was my language.

I’m used to being surrounded by that dialect when I’m home sometimes, and even more so when we’re in the Philippines. What I’m used to is not being able to understand it. That’s… just my experience with it. It’s ours, but it’s mine in a different way than it is theirs.

But if you asked what my family’s language is, I would still point to our dialect. And I want to see it preserved, and I want my own children (who are likely to be hearing) to someday learn it from my parents, aunts, uncles, brother, and cousins, even if I myself may never speak it. Many parents want to give their kids something they didn’t have themselves, and this is one of mine.