Posts that are hearing-ish

Why I can’t (yet) teach engineering in ASL


I’m a Deaf engineering professor. I want to teach my engineering college classes in ASL. This is a goal I have for the next couple decades of my engineering faculty career — to teach my way through all the core undergraduate engineering courses, plus the required undergraduate ones in my field of electrical/computer engineering (ECE) — in ASL.

Right now, this is not possible.

This might seem strange, because — I’m Deaf! I sign! I’ve taught engineering at the college level for years! But nope: being an expert at teaching a topic and being fluent in a language… does not mean you’re automatically able to teach that topic in that language. You need to be fluent in that topic, in that language.

And for that to be possible, vocabulary needs to exist. You need ways to efficiently express disciplinary concepts in the target language (in this case, ASL). Vocabulary is a key part of language; language has to be there for communication to happen; communication must happen for teaching to occur. And right now, I don’t have (good) signs for basic concepts such as “voltage,” which is an idea so fundamental that I can’t teach elementary school electronics without it, let alone college-level classes.

Now, I can (and do) teach engineering voice-off, signing, but I’m not using ASL when I do so; I’m using a signed form of English (which some people would call PSE or contact sign). I’m basically transliterating, with the occasional insertion of ASL grammar and a couple of classifiers. I’m not voicing, but you could read an entire English engineering lecture off my lips. In other words, I’m teaching in “English, with hands.”

ASL is not “English, with hands.”

We need vocabulary. We need ways to express these ideas within Deaf language and Deaf culture — ways that are efficient, that don’t require tons of expansion every time. In English, we say “voltage,” not “the electric potential difference between two points.” The latter is a definition, not a term. Similarly, I can explain voltage in ASL (perhaps as “electric pressure point point compare”), but I need a sign for the concept, and other concepts like that. If I can’t, I don’t have a professional vocabulary. It is akin to restricting technical communication to Basic English or Up Goer Five. If someone used the phrase “funny voice air” instead of “helium,” we’d figure they didn’t know what they were talking about, because there’s a word for that.

We also need ways to express these ideas within this language, not just ways to refer to the concepts as expressed in another language, as with fingerspelling. Yes, short fingerspelled words can turn into lexicalized signs, like “bank” and “OK,” and in this case perhaps “amps,” but what do we do for “semiconductor” or “bypass capacitor” — abbreviate? “SC” is already “South Carolina,” and “BC” is birth control, and I’d like to use my brain for things other than figuring out sentences like “You’ll need a BC in P to smooth the MC input V.” Or if we break the English word into components and then sign those, we get things like “tiny administrative person” for “microcontroller” (micro-controll-er). And then I flinch again, hard.

At that point, we’re just pointing to the English. If I wanted English, I would use it. I want ASL.

Every other Deaf engineer I know does this exact same thing. The moment we begin discussing technical topics, our signing shifts — hard — towards English. Perhaps we flinch a little and apologize to each other for using mouthing (and only mouthing/lipreading) to distinguish between “electric,” “battery,” and “circuit.” Perhaps we comment that, yes, signing “tolerate” (as in “to put up with, to bear”) is a poor sign for “tolerance” (permissible variation in a measured value). But we do not have other ways to do this. Not yet.

Fascinatingly enough, this has happened before — in engineering and computing and other college-level STEM fields, even — with spoken languages. There are plenty of examples of decolonizing the language of (collegiate/professional) instruction — I recently learned that Japan is doing this, for instance — but my favorite example is Hebrew and the War of the Languages. When the first Jewish (later Israeli) universities were being established, they knew that Jewish culture was amazing, and that Hebrew was a rich and beautiful language with a deep, deep history and multiple ways of expressing the concept of “God” — but no way to express the concept of “computer.”

And guess where a lot of their professors had trained? Germany. Austria. All their notes, all their books, all their training on… say, computers — they were obviously not in Hebrew, because there was no Hebrew word for “computer.” But yeah, it was a little problematic to be teaching programming… at a Jewish university… in German. And so, rather than capitulate to “eh, I guess we have to teach in German,” they built up the Hebrew language so that they could have technical discussions within it. They enriched their language and their culture instead of switching to another. This took a tremendous amount of work — many people, over many years, working to create a world where it was possible to teach computing in Hebrew. And now they have it.

That’s what I want for ASL and engineering (and computing, and technology). It’s going to take a long time. Probably the rest of my career. (“Congratulations, you’ve found a lifetime side project.”) It’s going to take a lot of collaboration with a lot of people and a lot of work and it’s never going to be done, because languages are never done. It’s going to be a lot of awkwardness and stumbling experimentation and a lot of new engineers brave enough to go out into the world not just with technical skills, but with language (ASL) to communicate those skills, and we’ll have a lot of short-term inefficiencies compared to “but why don’t you just teach it in English or signed English?” — but look: we’re going to make a world.

It does not yet exist. That’s why we need to make it.


Parents have visited, semester winding down


Freewrite/braindump/linksave.

My parents came to visit me in Rochester this weekend, which was nice – and not only because I got to eat out more than I usually do. I like how my relationship with my parents has been slowly evolving into one between adults, one of whom happens to be the child of the other two.

They came to Imagine RIT, which is a huge student (and non-student) project display festival. It’s massive. Massive. And it’s also the largest-scale interpreting setup I’ve ever seen to date — interpreters everywhere, stationed across campus, ready to walk over to whatever exhibitions needed them. Seeing DHH folks in a mix of both presenter and visitor roles was also quite nice.

I’m still navigating how to interact with groups of people when some of them know me as “a person who speaks” and the others know me as “a person who signs” — which language do I use when? — but it was also nice to watch my parents interacting (fairly smoothly!) with signing DHH people. Mostly I stood back and watched them chat with each other, but a few times I dropped in (signed) comments and it felt pretty smooth. (But generally, it would feel weird to sign to my parents through an interpreter… about as odd as if they spoke Chinese through a translator to me. The presence of other people is what allows us to use those combinations of modalities and moderations with each other.)

The semester is winding down, and I’m staring at the research projects that remain. I am quietly excited about some of them, eager to be challenged by others, and (honestly) hoping to find ways to redirect yet others towards other people as quickly as possible before I’m locked into something I don’t actually want to commit to – the work of how to say no and frame that no in ways that actually work for others. (It seems silly when I write this, but… the intellectual and emotional labor associated with that last part are tremendous sinks for me right now. Tremendous.)

I’m still trying to… maybe not “rediscover” my scholarly soul, but to keep a scrawny, struggling flame alive. I want to read things. I want to just sink into ideas and learn and think, and sometimes it feels like there’s so much friction around all of it I want to give up on it all. Still working on this.

And then random links I don’t want to lose. I found an old newsletter from Erik Kennedy about Magic Ink, which is a lovely longform piece on interface design that would probably make for a nice inflight read at some point. And then there are the things I want to read and do, like the Chinese chicken soup recipe my mom just sent me (yep, we ate this as kids).

Okay. Back to… things. I feel like these posts are me surfacing for air and gasping; this space (online, text, long-form) is still where I can most easily breathe. And I need air, and company, in spaces where I can breathe… well.


ASL lector notes for the Easter Vigil Mass – 1st reading (Genesis 1-2, Creation)


It is Holy Week, one of my favorite weeks of the year. I have the privilege of signing the 1st reading for the Saturday Vigil Mass this year in Rochester, and I’ve posted my translation and performance notes in case it might be useful to someone who wonders about the translation process (which I’ve written about elsewhere: part 1 and part 2).

The first reading (long version) is most of Genesis 1-2, or the (Priestly) Creation story. I inadvertently wrote my notes so that they will (hopefully) make sense to both signers and non-signers — I hope this will be useful to my non-signing friends as an explanation of what it’s like (for me) to think in ASL. Basically, the left column is the English translation, and then the middle column is me trying to describe the images that come to mind when I read it.

This isn’t analysis of any sort, it’s not translation, it’s… what is the movie in my mind, right now, when I read through these words? The short version is that God is a lot like a really excited 5-year-old, because… I’m the one signing this, and I’m a lot like a really excited 5-year-old.

After the imagery description in the middle column, another round through the reading follows on the right, with the gloss (as best as I can capture it) for what I sign during the Vigil Mass. I wrote most of this post while I was preparing to lector for the Boston Deaf Catholic Vigil Mass last year (2017). At the time, I still felt really awkward, shy, and hesitant while signing; my expressive usage of the language was very new and limited, and I’d never worked or lived among other Deaf people or otherwise had much of a cultural/linguistic immersion. Vigil Mass 2017 was a linguistic/spiritual/identity landmark for me; it was the first time I felt like I was expressing exactly what I was trying to express in ASL. Which… was a huge deal for me, as a hesitant new signer (thanks, growing up oral).

Thanks to Deacon Patrick Graybill for last-minute feedback on Holy Thursday 2017, and to God for… well, basically… everything, right? That’s what this reading is all about.


The doors we leave open


I’ve been thinking about the doors we leave open, even if they don’t look like they’ll be taken at the time.

One version of this, for me, is that I grew up deaf and oral in the mainstream (local public school with hearing kids). I grew up with speaking and listening as doors that were flung wide open with flashing neon signs and adults hurrying me towards them — but the doors of ASL and Deaf culture were also there, in ways that were important to how I engage with them now, as an adult trying to learn.

There was the itinerant Teacher of the Deaf who visited my elementary school and (briefly) showed 7-year-old Mel a few signs before her parents put a stop to it. I don’t have clear memories of this, but discovering that IEP note as a graduate student was a jolt: my younger self had shown promise for learning how to sign at a remarkable rate, and seemed to enjoy it? Signing was a thing that I had… and maybe could… enjoy, not only fear? These were doors it took me twenty years to walk through.

Even if my parents stopped me from learning ASL (or whatever variant of contact sign people were going to use with me), they did bring me to watch the local children’s theatre, which had Deaf performers. As a slightly older child, I wanted nothing to do with ASL or the Deaf community; it was foreign to me, and everyone kept telling me I was so smart precisely because I could act so much like a hearing kid. I loved music (“like a hearing kid,” I thought, not knowing that Deaf people could also love music). I loved musicals. So my parents brought me to Oliver, and Joseph and the Amazing Technicolor Dreamcoat, and there was signing on the stage… which I couldn’t understand. But later, I could look back and think: there was art there, dancing, theatre, music… and there was ASL there, blended in with them. Exploring this strange new Deaf world wouldn’t mean giving up these things I loved; it might even expand what I could imagine in those spaces. These were doors that took me fifteen years to walk through.

There were the educational interpreters who were assigned to me for a few years, after my parents stopped the ToD from teaching me to sign. (Yeah, I’m not sure what the logic behind this was either.) I had already learned how to learn everything from books, and didn’t know this strange new language they were using with me, so I resented and mostly tried to ignore their presence as much as a lonely child could. As soon as I was able to formulate the argument that I didn’t “need” interpreting, I did — and breathed a sigh of middle-school relief that these people wouldn’t follow me through all my teenage years. But a few years ago, when I started thinking about (willingly) learning ASL and (willingly) seeing what this whole “interpreted access” thing was about, I had two people to reach out to. And they responded! (Thanks, Jamie and Christine… and further back, though I couldn’t find her, Francesca.) These were doors that took me thirteen years to walk through.

There were the folks who were (ex-)interpreters, or captioners, or signers, and kept being those things while we were friends and colleagues in the spaces I already worked in and wanted to be in (which is to say, tech spaces – not Deaf spaces). Who kept being adjacent to both worlds, who kept reminding me that trying these things out might be easier than I thought. Who reminded me that trying it wasn’t a permanent commitment; who walked me through how I could ask for things and set them up, when it was time. (Thank you, Steve and Patti and Mirabai.) Took me… seven years to walk through some of those doors. Or five. But I walked through them all, eventually.

So yeah, those doors. Important things. We don’t know when people will take them, but… even if it’s “not now,” even if it might well be “never,” we… just never know. Open the doors and keep them open, even when it seems completely useless. Wait, and wait, and wait. It’s important that these doors be open, because we never know who’ll come through them, at the most surprising times.


Why Deafening Engineering? Because onto(ethico)epistemologies.


Continuing to write my way through things I’m finding/reading/sorting that help me think about some of the scholarship I want to do.

While we were roommates for the CUR Dialogues conference, Corrine Occhino introduced me to the work of Julie Hochgesang, who does sign language linguistics: phonology, documentation, etc. and tons of other things. I’d been trying to figure out analysis tools for video data, as opposed to making everything a text transcript and analyzing from that. Unsurprisingly, signed linguistics does that kind of thing, and Julie is the author of a guide for using ELAN – which itself is a FOSS (GPL2/GPL3) project for annotating audio and/or video data. Chaaaaaaamp.
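
(Side note, mostly for my future self: ELAN’s .eaf files are plain XML, so annotations can be pulled into code for analysis instead of being hand-copied into transcripts. Below is a minimal, hedged sketch using the third-party pympi-ling Python package; the file name and the “ASL-gloss” tier name are made-up placeholders, and I’m going from memory on the API, so check pympi’s own docs before leaning on it.)

```python
# Hypothetical sketch: read ELAN (.eaf) annotations for analysis,
# using the pympi-ling package (pip install pympi-ling).
# "session01.eaf" and the tier name "ASL-gloss" are placeholders.
import pympi

eaf = pympi.Elan.Eaf("session01.eaf")

# List every tier (annotation track) defined in the file.
print(eaf.get_tier_names())

# Annotations on a tier come back as (start_ms, end_ms, value) tuples --
# already enough for simple duration and frequency tallies.
for start_ms, end_ms, value in eaf.get_annotation_data_for_tier("ASL-gloss"):
    print(f"{start_ms:>8} {end_ms:>8}  {value}")
```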

And then there’s Georgetown’s recent EdX release of a course on sign language linguistics (structure, learning, and change).

And then there’s Allan Parsons’ notes on Karen Barad’s work on ontoepistemology. (Or onto-ethico-epistemology, I suppose, since the ethical dimension is inextricable, at least according to Barad.) And Annemarie Mol’s brief but reference-dense guide to the ontological turn.

“What the…” you say. “Mel, these have nothing to do with each other. I thought you were doing Deaf Engineering stuff, so what’s with all the weird philosophical…”

“On the contrary,” I say. “Deaf Engineering is a case study; it’s an example of the kind of work I want to do — not the end goal of all my research.”

I’m interested in engineering and computing education ontologies. (Okay, fine, ontoepistemologies.) (Okay, fine, onto-ethico-epistemologies. Happy now?)

See, the reason I’m interested in Deaf Engineering Education — or perhaps the more active verb form, “Deafening Engineering Education” — is because of what it can help us make visible about onto(ethico-epistemo)logies of engineering (education). The phrase “Deafening Engineering (Education),” by the way, takes after Rebecca Sanchez’s book title, “Deafening Modernism,” where she does the same thing to modernist literature, exploring it “from the perspective of Deaf critical insight.”

It doesn’t have to be Deaf engineering (and computing) education. It could be FOSS/hacker/maker engineering and computing education, a space I’ve also published and worked in. It could be feminist engineering (and computing) education, as Smith College, SWE, Grace Hopper, Anita Borg, the Ada Initiative, and others have explored. It could be engineering education as a liberal (and fine!) arts approach, which is how I’d describe some (but not all!) of Olin College’s take on it. It could be Black engineering education, which I’m curious about as it’s brought forth in HBCUs as well as NSBE (but know very little about myself). It could be Native/indigenous engineering education, which Michele Yatchmeneff and others are exploring. It could be queering engineering education, cripping engineering education, Blinding engineering (and specifically computing) education; it could be…

Here’s the thing about all of these approaches, all of these worlds: by bringing to light other ways we could or might have conceived of engineering, brought it into being, engaged it as a practice — it makes us aware of all of the assumptions we’ve embedded in the discipline thus far. Why do we typically assume that engineers are White (or can act White)? Why do we (again, typically) assume that engineers are hearing (or can interface with the hearing world)? Why do we assume… what do we assume? What else might we assume?

I am so glad for the recent widespread success of the Black Panther film, because the wide-eyed audience reaction to Shuri’s lab and Wakanda’s technology is such a great example of what I’m aiming for. That look into a different world; that plunge into a universe of possibilities, that opening-up. I want to do… not quite science-fiction, but engineering fiction, or things that start as engineering fiction, so that we might make those into engineering not-fiction. To look at these worlds and learn from them and learn how it is that they understand and articulate themselves.

Ontologies. Plural. What is, what might have been, what might yet be. This is a pretty stark contrast to ontology engineering, which is a different (and more engineering/computing-native) approach to the notion of ontology. Ontology engineering is an attempt to document the singular, rather than embrace the tensions of the multiple. Both have their place, but one has been more dominant in engineering/computing thought than the other, and unconsciously so — the same way most STEM researchers are working within a post-positivist paradigm, but don’t (yet) know it.

So why all the Deaf/ASL resources?

Well… it’s a rethinking of the world, and one that’s taken place within a lot of living memory (and one that happens to be extraordinarily accessible to me). The past several decades have seen an explosion into the public sphere of a radical rethinking of what ASL is, what Deafness is, and what all these things could be. We’ve gone from “it’s not really a language, it’s a system of crude gestures” and “what a terrible disability” to… something that’s exploded our notions of what language is and how it works. And linguistics had to figure out and build analysis tools and systems that could work with signed languages. A rapid turn-about from “what would this even look like?” to “maybe it looks like this, or this, or… this?” because… people… made it.

And then came the (again, radical!) idea that ASL could be used as an academic language, just like one might use English (or earlier, French… or German… or Latin…) as an academic language of instruction — and then publication. What does it mean to publish in a signed language? Again, there was no existing answer. So people made one. And then things like: what would an ASL-based software interface look like? We didn’t know. And then ASLClear came out as one answer.

That’s why I’m looking at these resources. Because I see in them a making of a world; the figuring-out and birthing of things that have never existed before. They happen to be Deaf; it happens to be a very, very good example for me to look at right now — but it’s the process of the birth of worlds and universes that thrills me, and I want to look across worlds at the process of that birthing.

You see that? Do you see why I’m excited by this, why I love it, why I see it as so much bigger than just “Deaf Stuff In Engineering?” It’s what Deaf Engineering (and queer engineering, and Hispanic engineering, and…) points to. We don’t know, it doesn’t exist… (see the ontoepistemology in there? the knowing, and the being?) – and then we make it. And we find out what things might be possible. And the ethics inherent in that (re)creation of the world — what and who does our making and remaking let in, who does it keep out? — that’s where it gets ontoethicoepistemological. Nothing is value-neutral; nothing is apolitical. And nothing on this earth is going to be perfectly fair and universal and utopian; let’s not pretend it is; let’s be aware of our own footfalls in these spaces that we share.

I am so afraid of writing about this, thinking about it, letting it be known I’m interested in things that include the words “Deaf” and “ASL” and “engineering” in it, because — as I mentioned in a previous blog post — these kinds of things can be oversimplified and totalizing to one’s scholarly identity, to how others describe and understand one’s work. It’s really important to me that I not get pigeonholed into “just” doing Deaf Engineering Things. Because there’s so much more out there. There’s so much, and I want to see and play within it, too.

But this is where I want to play, and this is where I want to learn and create things and be challenged and in dialogue. And I need access to these first few worlds I play in, so that I can spend my energies on playing and figuring out the mechanics of how world-building works, rather than on hard labor trying to glimpse the snatches of it that I can. And so my first two are open source (since so much of that world takes place in text, where I am about as native as anyone can get) and then Deafness (since I can learn my way into a strange new world where things are visually accessible by default).

I’m hoping that those two will teach me enough between them (or across them) that I’ll be able to branch out to others, someday. Maybe years from now. Probably years. The other spaces will likely be less accessible to me in terms of communication, but I’ll have learned; just as I’m trailing open source practices and philosophies into Deaf Engineering (and computing) spaces with me (see: this blog post, wherein I think out loud / release earlier and more often), I will probably trail Deaf communication and accessibility practices into whatever world I go into after that.

But there will be worlds after that. This isn’t my final one.

Okay. Onwards. Again. Keep thinking and keep writing. I feel so hesitant doing this, but also brave in ways I haven’t felt in a long while.


APA style and qualitative research methods resources in ASL


My friend Anna Murphy recently sent me St. Catherine University’s library resources on APA style — and they have ASL versions! Actual ASL with nice translations, not “we signed the English word for word” versions. I think these are a nice high school or early-college intro for ASL users, maybe good for a first-year college seminar course. (I’ll ask Corrine Occhino about using them for ours, since this is a lovely set of matched bilingual resources.)

Joan Naturale also pointed me to an ASL companion to an introductory qualitative research methods textbook (Research and Evaluation in Education and Psychology (REEP): Integrating diversity with quantitative, qualitative, and mixed methods). “ASL Companion,” in this case, means there are well-done chapter summaries in ASL with the blessing of the original author (Dr. Donna Mertens). This is a nice textbook, in its 4th edition from Sage, not some hastily cobbled together thing for the sake of having something signed. Good scholarship in good ASL is, sadly, scarce stuff.

This stuff is important; not only does it make these materials more accessible to those who are native users of ASL, it gives us a glimpse towards what scholarship in ASL might look like. And yes, there have been Deaf (and hearing!) researchers working on “academic ASL” for a while (and what that means is still up for debate). I’m new to the conversation and feeling my way into a world that people far smarter and wiser (and familiar with ASL and Deaf culture!) have created before me, with the hopes of contributing to it as well.

My question is: what would it look like to do this in engineering, computing, and in engineering/computing education? I’ve seen scholarship in ASL, but only for clearly ASL/Deafness related fields… signed linguistics, Deaf education, Deaf history and rights, and so on. I’ve seen stuff about ASL in other fields, but it was written in English. What does it look like to do engineering and computing work in ASL and/or in a culturally Deaf manner? What would culturally Deaf engineering look like?

And I’m pretty sure that look is a key operative word here, but it’s also going to sound like something — Deafness doesn’t mean the absence of auditory information! — and it’s also going to be a host of other things, because Deafness isn’t just about visuals; consider the DeafBlind community, consider all the tactile/kinesthetic richness of the world, consider — but I digress.

But what will Deaf Engineering (and Computing) be like? I don’t know. I’m aware that I’m continuing to write these blog posts in English, and I’m okay with that right now, so long as my actual published/presented outcomes on this front come out bilingually. In part I’m writing in English because this is my scribble pad and I’m a native English writer, and it’s what my thoughts come out most fluidly in (if I thought best in Spanish, I’d be writing in Spanish). But these kinds of resources are not just examples and resources for my future students; they’re building blocks for me of what might be, what things might look like. And I can also tell from watching them that they took tremendous amounts of work to create, so…

…examples. I leave them here as exercises for the reader.


Seeing myself in the (literal) mirror at NTID’s IT office


Some of you already know (and my previous blog post has hinted) that I’m working in a Deaf environment for the first time in my life — the Center on Access Technology (CAT, pronounced like the animal and signed as an acronym) in Rochester, NY. There’s far too much to say about this — I am glad to be here, it’s an incredible learning experience, and I often feel like a stranger in a strange land… but if there’s anything my training in writing and qualitative research has taught me, it’s the power of vignettes and thick descriptions of small moments. So that’s what I’ll start to share. This one is a very small moment, but it was one of the first things that struck me.

So I’m a new faculty member, trying to figure out how one connects to the internet, printers, and so forth, as one does. I’m hitting snags, so I walk over to the IT office inside NTID (basically, the Deaf college within RIT). As I’m waiting for the IT staffer to fiddle with my laptop and fix my connectivity issues, I look around. It’s an IT office, full of familiar-looking cords and bins and tables of acronyms pinned to the walls. I see the student workers perched in front of monitors, typing into a ticketing system.

And then I notice that all of the desks facing the wall have mirrors on that wall, behind the monitors. And my first thought is “oh, that’s nice – I guess it makes the room look bigger.” And then one student walks up behind another and begins to sign, and the second student turns around to smoothly engage them. And I suddenly remember: they’re all Deaf, too.

Like me, they can’t hear footfalls from behind. Like me, they would startle from their monitors with a sudden touch on the shoulder. The mirrors let you see someone approaching from behind, a gentle nudge of motion in your periphery, the visual equivalent of footsteps walking up. And all of this is set up so matter-of-factly, just… how it is, of course we put mirrors behind our monitors! and not as some odd flustered accommodation that treats me as a conundrum in the hearing world (“well, Mel can’t hear footsteps, because she’s deaf, so what do we do?”).

I’m used to having my existence in hearing spaces not forethought (“it never occurred to us that a deaf person might be interested in this event, so we didn’t make it accessible”). I’m used to having laborious forethought be the best-case scenario, where I’m a solitary trailblazing oddity (“we’re open to setting up captions for this; can you do the setting-up in your copious amounts of free time?”). It is strange to be in a place where my individual existence doesn’t need to be forethought, because the space has already been created and inhabited by — and expects to see more of — people like me. It is strange to, at least in this one significant way, not be the Other.

Of course, it’s more complex than that. Even NTID is by no means fully accessible (likewise with Gallaudet). The Deaf (and hard-of-hearing) communities are not homogeneous; not everything meets everybody’s needs. I’m not just Deaf, I’m lots of other things as well, and many of those things are still unexpected, unanticipated, not-forethought. There’s a lot of solitary trailblazing work to do here still.

But dang. A world that is accessible to me regardless of whether I’m there or not? A space that stays Deaf-friendly without me, whose Deaf-friendliness is not dependent on my constant nudging and performance of my life as a reminder that people like me exist? Approaches and solutions that go beyond the things my friends and I can think of on our own?

Whoa.


Talk notes: “Technologies that wake you up” from a DHH perspective


Today’s accomplishment: giving part of a (group) talk in my 4th language, and making people laugh both directly and through an interpreter. Watching the audience grin and nod and crack up in two waves was just this… super-gratifying experience — first the audience members who knew ASL, then the ones who were listening to the interpreter translate my signing into English, and I could just… track that.

Sure, I know there are still all these dysfluencies in my sign production. I’m not fully fluent yet, and I’m incredibly aware of that, and working hard on it. But to know that my personality, my sense of humor, can come through in ASL even to people who don’t sign — that’s a tremendous milestone I was afraid I might never actually reach. It’s difficult to overstate how personally significant this accomplishment is for me — I’ve gone from “I will never learn sign language! I’m not one of those Deaf people!” to “I mean, okay, I guess I could learn it as… another language, because interpreting gives me so much that I just miss, but… I’m always going to speak for myself, especially in a work context with hearing people around,” to… well… this.

My talk notes follow. I wrote them, memorized them, and then deviated from them (as one does). The larger context is that my lab (which is basically a Deaf engineering design firm) is doing a series of consumer technology reviews. These aren’t technologies specifically designed for DHH people, but rather everyday technologies from a DHH perspective. For instance, other colleagues looked at various items from Nest, Alexa, etc. — and did you know lots of these devices, even if they are visual, feature an audio-only setup? Annoyance. Folks had to keep calling over their hearing spouses, ask their kids to come over and put on their CI, etc. in order to just get through installation.

Anyway, my segment was on “technologies that wake you up,” because… well, I don’t own a house. And a substantial portion of our community is made of students. And I sleep super deeply, and get uber-grumpy when I’m woken up against my will — just ask my parents; this is a lifelong known cause of Grouchy Mel.

  • most alarm systems are designed for hearing people and are based on sound
  • obviously doesn’t work so well for DHH folks
  • known problem: historically, all kinds of solutions – Rube Goldberg contraptions that drop heavy things, hearing humans (hi mom!) who will wake you up at the appointed time, praying that you’ll wake up before X and not be late
  • but now we have TECHNOLOGY!
  • I’ll examine several more modern systems for waking up DHH sleepers
  • First: Can I use “hearing” alarms and somehow make them better?
  • Residual hearing: amplify! plug into speaker system… okay, maybe this isn’t so great for hearing housemates, and it still doesn’t wake me up all the time.
  • Mechanical-only solutions: put phones inside hard, hollow objects (cups, cones, ice buckets) to concentrate/amplify the sound. Definitely not loud enough for me.
  • Okay, another mechanical solution: set a phone alarm to vibration mode and put the phone on a thin, hard-walled, hollow, clattery object, right at the edge, above something that makes noise when the phone falls onto it. Yeah, terrible idea. Not the most reliable solution, good luck getting up in the middle of the night without wrecking everything, and an alarm that relies on literally dropping your multi-hundred-dollar phone on the floor every day is maybe not the wisest.
  • Enter: specific devices! This is an alarm designed for DHH folks… how many of you have the Sonic Alert alarm clock? (hands go up)
  • Wakes people up in three ways. First, audio: the sound is customizable (a frequency-set knob and a volume-set knob)
  • Second, a “light flasher,” which is an on/off outlet flasher — you could plug anything in there
  • Third, a “bed shaker,” which is an off-center load on a motor in a case (like a cell phone vibration motor)
  • It’s definitely effective at waking you up. Abruptly. Might not be the best for your mood for the rest of the day, but it works. (Insert explanation of sleep cycles here, with a lot of hamming it up)
  • Okay, but how about stuff that isn’t DHH-specific? With sound set aside and vibration/tactile set aside, what’s left as a way to wake folks up?
  • Smell and taste might not be useful for alarms (although the smell of tea makes me super happy when I wake up)
  • What’s left is sight
  • Did you know: most deaf people can see
  • Did you know: most hearing people can also see
  • Did you know: although sound might not work for both hearing and DHH folks, light might work for both
  • This is the idea behind the Philips Wake-up Light
  • Idea: you know how the sonic alert wakes you up abruptly? this wakes you gently, like the sun coming through the windows
  • You set the time you want to be awake, and for a period of time before that, the lights will gradually turn on so that you’re sleeping more lightly and close to waking by the time the alarm rings (with the lamp at full brightness)
  • Gentle light wakeup is amazing (display, in contrast, the book cover of Alexander and the Terrible Horrible No Good Very Bad Day)
  • Except that it doesn’t always wake you up all the way, so you need a last-minute push-over into full consciousness
  • Alas, the pre-recorded audio settings on this alarm consist mostly of birdsong (from my perspective, “silence 1,” “silence 2,” “silence 3,” and “silence 4”)
  • I personally need a separate alarm to make the startle sound/vibration/light at the appointed time, but the wake-up light does get me to the point where being woken up by something else is pretty pleasant
  • Not a DHH-specific access issue, but the UI for button placement stinks
  • Alternative, if you already have Philips Hue lights: hack the Hue to be a wake-up light
  • Program the Hue! set something to turn on gradually at an appointed time (a rough sketch of what this could look like follows after these notes)
  • Not as smooth as the Wake-up light, which starts from zero and smoothly goes up; definitely turns on abruptly and is a more jarring wake-up
  • For me: solves the problem of “the Wake-up light needs a tip-over”
  • And then Sonic Alert for mega-uber backup.
  • End the talk somehow and turn the floor back over to Brian.
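
For the curious, here’s roughly what “program the Hue” can mean in practice: a minimal Python sketch against the Hue bridge’s local REST API, not the exact setup I use. The bridge IP, API key, light ID, and timing values are placeholders for illustration, and a real setup could just as easily use the bridge’s built-in schedules or the Hue app’s routines instead of leaving a script running.

```python
# Hypothetical sketch: turn a Hue bulb into a gradual wake-up light.
# BRIDGE_IP, API_USERNAME, and LIGHT_ID are placeholders for your own setup.
import datetime
import time

import requests

BRIDGE_IP = "192.168.1.2"        # your bridge's local IP address
API_USERNAME = "your-api-key"    # key created via the bridge's link button
LIGHT_ID = 1                     # whichever bulb is in the bedroom
WAKE_TIME = datetime.time(7, 0)  # target wake-up time
FADE_MINUTES = 30                # length of the "sunrise" ramp

STATE_URL = f"http://{BRIDGE_IP}/api/{API_USERNAME}/lights/{LIGHT_ID}/state"


def seconds_until(t: datetime.time) -> float:
    """Seconds from now until the next occurrence of wall-clock time t."""
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), t)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()


# Wait until FADE_MINUTES before the wake-up time.
time.sleep(max(0.0, seconds_until(WAKE_TIME) - FADE_MINUTES * 60))

# Start at the dimmest setting the bulb supports...
requests.put(STATE_URL, json={"on": True, "bri": 1})

# ...then let the bridge ramp to full brightness over the fade window.
# "transitiontime" is measured in deciseconds (tenths of a second).
requests.put(STATE_URL, json={"bri": 254, "transitiontime": FADE_MINUTES * 600})
```

The nice part is that the bridge does the fading itself: one request turns the bulb on at minimum brightness, and the second asks it to ramp up over the transition window, so the script doesn’t have to babysit the light.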

Things that have made me happy lately: qual methods companion resource in ASL, my upcoming review of wake-up systems


These are random things that have made me happy today.

The first is that there is an ASL companion to a qualitative research methods textbook (focused on education and psychology, to boot!). I am already fascinated by the design and translation choices they have made in figuring out what it even means to have an ASL qual methods textbook… how multiple signers in the introduction switch between freezing in black and white when it’s not their turn, and becoming full-color and in-motion when it is, so your eye immediately knows who it’s following. How they’ve translated the phrase “chapter author” not as [chapter write-person], but rather as [chapter sign-person] — “they who have signed the chapters” rather than “they who have written down text for the chapters,” because the “text” is in ASL. These little subtle things that tell you that… yes, this is another culture; this is a different world. (Or, in my framing: this is an alternate ontology.)

Second is that I am giving my portion of a technology review lecture series (1) in ASL and (2) with a fairly decent dose of snarky humor. My topic? “Wake-up systems for DHH sleepers.” I plan to cover…

  • Cheap Hacks for People With Residual Hearing: makeshift and wholly mechanical scoop and rattle amplifiers for phones (put them on big hard hollow things or in cones made of hard materials… like hotel ice buckets!) Also, reasons why these setups may not be the greatest for smartphone users and/or profoundly deaf deep sleepers like myself.
  • Sonic Alert’s Sonic Boom, which emits ear-splitting shrieks at modifiable frequencies, flashes lights (or rather, intermittently turns on and off power to an electrical outlet embedded into its side), and rumbles a bed-shaker. (And, in high school when I had it close to my CRT monitor, it degaussed my monitor. Anyone want to check out a cute little EMP source?) Also, a brief overview of the sleep cycle, and how this device, while highly effective at actually waking one up, is terrible for waking one up pleasantly.
  • Philips Wake-Up Light: awesome, but expensive-ish, and… let’s talk about the usability of the physical design, shall we? (And the choice of bird sounds as the wake-up recording, which… to me, are setting options of “silence,” “other silence,” and “more different silence.”)
  • Philips Hue system as a cheaper and more hack-ish way to replicate some of the functionality of the wake-up light

Gotta work on my content, draft, translate, and rehearse this. It’ll be fun.


Gallaudet Peer Mentoring Certificate Program: first impressions


Some of you already know this, but I’m participating in Gallaudet’s Peer Mentoring Certificate Program, which trains adults with hearing loss on mentoring others with hearing loss. The original idea was for mentoring adults with acquired hearing loss (i.e. people who grew up hearing, and then became… not hearing). However, as someone who grew up oral deaf and knows how complex it can be to figure out the whole d/Deaf/HoH identity thing as a young, early-career adult… I also hope to work with folks like me.

And honestly, part of the reason I’m doing this is that I need this too. I do not have this figured out. Physiology does not come with a cultural/linguistic instruction manual. And if I’m going to explore this with my students and in my research, I darn well better prepare to explore this in ways that might go beyond… um… the usual professional/scholarly boundaries. We don’t ever fully separate our studies from ourselves — we just sometimes pretend we do. In this case, the professional and personal are so obviously interlinked that I need to be extremely thoughtful about how I do and don’t do them. Boundaries. They’re gonna happen.

So far, we’ve had a weekend at Gallaudet getting to meet each other in person — and then we meet in text chat once a week to discuss readings. The weekend meeting was super fun. The other members of my (tiny!) cohort are from all over the place, lots of diversity of experience — all of us are really good at getting through the hearing world, and have varying sorts of involvement in the HoH and Deaf worlds. Academics, engineers, doctors, HLAA officers, fluent signers, teachers of the Deaf, careers completely not-related to ASL/hearing/Deafness, curious non-signers, FM users, CI users, hearing aid users, people who prefer captions, people who prefer lipreading, people who prefer interpreting… so much fluidity! To my surprise, I found that I can codeswitch and mediate (read: “informally interpret”) way more fluently than I’d thought… turns out that when I’m not incredibly anxious about signing (and I’m anxious almost every single time I sign), my language skills increase considerably. (The anxiety bit is very much its own post; I may write it someday, I may not.)

As someone who is used to being the only non-hearing person in the room, it was definitely very, very weird (in a good way) to be in a room where there were people using so many different kinds of access. I do wish the quality of captions had been better; I was thankful for the great interpreters we had, and noticed a clear discrepancy between the quality of access provided by the two modalities (because of provider skill — we could have had lousy terps and a great captioner, and the situation would have been the other way around). I wonder what it was like for my classmates who don’t understand ASL and who were relying on captions. We all had to learn and practice advocating for our needs as the weekend went along, which — seriously, good skill to practice, especially in the context of mentoring other people with hearing loss (we’ll have to model this sort of behavior, and it starts with being able to do it ourselves).

Another good thing: when communication wrinkles came up — which they did, because the captioners dropped things, and the interpreters got tired, and the T-coil loop didn’t always work — we stopped, we worked to fix it, we didn’t just keep going and leave people out. We tried really, really hard to not just quietly tolerate it… we thanked each other for noticing, for asking. For some of us, it was a profound experience — some people had never been thanked for that before, especially in a world where asking people to repeat, etc. is often framed as “why are you so bothersome, you annoying deaf person, asking for things?” It was a good learning opportunity for all of us. A good chance for us to practice what we preach, with all the awkwardness and “but how do we account for this delay in what we’d planned to do?” that it entails.

Our first class this fall (it feels more like a lightweight reading group — compared to grad school, super chill!) is on hearing loss in America — lots of historical/cultural/legal overviews. I’m going to get caught up with those readings now, since it’s Sunday afternoon and I’m tired and want something light and fun to do. So we’ll see where this goes! I make no promises about regular updates, but if people ask, I’m more likely to blog about the program.