Being deaf is: unlearning “paper face” (proceed until NAK vs. wait for ACK)

May 10, 2015 – 8:32 pm

Edited on May 17 to expand and clarify a few thoughts.

One of the first things a mainstreaming deaf kid learns is how to hide how much they’re missing. Facial expressions can give you away. If I looked confused every time I missed something someone said, I would look perpetually heartbroken, left-out, lonely, helpless. Not the most fun emotions to have running across your face and body all the time. Not the greatest emotions to let others see, either — they overreact in entirely non-helpful ways.

Solution: don’t show (eventually, don’t even feel) those emotions. I ended up with a semi-permanent “paper face” in school — a blank sheet, carefully screened, regardless of the content or how much of it I was missing. (Curiosity and excitement were allowed through — hungry for knowledge, I smiled a lot when I got it.) If it was important, let’s just hope I could figure it out later somehow.

One side effect of “paper face” is that, to hearing people, I look like I understand a lot more than I do. The hearing world operates under the communication assumption that “if they’re not complaining about it, then they understand it.” You’re assumed to have accurately received a message by default. If you say something, and I want you to think I’ve understood you, I do… nothing.

And since we so often mistake understanding for competence and intelligence, rather than considering how lack of access can so easily mask the two — I do… nothing — so hearing people will (accurately) assume I’m competent and intelligent. In order to perform my identity as “intelligent” to the hearing, I fake understanding, ironically denying my intellect the data it thrives on. Run faster with a weighted vest, and don’t complain.

The Deaf world works differently. Instead of the hearing protocol of blithely proceeding until you get a NAK, the Deaf protocol is to constantly monitor for ACKs. The default is to assume people did not get the message unless they specifically indicate otherwise. Eye contact. Nodding. The linguistic equivalents of “Mm-hmm” and “uh huh, yeah, yeah, gotcha.” Constant mutual monitoring and affirming of connection. To Deaf people, my facial blankness makes it look like I understand a lot less than I do.
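For anyone who doesn't speak networking: here's a toy sketch of the two defaults in Python. The function names and the listener's replies are mine, invented purely to illustrate the metaphor; this is not a real network stack.

```python
# Toy sketch of the two conversational defaults above.
# "Listener" replies are invented for illustration.

def hearing_protocol(messages, listener):
    """Proceed until NAK: keep going unless someone actively objects."""
    understood = []
    for msg in messages:
        if listener(msg) == "NAK":   # only an explicit complaint registers
            continue                 # ...and even then, just skip it and move on
        understood.append(msg)       # silence is counted as understanding
    return understood

def deaf_protocol(messages, listener, max_retries=3):
    """Wait for ACK: assume nothing landed until it's explicitly confirmed."""
    understood = []
    for msg in messages:
        for _ in range(max_retries):     # rephrase / resend a few times
            if listener(msg) == "ACK":   # eye contact, nod, "mm-hmm"
                understood.append(msg)
                break                    # confirmed; on to the next message
    return understood
```

With a listener who never responds at all, the first protocol counts every message as delivered and the second counts none. That gap is exactly the space "paper face" lives in.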

So far, in terms of cultural adjustment, this has been the biggest gut-punch. I don’t know if I want to adopt this aspect into what it means for me — Mel — to “be deaf.” I don’t know if I want to visibly show people, in realtime, when I do and don’t understand. I know that most of the time, I don’t understand — and I know that hurts. It hurts to realize it, and it hurts to show it.

So: do I work at showing that? Do I blip packets of “understanding status” back towards my interpreters, and risk them being intercepted and translated (and mistranslated) by the hearing folks around me? Do I let all that frustration seep into my face, my body, my thoughts and feelings — is that something I want to admit into my way of being? Will that take away from my ability to think? Communicate? Or will it strengthen and empower it, ground it in presence and reality?

This is not just a matter of how much grit I have, or how much hurt I can tolerate. It is also about very real tradeoffs regarding what impact I want my effort and my suffering to have. My suffering will exist regardless, in a world not made for people like me. My choice is how to use my rare ability to pass for hearing — how to voice my experiences to hearing people as a deaf person who plays their game and speaks their language better than most of them do.

Do I clip a huge part of my heart and soul out in order to stay inside the dialogue — because even half of my voice is half a voice that wouldn’t be inside the dialogue otherwise? Or do I speak from all of who I am, and risk being kicked out of it? Risk ruining my ability to be accepted as “one of them,” risk being dismissable as one of “those disability activists,” just like we dismiss “those feminists” as an excuse to stop trying to understand them? Every time I use my voice, I risk diminishing its power. Or perhaps it’s not a risk; perhaps in some ways, that’s always the tradeoff, as if I had a finite store of voice-power to use in changing the world. That, too, is lack of privilege.

On the one hand, this is small. Eye contact, nodding. What’s the big deal? On the other hand, the personal becomes political becomes philosophical, without my desire or intent to do so. Because for me, that eye contact means “Help me. We have created a world in which I am insufficient. Will you come back to get me, and others like me, so we can all fix it together?”

6 Responses to “Being deaf is: unlearning “paper face” (proceed until NAK vs. wait for ACK)”

  1. Wow. The stuff you make me think about that I’ve never thought about. You are awesome as always. I enjoy that Google has remembered your website for me. ;)

    By Brit on May 11, 2015

  2. Beautifully worded & makes me weep.

    Another negative result of the constant ACK stream is hearing people can read that as agreement, when the message could be “I get what you’re saying and it’s totally wrong.”

    By Jesse the K on May 11, 2015

  3. @Jesse — that’s a great example of how using culturally Deaf cues with interpreters (to aid their work) can easily get mistranslated by the hearing folks you’re trying to communicate with.

    Because I understand this stuff in Communications (ECE) language… I’m trying to find solutions that move to a different section of the communications spectrum in order to avoid that interference — if I can negotiate some hand gesture, protocol, etc. to provide the constant ACK stream in a way that’s really, really hard for others to interpret as anything else, that might alleviate most of the problem. It’s packet/protocol design. (I’ll have to teach that protocol to every interpreter I ever meet, but I can do that — add it to my startup script.)

    I could add encryption, if I want it to be hard to detect (not just hard to misinterpret). Frequency hopping might be part of it (thanks, Hedy Lamarr!). This would involve finding multiple, possibly redundant ways to send the ACK, and ways to seamlessly switch between them, so that blockage (oh no, someone stepped in front of me! Oh no, videochat dropped! Oh no, I have my hands full with lab equipment!) or detection of one channel need not derail the whole thing.
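    In code, the failover idea is just a priority list of channels (a toy sketch; the channel names are made up, and nothing here is a real protocol):

```python
# Toy sketch of the redundant-ACK idea: several channels for the same
# "I got it" signal, with fallback when one is blocked.

def send_ack(channels):
    """Try each ACK channel in order; return the first one that's open."""
    for name, is_open in channels:
        if is_open:
            return name        # the ACK went out this way
    return None                # every channel blocked: the ACK is lost

channels = [
    ("eye contact", False),    # someone stepped in front of me
    ("videochat", False),      # the call dropped
    ("hand signal", True),     # the negotiated gesture still works
]
```

    Blocking one channel costs nothing as long as any channel stays open — which is the whole point of the redundancy.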

    Of course, humans (including d/Deaf ones) already route automatically around tons of these things, which I find awesome. I understand the things I understand about electrical engineering in part because of this — I’ve had to think so much about the human-speech equivalents of these protocols for most of my life that it’s almost been a relief to find that engineering-speak had words and math for them.

    By Mel on May 18, 2015

  4. so deaf to deaf communication is like TCP? and has retransmission if NACK? and

    By Kevix on May 25, 2015

  5. I’ve thought about this a lot since I read it the first time. I felt like there were gaps in understanding, but I couldn’t pinpoint them. I can explain this more in person; remind me.

    1) ASL is visual. The redundant signals of understanding are a necessary part of it. It’s easier to get from a linguistic perspective. Just like all sign languages have very similar grammar, they all include (very) active listening. But they are far more subtle than nodding all the time. Which leads to…

    2) The signals of understanding are not always signals of agreement. They can be slight looks which mean, “I understand what you are talking about. I may or may not agree. Move on, you don’t need to keep explaining.” That would be the definition of the single nostril twitch, I would say. Or, “Oh, okay, that’s what you mean.” That would be the sign commonly glossed “OIC.”

    When you watch deaf people talk to each other, you see the whole range in play. I’m not a perfect stand-in, but you should see a bunch on Wednesday. I’ll point them out to you when I am aware of them. _\m/

    By Brit on May 28, 2015

  6. Sorry, my point with that is: it is natural to do it in an ASL conversation (after one has gained some fluency). It doesn’t happen in spoken language because it isn’t needed. I suppose hearing people could see that conversation happening and misinterpret it, but they would have no clue what the subject was anyway, let alone the intent. When back “in hearing,” it wouldn’t need to be continued.

    By Brit on May 28, 2015
