Being deaf is: choosing between having emotions and communicating them (also: “met deaf wow” moment)

I’m the kind of person who realizes what she’s thinking when the words come out of her mouth. My insights surprise me as much as they surprise my listeners.

Today I said: “I’ve usually had to choose between having emotions and being able to communicate them.”

This comes from the middle of a chat between myself and Sara about communication mediums — text chat, ASL, and spoken conversations. Of the three, text chat takes the least effort for me to engage in… but it’s not my preference. Spoken conversation seems much more “alive” to me than text chat; when the dialogue is accessible, I feel much more connected to the other person. Conversations have things like emotions, pauses, timing, excitement, energy… things I can respond to.

That’s a lot harder to get across in text. Text is… bad at feelings.

I also grew up… bad at feelings, relatively unskilled at allowing myself to have them and express them. I grew up getting most of my input from text — written English, because spoken English was so inaccessible to me. I don’t think these are unrelated.

With spoken language, I can also connect even if my thoughts are incoherent. I’m able to express the state I’m in by flailing around, making noises (“wheeeeeeee!”), facial expressions, body language… I can just be. I can become verbally incoherent, and it translates as “Mel is excited! This is huge! She hasn’t figured it out yet, but it’s awesome!” (Or “Mel is tired.” Or “Mel is sad.” Or “Mel is in a complex emotional state, but you can kinda get the gist of it because she’s moving around in a particular evocative way.”)

In contrast, when I write, I have to at least make sentence clauses and find words for things. I have to pull back far enough to type sentences like “I am excited,” which means I have to make myself less-excited. I have to step away from my feelings long enough to find words and structures for them. So I’ve usually had to choose between having emotions, and being able to communicate them.

One of the exciting things about learning ASL is that I might no longer need to. It’s the strangest feeling to be able to get both the affect and the content of a communications medium without having to laser-focus all the time. I recently had my first extended voice-off conversation with a native signer. We went on for nearly 5 hours, constantly communicating, and my brain was not tired at all; I wanted to keep socializing, even with my language-learning awkwardness. I didn’t want to go home and lie on the couch with my eyes closed. I wanted more people. More. People.

This… this doesn’t happen. I don’t like meeting new people and talking with them for extended periods of time. I just… I’m not supposed to do that. But I did. And it felt fantastic. Weirdly awesome. I was later introduced to the phrase “Met Deaf Wow,” which is an appropriate description.

This doesn’t mean I’m going to switch to signing all the time. I still live and work and socialize in the hearing world, and I probably always will. But the more I can take a break from the cognitive load — the more relief I get — the more wherewithal I have to be Mel (rather than exhausted-Mel) in the hearing world. I can use my energy wisely, where it matters, instead of having to expend max effort all the time.

It’s something that’s helping me learn how to be here. And I like… being here, and I like being Mel. (It’s so much better than being exhausted-Mel. Exhausted-Mel is not a great default state to be in, but for the longest time, I didn’t have another.)

Being deaf is: unknowingly mispronouncing lots of common words

Since I am deaf, most of my native-language (English) vocabulary comes from books. Consequently, I can walk around pronouncing words incorrectly for years before someone says something. A small selection, in chronological order.

  1. Vegetable, elementary school. (“veg-eh-tay-bull,” as if I were pronouncing “table” like the piece of furniture.)
  2. Pythagorean, 6th grade. (To be fair, my Filipino-born parents also pronounce it “PITH-a-GORE-ee-yan theorem.”)
  3. Supremacist(s), 8th grade. (I gave a history presentation that mentioned the “Nazi Super-masses.”)
  4. Chef, Champagne, and all other French words beginning in ‘ch’, age 26. (“The Sheff chose a great shampain to pair with this food.”)
  5. Scheme, last week (one of my favorite CS textbooks is “The Little Sheemer.” Sadly, this means I have been butchering the title since age 19 when I first encountered it.)
  6. Aggrandizing, yesterday (this was pronounced correctly, but with the wrong syllabic stress: I guessed “aggranDIzing,” but it’s “agGRANDizing.”)

Friends, if you remember other amusing “Mel mangles her native language!” moments, let me know. I’m collecting these.

Welcome to Italy. I’m an illegal immigrant from Holland.

Part of an email conversation, reworked for sharing.

“Welcome to Holland” is an essay for parents of disabled kids. (And here’s an alternative and critical interpretation of that essay.) It makes the analogy of preparing for a trip to Italy — expecting a normal child — and then suddenly getting off the plane and finding you’re in Holland instead. “Italy” is a metaphor for “normal” childhood, whereas “Holland” is a metaphor for disability.

To extend the metaphor (in a way that would have been entirely true 5 years ago, although I’m less sure now): I’m an illegal immigrant. I snuck across the Holland border as a toddler — crawled on my own, nobody carried me. Now I’m working and living in Italy, but with a constant sense of fear. At any time, someone could check my papers and discover that my passport’s fake. They could deport me. Any time. (Ok, in real-life immigration law, Holland residents don’t need visas to enter Italy, but roll with me here.)

I make repeated dashes back and forth across that border. And none of my neighbors are allowed to know — the trips I take at night, the money I send back, all the exhaustion and the stress that comes with wrangling my life so I won’t be found out — in order to stay in Italy, I need to sweep that all under a rug of excuses and can’t come clean with them on why I’m just so tired all the time.

My family doesn’t entirely know that I’m an illegal alien either — they think I’ve long since traded my citizenship in for an Italian one. My parents live in Italy — not just in Italy, but in a really nice flat there; two brilliant kids with engineering degrees, a hard-working family success story. They got brochures about Holland, once upon a time, when I was small. But it’s a distant memory now, and thank goodness that their daughter ended up being Italian after all. Holland is that “other place” where “other people” go, the poor and pitiful ones. But not us, not me. Clearly, I’m not one of them.

But I am.

I still have my Holland passport. I will always have this passport. And I hate it, and resent it, and deny it. And I have carefully forged an Italian one that’s so good that even experts can’t tell it’s fake. But I know it isn’t real, no matter how hard I pretend.

The original email conversation ends here. I’ve added the rest since then.

If I don’t forge my Italian citizenship papers, I can’t go to school or get a job. I mean, kind of. But it would take a lot more effort to apply to a much smaller, crappier selection of them. And I have no route for naturalization. No matter how brave I am, how many useful things I do, how smart I am, who I marry, or how long I’m here, I’ll never magically become a citizen.

My deafness is not heritable, so my kids will probably be born Italian. I grew up seeing that you could only look at a Holland passport with pity — and I could never truly compensate for that, regardless of how hard I worked in Italy. So I used to honestly believe I ought never to put anyone in the terrible position of having me as a wife or mother — that it would be selfish and unfair of me to even open up the option. My kids will grow up with an illegal-immigrant mother — and being a first-generation child is hard, because your parents can’t coach you through early life experiences they haven’t had. Or if I choose to move to Holland, then my kids will have to go there if they want to visit me. Or if I choose to be a legal resident of Italy, I’ll have to walk around wearing a giant orange hat to visibly mark that I am from Holland — because that’s how Dutch people get “legal” status in Italy. And what kid wants to walk next to their mom when she’s wearing a weird giant orange hat?

And yet. There is a flaming hope there now, somewhere. That weird blended Dutch-Italian families with ordinary lives are possible. And that those ordinary lives would change the boundaries of what sorts of “ordinary lives” are possible. I know that other people do this, and I know it’s hard. But… I can do hard. I’ve done hard my entire life.

Hi, Italy. I’m an illegal immigrant from Holland.

Thoughts on being a deaf extrovert

It’s been a few years now since I realized I was an extrovert. This came as a surprise; my Myers-Briggs tests have always scored me as an extreme introvert, and I leak energy — not just leak, hemorrhage — in a majority of social situations, as an introvert does.

For instance, I recoil from statements such as:

  • You spend your leisure time actively socializing with a group of people, attending parties, shopping, etc.
  • The more people with whom you speak, the better you feel.

And nod vigorously when I read things like:

  • After prolonged socializing, you feel you need to get away and be alone.
  • You often prefer to read a book than go to a party.

But nope. I’m not an introvert. I’m just deaf. People energize me. But lipreading and the other things I need to do in order to communicate… they pulverize me. It’s like having to make a blood donation every time you go out to get food; you often end up spiraling onto the floor, dizzy and starving. Grumpy. And lonely. And bewildering, at least for me for many years — because I didn’t understand why.

I didn’t understand my reactions, didn’t understand how to recharge — didn’t understand why my recharging strategies (be alone! do things without people!) weren’t working. I thought all introverts were like me, so I’d constantly push through my own exhaustion to draw quiet friends into constant interaction, because I thought they wanted that — even if I didn’t.


A hearing introvert will tire early in a party, walk outside, and go “phew — now I can go home and recharge.” A deaf extrovert will tire early in a party, walk outside, and go “hurrah, now that the background noise is gone, I can talk to people!” It’s been a long hard haul to recognize more and more things I didn’t know I didn’t know.

The learning continues. Deaf extrovert friends are teaching me to be okay with taking internet-socializing breaks (chat, Twitter, Facebook, etc.) to recharge during work hours — I get a little energy from the real-time text-based communication, but without the lipreading burnout. And I have been learning how to savor solitude, to differentiate communion from communication, and to learn the shape and heft of my great hunger for community. It’s a hunger I’ve long ignored and matted down.

I love walking into a room of people I know, and sitting and simply being in company, in silence, maybe with occasional nods and waves. Places where I don’t need to constantly reach out to prove and/or reestablish the connection, because I trust it. Being able to relax into that sharedness of understanding. This makes me happy, and I want to find and nurture spaces like this everywhere I go. Places I can recharge.

I think these are my thoughts for now. I will post them and go to lunch.

On contract/specs-based grading and intrinsic motivation

My undergrad roommate Kristen (now Dr. Dorsey, after earning her PhD in ECE from CMU) emailed me about an article on specs-based grading, asking what effect it might have on intrinsic motivation — something we’d been discussing with some of our former suitemates over an extended email thread. (I love my suitemates.)

My reply was that I’ve also heard the technique called “contract grading,” and it has pluses and minuses. This Chronicle of Higher Ed article has a decent discussion of the minuses, which mostly consist of “watch out for loopholes and students trying to game your system to do minimal work.”

Contract or specs-based grading is exactly what it sounds like: writing out detailed instructions as to what students must do to earn a certain grade in class. And I mean detailed. Turn-your-class-into-a-videogame detailed. The kind of contract you’d write out when specifying a technical component you’re outsourcing to a subcontractor. “If you submit 4 of these 10 assignments and are absent fewer than 3 times, you get a B.” “To earn an A, your essay must answer the following questions in grammatically correct English…”

There’s been a limited amount of empirical research on its effects. Via the POD mailing list, here’s a study on contract grading’s effects on a science class (psychology) and a humanities class (composition). Spoiler: contract grading was “more effective” than traditional grading, yielding better retention and higher grades than the control group.

Now: what about intrinsic motivation — the sort of thing most teachers wish their students had? You know, the students who want to learn about nanoelectronics because it’s so beautiful! and they love love love electronics! just like you do.

Here’s where it gets tricky. Intrinsic motivation can be fragile, and extrinsic rewards can destroy it. If a kid loves playing the violin, and you start rewarding her with ice cream every time she plays, she may learn to play in order to get ice cream — and will stop playing the violin as soon as the ice cream ceases.

This means (in my opinion) that grading contracts should be written so that students who are on fervent fire can keep on running without needing to stop to puzzle out bean-counting. Your expectations should be clear and flexible enough that students who already have a project in mind can see how they would do those things anyway if they were doing the project well — the goal here is minimal re-routing of an intrinsically motivated student who’s already running full-tilt down a path. Also, the contract should explicitly state that students can talk with you about renegotiating the contract to fit a project they really want to do.

Depending on your student population, you may or may not have a lot of intrinsically motivated students from the first day. Hopefully you won’t have many amotivated ones who just don’t care at all. If you do, the contracts can help by turning amotivated students into extrinsically motivated ones. Extrinsic motivation means that they are motivated, but by something other than an inner love for the subject.

Extrinsic motivation has a bunch of sub-categories, but it’s not necessarily “bad.” Heck, we try to extrinsically motivate students: “you should do well in this class because it’ll help you get a job.” (People often confuse intrinsic with extrinsic motivation. “He’s so motivated to do well because he wants to keep his scholarship!” is still extrinsic motivation — you may not need to keep prodding this student to do his work, but the scholarship is what’s driving him, not necessarily a deep-seated love for circuit theory.)

Basically, contracts can turn all students into extrinsically motivated students — which is great for amotivated students, but not so good for intrinsically motivated ones. So be careful when writing your contracts so that the amotivated students can’t find loopholes — and the intrinsically motivated students won’t get distracted by having to worry about “playing the game” in order to get points.

Sarcastic Mel Sighting: backstory of “Communicating Is So Inefficient” (PRISM column)

Apparently, I have a snarky side. This post is the backstory for how my recent ASEE Prism column, “Communicating Is So Inefficient,” came to be.

TL;DR summary: the article’s first sentence is “After years of observing engineering education, I’ve finally figured out what our goal is: minimal student-teacher interaction.” The rest of the article points out the observations that led to this conclusion, in the vein of this SMBC comic of aliens speculating about the human war against plant genitalia (translation: we give flowers as gifts).

The article started after several colleagues approached me, in separate conversations, and started venting like this: “Aargh! I am trying to do this thing that requires students to start an open-ended dialogue with me about their work in the discipline, and…”

At this point, they would say some combination of these three things:

A) The students don’t get it, don’t do it, and are complaining that I’m “not teaching them”!
B) Senior colleagues/admins tell me I’m not supposed to do that if I want to survive tenure!
C) It is impossible to have these conversations with all the students I’ve been given, in the time I’ve been allotted, while still covering the content I’m required to cover!

Basically, I was hearing my colleagues genuinely thirsting to interact with students — engage with their individual processes, help shepherd what they were creating, get to know them — and running into an education system that penalized them for doing so. At some point, I started giving this response to point (A):

“Of course they don’t want to talk to you. These were the ‘smart kids’ in high school. They’ve been conditioned to associate ‘asking a question’ with ‘not knowing stuff.’ If you talk to a teacher, that means you’re failing and something is wrong.”

And then I realized that it wasn’t just the students who’d been conditioned this way. My colleagues were the teachers who were actively resisting the same system trying to condition them away from talking with their students. I started getting sarcastic in those conversations. “Oh, no, you can’t do that. We need to process more students through the system. No, no, we just need to automate everything. Not just the grading. The teaching and the learning, too.”

When people laughed at an observation I’d made, I wrote it down. And then I started putting them together into paragraphs, and then my editor emailed and said “we need your column” and I hadn’t written anything else…

And that’s how the rare sighting of Sarcastic Mel came to be.

Learned today: babies are 3kHz vuvuzelas to match the Fletcher-Munson curve of hearing people

Working with audio engineers is a ton of fun. Davin Huston and I were just discussing the Fletcher-Munson curve, which describes how (objectively) loud certain frequencies need to be in order for a normal-hearing person to perceive them as being at the same volume. The normal hearing human ear is more sensitive to some frequencies than others.

Turns out that 3kHz is one of those frequencies. I’d never heard of this before. It’s something that all (normal) hearing adults are more sensitive to than other frequencies — soft 3kHz noises sound particularly loud and annoying to us. Only that frequency.

“It’s the frequency that babies cry at,” Davin (who has a newborn) said.

I blinked. “And everyone with normal hearing has this bump, this sensitivity to 3kHz.”


“Do all babies cry at this frequency?”

Yeah, said Davin. Doesn’t matter if they can hear or not. It’s a matter of the air being forced from tiny lungs through a tiny vocal tract. Babies are tiny didgeridoos.

This explains a lot. I know screaming babies actually make sounds — they don’t just lie there with their mouths open, which is what it looks like to me. (3kHz is so hilariously outside my hearing range that we’ve never even tried to amplify it — my cochlea is so completely damaged there that you’d just be throwing data into a void.)

But I’ve never understood why screaming babies seemed to be so particularly annoying, gauged by the frequency and intensity of complaints I’d get from hearing travelling companions when we passed by a vocalizing infant. I used to wonder if it was just because it was a kid, and some sort of psychological “humans, take care of our species!” thing kicked in. Was a screaming baby more annoying than, say, a vuvuzela at the same amplitude?

Turns out the answer is yes. A screaming baby is physiologically more annoying than a vuvuzela played at the same amplitude, because our hearing evolved that sensitivity to make sure we’d take care of our species’ screaming babies. We want them to shut up.

It also now makes more sense to me how it’s technologically possible to have flashing alerts for crying babies (for Deaf parents). When I encountered the idea of “flashing alerts for crying babies” last summer (Lynn and Sharon were brainstorming on a hypothetical future-Mel-as-a-mom house — long story), my first thought was “oh my gosh, that must be a really complicated signal processing problem. All those babies, all those variants of voices, all those variants on crying — how can you train the dang thing to hear your baby?”

But nope. All they need to do is filter for a 3kHz sound. Easy.
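For the curious, that “just filter for a 3kHz sound” idea really can be sketched in a few lines. One classic cheap trick for watching a single frequency is the Goertzel algorithm; everything below (the algorithm choice, the 16kHz sample rate, the tone comparison) is my own illustration, not a description of how any real baby-cry alert is built.

```python
import numpy as np

def goertzel_power(samples, rate, freq):
    """Signal power at one target frequency, via the Goertzel algorithm.

    Goertzel computes a single DFT bin cheaply, which is all a
    "flash the lights when you hear ~3kHz" alert would need.
    """
    n = len(samples)
    k = round(freq * n / rate)        # nearest DFT bin to the target frequency
    w = 2 * np.pi * k / n
    coeff = 2 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:                 # one multiply-add per sample
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# Toy check: one second of a 3kHz tone vs. a 200Hz hum, both at the
# same amplitude. The 3kHz bin lights up only for the tone.
rate = 16000
t = np.arange(rate) / rate
cry = np.sin(2 * np.pi * 3000 * t)   # stand-in for the baby
hum = np.sin(2 * np.pi * 200 * t)    # stand-in for background noise
assert goertzel_power(cry, rate, 3000) > 1000 * goertzel_power(hum, rate, 3000)
```

A real detector would run something like this over short sliding windows of microphone input and trip the alert when the 3kHz power crosses a threshold. The point is just that the core test is a one-bin computation, not a complicated signal-processing problem.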

I’m not sure why this never occurred to me before — that baby-output might be simple. I guess I have a default assumption that humans make really complicated sounds. Which is usually true. But sometimes… not true.

So! Babies are vuvuzelas. Who knew?

Pushing back on the STEAM acronym

Kathleen Hickey, my Jazz dance teacher, recently asked us to reflect on some newspaper articles about “STEAM” (Science, Technology, Engineering, Art, and Math — an addition to the usual acronym of “STEM”). Here’s what I wrote.

As someone who’s an artist (writer, musician, illustrator, dancer, improv theatre performer, and more), engineer (electrical/computer/software), and engineering education researcher, I have a whole tangle of thoughts on this topic that goes far beyond the confines of this short reflection, but I’ll try to be brief.

I’ll start by saying three things:

  1. I’m an artist and an engineering educator.
  2. The “STEAM” acronym annoys me terribly.
  3. The reason it annoys me is that I see “artist” and “engineer” as the same identity.

There’s not a hard boundary between “STEM” and this “art stuff.” The acronym of STEAM does itself both a service and a disservice — yes, engineering and art should both be taught, but to say they should be taught “alongside” one another seems to imply that they are separate things, and that we can split them into buckets and then conveniently stack them atop each other. Both articles treat “A” and “STEM” as distinct entities, with verbiage like “applying technology to the arts” or “incorporating the arts into science.”

This seems to imply that one has to choose sides, to code-switch, to belong (or at least belong first) to one culture or the other, which can start the two worlds touching – but will ultimately keep them from merging. It will also make it more difficult for people who identify with both to comfortably express themselves as fully integrated – the dominant rhetoric and metanarrative won’t allow it. Pick one, or say you’re one of those strange double-major, cross-disciplinary oddities; compartmentalize.

Art has engineering inside it; it always has. Partnering work teaches us about biology, friction, and structure. Choreography has patterns, repetition, shape. STEM has art inside it; it always has. We make color-coding choices in our graphs, dream about snakes to understand how benzene molecules circle together. This world is fluid and interconnected, and our minds are what tease everything apart. It becomes politically convenient to separate them into distinct departments and colleges and funding sources; it becomes a resource strategy to write grants calling for an “A” in “STEAM” – and so we do. Good things happen when we do that, to be sure.

But this is a story – one of many stories that could be told. And we must look at the narrators, and their motives, and the other interpretations that those narrators could have chosen but have instead rejected, and the functions that these stories serve.

How to succeed in engineering as a disabled person (poem)

I’ve been asked how to succeed in engineering as a disabled person. This answer — which is sarcasm, by the way — came out during a recent long drive to Kentucky. It’s intended to be spoken-word poetry, and was inspired by an intersectionality conversation last month with Joi-Lynn Mondisa.

(Also, I want to point out that I had wonderful friends in engineering undergrad and grad school; I also wish I could have been in a state of less exhaustion and been able to better appreciate those friendships at that time.)

How to succeed in engineering as a disabled person

Work hard.
If something comes up, don’t get frustrated.
Be proactive.
Work the system.

Don’t get angry.
Don’t have feelings.
Don’t realize how tired you are.
Don’t realize that what you’re doing is extra labor.
Stay oblivious. Focus on your classwork.

Don’t ask for help.
Don’t look dumb.
And never show signs that you’re struggling.
That any of this is any harder for you.
That any of this is ever hard for you.

Don’t socialize.
Don’t have friends.
Especially disabled friends. You might start comparing notes.
Besides, you’re too tired to hang out with them anyway.

Don’t try to find out what you don’t know.
There are a lot of things you don’t know that you don’t know.
That’s good. Keep it that way. That’ll let you keep working yourself to death.

Oh, and stay away from disability-related things.
Accessibility initiatives. Activism.
They might mess up that delicate balance of ignorance you’ve worked so hard to build.
You might get mad at how unfair it is.
Or how much life is stacked against you.
Or how much you have to fight.
And how little anybody recognizes it.
And that would be distracting from your work.

And besides, you don’t need any of that help, do you?
That’s just for people who aren’t good enough to make it on their own.
But you?
You’re good enough to do it, right?


You gotta prove that, you know.
You gotta prove you’re worth it.
Show you’re functional. Always. Constantly.

So don’t think too hard about it.
Just work. That’s what you’re worth as a human.
Work hard.

And that’s how you succeed in engineering as a disabled person.

Realtime transcript of “Using Realtime Transcription” FIE 2014 talk

This is the anonymized transcript of my Frontiers in Education (FIE) 2014 conference talk. The paper title is “Using Realtime Transcription to do Member-Checking During Interviews” and the authors are myself (Mel Chua) and Robin S. Adams. Since the paper was about realtime transcription, I did not use a slide deck. Instead, I projected a CART feed (live captions) for my own presentation as I spoke, so that my audience could see me demo what I’d written about. The transcript below is therefore both (1) what was said verbally, and (2) what was shown on the screen. All names, except mine and Lynn’s (who requested to be identified in this transcript), have been changed.

MEL CHUA: Okay. This is actually a two-slide presentation. Oh, hi, everybody, I’m Mel. I’m from Purdue and I’m doing that dissertation thing and I do qualitative research, so I do lots of interviews. But there’s one little bit of a wrinkle — I’m deaf, so talking is hard.

[new slide explaining the CART acronym]

One thing I use in my classes for accessibility is something called CART. It stands for Communication Access Real-Time Translation. And that’s what it looks like. And it’s basically a stenographer who comes and types super, super fast on a magic chording keyboard what people are saying, which is one of the reasons I’m wearing a microphone and a microphone will be passed around the room. What ended up happening was I just used CART for my interviews. And one of the side benefits of CART is you get a transcript right while you’re talking.

[new slide with the agenda in bullet-points]

What I’m going to be presenting is what it looks like, and what are some of the implications of CART, because it actually has some interesting implications for research methodology and subject positionality and the kind of interactions you have during the conversations.

I thought that the best way to do that would actually be to show you what this is. And so everybody, say hello to Becky. (Note: name has been changed.) Becky is my captioner for today.

[switches from slides to a live Streamtext of the CART for the talk -- the realtime transcript of the event scrolls on the projector for the remainder of the talk]

THE AUDIENCE: Hello, Becky.

MEL CHUA: Becky, everyone says hi. Do you want to say hello to everyone?

BECKY (typed on the screen): (Hi, everyone!) (How are you today?)

MEL CHUA: Yeah, sometimes people ask me, “What speech recognition software are you using?” It’s not a software, it’s a person. So that’s the point. I wanted to show people a little bit about what it looks like and what can happen when you do this kind of thing during an interview. Tom was kind enough to volunteer to do a mini demo. (Note: name has been changed.) Hi, Tom.

TOM: Hi, Mel.

MEL CHUA: Tom, can you tell me a little bit about the balance you strike between research and how you do including diversity in the classroom?

TOM: And including diversity in the classroom?


TOM: That’s a great question. I just started a new job at a teaching university in [SCHOOL NAME] so I don’t have structured expectations to teach. I teach 9 credits, 3 courses this semester. That’s considered on the light side compared to some of my colleagues. So I have a structural place for teaching.

And I have a structural blessing — an institutional blessing to do research but I don’t really have a structural place to do it. So ways that I kind of — I incorporate strategies by — I strategize by collaborating with other universities that give me a structure. I have two days off. Not really off. But I’m not teaching classes. For two afternoons, Monday and Friday.

And that really — on Friday I focus on research. Monday, teaching. And how that — and I bring myself to foster diversity in the classroom. By trying to be attentive to those who — I’m in a [small school in a US state]. It’s a pretty homogenous population so I do notice — and we are very male dominated. I do notice when those — come in my classroom that may not feel like they identify with everyone else. So there were some strategies with stereo — protecting against stereotype threat that I try to incorporate in the classroom. Is that about time?

MEL CHUA: Yeah, that’s awesome. Thanks, Tom. And so if you hold on for a moment, you might notice we actually have a transcript up already. One thing we can do right now is scroll back a bit and say, Tom, it’s member check time. Hang onto this for a moment. [hands Tom the microphone]

TOM: Okay.

MEL CHUA: I’m going to scroll through real quick some bits of what you said, and if you see something that seems interesting and pops out to you and you want to talk a little more about it, stop me and then just say that.

TOM: Okay. And by the way, this is completely unrehearsed. So you know. (Audience laughter) Okay. I’m looking. I start off pretty descriptive. Probably to gain comfort with the question. And so I talk with what I know. Right off the bat. So yeah, go down a little bit… and if we looked up, I’m — I have like some — maybe go up — back up, I’m sorry. It’s like right in between. In an in-between spot.

Yeah, where I say I have a structural place for teaching and I have a structural blessing — an institutional blessing I’m correcting myself there to do research but I don’t really have a structural place to do it. I kind of — I have some pause about that. Because I’m thinking, oh, what if this were to get out. And how would this reflect on the university that hires me and feeds my wife and kids and me.

But I’m okay with it. But it does give me some pause whenever I see it. So we’re good. Do you want me to keep going? Okay.

MEL CHUA: Thanks, Tom. Keep in mind this was a quick demo. In an actual research interview you would go much longer and much more in depth. Just from this you can see a couple of implications. First of all, member checking can be done in the same session as the interview, so it can get at the dropout problem a little bit.

Second, the positionality. So instead of subject and interviewer, Tom became sort of a co-analyzer or co-researcher, reflecting on his own words while they were still fresh in his mind. And that’s actually something that a lot of participants do in interviews anyway. In Holstein and Gubrium’s “The Active Interview” from 1995. [pause to let captioner catch up] Wow. Okay. They talked about indigenous coding, which is when people are in the middle of an interview and they say things like, “Oh, just like I said before.” Or, “This is a good example of…” and they start analyzing and reflecting on what they have said.

But the difference is that what Tom was able to do, you saw that he went back up to a portion of his transcript and then quoted the exact words he said instead of having to remember it. So it’s indigenous coding. But it’s grounded in the direct verbatim words of a transcript.

Another thing this does is it makes transcription more visible as a methodological choice we’re making. A lot of times we just go, oh, transcripts, transcription, transcription. But that’s not actually the case. The choices we make can have a big impact on the way we analyze and the way we present our findings. This makes it much more visible.

There are some downsides. You’ve got to set this up in advance. Because it’s a person. You have another person’s schedule to juggle, but it’s much like if you were doing a foreign language interview and needed a Spanish translator or something, not that different.

Cost-wise, it’s about like paying an undergrad to transcribe things, except it’s faster, much faster.

Some people feel a little weird when doing this. I will sometimes use Google Docs and have a transcriber write into a Google Doc so we can correct it, correct typos in the middle. Some people report it’s a little distracting to see their words pop up so they don’t look at the screen. Everyone responds to it in different ways.

And I wanted to close off by noting there are a few folks in the room who have been subject to this particular method, and I wanted to give them a chance to speak to what that’s actually like.

Dave. I asked him before [the talk if he'd like to speak]. (Note: name has been changed.)

DAVE: One thing I notice is when you were going through this, you kind of switched from — from this descriptive mode to then reflecting on what you’re describing, and I found that in participating in this, I would — in particular when we would go back to the transcript in a follow-up session, I would look at things that I said and just enter this reflecting… “Hmmm, why did I use that word, or why did I use that phrase, and do I actually think that, and how do I feel about this being transparent and out there for the world to see?”

It’s certainly a sense of unease at times, but I actually found it really useful for development in my own thinking to look at my words both as they were appearing and then a week later or so or two weeks later and reflecting why in the world I would say certain things. Yeah, it’s… terrifying, also. (Audience laughter)

MEL CHUA: I’m continuing to experiment with this stuff, and I’m happy to talk with people about it. Robin Adams, my advisor and co-author, is sitting right there and can also speak to what this method is like. She’s been on both ends of the microphone.

I thought I would leave some time for questions because this might be the first time a lot of people have seen this.


AUDIENCE QUESTIONER: Thank you. Does it lend itself available for all types of analysis for example analysis where you do have (Audio cutting in and out) — analyzers and those are kind of getting lost, as well?

MEL CHUA: Are you still getting audio okay?

BECKY (captioner, typed on the screen): (It was cutting in and out a little bit).

MEL CHUA: Okay. I guess that would answer my question.

So that’s a great question. And actually one of the things CART does is that it makes transcription very visible as a deliberate choice of methodology, so it’s probably not the right choice for super precise verbal protocol analysis type stuff. Also, when I do this, I always make a backup audio recording just in case something cuts off or I want to go back and make sure that’s exactly what was said. Because, yes, you do lose some of the precision, just like you would with any type of interpreter or translation type thing. I think of this as sound detection. So it’s not appropriate for everything. But when it’s just communicative, [it works].

AUDIENCE QUESTIONER: (Speaker off mike).

MEL CHUA: Yeah; yeah. The backup gives me a good idea if there’s a part that was super fast or I had a really weird, you know, Russian author name that we need to track, that kind of thing. Any other questions?

AUDIENCE QUESTIONER: So in interviews when people hear voices on audiotapes a lot of times people are like, “That’s what I really sound like?” or on TV, they are embarrassed, like, “That’s what I look like?” Here you see a little bit of that effect, I guess like, “That’s what I really said.” Is that hard to get past? Is that part of the protocol, is that what you’re analyzing?

MEL CHUA: [to Lynn Andrea Stein, another audience member] Do you want to answer that question? (Note: Lynn is Lynn’s actual name; she requested to be identified in this transcript.)

LYNN: I knew that was my question. (audience laughter) Mel and I started using this protocol when she was interviewing me for [a research project], and I can’t read the transcription. And we actually switched to using a method in which I just type to Mel and we type back and forth, because as good as Becky and her colleagues are, it’s exactly that I can’t stand to hear my voice, I can’t stand to see somebody else’s transcription. I worry a lot about precision, so for me, because I can type fast enough that we can have a real-time conversation, this method was really difficult. And part of me is grateful — I love the real-time conversation and analysis of the conversation as it goes. And I also wish that I could tolerate this method because I think it’s great and I just can’t get myself to do it.

MEL CHUA: Thank you, Becky. I think we need to switch over to the next person. Thank you, again. (Audience applause)

BECKY (captioner, typed on the screen): (Thank you!)