Posts that are languages-ish

Learning kana by butchering the English language


I was recently asked how I learned the Japanese writing system. Actually, there are three: hiragana and katakana, which are phonetic systems, and kanji, which are basically Chinese characters embedded into Japanese text and pronounced as Japanese words.

The answer was that I was really bored the summer before I started high school, and had no qualms about butchering the English or Japanese languages. I’d also recently read Ladle Rat Rotten Hut – so when I pulled out Japanese stuff from my local library, I went “wait, these are phonetic systems… they’re just different ways of writing sounds.” So I happily started writing English sentences in the Japanese phonetic writing system.

For instance, a poem beginning:

Listen my children and you shall hear
Of the midnight ride of Paul Revere

Becomes…

りすてん まい ちゅづれぬ あん ゆ しゃる ひる
li-su-te-n ma-i chu-du-re-nu a-n yu sha-ru hi-ru

おぷ で みづないっと らいど おぷ ぱいる れびる
o-pu de mi-du-na-i-t-to ra-i-do o-pu pa-i-ru re-bi-ru

…and so forth.

Basically, I learned kana as an alternative phonetic system with which to write English, then switched to using the same phonetic system to write Japanese once I had it down. (I never did build up a good Japanese word vocabulary – but I can still read and write kana fluently to this day. I just don’t know what the words I’m saying mean.)
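
(For the code-inclined: the whole trick is just a lookup table from rough English syllables to kana. Here’s a toy Python sketch; the syllable breakdown and the table are my own butchery for illustration, not any standard romanization scheme.)

# Toy illustration: map rough English syllables onto hiragana.
# This mini-table only covers the first line of the poem above.
KANA = {
    "li": "り", "su": "す", "te": "て", "n": "ん",
    "ma": "ま", "i": "い", "chu": "ちゅ", "du": "づ",
    "re": "れ", "nu": "ぬ", "a": "あ", "yu": "ゆ",
    "sha": "しゃ", "ru": "る", "hi": "ひ",
}

def to_kana(syllables):
    """Join a list of rough English syllables into one kana string."""
    return "".join(KANA[s] for s in syllables)

line = [["li", "su", "te", "n"], ["ma", "i"], ["chu", "du", "re", "nu"],
        ["a", "n"], ["yu"], ["sha", "ru"], ["hi", "ru"]]
print(" ".join(to_kana(word) for word in line))
# りすてん まい ちゅづれぬ あん ゆ しゃる ひる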

Kanji, on the other hand, I never did find a good way to learn. It helps to think in terms of breaking characters down into radicals, but beyond that you just have to memorize each one, as far as I know. I did learn that studying Chinese and Japanese back-to-back is a bad idea, though; to this day, when I’m reading Chinese, sometimes the Japanese pronunciation of a word will pop out in the middle, confusing the heck out of everyone – and the reverse happens as well. I still haven’t figured out a way to learn grammar, either. But that’s what further explorations are for.


Language learning for deaf autodidacts: Praat


My friend Erin Dowd, a talented linguist (and engineer, musician, and cook, among other things) who spends her days thinking about things like phonological inventories and has better Mandarin pronunciation than I do, shot me an email last week.

I was using Praat for a paper I’m writing, and I thought you might like to play with it, especially if you’re still trying to change your accent. I use it to analyze formants in sound files, but it’s got a lot more functionality that I haven’t explored.

Praat is an open source phonetic analysis suite developed by Paul Boersma and David Weenink at the University of Amsterdam. It’s GPL(v2) licensed, ridiculously cross-platform (Mac/Win/Linux/FreeBSD, but also SGI/Solaris/HP-UX), has been around since at least 2007, and is still under active development (the latest version was uploaded on April 15, 2011, which is 4 days ago as of this writing). There’s no public version control repository – I’ve emailed them about that – but you can download the source tar and dig around; it’s mostly C++.

It’s fun to play with. I know Praat’s intended for speech analysis, but I couldn’t resist playing some guitar into the microphone (on the theory that the guitar – even with its many harmonics – would produce purer frequencies than my speaking voice). And you can see it picked up on the short 4-step ascending and descending scale I played before I confused it with chords at the end – the blue line in the bottom graph (the spectrogram) goes up and down in sync with the notes of the guitar.

Voice data was even more fascinating. I’ve been trying to work on my “deaf accent” for several years now, and Erin’s been amazing with helping me explore that whenever we get to hang out together; she’ll notice strange little things nobody else does. It was her observation that the “accent” decreased when I tilted my head back that eventually led to the discovery of how to control my oral-nasal resonance, which makes me sound a lot less deaf when I remember to do it.

Praat – which has a ton of features I don’t fully understand – is sort of a computerized Erin. In addition to plotting the spectrogram, it gives you pitch contours (in blue), formants (red), and intensity contours (yellow) – and all of these can be manipulated. Oh, and you can annotate the images with the phonemes and words pronounced if you want. The graph below is for the spoken words “hello, world!” (what else would I say?)
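
If you’d rather script the analysis than click around the GUI, the same measurements can be pulled out programmatically. Here’s a rough sketch using parselmouth, a third-party Python wrapper around Praat’s engine (my own shortcut, not something the Praat authors ship); the filename is a placeholder.

# Rough sketch: extract pitch, formant, and intensity tracks from a
# recording via parselmouth, a Python wrapper around Praat.
# "hello_world.wav" is a placeholder filename.
import numpy as np
import parselmouth

snd = parselmouth.Sound("hello_world.wav")

pitch = snd.to_pitch()            # the blue pitch contour
formants = snd.to_formant_burg()  # the red formant tracks
intensity = snd.to_intensity()    # the yellow intensity contour

f0 = pitch.selected_array["frequency"]   # Hz; 0 where no voicing was found
print("median F0 (Hz):", np.median(f0[f0 > 0]))
print("peak intensity (dB):", intensity.values.max())

t = snd.duration / 2              # sample the midpoint of the recording
print("F1 (Hz):", formants.get_value_at_time(1, t))
print("F2 (Hz):", formants.get_value_at_time(2, t))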

Erin suggested using Praat as a beefed-up version of language software that “trains” your pronunciation to be like a native speaker’s. Plot your voice, see how close you are to average formant values in a phonological inventory for your desired accent, plot again, do it again until you hit it right.
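
The core of that feedback loop is tiny: measure, compare against the target averages, try again. Here’s a sketch; the target F1/F2 numbers below are made-up placeholders, not real phonological inventory data.

# Sketch of the "plot, compare, repeat" loop. The target F1/F2 values are
# made-up placeholders standing in for averaged native-speaker data.
import math

TARGET_FORMANTS = {        # vowel -> (F1 Hz, F2 Hz); hypothetical numbers
    "i": (280, 2250),
    "a": (750, 1300),
    "u": (310, 870),
}

def distance(measured, target):
    """Euclidean distance in F1/F2 space, as a crude closeness score."""
    return math.hypot(measured[0] - target[0], measured[1] - target[1])

my_attempt = {"i": (330, 2050), "a": (700, 1450), "u": (360, 1000)}
for vowel, measured in my_attempt.items():
    print(vowel, "is about", round(distance(measured, TARGET_FORMANTS[vowel])), "Hz off")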

She showed me an interesting variant on this theme: Praat can also manipulate the formants, intensity curves, pitches, etc. of audio waveforms, meaning it can take your voice and change your intonation, accent, and so on. So these researchers took second-language learners, recorded their voices in the target language (where they had an accent), used Praat to transform the learners’ voices into non-accented speech, and gave them back their “perfect” pronunciation recordings to imitate and measure themselves against. Holy cow, I thought. I could hear myself with a perfect American English accent. Or gorgeous Mandarin pronunciation. Or…
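
I don’t know exactly how those researchers did the transformation, but the basic move (resynthesizing a recording with shifted formants and pitch) is scriptable. Here’s a bare-bones sketch via parselmouth, leaning on Praat’s built-in “Change gender” command; the shift values are illustrative, not tuned for accent training, and the filenames are placeholders.

# Rough sketch: resynthesize a recording with shifted formants/pitch using
# Praat's "Change gender" command through parselmouth. The shift values are
# illustrative only; a real pipeline would derive them from native-speaker data.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("my_accented_german.wav")   # placeholder filename

shifted = call(
    snd, "Change gender",
    75, 600,   # pitch floor / ceiling (Hz) for the analysis
    1.1,       # formant shift ratio (> 1 raises all formants)
    0,         # new pitch median (Hz); 0 keeps the original
    1.0,       # pitch range factor
    1.0,       # duration factor
)
shifted.save("resynthesized.wav", "WAV")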

I’d love to see this made into a language-learning game. Imagine the possibilities! Pronunciation training like this could blow away any of the “speech recognition” functionality in commercial language-learning software currently on the market; instead of matching against one native speaker’s recording (no matter how hard you try, you’ll never sound like someone else), you could norm your own voice into an accent-average compiled from data from thousands of native speakers, and match against that. When combined with something like LibriVox, an initiative to make open-licensed audiobooks from public domain works like those hosted by Project Gutenberg, the by-products of language learning could actually accelerate the building of a shared cultural library. Anyone looking for a software engineering, game design, linguistics, or signal processing project?

Praat is, by the way, not yet packaged for Fedora.


Language-learning for deaf autodidacts: Tell Me More (tool review)


I’m deaf. I love learning foreign languages, but group classes tend not to work so well for me – the rest of the students can hear, so that’s how classes are built. No problem, I thought in high school when I started struggling in Japanese immersion class. I’ll just become an autodidact. This sort of worked; I found out that I could teach myself how to read text quickly in almost any foreign language by piling through books (I didn’t really try writing; I didn’t have anyone to write to), but listening and speaking remained complete blanks. I couldn’t get much out of the books, either. Most self-teaching language learning resources also assume (for good reason) that their users can hear.

Well, shoot. If I’m deaf (or a non-auditory learner for some other reason) and teaching myself foreign languages, what can I use? I asked around to the best folks I could think of. Jen, a French-major alumna of Gallaudet with a hearing profile similar to mine, informed me that the prestigious university for the deaf just skips the auditory part of language learning completely, and that I could ask for special allowances if I needed to take fluency exams. Becky, a childhood friend of mine who learned ASL when I had an interpreter and is now studying French language education out in Monterey, did a literature search on my behalf and came up with just two articles on language-learning for the deaf; the breathless conclusion was that oh my gosh, deaf children are actually capable of learning foreign languages.

Well, duh. Of course we are.

This was starting to drive me nuts, so I decided I’d just start working on it. As an engineer, the first thing I want to look at is the tools and materials I’m using – just like in cooking or woodworking or anything else, my theory is that good tools help hard work produce good results, and poor tools are a waste of everybody’s time and money. And if I haven’t found the right study plan and tools to sit down and learn something every morning, then I should spend that morning time trying out different tools until I find things I do like. If you’re blocked, then fix the blocker. Simple.

For consistency and sanity, I’ll stick with one language at a time. Right now it’s German. I concede that Spanish would have been an easier choice for finding a broad range of study materials in the US, that Mandarin is the foreign language I’m closest to fluent in, and that ASL presents the fewest barriers to learning as a deaf person. However, my boyfriend is from Germany, and we agreed that if I learned enough to pass a basic exam, then he would learn some ASL – so there you go: German. Also, I do have some residual low-frequency hearing, so I want to learn to deal with audio somehow; the trick is that my auditory input is extremely limited compared to most people’s, so I need to learn how to rewire the coping strategies I use for English (lipreading, a very good predictive model, and so forth) for languages I don’t yet know. So I will be dealing with audio here, just drastically adjusted.

This morning I worked through a full beginner’s lesson of the demo version of Tell Me More ($530 for the full 10-level set), based on this review and others like it. It sounded like a good resource for the hardcore. I found that there’s a ton of material in here and a lot of great exercises; if I could take advantage of the audio components, this might make a nice tool for practice and intensive study. However, nearly 70% of the exercises (as measured by the program’s final count of which lessons I completed and which I skipped) are audio-dependent, with no “hooks” I can use to compensate.

Exercises deaf folks can’t use

There are plenty of exercises that depend on audio. Audio crosswords, audio word search, “listen and repeat this pronunciation,” audio questions to which you must select answers… not so accessible for deaf people. Similarly, “listen to this word, then find it in the puzzle” is not so useful for learning if you can’t listen to the word.

I’ll note that if the same word had been spoken in context in several sentences in unsubtitled video, I could probably build my lipreading and predictive models from that, figure out which word they meant, then find it in the puzzle – but there was no context and no visuals, just the standalone word. I have the same problem on one hearing test, which asks you to repeat random words uttered in isolation with no lipreading; imagine a disembodied voice speaking nonsense to you (“Baseball. Hotdog. Cheesecake. Pumpkin.”) and having to echo it, and that’s basically the test.

Without the context and the visuals, I fail miserably; I don’t remember my exact results, but with wild guessing – they always use the same limited word set, so I can guess decently – I think I get way less than 50% correct. Add in lipreading – still the same isolated, random words (the only reason I know the contents of the word set is that they do it both without and with lipreading, by the way) – and my score suddenly jumps to 90-something percent. Change that to conversational sentences, and most people have no idea I’m deaf at all. I understand the program is trying to isolate vocabulary words for learners. But what do you hear in real life when you’re trying to communicate with people? You hear sentences in context, and you can lipread. I need something that’s going to help me navigate the messiness, because I need that mess for context.

Actual usefulness

There are some nice exercises that don’t depend on audio. In particular, I enjoyed synonym-matching – a number of words in the target language are piled on the left, and synonyms from the target language are piled on the right, and you have to pair the two. “Photo” and “picture” go together, “Nice to meet you!” and “pleased to meet you!” go together. They also do this for related words – match “Fall” with “Spring,” “Parents” with “Grandparents,” and “Mountain” with “Mountain range.”

Also good: the word-transformation exercise, which lines up a bunch of unconjugated verbs along the left side of the screen and makes you type the conjugated versions on the right. It’s not that I couldn’t do this myself, but having the software line up the material for me means I get through more material – more words, and more difficult vocabulary – at a faster pace than I would if I had to put all of it together myself.

Useful resources I can find elsewhere

I also appreciated the “phonetics exercise,” which gave text descriptions of the physical motions used to produce various sounds (“place the tip of your tongue behind your teeth… the vocal cords do not vibrate…”). As someone who can’t simply “hear and imitate,” I do need a resource that will help me figure out where my tongue and teeth and throat go; I wonder if an international phonetics book would be a good resource. There’s got to be something out there that will painstakingly describe, and draw in detailed cross-section, the physical movements needed to produce most sounds in most languages around the world.

There’s also tooltip translation – you can choose between “word” and “sentence” translation, and if you right-click on a word (or sentence) a mini-dictionary will pop up. That’s wonderful for learning; hidden, but easy to access. I loved that and would like to find software that does that for my whole computer – there must be something like this that draws from an online dictionary database. I know StarDict exists and I’ve used it (semi-successfully) for Chinese – I need to try that out for German, and to see if something else out there is better. It’s easy enough to get this one in standalone software.

There are transcribed short videos, which is a great idea – real-world content, at real-world speed, with text. I should find short video clips that are subtitled or transcribed in German, with the subtitles and/or transcripts in a format that easily lets me deploy tooltip translation on them. Listening to audio while reading the corresponding text works well for me, because I think of language primarily in text format, and of audio as “stuff I mentally translate into text I can visualize in my brain” – and when I see the words in my head, then I understand them. (Yes, I do this for English, which is my native language.) So I do need something that will play audio over text. Maybe podcasts with transcripts, or subtitled YouTube videos – not sure yet.

If I could just buy those parts alone, I’d go for it. However, for the high price (over $500, gah!) I need something that has more non-auditory material.

My end recommendation: Nice if you’re hearing and looking for tons of interactive material for hardcore practice, not worth it if you’re deaf – you need to hear too much of that material.

What else should I try?