Posts that are teaching open source-ish

Video (subtitled!) and transcript for 2013 PyCon talk, “EduPsych Theory for Python Hackers”

The video for my 2013 PyCon talk, “EduPsych Theory for Python Hackers,” is up. It’s 27 minutes and 56 seconds long, and you can view the subtitled version.

(Disclaimer: I’m transcribing my own talk about a week after having given it, but I am deaf, so I’m typing this out through a combination of residual hearing, remembering what I said last Thursday, lipreading myself in the video, and reading slide content. It’s probably 98% accurate; patches welcome on Universal Subtitles.)

The quick and dirty FAQs for starting a research blog: hesitation-fightin’ version

A friend and fellow PhD student asked for points of advice on starting a research blog, mentioning his hesitation to put “non-polished” stuff out there because he’s used to getting everything peer-reviewed. Here’s my reply.

1. It’s a blog. It’s not supposed to be peer-reviewed. It’s often going to be crap. (Think lab notebooks and memos.) Put a disclaimer at the top, big and bold, that THIS IS ROUGH and SUBJECT TO CHANGE and whatever else makes you feel comfortable that people will read your work as you intend it to be read — as rough stuff you’re kicking around — and do it.

2. Publicly accessible is not the same thing as heavily publicized. If you start a blog (on whatever platform), in the beginning nobody will know about it, and nobody will find it, because the average person doesn’t wake up one morning and think “ooh, I wonder what would happen if I typed in a random URL like…” — point being that your early readers will be the people you send links to, so they’ll be friends and colleagues you choose to share with (I’d love to be one, for the record) and they’ll pass the link on to people *they* trust with context of their own, and so forth… so you can think of this as making it much easier for your friends to share your work with their friends.

3. Choose a license for your blog. Creative Commons something-or-other. I highly recommend CC-BY-SA or CC-BY (and not using the NC/noncommercial clause; see this essay for why). One of my professors recently started her own research blog, and we put citation/licensing instructions on a page on her site — so that’s how you can do it too, with minimal fuss.

4. I think that’s most of it, really.  And I will actually go blog these posts up quickly now.

5. Oh — that brings me to my last point. Emails make great blog post starters. Anytime you send a work-related email that gets into a decent explanation, think “could I blog this if I changed a few details for anonymizing?” This one did.

EduPsych theory for Python Hackers: slides and an extended Q&A with further-readings

I recently delivered a talk at PyCon called EduPsych theory for Python Hackers: a whirlwind overview.

Description: I’ve taken two years of graduate courses in engineering education. I save you $50k in tuition and hundreds of hours of reading and give you the short version for Pythonistas who care about education and outreach.

The slides for my talk are below; video will be coming soon at this location (it’s not up yet, but I’ll update this blog when it is). [Edit: it's up!]

EduPsych Theory for Python Hackers: A Whirlwind Overview from Mel Chua

After the talk, I was asked where to go for more information about the Dreyfus Model of Skill Acquisition and how to counteract the phenomenon that people at the more experienced levels don’t remember what it was like at the less experienced levels. The Dreyfus brothers wrote several things about their model; the most often-cited is the book Mind Over Machine. It’s a good book, but their ideas about skill acquisition evolved as they wrote about it — so if that’s what you’re focused on, I would look to their most recent 2008 essay on mastery (doc) for a description (also, it’s freely available online). Here we find some clues as to why experts often forget how to teach novices.

The first clue is the notion that an “immediate intuitive response… characterizes expertise.” You no longer think about the decisions you’re making, you just do — and if pressed to explain why you did what you did, it’s difficult; you can’t describe rules you didn’t use. It just “feels right.” That’s really the gist of it; unless they’re very conscious of their actions, it’s easy for experts to forget that not everyone can “read” the surrounding context as fluently as they do.

The literature on situated cognition and cognitive apprenticeship describes this a bit more fully. Brown, Collins, and Duguid’s paper “Situated Cognition and the Culture of Learning” talks about (on page 37) how many times, practitioners won’t be able to even execute their normal work outside of context; they won’t be able to remember or describe things when they’re standing outside their workplace (it’s so much easier for me to type a new Python program off the top of my head than to stand in the center of a room and recite it out loud). That’s because our mental representations are embedded in the context of our workspace (fancy word: “indexical representations”). In the “history” portion of my presentation, I talked about how (according to one paradigm, anyway) learning is situated — but by the same paradigm, once you’ve learned knowledge, the knowledge stays situated too!

Going along (this is from p. 34 of the same paper), when we transfer an authentic, situated task into the classroom and transform it into a sterile, context-less thing, we create problems for both learners and teachers (if the teachers are experts). The experts don’t have the context they rely on to navigate their tasks intuitively; they’re stuck with rulesets they no longer use. The novices, on the other hand, don’t get the chance to learn how to navigate the richness of a real-world context. When we do this, we usually say we’re “cutting out the noise,” but that “noise” is actually a large part of the point; people need to be learning in context. This is like handing someone a Wikipedia page on A Midsummer Night’s Dream and saying “I’ve just saved you so much time — now that you’ve read the plot, you don’t need to go see the play!”

Several counter-actions to this are possible. First, just being aware of this phenomenon — having the Dreyfus stages as a tool with which to think about your own skill level and the skill level of your students — is tremendously helpful. Donald Schoen describes this sort of thing as knowing-in-action and reflection-in-action, ideas that are themselves useful to read about; his essay “Knowing-in-action: The new scholarship requires a new epistemology” is a good starting point, especially pages 29-32 (from “Turning the Problem on its Head” until you reach “Project Athena at MIT”).

Now you’ve got experts teaching in the context they’re skilled at navigating, aware that they see the world differently than their students, and reflecting on their actions as they go along (or at least that’s what you’re trying to have, anyway). So how do you teach — what strategies do you use to structure and design experiences in the classroom? For this, I’ve found the framework of cognitive apprenticeships a useful one. “Cognitive Apprenticeship: Making Thinking Visible” is a good starter and has teaching examples embedded within it.

To bring this all together in an example that really happened during PyCon: let’s say I’m Aleta Dunne, the maintainer of planeteria, and I meet someone named Mel Chua at PyCon, and she’s interested in poking around the code and maybe submitting a patch. 

  1. The first thing I’ll do is to stay in this context. I’m going to take Mel to the actual code instead of trying to talk about it only abstractly and from a distance.
  2. Second, I’ll think about where I fall on the Dreyfus scale here… perhaps I don’t feel like I’m an expert yet, but maybe I’m proficient.
  3. Then I’ll think about where my “student” falls… hrm, this Mel person is pretty new to this particular codebase, but she seems to have seen some code before. She can’t yet prioritize what chunks of code are important here, though — that’s a sign that she might be an advanced beginner.
  4. Aha. As a proficient person, I can prioritize what’s important — and as an advanced beginner, Mel can’t yet. So prioritizing problems is something I will probably need to scaffold her on as we progress. (For those who saw the talk: Aleta is thinking about bringing “prioritization” into Mel’s zone of proximal development, because Mel can’t do it without Aleta — yet — but perhaps Mel can do it with Aleta’s help.)
  5. Let me start prioritizing tasks and reflect-in-action to figure out what I am doing — the underlying rules I’m using. Why do I think this ticket might be a good one to work on compared to the other tickets I can see? Okay — now… can I explain a bit of that logic to Mel and see if I can help her walk through a similar thinking process for a second ticket?
  6. …and so forth. It’s not a perfect example, but you start getting the idea (I hope) of how these tools-to-think-with can blend into your actions in a teaching-learning-doing space.

There are benefits to this for the expert-teacher’s performance as well. Further on in that paper by the Dreyfus brothers I linked to earlier, they go on to speculate that for experts in a state of flow, “all of the brain’s activity is focused solely on performance so nothing new is learned” (emphasis mine). Therefore, to move towards mastery, they say that…

“It appears that the future master must be willing and able, in certain situations, to override the perspective that as an expert performer he intuitively experiences. The budding master forsakes the available “appropriate perspective” with its learned accompanying action and deliberatively chooses a new one. This new perspective lacks an accompanying action, so that too must be chosen, as it was when the expert was only a proficient performer. This of course risks regression in performance and is generally done during rehearsal or practice sessions.”

In other words, to answer the original question directly:

  1. Experts can teach better by backing out of their instincts and deliberately stepping into different patterns with awareness, bringing their students along for the ride.
  2. This is difficult to do (and experts don’t do this often) because it often makes them perform worse and look dumber.
  3. Therefore, to teach better, get over the fear of looking dumb — and when you do that, you’re on the track to mastery.

Allen Downey’s “Bayesian Statistics Made Simple” workshop: a recap and review

I attended Allen Downey’s PyCon 2013 workshop on Bayesian Statistics Made Simple; his slides and code are available online (free and open, of course — go Allen!). Bayesian thinking (“given these results, how likely are my hypotheses?”) is powerful, simple, and a mind-flip for folks like me used to frequentist statistics (“given my hypotheses, how likely are results I don’t yet have?”). The Python library Allen developed for his book makes it easy to nest multiple levels of specifications describing our assumptions; since I find typing out clean, modular code consumes far less mental RAM than manipulating abstract math symbols on paper, that suits me just fine. We went through basic problems such as:

  • If I’ve seen N enemy tanks with the following serial numbers (and assume enemy tanks are numbered sequentially starting from 0), how many total tanks does the enemy probably have — and how does my guess change as I see more tanks?
  • If I have the results of a repeated coin-flip, what is the probability that the coin I flipped is fair? (Hint: it depends on how you think the coin may be unfair.)
  • If Alice scored higher on the SAT than Bob, what is the chance that Alice is smarter than Bob? (What assumptions do we make about the SAT, test-takers like Alice and Bob, and the nature of intelligence?)
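The first of those problems can be sketched in a few lines of plain Python. This is not the workshop’s actual code (Allen’s library organizes the update differently); it’s just a minimal grid of hypotheses with a uniform prior, to show how small the core idea is:

```python
# Minimal Bayesian update for the "enemy tanks" problem -- a sketch,
# not the workshop's actual code.

def tank_posterior(serials, max_tanks=1000):
    """Posterior over the total number of tanks N, given observed serials.

    Assumes tanks are numbered 0..N-1 and each observed serial is drawn
    uniformly from the serials that exist.
    """
    hypotheses = range(1, max_tanks + 1)          # candidate values of N
    posterior = {n: 1.0 for n in hypotheses}      # uniform prior
    for s in serials:
        for n in hypotheses:
            # Likelihood of seeing serial s if there are n tanks total:
            # 1/n if s fits, 0 if s is impossible under this hypothesis.
            posterior[n] *= (1.0 / n) if s < n else 0.0
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()}

post = tank_posterior([37, 64, 89])
# The most probable hypothesis is N = 90 (the smallest N consistent with
# the largest serial seen), but the posterior has a long tail -- and each
# new observed tank reshapes it, which answers the "how does my guess
# change as I see more tanks?" part of the question.
best = max(post, key=post.get)
```

Feeding the same function a longer list of serials shows the posterior tightening around the true count, which is the whole “update as evidence arrives” punchline.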

The budding researcher in me took notice when Allen presented examples of more things one could do with Bayesian statistics. For instance, by looking at my dataset from a first experimental sampling run… say I interview students and find 3 students who love thermodynamics, 2 who hate it, and 15 who don’t know what it is — I can start making inferences about:

  • How many other opinions about thermodynamics might be out there that I didn’t get in my first trip to the field?
  • How many other opinions about thermodynamics might be out there that I didn’t get in my first trip to the field?
  • How many more students will I need to interview before I have X% confidence that I have gotten Y% of the existing opinions about thermodynamics expressed?
  • What’s the proportion of students who love, hate, etc. thermodynamics — or rather, what’s the probability that, in the entire population of all students I could ever interview, X% will express this opinion? (In my first sample, 10% of students loved thermodynamics. What’s the probability of the “real” proportion of thermodynamics-lovers in the general student population being 10% — versus, say, 50% of students loving thermodynamics and me just unluckily missing them? How would my confidence in making the claim “10% of students love thermodynamics” increase if I interviewed more students?)

…all given certain assumptions, of course, such as assuming my sampling is really random — maybe the thermodynamics-lovers were all at a thermodynamics conference when I went looking, or assuming students will express a clear, “truthful” opinion on thermodynamics to me for some value of “truth,” and… As with all modeling techniques, these guesses are only as good as my model. But the neat thing about Bayesian statistics is that it’s easy to tweak facets of my assumptions and see what changes in predicted results ripple out from it. So it’s a thinking tool that’s good to have, in any case.
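That last question (10% versus 50%) lends itself to the same grid treatment. Here’s a sketch — again my own, not from the workshop, and with counts invented for illustration — using a uniform prior and a binomial likelihood over candidate values of the true proportion:

```python
# A grid-approximation sketch (not from the workshop) of the proportion
# question: given k "lovers" out of n interviewed, how plausible is each
# candidate value of the true proportion p? Counts here are made up.
from math import comb

def proportion_posterior(k, n, grid_size=101):
    """Posterior over the true proportion p: uniform prior,
    binomial likelihood, evaluated on an evenly spaced grid."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    likelihood = [comb(n, k) * p**k * (1 - p)**(n - k) for p in grid]
    total = sum(likelihood)
    return dict(zip(grid, [l / total for l in likelihood]))

# 2 lovers out of 20 interviews (10% in-sample):
post = proportion_posterior(2, 20)
ratio_small = post[0.1] / post[0.5]   # p=0.5 isn't impossible, just unlikely

# The same in-sample 10%, but from ten times the interviews,
# sharpens the contrast between the two hypotheses:
post_big = proportion_posterior(20, 200)
ratio_big = post_big[0.1] / post_big[0.5]
```

The ratio between the posterior weights at p=0.1 and p=0.5 is exactly the “did I just unluckily miss the thermodynamics-lovers?” question, and watching it grow with sample size is the “how would my confidence increase?” question.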

I also picked up pedagogy from Allen’s workshop. It was immediately clear to me how he had used his in-progress book and research blog on the same topic to scaffold the construction of his workshop, and that was a lesson in and of itself; all three share the deliberate, incremental, self-teaching style I associate with Allen (who was one of my professors in college). Although we both believe in transparent science, our teaching styles are vastly different; Allen leads large groups down a well-marked trail, scalable and reproducible, moving with the clean efficiency of experience. He’d be an excellent MOOC professor. I like scattering my groups loose to wander in a rawer pasture, building discussion around surprising things people stumble into — a different improvisation every time depending on who’s there. Both styles have their pluses and minuses, and Allen’s an old hand at his style whereas I’m barely a journeyman in mine. I come from teaching full-week, all-day workshops and semester-long classes, where team dynamics and wandering comfort can evolve and distributed group improvisation is wonderful once you get past the initial discomfort hump. But Allen’s marked-trail style is more expected — and certainly more efficient — for short workshops like the ones we had, so I’ll try to adapt my materials more to that teaching technique next time I have a 3-hour workshop to run.

Speaking of which — I succumbed to exhausted sleep too early last night to post materials from my workshop; I’ll need to find some time in the next 36 hours to do so (note to self: the perfect is the enemy of the good).

PyCon signal processing workshop materials

Here are the materials from my PyCon 2013 tutorial, Digital signal processing through speech, hearing, and Python.

Original description:

Why do pianos sound different from guitars? How can we visualize how deafness affects a child’s speech? These are signal processing questions, traditionally tackled only by upper-level engineering students with MATLAB and differential equations; we’re going to do it with algebra and basic Python skills. Based on a signal processing class for audiology graduate students, taught by a deaf musician.

I’ve pulled the code snippets and some graphs into the slides so that you can walk through the entire thing by working through the slide deck (below). You can also find all the code (with many inline comments, and some more details that didn’t get into the slides) on github.
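For a taste of the piano-versus-guitar question from the description, here is a standard-library sketch (my own, not taken from the tutorial materials): two tones at the same pitch but with different harmonic recipes, plus a naive single-frequency DFT to measure how much energy each harmonic carries.

```python
# Two instruments playing the same note share a fundamental frequency;
# what differs is the relative strength of the harmonics. A stdlib-only
# sketch (not the tutorial's actual code): build two tones with
# different harmonic recipes and probe each harmonic's energy with a
# naive one-frequency DFT.
import cmath
import math

RATE = 8000          # samples per second
F0 = 220.0           # fundamental frequency in Hz
N = 2000             # a quarter second of audio

def tone(harmonic_weights):
    """Sum of sine harmonics of F0, one weight per harmonic."""
    return [sum(w * math.sin(2 * math.pi * F0 * (h + 1) * t / RATE)
                for h, w in enumerate(harmonic_weights))
            for t in range(N)]

def energy_at(signal, freq):
    """Magnitude of the DFT evaluated at a single frequency."""
    return abs(sum(x * cmath.exp(-2j * math.pi * freq * t / RATE)
                   for t, x in enumerate(signal)))

bright = tone([1.0, 0.7, 0.5, 0.4])   # strong upper harmonics
mellow = tone([1.0, 0.2, 0.05, 0.0])  # energy mostly in the fundamental

# Same pitch (equal energy at 220 Hz), different timbre: the 3rd
# harmonic (660 Hz) is far stronger in the "bright" tone.
```

That gap in harmonic energy, at the same fundamental, is the timbre difference your ear hears between instruments.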

Tutorial attendees will note that the vocoder demo is still forthcoming — I’ll make a blog post when that’s up and update this post to link to the vocoder demo when the time comes.

How would you teach signal processing to audiology graduate students? I’m doing it spring term, and here are my ideas.

It appears that I’ll be teaching a graduate-level signal processing class to the 2nd-year PhDs in the audiology department at Purdue this spring term. My mission for this class is to help the audiology students become the sort of audiologists I’d want to have myself as a deaf geek. At the end of this post are three ideas that I’d like folks to give the “crazy test” to (as in, feedback: is this cool or is this crazy?).

I am excited! I mean, I get to spend a semester helping audiologists get comfortable with tech geekery? YES. I am also scared shitless, because… wait, am I qualified to do this? Signals and Systems boggled my mind when I first encountered it as an electrical and computer engineering undergrad, and I haven’t taken any grad-level classes on it myself yet (I’ve just got a bachelor’s) and I know… minuscule amounts about audiology, most of it as a patient.

It comforts me to know that my engineering education PhD classmate Farrah is co-teaching. She made a living teaching DSP college classes for nearly a decade in Pakistan, and now makes a living researching how to do it better. We also only have 8 students — the same ones in my Hearing Aids class this term — and they’re cool and willing to experiment. I also know I do tremendously well with baptisms by fire; the class I’m most legendary for being an awesome TA at was one I nearly flunked when I took it (I actually begged to teach because I wanted another chance to go back and learn the material, and… oh, did I ever.)

It is further complicated by the fact that the class will be held in Indiana next term, and I… will be in Ohio. I’ll be in Columbus studying at Ohio State University (OSU) as a travelling scholar, taking Patti Lather’s classes on cultural theory and poststructural/feminist research methods in education before she retires at the end of this school year. (I have also, to my great joy, gotten permission to take the intro hip hop dance class as well as whatever level of “individualized German” I place into — apparently I am not the first hearing-impaired student they’ve had… okay, I’m the second, but at least someone’s done it before, and the ability to go at my own pace is a huge blessing because I read fast but listen slow.)

Farrah and I had hoped I might be able to drive back for classes every other week, but the course scheduling for the semester makes that… insane. It would involve attending class, driving 5 hours, teaching class, driving 5 hours, sleeping for a few hours, and then attending another class. I’m willing to do that once or twice, but… every other week? No. I have this strict no-dying policy about my work life…

So I have been brainstorming as to how we can teach the class, and have come up with 3 ideas that Farrah hasn’t vetoed as too crazy yet. I would love feedback and thoughts.

Have the class write the course textbook — for an audience of geeky hearing aid users and their companions. Yes, I’m a fan of collaborative project-based learning that results in Real Useful Things, how did you guess? When I say “textbook,” I mean “reference material” — this probably will resemble a collection of articles and videos and code samples more than a traditional thing-on-dead-trees-with-a-spine. (And of course it’ll all be open-licensed.) Here’s the idea: at some point, every audiologist is going to have patients who are geeks, or whose parents or partners or friends or family are geeks — they’re going to encounter someone who is going to look at the room full of equipment and the tiny, extremely expensive embedded device going inside someone’s skull or ear canal, and ask “so, how does that work? Like… really work?” The pretty ad copy from hearing aid manufacturers featuring happy grandparents and little babies isn’t going to cut it — so what are you going to give them? Right. There’s nothing for that yet. So my idea is that we’re going to make it. Whatever we make ought to be readable and understandable by a bright high school kid; no engineering PhD required — but it’s going to explain things, ok? No hand-waving, no magical black boxes. This would be our Giant Project for the semester, the entire class pulling and learning to get it together. This means we need project work time, which is why my second idea is to…

Flip the classroom, Oxford style. I think it’s Oxford that has students meet 1:1 with tutors instead of placing them in giant lecture halls. As I told Farrah tonight, the notion of lecturing to 8 students seems ludicrous to me — especially if they all come from such different math/programming/technology/science backgrounds. Give them material to learn each week and a choice of resources to learn it from plus a little self-diagnostic exercise thing to check their understanding, then have them meet for 30 or 45 minutes with an instructor to assess that understanding, ask questions, and whatever else that individual needs. (Bonus: grading happens during those meetings and becomes formative feedback — way more fun and easy than marking exams.) That’s 2-3 hours per instructor per week (depending on whether we do 30 or 45 minute sessions) and way better contact time for the students.

This means that “class time” as scheduled can be used for project work time — basically, studio. We know everyone will be free then. We can get together and work on stuff! But we don’t all need to be in the same place. In fact, I won’t be in the same place. I’m probably going to do Google Hangouts for my 1:1s (bonus: multiple people can listen in). I’ll do the same for team meetings. Heck, they can do the same for team meetings. Heck, they can do the same if they want to talk with other people who aren’t in Indiana (hearingaidhacks community, I’m looking at you). I mean, we’re talking 8 future audiologists here — they’re going to know so much more about how to use this flexibility for awesome than we will.

Use Python. Yes, MATLAB is what researchers in this field will use. Lord knows I’ve had to do my signal processing homework in MATLAB — but I was studying electrical engineering, not preparing for future clinical practice. And I had programmed before. And was very, very comfortable with matrix operations. And… I mean, if conceptual understanding is the goal here, use a language that’s easy to understand, has novice-friendly libraries (I am enthralled by the myriad signal processing libraries Python provides — now I just need to work through them all and figure out which ones are best for learning!), and produces beautiful graphical and auditory output. I mean, if you wanted to become a brilliant research MATLAB programmer, you’re probably not getting your PhD in audiology. That having been said, I know the department wants the students to learn MATLAB, so maybe there’s some compromise here to be had; a mix of MATLAB and Python, having students translate MATLAB code to Python or vice versa as an exercise, something of the sort. (And if anyone really wants to learn MATLAB, we’ll teach them. Farrah and I can do that. If someone needs this for their future research project, we can help, but I’d rather have the individuals who need it ask for it than to force-feed it to everyone.)
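To make the translate-between-languages exercise concrete, here’s the sort of snippet I have in mind (invented for this post, not actual course material): a five-point moving-average smoother, first as the MATLAB idiom, then written out longhand in Python so the averaging stays visible.

```python
# A hypothetical MATLAB-to-Python translation exercise (made up for
# this post, not actual course material). The MATLAB original might be:
#
#   b = ones(1, 5) / 5;
#   smoothed = conv(noisy, b, 'same');
#
# and a plain-Python translation, written out longhand. (One deliberate
# difference to discuss in class: MATLAB's 'same' zero-pads at the
# edges, while this version shrinks the window there instead.)

def moving_average(signal, window=5):
    """Smooth a signal by averaging each sample with its neighbors."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

noisy = [0, 10, 0, 10, 0, 10, 0, 10]
smoothed = moving_average(noisy)   # the wild 0/10 swings flatten out
```

The translation direction matters less than the conversation it forces: what is `conv` actually doing, and why did the edges come out different?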

Time commitment math? Here’s what I figure.

For the students: 3 hours in project studio plus 1-2 hours working on the project outside studio. Reading time plus 30-45 minutes for individual meetings — let’s call that another 3 hours. Total: 7-8 hours per week, which is well under the 9 hours you’re “supposed” to spend on a 3-credit class (we will inevitably forget something like administrivia, so the buffer time is good). And a lot of the time is flexible, which I think audiology students with busy clinic hour requirements might appreciate.

For the instructors: 3 hours in project studio plus 2-3 hours of individual meetings, and between 2-5 hours a week of course prep depending on what’s going on. I’m trying to overestimate here, so that places us at 7-11 hours a week — totally survivable. (I suspect it will be closer to the lower end of this time estimate if we prepare intelligently and manage things well, but… Murphy’s Law.)

That’s what I’m thinking. O metabrain of the internets, please hammer on these ideas, propose new ones, make them better, do the many-eyeballs magic that you do.

Designing technology to support reflection

Anyone who’s heard me talk about Teaching Open Source portions of my research will soon gather that I’m a fanatic about metacognition. If our designers and engineers and writers and developers and contributors don’t know themselves and how they think, how can they work with others? Here’s an actionable summary of a little (ok, 20 pages) paper called “Designing Technology to Support Reflection,” written for FOSS projects and other distributed online communities where learning takes place. The paper was written in 1999, when open source was still a baby, the internet had just begun taking hold, and MOOCs were light years away, so I’ve updated the examples to be more familiar to us 13 years later.

The paper outlines “…four ways that technology can provide powerful scaffolding for… [a] combination of both individual and collaborative reflection… [for] actively monitoring, evaluating, and modifying one’s thinking and comparing it to both expert models and peers.” (That last quote was an incredibly hacked-up and shortened version of the abstract, by the way – and the start of the paper is a lovely lit review on why metacognition matters for performance; if you are at all interested in these topics, I recommend reading at least the first 3 pages.)

The four ways:

  • process displays
  • process prompts
  • process models
  • a forum for reflective social discourse

Process displays are basically debuggers; they show learners where they are in a process (which may be new, complex, and confusing to see, let alone do the first time). Firebug can be thought of as a process display, albeit not one specifically designed for education; Hackasaurus is a bit closer. Bret Victor’s piece on Learnable Programming is a great example of how an even richer system might be designed. Also, pretty much anything that supports portfolio creation can be used as a process display — wikis, blogs, etc.

Process prompts are attention-grabbers that bring the focus of learners to appropriate bits of a process while it’s in motion. You can think of the specific question-prompts of a bug reporting form as a process prompt; if we took that one step farther and had an option to replace the big blank textbox with a “novice” display that walked them through steps for reporting a bug (“What were you trying to do?” “What specifically did you type/click?” “What happened next?” “What did you expect to happen instead?” etc.) that would be an upgrade. Checklists are a very simple sort of process prompt; for instance, see Mediawiki’s guide to conducting a code review.
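To make that “novice display” idea concrete, here’s a hypothetical sketch — the questions are the ones from the paragraph above, but the function and field names are invented, not any real tracker’s API:

```python
# A sketch of the "novice mode" bug-report form described above --
# hypothetical, not any real tracker's API. Each question is a process
# prompt: it pulls the reporter's attention to one step of the
# reporting process at a time.

PROMPTS = [
    ("goal",     "What were you trying to do?"),
    ("steps",    "What specifically did you type/click?"),
    ("actual",   "What happened next?"),
    ("expected", "What did you expect to happen instead?"),
]

def build_report(answers):
    """Assemble a structured bug report from the prompt answers."""
    missing = [key for key, _ in PROMPTS if not answers.get(key)]
    if missing:
        # Refusing to continue is itself a prompt: it points the
        # novice at the step of the process they skipped.
        raise ValueError("unanswered prompts: " + ", ".join(missing))
    return "\n".join(f"{question}\n  {answers[key]}"
                     for key, question in PROMPTS)

report = build_report({
    "goal": "install the package",
    "steps": "ran `pip install foo`",
    "actual": "got a TypeError",
    "expected": "a successful install",
})
```

The big blank textbox asks the expert’s question (“describe the bug”); the prompts walk the novice through the expert’s process one attention-sized piece at a time.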

Process models are maps of expert thought processes, the “stuff” that experts know and act on but don’t always articulate because they’re taking it for granted. Mo Duffy does a great job of this when describing her work on GNOME and Fedora, and I don’t even need to look that far — her two most recent blog posts walk you through some of her thinking about bootloader UI design and wallpaper development, respectively. Now, this is more a think-out-loud than a true generalized process map; such a thing looks more like Fedora Infrastructure’s “what to do when there’s a system outage” document. Armed with this, even a novice sysadmin like me can see how masters like Kevin Fenzi and his crew keep their cool and manage the ship even in the midst of a maelstrom (although actually doing it is another thing).

Reflective social discourse is about getting more eyeballs on a problem to make it shallower — hearing multiple voices on a topic, getting feedback from many different angles. (This is where the paper most shows its age; there’s almost an undercurrent of awe in describing how several computers could be connected through a fileserver! and students could write notes commenting on the work of their peers!) Nowadays, there’s the obvious collection of tools specifically made for dialog: blogs, microblogs, Planet blog aggregators, chatrooms of all sorts. But there are others more subtly embedded in our practice. Public bugtrackers and code reviews are all about reflective social discourse — I’ve learned plenty by reading extended comment threads on closed tickets. And while this isn’t a technology-supported option per se, conferences and in-person events are another form of this — present your idea to a bunch of friends and have them help you make it stronger.

The article closes with a plea for systems thinking. Won’t someone, somewhere, start designing learning technology that incorporates all four of these in one? And we are — look at any of the dashboards being created for things like EdX or Khan Academy, and you’ll find all four aspects. But now you know what they’re doing.

Here’s the paper, for those of you inclined to chase down the full text. Have fun!

Lin, X. D., Hmelo, C. E., Kinzer, C. K., & Secules, T. J. (1999). Designing technology to support reflection. Educational Technology Research & Development 47(3), 43-62.

So far, Radically Transparent Research has cost $291.98

Sebastian and I have been working on a collection of Teaching Open Source (TOS) faculty stories since last spring; I’ll probably write on that over the next few weeks as we get things together for our FIE presentation. It’s taking a while because (1) we’re busy, (2) we’re figuring out a lot of copyright/IRB stuff for the first time, and (3) honestly, it’s sort of our first “real” empirical-data research project, so the clumsy newbie-fingerprints are painfully obvious All Over The Dang Thing.

One of the things I love about my research is that it’s possible to do it on a shoestring. No scanning electron microscope needed (though those are awesome), no supercomputer requirements (…yet). All our equipment has been dug out from backpacks and drawers; I recorded the first round of interviews with an aging digital camera that dates back to my junior year of undergrad. If you poke it the wrong way, the battery goes flying out of its compartment — this is the same battery that takes a night to juice up and then peters out within a 2-hour filming session. When my camera really gives up the ghost during a session, I record interviews on my cell phone, which is… suboptimal. Thankfully, Sebastian has a new digital camera now. If he brings it to Seattle, maybe we’ll even be able to capture TWO interviews on the same day for the first time! We’re doing almost everything else with Free Software.

I say “almost” everything else, because strictly speaking, a transcription service is not FOSS. We debated this last spring break, but I finally declared that it would be Really Dumb to have audio files transcribed by a deaf researcher (me) or a researcher trying to reverse typing-induced RSI (him) and that if we were going to spend money on this project, IT WOULD BE ON THIS. *stamps little foot*

So we’ve spent $218.98 on transcription. Add $10 for a domain name and $63 for webhosting for the months we’ve been doing the project, and we’re up to $291.98, which is not so bad. With additional transcription, maybe that’ll jump to $600 or $800; after a certain point I want to find a place to host the oral history library of (open-licensed, fully-identifiable) transcripts at a place we don’t need to pay for, because the idea is that more TOS professors will be able to contribute once we get things set up. (And I know, I know, it’s hard to contribute when you don’t know what it is because it’s not online yet. It nags at me too! We’re working on this! Copyright is, um… fun!)

Mostly I wanted to note spending on this project so far; this tally doesn’t count the cost of going to Seattle to present it in a few weeks, nor does it include the hours and hours we’ve spent on it. I’m trying to get a grasp on How Much Research Costs, at least the type of research that I want to do. Someday I will be able to make reasonable estimates; right now I’m going for somewhat-accurate-tracking.
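Since I'm aiming for somewhat-accurate-tracking, the tally above is simple enough to keep as a tiny script. This is just a sketch for my own bookkeeping; the category names are mine, and new line items (more transcription, travel) would get added as they come in:

```python
# Rough running tally of out-of-pocket research costs so far.
# Figures are the ones from this post.
expenses = {
    "transcription": 218.98,
    "domain name": 10.00,
    "webhosting": 63.00,
}

total = sum(expenses.values())
for item, cost in sorted(expenses.items(), key=lambda kv: -kv[1]):
    print(f"{item:>14}: ${cost:8.2f}")
print(f"{'total':>14}: ${total:8.2f}")
```

Running it confirms the $291.98 figure; bumping the transcription line will show how fast that total climbs toward the $600-$800 estimate.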

C’mon, Mel. More work to do. Go, go, go.

My research proposal (draft) as a comic book page

Robin asked me to try drawing what I was thinking. Here’s what happened. It peters out at the end where I start waving my hands around and going “I, I, I need to study this post-structuralism thing in Ohio in the spring,” but I’m actually fairly happy with everything up to the last two frames as something that might be understandable by folks from pretty much any discipline and from outside academia. (Maybe I need to explain radical transparency more?)

Anyhow. I wonder if the NSF would accept a (better) variant of this as one of the two pages of my fellowship research proposal, or if they’d just look at me funny.

research sketches: cross-disciplinary collaboration with radical transparency

Think about Universal Design for FOSS community experiences, not just products.

If I asked a random FOSS community member whether they’d like more people to use the software (or content, or hardware, or whatever) they work on, I’m pretty sure most people would squint at me funny and say some variant of “yes” or (if they’re blunter) “duh.” And also I think most of us acknowledge accessibility is a Good Thing That Can Help With That — possibly a good thing we don’t have resources to implement, but a good thing nonetheless — and applaud efforts like GNOME’s that are directly working on making “FOSS things” more accessible to the less-privileged (most commonly thought of as “the disabled,” but really extensible to just about everyone). This is called Universal Design, and it is…

“…the design of products and environments usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.” –Robert Mace (emphasis mine)

Now, we create things, but we also create experiences — and one of those experiences is the experience of becoming one of us, a contributing member of the project community (which I believe most FOSS contributors would also like more of in their projects). Our communities are environments. What would it look like to expand our thinking from “accessibility of the product is good!” to “accessibility of the contribution process is good!”? I’ve been reading about Universal Design for Learning, an adaptation of Universal Design geared towards the design of learning experiences, and it’s got some heuristics that might be useful to consider. The list below is adapted from Sheryl Burgstahler’s Universal Design: Principles, Process, and Applications.

  1. Equitable. The community is useful and marketable to people with diverse abilities. For example, a project that specifically calls for help with documentation, art, testing, etc. and other things beyond code and system administration. It’s made clear that interest and willingness to learn are the most important things, not a PhD in CS — come in and we’ll teach you much of what you need to know to get started helping out.
  2. Flexibility. The community accommodates a wide range of individual preferences and abilities. Mailing lists where language minorities can converse in their native tongue. Holding in-person getting-started events to bootstrap folks who are nervous about getting the etiquette of online communication wrong.
  3. Simple and Intuitive. How to start participating in the community is easy to understand, regardless of the newcomer’s experience, knowledge, language skills, or current domain of expertise. Having suggested default settings in “how to join this team” instructions, the same way most installers nowadays have a recommended default setup that advanced or adventurous folks can change.
  4. Perceptible Information. The community communicates necessary information effectively to the contributor, regardless of ambient conditions or the user’s sensory abilities. A common example in most communities is “if it’s not on public mailing list X, it didn’t happen” — phone calls, hallway conversations, and even (in some cases) IRC meetings should be captured somewhere everyone can look at them.
  5. Tolerance for Error. The community culture minimizes hazards and the adverse consequences of accidental or unintended actions. A safe place for newcomers to ask even the “silliest” questions — a place that’s actually maintained and responded-to by respected, established community figures instead of becoming a ghetto of ignorance. A cultural habit of encouraging and applauding genuinely curious attempts that go wrong instead of responding with a “how could you be so stupid as to try X?” tone. The practice of teaching newcomers how their actions can be made reversible — the notion that you can revert wiki pages, roll back code in a version control system, etc. and that this is No Big Deal. Being bold often requires the confidence that you won’t irreversibly screw anything up.
  6. Minimal Activation Energy (originally “Low Physical Effort” for things like automatic door openers). The community’s processes can be used efficiently and comfortably, and with a minimum of fatigue. Integrated logins across a project’s infrastructure. Not making people sign into 4 different accounts and make 20 mouse-clicks in order to vote for a board. The option to subscribe to notifications of one’s choosing.
  7. Space for in-person participation. At in-person gatherings, size and space is provided for approach, reach, manipulation, and participation by community members of many sizes, postures, and mobility considerations — including the ability to be there in person at all. Live collaborative notetaking during meetings, childcare at conferences, choosing wheelchair-accessible buildings for events whenever possible, projecting IRC channels into a live space so remote members can have a visible voice.
Food for thought. And by the way, these are all things that new contributors (including classes of students looking for a project) can evaluate and improve upon for their favorite FOSS project — doing them would be an extremely useful helping hand to many projects.