Here’s the transcript of the seminar “Psst: wanna eavesdrop on my research?” [materials] I delivered on Thursday about applying Free Culture / Open Source practices to qualitative research (for the engineering education department at Purdue, hence the disciplinary focus). I’ve edited in some context for readers who weren’t there and anonymized audience comments (except Jake Wheadon’s interview — thanks, Jake!). The transcript cuts out before the last 2 audience questions, but otherwise this is what happened; click on any slide’s photo to enlarge it.
This will be a slightly strange seminar; I’ve had at least half a dozen people e-mail me and say they can’t be here but would like to catch up later on. So we’re interacting here and it’s being recorded by Boilercast and transcribed by Terry over there.
The title is long and fancy and we’re going to ignore that.
I’m Mel. I think you all know me. I’m a Ph.D. student here in engineering education and one of the things I do is qualitative research, because it’s fun. I also come from the hacker world, the open source, open content world where there’s this radical transparency culture that defaults to open and share everything about what they’re doing.
This is Terry. She is a CART provider and the one responsible for typing super, super fast on our shared transcript document. The URL for the live transcript is on the slide for those of you who are following along on Boilercast. You should know that all of the recordings and the documentation and so forth we’re producing in the seminar will be open data. That means a couple of things.
First of all, the document that Terry is transcribing in is a collaborative text editor and you can type and annotate and fix spelling or whatever you want. It will be the canonical record of our discussion. The second is, as we’re recording this, no names will be taken down. So it’s not going to say I said this and you said that and this person said that other thing. It’s just going to be “person in room” said words. If you want to be off the record, if you don’t want Terry to type down what you’re saying, say that before you speak and she will stop typing for a moment, or if you see something in the document you can go and delete the stuff you said if you don’t want it on the record. The document will be available for editing right after the seminar as well. I probably won’t read and post the final version until sometime after dinner. So you can also take out stuff you said afterward if you don’t want to be captured here. The ground rules are also in the document.
This is what we’re going to be doing today. It’s a bit of an adventure. I’ve been playing with something called radically transparent research. There’s a website and it’s out of date and once I finish my papers I’ll fix it. What radically transparent research refers to is this: what if we did engineering education research, or any kind of qualitative research, as if it were an open project? Make the data open, the analysis open in terms of both being publicly available and open to anyone who wants to participate, and not having a delineation saying “these are the official people” on the project and “these are not.”
What would it look like if we defaulted to open instead of defaulting to closed? So what we’re going to do is we’re going to do a little RTR — radically transparent research — project right here in this room. We’ll be collecting data, going through the licensing process, seeing what analysis looks like, and dissemination — we’ll get back to that. We’ll see a few examples of other projects that RTR is being used in and then we’re going to loop back and do an instant replay of “okay what the heck just happened?” I’m hoping as we go through the steps they’ll seem fairly logical, but then when we go back and compare them to the normal way of doing qualitative research they’ll start seeming really weird and the implications of the pieces lining up will start piling and piling and piling.
(Note: this was a 45-minute talk, so for the sanity of feed readers, I’m going to say “click to read more” here.)
I had the privilege of spending several days last week at Georgia Gwinnett College with professors Nannette Napier and Evelyn Brannock, alumni of our 2011 Professors’ Open Source Summer Experience (POSSE) cohort. They work you hard, these Georgia profs — within the first 24 hours of my landing in Atlanta, I’d spoken at 3 classes, had lunch with Science and Technology faculty and a meeting with the dean, gone on a library tour, hosted a Google Summer of Code application prep session, and delivered an all-college Tech Talk on humanitarian free/libre/open source contribution as a career stepping-stone. [slides]
In addition to a stroll around the CNN Center and Centennial Park, Evelyn and Nannette introduced me to tomato aspic and fried chicken livers at The Colonnade. The former is a virgin bloody mary in gelatin format and is… edible. The latter is a very, very good idea (om nom NOM). Then there was Magic Bread, a basket of sweet, malty, butter-laden pillows of goodness. They are supposedly called “yeast rolls.” I like my naming better.
On my last day there, I spoke in front of a group of K-12 (mostly middle-school) teachers in for a Saturday workshop on computing while middle school girls played with Scratch and Lego Mindstorms next door. Of all the moments I had in Georgia, this surprised me the most; the workshop was jam-packed, fast-paced, INSPIRE CHILDREN go go GO! and normally I’d jump right in and surf that wave of hyper, even ratchet the speed up a notch — but this time, the wave rushed through me and then past me — and I was aware of it but not swept into it; a strange experience for me. I heard how fast people were talking. How little they were breathing. These were teachers with no technical background who’d been suddenly asked to teach computing. I wondered what that pressure felt like, having to teach kids something you didn’t know and had no time to learn. What could I give them, if what they really needed was more time –
And suddenly I was being introduced (fast! high-energy! She has INSPIRED CHILDREN go go GO!) and I was up front, and pulled a chair and sat and looked at them and breathed –
I remember only vaguely what I said; this talk came from a different place; a quiet place, a gently rooted one. I told them stories of the 15-year-old girl I used to be. I spoke about uncertainty and shyness, needing to watch and be safe, having my fears and hesitations accepted as a valid thing and held so I could address them in my own time. About creating space and letting go and knowing you’re fine even if the world is yelling faster! you’re behind! and how wrenchingly difficult it is to stand there and create a peace amidst the pressure. About the patience needed to let playfulness and ownership emerge and intertwine. About giving yourselves permission to show those around you — especially the kids — how a grown-up learns new things in an unfamiliar space. I sat up there, speaking slowly, looking into eyes. I had the strength of gentleness and all the time in the world, and I could share it. The room’s energy relaxed and softened, and it felt — in a very small way — like hugging a tiny corner of the universe.
Yeah. Like I said, it was the strangest thing. I’ve felt this quiet power once before, during my last brief remarks on a 2010 panel at UIUC. I don’t know what this space is or when I will be back, but I suspect that silence (one of my biggest fears) is guardian of its doors. I’d like to stand here more, though. It feels… right. It’s very difficult, and I’m not sure if I would say I’m happy when I’m there, but I am more… me. So I will continue learning towards that.
Thanks, Georgia. Thanks, Nannette and Evelyn and GGC. Thanks, tiny corner of the universe — you hugged me back.
(Disclaimer: I’m transcribing my own talk about a week after having given it, but I am deaf, so I’m typing this out through a combination of residual hearing, remembering what I said last Thursday, lipreading myself in the video, and reading slide content. It’s probably 98% accurate; patches welcome on Universal Subtitles.)
A friend and fellow PhD student asked for points of advice on starting a research blog, mentioning his hesitation to put “non-polished” stuff out there because he’s used to getting everything peer-reviewed. Here’s my reply.
1. It’s a blog. It’s not supposed to be peer-reviewed. It’s often going to be crap. (Think lab notebooks and memos.) Put a disclaimer at the top, big and bold, that THIS IS ROUGH and SUBJECT TO CHANGE and whatever else makes you feel comfortable that people will read your work as you intend it to be read — as rough stuff you’re kicking around — and do it.
2. Publicly accessible is not the same thing as heavily publicized. If you start a blog (on wordpress.com or whatever), in the beginning nobody will know about it, and nobody will find it, because the average person doesn’t wake up one morning and think “ooh, I wonder what would happen if I typed in a random URL like random-person-research-blog.wordpress.com…” — point being that your early readers will be the people you send links to, so they’ll be friends and colleagues you choose to share with (I’d love to be one, for the record) and they’ll pass the link on to people *they* trust with context of their own, and so forth… so you can think of this as making it much easier for your friends to share your work with their friends.
3. I think that’s most of it, really. And I will actually go blog these posts up quickly now.
4. Oh — that brings me to my last point. Emails make great blog post starters. Anytime you send a work-related email that gets into a decent explanation, think “could I blog this if I changed a few details for anonymizing?” This one did.
Description: I’ve taken two years of graduate courses in engineering education. I save you $50k in tuition and hundreds of hours of reading and give you the short version for Pythonistas who care about education and outreach.
The slides for my talk are below; video will be coming soon at this location (it’s not up yet, but I’ll update this blog when it is). [Edit: it's up!]
After the talk, I was asked where to go for more information about the Dreyfus Model for Skill Acquisition and how to counteract the phenomenon that people at the more experienced levels don’t remember what it was like at the less experienced levels. The Dreyfus brothers wrote several things about their model; the most often-cited is the book Mind Over Machine. It’s a good book, but their ideas about skill acquisition evolved as they wrote about it — so if that’s what you’re focused on, I would look to their most recent 2008 essay on mastery (doc) for a description (also, it’s freely available online). Here we find some clues as to why experts often forget how to teach novices.
The first clue is the notion that an “immediate intuitive response… characterizes expertise.” You no longer think about the decisions you’re making, you just do — and if pressed to explain why you did what you did, it’s difficult; you can’t describe rules you didn’t use. It just “feels right.” That’s really the gist of it; unless they’re very conscious of their actions, it’s easy for experts to forget that not everyone can “read” the surrounding context as fluently as they do.
The literature on situated cognition and cognitive apprenticeship describes this a bit more fully. Brown, Collins, and Duguid’s paper “Situated Cognition and the Culture of Learning” talks about (on page 37) how many times, practitioners won’t be able to even execute their normal work outside of context; they won’t be able to remember or describe things when they’re standing outside their workplace (it’s so much easier for me to type a new Python program off the top of my head than to stand in the center of a room and recite it out loud). That’s because our mental representations are embedded in the context of our workspace (fancy word: “indexical representations”). In the “history” portion of my presentation, I talked about how (according to one paradigm, anyway) learning is situated — but by the same paradigm, once you’ve learned knowledge, the knowledge stays situated too!
Going along (this is from p. 34 of the same paper), when we transfer an authentic, situated task into the classroom and transform it into a sterile, context-less thing, we create problems for both learners and teachers (if the teachers are experts). The experts don’t have the context they rely on to navigate their tasks intuitively; they’re stuck with rulesets they no longer use. The novices, on the other hand, don’t get the chance to learn how to navigate the richness of a real-world context. When we do this, we usually say we’re “cutting out the noise,” but that “noise” is actually a large part of the point; people need to be learning in context. This is like handing someone a Wikipedia page on A Midsummer Night’s Dream and saying “I’ve just saved you so much time — now that you’ve read the plot, you don’t need to go see the play!”
Several counter-actions to this are possible. First, just being aware of this phenomenon — having the Dreyfus stages as a tool with which to think about your own skill level and the skill level of your students — is tremendously helpful. Donald Schoen describes this sort of thing as knowing-in-action and reflection-in-action, ideas that are themselves useful to read about; his essay “Knowing in action: The new scholarship requires a new epistemology” is a good starting point, especially pages 29-32 (from “Turning the Problem on its Head” until you reach “Project Athena at MIT”).
Now you’ve got experts teaching in the context they’re skilled at navigating, aware that they see the world differently than their students, and reflecting on their actions as they go along (or at least that’s what you’re trying to have, anyway). So how do you teach — what strategies do you use to structure and design experiences in the classroom? For this, I’ve found the framework of cognitive apprenticeships a useful one. “Cognitive Apprenticeship: Making Thinking Visible” is a good starter and has teaching examples embedded within it.
To bring this all together in an example that really happened during PyCon: let’s say I’m Aleta Dunne, the maintainer of planeteria, and I meet someone named Mel Chua at PyCon, and she’s interested in poking around the code and maybe submitting a patch.
The first thing I’ll do is to stay in this context. I’m going to take Mel to the actual code instead of trying to talk about it only abstractly and from a distance.
Second, I’ll think about where I fall on the Dreyfus scale here… perhaps I don’t feel like I’m an expert yet, but maybe I’m proficient.
Then I’ll think about where my “student” falls… hrm, this Mel person is pretty new to this particular codebase, but she seems to have seen some code before. She can’t yet prioritize what chunks of code are important here, though — that’s a sign that she might be an advanced beginner.
Aha. As a proficient person, I can prioritize what’s important — and as an advanced beginner, Mel can’t yet. So prioritizing problems is something I will probably need to scaffold her on as we progress. (For those who saw the talk: Aleta is thinking about bringing “prioritization” into Mel’s zone of proximal development, because Mel can’t do it without Aleta — yet — but perhaps Mel can do it with Aleta’s help.)
Let me start prioritizing tasks and reflect-in-action to figure out what I am doing — the underlying rules I’m using. Why do I think this ticket might be a good one to work on compared to the other tickets I can see? Okay — now… can I explain a bit of that logic to Mel and see if I can help her walk through a similar thinking process for a second ticket?
…and so forth. It’s not a perfect example, but you start getting the idea (I hope) of how these tools-to-think-with can blend into your actions in a teaching-learning-doing space.
There are benefits to this for the expert-teacher’s performance as well. Further on in that paper by the Dreyfus brothers I linked to earlier, they go on to speculate that for experts in a state of flow, “all of the brain’s activity is focused solely on performance so nothing new is learned” (emphasis mine). Therefore, to move towards mastery, they say that…
“It appears that the future master must be willing and able, in certain situations, to override the perspective that as an expert performer he intuitively experiences. The budding master forsakes the available “appropriate perspective” with its learned accompanying action and deliberatively chooses a new one. This new perspective lacks an accompanying action, so that too must be chosen, as it was when the expert was only a proficient performer. This of course risks regression in performance and is generally done during rehearsal or practice sessions.”
In other words, to answer the original question directly:
Experts can teach better by backing out of their instincts and deliberately stepping into different patterns with awareness, bringing their students along for the ride.
This is difficult to do (and experts don’t do this often) because it often makes them perform worse and look dumber.
Therefore, to teach better, get over the fear of looking dumb — and when you do that, you’re on the track to mastery.
I attended Allen Downey’s PyCon 2013 workshop on Bayesian Statistics Made Simple; his slides and code are available online (free and open, of course — go Allen!) Bayesian thinking (“given these results, how likely are my hypotheses?”) is powerful, simple, and a mind-flip for folks like me used to frequentist statistics (“given my hypotheses, how likely are results I don’t yet have?”). The Python library Allen developed for his book makes it easy to nest multiple levels of specifications describing our assumptions; since I find typing out clean, modular code consumes far less mental RAM than manipulating abstract math symbols on paper, that suits me just fine. We went through basic problems such as:
If I’ve seen N enemy tanks with the following serial numbers (and assume enemy tanks are numbered sequentially starting from 0), how many total tanks does the enemy probably have — and how does my guess change as I see more tanks?
If I have the results of a repeated coin-flip, what is the probability that the coin I flipped is fair? (Hint: it depends on how you think the coin may be unfair.)
If Alice scored higher on the SAT than Bob, what is the chance that Alice is smarter than Bob? (What assumptions do we make about the SAT, test-takers like Alice and Bob, and the nature of intelligence?)
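The first (“enemy tanks”) problem above can be sketched in a few lines of plain Python — this is my own minimal reconstruction under a uniform prior, not Allen’s library or his actual workshop code:

```python
# Minimal sketch of the "enemy tanks" problem in plain Python
# (not using Allen's thinkbayes library): given observed serial
# numbers, compute a posterior over the total number of tanks N.

def tank_posterior(serials, max_n=500):
    """Posterior P(N | serials) under a uniform prior on N up to max_n."""
    highest = max(serials)
    k = len(serials)
    posterior = {}
    for n in range(1, max_n + 1):
        if n <= highest:
            # Impossible: tanks are numbered 0..n-1, and we saw a
            # serial number >= n.
            posterior[n] = 0.0
        else:
            # Each observed serial is uniform on 0..n-1, so the
            # likelihood of the data given N = n is (1/n)^k.
            posterior[n] = (1.0 / n) ** k
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items() if p > 0}

post = tank_posterior([37, 14, 60])
best = max(post, key=post.get)   # maximum a posteriori estimate
```

Seeing more tanks (a larger k) makes the likelihood fall off faster in n, which is exactly the “how does my guess change as I see more tanks?” part of the question.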
The budding researcher in me took notice when Allen presented examples of more things one could do with Bayesian statistics. For instance, by looking at my dataset from a first experimental sampling run… say I interview students and find 3 students who love thermodynamics, 2 who hate it, and 15 who don’t know what it is — I can start making inferences about:
How many other opinions about thermodynamics might be out there that I didn’t get in my first trip to the field?
How many more students will I need to interview before I have X% confidence that I have gotten Y% of the existing opinions about thermodynamics expressed?
What’s the proportion of students who love, hate, etc. thermodynamics — or rather, what’s the probability that, in the entire population of all students I could ever interview, X% will express this opinion? (In my first sample, 15% of students loved thermodynamics. What’s the probability of the “real” proportion of thermodynamics-lovers in the general student population being 15% — versus, say, 50% of students loving thermodynamics and me just unluckily missing them? How would my confidence in making the claim “15% of students love thermodynamics” increase if I interviewed more students?)
…all given certain assumptions, of course, such as assuming my sampling is really random — maybe the thermodynamics-lovers were all at a thermodynamics conference when I went looking, or assuming students will express a clear, “truthful” opinion on thermodynamics to me for some value of “truth,” and… As with all modeling techniques, these guesses are only as good as my model. But the neat thing about Bayesian statistics is that it’s easy to tweak facets of my assumptions and see what changes in predicted results ripple out from it. So it’s a thinking tool that’s good to have, in any case.
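The proportion question above can be sketched with a simple grid approximation — this is my own illustration using the hypothetical numbers from the example (3 thermodynamics-lovers out of 20 interviewed) and a uniform prior, not anything from the workshop itself:

```python
# Hedged sketch: a grid-approximated posterior over "what fraction
# of all students love thermodynamics?" given 3 lovers out of 20
# interviewed, under a uniform prior.
from math import comb

def proportion_posterior(successes, trials, grid_size=1000):
    """Posterior over the true proportion p, on an evenly spaced grid."""
    grid = [i / grid_size for i in range(grid_size + 1)]
    likelihood = [comb(trials, successes) * p**successes * (1 - p)**(trials - successes)
                  for p in grid]
    total = sum(likelihood)
    return grid, [l / total for l in likelihood]

grid, post = proportion_posterior(3, 20)

# Probability that the "real" proportion is at least 50%, i.e. the
# chance I just unluckily missed a huge population of lovers.
p_at_least_half = sum(w for p, w in zip(grid, post) if p >= 0.5)
```

Rerunning this with more interviews (say, 15 lovers out of 100) visibly tightens the posterior around the sample proportion, which is the “how would my confidence increase?” question made concrete.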
I also picked up pedagogy from Allen’s workshop. It was immediately clear to me how he had used his in-progress book and research blog on the same topic to scaffold the construction of his workshop, and that was a lesson in and of itself; all three share the deliberate, incremental, self-teaching style I associate with Allen (who was one of my professors in college). Although we both believe in transparent science, our teaching styles are vastly different; Allen leads large groups down a well-marked trail, scalable and reproducible, moving with the clean efficiency of experience. He’d be an excellent MOOC professor. I like scattering my groups loose to wander in a rawer pasture, building discussion around surprising things people stumble into — a different improvisation every time depending on who’s there. Both styles have their pluses and minuses, and Allen’s an old hand at his style whereas I’m barely a journeyman in mine. I come from teaching full-week, all-day workshops and semester-long classes, where team dynamics and wandering comfort can evolve and distributed group improvisation is wonderful once you get past the initial discomfort hump. But Allen’s marked-trail style is more expected — and certainly more efficient — for short workshops like the ones we had, so I’ll try to adapt my materials more to that teaching technique next time I have a 3-hour workshop to run.
Speaking of which — I succumbed to exhausted sleep too early last night to post materials from my workshop; I’ll need to find some time in the next 36 hours to do so (note to self: the perfect is the enemy of the good).
Why do pianos sound different from guitars? How can we visualize how deafness affects a child’s speech? These are signal processing questions, traditionally tackled only by upper-level engineering students with MATLAB and differential equations; we’re going to do it with algebra and basic Python skills. Based on a signal processing class for audiology graduate students, taught by a deaf musician.
I’ve pulled the code snippets and some graphs into the slides so that you can walk through the entire thing by working through the slide deck (below). You can also find all the code (with many inline comments, and some more details that didn’t get into the slides) on github.
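To give a flavor of the piano-versus-guitar question (this is my own toy illustration, not the actual course code on github): two instruments playing the same pitch differ in the relative strengths of their harmonics, and a Fourier transform makes that visible with nothing beyond algebra and basic numpy.

```python
# Two "instruments" at the same fundamental pitch but with
# different harmonic recipes, i.e. different timbres.
import numpy as np

rate = 8000                      # samples per second
t = np.arange(rate) / rate       # one second of time points

def tone(freq, harmonics):
    """Sum of sines at freq, 2*freq, 3*freq, ... with the given amplitudes."""
    return sum(a * np.sin(2 * np.pi * (i + 1) * freq * t)
               for i, a in enumerate(harmonics))

# Same 220 Hz fundamental, different harmonic strengths.
bright = tone(220, [1.0, 0.7, 0.5, 0.3])   # strong upper harmonics
mellow = tone(220, [1.0, 0.2, 0.05])       # mostly fundamental

# The spectrum shows a spike at each harmonic; with a 1-second
# window, bin index equals frequency in Hz.
spectrum = np.abs(np.fft.rfft(bright))
peak_hz = np.argmax(spectrum)
```

Both tones peak at 220 Hz, but the heights of the spikes at 440, 660, 880 Hz differ — and that difference is most of what your ear hears as “piano” versus “guitar.”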
Tutorial attendees will note that the vocoder demo is still forthcoming — I’ll make a blog post when it’s up and update this post with the link.
It appears that I’ll be teaching a graduate-level signal processing class to the 2nd-year PhDs in the audiology department at Purdue this spring term. My mission for this class is to help the audiology students become the sort of audiologists I’d want to have myself as a deaf geek. At the end of this post are three ideas that I’d like folks to give the “crazy test” to (as in, feedback: is this cool or is this crazy?)
I am excited! I mean, I get to spend a semester helping audiologists get comfortable with tech geekery? YES. I am also scared shitless, because… wait, am I qualified to do this? Signals and Systems boggled my mind when I first encountered it as an electrical and computer engineering undergrad, and I haven’t taken any grad-level classes on it myself yet (I’ve just got a bachelor’s) and I know… minuscule amounts about audiology, most of it as a patient.
It comforts me to know that my engineering education PhD classmate Farrah is co-teaching. She made a living teaching DSP college classes for nearly a decade in Pakistan, and now makes a living researching how to do it better. We also only have 8 students — the same ones in my Hearing Aids class this term — and they’re cool and willing to experiment. I also know I do tremendously well with baptisms by fire; the class I’m most legendary for being an awesome TA at was one I nearly flunked when I took it (I actually begged to teach because I wanted another chance to go back and learn the material, and… oh, did I ever.)
It is further complicated by the fact that the class will be held in Indiana next term, and I… will be in Ohio. I’ll be in Columbus studying at Ohio State University (OSU) as a travelling scholar, taking Patti Lather’s classes on cultural theory and poststructural/feminist research methods in education before she retires at the end of this school year. (I have also, to my great joy, gotten permission to take the intro hip hop dance class as well as whatever level of “individualized German” I place into — apparently I am not the first hearing-impaired student they’ve had… okay, I’m the second, but at least someone’s done it before, and the ability to go at my own pace is a huge blessing because I read fast but listen slow.)
Farrah and I had hoped I might be able to drive back for classes every other week, but the course scheduling for the semester makes that… insane. It would involve attending class, driving 5 hours, teaching class, driving 5 hours, sleeping for a few hours, and then attending another class. I’m willing to do that once or twice, but… every other week? No. I have this strict no-dying policy about my work life…
So I have been brainstorming as to how we can teach the class, and have come up with 3 ideas that Farrah hasn’t vetoed as too crazy yet. I would love feedback and thoughts.
Have the class write the course textbook — for an audience of geeky hearing aid users and their companions. Yes, I’m a fan of collaborative project-based learning that results in Real Useful Things, how did you guess? When I say “textbook,” I mean “reference material” — this probably will resemble a collection of articles and videos and code samples more than a traditional thing-on-dead-trees-with-a-spine. (And of course it’ll all be open-licensed.) Here’s the idea: at some point, every audiologist is going to have patients who are geeks, or whose parents or partners or friends or family are geeks — they’re going to encounter someone who is going to look at the room full of equipment and the tiny, extremely expensive embedded device going inside someone’s skull or ear canal, and ask “so, how does that work? Like… really work?” The pretty ad copy from hearing aid manufacturers featuring happy grandparents and little babies isn’t going to cut it — so what are you going to give them? Right. There’s nothing for that yet. So my idea is that we’re going to make it. Whatever we make ought to be readable and understandable by a bright high school kid; no engineering PhD required — but it’s going to explain things, ok? No hand-waving, no magical black boxes. This would be our Giant Project for the semester, the entire class pulling and learning to get it together. This means we need project work time, which is why my second idea is to…
Flip the classroom, Oxford style. I think it’s Oxford that has students meet 1:1 with tutors instead of placing them in giant lecture halls. As I told Farrah tonight, the notion of lecturing to 8 students seems ludicrous to me — especially if they all come from such different math/programming/technology/science backgrounds. Give them material to learn each week and a choice of resources to learn it from plus a little self-diagnostic exercise thing to check their understanding, then have them meet for 30 or 45 minutes with an instructor to assess that understanding, ask questions, and whatever else that individual needs. (Bonus: grading happens during those meetings and becomes formative feedback — way more fun and easy than marking exams.) That’s 2-3 hours per instructor per week (depending on whether we do 30 or 45 minute sessions) and way better contact time for the students.
This means that “class time” as scheduled can be used for project work time — basically, studio. We know everyone will be free then. We can get together and work on stuff! But we don’t all need to be in the same place. In fact, I won’t be in the same place. I’m probably going to do Google Hangouts for my 1:1s (bonus: multiple people can listen in). I’ll do the same for team meetings. Heck, they can do the same for team meetings. Heck, they can do the same if they want to talk with other people who aren’t in Indiana (hearingaidhacks community, I’m looking at you). I mean, we’re talking 8 future audiologists here — they’re going to know so much more about how to use this flexibility for awesome than we will.
Use Python. Yes, MATLAB is what researchers in this field will use. Lord knows I’ve had to do my signal processing homework in MATLAB — but I was studying electrical engineering, not preparing for future clinical practice. And I had programmed before. And was very, very comfortable with matrix operations. And… I mean, if conceptual understanding is the goal here, use a language that’s easy to understand, has novice-friendly libraries (I am in thrall of the myriad options Python provides for signal processing libraries — now I just need to work through them all and figure out which ones are best for learning!), and produces beautiful graphical and auditory output. I mean, if you wanted to become a brilliant research MATLAB programmer, you’re probably not getting your PhD in audiology. That having been said, I know the department wants the students to learn MATLAB, so maybe there’s some compromise here to be had; a mix of MATLAB and Python, having students translate MATLAB code to Python or vice versa as an exercise, something of the sort. (And if anyone really wants to learn MATLAB, we’ll teach them. Farrah and I can do that. If someone needs this for their future research project, we can help, but I’d rather have the individuals who need it ask for it than to force-feed it to everyone.)
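To show what the translation exercise I floated above might look like (everything here is hypothetical illustration on my part, not course material): the same “generate and window a sine wave” step, with the MATLAB version in comments and the numpy equivalent below it.

```python
# Hypothetical MATLAB-to-Python translation exercise:
# each comment is the MATLAB line, the code below is the
# numpy equivalent.
import numpy as np

# MATLAB:  fs = 8000;  t = (0:fs-1)/fs;
fs = 8000
t = np.arange(fs) / fs

# MATLAB:  x = sin(2*pi*440*t);
x = np.sin(2 * np.pi * 440 * t)

# MATLAB:  w = hamming(length(x))';  y = x .* w;
w = np.hamming(len(x))
y = x * w                       # elementwise, like MATLAB's .*
```

The mapping is close enough that going back and forth is mostly mechanical, which is why I suspect students who learn the concepts in Python could pick up research MATLAB later without much pain.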
Time commitment math? Here’s what I figure.
For the students: 3 hours in project studio plus 1-2 hours working on the project outside studio. Reading time plus 30-45 minutes for individual meetings — let’s call that another 3 hours. Total: 7-8 hours per week, which is well under the 9 hours you’re “supposed” to spend on a 3-credit class (we will inevitably forget something like administrivia, so the buffer time is good). And a lot of the time is flexible, which I think audiology students with busy clinic hour requirements might appreciate.
For the instructors: 3 hours in project studio plus 2-3 hours of individual meetings, and between 2-5 hours a week of course prep depending on what’s going on. I’m trying to overestimate here, so that places us at 7-11 hours a week — totally survivable. (I suspect it will be closer to the lower end of this time estimate if we prepare intelligently and manage things well, but… Murphy’s Law.)
That’s what I’m thinking. O metabrain of the internets, please hammer on these ideas, propose new ones, make them better, do the many-eyeballs magic that you do.
Anyone who’s heard me talk about the Teaching Open Source portions of my research will soon gather that I’m a fanatic about metacognition. If our designers and engineers and writers and developers and contributors don’t know themselves and how they think, how can they work with others? Here’s an actionable summary, written for FOSS projects and other distributed online communities where learning takes place, of a little (ok, 20 pages) paper called “Designing Technology to Support Reflection.” The paper was written in 1999, when open source was still a baby, the internet had just begun taking hold, and MOOCs were light years away, so I’ve updated the examples to be more familiar to us 13 years later.
The paper outlines “…four ways that technology can provide powerful scaffolding for… [a] combination of both individual and collaborative reflection… [for] actively monitoring, evaluating, and modifying one’s thinking and comparing it to both expert models and peers.” (That last quote was an incredibly hacked-up and shortened version of the abstract, by the way – and the start of the paper is a lovely lit review on why metacognition matters for performance; if you are at all interested in these topics, I recommend reading at least the first 3 pages.)
The four ways:
process displays
process prompts
process models
a forum for reflective social discourse
Process displays are basically debuggers; they show learners where they are in a process (which may be new, complex, and confusing to see, let alone do the first time). Firebug can be thought of as a process display, albeit not one specifically designed for education; Hackasaurus is a bit closer. Bret Victor’s piece on Learnable Programming is a great example of how an even richer system might be designed. Also, pretty much anything that supports portfolio creation can be used as a process display — wikis, blogs, etc.
Process prompts are attention-grabbers that bring the focus of learners to appropriate bits of a process while it’s in motion. You can think of the specific question-prompts of a bug reporting form as a process prompt; if we took that one step farther and had an option to replace the big blank textbox with a “novice” display that walked them through steps for reporting a bug (“What were you trying to do?” “What specifically did you type/click?” “What happened next?” “What did you expect to happen instead?” etc.) that would be an upgrade. Checklists are a very simple sort of process prompt; for instance, see Mediawiki’s guide to conducting a code review.
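As a toy illustration of that “novice mode” idea (the function and prompt list here are mine, not any real bugtracker’s API), the upgrade is basically pairing each prompt with the reporter’s answer instead of offering one big blank textbox:

```python
# The step-by-step questions a "novice mode" bug form might walk through.
PROMPTS = [
    "What were you trying to do?",
    "What specifically did you type/click?",
    "What happened next?",
    "What did you expect to happen instead?",
]

def novice_bug_report(answers):
    """Pair each process prompt with the reporter's answer into a structured report."""
    return "\n".join(f"{q}\n  {a}" for q, a in zip(PROMPTS, answers))

report = novice_bug_report([
    "Install the plugin",
    "Clicked 'Add-ons', then 'Install'",
    "The browser froze",
    "A progress bar",
])
print(report)
```

The point isn’t the ten lines of code; it’s that the prompts do the metacognitive work of structuring a confused novice’s attention, one step at a time.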
Process models are maps of expert thought processes, the “stuff” that experts know and act on but don’t always articulate because they’re taking it for granted. Mo Duffy does a great job of this when describing her work on GNOME and Fedora, and I don’t even need to look that far — her two most recent blog posts walk you through some of her thinking about bootloader UI design and wallpaper development, respectively. Now, this is more a think-out-loud than a true generalized process map; such a thing looks more like Fedora Infrastructure’s “what to do when there’s a system outage” document. Armed with this, even a novice sysadmin like me can see how masters like Kevin Fenzi and his crew keep their cool and manage the ship even in the midst of a maelstrom (although actually doing it is another thing).
Reflective social discourse is about getting more eyeballs on a problem to make it shallower — hearing multiple voices on a topic, getting feedback from many different angles. (This is where the paper most shows its age; there’s almost an undercurrent of awe in describing how several computers could be connected through a fileserver! and students could write notes commenting on the work of their peers!) Nowadays, there’s the obvious collection of tools specifically made for dialog: blogs, microblogs, Planet blog aggregators, chatrooms of all sorts. But there are others more subtly embedded in our practice. Public bugtrackers and code reviews are all about reflective social discourse — I’ve learned plenty by reading extended comment threads on closed tickets. And while this isn’t a technology-supported option per se, conferences and in-person events are another form of this — present your idea to a bunch of friends and have them help you make it stronger.
The article closes with a plea for systems thinking. Won’t someone, somewhere, start designing learning technology that incorporates all four of these in one? And we are — look at any of the dashboards being created for things like EdX or Khan Academy, and you’ll find all four aspects. But now you know what they’re doing.
Here’s the paper, for those of you inclined to chase down the full text. Have fun!
Lin, X. D., Hmelo, C. E., Kinzer, C. K., & Secules, T. J. (1999). Designing technology to support reflection. Educational Technology Research & Development 47(3), 43-62.
Sebastian and I have been working on a collection of Teaching Open Source (TOS) faculty stories since last spring; I’ll probably write on that over the next few weeks as we get things together for our FIE presentation. It’s taking a while because (1) we’re busy, (2) we’re figuring out a lot of copyright/IRB stuff for the first time, and (3) honestly, it’s sort of our first “real” empirical-data research project, so the clumsy newbie-fingerprints are painfully obvious All Over The Dang Thing.
One of the things I love about my research is that it’s possible to do it on a shoestring. No scanning electron microscope needed (though those are awesome), no supercomputer requirements (…yet). All our equipment has been dug out from backpacks and drawers; I recorded the first round of interviews with an aging digital camera that dates back to my junior year of undergrad. If you poke it the wrong way, the battery goes flying out of its compartment — this is the same battery that takes a night to juice up and then peters out within a 2-hour filming session. When my camera really gives up the ghost during a session, I record interviews on my cell phone, which is… suboptimal. Thankfully, Sebastian has a new digital camera now. If he brings it to Seattle, maybe we’ll even be able to capture TWO interviews on the same day for the first time! We’re doing almost everything else with Free Software.
I say “almost” everything else, because strictly speaking, a transcription service is not FOSS. We debated this last spring break, but I finally declared that it would be Really Dumb to have audio files transcribed by a deaf researcher (me) or a researcher trying to reverse typing-induced RSI (him) and that if we were going to spend money on this project, IT WOULD BE ON THIS. *stamps little foot*
So we’ve spent $218.98 on transcription. Add $10 for a domain name and $63 for webhosting for the months we’ve been doing the project, and we’re up to $291.98, which is not so bad. With additional transcription, maybe that’ll jump to $600 or $800; after a certain point I want to find a place to host the oral history library of (open-licensed, fully-identifiable) transcripts at a place we don’t need to pay for, because the idea is that more TOS professors will be able to contribute once we get things set up. (And I know, I know, it’s hard to contribute when you don’t know what it is because it’s not online yet. It nags at me too! We’re working on this! Copyright is, um… fun!)
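For the spreadsheet-inclined, the tally so far as a snippet (same numbers as above, nothing new):

```python
# Project spending to date, in US dollars.
costs = {"transcription": 218.98, "domain name": 10.00, "webhosting": 63.00}
total = sum(costs.values())
print(f"${total:.2f}")  # $291.98
```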
Mostly I wanted to note spending on this project so far; this tally doesn’t count the cost of going to Seattle to present it in a few weeks, nor does it include the hours and hours we’ve spent on it. I’m trying to get a grasp on How Much Research Costs, at least the type of research that I want to do. Someday I will be able to make reasonable estimates; right now I’m going for somewhat-accurate-tracking.