So, as an instructional designer, part of my job is to make things clear and easy to understand, right?
Well, it turns out that’s not necessarily the best option.
Cathy Moore just put up a blog post that has her checklist for evaluating your own e-Learning design. You rate where your learning falls on a continuum. In particular, I noticed this item:
This isn’t a new idea, but it’s a particularly powerful one: use consequences instead of disembodied-voice-of-the-eLearning-gods-type feedback.
Sure, “correct/incorrect” feedback may be easier to understand and leaves less room for misinterpretation, but it does your learners a disservice (and not just because it’s boring).
This has been resonating with me in particular because of something that was said in this podcast on Show, Don’t Tell for fiction writers (mp3 here) from Storywonk:
“The difference is that in Telling there’s absolutely no role for the viewer or the reader to put anything together…In Showing, the viewer has a chance to put two things together…it’s giving them the opportunity to put stuff together themselves and to actually be active in the story…”
“It’s so much more engaging as an audience member… if I am left to put stuff together myself and not have it all assembled for me and handed in front of me that this is the way it is.”
“You need to give your…readers stuff to do. Give them a way to be an active participant, and by allowing them to draw conclusions based on little clues that you leave, you engage them in the story and they become part of [it]…”
- Lani Diane Rich (aka Lucy March)
(emphasis and any transcription errors are mine)
I thought that was a really interesting take on the issue. From a learning point of view, reading text is considered one of the more passive ways to learn, but your text can be truly passive (let me just hand everything to you) or it can be made more active through showing rather than telling.
I think this matches up really well with the point that Cathy is making. It’s one thing to say “It’s really important for health care practitioners to wash their hands” and entirely another to uncover the fact that a horrible staph infection is threatening vulnerable patients in the hospital.
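To make that concrete for myself, here’s a minimal sketch of the two feedback styles side by side. The scenario wording, the Choice structure, and the field names are all my own made-up illustration, not something from Cathy’s checklist or any particular authoring tool; it just shows the learner either a consequence to interpret or a verdict to accept.

```python
from dataclasses import dataclass

# A rough sketch of consequence-based feedback vs. "correct/incorrect" feedback.
# The scenario text, field names, and structure are invented for illustration;
# they aren't from Cathy Moore's checklist or from any specific e-learning tool.

@dataclass
class Choice:
    text: str         # what the learner can choose to do
    consequence: str  # what they then see happen (show)
    verdict: str      # the disembodied-voice version (tell)

choices = [
    Choice(
        text="You're running behind, so you skip the hand wash and see the next patient.",
        consequence="A week later, two patients on the ward are fighting staph infections.",
        verdict="Incorrect. Hand hygiene is required before every patient contact.",
    ),
    Choice(
        text="You stop at the sink and wash up before entering the room.",
        consequence="The ward's infection numbers stay flat through the month.",
        verdict="Correct!",
    ),
]

def feedback(choice: Choice, show_consequences: bool = True) -> str:
    """Return what the learner sees after making a choice."""
    # With show_consequences on, learners have to connect the dots themselves;
    # with it off, the conclusion is handed to them.
    return choice.consequence if show_consequences else choice.verdict

if __name__ == "__main__":
    picked = choices[0]  # learner skips the hand wash
    print(feedback(picked, show_consequences=True))
```

The interesting part of the design is that the consequence version never says whether the choice was “right”; the learner has to infer that, which is exactly the kind of dot-connecting the rest of this post is about.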
A little friction is necessary for learning. Making something very easy to understand can actually do your learners a disservice. I just saw this fascinating critique of the Khan Academy videos (found via the Action-Reaction blog). In it, Derek Muller explains that the “easier to understand” version of a science video had worse outcomes:
Learners who heard “clear and easy to understand” explanations did worse than students who were confused by discussions of misconceptions. In fact, learners from the “clear and easy to understand” camp frequently thought they’d understood when they hadn’t (watch the video – it’s really really good).
This goes along with the incredibly interesting study that came out a few months ago looking at this question (I paraphrase):
What’s the best way to study for a test?
a) Read the text
b) Read the text repeatedly, in consecutive study sessions
c) Create a concept map of the material (described in the NY Times as “arrang[ing] information from the passage into a kind of diagram, writing details and ideas in hand-drawn bubbles and linking the bubbles in an organized way”).
d) Retrieval practice (a free-form essay test, followed by re-reading and a second test)
Dave Ferguson has a good write-up of this study with a link to the actual paper; the short version is that the test-taking condition beat the others hands down. The retrieval-practice students were even better at concept-mapping the material a week later than the students who had actually been in the concept-mapping group. The researchers speculate that this is partly because those learners were forced to confront their own knowledge gaps and reconcile them, rather than just recognizing the material and assuming they knew it.
Another interesting perspective on this is from this study: Making sense of discourse: An fMRI study of causal inferencing across sentences
Subjects were shown sentence pairs. Some of the sentence pairs went together very easily (x obviously causes y), some required some interpretation to see the connection, and some were pretty unrelated. For example:
Main sentence: “The next day his body was covered in bruises.”
That sentence was preceded by one of these statements:
- “Joey’s brother punched him again and again.” (highly causally related – x obviously caused y)
- “Joey’s brother became furiously angry with him.” (intermediately causally related – you’ve got to read between the lines a little)
- “Joey went to a neighbor’s house to play.” (pretty much unrelated)
The subjects spent the most time on the middle sentences: they were related, but the subjects had to connect some dots to see how. The study found greater brain activation in many areas for those sentences, and they were better remembered later.
So, in the end, there appears to be something really beneficial about having learners wrestle with material a bit and draw their own inferences; you need a certain amount of learning friction.
I’m not arguing you should make things deliberately obtuse (there’s a difference between challenging and confusing), but if learners can connect the dots too easily, they don’t retain the learning as well (or, as Derek Muller points out, they may think they DO know when they really don’t).
And if you can create opportunities for learners to confront their own assumptions and see their own gaps, the overall results will be much better.
Whaddya all think? And any ideas for good ways to add a little friction?
—————————————————————-
As an aside, this has interesting implications for Level 1 Evaluation (Level 1 = What was the learner reaction? Usually interpreted as “Did your learners like it?”). It suggests that a positive learner reaction (“It was clear and easy to understand!”) can actually be a counterproductive measure in certain circumstances. Hmm.
References:
- Karpicke JD, Blunt JR (2011): Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping. Science 331(6018): 772–775.
- Kuperberg GR, Lakshmanan BM, Caplan DN, Holcomb PJ (2006): Making sense of discourse: An fMRI study of causal inferencing across sentences. Neuroimage 33: 343–361.
- Muller D: Designing Effective Multimedia for Physics Education (PhD thesis). http://www.physics.usyd.edu.au/pdfs/research/super/PhD%28Muller%29.pdf