Killing Our Darlings

The following is the third in a series of posts on the book Make It Stick by Brown, Roediger, and McDaniel.  Henry L. Roediger, III will be the keynote speaker at this year’s TLTCon, May 16-17, 2019, on the campus of the College of Charleston.  To promote Roediger’s visit and the learning experience, attendees will receive a free copy of Make It Stick upon registration.  Click the hyperlinks to read the earlier posts on “effortful retrieval” and “varied practice” as learning strategies.


“In writing, you must kill all your darlings.”  That advice, attributed to William Faulkner, encourages writers not to shy away from deleting favorite but useless passages from their manuscripts.  It may be helpful advice for teachers, too.  Few theories are as dear as learning styles theory (LST), which urges teachers to use a variety of presentation methods to match students’ preferred modes of learning.  If you suspect that the imperative in chapter six’s title, “Get beyond Learning Styles,” expresses the writers’ true feelings about LST, you’re correct.  Move past it.  Get over it.  Research suggests the theory is overrated at best and dispiriting at worst.


We shouldn’t be surprised at their argument, however, given LST’s “perverse effect” (148) of subordinating hard work to style preference.  Brown, Roediger, and McDaniel have made it abundantly clear that effort is key to learning.  That’s the one note they’ve been blowing all along.  The sooner we jettison the notion that “easy learning is the best learning,” the better.

That said, we should consider whether a chapter debunking learning styles theory is nothing short of a “Get out of jail free” card for us instructors—i.e., we don’t have to worry about how we teach since learning is really up to the learner.  Such is not the case.  In a previous blog post on “effortful retrieval” (ER) I highlighted the limitations of “dipstick testing”—i.e., testing that measures only a student’s short-term memory.  These tests can also be called “static testing” because they measure a student’s learning at a specific moment, the same way a dipstick tells us where the oil level is while we’re at the BP on Highway 17 at 3:15 pm on Monday, November 26, 2018.  The information is helpful for the moment but could quickly become irrelevant if an engine valve is going out or the oil plug is faulty.  In short, there are more precise measurements to take if we really want to know how the Honda Accord is running.  Enter dynamic testing, a term we could have predicted.  Dynamic testing aims at

determining the state of one’s expertise; refocusing learning on areas of low performance; follow-up testing to measure the improvement and to refocus learning so as to keep raising expertise.  Thus, a test may assess a weakness, but rather than assuming that the weakness indicates a fixed inability, you interpret it as a lack of skill or knowledge that can be remedied (151).

If this description calls to mind Carol Dweck’s concept of a growth mindset, you’re right, and there are at least two important takeaways for instructors.  The first comes directly from the excerpt above: offer testing regularly so that students can assess their weaknesses.  They can then redouble their efforts to improve weak areas and check for improvement with subsequent testing.  Certain course formats are better suited to this cycle of testing and retesting, gaming being perhaps the best.


The second takeaway applies to a wider variety of course designs: clearly identify what skill(s) we are testing.  As teachers, we can too easily be guilty of giving tests that simply cover content areas.  Ill-defined testing yields useless information.  Call it non-information or—better yet—Statistically Hopeless Ill-defined Testing.  Would any one of us be satisfied by going to the doctor, sitting through a barrage of tests, and being told only that we’re “sick” at the end of the appointment?  “Sick with what?” we demand.  That diagnosis isn’t good enough for us or for the doctor, and more testing will ensue.  The same holds true when Professor Z announces, “There will be a quiz on chapter 10, pages 253-75.”  What’s being measured?  Students who earn a D on so poorly defined a quiz know only that they are below average in chapter 10, pages 253-75.  That’s not helpful, and it’s not education.


Providing our students with clearly articulated objectives before an assessment is essential to dynamic testing.  That clarity lets students know exactly what skill they are being tested on.  As instructors, we should be able to finish this statement for every graded assignment: “This [test, quiz, writing assignment, etc.] measures the student’s ability to . . .”  (Nota bene: I recommend Bloom’s Revised Taxonomy Action Verbs for formulating and scaling objectives.)  The student should be able to verbalize the objective(s) in return.  If we can’t say precisely what we are testing, let’s save ourselves the irritation of grading and save our students the bewilderment of blindly reading over material in hopes of reaching unforeseen goals.  That’s a darling everyone can do without.
