Thinking Outside the Assessment Box

Image by mohamed Hassan from Pixabay

Our kids love a show called Wild Kratts, in which two brothers, Chris and Martin Kratt, take kids on all kinds of animal adventures. The show starts out in regular film footage before transitioning to animation. If you’re familiar with Wild Kratts, you’ll know they make that transition with one little phrase: “What if?” It’s tiny but essential: it frees their imaginations to think outside the human box and get to know an animal as the animal itself.

Maristela Petrovic-Dzerdz’s article “Gamifying Online Tests to Promote Retrieval-Based Learning” behaves like the Wild Kratts: “What if we quit fretting about students cheating with their textbooks in online tests and allowed online tests to be open book? Could we still achieve retrieval-based learning?” It’s an exercise in ingenuity that works like a karate chop on the tired fear we have of students looking at course material while taking online tests. She concludes that “if we shift the focus from assessment to learning, then online, open-book tests have promising applications” (36).

Here’s the skinny on Petrovic-Dzerdz’s course. She designed it with four graded components: Level 1 tests (10%), Level 2 tests (10%), a Midterm Exam (35%), and a Final Exam (45%). Level 1 and 2 tests were online; the Midterm and Final exams were in-class. Students could take each Level 1 test up to five times. When they scored at least 80% on a Level 1 test, the LMS unlocked the corresponding Level 2 test, which was more difficult and could be taken only once. The more testing students did, the more exposure they got to content that could appear on the Midterm and Final Exams. The online tests were open-book; the Midterm and Final exams were not. In the end, the study found that “online, open-book tests designed using gamification principles are an effective strategy for using education technology to motivate students to repeatedly engage in retrieval-based learning activities and improve long-term knowledge retention, regardless of the course delivery mode” (24).
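
To make the mechanics concrete, here is a minimal sketch in Python of the unlock rule as I read it from her description. The attempt limit, threshold, and function names are illustrative assumptions, not her actual LMS configuration.

# Illustrative sketch of the gamified unlock rule described above: a Level 2
# test opens only after a student scores at least 80% on the matching Level 1
# test, which may be attempted up to five times. Names and values are
# assumptions for illustration, not the article's actual LMS setup.

LEVEL1_MAX_ATTEMPTS = 5
LEVEL1_UNLOCK_THRESHOLD = 0.80

def can_attempt_level1(attempts: list[float]) -> bool:
    """A Level 1 test allows up to five attempts."""
    return len(attempts) < LEVEL1_MAX_ATTEMPTS

def level2_unlocked(attempts: list[float]) -> bool:
    """Level 2 unlocks once any Level 1 attempt reaches the 80% threshold."""
    return max(attempts, default=0.0) >= LEVEL1_UNLOCK_THRESHOLD

# Example: a student improves across retakes and unlocks Level 2 on the third try.
scores = [0.55, 0.70, 0.85]
print(can_attempt_level1(scores))   # True  (two attempts remain)
print(level2_unlocked(scores))      # True  (best score 0.85 >= 0.80)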

I see four important takeaways in Petrovic-Dzerdz’s study. The first is that she identified not only learning objectives but also the behaviors she wanted to elicit from students. It’s an important step that shaped the course design. She wanted students to test themselves multiple times (the behavioral objective), and that objective led her to design the Level 1 and 2 tests in a way that enticed students to take them as often as possible. The behavioral objective determined the test parameters.

Another is that the Level 1 and 2 tests were safe places for students to make mistakes. By and large, the pressure that high-stakes testing creates pushes students toward less ethical means of scoring well. She created that safe space by designing Level 1 and 2 tests that carried low percentage weights and could be taken multiple times, repeatedly exposing students to course content. Level 1 and 2 tests were, by definition, low-stakes tests.

A third is the chance to “level up.” The terminology comes from gaming and means beating one level to move up to the next, more difficult one. The experience is usually accompanied by a chemical reaction in which dopamine is released in the brain’s reward pathways, creating the excitement and focus gamers feel. Leveling up in Petrovic-Dzerdz’s course occurred when students scored at least 80% on a Level 1 test and got to move on to the Level 2 test, which they could take only once. But the real incentive for taking Level 2 tests was that they counted for 10% of the final grade. For every Level 2 test students didn’t level up on, they lost 2% of their final grade.
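
To put numbers to that incentive, here is a small sketch of the grade arithmetic. The 2% per missed Level 2 test comes from the description above; the count of five Level 2 tests is my inference from the 10% total weight, not a figure quoted from the article.

# Hedged sketch of the grade weighting described above, in percentage points.
# The 2% per Level 2 test comes from the post; five Level 2 tests is an
# inference from the 10% total weight, not a number quoted from the article.

WEIGHTS = {"level1": 10, "level2": 10, "midterm": 35, "final": 45}
LEVEL2_TEST_COUNT = 5                                        # inferred: 10% / 2% per test
LEVEL2_WEIGHT_EACH = WEIGHTS["level2"] / LEVEL2_TEST_COUNT   # 2.0 points each

def level2_points_earned(tests_leveled_up: int) -> float:
    """Percentage points of the final grade earned from Level 2 tests (0 to 10)."""
    return tests_leveled_up * LEVEL2_WEIGHT_EACH

# A student who fails to unlock two of the five Level 2 tests forfeits
# 4 points of the final grade before the midterm and final are even written.
print(level2_points_earned(3))   # 6.0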

Lastly, Petrovic-Dzerdz’s Level 1 and 2 system used the LMS as an actual learning management system. Instead of treating the LMS as a repository where an instructor houses grades, papers, discussions, and information, students could use it to manage their learning. They tested themselves multiple times, received feedback, and tried again. Basically, they used the LMS as the tool it is, much like a video game they were trying to beat.

Ultimately, what I appreciate most about Petrovic-Dzerdz’s article is its process. It turns an oft-mentioned instructor fear on its head: “Let ’em have their books. I’ll still advance their learning.” Granted, this doesn’t solve the current dilemma of running high-stakes tests online, but if we apply her thought process, we might come up with new practices. We need only begin with “What if?”
