Wednesday, December 23, 2009

Testing, Assessment, and Mastery Learning


Fair Juliet claimed that which we call a rose by any other name would smell as sweet, but Juliet never had to assess learning, and she certainly could never have imagined the tools we now have for doing so. Most of those tools we just don't use much, or in very inventive ways.

Technology that allows a learner to take more ownership of learning, subject mastery, and self-evaluation has helped us understand more applied formative assessment practices. A few that we tossed around at the latest College of Public Programs TechByte lunch on assessment were the rich, often hidden features within Blackboard tests and quizzes.

Here's the bottom line. We want learners to learn. Do we really care that it took them more time or effort than another learner, or that they didn't understand a concept the first time around? Why not use testing as a way for them to explore meaning, work through issues they don't understand, concentrate harder on the material they didn't master, and give them a tool and an option to rethink / redo / revise? Isn't that how learning happens best?

Here are a few practices we shared while putting our heads together. Add any you've come up with, and perhaps our students will actually value, instead of dread, our assessment practices.

Blackboard (at ASU; other places, other CMS...same principles):
  • Set the test for multiple attempts. You can choose the number of attempts (unlimited, 3, etc.) and the time frame (Monday from 9am-noon, 50 minutes per attempt), but why are we insisting that they get it right the first time? What if it's the way you phrased the question that stumped them? Where's the harm in going back to the book, rethinking, trying again, going back to the source? It may be the only time they do!
  • Randomize the questions for each redo, forcing students to concentrate on the questions each time instead of clicking madly through to the ones they missed. (This also hinders answer sharing.)
  • Put questions in a pool, so each attempt pulls a different set of questions (see the bullet above on sharing). This is also a great way to build a final, randomly pulling questions from previous quizzes. (The short sketch after this list shows the idea in miniature.)
  • Set the exam to store the best attempt, not the last attempt. Research suggests students are more likely to try again when they aren't afraid of failing to top their last score. (Blackboard instructors: this setting is oddly hidden under Grade Center/Modify Column rather than in the deployment options, which is why so few of us use it and simply leave the last-attempt default in place.)
  • Give immediate feedback on each missed question. Don't provide the answer, but perhaps the page in the text where the concept is covered, or the reason they may have missed the question. One instructor (Hey, Kelly!) tells us that she has been consistently receiving grateful feedback from learners as she improves the feedback on missed (and correct!) responses in her quizzes.
Thomas Angelo and K. Patricia Cross suggest that unless feedback is very immediate or absolutely needed to progress (e.g., "follow my advice on your rewrite of this paper, or else"), students don't review or follow up on it. Giving them automatic feedback, in a low-risk environment where they know you don't see their first effort, is a great use of technology to invite time-on-task and mastery learning.
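For anyone who thinks in code, here is a minimal, hypothetical sketch of the mechanics these options combine: a fresh random draw from a question pool on every attempt, a hint (not the answer) on each miss, and the best score kept across attempts. This is plain Python, not Blackboard's implementation or API; the sample pool and the run_attempt and best_of functions are invented purely for illustration.

```python
import random

# Hypothetical illustration only -- NOT Blackboard code or its API.
# It sketches three of the practices above: random draws from a pool,
# hint-style feedback on misses, and keeping the best attempt.

QUESTION_POOL = [
    # (question, correct answer, feedback hint shown on a miss)
    ("Term for assessment used to guide learning, not grade it?", "formative",
     "See the reading on formative vs. summative assessment."),
    ("Does storing the best attempt encourage retakes (yes/no)?", "yes",
     "Re-read the bullet on best-attempt grading above."),
    ("Should auto-feedback give the answer or a pointer?", "a pointer",
     "Think about what sends the learner back to the source."),
]

def run_attempt(pool, questions_per_attempt=2):
    """One attempt: a fresh random draw from the pool, hints on misses."""
    score = 0
    for question, answer, hint in random.sample(pool, questions_per_attempt):
        response = input(question + " ").strip().lower()
        if response == answer:
            score += 1
        else:
            # Immediate, low-stakes feedback: a pointer, not the answer.
            print("Not quite. Hint:", hint)
    return score

def best_of(attempts_allowed, pool):
    """Keep the best score across attempts, mirroring 'store best attempt'."""
    best = 0
    for n in range(attempts_allowed):
        print(f"\nAttempt {n + 1} of {attempts_allowed}")
        best = max(best, run_attempt(pool))
        print("Best score so far:", best)
    return best

if __name__ == "__main__":
    best_of(attempts_allowed=3, pool=QUESTION_POOL)
```

The code isn't the point; the point is that each retake is a genuinely new encounter with the material, and the learner is never penalized for trying again.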

Other ideas for more formative self-assessment practices using the CMS testing options? Send them my way and I'll incorporate them.

