Monday, May 2, 2011

Revisiting Plagiarism

In an age when collaboration, teamwork, and consensus-based decisions are increasingly important and emphasized in the workplace, incorporating these practices into meaningful assessment continues to challenge higher education. How do we know that our students know what we need them to know?
Adding to the challenge, thoughtful instructors attempting to foster digital literacy and rich uses of technology in the course experience find that the most effective tools also make it easier for learners to "collaborate" where the instructor did not intend it.

One of the most challenging tools for instructors is online testing. Its affordances include the ability to offer more small, lower-stakes tests because the burden of grading is lightened. But how do we know they're not cheating?

In "traditional," face-to-face courses, online tests also save valuable class time, both in delivery and in eliminating paper return, and they give students the added value of instant feedback on their understanding. One consequence of this transfer of exams to the digital space is that we hear more stories of study groups extending into exam taking: students gather, enter the exam, and compare responses. Many are honestly surprised when "caught" and informed this is not allowed.

Here, the burden has to lie with the instructor who assumes that their notions of assessment and ethical behavior are shared by the students. Unless it is directly stated in the syllabus or in the directions for the test, collaborating on an assessment may not be wrong in the eyes of the students. If the instructor believes it is wrong, they need to say so. And, once it is stated, if the instructor realizes that their values and ideas about assessment differ from those of many students, the instructor has a number of options. Electronic options, that is: not whining, like the crazy fellow in Florida, because students had access to a large pool of possible questions and studied them all.

Keeping pace with the practices we see, technology enables an instructor to prevent collaboration just as surely as it enables learners to compare answers, especially if the LMS is Blackboard, with its plethora of unused assessment options. Creating an exam using a) randomized question order, b) randomized answer order, c) pools where each student gets a different set of questions, and d) multiple question formats makes it very, very hard for students to compare questions during a timed test. Yes, there is a bit of a tech challenge in setting up these options, as each is configured in a different place when creating the exam, but ASU has some very good documentation on how to do it, and if you run into trouble, "operators are standing by" to help! Here's a start:
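To see why pooled, randomized exams frustrate answer-comparison, consider the odds that two students even receive the same questions. Here is a rough sketch of the arithmetic; this illustrates the statistics only, not Blackboard's actual pooling mechanism, and the pool size and questions-per-exam are made-up numbers:

```python
import random

def draw_exam(pool_size, num_questions, rng):
    """Draw a random subset of question IDs from a shared pool.

    A loose stand-in for an LMS pulling each student's exam from a
    question pool -- not Blackboard's actual algorithm.
    """
    return set(rng.sample(range(pool_size), num_questions))

# Made-up numbers for illustration: a 100-question pool, 20 questions each.
POOL, PER_EXAM = 100, 20
rng = random.Random(42)  # seeded so the sketch is repeatable

student_a = draw_exam(POOL, PER_EXAM, rng)
student_b = draw_exam(POOL, PER_EXAM, rng)
shared = student_a & student_b  # questions both students happened to receive

# Averaged over many pairs, two students share about
# PER_EXAM**2 / POOL = 4 questions out of 20, so most of each
# exam is unique and comparing answers mid-test buys little.
pairs = [(draw_exam(POOL, PER_EXAM, rng), draw_exam(POOL, PER_EXAM, rng))
         for _ in range(1000)]
avg_overlap = sum(len(a & b) for a, b in pairs) / len(pairs)
print(f"this pair shared {len(shared)} questions; average is {avg_overlap:.1f}")
```

With randomized question and answer order layered on top, even the few shared questions appear in different positions with shuffled choices.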
Digging deeper, Tom Angelo would ask whether the assessment is being used effectively to improve student learning, and would suggest we do much more with the technology. Used thoughtfully, electronic assessment enables students to learn more, learn deeply, and create a formative, lasting understanding of the course material. Instead of cursing the technology that creates a different kind of assessment, why not leverage online assessment to allow students to try and try again to improve their understanding (and score) and to be better prepared for the next course, the next level, the next step in life?

Along with the randomized questions and unique question pools, Blackboard allows a) multiple attempts, b) accepting the highest attempt, and c) many, many formats of questions. Designing an assessment that allows students to examine, modify, and demonstrate what they have learned has never been easier. Leveraging technology in building assessments takes a bit of work up front but provides great return on investment. Tom Angelo would be proud.