Before any discussion of grading tools can be undertaken, it is
necessary first to establish some consensus on the minimum
requirements for ``good'' grading. As official UCR CS&E Department
policies on instruction point out, there is a difference between
assignments intended as practice and assignments intended as
evaluation. The majority of student submissions are likely intended
more as practice than as assessment. For practice to be as
educationally effective as possible, the following features are
considered a bare minimum for a good grading system:
- Response Time - The speed with which feedback is generated is
clearly of great value. One of the major features of computer-based
assessment schemes built on computer-checkable discrete-response
questions (MCQs, matching, and short fill-in-the-blank questions) is
that students can find out instantly how well they are doing. Rapid
feedback is one of the most touted features of many on-line tutorials
and quizzes [22] [11]. In the more general domain of human-checked
work, prompt feedback is an equally beneficial, if obvious, feature
of a good CAA system.
- Accuracy - Accuracy is another obvious requirement for
good grading: if scores have little or nothing to do with
the submitted work, student motivation will undoubtedly drop.
- Quality of feedback - It is one thing for a student to
receive an accurate score for their work shortly after submitting it;
it is quite another to be given a detailed breakdown of what they
did well, what needs work, and what did not work at all. Quality of
feedback commonly slips through the cracks: professors often look
only at scores, while TAs and graders look only to finish grading.
The result is that it is not necessarily in anyone's immediate
interest to ensure that students are told what they did wrong. An
aggregate numeric score does carry some small amount of feedback,
but it is far less useful than detailed grading information.
- Consistency - Consistency from submission to submission is
time-consuming and difficult to achieve when grading without the aid
of a grading tool. As any grader knows, the majority of errors on a
given assignment are repeated by more than one student. To be
completely fair to the students, each time a given error is found
within an assignment, the penalty (and probably the feedback) ought
to be the same. Without such consistency, student requests for
regrades increase, student morale may drop, and complaints of
favoritism may turn group opinion against the instructor. An
extremely detailed rubric is the ideal method of ensuring consistency
and stopping such problems before they start, but writing such a
rubric before knowing what errors the students actually made is a
difficult art indeed. Failing that, a good grading system should
provide a method for boosting consistency, if not outright ensuring
it, by assisting in the retroactive creation of a detailed rubric; a
small sketch of this idea follows the list.
- Flexibility - The flexibility of a CAA system is an
essential feature: how much does the system allow the grader to
override? There is a facetious saying that ``if you build a
fool-proof system, they'll just build a better fool,'' which
educators should recognize as applying doubly to student work. No
matter how robustly a CAA system attempts to deal with the unexpected
in student submissions, it is very important that the grader have
ultimate control to override all of the system's actions. Otherwise,
the grader will revert to the slower, but less restrictive, method of
grading by hand. If the flexibility of a CAA system is insufficient,
the other factors listed here will likely suffer.
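To make the consistency requirement more concrete, the following is a
minimal sketch (in Python) of how a grading tool might support the
retroactive creation of a rubric. The names used here (Rubric,
RubricEntry, apply_error) are hypothetical illustrations, not part of
any existing tool; the point is only that once an error has been
penalized once, every later occurrence of that error draws exactly the
same deduction and feedback.

# Minimal sketch of a retroactively built rubric: entries are added as
# the grader discovers new error types, then reused for every later
# submission so penalties and feedback stay consistent.

from dataclasses import dataclass, field


@dataclass
class RubricEntry:
    """One error type: its point deduction and its canned feedback."""
    penalty: int
    feedback: str


@dataclass
class Rubric:
    """Grows as grading proceeds; applied uniformly to all submissions."""
    entries: dict[str, RubricEntry] = field(default_factory=dict)

    def apply_error(self, submission_score: int, error_id: str,
                    penalty: int = 0, feedback: str = "") -> tuple[int, str]:
        # First sighting of this error: record it so later submissions
        # receive exactly the same penalty and feedback.
        if error_id not in self.entries:
            self.entries[error_id] = RubricEntry(penalty, feedback)
        entry = self.entries[error_id]
        return submission_score - entry.penalty, entry.feedback


# Usage: the same off-by-one error costs every student the same 5 points.
rubric = Rubric()
score_a, note_a = rubric.apply_error(100, "off-by-one-loop", 5,
                                     "Loop bound excludes the last element.")
score_b, note_b = rubric.apply_error(100, "off-by-one-loop")  # reuses entry
assert (score_a, note_a) == (score_b, note_b)

Keying deductions by an error identifier also eases the regrade
problem described above: if a penalty is later judged too harsh,
editing the single rubric entry adjusts every affected submission,
rather than requiring each one to be revisited by hand.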