
Introduction

Computer-aided assessment (often referred to as automated grading, or simply CAA) is an idea that surfaces in one form or another in many (if not most) computer science departments. In some academic institutions, CAA manifests itself as a custom script that helps ensure consistency across graders. For example, during the author's undergraduate career, grading for the Harvey Mudd College C++ and Data Structures course was done with the aid of a script, written for each assignment by the professor in charge, that made a rough estimate of the functional correctness of a student submission and then issued a series of prompts regarding the submission's level of style and documentation.

On the opposite end of the spectrum, CAA is commonly used with objective assessment items such as multiple-choice questions (MCQs), matching, and other discrete-response forms of assessment that a computer can grade directly [11]. Indeed, CAA forms the basis for all (or nearly all) of the standardized testing done in the US, from elementary school standardized tests, where optical mark recognition is used to evaluate MCQ responses, to computer-adaptive tests based on Item Response Theory [7], such as the GRE, where a program adapts to a test taker's responses in an attempt to produce a more precise score.

Nevertheless, until now relatively little work [19,16] has been done on the development of a generalized and usable framework for CAA. CAA offers the ability to reduce repetition, eliminate clerical error, and make more efficient use of human graders' time. This work is greatly motivated by the author's experiences with the development of Agar, a prime example of a generalized framework for CAA guided by principles drawn from the domain of Human-Computer Interaction (HCI). This work is not an advancement in the field of HCI, and as shown in Chapter 3, CAA has a long history of attempts to come up with a magic bullet for grading. What is important in this work is that we have allowed the development of Agar to be guided by HCI and by such classic notions of software engineering as Fred Brooks' ``The Pilot System'' [9]. The lessons learned, insight gained, and user feedback gathered during the development and deployment of Agar have given us a very clear understanding of what is necessary to create a highly usable CAA tool.

The remainder of this document will discuss the design, development, and deployment of Agar, the computer-aided assessment system developed in the CS&E Department at UCR over the past year. As discussed in Chapter 6, this process was an ``organic'' variant of the spiral model: no clear notion of interface design or final feature set existed when development began. Rather, development was guided by the needs of users through a series of very short cycles of feature addition, interface alteration, and bug fixing.

