What Exactly Do You Want Me To Do? Analysis of a Criterion Referenced Assessment Project
Jewels, Tony J., Ford, Marilyn, & Jones, Wendy C. (2007) What Exactly Do You Want Me To Do? Analysis of a Criterion Referenced Assessment Project. Journal of Information Technology Education, 6, pp. 311-326.
In tertiary institutions in Australia, and no doubt elsewhere, there is increasing pressure for accountability. No longer are academics assumed a priori to be responsible and capable of self-management in teaching and assessing the subjects they run. Procedures are increasingly being dictated from the ‘top down’. Although academics may not always appreciate ‘top down’ policies on teaching and learning, they should at least be open to the possibility that the policies may indeed have merit. On the other hand, academics should never be expected to accept blindly policies dictated from elsewhere; responsible academics also need to evaluate for themselves the validity and legitimacy of externally introduced policies and procedures.

At one Australian tertiary institution, the Academic Board recently endorsed a new assessment policy to implement criterion referenced assessment (CRA) throughout the university. The new policy is being promoted by the university’s teaching and learning support services and is to be implemented gradually across the university. As stated in the university’s manual of policies and procedures (MOPP), the ‘fundamental approach’ to the assessment of students was to be one based on ‘criterion referencing’. Such assessment was to be free of any notion of awarding grades according to a bell curve, and the criteria against which a student’s work was to be assessed had to be communicated clearly and in detail in advance of the actual assessment. Thus, for each piece of assessment, criteria must be given to students, together with performance standards detailing clearly what must be achieved on each criterion to attain a certain result. The MOPP states that CRA will give the assessment process ‘a great deal of objectivity’ and will contribute to the ‘reliability and validity of the assessment task’.
Moreover, it is stated that high, but attainable, standards will ‘motivate students and focus their energy on learning rather than on competition with peers’.

The teaching team of the final-year Information Technology Project Management subject had a strong commitment to continuous improvement in the presentation and running of the subject. Given that the university had endorsed a new assessment policy that would soon apply to third-year courses, the team applied for, and was subsequently awarded, a faculty grant to implement and test the possible benefits of the endorsed CRA policy. This independent, faculty-funded project took a team-based approach to evaluating the use and effectiveness of criterion referenced assessment in a final-year unit. All seven members of the unit’s teaching and marking groups were included in the decision making surrounding the design, development, and implementation of CRA, and an environment was created in which team members could openly communicate their experience of CRA. Members of the group initially held a variety of beliefs about CRA, ranging from dismissal of CRA as merely the latest teaching and learning fad to keen interest in how it might operate in practice in the unit. This diversity of thought was regarded as valuable and in keeping with the attempt to take an impartial ‘warts and all’ view of the CRA implementation, noting its positive and negative impacts equally.

It was found that many of the advantages attributed to CRA in the literature were not forthcoming, and some unexpected results were uncovered. While prior research suggests that CRA’s failures are due to implementation errors, our work suggests that the real problem is that the assumptions behind CRA are flawed. This paper discusses the outcomes of the project for the students, for the academics, and for the institution.
Item Type: Journal Article
Additional Information: The contents of this journal can be freely accessed online via the journal’s web page (see hypertext link).
Keywords: Criterion Referenced Assessment
Subjects: Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > INFORMATION SYSTEMS (080600) > Information Systems not elsewhere classified (080699); Australian and New Zealand Standard Research Classification > EDUCATION (130000) > SPECIALIST STUDIES IN EDUCATION (130300) > Education Assessment and Evaluation (130303)
Divisions: Past > QUT Faculties & Divisions > Faculty of Science and Technology
Copyright Owner: Copyright 2007 Informing Science Institute
Deposited On: 01 Oct 2007
Last Modified: 29 Feb 2012 23:41