How difficult are exams? A framework for assessing the complexity of introductory programming exams

Sheard, Judy, Simon, Carbone, Angela, Chinn, Donald, Clear, Tony, Corney, Malcolm W., D'Souza, Daryl, Fenwick, Joel, Harland, James, Laakso, Mikko-Jussi, & Teague, Donna M. (2013) How difficult are exams? A framework for assessing the complexity of introductory programming exams. In Carbone, Angela & Whalley, Jacqueline (Eds.) Proceedings of the 15th Australasian Computing Education Conference, ACS, Adelaide, SA.

Abstract

Student performance on examinations is influenced by the level of difficulty of the questions. It therefore seems reasonable to propose that assessing the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question on six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of the code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries and find substantial variation across the exams on all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with the subjective assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
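To make the correlational analysis concrete, the following is a minimal, purely illustrative sketch in Python (a language neither used nor prescribed by the paper). Hypothetical per-question scores on the six complexity measures are correlated with equally hypothetical subjective difficulty ratings using Spearman's rank correlation. All names, scores, and the choice of correlation statistic are assumptions for illustration, not data or methods taken from the study.

    # Minimal illustrative sketch only -- the scores, names, and the use of
    # Spearman correlation are assumptions, not data or methods from the paper.
    from scipy.stats import spearmanr

    MEASURES = ["external_domain", "explicitness", "linguistic",
                "conceptual", "code_length", "bloom_level"]

    # Hypothetical per-question scores (1 = low, 3 = high) for five exam questions.
    questions = [
        {"external_domain": 1, "explicitness": 1, "linguistic": 1,
         "conceptual": 1, "code_length": 1, "bloom_level": 1},
        {"external_domain": 1, "explicitness": 2, "linguistic": 1,
         "conceptual": 2, "code_length": 1, "bloom_level": 2},
        {"external_domain": 2, "explicitness": 2, "linguistic": 2,
         "conceptual": 2, "code_length": 2, "bloom_level": 2},
        {"external_domain": 2, "explicitness": 3, "linguistic": 2,
         "conceptual": 3, "code_length": 2, "bloom_level": 3},
        {"external_domain": 3, "explicitness": 3, "linguistic": 3,
         "conceptual": 3, "code_length": 3, "bloom_level": 3},
    ]
    difficulty = [1, 1, 2, 3, 3]  # hypothetical subjective difficulty ratings

    # Correlate each complexity measure with the difficulty ratings.
    for measure in MEASURES:
        scores = [q[measure] for q in questions]
        rho, p_value = spearmanr(scores, difficulty)
        print(f"{measure:16s} rho = {rho:+.2f}  p = {p_value:.3f}")

With monotone hypothetical scores such as these, every measure correlates strongly with the difficulty ratings; the paper's actual correlations are computed over the questions of its 20 real exam papers.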


Full-text downloads:

297 since deposited on 26 Feb 2013
25 in the past twelve months


ID Code: 57540
Item Type: Conference Paper
Refereed: Yes
Keywords: standards, quality, examination papers, CS1, introductory programming, assessment, question complexity, question difficulty, HERN
Subjects: Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > COMPUTER SOFTWARE (080300)
Australian and New Zealand Standard Research Classification > EDUCATION (130000) > CURRICULUM AND PEDAGOGY (130200)
Divisions: Current > Schools > School of Electrical Engineering & Computer Science
Current > QUT Faculties and Divisions > Science & Engineering Faculty
Copyright Owner: Copyright 2013 Australian Computer Society, Inc.
Copyright Statement: Copyright © 2013, Australian Computer Society, Inc. This paper appeared at the 15th Australasian Computing Education Conference (ACE 2013). Conferences in Research and Practice in Information Technology (CRPIT), Vol. 136. Angela Carbone and Jacqueline Whalley, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
Deposited On: 26 Feb 2013 23:44
Last Modified: 25 Dec 2014 11:11

