Marking complex assignments using peer assessment with an electronic voting system and an automated feedback tool

Trevor Barker, Steve Bennett

    Research output: Contribution to journal › Article › peer-review


    Abstract

    The work described in this paper concerns the development and use of a range of initiatives for marking complex master's-level assignments on the development of web applications. Such assignments have previously proven difficult to mark because they assess a range of skills, including programming, human-computer interaction and design. Based on several years' experience of marking such assignments, the module delivery team adopted an approach in which students marked each other's practical work using an electronic voting system (EVS). The results are presented in the paper, together with a statistical comparison against the tutors' marking, providing evidence for the efficacy of the approach. The second part of the assignment, covering theory and documentation, was marked by the tutors using an automated feedback tool. The time taken to mark the work was reduced by more than 30% in all cases compared with previous years. More importantly, it was possible to provide good-quality individual feedback to learners rapidly: feedback was delivered to all students within three weeks of the submission date.
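
    The paper itself reports the statistical comparison between peer and tutor marking; as a rough illustration of that kind of check, the sketch below compares hypothetical peer (EVS) marks with tutor marks for the same pieces of work using a Pearson correlation and a mean absolute difference. The mark values and the choice of statistics are assumptions for illustration, not the authors' actual data or method.

    ```python
    # Hypothetical sketch: agreement between peer (EVS) marks and tutor marks.
    # The mark values below are invented placeholders for illustration only.
    from scipy.stats import pearsonr

    peer_marks  = [62, 70, 55, 81, 68, 74, 59, 90]   # aggregated peer mark per piece of work
    tutor_marks = [60, 72, 58, 79, 65, 76, 61, 88]   # tutor's mark for the same work

    # Pearson correlation: how closely the peer marking tracks the tutor marking.
    r, p_value = pearsonr(peer_marks, tutor_marks)

    # Mean absolute difference: typical size of the peer-tutor disagreement, in marks.
    mad = sum(abs(p - t) for p, t in zip(peer_marks, tutor_marks)) / len(peer_marks)

    print(f"Pearson r = {r:.2f} (p = {p_value:.3f}), mean absolute difference = {mad:.1f} marks")
    ```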
    Original language: English
    Journal: International Journal of e-Assessment
    Volume: 1
    Issue number: 1
    Publication status: Published - 2011
