Misleading ‘quality’ measures in Higher Education: problems from combining diverse indicators that include subjective ratings and academic performance and costs

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Quality indicators are often derived from weighted sums of diverse items, including ordinal Likert items. This procedure can be dangerously misleading because it takes no account of correlations among indicators. It also takes no account of whether indicators are input measures, e.g. prior achievement of incoming students, or outcome measures, e.g. the proportion getting a good degree or student satisfaction. UK Higher Education data for 04–05 were analyzed taking these issues into account. Multiple regression showed, unsurprisingly, that ‘bright’ students with high prior achievement did well on all outcome indicators. Getting a good degree was not influenced by any other measure. Completing a course was additionally positively associated with academic pay and with spending on library and computing facilities. A good destination (not currently seeking work) was additionally positively associated with the number of staff per student and with vice-chancellor pay. Student satisfaction, by contrast, was negatively associated with vice-chancellor pay. The implications for evaluating university quality are discussed.
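    The abstract's core point — that an unweighted or weighted sum of correlated input and outcome indicators can mislead, while multiple regression separates their contributions — can be sketched with synthetic data. All variable names and numbers below are illustrative assumptions, not the paper's actual dataset:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100

    # Hypothetical synthetic data (not the paper's UK HE data):
    # prior achievement (an input measure) drives the outcome, and
    # 'spend' is correlated with prior achievement but has no
    # independent effect on the outcome.
    prior = rng.normal(size=n)
    spend = 0.8 * prior + 0.6 * rng.normal(size=n)
    outcome = 1.0 * prior + 0.3 * rng.normal(size=n)

    # Naive composite: a simple sum that mixes input and outcome
    # measures and ignores the correlation between them.
    composite = prior + spend + outcome

    # Multiple regression of the outcome on both indicators isolates
    # the input effect from the (null) spend effect.
    X = np.column_stack([np.ones(n), prior, spend])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

    # The composite rewards 'spend' even though, holding prior
    # achievement fixed, spend adds nothing to the outcome: its
    # regression coefficient is close to zero.
    print(beta)
    ```

    Here the composite would rank a high-spend institution above an otherwise identical one, while the regression correctly attributes the outcome to incoming-student achievement — the kind of confounding the paper's analysis is designed to expose.
    
    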
    Original language: English
    Pages (from-to): 48-63
    Journal: Radical Statistics
    Issue number: 94
    Publication status: Published - 2007

    Keywords

    • Likert scales
    • evaluation

