Assessments of student learning outcomes are important for institutional and system-level quality assurance, but should also help students improve their knowledge and skills.
Discussion of the development and implementation of large-scale common assessments of student learning outcomes in higher education has highlighted the importance of making participation attractive to students.
In a book chapter reflecting on the OECD Assessment of Higher Education Learning Outcomes (AHELO) and the Australian Medical Assessment Collaboration (AMAC), co-authors Dr Daniel Edwards and Mr Jacob Pearce state that common assessments of learning outcomes should benefit all stakeholders, including the students completing the assessment.
According to Edwards and Pearce, Principal Research Fellow and Research Fellow respectively at the Australian Council for Educational Research (ACER), the development and dissemination of student-level reporting is not only key to motivating students’ participation, but also an important factor in the overall success of common assessments.
‘The AMAC experience showed that good student reports can substantially increase the impact of assessment,’ they say.
In addition to providing institution-level reports with de-identified benchmarking data, AMAC gave students the opportunity to receive a detailed individual report. Designed to be delivered within a fortnight of taking the test, the report showed how each student performed across the assessment framework, and against de-identified results from students at their own institution and at other institutions.
The reports allowed students to identify potential weaknesses in their knowledge or understanding as they prepared for their final exams. Importantly, these reports were confidential – institutions did not receive identifiable student results.
Edwards and Pearce note that student-level reports give students an incentive to participate that rewards such as cash or prizes do not.
Feedback indicated that, overall, students appreciated the benefits AMAC offered them in the final stages of their degree.
This experience was in contrast to that of AHELO, in which a rotated assessment design meant that student-level reporting was not feasible. Edwards and Pearce note that many institutions reported the lack of specific feedback for students as ‘a defining factor in the general apathy towards participation’ in AHELO in Australia.
‘The lesson here for future developments of outcomes assessments is to ensure dissemination options that benefit all stakeholders – institutions, researchers, perhaps governments or quality authorities and students,’ Edwards and Pearce write.
‘The assessment design phase of such a project should include careful consideration, not only of the item types, competencies and frameworks being utilised in testing, but also in the types of output that are going to be most beneficial for achieving the aims of improving educational outcomes.’ ■
This article is based on the book chapter ‘Outcomes assessment in practice: Reflections on Australian implementations’, by Daniel Edwards and Jacob Pearce, published in Volume Six of Higher Education Learning Outcomes Assessment: International Perspectives.