
Research Developments from ACER


© Shutterstock/szefei

Improving confidence in expert judgements in medical specialty examinations

ACER is helping to set performance standards and improve examiner consistency in examinations for medical specialty colleges, report Daniel Urbach and Jacob Pearce.

Medical specialisation is a notoriously high-stakes process: candidates undergo a series of stringent examinations, preparing for many years to achieve certification to work independently as specialists.

The expert judgement of fellows within specialist colleges plays a significant role in determining a candidate’s success or failure and, like any qualitative ruling, can be subject to criticism – justified or not. Legal challenges brought by disappointed candidates questioning the reliability, consistency or bias of professional judgements can sometimes result.

The Australian Council for Educational Research (ACER) is working with medical colleges to put best-practice assessment processes in place, allowing for recognition of the crucial role played by expert opinion while minimising potential inconsistencies and biases that could give candidates cause to question a result. These practices include double marking procedures, implementing standard setting methods and moderation techniques, and examiner training and calibration exercises.

ACER has been working with several colleges, including the Royal Australian and New Zealand College of Radiologists (RANZCR), the Royal Australasian College of Surgeons (RACS), the Australian and New Zealand College of Anaesthetists (ANZCA), the Australasian College of Dermatologists (ACD) and the Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM). While the specifics of the work for each college can vary, the aim of the work is consistent: to ensure that examinations and assessment processes are reliable, valid, of high quality and as efficient as practical. These goals are typically achieved by focussing on issues of fairness, consistency, transparency and, ultimately, defensibility. 

ACER test developers and psychometricians have been working for several years to improve assessment quality and data collection processes through reviews of assessment programs and by designing fit-for-purpose assessment, as previously reported in [rd]. More recently, ACER has also implemented robust standard-setting methodologies in a variety of examinations, allowing the desired performance standard to be expressed as a ‘cut score’ in a particular exam series.
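One common family of standard-setting methods is the modified Angoff procedure, in which each expert judge estimates the probability that a borderline (minimally competent) candidate would answer each item correctly; averaging across judges and summing over items yields the cut score. The sketch below is illustrative only, with hypothetical judges and ratings, and does not represent ACER's specific methodology for any college.

```python
# Minimal sketch of a modified Angoff cut-score calculation.
# All names and data are hypothetical, for illustration only.

def angoff_cut_score(ratings):
    """ratings: one list per judge, each holding that judge's estimated
    probability (0-1) that a borderline candidate answers each item correctly.
    Returns the cut score: the expected raw score of a borderline candidate."""
    n_judges = len(ratings)
    n_items = len(ratings[0])
    # Average each item's ratings across judges, then sum over items.
    item_means = [
        sum(judge[i] for judge in ratings) / n_judges
        for i in range(n_items)
    ]
    return sum(item_means)

# Three hypothetical judges rating a five-item examination:
judges = [
    [0.8, 0.6, 0.90, 0.50, 0.70],
    [0.7, 0.5, 0.80, 0.60, 0.60],
    [0.9, 0.7, 0.85, 0.55, 0.65],
]
cut = angoff_cut_score(judges)  # expected score of a borderline candidate
```

In practice such ratings are typically collected over multiple rounds with discussion between judges, which is part of what the collaborative panel review described below provides.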

Reviewing standards in collaborative expert panels greatly improves the defensibility of both the examination process and candidates’ results, while reporting these standard-setting outcomes to candidates improves examination transparency and puts candidates at ease. ACER also employs psychometric equating techniques, where possible, to help account for differences in examination difficulty and maintain standards over time.
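As a simple illustration of the idea behind equating (not ACER's actual technique, which will depend on the examination's design), mean-sigma linear equating maps a score from a new exam form onto a reference form's scale by matching the two score distributions' means and standard deviations, so that a harder form does not disadvantage its cohort:

```python
# Illustrative mean-sigma linear equating; data and names are hypothetical.
import statistics

def linear_equate(new_scores, ref_scores, x):
    """Map a raw score x from the new form onto the reference form's scale
    by matching the means and standard deviations of the two score sets."""
    mu_new = statistics.mean(new_scores)
    sd_new = statistics.pstdev(new_scores)
    mu_ref = statistics.mean(ref_scores)
    sd_ref = statistics.pstdev(ref_scores)
    return mu_ref + (sd_ref / sd_new) * (x - mu_new)

# Hypothetical cohorts: the new form ran 5 points harder on average,
# so a raw 60 on the new form equates to 65 on the reference scale.
equated = linear_equate([50, 60, 70], [55, 65, 75], 60)
```

Operational equating designs (common items, common candidates, or IRT-based methods) are more sophisticated, but the goal is the same: a candidate's result should not depend on which form of the examination they happened to sit.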

An emerging area of ACER’s work with medical specialty colleges is examiner calibration. High-stakes examinations of highly complex learning routinely rely on professionals with content expertise to judge candidate performance, yet candidates need to trust that processes are fair and that their assessment will not be compromised by examiner bias. Examiner conduct and calibration modules, which aim to ensure that examiners make analytic and holistic judgements of candidate performance consistently and correctly, are currently being developed for the RANZCR Clinical Radiology oral examinations.
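One standard statistic for checking how consistently two examiners judge the same candidates, the kind of evidence a calibration exercise might draw on, is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below is a generic illustration with hypothetical ratings, not a description of the RANZCR modules:

```python
# Illustrative Cohen's kappa for two examiners' categorical judgements.
# Ratings and category labels are hypothetical.

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two examiners who each rated
    the same candidates (e.g. 'pass'/'fail'). 1.0 = perfect agreement,
    0.0 = no better than chance."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Proportion of candidates on whom the examiners agreed outright.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected if the examiners rated independently.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (observed - expected) / (1 - expected)

examiner_1 = ["pass", "pass", "fail", "pass"]
examiner_2 = ["pass", "fail", "fail", "pass"]
kappa = cohens_kappa(examiner_1, examiner_2)
```

Low kappa values across examiner pairs would flag a need for further calibration before results are finalised.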

Working with specialist medical colleges to improve testing standards builds on ACER’s past work with the Medical Deans of Australia and New Zealand and members of the UK’s Medical Schools Council – Assessment Alliance to implement common assessments across medical schools.

To inquire about ACER’s capabilities in this area, please contact our experts in medical specialty assessment below. ■


About the author

Daniel Urbach is a Senior Research Fellow in ACER's Psychometrics and Methodology research program.


About the author

Dr Jacob Pearce is a Senior Research Fellow in ACER's Assessment and Reporting division.

