
Research Developments from ACER


Image © Papua New Guinea University of Technology

Assessing aptitude for tertiary study

Appropriate university admissions depend on more information about students than their academic performance, as Marita MacMahon Ball and Shelley McLean explain.

Transition to university or other forms of higher education has historically been decided on the basis of academic achievement in the final years of secondary school. While academic performance can inform admission decisions, it is neither the best nor the most useful information to consider. A wealth of research indicates that a broader university selection process not only identifies a suitable cohort of higher education candidates, but also opens up higher education to a more diverse group of students.

Achievement and aptitude

We are all familiar with the notion of tests that assess whether students know and understand a body of knowledge in, say, chemistry, mathematics or history. This form of testing enables teaching staff not only to assess learning achievement but also, where gaps in learning are evident, to make informed plans for further teaching.

When we assess candidates using an aptitude test, however, we are measuring innate and acquired skills such as problem solving, critical thinking, written communication, non-verbal reasoning and scientific reasoning. Because the constructs – the particular skill areas being assessed in an aptitude test – are developed over time, and because they vary with the requirements of the assessing institution, it is not possible for candidates to study for an aptitude test.

Although aptitude tests do not test specific knowledge, all are based on a level of assumed knowledge in a particular subject; that is, the core knowledge that a person in the targeted demographic group is assumed to have acquired before sitting the test. In most instances, however, it is not the knowledge itself that is being assessed; rather, the knowledge assists candidates to display their ability to reason in unfamiliar contexts. Of course, shifts in expectations of ‘assumed knowledge’ can and do occur, as with the redesign of the SAT for 2016.

Why use an aptitude test?

The fundamental reason to use an aptitude test is to obtain more information about the university applicant. Aptitude tests are commonly used in many countries; one of the best-known examples is the SAT, developed in the United States for college admissions. Examples in the United Kingdom, elsewhere in Europe and in Australia include BMAT, UKCAT, LNAT, GAMSAT, HPAT-Ireland, STAT and uniTEST.

Many institutions are currently seeking to select a more diverse student cohort. Results from aptitude testing are less influenced by socioeconomic status (SES) than academic results, which can reflect additional home, school and other support, as well as more enriching life experiences. Since aptitude testing asks the candidate to reason in unfamiliar areas, lower-SES students have an opportunity to demonstrate their aptitude in a way unavailable to them in academic assessments.

Where applicants are seeking admission to very high-demand courses, such as medicine or law, there is often very little to differentiate between high achievers. The addition of an aptitude test enables the collection of more information on candidates’ skills. In some instances an interview might also be included to increase understanding of the profile of the candidate.

A further reason to include an aptitude test in the university selection process is to obtain common information on all applicants. At undergraduate entry, academic results from a number of different awarding authorities are often accepted and, in the case of graduate admission, there is no assured means of calculating consistent GPAs. University grading systems can vary; even within faculties, variation in the scores awarded can be evident. Scores in an aptitude test have the potential, should the university wish, to moderate academic results.

Large-scale aptitude testing – using quality test development, standardised testing arrangements and expert psychometric analysis of response data – also allows for the efficient collection of information on university applicants.

Exactly what should an aptitude test measure?

An aptitude test measures skills that candidates acquire throughout their lives, from many different sources. Course assessment, in contrast, provides information about the academic achievement of a candidate, and may be reported as a Grade Point Average (GPA), A-Level results, an International Baccalaureate score or similar measures of secondary school or university achievement. In combination, academic and aptitude test scores predict success at university better than academic scores alone.

Deciding the particular skill areas to be assessed in an aptitude test – the constructs – is a complex process requiring input from the institutions wishing to use the test and the testing authority. Determining the constructs is influenced by a number of factors, such as:

  • the purpose of the test and the skills to be identified
  • the targeted demographic group
  • the required level of assumed knowledge
  • the use of the test as a hurdle or a discriminator
  • how the test is financed – by institution or candidate
  • the form of the test, online or paper-based
  • the level of security required, and
  • the frequency of test sittings.

Case study: Papua New Guinea

The Special Tertiary Admissions Test (STAT) was introduced in 2016 for university admission to a selection of Papua New Guinean universities, following extensive consultation with ACER that included consideration of local factors. The Papua New Guinea University of Technology (Unitech) endorsed a requirement for all Grade 12 applicants to take STAT as part of their application from 2016. The University of Goroka (UOG) followed in 2017.

At the outset, Unitech recognised that the quality of students admitted to its courses overwhelmingly required improvement. Although Grade 12 academic results suggested that enrolled students were good-to-high achievers, this was not necessarily reflected in the performance of many once they commenced tertiary study. It was also well known that high-SES candidates were often admitted to university over other candidates.

Unitech made the decision to introduce an independent selection measure to assess the skills of students acquired over the course of their broader education. It was essential that this assessment be developed, managed and scored by an organisation external to the PNG education authorities. ACER was selected to provide a culturally appropriate version of STAT, a high-stakes, secure aptitude test.

STAT consists of a series of multiple-choice tests, differentiated by client base or difficulty level. STAT measures candidates' aptitude or capacity to perform, rather than learning achievement, and gives candidates an opportunity to demonstrate their ability to cope with tertiary studies in an Australian context. Scores on STAT are considered widely by universities in Australia, as well as some universities in New Zealand, Ireland and the UK.

STAT P is a test form that has been adapted and refined to minimise cultural bias and language issues for use in the Pacific region. STAT P test scores are considered alongside students' Grade 12 National Exam results (and other assessment factors for mature applicants) to better inform Unitech and UOG’s admissions process and help selectors to determine with confidence those students who are likely to succeed at university.

The outcome of the introduction of STAT P will be realised in time, as students are tracked over the coming years. Initial anecdotal evidence, however, is that STAT has assisted the participating PNG universities to improve fairness and widen access to include a broader range of people in university courses. ■

Further information:

For more about STAT, visit https://stat.acer.org.

For more about STAT P, visit https://statpng.acer.org.

This is an edited version of an article first published in The Bulletin, the magazine of the Association of Commonwealth Universities.


About the author

Marita MacMahon Ball is the General Manager, Assessment Services, Higher Education at ACER.



About the author

Shelley McLean is the Project Director of a number of tertiary aptitude tests developed by ACER.

